Python API Release Notes

1.0.19

Python SDK


Version 1.0.19 of the relationalai Python package is now available!

To upgrade, activate your virtual environment and run the following command:

pip install --upgrade relationalai

New Features and Enhancements

  • Graph.is_acyclic() is now much faster on many large graphs, especially large trees and grids. In benchmarks for the change, the runtime for the largest tested grid graph dropped from about 751 seconds to 47 seconds.

Bug Fixes

  • Removed unsupported std.re APIs. These APIs would raise errors if you tried to use them. All documentation pertaining to std.re has also been removed.

  • Fixed active profile selection so that the RAI_ACTIVE_PROFILE environment variable takes precedence over active_profile in raiconfig.yaml.

  • Fixed prescriptive models that combined filtered aggregates in one expression. Before, PyRel could leak one aggregate's filter into another, so objectives and constraints could silently drop instead of being sent to the solver. For example:

    from relationalai.semantics import Float, Integer, Model
    from relationalai.semantics.reasoners.prescriptive import Problem
    
    model = Model("ScopedAggregates")
    X = model.Concept("X", identify_by={"i": Integer})
    X.v = model.Property(f"{X} has {Float:v}")
    
    model.define(X.new(i=1), X.new(i=2))
    
    problem = Problem(model, Float)
    problem.solve_for(X.v, name=["v", X.i], lower=0, upper=10)
    
    v = Float.ref()
    problem.satisfy(model.require(sum(v).where(X.v(v), X.i == 1) == 1.0))
    problem.satisfy(model.require(sum(v).where(X.v(v), X.i == 2) == 4.0))
    
    # The bug was triggered by combining two filtered aggregates in one expression.
    problem.minimize(sum(X.v).where(X.i == 1) + sum(X.v).where(X.i == 2))
    problem.satisfy(model.require(sum(X.v).where(X.i == 1) <= sum(X.v).where(X.i == 2)))
    
    problem.solve("highs")
    
    print(model.select(problem.num_min_objectives().alias("objective_count")).to_df())
    print(model.select(problem.num_constraints().alias("constraint_count")).to_df())
    print("objective_value:", problem.solve_info().objective_value)
    

    Output before the fix:

      objective_count
    0                0
      constraint_count
    0                 2
    objective_value: 0.0
    

    Output after the fix:

      objective_count
    0                1
      constraint_count
    0                 3
    objective_value: 5.0
    
  • Fixed model.data() calls created on the same source line, such as inside a loop. model.data() creates a temporary data source each time you call it. Before, if two calls happened on the same line, PyRel could mistake them for the same source. That could make later queries fail even when the input data was valid.

    For example:

    import pandas as pd

    from relationalai.semantics import Integer, Model, String
    
    m = Model("DataLoop")
    Item = m.Concept("Item", identify_by={"id": Integer, "name": String})
    
    batches = [
        pd.DataFrame([(1, "a"), (2, "b")], columns=["id", "name"]),
        pd.DataFrame([(3, "c"), (4, "d")], columns=["id", "name"]),
    ]
    
    for batch in batches:
        m.define(Item.new(m.data(batch).to_schema()))
    
    print(m.select(Item.id, Item.name).to_df())
    

    Output before the fix:

    RelQueryError: Query error
    

    Output after the fix:

      id name
    0  1   a
    1  2   b
    2  3   c
    3  4   d
    

    Now PyRel gives each same-line data source its own deterministic sequence number, so queries compile and run correctly.

1.0.18

Python SDK


Version 1.0.18 of the relationalai Python package is now available!

To upgrade, activate your virtual environment and run the following command:

pip install --upgrade relationalai

New Features and Enhancements

  • You can now pass an existing Snowpark session to create_config() with snowflake_session=.

    For example:

    from snowflake.snowpark import Session
    from relationalai.config import create_config
    
    snowflake_connection_parameters = {...}
    snowpark_session = Session.builder.configs(snowflake_connection_parameters).create()
    
    config = create_config(snowflake_session=snowpark_session)
    assert config.get_session() is snowpark_session
    

    Note: If you already load config from raiconfig.yaml or another supported config source, queries use the injected session instead of the Snowflake connection from that config.

    This change makes it easier to use PyRel in environments where you already have a Snowpark session.

Bug Fixes

  • Fixed nested lookups that could drop rows when the inner lookup had no match. Before, PyRel could skip rows instead of keeping the row and showing NULL for the missing value. For example:

    from relationalai.semantics import Integer, Model, String
    
    m = Model("CustomerModel")
    
    Customer = m.Concept("Customer", identify_by={"id": Integer})
    Customer.name = m.Property(f"{Customer} has name {String:name}")
    ReferralCode = m.Relationship(f"{Integer:customer_id} maps to {String:referral_code}")
    
    m.define(
        Customer.new(id=1, name="Alice"),
        Customer.new(id=2, name="Bob"),
        ReferralCode(1, "ABC123")
    )
    
    customer_id = Integer.ref()
    code = String.ref()
    
    q = (
      m.where(
        Customer.id == customer_id,
        referral_code := m.where(ReferralCode(customer_id, code)).select(code),
      )
      .select(Customer.name, referral_code.alias("referral_code"))
    )
    
    print(q.to_df())
    

    Output before the fix:

      name referral_code
    0 Alice ABC123
    

    Output after the fix:

      name referral_code
    0 Alice ABC123
    1 Bob   NULL
    
  • Fixed datetime.date.range() and datetime.datetime.range() when the end value comes from a DSL value, such as a Concept, Property, or Relationship, instead of a regular Python date or datetime. Before, the query could return one combined date range instead of a separate date range for each entity. For example:

    import datetime as dt

    from relationalai.semantics import Date, Integer, Model, define, select, std
    
    m = Model("dates")
    
    Foo = m.Concept("Foo", identify_by={"id": Integer})
    Foo.end_date = m.Property(f"{Foo} has {Date:end_date}")
    
    define(
        Foo.new(id=1, end_date=dt.date(2020, 1, 2)),
        Foo.new(id=2, end_date=dt.date(2020, 1, 4)),
    )
    
    select(
        std.datetime.date.range(dt.date(2020, 1, 1), Foo.end_date, freq="D")
    ).to_df()
    

    Output before the fix:

    # Before the fix:
      result
    0 2020-01-01
    1 2020-01-02
    2 2020-01-03
    3 2020-01-04
    

    Output after the fix, with duplicate dates because both Foo rows contribute to the result:

    # After the fix:
      result
    0 2020-01-01
    1 2020-01-01
    2 2020-01-02
    3 2020-01-02
    4 2020-01-03
    5 2020-01-04
    
  • Fixed inferred field naming for numeric aliases such as Integer. Before, if you let PyRel infer a field name from one of those types, it would generate a name like number_38_0 instead of integer, so string lookups such as Scores["integer"] would fail with a KeyError exception. For example:

    from relationalai.semantics import Integer, Model, String
    
    m = Model("scores")
    # The Integer field is unnamed here, so PyRel infers its field name.
    Scores = m.Relationship(f"{String:name} has {Integer}")
    
    # Prior to the fix, this lookup would raise a KeyError.
    m.select(Scores["integer"]).to_df()
    

    Now PyRel infers the correct field name, so Scores["integer"] resolves to the Integer field.

1.0.17

Python SDK


Version 1.0.17 of the relationalai Python package is now available!

To upgrade, activate your virtual environment and run the following command:

pip install --upgrade relationalai

New Features and Enhancements

  • PyRel now supports math.round(). You can round to the nearest integer or to a specified number of decimal places inside semantics expressions.

  • rai debugger now makes compiler output easier to inspect. You can choose which compiler steps to show in one dialog, see clearer color coding, and view job IDs directly in the UI. Before, you had to toggle those steps one by one and query cards did not show the job ID.

  • You can no longer create a property or relationship named Relationship, Property, Concept, or Table on a concept. If you use one of these reserved names, PyRel raises an error such as:

    'Relationship' is a reserved name and cannot be used as a relationship name on concept 'Person'.
    

    For example:

    from relationalai.semantics import Model, String
    
    m = Model()
    Person = m.Concept("Person")
    
    # Allowed
    Person.relationship = m.Property(f"{Person} has {String:relationship}")
    
    # Raises the reserved-name error
    Person.Relationship = m.Property(f"{Person} has {String:relationship}")
    

Bug Fixes

  • Fixed a slowdown when you ran the first query on a second model in the same Python process. Before, PyRel could check data sources from all models instead of only the model you were querying. Now each model keeps its own data sources, so queries check only the relevant ones.

  • Fixed exports to Snowflake tables when you declare the destination schema with Model.Table(). Before, if the destination schema had more columns than the current export produced, PyRel could leave out the extra declared columns. Now it keeps those columns and fills missing values with NULL, so later writes to the same table can still succeed.

    For example:

    from relationalai.semantics import Integer, Model, String
    
    m = Model("MyModel")
    Order = m.Concept("Order", identify_by={"id": Integer})
    Order.status = m.Property(f"{Order} has {String:status}")
    
    # This query exports only order_id and status.
    q = m.select(Order.id.alias("order_id"), Order.status)
    
    # The destination schema also declares shipped_at.
    out = m.Table(
        "ANALYTICS.PUBLIC.ORDER_EXPORT",
        schema={
            "order_id": Integer,
            "status": String,
            "shipped_at": String,
        },
    )
    
    q.into(out).exec()
    # In 1.0.17, shipped_at stays in the exported Snowflake table and is filled with NULL.
    
  • Fixed fully qualified Snowflake names that use quoted lowercase or mixed-case object names. Before, PyRel could strip the quotes from names such as ANALYTICS.PUBLIC."orders" or ANALYTICS.PUBLIC."SalesSummary". That could make Snowflake read them as ANALYTICS.PUBLIC.ORDERS or ANALYTICS.PUBLIC.SALESSUMMARY, which might be different objects or might not even exist. Now PyRel preserves those quotes.
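    Context for why the quotes matter: Snowflake folds unquoted identifiers to uppercase, while double-quoted identifiers keep their exact case. A minimal sketch of that folding rule (fold_identifier is a hypothetical helper for illustration, not part of PyRel):

```python
def fold_identifier(part: str) -> str:
    """Mimic Snowflake name resolution for one dotted-name part:
    quoted parts keep their exact case; unquoted parts fold to uppercase."""
    if part.startswith('"') and part.endswith('"') and len(part) >= 2:
        return part[1:-1]   # quotes removed, case preserved
    return part.upper()     # unquoted names fold to uppercase

# Stripping the quotes changes which object the name refers to:
print(fold_identifier('"orders"'))  # orders  (the intended table)
print(fold_identifier("orders"))    # ORDERS  (a different object)
```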

1.0.16

Python SDK


Version 1.0.16 of the relationalai Python package is now available!

To upgrade, activate your virtual environment and run the following command:

pip install --upgrade relationalai

Performance and Reliability

  • Repeated Graph API calls are now faster. When you call the same graph method with the same arguments, PyRel reuses the earlier graph relationship instead of rebuilding it. This applies to methods such as reachable(), degree and neighbor methods, and several similarity, clustering, and centrality methods. See Run a graph algorithm and the Graph API reference for details.
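    The reuse described above behaves like ordinary memoization keyed on the method and its arguments. A minimal sketch of the pattern in plain Python (an illustration, not PyRel's internals):

```python
from functools import lru_cache

calls = 0

@lru_cache(maxsize=None)
def reachable(src: int, dst: int) -> str:
    """Stand-in for an expensive graph-relationship build."""
    global calls
    calls += 1
    return f"reachable({src}, {dst})"

reachable(1, 2)
reachable(1, 2)   # same arguments: cached result, nothing rebuilt
reachable(1, 3)   # new arguments: built once
print(calls)      # 2
```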

  • Problem.solve() now omits variable and expression names when sending jobs to the prescriptive reasoner. This helps avoid exposing business data through generated names. If you need those names in printed solver output, set print_format to a format such as lp, mps, or moi.

Bug Fixes

  • Fixed property lookups in concept hierarchies with multiple parents. Before, when implicit properties were enabled, PyRel could create the property on the wrong parent and return missing or incorrect data. Now it uses the declared property instead. See Declare relationships and properties.

  • PyRel now warns about possible typos in implicitly created properties. Before, a near-match property name could silently create a new property. Now PyRel points out the likely mistake sooner. For example, if you access Person.friend after declaring Person.friends, PyRel warns Did you mean: friends.
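    This kind of near-match detection can be sketched with the standard library's difflib (an illustration of the technique, not PyRel's implementation):

```python
import difflib

declared = ["friends", "name", "age"]

def suggest(accessed):
    """Return a 'Did you mean' candidate for a possibly misspelled property."""
    matches = difflib.get_close_matches(accessed, declared, n=1)
    return matches[0] if matches else None

print(suggest("friend"))  # friends
```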

  • Fixed supertype membership for derived subtypes. Before, some subtype facts did not propagate all the way up the inheritance chain, so counts and membership queries on parent concepts could be incomplete.
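    Conceptually, membership is transitive: a fact for a derived subtype must reach every ancestor, not only the immediate parent. A schematic sketch with hypothetical concept names:

```python
# Hypothetical inheritance chain: Employee <: Person <: Entity.
parents = {"Employee": "Person", "Person": "Entity", "Entity": None}

def ancestors(concept):
    """All supertypes a member of `concept` also belongs to."""
    chain = []
    parent = parents[concept]
    while parent is not None:
        chain.append(parent)
        parent = parents[parent]
    return chain

# An Employee fact must also count toward Person AND Entity; before the
# fix, propagation could stop partway up a chain like this one.
print(ancestors("Employee"))  # ['Person', 'Entity']
```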

1.0.15

Python SDK


Version 1.0.15 of the relationalai Python package is now available!

To upgrade, activate your virtual environment and run the following command:

pip install --upgrade relationalai

New Features and Enhancements

  • Added support for a RAI_ACTIVE_PROFILE environment variable as a way to select the active profile. You can now set the active profile with RAI_ACTIVE_PROFILE=<profile_name> in addition to active_profile in raiconfig.yaml. See Use profiles to manage multiple configurations for details.
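    For example, you can select a profile for a single shell session without editing raiconfig.yaml (the profile name dev is a placeholder that must exist in your config):

```shell
# Select the "dev" profile (placeholder name) for programs
# launched from this shell session:
export RAI_ACTIVE_PROFILE=dev

# Any PyRel program started from this shell now uses that profile,
# e.g.: python my_script.py
```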

Performance and Reliability

  • Reduced Model.Table() overhead on large Snowflake schemas. Before, PyRel could scan the whole schema on each lookup. Now PyRel fetches metadata one table at a time, which reduces cold-start overhead when a model touches many tables in a large schema. For background on declaring Snowflake sources, see Declare data sources.

  • Improved behavior on read-only notebook and container file systems. Before, PyRel could fail while writing caches or trace files with misleading errors. Now it falls back to a writable temporary directory in those environments.

Bug Fixes

  • Fixed Snowflake authentication failures that could surface as a misleading No default Session error. PyRel now reports the underlying Snowflake auth failure when it cannot create a new session.

  • Fixed a startup crash in rai debugger. Before, some launches could fail with RuntimeError: You must call ui.run() to start the server. Now the debugger starts without that reload-related crash.

  • Fixed Model.data() so rows with mixed integer and floating-point columns preserve integer values. Before, some integer fields could be upcast and show up as values like 1.0 in query results.
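    The failure mode is the classic homogeneous-array upcast: forcing a mixed row into one dtype turns integers into floats, whereas per-column dtypes keep them intact. A sketch of the symptom with NumPy and pandas (an illustration only, not PyRel's internals):

```python
import numpy as np
import pandas as pd

# Converting a mixed row into one homogeneous array upcasts the integer:
row = np.array([1, 2.5])
print(row[0])  # 1.0, not 1

# Per-column dtypes preserve the integer value:
df = pd.DataFrame([(1, 2.5)], columns=["n", "x"])
print(df["n"].iloc[0])  # 1
```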