
What's New in Version 0.11.2

October 20, 2025 8:04 PM UTC

Version 0.11.2 of the relationalai Python package is now available!

To upgrade, activate your virtual environment and run the following command:

pip install --upgrade relationalai
  • You can now configure a maximum query execution time in the RelationalAI Python client using the new query_timeout_mins setting. By default, queries time out after 24 hours, but you can reduce this limit in your config file or per query to better control long-running operations.

    If a query exceeds the specified timeout, a clear QueryTimeoutExceededException is raised with guidance on how to resolve the issue—replacing the previous behavior where timeouts returned empty results without explanation.

    Example usage:

    In your raiconfig.toml file:

    query_timeout_mins = 30 # Set max query time to 30 minutes

    As a query-specific override:

    from relationalai.errors import QueryTimeoutExceededException
    import relationalai as rai

    model = rai.Model("MyModel")

    try:
        # Set a 10-minute timeout for this specific query
        with model.query(query_timeout_mins=10) as select:
            # ... some long-running query logic ...
            pass
    except QueryTimeoutExceededException as e:
        print(e)

    This enhancement helps prevent runaway queries, keeping costs and resource usage under control while providing clear feedback when a timeout occurs.

  • Improved the CLI progress indicator to better detect continuous integration (CI) environments and disable terminal features that don't render well there. When running queries in CI, the progress display now shows a single, clean progress line instead of multiple lines, preventing cluttered logs (see the sketch below).
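
    The client's exact detection logic isn't documented in these notes; the following is a minimal sketch of the general pattern, assuming detection via the conventional CI environment variable and a TTY check. The in_ci and report_progress names are illustrative and not part of the relationalai API.

    import os
    import sys

    def in_ci() -> bool:
        # Most CI providers (GitHub Actions, GitLab CI, CircleCI, ...) set CI=true.
        return os.environ.get("CI", "").lower() in ("true", "1") or not sys.stdout.isatty()

    def report_progress(step: int, total: int) -> None:
        if in_ci():
            # No cursor tricks in CI: emit a single line when the work completes.
            if step == total:
                print(f"progress: {total}/{total} steps complete")
        else:
            # Interactive terminals can redraw the same line in place.
            end = "" if step < total else "\n"
            print(f"\rprogress: {step}/{total}", end=end, flush=True)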

  • Improved error messages for internal system errors encountered during query execution. When such errors occur, the message now includes clear troubleshooting steps to help you resolve the issue:

    1. Retry using a new model name to work around possible state-related problems.
    2. If the error persists, set the use_lqp flag to False when creating your model (e.g., model = Model(..., use_lqp=False)) to switch to the legacy backend, which may avoid the error at some performance cost.

    These enhancements provide actionable guidance directly in the error output, making it easier to diagnose and work around internal engine errors without needing additional support.
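
    For reference, the two workarounds above look like this in code (the model name is a placeholder):

    import relationalai as rai

    # Workaround 1: retry with a fresh model name to rule out state-related problems.
    model = rai.Model("MyModelRetry")

    # Workaround 2: if the error persists, switch to the legacy backend.
    model = rai.Model("MyModelRetry", use_lqp=False)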

  • Fixed a bug where some failed transactions could incorrectly update the internal model cache when using use_lqp=True. Now, if a transaction or its result processing fails, the model cache is reverted to its previous valid state. This ensures that only successful transactions update the model cache, preventing future queries from using an inconsistent or partially updated model.
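
    Conceptually, the fix follows a snapshot-and-revert pattern. The sketch below illustrates the idea only; run_transaction and the plain dict cache are hypothetical and not the client's internal code.

    import copy

    def run_transaction(cache: dict, transaction) -> dict:
        # Work on a snapshot so the live cache is never left half-updated.
        snapshot = copy.deepcopy(cache)
        try:
            transaction(snapshot)  # execute the transaction against the snapshot
            return snapshot        # success: the snapshot becomes the new cache
        except Exception:
            return cache           # failure: keep the previous valid cache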