
Configure execution behavior

Control how PyRel runs RelationalAI jobs by enabling optional SDK features (metrics, logging, retries) and setting related compiler and helper defaults. Use these settings when you need better visibility, more resilience to temporary failures, or consistent defaults across projects.

The following sections show how to configure these settings in both raiconfig.yaml and programmatically with Python code. See the API reference for details on all available options and their behavior.

Prerequisites

  • You have access to a Snowflake account with the RelationalAI Native App installed. If you are unsure, contact your Snowflake administrator.
  • You have a working PyRel installation. See Set Up Your Environment for instructions.

Enable execution metrics

Execution metrics are small counters and timings collected by the PyRel SDK while it runs RelationalAI jobs. Enable them when you want visibility into how often operations run and how long they take. PyRel stores these metrics in-process and does not automatically export them to an external monitoring system.

  1. Set execution.metrics in raiconfig.yaml:

    connections:
      # ...
    execution:
      metrics: true
  2. View the collected metrics:

    from relationalai.client import connect_sync
    from relationalai.semantics import Model
    # Get a client.
    client = connect_sync()
    # Run a query to generate some metrics.
    m = Model("MyModel")
    m.select("hello world").to_df()
    # Inspect the metrics.
    metrics = client.execution_metrics
    print(metrics.counters)
    print(metrics.timings_ms)
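
The metrics stay in-process, so exporting or printing them is up to you. As a small illustration, assuming counters and timings_ms behave like plain name-to-value dictionaries (as the printed output above suggests), you can flatten them into log-friendly lines:

```python
def format_metrics(counters, timings_ms):
    """Render counter and timing dictionaries as sorted, human-readable lines."""
    lines = [f"{name}={value}" for name, value in sorted(counters.items())]
    lines += [f"{name}={ms:.1f}ms" for name, ms in sorted(timings_ms.items())]
    return "\n".join(lines)

# Stand-in values; in practice pass client.execution_metrics.counters, etc.
print(format_metrics({"queries": 3}, {"query.total": 41.7}))
```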

Enable execution logging

Execution logs are structured log messages emitted by the PyRel SDK while it runs RelationalAI jobs. Enable them when you are debugging slow jobs, failures, or retries and want more context. Turning logging on does not automatically print anything; you must configure Python logging to display the logger's output.

  1. Set execution.logging in raiconfig.yaml:

    connections:
      # ...
    execution:
      logging: true
  2. Configure Python logging to display output from the relationalai.client.execution logger:

    import logging
    logging.basicConfig(level=logging.INFO)
    logging.getLogger("relationalai.client.execution").setLevel(logging.INFO)
  • Execution log lines include request kind, operation name, elapsed time, and meta.
  • PyRel does not log SQL statements or HTTP request bodies/headers in these execution logs.
  • Exception messages may include sensitive details, depending on what the backend returns.
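
Since PyRel only emits the records, routing them is standard Python logging. For example, to capture execution logs in a buffer (swap the StringIO for a logging.FileHandler to write them to disk instead); only the logger name comes from PyRel, the rest is stdlib:

```python
import io
import logging

logger = logging.getLogger("relationalai.client.execution")
logger.setLevel(logging.INFO)

# Attach a handler that captures records in memory; use logging.FileHandler
# instead to persist them to a file.
buffer = io.StringIO()
handler = logging.StreamHandler(buffer)
handler.setFormatter(logging.Formatter("%(levelname)s %(name)s %(message)s"))
logger.addHandler(handler)

logger.info("example record")  # PyRel's execution logs arrive the same way
print(buffer.getvalue().strip())
```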

Configure automatic retries

Retries are automatic re-attempts the PyRel SDK can make when an operation fails due to a temporary error (for example, a short network hiccup). You can enable retries to reduce flaky failures in scripts and production jobs. max_attempts is the total number of attempts, including the first try.

Set execution.retries in raiconfig.yaml:

connections:
  # ...
execution:
  retries:
    enabled: true
    max_attempts: 5
    base_delay_s: 0.25
    max_delay_s: 5.0
    jitter: 0.2
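
These keys describe a typical exponential-backoff-with-jitter schedule. The exact formula PyRel applies is not documented here, so treat the following as an assumed sketch of how such settings usually interact:

```python
import random

def backoff_delay(retry_number, base_delay_s=0.25, max_delay_s=5.0, jitter=0.2):
    """Sleep time before retry `retry_number` (1 = first retry): the base delay
    doubles each retry, is capped at max_delay_s, then randomized by +/- jitter."""
    delay = min(base_delay_s * (2 ** (retry_number - 1)), max_delay_s)
    return delay * (1 + random.uniform(-jitter, jitter))

# With the settings above, the un-jittered delays are 0.25, 0.5, 1.0, and 2.0
# seconds for the four retries allowed by max_attempts: 5.
```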

Enable strict compiler mode

Strict mode makes the PyRel compiler validate more aggressively during compilation, so problems are caught earlier. It is disabled by default. Enable it when you want to fail fast in CI, tests, or production jobs where you prefer an error over a partial or ambiguous result.

  1. Set compiler.strict in raiconfig.yaml:

    connections:
      # ...
    compiler:
      strict: true
  2. Inspect the parsed config:

    from relationalai.semantics import Model
    m = Model("MyModel")
    print(m.config.compiler.strict)

Enable soft compiler type errors

Soft error mode treats compiler type errors as warnings instead of failures. It is disabled by default, which means type errors cause compilation to fail. Enable it when you are iterating quickly (for example, in notebooks) and prefer to keep going while you fix types.

  1. Set compiler.soft_type_errors in raiconfig.yaml:

    connections:
      # ...
    compiler:
      soft_type_errors: true
  2. Inspect the parsed config:

    from relationalai.semantics import Model
    m = Model("MyModel")
    print(m.config.compiler.soft_type_errors)

Set a client-side default query timeout

Set a client-side default timeout (in minutes) for PyRel queries when you want long-running operations to fail instead of hanging indefinitely (for example, in pipelines). It is unset by default, which means queries will not time out on the client side, though they may still be subject to server-side timeouts.

  1. Set data.query_timeout_mins in raiconfig.yaml:

    connections:
      # ...
    data:
      query_timeout_mins: 10
  2. Inspect the parsed config:

    from relationalai.semantics import Model
    m = Model("MyModel")
    print(m.config.data.query_timeout_mins)
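
Conceptually, a client-side timeout is a deadline on how long the SDK keeps waiting for results, independent of any server-side limit. A simplified polling sketch of that behavior (not PyRel's actual implementation):

```python
import time

def wait_for_result(poll, query_timeout_mins, interval_s=1.0):
    """Call poll() until it returns a non-None result or the deadline passes."""
    deadline = time.monotonic() + query_timeout_mins * 60
    while time.monotonic() < deadline:
        result = poll()
        if result is not None:
            return result
        time.sleep(interval_s)
    raise TimeoutError(f"query did not complete within {query_timeout_mins} min")
```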

Enable or disable stream synchronization before query execution


Set data.wait_for_stream_sync to control whether PyRel data helpers wait for data streams to synchronize before running queries. It is enabled by default, which means helpers will wait for streams to be fully synchronized before querying. Disable it when you prefer faster query execution and can tolerate potentially stale data.

  1. Set data.wait_for_stream_sync in raiconfig.yaml:

    connections:
      # ...
    data:
      wait_for_stream_sync: true
  2. Inspect the parsed config:

    from relationalai.semantics import Model
    m = Model("MyModel")
    print(m.config.data.wait_for_stream_sync)

Set the data freshness threshold for stream synchronization


You can set an optional data freshness threshold (in minutes) that helpers can use when waiting for stream synchronization.

Adjusting this threshold can help balance freshness guarantees with query latency by allowing queries to proceed when data is within an acceptable staleness window, even if streams are not fully synchronized. It is unset by default, which means data must be fully synchronized before queries run.

  1. Set data.data_freshness_mins in raiconfig.yaml:

    connections:
      # ...
    data:
      data_freshness_mins: 5
  2. Inspect the parsed config:

    from relationalai.semantics import Model
    m = Model("MyModel")
    print(m.config.data.data_freshness_mins)
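
In effect, a freshness threshold relaxes "wait until fully synchronized" to "wait until recent enough". A minimal sketch of that decision, with assumed semantics rather than PyRel internals:

```python
from datetime import datetime, timedelta, timezone

def is_fresh_enough(last_synced_at, data_freshness_mins, now=None):
    """True if the stream's last sync falls inside the staleness window."""
    now = now or datetime.now(timezone.utc)
    return now - last_synced_at <= timedelta(minutes=data_freshness_mins)

# A stream synced 3 minutes ago passes a 5-minute threshold; one synced
# 10 minutes ago does not.
```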

Enable or disable automatic enablement of change tracking on Snowflake tables


Set data.ensure_change_tracking to control whether PyRel data helpers attempt to ensure Snowflake change tracking is enabled on tables they read from, which is required for data stream creation and synchronization.

When enabled, helpers will try to turn on change tracking for any table they read from if it is not already enabled. This setting is disabled by default, which means helpers will not modify Snowflake tables and will error if change tracking is not already enabled when they try to create a stream.

  1. Set data.ensure_change_tracking in raiconfig.yaml:

    connections:
      # ...
    data:
      ensure_change_tracking: false
  2. Inspect the parsed config:

    from relationalai.semantics import Model
    m = Model("MyModel")
    print(m.config.data.ensure_change_tracking)
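
With the setting disabled, you must enable change tracking yourself before creating streams. Snowflake exposes this as an ALTER TABLE statement; the helper function and table name below are illustrative:

```python
def change_tracking_sql(qualified_table_name):
    """Build the Snowflake statement that turns on change tracking for a table."""
    return f"ALTER TABLE {qualified_table_name} SET CHANGE_TRACKING = TRUE"

# Execute through your usual Snowflake connection, for example:
# cursor.execute(change_tracking_sql("MY_DB.MY_SCHEMA.MY_TABLE"))
```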

Enable or disable column type checking during stream synchronization


Set data.check_column_types to control whether PyRel data helpers validate column types during data loading. It is enabled by default, which means helpers will check that incoming data types match the target table schema and fail if there is a mismatch or coercion issue.

  1. Set data.check_column_types in raiconfig.yaml:

    connections:
      # ...
    data:
      check_column_types: true
  2. Inspect the parsed config:

    from relationalai.semantics import Model
    m = Model("MyModel")
    print(m.config.data.check_column_types)
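
A check like this amounts to comparing each incoming column's type against the target schema and failing on any mismatch. A simplified illustration of the idea (not PyRel's actual validator):

```python
def check_column_types(incoming, expected):
    """Compare {column: type_name} mappings; raise on any mismatching column."""
    mismatches = [
        f"{col}: expected {expected[col]}, got {typ}"
        for col, typ in incoming.items()
        if col in expected and typ != expected[col]
    ]
    if mismatches:
        raise TypeError("column type mismatch: " + "; ".join(mismatches))

# Matching types pass silently:
check_column_types({"id": "int64", "name": "str"}, {"id": "int64", "name": "str"})
```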

Set the data export URL type (internal vs external)


data.download_url_type selects which URL type helpers should request for export downloads: internal (default) URLs that are only accessible within Snowflake’s network, or external URLs that are accessible from outside Snowflake.

Setting the download URL type to external allows clients without network access to Snowflake (for example, those running on a local machine or in a different cloud) to retrieve query results. It requires that your Snowflake account is configured to allow external URLs and that you have network access to the external URL endpoints.

  1. Set data.download_url_type in raiconfig.yaml:

    connections:
      # ...
    data:
      download_url_type: external
  2. Inspect the parsed config:

    from relationalai.semantics import Model
    m = Model("MyModel")
    print(m.config.data.download_url_type)