Configure execution behavior
Control how PyRel runs RelationalAI jobs by enabling optional SDK features (metrics, logging, retries) and setting related compiler and helper defaults. Use these settings when you need better visibility, more resilience to transient failures, or consistent defaults across projects.
The following sections show how to configure each setting, both in raiconfig.yaml and programmatically in Python.
See the API reference for details on all available options and their behavior.
- You have access to a Snowflake account with the RelationalAI Native App installed. If you are unsure, contact your Snowflake administrator.
- You have a working PyRel installation. See Set Up Your Environment for instructions.
Enable or disable metrics collection
Execution metrics are small counters and timings collected by the PyRel SDK while it runs RelationalAI jobs. Enable them when you want visibility into how often operations run and how long they take. PyRel stores these metrics in-process and does not automatically export them to an external monitoring system.
Set `execution.metrics` in `raiconfig.yaml`:

```yaml
connections:
  # ...
execution:
  metrics: true
```

View the collected metrics:

```python
from relationalai.client import connect_sync
from relationalai.semantics import Model

# Get a client.
client = connect_sync()

# Run a query to generate some metrics.
m = Model("MyModel")
m.select("hello world").to_df()

# Inspect the metrics.
metrics = client.execution_metrics
print(metrics.counters)
print(metrics.timings_ms)
```
Set `execution.metrics` using `ExecutionConfig` or a plain Python dict.
These examples assume you already have a valid connection configured through file discovery.

Set the value with a typed config class:

```python
from relationalai.config import ExecutionConfig, create_config
from relationalai.semantics import Model

cfg = create_config(execution=ExecutionConfig(metrics=True))
m = Model("MyModel", config=cfg)

# Inspect the configured value:
print(m.config.execution.metrics)
```

Alternatively, set the value with a dict:

```python
from relationalai.config import create_config
from relationalai.semantics import Model

cfg = create_config(execution={"metrics": True})
m = Model("MyModel", config=cfg)

# Inspect the configured value:
print(m.config.execution.metrics)
```

View the collected metrics:

```python
from relationalai.client import connect_sync
from relationalai.semantics import Model

# Get a client.
client = connect_sync(config=cfg)

# Run a job to generate metrics.
m = Model("MyModel", config=cfg)
m.select("hello world").to_df()

# Inspect the metrics.
metrics = client.execution_metrics
print(metrics.counters)
print(metrics.timings_ms)
```
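Because the SDK keeps metrics in-process and does not export them, shipping them to a monitoring system is up to you. Below is a minimal sketch of snapshotting the counters and timings into a plain dict for logging or export. The `snapshot_metrics` helper is hypothetical (not part of PyRel); the two mappings stand in for the `counters` and `timings_ms` attributes shown above.

```python
import json
import time

def snapshot_metrics(counters: dict, timings_ms: dict) -> dict:
    """Capture a point-in-time copy of execution metrics (hypothetical helper)."""
    return {
        "captured_at": time.time(),
        "counters": dict(counters),       # e.g. {"queries": 3}
        "timings_ms": dict(timings_ms),   # e.g. {"query": 41.7}
    }

# Plain dicts standing in for client.execution_metrics:
snap = snapshot_metrics({"queries": 3}, {"query": 41.7})
print(json.dumps(snap["counters"]))
```

From here you could write each snapshot to a log line or push it to whatever monitoring system your project uses.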
Enable or disable execution logs
Execution logs are structured log messages emitted by the PyRel SDK while it runs RelationalAI jobs. Enable them when you are debugging slow jobs, failures, or retries and want more context. Turning this on does not automatically print logs; you must configure Python logging to display the logger output.
Set `execution.logging` in `raiconfig.yaml`:

```yaml
connections:
  # ...
execution:
  logging: true
```

Configure Python logging to display `relationalai.client.execution` logs:

```python
import logging

logging.basicConfig(level=logging.INFO)
logging.getLogger("relationalai.client.execution").setLevel(logging.INFO)
```
Set `execution.logging` using `ExecutionConfig` or a plain Python dict.
These examples assume you already have a valid connection configured through file discovery.

Set the value with a typed config class:

```python
from relationalai.config import ExecutionConfig, create_config
from relationalai.semantics import Model

cfg = create_config(execution=ExecutionConfig(logging=True))
m = Model("MyModel", config=cfg)

# Inspect the configured value:
print(m.config.execution.logging)
```

Alternatively, set the value with a dict:

```python
from relationalai.config import create_config
from relationalai.semantics import Model

cfg = create_config(execution={"logging": True})
m = Model("MyModel", config=cfg)

# Inspect the configured value:
print(m.config.execution.logging)
```

Configure Python logging to display `relationalai.client.execution` logs:

```python
import logging

logging.basicConfig(level=logging.INFO)
logging.getLogger("relationalai.client.execution").setLevel(logging.INFO)
```
- Execution log lines include the request kind, operation name, elapsed time, and `meta`.
- PyRel does not log SQL statements or HTTP request bodies/headers in these execution logs.
- Exception messages may include sensitive details, depending on what the backend returns.
Enable or disable retries
Retries are automatic re-attempts the PyRel SDK can make when an operation fails due to a temporary error (for example, a short network hiccup).
You can enable retries to reduce flaky failures in scripts and production jobs.
`max_attempts` is the total number of attempts, including the first try.

Set `execution.retries` in `raiconfig.yaml`:

```yaml
connections:
  # ...
execution:
  retries:
    enabled: true
    max_attempts: 5
    base_delay_s: 0.25
    max_delay_s: 5.0
    jitter: 0.2
```

Set `execution.retries` using `ExecutionConfig` or a plain Python dict.
These examples assume you already have a valid connection configured through file discovery.
Configure retries and backoff with a typed model:

```python
from relationalai.config import ExecutionConfig, create_config
from relationalai.semantics import Model

cfg = create_config(
    execution=ExecutionConfig(
        retries=ExecutionConfig.RetriesConfig(
            enabled=True,
            max_attempts=5,
            base_delay_s=0.25,
            max_delay_s=5.0,
            jitter=0.2,
        ),
    ),
)
m = Model("MyModel", config=cfg)
```

Alternatively, configure retries and backoff with a plain dict:

```python
from relationalai.config import create_config
from relationalai.semantics import Model

cfg = create_config(
    execution={
        "retries": {
            "enabled": True,
            "max_attempts": 5,
            "base_delay_s": 0.25,
            "max_delay_s": 5.0,
            "jitter": 0.2,
        },
    },
)
m = Model("MyModel", config=cfg)
```
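To see how `max_attempts`, `base_delay_s`, `max_delay_s`, and `jitter` interact, here is a sketch of a typical exponential-backoff-with-jitter schedule. This illustrates the common pattern these parameter names suggest; it is an assumption, not necessarily the SDK's exact delay formula.

```python
import random

def backoff_delays(max_attempts: int, base_delay_s: float,
                   max_delay_s: float, jitter: float, seed: int = 0) -> list[float]:
    """Delays slept between attempts: exponential growth, capped, randomly jittered."""
    rng = random.Random(seed)
    delays = []
    # max_attempts includes the first try, so there are max_attempts - 1 waits.
    for retry in range(max_attempts - 1):
        delay = min(base_delay_s * (2 ** retry), max_delay_s)
        # A jitter of 0.2 scales each delay by a random factor in [0.8, 1.2].
        delay *= 1 + rng.uniform(-jitter, jitter)
        delays.append(delay)
    return delays

# With jitter disabled, the example config above yields 0.25s, 0.5s, 1.0s, 2.0s waits.
print(backoff_delays(max_attempts=5, base_delay_s=0.25, max_delay_s=5.0, jitter=0.0))
```

Under this scheme, raising `max_delay_s` lengthens the tail of the schedule, while jitter spreads retries out so many clients do not hammer the backend in lockstep.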
Enable or disable strict mode
Strict mode makes the PyRel compiler validate more aggressively during compilation, so problems are caught earlier. It is disabled by default. Enable it when you want to fail fast in CI, tests, or production jobs where you prefer an error over a partial or ambiguous result.
Set `compiler.strict` in `raiconfig.yaml`:

```yaml
connections:
  # ...
compiler:
  strict: true
```

Inspect the parsed config:

```python
from relationalai.semantics import Model

m = Model("MyModel")
print(m.config.compiler.strict)
```
Set `compiler.strict` using `CompilerConfig` or a plain Python dict.
These examples assume you already have a valid connection configured through file discovery.

Set the value with a typed config class:

```python
from relationalai.config import CompilerConfig, create_config
from relationalai.semantics import Model

cfg = create_config(compiler=CompilerConfig(strict=True))
m = Model("MyModel", config=cfg)

# Inspect the configured value:
print(m.config.compiler.strict)
```

Alternatively, set the value with a dict:

```python
from relationalai.config import create_config
from relationalai.semantics import Model

cfg = create_config(compiler={"strict": True})
m = Model("MyModel", config=cfg)

# Inspect the configured value:
print(m.config.compiler.strict)
```
Enable or disable soft error mode
Soft error mode treats compiler type errors as warnings instead of failures. It is disabled by default, which means type errors will cause compilation to fail. Enable it when you are iterating quickly (for example, in notebooks) and prefer to keep going while you fix types.
Set `compiler.soft_type_errors` in `raiconfig.yaml`:

```yaml
connections:
  # ...
compiler:
  soft_type_errors: true
```

Inspect the parsed config:

```python
from relationalai.semantics import Model

m = Model("MyModel")
print(m.config.compiler.soft_type_errors)
```
Set `compiler.soft_type_errors` using `CompilerConfig` or a plain Python dict.
These examples assume you already have a valid connection configured through file discovery.

Set the value with a typed config class:

```python
from relationalai.config import CompilerConfig, create_config
from relationalai.semantics import Model

cfg = create_config(compiler=CompilerConfig(soft_type_errors=True))
m = Model("MyModel", config=cfg)

# Inspect the configured value:
print(m.config.compiler.soft_type_errors)
```

Alternatively, set the value with a dict:

```python
from relationalai.config import create_config
from relationalai.semantics import Model

cfg = create_config(compiler={"soft_type_errors": True})
m = Model("MyModel", config=cfg)

# Inspect the configured value:
print(m.config.compiler.soft_type_errors)
```
Set the query timeout
Set a client-side default timeout (in minutes) for PyRel queries when you want long-running operations to fail instead of hanging indefinitely (for example, in pipelines). It is unset by default, which means queries will not time out on the client side, though they may still be subject to server-side timeouts.
Set `data.query_timeout_mins` in `raiconfig.yaml`:

```yaml
connections:
  # ...
data:
  query_timeout_mins: 10
```

Inspect the parsed config:

```python
from relationalai.semantics import Model

m = Model("MyModel")
print(m.config.data.query_timeout_mins)
```
Set `data.query_timeout_mins` using `DataConfig` or a plain Python dict.
These examples assume you already have a valid connection configured through file discovery.

Set the query timeout with a typed config class:

```python
from relationalai.config import DataConfig, create_config
from relationalai.semantics import Model

cfg = create_config(data=DataConfig(query_timeout_mins=10))
m = Model("MyModel", config=cfg)

# Inspect the configured value (in minutes):
print(m.config.data.query_timeout_mins)
```

Alternatively, set the query timeout with a dict:

```python
from relationalai.config import create_config
from relationalai.semantics import Model

cfg = create_config(data={"query_timeout_mins": 10})
m = Model("MyModel", config=cfg)

# Inspect the configured value (in minutes):
print(m.config.data.query_timeout_mins)
```
Enable or disable stream synchronization before query execution
Set data.wait_for_stream_sync to control whether PyRel data helpers wait for data streams to synchronize before running queries.
It is enabled by default, which means helpers will wait for streams to be fully synchronized before querying.
Disable it when you prefer faster query execution and can tolerate potentially stale data.
Set `data.wait_for_stream_sync` in `raiconfig.yaml`:

```yaml
connections:
  # ...
data:
  wait_for_stream_sync: true
```

Inspect the parsed config:

```python
from relationalai.semantics import Model

m = Model("MyModel")
print(m.config.data.wait_for_stream_sync)
```
Set `data.wait_for_stream_sync` using `DataConfig` or a plain Python dict.
These examples assume you already have a valid connection configured through file discovery.

Set the value with a typed config class:

```python
from relationalai.config import DataConfig, create_config
from relationalai.semantics import Model

cfg = create_config(data=DataConfig(wait_for_stream_sync=True))
m = Model("MyModel", config=cfg)

# Inspect the configured value:
print(m.config.data.wait_for_stream_sync)
```

Alternatively, set the value with a dict:

```python
from relationalai.config import create_config
from relationalai.semantics import Model

cfg = create_config(data={"wait_for_stream_sync": True})
m = Model("MyModel", config=cfg)

# Inspect the configured value:
print(m.config.data.wait_for_stream_sync)
```
Set the data freshness threshold for stream synchronization
You can set an optional data freshness threshold (in minutes) that helpers can use when waiting for stream synchronization.
Adjusting this threshold can help balance freshness guarantees with query latency by allowing queries to proceed when data is within an acceptable staleness window, even if streams are not fully synchronized. It is unset by default, which means data must be fully synchronized before queries run.
Set `data.data_freshness_mins` in `raiconfig.yaml`:

```yaml
connections:
  # ...
data:
  data_freshness_mins: 5
```

Inspect the parsed config:

```python
from relationalai.semantics import Model

m = Model("MyModel")
print(m.config.data.data_freshness_mins)
```
Set `data.data_freshness_mins` using `DataConfig` or a plain Python dict.
These examples assume you already have a valid connection configured through file discovery.

Set the freshness threshold with a typed config class:

```python
from relationalai.config import DataConfig, create_config
from relationalai.semantics import Model

cfg = create_config(data=DataConfig(data_freshness_mins=5))
m = Model("MyModel", config=cfg)

# Inspect the configured value (in minutes):
print(m.config.data.data_freshness_mins)
```

Alternatively, set the freshness threshold with a dict:

```python
from relationalai.config import create_config
from relationalai.semantics import Model

cfg = create_config(data={"data_freshness_mins": 5})
m = Model("MyModel", config=cfg)

# Inspect the configured value (in minutes):
print(m.config.data.data_freshness_mins)
```
Enable or disable automatic change tracking on Snowflake tables
Set data.ensure_change_tracking to control whether PyRel data helpers attempt to ensure Snowflake change tracking is enabled on tables they read from, which is required for data stream creation and synchronization.
When enabled, helpers will try to turn on change tracking for any table they read from if it is not already enabled. This setting is disabled by default, which means helpers will not modify Snowflake tables and will error if change tracking is not already enabled when they try to create a stream.
Set `data.ensure_change_tracking` in `raiconfig.yaml`:

```yaml
connections:
  # ...
data:
  ensure_change_tracking: false
```

Inspect the parsed config:

```python
from relationalai.semantics import Model

m = Model("MyModel")
print(m.config.data.ensure_change_tracking)
```
Set `data.ensure_change_tracking` using `DataConfig` or a plain Python dict.
These examples assume you already have a valid connection configured through file discovery.

Set the value with a typed config class:

```python
from relationalai.config import DataConfig, create_config
from relationalai.semantics import Model

cfg = create_config(data=DataConfig(ensure_change_tracking=False))
m = Model("MyModel", config=cfg)

# Inspect the configured value:
print(m.config.data.ensure_change_tracking)
```

Alternatively, set the value with a dict:

```python
from relationalai.config import create_config
from relationalai.semantics import Model

cfg = create_config(data={"ensure_change_tracking": False})
m = Model("MyModel", config=cfg)

# Inspect the configured value:
print(m.config.data.ensure_change_tracking)
```
Enable or disable column type checking during stream synchronization
Set data.check_column_types to control whether PyRel data helpers validate column types during data loading.
It is enabled by default, which means helpers will check that incoming data types match the target table schema and fail if there is a mismatch or coercion issue.
Set `data.check_column_types` in `raiconfig.yaml`:

```yaml
connections:
  # ...
data:
  check_column_types: true
```

Inspect the parsed config:

```python
from relationalai.semantics import Model

m = Model("MyModel")
print(m.config.data.check_column_types)
```
Set `data.check_column_types` using `DataConfig` or a plain Python dict.
These examples assume you already have a valid connection configured through file discovery.

Set the value with a typed config class:

```python
from relationalai.config import DataConfig, create_config
from relationalai.semantics import Model

cfg = create_config(data=DataConfig(check_column_types=True))
m = Model("MyModel", config=cfg)

# Inspect the configured value:
print(m.config.data.check_column_types)
```

Alternatively, set the value with a dict:

```python
from relationalai.config import create_config
from relationalai.semantics import Model

cfg = create_config(data={"check_column_types": True})
m = Model("MyModel", config=cfg)

# Inspect the configured value:
print(m.config.data.check_column_types)
```
Set the data export URL type (internal vs external)
data.download_url_type selects which URL type helpers should request for export downloads: internal (default) URLs that are only accessible within Snowflake's network, or external URLs that are accessible from outside Snowflake.
Setting the download URL type to external allows clients without network access to Snowflake (for example, clients running on a local machine or in a different cloud) to download query results. It requires that your Snowflake account is configured to allow external URLs and that you have network access to the external URL endpoints.
Set `data.download_url_type` in `raiconfig.yaml`:

```yaml
connections:
  # ...
data:
  download_url_type: external
```

Inspect the parsed config:

```python
from relationalai.semantics import Model

m = Model("MyModel")
print(m.config.data.download_url_type)
```
Set `data.download_url_type` using `DataConfig` or a plain Python dict.
These examples assume you already have a valid connection configured through file discovery.

Set the download URL type with a typed config class:

```python
from relationalai.config import DataConfig, create_config
from relationalai.semantics import Model

cfg = create_config(data=DataConfig(download_url_type="external"))
m = Model("MyModel", config=cfg)

# Inspect the configured value:
print(m.config.data.download_url_type)
```

Alternatively, set the download URL type with a dict:

```python
from relationalai.config import create_config
from relationalai.semantics import Model

cfg = create_config(data={"download_url_type": "external"})
m = Model("MyModel", config=cfg)

# Inspect the configured value:
print(m.config.data.download_url_type)
```