Manage the RAI Native App's SPCS Service
The RelationalAI (RAI) Native App runs securely in your organization’s Snowflake account as a service on Snowpark Container Services (SPCS). This section provides an overview of the RAI Native App and how to manage and monitor its status.
The RAI SPCS Service
The RAI Native App allows Python users to build and query models over data in your organization’s Snowflake account using the RAI Python API. Queries from RAI models are evaluated by RAI engines. The RAI SPCS service manages these engines, as well as the app’s CDC Service.
Resources
The RAI SPCS service utilizes the following Snowflake resources:
| Component | Description |
|---|---|
| App Database | The Snowflake database where the RAI Native App is installed. This database is named RELATIONALAI by default. It contains the stored procedures and views available in the RAI SQL API. |
| App Warehouse | The Snowflake warehouse used by the RAI Native App to execute Snowflake queries. This warehouse is named RELATIONAL_AI_ERP_WAREHOUSE by default. |
| App Compute Pool | The Snowflake compute pool used to run the RAI SPCS service. This compute pool is named RELATIONAL_AI_ERP_COMPUTE_POOL by default. |
| Engine Compute Pools | Snowflake compute pools used to run the RAI engines that evaluate queries from RAI Python models. |
See Compute Resources for more information on the RAI Native App’s resources. For information on managing costs associated with these resources, see Cost Management.
Shared Data
For Python users to create and query models, the data for those models must be shared with the RAI Native App using data streams. These streams are managed by the CDC Service and ensure that queries from RAI models are evaluated against the most up-to-date data. See Data Management for more information on sharing data with the RAI Native App.
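For orientation, stream creation is typically a short call from the Python API. The sketch below is hypothetical: the `create_streams` method name and its signature are assumptions, so check Data Management for the actual call.

```python
import relationalai as rai

app = rai.Provider()

# Hypothetical sketch -- the method name and signature are assumptions;
# see Data Management for the actual stream-creation API.
app.create_streams(
    ["my_db.my_schema.orders"],  # fully qualified source objects
    "my_model",                  # the model the data should be shared with
)
```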
Direct Access
By default, RAI clients connect to the RAI Native App using service functions, which proxy requests through Snowflake. Direct Access instead lets RAI clients communicate directly with the app’s secure ingress, reducing per-call overhead by about 500 ms compared to service functions, especially for users geographically close to your Snowflake account’s region. It is supported for local development, CLI workflows, and any service that can connect to the app’s secure network endpoint.
Use Direct Access for iterative workflows, frequent small calls, and latency-sensitive tasks.
Although Direct Access is configured by users in the client, it requires some setup in Snowflake by an admin. That setup depends on the chosen authentication method.
Supported Authentication Methods
Direct Access works with three authentication methods. Choose the one that best fits your workflow and security needs; each requires some administrative setup in Snowflake:
- OAuth Authorization Code: Best for interactive sign‑in flows during local development.
- Programmatic Access Token (PAT): Best for headless automation and scripts.
- Snowflake Key‑Pair (JWT): Best for key‑managed, non‑interactive workflows.
Configuration
Users opt in to Direct Access at the Python client level, not the RAI Native App level. Python API users must enable it in their model’s configuration by setting the appropriate config keys. See the Configuration guide for details.
Ingress and Endpoint Discovery
An ingress endpoint is a secure web address where your RAI Native App instance accepts connections from clients. It’s the front door to your RAI Native App in the cloud.
You never need to hardcode this address. The Python client automatically discovers the app’s ingress endpoint at runtime and caches it on the user’s machine in /tmp/relationalai/endpoint.toml. If the endpoint changes or goes down, the client automatically attempts to find the new one, helping ensure reliable connectivity even when the endpoint changes behind the scenes.
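If you are curious what the client cached, you can inspect the file directly. This is a minimal sketch; the file’s exact schema is an implementation detail, and the "url" key below is an assumption for illustration.

```python
# Inspect the client's cached ingress endpoint (the file schema is an
# implementation detail; the "url" key here is an assumption).
import tomllib  # Python 3.11+
from pathlib import Path

cache = Path("/tmp/relationalai/endpoint.toml")
if cache.exists():
    data = tomllib.loads(cache.read_text())
    print(data)  # e.g., {"url": "https://<app-ingress-host>"}
else:
    print("No cached endpoint yet; the client will discover one at runtime.")
```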
Security & RBAC
Direct Access uses the same RAI application roles and Snowflake role-based access control (RBAC) as Service Functions. All data access and permissions are still governed by Snowflake roles and grants.
To use Direct Access, users must have the rai_user application role for the RAI Native App. If you created the RAI_DEVELOPER Snowflake database role when setting up the RAI Native App, that role includes the necessary rai_user application role.
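If a user’s role does not already include it, an admin can grant the application role directly using standard Snowflake RBAC. A minimal sketch, assuming the app database is named relationalai and using an illustrative ANALYST role:

```python
import relationalai as rai

app = rai.Provider()

# Grant the app's rai_user application role to an existing Snowflake role.
# ANALYST is illustrative; substitute your organization's role and run this
# with a role privileged to manage grants.
app.sql("GRANT APPLICATION ROLE relationalai.rai_user TO ROLE ANALYST")
```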
Limitations
- Snowflake Notebooks and Snowflake Streamlit can’t use Direct Access because of network restrictions.
- For very large data transfers, the Python client automatically falls back to Service Functions, because Direct Access does not support some of the features those transfers require.
Service Management
The RAI SPCS service can be managed using SQL or Python.
Activate the App
Requires the app_admin application role.

To activate the RAI Native App after installing it, or to re-activate an app that has been deactivated, use the app.activate() procedure:
```sql
CALL relationalai.app.activate();
/*
+----------------------------------------------+
| RelationalAI service activated successfully. |
+----------------------------------------------+
*/
```
Alternatively, from Python, create a Provider instance and call its .activate() method:
```python
import relationalai as rai

# Get a Provider instance.
app = rai.Provider()

# Activate the app.
app.activate()
```
When you activate the app for the first time, several Snowflake compute resources are provisioned. Provisioning may take several minutes before the app is fully activated. See Compute Resources for more information.
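Because activation is asynchronous, automation may want to block until the service is up. Below is a minimal sketch that polls the app.service_status() procedure (described under Check Service Status) until every container reports READY; it assumes the status JSON arrives as a string in the first cell of the result, so adjust the parsing to what your client version returns.

```python
import json
import time

import relationalai as rai

app = rai.Provider()
app.activate()

# Poll the service status until all SPCS containers report READY.
while True:
    result = app.sql("CALL relationalai.app.service_status()", format="pandas")
    containers = json.loads(result.iloc[0, 0])  # assumption: JSON string in first cell
    if all(c.get("status") == "READY" for c in containers):
        break
    time.sleep(15)  # provisioning can take several minutes

print("RAI SPCS service is ready.")
```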
When you reactivate a deactivated app:
- All engines that were deleted when the app was deactivated are recreated.
- The CDC service is resumed if it was active when the app was deactivated.
Deactivate the App
Requires the app_admin application role.
You can deactivate the RAI Native App to reduce costs when you are not using it.
Use the app.deactivate() procedure to deactivate the RAI Native App:
```sql
CALL relationalai.app.deactivate();
/*
+------------------------------------------------+
| RelationalAI service deactivated successfully. |
+------------------------------------------------+
*/
```
Alternatively, from Python, create a Provider instance and call its .deactivate() method:
```python
import relationalai as rai

# Get a Provider instance.
app = rai.Provider()

# Deactivate the app.
app.deactivate()
```
When you deactivate the app:
- The CDC service is suspended.
- All engines are deleted and all in-progress transactions are cancelled. Engines deleted during deactivation are automatically re-created when the app is re-activated.
Get Service Details
Requires the app_user application role.

Use the app.get_service() procedure to get details about the RAI SPCS service:
```sql
CALL relationalai.app.get_service();
/*
+-----------------------------------------------------------+
| [                                                         |
|   {                                                       |
|     "name": "SPCS_CONTROL_PLANE",                         |
|     "owner": "RELATIONALAI",                              |
|     "compute_pool": "RELATIONAL_AI_ERP_COMPUTE_POOL",     |
|     "query_warehouse": "RELATIONAL_AI_ERP_WAREHOUSE",     |
|     "resumed_on": "2024-10-27T22:10:05Z",                 |
|     "updated_on": "2024-10-27T22:10:05Z"                  |
|   }                                                       |
| ]                                                         |
+-----------------------------------------------------------+
*/
```
Alternatively, from Python, create a Provider instance and use its .sql() method to execute the app.get_service() SQL procedure:
```python
import relationalai as rai

# Get a Provider instance.
app = rai.Provider()

# Get service details.
service = app.sql("CALL relationalai.app.get_service()", format="pandas")
print(service)
```
Refer to the reference documentation for more information on the output fields.
Check Service Status
Requires the app_user application role.

Use the app.service_status() procedure to check the status of the RAI SPCS service:
```sql
CALL relationalai.app.service_status();
/*
+----------------------------------------------+
| [                                            |
|   {                                          |
|     "message": "Running",                    |
|     "name": "main",                          |
|     "restartCount": 0,                       |
|     "startTime": "2024-10-27T22:10:05Z",     |
|     "status": "READY"                        |
|   },                                         |
|   {                                          |
|     "message": "Running",                    |
|     "name": "registry",                      |
|     "restartCount": 0,                       |
|     "startTime": "2024-10-27T22:10:06Z",     |
|     "status": "READY"                        |
|   },                                         |
|   {                                          |
|     "message": "Running",                    |
|     "name": "otel-collector",                |
|     "restartCount": 0,                       |
|     "startTime": "2024-10-27T22:10:07Z",     |
|     "status": "READY"                        |
|   }                                          |
| ]                                            |
+----------------------------------------------+
*/
```
Alternatively, from Python, create a Provider instance and use its .sql() method to execute the app.service_status() SQL procedure:
```python
import relationalai as rai

# Get a Provider instance.
app = rai.Provider()

# Check the app status.
status = app.sql("CALL relationalai.app.service_status()")
print(status)
```
The output of app.service_status() contains JSON objects with status details for the various SPCS containers associated with the RAI SPCS service. If each container’s status is READY, the service is activated and ready to use.
Refer to the reference documentation for more information on the output fields.
Create an OAuth Security Integration for Direct Access
Requires the ACCOUNTADMIN or SECURITYADMIN Snowflake role.
To enable OAuth authentication for Direct Access, an admin must first create a Snowflake OAuth security integration. This enables users to authenticate interactively and obtain access and refresh tokens.
Follow these steps:

1. Create the OAuth security integration. Execute the following SQL to create a public OAuth client for Direct Access:

   ```sql
   -- Create a public OAuth client for Direct Access.
   CREATE SECURITY INTEGRATION RAI_SECURITY_INTEGRATION
     TYPE = OAUTH
     ENABLED = TRUE
     OAUTH_CLIENT = CUSTOM
     OAUTH_CLIENT_TYPE = 'PUBLIC'
     OAUTH_ALLOW_NON_TLS_REDIRECT_URI = TRUE
     OAUTH_REDIRECT_URI = 'http://localhost:8001/snowflake/oauth-redirect'
     OAUTH_ISSUE_REFRESH_TOKENS = TRUE
     OAUTH_REFRESH_TOKEN_VALIDITY = 86400; -- Time in seconds
   ```

   Use a confidential client if your organization requires it, and adjust OAUTH_CLIENT_TYPE and secret handling accordingly. Users will need to provide the oauth_client_secret in their configuration if you choose this option. See the Snowflake docs for details on these options.
2. Share required values with users. After creating the security integration, run the following SQL to retrieve the client ID:

   ```sql
   -- Inspect the created integration. Change the integration name if you used
   -- a different one in step 1.
   DESC SECURITY INTEGRATION RAI_SECURITY_INTEGRATION;
   ```

   Share the client ID, client secret (if using a confidential client), and the redirect URI with users so they can configure their Direct Access connection.
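If you prefer to pull the client ID programmatically, a sketch along these lines may help. The column names follow Snowflake’s DESC output; the lowercase casing is an assumption about the client’s DataFrame conversion, so adjust as needed.

```python
import relationalai as rai

app = rai.Provider()

# Fetch the integration's properties and extract the OAuth client ID.
# Column casing is an assumption; adjust if your client returns uppercase names.
desc = app.sql("DESC SECURITY INTEGRATION RAI_SECURITY_INTEGRATION", format="pandas")
client_id = desc.loc[desc["property"] == "OAUTH_CLIENT_ID", "property_value"].iloc[0]
print(client_id)
```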
Issue a Programmatic Access Token for Direct Access
Requires the SECURITYADMIN or ACCOUNTADMIN Snowflake role.
Follow these steps to issue a Programmatic Access Token (PAT) that lets a user authenticate headlessly:
1. Issue a PAT for the user.

   ```sql
   -- Issue a PAT for a user with a role restriction and expiry.
   ALTER USER <user_name>
     ADD PROGRAMMATIC ACCESS TOKEN DIRECT_ACCESS_TOKEN
     ROLE_RESTRICTION = 'RAI_DEVELOPER' -- choose an appropriate role
     DAYS_TO_EXPIRY = 90; -- adjust as needed
   ```
2. Configure a network policy.

   ```sql
   -- Create a permissive network policy (example; tighten for production).
   CREATE NETWORK POLICY RAI_DIRECT_ACCESS_POLICY
     ALLOWED_IP_LIST = ('0.0.0.0/0');

   -- Assign the policy to the user.
   ALTER USER <user_name> SET NETWORK_POLICY = RAI_DIRECT_ACCESS_POLICY;
   ```
3. Deliver the token securely. Deliver the PAT out of band and instruct the user to store it securely and reference the token file in their Direct Access configuration.
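How users store the token is up to your policy; one simple local pattern is a file readable only by the current user. A sketch (the path and filename are illustrative):

```python
from pathlib import Path

# Store the delivered PAT in a file only the current user can read.
# The path is illustrative; follow your organization's secret-handling policy.
token_path = Path.home() / ".rai" / "direct_access_token.txt"
token_path.parent.mkdir(parents=True, exist_ok=True)
token_path.write_text("<paste the PAT here>\n")
token_path.chmod(0o600)  # restrict access to the current user
```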
Configure Key-Pair (JWT) Authentication for Direct Access
Requires the SECURITYADMIN or ACCOUNTADMIN Snowflake role.
To enable key‑pair (JWT) authentication for Direct Access, users first generate a private/public key pair locally; then an admin (or the user, per policy) registers the public key in Snowflake.
1. Generate a private/public key pair locally. Users run openssl or another tool on their machine to create an RSA key pair:

   ```bash
   # Generate a private key (omit -nocrypt to encrypt the key).
   openssl genrsa 2048 | openssl pkcs8 -topk8 -inform PEM -out rsa_key.p8 -nocrypt

   # Generate the corresponding public key.
   openssl rsa -in rsa_key.p8 -pubout -out rsa_key.pub
   ```
2. Register the public key in Snowflake. An admin (or the user, per policy) then runs the following SQL to register the public key with the user’s Snowflake account:

   ```sql
   -- Register the user's RSA public key.
   ALTER USER <user_name>
     SET RSA_PUBLIC_KEY = '<public key contents>'; -- paste the contents of rsa_key.pub
   ```
3. Configure the user environment. After registration, the user must reference the private key (and its passphrase, if encrypted) in their Direct Access configuration.
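Users can sanity-check the generated key locally before wiring it into their configuration. A sketch using the cryptography package, assuming the unencrypted rsa_key.p8 from step 1:

```python
# Verify the private key parses (requires the `cryptography` package).
from cryptography.hazmat.primitives import serialization

with open("rsa_key.p8", "rb") as f:
    key = serialization.load_pem_private_key(f.read(), password=None)

print(f"Loaded a {key.key_size}-bit private key.")  # expect 2048
```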
Telemetry and Logs
Section titled “Telemetry and Logs”Application Telemetry
When you install the RAI Native App, you are prompted to share continuous telemetry data with RelationalAI. This data is written to your account’s active event table and contains operational information such as internal system logs, engine sizes, and usage data. Customer data and personally identifiable information are not included in continuous telemetry data.
See the Snowflake documentation for details on viewing and working with event tables.
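To see where telemetry lands, you can look up the account’s active event table and sample it. A sketch: SHOW PARAMETERS and the event table’s TIMESTAMP column are standard Snowflake, but the lowercase "value" column casing is an assumption about the client’s DataFrame conversion.

```python
import relationalai as rai

app = rai.Provider()

# Look up the account's active event table (column casing is an assumption;
# adjust if your client returns uppercase names).
params = app.sql("SHOW PARAMETERS LIKE 'EVENT_TABLE' IN ACCOUNT", format="pandas")
event_table = params["value"].iloc[0]

# Sample a few recent telemetry records.
print(app.sql(f"SELECT * FROM {event_table} ORDER BY timestamp DESC LIMIT 5"))
```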
Share On-Demand Logs
Requires the sensitive_logs application role.

Use the On-Demand Logs Notebook to create a secure-share view that can be shared with RelationalAI for support.
A preview of the On-Demand Logs Notebook is shown below.
Sharing On-Demand Logs with RAI
Overview
This notebook creates and shares a secure-share view with RAI to provide access to on-demand logs. It’s designed to run as a Snowflake Notebook, but it can be adapted to run as a local Jupyter notebook by replacing the call to get_active_session() with code to generate a Session object locally. See Snowflake’s documentation for details.
```python
from datetime import datetime, timedelta, date

def get_date_range(start_date, end_date):
    start = datetime.strptime(start_date, "%Y-%m-%d")
    end = datetime.strptime(end_date, "%Y-%m-%d")
    dates = [
        (start + timedelta(days=i)).strftime("%Y-%m-%d")
        for i in range((end - start).days + 1)
    ]
    return "|".join(dates)
```
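For example, a three-day range yields a pipe-separated alternation, ready to drop into the regex patterns built below:

```python
get_date_range("2024-10-01", "2024-10-03")
# '2024-10-01|2024-10-02|2024-10-03'
```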
Edit this cell with your specific values:
```python
# Override this to the date you want to get logs starting from, in the format YYYY-MM-DD:
start_date = date.today().strftime("%Y-%m-%d")

# Override this to the date you want to get logs until, in the format YYYY-MM-DD:
end_date = date.today().strftime("%Y-%m-%d")

date_range = get_date_range(start_date, end_date)

# Override this to True if you want to share spcs_control_plane logs:
include_erps_logs = False

# Override this to an engine name (e.g., 'testEngine'), or leave it as-is to get logs for all engines:
engine_name = ".*"
warehouse = ""

# Override this to a unique ID and share it with RAI:
id = ""

# The account you want to share the logs with -- it should be in the format org.account_name:
event_sharing_account = ""

# Your native app name (usually relationalai):
native_app_name = "relationalai"
```
```python
engine_file_pattern = f'{engine_name}/clientlogs-.*({date_range}).*\\.json|{engine_name}/clientlogs-engine.json'
erp_file_pattern = f'|clientlogs-cp-({date_range}).*\\.json|.*clientlogs-cp.json'

if include_erps_logs:
    file_pattern = f'.*({engine_file_pattern}{erp_file_pattern}).*'
else:
    file_pattern = f'.*({engine_file_pattern}).*'
```
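To make the result concrete, here is the pattern produced for a single-day range with the default engine_name:

```python
engine_name = ".*"
date_range = "2024-10-27"
engine_file_pattern = f'{engine_name}/clientlogs-.*({date_range}).*\\.json|{engine_name}/clientlogs-engine.json'
file_pattern = f'.*({engine_file_pattern}).*'
print(file_pattern)
# .*(.*/clientlogs-.*(2024-10-27).*\.json|.*/clientlogs-engine.json).*
```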
```python
sql_query = """
BEGIN
USE ROLE ACCOUNTADMIN;
USE WAREHOUSE &warehouse; -- Update to use another warehouse if necessary.

CREATE DATABASE IF NOT EXISTS TELEMETRY_SHARING;
USE DATABASE TELEMETRY_SHARING;
CREATE SCHEMA IF NOT EXISTS LOGS;
USE SCHEMA LOGS;

--*****--
-- Load staged data files to temporary tables
--*****--
CREATE OR REPLACE TABLE TELEMETRY_SHARING.LOGS.TELEMETRY_LOAD_TABLE_&id (
    LOG_RECORD VARCHAR
);

CREATE OR REPLACE TABLE TELEMETRY_SHARING.LOGS.TELEMETRY_SHARE_TABLE_&id (
    TIMESTAMP TIMESTAMP,
    OBSERVED_TIMESTAMP TIMESTAMP,
    SPAN_ID VARCHAR,
    TRACE_ID VARCHAR,
    MESSAGE VARCHAR,
    LOG_RECORD VARCHAR
);

--*****--
-- Create secure view from table with target log records
--*****--
CREATE OR REPLACE SECURE VIEW TELEMETRY_SHARING.LOGS.TELEMETRY_SHARE_VIEW_&id
COMMENT = 'View containing telemetry records to share with the RAI provider account'
AS SELECT * FROM TELEMETRY_SHARING.LOGS.TELEMETRY_SHARE_TABLE_&id;

--*****--
-- Share secure view with the RAI provider account
--*****--
CREATE OR REPLACE SHARE TELEMETRY_SHARE_&id;

CREATE DATABASE ROLE IF NOT EXISTS TELEMETRY_SHARE_ROLE;
GRANT USAGE ON DATABASE TELEMETRY_SHARING TO SHARE TELEMETRY_SHARE_&id;
GRANT USAGE ON SCHEMA TELEMETRY_SHARING.LOGS TO DATABASE ROLE TELEMETRY_SHARE_ROLE;
GRANT SELECT ON VIEW TELEMETRY_SHARE_VIEW_&id TO DATABASE ROLE TELEMETRY_SHARE_ROLE;
GRANT DATABASE ROLE TELEMETRY_SHARE_ROLE TO SHARE TELEMETRY_SHARE_&id;

ALTER SHARE TELEMETRY_SHARE_&id ADD ACCOUNTS = &event_sharing_account;

COPY INTO TELEMETRY_SHARING.LOGS.TELEMETRY_LOAD_TABLE_&id
FROM (
    SELECT $1 AS log_record
    FROM @&native_app_name.app_state.client_log_stage
)
PATTERN = '&file_pattern'
FILE_FORMAT = (TYPE = JSON)
ON_ERROR = CONTINUE; -- This will skip any log records that are invalid JSON.
-- The output of the query will indicate if any records were skipped due to errors.

-- Copy from TELEMETRY_LOAD_TABLE_&id into TELEMETRY_SHARE_TABLE_&id and remove safe logs.
INSERT INTO TELEMETRY_SHARING.LOGS.TELEMETRY_SHARE_TABLE_&id
SELECT
    to_timestamp(timeUnixNano) AS timestamp,
    to_timestamp(observedTimeUnixNano) AS observed_timestamp,
    spanId,
    traceId,
    a.value:value:stringValue AS message,
    log_record
FROM (
    SELECT
        value:timeUnixNano AS timeUnixNano,
        value:observedTimeUnixNano AS observedTimeUnixNano,
        value:spanId AS spanId,
        value:traceId AS traceId,
        value AS log_record
    FROM TELEMETRY_SHARING.LOGS.TELEMETRY_LOAD_TABLE_&id,
    LATERAL FLATTEN(
        INPUT => TRY_PARSE_JSON($1):resourceLogs[0]:scopeLogs[0]:logRecords,
        OUTER => TRUE
    )
),
LATERAL FLATTEN(INPUT => log_record:body:kvlistValue:values, OUTER => TRUE) a,
LATERAL FLATTEN(INPUT => log_record:attributes, OUTER => TRUE) b
WHERE a.VALUE:key = 'message'
  AND (
    ( -- engine unsafe logs
        b.value:key = 'log.file.name'
        AND b.value:value:stringValue = 'engine-unsafe.log'
    )
    OR ( -- erps unsafe logs
        log_record NOT LIKE '%___safe_to_log%'
        AND log_record NOT LIKE '%engine-safe.log%'
        AND log_record LIKE '%spcs_control_plane%'
    )
  );
END;
"""

sql_query = (
    sql_query
    .replace('&warehouse', warehouse)
    .replace('&id', id)
    .replace('&event_sharing_account', event_sharing_account)
    .replace('&date_range', date_range)
    .replace('&native_app_name', native_app_name)
    .replace('&file_pattern', file_pattern)
)
```
```python
# If running as a Snowflake notebook:
from snowflake.snowpark.context import get_active_session

session = get_active_session()
```
```python
# # If running as a local Jupyter notebook:
# from snowflake.snowpark import Session
#
# connection_parameters = {
#     "account": "<your snowflake account>",
#     "user": "<your snowflake user>",
#     "password": "<your snowflake password>",
#     "role": "<your snowflake role>",            # optional
#     "warehouse": "<your snowflake warehouse>",  # optional
#     "database": "<your snowflake database>",    # optional
#     "schema": "<your snowflake schema>",        # optional
# }
#
# session = Session.builder.configs(connection_parameters).create()
```
```python
session.sql(sql_query).collect()
```
Run the following cell to confirm that the logs are copied to the secure share:
session.sql("USE ROLE ACCOUNTADMIN").collect()session.sql(f"USE WAREHOUSE {warehouse}").collect()query = f"SELECT * FROM TELEMETRY_SHARING.LOGS.TELEMETRY_SHARE_TABLE_{id} LIMIT 5"session.sql(query).collect()