Wildlife Conservation Network
Use the Louvain community detection algorithm and degree centrality analysis to identify collaboration clusters among wildlife conservation organizations, helping optimize resource sharing and identify key coordination hubs.
What this template is for
Wildlife conservation requires coordination across many organizations—NGOs, research stations, wildlife reserves, veterinary services, and community programs. This template demonstrates how to use graph analytics to understand conservation networks:
- Louvain algorithm — a community detection method that finds clusters of densely connected nodes in a network — to identify groups of conservation organizations that work closely together
- Degree centrality — a connectivity measure that identifies the most influential organizations within each community
By analyzing a network of conservation partnerships, this template helps you discover natural collaboration clusters based on geography, species focus, or organizational mission. These communities reveal which organizations form tight-knit working groups, where coordination is already strong, and where opportunities exist for better cross-community collaboration. Degree centrality then identifies the hub organizations within each cluster that are well-positioned to lead coordination efforts and resource sharing initiatives.
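To make the two measures concrete, here is a minimal sketch on a toy network using networkx, for illustration only; the template itself runs both analyses through RelationalAI's Graph API, and the organization names below are made up:

```python
# Toy illustration of Louvain community detection and degree centrality.
# Uses networkx, not the RelationalAI API; organization names are invented.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("Savanna Trust", "Rhino Watch"), ("Savanna Trust", "Vet Aid"),
    ("Rhino Watch", "Vet Aid"),                    # dense cluster 1
    ("Reef Care", "Turtle Lab"), ("Reef Care", "Ocean Fund"),
    ("Turtle Lab", "Ocean Fund"),                  # dense cluster 2
    ("Vet Aid", "Ocean Fund"),                     # single cross-cluster link
])

# Community detection: groups of densely connected organizations
communities = nx.community.louvain_communities(G, seed=42)

# Degree centrality: each node's connectivity, normalized to the 0-1 range
centrality = nx.degree_centrality(G)
```

The two triangles form natural communities, and the two organizations on the bridge edge ("Vet Aid" and "Ocean Fund") come out with the highest centrality, which is exactly the hub pattern this template looks for.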
Who this is for
- Beginners who want to learn community detection with a real-world use case
- Data scientists new to RelationalAI looking for a graph analytics example beyond centrality measures
- Conservation program managers optimizing partnership strategies and resource allocation
- Network analysts studying collaboration patterns in mission-driven organizations
What you’ll build
- Load a wildlife conservation network with 12 organizations and 19 undirected partnerships from CSV files
- Use RelationalAI’s Graph API to model the conservation partnership network
- Apply the Louvain algorithm to detect communities (collaboration clusters)
- Calculate degree centrality to identify which organizations serve as hubs within each community
- Analyze community characteristics (region, species focus, organization types, hub organizations)
- Generate insights on intra-community collaboration, hub organization leadership potential, and cross-community coordination opportunities
This template uses RelationalAI’s graph modeling capabilities with the Graph class to represent the network, the built-in Louvain algorithm for community detection, and degree centrality analysis to identify influential organizations within each cluster.
What’s included
- Shared model setup: `model_setup.py` — Common model configuration and graph creation (used by both scripts)
- Command-line script: `wildlife_conservation_network.py` — CLI analysis script with detailed output
- Interactive app: `app.py` — Streamlit web application with visualizations and interactive analysis
- Data: `data/organizations.csv` and `data/partnerships.csv`
Prerequisites
- Python >= 3.10
- A Snowflake account that has the RAI Native App installed.
- A Snowflake user with permissions to access the RAI Native App.
Quickstart
Follow these steps to run the template with the included sample data. You can customize the data and model as needed after you have it running end-to-end.
1. Download the ZIP file for this template and extract it:

   ```sh
   curl -O https://docs.relational.ai/templates/zips/v1/wildlife-conservation-network.zip
   unzip wildlife-conservation-network.zip
   cd wildlife_conservation_network
   ```

2. Create and activate a virtual environment:

   ```sh
   python -m venv .venv
   source .venv/bin/activate
   python -m pip install -U pip
   ```

3. Install dependencies. From this folder:

   ```sh
   python -m pip install .
   ```

4. Configure the Snowflake connection and RAI profile:

   ```sh
   rai init
   ```

5. Run the template.

   Option A: Command-line script

   ```sh
   python wildlife_conservation_network.py
   ```

   Option B: Interactive Streamlit app

   ```sh
   # Install additional dependencies for visualization
   python -m pip install '.[visualization]'

   # Launch the interactive app
   streamlit run app.py
   ```

   The Streamlit app provides:
- Interactive network visualization colored by community with hover details
- Community breakdown with detailed statistics and member listings
- Geographic and species focus analysis
- Cross-community connector identification
- Summary statistics and key metrics
How it works
The template follows this flow:
CSV files → model_setup.create_model() → Apply Louvain → Analyze communities → Display results

1. Shared Model Setup
Both the CLI script and Streamlit app use the same model setup from model_setup.py:
```python
from model_setup import create_model

# Create the model, concepts, relationships, and graph (all in one call)
model, graph, Organization = create_model()
```

The `create_model()` function handles:
- Creating the RelationalAI model container
- Defining the `Organization` concept with all properties
- Loading organizations from CSV
- Defining the `Partnership` concept for edges
- Loading partnerships from CSV
- Creating the undirected, unweighted graph
- Returning all components for use in analysis
2. Apply Community Detection and Centrality Analysis
Use the built-in Louvain algorithm to detect communities and calculate centrality metrics:
```python
# Apply Louvain algorithm for community detection
louvain_communities = graph.louvain()

# Calculate degree centrality to identify hub organizations within communities
degree_centrality = graph.degree_centrality()

# Also calculate degree (raw partnership count) for additional analysis
degree = graph.degree()
```

The Louvain algorithm works by:
- Optimizing modularity (a measure of how well the network divides into communities)
- Iteratively grouping nodes to maximize within-community connections
- Minimizing between-community connections
- Returning a community ID for each node
Degree centrality then normalizes the partnership counts to a 0-1 scale, making it easier to compare organizations across different community sizes. Organizations with higher centrality are well-positioned hubs that could lead coordination efforts within their community.
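The normalization itself is simple enough to sketch in plain Python. The template computes this via `graph.degree_centrality()`; the names and counts below are made up:

```python
# Degree centrality divides each organization's partnership count by the
# maximum possible number of partners: n - 1 in a network of n organizations.
def degree_centrality(partner_counts: dict[str, int]) -> dict[str, float]:
    n = len(partner_counts)
    return {org: deg / (n - 1) for org, deg in partner_counts.items()}

# A 12-organization network like the template's sample data (counts invented)
counts = {"Savanna Trust": 6, "Reef Care": 3, "Vet Aid": 1}
counts.update({f"Org {i}": 2 for i in range(9)})  # pad to 12 organizations
scores = degree_centrality(counts)  # "Savanna Trust" scores 6 / 11, about 0.545
```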
3. Query and Analyze Communities
Query the graph to retrieve community assignments and metrics:
```python
from relationalai.semantics import where, Integer, Float

# Create variable references
org = graph.Node.ref("org")
community_id = Integer.ref("community_id")
centr_score = Float.ref("centr_score")
partner_count = Integer.ref("partner_count")

# Query the graph
results = where(
    louvain_communities(org, community_id),
    degree_centrality(org, centr_score),
    degree(org, partner_count)
).select(
    org.id,
    org.name,
    org.type,
    org.region,
    org.focus_species,
    community_id.alias("community"),
    centr_score.alias("degree_centrality"),
    partner_count.alias("partnerships")
).to_df()

# Sort by community, then by centrality within each community
results = results.sort_values(["community", "degree_centrality"], ascending=[True, False])
```

4. CLI Script Analysis
The wildlife_conservation_network.py script displays:
- A table of all organizations with their community assignments and metrics
- Detailed breakdown of each detected community (size, region, species focus, hub organization)
- Network-wide summary statistics
- Actionable recommendations for conservation coordination
5. Interactive Streamlit App
The included app.py provides an interactive web interface using the same shared model:
```python
import streamlit as st
from model_setup import create_model

# Load the same model and query results
model, graph, Organization = create_model()
results = get_results(model, graph, Organization)
```

The Streamlit app features:
- Interactive network graph: Nodes colored by community, sized by partnerships
- Community breakdown: Expandable sections with detailed metrics for each cluster
- Strategic analysis: Cross-community connectors, geographic distribution, species focus
- Summary statistics: Sidebar with key network metrics and hub organizations
Customize this template
Use your own data:
- Replace the CSV files in the `data/` directory with your own conservation network, keeping the same column names (or update the logic in `wildlife_conservation_network.py`).
- Make sure that organizations in `partnerships.csv` only reference valid organization IDs.
- You can add additional properties to organizations (budget, staff size, years active) by adding columns to the CSV and corresponding properties to the model.
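The referential-integrity check above can be sketched with pandas. The column names (`id`, `source_id`, `target_id`) are assumptions here; match them to your actual CSV headers:

```python
import pandas as pd

# In practice, read the real files:
#   orgs = pd.read_csv("data/organizations.csv")
#   partnerships = pd.read_csv("data/partnerships.csv")
orgs = pd.DataFrame({"id": [1, 2, 3], "name": ["Org A", "Org B", "Org C"]})
partnerships = pd.DataFrame({"source_id": [1, 2, 9], "target_id": [2, 3, 1]})

valid_ids = set(orgs["id"])
referenced = set(partnerships["source_id"]) | set(partnerships["target_id"])
dangling = referenced - valid_ids  # IDs used in edges but missing from organizations
```

Any IDs left in `dangling` point at partnerships that reference organizations your model never loads.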
Extend the model:
- Add weighted partnerships: Weight edges by collaboration intensity (number of joint projects, funding shared, frequency of interaction). Update `weighted=True` in the Graph definition and add weight values to edges.
- Try different community detection algorithms: RelationalAI supports multiple algorithms:
  - `graph.label_propagation()` — Faster but less accurate for small networks
  - `graph.weakly_connected_component()` — Finds completely disconnected groups
  - Experiment to see which algorithm best reveals your network's structure
- Add temporal analysis: Include partnership start dates to analyze how communities evolve over time.
- Calculate modularity: Measure how well the detected communities separate from each other (higher modularity = better community structure).
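If you do add a modularity calculation, the quantity it measures can be illustrated with networkx (not the RAI API): a partition that respects the two dense triangles below scores well above one that splits them.

```python
# Modularity compares the density of within-community edges against what a
# random network with the same degree sequence would produce. Higher is better.
import networkx as nx

G = nx.Graph([
    ("A", "B"), ("B", "C"), ("A", "C"),   # dense triangle 1
    ("D", "E"), ("E", "F"), ("D", "F"),   # dense triangle 2
    ("C", "D"),                           # single bridge edge
])

good_partition = [{"A", "B", "C"}, {"D", "E", "F"}]   # respects the triangles
bad_partition = [{"A", "E"}, {"B", "C", "D", "F"}]    # splits them arbitrarily

q_good = nx.community.modularity(G, good_partition)   # positive
q_bad = nx.community.modularity(G, bad_partition)     # negative
```

Louvain is, in effect, a greedy search for the partition that maximizes this score.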
Troubleshooting
Why does authentication/configuration fail?
- Run `rai init` to create or update `raiconfig.toml`.
- If you have multiple profiles, set `RAI_PROFILE` or switch profiles in your config.
Why does the script fail to connect to the RAI Native App?
- Verify the Snowflake account/role/warehouse and `rai_app_name` are correct in `raiconfig.toml`.
- Ensure the RAI Native App is installed and you have access.
Why does Louvain detect only 1 community?
- Your network might be very densely connected, or too small for meaningful community structure.
- Try adding more organizations and partnerships, or ensure there are distinct clusters in your data.
- For completely disconnected groups, use `graph.weakly_connected_component()` instead.
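A quick way to test the "too dense" case is to compute the network's density; this back-of-the-envelope check is not part of the template:

```python
# Density = actual edges / possible edges in an undirected network.
# Values approaching 1.0 mean nearly everyone partners with everyone,
# leaving Louvain little community structure to find.
def density(n_nodes: int, n_edges: int) -> float:
    return 2 * n_edges / (n_nodes * (n_nodes - 1))

# The template's sample network: 12 organizations, 19 partnerships
d = density(12, 19)  # about 0.29, sparse enough for distinct clusters
```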
Why are community IDs different each time I run the script?
- Community ID numbers (0, 1, 2…) are arbitrary labels assigned by the algorithm.
- What matters is which organizations are grouped together, not the specific ID number.
- The Louvain algorithm can have some randomness, so community assignments might vary slightly between runs, but the overall structure should be consistent.
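If you need labels that are stable across runs, one option (a sketch, not part of the template) is to relabel each community by its alphabetically first member; the organization names below are invented:

```python
# Louvain's numeric community IDs are arbitrary, so two runs can number the
# same grouping differently. Renumbering by each community's smallest member
# makes results from different runs directly comparable.
def canonical_labels(assignment: dict[str, int]) -> dict[str, int]:
    groups: dict[int, list[str]] = {}
    for org, cid in assignment.items():
        groups.setdefault(cid, []).append(org)
    # Order communities by their alphabetically first member, then renumber
    ordered = sorted(groups.values(), key=min)
    return {org: i for i, members in enumerate(ordered) for org in members}

# Two runs with the same grouping but different raw IDs
run1 = {"Reef Care": 0, "Turtle Lab": 0, "Savanna Trust": 1}
run2 = {"Reef Care": 2, "Turtle Lab": 2, "Savanna Trust": 5}
stable1 = canonical_labels(run1)
stable2 = canonical_labels(run2)
```

After relabeling, the two runs agree exactly, so you can diff community assignments between runs or data revisions.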