Sprint Scheduling
Assign backlog issues to developers across sprints, minimizing weighted completion time while respecting capacity and skill constraints.
What this template is for
Software development teams need to decide which developer works on which issue in which sprint. Manually balancing priorities, story points, skill requirements, and capacity across multiple sprints is error-prone and time-consuming, especially as the backlog grows. An optimization model can produce an assignment plan that minimizes delay on high-priority work while keeping every developer within their capacity.
This template assigns 30 backlog issues to 8 developers across 4 two-week sprints. It demonstrates how to filter issues by epoch timestamp to scope the backlog to a planning horizon, map epoch-based creation dates to categorical sprint periods, and build a binary assignment optimization that respects developer capacity and team skill constraints.
Prescriptive reasoning is well suited here because the problem has combinatorial structure — each issue must go to exactly one developer in one sprint, developers have capacity limits, and only developers with matching team skills can take on an issue. The solver explores the full space of valid assignments to find the schedule that minimizes weighted completion time, prioritizing high-urgency issues into earlier sprints.
Who this is for
- Intermediate users familiar with mixed-integer programming concepts (binary variables, assignment constraints)
- Engineering managers looking to automate sprint planning
- Project managers balancing team workloads across multiple sprints
- Data scientists working with epoch-timestamped event data who need temporal filtering patterns
What you’ll build
- Load developers, sprints, issues, and skill mappings from CSV files
- Filter issues by epoch timestamp to scope the backlog to a planning horizon
- Map each issue's `created_at` epoch to a target sprint (earliest eligible sprint)
- Build a cross-product `Assignment` concept linking developers, issues, and sprints where skill constraints hold
- Define binary decision variables for each valid (developer, issue, sprint) assignment
- Enforce that each issue is assigned exactly once and developer capacity is not exceeded per sprint
- Minimize weighted completion time so high-priority issues land in earlier sprints
- Run scenario analysis across capacity multiplier levels (0.35, 0.5, 1.0) to see the impact of reduced team capacity
- Solve with HiGHS and display the assignment plan per scenario
What’s included
- Script: `sprint_scheduling.py` — end-to-end model, solve, and results
- Data: `data/developers.csv`, `data/sprints.csv`, `data/issues.csv`, `data/skills.csv`
- Config: `pyproject.toml`
Prerequisites
Access
- A Snowflake account that has the RAI Native App installed.
- A Snowflake user with permissions to access the RAI Native App.
Tools
- Python >= 3.10
- RelationalAI Python SDK (`relationalai`) >= 1.0.13
Quickstart
1. Download ZIP:

   ```shell
   curl -O https://docs.relational.ai/templates/zips/v1/sprint_scheduling.zip
   unzip sprint_scheduling.zip
   cd sprint_scheduling
   ```

2. Create venv:

   ```shell
   python -m venv .venv
   source .venv/bin/activate
   python -m pip install --upgrade pip
   ```

3. Install:

   ```shell
   python -m pip install .
   ```

4. Configure:

   ```shell
   rai init
   ```

5. Run:

   ```shell
   python sprint_scheduling.py
   ```

6. Expected output:

   ```
   Running scenario: capacity_multiplier = 0.35
   Status: INFEASIBLE -- skipping results
   Running scenario: capacity_multiplier = 0.5
   Status: OPTIMAL, Objective: 112.0
   Planning horizon: 2024-10-01 to 2024-11-26
   Issues in scope: 25 (of 30 total)
   Assignments:
   assign_PROJ-106_Alice_Sprint 1 1.0
   assign_PROJ-107_Carol_Sprint 1 1.0
   assign_PROJ-108_Frank_Sprint 1 1.0
   assign_PROJ-109_Bob_Sprint 1 1.0
   assign_PROJ-110_Dave_Sprint 1 1.0
   assign_PROJ-111_Hank_Sprint 1 1.0
   assign_PROJ-112_Grace_Sprint 1 1.0
   assign_PROJ-113_Dave_Sprint 1 1.0
   assign_PROJ-114_Bob_Sprint 1 1.0
   assign_PROJ-115_Eve_Sprint 1 1.0
   assign_PROJ-116_Dave_Sprint 2 1.0
   assign_PROJ-117_Alice_Sprint 2 1.0
   ...
   Running scenario: capacity_multiplier = 1.0
   Status: OPTIMAL, Objective: 112.0
   Planning horizon: 2024-10-01 to 2024-11-26
   Issues in scope: 25 (of 30 total)
   Assignments:
   assign_PROJ-106_Alice_Sprint 1 1.0
   assign_PROJ-107_Carol_Sprint 1 1.0
   ...
   ==================================================
   Scenario Analysis Summary
   ==================================================
   capacity_multiplier=0.35: INFEASIBLE, obj=N/A
   capacity_multiplier=0.5: OPTIMAL, obj=112.0
   capacity_multiplier=1.0: OPTIMAL, obj=112.0
   ```

At 35% capacity the problem is infeasible — not enough hours to schedule all in-scope issues. At 50% and 100% capacity the solver finds the same optimal objective (112.0), meaning half capacity is sufficient to schedule all 25 issues within the 4-sprint horizon. The assignments front-load 10 issues into Sprint 1 and distribute the rest across Sprints 2-3.
Template structure
```
.
├── README.md
├── pyproject.toml
├── sprint_scheduling.py
└── data/
    ├── developers.csv
    ├── sprints.csv
    ├── issues.csv
    └── skills.csv
```

How it works
1. Epoch filtering — scope the backlog to the planning horizon
Issues have a `created_at` column storing Unix epoch seconds. The script converts the planning horizon boundaries to epochs and filters:

```python
planning_start = "2024-10-01"
planning_end = "2024-11-26"

start_epoch = int(datetime.strptime(planning_start, "%Y-%m-%d").timestamp())
end_epoch = int(datetime.strptime(planning_end, "%Y-%m-%d").timestamp())

filtered_issues = issues_df[
    (issues_df["created_at"] >= start_epoch)
    & (issues_df["created_at"] <= end_epoch)
].copy()
```

This keeps only issues created within the planning horizon. Issues created before or after are excluded from scheduling.
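One caveat worth knowing: `datetime.timestamp()` on a naive datetime uses the machine's local timezone, so the computed epoch boundaries can shift by a few hours between machines. If you want reproducible boundaries, a pinned-UTC variant (a sketch, not part of the template) looks like this:

```python
from datetime import datetime, timezone

def to_epoch_utc(date_str: str) -> int:
    """Parse a YYYY-MM-DD string and return Unix epoch seconds, pinned to UTC."""
    dt = datetime.strptime(date_str, "%Y-%m-%d").replace(tzinfo=timezone.utc)
    return int(dt.timestamp())

# Same values on every machine, regardless of local timezone.
start_epoch = to_epoch_utc("2024-10-01")
end_epoch = to_epoch_utc("2024-11-26")
```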
2. Epoch-to-categorical-period mapping — assign target sprints
Unlike the date-to-integer mapping in the demand planning template, this template maps epochs to categorical sprint periods. Each issue is assigned to its earliest eligible sprint based on when it was created:
```python
def map_to_sprint(created_at_epoch):
    for _, sprint in sprints_df.iterrows():
        if created_at_epoch < sprint["startdate"]:
            return int(sprint["number"])
        if sprint["startdate"] <= created_at_epoch < sprint["enddate"]:
            return int(sprint["number"])
    return int(sprints_df["number"].max())

filtered_issues["target_sprint_number"] = filtered_issues["created_at"].apply(map_to_sprint)
```

Issues created during Sprint 2 cannot be assigned to Sprint 1 (only Sprint 2 or later). Issues created before `planning_start` are excluded by the epoch filter in step 1.
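The mapping logic is easy to exercise standalone. Here is a minimal sketch with a hypothetical two-sprint calendar (the epoch values are invented for illustration):

```python
import pandas as pd

# Hypothetical sprint calendar: two sprints, boundaries in epoch seconds.
sprints_df = pd.DataFrame({
    "number": [1, 2],
    "startdate": [1_000, 2_000],
    "enddate": [1_999, 2_999],
})

def map_to_sprint(created_at_epoch):
    # Same traversal as the template: return the earliest sprint whose
    # window contains (or follows) the creation time; clamp to the last sprint.
    for _, sprint in sprints_df.iterrows():
        if created_at_epoch < sprint["startdate"]:
            return int(sprint["number"])
        if sprint["startdate"] <= created_at_epoch < sprint["enddate"]:
            return int(sprint["number"])
    return int(sprints_df["number"].max())
```

An issue created before Sprint 1 maps to Sprint 1, one created mid-Sprint 2 maps to Sprint 2, and one created after the last sprint ends is clamped to the final sprint.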
3. Assignment domain with skill constraints
The Assignment concept is a cross-product of developers, issues, and sprints, filtered by two conditions: the developer must have the matching team skill, and the sprint must be at or after the issue’s target sprint:
```python
Assignment = Concept("Assignment")
Assignment.developer = Property(f"{Assignment} has {Developer}", short_name="developer")
Assignment.issue = Property(f"{Assignment} has {Issue}", short_name="issue")
Assignment.sprint = Property(f"{Assignment} has {Sprint}", short_name="sprint")

model.define(
    Assignment.new(developer=Developer, issue=Issue, sprint=Sprint)
).where(
    Skill.developer_id == Developer.id,
    Skill.team == Issue.team,
    Sprint.number >= Issue.target_sprint_number,
)
```

This dramatically reduces the search space by only creating assignment variables where a valid assignment could exist.
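To see why filtering the cross-product matters, here is a plain-Python sketch (toy data, names invented) that enumerates only the (developer, issue, sprint) triples satisfying the skill and target-sprint conditions:

```python
from itertools import product

developers = {"Alice": {"backend"}, "Bob": {"frontend"}}        # dev -> team skills
issues = {"PROJ-1": ("backend", 1), "PROJ-2": ("frontend", 2)}  # issue -> (team, target sprint)
sprints = [1, 2, 3, 4]

# Keep a triple only if the developer has the issue's team skill
# and the sprint is at or after the issue's target sprint.
valid = [
    (dev, issue, sprint)
    for (dev, skills), (issue, (team, target)), sprint in product(
        developers.items(), issues.items(), sprints
    )
    if team in skills and sprint >= target
]
# The full cross-product has 2 * 2 * 4 = 16 triples; only 7 survive the filters.
```

Every triple that is pruned here is a binary variable the solver never has to create or branch on, which is exactly the effect of the `.where()` clauses above.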
4. Binary assignment variables and constraints
Each valid assignment gets a binary variable (1 = assigned, 0 = not assigned). The “each issue assigned exactly once” constraint is defined once as a named expression, while the capacity constraint references a capacity_multiplier that varies per scenario:
```python
problem.solve_for(
    Assignment.x_assigned,
    type="bin",
    name=["assign", Assignment.issue.key, Assignment.developer.name, Assignment.sprint.name],
)

# Static constraint: each issue assigned exactly once
issue_once = model.require(
    sum(Assignment.x_assigned).per(Issue) == 1
).where(Assignment.issue == Issue)
problem.satisfy(issue_once)

# Parameterized constraint: developer capacity per sprint (references capacity_multiplier)
problem.satisfy(model.require(
    sum(Assignment.x_assigned * Assignment.issue.story_points).per(Developer, Sprint)
    <= Developer.capacity_points_per_sprint * capacity_multiplier
).where(Assignment.developer == Developer, Assignment.sprint == Sprint))
```

5. Weighted completion time objective
The objective minimizes a weighted sum where high-priority issues (lower priority number) incur a higher cost when placed in later sprints. It is defined once and reused across scenario iterations:
```python
max_priority = 3
weighted_completion = sum(
    Assignment.x_assigned
    * (max_priority + 1 - Assignment.issue.priority)
    * Assignment.sprint.number
)
problem.minimize(weighted_completion)
```

A priority-1 issue in Sprint 4 costs (3 + 1 - 1) * 4 = 12, while a priority-3 issue in Sprint 4 costs (3 + 1 - 3) * 4 = 4. This pushes the most urgent work into the earliest sprints.
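The cost table implied by the formula can be tabulated directly. A quick plain-Python sketch mirroring the template's weights:

```python
MAX_PRIORITY = 3

def completion_cost(priority: int, sprint: int) -> int:
    """Weight of placing an issue with the given priority into the given sprint."""
    return (MAX_PRIORITY + 1 - priority) * sprint

# Priority 1 (most urgent) is penalized 3 points per sprint of delay;
# priority 3 only 1 point, so the solver defers low-priority work first.
table = {(p, s): completion_cost(p, s) for p in (1, 2, 3) for s in (1, 2, 3, 4)}
```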
Customize this template
- Change the planning horizon: Edit `planning_start` and `planning_end` to include more or fewer sprints. Add corresponding rows to `sprints.csv`.
- Adjust developer capacity: Modify `capacity_points_per_sprint` in `developers.csv` or edit `SCENARIO_VALUES` to sweep different `capacity_multiplier` levels (e.g., `[0.35, 0.5, 1.0]`).
- Add cross-team skills: Append rows to `skills.csv` to let developers work on issues outside their primary team. Grace and Hank already have cross-team skills in the sample data.
- Change the priority scheme: Adjust `max_priority` and the weight formula in the objective to match your team's priority scale.
- Add sprint-specific constraints: For example, require that certain issues are completed by a specific sprint using additional `.where()` clauses.
Troubleshooting
`ModuleNotFoundError: No module named 'relationalai'`

Make sure you have activated your virtual environment and installed dependencies:

```shell
source .venv/bin/activate
python -m pip install .
```

Solver returns INFEASIBLE
Check that total developer capacity across all sprints is sufficient to cover the total story points in the backlog. With the default data, 8 developers with 14-20 points each across 4 sprints provide ample capacity for 30 issues. If you have added issues or reduced capacity, try increasing `capacity_multiplier` or adding more sprints.
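A quick pandas sanity check can compare supply against demand before solving. This sketch assumes the column names used in the template's CSVs but substitutes tiny invented DataFrames for illustration:

```python
import pandas as pd

# Toy stand-ins for developers.csv and issues.csv (values invented).
developers_df = pd.DataFrame({
    "name": ["Alice", "Bob"],
    "capacity_points_per_sprint": [16, 14],
})
issues_df = pd.DataFrame({
    "key": ["PROJ-1", "PROJ-2", "PROJ-3"],
    "story_points": [5, 8, 3],
})

n_sprints = 4
capacity_multiplier = 0.5

# Upper bound on schedulable points vs. total points demanded.
supply = developers_df["capacity_points_per_sprint"].sum() * n_sprints * capacity_multiplier
demand = issues_df["story_points"].sum()
print(f"supply={supply}, demand={demand}")
```

Note this is only a necessary condition: aggregate supply can exceed demand while per-sprint or per-skill bottlenecks still make the model infeasible.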
Some issues are not assigned
Every issue must have at least one developer with a matching team skill. Verify that `skills.csv` covers all teams present in `issues.csv`. If a team has no skilled developers, the solver cannot assign those issues and will report infeasibility.
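A coverage check like the following sketch (pandas, with toy DataFrames and invented team names standing in for `skills.csv` and `issues.csv`) flags uncovered teams before the solve:

```python
import pandas as pd

# Toy stand-ins for skills.csv and issues.csv.
skills_df = pd.DataFrame({"developer_id": [1, 2], "team": ["backend", "frontend"]})
issues_df = pd.DataFrame({"key": ["PROJ-1", "PROJ-2"], "team": ["backend", "mobile"]})

# Teams that appear in the backlog but have no skilled developer.
uncovered = set(issues_df["team"]) - set(skills_df["team"])
if uncovered:
    print(f"No skilled developers for teams: {sorted(uncovered)}")
```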
`rai init` fails or connection errors

Ensure your Snowflake account has the RAI Native App installed and your user has the required permissions. Run `rai init` to configure your connection profile. See the RelationalAI documentation for setup details.