The SANDAG Validation Tool is a Python-Dash web application designed to validate travel model outputs against observed traffic and transit counts. It allows transportation planners and analysts to visualize model performance across multiple scenarios with interactive plots, statistics, and geospatial maps. The app is deployed both locally and on Azure using automated GitHub Actions workflows.
🔗 Live App: validation-tool
📦 Tech Stack: Python · Dash · Plotly · Dash Leaflet · Azure Web App · Databricks · GitHub Actions
- Scenario Comparison: Select and compare model scenarios for different time periods and vehicle classes.
- Volume & VMT Validation: Compare modeled vs. observed flows and VMT by geography, volume category, and road class.
- Truck Validation: Analyze light-, medium-, and heavy-duty trucks with year-based filters.
- Transit Validation: Evaluate model performance for transit boardings across modes and time-of-day (TOD) periods.
- Interactive Maps: Visualize segment-level or route-level performance gaps on styled Leaflet maps.
- Automated CI/CD: Pushes to the dev or main branch trigger deployment to the Azure dev or production slot, respectively.
.
├── .github
│   └── workflows
│       ├── azure_dev_validation-tool.yml
│       └── main_validation-tool.yml
├── .gitignore
├── README.md
├── app.py
├── config.yaml
├── load_data.py
├── requirements.txt
└── validation_plot_generator.py
- app.py: main script defining the Dash app, including the page layout, scenario selector, menu and page switching, and callbacks (see the minimal sketch after this list).
- load_data.py: reads data from Databricks or the T drive, depending on the environment.
- validation_plot_generator.py: a collection of functions for generating graphs, maps, and layouts.
- requirements.txt: required Python packages (for both local runs and the Azure web service).
- config.yaml: configuration file for local use; defines the scenarios that will be loaded into the app.
- GitHub Actions workflows: automatically redeploy the Azure web service when changes are pushed (main branch to the production slot, dev branch to the dev slot).
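For orientation, here is a minimal, hypothetical sketch of the structure described above, assuming a recent Dash version. It is not the project's actual app.py; the scenario IDs and component names are placeholders, and the key point is that the Flask `server` object is exposed for gunicorn (`app:server`).

```python
# Minimal sketch (not the actual app.py): a layout with a scenario selector,
# one callback, and the Flask `server` object that gunicorn targets.
from dash import Dash, dcc, html, Input, Output

app = Dash(__name__)
server = app.server  # exposed for `gunicorn app:server`

# Hypothetical scenario IDs; the real app builds this list from config.yaml
# or the AZURE_SCENARIO_LIST environment variable.
SCENARIOS = ["1150", "272", "254"]

app.layout = html.Div(
    [
        html.H2("SANDAG Validation Tool"),
        dcc.Dropdown(id="scenario", options=SCENARIOS, value=SCENARIOS[0]),
        html.Div(id="content"),
    ]
)


@app.callback(Output("content", "children"), Input("scenario", "value"))
def show_scenario(scenario):
    # The real callbacks render plots and maps built in validation_plot_generator.py.
    return f"Selected scenario: {scenario}"


if __name__ == "__main__":
    app.run(debug=True)  # serves at http://127.0.0.1:8050/ by default
```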
- Clone the repository locally.
- On the dev branch, edit the scripts and review the changes by running the app locally:
python app.py
- After checking, push the changes to the remote dev branch.
- The GitHub Actions workflow automatically redeploys the Dash app to the dev slot of the Azure web service. Test the updates by reviewing the dev site.
- After testing, merge the changes from the dev branch into the main branch. The update to main triggers the workflow that redeploys the Dash app to the production slot of the Azure web service.
- Make sure you have access to the T drive; connect to the VPN if needed.
- Create a virtual environment and install the packages in requirements.txt.
- Set up the scenarios that you want to load in the app in config.yaml (see the sketch after this list):
LOCAL_FLAG: 1
LOCAL_SCENARIO_LIST:
  - T:\***
Define LOCAL_SCENARIO_LIST as the data paths of all scenarios that you want to compare on the visualization board.
- Launch the app: run
python app.py
in a terminal and preview the dashboard at http://127.0.0.1:8050/. Press Ctrl+C to stop.
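As an illustration of the config.yaml step above, a local run could read these keys with PyYAML roughly as follows. This is a sketch only, not the project's actual load_data.py; the function name and file handling are assumptions.

```python
# Hypothetical sketch of reading the local configuration; the key names
# follow config.yaml above, but the function itself is illustrative.
import yaml  # PyYAML


def load_local_config(path="config.yaml"):
    with open(path) as f:
        cfg = yaml.safe_load(f)
    if cfg.get("LOCAL_FLAG") == 1:
        # Each entry is a T-drive path to one scenario's model outputs.
        return cfg.get("LOCAL_SCENARIO_LIST", [])
    return []


if __name__ == "__main__":
    print(load_local_config())
```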
- In the Azure web service, set up the following environment variables (the app uses a token to read data from Databricks, and SCM_DO_BUILD_DURING_DEPLOYMENT is required); the sketch after this list shows one way these settings could be read:
DATABRICKS_SERVER_HOSTNAME = ***
DATABRICKS_HTTP_PATH = ***
DATABRICKS_TOKEN = your_token
SCM_DO_BUILD_DURING_DEPLOYMENT = true
- Set the startup command under Configuration (app:server refers to the Flask server object exposed in app.py):
gunicorn --workers 4 app:server
- Define the scenarios that you want to compare as an environment variable:
AZURE_SCENARIO_LIST=1150,272,254
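For illustration only, the environment variables above could be consumed in Python roughly like this. This sketch assumes the databricks-sql-connector package and is not the project's actual load_data.py; the function names and query argument are placeholders.

```python
# Hypothetical sketch of reading the Azure settings; the real load_data.py
# may be organized differently. Assumes databricks-sql-connector is installed.
import os

from databricks import sql


def get_scenario_ids():
    # "1150,272,254" -> ["1150", "272", "254"]
    raw = os.environ.get("AZURE_SCENARIO_LIST", "")
    return [s.strip() for s in raw.split(",") if s.strip()]


def query_databricks(statement):
    # Connect with the token-based settings defined in the web app configuration.
    with sql.connect(
        server_hostname=os.environ["DATABRICKS_SERVER_HOSTNAME"],
        http_path=os.environ["DATABRICKS_HTTP_PATH"],
        access_token=os.environ["DATABRICKS_TOKEN"],
    ) as conn:
        with conn.cursor() as cursor:
            cursor.execute(statement)
            return cursor.fetchall()
```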