This project includes an automated data extraction system that pulls Warhammer character creation data from Google Apps Script and saves it locally for use in the application.
The data extraction script (`extract-data.js`) connects to a Google Apps Script web application to fetch 23 different types of Warhammer game data and saves them as JSON files in the `data/` directory.
- Core Character Data: books, careers, careerLevels, species, classes, talents, characteristics
- Skills & Equipment: skills, spells, trappings
- Creatures & NPCs: creatures
- Character Details: stars, gods, eyes, hair, details, traits
- Game Mechanics: lores, magicks, etats, psychologies, qualities, trees
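Assuming the web app returns a single object keyed by these type names (an assumption inferred from the file list in this README, not a documented contract), the combined payload can be modeled and sanity-checked like this:

```javascript
// Expected keys of the combined payload (assumed from the 23 type names
// above; the real response structure is defined by the Apps Script).
const DATA_TYPES = [
  "books", "careers", "careerLevels", "species", "classes", "talents",
  "characteristics", "skills", "spells", "trappings", "creatures",
  "stars", "gods", "eyes", "hairs", "details", "traits",
  "lores", "magicks", "etats", "psychologies", "qualities", "trees",
];

// Return the names of expected types missing from a parsed payload
// (each type is assumed to map to an array of entries).
function missingTypes(payload) {
  return DATA_TYPES.filter((t) => !Array.isArray(payload[t]));
}

module.exports = { DATA_TYPES, missingTypes };
```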
- Node.js: Version 14.0.0 or higher
- Google Apps Script URL: Access to the deployed web app endpoint
- Network Access: Internet connection to reach Google Apps Script
- Install dependencies:

  ```
  npm install
  ```

- Configure the environment (see the Configuration section below)
The extraction script can be configured using environment variables or command-line arguments.
Create a `.env` file in the project root (this will be set up by Stream 2):
```
GOOGLE_APPS_SCRIPT_URL=https://script.google.com/macros/s/YOUR_DEPLOYMENT_ID/exec
DATA_DIR=./data
NODE_ENV=development
```

Alternatively, pass the web app URL directly as an argument:
```
npm run extract -- https://script.google.com/macros/s/YOUR_DEPLOYMENT_ID/exec
```

The current deployment URL is:
```
https://script.google.com/macros/s/AKfycbwMRbK8i_M-os8c279diiKxeoze7JWJTKsLA511bTBJDkjxYY3GRE8tfWucuBOeh0x6Hg/exec
```

Application ID: `AKfycbwMRbK8i_M-os8c279diiKxeoze7JWJTKsLA511bTBJDkjxYY3GRE8tfWucuBOeh0x6Hg`
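Resolving the URL from either source can be sketched as follows (a minimal illustration; the helper name `resolveUrl` and the precedence order are assumptions, not taken from `extract-data.js`):

```javascript
// Resolve the web app URL from a CLI argument or the environment.
// Precedence (an assumption; the script's actual order may differ):
// 1. first command-line argument, 2. GOOGLE_APPS_SCRIPT_URL.
function resolveUrl(argv = process.argv.slice(2), env = process.env) {
  const url = argv[0] || env.GOOGLE_APPS_SCRIPT_URL;
  if (!url) {
    throw new Error(
      "Missing URL: set GOOGLE_APPS_SCRIPT_URL or pass the web app URL as an argument"
    );
  }
  return url;
}

module.exports = { resolveUrl };
```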
Run the extraction script to fetch the latest data from Google Sheets:
```
npm run extract
```

This will:

- Connect to the Google Apps Script web application
- Download all 23 data types
- Save individual JSON files to the `data/` directory (e.g., `books.json`, `careers.json`)
- Create a combined `all-data.json` file with all data
- Display extraction statistics
Run extraction with validation checks (to be implemented in Stream 3):
```
npm run extract:validate
```

Get detailed logging information during extraction:
```
npm run extract:verbose
```

The build process automatically runs data extraction first:
```
npm run build
```

This runs `npm run extract` before the build process (via the `prebuild` hook).
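The hook wiring in `package.json` might look like the fragment below (a sketch: only the script names used in this README are taken from the project; the actual build command is a placeholder):

```json
{
  "scripts": {
    "extract": "node extract-data.js",
    "prebuild": "npm run extract",
    "build": "your-build-command",
    "build:skip-extract": "your-build-command"
  }
}
```

npm runs `prebuild` automatically whenever `build` is invoked, which is why `build:skip-extract` needs a separate script name to avoid the hook.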
To skip extraction and use cached data:
```
npm run build:skip-extract
```

After running extraction, your project will have:
```
project-root/
├── data/
│   ├── all-data.json         # Combined data file
│   ├── books.json            # Individual type files
│   ├── careers.json
│   ├── careerLevels.json
│   ├── species.json
│   ├── classes.json
│   ├── talents.json
│   ├── characteristics.json
│   ├── trappings.json
│   ├── skills.json
│   ├── spells.json
│   ├── creatures.json
│   ├── stars.json
│   ├── gods.json
│   ├── eyes.json
│   ├── hairs.json
│   ├── details.json
│   ├── traits.json
│   ├── lores.json
│   ├── magicks.json
│   ├── etats.json
│   ├── psychologies.json
│   ├── qualities.json
│   └── trees.json
├── extract-data.js
├── package.json
└── README.md
```
Example output:

```
🚀 Starting data extraction...
📡 URL: https://script.google.com/macros/s/.../exec?json=true
⏳ Downloading data...
✅ Data downloaded successfully!
✅ books.json - 39 entries
✅ careers.json - 220 entries
✅ careerLevels.json - 201 entries
...
✅ all-data.json created (all data combined)
============================================================
🎉 Extraction complete!
📁 23 files created in the 'data/' directory
📊 Total: 2,847 entries extracted
============================================================
```
The current version provides basic error handling:
- Missing URL: Displays usage instructions
- Network Errors: Shows connection error message
- Parse Errors: Displays JSON parsing error details
Enhanced error handling with retry logic and fallback mechanisms will be added in Stream 3.
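The basic handling described above can be sketched like this (an illustrative helper; the name `parsePayload` and the exact messages are not from the project's code):

```javascript
// Minimal error handling around a fetched response body (sketch; the
// real extract-data.js may differ). Returns the parsed data, or throws
// with a descriptive message on empty or malformed responses.
function parsePayload(raw) {
  if (!raw || raw.length === 0) {
    throw new Error("Empty response from Google Apps Script");
  }
  try {
    return JSON.parse(raw);
  } catch (err) {
    // Parse errors: surface the JSON parsing details to the user
    throw new Error(`Failed to parse JSON response: ${err.message}`);
  }
}

module.exports = { parsePayload };
```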
The extraction process is integrated into the build pipeline via npm hooks:
- Automatic Extraction: The `prebuild` hook ensures data is extracted before every build
- Manual Control: Use `build:skip-extract` to bypass extraction when needed
- CI/CD Compatible: All scripts work in automated environments
```bash
# Install dependencies
npm install

# Extract data for the first time
npm run extract

# Refresh data from Google Sheets
npm run extract

# Build with fresh data
npm run build

# Build without fetching new data
npm run build:skip-extract
```

Missing URL error:

Solution: Either set `GOOGLE_APPS_SCRIPT_URL` in the `.env` file or pass the URL as an argument:
```
npm run extract -- https://script.google.com/macros/s/YOUR_ID/exec
```

Network errors:

Solution:
- Check your internet connection
- Verify the Google Apps Script URL is correct
- Ensure the web app is deployed and publicly accessible
Parse errors:

Solution:
- Check that the Google Apps Script is returning valid JSON
- Verify the `?json=true` parameter is being appended to the URL
- Test the URL directly in a browser to see the response
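Appending the parameter can be sketched with Node's built-in WHATWG `URL` API (the helper name `withJsonParam` is illustrative, not from the project):

```javascript
// Append ?json=true to the web app URL, preserving any existing
// query string. URL and URLSearchParams are Node.js globals.
function withJsonParam(baseUrl) {
  const url = new URL(baseUrl);
  url.searchParams.set("json", "true");
  return url.toString();
}

module.exports = { withJsonParam };
```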
The following features will be added in subsequent streams:
- Stream 2: Environment configuration with `.env` file support
- Stream 3: Data validation, retry logic, and fallback mechanisms
- Node.js Native HTTPS: Uses the built-in `https` module (no external fetch libraries)
- Cross-Platform: Scripts work on Windows, macOS, and Linux
- JSON Format: All data files use pretty-printed JSON with 2-space indentation
- Redirect Handling: Automatically follows HTTP redirects (301, 302, 307)
| Script | Description |
|---|---|
| `npm run extract` | Fetch data from Google Apps Script |
| `npm run extract:validate` | Validate existing data without fetching |
| `npm run extract:verbose` | Extract with detailed logging |
| `npm run build` | Build project (includes automatic extraction) |
| `npm run build:skip-extract` | Build without extracting data |
License: ISC
For issues related to the Google Apps Script deployment or data structure, please refer to the main Warhammer project documentation.