AI-powered basketball shot form analyzer. Upload one clip, get real coaching cues in plain English — then dive into the tracked replay.
Most shot analysis tools dump raw numbers. FormFix gives you coaching, not metrics.
| Traditional Tools | FormFix |
|---|---|
| "Elbow angle: 78°" | "Keep your elbow tighter at the set point" |
| "Knee flexion: 115°" | "Sink a bit deeper — you're losing power" |
| "Release height: 2.1m" | "Your release is clean, matches a textbook set shot" |
One clean rep is all you need.
Plain-language cues appear in seconds — before any heavy visuals render. No waiting around.
MediaPipe Holistic extracts 33 body landmarks plus 21 hand landmarks per hand. We see your wrist snap, finger roll-through, and follow-through extension.
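Those landmarks are enough to recover joint angles like the elbow angle quoted in the table above. A minimal sketch, assuming plain 2D landmark tuples rather than MediaPipe's own result objects (the `joint_angle` helper is hypothetical, not a FormFix function):

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at vertex b, formed by 2D landmarks a-b-c.
    Hypothetical helper: FormFix's actual pose math lives in
    backend/src/services/analyzer.py and is not shown here."""
    # Vectors from the joint vertex out to the neighboring landmarks
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    # Clamp to guard against floating-point drift outside [-1, 1]
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

# Shoulder, elbow, wrist in normalized image coordinates (made-up values)
elbow = joint_angle((0.40, 0.30), (0.50, 0.45), (0.48, 0.60))
```

The same three-point computation applies to the knee (hip-knee-ankle) and any other hinge joint in the 33-landmark pose set.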
The shot is segmented into Load → Set → Rise → Release → Follow-through — each phase analyzed independently.
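As a toy illustration of how phase boundaries can fall out of a landmark trace: the deepest and highest points of the wrist's height curve roughly bracket Load and Release. This is a deliberately simplified sketch; the real detector considers many joints per phase:

```python
def segment_phases(wrist_y):
    """Rough Load/Release boundaries from a wrist-height trace.
    wrist_y: per-frame wrist height in image coordinates (smaller = higher).
    Illustrative only: FormFix's detector in
    backend/src/services/analyzer.py uses far more signal than this."""
    load = max(range(len(wrist_y)), key=lambda i: wrist_y[i])     # lowest wrist
    release = min(range(len(wrist_y)), key=lambda i: wrist_y[i])  # highest wrist
    return load, release

# A short synthetic trace: dip into the load, then rise to release
load_f, release_f = segment_phases([0.6, 0.7, 0.8, 0.5, 0.2, 0.1, 0.15])
```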
Your shot is matched against an archetype library. See which style family you're closest to, what traits you already share, and what to borrow next.
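One plausible way to score closeness to an archetype is cosine similarity over normalized feature vectors, scaled to the 0-100 `fit_score` that appears in the API response. The scoring scheme and feature names below are assumptions; the actual engine lives in `backend/src/services/comparison.py`:

```python
import math

def fit_score(shot, archetype):
    """Cosine similarity between feature vectors, scaled to 0-100.
    Assumption: the real comparison engine may weight features
    differently; this only sketches the general idea."""
    dot = sum(s * a for s, a in zip(shot, archetype))
    mag = math.sqrt(sum(s * s for s in shot)) * math.sqrt(sum(a * a for a in archetype))
    return round(100 * max(0.0, dot / mag))

# e.g. normalized [elbow_angle, knee_flexion, release_height] features
score = fit_score([0.78, 0.95, 0.70], [0.79, 0.92, 0.72])
```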
Native slow-motion uploads are preferred. FormFix preserves the master clip, renders a guided replay, and builds cue clips, proof frames, and frame strips so the feedback feels earned.
The ideal upload is the original 4K 240 fps or 1080p 120/240 fps clip from your phone. In-browser recording still works, but it is treated as a fallback path.
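To check what tier a clip falls into before uploading, `ffprobe -print_format json` reports each stream's frame rate as a ratio string. A small parser sketch over a trimmed sample report (the sample JSON is abbreviated; it is not FormFix's inspection code):

```python
import json

def fps_from_ffprobe(report):
    """Extract fps from an ffprobe JSON report.
    avg_frame_rate is a ratio string such as "240/1"."""
    stream = next(s for s in json.loads(report)["streams"]
                  if s.get("codec_type") == "video")
    num, den = stream["avg_frame_rate"].split("/")
    return int(num) / int(den)

# Trimmed sample of what `ffprobe -print_format json -show_streams clip.mov`
# might emit for a 240 fps capture
sample = '{"streams": [{"codec_type": "video", "avg_frame_rate": "240/1"}]}'
fps = fps_from_ffprobe(sample)
```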
Terminal 1 — Backend:
cd backend && ./back.sh
Terminal 2 — Frontend:
cd frontend && ./front.sh
Open http://localhost:3000 → upload or record a shot → get feedback.
- Python 3.10–3.12 (MediaPipe doesn't support 3.13 yet)
- ffmpeg for video encoding
# macOS
brew install ffmpeg
# Ubuntu/Debian
sudo apt install ffmpeg
python3.11 -m venv .venv
source .venv/bin/activate
pip install -r backend/requirements.txt
python -m uvicorn backend.src.main:app --reload --port 8000
cd frontend
python3 -m http.server 3000
formfix/
├── backend/
│   ├── src/
│   │   ├── main.py                      # FastAPI endpoints
│   │   ├── schemas.py                   # Pydantic models
│   │   ├── data/
│   │   │   ├── reference_profiles.json  # Archetype library
│   │   │   └── research_bank.json       # Biomechanics research
│   │   └── services/
│   │       ├── analyzer.py              # Pose extraction + phase detection
│   │       ├── comparison.py            # Style matching engine
│   │       └── video_utils.py           # Frame extraction + encoding
│   └── requirements.txt
├── frontend/
│   └── index.html                       # Single-page app (vanilla JS)
├── tools/
│   └── reference_pipeline/              # Train your own archetype library
│       ├── export_features.py
│       ├── train_reference_library.py
│       └── build_player_profiles.py
└── docs/
    ├── comparison_engine.md             # How style matching works
    └── datasets_references.md           # Data sources + research
┌─────────────┐      ┌─────────────┐      ┌─────────────┐
│   Upload    │ ──▶  │    Pose     │ ──▶  │   Phase     │
│   Video     │      │ Extraction  │      │ Detection   │
└─────────────┘      └─────────────┘      └─────────────┘
                                                │
        ┌───────────────────────────────────────┘
        ▼
┌─────────────┐      ┌─────────────┐      ┌─────────────┐
│   Quick     │ ──▶  │   Style     │ ──▶  │    Deep     │
│  Coaching   │      │   Match     │      │ Breakdown   │
└─────────────┘      └─────────────┘      └─────────────┘
- Video Upload — Native slow-motion is preferred; `4K 240 fps` is the gold-standard capture
- Media Inspection — `ffprobe` classifies frame density, resolution, and capture tier
- Coarse Scan — A reduced rendition locates the shot rhythm and likely cue windows
- Dense Refinement — High-detail windows sharpen load, release, and follow-through timing
- Style Match + Coaching — Strongest findings turn into plain-language cues
- Evidence Pack — Guided replay, cue clips, proof frames, and frame strips are rendered as URL-based artifacts
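The media-inspection step above feeds the `detail_tier` field seen later in the API responses. The 240 fps → `"ultra_detail"` mapping matches the example payload; the other tier names and thresholds here are illustrative assumptions:

```python
def classify_detail_tier(fps):
    """Map capture frame rate to a detail tier.
    Only the 240 fps -> "ultra_detail" case is confirmed by the API
    example; the remaining names/cutoffs are guesses for illustration."""
    if fps >= 240:
        return "ultra_detail"
    if fps >= 120:
        return "high_detail"
    if fps >= 60:
        return "standard"
    return "low_detail"
```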
Queue an async analysis job.
Request:
Content-Type: multipart/form-data
file: video file (.mp4, .mov, .webm, etc.)
shot_type: optional — "free_throw" | "spot_up" | "pull_up"
shooting_hand: optional — "left" | "right"
Response:
{
"job_id": "uuid",
"status": "queued",
"stage": "queued",
"progress_message": "Clip uploaded. Checking whether this is a high-detail slow-motion read.",
"media_profile": {
"fps": 240.0,
"detail_tier": "ultra_detail"
}
}

Poll for status and fetch the final URL-based result payload.
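A polling client can be written with any HTTP library. In this sketch the GET against the status endpoint is injected as `fetch_status` (a hypothetical wrapper, since the exact status route isn't spelled out here), which keeps the loop itself testable without a server:

```python
import time

def poll_job(fetch_status, job_id, interval=1.0, timeout=120.0):
    """Poll until the analysis job finishes or the timeout expires.
    fetch_status(job_id) -> dict is assumed to wrap an HTTP GET against
    the job-status endpoint and return the parsed JSON payload."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        payload = fetch_status(job_id)
        if payload["status"] in ("completed", "failed"):
            return payload
        time.sleep(interval)
    raise TimeoutError(f"job {job_id} did not finish within {timeout}s")
```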
Blocking compatibility wrapper for quick local testing.
Request:
Content-Type: multipart/form-data
file: video file (.mp4, .mov, etc.)
shot_type: optional — "free_throw" | "spot_up" | "pull_up"
return_visuals: "true" to include replay assets
Response:
{
"job_id": "uuid",
"status": "completed",
"result": {
"phases": [...],
"issues": [...],
"coaching_cues": [...],
"comparison": {
"style_family": "textbook_set",
"fit_score": 82,
"aligned_traits": [...],
"borrow_next": [...]
},
"media_profile": {...},
"artifacts": {
"annotated_replay_url": "/media/jobs/<id>/artifacts/annotated-replay.mp4",
"cue_clip_urls": [...],
"cue_still_urls": [...],
"frame_strip_urls": [...]
},
"playback_script": [...]
}
}

Service health check.
Train your own archetype library from labeled clips:
# 1. Export features from your clip library
python tools/reference_pipeline/export_features.py \
--input clips/ --output features.jsonl
# 2. Cluster into archetypes
python tools/reference_pipeline/train_reference_library.py \
--input features.jsonl --output trained_library.json
# 3. (Optional) Build player profiles
python tools/reference_pipeline/build_player_profiles.py \
  --input features.jsonl --output player_profiles.json

Override the default library:
export FORMFIX_REFERENCE_LIBRARY=/path/to/trained_library.json

See docs/comparison_engine.md for the full training roadmap.
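A plausible sketch of how the backend could resolve that override, falling back to the bundled library from the project tree. The environment variable name comes from this README; the resolution order and default path are assumptions, not the actual loader:

```python
import os

def resolve_reference_library(default="backend/src/data/reference_profiles.json"):
    """Prefer FORMFIX_REFERENCE_LIBRARY when set, else the bundled library.
    Sketch only: the real loading logic is not shown in this README."""
    return os.environ.get("FORMFIX_REFERENCE_LIBRARY") or default
```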
- Biomechanical analysis of basketball shooting (KDU study) — optimal knee ~122°, elbow ~79° at free throw
- Kinematics of Arm Joint Motions (ScienceDirect) — shoulder rotation → vertical velocity; elbow/wrist → horizontal + backspin
- MediaPipe Holistic — 33 pose + 21×2 hand landmarks
MIT