This guide will help you set up WalkWrite for local development.

You will need:
- macOS 13.0 or later
- Xcode 15.0 or later
- Git with LFS support
- CMake (for building whisper.cpp)
- At least 10GB free disk space
```bash
# Clone with Git LFS
git clone https://github.com/yourusername/WalkWrite.git
cd WalkWrite

# Ensure Git LFS is installed
git lfs install

# Pull the large model files
git lfs pull
```

WalkWrite uses whisper.cpp for speech recognition. You need to build it as an XCFramework:
```bash
# Run the build script
./build-whisper-xcframework.sh
```

This will:

- Clone whisper.cpp if needed
- Build for both iOS device and simulator
- Create `whisper.xcframework` in the project root
Note: The project expects the framework at `Vendor/whisper.cpp/build-apple/whisper.xcframework`. After building, either:

- Move the framework to that location:

  ```bash
  mkdir -p Vendor/whisper.cpp/build-apple && mv whisper.xcframework Vendor/whisper.cpp/build-apple/
  ```

- Or update the framework path in Xcode
- Copy the configuration template:

  ```bash
  cp Config.xcconfig.template Config.xcconfig
  ```

- Edit `Config.xcconfig` with your development team ID:
  - Find your Team ID in Xcode → Preferences → Accounts
  - Or leave empty for personal development
- Update the bundle identifier prefix to your organization
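For reference, a filled-in `Config.xcconfig` might look like the sketch below. `DEVELOPMENT_TEAM` is a standard Xcode build setting, but the prefix variable name is an assumption, so match whatever `Config.xcconfig.template` actually defines; both values are placeholders.

```xcconfig
// Placeholder values - substitute your own team ID and prefix.
// BUNDLE_ID_PREFIX is a hypothetical name; check the template for the real one.
DEVELOPMENT_TEAM = ABCDE12345
BUNDLE_ID_PREFIX = com.example
```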
- Open `WalkWrite.xcodeproj` in Xcode
- Add `whisper.xcframework` to the project:
  - Select the WalkWrite project in the navigator
  - Select the WalkWrite target
  - Go to "Frameworks, Libraries, and Embedded Content"
  - Click "+" and add `whisper.xcframework`
  - Ensure "Embed & Sign" is selected
The project uses MLX Swift for running the Qwen language model. This should be automatically resolved by Xcode through Swift Package Manager.
If needed, add it manually:
- File → Add Package Dependencies
- Enter: `https://github.com/ml-explore/mlx-swift`
- Add to the WalkWrite target
Ensure these large model files are present:
- `WalkWrite/QwenModel/` - Qwen language model (~1.4GB)
- `WalkWrite/ggml-large-v3-turbo-q5_0.bin` - Whisper model (~574MB)
- `WalkWrite/ggml-large-v3-turbo-encoder.mlmodelc/` - Core ML encoder
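As a quick sanity check, a small shell sketch (using the paths listed above) can confirm each asset is present before you build:

```shell
# Verify that each LFS-tracked model asset exists on disk.
check_asset() {
  if [ -e "$1" ]; then
    echo "OK: $1"
  else
    echo "MISSING: $1 (try 'git lfs pull')"
  fi
}

check_asset "WalkWrite/QwenModel"
check_asset "WalkWrite/ggml-large-v3-turbo-q5_0.bin"
check_asset "WalkWrite/ggml-large-v3-turbo-encoder.mlmodelc"
```

Run it from the repository root; any `MISSING` line means the LFS pull did not complete.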
- Select your target device (physical iOS device recommended)
- Build and run (⌘R)
Note: The first launch will take longer as Core ML compiles the models for your specific device.
If `whisper.xcframework` is missing, run `./build-whisper-xcframework.sh` to build it.
If the model files are missing, ensure you ran `git lfs pull` to download the large files.
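One way to confirm the pull succeeded is to check that the model files are no longer Git LFS pointer stubs; a stub is a tiny text file that starts with a fixed `version https://git-lfs...` header rather than binary model data. A minimal sketch:

```shell
# Heuristic: a Git LFS pointer stub begins with this header line,
# while a fully downloaded asset is binary data.
is_lfs_pointer() {
  head -c 30 "$1" 2>/dev/null | grep -q '^version https://git-lfs'
}

if is_lfs_pointer "WalkWrite/ggml-large-v3-turbo-q5_0.bin"; then
  echo "Still a pointer stub - run: git lfs pull"
fi
```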
If Swift package resolution fails, this is a common Xcode issue. To fix:

- Run the reset script:

  ```bash
  ./reset-packages.sh
  ```

- Open the project in Xcode and wait for package resolution
- If it still fails, in Xcode:
  - File → Packages → Reset Package Caches
  - File → Packages → Update to Latest Package Versions
  - Clean the build folder (⇧⌘K)
- Be patient: the initial MLX package download can take 5-10 minutes
The app is optimized for real devices. Some features may not work correctly on the simulator due to Metal/Core ML requirements.
The app uses significant memory when running AI models. Close other apps if you experience issues.
- Use a physical device for testing transcription and LLM features
- The app supports background audio recording
- All AI processing is done on-device for privacy
- Models are loaded on-demand to manage memory
To use different Whisper models:
- Download a different `.bin` model from whisper.cpp
- Replace `ggml-large-v3-turbo-q5_0.bin`
- Update the model name in `WhisperEngine.swift`
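Pre-converted ggml models are published alongside the whisper.cpp project on Hugging Face; the helper below sketches how to build a download URL for one. The URL pattern and example model name are assumptions, so verify both against the whisper.cpp documentation before relying on them.

```shell
# Build the download URL for a pre-converted whisper.cpp ggml model.
# NOTE: the Hugging Face URL pattern is an assumption - check the
# whisper.cpp docs for the current hosting location.
ggml_model_url() {
  echo "https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-$1.bin"
}

# Hypothetical usage: fetch a smaller quantized model into the app bundle.
# curl -L -o "WalkWrite/ggml-base.en-q5_1.bin" "$(ggml_model_url base.en-q5_1)"
```

After swapping the file in, remember to also update the filename referenced in `WhisperEngine.swift`.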
To use a different LLM:
- Ensure it's compatible with MLX Swift
- Replace the `QwenModel` directory
- Update the `LLMEngine.swift` configuration
If you encounter issues:
- Check the Issues page
- Ensure you're using the correct Xcode and iOS versions
- Try cleaning the build folder (⇧⌘K) and rebuilding