A specialized Bittensor subnet miner for AI model training, focusing on LoRA fine-tuning of diffusion models like FLUX.1-dev.
You will need access to a VPS (or similar setup) with a capable enough GPU. We recommend at least one A100 80GB (PCIe or SXM will do fine). For managing the Python installation, we recommend `uv`, and this is the only configuration we officially support.
Before running this miner, you must:

1. Set up a HuggingFace account and token:
   - Create an account at HuggingFace
   - Generate an access token at HuggingFace Settings
   - The token must have write permissions to create and upload models
2. Accept the FLUX.1-dev model license:
   - Visit the FLUX.1-dev model page
   - Read and accept the license agreement
   - Without accepting the license, the model download will fail
There are three components to running a miner: the reverse proxy, the inference server, and the training server. The reverse proxy is the public entrypoint.
1. Register on testnet via the base command:

   ```
   btcli s register --netuid 231 --subtensor.network test
   ```

2. Transfer 0.01 testnet TAO to the address `5FU2csPXS5CZfMVd2Ahdis9DNaYmpTCX4rsN11UW7ghdx24A` to qualify for a mining permit. This will be managed internally.

3. Set up the environment variables to load your miner hotkey and the URLs at which you will host the inference and training servers.
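The exact variable names depend on the repository's configuration; as a sketch only, with hypothetical names, the environment might look like:

```
# Hypothetical variable names -- check the repository's example config for the real ones.
MINER_HOTKEY=your-hotkey-name
INFERENCE_SERVER_URL=http://127.0.0.1:8000
TRAINING_SERVER_URL=http://127.0.0.1:8001
```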
4. Set up the reverse proxy:
   a. Navigate to the `reverse_proxy` directory
   b. Install dependencies via `uv pip install -r requirements.txt`
   c. Start the proxy via `python reverse_proxy/server.py`
Then set up the inference server:

- Install the requirements via `uv pip install -r requirements.txt`
- Start the server via `python inference_server.py`

(Note: You will want to keep this service exposed only to the reverse proxy, guarded from the public internet.)
Docker handles all dependencies and environment setup for you. No virtual environment needed!
1. Create a `.env` file with your HuggingFace token:

   ```
   echo "HF_TOKEN=your_huggingface_token_here" > .env
   ```

2. Start the miner:

   ```
   docker compose up -d
   ```
This will:
- Build the Docker image with all dependencies
- Start the miner API on port 8091
- Mount necessary volumes for model storage
- Enable GPU access for training
3. Check logs:

   ```
   docker compose logs -f
   ```
The miner operates as a training service that:

1. Receives training requests via POST to the `/train` endpoint with:
   - `job_type`: "lora_training"
   - `params`: including prompt, image_b64, seed
   - `job_id`: unique identifier
   - `validator_endpoint`: callback URL for results
2. Processes training jobs:
   - Generates configuration from request parameters
   - Performs LoRA fine-tuning on FLUX.1-dev
   - Uses provided prompts and images for training
3. Uploads trained models to HuggingFace:
   - Creates a new repository for each training job
   - Uploads LoRA weights and metadata
   - Makes models publicly accessible
4. Reports completion back to the validator endpoint
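As a sketch, a client-side request matching this schema could be assembled as follows. The example values, the commented-out `requests` call target, and any `params` keys beyond prompt, image_b64, and seed are assumptions, not the validator's actual implementation:

```python
import json

# Hypothetical training request following the /train schema described above.
payload = {
    "job_type": "lora_training",
    "params": {
        "prompt": "a photo of a red bicycle",   # training prompt (example value)
        "image_b64": "<base64-encoded image>",  # placeholder, not real image data
        "seed": 42,
    },
    "job_id": "job-0001",                                      # unique identifier
    "validator_endpoint": "http://validator.example/results",  # callback URL (placeholder)
}

body = json.dumps(payload)
# e.g. with the `requests` library (not executed here):
# requests.post("http://<miner>:8091/train", data=body,
#               headers={"Content-Type": "application/json"})
print(body)
```

On completion, the miner calls back to `validator_endpoint` with the results, per step 4 above.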
For testing or custom training jobs:

```
python run.py config/your_config.yaml --seed 42
```
- GPU: NVIDIA GPU with at least 24GB VRAM (recommended: A100, H100)
- CUDA: Version 11.8 or higher
- RAM: 32GB minimum
- Storage: 100GB+ for model weights and training data
- Docker: Latest version with nvidia-container-toolkit
- API logs: Check `docker compose logs -f`
- Training progress: Monitor individual job outputs in logs
- GPU usage: Use `nvidia-smi` to monitor GPU utilization
If you encounter permission errors downloading FLUX.1-dev:
- Ensure you've accepted the model license on HuggingFace
- Verify your HF_TOKEN is correctly set
- Check that your token has read permissions
If training fails with CUDA out of memory:
- Reduce batch size in configuration
- Enable gradient checkpointing
- Use a GPU with more VRAM
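If the training configuration is YAML-based (as the `python run.py config/your_config.yaml` invocation suggests), the relevant knobs might look like the fragment below. The key names here are assumptions; check the sample configs shipped with the repository for the real schema:

```
# Hypothetical keys -- consult the repository's sample configs for the real schema.
train:
  batch_size: 1                  # smaller batches lower peak VRAM
  gradient_checkpointing: true   # recompute activations to save memory
```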
If the miner can't connect to validators:
- Check firewall settings for port 8091
- Ensure Docker networking is properly configured
- Verify validator endpoints are accessible
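To rule out basic network problems, a quick reachability check can help. This is a minimal sketch assuming the default port 8091; the host is a placeholder:

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check whether the miner API is reachable (host is a placeholder).
reachable = port_open("127.0.0.1", 8091)
print("miner API reachable:", reachable)
```

Running this both locally and from another machine helps distinguish a crashed service (unreachable everywhere) from a firewall or Docker networking issue (reachable only locally).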
For issues and questions:
- Check existing issues in the repository
- Join the Bittensor Discord community
- Review validator documentation for integration details