Run code on cloud GPUs with a single command.
```bash
# Local
python train.py

# Remote GPU - just add "gpu run"
gpu run python train.py
```

### ML & Training
- Model training and fine-tuning (see the sketch after this list)
- LoRA/QLoRA for LLMs
- Batch inference
- Jupyter notebooks
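A training job is just an ordinary script. Below is a minimal sketch of what a `train.py` could look like; the model, data, and `outputs/` path are illustrative placeholders, and it assumes the pod image ships PyTorch.

```python
# train.py -- minimal illustrative training job (not part of gpu-cli itself)
import torch
import torch.nn as nn
from pathlib import Path

device = "cuda" if torch.cuda.is_available() else "cpu"

# Toy model and synthetic data, just to show the shape of a job you'd hand to `gpu run`
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1024, 128, device=device)
y = torch.randint(0, 10, (1024,), device=device)

Path("outputs").mkdir(exist_ok=True)
for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
    # Anything written to disk (checkpoints here) syncs back to your machine
    torch.save(model.state_dict(), f"outputs/checkpoint-{epoch}.pt")
```

Launch it exactly like the headline example: `gpu run python train.py`.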
### Image Generation
- ComfyUI
- Automatic1111 / SD WebUI
- Fooocus
- InvokeAI
- Kohya SS (LoRA training)
### Video & Audio
- Wan, AnimateDiff, CogVideo
- Whisper transcription (example below)
- Music generation
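Transcription is another single-script job. A minimal sketch, assuming the open-source `openai-whisper` package (and ffmpeg) are available on the pod; the file names are placeholders.

```python
# transcribe.py -- illustrative Whisper transcription job
import whisper

# Model download and inference both happen on the pod's GPU
model = whisper.load_model("medium")
result = model.transcribe("interview.mp3")  # the audio file syncs up with your project files

# Write the transcript next to the audio so it syncs back locally
with open("interview.txt", "w") as f:
    f.write(result["text"])
print(result["text"][:200])
```

Run it with `gpu run python transcribe.py`.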
### LLM Inference
- Ollama (example below)
- llama.cpp
- text-generation-webui
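One workable pattern (an assumption, not the only way) is to start Ollama on the pod with the port-forwarding flag shown in the examples below, e.g. `gpu run -p 11434:11434 ollama serve`, then query it from your local machine; the model name here is a placeholder.

```python
# ask.py -- query an Ollama server running on the pod through the forwarded port
import json
import urllib.request

payload = {
    "model": "llama3",  # placeholder; use whatever model you've pulled on the pod
    "prompt": "Summarize what a LoRA adapter is in two sentences.",
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```

llama.cpp's server and text-generation-webui also expose HTTP endpoints, so the same port-forwarding pattern applies to them.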
- No more forgotten pods - Auto-stops when idle, so you only pay for what you use
- Real-time outputs - Files sync back as they're created
- Just works - Your local files sync to the pod, outputs sync back
```bash
# Install
curl -fsSL https://gpu-cli.sh/install.sh | sh

# Run on remote GPU
gpu run python train.py
```

Run a training job:
```bash
gpu run python train.py --epochs 100
```

Spin up ComfyUI:
```bash
gpu run -p 8188:8188 python main.py --listen 0.0.0.0
# Opens http://localhost:8188 automatically
```

Interactive shell:
```bash
gpu run bash
```