GPU CLI template for InvokeAI with the BiRefNet background removal node and model pre-installed.
```bash
# Start InvokeAI with BiRefNet (~5-10 min first run, downloads models)
gpu use .
```

Open the URL shown in the terminal to access InvokeAI. Setup downloads BiRefNet (~500MB) for background removal.

Optional: add Stable Diffusion models via InvokeAI's Model Manager if you need image generation.

Note: you'll see "No models installed" on the main screen; this is expected. BiRefNet is accessed through the Workflow Editor, not the main Launchpad.
1. Open InvokeAI in your browser (use the URL shown in the terminal).
2. Go to the Workflow Editor: click the "Workflow Editor" tab at the top (next to "Launchpad").
3. Add the BiRefNet node:
   - Right-click on the canvas (the dotted area)
   - Search for "BiRefNet" or "Remove Background"
   - Click to add the node
4. Load your image:
   - Click the dark "Image" box inside the node
   - Select an image from your computer
5. Configure and run:
   - Check the "Save To Gallery" checkbox
   - Press Ctrl+Enter or click Invoke to process
6. Get your result:
   - The output image (with a transparent background) appears in the gallery on the right
   - Right-click it to download as a PNG
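After downloading, you can sanity-check that the PNG really carries transparency. A small Pillow sketch; `has_transparency` is our own illustrative helper, not part of the template:

```python
from PIL import Image

def has_transparency(path: str) -> bool:
    """True if the image at `path` has at least one non-opaque pixel,
    i.e. background removal actually produced a cutout."""
    img = Image.open(path).convert("RGBA")
    alpha = img.getchannel("A")         # 8-bit alpha band
    return alpha.getextrema()[0] < 255  # min alpha below 255 => transparency
```

If this returns False on your download, check that "Save To Gallery" was enabled and that you downloaded the node's output image, not the original input.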
- GPU: RTX 6000 Ada or equivalent (24GB VRAM)
- Storage: ~10GB for InvokeAI + BiRefNet model
- Python: 3.11+ required
- High-quality foreground/background separation
- Works with complex edges (hair, fur, transparent objects)
- State-of-the-art dichotomous image segmentation
- Model: ZhengPeng7/BiRefNet
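Outside InvokeAI, the same checkpoint can be loaded directly from the Hugging Face Hub. A hedged sketch: `load_birefnet` follows the model card's `trust_remote_code` loading pattern, while `apply_mask` is our own illustrative helper for compositing a predicted mask into an alpha channel (preprocessing and inference details are omitted and vary by torch version):

```python
import numpy as np
from PIL import Image

def load_birefnet():
    """Download and return BiRefNet from the Hugging Face Hub (~500MB on
    first call). Assumes `torch` and `transformers` are installed;
    trust_remote_code is needed because BiRefNet ships custom model code."""
    from transformers import AutoModelForImageSegmentation  # deferred: heavy
    return AutoModelForImageSegmentation.from_pretrained(
        "ZhengPeng7/BiRefNet", trust_remote_code=True
    )

def apply_mask(image: Image.Image, mask: np.ndarray) -> Image.Image:
    """Composite a [0, 1] foreground mask into the image's alpha channel.
    `mask` must match the image's (height, width)."""
    rgba = np.array(image.convert("RGBA"))
    rgba[..., 3] = np.clip(mask * 255, 0, 255).astype(np.uint8)
    return Image.fromarray(rgba, "RGBA")
```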
- `gpu.jsonc` - GPU CLI configuration (ports, startup, environment)
- `startup.sh` - Startup script: installs deps, runs setup, starts InvokeAI server
- `install_invokeai.py` - Idempotent setup: downloads InvokeAI + BiRefNet node and model
- `run.py` - Standalone InvokeAI server launcher (used by startup.sh)
Seeing "No models installed"? This is normal! The warning refers to Stable Diffusion models, not BiRefNet. BiRefNet is loaded separately and works via the Workflow Editor.
If Invoke does nothing, you likely need to load an image first. Click the dark "Image" box in the BiRefNet node and select an image.
If the BiRefNet node or model is missing, run setup again:

```bash
gpu use .
```

BiRefNet works best with 24GB VRAM. For smaller GPUs, consider using smaller image sizes.
InvokeAI uses port 9090. GPU CLI will remap it automatically.
Generated images are saved to invokeai/outputs/ and automatically synced back to your local machine.