A lightweight, efficient tool for managing AI model storage. By leveraging "Centralized Vault + Logical Mapping", it solves common pain points such as redundant model downloads, OS (root) drive overflow, and complex Docker volume configuration.
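The idea behind "Logical Mapping" is that every application reads from the same vault instead of keeping its own copy of each model. A minimal sketch of that pattern is shown below; the vault path, the ComfyUI target directory, and the container image name are illustrative assumptions, not values the script requires:

# Sketch only: one central vault, mapped into each consumer.
VAULT=/mnt/models

# Map a vault directory into an application's expected model folder,
# so tools share the same files instead of re-downloading them.
# (~/ComfyUI/models/checkpoints is an example target path.)
ln -s "$VAULT/base/diffusion" ~/ComfyUI/models/checkpoints

# For containers, a single read-only bind mount replaces per-container
# volume definitions. (my-inference-image is a placeholder image name.)
docker run --rm -v "$VAULT:/models:ro" my-inference-image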
The script initializes your storage using the following directory hierarchy:
/mnt/models                  (Vault Root)
├── base/
│   ├── llm                  # Language Models (Llama 3, Qwen, etc.)
│   ├── diffusion            # Image Models (SDXL, Flux, etc.)
│   └── vlm                  # Vision Language Models
├── lora                     # LoRA / Adapter weights
├── tensorrt                 # Optimized FP4/FP8 engines for NVIDIA NIM
└── cache                    # Unified HuggingFace/ModelScope cache
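For reference, the layout above could be reproduced by hand as sketched below. This is only an approximation of what the script does; imaphelper.sh may perform additional steps (permissions, ownership, mapping), and the cache environment variables shown are one common convention rather than something the script is documented to set:

# Create the vault hierarchy manually (sketch, assuming /mnt/models as root).
VAULT=/mnt/models

mkdir -p "$VAULT"/base/{llm,diffusion,vlm} \
         "$VAULT"/{lora,tensorrt,cache}

# Point the HuggingFace / ModelScope caches at the unified cache directory.
export HF_HOME="$VAULT/cache/huggingface"
export MODELSCOPE_CACHE="$VAULT/cache/modelscope"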
Clone or download imaphelper.sh and make it executable:
chmod +x imaphelper.sh
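Once it is executable, the script can be launched directly. The argument-free invocation below is an assumption for illustration; the script may expect options or prompt interactively:

# Verify the executable bit, then run the helper.
ls -l imaphelper.sh
./imaphelper.sh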