How to Run Local AI Models with OpenCode
A guide to connecting open LLMs to OpenCode on your local device.
This guide walks you through connecting OpenCode Desktop to Unsloth to run open LLMs entirely locally. OpenCode is an open-source AI coding agent that reads, modifies, and executes code across your project using a connected model. This works with any local model exposed through Unsloth’s OpenAI-compatible API, including DeepSeek, Qwen, Gemma, and more. OpenCode acts as the client, while Unsloth loads and serves models via a local API.
After setup, OpenCode connects to Unsloth, where you can select a loaded model and use it as a coding agent.
In this tutorial, we’ll use unsloth/Qwen3.6-27B-GGUF loaded in Unsloth and access it directly inside OpenCode. Prefer a different model? Swap in any other model by loading it in Unsloth.
Installing OpenCode Desktop
Step 1: Download the OpenCode installer for Mac
Open opencode.ai/download in your browser of choice. Scroll down to OpenCode Desktop (Beta), and click the Download button next to the macOS image name that matches your Mac's architecture (Apple Silicon or Intel).

A popup will appear asking where you want to save the OpenCode installer. It is fine to accept the defaults. Click Save. This saves the OpenCode installer into your Downloads folder.

Step 2: Install OpenCode
Locate and double-click the OpenCode Desktop.dmg installer file in your Downloads folder.

The installer window will open. Use your mouse to drag the OpenCode app icon on top of the Applications icon as shown.

Step 3: Launch OpenCode
Locate and double-click the OpenCode icon in the Applications folder.

The OpenCode desktop app will open and is now ready for your next action.

Installing Unsloth
⚡ Quickstart
After installing OpenCode, we'll need to install Unsloth Studio, which serves and runs inference on local models for OpenCode.
1. Install or update Unsloth Studio. Earlier versions don't expose the external API. See Installation.
2. Launch Unsloth. Note the port it starts on, usually 8000 or 8888. You'll see it in the terminal output and in the browser URL (http://localhost:PORT).
3. Load a model. Click New Chat, pick or search a model (GGUF), and wait for it to finish loading.
4. Create an API key. In Unsloth, click your Unsloth avatar in the bottom-left → Settings → API → type a key name → Create. Copy the sk-unsloth-… value that appears. Unsloth only shows it once.
5. Point your client at Unsloth. Use http://localhost:PORT as the base URL and your sk-unsloth-… key for auth. Jump to the recipe for your tool below.
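The quickstart above can be exercised without any client tool. The sketch below builds a raw request against Unsloth's OpenAI-compatible chat endpoint using only the Python standard library; the port, key, and model name are placeholders you should swap for your own values.

```python
import json
import urllib.request

def build_chat_request(port: int, api_key: str, model: str, prompt: str):
    """Return a urllib Request for POST /v1/chat/completions."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"http://localhost:{port}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Placeholder port, key, and model -- use the values from your instance.
req = build_chat_request(8888, "sk-unsloth-REPLACE_ME",
                         "unsloth/Qwen3.6-27B-GGUF", "Say hello.")

# To actually send it (requires a running Unsloth instance):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```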
🔑 Creating an API key
1. Open the sidebar and click your Unsloth avatar at the bottom-left.
2. Go to Settings → API.
3. Enter a friendly name (e.g. claude-code-macbook).
4. (Optional) Set an expiry.
5. Click Create.
6. Copy the key immediately. Unsloth stores only a hash and you won't be able to view it again.

All keys start with the sk-unsloth- prefix. Revoke a key from the same page at any time. Requests made with a revoked key will fail with 401 Unauthorized.
Treat your API key like a password. Anyone with the key and network access to your Unsloth instance can send requests to your loaded model.
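Client code can catch obvious key mistakes before any request is sent. Below is a hypothetical helper, not part of Unsloth; it only checks the sk-unsloth- prefix described above, while the server does the real validation and answers 401 Unauthorized for bad or revoked keys.

```python
def looks_like_unsloth_key(key: str) -> bool:
    """Cheap client-side sanity check for the documented key format.

    Hypothetical helper: Unsloth still validates keys server-side and
    returns 401 Unauthorized for unknown or revoked keys.
    """
    prefix = "sk-unsloth-"
    return key.startswith(prefix) and len(key) > len(prefix)

print(looks_like_unsloth_key("sk-unsloth-abc123"))  # a plausible key
print(looks_like_unsloth_key("sk-proj-abc123"))     # another provider's key
```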
🖇️ Connecting Unsloth to OpenCode Desktop
Opencode supports any OpenAI-compatible provider, so you can wire Unsloth in as a Custom provider. The setup is a one-time flow inside opencode's Connect provider dialog.
1. Open the provider picker. In opencode, type /model (or click the model selector at the bottom of the input).

Then click Connect provider at the top-right of the select model dialog.

2. Choose "Custom". In the provider list, scroll to Other and pick Custom.

3. Fill in the custom provider form:
Provider ID: unsloth-studio (lowercase, hyphens allowed)
Display name: Unsloth Studio
Base URL: http://localhost:8888/v1/ (replace 8888 with your Unsloth port; keep the trailing /v1/)
API key: your sk-unsloth-… key
In the Models section, add one row per model you want to expose. The left field is the model ID as Unsloth serves it; the right field is what opencode will display:
Model ID: unsloth/Qwen3.6-27B-GGUF (the exact name of the model as shown in Studio)
Display name: unsloth/Qwen3.6-27B-GGUF (shown inside opencode)
Leave Headers empty unless you're proxying Unsloth through an auth layer that needs custom headers.
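If you prefer config files over the dialog, opencode can also read custom providers from an opencode.json file. The fragment below mirrors the form fields above; field names follow recent opencode releases, so check your version's documentation before relying on them.

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "unsloth-studio": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Unsloth Studio",
      "options": {
        "baseURL": "http://localhost:8888/v1/",
        "apiKey": "sk-unsloth-REPLACE_ME"
      },
      "models": {
        "unsloth/Qwen3.6-27B-GGUF": {
          "name": "unsloth/Qwen3.6-27B-GGUF"
        }
      }
    }
  }
}
```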

4. Click Submit. You should see an "Unsloth Studio connected. Unsloth models are now available to use" toast.

Restart opencode after adding the provider. The new provider only becomes selectable after a restart.
5. Select your Unsloth model. Once opencode is back up, type /model, search unsloth, and pick the model under the Unsloth Studio group. It'll be active on your next message.

Unsloth supports both the OpenAI and Anthropic Python SDKs.
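Since Unsloth speaks both protocols, the same local server can back either SDK. The sketch below (standard library only) shows how the two request styles differ; the paths and headers are the ones each SDK normally sends, on the assumption that Unsloth mirrors them.

```python
BASE = "http://localhost:8888"  # replace 8888 with your Unsloth port

def endpoint_and_headers(style: str, api_key: str):
    """Return the (url, auth headers) pair each SDK protocol would use.

    Assumes Unsloth mirrors each API's standard paths and headers --
    confirm against your Studio version's docs.
    """
    if style == "openai":
        # OpenAI SDKs POST /v1/chat/completions with Bearer auth.
        return (f"{BASE}/v1/chat/completions",
                {"Authorization": f"Bearer {api_key}"})
    if style == "anthropic":
        # Anthropic SDKs POST /v1/messages with x-api-key and a
        # required anthropic-version header.
        return (f"{BASE}/v1/messages",
                {"x-api-key": api_key, "anthropic-version": "2023-06-01"})
    raise ValueError(f"unknown style: {style}")

url, headers = endpoint_and_headers("openai", "sk-unsloth-REPLACE_ME")
```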