ai

Docker · Verified Publisher
San Francisco, CA, USA

Displaying 1 to 30 of 63 repositories

| Type | Description | Updated | Pulls | Stars |
|------|-------------|---------|-------|-------|
| Model | Qwen3-Coder is Qwen’s new series of coding agent models. | 17d | 100K+ | 21 |
| Model | 744B MoE language model with 40B active params for reasoning, coding, and agentic tasks (FP8) | 17d | 4.2K | 2 |
| Model | 397B-parameter MoE multimodal LLM with 17B active params, 262K context, 201 languages | 18d | 3.9K | 1 |
| Model | 397B MoE model with 17B active params for reasoning, coding, agents, and multimodal understanding | 18d | 10K+ | 3 |
| Model | Advanced coding agent model with 80B params (3B active MoE) for code generation and debugging | 25d | 10K+ | 1 |
| Model | Efficient 80B MoE coding model with 3B activated params, 256K context, and agentic capabilities | 25d | 10K+ | 1 |
| Model | Image generation model using a base latent diffusion model plus a refiner. | 1mo | 10K+ | 2 |
| Model | GLM-4.7-Flash is a top 30B-A3B MoE, balancing strong performance with efficient deployment. | 1mo | 10K+ | 3 |
| Model | GLM-4.7-Flash is a top 30B-A3B MoE, balancing strong performance with efficient deployment. | 2mo | 10K+ | 1 |
| Model | Devstral Small 2 is an FP8 instruct LLM for agentic SWE tasks, codebase tooling, and SWE-bench. | 2mo | 10K+ | 4 |
| Model | FunctionGemma is a 270M open model for fine-tuned, offline function-calling agents on small devices. | 2mo | 4.3K | 1 |
| Model | FunctionGemma is a 270M open model for fine-tuned, offline function-calling agents on small devices. | 2mo | 6.6K | 2 |
| Model | Kimi K2 Thinking: open-source agent with deep reasoning, stable tool use, fast INT4, 256K context. | 3mo | 10K+ | 1 |
| Model | Kimi K2 Thinking: open-source agent with deep reasoning, stable tool use, fast INT4, 256K context. | 3mo | 10K+ | 1 |
| Model | DeepSeek-V3.2 boosts efficiency and reasoning with DSA, scalable RL, and agentic data; IMO/IOI wins. | 3mo | 10K+ | 9 |
| Model | Ministral 3: compact vision-enabled model with near-24B performance, optimized for local edge use | 3mo | 10K+ | 4 |
| Model | Ministral 3: compact vision-enabled model with near-24B performance, optimized for local edge use | 3mo | 50K+ | 2 |
| Model | Multilingual reranking model for text retrieval, scoring document relevance across 119 languages. | 3mo | 10K+ | 2 |
| Model | Multilingual reranking model for text retrieval, scoring document relevance across 119 languages. | 3mo | 8.9K | |
| Model | Snowflake’s Arctic-Embed v2.0 boosts multilingual retrieval and efficiency | 4mo | 4.1K | |
| Model | Qwen3 Embedding: multilingual models for advanced text/ranking tasks like retrieval & clustering. | 4mo | 10K+ | 1 |
| Model | Qwen3 Embedding: multilingual models for advanced text/ranking tasks like retrieval & clustering. | 4mo | 10K+ | |
| Model | OpenAI’s open-weight models designed for powerful reasoning and agentic tasks | 4mo | 100K+ | 42 |
| Model | The most advanced Qwen model yet, with major gains in text, vision, video, and reasoning. | 4mo | 100K+ | 9 |
| Model | Safety reasoning models for policy-based text classification and foundational safety tasks. | 4mo | 10K+ | 2 |
| Model | Qwen3 is the latest Qwen LLM, built for top-tier coding, math, reasoning, and language tasks. | 4mo | 500K+ | 121 |
| Model | Granite-4.0-nano: lightweight instruct model trained via SFT, RL, and merging on diverse data. | 4mo | 8.8K | |
| Model | Granite-4.0-h-nano: lightweight instruct model trained via SFT, RL, and merging on diverse data. | 4mo | 4.1K | 1 |
| Model | Google’s latest Gemma, small yet strong for chat and generation | 4mo | 10K+ | 1 |
| Model | OpenAI’s open-weight models designed for powerful reasoning and agentic tasks | 4mo | 10K+ | 1 |