
Releases: open-multi-agent/open-multi-agent

v1.4.0

09 May 08:39
8bf80f2


Highlights

Official org package

Open Multi-Agent now has an official organization package:

npm install @open-multi-agent/core

New projects should use @open-multi-agent/core.

Plan-only orchestration

Adds PlanOnly mode so teams can inspect the coordinator's task DAG before running agent work. (#203 by @CodingBangboo)

LLM adapter improvements

  • Preserve reasoning blocks across Anthropic and Gemini turns. (#205 by @MyPrototypeWhat)
  • Forward reasoning_effort and backfill sampling-parameter parity across OpenAI-compatible, Copilot, and Azure paths. (#209 by @MyPrototypeWhat)
  • Add a Mistral provider example and README entry. (#206 by @mvanhorn)

Shared memory TTL

SharedMemory entries can now expire by turn count. (#213 by @MyPrototypeWhat)
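The release notes don't show the API, but turn-count expiry can be sketched as follows. This is a minimal illustration of the mechanism, not the library's implementation; the names `TurnScopedMemory` and `ttlTurns` are assumptions.

```typescript
// Minimal sketch of turn-count TTL for a shared memory store.
// TurnScopedMemory and ttlTurns are illustrative names, not the library's API.
interface Entry {
  value: unknown;
  expiresAtTurn: number;
}

class TurnScopedMemory {
  private turn = 0;
  private entries = new Map<string, Entry>();

  set(key: string, value: unknown, ttlTurns = Infinity): void {
    this.entries.set(key, { value, expiresAtTurn: this.turn + ttlTurns });
  }

  get(key: string): unknown {
    const e = this.entries.get(key);
    return e && e.expiresAtTurn > this.turn ? e.value : undefined;
  }

  // Called once per agent turn; evicts anything whose TTL has elapsed.
  advanceTurn(): void {
    this.turn++;
    for (const [k, e] of this.entries) {
      if (e.expiresAtTurn <= this.turn) this.entries.delete(k);
    }
  }
}
```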

Fixes

  • Keep text-tool extraction depth non-negative when a stray closing brace appears. (#217 by @voidborne-d)
  • Fix truncation behavior and tighten coordinator dependency guidance. (#215 by @CodingBangboo)

Examples and Docs

  • Add paper replication triage cookbook example. (#202 by @DaiMao-UT)
  • Add rare disease information triage example. (#211 by @oooooowoooooo)
  • Refresh README, hero animation, badges, docs, and repository links for the new GitHub organization. (#214 and #218 by @JackChen-me)

Compatibility

No intentional runtime API breaks were introduced. The package identity changed to @open-multi-agent/core.

The previous package path, @jackchen_me/open-multi-agent, remains supported during the migration window and is also published at 1.4.0.

Install

npm install @open-multi-agent/[email protected]

Legacy path during the migration window:

npm install @jackchen_me/[email protected]

v1.3.1

02 May 06:24


Features

Streaming reasoning events

StreamEvent now supports a reasoning type that carries the model's thinking tokens in real time. ReasoningBlock is also added to the ContentBlock union for non-streaming paths. Supported on Anthropic and OpenAI providers. (#174 by @SiMinus)

onAgentStream and onPlanReady hooks

Two new orchestrator hooks (runTeam only): onAgentStream delivers real-time per-token streaming events during agent runs, and onPlanReady fires after the coordinator decomposes the goal into a task DAG — return false to abort before any agent work starts. (#182, #181 by @tizerluo; #184, #183 by @JackChen-me)
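The onPlanReady contract (return false to abort before any agent work) can be sketched as below. The hook and task shapes here are illustrative; the library's actual signatures may differ.

```typescript
// Illustrative sketch of the onPlanReady contract: the hook fires after the
// coordinator has produced a task DAG, and returning false aborts the run
// before any agent work starts. Shapes are assumptions, not the real API.
interface Task {
  id: string;
  dependsOn: string[];
}

type OnPlanReady = (plan: Task[]) => boolean | Promise<boolean>;

async function runTeamSketch(
  plan: Task[],
  onPlanReady?: OnPlanReady,
): Promise<{ aborted: boolean; ran: string[] }> {
  // Fire the hook after planning, before any agent work; false aborts.
  if (onPlanReady && (await onPlanReady(plan)) === false) {
    return { aborted: true, ran: [] };
  }
  return { aborted: false, ran: plan.map((t) => t.id) };
}
```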

Agent Observation Pipeline: new trace events

plan_ready and agent_stream trace events join the trace pipeline, enabling downstream observers to react to plan generation and streaming agent output. (#188 by @ibrahimkzmv)

AWS Bedrock adapter

New LLM adapter for Amazon Bedrock, supporting the full adapter contract (chat + stream). (#194 by @CodingBangboo)

ToolCallTrace includes input/output

ToolCallTrace now carries the tool's input and output payloads, making it useful for debugging and audit without inspecting the raw conversation. (#124 by @MyPrototypeWhat)

Fixes

  • Strip image blocks before summarize compression to avoid ballooning token cost. (#196 by @MyPrototypeWhat)
  • Preserve tool_use/tool_result pairing during sliding-window truncation, fixing orphaned tool blocks. (#193 by @MyPrototypeWhat)
  • onAgentStream path now forwards the full RunOptions into the streaming runner so onTrace, delegation, and run metadata work during streaming. (#184 by @JackChen-me)
  • onPlanReady abort path now reports the real coordinator token cost instead of zero, and catches thrown callbacks. (#183 by @JackChen-me)
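The pairing fix above (#193) can be illustrated with a small sketch: after cutting the window, drop any tool_result whose matching tool_use fell outside it. The block types below are simplified assumptions, not the library's message model.

```typescript
// Sketch of pair-preserving sliding-window truncation — the idea behind the
// #193 fix, not the library's actual implementation.
type Block =
  | { type: "text"; text: string }
  | { type: "tool_use"; id: string }
  | { type: "tool_result"; toolUseId: string };

function slidingWindow(history: Block[], limit: number): Block[] {
  const window = history.slice(-limit);
  const liveToolUse = new Set<string>();
  for (const b of window) if (b.type === "tool_use") liveToolUse.add(b.id);
  // Drop any tool_result whose matching tool_use was truncated away,
  // so the window never contains an orphaned half of a pair.
  return window.filter(
    (b) => b.type !== "tool_result" || liveToolUse.has(b.toolUseId),
  );
}
```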

Examples

  • Express customer support pipeline: multi-agent triage, routing, and resolution. (#191 by @CodingBangboo)
  • Personalized interview simulator: dynamic question generation with structured output. (#189 by @mmjwxbc)
  • Incident postmortem DAG: reconstruct timeline from logs and deploys. (#187 by @binghuaren96)

Docs

Install

npm install @jackchen_me/[email protected]

Thanks to @SiMinus, @tizerluo, @ibrahimkzmv, @CodingBangboo, @MyPrototypeWhat, @mmjwxbc, and @binghuaren96 for the external contributions.

v1.3.0

26 Apr 06:23


New capabilities

Agent delegation

Agents in an orchestrated run can now hand a sub-prompt to another agent on the team and receive its final output as a tool result. Opt-in via registerBuiltInTools(registry, { includeDelegateTool: true }). Five guards: self-delegation, unknown agent, cycle detection, configurable depth cap (maxDelegationDepth, default 3), and pool deadlock. Delegated runs' token usage rolls into the parent's maxTokenBudget so sub-agents cannot silently bypass it. (#123 by @JackChen-me)
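Three of the five guards (self-delegation, cycle detection, and the depth cap) can be sketched as a single check over the delegation chain. The function below is illustrative; only the maxDelegationDepth name and its default of 3 come from the notes above.

```typescript
// Sketch of the self-delegation, cycle, and depth guards described above.
// checkDelegation is an illustrative name; only maxDelegationDepth (default 3)
// is taken from the release notes.
function checkDelegation(
  chain: string[], // agents already in the delegation chain, root first
  target: string, // agent being delegated to
  maxDelegationDepth = 3,
): string | null {
  // Returns a rejection reason, or null if the delegation is allowed.
  if (chain[chain.length - 1] === target) return "self-delegation";
  if (chain.includes(target)) return "cycle";
  if (chain.length >= maxDelegationDepth) return "depth cap exceeded";
  return null;
}
```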

runTeam DAG dashboard CLI

oma runTeam ... --dashboard writes a static HTML view of the resolved task graph after a run, including dependencies and per-task status. (#122 by @ibrahimkzmv, follow-up docs in #141 by @JackChen-me)

outputSchema enforcement and defineTool passthrough

The previously advisory outputSchema on AgentConfig is now enforced: results are parsed and validated, with one retry on validation failure. defineTool schemas pass through to the LLM provider. (#149 by @Xin-Mai)
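The parse-validate-with-one-retry loop can be sketched as below, with a generic validator standing in for the library's Zod integration. Function and type names here are assumptions.

```typescript
// Sketch of the validate-then-retry-once loop described above. A generic
// Validator stands in for the library's Zod integration; names are illustrative.
type Validator<T> = (raw: string) => { ok: true; value: T } | { ok: false };

async function runWithSchema<T>(
  callModel: () => Promise<string>,
  validate: Validator<T>,
  retries = 1, // one retry on validation failure, per the notes above
): Promise<T> {
  for (let attempt = 0; attempt <= retries; attempt++) {
    const result = validate(await callModel());
    if (result.ok) return result.value;
  }
  throw new Error("output failed schema validation after retry");
}
```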

Pluggable shared memory

TeamConfig.sharedMemoryStore accepts any MemoryStore implementation (Redis, SQLite, your own). sharedMemory: true keeps the existing in-process default. (#157 by @JackChen-me)

Advanced LLM sampling

top_p, top_k, repetition_penalty, min_p, and extraBody are now first-class on agent and coordinator configs. Payload spread order is fixed so extraBody overrides sampling parameters but never transport. (#163 by @apollo-mg)

parallelToolCalls exposed for OpenAI

Was previously hardcoded; now configurable per agent. (#173 by @JackChen-me)

Two new providers

Fixes

  • Context compaction persistence and turn dropping. compact strategy was losing turns and not persisting compressed history. (#161 by @apollo-mg)
  • OpenAI mixed-content message ordering. Tool messages must precede user messages in mixed content; previously emitted in the wrong order. (#178 by @voidborne-d)
  • Provider type widening on configs. AgentConfig, CoordinatorConfig, and OrchestratorConfig were not using the full SupportedProvider union. (#158 by @JackChen-me)

Behavior changes

#163 removed two implicit defaults that some users may have relied on:

  • parallel_tool_calls: false is no longer forced. If you need the old behavior, set parallelToolCalls: false explicitly (now exposed via #173).
  • The default frequency_penalty override has been removed.

These are behavior changes, not API breaks, but worth checking if you depended on the old defaults.

The same PR also moved the local <think> tag parsing out of the agent layer into tool/text-tool-extractor.ts. This is internal cleanup with no user-visible impact.

Examples and cookbook

Nine new examples and a category reorganization (#125 by @JackChen-me):

Docs and infrastructure

Install

npm install @jackchen_me/[email protected]

Thanks to @ibrahimkzmv, @mvanhorn, @Klarline, @jadegold55, @zouhh22333-beep, @Kinoo0, @apollo-mg, @Optimisttt, @Agentscreator, @pei-pei45, @fault-segment, @Xin-Mai, @HuXiangyu123, @JackChiang233, @kenrogers, and @voidborne-d for the external contributions that make this release.

Full changelog: v1.2.0...v1.3.0

v1.2.0

18 Apr 12:24


First minor release since 1.1.0. MCP integration, three new LLM providers, context management strategies, a CLI, tool output cost controls, and fixes for abort and error propagation.

Features

  • MCP integration. New connectMCPTools() wires any MCP server (stdio) directly into agent tool use. @modelcontextprotocol/sdk is an optional peer dependency. Runnable example at examples/16-mcp-github.ts. (#89, by @ibrahimkzmv)

  • Three new LLM providers. First-class provider: 'deepseek' (deepseek-chat, deepseek-reasoner), provider: 'minimax' (global and China endpoints via MINIMAX_BASE_URL), and verified Groq via OpenAI-compatible baseURL in examples/19-groq.ts. (#113 and #114 by @hkalex; #121 by @mvanhorn)

  • Context management strategies. New AgentConfig.contextStrategy keeps long runs under token ceilings with four strategies: sliding-window, summarize, compact (rule-based, no extra LLM call), and custom. (#88 by @ibrahimkzmv; #111, #119 by @JackChen-me)

  • Tool output cost controls. New AgentConfig.maxToolOutputChars and per-tool ToolDefinition.maxOutputChars truncate large outputs (head + tail with a marker). New AgentConfig.compressToolResults compresses older tool results once the agent has moved on; errors are never compressed. (#110, #115, #116, #117, #118 by @JackChen-me)
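The head + tail truncation described above can be sketched as follows; the marker text and split are illustrative, not the library's exact output.

```typescript
// Sketch of head + tail truncation with a marker, as described above.
// The marker string and half-and-half split are illustrative choices.
function truncateToolOutput(output: string, maxChars: number): string {
  if (output.length <= maxChars) return output;
  const marker = "\n…[truncated]…\n";
  const keep = maxChars - marker.length;
  // Keep the start and end of the output, drop the middle.
  const head = output.slice(0, Math.ceil(keep / 2));
  const tail = output.slice(-Math.floor(keep / 2));
  return head + marker + tail;
}
```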

  • CLI (oma). New binary for shell and CI with oma run, oma task, oma provider, JSON-first output, and stable exit codes. Docs at docs/cli.md. (#107 by @ibrahimkzmv)

  • AgentConfig.customTools. Inject tool definitions at config time from the orchestrator. Bypasses preset/allowlist filtering but still respects disallowedTools. (#109, #112 by @JackChen-me)

  • glob built-in tool. Find files by glob pattern, sorted by modification time. (#102 by @ibrahimkzmv)

Fixes

  • AbortSignal propagation. Abort now reaches tool execution, the Gemini adapter, and the abort queue path. (#104 fixes #99, #100, #101, by @JackChen-me)

  • Error event propagation. AgentRunner.run() now surfaces error events to callers. (#103 fixes #98, by @JackChen-me)

Examples

  • examples/16-mcp-github.ts: full MCP wiring
  • examples/17-minimax.ts, examples/18-deepseek.ts, examples/19-groq.ts: provider quickstarts
  • examples/with-vercel-ai-sdk/: Next.js + OMA runTeam() + AI SDK useChat

Docs

  • READMEs (EN/ZH) expanded: CLI, MCP, context strategies, tool output control, customTools. ZH caught up with EN on items that shipped in 1.1.

Install

npm install @jackchen_me/[email protected]

Thanks to @hkalex, @ibrahimkzmv, and @mvanhorn for the external contributions that make this release.

Full changelog: v1.1.0...v1.2.0

v1.1.0

11 Apr 07:21


First minor release since 1.0.1. Six new features, two fixes, two new examples, and one behavior change you should read before upgrading.

⚠️ Behavior change (read this before upgrading)

Agents now run with default-deny, dependency-scoped context (#87).
An agent only sees results from tasks it explicitly dependsOn, instead of every prior task in the run. This prevents context leakage between unrelated agents and keeps token usage predictable in larger teams.

If your existing teams relied on agents implicitly seeing all prior task output, add explicit dependsOn edges in your task graph. No API change is required for runTeam() users whose coordinator already produces a sensible DAG.
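The scoping rule can be sketched as a context-assembly filter: an agent's context includes only outputs from tasks it explicitly dependsOn. The shapes below are illustrative, not the library's internals.

```typescript
// Sketch of default-deny, dependency-scoped context assembly: a task sees
// only the outputs of tasks it explicitly dependsOn. Shapes are illustrative.
interface TaskResult {
  id: string;
  dependsOn: string[];
  output: string;
}

function contextFor(taskId: string, results: TaskResult[]): string[] {
  const task = results.find((t) => t.id === taskId);
  if (!task) return [];
  const visible = new Set(task.dependsOn);
  // Default-deny: anything not listed in dependsOn is filtered out.
  return results.filter((t) => visible.has(t.id)).map((t) => t.output);
}
```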

This change was prompted by a combination of competitive analysis (XCLI scopes sub-agent context to a minimum file set + tool allowlist by default) and a public post on X by guk2472 flagging inter-agent context pollution as the real production killer in multi-agent systems. Thanks for the signal.

Features

  • AbortSignal support for runTeam() and runTasks() (#69). Cancel a run mid-flight from the caller.
  • Skip coordinator for simple goals in runTeam() (#70). Single-agent goals no longer pay the coordinator round-trip.
  • Token budget management at agent and orchestrator level (#71). Stops runs that exceed a configured budget instead of silently burning tokens.
  • Tool allowlist / denylist / preset (#83). Restrict which tools an agent can call without rebuilding the registry.
  • Customizable coordinator (#85). Override the coordinator's model, system prompt, tools, toolPreset, and disallowedTools via CoordinatorConfig.
  • Dependency-scoped agent context (#87). See behavior change above.

Fixes

  • Per-agent mutex prevents concurrent runs on the same Agent instance from corrupting state (#77).
  • Duplicate progress events in the short-circuit path for runTeam() are gone, and completedTaskCount is no longer double-incremented (#82).

Examples

  • Multi-source research aggregation (#79)
  • Multi-perspective code review (#80)

Docs

  • README top fold rewritten and Examples section trimmed (#95)
  • Coverage badge updated to 88% (#57)
  • DECISIONS.md restructured to signal openness on MCP and A2A

Install

npm install @jackchen_me/[email protected]

v1.0.0

05 Apr 05:41


What's new since 0.2.0

Features

  • Structured output — optional outputSchema (Zod) on any agent, with auto-retry on validation failure (#36, #38)
  • Task retry with exponential backoff — maxRetries, retryDelayMs, retryBackoff per task (#37)
  • Observability — onTrace callback emits structured spans for LLM calls, tool calls, tasks, and agent runs (#40)
  • Lifecycle hooks — beforeRun / afterRun on AgentConfig for prompt rewriting and result post-processing (#45)
  • Human-in-the-loop — onApproval callback between task execution rounds to gate the next batch (#46)
  • Loop detection — detects stuck agents repeating the same tool calls or text, with configurable warn / terminate / custom handler (#49)
  • Grok (xAI) adapter — first-class support with dedicated GrokAdapter (#44)
  • Fallback tool-call extraction — local models that emit tool calls as plain text are now handled automatically (#47)
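The loop-detection idea from the list above can be sketched as a repetition check over recent step signatures (tool call or text); the threshold and signature format here are illustrative assumptions.

```typescript
// Sketch of simple loop detection: flag an agent whose last `threshold`
// step signatures (tool call or text) are identical. Threshold is illustrative.
function isLooping(recent: string[], threshold = 3): boolean {
  if (recent.length < threshold) return false;
  const last = recent[recent.length - 1];
  return recent.slice(-threshold).every((s) => s === last);
}
```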

Testing & quality

  • 340 tests, 71% line coverage across src/ (#53)
  • Coverage badge added to README (#55)

Full changelog

v0.2.0...v1.0.0