Releases: open-multi-agent/open-multi-agent
v1.4.0
Highlights
Official org package
Open Multi-Agent now has an official organization package:
npm install @open-multi-agent/core

New projects should use @open-multi-agent/core.
Plan-only orchestration
Adds PlanOnly mode so teams can inspect the coordinator's task DAG before running agent work. (#203 by @CodingBangboo)
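The point of a plan-only run is catching a bad task DAG before any agent tokens are spent. As an illustration of the kind of check you might run on an exported plan (the types and function here are illustrative, not the library's API), a topological sort that rejects cycles:

```typescript
// Illustrative DAG check, NOT the library's API: verify a coordinator plan
// is acyclic and produce an executable order, or report the offending tasks.
interface PlannedTask {
  id: string;
  dependsOn: string[];
}

function topoOrder(tasks: PlannedTask[]): string[] {
  const remaining = new Map(
    tasks.map((t): [string, Set<string>] => [t.id, new Set(t.dependsOn)]),
  );
  const order: string[] = [];
  while (remaining.size > 0) {
    // Tasks whose dependencies have all left the remaining set are runnable.
    const ready = [...remaining.entries()]
      .filter(([, deps]) => [...deps].every(d => !remaining.has(d)))
      .map(([id]) => id);
    if (ready.length === 0) {
      throw new Error(`cycle among: ${[...remaining.keys()].join(', ')}`);
    }
    for (const id of ready) {
      remaining.delete(id);
      order.push(id);
    }
  }
  return order;
}
```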
LLM adapter improvements
- Preserve reasoning blocks across Anthropic and Gemini turns. (#205 by @MyPrototypeWhat)
- Forward `reasoning_effort` and backfill sampling-parameter parity across OpenAI-compatible, Copilot, and Azure paths. (#209 by @MyPrototypeWhat)
- Add a Mistral provider example and README entry. (#206 by @mvanhorn)
Shared memory TTL
SharedMemory entries can now expire by turn count. (#213 by @MyPrototypeWhat)
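Turn-count expiry is easy to picture with a minimal sketch. The names below (`TurnScopedMemory`, `nextTurn`) are mine, not the library's SharedMemory API:

```typescript
// Hypothetical sketch of turn-count TTL; the actual SharedMemory API may differ.
interface Entry {
  value: string;
  expiresAtTurn: number | null; // null = never expires
}

class TurnScopedMemory {
  private turn = 0;
  private entries = new Map<string, Entry>();

  set(key: string, value: string, ttlTurns?: number): void {
    this.entries.set(key, {
      value,
      expiresAtTurn: ttlTurns === undefined ? null : this.turn + ttlTurns,
    });
  }

  get(key: string): string | undefined {
    const e = this.entries.get(key);
    if (!e) return undefined;
    if (e.expiresAtTurn !== null && this.turn >= e.expiresAtTurn) {
      this.entries.delete(key); // lazily evict expired entries on read
      return undefined;
    }
    return e.value;
  }

  nextTurn(): void {
    this.turn += 1;
  }
}
```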
Fixes
- Keep text-tool extraction depth non-negative when a stray closing brace appears. (#217 by @voidborne-d)
- Fix truncation behavior and tighten coordinator dependency guidance. (#215 by @CodingBangboo)
Examples and Docs
- Add paper replication triage cookbook example. (#202 by @DaiMao-UT)
- Add rare disease information triage example. (#211 by @oooooowoooooo)
- Refresh README, hero animation, badges, docs, and repository links for the new GitHub organization. (#214 and #218 by @JackChen-me)
Compatibility
No intentional runtime API breaks were introduced. The package identity changed to @open-multi-agent/core.
The previous package path, @jackchen_me/open-multi-agent, remains supported during the migration window and is also published at 1.4.0.
Install
npm install @open-multi-agent/core@1.4.0

Legacy path during the migration window:

npm install @jackchen_me/open-multi-agent@1.4.0

v1.3.1
Features
Streaming reasoning events
StreamEvent now supports a reasoning type that carries the model's thinking tokens in real time. ReasoningBlock is also added to the ContentBlock union for non-streaming paths. Supported on Anthropic and OpenAI providers. (#174 by @SiMinus)
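Consumers typically handle this as a discriminated union so reasoning tokens can be rendered separately from answer text. A self-contained sketch (the exact StreamEvent field names here are assumptions, not the library's definitions):

```typescript
// Illustrative StreamEvent shape only; the library's actual fields may differ.
type StreamEvent =
  | { type: 'text'; delta: string }
  | { type: 'reasoning'; delta: string } // the model's thinking tokens
  | { type: 'tool_call'; name: string };

// Accumulate answer text and reasoning into separate buffers,
// e.g. to show reasoning in a collapsed panel.
function splitStream(events: StreamEvent[]): { text: string; reasoning: string } {
  let text = '';
  let reasoning = '';
  for (const ev of events) {
    switch (ev.type) {
      case 'text': text += ev.delta; break;
      case 'reasoning': reasoning += ev.delta; break;
      default: break; // tool calls handled elsewhere
    }
  }
  return { text, reasoning };
}
```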
onAgentStream and onPlanReady hooks
Two new orchestrator hooks (runTeam only): onAgentStream delivers real-time per-token streaming events during agent runs, and onPlanReady fires after the coordinator decomposes the goal into a task DAG -- return false to abort before any agent work starts. (#182, #181 by @tizerluo; #184, #183 by @JackChen-me)
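The contract of the two hooks can be modeled in a few lines. This is a toy model of the semantics described above, not the library's implementation, and the signatures are simplified assumptions:

```typescript
// Toy model of the hook contract, NOT the library's implementation.
interface Hooks {
  onPlanReady?: (plan: string[]) => boolean | void; // return false to abort
  onAgentStream?: (agentId: string, token: string) => void;
}

function runTeamSketch(plan: string[], hooks: Hooks): string[] {
  // onPlanReady fires after planning; returning false aborts before agent work.
  if (hooks.onPlanReady?.(plan) === false) return [];
  const done: string[] = [];
  for (const task of plan) {
    // Pretend each agent streams its task name token by token.
    for (const token of task.split('')) hooks.onAgentStream?.(task, token);
    done.push(task);
  }
  return done;
}
```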
Agent Observation Pipeline: new trace events
plan_ready and agent_stream trace events join the trace pipeline, enabling downstream observers to react to plan generation and streaming agent output. (#188 by @ibrahimkzmv)
AWS Bedrock adapter
New LLM adapter for Amazon Bedrock, supporting the full adapter contract (chat + stream). (#194 by @CodingBangboo)
ToolCallTrace includes input/output
ToolCallTrace now carries the tool's input and output payloads, making it useful for debugging and audit without inspecting the raw conversation. (#124 by @MyPrototypeWhat)
Fixes
- Strip image blocks before summarize compression to avoid ballooning token cost. (#196 by @MyPrototypeWhat)
- Preserve `tool_use`/`tool_result` pairing during sliding-window truncation, fixing orphaned tool blocks. (#193 by @MyPrototypeWhat)
- The `onAgentStream` path now forwards the full `RunOptions` into the streaming runner, so `onTrace`, delegation, and run metadata work during streaming. (#184 by @JackChen-me)
- The `onPlanReady` abort path now reports the real coordinator token cost instead of zero, and catches thrown callbacks. (#183 by @JackChen-me)
Examples
- Express customer support pipeline: multi-agent triage, routing, and resolution. (#191 by @CodingBangboo)
- Personalized interview simulator: dynamic question generation with structured output. (#189 by @mmjwxbc)
- Incident postmortem DAG: reconstruct timeline from logs and deploys. (#187 by @binghuaren96)
Docs
- README capability tagline, per-area contributor attribution, and a JSDoc fix. (#190, #186, #180 by @JackChen-me)
Install
npm install @jackchen_me/open-multi-agent@1.3.1

Thanks to @SiMinus, @tizerluo, @ibrahimkzmv, @CodingBangboo, @MyPrototypeWhat, @mmjwxbc, and @binghuaren96 for the external contributions.
v1.3.0
New capabilities
Agent delegation
Agents in an orchestrated run can now hand a sub-prompt to another agent on the team and receive its final output as a tool result. Opt-in via registerBuiltInTools(registry, { includeDelegateTool: true }). Five guards: self-delegation, unknown agent, cycle detection, configurable depth cap (maxDelegationDepth, default 3), and pool deadlock. Delegated runs' token usage rolls into the parent's maxTokenBudget so sub-agents cannot silently bypass it. (#123 by @JackChen-me)
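Three of the five guards (self-delegation, cycle detection, depth cap) are pure checks on the delegation chain and can be sketched standalone. This is an illustration of the guard logic, not the library's code; only `maxDelegationDepth` and its default of 3 come from the release notes:

```typescript
// Illustrative delegation guards; not the library's actual implementation.
// `chain` is the agents already in the delegation chain, root first.
function checkDelegation(
  chain: string[],
  target: string,
  maxDelegationDepth = 3, // default mirrors the release notes
): void {
  const current = chain[chain.length - 1];
  if (target === current) throw new Error('self-delegation');
  if (chain.includes(target)) {
    throw new Error(`cycle: ${[...chain, target].join(' -> ')}`);
  }
  if (chain.length >= maxDelegationDepth) throw new Error('depth cap exceeded');
}
```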
runTeam DAG dashboard CLI
oma runTeam ... --dashboard writes a static HTML view of the resolved task graph after a run, including dependencies and per-task status. (#122 by @ibrahimkzmv, follow-up docs in #141 by @JackChen-me)
outputSchema enforcement and defineTool passthrough
The previously advisory outputSchema on AgentConfig is now enforced: results are parsed and validated, with one retry on validation failure. defineTool schemas pass through to the LLM provider. (#149 by @Xin-Mai)
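The "validate, retry once" shape is worth seeing concretely. A minimal synchronous sketch (the real path is async and uses Zod; the generic validator and names here are mine):

```typescript
// Sketch of "validate, retry once on validation failure".
// A real implementation would be async and use a Zod schema's parse().
function runWithSchema<T>(
  generate: () => string,            // stand-in for one LLM call
  validate: (raw: string) => T,      // throws on invalid output
): T {
  try {
    return validate(generate());
  } catch {
    return validate(generate());     // exactly one retry, then fail for real
  }
}
```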
Pluggable shared memory
TeamConfig.sharedMemoryStore accepts any MemoryStore implementation (Redis, SQLite, your own). sharedMemory: true keeps the existing in-process default. (#157 by @JackChen-me)
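A custom backend only needs to satisfy the store contract. The two-method interface below is a hypothetical minimal version for illustration; the package's real MemoryStore interface may have more methods or different signatures:

```typescript
// Hypothetical minimal MemoryStore contract (async, so Redis/SQLite fit).
interface MemoryStore {
  get(key: string): Promise<string | undefined>;
  set(key: string, value: string): Promise<void>;
}

// Reference implementation of the contract: the in-process default.
class InProcessStore implements MemoryStore {
  private data = new Map<string, string>();
  async get(key: string) { return this.data.get(key); }
  async set(key: string, value: string) { this.data.set(key, value); }
}
```

A Redis-backed class implementing the same two methods would then plug into `sharedMemoryStore` unchanged.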
Advanced LLM sampling
top_p, top_k, repetition_penalty, min_p, and extraBody are now first-class on agent and coordinator configs. Payload spread order is fixed so extraBody overrides sampling parameters but never transport. (#163 by @apollo-mg)
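The spread-order fix is concrete enough to show. This is a sketch of the documented precedence (extraBody over sampling, transport over everything), with illustrative field names, not the adapter's actual code:

```typescript
// Sketch of the documented payload precedence: sampling knobs first,
// extraBody can override them, but transport fields always win.
function buildPayload(
  sampling: Record<string, unknown>,
  extraBody: Record<string, unknown>,
  transport: { model: string; stream: boolean },
): Record<string, unknown> {
  return { ...sampling, ...extraBody, ...transport };
}
```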
parallelToolCalls exposed for OpenAI
Was previously hardcoded; now configurable per agent. (#173 by @JackChen-me)
Two new providers
- Azure OpenAI adapter, closing the long-standing #24. (#143 by @Klarline)
- Qiniu provider for users on Chinese infrastructure. (#154 by @JackChiang233, follow-up README/CLI docs in #165 by @JackChen-me)
Fixes
- Context compaction persistence and turn dropping. The `compact` strategy was losing turns and not persisting compressed history. (#161 by @apollo-mg)
- OpenAI mixed-content message ordering. Tool messages must precede user messages in mixed content; previously emitted in the wrong order. (#178 by @voidborne-d)
- Provider type widening on configs. `AgentConfig`, `CoordinatorConfig`, and `OrchestratorConfig` were not using the full `SupportedProvider` union. (#158 by @JackChen-me)
Behavior changes
#163 removed two implicit defaults that some users may have relied on:
- `parallel_tool_calls: false` is no longer forced. If you need the old behavior, set `parallelToolCalls: false` explicitly (now exposed via #173).
- The default `frequency_penalty` override has been removed.
These are behavior changes, not API breaks, but worth checking if you depended on the old defaults.
The same PR also moved the local `<think>` tag parsing out of the agent layer into `tool/text-tool-extractor.ts`. This is internal cleanup with no user-visible impact.
Examples and cookbook
Nine new examples and a category reorganization (#125 by @JackChen-me):
- Meeting summarizer pattern. (#139 by @mvanhorn, moved into `cookbook/` in #140 by @JackChen-me)
- Translation / back-translation cookbook. (#145 by @zouhh22333-beep)
- Competitive monitoring. (#146 by @pei-pei45)
- Multi-perspective code review, upgraded to structured output and free providers. (#150 by @Kinoo0)
- Contract review DAG with step-level retry. (#155 by @fault-segment)
- Research aggregation with schema. (#159 by @Optimisttt)
- Engram integration: memory store, toolkit, two demos. (#160 by @Agentscreator, Ecosystem section refresh in #151 by @JackChen-me)
- @agentsonar/oma integration: sidecar from the agentsonar team detecting cross-run delegation cycles, repetition, and rate bursts. (e7aecf3 by @JackChen-me)
- Cost-tiered pipeline comparing flagship vs mixed model tiers. (#164 by @HuXiangyu123)
- OpenRouter provider example. (#167 by @kenrogers)
- `local-quantized.ts` showing tuned sampling on vLLM and llama-server. (ff987cf by @JackChen-me)
Docs and infrastructure
- README refresh: positioning, branding, hero block, integrations, examples section. (#126, #176, #177, #179 by @JackChen-me)
- `CLAUDE.md` architecture map synced with the current `src/` layout. (#171 by @JackChen-me)
- CLI dashboard documented and added to the flag table. (#141 by @JackChen-me)
- Real badges and Codecov integration. (#127, #128, #129, #130 by @JackChen-me)
- npm registry pinned to npmjs.org via repo-level `.npmrc`. (#170 by @JackChen-me)
- Extended LLM adapter coverage for issue #54. (#144 by @jadegold55)
Install
npm install @jackchen_me/open-multi-agent@1.3.0

Thanks to @ibrahimkzmv, @mvanhorn, @Klarline, @jadegold55, @zouhh22333-beep, @Kinoo0, @apollo-mg, @Optimisttt, @Agentscreator, @pei-pei45, @fault-segment, @Xin-Mai, @HuXiangyu123, @JackChiang233, @kenrogers, and @voidborne-d for the external contributions that make this release.
Full changelog: v1.2.0...v1.3.0
v1.2.0
First minor release since 1.1.0. MCP integration, three new LLM providers, context management strategies, a CLI, tool output cost controls, and fixes for abort and error propagation.
Features
- MCP integration. New `connectMCPTools()` wires any MCP server (stdio) directly into agent tool use. `@modelcontextprotocol/sdk` is an optional peer dependency. Runnable example at `examples/16-mcp-github.ts`. (#89 by @ibrahimkzmv)
- Three new LLM providers. First-class `provider: 'deepseek'` (`deepseek-chat`, `deepseek-reasoner`), `provider: 'minimax'` (global and China endpoints via `MINIMAX_BASE_URL`), and verified Groq via OpenAI-compatible `baseURL` in `examples/19-groq.ts`. (#113 and #114 by @hkalex; #121 by @mvanhorn)
- Context management strategies. New `AgentConfig.contextStrategy` keeps long runs under token ceilings with four strategies: `sliding-window`, `summarize`, `compact` (rule-based, no extra LLM call), and `custom`. (#88 by @ibrahimkzmv; #111, #119 by @JackChen-me)
- Tool output cost controls. New `AgentConfig.maxToolOutputChars` and per-tool `ToolDefinition.maxOutputChars` truncate large outputs (head + tail with a marker). New `AgentConfig.compressToolResults` compresses older tool results once the agent has moved on; errors are never compressed. (#110, #115, #116, #117, #118 by @JackChen-me)
- CLI (`oma`). New binary for shell and CI with `oma run`, `oma task`, `oma provider`, JSON-first output, and stable exit codes. Docs at `docs/cli.md`. (#107 by @ibrahimkzmv)
- `AgentConfig.customTools`. Inject tool definitions at config time from the orchestrator. Bypasses preset/allowlist filtering but still respects `disallowedTools`. (#109, #112 by @JackChen-me)
- `glob` built-in tool. Find files by glob pattern, sorted by modification time. (#102 by @ibrahimkzmv)
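The head + tail truncation used by the tool output cost controls can be sketched in a few lines. The marker text and split ratio here are assumptions, not the library's exact behavior:

```typescript
// Sketch of head + tail truncation with a marker, as described for
// maxToolOutputChars; marker text and split ratio are illustrative.
function truncateOutput(text: string, maxChars: number): string {
  if (text.length <= maxChars) return text;
  const marker = '\n…[truncated]…\n';
  const keep = maxChars - marker.length;
  const head = text.slice(0, Math.ceil(keep / 2));
  const tail = text.slice(text.length - Math.floor(keep / 2));
  return head + marker + tail;
}
```

Keeping the tail matters for tool output: exit codes, final log lines, and error summaries usually sit at the end.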
Fixes
- `AbortSignal` propagation. Abort now reaches tool execution, the Gemini adapter, and the abort queue path. (#104 fixes #99, #100, #101, by @JackChen-me)
- Error event propagation. `AgentRunner.run()` now surfaces error events to callers. (#103 fixes #98, by @JackChen-me)
Examples
- `examples/16-mcp-github.ts`: full MCP wiring
- `examples/17-minimax.ts`, `examples/18-deepseek.ts`, `examples/19-groq.ts`: provider quickstarts
- `examples/with-vercel-ai-sdk/`: Next.js + OMA `runTeam()` + AI SDK `useChat`
Docs
- READMEs (EN/ZH) expanded: CLI, MCP, context strategies, tool output control, `customTools`. ZH caught up with EN on items that shipped in 1.1.
Install
npm install @jackchen_me/open-multi-agent@1.2.0

Thanks to @hkalex, @ibrahimkzmv, and @mvanhorn for the external contributions that make this release.
Full changelog: v1.1.0...v1.2.0
v1.1.0
First minor release since 1.0.1. Six new features, two fixes, two new examples, and one behavior change you should read before upgrading.
⚠️ Behavior change (read this before upgrading)
Agents now run with default-deny, dependency-scoped context (#87).
An agent only sees results from tasks it explicitly dependsOn, instead of every prior task in the run. This prevents context leakage between unrelated agents and keeps token usage predictable in larger teams.
If your existing teams relied on agents implicitly seeing all prior task output, add explicit dependsOn edges in your task graph. No API change is required for runTeam() users whose coordinator already produces a sensible DAG.
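Default-deny scoping amounts to a simple filter over completed results. A self-contained sketch of the rule (illustrative types; not the orchestrator's actual code):

```typescript
// Sketch of dependency-scoped context: an agent sees only the results of
// tasks it explicitly dependsOn, never every prior task in the run.
interface TaskSpec {
  id: string;
  dependsOn: string[];
}

function visibleContext(
  task: TaskSpec,
  results: Map<string, string>, // results of all completed tasks in the run
): Map<string, string> {
  const scoped = new Map<string, string>();
  for (const dep of task.dependsOn) {
    const r = results.get(dep);
    if (r !== undefined) scoped.set(dep, r);
  }
  return scoped;
}
```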
This change was prompted by a combination of competitive analysis (XCLI scopes sub-agent context to a minimum file set + tool allowlist by default) and a public post on X by guk2472 flagging inter-agent context pollution as the real production killer in multi-agent systems. Thanks for the signal.
Features
- AbortSignal support for `runTeam()` and `runTasks()` (#69). Cancel a run mid-flight from the caller.
- Skip coordinator for simple goals in `runTeam()` (#70). Single-agent goals no longer pay the coordinator round-trip.
- Token budget management at agent and orchestrator level (#71). Stops runs that exceed a configured budget instead of silently burning tokens.
- Tool allowlist / denylist / preset (#83). Restrict which tools an agent can call without rebuilding the registry.
- Customizable coordinator (#85). Override the coordinator's model, system prompt, tools, `toolPreset`, and `disallowedTools` via `CoordinatorConfig`.
- Dependency-scoped agent context (#87). See behavior change above.
Fixes
- Per-agent mutex prevents concurrent runs on the same `Agent` instance from corrupting state (#77).
- Duplicate progress events in the short-circuit path for `runTeam()` are gone, and `completedTaskCount` is no longer double-incremented (#82).
Examples
Docs
- README top fold rewritten and Examples section trimmed (#95)
- Coverage badge updated to 88% (#57)
- `DECISIONS.md` restructured to signal openness on MCP and A2A
Install
npm install @jackchen_me/open-multi-agent@1.1.0

v1.0.0
What's new since 0.2.0
Features
- Structured output — optional `outputSchema` (Zod) on any agent, with auto-retry on validation failure (#36, #38)
- Task retry with exponential backoff — `maxRetries`, `retryDelayMs`, `retryBackoff` per task (#37)
- Observability — `onTrace` callback emits structured spans for LLM calls, tool calls, tasks, and agent runs (#40)
- Lifecycle hooks — `beforeRun`/`afterRun` on AgentConfig for prompt rewriting and result post-processing (#45)
- Human-in-the-loop — `onApproval` callback between task execution rounds to gate the next batch (#46)
- Loop detection — detects stuck agents repeating the same tool calls or text, with configurable `warn`/`terminate`/custom handler (#49)
- Grok (xAI) adapter — first-class support with dedicated GrokAdapter (#44)
- Fallback tool-call extraction — local models that emit tool calls as plain text are now handled automatically (#47)
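The task-retry settings above (`maxRetries`, `retryDelayMs`, `retryBackoff`) imply a delay schedule that is easy to sketch. The exact formula is an assumption; this shows the usual multiplicative form:

```typescript
// Sketch of an exponential backoff schedule from the three retry knobs;
// the library's exact formula may differ.
function retryDelays(
  maxRetries: number,
  retryDelayMs: number,
  retryBackoff: number,
): number[] {
  const delays: number[] = [];
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    // attempt 0 waits retryDelayMs, each further attempt multiplies by retryBackoff
    delays.push(retryDelayMs * Math.pow(retryBackoff, attempt));
  }
  return delays;
}
```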