  • Chat SDK now supports concurrent message handling

    Chat SDK now lets you control what happens when a new message arrives before a previous one finishes processing, with the new concurrency option for the Chat class.

    const bot = new Chat({
      concurrency: {
        strategy: "queue",
        maxQueueSize: 20,
        onQueueFull: "drop-oldest",
        queueEntryTtlMs: 60_000,
      },
      // ...
    });

    Several options are available to customize each strategy's behavior.

    Four strategies are available:

    • drop (default): discards messages that arrive while a handler is still running

    • queue: processes the latest message after the handler finishes

    • debounce: waits for a pause in conversation, processes only the final message

    • concurrent: processes every message immediately, no locking
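The queue strategy's drop-oldest overflow behavior can be sketched independently of the SDK. This is an illustrative sketch only; MessageQueue and its methods are hypothetical stand-ins, not Chat SDK APIs:

```typescript
// Minimal sketch of a bounded queue with a "drop-oldest" overflow policy,
// mirroring the maxQueueSize / onQueueFull options described above.
class MessageQueue<T> {
  private items: T[] = [];
  constructor(private maxSize: number) {}

  // Add a message; when the queue is full, discard the oldest entry first.
  enqueue(item: T): void {
    if (this.items.length >= this.maxSize) {
      this.items.shift(); // drop-oldest
    }
    this.items.push(item);
  }

  // Take the next message in arrival order.
  dequeue(): T | undefined {
    return this.items.shift();
  }

  get size(): number {
    return this.items.length;
  }
}

const q = new MessageQueue<string>(2);
q.enqueue("a");
q.enqueue("b");
q.enqueue("c"); // queue is full, so "a" is dropped
console.log(q.dequeue()); // "b"
```

With a TTL option like queueEntryTtlMs, each entry would additionally carry a timestamp and be skipped once it expires; the sketch omits that for brevity.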

    Read the documentation to get started.

  • Chat SDK now supports scheduled Slack messages

    Chat SDK now supports scheduled messages on Slack, allowing you to deliver a message at a future time.

    Use thread.schedule() and pass your message and a postAt date, like:

    const scheduled = await thread.schedule("Reminder: standup in 5 minutes!", {
      postAt: new Date("2026-03-09T08:55:00Z"),
    });
    // Cancel before delivery
    await scheduled.cancel();
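Since postAt takes a standard Date, relative delivery times can be computed with plain date arithmetic. A minimal sketch, where minutesFromNow is a hypothetical helper rather than part of the SDK:

```typescript
// Hypothetical helper: build a postAt Date a given number of minutes ahead.
function minutesFromNow(minutes: number): Date {
  return new Date(Date.now() + minutes * 60_000);
}

// For example, to schedule roughly five minutes from now:
// await thread.schedule("Reminder: standup in 5 minutes!", {
//   postAt: minutesFromNow(5),
// });
```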

    Read the documentation to get started.

  • Elastic build machines now available in beta


    Elastic build machines are now available in beta for all paid plans, giving teams control over build performance without project-level micromanagement. You can configure elastic builds at the team or project level.

    Rather than a one-size-fits-all approach, Vercel evaluates each project individually and assigns the right machine for its actual needs. Smaller, simpler projects may benefit from cost-efficient Standard build machines while more complex workloads can automatically scale up to Enhanced or Turbo machines.

    This smart assignment prevents over-provisioning, with teams automatically getting optimal performance at the right cost for every project.

    Enable elastic builds in your team settings or project settings, or read the builds documentation.

  • Enterprise teams can now set their default build machine


    Enterprise team owners can now set a default build machine at the team level. This setting automatically applies to newly created projects, though you can still override it on a per-project basis.

    Existing projects retain their current configurations unless you explicitly choose to apply the new team default to all of them when saving.

    Learn more about build machines or try it out from settings.

  • View specific error codes in runtime logs

    You can now view specific error codes in runtime logs.

    Runtime logs help you view and troubleshoot errors in your applications on Vercel. In addition to the HTTP response status code, we now list the specific error code in the request details panel of the runtime logs page in the Vercel dashboard.

    This makes it easier to diagnose why a request failed.

    Learn more about errors on Vercel.

    Chandan Rao, Mark Knichel

  • Sandbox SDK adds file permission control

    Vercel Sandbox SDK 1.9.0 now supports setting file permissions directly when writing files.

    By passing a mode property to the writeFiles API, you can define permissions in a single operation.

    This eliminates the need for an additional chmod execution round-trip when creating executable scripts or managing access rights inside the sandbox.

    await sandbox.writeFiles([{
      path: 'run.sh',
      content: '#!/bin/bash\necho "ready"',
      mode: 0o755,
    }]);
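The mode value is a standard Unix octal permission: 0o755 grants the owner read/write/execute and everyone else read/execute. A small sketch of that decoding, where modeToString is a hypothetical illustration rather than a Sandbox SDK API:

```typescript
// Hypothetical helper: render an octal mode like 0o755 as an "rwx" string.
function modeToString(mode: number): string {
  const bits = ["r", "w", "x"];
  let out = "";
  // Walk the nine permission bits from owner-read down to others-execute.
  for (let shift = 8; shift >= 0; shift--) {
    out += (mode >> shift) & 1 ? bits[(8 - shift) % 3] : "-";
  }
  return out;
}

console.log(modeToString(0o755)); // "rwxr-xr-x"
```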

    See the documentation to learn more.

  • MiniMax M2.7 is live on AI Gateway

    MiniMax M2.7 is now available on Vercel AI Gateway in two variants: standard and high-speed. M2.7 is a major step up from previous M2-series models in software engineering, agentic workflows, and professional office tasks.

    The model natively supports multi-agent collaboration, complex skill orchestration, and dynamic tool search for building agentic workflows. M2.7 also improves on production debugging and end-to-end project delivery.

    The high-speed variant delivers the same quality at roughly 100 tokens per second for latency-sensitive use cases, at 2x the cost of the standard variant.

    To use M2.7, set model to minimax/minimax-m2.7 or minimax/minimax-m2.7-highspeed in the AI SDK.

    import { streamText } from 'ai';

    const result = streamText({
      model: 'minimax/minimax-m2.7-highspeed',
      prompt: `Analyze the production alert logs from the last hour,
    correlate them with recent deployments, identify the
    root cause, and submit a fix with a non-blocking
    migration to restore service.`,
    });

    AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

    Learn more about AI Gateway, view the AI Gateway model leaderboard, or try it in our model playground.