
Vercel Groq Integration
Native Integration

Last updated February 10, 2026

Groq is a high-performance AI inference service with an ultra-fast Language Processing Unit (LPU) architecture. It enables fast response times for language model inference, making it ideal for applications requiring low latency.

You can use the Vercel and Groq integration to:

  • Connect AI models such as Whisper-large-v3 for audio processing and Llama models for text generation to your Vercel projects.
  • Deploy and run inference with optimized performance.

Groq provides a diverse range of AI models designed for high-performance tasks.

DeepSeek R1 Distill Llama 70B

Type: Chat

A reasoning-focused text generation model, distilled from DeepSeek-R1 onto a Llama 70B base.

Distil Whisper Large V3 English

Type: Audio

A distilled, or compressed, version of OpenAI's Whisper model, designed to provide faster, lower-cost English speech recognition while maintaining comparable accuracy.

Llama 3.1 8B Instant

Type: Chat

A fast and efficient language model for text generation.

Mistral Saba 24B

Type: Chat

Mistral Saba 24B is a specialized model trained to excel in Arabic, Farsi, Urdu, Hebrew, and Indic languages. Designed for high-performance multilingual capabilities, it delivers exceptional results across a wide range of tasks in these languages while maintaining strong performance in English. With a 32K token context window and tool use capabilities, it's ideal for complex multilingual applications requiring deep language understanding and regional context.
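Since Mistral Saba supports tool use, it can be wired into the AI SDK's tool-calling API. The sketch below assumes the model id is `mistral-saba-24b` (confirm the exact id in the Groq console); the `getWeather` tool and its stubbed result are hypothetical examples.

```typescript
import { groq } from '@ai-sdk/groq';
import { streamText, tool } from 'ai';
import { z } from 'zod';

// Sketch of tool use with Mistral Saba on Groq.
// Model id 'mistral-saba-24b' is an assumption -- check the Groq console.
const result = streamText({
  model: groq('mistral-saba-24b'),
  messages: [{ role: 'user', content: 'What is the weather in Dubai?' }],
  tools: {
    // Hypothetical tool: the model can call this with a structured argument.
    getWeather: tool({
      description: 'Get the current weather for a city',
      parameters: z.object({ city: z.string() }),
      execute: async ({ city }) => ({ city, tempC: 31 }), // stubbed result
    }),
  },
});
```

The model decides when to invoke the tool based on its description and schema; the stream then interleaves tool calls and text.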

Qwen QWQ 32B

Type: Chat

Qwen QWQ 32B is a powerful large language model with strong reasoning capabilities and versatile applications across various tasks.

Whisper Large V3

Type: Audio

A state-of-the-art model for automatic speech recognition (ASR) and speech translation, trained on 1M hours of weakly labeled and 4M hours of pseudo-labeled audio. Supports 99 languages with improved accuracy over previous versions.

Whisper Large V3 Turbo

Type: Audio

A faster version of Whisper Large V3 with reduced decoding layers (4 instead of 32), providing significantly improved speed with minimal quality degradation. Supports 99 languages for speech recognition and translation.
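The Whisper models can be used for transcription through the AI SDK's experimental transcription API. This is a hedged sketch: it assumes your installed versions of `ai` and `@ai-sdk/groq` expose `experimental_transcribe` and `groq.transcription`, and `./sample.wav` is a placeholder path.

```typescript
import { groq } from '@ai-sdk/groq';
import { experimental_transcribe as transcribe } from 'ai';
import { readFile } from 'node:fs/promises';

// Sketch: transcribe a local audio file with Whisper Large V3 Turbo.
// Requires an ESM context (top-level await) and GROQ_API_KEY in the environment.
const result = await transcribe({
  model: groq.transcription('whisper-large-v3-turbo'),
  audio: await readFile('./sample.wav'), // placeholder path
});

console.log(result.text);
```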

Llama 3.3 70B Instruct Turbo

Type: Chat

Meta's Llama 3.3 is an auto-regressive language model that uses an optimized transformer architecture. Supports 128K context length and multilingual processing.

Llama 4 Scout 17B 16E Instruct

Type: Chat

Meta's natively multimodal model with a 17B parameter mixture-of-experts architecture that enables text and image understanding, supporting 12 languages.
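Because Llama 4 Scout is multimodal, you can pass image content parts alongside text in a message. A minimal sketch, assuming the model id `meta-llama/llama-4-scout-17b-16e-instruct` (confirm in the Groq console) and a placeholder image URL:

```typescript
import { groq } from '@ai-sdk/groq';
import { generateText } from 'ai';

// Sketch of image understanding with Llama 4 Scout.
// Model id and image URL are assumptions for illustration.
const { text } = await generateText({
  model: groq('meta-llama/llama-4-scout-17b-16e-instruct'),
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'Describe this image in one sentence.' },
        { type: 'image', image: new URL('https://example.com/photo.jpg') },
      ],
    },
  ],
});

console.log(text);
```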

The Vercel Groq integration can be accessed through the AI tab on your Vercel dashboard.

To connect Groq to your Vercel projects from the dashboard, follow these steps:

  1. Navigate to the AI tab in your Vercel dashboard
  2. Select Groq from the list of providers, and press Add
  3. Review the provider information, and press Add Provider
  4. You can now select which projects the provider will have access to. You can choose from All Projects or Specific Projects
    • If you select Specific Projects, you'll be prompted to select the projects you want to connect to the provider. The list will display projects associated with your scoped team
    • Multiple projects can be selected during this step
  5. Select the Connect to Project button
  6. You'll be redirected to the provider's website to complete the connection process
  7. Once the connection is complete, you'll be redirected back to the provider integration page in your Vercel dashboard. From here you can manage your provider settings, view usage, and more
  8. Pull the environment variables into your project using Vercel CLI
    Terminal
    vercel env pull
  9. Install the providers package
    Terminal
    pnpm i @ai-sdk/groq ai
  10. Connect your project using the code below:
    app/api/chat/route.ts
    import { groq } from '@ai-sdk/groq';
    import { streamText } from 'ai';

    // Allow streaming responses up to 30 seconds
    export const maxDuration = 30;

    export async function POST(req: Request) {
      // Extract the `messages` from the body of the request
      const { messages } = await req.json();

      // Call the language model
      const result = streamText({
        model: groq('llama-3.1-8b-instant'),
        messages,
      });

      // Respond with the stream
      return result.toDataStreamResponse();
    }
    
Alternatively, you can connect Groq to your project using the Vercel CLI:

  1. Add the provider to your project using the Vercel CLI install command
    Terminal
    vercel install groq
    During this process, you will be asked to open the dashboard to accept the marketplace terms if you have not installed this integration before. You can also choose which project(s) the provider will have access to.
  2. Install the providers package
    Terminal
    pnpm i @ai-sdk/groq ai
  3. Connect your project using the code below:
    app/api/chat/route.ts
    import { groq } from '@ai-sdk/groq';
    import { streamText } from 'ai';

    // Allow streaming responses up to 30 seconds
    export const maxDuration = 30;

    export async function POST(req: Request) {
      // Extract the `messages` from the body of the request
      const { messages } = await req.json();

      // Call the language model
      const result = streamText({
        model: groq('llama-3.1-8b-instant'),
        messages,
      });

      // Respond with the stream
      return result.toDataStreamResponse();
    }
    
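The route handler above streams responses; a minimal client component could consume it with the AI SDK's `useChat` hook. This is a sketch and assumes `@ai-sdk/react` is installed and that the hook defaults to posting to `/api/chat`.

```typescript
'use client';

import { useChat } from '@ai-sdk/react';

// Minimal chat UI wired to the /api/chat route handler.
export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat();

  return (
    <div>
      {messages.map((m) => (
        <div key={m.id}>
          <strong>{m.role}:</strong> {m.content}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} />
      </form>
    </div>
  );
}
```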
