Using Email Webhooks for AI and LLM Pipelines

An email address is a natural input interface for AI agents. Users already know how to write an email; you don't need to build a UI. This guide shows how to wire an incoming email straight into an LLM and do something useful with the result.

The pattern

User sends email → SMTP receives it → webhook fires → your server calls LLM → result is stored or acted on

Your server is the integration layer. email-webhook delivers the parsed email as JSON; everything else (the LLM call, prompt construction, result handling) happens in your code.

What the payload gives you

When an email arrives, your endpoint receives a JSON body with these fields:

Field          What to use it for
message        The plain-text body; feed this directly to the LLM
from           The sender's address; use it as the user identity
subject        Often a good prompt prefix or task description
attachments    Array of base64-encoded files, if any

The message field is plain text. For HTML-only emails (no plain-text part), it will contain the raw HTML, so it's worth stripping tags before sending it to an LLM.
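A rough sketch of that stripping step (a crude regex pass; a dedicated HTML-to-text library will handle entities and layout better):

function htmlToPlainText(html) {
  return html
    .replace(/<style[\s\S]*?<\/style>/gi, " ")   // drop embedded CSS
    .replace(/<script[\s\S]*?<\/script>/gi, " ") // drop scripts
    .replace(/<[^>]+>/g, " ")                    // strip remaining tags
    .replace(/&nbsp;/g, " ")
    .replace(/\s+/g, " ")
    .trim();
}

// Rough heuristic: bodies that start with a tag are treated as HTML
const text = message.trim().startsWith("<") ? htmlToPlainText(message) : message;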

Minimal example: email → LLM → stored result

Node.js (Express)

import express from "express";
import OpenAI from "openai";

const app = express();
// Raise the body limit so large base64 attachments aren't rejected
// (Express's default JSON limit is 100 kb)
app.use(express.json({ limit: "50mb" }));

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

app.post("/ai-webhook", async (req, res) => {
  // Authenticate the request first (see the authentication guide)
  if (req.headers["x-api-key"] !== process.env.WEBHOOK_SECRET) {
    return res.sendStatus(401);
  }

  const { from, subject, message } = req.body;

  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      { role: "system", content: "You are a helpful assistant. Reply concisely." },
      { role: "user", content: `Subject: ${subject}\n\n${message}` },
    ],
  });

  const reply = completion.choices[0].message.content;

  // Store or forward the result, e.g. save to a database or call another API
  await saveToDatabase({ from, subject, reply });

  res.sendStatus(200);
});

app.listen(3000);

Python (Flask)

import os
from flask import Flask, request
from anthropic import Anthropic

app = Flask(__name__)
client = Anthropic()

@app.post("/ai-webhook")
def handle_email():
    if request.headers.get("X-Api-Key") != os.environ["WEBHOOK_SECRET"]:
        return "", 401

    data = request.get_json()
    prompt = f"Subject: {data['subject']}\n\n{data['message']}"

    message = client.messages.create(
        model="claude-haiku-4-5-20251001",
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )

    reply = message.content[0].text
    save_to_database(sender=data["from"], reply=reply)
    return "", 200

Return 200 and email-webhook records the delivery as successful. Any non-2xx response is a failure; the delivery will not be retried, so make sure your handler is robust before returning a success status.
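One way to keep a transient LLM failure from dropping an email for good is to persist the raw payload before the fallible work and treat the LLM step as best-effort. A sketch (saveRawEmail and runLLM are hypothetical stand-ins for your own functions):

app.post("/ai-webhook", async (req, res) => {
  if (req.headers["x-api-key"] !== process.env.WEBHOOK_SECRET) {
    return res.sendStatus(401);
  }

  // Persist the raw email before doing anything fallible, so a failed
  // LLM call doesn't lose the message (there is no redelivery).
  await saveRawEmail(req.body); // hypothetical persistence helper

  try {
    const reply = await runLLM(req.body); // hypothetical: wraps the completion call
    await saveToDatabase({ from: req.body.from, reply });
  } catch (err) {
    // Log for a later retry job; we still acknowledge the delivery.
    console.error("LLM step failed, raw email kept for retry:", err);
  }

  res.sendStatus(200);
});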

Restricting who can trigger the pipeline

By default a webhook fires for mail from any sender. For an AI pipeline you usually want to restrict this; otherwise anyone who discovers your address can invoke the LLM on your bill.

Set the From email field on your webhook to an exact address (e.g. trusted@mycompany.com). Mail from any other sender is silently discarded before your endpoint is ever called. See Getting Started for details on the From email field.

For a multi-user setup where you want to allow a known list of addresses, create one webhook per trusted sender, each pointing at the same endpoint URL, rather than trying to filter in application code.

Secure the endpoint

Always authenticate the incoming request with a custom header. Add an X-Api-Key or Authorization header in the webhook dashboard; validate it as the first thing your handler does. Details in the authentication guide.

The X-email-webhook-id header on every request is a UUID unique to that delivery. Store it alongside any records you create so you can detect and ignore duplicates safely.
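A minimal dedup sketch (in-memory for illustration; in production, back it with a unique index in your database so the check survives restarts):

// Express lowercases incoming header names
const seenDeliveries = new Set();

app.post("/ai-webhook", async (req, res) => {
  // ...authenticate first, as above...

  const deliveryId = req.headers["x-email-webhook-id"];
  if (deliveryId && seenDeliveries.has(deliveryId)) {
    return res.sendStatus(200); // duplicate: acknowledge without reprocessing
  }
  if (deliveryId) seenDeliveries.add(deliveryId);

  // ...run the pipeline...
  res.sendStatus(200);
});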

Handling attachments in AI pipelines

If users send files (PDFs, images, CSVs), they arrive in the attachments array as base64-encoded strings. Decode them before passing to a model that accepts file input:

// attachments may be absent if the email carried no files
const { attachments = [] } = req.body;

for (const attachment of attachments) {
  const buffer = Buffer.from(attachment.content, "base64");
  // Pass buffer (or buffer.toString() for text files) to your LLM's file API
}

For large attachments (emails can carry up to ~34 MB), decode to a stream and write to cloud storage rather than holding the whole file in memory before the LLM call.
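A sketch of that streaming decode (the write target here is a local file; swap in your cloud-storage client's upload stream. attachment.filename is an assumption, so check your payload's actual field names):

import { createWriteStream } from "node:fs";
import { Readable } from "node:stream";
import { pipeline } from "node:stream/promises";

// Yield decoded chunks without materializing the whole file as one Buffer.
// chunkSize must be a multiple of 4, since base64 decodes in 4-character
// groups; assumes the base64 string has no embedded whitespace.
function* decodeBase64Chunks(b64, chunkSize = 256 * 1024) {
  for (let i = 0; i < b64.length; i += chunkSize) {
    yield Buffer.from(b64.slice(i, i + chunkSize), "base64");
  }
}

async function persistAttachment(attachment) {
  await pipeline(
    Readable.from(decodeBase64Chunks(attachment.content)),
    createWriteStream(`/tmp/${attachment.filename}`) // hypothetical field name
  );
}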

Keeping latency in check

email-webhook makes a single synchronous HTTP call to your endpoint and waits for a response. LLM calls add latency; a few seconds is normal. That's fine: the webhook timeout is generous enough to accommodate a standard completion call.

If you need to do heavier processing (multiple LLM calls, retrieval, tool use), return 200 immediately and do the work asynchronously in the background:

app.post("/ai-webhook", async (req, res) => {
  // Authenticate...
  res.sendStatus(200); // acknowledge immediately

  // Heavy work happens after the response is sent
  setImmediate(() => runPipeline(req.body));
});
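setImmediate keeps that work in-process, so a crash between the 200 and completion loses the email. If that matters, hand the payload to a durable queue instead. A sketch using BullMQ (assumes a running Redis instance; the queue name and runPipeline are illustrative):

import { Queue, Worker } from "bullmq";

const connection = { host: "localhost", port: 6379 }; // your Redis instance
const emailQueue = new Queue("inbound-email", { connection });

app.post("/ai-webhook", async (req, res) => {
  // ...authenticate...
  await emailQueue.add("process", req.body); // durable: survives a crash
  res.sendStatus(200);
});

// A separate worker (same process or another service) runs the pipeline
new Worker("inbound-email", async (job) => runPipeline(job.data), { connection });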

Next steps

  • Lock down the sender: set the From email filter so only trusted addresses trigger the pipeline.
  • Add a custom header: add an Authorization or X-Api-Key header in the webhook dashboard and validate it in your handler.
  • Enable message logs: turn on Message Logs on your webhook to see status codes, durations, and delivery metadata while you're building.