Content operations at most agencies follow the same pattern: a human does research, a human writes a brief, a human writes the draft, another human edits it, and then it sits in a queue waiting to be published. The whole thing takes days and costs real money per piece.
The pipeline I'm going to show you takes a topic as input and produces a research-backed, structured draft ready for human review in about 4 minutes. It runs on n8n and the Claude API, requires no Python and no complex infrastructure, and can be built in about 30 minutes if you already have n8n running.
This is exactly what I built for our internal content operation and for several agency clients. Here's how it works.
What the Pipeline Does
Before touching a single node, it helps to be clear on what you're building and where the human stays in the loop.
Input: A topic or title string (can come from a manual trigger, a spreadsheet, a Slack message, a Monday.com item — wherever your content queue lives)
Steps:
- Research the topic using Brave Search API — pulls current articles, competitor content, and data points
- Feed research into Claude to generate a content brief (audience, angle, key sections, differentiators)
- Use the brief to generate a full structured draft
- Save the draft to Google Docs or a Notion page for human review
Output: A researched, structured ~1500-word draft ready for an editor to review, fact-check, and refine
Where humans stay in the loop: Review and editing before publication. The pipeline does the work nobody likes — research aggregation, structure, first draft. It doesn't replace your editor's judgment or your brand voice.
Prerequisites
- n8n (self-hosted or cloud — I'll use n8n.io Cloud for this walkthrough, but self-hosted works identically)
- Anthropic API key (get one at console.anthropic.com)
- Brave Search API key (free tier gives you 2,000 queries/month — enough for testing, cheap to scale)
- A destination for the output — I'll use Google Docs but you can swap this for Notion, Airtable, a webhook, or just email
Step 1: Set Up Your Trigger
Create a new workflow in n8n. Start with a Manual Trigger for now — you can replace it with a Schedule Trigger, a Webhook, or a Google Sheets Trigger once the pipeline is working.
Add a Set node after the trigger. This is where you define your input variables:
topic: "AI Agent Cost Optimization for Agencies"
target_word_count: 1500
target_audience: "Agency owners running AI workflows"
content_type: "how-to guide"
Having these as explicit variables makes the pipeline reusable — you change the Set node for each piece rather than rewiring the entire workflow.
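For reference, here's roughly what the item leaving the Set node looks like (the field names match the variables above; downstream nodes read these off each item):

```json
{
  "topic": "AI Agent Cost Optimization for Agencies",
  "target_word_count": 1500,
  "target_audience": "Agency owners running AI workflows",
  "content_type": "how-to guide"
}
```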
Step 2: Research With Brave Search
Add an HTTP Request node. Configure it as follows:
- Method: GET
- URL: https://api.search.brave.com/res/v1/web/search
- Authentication: Generic Credential Type → Header Auth
  - Name: X-Subscription-Token
  - Value: your Brave API key
- Query Parameters:
  - q: ={{ $json.topic }} {{ new Date().getFullYear() }} guide
  - count: 10
  - freshness: pm (past month — keeps results current)
This gives you the top 10 current results for your topic. You'll use these as research context in the next step.
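If you want to sanity-check the request outside n8n, here's a hand-assembled sketch of the same call (the braveSearchRequest helper is mine for illustration, not an n8n construct):

```javascript
// Sketch only: assembles the same URL and headers the HTTP Request
// node sends, so you can verify the query parameters by hand.
function braveSearchRequest(topic, apiKey) {
  const url = new URL('https://api.search.brave.com/res/v1/web/search');
  url.searchParams.set('q', `${topic} ${new Date().getFullYear()} guide`);
  url.searchParams.set('count', '10');
  url.searchParams.set('freshness', 'pm'); // past month
  return { url: url.toString(), headers: { 'X-Subscription-Token': apiKey } };
}
```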
Add a second HTTP Request node for a follow-up search targeting data and statistics:
- Query Parameters:
  - q: ={{ $json.topic }} statistics data research 2026
  - count: 5
Run the workflow through this point and inspect the output. You should see a JSON array of search results with title, URL, and description for each result. That's your raw research material.
Step 3: Format the Research Context
Add a Code node to transform the raw search results into a clean research brief that Claude can work with.
// Grab results from both search nodes (the node names must match yours)
const mainResults = $('HTTP Request - Main Search').item.json.web?.results || [];
const dataResults = $('HTTP Request - Data Search').item.json.web?.results || [];

// One bullet per result: title, URL, snippet
const formatResults = (results) => results
  .map(r => `- ${r.title}\n  ${r.url}\n  ${r.description || ''}`)
  .join('\n\n');

const researchContext = `## Current Content on This Topic
${formatResults(mainResults)}
## Data & Statistics Sources
${formatResults(dataResults)}`;

// Carry the input variables forward alongside the formatted research
return {
  topic: $('Set').item.json.topic,
  target_word_count: $('Set').item.json.target_word_count,
  target_audience: $('Set').item.json.target_audience,
  content_type: $('Set').item.json.content_type,
  research: researchContext
};
Step 4: Generate the Content Brief
Add an HTTP Request node to call the Claude API. This is the first Claude call — it generates a brief, not the full article. Briefs are cheaper (shorter output) and give you a checkpoint before generating long-form content.
- Method: POST
- URL: https://api.anthropic.com/v1/messages
- Authentication: Header Auth
  - Name: x-api-key
  - Value: your Anthropic API key
- Headers: anthropic-version: 2023-06-01
- Body (JSON):
{
"model": "claude-sonnet-4-6",
"max_tokens": 1024,
"system": "You are a content strategist for a digital marketing agency. You create clear, practical content briefs that guide writers to produce genuinely useful articles — not SEO fluff.",
"messages": [
{
"role": "user",
"content": "Create a content brief for the following:\n\nTopic: {{ $json.topic }}\nTarget Audience: {{ $json.target_audience }}\nContent Type: {{ $json.content_type }}\nTarget Word Count: {{ $json.target_word_count }}\n\nResearch context:\n{{ $json.research }}\n\nBrief should include: recommended angle/hook, 5-7 main sections with brief descriptions, 2-3 key differentiators from existing content, and the primary takeaway for the reader. Be specific and opinionated — avoid generic advice."
}
]
}
Parse the response with a Code node:
const response = $input.item.json;
const brief = response.content[0].text;
return { brief, ...($('Format Research').item.json) };
Step 5: Write the Draft
Add another HTTP Request node for the main generation call. This one uses Claude with the brief as additional context:
{
"model": "claude-sonnet-4-6",
"max_tokens": 4096,
"system": "You are Carlos Aragon, founder of VIXI Agency — an AI automation specialist with 8 years building production automation systems for marketing agencies. You write in a direct, practitioner voice: specific, honest about tradeoffs, with real examples from actual deployments. Avoid buzzwords, hedging, and generic advice. Write like you're explaining something to a smart colleague.",
"messages": [
{
"role": "user",
"content": "Write a {{ $json.content_type }} on: {{ $json.topic }}\n\nTarget audience: {{ $json.target_audience }}\nTarget length: {{ $json.target_word_count }} words\n\nContent brief to follow:\n{{ $json.brief }}\n\nResearch context (use specific data points and cite sources inline):\n{{ $json.research }}\n\nFormat: Use H2 and H3 headers, include code examples where relevant, be specific and actionable throughout. End with a brief CTA that's relevant but not pushy."
}
]
}
Step 6: Save to Google Docs
Add a Google Docs node (or Notion, or a webhook to your CMS). Configure it to create a new document:
- Operation: Create
- Title: [DRAFT] {{ $('Set').item.json.topic }} — {{ $now.format('YYYY-MM-DD') }}
- Content: ={{ $json.content[0].text }}
Add a final Code node to capture run metadata:
return {
title: $('Set').item.json.topic,
doc_url: `https://docs.google.com/document/d/${$('Google Docs').item.json.documentId}`,
word_count_estimate: Math.round($('HTTP Request - Write Draft').item.json.usage.output_tokens * 0.75), // ~0.75 words per output token
model_used: 'claude-sonnet-4-6',
created_at: new Date().toISOString(),
status: 'draft_ready_for_review'
};
Making It Production-Ready
The basic pipeline above works. For production, add a few more layers:
Error handling: Set each HTTP Request node to continue on error (n8n's per-node error handling; there is no literal Try/Catch node) and branch on the error output. If Brave fails, you can still run without research context. If Claude returns an error, log it and notify via Telegram rather than failing silently.
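In plain JavaScript the degrade-gracefully idea looks like this (a sketch, not n8n code; fetchResearch stands in for the Brave HTTP Request step):

```javascript
// Sketch: if the research call fails, log it and continue with an
// empty result set instead of killing the whole run.
async function researchWithFallback(fetchResearch, topic) {
  try {
    const res = await fetchResearch(topic);
    return res.web?.results ?? [];
  } catch (err) {
    console.error(`Research failed for "${topic}": ${err.message}`);
    return []; // the draft still generates, just without research context
  }
}
```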
Cost tracking: Log usage.input_tokens and usage.output_tokens from each Claude response to a Supabase table. After a week you'll know exactly what each piece costs.
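A Code node before the Supabase insert can shape that row. The per-million-token rates below are placeholder assumptions, so check Anthropic's current pricing before trusting the numbers:

```javascript
// Sketch: turn a Claude response's usage block into a cost row.
// RATES are assumed example values, not authoritative pricing.
const RATES = { inputPerMTok: 3.0, outputPerMTok: 15.0 };

function costRow(usage, step) {
  const cost =
    (usage.input_tokens / 1e6) * RATES.inputPerMTok +
    (usage.output_tokens / 1e6) * RATES.outputPerMTok;
  return {
    step,
    input_tokens: usage.input_tokens,
    output_tokens: usage.output_tokens,
    cost_usd: Number(cost.toFixed(4)),
    logged_at: new Date().toISOString(),
  };
}
```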
Queue management: Replace the manual trigger with a Google Sheets or Airtable trigger. Have a "Content Queue" tab where you drop topics with a status column. The workflow polls for rows with status = "pending", processes them, and sets status = "draft_ready".
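The polling step reduces to a filter over the fetched rows. The column names here are assumptions; match them to your sheet:

```javascript
// Sketch: claim up to `limit` pending rows and mark them in progress
// so a second workflow run doesn't pick up the same topics.
function claimPending(rows, limit = 1) {
  return rows
    .filter((row) => row.status === 'pending')
    .slice(0, limit)
    .map((row) => ({ ...row, status: 'in_progress' }));
}
```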
Review notification: After saving to Google Docs, send a Telegram or Slack message: "New draft ready: [title] — [link]". This closes the loop and makes the pipeline feel like a real production system rather than a script you run manually.
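Building the message itself is a one-liner. This sketch assumes the metadata object from Step 6 (title, doc_url):

```javascript
// Sketch: format the review notification from the run metadata.
function reviewMessage(meta) {
  return `New draft ready: ${meta.title}\n${meta.doc_url}`;
}
```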
What This Actually Costs
At current Claude Sonnet pricing:
- Brief generation: ~$0.003 per piece (800 input tokens, 400 output tokens)
- Draft generation: ~$0.025 per piece (2000 input tokens, 2500 output tokens)
- Brave Search: ~$0.001 per piece on the standard plan
Total: roughly $0.03 per draft. For a blog post that would take a writer 2-3 hours, the math is obvious.
The ceiling on quality is your system prompt and your editor's time on the backend. The floor is better than most first drafts that go unreviewed. For agencies producing content at volume, this changes the economics entirely.
The workflow I described here runs our blog publishing operation. We produce 3-5 posts per week from this pipeline. Each one goes through one round of human editing before publication. The whole process takes about 45 minutes of human time per post, down from 6-8 hours.
Want to see the full n8n workflow JSON you can import directly? Get in touch and I'll send it over — it's the same one we use in production.