Bring Your Own Agent (Vercel AI SDK, OpenAI, Anthropic)
Wire the docx-editor-agents tool catalog to Vercel AI SDK, OpenAI function calling, Anthropic tool use, or any framework that takes OpenAI-shape tools.
The toolkit is provider-agnostic. A first-party adapter ships for the Vercel AI SDK, and any framework that consumes OpenAI-shape tools (OpenAI, Anthropic, LangChain, custom loops) wires up against the same tool catalog. The editor bridge is decoupled from the model and the agent loop.
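"OpenAI-shape tools" here means the function-calling schema OpenAI popularized: a `function` entry whose `parameters` field is JSON Schema. A minimal sketch of that shape in TypeScript; the `add_comment` tool below is illustrative, not an actual entry from the catalog:

```typescript
// The wire shape every adapter ultimately consumes. The example tool is
// hypothetical; the real catalog comes from getToolSchemas().
type OpenAIShapeTool = {
  type: 'function';
  function: {
    name: string;
    description?: string;
    parameters: Record<string, unknown>; // JSON Schema for the tool's arguments
  };
};

const exampleTool: OpenAIShapeTool = {
  type: 'function',
  function: {
    name: 'add_comment',
    description: 'Attach a review comment to a paragraph',
    parameters: {
      type: 'object',
      properties: {
        paragraphIndex: { type: 'number' },
        text: { type: 'string' },
      },
      required: ['paragraphIndex', 'text'],
    },
  },
};
```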
Vercel AI SDK (recommended)
Two adapters ship: @eigenpal/docx-editor-agents/ai-sdk/server provides tools for streamText({ tools }), and @eigenpal/docx-editor-agents/ai-sdk/react bridges useChat's UIMessage[] to the <AgentChatLog> shape.
Server route:

```typescript
import { getAiSdkTools } from '@eigenpal/docx-editor-agents/ai-sdk/server';
import { streamText, stepCountIs, convertToModelMessages } from 'ai';

export async function POST(req: Request) {
  const { messages, context } = await req.json();
  const result = streamText({
    model: 'openai/gpt-5.4-mini',
    system: `You are reviewing a DOCX. ${JSON.stringify(context)}`,
    messages: convertToModelMessages(messages),
    tools: getAiSdkTools(),
    stopWhen: stepCountIs(12),
  });
  return result.toUIMessageStreamResponse();
}
```

Client hook:
```typescript
import { useDocxAgentTools } from '@eigenpal/docx-editor-agents/react';
import { useChat } from '@ai-sdk/react';
import { DefaultChatTransport, lastAssistantMessageIsCompleteWithToolCalls } from 'ai';

// editorRef comes from your editor component
const { executeToolCall, getContext } = useDocxAgentTools({ editorRef, author: 'Assistant' });

const chat = useChat({
  transport: new DefaultChatTransport({
    api: '/api/chat',
    prepareSendMessagesRequest: ({ messages }) => ({
      body: { messages, context: getContext() },
    }),
  }),
  sendAutomaticallyWhen: lastAssistantMessageIsCompleteWithToolCalls,
  onToolCall: ({ toolCall }) => {
    const result = executeToolCall(toolCall.toolName, (toolCall.input ?? {}) as Record<string, unknown>);
    // forward `result` to chatRef.addToolResult; see the live editor page for the full pattern
  },
});
```

Full wiring with <AgentChatLog>, the chatRef + addToolResult pattern, and a working interactive demo are on the Live editor page.
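The forwarding step elided in the comment above can be sketched as a small helper. The { tool, toolCallId, output } fields follow AI SDK 5's addToolResult; the types below are simplified assumptions, not the package's own:

```typescript
// Assumed shapes, trimmed to what the bridge needs.
type ToolCall = { toolName: string; toolCallId: string; input?: unknown };
type ToolResult = { success: boolean; data?: unknown; error?: string };
type ChatLike = {
  addToolResult: (r: { tool: string; toolCallId: string; output: unknown }) => void;
};

// Run the tool against the editor bridge, then hand the result back to the chat
// so the model's next step can see it.
function forwardToolResult(
  chat: ChatLike,
  toolCall: ToolCall,
  executeToolCall: (name: string, args: Record<string, unknown>) => ToolResult,
) {
  const result = executeToolCall(toolCall.toolName, (toolCall.input ?? {}) as Record<string, unknown>);
  chat.addToolResult({
    tool: toolCall.toolName,
    toolCallId: toolCall.toolCallId,
    output: result.success ? result.data : { error: result.error },
  });
}
```

In the snippet above you would call this from onToolCall, reading the chat instance out of a ref since the handler is defined before useChat returns.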
OpenAI direct
The hook returns OpenAI-shape schemas and the executor; you run the tool-call loop yourself. Here is the complete loop, run headless against DocxReviewer so the snippet works anywhere:
```typescript
import OpenAI from 'openai';
import { DocxReviewer, createReviewerBridge } from '@eigenpal/docx-editor-agents';
import { getToolSchemas, executeToolCall } from '@eigenpal/docx-editor-agents/bridge';

const openai = new OpenAI();
// `buffer` holds the bytes of the source .docx
const reviewer = await DocxReviewer.fromBuffer(buffer, 'AI');
const bridge = createReviewerBridge(reviewer);
const tools = getToolSchemas();

const messages: OpenAI.ChatCompletionMessageParam[] = [
  { role: 'system', content: 'Review this contract. Comment on every paragraph that deserves it.' },
  { role: 'user', content: reviewer.getContentAsText() },
];

for (let step = 0; step < 12; step++) {
  const res = await openai.chat.completions.create({
    model: 'gpt-5.4-mini',
    messages,
    tools,
  });
  const msg = res.choices[0].message;
  messages.push(msg);
  if (!msg.tool_calls?.length) break;
  for (const call of msg.tool_calls) {
    const args = JSON.parse(call.function.arguments);
    const result = executeToolCall(call.function.name, args, bridge);
    messages.push({
      role: 'tool',
      tool_call_id: call.id,
      content: result.success ? JSON.stringify(result.data) : `Error: ${result.error}`,
    });
  }
}

const output = await reviewer.toBuffer();
```

The same pattern works against EditorBridge in the browser: swap createReviewerBridge for useDocxAgentTools and run the loop client-side.
Server-side review with the OpenAI SDK, no live editor:

```typescript
// app/api/review/route.ts
import { NextRequest, NextResponse } from 'next/server';
import OpenAI from 'openai';
import { DocxReviewer } from '@eigenpal/docx-editor-agents';

const openai = new OpenAI();

export async function POST(request: NextRequest) {
  const formData = await request.formData();
  const file = formData.get('file') as File;
  if (!file) return NextResponse.json({ error: 'No file' }, { status: 400 });

  const reviewer = await DocxReviewer.fromBuffer(await file.arrayBuffer(), 'AI Reviewer');

  const response = await openai.chat.completions.create({
    model: 'gpt-5.4-mini',
    response_format: { type: 'json_object' },
    messages: [
      {
        role: 'system',
        content: `Review this document. Return JSON:
{
  "comments": [{ "paragraphIndex": <number>, "text": "<feedback>" }],
  "replacements": [{ "paragraphIndex": <number>, "search": "<phrase>", "replaceWith": "<better>" }]
}`,
      },
      { role: 'user', content: reviewer.getContentAsText() },
    ],
  });

  const actions = JSON.parse(response.choices[0]?.message?.content || '{}');
  reviewer.applyReview({ comments: actions.comments, proposals: actions.replacements });

  const output = await reviewer.toBuffer();
  return new NextResponse(output, {
    headers: {
      'Content-Type': 'application/vnd.openxmlformats-officedocument.wordprocessingml.document',
    },
  });
}
```

Anthropic Claude
Claude's tool use accepts a similar tool array. Anthropic expects { name, description, input_schema }; OpenAI expects { type: 'function', function: { name, description, parameters } }. Map getToolSchemas() output accordingly before passing it to client.messages.create({ tools }).
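A minimal mapper for that shape difference, assuming getToolSchemas() returns the OpenAI shape described above (types simplified):

```typescript
// OpenAI-shape input, Anthropic-shape output. Only the nesting and the
// parameters → input_schema rename change; the JSON Schema itself is reused as-is.
type OpenAITool = {
  type: 'function';
  function: { name: string; description?: string; parameters: Record<string, unknown> };
};
type AnthropicTool = {
  name: string;
  description?: string;
  input_schema: Record<string, unknown>;
};

function toAnthropicTools(tools: OpenAITool[]): AnthropicTool[] {
  return tools.map(({ function: fn }) => ({
    name: fn.name,
    description: fn.description,
    input_schema: fn.parameters,
  }));
}
```

With that in place, client.messages.create({ tools: toAnthropicTools(getToolSchemas()), ... }) runs the same catalog against Claude; you still dispatch each tool_use block to executeToolCall yourself.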
Headless review, no tool calling (single-shot JSON):

```typescript
import Anthropic from '@anthropic-ai/sdk';
import { DocxReviewer } from '@eigenpal/docx-editor-agents';

const client = new Anthropic();
const reviewer = await DocxReviewer.fromBuffer(buffer, 'Claude Reviewer');

const response = await client.messages.create({
  model: 'claude-sonnet-4-7',
  max_tokens: 4096,
  messages: [{
    role: 'user',
    content: `Review this document and return JSON with comments and replacements:\n\n${reviewer.getContentAsText()}`,
  }],
});

// content blocks are a union type; pick the text block before parsing
const text = response.content.find((block) => block.type === 'text')?.text ?? '{}';
const actions = JSON.parse(text);
reviewer.applyReview({ comments: actions.comments, proposals: actions.replacements });
```

LangChain, custom loops, anything else
Any agent framework that consumes OpenAI-shape tools (or can adapt them) works: pass getToolSchemas() to your provider and hand each tool call to executeToolCall. The toolkit owns the editor bridge; you own the orchestration.
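One framework-agnostic sketch of that adaptation: reduce the catalog to a name → handler map and let whatever loop you run dispatch against it. Types are simplified assumptions; executeToolCall's real signature also takes the bridge, which the executor closure below would capture:

```typescript
// Assumed, simplified shapes for the catalog and executor.
type ToolSchema = {
  type: 'function';
  function: { name: string; description?: string; parameters: Record<string, unknown> };
};
type Executor = (name: string, args: Record<string, unknown>) => unknown;

// Build a dispatch table: framework-agnostic glue between the catalog
// and whichever agent loop produces tool calls.
function buildToolMap(schemas: ToolSchema[], execute: Executor) {
  const handlers = new Map<string, (args: Record<string, unknown>) => unknown>();
  for (const schema of schemas) {
    const { name } = schema.function;
    handlers.set(name, (args) => execute(name, args));
  }
  return handlers;
}
```

Wired to the real exports this would be buildToolMap(getToolSchemas(), (name, args) => executeToolCall(name, args, bridge)); each framework then only needs to look up handlers.get(call.name) when the model emits a tool call.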