Custom Tools
Vertesia supports custom tools to let you integrate your own logic or connect to external data sources beyond the built-in tools it provides. These tools can be written in any programming language and exposed via a RESTful HTTP interface. Once registered, Vertesia acts as a broker between the LLM and your custom tools.
When the LLM issues a tool_use message, Vertesia identifies whether the requested tool is built-in or custom. If it's a custom tool, Vertesia sends a POST request to your tool server, waits for the response, and then passes the result back to the LLM.
Your tool server must expose an endpoint that handles both GET and POST HTTP methods:
- GET is used to discover the tools exposed by your server.
- POST is used to invoke a tool with user-provided input when requested by the LLM.
Authentication
The POST request includes a Vertesia-signed JWT that contains metadata such as the user ID, roles, project, and organization. Your tool server must validate this token using Vertesia's public key, which is available via a JWKS endpoint:
{vertesia_server}/api/v1/.well-known/jwks
You can find the {vertesia_server} location in the endpoints.studio property of the token.
To find the correct signing key, use the kid (key ID) field from the JWT header to match it with a key in the JWKS.
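As a sketch, key selection can be done by decoding the JWT header, reading its kid, and matching it against the keys in the JWKS document. The helper names and JWKS shape below are illustrative; production code should use a vetted JWT library to perform the actual signature verification.

```typescript
// Decode the (base64url-encoded) JWT header and extract its `kid`,
// then find the matching key in the JWKS document.
interface Jwk {
  kid: string;
  [claim: string]: unknown;
}

function getKeyId(jwt: string): string | undefined {
  const [headerB64] = jwt.split(".");
  const headerJson = Buffer.from(headerB64, "base64url").toString("utf8");
  return (JSON.parse(headerJson) as { kid?: string }).kid;
}

function selectSigningKey(jwt: string, jwks: { keys: Jwk[] }): Jwk | undefined {
  const kid = getKeyId(jwt);
  return jwks.keys.find((key) => key.kid === kid);
}
```

Once the key is selected, pass it to your JWT library of choice to verify the token signature and claims.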
API Specification
GET /path/to/tools/endpoint
This endpoint returns the list of tools available on the server. No authorization is required for this request.
The response must be a JSON object describing the tool server and its available tools.
Here's the TypeScript interface for the expected response:
interface GetToolsResponse {
  /**
   * The URL of the tool server (same as the URL where this response is served)
   */
  src: string;
  /**
   * A human-readable title for this tool server
   */
  title: string;
  /**
   * A short description of the tool server
   */
  description: string;
  /**
   * The list of tools exposed by this server
   */
  tools: {
    /**
     * The name of the tool (used in tool_use messages)
     */
    name: string;
    /**
     * A short description of what the tool does
     */
    description: string;
    /**
     * A JSON Schema describing the expected input for the tool
     */
    input_schema: JSONSchema;
  }[];
}
Where JSONSchema is a standard JSON Schema definition of the tool's input.
Example Response:
{
  "src": "http://localhost:5173/api/test",
  "title": "Development Tools",
  "description": "A collection of test tools for development purposes",
  "tools": [
    {
      "name": "weather",
      "description": "Get the current weather for a given location.",
      "input_schema": {
        "type": "object",
        "properties": {
          "location": {
            "type": "string",
            "description": "The location to get the weather for, e.g., 'New York, NY'."
          }
        },
        "required": ["location"]
      }
    }
  ]
}
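A minimal discovery handler can simply build and return this document. The sketch below constructs the example above in code; the server URL and tool definitions are illustrative, and the response would be serialized as JSON by your HTTP framework.

```typescript
// Shape of a single tool entry in the discovery response.
interface ToolDefinition {
  name: string;
  description: string;
  input_schema: Record<string, unknown>;
}

interface GetToolsResponse {
  src: string;
  title: string;
  description: string;
  tools: ToolDefinition[];
}

// Build the discovery document served on GET requests.
function getTools(serverUrl: string): GetToolsResponse {
  return {
    src: serverUrl,
    title: "Development Tools",
    description: "A collection of test tools for development purposes",
    tools: [
      {
        name: "weather",
        description: "Get the current weather for a given location.",
        input_schema: {
          type: "object",
          properties: {
            location: {
              type: "string",
              description: "The location to get the weather for.",
            },
          },
          required: ["location"],
        },
      },
    ],
  };
}
```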
POST /path/to/tools/endpoint
This endpoint is called by Vertesia when the LLM requests the execution of a specific tool. The request includes the tool name, input arguments, and a signed JWT in the Authorization header. Your server must verify the JWT before executing the tool logic.
Request Headers
Authorization: Bearer &lt;signed JWT&gt;
Content-Type: application/json
Request Body
The request body contains information about the tool being used and optional metadata about the execution context.
interface ToolExecutionRequest {
  /**
   * Contains the name of the tool to execute and the input arguments.
   */
  tool_use: ToolUse;
  /**
   * Optional metadata related to the current execution context.
   */
  metadata?: Record<string, any>;
}

interface ToolUse {
  /**
   * The unique ID of this tool use request (used for traceability).
   */
  id: string;
  /**
   * The name of the tool to execute (must match the name provided in the GET response).
   */
  tool_name: string;
  /**
   * The arguments to pass to the tool (must match the tool's input_schema).
   */
  tool_input: unknown;
}
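For example, a POST request body invoking the weather tool from the discovery example might look like this (the ID and metadata values are illustrative):

```json
{
  "tool_use": {
    "id": "toolu_01A2B3C4",
    "tool_name": "weather",
    "tool_input": {
      "location": "New York, NY"
    }
  },
  "metadata": {
    "conversation_id": "run_42"
  }
}
```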
Successful Response
On success, your tool server must return a JSON response that includes the tool use ID and the result of the tool execution.
interface ToolExecutionResponse {
  /**
   * The ID of the tool use request (for traceability).
   */
  tool_use_id: string;
  /**
   * The tool result as a string (can be a serialized JSON object).
   */
  content: string;
  /**
   * Optional file URLs to attach to the response. Useful for sending images to the LLM.
   */
  files?: string[];
  /**
   * Optional metadata with more information about the tool execution, such as stats or user messages.
   */
  metadata?: Record<string, any>;
}
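The core of a POST handler is routing the tool_use to the right implementation and wrapping its result. A minimal sketch, with a stubbed weather handler (a real implementation would call an actual weather API):

```typescript
interface ToolUse {
  id: string;
  tool_name: string;
  tool_input: unknown;
}

interface ToolExecutionRequest {
  tool_use: ToolUse;
  metadata?: Record<string, any>;
}

interface ToolExecutionResponse {
  tool_use_id: string;
  content: string;
}

// Map tool names to their implementations. Each handler returns the
// result as a string, which may be a serialized JSON object.
const handlers: Record<string, (input: any) => string> = {
  // Stubbed result for illustration only.
  weather: (input: { location: string }) =>
    JSON.stringify({ location: input.location, forecast: "sunny" }),
};

function executeTool(request: ToolExecutionRequest): ToolExecutionResponse {
  const handler = handlers[request.tool_use.tool_name];
  if (!handler) {
    throw new Error(`Unknown tool: ${request.tool_use.tool_name}`);
  }
  return {
    tool_use_id: request.tool_use.id,
    content: handler(request.tool_use.tool_input),
  };
}
```

In a real server the thrown error would be translated into the error response described below, with an appropriate non-2xx status code.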
Error Response
If an error occurs during execution, your server must return a non-2xx HTTP status code and a JSON body describing the error.
interface ToolExecutionResponseError {
  /**
   * The tool use ID of the request (for traceability).
   */
  tool_use_id: string;
  /**
   * The HTTP status code.
   */
  status: number;
  /**
   * A short error message.
   */
  error: string;
  /**
   * Optional additional details about the error.
   */
  data?: Record<string, any>;
}
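For instance, a validation failure might produce an HTTP 400 response with a body like the following (the values shown are illustrative):

```json
{
  "tool_use_id": "toolu_01A2B3C4",
  "status": 400,
  "error": "Invalid input: 'location' is required",
  "data": {
    "missing_fields": ["location"]
  }
}
```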
Response Headers
Your tool server should include standard HTTP headers in all responses. The Content-Type header should be set to application/json.
Streaming Large Tool Outputs to Artifacts
Some tool executions can produce large textual outputs (for example, long reports, detailed logs, or large JSON payloads). To keep responses manageable for the model while still preserving full results, tools can accept an optional output_artifact parameter in their input schema.
When present, Vertesia will:
- Store the full tool output as an artifact in the agent workspace (for example, under a files/ path).
- Return a compact response that includes a short preview and the artifact_path pointing to the full content.
The agent can then:
- Use the artifact tools (read_artifact, list_artifacts, grep_artifacts, patch_artifact) to inspect or refine the result.
- Use execute_shell to run follow-up processing over the artifact inside the Daytona sandbox.
This pattern is particularly useful for remote tools and skills that generate large analysis reports or intermediate datasets that would otherwise exceed the model context.
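As a sketch, a tool opting into this pattern could declare the parameter in its input_schema like so (the exact property shape is an assumption; the property name output_artifact is the one Vertesia recognizes):

```json
{
  "type": "object",
  "properties": {
    "query": {
      "type": "string",
      "description": "The analysis to run."
    },
    "output_artifact": {
      "type": "string",
      "description": "Optional artifact path where the full output should be stored."
    }
  },
  "required": ["query"]
}
```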
Building a Custom Tool Server
Quick Start with TypeScript
The fastest way to build a custom tool server is using the @vertesia/create-plugin scaffold:
npm init @vertesia/plugin@latest
The generated project is a unified plugin with both a Hono tool server and a React UI, including example tools and skills. The comprehensive README covers creating resources, local development, building, and deployment.
Manual Setup
If you prefer to build from scratch, use the @vertesia/tools-sdk package directly. The SDK implements the entire protocol for you, including:
- Handling GET and POST endpoints
- JWT verification and validation against Vertesia's JWKS
- Input schema validation
- Tool routing and execution handling
Using the SDK helps you focus on writing tool logic instead of boilerplate.
Deploying to Vercel
Vercel is the easiest way to deploy your tool server — its generous free tier is more than enough for development and small-scale production. The generated project includes a vercel.json and api/index.js serverless adapter. Deploy with:
npm i -g vercel
vercel --prod
Static files (UI builds) are served from dist/, and API requests are routed through the serverless function. After deploying, note the production URL (e.g. https://my-tools.vercel.app).
Important: Disable deployment protection in Vercel project settings so Vertesia can reach your tool server endpoint.
Registering Custom Tools
To make your custom tools available to Vertesia Agent Runner, register your tool server as a Vertesia Application. Create a manifest.json with your deployed endpoint:
{
"name": "my-tools",
"title": "My Tools",
"publisher": "your-org",
"visibility": "private",
"status": "beta",
"endpoint": "https://my-tools.vercel.app/api"
}
Then use the CLI to create and install it in your project:
vertesia apps create --install -f manifest.json
The endpoint URL points to your tool server. Vertesia will call GET on it to discover available tools and POST to execute them. Once registered, your custom tools are available to agents just like built-in ones.
For more details on applications, refer to the Applications section.
