Custom Tools

Vertesia supports custom tools to let you integrate your own logic or connect to external data sources beyond the built-in tools it provides. These tools can be written in any programming language and exposed via a RESTful HTTP interface. Once registered, Vertesia acts as a broker between the LLM and your custom tools.

When the LLM issues a tool_use message, Vertesia identifies whether the requested tool is built-in or custom. If it's a custom tool, Vertesia sends a POST request to your tool server, waits for the response, and then passes the result back to the LLM.

Your tool server must expose an endpoint that handles both GET and POST HTTP methods:

  • GET is used to discover the tools exposed by your server.
  • POST is used to invoke a tool with user-provided input when requested by the LLM.

Authentication

The POST request includes a Vertesia-signed JWT that contains metadata such as the user ID, roles, project, and organization. Your tool server must validate this token using Vertesia's public key, which is available via a JWKS endpoint: {vertesia_server}/api/v1/.well-known/jwks

You can find the {vertesia_server} location in the JWT payload, under the endpoints.studio property.

To find the correct signing key, use the kid (key ID) field from the JWT header to match it with a key in the JWKS.
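As a sketch, the key lookup can be done by decoding the JWT header and matching its kid against the JWKS keys. The JWKS shape below follows the standard JWK format (RFC 7517); the key values in the test are illustrative. In production you would fetch the JWKS from the endpoint above and verify the token's signature with a JOSE library.

```typescript
// Sketch: selecting the signing key from a JWKS document by matching
// the JWT header's `kid`. Key values are illustrative; signature
// verification itself (not shown) should be done with a JOSE library.

interface Jwk {
  kid: string;
  kty: string;
  [member: string]: unknown;
}

interface Jwks {
  keys: Jwk[];
}

/** Decode the base64url-encoded JWT header and return its `kid`. */
function kidFromJwt(token: string): string | undefined {
  const [headerB64] = token.split(".");
  const header = JSON.parse(
    Buffer.from(headerB64, "base64url").toString("utf8"),
  );
  return header.kid;
}

/** Find the JWK whose `kid` matches the token's header, if any. */
function findSigningKey(jwks: Jwks, token: string): Jwk | undefined {
  const kid = kidFromJwt(token);
  return kid ? jwks.keys.find((k) => k.kid === kid) : undefined;
}
```

If no key matches the kid, reject the request rather than falling back to another key.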


API Specification

GET /path/to/tools/endpoint

This endpoint returns the list of tools available on the server. No authorization is required for this request.

The response must be a JSON object describing the tool server and its available tools.

Here's the TypeScript interface for the expected response:

interface GetToolsResponse {
  /**
   * The URL of the tool server (same as the URL where this response is served)
   */
  src: string;
  
  /**
   * A human-readable title for this tool server
   */
  title: string;
  
  /**
   * A short description of the tool server
   */
  description: string;

  /**
   * The list of tools exposed by this server
   */
  tools: {
    /**
     * The name of the tool (used in tool_use messages)
     */
    name: string;

    /**
     * A short description of what the tool does
     */
    description: string;

    /**
     * A JSON Schema describing the expected input for the tool
     */
    input_schema: JSONSchema;
  }[];
}

Where JSONSchema is a standard JSON Schema definition of the tool's input.

Example Response:

{
  "src": "http://localhost:5173/api/test",
  "title": "Development Tools",
  "description": "A collection of test tools for development purposes",
  "tools": [
    {
      "name": "weather",
      "description": "Get the current weather for a given location.",
      "input_schema": {
        "type": "object",
        "properties": {
          "location": {
            "type": "string",
            "description": "The location to get the weather for, e.g., 'New York, NY'."
          }
        },
        "required": [
          "location"
        ]
      }
    }
  ]
}
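As a minimal sketch, the discovery endpoint can be served with Node's built-in http module. The path, port, and tool definition below mirror the example above and are illustrative, not required values:

```typescript
// Sketch of a GET handler serving the discovery document with Node's
// built-in http module. The path (/api/test) and tool list mirror the
// example response and are illustrative.
import http from "node:http";

const discovery = {
  src: "http://localhost:5173/api/test",
  title: "Development Tools",
  description: "A collection of test tools for development purposes",
  tools: [
    {
      name: "weather",
      description: "Get the current weather for a given location.",
      input_schema: {
        type: "object",
        properties: {
          location: {
            type: "string",
            description:
              "The location to get the weather for, e.g., 'New York, NY'.",
          },
        },
        required: ["location"],
      },
    },
  ],
};

const server = http.createServer((req, res) => {
  if (req.method === "GET" && req.url === "/api/test") {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify(discovery));
  } else {
    res.writeHead(404, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ error: "not found" }));
  }
});

// server.listen(5173); // start listening when running the server
```

A real server would also handle POST on the same path, as described below.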

POST /path/to/tools/endpoint

This endpoint is called by Vertesia when the LLM requests the execution of a specific tool. The request includes the tool name, input arguments, and a signed JWT in the Authorization header. Your server must verify the JWT before executing the tool logic.

Request Headers

Authorization: Bearer <JWT>
Content-Type: application/json

Request Body

The request body contains information about the tool being used and optional metadata about the execution context.

interface ToolExecutionRequest {
    /**
     * Contains the name of the tool to execute and the input arguments.
     */
    tool_use: ToolUse;

    /**
     * Optional metadata related to the current execution context.
     */
    metadata?: Record<string, any>;
}

interface ToolUse {
    /**
     * The unique ID of this tool use request (used for traceability).
     */
    id: string;

    /**
     * The name of the tool to execute (must match the name provided in the GET response).
     */
    tool_name: string;

    /**
     * The arguments to pass to the tool (must match the tool's input_schema).
     */
    tool_input: unknown;
}
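To illustrate, a minimal dispatch routine can look up the requested tool by name and run it. The weather handler below is a hypothetical stand-in for real tool logic, and the interfaces repeat the shapes defined above:

```typescript
// Sketch: routing a ToolExecutionRequest to a local handler and
// building the success response. The `weather` handler is hypothetical;
// a real tool would call an actual weather service.

interface ToolUse {
  id: string;
  tool_name: string;
  tool_input: unknown;
}

interface ToolExecutionRequest {
  tool_use: ToolUse;
  metadata?: Record<string, any>;
}

interface ToolExecutionResponse {
  tool_use_id: string;
  content: string;
  files?: string[];
  metadata?: Record<string, any>;
}

type ToolHandler = (input: unknown) => string;

const handlers: Record<string, ToolHandler> = {
  weather: (input) => {
    const { location } = input as { location: string };
    // Hypothetical result; real logic would query a weather API.
    return JSON.stringify({ location, forecast: "sunny" });
  },
};

function executeTool(req: ToolExecutionRequest): ToolExecutionResponse {
  const handler = handlers[req.tool_use.tool_name];
  if (!handler) {
    throw new Error(`Unknown tool: ${req.tool_use.tool_name}`);
  }
  return {
    tool_use_id: req.tool_use.id,
    content: handler(req.tool_use.tool_input),
  };
}
```

Note how the tool_use_id from the request is echoed back in the response for traceability.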

Successful Response:

On success, your tool server must return a JSON response that includes the tool use ID and the result of the tool execution.

interface ToolExecutionResponse {
    /**
     * The ID of the tool use request (for traceability).
     */
    tool_use_id: string;

    /**
     * The tool result as a string (can be a serialized JSON object).
     */
    content: string;

    /**
     * Optional file URLs to attach to the response. Useful for sending images to the LLM.
     */
    files?: string[];

    /**
     * Optional metadata with additional information about the tool execution, such as stats or user messages.
     */
    metadata?: Record<string, any>;
}

Error Response

If an error occurs during execution, your server must return a non-2xx HTTP status code and a JSON body describing the error.

interface ToolExecutionResponseError {
    /**
     * The tool use ID of the request (for traceability).
     */
    tool_use_id: string;

    /**
     * The HTTP status code.
     */
    status: number;

    /**
     * A short error message.
     */
    error: string;

    /**
     * Optional additional details about the error.
     */
    data?: Record<string, any>;
}
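For example, a caught error can be mapped to this shape before sending it with a matching HTTP status. The 500 default below is an assumption, not a required value; use whatever status code fits the failure:

```typescript
// Sketch: converting a caught error into the error body above. The
// default 500 status is illustrative; pick a code that fits the failure
// (e.g., 400 for invalid input).

interface ToolExecutionResponseError {
  tool_use_id: string;
  status: number;
  error: string;
  data?: Record<string, any>;
}

function toErrorResponse(
  toolUseId: string,
  err: unknown,
  status = 500,
): ToolExecutionResponseError {
  return {
    tool_use_id: toolUseId,
    status,
    error: err instanceof Error ? err.message : String(err),
  };
}
```

Remember to also set the same non-2xx status code on the HTTP response itself, not just in the body.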

Response Headers

Your tool server should include standard HTTP headers in all responses. In particular, the Content-Type header should be set to application/json.

JavaScript / TypeScript Support

If you plan to build your tool server using JavaScript or TypeScript, we recommend using the @vertesia/tools-sdk package.
This library implements the entire protocol for you, including:

  • Handling GET and POST endpoints
  • JWT verification and validation against Vertesia's JWKS
  • Input schema validation
  • Tool routing and execution handling

Using the SDK helps you focus on writing tool logic instead of boilerplate.

Registering Custom Tools

To make your custom tools available to the LLM, you must register your tool server with Vertesia by creating an Application.

Applications are the way to extend and customize the Vertesia platform with custom logic and integrations. Once registered, your custom tools will be available just like built-in ones.

For details on how to define and register an application that exposes custom tools, refer to the Applications section.
