Built-in Tools
This page documents all the built-in tools available in Vertesia Studio for use with Agents. For reusable, code-centric capabilities that are packaged and exposed as tools (such as spreadsheet analysis or ETL pipelines), see also Agent Skills.
Core Tools
Fundamental tools for reasoning, planning, and task organization. These are the essential building blocks for complex agent workflows.
Think Tool
Name: think
A tool for deep thinking and analysis of complex problems step by step. Useful for brainstorming and planning.
Plan Tool
Name: plan
Creates structured, executable plans with tracked progress.
Update Plan Tool
Name: update_plan
Updates multiple tasks in your active plan simultaneously with visual progress tracking. Works in conjunction with the plan tool to maintain live status updates.
Parallel Execution Tools
Advanced tools for decomposing complex problems into parallel workstreams. These tools enable sophisticated multi-threaded problem solving by creating dedicated sub-agents.
Execute Parallel Work Streams Tool
Name: execute_parallel_work_streams
Decomposes a complex problem into independent tasks and executes them concurrently as parallel work streams. Each task runs as a dedicated sub-agent with access to the specified tools, dramatically speeding up multi-part tasks whose components can be solved independently. Results can be aggregated automatically or returned individually.
Document Management Tools
Tools for managing documents in the Vertesia knowledge base. These tools handle CRUD operations for documents with full metadata support.
Query Documents Tool
Name: query_documents
A powerful tool for searching and analyzing documents with two distinct modes: Search Mode for high-level queries and DSL Mode for direct Elasticsearch access.
Search Mode (Recommended)
Use Search Mode for most document searches. It provides a high-level API with automatic processing:
Query Parameters:
| Parameter | Type | Description |
|---|---|---|
| query.name | string | Partial name match (autocomplete-style) |
| query.type | string | Filter by document type ID |
| query.status | string | Filter by document status |
| query.full_text | string | Full-text search with stemming and fuzzy matching |
| query.vector | object | Vector similarity search (see below) |
| query.weights | object | Weights for hybrid search (e.g., { full_text: 2, vector: 3 }) |
| query.score_aggregation | string | Score aggregation method: rrf, rsf, or smart |
| query.dynamic_scaling | string | Dynamic weight scaling: on or off |
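The rrf option refers to Reciprocal Rank Fusion. As an illustration of how rank-based fusion combines a full-text and a vector result list (a minimal sketch of the general technique, not Vertesia's internal implementation):

```python
# Minimal Reciprocal Rank Fusion (RRF) sketch -- illustrative only.
# Shows how rank-based fusion merges two ranked result lists; it is not
# Vertesia's actual scoring code.

def rrf_fuse(rankings, k=60):
    """Combine ranked lists of document IDs into one RRF-scored order.

    Each document scores sum(1 / (k + rank)) over the lists it appears in,
    where rank is 1-based. k=60 is the constant from the original RRF paper.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

full_text_hits = ["doc_a", "doc_b", "doc_c"]   # ranked by full-text relevance
vector_hits = ["doc_b", "doc_d", "doc_a"]      # ranked by vector similarity

# doc_b appears high in both lists, so it wins the fused ranking.
print(rrf_fuse([full_text_hits, vector_hits]))
```

Documents appearing in both lists accumulate score from each, which is why hybrid search can surface results that neither mode ranks first on its own.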
Vector Search Options:
| Parameter | Type | Description |
|---|---|---|
| query.vector.text | string | Text to embed and search semantically |
| query.vector.objectId | string | Reuse embeddings from an existing object |
| query.vector.image | string | Image URL or base64 for vision embedding |
| query.vector.config | object | Embedding types to use (text, properties, vision, code) |
Example - Hybrid Search:
{
"query": {
"full_text": "quarterly financial report",
"vector": { "text": "company earnings analysis" },
"weights": { "full_text": 2, "vector": 3 },
"score_aggregation": "smart"
},
"limit": 20
}
DSL Mode (Power Users)
Use DSL Mode for direct Elasticsearch Query DSL access. Ideal for analytics, complex aggregations, and full control:
DSL Parameters:
| Parameter | Type | Description |
|---|---|---|
| dsl.query | object | Elasticsearch query clause (e.g., match_all, term, bool) |
| dsl.aggs | object | Aggregations for analytics (e.g., terms, date_histogram) |
| dsl.size | number | Results to return (0-10,000; use 0 for aggregations-only) |
| dsl.from | number | Pagination offset (0-100,000) |
| dsl.sort | array | Sort order (e.g., [{ "created_at": "desc" }]) |
Example - Aggregation:
{
"dsl": {
"aggs": {
"by_status": { "terms": { "field": "status" } },
"by_type": { "terms": { "field": "type.name" } }
},
"size": 0
}
}
Shared Options (Both Modes)
| Parameter | Type | Description |
|---|---|---|
| limit | number | Maximum results (default: 100) |
| offset | number | Skip n documents for pagination |
| format | string | Output format: json, csv, or table |
| count_only | boolean | Return only document count |
| all_revisions | boolean | Include all revisions, not just latest |
| collection_id | string | Search within specific collection |
| analyze | boolean | Run LLM analysis on results |
| analyzer_prompt | string | Custom instructions for LLM analysis |
| facets | array | Compute aggregated counts (e.g., [{ name: "types", field: "type.name" }]) |
| output_artifact | object | Stream large results to artifact file |
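As an illustration of how the shared options combine with Search Mode, here is a sketch of a payload (the field values are hypothetical placeholders):

```json
{
  "query": {
    "type": "contract",
    "full_text": "termination clause"
  },
  "limit": 50,
  "format": "table",
  "facets": [{ "name": "types", "field": "type.name" }],
  "analyze": true,
  "analyzer_prompt": "Summarize the most common termination terms."
}
```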
For more details on search configuration, see Search Configuration.
Fetch Document Tool
Name: fetch_document
Retrieves a specific document by its identifier. Supports multiple modes (full document, properties only, content, sections, instrumented views, or AI-powered analysis) and can optionally stream large results to a workspace artifact for downstream processing with other tools.
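A sketch of a possible invocation combining a retrieval mode with artifact streaming (the parameter names below are illustrative assumptions, not the exact schema):

```json
{
  "document_id": "doc_12345",
  "mode": "properties",
  "output_artifact": { "path": "files/doc_props.json" }
}
```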
Create Document Tool
Name: create_document
Creates new documents in the system with specified content, metadata, and properties.
Update Document Tool
Name: update_document
Updates existing documents with new content or properties.
Create Content Object Tool
Name: create_content_object
Creates persistent content objects from external locations such as HTTPS URLs or cloud storage (for example, https://…, s3://…, gs://…). You can attach custom metadata, tags, and an optional collection so that imported files become searchable and available for later analysis.
Spreadsheet Workflows with Skills
Spreadsheet creation and analysis are implemented using a combination of document tools, skills, artifacts, and the Daytona sandbox rather than dedicated spreadsheet-specific built-ins:
- Use query_documents and fetch_document to locate and access spreadsheet files stored in the knowledge base.
- Use skills (for example, data-analysis skills) to generate or transform spreadsheets and to declare any required packages.
- Use write_artifact to create helper scripts and data files, and execute_shell to run those scripts inside the sandbox, reading from /home/daytona/files and writing results to /home/daytona/out.
- Use read_artifact, list_artifacts, and related tools to inspect outputs, and create_document or create_content_object to persist final results.
This pattern replaces the legacy spreadsheet tools and gives agents more flexibility and control over how spreadsheet data is processed.
Type Management Tools
Tools for managing object type definitions and schemas. These tools control the structure and validation rules for different types of objects in the system.
Get Object Type Tool
Name: get_object_type
Retrieves details about specific object type definitions.
Create or Update Type Tool
Name: create_or_update_object_type
Creates new or updates existing object type definitions.
Collection Management Tools
Tools for organizing and grouping related documents into collections. Collections provide hierarchical organization and bulk operations on document sets.
Create Collection Tool
Name: create_collection
Creates a new collection for organizing related documents. Collections act as containers that group documents together for easier management and access.
Update Collection Tool
Name: update_collection
Modifies an existing collection's properties, such as name, description, or schema definition. This tool updates collection metadata without affecting the documents contained within it.
Add to Collection Tool
Name: add_to_collection
Places one or more existing documents into a collection for organization and grouping. This tool establishes relationships between documents and collections, without modifying the documents themselves.
Remove from Collection Tool
Name: remove_from_collection
Removes documents from a collection while preserving the documents themselves. This tool only breaks the association between documents and a collection; it does not delete the documents from the system.
Get Collection Tool
Name: get_collection
Accesses detailed information about an existing collection, including its name, description, schema, and member documents. This tool retrieves the full definition of a collection along with metadata about contained documents.
Search Collections Tool
Name: search_collections
Finds collections by searching for partial matches in collection names. This tool searches through all existing collections and returns those whose names contain the specified search term using case-insensitive partial matching.
Temporary Artifact Tools
Tools for managing temporary artifacts in the agent workspace. Artifacts are per-run files (scripts, intermediate data, and outputs) that are automatically deleted when the workflow completes. Use these tools together with execute_shell for robust code and data workflows.
Write Artifact Tool
Name: write_artifact
Writes a temporary file into the agent workspace. Use type: "script" for code (synced to /home/daytona/scripts/) or type: "file" for data (synced to /home/daytona/files/).
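A hedged example payload for writing a script artifact (the path and content field names are assumptions based on the description above; only the type values are documented):

```json
{
  "type": "script",
  "path": "scripts/summarize.py",
  "content": "import json\nprint('hello from the sandbox')"
}
```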
Read Artifact Tool
Name: read_artifact
Reads the content of a temporary artifact, with optional line ranges and line numbers for precise inspection.
List Artifacts Tool
Name: list_artifacts
Lists available artifacts, optionally filtered by a path prefix such as scripts/, files/, or out/.
Grep Artifacts Tool
Name: grep_artifacts
Searches for a regular-expression pattern across artifacts, useful for finding errors or specific content in generated files.
Patch Artifact Tool
Name: patch_artifact
Applies literal find-and-replace edits inside an artifact, typically after inspecting it with read_artifact or grep_artifacts.
View Image Tool
Name: view_image
Exposes an image from an artifact (for example, out/plot.png) or a stored Vertesia document as an image attachment that the model can see and combine with analyze_image.
Web and External Tools
Tools for interacting with external services and executing custom code. These tools extend agent capabilities beyond the core platform functionality.
Web Search Tool
Name: web_search
Searches the web for information using specified queries and options.
This tool requires a Serper API key. Go to Settings in Studio to configure it.
Execute Shell Tool
Name: execute_shell
Executes shell commands inside a managed Daytona sandbox.
The sandbox is created on first use for a workflow run and reused across calls, preserving installed packages and files until the workflow completes.
Use this tool to:
- Run Python or other language scripts stored under /home/daytona/scripts (for example, data analysis with pandas).
- Manipulate files under /home/daytona/files, /home/daytona/documents, and /home/daytona/out.
- Install additional packages needed by skills using the tool input (for example, extra Python or system packages).
Artifacts created with the temporary artifact tools are automatically synced into the sandbox on each call:
- scripts/* → /home/daytona/scripts/
- files/* → /home/daytona/files/
- skills/* → /home/daytona/skills/
- out/* ↔ /home/daytona/out/ for derived outputs that should be reused later.
You can also use the documents parameter to download Vertesia documents into /home/daytona/documents/ (as original files or extracted text) before running shell commands that analyze them.
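The scripts/files/out layout lends itself to a simple pattern: a script reads inputs from the files directory and writes derived results to the out directory, where they can be picked up as artifacts. The sketch below mirrors that pattern with local stand-in directories (in the sandbox these would be /home/daytona/files and /home/daytona/out):

```python
# Sketch of the files-in / out-out pattern used inside the Daytona sandbox.
# Local stand-in directories are used so the example is self-contained;
# in the sandbox they would be /home/daytona/files and /home/daytona/out.
import csv
from pathlib import Path

FILES = Path("files")   # sandbox: /home/daytona/files (synced inputs)
OUT = Path("out")       # sandbox: /home/daytona/out (derived outputs)
FILES.mkdir(exist_ok=True)
OUT.mkdir(exist_ok=True)

# Create a tiny sample input so the sketch runs on its own.
(FILES / "sales.csv").write_text("region,amount\nEU,100\nUS,250\n")

def summarize_csv(name: str) -> Path:
    """Read FILES/name, count rows and total the amount column,
    then write a one-line summary into OUT."""
    with open(FILES / name, newline="") as f:
        rows = list(csv.DictReader(f))
    total = sum(float(r["amount"]) for r in rows)
    summary = OUT / f"{name}.summary.txt"
    summary.write_text(f"rows={len(rows)} total={total:g}\n")
    return summary

print(summarize_csv("sales.csv").read_text())  # rows=2 total=350
```

An agent would typically write such a script with write_artifact (so it syncs to /home/daytona/scripts/) and run it with execute_shell.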
Ask User Tool
Name: ask_user
Requests input from users during workflow execution.
Analyze Image Tool
Name: analyze_image
Executes ImageMagick commands on images and PDFs, storing results to cloud storage. Supports standard image formats and PDF documents with command chaining capabilities.
Combine this with the view_image tool to first surface images from artifacts or stored documents into the conversation so the model can inspect and transform them.
Conversation Tools
Tools for searching and analyzing past conversations and agent runs.
Search Conversations Tool
Name: search_conversations
Searches workflow runs (conversations) by status, time range, initiator, or interaction name, with pagination support and an optional output_artifact setting to stream full result sets into an artifact while returning a small preview.
Analyze Conversation Tool
Name: analyze_conversation
Loads the conversation from another workflow run and analyzes it using an analyzer_prompt, optionally constrained by a result_schema. Useful for reviewing agent behavior, extracting key outcomes, or monitoring progress of running workflows.
Communication Tools
Tools for sending notifications and messages to external recipients.
Send Email Tool
Name: send_email
Sends emails using the Resend email service. Accepts markdown content which is automatically converted to HTML with a plain text fallback. Supports email conversations where recipients can reply and have their responses routed back to the workflow.
App Settings Configuration:
This tool requires the following settings to be configured in the app installation:
| Setting | Required | Description |
|---|---|---|
| from | Yes | The sender email address (e.g., Your Company <noreply@example.com>) |
| resend_api_key | Yes | Your Resend API key |
| allowed_domains | No | Array of allowed recipient domains for whitelisting (e.g., ["company.com", "partner.org"]) |
| base_url | No | Base URL for resolving internal links (defaults to https://cloud.vertesia.io) |
| inbound_domain | No | Domain for receiving email replies (e.g., vertesia.io). Required for email reply routing. |
Parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
| to | string[] | Yes | Array of recipient email addresses |
| subject | string | Yes | The email subject line |
| markdown | string | Yes | The email body in markdown format |
| cc | string[] | No | Array of CC recipient addresses |
| bcc | string[] | No | Array of BCC recipient addresses |
| artifact_run_id | string | No | Workflow run ID for resolving artifact paths and routing replies |
| enable_reply | boolean | No | Enable email reply routing. Defaults to true when artifact_run_id is provided. |
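Putting the parameters together, a sketch of an example payload (addresses, subject, and run ID are placeholders):

```json
{
  "to": ["analyst@company.com"],
  "cc": ["manager@company.com"],
  "subject": "Weekly report ready",
  "markdown": "# Weekly Report\n\nDownload: [report](artifact:out/report.xlsx)",
  "artifact_run_id": "wf_run_123",
  "enable_reply": true
}
```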
Email Reply Routing:
When enable_reply is true and inbound_domain is configured, the tool automatically generates a reply-to address in the format r+{routeKey}@{inbound_domain}. When a recipient replies to the email:
- The reply is received by Resend at the inbound domain
- Resend sends a webhook to the Vertesia API
- The webhook extracts the run ID and sends a userInput signal to the workflow
- The workflow receives the email content as user input and can respond
This enables email-based conversations with agents, where users can communicate via email instead of the UI.
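The documented address shape r+{routeKey}@{inbound_domain} can be sketched with a small helper (how Vertesia actually encodes the route key is not documented here, so an opaque string is assumed):

```python
# Build and parse reply-to addresses of the documented shape
# r+{routeKey}@{inbound_domain}. The route-key encoding is an
# assumption; only the address shape comes from the documentation.

def build_reply_to(route_key: str, inbound_domain: str) -> str:
    return f"r+{route_key}@{inbound_domain}"

def parse_reply_to(address: str):
    """Return (route_key, domain) for an address matching r+{key}@{domain}."""
    local, _, domain = address.partition("@")
    if not local.startswith("r+") or not domain:
        raise ValueError(f"not a reply-to address: {address}")
    return local[2:], domain

addr = build_reply_to("wf_run_123", "vertesia.io")
print(addr)                  # r+wf_run_123@vertesia.io
print(parse_reply_to(addr))  # ('wf_run_123', 'vertesia.io')
```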
Custom URL Schemes:
The tool automatically resolves custom URL schemes in markdown content:
- artifact:path/to/file - Resolves to a signed download URL for workflow artifacts
- image:path/to/image - Resolves to a signed download URL for images
- store:objectId - Links to a store object in the Vertesia UI
- document://objectId - Links to a document in the Vertesia UI
- collection:collectionId - Links to a collection in the Vertesia UI
Domain Whitelisting:
When allowed_domains is configured, emails can only be sent to addresses matching those domains. This applies to to, cc, and bcc recipients. Leave empty or omit to allow any domain.
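The whitelist behavior described above can be sketched as follows (illustrative only, not the service's actual code):

```python
# Illustrative domain-whitelist check for to/cc/bcc recipients.
# Mirrors the documented behavior: an empty or missing allowed_domains
# list permits any recipient domain.

def recipients_allowed(recipients, allowed_domains=None):
    """Return True if every recipient's domain is in allowed_domains."""
    if not allowed_domains:          # empty or None: allow any domain
        return True
    allowed = {d.lower() for d in allowed_domains}
    return all(
        addr.rsplit("@", 1)[-1].lower() in allowed for addr in recipients
    )

print(recipients_allowed(["a@company.com"], ["company.com", "partner.org"]))  # True
print(recipients_allowed(["a@other.com"], ["company.com"]))                   # False
print(recipients_allowed(["a@anywhere.net"]))                                 # True
```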
Data Platform Tools
Tools for managing data stores, tables, queries, and dashboards. These tools enable agents to work with structured data using DuckDB databases.
For comprehensive documentation, see the Data Platform Tools Reference.
Key Tools
- data_get_schema - Get the schema of a data store
- data_list_tables - List tables with metadata
- data_create_database - Create a new DuckDB database
- data_create_tables - Create tables atomically
- data_import - Import data from files or inline data
- data_preview_dashboard - Preview Vega-Lite dashboards
- data_create_dashboard - Create saved dashboards
- data_render_dashboard - Render dashboards to PNG
Automation Tools
Tools for automating and scheduling recurring tasks.
Schedule Workflow Tool
Name: schedule_workflow
Creates recurring schedules for agent/workflow execution using cron expressions. This tool is not enabled by default and must be explicitly added to an agent's tools.
Parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
| name | string | Yes | Name of the schedule for identification |
| description | string | No | Description of what this scheduled workflow does |
| interaction | string | Yes | The interaction/agent ID to execute on schedule |
| cron_expression | string | Yes | Cron expression defining when to run |
| timezone | string | No | Timezone for the cron expression (defaults to UTC) |
| vars | object | No | Variables to pass to the scheduled workflow |
| enabled | boolean | No | Whether to enable immediately (defaults to true) |
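A sketch of an example payload using these parameters (the interaction ID, variables, and timezone are placeholders):

```json
{
  "name": "weekly-digest",
  "description": "Sends the weekly activity digest every Monday morning",
  "interaction": "digest_agent",
  "cron_expression": "0 9 * * MON",
  "timezone": "Europe/Paris",
  "vars": { "audience": "sales" },
  "enabled": true
}
```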
Cron Expression Format:
The cron expression uses 5 fields: minute hour day month weekday
| Field | Values | Description |
|---|---|---|
| minute | 0-59 | Minute of the hour |
| hour | 0-23 | Hour of the day |
| day | 1-31 | Day of the month |
| month | 1-12 or JAN-DEC | Month of the year |
| weekday | 0-6 or SUN-SAT | Day of the week (0=Sunday) |
Special characters:
- * - any value
- , - list separator (e.g., 1,3,5)
- - - range (e.g., 1-5)
- / - step (e.g., */15 for every 15)
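The 5-field format can be checked with a small validator sketch (illustrative only; a real scheduler accepts more syntax, such as JAN-DEC and SUN-SAT name aliases, which are omitted here for brevity):

```python
import re

# Minimal validator for the documented 5-field cron shape:
# minute hour day month weekday. Supports *, lists, ranges, and steps
# over numeric values; name aliases (MON, JAN, ...) are not handled.
FIELD_RANGES = [(0, 59), (0, 23), (1, 31), (1, 12), (0, 6)]

def _atom_ok(atom, lo, hi):
    """Accept '*', a single number, or a numeric range within [lo, hi]."""
    if atom == "*":
        return True
    m = re.fullmatch(r"(\d+)(?:-(\d+))?", atom)
    if not m:
        return False
    a = int(m.group(1))
    b = int(m.group(2)) if m.group(2) else a
    return lo <= a <= b <= hi

def is_valid_cron(expr):
    fields = expr.split()
    if len(fields) != 5:
        return False
    for field, (lo, hi) in zip(fields, FIELD_RANGES):
        base, _, step = field.partition("/")   # handle */15 style steps
        if step and not step.isdigit():
            return False
        if not all(_atom_ok(a, lo, hi) for a in base.split(",")):
            return False
    return True

print(is_valid_cron("0 9 * * 1"))     # True  (every Monday at 9:00)
print(is_valid_cron("*/15 * * * *"))  # True  (every 15 minutes)
print(is_valid_cron("0 25 * * *"))    # False (hour 25 out of range)
```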
Common Cron Examples:
| Expression | Description |
|---|---|
| 0 9 * * * | Every day at 9:00 AM |
| 0 9 * * MON | Every Monday at 9:00 AM |
| 0 9 * * MON-FRI | Weekdays at 9:00 AM |
| 0 0 1 * * | First day of each month at midnight |
| 0 */2 * * * | Every 2 hours |
| 30 8 * * * | Every day at 8:30 AM |
Important Notes:
- Scheduled workflows run non-interactively but can still use ask_user for async input via email, webhooks, or headless UX listening to the stream
- Use meaningful names for easy identification in the UI
- Consider timezone when scheduling for specific business hours
- This tool must be explicitly added to an agent's tool list
Index Configuration Tools
Tools for querying and managing search index configuration. These tools allow agents to inspect and update embedding settings and trigger reindexing operations.
Get Index Configuration Tool
Name: get_index_configuration
Retrieves the current index status and configuration for the project's search infrastructure.
Returns:
- Index existence and health status
- Document count and storage size
- Embedding dimensions for text, image, and properties
- Field mappings and index version
Example Response:
{
"exists": true,
"healthy": true,
"documentCount": 15234,
"sizeInBytes": 52428800,
"dimensions": {
"text": 1536,
"image": 1536,
"properties": 1536
},
"version": 3
}
Update Index Configuration Tool
Name: update_index_configuration
Updates index configuration settings, including embedding dimensions. Can trigger reindexing when configuration changes require it.
Parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
embedding_dimensions | object | No | New dimensions for embedding types |
embedding_dimensions.text | number | No | Dimensions for text embeddings |
embedding_dimensions.image | number | No | Dimensions for image embeddings |
embedding_dimensions.properties | number | No | Dimensions for properties embeddings |
force_reindex | boolean | No | Trigger a full reindex of all documents |
user_confirmed | boolean | Yes | Must be true - requires confirmation via ask_user first |
Important: This tool requires user confirmation before making changes. Always use ask_user to confirm the operation before calling this tool with user_confirmed: true.
Example:
{
"embedding_dimensions": {
"text": 3072
},
"force_reindex": true,
"user_confirmed": true
}
For more details on index configuration, see Search Configuration.
