Built-in Tools

This page documents all the built-in tools available in Vertesia Studio for use with Agents. For reusable, code-centric capabilities that are packaged and exposed as tools (such as spreadsheet analysis or ETL pipelines), see also Agent Skills.

Core Tools

Fundamental tools for reasoning, planning, and task organization. These are the essential building blocks for complex agent workflows.

Think Tool

Name: think

A tool for deep thinking and analysis of complex problems step by step. Useful for brainstorming and planning.

Plan Tool

Name: plan

Creates structured, executable plans with tracked progress.

Update Plan Tool

Name: update_plan

Updates multiple tasks in your active plan simultaneously with visual progress tracking. Works in conjunction with the plan tool to maintain live status updates.
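As an illustrative sketch of how plan updates might be expressed, a hypothetical update_plan input could batch several status changes at once. The field names here (tasks, id, status) are assumptions for illustration, not the exact schema:

```json
{
  "tasks": [
    { "id": "task-1", "status": "completed" },
    { "id": "task-2", "status": "in_progress" }
  ]
}
```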

Parallel Execution Tools

Advanced tools for decomposing complex problems into parallel workstreams. These tools enable sophisticated multi-threaded problem solving by creating dedicated sub-agents.

Execute Parallel Work Streams Tool

Name: execute_parallel_work_streams

Executes multiple parallel work streams by decomposing a complex problem into independent tasks that run concurrently. Each task runs as a dedicated sub-agent with access to the specified tools and works independently in parallel, dramatically speeding up complex multi-part tasks. Results can be aggregated automatically or returned individually.
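For example, a report-writing problem might be split into per-quarter workstreams. The field names in this sketch (tasks, instructions, tools, aggregate) are illustrative assumptions, not the tool's exact schema:

```json
{
  "tasks": [
    {
      "instructions": "Summarize Q1 financial results",
      "tools": ["query_documents", "fetch_document"]
    },
    {
      "instructions": "Summarize Q2 financial results",
      "tools": ["query_documents", "fetch_document"]
    }
  ],
  "aggregate": true
}
```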

Document Management Tools

Tools for managing documents in the Vertesia knowledge base. These tools handle CRUD operations for documents with full metadata support.

Query Documents Tool

Name: query_documents

A powerful tool for searching and analyzing documents with two distinct modes: Search Mode for high-level queries and DSL Mode for direct Elasticsearch access.

Search Mode (Recommended)

Use Search Mode for most document searches. It provides a high-level API with automatic processing:

Query Parameters:

Parameter | Type | Description
query.name | string | Partial name match (autocomplete-style)
query.type | string | Filter by document type ID
query.status | string | Filter by document status
query.full_text | string | Full-text search with stemming and fuzzy matching
query.vector | object | Vector similarity search (see below)
query.weights | object | Weights for hybrid search (e.g., { full_text: 2, vector: 3 })
query.score_aggregation | string | Score aggregation method: rrf, rsf, or smart
query.dynamic_scaling | string | Dynamic weight scaling: on or off

Vector Search Options:

Parameter | Type | Description
query.vector.text | string | Text to embed and search semantically
query.vector.objectId | string | Reuse embeddings from an existing object
query.vector.image | string | Image URL or base64 for vision embedding
query.vector.config | object | Embedding types to use (text, properties, vision, code)

Example - Hybrid Search:

{
  "query": {
    "full_text": "quarterly financial report",
    "vector": { "text": "company earnings analysis" },
    "weights": { "full_text": 2, "vector": 3 },
    "score_aggregation": "smart"
  },
  "limit": 20
}

DSL Mode (Power Users)

Use DSL Mode for direct Elasticsearch Query DSL access. Ideal for analytics, complex aggregations, and full control:

DSL Parameters:

Parameter | Type | Description
dsl.query | object | Elasticsearch query clause (e.g., match_all, term, bool)
dsl.aggs | object | Aggregations for analytics (e.g., terms, date_histogram)
dsl.size | number | Results to return (0-10,000; use 0 for aggregations-only)
dsl.from | number | Pagination offset (0-100,000)
dsl.sort | array | Sort order (e.g., [{ "created_at": "desc" }])

Example - Aggregation:

{
  "dsl": {
    "aggs": {
      "by_status": { "terms": { "field": "status" } },
      "by_type": { "terms": { "field": "type.name" } }
    },
    "size": 0
  }
}

Shared Options (Both Modes)

Parameter | Type | Description
limit | number | Maximum results (default: 100)
offset | number | Skip n documents for pagination
format | string | Output format: json, csv, or table
count_only | boolean | Return only document count
all_revisions | boolean | Include all revisions, not just latest
collection_id | string | Search within specific collection
analyze | boolean | Run LLM analysis on results
analyzer_prompt | string | Custom instructions for LLM analysis
facets | array | Compute aggregated counts (e.g., [{ name: "types", field: "type.name" }])
output_artifact | object | Stream large results to artifact file
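For example, combining the shared options above, a Search Mode call can return faceted counts alongside an LLM summary of the matches. This sketch uses only the documented parameters:

```json
{
  "query": { "full_text": "invoice" },
  "limit": 50,
  "format": "table",
  "facets": [{ "name": "types", "field": "type.name" }],
  "analyze": true,
  "analyzer_prompt": "Summarize the common themes across these documents."
}
```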

For more details on search configuration, see Search Configuration.

Fetch Document Tool

Name: fetch_document

Retrieves a specific document by its identifier. Supports multiple modes (full document, properties only, content, sections, instrumented views, or AI-powered analysis) and can optionally stream large results to a workspace artifact for downstream processing with other tools.
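For instance, a call might retrieve only a document's properties rather than its full content. The parameter names in this sketch (id, mode) are assumptions based on the modes described above:

```json
{
  "id": "doc_12345",
  "mode": "properties"
}
```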

Create Document Tool

Name: create_document

Creates new documents in the system with specified content, metadata, and properties.

Update Document Tool

Name: update_document

Updates existing documents with new content or properties.

Create Content Object Tool

Name: create_content_object

Creates persistent content objects from external locations such as HTTPS URLs or cloud storage (for example, https://…, s3://…, gs://…). You can attach custom metadata, tags, and an optional collection so that imported files become searchable and available for later analysis.
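As an illustrative sketch (the exact field names are assumptions), importing a file from cloud storage with tags and a target collection might look like:

```json
{
  "source": "s3://my-bucket/reports/q3-report.pdf",
  "tags": ["finance", "q3"],
  "collection_id": "col_reports",
  "metadata": { "department": "finance" }
}
```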

Spreadsheet Workflows with Skills

Spreadsheet creation and analysis are implemented using a combination of document tools, skills, artifacts, and the Daytona sandbox rather than dedicated spreadsheet-specific built-ins:

  • Use query_documents and fetch_document to locate and access spreadsheet files stored in the knowledge base.
  • Use skills (for example, data-analysis skills) to generate or transform spreadsheets and to declare any required packages.
  • Use write_artifact to create helper scripts and data files, and execute_shell to run those scripts inside the sandbox, reading from /home/daytona/files and writing results to /home/daytona/out.
  • Use read_artifact, list_artifacts, and related tools to inspect outputs, and create_document or create_content_object to persist final results.

This pattern replaces the legacy spreadsheet tools and gives agents more flexibility and control over how spreadsheet data is processed.

Type Management Tools

Tools for managing object type definitions and schemas. These tools control the structure and validation rules for different types of objects in the system.

Get Object Type Tool

Name: get_object_type

Retrieves details about specific object type definitions.

Create or Update Type Tool

Name: create_or_update_object_type

Creates new or updates existing object type definitions.

Collection Management Tools

Tools for organizing and grouping related documents into collections. Collections provide hierarchical organization and bulk operations on document sets.

Create Collection Tool

Name: create_collection

Creates a new collection for organizing related documents. Collections act as containers that group documents together for easier management and access.

Update Collection Tool

Name: update_collection

Modifies an existing collection's properties, such as name, description, or schema definition. This tool updates collection metadata without affecting the documents contained within it.

Add to Collection Tool

Name: add_to_collection

Places one or more existing documents into a collection for organization and grouping. This tool establishes relationships between documents and collections, without modifying the documents themselves.

Remove from Collection Tool

Name: remove_from_collection

Removes documents from a collection while preserving the documents themselves. This tool only breaks the association between documents and a collection; it does not delete the documents from the system.

Get Collection Tool

Name: get_collection

Accesses detailed information about an existing collection, including its name, description, schema, and member documents. This tool retrieves the full definition of a collection along with metadata about contained documents.

Search Collections Tool

Name: search_collections

Finds collections by searching for partial matches in collection names. This tool searches through all existing collections and returns those whose names contain the specified search term using case-insensitive partial matching.

Temporary Artifact Tools

Tools for managing temporary artifacts in the agent workspace. Artifacts are per-run files (scripts, intermediate data, and outputs) that are automatically deleted when the workflow completes. Use these tools together with execute_shell for robust code and data workflows.

Write Artifact Tool

Name: write_artifact

Writes a temporary file into the agent workspace. Use type: "script" for code (synced to /home/daytona/scripts/) or type: "file" for data (synced to /home/daytona/files/).
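For example, writing a helper script that will be synced to /home/daytona/scripts/ might look like the following. The type value is documented above; the path and content field names are assumptions for illustration:

```json
{
  "path": "scripts/summarize.py",
  "type": "script",
  "content": "import pandas as pd\ndf = pd.read_csv('/home/daytona/files/data.csv')\nprint(df.describe())"
}
```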

Read Artifact Tool

Name: read_artifact

Reads the content of a temporary artifact, with optional line ranges and line numbers for precise inspection.

List Artifacts Tool

Name: list_artifacts

Lists available artifacts, optionally filtered by a path prefix such as scripts/, files/, or out/.

Grep Artifacts Tool

Name: grep_artifacts

Searches for a regular-expression pattern across artifacts, useful for finding errors or specific content in generated files.

Patch Artifact Tool

Name: patch_artifact

Applies literal find-and-replace edits inside an artifact, typically after inspecting it with read_artifact or grep_artifacts.
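A hypothetical patch_artifact call, using assumed field names (path, find, replace), could fix a value spotted with grep_artifacts:

```json
{
  "path": "scripts/summarize.py",
  "find": "data.csv",
  "replace": "sales.csv"
}
```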

View Image Tool

Name: view_image

Exposes an image from an artifact (for example, out/plot.png) or a stored Vertesia document as an image attachment that the model can see and combine with analyze_image.

Web and External Tools

Tools for interacting with external services and executing custom code. These tools extend agent capabilities beyond the core platform functionality.

Web Search Tool

Name: web_search

Searches the web for information using specified queries and options.

This tool requires a Serper API key. Go to Settings in Studio to configure your API key.

Execute Shell Tool

Name: execute_shell

Executes shell commands inside a managed Daytona sandbox.

The sandbox is created on first use for a workflow run and reused across calls, preserving installed packages and files until the workflow completes.

Use this tool to:

  • Run Python or other language scripts stored under /home/daytona/scripts (for example, data analysis with pandas).
  • Manipulate files under /home/daytona/files, /home/daytona/documents, and /home/daytona/out.
  • Install additional packages needed by skills using the tool input (for example, extra Python or system packages).

Artifacts created with the temporary artifact tools are automatically synced into the sandbox on each call:

  • scripts/* → /home/daytona/scripts/
  • files/* → /home/daytona/files/
  • skills/* → /home/daytona/skills/
  • out/* → /home/daytona/out/ (for derived outputs that should be reused later)

You can also use the documents parameter to download Vertesia documents into /home/daytona/documents/ (as original files or extracted text) before running shell commands that analyze them.
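Putting this together, a sketch of a call that downloads a document as extracted text and runs a script against it. The documents parameter is documented above; the command field name and the shape of each document entry are assumptions:

```json
{
  "command": "python /home/daytona/scripts/summarize.py",
  "documents": [
    { "id": "doc_12345", "format": "text" }
  ]
}
```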

Ask User Tool

Name: ask_user

Requests input from users during workflow execution.

Analyze Image Tool

Name: analyze_image

Executes ImageMagick commands on images and PDFs, storing results to cloud storage. Supports standard image formats and PDF documents with command chaining capabilities.

Combine this with the view_image tool to first surface images from artifacts or stored documents into the conversation so the model can inspect and transform them.

Conversation Tools

Tools for searching and analyzing past conversations and agent runs.

Search Conversations Tool

Name: search_conversations

Searches workflow runs (conversations) by status, time range, initiator, or interaction name, with pagination support and an optional output_artifact setting to stream full result sets into an artifact while returning a small preview.

Analyze Conversation Tool

Name: analyze_conversation

Loads the conversation from another workflow run and analyzes it using an analyzer_prompt, optionally constrained by a result_schema. Useful for reviewing agent behavior, extracting key outcomes, or monitoring progress of running workflows.
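For example (analyzer_prompt and result_schema are documented above; run_id is an assumed parameter name for the target workflow run):

```json
{
  "run_id": "wf_run_789",
  "analyzer_prompt": "List the tools the agent used and whether the task succeeded.",
  "result_schema": {
    "type": "object",
    "properties": {
      "tools_used": { "type": "array", "items": { "type": "string" } },
      "succeeded": { "type": "boolean" }
    }
  }
}
```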

Communication Tools

Tools for sending notifications and messages to external recipients.

Send Email Tool

Name: send_email

Sends emails using the Resend email service. Accepts markdown content which is automatically converted to HTML with a plain text fallback. Supports email conversations where recipients can reply and have their responses routed back to the workflow.

App Settings Configuration:

This tool requires the following settings to be configured in the app installation:

Setting | Required | Description
from | Yes | The sender email address (e.g., Your Company <noreply@example.com>)
resend_api_key | Yes | Your Resend API key
allowed_domains | No | Array of allowed recipient domains for whitelisting (e.g., ["company.com", "partner.org"])
base_url | No | Base URL for resolving internal links (defaults to https://cloud.vertesia.io)
inbound_domain | No | Domain for receiving email replies (e.g., vertesia.io). Required for email reply routing.

Parameters:

Parameter | Type | Required | Description
to | string[] | Yes | Array of recipient email addresses
subject | string | Yes | The email subject line
markdown | string | Yes | The email body in markdown format
cc | string[] | No | Array of CC recipient addresses
bcc | string[] | No | Array of BCC recipient addresses
artifact_run_id | string | No | Workflow run ID for resolving artifact paths and routing replies
enable_reply | boolean | No | Enable email reply routing. Defaults to true when artifact_run_id is provided.

Email Reply Routing:

When enable_reply is true and inbound_domain is configured, the tool automatically generates a reply-to address in the format r+{routeKey}@{inbound_domain}. When a recipient replies to the email:

  1. The reply is received by Resend at the inbound domain
  2. Resend sends a webhook to the Vertesia API
  3. The webhook extracts the run ID and sends a userInput signal to the workflow
  4. The workflow receives the email content as user input and can respond

This enables email-based conversations with agents, where users can communicate via email instead of the UI.

Custom URL Schemes:

The tool automatically resolves custom URL schemes in markdown content:

  • artifact:path/to/file - Resolves to a signed download URL for workflow artifacts
  • image:path/to/image - Resolves to a signed download URL for images
  • store:objectId - Links to a store object in the Vertesia UI
  • document://objectId - Links to a document in the Vertesia UI
  • collection:collectionId - Links to a collection in the Vertesia UI

Domain Whitelisting:

When allowed_domains is configured, emails can only be sent to addresses matching those domains. This applies to to, cc, and bcc recipients. Leave empty or omit to allow any domain.
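For example, using the parameters documented above, a call that sends a status email with an artifact link and reply routing enabled (the addresses and run ID are placeholders):

```json
{
  "to": ["analyst@company.com"],
  "cc": ["manager@company.com"],
  "subject": "Weekly report ready",
  "markdown": "The weekly report is ready: [download](artifact:out/report.xlsx)",
  "artifact_run_id": "wf_run_789",
  "enable_reply": true
}
```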

Data Platform Tools

Tools for managing data stores, tables, queries, and dashboards. These tools enable agents to work with structured data using DuckDB databases.

For comprehensive documentation, see the Data Platform Tools Reference.

Key Tools

  • data_get_schema - Get the schema of a data store
  • data_list_tables - List tables with metadata
  • data_create_database - Create a new DuckDB database
  • data_create_tables - Create tables atomically
  • data_import - Import data from files or inline data
  • data_preview_dashboard - Preview Vega-Lite dashboards
  • data_create_dashboard - Create saved dashboards
  • data_render_dashboard - Render dashboards to PNG

Automation Tools

Tools for automating and scheduling recurring tasks.

Schedule Workflow Tool

Name: schedule_workflow

Creates recurring schedules for agent/workflow execution using cron expressions. This tool is not enabled by default and must be explicitly added to an agent's tools.

Parameters:

Parameter | Type | Required | Description
name | string | Yes | Name of the schedule for identification
description | string | No | Description of what this scheduled workflow does
interaction | string | Yes | The interaction/agent ID to execute on schedule
cron_expression | string | Yes | Cron expression defining when to run
timezone | string | No | Timezone for the cron expression (defaults to UTC)
vars | object | No | Variables to pass to the scheduled workflow
enabled | boolean | No | Whether to enable immediately (defaults to true)

Cron Expression Format:

The cron expression uses 5 fields: minute hour day month weekday

Field | Values | Description
minute | 0-59 | Minute of the hour
hour | 0-23 | Hour of the day
day | 1-31 | Day of the month
month | 1-12 or JAN-DEC | Month of the year
weekday | 0-6 or SUN-SAT | Day of the week (0 = Sunday)

Special characters:

  • * - any value
  • , - list separator (e.g., 1,3,5)
  • - - range (e.g., 1-5)
  • / - step (e.g., */15 for every 15)

Common Cron Examples:

Expression | Description
0 9 * * * | Every day at 9:00 AM
0 9 * * MON | Every Monday at 9:00 AM
0 9 * * MON-FRI | Weekdays at 9:00 AM
0 0 1 * * | First day of each month at midnight
0 */2 * * * | Every 2 hours
30 8 * * * | Every day at 8:30 AM

Important Notes:

  • Scheduled workflows run non-interactively but can still use ask_user for async input via email, webhooks, or headless UX listening to the stream
  • Use meaningful names for easy identification in the UI
  • Consider timezone when scheduling for specific business hours
  • This tool must be explicitly added to an agent's tool list
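For example, using the documented parameters, a weekday-morning digest scheduled in New York time might be created like this (the interaction ID and vars values are placeholders):

```json
{
  "name": "weekday-morning-digest",
  "description": "Sends a digest of new documents every weekday morning",
  "interaction": "DailyDigestAgent",
  "cron_expression": "0 9 * * MON-FRI",
  "timezone": "America/New_York",
  "vars": { "recipients": ["team@example.com"] },
  "enabled": true
}
```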

Index Configuration Tools

Tools for querying and managing search index configuration. These tools allow agents to inspect and update embedding settings and trigger reindexing operations.

Get Index Configuration Tool

Name: get_index_configuration

Retrieves the current index status and configuration for the project's search infrastructure.

Returns:

  • Index existence and health status
  • Document count and storage size
  • Embedding dimensions for text, image, and properties
  • Field mappings and index version

Example Response:

{
  "exists": true,
  "healthy": true,
  "documentCount": 15234,
  "sizeInBytes": 52428800,
  "dimensions": {
    "text": 1536,
    "image": 1536,
    "properties": 1536
  },
  "version": 3
}

Update Index Configuration Tool

Name: update_index_configuration

Updates index configuration settings, including embedding dimensions. Can trigger reindexing when configuration changes require it.

Parameters:

Parameter | Type | Required | Description
embedding_dimensions | object | No | New dimensions for embedding types
embedding_dimensions.text | number | No | Dimensions for text embeddings
embedding_dimensions.image | number | No | Dimensions for image embeddings
embedding_dimensions.properties | number | No | Dimensions for properties embeddings
force_reindex | boolean | No | Trigger a full reindex of all documents
user_confirmed | boolean | Yes | Must be true; requires confirmation via ask_user first

Important: This tool requires user confirmation before making changes. Always use ask_user to confirm the operation before calling this tool with user_confirmed: true.

Example:

{
  "embedding_dimensions": {
    "text": 3072
  },
  "force_reindex": true,
  "user_confirmed": true
}

For more details on index configuration, see Search Configuration.
