Configuration

Vertesia is a bring-your-own-key service that supports many model inference providers, such as OpenAI, Google Vertex AI, and AWS Bedrock. Users can configure and use the providers of their choice through Studio, the REST API, or the SDK.
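As a rough illustration of programmatic configuration, the sketch below registers an OpenAI environment with a REST call. The base URL, endpoint path, header, and payload fields are assumptions for illustration only; consult the REST API reference for the actual schema.

```ts
// Hypothetical sketch: registering an OpenAI environment via the REST API.
// The endpoint path and payload fields are assumptions, not the documented API.
const VERTESIA_API = "https://api.vertesia.example"; // placeholder base URL
const VERTESIA_TOKEN = "<your-vertesia-api-token>";  // placeholder credential

async function createOpenAIEnvironment(openAIKey: string) {
  const response = await fetch(`${VERTESIA_API}/environments`, {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${VERTESIA_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      name: "openai-prod", // display name for the environment
      provider: "openai",  // inference service provider
      apiKey: openAIKey,   // bring-your-own-key credential
    }),
  });
  if (!response.ok) {
    throw new Error(`Failed to create environment: ${response.status}`);
  }
  return response.json();
}
```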

Please refer to each provider's documentation page to learn how to use it with Vertesia:

For AWS Bedrock and GCP Vertex AI, helper scripts to automate the configuration are available here.

Vertesia Execution Environments

Vertesia introduces the concept of an Execution Environment. An environment is configured to use an inference provider and contains the individual models that are available. Vertesia also provides managed environments, including AWS Bedrock and Google Vertex AI, that you can use to get started quickly and try out the platform: just accept the default settings when creating your new project.

Environment List

Execution Environment

An Execution Environment's attributes include the following (see the sketch after this list):

  • An inference service provider
  • An API key
  • An Endpoint URL
  • Available models from the provider that can be enabled
  • Enabled models
  • Projects within the user's Vertesia organization that can use the environment
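To show how these attributes fit together, the following sketch models an environment as a TypeScript type. The field names are assumptions made for illustration; the actual schema exposed by the API and SDK may differ.

```ts
// Illustrative sketch only: field names are assumptions, not the official schema.
interface ExecutionEnvironment {
  provider: string;          // inference service provider, e.g. "bedrock"
  apiKey?: string;           // API key for the provider (if required)
  endpointUrl?: string;      // endpoint URL for the provider
  availableModels: string[]; // models offered by the provider that can be enabled
  enabledModels: string[];   // subset of available models enabled for use
  allowedProjects: string[]; // projects in the organization that can use this environment
}

// Example: an AWS Bedrock environment with one enabled model.
const bedrockEnv: ExecutionEnvironment = {
  provider: "bedrock",
  endpointUrl: "https://bedrock-runtime.us-east-1.amazonaws.com",
  availableModels: ["anthropic.claude-3-5-sonnet-20240620-v1:0"],
  enabledModels: ["anthropic.claude-3-5-sonnet-20240620-v1:0"],
  allowedProjects: ["my-project"],
};
```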

Environment Detail
