Tutorials - Basic

Logging in

Platform users must be authenticated.

Steps to follow:

  • Select an authentication service where you have an account
  • Enter your email address and password
  • Fill in the information about your Account Type, Company, and Project Maturity.

Managing a New Project

Each organization has its own set of projects.

Creating a Project

Steps to follow:

  • In the left panel, click on Create New Project in the Projects dropdown list below the organization.
  • If you are experimenting, you may want to prefix its name with your name or trigram (e.g. ‘John_Experiments’).

Editing the project settings

Edit the project settings that apply to all interactions defined within it.

  • In the left panel, go to Settings > Project.
  • Feel free to modify the project name.
  • Feel free to modify the namespace to which the project belongs.
  • Explain the Project Context, background, and objectives. This will be used to pass more context to the model, to guide it and achieve better results.
  • Set the Default Environment and Model for the generation of content, metadata, and embeddings (Text, Properties, Image).
  • Activate embeddings generation if needed.

Inviting Users to the Project

  • In the left panel, go to Settings > Users.
  • For each user to invite, enter their email address, then select a Project and a Role. If you do not select a project, the role applies at the whole organization level.
  • Finally click on Invite User.
  • You may later add roles to the users.

Creating an Environment

Create an environment that refers to an existing LLM API Key from one of your providers.

Simple Environment

Steps to follow:

  • In the left panel, Models section, click on Environments.
  • Then click on Add New Environment at the top right.
  • Give it a name and select one of the supported Providers for which you have an API Key.
  • Enter the associated URL (usually optional; typically used to target a specific data center/region).
  • And finally enter the API Key value itself (e.g. copy and paste from OpenAI).
  • Once the Environment is created, look at the available Models in the right panel and add a few you are interested in.
  • Set the Default Model.

Environment With Failover

This approach allows dealing with unavailable providers or models.

Steps to follow:

  • Follow the same first steps as for creating a simple environment.
  • In the provider’s dropdown list, select Virtual - Load Balancer.
  • You do not need to specify any API Key in this case.
  • Once the Environment is created, look at the available Models in the right panel and add a few you are interested in.
  • Then set the Weight of the main (nominal) model to 100%, and 0% for the other(s).
  • If the nominal model is unavailable, the platform will automatically switch to the second model; if that one fails too, it moves on to the third, and so on.
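
To build an intuition of this behaviour, here is a minimal TypeScript sketch of an error-driven failover policy. It is purely illustrative and is not the platform's implementation; the platform performs the switch for you automatically.

    // Illustrative failover: try models in priority order (Weights 100%, 0%, 0%)
    // and fall back to the next one only when the current call fails.
    type ModelCall = (prompt: string) => Promise<string>;

    async function callWithFailover(models: ModelCall[], prompt: string): Promise<string> {
      let lastError: unknown;
      for (const model of models) {
        try {
          return await model(prompt);   // first available model wins
        } catch (err) {
          lastError = err;              // unavailable or failing: try the next one
        }
      }
      throw lastError;                  // every model failed
    }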

Environment With Load Balancing

This approach allows balancing the workload on multiple models or providers.

Steps to follow:

  • Follow the same first steps as for creating an environment with failover.
  • In the provider’s dropdown list, select Virtual - Load Balancer.
  • You do not need to specify any API Key in this case.
  • Once the environment is created, look at the available Models in the right panel and add a few you are interested in.
  • Then set the Weight of each model.
    • For instance, to balance the workload equally across four models, set each weight to 25%.
    • Calls are then distributed in round-robin order: the first model handles the first interaction call, the second model the second call, and so on; after the fourth call the cycle loops back to the first model (see the sketch below).
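
As an illustration only (again, not the platform's implementation), an equal-weight round-robin distribution over four models behaves like this TypeScript sketch; the model names are placeholders.

    // Illustrative round-robin: with equal weights of 25%, calls cycle through
    // the four models in order and loop back after the fourth call.
    function makeRoundRobin<T>(models: T[]): () => T {
      let next = 0;
      return () => {
        const model = models[next];
        next = (next + 1) % models.length;   // loop back after the last model
        return model;
      };
    }

    const pickModel = makeRoundRobin(["model-a", "model-b", "model-c", "model-d"]);
    // pickModel() returns model-a, then model-b, model-c, model-d, model-a, ...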

Environment with Mediator

TBD

Designing Your First Interaction

Let's design an interaction that analyses an input document and generates key points as a result.

Configuration

  • Click on the Interactions menu in the left panel.
  • Click on Add Interaction.
  • Give it a name and select a Default Environment.
  • In the Configuration tab, feel free to add a description, and specify the Default Model associated with your Environment.
  • The Output Modality is by default text. You may change it to image if relevant.
  • The Advanced Configuration allows you to further tune the technical settings.
  • Leave Max Tokens empty to start (it indicates the maximum number of tokens to be exchanged with the LLM in the context of an interaction).
  • Set the Temperature to 0.5 to start.

Result Schema

The Result Schema defines the output parameters you expect: here, one topic and an array of key points.

  • Look at the right panel named Result Schema.
  • Add a property named Topic as a text - do not forget to click on the checkmark.
  • Add a property named Keypoint as a text[] (array) - do not forget to click on the checkmark.
  • Click on Save changes.
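
For reference, the schema you just defined is conceptually equivalent to the following JSON Schema, written here as a TypeScript literal. The exact schema produced by the platform may differ; this sketch only conveys the expected shape.

    // Conceptual equivalent of the Result Schema: one text Topic and an array
    // of text Keypoint values.
    const resultSchema = {
      type: "object",
      properties: {
        Topic: { type: "string" },
        Keypoint: { type: "array", items: { type: "string" } },
      },
    } as const;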

Prompt

Create a first segment to tell the LLM what persona it should play.

  • Go to the Prompts tab.
  • Look at the prompt library in the right panel (Available Prompts).
  • Click on + to create a first Prompt Segment.
  • Give it a name such as “Legal affairs expert” and assign it the System role.
  • In the Template section, enter a sentence telling the LLM model what persona it should play.
  • Finally click on Create Prompt.
  • Click on the + sign on the right of the created Prompt Segment to add it.
  • Click on Save changes on the top right corner.
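
For example, the Template of this segment could simply read as follows (any wording that sets the persona works):

    You are an expert in legal affairs. You analyse legal and contractual documents and summarise them precisely and objectively.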

Now let's create a second segment to tell the LLM about the task to execute.

  • Similarly to the previous step, create a new Prompt segment “Extract key points”.
  • Assign it the User role this time, since it represents the task users want the LLM to execute.
  • The Prompt Schema section relates to the input parameters: add one named Input_text of type text.
  • In the Template section, define a task that refers to the input parameter: ${Input_text}.
  • Click on Create Prompt and add it by clicking on the + sign just beside it.
  • Click on Save changes.
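
As an example, the Template of this segment could look like the following (the exact wording is up to you, as long as it references the input parameter):

    Extract the main topic and the key points from the following document:
    ${Input_text}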

Playground

Testing an interaction takes place within the Playground.

  • From the Interaction Composer, click on the Playground tab.
  • Change the model in Select a Model if you like.
  • Copy and paste some document text in the Input text parameter.
  • The Estimated Token Count helps you deal with the Max Tokens constraint.
  • And now you are ready to Run your interaction for testing.

Result Analysis

Analyse the results returned by the LLM and parsed by Vertesia.

  • Look at the Execution Result panel.
  • The Streaming tab displays raw results.
  • The Result tab renders results nicely; note the values placed in the two output parameters Topic and Keypoint, as well as the execution time at the bottom.
  • The Prompt tab displays the global prompt sent by Vertesia to the LLM.
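
For illustration, a parsed result for a short contract excerpt might look like the TypeScript literal below. The values are invented and the actual layout of the Result tab will differ; only the shape matters.

    // Invented example of a parsed result matching the Result Schema above.
    const exampleResult = {
      Topic: "Non-disclosure agreement between two software vendors",
      Keypoint: [
        "Confidential information is defined broadly and includes source code.",
        "The confidentiality obligation survives five years after termination.",
        "Disputes are settled under the law chosen in the governing-law clause.",
      ],
    };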

Run object

Access all of the Run objects to compare execution results for this Interaction.

  • Click on the Runs tab to access all of the Run objects.
  • Then click on a Run Object to display details: Output, Input, Final prompt.
  • You may want to come back to the Playground tab and test various input texts or select another model, then compare results.

Publishing an HTTP API Endpoint for a new version of your Interaction

Transform your functional interaction into an HTTP REST API endpoint that may be called by applications.

  • From the Interaction Composer, just click on Publish.
  • Do not make it public, add a Tag, and … that’s it!
  • You may instantly test it e.g. from Postman, or from any automation tool.
  • To do so, you first need to create a Vertesia API Key.
  • In the left panel, click on Settings.
  • Click on Create New API Key, with developer as role and Secret key as type.
  • Click on Create.
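
With the API Key in hand, calling the published endpoint from code follows the usual HTTP pattern: POST the interaction's input parameters with the key as a bearer token. The TypeScript sketch below is illustrative only; the endpoint URL and the payload field names are assumptions, so copy the exact values shown by the platform when you publish (or from the API reference).

    // Illustrative only: the URL and payload shape below are assumptions.
    const response = await fetch("https://<your-published-endpoint>", {
      method: "POST",
      headers: {
        Authorization: "Bearer <YOUR_VERTESIA_API_KEY>",   // the Secret key created above
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        data: { Input_text: "…some document text…" },      // the interaction's input parameter
      }),
    });
    console.log(await response.json());                    // should contain Topic and Keypoint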
