Tutorials - Basic
Logging in
Platform users must be authenticated.
Steps to follow:
- Select an authentication service where you have an account
- Enter your email address and password
- Then fill in your Account Type, Company, and Project Maturity information
Managing a New Project
Each organization has its own set of projects.
Creating a Project
Steps to follow:
- In the left panel, click on Create New Project in the Projects dropdown list below the organization.
- If you are experimenting, you may want to prefix its name with your name or trigram (e.g. ‘John_Experiments’).
Editing the project settings
Edit the project settings that apply to all interactions defined within it.
- In the left panel, go to Settings > Project.
- Feel free to modify the project name.
- Feel free to modify the namespace to which the project belongs.
- Explain the Project Context, background, and objectives. This will be used to pass more context to the model, to guide it and achieve better results.
- Set the Default Environment and Model for the generation of contents, metadata, and embeddings (Text, Properties, Image).
- Activate embeddings generation if needed.
Inviting Users to the Project
- In the left panel, go to Settings > Users.
- For each user to invite, enter their email address, then select a Project and a Role. If you do not select any project, the role is defined at the whole organization level.
- Finally, click on Invite User.
- You may later add roles to the users.
Creating an Environment
Create an environment that refers to an existing LLM API Key from one of your providers.
Simple Environment
Steps to follow:
- In the left panel, Models section, click on Environments.
- Then click on Add New Environment at the top right.
- Give it a name and select one of the supported Providers for which you have an API Key.
- Enter the associated URL (usually optional; typically used to target a specific data center/region).
- Finally, enter the API Key value itself (e.g. copy and paste from OpenAI).
- Once the Environment is created, look at the available Models in the right panel and add a few you are interested in.
- Set the Default Model.
Environment With Failover
This approach lets you deal with unavailable providers or models.
Steps to follow:
- Follow the first steps of creating a simple environment.
- In the provider’s dropdown list, select Virtual - Load Balancer.
- You do not need to specify any API Key in this case.
- Once the Environment is created, look at the available Models in the right panel and add a few you are interested in.
- Then set the Weight of the main nominal model to 100%, and 0% for the other(s).
- If the nominal model is unavailable, the platform automatically switches to the second; if that fails too, it moves successively to the third, etc.
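The failover behaviour described above can be sketched as follows. This is a minimal illustration of the idea, not the platform's actual implementation, and the model names are made up:

```python
def call_with_failover(models, call):
    """Try each model in weight order (the 100% nominal model first,
    then the 0% backups); move to the next one only when the current
    model is unavailable."""
    last_error = None
    for model in models:
        try:
            return call(model)
        except ConnectionError as exc:  # stand-in for provider unavailability
            last_error = exc
    raise last_error

def call(model):
    # Simulate the nominal model being down.
    if model == "nominal-model":
        raise ConnectionError("provider unavailable")
    return f"answer from {model}"

print(call_with_failover(["nominal-model", "backup-model"], call))
# prints "answer from backup-model"
```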
Environment With Load Balancing
This approach lets you balance the workload across multiple models or providers.
Steps to follow:
- Follow the first steps of creating an environment with failover.
- In the provider’s dropdown list, select Virtual - Load Balancer.
- You do not need to specify any API Key in this case.
- Once the environment is created, look at the available Models in the right panel and add a few you are interested in.
- Then set the Weight of each model. For instance, if you want to equally balance the workload across four models, set their weight to 25% each:
  - The first model is called for the first interaction call.
  - The second model is called for the second interaction call.
  - The third model is called for the third interaction call.
  - The fourth model is called for the fourth interaction call.
  - Loopback: the first model is called for the fifth interaction call.
  - Etc.
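With equal weights, the call order described above amounts to a simple round robin. A minimal sketch (the model names are placeholders):

```python
from itertools import cycle

# Four models at 25% each: calls rotate through them in order and
# loop back to the first one on the fifth call.
models = ["model-1", "model-2", "model-3", "model-4"]
picker = cycle(models)

calls = [next(picker) for _ in range(5)]
print(calls)  # the fifth call loops back to "model-1"
```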
Environment with Mediator
TBD
Designing Your First Interaction
Let's design an interaction that analyses an input document and generates key points as a result.
Configuration
- Click on the Interactions menu in the left panel.
- Click on Add Interaction.
- Give it a name and select a Default Environment.
- In the Configuration tab, feel free to add a description, and specify the Default Model associated with your Environment.
- The Output Modality is text by default. You may change it to image if relevant.
- The Advanced Configuration allows further tuning of the technical configuration.
- Keep Max Tokens empty to start (this indicates the maximum number of tokens to be exchanged with the LLM in the context of an interaction).
- Set a Temperature of 0.5 to start.
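As a recap, the configuration above boils down to the following values. The field names here are descriptive placeholders, not the platform's internal identifiers, and the environment name is hypothetical:

```python
# Descriptive summary of the interaction configuration.
interaction_config = {
    "name": "Extract key points",
    "default_environment": "my-environment",  # hypothetical name
    "output_modality": "text",   # default; switch to "image" if relevant
    "max_tokens": None,          # keep empty to start
    "temperature": 0.5,          # starting value suggested above
}
```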
Result Schema
The Result Schema merely defines the output parameters you expect: here one topic, and an array of key points.
- Look at the right panel named Result Schema.
- Add a property named Topic as a text - do not forget to click on the checkmark.
- Add a property named Keypoint as a text[] (array) - do not forget to click on the checkmark.
- Click on Save changes
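With these two properties, a successful run returns a result shaped roughly like the dict below. The values are invented for illustration:

```python
# Illustrative result matching the schema: one Topic text value and a
# Keypoint array of texts. The content itself is made up.
result = {
    "Topic": "Non-disclosure agreement",
    "Keypoint": [
        "Confidential information must not be shared with third parties.",
        "Obligations survive termination of the agreement.",
    ],
}

assert isinstance(result["Topic"], str)
assert all(isinstance(point, str) for point in result["Keypoint"])
```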
Prompt
Create a first segment to tell the LLM what persona it should play.
- Go to the Prompts tab.
- Look at the prompt library in the right panel (Available Prompts).
- Click on + to create a first Prompt Segment.
- Give it a name such as “Legal affairs expert” and assign it the System role.
- In the Template section, enter a sentence telling the LLM what persona it should play.
- Finally, click on Create Prompt.
- Click on the + sign to the right of the created Prompt Segment to add it.
- Click on Save changes in the top right corner.
Now let's create a second segment to tell the LLM about the task to execute.
- Similarly to the previous step, create a new Prompt Segment named “Extract key points”.
- Assign it the User role this time, since it represents the task users want the LLM to execute.
- The Prompt Schema section relates to the input parameters: add one named Input_text of type text.
- In the Template section, define a task that refers to the input parameter: ${Input_text}.
- Click on Create Prompt and add it by clicking on the + sign just beside it.
- Click on Save changes.
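At run time, the ${Input_text} placeholder in the template is replaced with the actual input parameter. Python's string.Template happens to use the same ${...} syntax, so the mechanism can be sketched as follows; the platform's own template engine may differ in details, and the segment text is an example:

```python
from string import Template

# Sketch of placeholder substitution in a prompt segment template.
segment = Template(
    "Extract the key points from the following document:\n${Input_text}"
)
prompt = segment.substitute(Input_text="Full text of the document goes here.")
print(prompt)
```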
Playground
Testing an interaction takes place within the Playground.
- From the Interaction Composer, click on the Playground tab.
- Change the model in Select a Model if you like.
- Copy and paste some document text into the Input text parameter.
- The Estimated Token Count helps you deal with the max tokens constraint.
- Now you are ready to Run your interaction for testing.
Result Analysis
Analyse the results returned by the LLM and parsed by Vertesia.
- Look at the Execution Result panel.
- The Streaming tab displays raw results.
- The Result tab renders results nicely; note the values placed in the two output parameters, Topic and Keypoint, as well as the execution time at the bottom.
- The Prompt tab displays the global prompt sent by Composable to the LLM.
Run object
Access all of the Run objects to compare execution results for this Interaction.
- Click on the Runs tab to access all of the Run objects.
- Then click on a Run object to display details: Output, Input, Final prompt.
- You may want to come back to the Playground tab, test various input texts, select another model, and compare results.
Publishing an HTTP API Endpoint for a new version of your Interaction
Transform your functional interaction into an HTTP REST API endpoint that can be called by applications.
- From the Interaction Composer, just click on Publish.
- Do not make it public, add a Tag, and … that’s it!
- You may instantly test it, e.g. from Postman or from any automation tool.
- To do so, you first need to create a Vertesia API Key.
- In the left panel, click on Settings.
- Click on Create New API Key, with developer as role and Secret key as type.
- Click on Create.
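Once published, and armed with your API Key, the endpoint can be called from any HTTP client. The sketch below assembles such a request with Python's standard library; the URL, payload shape, and header names are assumptions to adapt to the endpoint details shown after publishing:

```python
import json
import urllib.request

# Placeholders: substitute the real endpoint URL and your Secret key.
API_URL = "https://api.example.com/interactions/extract-key-points/run"
API_KEY = "sk-your-secret-key"

def build_request(input_text: str) -> urllib.request.Request:
    """Assemble an authenticated POST carrying the interaction's
    Input_text parameter as JSON."""
    payload = {"data": {"Input_text": input_text}}
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("Some contract text to analyse.")
# urllib.request.urlopen(req) would send it; skipped here because the
# URL above is a placeholder.
```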
