Building your integration

In this section we'll explain how to build your integration with Helios using the Activity Plugin. We'll cover the small amount of frontend and backend code that will enable your users to complete thymia activities and view their wellness results.


You'll need an Activation Key, so sign up now if you haven't already done so. It may also help to have read the getting started guide.

Architecture overview

As we saw in the getting started guide, there are three steps to using the Activity Plugin:

  1. Call the Helios API to create a new Model Run, receiving back an activity link and a unique identifier.

  2. Embed the activity link in your app and let your users complete their activity.

  3. Call the API again with the unique identifier to retrieve wellness results and present to users.

The Helios API is only intended to be called from backend infrastructure as the Activation Key should not be exposed. Therefore you'll need a small amount of backend code when implementing steps 1 and 3. The nature of this is up to you and will depend on your existing infrastructure - anywhere that a secure call can be made to the Helios API is good. In this guide we'll use curl to describe the API calls made from your backend, but you can use any language & framework able to make HTTP calls (for example Python, Node, Java).

Step 2 runs on the frontend of the app or site that your users interact with. The Activity Plugin works across different browsers and devices (mobile, tablet, laptop for example). Anywhere you can embed an iframe or webview should be fine.

How your frontend and backend communicate is again specific to your infrastructure, but after the backend retrieves an activity link it will need to arrange for that link to be embedded in a page on the frontend. Then, once an activity is complete, the frontend will need to signal in some way to the backend so that retrieval of results from the Helios API can begin. And finally, when those results are retrieved from the API they will need to be placed in a page on the frontend.

Step 1 - Request an activity link

To begin the process your backend should call the /v1/models/mental-wellness-activity endpoint. This creates a new run of the Mental Wellness model for a specific user, returning a link for them to perform a thymia activity in your frontend. The recording of the user captured during the activity is then automatically used as input to the model.

The endpoint requires several fields to be specified, including some information about the user such as their demographic details. This information is used to benchmark individuals and improve the accuracy of your results.

Here's the list of mandatory fields you'll need to have available in order to call the endpoint. Depending on your setup you may already have some or all of these in a user database, or you may need to prompt the user for additional input before calling the Helios API:

  • user.userLabel: A label identifying a user in your system. This can be in any format but should be unique - for example the id of a user record in your database. The purpose of this field is to allow a new model run to use input & output from previous runs submitted for the same user. Model runs with the same user label are assumed to refer to the same user.

  • user.dateOfBirth: Date of birth of the user. If day or month are unknown, supply the parts which are known and use '01' for the rest.

  • user.birthSex: The sex assigned to the user at birth.

  • activityType: The type of activity to show to users. You can select from 3 built-in types - read-aloud, image-description & question. These all cycle through different variations as a user repeats the same activity over time. If you would like to customise the activities please contact thymia.

  • language: The code of the language the user will be speaking when completing their activity. The built-in activities all elicit speech in English, so choose the most appropriate code for the user's English accent or dialect. If you have created a custom activity that elicits speech in a different language, choose the code for that language.

See the API reference for the format and allowed values of these fields, as well as details of other optional fields you may want to pass.
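To catch missing data before making a round-trip to the API, you may want to assemble and check the mandatory fields up front. Here's a minimal sketch; the `buildModelRunPayload` helper is our own illustration, not part of the Helios API:

```javascript
// Hypothetical helper: assembles and checks the request body for
// /v1/models/mental-wellness-activity before it is sent.
function buildModelRunPayload({ userLabel, dateOfBirth, birthSex, activityType, language }) {
  const missing = Object.entries({ userLabel, dateOfBirth, birthSex, activityType, language })
    .filter(([, value]) => value == null || value === "")
    .map(([name]) => name);
  if (missing.length > 0) {
    throw new Error(`Missing mandatory fields: ${missing.join(", ")}`);
  }
  return {
    user: { userLabel, dateOfBirth, birthSex },
    activityType,
    language,
  };
}
```

Validating here also gives you a natural place to prompt the user for any details not already held in your user database.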

Putting these together you will end up with an API request looking like this:

curl -X 'POST' \
  -H 'x-api-key: your_key_here' \
  -H 'Content-Type: application/json' \
  'https://<helios-api-host>/v1/models/mental-wellness-activity' \
  -d '{
        "user": {
          "userLabel": "test-user-1",
          "dateOfBirth": "1990-01-01",
          "birthSex": "MALE"
        },
        "activityType": "read-aloud",
        "language": "en-GB"
      }'

Reminder: This and all other API calls require your Activation Key to be passed in the x-api-key header.
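If your backend runs on Node, the same step 1 call might look like the sketch below. The base URL constant and the `createModelRun` helper are assumptions for illustration; substitute the real Helios API host for your account:

```javascript
// Assumed placeholder host - substitute the real Helios API base URL.
const HELIOS_BASE_URL = "https://helios.example.com";

// Sketch of the step 1 call; fetchImpl is injectable so the function
// can be exercised without a live API.
async function createModelRun(apiKey, payload, fetchImpl = fetch) {
  const response = await fetchImpl(`${HELIOS_BASE_URL}/v1/models/mental-wellness-activity`, {
    method: "POST",
    headers: { "x-api-key": apiKey, "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  if (!response.ok) throw new Error(`Helios API returned ${response.status}`);
  return response.json(); // expected shape: { id, activityLink }
}
```

Keeping this call on the backend ensures the Activation Key is never shipped to the browser.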

Call the endpoint and you should receive a JSON response similar to this:

{
  "id": "ddebd75b-99b3-42b4-ac65-5ce852c688e9",
  "activityLink": "https://..."
}

The id field is a unique identifier for the new Model Run that was created for you. We'll use this in step 3 to retrieve results of the wellness model. The activityLink field is a URL that will be used in step 2.

Step 2 - Users complete activity

Pass the activityLink from the previous step to your frontend and embed it in an iframe - exactly how you do this will depend on your infrastructure:

<iframe src="activityLink goes here" allow="microphone"></iframe>

Depending on your required user experience you might want to make this full screen or a fixed size.

As soon as the iframe is loaded your users will see the thymia activity flow begin, starting with instructions and then proceeding to a check of the user's hardware (for example ensuring their microphone is working). The main activity then starts, where the user's speech is recorded. When they have finished speaking they click Complete.

The wellness model requires at least 20 seconds of speech. However, recordings longer than 30 seconds mean longer processing times before results are available.

Your frontend code will need to know when the activity has been completed so it can take control again. This is achieved by listening for a JavaScript event we publish called thymia:activity-complete-ok:

const ACTIVITY_COMPLETE_OK_EVENT = "thymia:activity-complete-ok";

window.addEventListener("message", (event) => {
    if (event.data === ACTIVITY_COMPLETE_OK_EVENT) {
        // Handle activity complete event
        // e.g. take user elsewhere or start polling for results
    }
});
What you do upon receiving this event is up to you; usually removing the iframe is the first thing and then initiating polling for results will follow, as in step 3.

Before we move on to the final step, here’s a sample React component that wraps use of the Activity Plugin:

import { useEffect } from "react";

const ACTIVITY_COMPLETE_OK_EVENT = "thymia:activity-complete-ok";

export default function ThymiaActivityPlugin({ activityLink, onComplete }) {
    // Listen for the completion event posted by the embedded activity,
    // removing the listener when the component unmounts
    useEffect(() => {
        const handleMessage = (event) => {
            if (event.data === ACTIVITY_COMPLETE_OK_EVENT) onComplete();
        };
        window.addEventListener("message", handleMessage);
        return () => window.removeEventListener("message", handleMessage);
    }, [onComplete]);

    return (
        <iframe
            src={activityLink}
            allow="microphone"
            style={{ width: "100%", height: "100%", border: "none" }}
        />
    );
}

Step 3 - Retrieve wellness results

As soon as users complete their activity in the previous step the wellness model will begin processing the recording of speech that was automatically collected. Your backend can now start polling the Helios API for results. Once results are available you can display them to your users in the form you want.

Here we're assuming you want to show wellness results directly to your users as soon as they're available. Depending on your workflow you may not want this - instead perhaps you want to poll for results on the backend and send them onwards (to a queue or email for example), or maybe not retrieve results until they are required at some point in the future. As we'll see, as long as you retain the unique identifier of the model run from step 1 you can always access results when you need them.

Results polling is achieved by calling a second endpoint, /v1/models/mental-wellness/{model_run_id}. The model_run_id in the URL should be the unique identifier returned in step 1:

curl -H 'x-api-key: your_key_here' \
  'https://<helios-api-host>/v1/models/mental-wellness/ddebd75b-99b3-42b4-ac65-5ce852c688e9'

Wellness results will take a few seconds to be available, so the response from this endpoint should be examined to see what the current status is. Use the value of the status field to decide how to proceed:

  • CREATED or RUNNING: Results are not yet available so keep polling this endpoint. We suggest calling every 3 seconds.

  • COMPLETE_OK: Model execution completed ok and results are available elsewhere in the endpoint response.

  • COMPLETE_ERROR: Model execution completed with error so no results available. Error details available elsewhere in the endpoint response.

See the API reference for full details of the endpoint response.
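The polling logic above can be sketched as follows. Here `fetchStatus` stands in for your own function that calls the results endpoint and returns the parsed JSON body; the helper itself is illustrative, not part of the Helios API:

```javascript
const TERMINAL_STATUSES = ["COMPLETE_OK", "COMPLETE_ERROR"];

// Polls fetchStatus() until the model run reaches a terminal status.
// fetchStatus is your own function that GETs
// /v1/models/mental-wellness/{model_run_id} and returns the JSON body.
async function pollForResults(fetchStatus, intervalMs = 3000) {
  for (;;) {
    const body = await fetchStatus();
    if (TERMINAL_STATUSES.includes(body.status)) return body;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```

The 3-second default interval matches the suggestion above; in production you may also want a timeout or maximum attempt count.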

If your endpoint response was in status COMPLETE_ERROR, examine the errorReason and errorCode fields for more details. errorReason is intended for developers to read while debugging, while errorCode is intended to be checked programmatically to decide what to do next in your app. When using the Activity Plugin, the most common values for errorCode are:

  • ERR_RECORDING_TOO_SHORT or ERR_TRANSCRIPTION_FAILED: In both cases the recording of the activity contained less than the minimum required amount of 20 seconds of speech. Ask your users to speak for a little longer and then present them with a new activity by returning to step 1.

  • ERR_RECORDING_TOO_LONG: The recording of the activity exceeded the maximum allowed length of 3 minutes. This happens rarely, but if it does, ask your users to speak for less time and then present them with a new activity by returning to step 1.
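One way to act on these codes is a small dispatch like the sketch below. The action names returned here are our own illustrations, not API values:

```javascript
// Maps Helios errorCode values to an illustrative next action for your app.
function nextActionFor(errorCode) {
  switch (errorCode) {
    case "ERR_RECORDING_TOO_SHORT":
    case "ERR_TRANSCRIPTION_FAILED":
      return "ask-user-to-speak-longer"; // then create a new model run (step 1)
    case "ERR_RECORDING_TOO_LONG":
      return "ask-user-to-speak-less"; // then create a new model run (step 1)
    default:
      return "log-and-investigate"; // consult errorReason while debugging
  }
}
```

Whatever action you choose, the recovery path for these common codes is the same: return to step 1 and create a new model run.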

Assuming the endpoint response was in status COMPLETE_OK you can now read the wellness results. These can be found in the results field:

{
  "status": "COMPLETE_OK",
  "results": {
    "sections": [
      {
        "startSecs": "0",
        "finishSecs": "8.6",
        "transcript": "The North Wind and the Sun had a quarrel about which of them was the stronger....",
        "distress": { "value": "0.333" },
        "stress": { "value": "0.333" },
        "exhaustion": { "value": "0.333" },
        "sleepPropensity": { "value": "0.333" },
        "lowSelfEsteem": { "value": "0.333" },
        "mentalStrain": { "value": "0.333" }
      }
    ]
  }
}

Note: The results field structure allows for multiple section children, each one covering a different timed section of the recording. However, currently there will always be a single section within the results structure, covering the full recording.

The wellness results consist of 6 scores, with each one linked to its own thymia-created AI model. All models produce a score from 0 to 1:

  • Mental Strain produces a continuous score as a percentage

  • All other scores are presented in 4 distinct buckets:

    • Low (0.0)

    • Moderately Low (0.33)

    • Moderately High (0.66)

    • High (1.0)
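Turning raw scores into display values can be sketched as below. We assume score values may arrive as strings like "0.333" (as in the sample response above), so bucketed scores are snapped to the nearest bucket; the helper names are our own:

```javascript
const BUCKETS = [
  [0.0, "Low"],
  [0.33, "Moderately Low"],
  [0.66, "Moderately High"],
  [1.0, "High"],
];

// Labels a bucketed score by snapping to the nearest bucket value
// (sample responses show values like "0.333", so we match approximately).
function bucketLabel(rawValue) {
  const value = Number(rawValue);
  let best = BUCKETS[0];
  for (const bucket of BUCKETS) {
    if (Math.abs(bucket[0] - value) < Math.abs(best[0] - value)) best = bucket;
  }
  return best[1];
}

// mentalStrain is continuous, so it is presented as a percentage instead.
function mentalStrainPercent(rawValue) {
  return `${Math.round(Number(rawValue) * 100)}%`;
}
```

Check the exact value formats in your own responses before relying on this snapping behaviour.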

Bucketing is based on benchmarking against relevant users in our database who match the assessed individual in age, gender and language (with accent). In other words, a low score means the individual scores low compared to other users in our database with the same age, gender and accented language. This is why it is important to pass us accurate demographic information for each user: inaccurate information will result in poor comparisons and less accurate outputs.

Our whitepaper contains guidance to help decide how to present these scores to users. Please note that based on customer feedback we have updated some terms from the raw API results when recommending how to communicate with users:

  • To make clearer the distinction between fatigue related to workload (burnout) versus fatigue related to sleep issues (tiredness):

    • The score exhaustion in API results is referred to as "burnout" in the whitepaper

    • The score sleepPropensity in API results is referred to as "tiredness" in the whitepaper

  • To allow clearer explanations of results, the score lowSelfEsteem in API results is referred to as "confidence" in the whitepaper. This change also requires an inversion of the score depending on the bucket:

    • lowSelfEsteem 0.0 -> confidence 1.0

    • lowSelfEsteem 0.33 -> confidence 0.66

    • lowSelfEsteem 0.66 -> confidence 0.33

    • lowSelfEsteem 1.0 -> confidence 0.0
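Because the buckets are not symmetric around 0.5 (1 − 0.33 gives 0.67, not 0.66), this inversion is best implemented as a lookup rather than arithmetic. A sketch, again snapping raw values like "0.333" to the nearest bucket first; the function name is our own:

```javascript
// Bucket inversion table taken from the whitepaper mapping above.
const LOW_SELF_ESTEEM_TO_CONFIDENCE = { "0": 1.0, "0.33": 0.66, "0.66": 0.33, "1": 0.0 };

const BUCKET_VALUES = [0.0, 0.33, 0.66, 1.0];

// Converts a lowSelfEsteem bucket score to the whitepaper's "confidence"
// score, snapping the raw value to the nearest bucket before the lookup.
function confidenceFromLowSelfEsteem(rawValue) {
  const value = Number(rawValue);
  const nearest = BUCKET_VALUES.reduce((a, b) =>
    Math.abs(b - value) < Math.abs(a - value) ? b : a
  );
  return LOW_SELF_ESTEEM_TO_CONFIDENCE[String(nearest)];
}
```

A lookup also makes the intent explicit if the bucket values ever change.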

Where next?