
Tools and settings

The tools section contains advanced settings that help you fine-tune your AI-agent's performance. You can calibrate language understanding through NLU, configure conversational behaviour, add constant values that stay the same across conversations, and prepare for upcoming Voice capabilities. Together, these tools elevate your AI-agent's performance and user engagement.

Access tools

To access the tools section:

  1. Go to Automation > Train > Intents > Tools

The tools section consists of five segments:

  • Test your bot: Analyse user-input sentences.
  • Conversation: Tailor conversation settings, such as Behavior, Intelligent switching, Step validation, and Auto skip settings.
  • NLU: Customize technical specifics based on training.
  • Constants: Incorporate constant values stored in the database, such as names or filenames, ensuring continuity throughout interactions.
  • Voice: Adjust settings linked to the Voice feature on our platform.

Test your bot

This section helps you check whether the associated intent will be triggered or not. The AI-agent utilizes prediction to determine the intention behind the input and evaluates the accuracy of the prediction. It also identifies the AI-agent's response based on the input text, including automatically recognized entities.

For instance, when you type an utterance in the "What user says?" section, you'll see the corresponding response and its confidence level displayed along with the related flow.


Response parameters

The following is a comprehensive description of the parameters in the response received for the typed input.

  • Intent: The intention the AI-agent predicts behind the utterance that you typed to test flow prediction.
  • Confidence: A score from 0 to 1 reflecting how certain the predicted intent is; higher values indicate greater certainty. Confidence is 1 when the AI-agent is absolutely sure about the intent, and a prediction with confidence above 0.8 is considered accurate.
  • Default Response: The AI-agent's response based on the input text.
  • Entity: Words or phrases representing nouns within the text. For example, in "I want to buy a phone", Buy is the intent and Phone is the entity.
  • Global Entity: Entities (for example, dates and countries) recognized automatically by the platform. For dates: DD-MM-YY, Today, Yesterday, Tomorrow, and more. For countries: Japan, India, etc.
  • Global Model: Pre-trained phrases, such as Small Talk or Contexts, that help identify the intent.

Global model vs Global entity

  • Global Model identifies values based on phrases trained in Small talk and Context management. You can add multiple contexts based on your industry use case.

  • Global entities identify values trained by the platform, currently only for Dates and Locations. You cannot add, delete, or modify this training.

To access global entities, use this snippet: {{{prediction.globalEntities.0.text}}}.
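The snippet above reads the first global entity off the prediction object. As a rough illustration only, assuming a prediction payload shaped like the response parameters described earlier (the exact field names are an assumption, not the platform's documented schema), the same lookup in Python might look like this:

```python
# Hypothetical prediction payload for illustration: the field names
# (intent, confidence, globalEntities) mirror the response parameters
# above but are assumptions, not the platform's exact schema.
prediction = {
    "intent": "book_flight",
    "confidence": 0.92,
    "globalEntities": [
        {"type": "date", "text": "tomorrow"},
        {"type": "country", "text": "Japan"},
    ],
}

def first_global_entity_text(prediction):
    """Return the text of the first recognized global entity, if any."""
    entities = prediction.get("globalEntities", [])
    return entities[0]["text"] if entities else None

print(first_global_entity_text(prediction))  # tomorrow
```

This is the same access pattern as the template snippet: the first element of the global-entities array, then its text field.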

Identify the emotion via Verbose

Enable Verbose to identify the emotion (sentiment) behind the text.


Test your bot in multiple languages

You can try out your AI-agent in various languages by just picking your preferred language from the dropdown menu.

note

To add languages to your AI-agent, check out the steps here.

Conversation

Here you can manage how conversations unfold, how messages are shown, and various elements related to the conversation.

You can oversee modifications through the following categories:

Update AI-agent behaviour

This section consists of fields to control the behaviour of the conversation.

  • Target language: The default language in which the AI-agent converses before auto-detection or a language change occurs. You can modify this if required.
  • Translate quick reply responses: Translates quick reply responses. (Currently does not work.)
  • Enable Hinglish: Set Yes to allow the AI-agent to understand Hinglish (Hindi + English) utterances, tailored for Indian users.
  • Auto Detect Language: Set Yes to enable the AI-agent to auto-identify the language a user types in and respond accordingly (if configured).
  • Enable Go Back/Go Home: Shortcut for users to move to the previous step or go back home.
  • Go back Aliases: Configure keywords that trigger the Go back action and navigate to the previous conversation step.
  • Go home Aliases: Specify keywords that trigger the Go home action and return to the beginning of the conversation.
  • Negation journey: The flow the AI-agent follows when a user rejects an action. For example, when the user inputs "I want to talk to the manager", the AI-agent takes them to the Transfer to Agent flow; if the user inputs "I don't want to talk to the manager", the AI-agent takes the selected negation journey.

Intelligent switching

This section helps the AI-agent to switch the conversation based on user input.

For instance, consider a scenario where:

  • AI-agent asks: "Choose the type of account," offering options like Savings account or Fixed deposit.
  • User responds: "Can you explain the difference between Savings account and Fixed deposit?"

With intelligent switching, the AI-agent would answer the user's query and then smoothly transition back to the previous flow, prompting the user to "Select a type of account" again.

  • Enable: Select Yes from the drop-down list to activate Intelligent Switching.
  • Sticky Journeys: Mark complex flows as sticky to minimize user interruption from the expected flow. When an interruption occurs, a customizable sticky-journey prompt encourages the user to stay on that path. If the user agrees, the current journey continues; if not, an alternative flow is suggested in a follow-up message.
  • Prompt for sticky journeys: The message displayed when a Sticky Journey is selected, encouraging users to complete the current flow.
  • Followup message: Redirects the conversation back to the desired flow, for example "Do you want to continue where you left off?" or "What would you like to do next?"

Step validation settings

This helps you configure the settings related to validating the steps involved in the AI-agent conversation. In simple words, you can configure this to validate prompts.

When platform quick replies are configured, they appear in WhatsApp as a list of items in text.

For example - What do you want to do next?

  1. Check order status
  2. Receive notification
  3. Go back to Main Menu

You can customise this format using the Whatsapp Quick reply index and Structure prefix fields.

  • Whatsapp Quick reply index: Choose the preferred indexing: Numbers (default), Alphabets, or Emojis (numerical emojis).
  • Structure prefix: The complete prefix, with formatting, shown before each option. Default is {{index}}, as shown above. For instance, Type {{index}} for will show Type 1 for Gate Mechanical, Type 2 for AE & JE Mechanical, which simplifies option selection.
  • Show prompt again: Set Yes from the dropdown to show the original prompt again after a validation failure message. For instance, when an incorrect phone number is entered, the prompt can be displayed again.
  • Enable limit on retries: Set Yes to enable the default retry limit (3 times).
  • Error message: The error message shown on validation failure. For example: Hey, you've reached the maximum retry limit.
  • Unknown message: The message shown when the system cannot validate the prompt response. For example: It seems I can't understand your input, could you rephrase it?
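To make the indexing and prefix options concrete, here is a minimal sketch of how a quick-reply list could be rendered as text. The function and its defaults are illustrative assumptions, not the platform's implementation; only the {{index}} placeholder comes from the documentation:

```python
# Sketch of applying the quick-reply index style and structure prefix.
# The function is an illustrative assumption, not platform code.
def render_quick_replies(options, index_style="numbers", prefix="{{index}}"):
    """Render options as a WhatsApp-style text list."""
    lines = []
    for i, option in enumerate(options):
        if index_style == "alphabets":
            index = chr(ord("A") + i)       # A, B, C, ...
        elif index_style == "emojis":
            index = f"{i + 1}\ufe0f\u20e3"  # keycap emoji, valid for 1-9
        else:                               # "numbers" (default)
            index = str(i + 1)
        lines.append(prefix.replace("{{index}}", index) + " " + option)
    return "\n".join(lines)

print(render_quick_replies(
    ["Check order status", "Receive notification", "Go back to Main Menu"]))
```

Passing prefix="Type {{index}} for" reproduces the documentation's example: Type 1 for Gate Mechanical.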

Autoskipping settings

Enable this option to allow the AI-agent to inform the user that it already has the information being provided.

You can skip a prompt using an entity or variable if its value already exists. This avoids asking users the same question multiple times and demonstrates your AI-agent's memory.

  • Acknowledgment: Activate this option for the AI-agent to receive user acknowledgment before automatically skipping the upcoming flow.
  • Acknowledgment prompt: The message displayed to the user when the AI-agent suggests auto-skipping the upcoming step in the flow.
  • Invalid prompt: The message shown when the user enters an invalid response.
  • Confirm button label: The label on the confirm button that users click to confirm the option can be skipped.
  • Modify button label: The label on the edit button that users click to modify their selection.

To configure autoskip at node level, click here.

Global autocomplete

In the What do you want to show for autocomplete option, you can select the choice that the AI-agent will use to autocomplete the user's input.

NLU

Prediction

Our machine learning system matches user input sentences to specific intents with confidence scores between 0 and 1. You can adjust the desired confidence level. The default confidence score on the platform is 0.85.

  • Min Confidence: The minimum value below which the AI-agent won't trigger an intent. For instance, if set to 0.85, the AI-agent responds only when the intent's confidence for the input is over 85%. Use case: when a user says Talk to your agent and Min Confidence is 0.85, the AI-agent responds correctly only if the predicted intent is Transfer Agent with a confidence of 1. However, if the user types Talk to tech support, the AI-agent won't reply because the confidence of the predicted intent is too low.
  • Context Confidence: The minimum confidence score required for context accuracy.
  • Secondary Model Confidence: A global contextual model threshold. If the predicted value is below this threshold, the intent won't trigger.
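The Min Confidence gate reduces to a simple comparison. In this sketch, the 0.85 default mirrors the platform's documented default, but the function itself and the None fallback are illustrative assumptions, not platform code:

```python
# Minimal sketch of Min Confidence gating; the 0.85 default mirrors the
# documented platform default, the rest is an illustrative assumption.
def resolve_intent(predicted_intent, confidence, min_confidence=0.85):
    """Trigger the predicted intent only if it clears the threshold."""
    if confidence >= min_confidence:
        return predicted_intent
    return None  # below threshold: fall back to an unknown/fallback flow

print(resolve_intent("Transfer Agent", 1.0))   # Transfer Agent
print(resolve_intent("Transfer Agent", 0.60))  # None
```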

Document search settings

  • Document search threshold: To improve document search accuracy, adjust the threshold between 0 and 1. At 0, irrelevant results might be shown; at 1, only closely matching results will be shown. An ideal confidence level is between 0.6 and 0.8; increase or decrease it based on the document cognition search results for your uploaded documents.
  • Boost document rank by: Choose whether the user query should preferentially match Headers or Paragraphs. This parameter can be used to boost document ranks. For example, if a document has a header containing the user's data with the rest of the content below it, and Boost document rank by is set to Headers, that document ranks higher because the query matches its header.
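The interplay of the threshold and the rank boost can be sketched as follows. This is purely illustrative: the scores, the filtering rule, and the boost amount are invented to show the idea, and the platform's actual scoring internals are not documented here:

```python
# Illustrative ranking sketch only; not the platform's scoring logic.
def rank_documents(matches, threshold=0.7, header_boost=0.2):
    """Filter by a similarity threshold, then boost header matches."""
    kept = []
    for m in matches:
        if m["score"] < threshold:
            continue  # below the document search threshold: drop it
        score = m["score"] + (header_boost if m.get("header_match") else 0)
        kept.append({"doc": m["doc"], "score": min(score, 1.0)})
    return sorted(kept, key=lambda m: m["score"], reverse=True)

results = rank_documents([
    {"doc": "faq.pdf", "score": 0.75, "header_match": False},
    {"doc": "users-guide.pdf", "score": 0.72, "header_match": True},
    {"doc": "legacy.pdf", "score": 0.40, "header_match": True},
])
print([r["doc"] for r in results])  # ['users-guide.pdf', 'faq.pdf']
```

Here the header match lifts users-guide.pdf above faq.pdf, while legacy.pdf is dropped by the threshold.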

Multi Intent Settings

Enabling Multi-Intent allows the model to identify two intents in a single user message. For example, if this option is on and the user types Book a flight and reserve a hotel (assuming the AI-agent is properly trained), the model will detect both Book a flight and Reserve a hotel as intents.

After detecting the intents, the model will acknowledge this (via an Acknowledgment message) and ask the user which task they want to start with. The options will be provided as quick reply choices.


Other options and the Go home option will also be suggested as quick replies, along with a follow-up message (similar to Intelligent Switching).

  • Enable: Set Yes to enable multi-intent.
  • Acknowledgement question: The acknowledgement message to display when multiple intents are detected. Sample message: I understand. What would you like to do first?
  • Followup question: The follow-up question to ask after the previous one. Sample message: Would you like to proceed?

Add constant values

This section helps you add values that remain constant throughout the conversation. It can be a person's name, a file name or any value that will not be modified as the conversation progresses.

Click the +Add Constants button to add constants, then click Save to store those values in the database.

Voice

note

You can configure these settings only when IVR Channel is connected. Click here to learn how.

The voice global options apply to all nodes and journeys of the AI-agent, while node-level options can be configured for each node specifically. When both a global option and a node-level option are defined for a node, the node-level option takes priority. For example:

  • Global level: You can select an STT/TTS engine globally so that you don't have to configure it for each node.
  • Node level: You can configure a different recording max duration for different nodes, e.g., 10 seconds for the address node and 5 seconds for the name node.

Voice bot global options/settings are classified by use as follows:

  1. Telephony: Settings related to telephony, such as call forwarding and calling line identity.
  2. Recording: Recording options, such as a beep sound after a question is asked.
  3. Speech to Text: Configure the speech recognition software that recognizes and translates spoken language into text.
  4. Text-to-Speech: Customise the Text-to-Speech (TTS) capabilities to play back text in a spoken voice.
  5. Conversation: Yellow cloud provides additional conversational options to further customize and elevate the experience on the IVR channel.
  6. Others: Miscellaneous settings to handle invalid and blank user responses and fallbacks.

note

Most of the options can be configured globally.

If they are configured at the node level, node level customisation takes priority over the global level settings.

Telephony

  • Custom SIP header: An additional parameter passed along when transferring the call to an agent, used to hand over AI-agent-collected information. You can pass key-value pairs in JSON format, which are sent in the SIP header.

An example of the Custom SIP header:

[{"key":"User-to-User", "value":"name=david&product=heater&query=not turning off&priority=high&number=12345"}]
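The example header above can be composed programmatically. In this sketch, the User-to-User key and the sample values come from the documentation's example, while the helper function itself is an illustrative assumption:

```python
import json

# Compose the Custom SIP header payload shown above. The helper is an
# illustrative assumption; only the key and sample values are documented.
def build_sip_header(fields):
    """Encode collected values as a [{"key": ..., "value": ...}] payload."""
    value = "&".join(f"{k}={v}" for k, v in fields.items())
    return json.dumps([{"key": "User-to-User", "value": value}])

header = build_sip_header({
    "name": "david",
    "product": "heater",
    "query": "not turning off",
    "priority": "high",
    "number": "12345",
})
print(header)
```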

Recording

  • Recording after call forward: When enabled, the call is recorded even after it has been transferred to an agent. Disable this for use cases involving sensitive information.
  • Enable recording beep: When enabled, a beep sound plays after the AI-agent asks a question, giving the end-user an auditory cue to respond.
  • Recording Action: With the recording management options, you can pause, resume, or stop recording depending on the use case and conversation. By default, recording is ON. Note that once you STOP the recording in a call (for example, to skip sensitive dialogues), it cannot be resumed.
Speech to Text

  • STT engine: Select an engine from the dropdown: Google or Microsoft.
  • STT mode: Select a mode from the dropdown. Microsoft provides Static, Streaming, or Streaming Advanced; Google provides Static.
  • STT language: Select the AI-agent language (ISO code) from the dropdown. Default: English. See Microsoft or Google for more information on supported languages.
  • Recording max duration: The maximum duration the AI-agent waits after asking a question (in any step), even while the user is still speaking. For example, after asking "Which city are you from?" with a recording max duration of 5, the AI-agent records only 5 seconds of the user's response. This avoids consuming unwanted information and keeps the conversational flow: if the user mistakenly replies with long paragraphs, or the response is shadowed by constant background noise, the AI-agent does not process those long inputs and can quickly process only the necessary response.
  • Recording silence duration: While recording max duration caps the total length of a user response, this parameter makes the conversation more lively and realistic by capping silence. It is the maximum SILENCE duration the AI-agent will wait for after asking a question (in any step) for the user to respond. Note that it applies across the whole user response: silence at any point, whether (a) initial thinking/processing time or (b) pauses mid-response, must not exceed the configured duration. Applicable with Microsoft and Google when STT mode is set to Static.
  • Initial silence duration: The Streaming and Streaming Advanced STT modes (Microsoft STT engine) let you specifically configure the maximum acceptable silence before the user starts speaking. For example, the acceptable initial silence for an application-number question could be higher (~3-4 seconds), while a quick conversational binary question could be configured to 1 second.
  • Final silence duration: Similar to the initial silence duration, this is the maximum pause the AI-agent will wait for once the user has started speaking. For example, for binary/one-word questions like yes/no, set it to ~0.5-1.0 seconds; for address-like fields where pausing is intrinsic to the conversation, set it to ~1.5-2.5 seconds.
Text-to-Speech

  • TTS engine: Select an engine from the dropdown: Microsoft Azure, Google Wavenet, or Amazon Polly.
  • Text type: Select Text or SSML from the dropdown.
  • TTS language: Select the AI-agent language (ISO code) from the dropdown.
  • Pitch: Any decimal value depending on the required base voice; 0 is ideal. You can set this for Microsoft when text_type = "text", and for Google when text_type = "text" or "SSML".
  • Voice ID: Type the characters of the voice ID. You can set this for Microsoft when text_type = "text", and for Google when text_type = "text" or "SSML".
  • TTS Speed: How fast the AI-agent speaks. A value of 0.9 - 1.5 makes the AI-agent sound human. You can set this for Microsoft when text_type = "text", and for Google when text_type = "text" or "SSML".
Conversation

  • Enable acknowledgement message: When enabled, an acknowledgement-style message ("hmmm" or "okay") can be spoken immediately in the conversation. This is a small custom feature built to bring a more human touch to the conversation.
  • Acknowledgement message: Enter a text/SSML message depending on the configuration under the Text Type field. Keep it short for a better user experience. Example: "Do you want to confirm?"
  • Boost phrases: Some user responses can be confusing for the AI-agent to understand. Region-specific words, new Gen-Z lingo, internet terminology, trending phrases, and abbreviations can be specially trained so that the AI-agent understands the exact intention. For example, COVID is a new, frequently used term; the phrase COVID must be boosted, otherwise it gets transcribed as kovind/go we/co-wid, etc. Add the phrases you expect in user responses, for example: I want to take covid vaccine.

Other voice settings

  • Repeat limit: For blank user responses to a question, the number of times a repeat message is played. For example, if the value is 3, the AI-agent asks the user to respond 3 times before following the fallback configuration.
  • Repeat fallback flow: The conversation fallback used when the user response stays blank even after repeated tries. Currently only disconnect and agent transfer are supported as fallback options.
  • Disconnect message: The message played before disconnecting the call as part of the fallback. For example, "Have a nice day. Bye!"
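The repeat-then-fallback behaviour described above can be sketched as a simple loop. The responses list stands in for real caller audio; the two fallback names come from the documented options (disconnect, agent transfer), and the rest of the logic is an illustrative assumption:

```python
# Sketch of the Repeat limit / fallback behaviour; illustrative only.
def handle_blank_responses(responses, repeat_limit=3, fallback="disconnect"):
    """Re-ask on blank replies, then fall back after repeat_limit tries."""
    attempts = 0
    for response in responses:
        if response:                      # non-blank reply: proceed
            return ("answered", response)
        attempts += 1                     # blank: play the repeat message
        if attempts >= repeat_limit:
            return ("fallback", fallback)
    return ("fallback", fallback)

print(handle_blank_responses(["", "", ""]))    # ('fallback', 'disconnect')
print(handle_blank_responses(["", "Mumbai"]))  # ('answered', 'Mumbai')
```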