Voice FAQs
What are the languages supported for Voice Bot?
Language support depends on the STT/TTS engine selected; for example, refer to the list of languages supported by the Microsoft engine.
Can yellow.ai voice bots support DTMF inputs?
Yes, voice bots support both speech recognition and DTMF (keypad) inputs. Learn more here.
How does a voice bot work with a third-party CRM?
It can integrate with any CRM to fetch information or post back updates, as long as APIs are available to configure.
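As a rough illustration, assuming the CRM exposes a REST API with hypothetical endpoints (`/contacts/{phone}` and `/tickets`) and a bearer token, the bot's backend could fetch and post data along these lines (endpoint paths, field names, and credentials below are placeholders, not a specific CRM's API):

```python
import requests

CRM_BASE_URL = "https://crm.example.com/api"   # hypothetical CRM endpoint
API_TOKEN = "YOUR_CRM_API_TOKEN"               # hypothetical credential
HEADERS = {"Authorization": f"Bearer {API_TOKEN}"}

def fetch_contact(phone_number: str) -> dict:
    """Look up the caller's record in the CRM by phone number."""
    resp = requests.get(f"{CRM_BASE_URL}/contacts/{phone_number}",
                        headers=HEADERS, timeout=10)
    resp.raise_for_status()
    return resp.json()

def post_update(contact_id: str, summary: str) -> None:
    """Write a call summary back to the CRM after the conversation ends."""
    payload = {"contact_id": contact_id, "note": summary}
    resp = requests.post(f"{CRM_BASE_URL}/tickets",
                         json=payload, headers=HEADERS, timeout=10)
    resp.raise_for_status()
```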
How can the voice bot transfer contextual information (like name, number, etc.) collected from the end user to the contact center?
We can use SIP Header transfer or Tonetag transfer to pass extra information while transferring the call.
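For instance, with a SIP Header transfer the collected values are typically attached as custom X- headers on the transfer leg, which the contact center then reads. The header names in this minimal sketch are hypothetical; the actual names must match what the contact-center side is configured to parse:

```python
def build_transfer_headers(name: str, phone: str, intent: str) -> dict:
    """Build custom SIP X-headers carrying conversation context to the contact center.

    Header names are illustrative only; agree on the exact names with the
    contact-center team so their SIP trunk / agent desktop can read them.
    """
    return {
        "X-Caller-Name": name,
        "X-Caller-Number": phone,
        "X-Bot-Intent": intent,
    }

# Example: context gathered during the bot conversation
headers = build_transfer_headers("Asha Rao", "+919800000000", "order_status")
print(headers)
```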
What are the STT engines provided for configuration?
Currently, we have native integrations with Microsoft and Google for our STT services.
Can the bot be configured for regional languages?
Yes, a voice bot (same as a chatbot) can be configured for multiple languages.
How can a voice bot accurately capture alphanumeric inputs from user speech?
Accuracy depends on many factors, such as the complexity of the input and background noise. If the list of expected values is available (for example, a list of Product IDs or Order IDs), we can train the bot on them using boost phrases, as illustrated in the sketch below.
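At the STT layer this corresponds to speech adaptation. As a minimal sketch using Google Cloud Speech-to-Text (one of the supported engines), the known IDs can be passed as boosted phrases so the recognizer favours them; the sample IDs and boost value below are placeholders, and the platform exposes this through its own boost-phrase configuration rather than direct API calls:

```python
from google.cloud import speech

def transcribe_with_boost(audio_bytes: bytes, known_ids: list[str]) -> str:
    """Transcribe telephony audio, biasing recognition toward known alphanumeric IDs."""
    client = speech.SpeechClient()
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=8000,          # typical telephony sample rate
        language_code="en-US",
        speech_contexts=[
            # Boost value is illustrative; tune it per use case.
            speech.SpeechContext(phrases=known_ids, boost=15.0)
        ],
    )
    audio = speech.RecognitionAudio(content=audio_bytes)
    response = client.recognize(config=config, audio=audio)
    return response.results[0].alternatives[0].transcript if response.results else ""
```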
Can the voice bot dynamically understand different languages and, if required, switch the language on the fly?
Yes, this can be done using the Auto-Language Detection feature. Currently, this is under Beta. Learn more here.
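Under the hood, this relies on the STT engine's ability to pick the most likely language per utterance. The snippet below is only an illustration of that engine-level mechanism using Google Cloud Speech-to-Text alternative language codes, not the platform's Auto-Language Detection configuration itself; the language codes are example choices:

```python
from google.cloud import speech_v1p1beta1 as speech

def detect_and_transcribe(audio_bytes: bytes) -> tuple[str, str]:
    """Transcribe an utterance, letting the engine choose among candidate languages."""
    client = speech.SpeechClient()
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=8000,
        language_code="en-IN",                          # primary language
        alternative_language_codes=["hi-IN", "ta-IN"],  # other candidates the caller may use
    )
    audio = speech.RecognitionAudio(content=audio_bytes)
    response = client.recognize(config=config, audio=audio)
    if not response.results:
        return "", ""
    result = response.results[0]
    # result.language_code reports which language was actually detected
    return result.language_code, result.alternatives[0].transcript
```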
Why is the voice data different in the Insights and Engage dashboards?
In Engage, there is a 2-5 minute window for checking the status of voice campaign calls. During this time, calls are queued in the voice queue, and their status is then sent in the notification report. If the call status remains unchanged after this period, Engage considers the calls as failed to connect and moves the users to the next node. Hence, there might be a mismatch between the data displayed on the Insights and Engage dashboards/reports.