Yellow.ai's LLM (Large Language Model) integration is designed for scenarios where the objective is to obtain a single response from GPT and then conclude the interaction with that response. It can be used in a variety of situations, such as answering frequently asked questions (FAQs) for your customers.
Configure LLM in Yellow.ai
Once you configure the LLM integration, the LLM node becomes available and can be used to set up the interaction. To connect your LLM account with Yellow.ai, follow these steps:
- Log in to your Yellow.ai account at cloud.yellow.ai and go to Integrations. Search for LLM.
- Fill in the following fields:
- Give account name: Provide a name for your LLM account.
- LLM Provider: Choose your LLM provider.
You can derive the API Key and Resource from the endpoint of your GPT 3.5 or GPT 4 credentials, as described below:
- For GPT 3.5: Endpoint URL (API Key) from the cURL command of your credentials.
- For GPT 4: Endpoint URL (API Key) from the cURL command of your credentials.
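If your provider exposes an Azure OpenAI-style endpoint (an assumption; your endpoint format may differ), the Resource is the first label of the endpoint's hostname. A minimal sketch of pulling the resource name out of such a URL, using a made-up endpoint for illustration:

```python
from urllib.parse import urlparse

def resource_from_endpoint(endpoint_url: str) -> str:
    """Return the resource name, i.e. the first label of the
    endpoint's hostname (assumed to be *.openai.azure.com)."""
    host = urlparse(endpoint_url).hostname or ""
    return host.split(".")[0]

# Hypothetical endpoint for illustration only.
endpoint = ("https://my-resource.openai.azure.com/openai/deployments/"
            "gpt-4/chat/completions?api-version=2023-05-15")
print(resource_from_endpoint(endpoint))  # my-resource
```

The API Key itself is not part of the URL; it comes from the `api-key` header of the same cURL command.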
- Click Connect.
Once the integration is successful, the LLM node will be available in the Studio section.
Build a flow to use LLM node
- Go to Studio and create a flow for your use case.
- Include the LLM node at the point in the flow where you want it to take over. To do this, navigate to Integrations and select LLM.
- Since the LLM configuration does not happen via a variable, disable the Var toggle in the LLM configuration field and click the gear icon.
- In Model name, click OR and enter the relevant model name. For example, for GPT 4, enter GPT-4.
- In the User query input field, disable the Var toggle and click the gear icon.
- Disable both the Var toggles and click the gear icon.
- Click OR and type any text.
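Conceptually, the configured node issues one chat-completion request containing the user query and returns the single response. A sketch of the request payload such a call would carry, assuming the widely used OpenAI chat-completions format (the field names are an assumption about the provider API, not Yellow.ai internals):

```python
import json

def build_llm_request(model: str, user_query: str) -> dict:
    # One-shot request: a single user message, one completion expected.
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_query}],
        "n": 1,
    }

payload = build_llm_request("GPT-4", "What are your store hours?")
print(json.dumps(payload, indent=2))
```

Because the interaction ends with this single response, no conversation history needs to be carried between turns.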