| Key Term | Description |
| --- | --- |
Agent Access Token | An Agent Access Token (also sometimes called an API key) is a long string of text that is included in API calls to Clarity agents as part of the authentication step. |
Application Programming Interface (API) | An API acts as a messenger that enables different software applications to communicate with each other. For example, in interacting with a Large Language Model (LLM), an API allows you to programmatically send prompts to the LLM and receive responses. |
Completions | Completions are the outputs an LLM generates in response to a prompt. For instance, if you ask the model a question, the completion is the answer it provides. |
Custom Agent | A Clarity custom agent allows users to interact with an agent created for a specific use case. Similar to ChatGPT’s custom GPTs, custom agents let you define which LLM to use, upload contextual files, and configure settings like the system prompt and temperature. Specific user groups can be granted access to the custom agent within the Clarity user portal. |
Fine-tuning | Fine-tuning improves LLM performance by further training a pre-trained model on a smaller, topic-specific dataset. This additional training enhances skill in specific tasks but requires time and high compute costs to complete. Clarity does not currently support fine-tuning of models. A popular alternative to fine-tuning is to use methods such as RAG to help ground responses from a generic LLM on specific knowledge sources. |
Hallucinations | Hallucinations occur when a large language model produces content that is fabricated or factually incorrect while presenting it as true. Please verify the accuracy of generated answers and remember that you are responsible for any content you create or share, including AI-generated content. |
Latency | Latency is the time delay between the submission of a prompt to an AI model and receiving a response. Lower latency means quicker responses. |
Management Portal | The Clarity Management Portal is the administrative interface you can access as the owner of a custom agent or an API key. Use it to configure custom agents, generate API keys, and perform other administrative tasks. |
Prompt | A prompt is a sentence or question you give to an AI model to get a response. It’s like asking a question or giving instructions to the AI. For example, if you type “Tell me a joke” in a chatbot, “Tell me a joke” is your prompt. The AI will then use this prompt to generate a reply, such as a joke in this case. |
Prompt Engineering | The technique of designing prompts to produce the most relevant responses from an AI model. |
Retrieval-Augmented Generation (RAG) | RAG is a method that enhances LLM responses by combining information retrieval with content generation. This allows the model to access specialized datasets and files, producing more relevant and accurate outputs. |
System Prompt | A system prompt is the predefined text that sets an agent’s personality and includes rules for generating responses. In Clarity, you can customize this prompt for custom agents and choose whether to show it to the end users. For more information, please refer to the documentation on how to craft an effective system prompt for your custom agent. |
Temperature | In the context of LLMs, temperature is a parameter that controls the randomness of the text generated by the model. The closer the temperature is to 0, the more predictable the responses will be; at higher temperatures (closer to 1), responses become more creative and varied, sometimes at the expense of accuracy. Use a lower temperature for use cases where accuracy is important, and a higher temperature for agents that support creative work. This setting can be configured for custom agents in Clarity. |
Token | Tokens are the individual units of text (words, subwords, or characters) that models process, and they are used to measure the length of chat inputs and outputs. The process of breaking text into tokens is called tokenization. |
User Portal | The Clarity User Portal is where you chat with Clarity’s agents; you can access it at https://clarity.yale.edu. If you have access to a custom agent, you can select it from the Agent dropdown and interact with it as you would with any other Clarity agent. |
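Several of the terms above (agent access token, prompt, completion, temperature) come together in a typical API call. The sketch below is a minimal illustration only: the header names, payload fields, and token value are assumptions for illustration, not Clarity’s documented request format, so consult the Clarity API documentation for the actual details.

```python
# Hypothetical sketch of assembling a request to a Clarity agent.
# Header names, payload fields, and the token below are ASSUMED
# for illustration; they are not Clarity's documented API.
import json

AGENT_ACCESS_TOKEN = "sk-example-token"  # placeholder; never hardcode real keys


def build_request(prompt: str, temperature: float = 0.2) -> dict:
    """Assemble headers and body for a hypothetical completion call."""
    headers = {
        # The access token authenticates the call (see "Agent Access Token").
        "Authorization": f"Bearer {AGENT_ACCESS_TOKEN}",
        "Content-Type": "application/json",
    }
    body = {
        "prompt": prompt,            # the user's input (see "Prompt")
        "temperature": temperature,  # near 0 = predictable, near 1 = creative
    }
    return {"headers": headers, "json": body}


# The completion would be the text the agent returns for this request.
request = build_request("Tell me a joke", temperature=0.8)
print(json.dumps(request["json"], indent=2))
```

Sending the request (for example with an HTTP client library) and reading the response body would then yield the completion; the separation of authentication headers from the prompt payload shown here is a common pattern in LLM APIs.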