You can integrate AI into your RevCent account through the use of a RevCent AI Assistant. Add an OpenAI account to RevCent, create a RevCent AI Assistant, and take advantage of AI via web chat or automation.
A RevCent AI Assistant combines your RevCent account with the capabilities of AI using OpenAI's technology. You can create multiple RevCent AI Assistants, each performing different roles.
An AI Assistant can be one of two types: Autonomous or Web App.
Select the assistant type, which determines how the AI Assistant will be used. Additional settings are available depending on the assistant type chosen.
When an assistant is of type Autonomous, you do not message directly with the AI. Instead, the assistant runs automatically based on its trigger settings. Communication with OpenAI, including messages and actions, happens in the background.
When an assistant is of type Web App, it is interactive. You message directly with the AI and wait for a response through the AI chat box in the RevCent web app. Simply type your messages and the AI will respond.
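For illustration only, the sketch below shows the kind of message/response round trip a Web App assistant's chat box wraps, using OpenAI's Python SDK. It is not RevCent's actual implementation; the model name, system prompt and example question are placeholder assumptions, and RevCent performs these calls for you.

```python
# Illustrative only: RevCent handles this round trip for you in the web app.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

history = [
    {"role": "system", "content": "You are a RevCent AI Assistant."},  # placeholder prompt
]

def send_message(user_text: str) -> str:
    """Send one user message and return the AI's reply."""
    history.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(
        model="gpt-4o",   # placeholder; the model comes from your integration settings
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(send_message("Summarize yesterday's sales."))  # hypothetical question
```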
There are important concepts to understand when using a RevCent AI Assistant, especially as they relate to performance and costs within your OpenAI account.
When creating or modifying the third party integration linked to the AI Assistant, you should always select the most advanced model OpenAI has available. Selecting an older model can cause problems such as fewer allowed context tokens, limited capabilities and overall poorer results.
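If you are unsure which models your OpenAI account can currently access, a short sketch like the one below, using the openai Python package's models endpoint, will list them so you can pick the most advanced one in the integration settings. This is purely illustrative; the output depends entirely on your account.

```python
# List the models available to your OpenAI API key so you can choose
# the most advanced one in the RevCent third party integration settings.
from openai import OpenAI

client = OpenAI()

for model in sorted(client.models.list(), key=lambda m: m.id):
    print(model.id)
```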
When you want to change the model for an AI Assistant, you must do the following:
RevCent does not charge extra for integrating AI or using a RevCent AI Assistant. However, OpenAI usage does cost money and can be expensive depending on the model used and that model's price per token. RevCent is not responsible for charges you incur within your OpenAI account. However, RevCent does provide the ability to limit token usage via usage limits.
Each time a RevCent AI Assistant interacts with the AI, tokens are consumed. The price per token differs based on the OpenAI model chosen in your third party integration settings. RevCent makes a best effort to minimize the number of tokens consumed, but most interactions with the AI will consume numerous tokens given the nature of combining RevCent data with AI capabilities. For example, when you ask the AI for information on a sale, the AI must analyze the entire sale, which consumes on average 2,700 tokens. Please take token usage into consideration when creating and using a RevCent AI Assistant.
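To get a rough sense of token counts (and therefore cost) before a thread runs, you can count tokens locally with OpenAI's tiktoken library, as in the sketch below. The price per token is a made-up placeholder and the sale payload is hypothetical; check OpenAI's current pricing for the model you selected, and note that very new models may need a recent tiktoken release.

```python
# Rough local estimate of tokens and input cost for a piece of text.
import tiktoken

PRICE_PER_INPUT_TOKEN = 0.0000025  # placeholder rate, NOT current OpenAI pricing

def estimate(text: str, model: str = "gpt-4o") -> tuple[int, float]:
    """Return (token_count, approximate_input_cost) for the given text."""
    encoding = tiktoken.encoding_for_model(model)
    tokens = len(encoding.encode(text))
    return tokens, tokens * PRICE_PER_INPUT_TOKEN

sale_payload = "{ ...serialized sale record from RevCent... }"  # hypothetical
tokens, cost = estimate(sale_payload)
print(f"{tokens} tokens, roughly ${cost:.4f} of input")
```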
When a new web chat is started, or an autonomous assistant is triggered, a thread is generated within OpenAI. A thread in OpenAI is a series of back-and-forth messages between the RevCent AI Assistant and the AI.
Examples:
A new web chat started in the RevCent web app creates its own thread, containing the messages between you and the AI.
Each time an autonomous assistant is triggered, a new thread is created for that run.
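Behind the scenes, each of these corresponds to a thread object in OpenAI's Assistants API. The sketch below, assuming a recent version of the openai Python package, illustrates the objects involved; the assistant ID and message are placeholders, and RevCent manages all of this for you.

```python
# Illustrative only: one thread per web chat or autonomous run, with messages
# appended to the thread and a run asking the assistant to respond.
from openai import OpenAI

client = OpenAI()

thread = client.beta.threads.create()  # new thread, as when a web chat starts

client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="What were yesterday's refunds?",  # hypothetical question
)

run = client.beta.threads.runs.create_and_poll(
    thread_id=thread.id,
    assistant_id="asst_XXXXXXXX",  # placeholder assistant ID
)

if run.status == "completed":
    messages = client.beta.threads.messages.list(thread_id=thread.id)
    print(messages.data[0].content[0].text.value)  # newest message first
```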
Threads Are Stateless
At this time, an OpenAI thread is stateless, i.e. it does not “remember” previous messages in a thread. Due to this stateless nature, a thread can consume a high number of tokens. Every time a new message is sent by the user or assistant, the AI must first process all prior messages before responding to the new one. The longer a thread gets, the faster token consumption compounds, growing roughly with the square of the number of messages (and with their length). This can lead to high costs, so please keep thread length in mind. You have the ability to limit token usage at the thread level via usage limits.
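As a back-of-envelope illustration of that compounding, assume every message in a thread averages a fixed number of tokens (the 300-token figure below is an arbitrary assumption):

```python
# Assumes every message averages TOKENS_PER_MESSAGE tokens; real messages vary.
TOKENS_PER_MESSAGE = 300

def tokens_processed(message_count: int) -> int:
    """Total tokens read across a thread when each new message forces a
    re-read of every message before it."""
    return sum(TOKENS_PER_MESSAGE * n for n in range(1, message_count + 1))

for count in (5, 10, 20, 40):
    print(f"{count:>3} messages -> ~{tokens_processed(count):,} tokens processed")
# Doubling the thread length roughly quadruples the total tokens consumed.
```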
The stateless nature of AI is its Achilles' heel and is well known throughout the industry, which is why chat length is an issue even in a simple GPT chat. This is a limitation of the current technology within OpenAI, not of RevCent's implementation of OpenAI. Hopefully, in the future, OpenAI can optimize prior message token consumption, or give the AI the ability to “remember” prior messages without having to re-read them.