- ThoughtSpot Sage is a new service that integrates GPT-3 with the company’s proprietary search algorithm; more LLMs will be supported in the future.
- A key feature translates queries into ThoughtSpot’s native Sage Grammar query language before presenting them to the user for approval.
ThoughtSpot Inc. has recently joined the large language model bandwagon, announcing the integration of the popular GPT-3 LLM into its business intelligence platform.
GPT-3 is the model that underpins OpenAI LLC’s hugely popular ChatGPT chatbot. Using ThoughtSpot’s platform, users can query corporate data from different sources with search engine-like phrases and view the results as charts, graphs, and maps. The firm, which has raised more than USD 660 million in funding, claims four of the five largest companies in the US as customers, along with more than one-third of the Fortune 100.
ThoughtSpot Sage is a new service that integrates GPT-3 with the company’s proprietary search algorithm. More LLMs will be supported in the future.
According to Chief Development Officer Sumeet Arora, one of the most significant advantages of GPT-3 integration is the ability to query data and obtain responses using natural language. ThoughtSpot Sage may also be used to help with data modeling, and the company’s support staff now provides LLM-based assistance.
The integration of LLM assistance is anticipated to alleviate the burden on data analysts who tend to “get caught up building dashboards and taking small requests,” said Arora. “That is the drudgery we are aiming to remove. We allow analysts to create the guard rails for how search analytics work.”
He added that natural language processing will render drag-and-drop query construction obsolete. “The mouse-and-click interface is dead. We have harmonized human behavior with a human experience in the search bar,” he said.
Natural language, however, lacks precision. A computer could answer a question about the company’s most popular product based on sales, customer feedback, profitability, or other factors. ThoughtSpot addresses this by categorizing questions using search tokens, which include column, operator, value, and keyword criteria.
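The token-categorization idea described above can be sketched minimally, assuming a toy schema and vocabulary; the column, operator, and keyword sets below are illustrative and are not ThoughtSpot's actual token definitions:

```python
# Hypothetical sketch of search-token categorization, not ThoughtSpot's
# implementation. Token types follow the article: column, operator,
# value, and keyword.

# Illustrative schema and vocabulary for an assumed sales dataset.
COLUMNS = {"product", "sales", "revenue", "region"}
OPERATORS = {">", "<", "=", ">=", "<="}
KEYWORDS = {"top", "bottom", "by", "sum", "average"}

def tokenize(query: str) -> list[tuple[str, str]]:
    """Tag each term of a search query with a token category."""
    tokens = []
    for term in query.lower().split():
        if term in COLUMNS:
            tokens.append((term, "column"))
        elif term in OPERATORS:
            tokens.append((term, "operator"))
        elif term in KEYWORDS:
            tokens.append((term, "keyword"))
        else:
            # Anything unrecognized is treated as a literal value.
            tokens.append((term, "value"))
    return tokens

print(tokenize("top 5 product by sales"))
# [('top', 'keyword'), ('5', 'value'), ('product', 'column'),
#  ('by', 'keyword'), ('sales', 'column')]
```

Tagging terms this way is what lets a system resolve an ambiguous phrase like "most popular product" against a specific column rather than guessing among sales, feedback, or profitability.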
Arora stated, “We started by creating a relational interface with tokens that dramatically reduces the effort to get answers, but you have to be analytics-fluent. We’ve now added a layer with search tokens that guarantees accuracy but is natural language-like.”
A key feature translates queries into ThoughtSpot’s native Sage Grammar query language before presenting them to the user for approval. “We will tell you how we translated your query and allow you to edit it,” he said.
Addressing the reputation of GPT-trained models as confident liars, Arora said ThoughtSpot plans to integrate color-coded confidence ratings into its answers, a feature even ChatGPT lacks. The model is trained separately for each customer on the queries and terms relevant to that organization. Users can supply examples to train the system, he said, and training data is not shared across customers.
GPT-3 will be integrated into the company’s cloud service over the next few weeks at no extra charge.