June 14, 2024
In a step towards increasing the precision of our AI, we've quietly rolled out Intent Classification in the backend. This new feature enables Brainfish not only to understand the customer's query but also to interpret the underlying intent behind it. In future, this will let customers tailor their responses to the specific needs or concerns a query raises.
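In general terms, intent classification maps a raw query onto a category that a response can then be tailored to. The toy keyword matcher below illustrates the idea only; the intent names and patterns are hypothetical, and Brainfish's actual classifier is an AI model, not a rule set:

```javascript
// Toy illustration of intent classification. The intents and patterns are
// hypothetical examples, not Brainfish's internals.
const INTENT_PATTERNS = {
  refund_request: /\b(refund|money back|chargeback)\b/i,
  cancel_account: /\b(cancel|close my account|unsubscribe)\b/i,
  how_to: /\b(how do i|how to|where can i)\b/i,
};

function classifyIntent(query) {
  // Return the first intent whose pattern matches the query.
  for (const [intent, pattern] of Object.entries(INTENT_PATTERNS)) {
    if (pattern.test(query)) return intent;
  }
  // Fall back to a generic bucket when nothing matches.
  return 'general_question';
}
```

Once a query carries an intent label like this, a response can be routed or phrased differently for, say, a refund request versus a how-to question.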
June 7, 2024
Widget Behaviour Exposure to DOM
We've exposed more of our widget's behaviour as JavaScript events, giving customers enhanced flexibility and control. This means customers can now trigger the widget from their own JavaScript, and respond to events the widget emits, such as when it is closed.
This improved accessibility and control gives power users more leeway in customising their widget implementation, creating a richer, more personalised user experience.
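As a rough sketch of the pattern, the snippet below wires a page up to hypothetical widget events; the event names (`brainfish:open`, `brainfish:closed`) are placeholders, so check the Help article for the names your widget version actually uses:

```javascript
// Hedged sketch: "brainfish:open" and "brainfish:closed" are hypothetical
// placeholder event names. In a browser the event bus would simply be
// `window`; a plain EventTarget is used here so the sketch runs anywhere.
const bus = typeof window !== 'undefined' ? window : new EventTarget();

let widgetOpens = 0;
let closedSeen = false;

// React when the widget closes, e.g. to resume a paused product tour.
bus.addEventListener('brainfish:closed', () => {
  closedSeen = true;
});

// Open the widget from your own UI, e.g. a custom "Need help?" button.
function openHelpWidget() {
  widgetOpens += 1;
  bus.dispatchEvent(new Event('brainfish:open'));
}

openHelpWidget();
// Simulate the widget firing its close event:
bus.dispatchEvent(new Event('brainfish:closed'));
```

In a real page, the close listener would be where you log analytics or hand control back to your own UI.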
Check out the Help article on the widget here.
May 31, 2024
We've been busy at Brainfish this month, introducing multimodal support in search alongside other search engine upgrades:
In response to customer feedback, we've upgraded our multimodal search with native video support. Brainfish can now deliver videos, as well as images, in Help Center answers (when enabled from Search Configuration). To support this, the editor also accepts native video uploads: customers can upload videos of up to 500MB directly, eliminating the need for external hosting platforms such as YouTube, Vimeo, or Loom.
We've also launched our own Brand Guidelines to assist with the increasing demand for design and PR. The guidelines offer a curated set of specifications for Brainfish in a user-friendly format. This makes it easier for everyone to represent our brand accurately and consistently.
May 24, 2024
We're delighted to announce another significant update to our answer experience:
Combination Large Language Models (LLMs) for More Customers
We are now expanding the reach of our combination LLMs to serve more customers. Behind the scenes, these advanced models run multiple queries and generations in real time, then combine them into a "best of all" answer for users. This key advancement leads to more precise, comprehensive, and satisfying answers.
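Stripped of any Brainfish internals, the "best of all" pattern can be sketched as generating several candidate answers and keeping the highest-scoring one; `generateAnswer` and `scoreAnswer` below are hypothetical stand-ins, not the real pipeline:

```javascript
// Hedged sketch of a "best of all" answer pattern. Both functions below are
// placeholders: a real system would call LLM APIs and use a learned scorer.
function generateAnswer(model, query) {
  // Placeholder: pretend each model produces a distinct answer.
  return `${model}: answer to "${query}"`;
}

function scoreAnswer(answer) {
  // Placeholder heuristic: prefer longer answers.
  return answer.length;
}

function bestOfAllAnswer(query, models) {
  // Generate one candidate per model...
  const candidates = models.map((m) => generateAnswer(m, query));
  // ...then keep the highest-scoring candidate.
  return candidates.reduce((best, c) =>
    scoreAnswer(c) > scoreAnswer(best) ? c : best
  );
}
```

The interesting design choice is in `scoreAnswer`: swapping the toy length heuristic for a relevance model is what turns this sketch into a useful ensemble.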
We've also improved the search engine's spell checking, which now corrects misspelled searches contextually, taking each business's own terminology into account.
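One common way to make spell correction business-aware, sketched here with a generic edit-distance approach rather than Brainfish's actual implementation, is to snap each query term to the closest word in a domain-specific vocabulary:

```javascript
// Generic sketch: correct query terms against a business vocabulary using
// Levenshtein edit distance. Not Brainfish's actual implementation.
function editDistance(a, b) {
  // Standard dynamic-programming table, seeded with insertion/deletion costs.
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1, // deletion
        dp[i][j - 1] + 1, // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Replace each query term with the nearest vocabulary word, if close enough.
function correctQuery(query, vocabulary, maxDistance = 2) {
  return query
    .toLowerCase()
    .split(/\s+/)
    .map((term) => {
      let best = term;
      let bestDist = maxDistance + 1;
      for (const word of vocabulary) {
        const d = editDistance(term, word);
        if (d < bestDist) {
          best = word;
          bestDist = d;
        }
      }
      return best;
    })
    .join(' ');
}
```

Because the vocabulary comes from the business's own content, the same misspelling can correct differently for different businesses, which is the "contextual" part of the feature.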