Elastic’s Elasticsearch Relevance Engine Enables Generative AI Search


It’s impossible to avoid large language models (LLMs), the generative AI technology that has captured the world’s attention. How we think about interacting with computers has changed almost overnight, with generative AI applications that communicate in natural language. OpenAI’s ChatGPT has become shorthand for the technology, but new LLMs appear almost weekly. The potential of large language models seems endless.

The challenge for an enterprise wanting to harness the power of LLMs is that a language model is only as capable as the data it’s trained on and understands. This hampers the ability to leverage the technology to solve real-world business problems. LLMs become infinitely more powerful when deeply integrated with data relevant to the problem the user is trying to solve. However, training an LLM from scratch can be daunting for even the most sophisticated IT organization.

Elastic, the company behind Elasticsearch, one of the industry’s most popular open-source search and analytics engines, is bridging the gap between LLMs and search to enable new capabilities for building highly relevant AI search and generative AI applications. Elastic calls its new technology the Elasticsearch Relevance Engine (ESRE).

Elastic’s ESRE

The new Elasticsearch Relevance Engine, powered by built-in vector search and transformer models, is designed to allow organizations to bring together their proprietary structured and unstructured data with the latest in LLM technology. This will enable organizations to build custom generative AI applications without the cost and complexity of training a new LLM from scratch.

Elasticsearch supports multiple features that enable advanced AI-driven text search. These include BM25 similarity scoring and AI-ready vector search with both exact-match and approximate k-nearest-neighbor (kNN) capabilities. This allows Elasticsearch to run traditional, vector, or hybrid search, pairing BM25 with kNN, to deliver highly precise results.
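
To make the hybrid approach concrete, here is a minimal sketch of what such a query might look like through the official Elasticsearch Python client, combining a BM25 match clause with an approximate kNN clause. The index name, field names, and query vector are hypothetical stand-ins, not taken from Elastic’s documentation; in practice the vector would come from an embedding model.

```python
# Minimal sketch: hybrid BM25 + approximate kNN search with the
# Elasticsearch Python client. Index "articles", text field "body", and
# dense_vector field "embedding" are illustrative assumptions.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Placeholder vector; normally produced by an embedding model for the query text.
query_vector = [0.12, -0.07, 0.33]

response = es.search(
    index="articles",
    # Lexical relevance scored with BM25 over the text field.
    query={"match": {"body": "how do I reset my password"}},
    # Approximate k-nearest-neighbor search over the vector field.
    knn={
        "field": "embedding",
        "query_vector": query_vector,
        "k": 10,
        "num_candidates": 100,
    },
    size=10,
)

for hit in response["hits"]["hits"]:
    print(hit["_score"], hit["_source"].get("body", "")[:80])
```

In this kind of hybrid query, Elasticsearch blends the lexical and vector scores for each document, so results that match on either signal (or both) can rank highly.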

Elastic allows developers to go beyond the capabilities of its built-in models, letting them manage and deploy their own transformer models. This enables Elasticsearch to be tuned to the business-specific needs of the organization. Of course, developers can also quickly enable new applications using the models bundled with ESRE, including a technical preview of its new Learned Sparse Encoder model.
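
As a rough sketch of how a query against the Learned Sparse Encoder might look, the example below uses a text_expansion query through the Python client. It assumes documents were enriched at ingest time by an inference pipeline that writes the model’s tokens into an "ml.tokens" field; the index name, field name, and model ID shown here are illustrative assumptions rather than details from the article.

```python
# Minimal sketch: semantic retrieval with Elastic's Learned Sparse Encoder
# via a text_expansion query. Assumes an ingest pipeline already wrote the
# model's sparse tokens to "ml.tokens"; names below are illustrative.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

response = es.search(
    index="articles",
    query={
        "text_expansion": {
            "ml.tokens": {
                "model_id": ".elser_model_1",   # assumed ID for the tech-preview model
                "model_text": "how do I reset my password",
            }
        }
    },
    size=10,
)

for hit in response["hits"]["hits"]:
    print(hit["_score"], hit["_id"])
```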

If you’re interested in how all this works, Elastic has a nice blog post on how the new ESRE operates. ESRE is available now on Elastic Cloud.

Analyst Take

Elastic’s new ESRE technology is just the latest milestone in a long history of delivering AI-enabled insights. Elastic introduced support for supervised and unsupervised learning to its products in 2018, when it also added forecasting for observability to Elasticsearch. In the years since, Elasticsearch has gained support for anomaly detection and AIOps, along with ML-powered detection rules for cybersecurity. This year, it introduced integration with generative AI and LLMs.

The innovation is paying off. Elastic beat top- and bottom-line estimates in its most recent earnings, delivering $280M in fiscal Q4 revenue, up 17% year-on-year. Its Elastic Cloud revenue, which is where the new ESRE functionality is available, grew 28% year-on-year to $112M, totaling $424M for the full year. I’m a technology analyst, not a stock analyst, and I only look at these numbers as a gauge of customer adoption. It’s clear that customers like what Elastic is delivering.

Elastic’s new ESRE capabilities will change how companies deliver search-related data to their customers. ESRE enables organizations to leverage domain-specific generative AI models to ensure users receive factual, contextually relevant, and up-to-date answers to their queries. It will change the user experience and set a new standard for information retrieval and AI-powered assistance.

Effective search is directly tied to customer engagement, impacting revenue and productivity. Search results need to be relevant. Large language models promise to change the fundamental engagement model for search, allowing users to query using natural language, where systems understand the query’s intent. Applications adopting the technology will deliver unprecedented levels of query precision while altogether redefining the user experience. This is precisely what Elastic is delivering.

Disclosure: Steve McDowell is an industry analyst, and NAND Research an industry analyst firm, that engages in, or has engaged in, research, analysis, and advisory services with many technology companies, which may include those mentioned in this article. Mr. McDowell does not hold any equity positions with any company mentioned in this article.
