docs: restore details lost during OpenAPI migration #3558
Changes from all commits: ef6dc1f, 20cc2ac, 86a2cce, 7197a6d, a1c802a
@@ -4,16 +4,16 @@ sidebarTitle: Overview
description: Conversational search allows people to make search queries using natural language and receive AI-generated answers grounded in your data.
---

<Warning>
**Conversational search is still in early development and conversational agents can hallucinate.** LLMs may occasionally produce inaccurate or misleading answers even when the retrieved source documents are correct. Monitor responses closely in production, follow the [hallucination reduction guide](/capabilities/conversational_search/advanced/reduce_hallucination), and configure [guardrails](/capabilities/conversational_search/how_to/configure_guardrails) to minimise this risk.
Review comment: Use American English spelling: "minimize".

Suggested change:
-**Conversational search is still in early development and conversational agents can hallucinate.** LLMs may occasionally produce inaccurate or misleading answers even when the retrieved source documents are correct. Monitor responses closely in production, follow the [hallucination reduction guide](/capabilities/conversational_search/advanced/reduce_hallucination), and configure [guardrails](/capabilities/conversational_search/how_to/configure_guardrails) to minimise this risk.
+**Conversational search is still in early development and conversational agents can hallucinate.** LLMs may occasionally produce inaccurate or misleading answers even when the retrieved source documents are correct. Monitor responses closely in production, follow the [hallucination reduction guide](/capabilities/conversational_search/advanced/reduce_hallucination), and configure [guardrails](/capabilities/conversational_search/how_to/configure_guardrails) to minimize this risk.

As per coding guidelines: "Use American English spelling (e.g., analyze, behavior, color, center, license, canceled, cancelation, program, labeling, initialed, favorite, dependent)".
</Warning>

Conversational search is an AI-powered feature built on top of Meilisearch's search engine. It works as a built-in Retrieval Augmented Generation (RAG) system: when a user asks a question, Meilisearch retrieves relevant documents from its indexes, then uses an LLM to generate a response grounded in those results.
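The retrieve-then-generate flow the diff describes can be sketched as a minimal pipeline. This is an illustrative toy, not Meilisearch's implementation: the `retrieve` and `build_prompt` functions, the document shape, and the keyword scoring are all assumptions standing in for the real search engine and LLM call.

```python
# Hypothetical sketch of a RAG flow: retrieve relevant documents,
# then build a grounded prompt for an LLM. Function names and data
# shapes are illustrative only, not Meilisearch's actual API.

def retrieve(index: list[dict], query: str, k: int = 3) -> list[dict]:
    """Toy keyword-overlap retrieval standing in for the search engine."""
    scored = [
        (sum(word in doc["text"].lower() for word in query.lower().split()), doc)
        for doc in index
    ]
    # Keep only documents with at least one matching query word, best first.
    return [doc for score, doc in sorted(scored, key=lambda s: -s[0]) if score > 0][:k]

def build_prompt(query: str, sources: list[dict]) -> str:
    """Ground the LLM: instruct it to answer only from retrieved sources."""
    context = "\n".join(f"[{d['id']}] {d['text']}" for d in sources)
    return (
        "Answer using ONLY the sources below; cite source ids.\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

index = [
    {"id": "doc1", "text": "Meilisearch returns search results in milliseconds."},
    {"id": "doc2", "text": "RAG grounds LLM answers in retrieved documents."},
]
question = "How does RAG ground answers?"
sources = retrieve(index, question)
prompt = build_prompt(question, sources)
```

In a real system the retrieval step would be a Meilisearch query and the prompt would be sent to an LLM; the grounding instruction in the prompt is what ties the generated answer back to the indexed data.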
With proper configuration, such as [system prompt engineering](/capabilities/conversational_search/advanced/reduce_hallucination#system-prompt-engineering) and [guardrails](/capabilities/conversational_search/how_to/configure_guardrails), you can ensure that responses are based on your indexed data rather than the LLM's general knowledge.

This is similar to how [Perplexity](https://www.perplexity.ai/) works: every answer comes with source documents so users can verify the information. Meilisearch brings the same pattern to your own data.
<Warning>
Conversational search relies on large language models (LLMs) to generate responses. LLMs may occasionally hallucinate inaccurate or misleading information, even when provided with correct source documents. Follow the [hallucination reduction guide](/capabilities/conversational_search/advanced/reduce_hallucination) and configure [guardrails](/capabilities/conversational_search/how_to/configure_guardrails) to minimize this risk in production environments.
</Warning>
## Use cases

Conversational search supports three main use cases, all powered by the same `/chats` API route:
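As a rough illustration of what a request to a `/chats`-style route might carry, the body below is modeled on common OpenAI-compatible chat APIs. The field names, model identifier, and overall shape are assumptions for illustration only; this page does not document the actual schema.

```python
import json

# Hypothetical request body for a conversational /chats route,
# modeled on OpenAI-compatible chat APIs. Field names below are
# assumptions, not Meilisearch's documented schema.
payload = {
    "model": "gpt-4o-mini",  # assumed model identifier
    "messages": [
        {"role": "user", "content": "What is conversational search?"}
    ],
    "stream": False,  # a real client might stream tokens instead
}
body = json.dumps(payload)
```

An actual client would POST this body to the chat route and render the answer alongside the source documents it was grounded in.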