Engineer prompts for retrieval-augmented generation (RAG) systems. Expert in context injection, grounding instructions, citation prompting, and hallucination reduction for RAG pipelines.
Retrieval-augmented generation — RAG — is one of the most widely deployed LLM architecture patterns, combining language model generation with real-time retrieval of relevant documents or data. But the quality of a RAG system depends critically on the prompts that govern how the model uses retrieved context: how it extracts relevant information, how it synthesizes across multiple documents, how it handles contradictions, and how it signals when the retrieved context is insufficient to answer accurately. These prompt design decisions are specialized, consequential, and frequently underengineered.
This AI assistant specializes in prompt engineering for RAG systems: designing the system prompts, context-injection templates, and query prompts that govern how language models consume retrieved information and respond on the basis of it. It addresses the full stack of RAG-specific prompt challenges: how retrieved chunks are presented to the model, how the model is instructed to ground its responses strictly in the provided context, and how citations and source attribution are engineered into the output.
The assistant guides you through the key RAG prompt design decisions: how to format retrieved context for maximum model comprehension, how to write grounding instructions that reduce hallucination by anchoring the model to the retrieved documents, how to handle retrieved context that is contradictory or insufficient, how to engineer citation and source attribution into model outputs, and how to design query reformulation prompts that improve retrieval quality upstream of the generation step.
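As a concrete illustration of these design decisions, here is a minimal sketch of a context-injection template that combines numbered chunk formatting, grounding instructions, and citation prompting. The function name, chunk format, and instruction wording are illustrative assumptions, not a prescribed standard:

```python
def build_rag_prompt(question, chunks):
    """Assemble a grounded RAG prompt from retrieved chunks.

    `chunks` is assumed to be a list of (source_id, text) pairs
    produced by the retrieval step.
    """
    # Present each chunk under a numbered header so the model can cite it.
    context = "\n\n".join(
        f"[{i}] (source: {source_id})\n{text}"
        for i, (source_id, text) in enumerate(chunks, start=1)
    )
    # Grounding instructions: answer only from context, cite passages,
    # and abstain explicitly when the context lacks the answer.
    instructions = (
        "Answer the question using ONLY the numbered context passages below. "
        "Cite every claim with the passage number in brackets, e.g. [2]. "
        "If the passages do not contain enough information to answer, "
        "say so instead of guessing."
    )
    return f"{instructions}\n\nContext:\n{context}\n\nQuestion: {question}\nAnswer:"
```

The numbered headers give the model stable identifiers to cite, and the explicit abstention instruction is the simplest form of the hallucination-reduction grounding described above.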
It also addresses advanced RAG prompt patterns: multi-document synthesis instructions, confidence signaling prompts, retrieval sufficiency assessment, and handling the edge case where retrieved context directly contradicts the model's parametric knowledge — a critical failure mode in knowledge-intensive RAG applications.
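One way the insufficiency and contradiction edge cases can be operationalized is with an explicit abstention sentinel that downstream code can detect and route to a fallback path. The sentinel string, rule wording, and helper below are illustrative assumptions:

```python
# Sentinel the model is asked to emit when retrieved context cannot
# support an answer; the pipeline routes such replies to a fallback path.
INSUFFICIENT = "INSUFFICIENT_CONTEXT"

# Instructions covering the edge cases: insufficient context, contradictory
# passages, and context that conflicts with parametric knowledge.
EDGE_CASE_RULES = (
    f"If the context does not contain the answer, reply exactly '{INSUFFICIENT}'. "
    "If passages contradict each other, present both claims with their citations "
    "rather than silently picking one. "
    "If the context contradicts what you believe from training, prefer the "
    "context and note the discrepancy."
)

def is_abstention(answer: str) -> bool:
    """True if the model declined to answer due to insufficient context."""
    return INSUFFICIENT in answer
```

A machine-detectable sentinel turns retrieval-sufficiency assessment into a simple string check, so the application can retry retrieval or escalate instead of surfacing a guessed answer.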
Ideal users include ML engineers building document Q&A systems, developers building LLM-backed enterprise knowledge bases, product teams building AI search and research tools, and any team whose RAG system is producing hallucinated or poorly grounded answers that need to be fixed at the prompt layer.