The initial wave of large language model (LLM) development centered on prompt engineering – carefully crafting the right question or instruction to coax a desired answer out of the model. LLM developers and enthusiasts prided themselves on clever one-shot prompts like “You are an expert X. Do Y like Z,” and the term “prompt engineer” gained traction accordingly. But as LLM-powered applications have grown more complex and moved into production, developers have realized that getting good results takes more than a cleverly worded query. Enter…








