Exemplars – Case-Based Natural Language Generation

What is Exemplars?

Exemplars refers to a case-based natural language generation paradigm where new texts are produced by retrieving, adapting, and recombining previously authored examples (exemplars). Instead of generating sentences purely from rules or from scratch, the system mines a library of well-formed texts paired with structured representations and adapts the closest matches to the new communicative goal. Key idea: “Find something similar, then edit intelligently.”

Core Architecture

  1. Input representation
    Structured data or communicative intent (slots, features, constraints, discourse goals).

  2. Case base (exemplar library)
    Curated set of example texts aligned to inputs (domain facts, rhetorical intents, style metadata).

  3. Retrieval
    Similarity metrics (feature overlap, semantic distance, constraints) select the best candidate exemplars.

  4. Adaptation / Revision
    Targeted edits: slot filling, lexical substitutions, morphological agreement, and local syntactic reshaping.

  5. Realization & Surface checks
    Grammar-level fixes (agreement, tense), style consistency, and fluency post-processing.

  6. Learning & case maintenance (optional)
    Newly validated outputs can be added back to the case base to improve future coverage; a minimal end-to-end sketch of steps 1–5 follows this list.
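
Below is a minimal sketch of steps 1–5 in Python. The Exemplar class, the toy case base, and the feature-overlap scoring are illustrative assumptions rather than a reference implementation; a real system would use richer semantic similarity and proper grammar and style checks at the realization step.

    from dataclasses import dataclass

    @dataclass
    class Exemplar:
        """One case: a vetted text aligned to the features it expresses."""
        text: str
        features: dict  # e.g. {"intent": "order_update", "status": "shipped"}

    # 2. Case base: a small, curated exemplar library (toy data).
    CASE_BASE = [
        Exemplar("Your order has shipped and should arrive by Friday.",
                 {"intent": "order_update", "status": "shipped"}),
        Exemplar("Your order is being prepared and will ship soon.",
                 {"intent": "order_update", "status": "processing"}),
    ]

    def similarity(query: dict, case: Exemplar) -> float:
        """3. Retrieval score: plain feature overlap (a real system would add semantic distance)."""
        shared = sum(1 for k, v in query.items() if case.features.get(k) == v)
        return shared / max(len(query), 1)

    def retrieve(query: dict) -> Exemplar:
        """3. Retrieval: pick the exemplar closest to the input representation."""
        return max(CASE_BASE, key=lambda c: similarity(query, c))

    def adapt(exemplar: Exemplar, edits: dict) -> str:
        """4. Adaptation: targeted lexical substitutions on the retrieved text."""
        text = exemplar.text
        for old, new in edits.items():
            text = text.replace(old, new)
        return text

    # 1. Input representation: structured intent and constraints for the new message.
    query = {"intent": "order_update", "status": "shipped"}
    best = retrieve(query)
    # 5. Realization: here only a slot edit; agreement, tense, and style checks would follow.
    print(adapt(best, {"Friday": "Tuesday"}))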

When to Use a Case-Based / Exemplars Approach

  • High-stakes fluency: Domains with abundant good examples and strict stylistic norms (e.g., customer emails, product blurbs, medical after-visit summaries vetted by editors).

  • Rapid authoring: You need quick wins without building a full grammar or training a large model.

  • Controlled variation: You want to keep tone and format consistent while customizing content slots.

Strengths & Limitations

Strengths

  • Fluent, on-brand outputs by design (reuse of vetted texts).

  • Fast to deploy in well-documented domains.

  • Transparent editing path (trace how an output was adapted from an exemplar).

Limitations

  • Coverage gaps if the case base lacks examples close to a novel request.

  • Adaptation complexity rises with domain variability.

  • Case base curation and maintenance are essential for quality.

Typical Application Areas

  • Customer support macros → personalized yet consistent replies

  • E-commerce → product descriptions, feature highlights, size/fit notes

  • Healthcare & legal → templated narratives with domain-safe phrasing

  • Education → automated feedback comments, rubric-aligned tips

  • Reporting → status updates, incident summaries, routine announcements

Design Tips for an Exemplars System

  • Tag your cases richly (intent, stance, formality, length, audience, domain entities).

  • Define similarity smartly (semantic features > simple keyword overlap); a weighted-similarity sketch follows this list.

  • Constrain adaptation with light grammatical rules (agreement, tense, determiners).

  • Add validation passes (readability, forbidden phrases, PII redaction).

  • Close the loop: capture successful outputs as new exemplars (with reviewer approval).
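
As a sketch of the first two tips, and under stated assumptions (the CaseTags fields and the hand-set WEIGHTS are illustrative, not a standard schema), rich tags plus a weighted match let semantic features dominate keyword-style overlap:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class CaseTags:
        """Illustrative metadata for one exemplar; field names are assumptions, not a standard."""
        intent: str          # e.g. "status_update", "apology"
        formality: str       # "formal" | "casual"
        audience: str        # e.g. "customer", "clinician"
        length: str          # "short" | "medium" | "long"
        entities: frozenset  # domain entities mentioned in the text

    # Hand-set weights: semantic features count for more than surface overlap.
    WEIGHTS = {"intent": 3.0, "audience": 2.0, "formality": 1.0, "length": 0.5}

    def weighted_similarity(query: CaseTags, case: CaseTags) -> float:
        score = sum(w for attr, w in WEIGHTS.items()
                    if getattr(query, attr) == getattr(case, attr))
        # Entity overlap adds only a small, keyword-style bonus.
        return score + 0.25 * len(query.entities & case.entities)

    q = CaseTags("status_update", "formal", "customer", "short", frozenset({"order", "shipping"}))
    c = CaseTags("status_update", "formal", "customer", "medium", frozenset({"order"}))
    print(weighted_similarity(q, c))  # 3.0 + 2.0 + 1.0 + 0.25 = 6.25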

FAQ

Is Exemplars the same as templates?

Not exactly. Templates are hand-written with slots; exemplars are full texts that the system retrieves and edits. Many deployments combine both.
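
A toy contrast in Python (the strings and slot names are made up for illustration): the template is filled directly, while the exemplar is a complete prior text that is retrieved and then edited.

    # Template: a hand-written skeleton with slots, filled directly.
    template = "Hi {name}, your {item} ships on {date}."
    print(template.format(name="Ana", item="lamp", date="May 3"))

    # Exemplar: a full, previously approved text that is retrieved and then edited.
    exemplar = "Hi Ben, thanks for your patience. Your desk ships on April 12."
    edited = exemplar.replace("Ben", "Ana").replace("desk", "lamp").replace("April 12", "May 3")
    print(edited)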

How does it differ from neural text generation?

Exemplars are explicitly retrieved and adapted; neural models implicitly recall patterns. Exemplars offer auditability and style control; neural models offer broader generalization.

Can it support multiple languages?

Yes, if your case base and adaptation rules cover each language (or if you maintain parallel multilingual exemplars).

What about quality control?

Use linting (style/grammar), domain lexicons, disallowed lists, and human-in-the-loop review for new or sensitive outputs.
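
A minimal validation pass along these lines, assuming an illustrative disallowed list and a simple PII pattern; real deployments would plug in proper style linters, domain lexicons, and a reviewer queue.

    import re

    FORBIDDEN = {"guaranteed cure", "act now"}             # illustrative disallowed phrases
    PII_PATTERNS = [re.compile(r"\b\d{3}-\d{2}-\d{4}\b")]  # e.g. US SSN-like numbers

    def validate(text: str) -> list[str]:
        """Return a list of issues; an empty list means the draft can go to human review."""
        issues = []
        lowered = text.lower()
        for phrase in FORBIDDEN:
            if phrase in lowered:
                issues.append(f"forbidden phrase: {phrase!r}")
        for pattern in PII_PATTERNS:
            if pattern.search(text):
                issues.append("possible PII detected; redact before sending")
        if len(text.split()) > 120:
            issues.append("exceeds target length; consider a shorter exemplar")
        return issues

    print(validate("Your SSN 123-45-6789 is on file."))  # -> flags possible PII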
