Updated on November 11, 2025

Table of Natural Language Generation Systems

Explore the definitive table of natural language generation (NLG) systems: names, authors, domains and architectures. A must-have resource for NLG researchers, educators and practitioners.

Discover a comprehensive, curated overview of major natural language generation (NLG) systems — from early template-based engines to modern neural architectures — in our “Table of NLG Systems”. This resource gives researchers, practitioners and educators a single reference point for exploring the evolution, variety and application of NLG technology.

What you’ll find in the table

  • A structured list of NLG systems, including system names, key authors, start and end years, application domains, and salient characteristics.

  • Coverage of both academic and commercial systems, spanning decades of NLG research and deployment.

  • Links or references (where available) to further documentation, source code, or publications for each system.

  • A format designed to support comparative analysis, enabling users to track trends such as the shift from rule-based to statistical to neural NLG, from domain-specific to open-domain coverage, and from monolithic to pipeline architectures.

Why this table matters

  • Historical perspective – See how NLG system design has evolved and how research emphasis has shifted from templates and grammars to data-driven and neural methods.

  • Survey and benchmarking – For researchers planning to build or evaluate new NLG systems, the table provides a ready reference of prior work and established systems.

  • Teaching and course material – Useful for instructors who are designing modules on NLG: they can assign students to explore specific systems listed in the table and compare architectures, input/output formats and evaluation approaches.

  • Knowledge-base for system selection – Practitioners who need to select or evaluate NLG technologies can use the list to identify candidate systems, their domain fit, and maturity.

Key sections and features to highlight

  • System name & citation key: Provides the canonical name of each system, including bibliographic key for further lookup (e.g., via BibTeX).

  • Authors / Principal Investigators: Names of the researchers or organisations responsible for the system’s development.

  • Operational years: Start and end year (or ongoing) to help understand the system’s lifecycle and currency.

  • Domain / application area: Specifies whether the system was used for weather reports, business summarisation, dialogue generation, controlled language, ontology verbalisation, etc.

  • Input representations & architecture style: Whether the system used templates, grammar-based rules, statistical methods or neural network architectures.

  • Availability / links: Where applicable, pointers to source code, documentation, or demos (a minimal record sketch combining these columns follows this list).
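
For readers who want to work with the table programmatically, the columns above map naturally onto a simple record type. The sketch below (in Python) is purely illustrative: the field names are our own shorthand for the columns described in this section, not an official schema or export format of the table.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class NLGSystemEntry:
        """One row of the table; field names are illustrative, not an official schema."""
        name: str                    # canonical system name
        citation_key: str            # bibliographic key, e.g. a BibTeX key
        authors: List[str]           # researchers or organisations behind the system
        start_year: int              # first year of operation or publication
        end_year: Optional[int]      # None if the system is ongoing
        domain: str                  # e.g. weather reports, business summarisation
        architecture: str            # templates, grammar rules, statistical, or neural
        links: List[str] = field(default_factory=list)  # source code, docs, demos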

How to use this table in your workflow

  • Filter by domain: If you’re working on, say, data-to-text reporting or ontology verbalisation, scan the “Domain” column for matching systems and then review their architecture and output style (see the sketch after this list).

  • Compare architectural trends: Observe shifts in the “Input representations & architecture style” column to inform design decisions (for example: rule-based vs neural).

  • Identify maturity and stability: Systems with longer operational lifespans and multiple authors tend to have more robust documentation — useful if you are choosing a system to adapt or extend.

  • Integrate into teaching: Use the table as a baseline reading for students: ask them to pick one system, summarise its architecture and critique its applicability today.
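
As a concrete illustration of the first two steps above, here is a short Python sketch. It assumes the table has been exported to a CSV file (hypothetically named nlg_systems.csv, with column names matching the record sketch in the previous section) and uses pandas to filter by domain and to tabulate architecture styles by decade. The file name and column names are assumptions for illustration, not a documented export of nlg-wiki.org.

    import pandas as pd

    # Hypothetical CSV export of the table; file name and column names are assumptions.
    df = pd.read_csv("nlg_systems.csv")

    # Filter by domain: keep systems whose domain mentions data-to-text reporting.
    data_to_text = df[df["domain"].str.contains("data-to-text", case=False, na=False)]
    print(data_to_text[["name", "architecture", "start_year", "end_year"]])

    # Compare architectural trends: count systems per architecture style by decade.
    df["decade"] = (df["start_year"] // 10) * 10
    trend = df.pivot_table(index="decade", columns="architecture",
                           values="name", aggfunc="count", fill_value=0)
    print(trend)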

Natural Language Generation – Research Hub on NLG-Wiki.org
© 2025 nlg-wiki.org