Glossary
Definitions of terms used in the AI transformation framework and its methodology.
Scales and maturity levels
Organizational Scale (Levels 1–3) — AI maturity scale that applies across the entire organization — engineering, marketing, sales, finance, customer service. See the reference framework.
Level 1 — AI-Assisted — AI is a tool that individuals choose to use. Same structures, same processes, same roles. See the reference framework.
Level 2 — AI-Integrated — AI is integrated into workflows and systems. Some processes are redesigned around AI capabilities. See the reference framework.
Level 3 — AI-Native — Organizational design assumes AI as a first-class resource. Roles are defined by judgment and direction, not execution. See the reference framework.
Engineering Scale (Rungs 0–5) — A more granular maturity scale, specific to software engineering. Ranges from autocomplete to dark factory. See the reference framework.
Rung 0 — Autocomplete — Human codes, AI suggests completions. See the reference framework.
Rung 1 — Intern — Human assigns scoped tasks, AI writes the code, human reviews everything. See the reference framework.
Rung 2 — Junior developer — Human supervises multi-file changes. See the reference framework.
Rung 3 — Manager — Human directs, reviews at feature/PR level. See the reference framework.
Rung 4 — Product manager — Human writes the spec and verifies outcomes; the code is verified by tests, not by the human. See the reference framework.
Rung 5 — Dark factory — Spec goes in, software comes out. The human doesn't touch the code. See the AI Lab.
Tiers
Leadership Tiers (1–3) — Maturity scale for leaders. The organization can't exceed the tier of its leadership. See the reference framework.
Tier 1 — AI-Supportive — The leader publicly endorses AI and uses it personally, but doesn't push adoption. See the reference framework.
Tier 2 — AI-Operational — The leader sets expectations by role, asks "how did AI help?", funds automation before hiring. See the reference framework.
Tier 3 — AI-Strategic — The leader redesigns the organizational structure, rewrites roles and KPIs. See the reference framework.
Individual Tiers (1–3) — Maturity scale for individuals in relation to AI. See the reference framework.
Tier 1 — AI-Aware (Consumer) — "AI helps me do my job faster." See the reference framework.
Tier 2 — AI-Augmented (Operator) — "AI helps us do this task better and more systematically." See the reference framework.
Tier 3 — AI-Native (Architect) — "This role should exist differently because AI exists." See the reference framework.
Methodology
Universal Translation Rule — The operating principle of the entire transformation: replace "the human produces the artifact" with "the human defines the spec → the system produces the artifact." See the reference framework.
The 4 Work Layers — Every AI workflow must define four layers, from the simplest to the most demanding. See the execution standards.
Layer 1 — Prompt Craft — Baseline skill: write clear instructions, specify format, include examples, resolve ambiguity upfront. See the execution standards.
Layer 2 — Context Engineering — Maintain a structured context file (goals, constraints, terminology, standards) loaded before AI tasks. See the execution standards.
Layer 3 — Intent Engineering — Define objective hierarchy, tradeoff rules, escalation conditions. See the execution standards.
Layer 4 — Specification Engineering — The highest standard: every non-trivial task has a complete written specification. See the execution standards.
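As a rough illustration of Layer 2, a structured context file can be kept as plain data and rendered into a preamble before each AI task. A minimal sketch in Python — the field values and section names are invented for illustration, not prescribed by the execution standards:

```python
# Illustrative Layer 2 context file: goals, constraints, terminology,
# and standards kept as structured data and loaded before AI tasks.
CONTEXT = {
    "goals": ["Reduce invoice-processing time by 50%"],
    "constraints": ["Never expose customer PII in prompts"],
    "terminology": {"SKU": "stock-keeping unit, format ABC-1234"},
    "standards": ["All outputs in English", "Dates in ISO 8601"],
}

def build_preamble(context: dict) -> str:
    """Render the context file as a text preamble for an AI task."""
    lines = []
    for section, entries in context.items():
        lines.append(f"## {section.capitalize()}")
        if isinstance(entries, dict):
            lines.extend(f"- {term}: {meaning}" for term, meaning in entries.items())
        else:
            lines.extend(f"- {entry}" for entry in entries)
    return "\n".join(lines)

print(build_preamble(CONTEXT))
```

Keeping the context in one structured place (rather than retyping it per prompt) is what separates Layer 2 from Layer 1 prompt craft.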
Specification Primitives — Five distinct skills that make up specification engineering. See the execution standards.
Primitive 1 — Self-Contained Problem Statements — State the problem with enough context that it's solvable without additional information. See the execution standards.
Primitive 2 — Acceptance Criteria — Define what done looks like so an independent observer can verify the output. See the execution standards.
Primitive 3 — Constraint Architecture — Four categories for every task: Must, Must not, Prefer, Escalate. See the execution standards.
Primitive 4 — Decomposition — Break tasks into components that can be executed, tested, and integrated independently. See the execution standards.
Primitive 5 — Evaluation Design — Build test cases with known-good outputs to validate and catch regressions. See the execution standards.
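A hedged sketch of how Primitive 3 (constraint architecture) and Primitive 5 (evaluation design) might combine in a spec — the task, field names, and checks below are hypothetical, not taken from the execution standards:

```python
# Illustrative spec fragment: the four constraint categories of
# Primitive 3 applied to a hypothetical summarization task.
SPEC = {
    "task": "Summarize a support ticket in one paragraph",
    "must": ["Mention the customer's requested resolution"],
    "must_not": ["Include account numbers"],
    "prefer": ["Under 80 words"],
    "escalate": ["Ticket mentions legal action"],
}

# Illustrative Primitive 5 evaluation: known-good cases with checks
# an independent observer (or script) can run against outputs.
EVAL_CASES = [
    {
        "name": "no_account_numbers",
        "output": "Customer asks for a refund on order 17.",
        "check": lambda out: "ACCT-" not in out,
    },
    {
        "name": "resolution_present",
        "output": "Customer asks for a refund on order 17.",
        "check": lambda out: "refund" in out.lower(),
    },
]

def run_eval(cases) -> dict:
    """Return pass/fail per case; a new failure signals a regression."""
    return {c["name"]: c["check"](c["output"]) for c in cases}

print(run_eval(EVAL_CASES))
```

The point of the split is that constraints steer the system before execution, while evaluation cases catch drift after it.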
Organizational roles
Spec Owner — Person responsible for the quality and completeness of a production AI system's specification. See the execution standards.
Context Owner — Person responsible for maintaining a production AI system's structured context file. See the execution standards.
Evaluation Owner — Person responsible for evaluation tests and output quality of a production AI system. See the execution standards.
Practices and deliverables
AI literacy — The ability to understand what AI tools can do, to use them in a structured way, and to distinguish Level 1 usage (ad hoc tool) from Level 2 (workflow integration). A condition of employment at Level 3. See the employee guide.
Transition brief — A structured document each employee produces, describing their current role, AI-first vision, gaps, systems to build, metrics, and a 30/60/90 plan. Built from six primitives. See the employee guide.
AI clinics — Regular sessions (weekly or biweekly) where the team shares discoveries, blockers, and workflows. Short format (30 min). The goal is peer learning. See the manager guide.
Transition pairing — Pairing an advanced team member with one still learning, transferring skills by working together on a real case. See the manager guide.
Operational concepts
Dark factory — Engineering model where the spec goes in and software comes out without human intervention on the code. Corresponds to Rung 5 on the engineering scale. See the AI Lab.
Greenfield — A project started from scratch, with no existing code. The most natural terrain for the AI Lab. See the AI Lab.
Brownfield — A project with existing code and habits, transitioned to Rung 5. Harder than greenfield, but more impactful. See the AI Lab.
Non-interactive development — The AI Lab's working mode where specifications and scenarios drive autonomous agents. The human doesn't code and doesn't converse with the agent during execution. See the AI Lab.
Adoption J-curve — The predictable productivity dip during AI adoption. Productivity drops before it rises. Organizations that climb out are the ones that redesign their workflows around AI capabilities. See the manager guide.
Deliberate naivety — The AI Lab's stance of removing traditional development conventions and systematically asking: "Why am I doing this? The model should be doing it instead." See the AI Lab.
Satisfaction metric — The AI Lab's evaluation approach that measures the fraction of trajectories across all scenarios that satisfy the user, rather than a binary green/red test result. See the AI Lab.
Scenarios — End-to-end user journeys that describe expected behavior from the user's perspective. Favored over unit tests in the AI Lab because they are harder for agents to circumvent. See the AI Lab.
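The satisfaction metric above reduces to a simple aggregation: run many agent trajectories across the scenarios, judge each one, and report the satisfied fraction rather than a binary green/red. A minimal sketch — the scenario names and run data are hypothetical:

```python
# Illustrative satisfaction metric: the fraction of trajectories
# across all scenarios that satisfy the user, not a pass/fail bit.
def satisfaction(trajectories: list[tuple[str, bool]]) -> float:
    """trajectories: (scenario_name, satisfied?) pairs from agent runs."""
    if not trajectories:
        return 0.0
    satisfied = sum(1 for _, ok in trajectories if ok)
    return satisfied / len(trajectories)

# Hypothetical runs: two end-to-end scenarios, three trajectories each.
runs = [
    ("signup_flow", True), ("signup_flow", True), ("signup_flow", False),
    ("password_reset", True), ("password_reset", True), ("password_reset", True),
]
print(satisfaction(runs))  # 5 of 6 trajectories satisfy -> ~0.83
```

A fractional score keeps partial progress visible, which a binary test result hides.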