07 Discernment

Output Discernment

Active evaluation of model outputs for accuracy, reasoning quality, and fit.

What it measures

Before acting on or sharing model outputs, do you evaluate them against your stated requirements rather than accepting them at face value?

The four diagnostic questions

  1. Outputs are evaluated for accuracy, coherence, and relevance before being used or shared — not accepted as-is.
  2. The reasoning behind complex outputs is examined for logical errors, missed constraints, or steps that don't hold up.
  3. When the model's communication style or behaviour isn't working during a session, it is explicitly adjusted.
  4. Errors and gaps caught through evaluation are fed back to improve future prompts or skills — not just corrected in the moment.
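The four questions above can be sketched as a simple review record. This is an illustrative sketch only — the class and field names are hypothetical, not part of the framework:

```python
from dataclasses import dataclass, field

@dataclass
class OutputReview:
    """Hypothetical record of one output evaluation, one field per question."""
    accurate: bool            # Q1: checked for accuracy, coherence, relevance
    reasoning_sound: bool     # Q2: reasoning examined for errors/missed constraints
    style_adjusted: bool      # Q3: model behaviour corrected in-session if needed
    fed_back: bool            # Q4: findings recorded for future prompts/skills
    notes: list[str] = field(default_factory=list)

    def usable(self) -> bool:
        """The output may be used or shared only after Q1 and Q2 pass."""
        return self.accurate and self.reasoning_sound

    def compounds(self) -> bool:
        """An evaluation only compounds if its findings leave the session (Q4)."""
        return self.fed_back
```

A review that passes in-session checks but is never fed back (`usable()` true, `compounds()` false) is exactly the common failure mode the framework describes.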

What good looks like

This indicates strong discernment discipline. The remaining work is ensuring that what you catch feeds back into skills and briefing updates, not just in-session corrections.

Common failure mode

Outputs are accepted without systematic evaluation, so errors surface downstream. Adding verification before use is the immediate priority.

Critique & references

Editor's notes — to be expanded

Maps to the agentic maturity progression: outputs are inputs to the next loop. Errors caught and fed back into skills/briefings compound positively; errors corrected only in-session decay the moment context resets.