Active evaluation of model outputs for accuracy, reasoning quality, and fit.
Before acting on or sharing model outputs, do you evaluate them against your stated requirements rather than accepting them at face value?
Strong discernment discipline. Ensure that what you catch feeds back into skills and briefing updates, not just in-session corrections.
Outputs are being accepted without systematic evaluation. Errors surface downstream — adding verification before use is the immediate priority.
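The "verification before use" step can be made concrete as a small gate that checks an output against the stated requirements and refuses to ship on failure. This is a minimal sketch; the names (`Requirement`, `verify_output`) and the example checks are illustrative assumptions, not anything prescribed by the source.

```python
# Minimal sketch of a pre-use verification gate.
# All names (Requirement, verify_output) and checks are illustrative.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Requirement:
    name: str
    check: Callable[[str], bool]  # True when the output satisfies it


def verify_output(output: str, requirements: list[Requirement]) -> list[str]:
    """Return names of requirements the output fails; empty means usable."""
    return [r.name for r in requirements if not r.check(output)]


requirements = [
    Requirement("non-empty", lambda o: bool(o.strip())),
    Requirement("cites a source", lambda o: "http" in o),
]

draft = "Model answer with no citation."
failures = verify_output(draft, requirements)
if failures:
    print("Do not ship; failed:", failures)
```

The point is not the specific checks but that evaluation is explicit and runs before the output is acted on, rather than errors surfacing downstream.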
Editor's notes — to be expanded
Maps to the agentic maturity progression: outputs are inputs to the next loop. Errors caught and fed back into skills/briefings compound positively; errors corrected only in-session decay the moment context resets.
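The compounding-versus-decaying distinction above can be sketched as a persistence step: each caught error is recorded with the rule that prevents it, and accumulated rules are reloaded into the next session's briefing. The file path and record shape here are assumptions for illustration, not part of the source's framework.

```python
# Hedged sketch: persisting a caught error as a briefing update so the
# correction survives context resets. Path and record shape are assumed.
import json
from pathlib import Path

BRIEFING = Path("briefing_updates.jsonl")


def record_correction(error: str, rule: str) -> None:
    """Append a caught error and the rule that prevents it next time."""
    with BRIEFING.open("a") as f:
        f.write(json.dumps({"error": error, "rule": rule}) + "\n")


def load_rules() -> list[str]:
    """Load accumulated rules for the next session's briefing."""
    if not BRIEFING.exists():
        return []
    with BRIEFING.open() as f:
        return [json.loads(line)["rule"] for line in f]


record_correction(
    error="Output cited a deprecated API",
    rule="Check API versions against current docs before citing",
)
print(load_rules())
```

An in-session correction would stop at fixing the draft; the `record_correction` call is what turns the catch into a durable briefing update.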