A smarter way to approach AI prompting

Generative AI has become a practical tool in search, content, and analytical workflows.
But, as adoption increases, so does a familiar and costly problem: confidently incorrect outputs.
These outputs are often called “hallucinations,” a term that implies the AI model is malfunctioning.
But here’s the truth: This behavior is often predictable and results from unclear instructions. Or, more accurately, unclear prompts.
For example, prompt AI for a “cookie recipe,” and nothing more. Don’t offer details about allergies, preferences, or constraints.
The result might be Christmas cookies in July, a peanut-packed option, or a recipe so bland and basic as to be unworthy of the name “sweet treat.” This lack of detail can lead to misaligned outputs.
It’s best to expect a model to misbehave and preempt this by creating explicit guardrails.
This can be done effectively with rubrics.
We’ll examine how rubric-based prompting works, why it improves factual reliability, and how you can apply it to your own AI workflows to produce more trustworthy results.
Fluency vs. restraint: Which is better?
When AI is asked to produce complete, polished answers without specific instructions on how to handle uncertain information or missing data, it often prioritizes fluency over restraint.
That is, continuing the response smoothly (fluency) rather than pausing, qualifying, or declining to answer when information is missing (restraint).
This is the moment AI “makes stuff up” – because uncertainty was not established as a stopping point. The consequences can be financially costly and can also harm reputation, efficiency, and trust.
Professional services firm Deloitte was required to repay 440,000 Australian dollars after an AI-assisted government report was found to contain fabricated citations and a misattributed court quote, as reported by the Associated Press in late 2025.
One academic reviewer noted that it:
- “Misquoted a court case then made up a quotation from a judge… misstating the law to the Australian government in a report that they rely on.”
Should Deloitte have skipped the use of AI?
Evaluating data and generating reports is an AI superpower. The lesson here is to keep AI in the workflow, but to constrain it – define, in advance, what a model must do when it doesn’t know something.
This is where rubrics enter the fray.
The role of rubrics in AI
It’s common for users to implement generic safeguards against potential patterns of hallucination, but these safeguards often don’t hold up in practice.
Why not? Because they usually describe an outcome and not a decision-making process. This leaves the AI model to make inferences when required information isn’t available.
This is where rubric-based prompting is essential.
A rubric – a scoring guide or set of criteria to evaluate work – can feel like an old-school, academic concept.
Think of a grid teachers traditionally used to grade papers, often shared ahead of time so students knew what “good,” “OK,” and “not acceptable” papers looked like.
AI rubrics rely on the same structural idea but serve a different purpose.
Rather than scoring answers after prompting, they shape decision-making during the response generation process.
They do this by defining what an AI model should do when the required criteria cannot be met.
By defining explicit criteria, rubrics set clear boundaries, priorities, and even failure behaviors, reducing the risk of hallucination.
Writing better prompts is not enough
Advice around prompting often focuses on better wording. Typically, this implies being more specific or issuing clearer instructions. It may even mean nudging a model toward a specific format or tone.
These are not useless steps, and techniques of this kind can improve surface-level quality. But they will not erase the underlying cause of hallucination.
Users frequently prompt AI models with outcomes rather than rules.
Prompt phrases like “be accurate,” “cite sources,” or “use only verified information” sound sensible but leave too much space for interpretation.
The model is still left to decide substantive details for itself.
Long or complex prompts can also create competing goals.
A single prompt might demand clarity, completeness, confidence, and speed – conflicting goals that push models toward their default behavior: producing fluent, “complete” responses.
Without a clear hierarchy of priorities, accuracy may be lost or diminished.
Whereas a prompt might be effective at describing tasks, a rubric governs the decision-making process within tasks.
AI rubrics do this by switching decision-making from inference to explicit instruction.
Dig deeper: Advanced AI prompt engineering strategies for SEO
What rubrics do that prompts can’t
Prompts focus on tone, format, and level of detail.
They frequently fail to address uncertainty. Missing or ambiguous information forces an AI model to decide whether to stop, qualify a response, or infer an answer.
Without explicit guidance, inference usually wins.
Rubrics cut down on ambiguity through the use of clear decision boundaries.
A rubric formally defines what is required, optional, and unacceptable. These criteria supply the model with a concrete framework to evaluate all outputs generated.
Identifying priorities explicitly means AI models are less likely to fill in the blanks to maintain fluency.
A rubric that clarifies which constraints matter most allows factual accuracy to take precedence over “completeness” or narrative flow.
Most importantly, a rubric defines failure behavior: what the model must do when success isn’t possible.
Strong rubrics establish that a model can acknowledge missing information, return a partial response, or even decline to answer rather than inventing one.
Anatomy of an effective AI rubric
There is an old adage about too many cooks spoiling the broth, and it is the perfect analogy for rubric creation.
Effective AI rubrics don’t need to fill pages or show up as heavily detailed queries. In the same way a recipe can be ruined by fussiness or too many competing flavors, so too can a prompt be overdone.
Too many details or demands can introduce confusion. Reliable rubrics are those that focus on a small set of enforceable criteria that directly address the risks of hallucination.
At a minimum, a well-written rubric should include:
- Accuracy requirements: Clear rules about what must be supported, what counts as evidence, and whether approximation is acceptable.
- Source expectations: Guidance on whether sources must be provided, whether they are to come from supplied materials, or how to handle conflicting information.
- Uncertainty handling: Explicit instructions for what the model must do when information is unavailable, ambiguous, or incomplete.
- Confidence/tone constraints: Limitations on tone to prevent speculative answers from being presented with certainty.
- Failure behavior: Permission and preference for stopping, qualifying, or deferring rather than guessing.
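To make these components concrete, here is a minimal sketch in Python. The `Rubric` class and its field names are illustrative, not part of any specific tool; the point is that each component becomes an explicit, enforceable instruction that can be appended to any prompt.

```python
from dataclasses import dataclass


@dataclass
class Rubric:
    """Illustrative container for the five rubric components described above."""
    accuracy: str      # what must be supported and what counts as evidence
    sources: str       # where evidence may come from and how to handle conflicts
    uncertainty: str   # what to do when information is missing or ambiguous
    confidence: str    # tone limits so speculation isn't stated as fact
    failure: str       # what the model should do when it can't succeed

    def render(self) -> str:
        """Render the rubric as instruction lines that can follow any task prompt."""
        return "\n".join([
            f"- Accuracy: {self.accuracy}",
            f"- Sources: {self.sources}",
            f"- Uncertainty: {self.uncertainty}",
            f"- Confidence/tone: {self.confidence}",
            f"- Failure behavior: {self.failure}",
        ])


# Example values drawn from the criteria above.
BASE_RUBRIC = Rubric(
    accuracy="Only state facts supported by the supplied materials; do not approximate figures.",
    sources="Cite the supplied materials for every factual claim and flag conflicts between them.",
    uncertainty="If information is missing or ambiguous, say so and list the inputs needed.",
    confidence="Use conditional language for anything speculative; never present it as certain.",
    failure="If the task cannot be completed reliably, return a partial answer instead of guessing.",
)
```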
How to create a rubric for an AI model
A rubric doesn’t make an AI model smarter; it makes its decision-making process more reliable.
Here’s a competitive analysis example that shows the value of rubrics:
A team asks an AI model to explain why their competitors are outperforming them in search results, and what they can do about it. Their prompt is written like this:
- “Evaluate why [competitor] is outranking us for [specific topic]. Identify the keywords they rank for, the SERP features they win, and recommend changes to our content strategy.”
On the surface, this seems reasonable. In practice, it is an invitation for hallucination.
The prompt lacks concrete inputs and the model has no constraints. The risk is high that the AI will invent plausible-sounding rankings, features, and strategic conclusions.
Writing the rubric
In practice, your rubric is included directly within the prompt. It must be clearly separated from the task, which describes what to analyze or generate.
The rubric then defines the rules the model must follow to perform its task.
This is a critical distinction: prompts ask for outputs, while rubrics govern how those outputs are produced.
Using the criteria in the section above, the prompt, followed by the rubric, would now read:
- “Analyze why [competitor] may be outperforming our site for [topic]. Provide insights and recommendations.
- Do not claim rankings, traffic, or SERP features unless explicitly provided in the prompt.
- If required data is missing, state what cannot be determined and list the inputs needed.
- Frame recommendations as conditional when evidence is incomplete. Avoid definitive language without supporting data.
- If analysis cannot be completed reliably, return a partial response rather than guessing.”
When the rubric is incorporated, the model is no longer free to infer silently. Instead, it treats uncertainty as a constraint.
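As a rough sketch of how this looks in practice, the task and rubric can be assembled into a single prompt string and sent to a chat-style model. The OpenAI client and model name below are illustrative assumptions; any comparable SDK works the same way.

```python
from openai import OpenAI  # assumes the OpenAI Python SDK; substitute your own client

TASK = (
    "Analyze why [competitor] may be outperforming our site for [topic]. "
    "Provide insights and recommendations."
)

RUBRIC = """Follow these rules:
- Do not claim rankings, traffic, or SERP features unless explicitly provided in the prompt.
- If required data is missing, state what cannot be determined and list the inputs needed.
- Frame recommendations as conditional when evidence is incomplete. Avoid definitive language without supporting data.
- If analysis cannot be completed reliably, return a partial response rather than guessing."""

# The task describes what to produce; the rubric governs how the model decides.
prompt = f"{TASK}\n\n{RUBRIC}"

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```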
Dig deeper: Proxies for prompts: Emulate how your audience may be looking for you
How rubrics and prompts work together
As seen in the example above, rubrics don’t replace the prompt. They add to and often come after the prompt. They should be viewed as a stabilizing layer.
The prompt is always responsible for defining the task: what is summarized, analyzed, or generated. Rubrics define the rules under which that task is performed.
In practice, prompts can vary, while rubrics remain relatively stable across similar types of work, regardless of the topic. Defining sourcing, uncertainty, and failure behavior stays consistent, reducing error rates over time.
For many workflows, a rubric can be embedded directly after the prompt. In others, it can be referenced or applied programmatically – for example, through reusable templates, automated checks, or system instructions. The format doesn’t matter; what matters is the clarity of the criteria.
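One way to apply a rubric programmatically, sketched under the assumption of a chat-style message format: keep the rubric as a fixed system instruction and vary only the task prompt. The `build_messages` helper below is hypothetical.

```python
# The rubric stays fixed as a system instruction; only the task changes.
RUBRIC = """You must follow these rules for every task:
- Do not assert facts that are not supported by the supplied inputs.
- If required information is missing, state what cannot be determined and what inputs are needed.
- Qualify recommendations when evidence is incomplete; never present speculation as certainty.
- If the task cannot be completed reliably, return a partial answer rather than guessing."""


def build_messages(task: str) -> list[dict]:
    """Pair a variable task prompt with the stable rubric (chat-API message format)."""
    return [
        {"role": "system", "content": RUBRIC},
        {"role": "user", "content": task},
    ]


# The same rubric governs very different tasks.
for task in [
    "Summarize the attached quarterly report in five bullet points.",
    "Explain why [competitor] may be outranking us for [topic].",
]:
    messages = build_messages(task)  # pass to whichever chat-completion client you use
```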
Avoid overengineering
Despite their effectiveness, rubrics can be easy to misuse. A common mistake users make is overengineering.
A rubric that seeks to anticipate every possible scenario often ends up unwieldy and inconsistent.
Another mistake involves adding conflicting criteria without clarifying which takes precedence.
Rubrics must be concise, prioritized, and explicit about failure behavior to reduce hallucinations.
Use AI rubrics like a pro
Prompting like a pro is about anticipating where AI will be forced to guess, then defining and constraining how it operates.
Rubrics tell AI models to slow down, qualify, or stop when information is missing. In doing so, rubrics can help you leverage the power of AI for your work and create outputs that are accurate and trustworthy.


