What I Correct First When Teams Use AI: #05 Prompt Structuring
Apr 06, 2026
Are you thinking about the structure of your prompts in ChatGPT, Copilot or Gemini, or just the information you enter?
Because in almost every workshop I run, people are very focused on what they give to the tool. The document, the question, the data, the instructions.
And that makes sense. It feels logical.
If I give better information, I should get a better answer.
That is true. But only up to a point.
Because what actually makes the difference, and this is the part that is usually missing, is not just the information you give; it is how you structure the interaction around it.
Where things start to go wrong
What most people do, without really realising it, is treat AI like a single step. They open ChatGPT or Copilot, they type everything they need in one go, and they expect something clean, usable and complete to come back.
Sometimes it looks good. Sometimes it even feels good.
But when you look closely, it often needs rework. It is slightly off. Not wrong, but not right either.
So they try again. Change a few words. Add a bit more detail. Maybe remove something.
And they repeat this a few times.
It feels like progress, but it is not a reliable way of working.
Let me explain it in a very simple way
There is an exercise I use in workshops that makes this obvious straight away.
I take three pencils.
I pick one person and I ask them, “Can you catch?”
They say yes.
I throw one pencil. They catch it.
Then another. They catch that as well.
Then a third. Most of the time, they catch that too.
So far, no problem.
Then I take the pencils back, pause for a second, and ask the same question again.
“Can you catch?”
They say yes, confidently.
This time, I throw all three pencils together.
And almost every time, they drop at least two.
This is exactly what is happening with AI
When you try to do everything in one prompt, you are throwing all three pencils at once.
You are asking the model to understand your intent, interpret the context, decide what matters, structure the answer and deliver the final output, all in a single step.
It can do it. Sometimes.
But most of the time, something gets dropped.
Not because the tool is weak, but because you have compressed too much of the task into a single step.
What people who get good results do differently
They do not try to complete the task in one go. They break it down.
They structure the interaction in the same way they would naturally approach the task themselves.
And this is where things start to become consistent.
Because instead of hoping the model gets everything right, they guide it through the process.
A more practical way to work
If we slow this down slightly, the structure becomes quite simple.
Not complicated. Just deliberate.
1. Start with the direction, not the task
Most people begin with the task.
“Summarise this document.”
But if you think about it, that is still quite vague. Summarise it for whom? For what purpose? What matters in that summary?
So instead, you start by setting the direction.
You might say something like:
“Summarise this document for a senior leadership team. Focus on key risks, impact on deadlines and any decisions required. Keep it short enough to read in 5 minutes.”
Now the model is not guessing what a good summary looks like. It has a clear target.
2. Then give the context it needs
This is the part that is often missing. Because in your own head, the context is obvious. You know the situation. You know the audience. You know why this matters.
But the model does not.
So you need to bring it into that environment.
“You are supporting a project manager preparing for a weekly operations meeting. The audience has limited time and wants to focus on issues, not background.”
What you are doing here is not adding complexity. You are removing ambiguity.
And that changes how the model prioritises information.
3. Only then ask for the task
Now, when you actually ask for the task, it becomes much clearer.
“Based on the document, produce a summary that highlights current delays, key risks and the decisions required from leadership.”
Notice how this feels simpler, even though it is more structured. Because the thinking has already been done.
4. Then work with the output, not against it
This is another change that makes a big difference.
Most people expect the first answer to be final, but in practice the real value comes from what happens next.
You review the output. You guide it. You adjust.
“Make this more direct.”
“Reduce this to five bullet points.”
“Focus more on financial impact.”
“Rewrite this for a non-technical audience.”
You are shaping the result step by step.
Just like you would if you were working with another person.
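The follow-up prompts above are really a running conversation, not a series of fresh starts. As a rough illustration, here is how that looks in Python if you picture the session as a list of messages. The dictionary shape mirrors common chat APIs, but the structure here is purely illustrative; in ChatGPT or Copilot you would simply type each refinement into the same chat window.

```python
# Sketch: iterative refinement as one running conversation.
# The message format is illustrative, not tied to any specific API.

# The session starts with the structured task and the model's first draft.
conversation = [
    {"role": "user", "content": "Based on the document, produce a summary "
     "that highlights current delays, key risks and decisions required."},
    {"role": "assistant", "content": "(first draft of the summary)"},
]

# Each refinement is a short follow-up that builds on the previous answer,
# rather than a rewritten mega-prompt started from scratch.
refinements = [
    "Make this more direct.",
    "Reduce this to five bullet points.",
    "Focus more on financial impact.",
]

for step in refinements:
    conversation.append({"role": "user", "content": step})
    # In a real session, the assistant's reply would be appended here too.

print(len(conversation))
```

The point of the list structure is that every refinement inherits all the context that came before it, which is exactly why short follow-ups work.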
5. Finally, define how it should be presented
If you already know how you want to use the output, then say it.
“Present this as five bullet points, each one sentence, starting with the most critical issue.”
Or:
“Write this as a short email with a clear opening, key message and closing action.”
This removes the last layer of guesswork.
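Taken together, the five steps can be sketched as a simple reusable template. This is an illustrative Python sketch, not a library API: the `build_prompt` function and the section labels are hypothetical, and in practice you would paste the assembled text into ChatGPT, Copilot or Gemini.

```python
# A minimal sketch of the structure: direction and context first, task and
# presentation last. Names and labels here are illustrative assumptions.

def build_prompt(direction: str, context: str, task: str, presentation: str) -> str:
    """Assemble a structured prompt from the four parts the article describes."""
    sections = [
        ("Direction", direction),        # who it is for and what matters
        ("Context", context),            # the situation the model cannot see
        ("Task", task),                  # the actual request, asked last
        ("Presentation", presentation),  # how the output should be formatted
    ]
    return "\n\n".join(f"{label}:\n{text}" for label, text in sections)

prompt = build_prompt(
    direction="Summarise this document for a senior leadership team. "
              "Focus on key risks, impact on deadlines and decisions required.",
    context="You are supporting a project manager preparing for a weekly "
            "operations meeting. The audience has limited time.",
    task="Based on the document, produce a summary that highlights current "
         "delays, key risks and the decisions required from leadership.",
    presentation="Five bullet points, each one sentence, starting with the "
                 "most critical issue.",
)
print(prompt)
```

Whether you use a helper like this or just write the sections by hand, the ordering is the point: the model reads the target and the context before it ever reaches the task.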
What changes when you start working like this
You stop relying on a single prompt to do everything, and you start guiding a process instead. That is the real change.
Because most of the frustration people feel with AI is not coming from the tool itself. It is coming from trying to compress too much into one step. Once you break that, everything becomes easier.
More predictable. More usable. More consistent.
The one thing to take away
If you take nothing else from this, take this.
Do not just think about what you are asking; think about how you are structuring the interaction. That is where the difference is.
Book Your AI Training Strategy Call
If AI productivity tools are available in your organisation but productivity hasn't improved, a 30-minute call will identify where the gains should come from and whether structured, task-tailored training is right for your team.
