Learn to Understand: How Teams Make Sense of What They See.
- David Yates

- Nov 13
- 6 min read
Why experience sharpens our judgement, but can also lead it quietly astray.
We don’t see the world as it is. We see the world as we believe it to be.
In complex, high-pressure work, that distinction matters. Because performance is not just about gathering facts. It’s about how we make sense of them. And in environments where information is fast, partial, and uncertain, the mind relies on something far more efficient than constant analysis: mental models.
These internal maps help us interpret what’s happening, make predictions, and decide what to do next.
They are essential for expertise, but they come with risk. Because what we see depends on the model we’re using. And if that model is outdated, biased, or incomplete, our decisions, however confident, will quietly drift away from reality.
This is the second layer of situational awareness: comprehension. Understanding what we’re seeing. Not just noticing cues, but making meaning from them. It’s also where many of the most dangerous failures begin. Not in the absence of information, but in the misinterpretation of it.
From Perception to Meaning.
In Part 1, we explored the fragility of attention under pressure: how tunnel vision, fatigue and working memory limits can cause people to miss key signals. But even when the signals are seen, what matters next is interpretation.
Endsley’s model of situational awareness defines this as Level 2: comprehension. The integration of observed data into a coherent understanding of the situation. This step is powered by mental models, structured knowledge in long-term memory that helps us organise new information, detect patterns, and project outcomes.
In theory, the more experienced someone is, the richer and more accurate their mental models should be. An air traffic controller who’s handled thousands of flight plans sees a traffic conflict emerging well before a novice would.
A senior trader notices correlations between markets that juniors might not even register. A site engineer reads patterns in temperature shifts and load data that hint at early-stage system drift.
This fluency is what makes experts fast, confident and adaptive. But it’s also what makes them vulnerable to routine. Because the same mental models that speed us up can also narrow what we consider.
The Double-Edged Map in the Mind.
Mental models reduce complexity by giving us a working explanation of how a system behaves. But that explanation is never perfect. It’s a simplification, and simplifications, by their nature, filter out nuance.
This is usually helpful. But when conditions change, or when something looks familiar but isn’t, those filters can block vital information. We fall into what human factors researchers call expectation capture: we see what we expect to see and explain events through the lens we’ve used before.
The result is that smart, experienced professionals can make confident, well-intentioned decisions that completely misread the moment. Not because they weren’t paying attention. But because the story they built around what they saw was subtly wrong.
Daniel Kahneman called this cognitive ease: the mind defaults to ideas that feel familiar, comfortable, or previously successful. We don’t reanalyse from scratch; we retrieve the nearest working script and apply it. In most cases, this shortcut works. Until it doesn’t.
In high-pressure environments, this can be catastrophic. Because once we believe we understand the situation, we stop looking for other explanations. And if the model is wrong, everything that follows (decisions, actions, even team coordination) builds on that initial error.
Local Rationality and the Logic of Misjudgement.
Sidney Dekker, in his work on safety and complexity, reframed these kinds of errors as “locally rational decisions.”
His point was simple but profound: people do not make decisions in a vacuum. They act within systems, with partial information, time pressure, and histories that shape what feels reasonable.
In hindsight, their decisions may appear flawed, but in context they were often the most rational option available at the time.
This is critical when we talk about situational awareness. Because the failure isn’t always a lack of vigilance. It can be a reasonable conclusion drawn from flawed inputs or flawed assumptions.
The mental model matched the last hundred incidents, just not this one.
James Reason described this, too, in his Swiss Cheese model: latent system conditions, like outdated training, interface design flaws, or normalised deviance, quietly shape the models people use. When something goes wrong, we see the visible mistake. But underneath it is a longer, slower drift.
When Mental Models Mislead.
These failures are not confined to safety-critical sectors. They happen anywhere decisions must be made under uncertainty, with speed.
In financial trading, experienced managers may default to models that worked during previous market cycles, failing to adapt to changing macro conditions.
A trader might interpret volatility through the lens of liquidity risk, missing an underlying credit contagion. Their mental model isn’t lazy, it’s just misaligned with the new context.
In energy control rooms, teams might respond to a pressure fluctuation using a familiar diagnosis, say a known sensor fault, and delay further investigation. The assumption feels justified because that explanation has been right before. But this time, the cause is different. And the delay gives it time to escalate.
In software engineering, an outage attributed to a common deployment bug may turn out to be a silent configuration drift. The mental model that “we’ve seen this before” directs effort toward a fix that doesn’t work, until someone questions the original framing.
In each of these cases, the problem wasn’t the absence of awareness. It was certainty built on the wrong frame.
Confidence Without Correction.
Over time, repeated success reinforces mental models, even when those models are fragile.
Rich Gasaway, writing on decision-making in emergency services, warns that this can lead to overconfidence built not on accuracy, but on luck. When suboptimal choices go unpunished, they start to feel like the right call.
This is one of the most subtle threats to situational awareness: the illusion that “we know this one”, the assumption that previous outcomes prove current comprehension. It leads to routine drift, and in high-reliability environments, routine drift is what allows blind spots to grow.
As Reason observed, “The absence of errors does not imply the presence of safety.” A team that hasn’t had a problem in years may not be more competent; they may simply not have encountered the condition that will test their assumptions.
Rebuilding Models Through Practice, Exposure, and Reflection.
So how do we protect situational awareness when comprehension is shaped by models we barely know we’re using?
The answer begins with exposure. Simulation training, particularly when it includes unfamiliar, edge-case or contradictory scenarios, helps expand the mental library people draw from.
In aviation, pilots are routinely trained on rare but high-risk events. Not because they’re expected to memorise a script, but to build pattern recognition in unfamiliar terrain.
In the energy sector, control room simulations expose operators to abnormal sequences that can’t be predicted but must be recognised. The aim is to train the model, not just the procedure.
In finance and tech, war games, red-team exercises and post-incident reviews serve a similar function. They force people to re-examine their assumptions, and update their internal maps.
But exposure alone isn’t enough. It must be accompanied by reflection. After-action reviews, debriefs, and facilitated learning sessions allow teams to surface where their thinking diverged from reality and how their assumptions shaped that gap.
Importantly, this reflection must be safe. If people fear judgement, they won’t admit where their model was wrong. Psychological safety isn’t a soft concept here; it’s a prerequisite for honest model correction.
Creating Space for Doubt and Curiosity.
Mental models don’t need to be perfect. They just need to be flexible.
In high-performing teams, you’ll often hear phrases like:
“What might we be missing?”
“Let’s test our assumptions.”
“Is there a story that fits this data better?”
These aren’t signs of uncertainty. They’re signs of adaptive thinking. They show a willingness to treat comprehension as provisional and to change the narrative when the facts don’t match.
Leaders play a critical role here. When a leader verbalises their own doubt, or invites challenge to their framing, they create space for others to do the same. When they pause to ask, “Is this still the right model for what we’re seeing?” they prevent momentum from outrunning accuracy.
Resilience, in this sense, is the ability to rethink fast. To notice when your map no longer matches the terrain and redraw it before the gap becomes a hazard.
Comprehension Is a Conversation, Not a Conclusion.
Level 2 situational awareness isn’t about knowing for sure. It’s about maintaining a working understanding that can adapt in real time. That means:
- Accepting that your mental model is always partial
- Staying alert to signs that it needs updating
- Inviting others to test it
- Reflecting regularly on where it’s been wrong before
Comprehension is not static. It is dynamic, social, and shaped by systems. People can only build accurate mental models if they have access to diverse experience, safe spaces to question assumptions, and a culture that values insight over speed.
The risk is not ignorance. The risk is false confidence.
When things go wrong, the facts were often visible. What failed was the story built around them.
Coming Next: Nobody Knows Alone
In Part 3, we move from the individual mind to the collective one, exploring how shared situational awareness is built, lost, and restored. Because in most high-consequence environments, no one operates alone. And the systems that fail are rarely held by a single person.
Situational awareness may begin with perception and comprehension. But it is sustained, or broken, by communication, culture, and team design.
We’ll explore how awareness becomes a team achievement and what it takes to keep everyone on the same page when the picture keeps changing.
