Q&A About ‘Realistic Simulations’ and Their Effectiveness

July 10, 2009

I was asked recently to answer several questions about simulations for a colleague working through an academic program related to the Fire Service.  I figured I might as well share my views with others, because I think they open up a lot of interesting discussion.

1. Do you feel realistic simulations are an effective teaching tool for improving decision making? Why or why not?

Getting right to the bottom line, and throwing caution to the wind regarding loaded, assumption-laden words like “realistic”, “effective”, and “simulation”: I do believe that simulations can be used effectively in training to improve decision making, for example when the environment, available interactions, and instruction adequately enable the student to develop or practice an effective process.  But I also believe that simulations can be used ineffectively and actually hurt decision making.  For example, putting students into full-motion flight simulators on their first day of flight training is not a good idea, for several reasons, including that it is not the right tool for teaching what the student needs to acquire at his or her current level of training.

Without delving into the question of what constitutes a simulation (many people try to address this broad concept, but I don’t think it fits neatly into one definition), I think that the plain meaning of the question states the important part of the answer: simulation is a tool, and thus by itself cannot improve decision making.  Whether it does depends on the skills one is trying to teach or demonstrate, and of course on the instructor (live or programmed).

I like to say that developing good simulation-based training is foremost about developing good training, and therefore about how appropriate the stimuli and resources are to the intended outcome(s), rather than purely about fidelity to the system in question.  The ultimate goal with good simulation-based training, or any training, for that matter, is to capture the “right” level of detail: detail in the environment, interactivity, etc. appropriate to the skills being taught.  To determine the “right” level of detail, one must articulate the training objectives so as to identify the relevant clues in the environment and the range of likely interactions and consequences.  Fidelity, or faithfulness to real-world systems, plays an important part when the clues require that correspondence.

I think a lot of money and time are wasted on the seemingly innocuous, unproven assumption that good simulation is about faithfulness to the real-world counterpart.  That is not to say fidelity is unimportant, only that fidelity may not be as crucial to good training as simulation developers would have one believe.

2. Do you feel that simulations can improve intuitive decision making?

I think that all practice can reinforce decision making, by virtue of the fact that one is presented with situations repeatedly, but I don’t believe simulations can necessarily improve anyone’s intuitions.  At best they can reinforce the action-consequence pairs that one witnesses in a simulation, to help prepare for those possibilities, but it is debatable whether this is all good: can a simulation inadvertently pigeonhole someone’s thinking if the student runs up against a false assumption that is embodied in the simulation?  In that case, the simulation is actually providing negative training.

When people talk about simulations, particularly computer or mathematical simulations, without having the experience of developing them, they get a false sense of security about the simulation.  Stripped down to its essence, a simulation embodies some model of the real world.  Every model has assumptions; if it did not, it would be the real thing itself!

This sounds academic, but the important point is that all simulations have assumptions, and as a trainer, you want to make sure that students are operating with assumptions that are compatible with the assumptions of the model.  Therefore, you as an instructor need to be comfortable with the range of predictions that the simulation will make based on the student’s interaction.  This points back to my argument that you have to understand your training objectives very well, so that you keep the student comfortably within the range of correct predictions of the system (correct in the sense of what you want the student to infer from the system’s behavior).
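To make this concrete, here is a minimal sketch in Python, with entirely hypothetical names and numbers (not our system, just an illustration): the model’s assumptions are written down explicitly, and the code refuses to predict outside the range where those assumptions hold, which is exactly the range an instructor needs to know about.

```python
# A toy sketch of the point that every simulation is a model with
# assumptions.  All names and numbers here are hypothetical.

from dataclasses import dataclass

@dataclass
class SmokeModel:
    """Toy smoke-volume model.  It assumes a single sealed room and a
    constant smoke production rate; both are assumptions, not physics."""
    room_volume_m3: float        # assumption: one sealed room
    smoke_rate_m3_per_s: float   # assumption: constant production rate
    max_valid_seconds: float     # beyond this, the assumptions break down

    def smoke_fraction(self, seconds: float) -> float:
        """Predicted fraction of the room filled with smoke."""
        if seconds > self.max_valid_seconds:
            # Flag when a scenario leaves the range in which the
            # model's predictions are meaningful.
            raise ValueError(
                f"t={seconds}s exceeds the model's valid range "
                f"({self.max_valid_seconds}s); predictions are unreliable."
            )
        return min(1.0, self.smoke_rate_m3_per_s * seconds / self.room_volume_m3)

if __name__ == "__main__":
    model = SmokeModel(room_volume_m3=60.0, smoke_rate_m3_per_s=0.5,
                       max_valid_seconds=90.0)
    print(model.smoke_fraction(30.0))   # inside the valid range: 0.25
    # model.smoke_fraction(300.0)       # would raise: assumptions violated
```

The same discipline applies to far richer models: the point is that the valid range is declared, so scenario authors and instructors can keep students inside it.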

3. Based on your experience what are the limitations of simulation training?

If all simulations (models) have assumptions, then it stands to reason that all simulations have limitations once their assumptions are violated.  Regarding training, limitations show up when the type of simulation does not capture the “right” level of detail for the skill(s) being taught.  For example, we originally thought that we could help teach correct search skills (right- or left-handed searches) using our photograph-based command simulation system.  Once we put the scenarios into practice, however, we realized that teaching skills based on realistic navigation was a limitation of our approach to scenario building: the constrained stimuli presented through discrete photographs did not provide adequate physical orientation during movement to overcome the artificiality of the discrete movement.

As a positive example, we believe our system is ideally suited to teaching how to “read smoke”, because we spent a great deal of time building those elements, or clues, into the way smoke can be modeled and placed.  Most other fire training programs don’t invest in the details that make practicing smoke reading as effective as it is in our system.

I think an important point about the limitations of simulation training has to do with issues outside of modeling technique: if simulation is applied as a cure-all, with less regard for the training problem to be addressed than for making the training like the real world, then the simulation can limit the effectiveness of skill transfer.  One of my favorite phrases is from Thomas Gilbert.  In his work on addressing performance issues, he coined (I believe) the term “worthy” performance problems.  What I glean from this is that many problems can be addressed with training, and with simulations as a part of training, but good training should focus on solving the problems that make a difference to worthy performance.  Therefore, if you value a technological approach over really understanding what you are trying to teach, and over crafting the model to capture the right level of detail, you can limit the effectiveness of training by introducing details that detract from the training objectives, even if those details are very precise analogies to the real world.

An obvious example: suppose maintenance on some components of a device requires first exposing a panel by undoing clasps or removing support screws.  If the act of turning the screws is not relevant to the skills you need to teach, then requiring the student to turn each screw just as they would in the real world makes the training tedious, and that tedium can interfere with the skills you are actually trying to teach.

4. What are the benefits of simulation training?

In a nutshell, I think that practicing appropriate skills in a hands-on context similar to real-life situations can introduce, reinforce, and evaluate conformance with proper procedures.  Simulation training can therefore be better than conventional training methods (both classroom and field) in that the simulations are

  • Safer to conduct
  • Often less costly in the long run
  • Repeatable
  • Measurable
  • Potentially easier to deploy
  • Potentially richer in the variety of environments in which to apply skills.

Some types of ‘learning to perform’ can be accomplished with carefully constructed questions (I really like the work of Will Thalheimer [http://www.work-learning.com/] in this respect), since the setup or question itself is a model of the context (a limited one, but nonetheless a model that may be appropriate if it expresses the right level of detail).

I am obviously also a huge fan of simulation for practicing ‘hard skills’ (in contrast with ‘soft skills’), namely with equipment, which is why I am surprised that there doesn’t seem to be more of it, especially in the B2B arena.  I think the technology to provide valuable experiences arrived long ago, but the perceived costs remain high, especially as the simulation community (the ones who develop simulations) tends to focus on fidelity over tuning to training objectives.

5. How can simulation training be improved?

I don’t see technological limitations for most types of training problems, but I think that instructors by and large do not know how to teach with simulations, that is, how to meaningfully engage students.  That may be why many who talk about simulations (without understanding how they’re built) tend to be conservative about how simulations should perform (“make it perform like the real world”).  This tends to produce bloated simulations that may be overly complicated because they use the real-world situation as the gold standard.

6. Based on your experience using educational and simulation techniques, how do people make decisions under stress? Intuitively or analytically? Please explain.

I don’t have the experience to answer this regarding stressful situations, except to state the obvious reference to Gary Klein’s work on Recognition-Primed Decision (RPD) making.

7. What do you feel is the future of simulation training?

My answer to #5 addresses where I think it can be improved, which I hope is the future.  I also believe that as simulations are applied in more disciplines, we will get better case studies and processes for what works or does not work in those situations.  Certainly there will be better, easier tools for developing simulations, but I think real progress will occur in a field when the training process and integration of simulation are laid out more clearly, typically through a process of “we did it this way and here were the deficiencies, and here is how we learned to do it better.”

In the Fire Service, I think Brunacini’s and Abbott’s Command Training Center in Phoenix is a wonderful example of really nailing the training process using simulation.  People are drawn to the tools, and many software sales have been made as a result of viewing their setup, but the genius is in how they have codified the training process, through years of iterative development.  While the software has enabled them to explore training methods beyond conventional means, its deficiencies (and all software has deficiencies) may have constrained their exploration.

While it’s not relevant to the Fire Service, since I brought up equipment training, I see the future of simulation training stepping over the line from training into product marketing.  There is a commonly used cliché that “advertising is education,” but I think we still have a lot to explore in how product marketing and training interact using simulation.

8. Do you see simulation applications for the fire service potentially improving incident command decision making?

I am very biased here, because I am out selling a simulation platform for the Fire Service (and other public and private safety organizations).  I definitely believe simulation can improve incident command decision making, so long as it is applied in the context of teaching good practice.  In simple terms, I see simulation applications enabling organizations to train and evaluate adherence to accepted practice, and potentially to probe that “accepted practice” in order to devise better practices.  For example, SOPs are often devised based on what the authors believe should be accepted practice, along with an integration of local, state, and national standards.  But SOPs are rarely tested until something bad happens.  Simulations have great potential for virtual ‘field testing’ of SOPs, and ultimately for devising new ones.

For several years, we’ve been selling our system on the premise that one can create scenarios to practice and evaluate real-life situations for command, strategy, tactics, and communication.  Interestingly enough, even the immersive, 3-D systems that claim to be more realistic have these same objectives, and I have yet to hear exactly why their approach, given the extra technological burdens, contributes meaningfully more to any of these goals.

However, the issue of SOP adherence has led to our current research focus: using simulation to help organizations answer, in a performance-based way, the basic question “are my company officers and battalion chiefs following SOPs?”
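For illustration only, here is the kind of scoring primitive I have in mind, sketched in Python with hypothetical step names (any real system needs much richer event matching and timing): log the actions an officer takes in a scenario, then score them against the ordered steps the SOP prescribes.

```python
# A toy sketch of performance-based SOP adherence scoring.
# All step names here are hypothetical.

def sop_adherence(expected_steps: list[str], observed_actions: list[str]) -> float:
    """Return the fraction of SOP steps performed in the prescribed order.

    Walks the observed actions once, crediting each expected step the
    first time it appears after all earlier steps have been credited.
    """
    credited = 0
    for action in observed_actions:
        if credited < len(expected_steps) and action == expected_steps[credited]:
            credited += 1
    return credited / len(expected_steps)

if __name__ == "__main__":
    sop = ["size_up", "establish_command", "request_alarm", "assign_search"]
    log = ["size_up", "assign_search", "establish_command", "request_alarm"]
    # "assign_search" happened out of order, so only 3 of 4 steps credit.
    print(sop_adherence(sop, log))  # 0.75
```

A score like this is only a starting point; the value is in making adherence observable and comparable across runs, not in the particular number.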
