Ways of Managing Decision Errors to Reduce Aviation Accidents

Question

My major is aviation management, and I have an individual study class for which I need to write around 20 pages on any topic in the aviation field, so feel free to choose any topic, for example aviation safety or aviation accidents, etc. You need to expand on the topic yourself.

For the sources, use 5-7 sources.
I want to make sure again: is the topic “Ways of Managing Decision Errors to Reduce Aviation Accidents”?

Answer

Name of Student

Name of Professor

Aviation Paper

15 July 2015.

Contents

Introduction

The Meaning of Decision Errors in Aviation

Contextual and Cognitive Factors in Aviation Accidents

Ways of Managing Decision Errors

Ways of Assisting Crews in Situation Assessment

Assisting Crews in Course of Action

Social and Organizational Pressures

Insights from Accident Analysis

Conclusion

Works Cited

Introduction

The term “decision errors” has been used widely to describe situations in which the behavior of the crew contributed to aircraft accidents. For example, the NTSB (National Transportation Safety Board) used the term “tactical decision errors” in reference to the actions of highly experienced crews that led to 25 of the 37 aircraft accidents it investigated (Orasanu, Martin and Davison 209). In some situations, errors are inevitable even when experts use their knowledge in the best possible way to perform tasks. Systemic causes underlie all outcomes, so it may be wrong to stop at apportioning blame for accidents to specific culprits. Nevertheless, both crews and the aviation companies they work for have a critical role to play in reducing decision errors in order to avoid aviation accidents.

The aim of this paper is to investigate the various ways in which decision errors can be managed to reduce aviation accidents. It begins with an overview of how decision errors have been defined in the literature, with specific focus on the context of aviation. Next, the paper addresses the contextual and cognitive factors influencing decision errors. The section that follows highlights the various ways in which these factors can be addressed as a way of managing decision errors to reduce aviation accidents. This is followed by a section on how crews can be assisted with various decision aids with regard to situation assessment and course of action. The paper also explores the social and organizational pressures that influence the occurrence of decision errors among pilots. Lastly, insights from aviation accident analysis are presented.

The Meaning of Decision Errors in Aviation

It is difficult to define a decision error, especially in the aviation industry, because one must strike a balance between the decision process and its outcomes. This essentially means that someone may follow the right decision process but fail to achieve positive outcomes. Conversely, desirable outcomes may be achieved despite a crew’s decision to follow an unconventional decision process, owing to the existence of redundancies within the system. Another difficulty arises from the occurrence of accidents following the adoption of prior decisions, whereby the same decision may have been made under similar circumstances in the past with no negative outcomes. These situations demonstrate that an acceptable definition of a decision error should address both the decision process and the outcome of the event. One such definition is presented by Lipshitz, who defines a decision error as a deviation from a standard decision process that increases the probability of bad outcomes (152).

A different view sees decision errors as the outcome of the absence of a person-system fit, which occurs through efforts to respond to novel situations through normal human activity. Within this perspective, errors are considered potentially adaptive, such that one’s ability to expend less effort to achieve a better outcome constitutes successful adaptation. Adaptation efforts can, however, be overextended, especially in unstable conditions, leading to decision errors. In this regard, the issue of prior decisions should not arise because the only way to test a decision is through an assessment of the responses provided by the environment. This approach also suggests that the best way of identifying decision errors is to assess the interaction between individual cognitive factors and contextual features (Orasanu, Martin and Davison 213).

In aviation, decisions normally involve the use of cues to assess a situation and the use of that assessment as a basis for choosing the most appropriate course of action. In situation assessment, the problem must be defined and then evaluated in terms of the accompanying risk level. The action plan that the crew chooses should be informed by the structural aspects of the available options. For instance, some courses of action require creativity while others must be rule-based. Based on this decision process model, there are two main ways in which pilots may make a decision error. First, they may interpret the situation incorrectly, leading to a wrong decision. A classic example of this type of error is when a pilot diagnoses a problem in one of the engines but shuts down the one that is functioning properly and leaves the faulty one intact. This is precisely what happened to a Boeing 737 in January 1989 in the Kegworth accident, in which 47 passengers died (Plant and Stanton 305). Second, they may assess the situation correctly but choose an inappropriate course of action. The first type of decision error occurs because of faulty assessment of a situation, while the second is attributed to wrong selection of action.
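
The two-stage structure of this decision process model can be illustrated with a minimal sketch in Python; the cue names, diagnoses, and rule table below are hypothetical placeholders rather than material from the cited studies.

# A minimal sketch of the two-stage decision process described above:
# (1) assess the situation from cues, then (2) select a course of action.
# The cue names, diagnoses, and rule table are illustrative placeholders.

RULES = {
    "engine_fire": "shut down the affected engine and land as soon as possible",
    "minor_vibration": "monitor the engine and continue to destination",
    "hydraulic_failure": "divert to the nearest suitable airport",
}

def assess_situation(cues):
    """Stage 1. The first error type arises here: cues may be read into the wrong diagnosis."""
    if "smoke" in cues and "high_egt" in cues:
        return "engine_fire"
    if "vibration" in cues:
        return "minor_vibration"
    if "low_hydraulic_pressure" in cues:
        return "hydraulic_failure"
    return "unknown"

def select_action(diagnosis):
    """Stage 2. The second error type arises here: a correct diagnosis may still be paired with an inappropriate action."""
    return RULES.get(diagnosis, "gather more information before acting")

print(select_action(assess_situation({"smoke", "high_egt"})))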

Errors that occur because of faulty selection of a course of action may manifest themselves in different ways. For example, in rule-based situations, crews may fail to apply the most appropriate response because they do not know about it or because a specific contextual factor militates against it. Different types of mechanical failure may occur during a flight: some demand that the crew land at the earliest possible opportunity, while others can wait until the end of the flight without leading to an accident. A selection error occurs if a pilot decides to continue with the journey even after diagnosing a mechanical problem that calls for immediate landing. On the other hand, decision errors may occur in situations where creativity is called for, simply because the various possible solutions do not fit the existing conditions and the goals being pursued. Whether aviation errors occur due to an incorrect interpretation of a situation or a faulty selection of a course of action, the interaction between individual cognitive factors and contextual features plays a critical role in the identification of decision errors. By understanding these two aspects and the interactions between them, aviation managers can come up with viable ways of managing the errors with a view to reducing aviation accidents.

Contextual and Cognitive Factors in Aviation Accidents

Traditionally, investigators of aviation accidents have tended to focus primarily on the cognitive capacities of crews or their choice of strategies. The objective has been to identify instances of limited cognitive capacity or inappropriate choice of strategy. A major problem with these approaches is that they fail to assess the role of contextual aspects of decision processes. By understanding contextual factors, investigators are better placed to evaluate operational situations and the extent to which they are prone to performance errors.

According to Klein (1993), decision errors stem from lack of knowledge, lack of information, and failure by personnel to simulate the consequences of various decisions. Lack of knowledge reflects a cognitive deficiency and may be attributed to external factors such as failure by a company to offer adequate job-specific training. On the other hand, lack of information is largely an embodiment of situational factors, such as a malfunction of the systems used to display information. Failure to simulate the different possible consequences of an action may be regarded as a cognitive factor because it may be attributed to the personnel’s situational stressors and habitual strategies.

Cognitive factors play a critical role in decision errors. For example, crews may be influenced by a sense of familiarity, which easily leads them to follow the plan of action they are used to instead of taking another course of action that is warranted. In other words, they tend to be guided primarily by routine knowledge while ignoring other existing knowledge, not because it is inappropriate but simply because they are not used to doing things that way. Typically, this conception of cognitive factors is best used to explain errors of omission among crews. In terms of errors of commission, the focus is on situations where crews decide to take unusual courses of action. Some of these decisions may be successful, although they take a lot of confidence on the part of the implementers.

When an individual lacks relevant knowledge, he is likely to come up with the wrong diagnosis of the problem, leading to a wrong choice of solution. This lack of knowledge may be attributed to various factors that are not necessarily linked to inadequate training. For example, a well-trained pilot may lack current knowledge of specific situations simply because he was off duty or recently transitioned to a new aircraft. Similarly, the pilot may not be familiar with the airport. Such situational factors greatly contribute to decision errors that lead to accidents. For example, an experienced pilot who is unfamiliar with weather conditions at an airport may choose to take off instead of waiting for snow to be cleared from the aircraft.

One important dimension through which cognitive factors may be evaluated is ambiguity. It is often difficult to make out cues that hint at a problematic situation because the cues can be interpreted in many ways. Pilots should be adequately trained to identify ambiguity in developing situations, so that they know clearly what to do whenever ambiguity arises. Moreover, pilots and crews should be aware of the difficulties they are likely to face in an attempt to justify a change of plan in the event that ambiguity manifests itself. More importantly, stakeholders should determine why ambiguity exists in the first place. Factors that may contribute to ambiguity include a poorly designed information interface, the absence of good system information, and a lack of diagnostic information in the cockpit. For example, a crash may occur after the wrong engine is shut down simply because of poor display of the problem in the cockpit.

Another crucial dimension is the dynamism of the situation. The situation in which the crew operates may be changing so quickly that they end up underestimating the risks that come with failure to change the existing plan. In this case, the problem is not ambiguity but rather the interpretation of the cues. This problem is particularly common among inexperienced crews. However, even experienced crews may encounter it if the specific circumstances in which the cues occur render them incapable of retrieving the knowledge required to assess the risk implied by the cue. Another explanation for such a decision error may be grounded in bounded rationality theory (Betsch and Haberstroh 373; Orasanu, Martin and Davison 5), whereby experienced pilots expect the current course of action to succeed simply because it has worked in the past under similar circumstances. In this case, the crews happen to be aware of a more appropriate course of action but may decide to go with what they are used to. However, this practice may be viewed in a positive light as an indicator that, contrary to the conventional view, decision makers actually use base rates or prior probabilities when choosing a course of action.

Unfortunately, the benefits that accrue from the use of base rates may be overshadowed by the risk-taking behavior that past success normally promotes. In many cases, prior probabilities are even misinterpreted as time goes by, the crews become more experienced, and the situation becomes more familiar. In this case, one may argue that if a pilot has had only good experiences, he lacks the base rates through which to establish that a situation is deteriorating. Risk-taking behavior in aviation is also promoted by framing, whereby people normally seek risk in loss situations but tend to be risk-averse in gain situations (Orasanu and Davison 60). For example, a pilot who is confronted with the option of diverting a flight to the nearest airport faces a sure loss in terms of on-time arrival, albeit one that is less dangerous (or not dangerous at all). If the mechanical problem is one that can be managed to ensure the continuation of the flight, the pilot may consider sticking to the initial plan of action. In this second option, loss is less probable though more disastrous, yet many pilots normally go with this decision (Orasanu and Davison 60).
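
The framing effect described above can be made concrete with a simple expected-loss comparison; the probabilities and relative costs in this sketch are invented for illustration and are not drawn from Orasanu and Davison.

# Hypothetical expected-loss comparison between diverting (a sure but small
# loss) and continuing (an unlikely but far more costly loss). All numbers
# are invented for illustration only.

p_loss_divert = 1.0        # diverting means a certain loss of on-time arrival
cost_divert = 1.0          # relative cost: delay and passenger inconvenience

p_loss_continue = 0.02     # small chance the managed fault becomes critical
cost_continue = 1000.0     # relative cost of an accident

expected_loss_divert = p_loss_divert * cost_divert          # 1.0
expected_loss_continue = p_loss_continue * cost_continue    # 20.0

print(f"Expected loss if diverting:  {expected_loss_divert}")
print(f"Expected loss if continuing: {expected_loss_continue}")
# Continuing has the larger expected loss, yet framing the diversion as a
# sure loss can push a decision maker toward the riskier option.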

Ways of Managing Decision Errors

For aviation accidents to be reduced, it is necessary to devise ways of managing decision errors. One way in which this goal can be accomplished is through continued support for improvement in decision making by crews. Pilots require a lot of assistance to enable them to deal prudently with diverse situations and thereby make better decisions. Within the realm of aviation technology, a lot of focus has been placed on the possibility of designing support systems that can guide decision makers in highly dynamic and complex circumstances where humans are likely to make decision errors. Unfortunately, the technology being used in the aviation industry today does not always focus on helping the decision maker address the issues where difficulties may arise. A lot of progress needs to be made in the development of decision aids that can enable crews to interpret situations and highlight all likely decision errors.

Technologies aimed at reducing decision errors may be divided into two groups: behaviorally-based and mechanistic interventions (O’Neil, Andrews and O’Neil 170). Behaviorally-based interventions encompass training, procedures, and checklists, while mechanistic technologies include software traps, aircraft controls, and hardware interlocks. The main advantage of behavioral interventions is that they are adaptable, while mechanistic approaches tend to be more reliable. Better decisions on which type of technology to deploy in a specific situation can be made by focusing on situation assessment and the chosen course of action.

During situation assessment, one may be able to evaluate the interplay between cognitive and contextual factors in determining the outcome of the decision process. At this point, it is possible to get a fairly accurate idea of the best choice of decision aid. For example, many behaviorally-based interventions such as training can be used to address cognitive factors by adding to the cognitive resources available to the crews. On the other hand, mechanistic interventions can serve as excellent decision aids for crews, particularly where ambiguous cues emerge, for example when one of the engines develops mechanical problems. In such a case, cockpit display panels that succinctly indicate which engine is faulty can contribute positively to the crews’ decision process.

Ways of Assisting Crews in Situation Assessment

Improvements in situation assessment can greatly contribute to better decisions by pilots. The objective should be to equip them with all the decision aids they need to recognize and respond appropriately to situations that require a change of course of action. For instance, crews cannot understand a problem if they have no access to up-to-date diagnostic information at all times. The manner in which this information is displayed matters, and some of the essential features include accessibility, integration, and comprehensibility. These features should allow the pilot to discern trends and change the course of action in the most appropriate way.

Fortunately, a lot of progress has been made in the area of system diagnostics. Today’s pilots have access to up-to-date information on the dynamic elements of different situations, including many unpredictable ones. For example, numerous improvements have been made in the presentation of weather information, such as in terminal areas, in an integrated format that shows trends. However, further refinements to the existing technology may be required to ensure that the most critical weather information at each phase of the flight is highlighted. Furthermore, corresponding improvements in traffic displays may be required, particularly their integration with weather information, with a view to creating a better picture of the situation (Williams 18). These efforts fall mainly in the category of mechanistic technologies aimed at ensuring that pilots and other crews take as much control as possible of different situations, including the most unpredictable ones.

Apart from contextual factors, there is also the aspect of cognitive resources, which likewise requires continuous improvement if decision errors are to be reduced. Pilots who have the requisite cognitive resources at all times perform better in situation assessment than those who are deficient in that regard. Aviation companies must provide better training to enable decision makers to gain the requisite job-specific skills and competencies. The concept of continuous improvement should also be applied to procedures and checklists. The higher the number of exemplars to choose from in solving a problem, the higher the level of excellence in crews’ decision processes and outcomes. This means that crews must commit to memory as many potential courses of action as possible in order to achieve optimal decision outcomes in diverse situations.

During situation assessment, both contextual and cognitive dimensions must be examined against the backdrop of two components: time available and risk. It is extremely difficult to encompass risk and temporal factors in cockpit and terminal displays because they are normally dependent on context (Endsley and Garland 358). Nevertheless, some risk information is normally contained in weather information (for example, color coding to indicate the severity of weather conditions) and traffic displays (for example, Traffic Collision Avoidance System alerts). Over and above this risk information, predictive models are needed to guide pilots on decisions regarding the time it might take for a situation to deteriorate to a critical level and the time available for changes to the prevailing course of action to be effected successfully. Thus, a pilot may get a better picture of indicators such as fuel consumption, how soon a storm will hit or the weather will improve, the rate of traffic dissipation in a certain region, and possible changes in the life span of a reserve battery.
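
As a rough illustration of the kind of predictive aid suggested here, the following sketch extrapolates the time remaining on fuel and compares it with the time needed to complete a diversion; the function, figures, and threshold are hypothetical rather than taken from any operational system.

# Hypothetical sketch of a predictive aid: estimate how long the current
# situation can be sustained (here, on remaining fuel) and compare it with
# the time needed to carry out a change of plan. All figures are invented.

def minutes_until_reserve(fuel_kg, burn_rate_kg_per_min, reserve_kg):
    """Linear extrapolation of the time left before reserve fuel is reached."""
    usable_fuel = max(fuel_kg - reserve_kg, 0.0)
    return usable_fuel / burn_rate_kg_per_min

time_available = minutes_until_reserve(fuel_kg=4200, burn_rate_kg_per_min=45,
                                       reserve_kg=1800)
time_to_divert = 40.0  # minutes needed to reach and land at the alternate

margin = time_available - time_to_divert
print(f"Time available: {time_available:.0f} min, decision margin: {margin:.0f} min")
if margin < 10:
    print("Advisory: commit to the diversion now; the window is closing.")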

Klein highlights two main determinants of decision errors during situation assessment: information and experience (368). The manner in which information is presented at the point where decisions have to be made greatly influences decision outcomes. An important rule of thumb is to ensure that all information is user-centered at all times. In some cases, crews may misuse relevant information, leading to accidents. In other cases, a decision error may be traced to the use of incomplete or imperfect information, which may be evident through misjudged distances, descent rates, and altitudes, and inappropriate responses to visual illusions.

The issue of visual conditions is particularly important in this discussion because it relates to the debate on the kind of visual information that pilots must have to fly safely. Today’s navigation instrumentation has become sufficiently sophisticated to allow pilots to fly safely in situations where no external visual cues are available. However, piloting in such situations requires that all instruments be in perfect condition and that the crews be well trained in how to use them. Any errors in the way information is presented, interpreted, or used may lead to disaster. It is hardly surprising that in one study involving an in-depth analysis of accident data, nearly 50 percent of accidents occurred in visually impoverished environments, and 70 percent of them resulted in fatalities (Shappell et al. 239). The same study also found that of the accidents that happened in broad daylight, only 30 percent resulted in fatalities (Shappell et al. 239).

On the other hand, pilots normally develop experience through training, such that they become used to patterns of how situations develop in different circumstances and the decision errors to be avoided. The resulting awareness of sequences of events and the trends they represent can greatly contribute to the avoidance of aviation accidents. A disturbing observation is that some pilots fail to follow some basic concepts and precepts even after recurrent training, especially in the area of CRM (crew resource management). This problem may be attributed to the national culture of the pilots concerned. Another way to explain it is through the aforementioned theory of bounded rationality. In yet other cases, pilots may have valid, empirical grounds for rejecting certain types of training. For example, a small but significant number of pilots have been rejecting CRM training because it has been associated with an increase in decision errors among CRM-trained pilots (Kearns 47).

Assisting Crews in Course of Action

Other than situation assessment, the other area where crews can benefit from improved assistance is the course of action. Many decision errors relating to the choice of course of action are attributed to inadequate simulation. Simulation plays a critical role in CRM training. Using sophisticated simulators, crews are able to practice specific ways of dealing with errors without jeopardizing the lives of passengers and to receive feedback on performance at both individual and team levels. In this regard, inadequate simulation before choosing a course of action may be an indicator of inadequate CRM training. It may also indicate that training is not being conducted on an ongoing basis, leading to a decay in attitudes and practices. Additionally, it is imperative for the training to be adapted to specific organizational contexts, conditions, and experiences to forestall rejection by some pilots, particularly those who are highly experienced and have gotten used to doing things in a certain way.

When choosing a course of action, pilots should be assisted to understand the interactions between errors, threats, and the way they are managed in determining the outcomes of safety efforts. This necessitates the development of a model that can be used to analyze the causes of errors, the strategies for avoiding and mitigating those errors, and the effectiveness of those strategies. The model should describe the course of action that should be chosen in specific situations, the types of decision errors that are likely to occur during the execution process, and how to manage those errors and the threats they present. One thing that is certain to emerge in the process of applying such a model is that decision errors and accidents are rarely attributed to a single cause, but rather to a constellation of contributing factors. In many cases, pilots fail to take all these factors into consideration due to inadequate simulation.

Presently, flight manuals typically provide instructions on how crews can deal with different system malfunctions. For instance, in the event of an engine anomaly, the crew can refer to the manual for information on when it would be most prudent to reduce power, leave the engine running, or shut it down. In such a situation, decision aids may take the form of prompts directing crews to consider several options before jumping to action or to assess the disadvantages of the selected course of action. The prompt should also enlighten the pilot on the various outcomes that may arise from the course of action. In other words, the decision aids should conduct “what if” reasoning and provide the crew with answers to different hypothetical situations, with a view to encouraging forward thinking. Importantly, the aids can serve the purpose of enlightening decision makers on the worst-case scenarios that may arise from the chosen course of action and ways of managing them.
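
A very simple rule-based version of such a “what if” prompt might look like the sketch below; the parameter names, thresholds, and advisory texts are hypothetical and are meant only to show how the prompts could be structured.

# Hypothetical "what if" prompt generator for an engine anomaly, in the
# spirit of the decision aids described above. The parameter names,
# thresholds, and advisory texts are invented for illustration.

def engine_anomaly_prompts(vibration_level, oil_pressure_psi, fire_warning):
    """Return prompts that push the crew to weigh options and their
    downsides before committing to a course of action."""
    prompts = []
    if fire_warning:
        prompts.append("Shutdown indicated. What if the warning is spurious? "
                       "Cross-check secondary indications before securing the engine.")
    elif oil_pressure_psi < 25:
        prompts.append("Consider reducing power. What if the pressure keeps falling? "
                       "Review single-engine landing options along the route.")
    elif vibration_level > 4.0:
        prompts.append("The engine may be left running at reduced thrust. "
                       "What is the worst case if vibration increases on approach?")
    else:
        prompts.append("No immediate action required. What trend would change that?")
    return prompts

for prompt in engine_anomaly_prompts(vibration_level=4.6,
                                     oil_pressure_psi=38,
                                     fire_warning=False):
    print(prompt)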

Decision aids are indispensable even in situations where crews have carried out adequate simulation to assess possible outcomes. This is because cognitive factors such as anxiety and confusion may impair decision making. In such situations, the crews may perform poorly in integrating numerical probabilities into the broader picture of the course of action and its possible outcomes. To avoid decision errors, it is always better to present such information using graphic representations. Humans are naturally better at understanding information presented through graphics than at computing numerical probabilities. Such aids afford pilots time to carry out context-sensitive estimates of the trade-offs that must be made whenever certain situations arise.

One of the arguments against the introduction of decision aids is that they are likely to have a negative effect on pilots’ decision making. This may happen through recalculation of the risks involved within the new system and a subsequent change of behavior to maintain the previous risk level. This argument creates the impression that it is impossible for decision aids to have a lasting impact on the quality of decisions that pilots make. It promotes the view that pilots who use decision aids are likely to take risky courses of action that they would otherwise not have taken in the absence of those aids. Such a behavioral change is more likely to occur among crews who are under growing pressure to exploit the increased safety and functionality of avionic systems for economic gain. For example, during the Habsheim airshow in 1988, an aviation accident happened after a pilot misjudged the airplane’s capability and decided to fly at a lower altitude (Ladkin 348). The airplane, an Airbus A320, was scheduled to perform a low-altitude flyover at 100 feet above the ground, but the pilot decided to fly at 30 feet. Consequently, the plane ploughed into nearby trees before crashing to the ground, killing three passengers (Ladkin 348). Such behavior can be avoided if crews are enlightened about the importance of maintaining the original risk levels even when decision aids are introduced.

Despite these risk-related challenges, it is evident that pilots need assistance with the elements of decision making that are particularly problematic for humans. If properly used, the aids can enable them to avert potential disaster. Nevertheless, this is not to say that the aids can lead to the detection of every incident situation. Yet changing avionic systems designed to pinpoint hitherto unidentifiable errors may easily alter pilot behavior, leading to a corresponding change in the source of error. The solution to this dilemma may lie in a radical change in the way pilot decision errors are conceptualized. In essence, these errors should be viewed as a moving target that needs to be redefined on a periodic basis in response to advancements in the field of avionics.

Currently, better displays have been introduced in the cockpit, and automation is already being used to control numerous aircraft functions. Despite these developments, there are many situations where human judgment must be relied on in choosing the most appropriate course of action. In the future, aviation managers will face the challenge of adapting existing human decision processes to technological advances in the cockpit. There will be a growing need for the human agent to be more adaptive than ever before in efforts by the aviation industry to exploit optimally the strengths presented by advanced tools and job-specific training.

Social and Organizational Pressures

Pilots have been known to make decision errors due to social and organizational pressures. No matter how well trained or experienced crews are, they remain prone to pressure from different people, including passengers and management. For example, pilots normally face social pressure from passengers and other aircrew to continue with the journey despite the existence of cues indicating that the flight should be delayed, grounded, or diverted. Similarly, airline managers may categorize the diversion of a flight as a loss. Since such a decision reflects negatively on the aircrew’s performance, it may greatly influence their decision to continue with the flight even during adverse weather, heavy traffic, or mechanical problems.

Social factors easily create goal conflicts in aviation. A case in point is the socially implied expectations that many pilots perceive, which induce them to engage in risky behavior or to behave like highly experienced professionals even when they are not. Many pilots are reluctant to admit that they are not conversant with a certain situation. For example, a number of runway collisions have been attributed to pilots’ unfamiliarity with the airport and their choosing to go along rather than clarify information. Other accidents are caused by crews who embark on an unsupervised flight after resuming work from a long off-duty period. Such crews may perceive the need to meet social expectations, such that the resulting social pressures outweigh safety goals, particularly in situations where ambiguous cues emerge.

Similarly, organizational pressure may lead to the setting of productivity goals that conflict with safety. In this regard, the example of on-time arrival rates is usually given. Pilots are well aware that the airlines they work for depend heavily on on-time arrival to win customers. This means that a pilot’s failure to arrive on time creates the impression that he or she has failed in the duty of delivering an optimum level of productivity. Another example is that of fuel economy, an issue that is often highlighted by airlines, thereby discouraging pilots from diverting flights, albeit in a subtle manner. As a result, some pilots may choose to fly into a gathering storm instead of flying around it in order not to violate the organization’s stated goal of fuel economy.

One aviation accident that highlights the powerful influence of social and organizational pressures is the U.S. Air Force Boeing CT-43A crash in 1996, which killed Ron Brown, the then U.S. Secretary of Commerce, and 34 other passengers (Romzek and Ingraham 246). The crew of the aircraft seems to have bowed to organizational factors that pressured them to take big risks, possibly against their better judgment. For example, the aircraft had not met the legal requirements for the approach into Dubrovnik because it had only one direction-finder radio when the law required two. Moreover, the crew lacked adequate training in how to read the charts in preparation for an instrument approach. Another problem was that the crew used civilian charts, which they were not authorized to use. It is highly likely that the high-profile nature of the mission and the expectation to operate within set timelines influenced the pilots to fly the plane despite the numerous problems they encountered.

Insights from Accident Analysis

Evidence from accident analysis has provided very important insights regarding decision errors and how to avoid them. For example, 60 to 80 percent of all aviation accidents are attributed to human error (Shappell et al. 228). Another important observation is that during the last decade, access to pilot demographic and situational data has made only modest contributions to aviation safety. This situation may be attributed to the tendency of pilots to reorient their behavior to fit the improving safety levels in an effort to enhance productivity, thereby creating a situation where risk levels remain unchanged. A major concern is that if the current rate of aviation accidents continues, their financial costs will increase astronomically (Shappell and Wiegmann 29). Nevertheless, one positive finding from aviation accident analysis is that the overall accident rate in both military and civil aviation is excellent (Shappell and Wiegmann 29).

Critics of the traditional approach to aviation accident analysis argue that most investigations focus on assigning blame. This may be true in some cases, particularly because of the implications of the outcomes of those investigations for insurance claims and legal proceedings against the airlines involved. However, most investigators of aviation accidents would insist that their work is always geared towards ensuring that similar accidents are prevented from happening again. Shappell and Wiegmann argue that most accident reporting and investigation systems in use today lack a theoretical basis for explaining human error (269). Meanwhile, the U.S. government has already established a legal framework for investigating all aviation accidents. During these investigations, human errors are grouped into four categories: preconditions for unsafe courses of action, unsafe courses of action by operators, unsafe supervision, and organizational influences.

Under preconditions for unsafe courses of action, investigators look at the condition of operators, environmental factors, and personnel factors. Under unsafe courses of action by operators, focus shifts to decision errors, perceptual errors, and skill-based errors. At this juncture, the investigators also explore the possibility that routine and exceptional violations were committed. In terms of unsafe supervision, investigators seek to determine whether there was failure to correct a problem, inadequate supervision, supervisory violations, or preplanned inappropriate operations. Lastly, organizational influences encompass aspects such as resource management, organizational process, and organizational culture.
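
The four-level structure described above can be represented compactly as a nested mapping; the sketch below simply mirrors the categories named in this section and is an illustrative encoding rather than an official investigative tool.

# Illustrative encoding of the four-category error framework described
# above. The structure mirrors the categories named in this section; it is
# a sketch, not an official investigative tool.

ERROR_TAXONOMY = {
    "organizational influences": [
        "resource management", "organizational process", "organizational culture",
    ],
    "unsafe supervision": [
        "inadequate supervision", "preplanned inappropriate operations",
        "failure to correct a problem", "supervisory violations",
    ],
    "preconditions for unsafe courses of action": [
        "condition of operators", "environmental factors", "personnel factors",
    ],
    "unsafe courses of action by operators": [
        "decision errors", "perceptual errors", "skill-based errors",
        "routine violations", "exceptional violations",
    ],
}

def classify_finding(finding, category, subcategory):
    """Attach a raw investigative finding to a slot in the taxonomy."""
    if subcategory not in ERROR_TAXONOMY[category]:
        raise ValueError("unknown subcategory for this category")
    return {"finding": finding, "category": category, "subcategory": subcategory}

record = classify_finding("crew continued the approach despite ambiguous weather cues",
                          "unsafe courses of action by operators", "decision errors")
print(record)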

Using this approach, a number of insightful findings have been generated from investigations of aviation accidents. For example, it emerged that the majority of aviation accidents are caused by the aircrew as well as the environment in which they operate (Shappell et al. 227). Another finding was that a smaller proportion of accidents is attributed to organizational and supervisory causes. Visual conditions and regional differences were also found to be instrumental factors in the occurrence of aviation accidents.

It is unfortunate that the concept of decision errors has not been given the prominence it deserves through an in-depth assessment of processes and outcomes. Similarly, there is an equally wide practice-investigation gap as far as the review of situation assessment and course of action is concerned. This may be one of the reasons why the traditional approach to aviation accident investigations has been widely criticized. To ensure that decision errors do not recur, investigators should focus primarily on how aircrews should have assessed the situation or chosen a course of action and on the factors that prevented them from doing so.

Conclusion

In aviation, a decision error is often defined as a deviation from a standard decision process that increases the probability of bad outcomes. The worst outcome is the loss of human life due to aviation accidents. In all airline operations, both civil and military, a major priority is the avoidance of decision errors with a view to reducing the occurrence of negative outcomes that may lead to disaster. Despite the best intentions of aircrews, decision errors do occur, which is why it is important for stakeholders to look for ways of managing them.

This paper’s findings indicate that one effective way of managing decision errors is by identifying their causes. Most aviation accidents are attributed to human error, and this is a crucial pointer to the role of cognitive factors in decision errors. Contextual factors such as bad weather also contribute to decision errors and, by extension, accidents. To manage these errors, crews need both behaviorally-based and mechanistic interventions. Specifically, they need decision aids that can help them identify and interpret ambiguous cues. With proper decision aids, pilots can perform better in terms of both situation assessment and choice of course of action. Another important finding is that airline operators must stop subtly imposing social and organizational pressures on pilots in a manner that puts safety goals in conflict with organizational goals.

Works Cited

Betsch, Tilmann and Haberstroh, Susanne. The Routines of Decision Making. Mahwah: Lawrence Erlbaum Associates, Inc., Publishers, 2005. Print.

Endsley, Mica and Garland, Daniel. “Pilot Situation Awareness Training in General Aviation.” Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 44.11 (2000): 357-360.

Kearns, Suzanne. e-Learning in Aviation. Burlington: Ashgate Publishing Company, 2010. Print.

Klein, Gary. “Sources of Error in Naturalistic Decision Making Tasks.” Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 37.4 (1993): 368-371.

Ladkin, Peter. “Causal Reasoning about Aircraft Accidents.” Lecture Notes in Computer Science, 1943 (2000): 344-360.

Lipshitz, Raanan. Naturalistic Decision Making Perspectives on Decision Errors. London: Routledge, 1997. Print.

O’Neil, Harold, Jr., Andrews, Dee and O’Neil, Harold, eds. Aircrew Training and Assessment. Mahwah: Lawrence Erlbaum Associates, Inc., Publishers, 2009. Print.

Orasanu, Judith and Davison, Jeannie. “The Role of Risk in Aviation Decision Making: How Pilots Perceive and Manage Flight Risks.” Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 45.2 (2001): 58-62.

Orasanu, Judith, Martin, Lynne and Davison, Jeannie. Errors in Aviation Decision Making: Bad Decisions or Bad Luck? Moffett Field: NASA-Ames Research Center, 1998. Print.

Orasanu, Judith, Martin, Lynne and Davison, Jeannie. “Cognitive and Contextual Factors in Aviation Accidents: Decision Errors.” Linking Expertise and Naturalistic Decision Making. Eds. Eduardo Salas and Gary A. Klein. Mahwah: Lawrence Erlbaum Associates, Inc., Publishers, 2009. Print.

Plant, Katherine and Stanton, Neville. “Why did the pilots shut down the wrong engine? Explaining errors in context using Schema Theory and the Perceptual Cycle Model.” Safety Science, 50 (2012): 300–315.

Romzek, Barbara and Ingraham, Patricia. “Cross Pressures of Accountability: Initiative, Command, and Failure in the Ron Brown Plane Crash.” Public Administration Review, 60.3 (2000): 240–253.

Shappell, Scott and Wiegmann, Douglas. “A Human Error Approach to Accident Investigation: The Taxonomy of Unsafe Operations.” The International Journal of Aviation Psychology, 7.4 (1997): 269-291.

Shappell, Scott and Wiegmann, Douglas. A Human Error Approach to Aviation Accident Analysis: The Human Factors Analysis and Classification System. Burlington: Ashgate Publishing Company, 2003. Print.

Shappell, Scott, Detwiler, Cristy, Holcomb, Kali, Hackworth, Carla, Boquet, Albert and Wiegmann, Douglas. “Human Error and Commercial Aviation Accidents: An Analysis Using the Human Factors Analysis and Classification System.” Human Factors, 49.2 (2007): 227-242.

Williams, Kevin. “Impact of Aviation Highway-in-the-Sky Displays on Pilot Situation Awareness.” Human Factors: The Journal of the Human Factors and Ergonomics Society, 44.1 (2002): 18-27.
