Security Screening: improving human performance for greater reliability

In an article published in ASI last June, I wrote that the security checkpoint is where commercial and security interests can merge to benefit operators, passengers and screeners. I also suggested that planning and investing in checkpoint design, applying innovative queuing techniques, valuing the time passengers spend waiting and delivering security services with aptly recruited and trained personnel are sources of value creation that have not yet been fully exploited.

Rest assured, I have not changed my mind; if anything, I am more convinced than ever! Moreover, I am encouraged by the innovative designs and security solutions that have since been tested and deployed in some of our large airports, including technologies that improve the detection capabilities of screening equipment.

At trade shows and conferences, our industry continues promoting innovation as a means to improve the passenger experience, and we should indeed be as concerned with the passenger experience as we are with the reliability of the screening process.

As we transform security checkpoints with these innovations, we may want to pause and reflect to ensure that we are not deviating from the screening mission, and to validate that our services effectively deliver the level of reliability we expect from this security layer.

Innovation can extend beyond technology; it can apply to people and procedures, especially when it comes to detection reliability. In fact, security screening is a process that requires an effective and efficient alignment of people, procedures and equipment, where technology provides the tools that assist screeners in detecting threat items. However, improvements in technology won't necessarily increase detection if the process is plagued by errors and incidents.

Researchers in the United States who investigated the causes of errors have discovered "that about 80 percent of all events are attributed to human error. In some industries, this number is closer to 90 percent. Roughly 20 percent of events involve equipment failures. When the 80 percent human error is broken down further, it reveals that the majority of errors associated with events stem from latent organizational weaknesses, whereas about 30 percent are caused by the individual worker touching the equipment and systems in the facility."[1]
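To put those figures in perspective, here is a quick back-of-the-envelope calculation using the handbook's rounded percentages (the 70 percent latent share is inferred from the "about 30 percent" individual figure quoted above; this is an illustration, not additional data):

```python
# Back-of-the-envelope breakdown of the DOE figures quoted above.
# The 70% latent share is inferred from "about 30 percent ... individual worker".
human_error = 0.80       # share of all events involving human error
equipment = 0.20         # share of all events involving equipment failure
latent_share = 0.70      # of human-error events: latent organizational weaknesses
individual_share = 0.30  # of human-error events: the individual worker

print(f"Latent organizational weaknesses: {human_error * latent_share:.0%} of all events")
print(f"Individual worker errors:         {human_error * individual_share:.0%} of all events")
print(f"Equipment failures:               {equipment:.0%} of all events")
# Roughly 56% of all events trace to systemic conditions, more than twice
# the share attributable to the individual at the equipment.
```

If these proportions hold even approximately for screening, systemic fixes promise more than twice the payoff of correcting individual screeners.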

Screening authorities and providers can improve the detection reliability of screening services by promoting the reporting and analysis of incidents and errors, in order to identify the systemic causes behind most screening errors.

Organizations can use the findings of such analysis to share performance feedback with their workers, increasing awareness and explaining the occurrence of error precursors: "Error precursors are unfavourable prior conditions at the job site that increase the probability for error during a specific action; that is, error-likely situations. An error-likely situation, an error about to happen, typically exists when the demands of the task exceed the capabilities of the individual or when work conditions aggravate the limitations of human nature. Error-likely situations are also known as error traps."[2] Sharing information about failures and learning from errors are some of the best practices found in High Reliability Organizations (HROs).

Examples of error precursors catalogued in the DOE handbook include time pressure, distractions and interruptions, repetitive or monotonous actions, unfamiliarity with a task, and fatigue.[3]

HROs pay a lot of attention to errors: why they take place and how they can be avoided. "Perhaps the most important distinguishing feature of high reliability organisations is their collective preoccupation with the possibility of failure. They expect to make errors and train their workforce to recognise and recover them… Instead of isolating failures, they generalise them. Instead of making local repairs, they look for system reforms."[4]

HROs encourage the reporting of errors in a non-punitive manner, so that the organization can learn from its mistakes, a concept aligned with the values promoted by a "just" corporate culture: "Human error is an effect of trouble deeper inside the system. To do something about a human error problem, then, we must turn to the system in which people work: the design of equipment, the usefulness of procedures, the existence of goal conflicts and production pressure."[5]

Encouraging employees to report errors and incidents requires a high level of trust between screeners and employers. To foster this trust, employers must invest in building a strong relationship with their employees, especially through their frontline management and supervisors.

Training supervisors to act as coaches, to manage operations mindfully with a sense of wariness, and to develop relationships with employees based on trust and mutual respect can increase the screeners' engagement with the screening team, which in turn should improve human performance.

This is particularly important as "active errors can be minimized by performing work with a sense of uneasiness; maintaining situational awareness and avoiding unsafe or at-risk work practices, and being supported through the use of teamwork."[6]

Airports, airlines and other stakeholders rely on screening operations to detect improvised explosive devices (IEDs) and weapons. The reliability of the system could be at risk if the screeners' performance on critical tasks is deficient; in turn, deficient screening will weaken the other layers of our security system.

So, how reliable are our screening operations? It is a sensitive matter and there is little public evidence available for analysis, with the exception of a GAO covert-testing investigation reported in 2007:

"GAO investigators succeeded in passing through TSA security screening checkpoints undetected with components for several improvised explosive devices (IED) and an improvised incendiary device (IID) concealed in their carry-on luggage and on their persons."[7]

Based on fragmented data and our own experience, we believe that there is indeed room to improve the reliability of the screening process and deliver a more effective and consistent performance.

This process is composed of critical steps where the margin of error is slim to none. In his Swiss cheese metaphor, Reason postulated "that the holes in the defences can arise for two reasons: active failures and latent conditions. Nearly all organizational accidents involve a complex interaction between these two sets of factors."[8]
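The arithmetic behind the metaphor helps explain why latent conditions are so dangerous. The miss rates below are invented for illustration; the point is that a latent condition degrading several layers at once multiplies the chance that the holes line up:

```python
# Illustrative only: hypothetical miss rates for three screening layers
# (e.g. X-ray image analysis, walk-through detection, physical search).
# With independent layers, a threat gets through only if every layer misses.
p_breach = 1.0
for miss_rate in [0.10, 0.10, 0.10]:
    p_breach *= miss_rate
print(f"Independent, healthy layers: {p_breach:.3%} breach probability")  # 0.100%

# A shared latent condition (say, chronic time pressure) degrades every
# layer at once, so the "holes" in the cheese line up.
p_breach_degraded = 1.0
for miss_rate in [0.25, 0.25, 0.25]:
    p_breach_degraded *= miss_rate
print(f"Shared latent condition: {p_breach_degraded:.3%} breach probability")  # ~1.56%
```

A modest degradation in each layer, when shared across all of them, raises the breach probability by more than an order of magnitude in this toy example.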

Failures and errors at a checkpoint are not exclusively the consequence of poorly performing or unreliable screeners; they are very often caused by latent conditions that create error precursors and error traps. The vast majority of screening officers want to excel at what they do; they come to work with good intent, but that good intent can only carry them so far!

They must be supported by an organization and a system determined to root out the underlying conditions that lead to errors, while engaging screeners in performance improvement. This can be achieved notably by sharing and using information about errors and incidents: "The common content thread in cultures that strive to be mindful, informed, and safe is that they all focus on wariness… The best way to maintain these states of wariness is to collect and disseminate information about incidents, near misses and the state of the system's vital signs."[9]

As part of our analysis, we have identified critical steps that are essential to improve the reliability of screening operations. Most of those steps centre around the search of passengers and their personal items. This is where errors take place and this is where the system’s reliability must be tested and interventions focused.

Our analysis shows that to implement systemic and lasting solutions, and to achieve continuous improvement of screening reliability, an organization needs to assess procedures and data concerning errors and security failures in three areas of its operations:

  1. Organizational: trust; reporting errors; mindful management; situational awareness; communications

The reliability of screening operations is invariably tied to the culture of the organization. If we are to learn from errors or failures, the employer should create an environment based on trust and respect, where employees are encouraged to report honest mistakes or incidents.

Does the organization's culture encourage constructive questioning? Are screeners encouraged to seek advice and assistance from peers and supervisors? Are situational and procedural awareness promoted among frontline staff? Are supervisors sufficiently proficient to coach and direct employees' focus on critical tasks and on the presence of error precursors?

Are supervisors mindful of the critical nature of screening operations and wary of their reliability? Are they able to convey this mindfulness to their respective teams? Considering the importance of the role played by the frontline supervisor, how does the organization select, train, supervise and assess the performance of its supervisors?

Does the organization systematically gather and analyze data concerning errors and failures? Does it share this information and use it to brief and provide performance feedback to screeners? Does it use the analysis to improve its management of the operations and its overall performance? (A minimal sketch of such an analysis appears after this list.)

  2. Individual: skills; biases and cognitive limits; work design (rules)

Individual performance is shaped by attributes such as attitude, skills and the knowledge required to perform the task. Did we select employees for their attitude, and are we training them for skills? Is performance assessment aligned with clear performance expectations set by the employer?

How are supervisors motivating screeners to perform repetitive tasks? Do screeners feel the pressure associated with performance targets, especially when there is insufficient time to apply the measure effectively, as in the case of image analysis?

When are errors taking place, at the beginning or the end of the shift? Are screeners allowed to go on health breaks? Is fatigue a factor?

Is the screener encouraged to stop and pause (self-checking) when faced with uncertainty, or to check with peers or supervisors, to avoid cognitive biases and assumptions?

Why are screeners not complying with SOPs? Is it a competency issue or a lack of exposure to a specific task? Is deviance from SOPs tolerated at the checkpoint? Do SOPs promote the most efficient and effective means of doing the job, or are screeners finding better ways to achieve the same security outcome? Are the procedural rules too complicated to apply within the allotted timeframe?

  3. The Environment

Screeners operate in a noisy environment, under intense scrutiny, performing repetitive tasks during peak and off-peak periods of activity. They are expected to remain vigilant and attentive to the critical tasks they perform, throughout their shift and at different positions. Are these expectations realistic?

Screeners are affected by ambient noise, passengers and their colleagues, so how can adequate levels of vigilance, attention and motivation be maintained and, most importantly, how can they be measured?
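Returning to the organizational questions above, here is the promised minimal sketch of how error and incident data might be aggregated to surface the most common precursors and the points in a shift where errors cluster. The record fields, precursor labels and figures are all hypothetical assumptions, not a prescribed schema:

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical incident record; the fields are assumptions for illustration.
@dataclass
class ScreeningError:
    task: str            # e.g. "image analysis", "physical search"
    shift_hour: int      # hours into the shift when the error occurred
    precursor: str       # e.g. "time pressure", "distraction", "fatigue"

def summarize(errors: list[ScreeningError]) -> None:
    """Aggregate errors by precursor and by hour into the shift."""
    by_precursor = Counter(e.precursor for e in errors)
    by_hour = Counter(e.shift_hour for e in errors)
    print("Most common error precursors:")
    for precursor, count in by_precursor.most_common(3):
        print(f"  {precursor}: {count}")
    print("Errors by hour into the shift:")
    for hour in sorted(by_hour):
        print(f"  hour {hour}: {by_hour[hour]}")

# Made-up example data:
summarize([
    ScreeningError("image analysis", 7, "fatigue"),
    ScreeningError("image analysis", 1, "time pressure"),
    ScreeningError("physical search", 7, "distraction"),
    ScreeningError("image analysis", 6, "fatigue"),
])
```

Even a simple tally like this turns anecdotes into evidence: if errors cluster around fatigue late in the shift, the remedy is rostering and breaks, not retraining.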

There are technological solutions on the horizon, some of which are being tested right now, such as remote screening, which could reduce errors while increasing productivity. But technology will only take us so far if we don't improve human performance in parallel.

By asking systematic questions about the current level of reliability of our screening operations and about human performance, we’re heading in the right direction.

A number of corporations and organizations have developed methods to improve the reliability and quality of their products or services, by building quality into the process and by analyzing their performance through data, including performance indicators.

Whether we are Kaizen or Six Sigma practitioners, what these approaches have in common is their avoidance of assumptions and their diligence in drilling down to the root cause of a problem or error. This also parallels our proposal to improve human performance and the reliability of our screening operations.
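As one concrete illustration of building quality measurement into the process, the standard Six Sigma defect rate, defects per million opportunities (DPMO), could be tracked for screening. The figures below are invented for the example:

```python
# Standard Six Sigma metric: defects per million opportunities (DPMO).
# The screening figures below are hypothetical, for illustration only.
missed_items = 12        # covert-test items not detected (defects)
bags_screened = 40_000   # units processed
checks_per_bag = 1       # detection opportunities per unit

dpmo = missed_items / (bags_screened * checks_per_bag) * 1_000_000
print(f"DPMO: {dpmo:.0f}")  # 300 defects per million opportunities
```

Tracked over time, such an indicator would show whether interventions against specific error precursors are actually moving detection reliability.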

We hope to be able to test the results of our analysis in an operational screening environment in the near future, to validate our findings. If and when we do, we will look forward to sharing the results with our industry’s practitioners.

[1] US Department of Energy (DOE) Standard (2009). "Human Performance Improvement Handbook, Vol. 1", p. 1-10. Retrieved from http://energy.gov/sites/prod/files/2013/06/f1/doe-hdbk-1028-2009_volume1.pdf

[2] US Department of Energy (DOE) Standard (2009). "Human Performance Improvement Handbook, Vol. 1", p. 1-15. Retrieved from http://energy.gov/sites/prod/files/2013/06/f1/doe-hdbk-1028-2009_volume1.pdf

[3] US Department of Energy (DOE) Standard (2009). "Human Performance Improvement Handbook, Vol. 1", p. 2-32. Retrieved from http://energy.gov/sites/prod/files/2013/06/f1/doe-hdbk-1028-2009_volume1.pdf

[4] Reason, James (2000). "Human error: models and management". BMJ, Volume 320, 18 March 2000, p. 770. Retrieved from www.bmj.com

[5] Dekker, Sidney (2012). "Just Culture: Balancing Safety and Accountability, 2nd edition". Hampshire, England: Ashgate Publishing, p. 80.

[6] Wachter, Jan and Yorio, Patrick (2013). "Human performance tools: Engaging workers as the best defense against errors & error precursors". Safety Management, pp. 54-64. Retrieved from www.asse.org

[7] United States Government Accountability Office (2007). "Vulnerabilities Exposed Through Covert Testing of TSA's Passenger Screening Process", GAO-08-47T. Retrieved from http://www.gao.gov/assets/120/118622.pdf

[8] Reason, James (2012). "A Life in Error: From Little Slips to Big Disasters". Surrey, England: Ashgate Publishing, p. 75.

[9] Weick, Karl and Sutcliffe, Kathleen (2007). "Managing the Unexpected: Resilient Performance in an Age of Uncertainty, 2nd edition". San Francisco, CA: Jossey-Bass, p. 175.