Safety Drivers: The First Level of Leading Indicators

The way we measure safety has contributed to our tendency to manage safety reactively.

All of our early safety metrics were reactive, i.e., gathered only after accidents or incidents occurred. Since our metrics essentially were failure metrics, we fell into a pattern of managing safety to produce fewer failures.

The serious problem with this approach is that it reaches the limits of its effectiveness before it tells us how to prevent all accidents. As we fail less, our failure data diminishes, losing its statistical significance before our performance reaches zero accidents.

This limitation of traditional safety metrics and management has spawned a search for what commonly are called “leading indicators” of safety that will allow us to better predict and prevent accidents before they occur. Although this thinking is going in the right direction, it hasn’t gone far enough. Ultimately, safety will have multiple metrics connected by algorithms that provide truly prescriptive measures with which to manage safety.

This set of multiple metrics will form something similar to the balanced scorecard used by strategic managers. It will have at least four major sets of metrics, the first of which might be called “safety drivers.” These are key performance indicators of our major safety efforts designed to improve organizational safety conditions and behaviors. They fall into five major categories: leadership, supervision, conditional control, onboarding practices and knowledge/skill building. 

Leadership is considered a driver of safety and is measured in many organizations with excellent safety performance. Leaders’ activities often are the crux of such metrics: the percentage of their official communications that mention safety topics; their reinforcement of safety strategies in regular interactions and performance appraisals with direct reports; their contributions to ongoing safety strategy development; and their drop-in rate on safety meetings and training sessions. Executive-level personnel who do not supervise people directly (such as planners and engineers) often are measured on their consideration of safety in their plans and designs, and their inclusion of workers for input on such plans and designs. 

Supervision often is measured in terms of safety coaching. Some organizations measure the amount of safety-coaching training and refresher training supervisors attend. Others also measure the supervisors’ efforts to create focus on specific safety-improvement targets. Still others measure the number of supervisor-to-worker contacts that result in safety feedback on performance being given. Some organizations also measure the number of influences on worker behavior the supervisor addresses, such as workers’ perceptions of best practices, well-spaced reminders that help workers form safety habits and the availability of tools and equipment convenient to the worksites.

Conditional control of safety issues most often is measured as the percentage of safe vs. unsafe conditions discovered on periodic audits of the workplace. There also are opportunities to measure the percentage of discovered unsafe conditions actually addressed with action plans and brought to resolution. Some advanced programs measure the discovery of new or previously undetected risks, or solutions to older ones. Some organizations actually give the conditions scores based on the projected probability that the risk could cause an accident and the potential severity of the accident.
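The condition-scoring approach described above can be sketched in a few lines of code. This is a minimal illustration, not any organization’s actual system: the 1-5 probability and severity scales, the example findings and the function names are all assumptions made for the sake of the example.

```python
# Hypothetical sketch: each unsafe condition found on an audit gets a
# risk score from the projected probability of an accident (assumed
# 1-5 scale) times the potential severity (assumed 1-5 scale).

def condition_risk_score(probability, severity):
    """Return a 1-25 risk score for one audited condition."""
    if not (1 <= probability <= 5 and 1 <= severity <= 5):
        raise ValueError("probability and severity must be 1-5")
    return probability * severity

def audit_summary(conditions):
    """Given (description, is_safe, probability, severity) tuples,
    return the percent safe and the unsafe items ranked by risk."""
    total = len(conditions)
    safe = sum(1 for c in conditions if c[1])
    unsafe = [(desc, condition_risk_score(p, s))
              for desc, ok, p, s in conditions if not ok]
    unsafe.sort(key=lambda item: item[1], reverse=True)
    percent_safe = 100.0 * safe / total if total else 100.0
    return percent_safe, unsafe

# Illustrative audit findings (invented data).
findings = [
    ("guard rail in place", True, 1, 1),
    ("blocked fire exit", False, 3, 5),
    ("frayed extension cord", False, 2, 3),
]
percent_safe, ranked = audit_summary(findings)
```

Ranking unsafe conditions by the combined score lets the highest-risk findings surface first for action planning, which mirrors the prioritization logic the paragraph above describes.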

Onboarding practices in safety include selection and screening of potential candidates as well as the initial orientation, formal training and on-the-job training or mentoring new employees receive. The most common metric derived from these practices simply is a completeness score. Was the candidate put through all interviewing and onboarding steps in the prescribed order and within designated time frames? However, many organizations have developed qualitative as well as quantitative metrics, although the former often are more subjective than the latter. Many organizations have made great improvements to onboarding practices when scoring the efforts and comparing them over time to employee safety performance on the job.

Knowledge/skill-building activities can include supervisory safety coaching, but more often focus on training for general and job-specific safety. Safety training can be instructor-led, classroom-type training (both in-house and outsourced), computer-based training or on-the-job types of activities. Although many organizations still rely on the Kirkpatrick metrics for evaluating training (learner reaction, knowledge gain, transfer to the workplace and sometimes ROI on the training investment), more and more actually are testing for competence in doing the job safely. This usually is a job performance demonstration by the trainee and an evaluation of demonstrated ability by a certified professional in the specific job field. Organizations with goals of excellent safety performance often state that every employee is expected to become a safety expert at his or her job as well as a competent worker.

These measurements of activities designed to drive safety performance often are given weighted scores and combined into an overall score of safety drivers. Most of these are based on a 1-10 or 1-100 scale with the higher numbers reflecting the better scores. Many organizations give ranges of performance a color code and develop a dashboard of each metric to scan overall performance. For example: 90-100 could be green, 80-89 could be yellow and 79 or below could be red. A table of these metric titles and their corresponding colors provides a focus on problem areas at a glance, which could be followed up with improvement discussions and action plans.
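A weighted overall score and color-coded dashboard of the kind described above could be computed as follows. The five category names match the driver categories discussed earlier, but the weights, thresholds and example scores are illustrative assumptions only; real organizations would choose their own.

```python
# Illustrative weights for the five safety-driver categories
# (assumed values; must sum to 1.0 for a weighted average).
DRIVER_WEIGHTS = {
    "leadership": 0.25,
    "supervision": 0.25,
    "conditional_control": 0.20,
    "onboarding": 0.15,
    "knowledge_skill": 0.15,
}

def color_code(score):
    """Map a 1-100 metric score to a dashboard color:
    90-100 green, 80-89 yellow, 79 or below red."""
    if score >= 90:
        return "green"
    if score >= 80:
        return "yellow"
    return "red"

def overall_driver_score(scores):
    """Weighted average of the category scores on a 1-100 scale."""
    return sum(DRIVER_WEIGHTS[name] * score
               for name, score in scores.items())

# Invented example scores for one reporting period.
scores = {
    "leadership": 92,
    "supervision": 85,
    "conditional_control": 78,
    "onboarding": 88,
    "knowledge_skill": 95,
}
dashboard = {name: color_code(s) for name, s in scores.items()}
overall = overall_driver_score(scores)
```

With these example numbers, conditional control shows red on the dashboard even though the weighted overall score lands in the yellow band, which illustrates why the at-a-glance table of per-metric colors is worth keeping alongside the single combined score.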

It’s important to remember that this “safety driver” is not the ultimate, stand-alone leading indicator of safety. It simply is a metric that tells an organization if it is working its plans to drive safety performance. If the plan is being worked, we need to know if the plan is working, i.e. having the desired results. 

The answer to that question involves two other sets of leading indicators and their correlation to the lagging indicators. If we drive safety performance, do we significantly change individual and organizational competency? Does that competency, in our controlled conditional environment, produce excellent performance? And does that performance produce superior lagging indicators? This approach to a balanced scorecard for safety has proven to outperform the simplistic linear thinking that a few leading indicators drive the lagging indicators.


Terry L. Mathis is the co-author of “STEPS to Safety Culture Excellence” and founder and CEO of ProAct Safety. In 2013, EHS Today named him one of “The 50 People Who Most Influenced EHS” for the third consecutive time. He can be reached at 800-395-1347 or [email protected]
