Why We Measure: Geoff Twietmeyer
To keep score? To find out if we’re winning or losing? To see if that new couch we want will actually fit down the basement stairway? At its most basic level the answer is yes, but as leaders is that sufficient? We measure ourselves and our teams against goals, standards, and desired outcomes, but we shouldn’t measure just to keep track of where we stand. We need to measure to change behavior. We need to measure to inspire our teams to greater things, and to drive them to improve when performance falls below expectations (usually an implied measure, by the way).
I found religion with measures, so to speak, fairly late in my career, which is somewhat odd given a long history with team sports and a very competitive family. Finding measures for knowledge work is never easy, and that ambiguity can be a major crutch. In the fall of 2009 I was promoted to a senior leadership role at a global automotive parts supplier. One of the first things I was tasked with was generating meaningful measures for knowledge workers in three locations on two continents. Out came the differential equation books and the algorithm development, and I produced a measure so complicated it could not possibly be wrong. Mission accomplished! After my first presentation to the senior leadership team, the CEO (not an engineer) offered his help in developing engineering metrics. Mission not quite so accomplished.
Over the next year there were numerous fits and starts in generating measures of the effectiveness and efficiency of the efforts and accomplishments of 125 technical people. During one of our annual senior management retreats, the Operations Director chastised me about the quality of the bills of materials hitting the manufacturing floor. I’d love to say that I pulled out my metrics and put up a vigorous defense, but that didn’t happen. Such metrics did not exist. Why not? Because I was measuring the wrong things. I was watching the score of the basketball game and ignoring the shooting percentage. We began to measure the quality of engineering changes, bill releases, and processing time. Now we knew what behavior to change, where to add inspection to contain errors, and where to focus improvement efforts. Engineering change processing time went from 51 days to 34 days, manufacturing stoppages dropped from one every 3.5 weeks to one every 33 weeks, and cost dropped from $92 per change to $77 per change. Measurement allowed us to focus on the things that mattered and change the relevant behaviors.
The revelation around poor performance led me to ask my engineering managers what else they thought we did poorly. Their answer: “We retest a lot of pre-launch parts that should pass the first time through.” So we measured. We tracked on-time completion, re-tests, and failure types. We generated a lot of data and made some improvements, but something was still missing. Further investigation revealed that the test engineers were not promptly writing test reports at the completion of each test. Another thing we did poorly, so we decided to measure it. At the start it took an average of 62 days to complete a report. Within six months it was down to 31 days, eventually leveling out at nine days after eight months. Pretty remarkable change, and all it took was measurement. Literally, the engineering leadership did nothing other than begin measuring and posting results. No change in process, no fancy new equipment or software; we just paid attention and demonstrated that this was something important and that we were watching. Measurement alone changed the behavior…that’s why we measure.