Tuesday, February 7, 2017

Capability Maturity Model: Reassessment

Our initial capability maturity model assessment in March of 2016 was eye-opening. Even though the method was informal, internal, and based entirely on our own subjective ratings of organizational operations, the final score was a shock to the system: an 'F'.

We also discovered that understanding the CMMI model is much easier if you can anchor the conversation in a concrete analogy. We chose hotdogs. As we went through each of our focus questions, we compared our operations against a hotdog stand. So, for Requirements Management (REQM), we asked, "If we were a hotdog business, at what level would our requirements management process fit?" At level 0, we are not making hotdogs at all (i.e., not doing Requirements Management). At level 1, we can make hotdogs but do so inconsistently (i.e., we do Requirements Management sometimes but not always). Each area has several focus questions that are very specific.

Another really useful part of this exercise was the impact it had on our strategy moving forward. The CMMI informal assessment made us realize that we were focusing on things in the wrong order. The visual analogy of a pyramid structure helped us realize that we needed to focus more of our efforts on completing the base of the pyramid, things like consistent Service Delivery (SD), before trying to develop a comprehensive Risk Management (RSKM) plan that was higher up.

In January of 2017, we repeated the process. Although we made some progress, we still got an 'F' as a final grade. There are 1,000 reasons why we didn't make more progress, but the reassessment results are actually a good news story. Our more focused efforts strengthened our foundation, and the momentum generated by that focus will be exponential. I expect us to have moved into the 'D' range by summer's end and into the 'C' range within a year. At that time, we may start looking at a formal independent assessment --- one that is a little more objective.

Tons of maturity model information is available from SEI.

Thursday, September 22, 2016

Green Belt Triple Crown Winners

In the green belt course, the process simulation calls for teams of 4-6 students to manufacture 'hits' by shooting ping pong balls from a catapult onto a target.

The simulation has evolved slightly over the years to emphasize specific learning objectives. In its current form, students also track yield and net profit.

In a class size of 20, four teams compete with each other to improve the process. It is rare for one team to win the competition in all three categories.

In fact, the first triple crown was awarded in August of 2016. Congrats to the Juan Won One team!

Friday, April 22, 2016

Running Faster by Improving the Accuracy of the Stopwatch: When the Preferred Solution is to Blame the Data for Poor Performance

In August of 2015 (nine months from the time of this writing), I was asked to help improve the process of retiring medical treatment records when service members separate. With clear direction from senior levels of the organization, the urgency of figuring out how to retire the medical treatment records in 45 days or less was palpable.

The problem of late records, at least on the surface, was very solvable. First, the record had to be located. Next, the record was shipped to a scanning facility. Finally, the scanning facility would produce a digital image of the hard-copy file and archive it electronically.

Forty-five days seemed like plenty of time to accomplish the task. The scanning facility, by contract, had 14 days to complete the scanning and archive functions, so the medical treatment facilities had 31 days to locate and ship the record. Because service members generally begin the separation process months in advance, the medical treatment facilities could actually start locating the records early and ship them to the scanning facility on the day of separation.

At the first stakeholders' meeting that I attended, the project sponsor made it crystal clear in her kickoff message that only one solution would be considered -- we must develop a new enterprise-wide IT system to track separations data and compute performance metrics on the speed with which we retire medical treatment records. The sponsor insisted that meeting the 45-day timeline was currently impossible because data about who was separating from service and when was less than perfect. Sometimes the names on the separation list changed, because service members made last minute decisions. Sometimes the names on the separation list were incomplete, because service members occasionally leave service without warning (e.g., if a death occurs). Sometimes the separation list had extra names, like when a reservist is temporarily assigned to active duty and then returns to reserve duty.

When I pointed out that even if the data were perfectly accurate and instantaneous, over 40% of the records would still be late, the sponsor let me know that I simply did not understand the complexities of the data issue. When I asked how the 'data issue' could cause 5-10% of the records to never be found, the conversation became somewhat heated.

I really didn't understand. How would an expensive enterprise-wide IT system solve the fundamental treatment record management issues that appeared to be the root cause of the delays? Why were we waiting until the service member separated to find the treatment record? Why were some records still hard copy when we had a system with the capability to keep digital records?

Besides, a new IT development would take at least two years to design, develop, test, and field. Senior leadership had made it clear that the problem should be solved by the end of the next fiscal year (September 2016).

Nine months later, what have we accomplished? The data has improved; we implemented several minor changes to address data accuracy and timeliness. However, the IT solution is still at least a year from being fielded.

Now, we can say with greater accuracy that the treatment records are late and/or lost. We have improved our stopwatch but lost the race.

The irony is that if we had focused on improving the process of managing treatment records, we would have simultaneously improved the quality of the data.
By definition, a better process will generate better data. It does not necessarily work in reverse though --- efforts to improve performance data do not automatically drive a better process.

Thursday, March 3, 2016

Hands In

I learn more about leadership, motivation, and training in 60 minutes of coaching a 7-8 year old basketball team than I could learn in a month on the job.

The reason is a little counter-intuitive. I can make 100 leadership mistakes in a minute at practice, maybe more. I have my own little 10-person developmental laboratory where I can try out leadership strategies, write and revise training plans, and directly apply motivational techniques with the ability to get immediate feedback on their effectiveness. In this case, my DPMO (defects per million opportunities) is quite high --- but I learn something from each mistake.
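DPMO has a simple definition: defects divided by total opportunities (units times opportunities per unit), scaled to one million. A minimal sketch, using hypothetical practice numbers purely for illustration:

```python
def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities (DPMO)."""
    return defects / (units * opportunities_per_unit) * 1_000_000

# Hypothetical example: 10 players, 5 coaching opportunities each,
# 12 coaching mistakes observed during one practice.
print(dpmo(12, 10, 5))  # 240000.0
```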

This picture was taken right after a 32-8 victory. During this game, I learned (1) many hands make light work, (2) most production will almost always come from a core team, and (3) all role players are star performers in the right circumstances.

On the last point, one of our role players had not scored up to this point in the season. In an effort to motivate her, the grandparents offered her $20 per basket. Being a clever player, she told her teammates about the offer and together they conspired to create shot opportunities. Four baskets and $80 later, a star performer was born. Although the payment was a one-game offer, she continues to perform at the higher standard and is now my 3rd leading scorer for the season.

Tuesday, January 26, 2016

CMMI for Service as Process Improvement

Capability maturity models answer the question: What are the characteristics of a high-functioning organization? The de facto standard maturity model is managed by SEI and provides a detailed description of what highly mature organizations do.

As a process improvement tool, a CMMI model provides a standard against which the baseline organization can be compared. Any gaps between the standard and the baseline organization point the way for future improvement plans. The value of the model lies in the assessment material; it forces you to look across a broad array of processes and compare your organization to a consistent standard. Because the standard does not change, any reduction in gaps between the baseline and the standard represents progress for your organization.

The summary graphic shown here is my depiction of the CMMI for Service standard. It consists of 24 must-do processes that define a highly mature organization. Each abbreviation brick represents an important process area for a mature organization. As an example, the REQM brick represents requirements management.

Each brick in the pyramid is kind of like an individual assignment. You score the brick 0-3 based on how well your organization accomplishes that process. Then you combine the scores on all of the bricks to determine your final grade -- your capability maturity level, which is scored 1-5 (or F to A).
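To make the scoring idea concrete, here is a toy sketch of our informal approach -- not the official CMMI appraisal method. The brick names are from the model; the cut points and the self-ratings are hypothetical:

```python
def letter_grade(brick_scores):
    """Map 0-3 brick scores to a rough letter grade (toy model,
    NOT the official CMMI appraisal math)."""
    avg = sum(brick_scores.values()) / len(brick_scores)
    # Hypothetical cut points that evenly divide the 0-3 range.
    for cutoff, grade in [(2.4, "A"), (1.8, "B"), (1.2, "C"), (0.6, "D")]:
        if avg >= cutoff:
            return grade
    return "F"

# Hypothetical self-ratings for three of the 24 process-area bricks.
scores = {"REQM": 1, "SD": 0, "RSKM": 0}
print(letter_grade(scores))  # F
```

In an informal self-assessment like ours, the exact combining rule matters less than the consistency of the standard from one assessment to the next.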

Conducting a formal CMMI assessment can be a monstrous undertaking that requires payment to outside assessors. In my organization, we are planning to start with an internal and informal assessment -- just to get us started and establish a baseline. We are now in the process of translating the CMMI materials into focus questions for our organization.

Wednesday, December 9, 2015

DMADV for Travel Claims

The Navy's annual budget for Permanent Change of Station (PCS) moves is roughly $800 million. That sounds like a lot of money until you consider the scope and size of the effort. Between 110,000 and 160,000 PCS travel claims are processed each year. These claims include various allowances for time in training, family relocation, temporary lodging, and house hunting.

Because the Navy relies on rotational duty assignments by design and Sailors are entitled to PCS-related compensation by law, the expenses associated with PCS moves are a predictable cost of doing business.

However, the speed and accuracy of travel claim settlements have a significant impact on the operational availability of funds during the execution year. Adequate funds to safely cover all PCS-related expenses are obligated in advance of travel, and these funds must be held in abeyance until the claim is settled after travel is completed. Any excess obligations can then be de-obligated and used to fund additional PCS moves. The goal is to settle travel claims within 30 days of travel completion; the process baseline in fiscal year 2015 was a median settlement time of 38 days. In addition, a small number of claims are never settled, which ties up funds and creates the possibility of Sailor indebtedness to the Navy for advances paid on travel expenses.
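Computing the baseline metrics above is straightforward once settlement dates are traceable. A minimal sketch, using made-up settlement times (the real FY15 data behind the 38-day median are not reproduced here):

```python
import statistics

# Hypothetical settlement times in days, for illustration only.
settlement_days = [22, 29, 31, 35, 38, 41, 45, 52, 60, 90]

median = statistics.median(settlement_days)
pct_on_time = sum(d <= 30 for d in settlement_days) / len(settlement_days)
print(median, pct_on_time)  # median days, fraction settled within the 30-day goal
```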

We convened a meeting of the travel claim processing stakeholders in December of 2015 to discuss ways to (1) improve customer service, (2) improve the timeliness of travel claim settlements, and (3) improve the traceability of funds for the purposes of audit readiness. The group quickly settled on a DMADV methodology as an organizing framework for the effort. DMADV stands for Define, Measure, Analyze, Design, and Validate; it is a modified version of the familiar DMAIC model for lean six sigma process improvement. The DMADV framework is used when designing a new process, while DMAIC is more appropriate for refining an existing process.

After constructing a high-level conceptual diagram (see figure), the stakeholders designed a pilot test of the new process and constructed a Plan of Actions and Milestones (POAM) for execution of the pilot. The pilot test will run for a period of six months, include a control group (i.e., all claims not in the pilot), and follow a thorough plan to evaluate the pilot against the measurable goals of the effort. If successful, the pilot will result in a Navy-wide implementation of the improved travel claim settlement process.

Tuesday, October 6, 2015

Telling a Good Story

I recently came across a fantastic news story in one of our process improvement efforts. Unfortunately, the effort and accomplishment were not fully appreciated -- due in part to the way the information was presented. Month after month, we presented the change as an unadorned bar chart. The bar chart showed clear improvements, but the sense of story was lacking.

After working with the team, we applied three recommendations to improve how the information was communicated.
1. We added control limits to the chart to convey a sense of context. How do we know when something changed? When the line breaks the control limits, something has changed.
2. We decided to show both the meter (the # of expired LIMDU PRDs) and the levers (the actions we had taken to influence change).
3. We made the math easy. Senior leadership no longer needs to calculate the size of the impact or guess; we spelled out the accomplishment explicitly.
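The control limits in recommendation 1 can be sketched with a simple Shewhart-style calculation. The monthly counts below are hypothetical stand-ins for the expired LIMDU PRD data, and a production chart would typically estimate sigma from the moving range rather than the sample standard deviation used here for brevity:

```python
import statistics

def control_limits(values):
    """Shewhart-style individuals-chart limits: mean +/- 3 sigma.
    Uses the sample standard deviation as a simple sigma estimate."""
    mean = statistics.mean(values)
    sigma = statistics.stdev(values)
    return mean - 3 * sigma, mean + 3 * sigma

# Hypothetical monthly counts of expired LIMDU PRDs.
monthly_counts = [42, 45, 40, 44, 41, 43, 39, 30, 28, 25]
lcl, ucl = control_limits(monthly_counts)

# Points outside the limits signal that something has changed.
out_of_control = [x for x in monthly_counts if x < lcl or x > ucl]
print(lcl, ucl, out_of_control)
```

When a point breaks the limits, the chart itself tells the story: something changed, and the "levers" plotted alongside it show what we did to cause the change.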