A Modular Software Process Mini-Assessment Method
Karl E. Wiegers
Doris C. Sturzenberger
Organizations often launch software process improvement initiatives based on the Software Engineering Institute's Software Capability Maturity Model (CMM)5 with a comprehensive process appraisal. These appraisal methods include the CMM-Based Appraisal for Internal Process Improvement and the Software Process Assessment. However, such appraisals are expensive and time consuming, so many companies find it difficult to perform them frequently.
Several organizations have created small-scale assessment techniques to take the process pulse of a software organization between full appraisals. Motorola developed a progress assessment instrument, with criteria an organization can use to evaluate its performance for the approach, deployment, and results achieved in a given CMM Key Process Area.1 The Interim Profile technique, developed jointly by the SEI and Pacific Bell,2 relies primarily on the CMM maturity questionnaire to gather data pertaining to a project's software process maturity and to develop a process profile indicating KPA satisfaction. The NORAD System Support Facility uses a "spot check" approach, in which each activity described for a CMM KPA is evaluated as to how effectively it is being conducted.3
CMM-based software process improvement initiatives are underway throughout the internal software development and product software development organizations at Eastman Kodak Company. Many of these organizations are using small-scale assessments to stimulate SPI at the project level, to track progress toward higher maturity levels, and to assess an organization's readiness for a full-scale appraisal. This article describes a flexible, modular mini-assessment method that enables construction of a customized activity sequence to meet the needs of an individual project. Several Kodak departments have applied this method successfully. We also provide several tips for readers who are interested in implementing their own mini-assessment methods.
A process assessment itself does not lead to any benefits; it leads only to knowledge. An assessment is part of the investment an organization makes in software process improvement. Failing to make beneficial process changes following the assessment wastes this investment, in addition to making the participants skeptical about the organization's true commitment to improving its process capabilities.
Over time, different Kodak departments had developed three distinct software process mini-assessment approaches. While their objectives were similar, they differed in the maturity questionnaire used, the steps involved, and the time commitment by both assessors and project team members. One method was essentially a two-day miniature version of a SPA. Another variant required about 12 hours of contact time spread over several sessions, including an eight-hour session to hold a practitioner discussion and generate findings by consensus. In a third, very compressed approach, one to three project representatives completed a maturity questionnaire by consensus in a facilitated session. The assessors then generated findings by identifying performance gaps in all CMM key practices in the KPAs covered.
Members of Kodak's SPI community wished to develop a common method that used a standard set of tools and procedures, yet accommodated the current methods where possible. The objective was to construct a mini-assessment architecture that could be tailored to each project's improvement objectives, life-cycle status, team size, and time constraints. Standard procedures and tools would make it easier to bring new assessors up to speed, and they would facilitate collaboration among assessors from different departments. The method also had to yield consistent and reliable results. While the modular mini-assessment method, or MMA, described here has not yet been universally adopted across Kodak, it has been applied on many projects in several organizations. Using a mini-assessment to launch or reinforce a process improvement initiative brings a level of visibility and focus to the effort that goes far beyond the informal improvement activities that might come out of regular project status review meetings.
Tip 1: Determine the objectives of your SPI initiative, and select assessment techniques aligned with tracking your progress toward achieving those objectives.
Figure 1 illustrates the overall process flow for the MMA. We can combine these steps in various ways to reduce the number of separate events. The mini-assessment is distinct from the follow-up action planning and action plan tracking activities. Members of the software engineering process group (SEPG) that supports the assessed project's organization may facilitate these essential steps, but they are ultimately the project's responsibility.
Figure 1. The modular mini-assessment process flow allows several options within each step. Some steps can be combined to reduce the number of separate events.
The principal data-gathering methods used in the MMA are responses to a process maturity questionnaire and an optional project participant discussion. All project team members who participate in the MMA must have one to four hours of CMM orientation. No managers above the project software leader are involved in data gathering. A confidentiality agreement makes it clear that all mini-assessment data and findings are private to the project team, and that no data is attributed to individuals. The MMA is designed to use two assessors (lead and backup), although a single assessor can perform several steps effectively, thereby reducing costs.
Tip 2: Decide which data collection methods will afford the appropriate balance between objective and subjective input, on both the process activities the project team performed and the results the team achieved.
Although the MMA is based on the CMM, it was not specifically designed to comply with the CMM-based appraisal framework.4 Consequently, the MMA cannot yield an official maturity level rating. Because we use the method primarily to initiate and sustain SPI activities, we are more concerned with identifying appropriate improvement opportunities than with maturity level ratings.
The flexibility of the MMA method comes from the multiple options available for most assessment steps. The selections made affect the number of meetings held and their durations. The time required for a mini-assessment ranges from two to 16 contact hours per participant, depending on the options chosen. The following options, shown in Figure 1 and summarized in Table 1, are available for each MMA component.
Table 1. Mini-Assessment Component Options
After a project decides to have a mini-assessment, the SEPG manager assigns two SEPG members to be the assessors. These assessors meet with the project's software leader (and perhaps the software quality leader) to plan the activities. The assessors describe the mini-assessment process and collect information about the project and the team members. The project leader conveys his or her expectations for the mini-assessment experience, and the assessors state their expectations of the project participants. This planning meeting can also be used to educate the project leader further about the CMM and on the SPI strategy for the organization, if necessary.
The assessors stress that the mini-assessment is only the first step on the path to improved software process capability. The real work consists of planning and implementing actions to address shortcomings identified in the project's current processes. If the project leader balks at committing the time needed to follow through on action plan implementation, we question whether the mini-assessment is worth performing at this time.
Tip 3: At various points throughout the mini-assessment, explain to the participants how this activity and its results relate to the follow-up steps that will need to be taken. This will help build momentum for the post-assessment SPI activities.
Next, the assessors present typical mini-assessment objectives, and the project leader rates each of these as being of high or low priority ("medium" is not an option). Possible objectives are to
• identify process strengths and improvement opportunities,
• educate the team on SPI and the CMM key process areas,
• serve as a catalyst for improvement,
• prepare for a formal CMM assessment,
• obtain team buy-in to the importance of software process improvement, and
• identify software engineering best practices being used.
The pattern of high-priority objectives helps the planning group select appropriate mini-assessment activities to meet the objectives.
The planners select the specific components that will make up this mini-assessment, choosing from the options defined by the MMA method. This includes selecting the KPAs covered by the questionnaire to be administered. The project leader also decides whether the entire project team, or just a representative slice, will participate in the assessment. The planning group sets a preliminary schedule, and the project leader identifies a project representative to serve as the process liaison between the assessors and the project team and to assist with logistics. The deliverable from the planning stage is an agreement that summarizes the objectives, participants, and events that were selected for this mini-assessment.
Tip 4: Decide which subset of the project team should participate to provide accurate data, gain ownership of the mini-assessment outcomes, and keep costs low.
This kickoff event provides the first opportunity for the project team to hear what the mini-assessment is all about. The opening meeting can be held as a separate event (typically as part of a regularly scheduled project team meeting) or as a brief lead-in to administering the questionnaire. When it is done as a separate event, the assessors typically spend more time describing software process improvement and the organization's SPI strategy. The opening meeting provides an excellent opportunity for the project leader and higher level managers to state their support for the mini-assessment and their expectations for the subsequent process improvement activities.
Four choices are available for presenting some background on the CMM and SPI to the project team members. The assessors normally present a short briefing (10 to 15 minutes) prior to administering the maturity questionnaire. This refresher is sufficient for teams that have undergone a previous mini-assessment. Projects having less CMM exposure can opt for a small briefing (about 30 minutes) or a large briefing (about one hour). We strongly recommend that participants who are unfamiliar with the CMM take a four-hour in-house course on software process improvement using the CMM prior to beginning the MMA. The assessors present the shorter briefings as part of the initial mini-assessment activities.
A maturity questionnaire is an important data-gathering instrument for our mini-assessments. The project leader can choose which questionnaire to administer, as well as the respondents. The assessors facilitate the questionnaire administration session, using standard slides to describe the intent of each KPA before the participants answer the questions for that KPA.
We adapted our basic questionnaire from one codeveloped by Kodak and the Institute for Software Process Improvement (ISPI). It addresses many key practices and subpractices of the Activities Performed common feature of each KPA, along with some institutionalizing practices. ISPI also created a second questionnaire that addresses only institutionalization factors and a composite questionnaire that encompasses all key practices of the CMM. The assessors can use any of these questionnaires in a mini-assessment, although we nearly always use the first one.
All the questionnaires have possible responses that indicate how frequently each practice is performed (Always, Usually, Sometimes, Rarely, Never, Don't Know, Not Applicable), rather than the Yes/No choices used in the SEI's maturity questionnaire.6 We also encourage participants to write comments on the questionnaires.
The second questionnaire administration option is whether to collect an individual questionnaire from each participant or a single set of consensus responses. The consensus approach is valuable for stimulating discussion among participants and clarifying their understanding, but it's impractical if more than a few participants are involved. Individual responses (anonymous, of course) provide a broader cross-section of input from project team members. The pattern of responses can also highlight significant discrepancies in how the team members view the software practices being used on the project.
Tip 5: Don't expect participants new to SPI to understand the CMM well enough to accurately complete a maturity questionnaire on their own. Administer the questionnaire in a group to help all participants have a common understanding of the questions.
Questionnaire response analysis
The assessors analyze the questionnaire responses using a spreadsheet tool that was originally codeveloped by Kodak and ISPI. The outputs from this tool are individual question response distributions, profiles of question ratings for each KPA (see Figure 2), and an overall project KPA profile that indicates a satisfaction percentage for each KPA (see Figure 3). To compute individual question ratings, we weight the responses from individual questionnaires. An Always response receives a 1.0 weight, Usually gets 0.75, Sometimes gets 0.5, Rarely gets 0.2, and a response of Never has a zero weight. We average these weighted scores for all of the questions for a given KPA (all questions weighted equally) to compute an overall percentage satisfaction rating for that KPA. Note that this is an approximate quantification of the practices being performed, not a direct evaluation of whether the KPA goals are being achieved, which is the primary objective of a full CMM appraisal.
Figure 2. This sample KPA question profile shows the overall performance rating, expressed as percentages, for each question in a particular KPA.
Figure 3. This sample project KPA profile shows the overall satisfaction rating, expressed as percentages, for five key process areas from a single mini-assessment.
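The weighting and averaging scheme just described is simple enough to sketch in code. The response weights and the equal-question averaging follow the article; the function names and sample responses below are hypothetical, and excluding Don't Know and Not Applicable responses from the averages is our assumption, since the article does not say how they are treated.

```python
# Response weights as given in the article.
WEIGHTS = {
    "Always": 1.0,
    "Usually": 0.75,
    "Sometimes": 0.5,
    "Rarely": 0.2,
    "Never": 0.0,
}

def question_rating(responses):
    """Average weighted score for one question. Don't Know and
    Not Applicable responses are excluded (an assumption)."""
    scored = [WEIGHTS[r] for r in responses if r in WEIGHTS]
    return sum(scored) / len(scored) if scored else None

def kpa_satisfaction(question_responses):
    """Overall percentage satisfaction for a KPA: the mean of its
    question ratings, with all questions weighted equally."""
    ratings = [question_rating(r) for r in question_responses]
    ratings = [r for r in ratings if r is not None]
    return 100.0 * sum(ratings) / len(ratings)

# Hypothetical example: one KPA with three questions, four respondents each.
kpa = [
    ["Always", "Usually", "Usually", "Sometimes"],
    ["Sometimes", "Rarely", "Don't Know", "Usually"],
    ["Always", "Always", "Usually", "Never"],
]
print(f"KPA satisfaction: {kpa_satisfaction(kpa):.0f}%")  # → 64%
```

The per-question ratings in this sketch correspond to the bars in a KPA question profile like Figure 2, and the single percentage per KPA corresponds to the project KPA profile of Figure 3.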
Tip 6: Unless achieving a specific CMM maturity level is your overriding goal, downplay the significance of questionnaire scores and a maturity level rating. Focus instead on the improvement opportunities that the questionnaire response patterns and written participant comments reveal.
The assessors study the questionnaire response profiles and participant comments to compile a list of observations about the project's practice of each KPA. If a participant discussion is scheduled as part of this assessment, these observations constitute preliminary findings that the assessors present at the beginning of the discussion. However, if no discussion is planned, the assessors craft the observations into statements of relative process strengths and findings.
In this supplemental data-gathering activity, the assessors facilitate a discussion with the project team members, optionally including the team leader. Though the discussion is optional, we encourage holding one if the project team can afford the time, and nearly all projects have done so. The additional information elicited by the discussion provides a more complete picture of the project's state of software practice. The scope of the discussion is set during the planning session: it can be limited to CMM topics, or it can cover any process-related issues that are important to the project team.
An important contribution of the participant discussion is to confirm the validity of the questionnaire results. Assessments that rely solely on a questionnaire for input are less reliable, as the participants might have misinterpreted questions or provided a distribution of responses that is difficult to interpret. Inaccurate assessment findings will reduce the assessors' credibility with the project team, which can undermine the entire process improvement effort.
To open the discussion, the lead assessor presents the observations gathered from analyzing the questionnaire responses. Then the project team selects up to three KPAs for further discussion. The process issues raised around these KPAs will be used to generate mini-assessment findings.
Tip 7: Although the questionnaire results and the assessors' observations provide an important basis for the participant discussion, listen closely to the "points of pain" expressed during the discussion and use these as additional input to the findings.
During planning, the project leader can choose to have the assessors generate the findings off-line or to have the project team itself draft the findings with assessor facilitation. In the first case, the assessors develop finding statements after the discussion (or after questionnaire response analysis, if no discussion was held), using all available data gathered during the assessment. Alternatively, the participant discussion is extended by at least one hour, during which the assessors help the participants craft finding statements from the notes gathered.
In either case, the primary deliverables are up to three finding statements per explored KPA. Each finding identifies a relative process weakness, states actual or typical consequences of the weakness, and recommends ways to address the finding. To accelerate findings generation, we have compiled a database of all finding statements from completed mini-assessments, since many projects face similar problems. This database also provides a valuable summary of the issues and concerns project teams are facing, and we have used it for strategic planning.
Tip 8: Keep the number of findings small to keep the project team from being overwhelmed and to provide just a few improvement areas on which to focus. "Focus" is a key word for software process improvement.
Tip 9: Perform a reality check on each recommendation you offer by evaluating whether it really would help address the corresponding finding, and whether the project team could actually implement it if they decided to do so.
The last step of a mini-assessment is to present the findings summary to the appropriate audience. If the assessors generated the findings, they will present them to the project team. If the team generated the findings, they may choose to present the final findings slides to their own management. The scope of visibility of the findings is entirely up to the project team.
During this presentation, we conclude the mini-assessment itself and again emphasize the next steps that the project team needs to take. Whenever possible, we have the project leader announce the intentions for action planning sessions. Projects that complete a mini-assessment but take no further action, and projects that are unable to translate action plans into actions, create an impression that management is not serious about really changing the way people work.
We developed an extensive supporting infrastructure to make the MMA repeatable, reliable, and efficient. We cannot make these elements available publicly, but we recommend you develop similar components for your own use, including detailed procedural guidance, presentation slide modules, forms, checklists, and a database of information about all assessments you conduct.
The MMA procedural guidance document contains detailed procedures for planning and conducting a mini-assessment, entry and exit criteria and checklists for each step, and the roles and activities for the lead and backup assessors. Various tables guide the selection of appropriate component options to satisfy each project's mini-assessment objectives. We update this document as we gain experience and find ways to improve the method.
We developed more than a dozen slide modules that assessors can use during the presentation events. These include three CMM training modules of different durations, slides describing the MMA process, a confidentiality statement, slides illustrating how questionnaire responses are analyzed, and templates for preparing observations and findings.
The ability to quickly assemble the slides to be used for each MMA activity from standard packets saves considerable time and reinvention for each mini-assessment. It also increases the repeatability of the MMA process. Assessors can quickly tailor the slide module templates for a specific project, saving time and providing a consistent look to all of their MMA presentations.
Tip 10: Tailor your generic materials for specific events, but avoid over-customizing them. You can lose the cost savings from reuse if you routinely do excessive tailoring.
Forms, checklists, and templates
We have also developed several tools to streamline the execution of a mini-assessment. To collect background information prior to the planning meeting, we give the project leader a standard project profile questionnaire and a mini-assessment readiness survey. We created forms to plan a mini-assessment, record the time each assessor spends on each mini-assessment stage, collect summary data, and obtain post-assessment feedback from project members. An overall process checklist helps make sure that the assessors do not inadvertently overlook any tasks, and that nothing is forgotten on the way to a meeting. Our templates also help us write mini-assessment agreements and summary reports. As with the slide modules, these electronic aids save us time and enable a repeatable MMA process.
Mini-assessment metrics database
One large Kodak department created a database to store information about their mini-assessments. The data stored includes
• the date, duration, and number of participants in each MMA meeting;
• the questionnaire used, KPAs covered, and questionnaire results;
• the KPAs that were selected for process improvement activities; and
• the assessor time spent on each phase of the mini-assessment.
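As a rough illustration of how such a metrics database might be organized, here is a minimal relational sketch using SQLite. All table and column names are hypothetical assumptions; the article does not describe the actual schema.

```python
import sqlite3

# In-memory database for illustration; a real deployment would persist to a file.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE meeting (
    assessment_id INTEGER,
    meeting_date  TEXT,
    duration_hrs  REAL,
    participants  INTEGER
);
CREATE TABLE questionnaire_result (
    assessment_id    INTEGER,
    questionnaire    TEXT,    -- which instrument was administered
    kpa              TEXT,    -- key process area covered
    satisfaction_pct REAL,    -- percentage rating for that KPA
    selected_for_spi INTEGER  -- 1 if chosen for improvement actions
);
CREATE TABLE assessor_effort (
    assessment_id INTEGER,
    phase         TEXT,       -- e.g., planning, analysis, findings
    hours         REAL
);
""")

# The planning and progress-tracking uses described in the text
# become simple aggregate queries, e.g., average planning effort:
conn.execute("INSERT INTO assessor_effort VALUES (1, 'planning', 3.5)")
conn.execute("INSERT INTO assessor_effort VALUES (2, 'planning', 2.5)")
avg = conn.execute(
    "SELECT AVG(hours) FROM assessor_effort WHERE phase = 'planning'"
).fetchone()[0]
print(avg)  # → 3.0
```

Similar queries over `questionnaire_result`, grouped by KPA across projects, would support the organization-level SPI tracking described in the next paragraph.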
The accumulated data regarding the average time spent on each step helps us plan our schedules and commitments more reliably, as well as letting us calculate the cost of each mini-assessment in total staff time. The data in this database also lets us track an organizationís SPI progress, by aggregating results from multiple projects and seeing which KPAs are being pursued by various projects.
Twenty-four mini-assessments had been completed by the time we submitted this article. The average project team size was 12, with a range of six to 20 participants. The flexibility of the MMA method results in a wide range of both assessor and project participant effort, depending on the component options selected; on average, the participants spent four hours in project activities and the assessors expended a total of 48 labor-hours of effort. The assessor effort began to decrease as all assessors became fully trained and experienced. The cost of a typical mini-assessment is approximately $6,000, or 5% to 10% of the cost of a Software Process Assessment (SPA) or a CMM-Based Appraisal for Internal Process Improvement (CBA IPI).
The value of the modular mini-assessment method is demonstrated by the fact that project leaders choose many different combinations of assessment components to create custom approaches that best suit their needs and schedules. Virtually all of the available component options have been selected by at least one project. Project leaders appreciate the respect this flexible method shows for the realities and pressures their projects face.
Project team members also generally feel positive about the mini-assessment experience: 79% of those who returned feedback forms (approximately a 30% response rate) indicated that the amount of project time spent on the MMA activities was about right. Moreover, 52% felt the mini-assessment would be very or somewhat helpful to their SPI efforts, 37% said it was too soon to tell, and only 11% indicated that the mini-assessment would be detrimental or not helpful.
The strategy of using mini-assessments to launch and sustain SPI activities yields multiple benefits. A mini-assessment can be timed appropriately for each project team, whereas a full organizational assessment can be disruptive to projects with looming deadlines. Each project can deal specifically with its own key process issues. The whole project team usually is involved, rather than just a subset of a large organization as in a typical SPA or CBA IPI. This facilitates the education of all project team members and leaders in SPI and the CMM.
By working closely with software project teams through mini-assessments, SEPG members acquire a better understanding of the common challenges the projects face. One large organization used the patterns of observed process shortcomings to direct the evolution of its SPI strategy over time. The SEPG members gain credibility with the software developers through face-to-face, but non-confrontational, interactions during a mini-assessment. By working with multiple projects, the SEPG encounters opportunities to leverage improved processes from one project to another. This minimizes the amount of invention each project must do on its path to improved processes.
This approach to process assessment is subject to the same risks that confront any SPI activity.7 The most common point of failure we have observed is a lack of follow-through into action planning and action plan implementation following the mini-assessment. The timing of the mini-assessment is also important. If it is performed too late in the project life cycle, it can be difficult for the project to adjust its plans and schedule to permit time for action planning. Also, later mini-assessments might provide fewer benefits for the current project, although they position the team for greater success on their next release or project.
The MMA method does have some shortcomings. The absence of focused interviews and document reviews reduces the rigor of the mini-assessment, making it a less reliable predictor of the likely outcome of a full CMM-based appraisal. In addition, senior managers are not as directly engaged in mini-assessments as they would be in a full organizational assessment. Despite these limitations, the modular mini-assessment method has proven to be an effective component of a large-scale software process improvement initiative at Kodak.
We acknowledge the contributions made to the development of the MMA method by Jeff Duell, Dave Rice, and Marsha Shopes, as well as the maturity questionnaire and supporting materials developed by Jeff Perdue of ISPI, Linda Butler, and Ron King.
1. M. Daskalantonakis, "Achieving Higher SEI Levels," IEEE Software, July 1994, pp. 17-24.
2. R. Whitney et al., Interim Profile: Development and Trial of a Method to Rapidly Measure Software Engineering Maturity Status, Tech. Report CMU/SEI-94-TR-4, Software Eng. Inst., Pittsburgh, Penn., 1994.
3. M. Wakulczyk, "NSSF Spot Check: A Metric Toward CMM Level 2," CrossTalk, July 1995, pp. 23-24.
4. S. Masters and C. Bothwell, CMM Appraisal Framework, version 1.0, Tech. Report CMU/SEI-95-TR-001, Software Eng. Inst., Pittsburgh, Penn., 1995.
5. M.C. Paulk et al., eds., The Capability Maturity Model: Guidelines for Improving the Software Process, Addison Wesley Longman, Reading, Mass., 1995.
6. D. Zubrow et al., Maturity Questionnaire, Tech. Report CMU/SEI-94-SR-07, Software Eng. Inst., Pittsburgh, Penn., 1994.
7. K. Wiegers, "Software Process Improvement: Ten Traps to Avoid," Software Development, May 1996, pp. 51-58.
(This paper was originally published in IEEE Software, January/February 2000. © 2000 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.)