Essays on Software Engineering
But I Don't Have Time!
Did You Hear What I Said?
License to Hack
Making Sure You Buy the Right Packaged-Software Solution
Quality Still Counts on the Web
Has Design Become Obsolete?
If Software Development Is a War, Who Is the Enemy?
What Makes Software Process Improvement So Hard?
Are You Being Blackmailed by Change Resisters?
Ninety Percent Done
Has Design Become Obsolete?
Recently, I've observed yet another disturbing trend in the software industry. The various groups I work with exhibit considerable interest in requirements and coding, but the word "design" rarely comes up. I often ask seminar audiences how they do software design. The response is usually an awkward silence. Then someone hesitantly says, "We mostly write text descriptions of our designs," or "We all draw whatever kinds of pictures we feel like."
I've reluctantly concluded that software developers are doing less design work today than they did, say, 10 years ago. User interface design remains a strong area of interest, and there is increasing enthusiasm about design patterns and component-based development. However, I've encountered projects that have more than a million lines of code but virtually no architecture or other design documentation. I've seen large "design" documents that were almost entirely narrative text, with a few simple figures drawn using no identifiable convention.
Some may argue that explicit design activities are no longer necessary, but I don't agree. I've worked on projects in which the time spent on design resulted in a simpler and more robust solution than if we had just written code based on the detailed structured analysis. Increasingly complex, distributed, and multiplatform systems demand more thought about how best to construct them, not less. Sound designs can facilitate effective component development, use, and reuse. Documented and well-communicated architectures enable developers to safely make enhancements that don't compromise the system's quality.
Perhaps college students don't learn about classical design approaches, and when they enter the workplace, their older colleagues aren't using the newer design techniques. Caught in a time warp between methods, many new grads don't do any design at all. Perhaps a new hire's role models treat design as some quick whiteboard scribbles on the way from the requirements elicitation workshop to the source code editor. The rush to whip out code doesn't seem to leave time to contemplate how best to structure software that will address the user's needs effectively.
Another contributing factor is a tendency for software developers to discard existing methods as irrelevant whenever something new comes along. Most engineering disciplines accumulate a body of knowledge through generations of experience, building on the shoulders of the successful giants of the past and learning from the painful failures. In software, though, we stand on the shoulders of the giants and press them into the quicksand in our rush to adopt the latest development fads and abandon the old. Now that we can do object-oriented design, all that obsolete structured design stuff must be useless, right? On the contrary, software engineers should build a rich toolkit of techniques they can apply to a wide variety of problems. Sometimes a flowchart or a data flow diagram is the easiest and most appropriate way to represent some bit of knowledge.
I wonder if contemporary development environments that combine visual programming tools with very fast workstations encourage developers to substitute frequent compilations for thinking. Too few developers seem to understand and apply basic principles of software design in their haste to get something -- anything -- that sort of works out the door. One consequence of inadequate design is that anyone who does maintenance on a system has to reverse-engineer its architecture and functions from the code. However, the maintainers rarely document the knowledge they gain from reverse-engineering in a design model, so the next maintainer gets to reverse-engineer part of the system again. This is an expensive way to do maintenance.
Or maybe design really isn't needed anymore. What do you think?
If Software Development Is a War, Who Is the Enemy?
Several recent experiences have left me concerned that the field of software development feels more like a war zone than a stimulating and challenging way to earn a living. One magazine editor I spoke with used several military metaphors, including describing a project manager who "snatched defeat from the jaws of victory." A thoughtful essay I read pondered whether software development was more like warfare or the culture of a monastery. I've even used terms like "in the heat of battle" during my seminar presentations. These experiences make me wonder: If we're engaged in a war, whom are we fighting?
Software development (more specifically, computer programming) is one of the few technical disciplines in which self-taught hobbyists and refugees from other fields can earn a respectable living doing something they love. It's a discipline in which people can't believe they get paid for having so much fun and often work at home and on vacation for the joy of making the computer do interesting things. People enter the software profession because of the challenge, the creativity, the never-ending opportunity to learn, and the ability to apply ever-increasing hardware horsepower and software tool capabilities to both business and recreational computing.
At least, that used to be true. Today I don't see as many developers having fun. Although we can't all hack away in a garage doing whatever we like, however we want to, we should be able to enjoy our work. Instead of enjoying perhaps the most exciting and accessible technical profession, many corporate software engineers act as though they have enlisted for a tour of duty and are looking for transfers to units that aren't in the combat zone.
Something seems to be going wrong with this industry. Here are six enemies I think we are battling -- and it's not clear that we're going to win.
What can we do to escape the stress of the war zones too many of us work in today? Begin by helping all members of your organization continually enhance both their technical and nontechnical skills. Educate your customers and your managers about the realities of software development; make sure they understand the "impossible region." Resist the pressure to commit to project goals you know you cannot achieve. Learn about industry best practices and adapt them to work in your environment. Compensate for staff shortages by increasing your team's productivity through superior software processes and an up-front emphasis on quality. Go home once in a while and leave the laptop behind. Finally, remember how much fun software development can be, and balance the need to get business work done against the pure intellectual stimulation of making the computer sing and dance.
What Makes Software Process Improvement So Hard?
If you cannot truthfully say, "I am building software today as well as software can ever be built," you should be looking for a better way. This is the essence of software process improvement (SPI). Despite the apparent simplicity, many software organizations struggle to achieve significant and lasting improvements in the way they conduct their projects.
I see five major reasons why it is difficult to make SPI work. First, insane schedules leave insufficient time to do the essential project work, let alone time to investigate and implement better ways to work. Just as manufacturing industries realize their equipment must be taken off line periodically for retooling, preventive maintenance, and installing upgrades that will improve productivity or quality, a focus on SPI is necessary to upgrade the capabilities of both individuals and organizations to execute software projects. Of course, you can't shut down the software development machine, so the upgrading has to be done in parallel with production work. Process improvement must be integrated with development as a routine way you spend some of your time. It is always difficult to find the time, but I know of one Web development project that takes a few weeks between releases to explore new technologies, experiment, and implement improvements.
A second obstacle to widespread process improvement is that many software practitioners aren't familiar with industry best practices. My informal surveys at conferences suggest that the average software developer doesn't spend much time reading industry literature. Programmers may buy books on Java and COM, but you won't find much about process or quality on their bookshelves. Industry awareness of process improvement frameworks such as the Capability Maturity Model (CMM) has grown in recent years, but effective and sensible application is still not common.
Third, some organizations undertake SPI for the wrong reasons. An external entity, such as a prime contractor, demands that the development organization achieve CMM Level X by date Y, or a senior manager decides to climb on the CMM bandwagon. Someone told me recently that his organization was trying to reach CMM Level 2, but when I asked why, he had no answer. He should have been able to describe some problems that the culture and practices of Level 2 would help solve, or the business results his organization hoped to achieve as a benefit of reaching Level 2.
A fourth barrier to effective process improvement is the checklist mentality, a rigid and dogmatic implementation of the CMM or other SPI framework. Managers and change leaders should realize they need to change the culture, not just implement new technical practices. New processes must be flexible and realistic. I know one company that mandated that all software work products be formally inspected, which is a great idea but not a realistic expectation for most organizations. The company's waiver process will be working overtime, and practitioners may play games to appear as if they are complying with this unrealistic policy.
Finally, organizations may claim the best of intentions but lack a true commitment to process improvement. They start with a process assessment but fail to follow through with actual changes. They devote insufficient resources, write no improvement plan, develop no roadmap, and hence achieve a zero return on their SPI investment. Managers lose interest, practitioners conclude it was an idle exercise, and frustrated process improvement leaders change jobs.
How can you avoid these pitfalls and achieve the kind of process improvement successes that some organizations have reported? First, recognize that SPI is an essential strategic investment for the future of your organization. Focus on the business results you wish to achieve, using SPI as a means to that end, rather than an end in itself. Expect to see improvements take place over time, but don't expect instant miracles. Thoughtfully adapt existing improvement models and industry best practices to your situation. Treat process improvement like a project. Finally, remember that the bottom line of process improvement is that the people in your organization are working in a new way that yields better results. You can make substantive improvements in the way you build software; you have to.
Are You Being Blackmailed by Change Resisters?
After presenting a seminar on software inspections at a client site recently, I asked the development manager about his plans for implementing these practices in his group. He confessed a reluctance to set ambitious expectations for his team. His concern was that, in today's tight job market, a developer who rebelled against what he or she perceived as unreasonable process formality would simply jump ship for another company. This small development group couldn't tolerate much turnover, so the manager wanted to leave the decisions about whether and how to implement inspections up to the team members. I've seen this fear -- or threat -- of staff turnover put the brakes on process improvements before. It makes me nervous.
Although all individuals can choose to improve the way they work, steering an organization toward sustained higher performance requires leadership. Leadership includes admitting when there's a gap between current and desired performance and committing to actions that will close that gap. It means understanding the root causes of your performance shortcomings, identifying industry best practices that can help, and enabling your team to successfully adopt them. Sometimes leadership means helping individuals recognize that the way they work has an impact on the performance others can achieve and encouraging them to stretch beyond their comfort zone.
When a leader attempts to change a group's practices for the better, team members will react in one of three ways. The early adopters will say, "Great! What took so long? How can I help?" These allies can help the leader by piloting new methods on their own work and serving as advocates with their peers. The majority will be skeptical, concerned about adopting new processes while coping with their overwhelming current work demands. Perhaps they had previous encounters with unsuccessful or poorly managed change initiatives. Most of these people will get onboard when they understand the impact the changes will have on their own lives.
A few diehards will lie across the railroad tracks of progress, trying to keep the train from coming through. They'll skip the training seminars, refuse to try new techniques, insist that their old ways of working are more than adequate, and bad-mouth the change initiative to their colleagues. They might even threaten to leave. And maybe you should help them pack.
Last year, I delivered some training at an Internet development company whose managers were very serious about adopting better software practices to address some clear points of pain. Two of their developers fit in the kicking-and-screaming market segment for process change. Because of the damage these developers were doing, both to the change effort and to the project work, their managers were happy to see them make good on their threat to quit. Naturally, this left the managers having to replace people who had some valuable technical skills. On balance, though, the managers viewed the departure of those few obstructionist individuals as a net positive outcome for team effectiveness and morale.
I've worked in organizations where it seemed that no one could make a decision unless anyone who was affected in any way by the decision concurred completely with every aspect of the decision. You can't always run an effective business or development team by consensus. You can't avoid correcting a deficient software process simply because the necessary changes might make someone in the group unhappy. Leadership demands the courage to set strategic directions and provide convincing arguments as to why changes are needed. Change leaders must themselves apply the new practices, thereby leading by example and pulling their compatriots toward the intended objective.
Development managers and process improvement leaders certainly must include developers when evaluating current practices, identifying valuable improvement areas, and designing strategies for adopting process improvements. Change only works when those affected understand the need to make changes and how the changes will affect them personally. However, don't be held captive by recalcitrant developers who try to blackmail you into letting them do whatever they want simply because other employment opportunities abound. If you style your development processes around the group's highest common comfort level, you might never achieve significant performance gains.
I've known some hackers who wanted to work only in their own preferred way, consequences to other team members, customers, or future maintainers be hanged. People who intimidate managers by threatening to quit if they have to follow a process remind me of a five-year-old child who says he'll take his ball and go home if the other kids won't play by his rules. Fortunately, I've also known many software professionals who sought out jobs where they could benefit from the sensible application of rational processes and industry best practices. Which behavior do you want to be a hallmark of your group?
Ninety Percent Done
"Hey, Phil, how are you coming on that subsystem?"
"Pretty good. I'm about 90 percent done."
"Oh. Weren't you 90 percent done a few weeks ago?"
"Yes, but now I'm really 90 percent done!"
The fact that software projects and tasks are reported to be "90 percent done" for a long time has become something of an industry joke. (A related joke states that the first half of a software project consumes the first 90 percent of the resources, and the second half consumes the other 90 percent of the resources.) This well-intentioned but misleading status tracking makes it difficult to judge when a body of work will truly be completed so you can ship the next product release to your customers. Here are several typical causes of "90 percent done" syndrome and a few possible cures.
Cause #1: Inadequate Task Planning. You might have actually finished 90 percent of the tasks you originally thought were needed to complete some body of work, only to discover that there was a lot more work involved than you thought. This often happens when implementing a change in an existing system, as you keep encountering additional components and files you need to modify. The underlying problem is that we don't think of all the necessary tasks when planning and estimating some chunk of work.
To cure this problem, spend a bit more time while planning an activity looking for all the work you'll have to do. You might create task-planning checklists that itemize the steps involved when performing common activities such as implementing a class, building a release CD, or executing a system test. These checklists will help you better estimate how much time the activity will require, they'll reduce the chance of overlooking a necessary step, and they'll help you track your progress more accurately. Similarly, perform a structured impact analysis when you're asked to make a change in an existing system. Some impact analysis checklists and worksheets to get you started are available at www.processimpact.com/goodies.html. Start with these and tailor them to suit your own situation.
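A task-planning checklist like those described above can be sketched as a simple data structure; the activity, steps, and hour estimates below are hypothetical, chosen only to illustrate the idea:

```python
# A task-planning checklist: itemize the steps in a common activity and
# attach an estimate to each, so nothing is overlooked at planning time.
# The steps and hours here are made up for illustration.

release_cd_checklist = [
    ("tag source tree", 0.5),
    ("run full regression suite", 4.0),
    ("build installation image", 1.0),
    ("verify install on a clean machine", 2.0),
    ("write release notes", 1.5),
]

# Summing the per-step estimates gives a bottom-up estimate for the activity.
total = sum(hours for _, hours in release_cd_checklist)
print(f"{len(release_cd_checklist)} steps, estimated {total:.1f} hours")
```

Once a checklist like this exists, it doubles as a progress-tracking aid: each step is a concrete item that is either done or not.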
Cause #2: Too Much Partial Credit. We have a tendency to give ourselves too much partial credit for tasks we've begun but haven't really completed. You might think about the algorithm for a complex module in the shower one morning and conclude that you're about 50 percent done, because defining the algorithm was really the hard part. It's very difficult to accurately determine the percent completion of a large task, both because unidentified tasks probably remain (see Cause #1) and because we're overly optimistic about how smoothly the remaining work will go.
The first step to curing this problem is to break large tasks (milestones) down into multiple small tasks (inch-pebbles; get it?). Inch-pebbles should not be longer than one or two days in duration. The individual items on your task-planning checklist from Cause #1 might serve as the inch-pebbles. Next, track your progress on the inch-pebbles in a binary fashion: a small task is either completely done or it is not done. You get no partial credit for incomplete tasks. Progress on a large task is then determined by what percentage of the inch-pebbles for that big task are entirely completed, rather than by guessing what fraction of a large, involved, and vague body of work is completed. If someone asks you how you're coming on an inch-pebble task and you reply, "I'm all done except...", then you're not done and you get no completion credit for it.
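Binary inch-pebble tracking can be expressed in a few lines; the task names and completion flags below are hypothetical, standing in for the items on a real task-planning checklist:

```python
# Binary "inch-pebble" progress tracking: a small task counts only when it
# is entirely done. No partial credit. Task names here are hypothetical.

def percent_complete(pebbles):
    """Percent done = fraction of inch-pebbles that are fully finished."""
    if not pebbles:
        return 0.0
    done = sum(1 for _, finished in pebbles if finished)
    return 100.0 * done / len(pebbles)

subsystem = [
    ("design class interfaces", True),
    ("implement parser module", True),
    ("unit-test parser module", False),   # "all done except..." = not done
    ("hold code review", False),
    ("integrate with nightly build", False),
]

print(f"{percent_complete(subsystem):.0f}% done")  # 2 of 5 pebbles -> 40% done
```

The point of the binary rule is that the number can't be inflated by optimism: a pebble flips to True only when nothing about it remains unfinished.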
Cause #3: Forgetting About Rework. Studies have shown that software development projects typically devote between 35 and 50 percent of their total effort to rework, doing over something you thought was already done. However, project plans and estimates typically ignore the reality of rework, apparently expecting that everything will go right the first time (it doesn't). As a result, you might think you're nearly done with a task only to discover that correcting the bugs found during late-stage integration and testing adds another 20 percent or so onto your schedule.
The cure here is simple: include rework as an explicit task in your plans following every quality control task, such as testing or a technical review. You won't know how much time you spend on rework on average unless you keep some records. For example, you might discover that you found five defects while testing one module, which took you eight hours to correct and re-test (another form of rework). Now you have some data to estimate how much effort you might expect to devote to rework on the next average module. Without the data, you'll always be guessing on future estimates. But without considering rework when estimating your current task completion status, you'll almost always be wrong.
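The record-keeping described above reduces to simple arithmetic; the module names, defect counts, and hours below are hypothetical data of the kind such records would contain:

```python
# Estimating future rework from simple historical records, as suggested
# above. All module data here is made up for illustration.

records = [
    # (module, defects found in test, hours to correct and re-test)
    ("parser", 5, 8.0),
    ("report engine", 3, 4.5),
    ("db layer", 7, 12.0),
]

total_defects = sum(d for _, d, _ in records)
total_rework_hours = sum(h for _, _, h in records)

# Two useful planning numbers: rework cost per defect, and rework cost
# for an average module, to fold into the next estimate.
hours_per_defect = total_rework_hours / total_defects
avg_rework_per_module = total_rework_hours / len(records)

print(f"~{hours_per_defect:.1f} rework hours per defect")
print(f"~{avg_rework_per_module:.1f} rework hours per average module")
```

Even a record this crude turns "guessing on future estimates" into extrapolation from measured experience, which is the essay's point.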
If you want to cure an organization's chronic case of "90 percent done" syndrome, try to identify the root causes that led your status tracking astray, and actively pursue these simple methods for more accurately tracking your project and task status.
Copyright © 2018 Karl Wiegers. All rights reserved.