The Psychology of Iteration (part 2): Cybernetics and Control
John Cleese (yes, that John Cleese) tells the story of his "favorite childhood superhero, Gordon the Guided Missile". In each episode, Gordon would be given a task. Setting out to accomplish this task, he would be constantly told how he was wrong: too low, too high, target to the left, target to the right, etc. He was always wrong. Always mistaken. However, he never despaired, and grew more confident with every correction. At the end of every episode he would reach his target successfully, and everyone was happy.
The lesson Cleese wants you to take is that it was precisely because Gordon was willing to be "continually" wrong that he was ultimately successful. Gordon's "Process for Success" depended on feedback. It depended on "tracking the moving target" by constantly changing his direction. Viewed from the larger perspective, Gordon's ability to consistently deliver successful results was more stable because he included feedback and a willingness to admit mistakes. Even more: an eagerness to make as many small mistakes as quickly as possible, before they became big mistakes.
The theory behind system stability through feedback goes back to Norbert Wiener's Cybernetics and the original work on System Theory. Systems that can flex are more robust than those that are rigid. You can see this in everything from ancient Japanese earthquake-resistant buildings to metabolic mechanisms to ecosystems to market economies.
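Gordon's episodes can be sketched as a textbook negative-feedback loop. This is a minimal, illustrative sketch (the function name and gain value are my own, not from any real guidance system): each step measures the remaining error and applies a partial correction, and the error shrinks toward zero.

```python
def guide(position, target, gain=0.5, steps=20):
    """Negative feedback: each step corrects a fraction of the remaining error."""
    for _ in range(steps):
        error = target - position   # the feedback signal: "too low / too high"
        position += gain * error    # a small correction, never a perfect guess
    return position

# Gordon starts far from the target, is "wrong" at every step,
# and still ends up essentially on target.
print(guide(0.0, 100.0))  # converges to a value very close to 100.0
```

Note that no single step is "right" — the stability comes from the loop, not from any one correction, which is exactly Gordon's process.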
Why should an endeavor as complicated as building software be any different?
You know you are going to get feedback. Do you unit test? Of course. Why? To feed the test results back into better code. Does your team do regular system builds? Pre-production tests? Beta releases? Customer-reviewed prototypes? Feedback, feedback, feedback, feedback.
Grab hold of this live wire. Consciously use this power.
But this can be scary. What keeps you from being electrocuted instead? What about the "feedback" when you put the microphone in front of the speaker? Tulip-mania and the dot-com bubble? Haven't you seen the Tacoma Narrows Bridge video (http://www.civeng.carleton.ca/Exhibits/Tacoma_Narrows/)?
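The microphone-and-speaker squeal and the Tacoma Narrows collapse are the same correction loop with the gain turned up too far: each "fix" overshoots by more than the original error, so the system oscillates and diverges instead of settling. A minimal sketch, with illustrative names and values of my own:

```python
def guide(position, target, gain, steps):
    """The same feedback loop; stability depends entirely on the gain."""
    for _ in range(steps):
        position += gain * (target - position)
    return position

print(guide(0.0, 100.0, gain=0.5, steps=20))  # modest gain: settles near 100
print(guide(0.0, 100.0, gain=2.5, steps=20))  # excessive gain: each overshoot
                                              # is bigger than the last; diverges
```

With gain above 2.0 the error is multiplied by more than 1 in magnitude on every step, so the "corrections" amplify rather than damp — the mathematical shape of runaway feedback.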
Won't constant feedback, from customers or users, just as easily lead to cost overruns? Drive the project "into the ditch"? Every time I talk to customers they add more features! And we have a fixed-price contract for a specific deliverable!
As with any tool, feedback can burn your hand or cook your dinner. Software development must use controlled feedback. It will depend on being explicit about why, and what kinds of, feedback you solicit.
As I hear the doubts ("so, how do I control this feedback?") I ask you to consider the following "homework" question: What keeps your unit testing from eventually generating an infinite number of bugs? Why doesn't each "fix" create even more bugs?
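One way to frame that homework question (this is my own sketch, not an answer from the article): if each fix introduces, on average, r new bugs with r < 1, the total work is a convergent geometric series — bugs + bugs·r + bugs·r² + … = bugs / (1 − r) — rather than an infinite cascade.

```python
def total_fixes(initial_bugs, r, generations=50):
    """Sum the bug 'generations' when each fix spawns r new bugs on average."""
    total, current = 0.0, float(initial_bugs)
    for _ in range(generations):
        total += current   # fixes done in this generation
        current *= r       # new bugs those fixes introduced
    return total

# 100 bugs, each fix spawning 0.3 new bugs on average:
print(round(total_fixes(100, 0.3), 1))  # approaches 100 / (1 - 0.3), not infinity
```

The loop stays finite for exactly the same reason the guided missile converges: each pass shrinks the remaining error.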
John Cleese's quotes are from his speech on "The Importance of Mistakes"; any errors or misinterpretations are mine.
Padulo and Arbib's "System Theory" was my basic text on linear system theory.