Society and Our Technology Built World
Posted on 2 June 2011 by cjf
The interrelationships between society and technology run deep. We all partake and participate in the unfolding technology evolution “discussion” that is our lives. The tools we use, try out, improvise, critique, and/or advocate are our minimal contributions to this discussion. The accidents of technological history set the context for the discussion. We are all technologists entangled in a technological world! Technology has been the main (perhaps the only?) means by which human progress has been achieved, with tools like the pencil, slide fastener (or zipper), jet airplane, water systems, skyscrapers, bridges, and computers all dramatically changing society. Henry Petroski’s great short book “Invention by Design: How Engineers Get from Thought to Thing” explores the design and engineering arts in the full richness of their social context in nine intriguing case studies.
I first read Invention by Design in February 1999. Recently I was re-reading it when Michael Tweed of The Ben Franklin Thinking Society invited me to lead the group’s monthly Science & Technology meetup. That led to the Discussion: Engineering Failures & Society on 8 May 2011. Here are some thoughts reflecting on Petroski’s book, the 8 May meetup, and further cogitating about the big picture of society and technology. Hopefully these notes and your feedback will help us better understand the technological world at the core of our ever changing civilization.
What is Technology?
Petroski’s definition of technology, which embraces our “networks, systems, and infrastructures,” suggests that civilization itself may be technology. So it would seem that technology embraces culture, values, psychology, history, and the multidimensional elements of the environment (materials science, biology, anthropology, geophysics, chemistry, etc.). Buckminster Fuller goes further:
In its complexities of design integrity, the Universe is technology. The technology evolved by man is thus far amateurish compared to the elegance of nonhumanly contrived regeneration. Man does not spontaneously recognize technology other than his own, so he speaks of the rest as something he ignorantly calls nature. — Buckminster Fuller, Synergetics, 172.00-173.00
By taking Petroski’s “networks, systems, and infrastructures” to the next level of “design integrities” and identifying them as technology, Bucky leads us to the biggest of big pictures: Universe itself! As social creatures we often think of society as the big picture. I think his point is well made: technology is an inherent component of Universe itself. Human society is our storied Earth-developed technology. It seems likely that human society will become the “brain” managing the regenerative ecological functions of Gaia (the hypothesis that Earth behaves as a single living system). If that happens, the storied technology of Earth would probably become even more syntropic and powerful than what life has achieved thus far. Regardless, society and the technology with which it is built are inextricably intertwined!
Design and Engineering in Society
Design and engineering are the arts of consciously working to evolve and develop our technological infrastructure to improve our worlds. Petroski emphasizes the role of society in the engineering process and vice versa in these illuminating quotes:
Engineering is a fundamental human process that has been practiced from the earliest days of civilization. … We have to think and scheme about nature and existing artifacts and figure out how they can be altered and improved to better achieve objectives considered beneficial to humankind.
It is noteworthy that Petroski includes compromise and politics in his conception of design and engineering initiatives. Bucky Fuller was avowedly apolitical and implied that good design innovations will be spontaneously adopted. Petroski’s view seems more realistic: engineering is an ongoing process, never ending, always evolving: it is a discussion … it is inherently political. I think Fuller’s idealistic view is apt and important for visionaries whose work is so far ahead of its time that contemporary political forces would reject such new ideas (politics always lags significantly behind the initial vision for change). Still, it seems that most design or engineering projects that are actually implemented will be influenced by social forces with their concomitant political elements.
It is my feeling that too many engineers get so caught up in the practical and its political-economic forces that their work is stifled by slavery to the expedient. On the other hand, too many dreamers and visionaries are so idealistic that they find it difficult to contribute to practical projects and are relegated to the margins of social progress. Bucky was exceptional in that even though most of his ideas were too visionary and idealistic to be implemented in his lifetime, he managed to implement some substantial commissions.
It is also my feeling that present-day society is too focused on short term “return on investment” (1–5 years is typical, with only a few industries investing more than 10 years out and almost no one investing 100 or 1000 years out). So we are currently making too little investment in the kind of technology that will be needed in just a few decades. The Long Now Foundation stands out as perhaps the only organized effort to envision Humanity’s 10,000-year plan. Academia also seems too short-term focused with its cut-throat “publish or perish” values and overly-specialized “departments”. Long range initiatives also deserve substantial investment!
How might such idealism be made more practical? I recommend one of Ken Iverson‘s Nucor mottos: “If it’s worth doing, it’s worth doing poorly.” That is, if you identify something worth doing, start boldly working through its learning curve now. Do not worry that your first efforts might be “done poorly”: it’s worth doing, so get started! In short, do what Bucky did: boldly prototype solutions as stepping stones for the future.
Petroski puts it more bluntly: “An idea that unifies all of engineering is the concept of failure.” Failure criteria are explicit ideas about how a system can fail to perform as intended. Preferably, the failure criteria are identified during the design process, but Petroski cites examples such as the explosions that destroyed de Havilland’s Comets, where some failure criteria were discovered only through post-failure analysis. Petroski points out that failure can take nontechnical forms including environmental impact, aesthetics, and economics (including bankruptcy). In Operating Manual for Spaceship Earth, Bucky blames over-specialization:
Of course, our failures are a consequence of many factors, but possibly one of the most important is the fact that society operates on the theory that specialization is the key to success, not realizing that specialization precludes comprehensive thinking.
A Case Study of Failure: The Aluminum Beverage Container
In Chapter 5 of “Invention by Design”, Petroski explores the role of failure with a case study of the Aluminum Beverage Container. When the iron food container was first introduced, instructions for opening the containers with hammer and chisel were sometimes provided. Later innovations, from dedicated openers like the churchkey to Fraze’s pull-tab and Cudzik’s stay-on-tab, progressively removed the need for a separate opening tool.
As one can see from this seemingly simple example, there is tremendous complexity in accounting for all the failure criteria! Diligent effort and years of continuous redesign and analysis are needed to produce safer and more effective solutions for widely deployed technologies like the aluminum can. The inherent complexity means there are always opportunities for big improvements. Thinking this through, it becomes clear that these opportunities spur the incessant design evolution we all experience and sometimes struggle with as future shock.
Failure is the Wellspring of Invention
The aluminum beverage can case study illustrates how failure leads to invention in that Fraze’s pull-tab solved the problem of being thirsty without a churchkey. Then Cudzik’s stay-on-tab solved the environmental hazard that was a side effect of Fraze’s pull-tab. Modifying existing designs to overcome observed shortcomings drives innovation in the design and engineering arts. In short, failure is essential to progress … it is, in fact, the wellspring of progress. Here are some inspirational quotes on failure:
Everyone makes mistakes, even geniuses like Galileo. Engineers must always be alert for what they may be oversimplifying and overlooking or to what conclusions they may be jumping. Because errors in engineering can have disastrous consequences, it is especially important for engineers to be reflective and alert in their design and analysis. — Henry Petroski
In the video, Edward Tenner cites other “risks of safety” including “normalization of deviance” (where people gradually adopt a culture of bypassing safety controls that initially were inviolable), “risk compensation” (where the adoption of safer procedures “encourages” the workers to take bigger risks to “compensate” for their improved situation), and “practical drift” (where a safety feature is thoughtlessly required even when the “fix” is impractical; e.g., the capsizing of the SS Eastland, which killed 844 people, due to instability from retrofitted lifeboats). Tenner suggests that these risks can be mitigated by engaging engineers and designers in multidisciplinary teams whose many different perspectives can proactively identify some of the latent problems that are inevitable with each new push for better safety.
Big Failures: Catastrophe and Disaster
Big failures seem to be in the news a lot lately: the Fukushima Nuclear Accident, Southwest’s failing jet fuselages, BP’s Gulf Oil Spill, infrastructure failures from Hurricane Katrina and the 2011 Joplin tornado, the September 11 attacks, and the global financial crisis of 2008-9. As suggested above, failure is normal and is indeed an essential element in all progress. What is the nature of catastrophic failure?
Big events happen less frequently than smaller scale events. It is an elementary observation that should inform our equanimity. Even if the impact is enormous and vigorous remediation is required, remember that big events are actually very rare. Long term, effort is best applied to finding perspicacious solutions that provide wide-ranging improvements and also mitigate the next disaster. Design work is filled with complex tradeoffs, and insufficiently vetted, rushed “solutions” can cause more problems than the original disaster (witness the USA’s recent wars, which have cost more lives, destroyed more capital equipment, and consumed more money than the disaster on 9/11 that instigated them). Intelligent improvements to ameliorate the effects of another “big one” make sense, but efforts to “prevent” one “at all costs” can be a big waste of resources. The wretched plan to irradiate airline passengers with “see-you-naked” scanners is another example of over-reacting to unlikely risks such as IEDs (Improvised Explosive Devices) on planes [see this cost-benefit analysis].
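The cost-benefit point can be made concrete with a toy expected-value calculation. All of the numbers below are hypothetical, chosen only to illustrate the shape of the argument (they are not taken from the linked analysis):

```python
# Toy expected-cost comparison for a "prevent at all costs" security measure.
# Every figure here is a made-up illustration, not real data.

annual_attack_probability = 1e-4   # hypothetical: 1-in-10,000 chance per year
loss_if_attack = 5e9               # hypothetical loss from one attack: $5 billion
measure_cost_per_year = 1.2e9      # hypothetical annual cost of the measure
risk_reduction = 0.5               # hypothetical: the measure halves the probability

# Expected annual loss the measure averts = probability * loss * reduction
expected_loss_averted = annual_attack_probability * loss_if_attack * risk_reduction

ratio = measure_cost_per_year / expected_loss_averted

print(f"expected annual loss averted: ${expected_loss_averted:,.0f}")
print(f"annual cost of the measure:   ${measure_cost_per_year:,.0f}")
print(f"cost/benefit ratio: {ratio:,.0f}x")
```

With these illustrative inputs the measure costs thousands of times more per year than the expected loss it averts, which is the sense in which over-reacting to unlikely risks wastes resources that could fund wide-ranging improvements instead.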
Nassim Nicholas Taleb poignantly illustrates the problem of rare events with his Black Swan Theory. The history of the term “black swan” is fascinating: in 16th century London the notion of a black swan was a statement of impossibility. Then in 1697, Willem de Vlamingh discovered flocks of them in Western Australia. So “black swan” is now a metaphor for the insight that assertions of “impossibility” based on prior experience can be untrustworthy. When there is almost no historical experience to guide us and complex decisions are necessary (Taleb’s so-called Fourth Quadrant), the future is inherently unpredictable. In such situations it is unwise to delude oneself with traditional risk mitigation strategies (like insurance, six sigma, etc.). Taleb argues that “all small probabilities are incomputable” and therefore meaningless. Attempts to compute them typically result in significant underestimates, which can be enticingly deceptive and thus dangerous.
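Taleb’s warning about underestimated small probabilities can be sketched with a short simulation. The distribution and thresholds below are my own illustrative choices, not Taleb’s: we draw heavy-tailed “losses” and then ask how often a “six sigma” event actually occurs, compared with what a thin-tailed (normal) model would predict:

```python
import random
import statistics

random.seed(42)

# Draw heavy-tailed "losses" from a Pareto distribution (alpha = 2.5:
# finite variance, but a fat right tail).
n = 100_000
losses = [random.paretovariate(2.5) for _ in range(n)]

mu = statistics.fmean(losses)
sigma = statistics.stdev(losses)

# How often does a "six sigma" event actually occur in this data?
threshold = mu + 6 * sigma
empirical = sum(x > threshold for x in losses) / n

# A normal model puts P(X > mu + 6*sigma) at roughly 1e-9:
# about one occurrence per billion observations.
normal_estimate = 1e-9

print(f"empirical tail frequency: {empirical:.2e}")
print(f"normal-model estimate:    {normal_estimate:.2e}")
```

The empirical frequency comes out orders of magnitude above the thin-tailed estimate: the “astronomically low probability” was an artifact of the model, which is exactly the trap of computing small probabilities that Taleb describes.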
Another important thinker about risk management in society is Charles Perrow. The second edition of his book “Normal Accidents” was published in 1999 and he has a 2010 book The Next Catastrophe: Reducing Our Vulnerabilities to Natural, Industrial, and Terrorist Disasters. Here is a short review as I have not had a chance to read them:
“[Charles] Perrow is famous for his book Normal Accidents: Living With High-Risk Technologies, originally published in 1984. In it he argued that most major industrial disasters could be traced not to simple operator error but to the vulnerabilities of what he called complex, highly coupled systems, where each part depended on many others. He showed how small and apparently disconnected failures could cause such a system to fail catastrophically and unpredictably. So unpredictable are these systems that an effort to prevent one mode of failure may inadvertently create another one.”
Before a disaster, people tend to dismiss the possibility of black swans and their large impact. Even when someone raises the spectre of potential disaster, we tend to dismiss the concern noting that it has never happened before, or we compute a meaningless astronomically low probability to boldly set ourselves up for infamy. Sometimes, we dismiss the concern because fixing it might cost too much. What is not often considered ahead of time is the fact that big unexpected failures often have extremely large impacts on psychology and society. These effects must be factored into planning! With Charles Perrow, Henry Petroski, Nassim Nicholas Taleb, and Edward Tenner ringing alarm bells alerting us to the nature of failure, perhaps we are now informed enough to build more robust systems in the future. Or will their message dissipate over time as the “normalization of deviance” suggests?
Beyond awareness, understanding, and education, we have other tools to mitigate disasters. One trend that is helping is to include larger, more diverse teams in the design process. These more resourceful groups are better able to identify the full set of failure criteria and find better designs which can effectively balance tradeoffs. It could be that global Internet-connected communities working with open designs and discussions could take this notion of reliability from diversity to an even more effective scale. Systems thinking and comprehensive thinking approaches like synergetics can also help us find better solutions. For example, a wind-shedding geometry like the geodesic dome could substantially reduce property damage from hurricanes and tornadoes. Amory Lovins et al. at the Rocky Mountain Institute have an interesting report entitled “Factor Ten Engineering Design Principles” on designing with whole-system thinking and integrative design. When failure criteria are addressed in the design phase and with a broad and creative approach, technology can more incisively meet society’s needs through effective design and engineering!
Fostering Technology to Build Society
We have seen that technology is inseparable from society. Indeed it may be that technology is the primary means by which life builds increasingly complex systems to perform its local syntropic functions. If that is so, our role as individuals in the conversations to design and re-design technology is at the core of all social development. Failure is a focal point in developing technology: it is both inevitable and the impetus for our never ending quest to re-make the world to be better and better. In less than 250 pages Henry Petroski’s Invention by Design helps clarify the nature of design and invention so that you can participate more effectively in the discussions and initiatives that improve society and our technology built world.
I look forward to reading your questions, thoughts and ideas in the comments.