You’ve added people, you’ve bought tools, you’ve run long and expensive migrations, but the biggest issues – years of complaints from customers and employees, slow adaptation to changing markets – remain. You’ve adopted Agile or DevOps ways of working, but you still release once every few months – or less often – and projects are always late. You’ve added more checks and approvals to production changes, but outages still occur. You are desperately trying to change the outcomes of a complex, adaptive socio-technical system.
What you need now are new sources of feedback – to teach you more.
Leaders can start by considering the dynamics common to all systems. Donella Meadows opens with the example of a Slinky toy in her seminal book, “Thinking in Systems.” Holding the Slinky from above in one hand, she removes the hand supporting it from below, and the Slinky drops and bounces, suspended.
She then asks, “What made the Slinky bounce up and down like that?”
“Her hand,” right?
We often wrongly attribute the cause to individual actors within a system.
She repeats the demonstration, but instead of the Slinky, she dangles the box it came in. Of course, when her hand moves, the box doesn’t bounce at all. Now we see that her hand is not what made the Slinky bounce; it was the Slinky itself. If you’ve ever blamed a problem on “human error,” you’ve probably made the same mistake: we wrongly attribute cause to individual actors within a system, ignoring the larger system around them. In software, this lesson is the origin of the blameless postmortem.
But if it’s not human error, then what exactly is the problem?
[ Need to explain key Agile and DevOps terms to others? Get our cheat sheet: DevOps Glossary. ]
The importance of dissent
In “Outliers,” Malcolm Gladwell explains that in the 1980s and 1990s, airline pilots around the world had similar practices and similar planes, yet accident rates in some countries were orders of magnitude higher than in others. The apparent correlation between region and safety was uncomfortable, but undeniable. What was happening?
The most spectacular recent advancements in flight safety have not come from analyzing recorded instrument readings, but from analyzing recorded cockpit conversations.
Piloting an airplane is obviously technical, but it is also social. A flight, like software, is an example of a socio-technical system. The aircraft is just a technical subsystem, part of a larger social structure that includes the flight crew, air traffic control, and more. Thus, the most spectacular recent advances in flight safety have come not from analyzing recorded instrument readings, but from analyzing recorded cockpit conversations. The accidents were caused by poor communication among the crew.
Safety records varied from region to region primarily because cultural norms around communication varied from region to region – in particular, norms around challenging authority.
That challenge is inherently human. Chris Clearfield – a consultant and, incidentally, a pilot – and his co-author András Tilcsik argue in their book “Meltdown” that when an authority figure is at the controls, there is often a lack of dissent, and a lack of dissent predicts failure. For example, in the United States, the vast majority of accidents have occurred when the captain – the most experienced pilot! – was the one flying. People are generally uncomfortable challenging their superiors, but in complex systems you need as many opinions as possible, especially the uncomfortable ones.
Faced with dire consequences, airlines began training crews in dissent, and accidents dropped dramatically. The practice, Crew Resource Management, has since become a global standard with influence far beyond aviation.
[ Want DevOps best practices? Watch the on-demand webinar: Lessons from The Phoenix project you can use today. ]
Feedback and information flow
In systems, feedback is a fundamental force behind their operation. When we fly an airplane, we get feedback from our instruments and our co-pilot. When we develop software, we receive feedback from our compiler, our tests, our peers, our monitoring and our users.
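The tightest of these software feedback loops can be made concrete. As a minimal sketch – the function and its test are invented for illustration, not from the article – a fast unit test turns a design decision into feedback in milliseconds instead of waiting for an outage in production:

```python
# Hypothetical example: a unit test as a fast feedback loop.
# Both the function and the test names are invented for illustration.

def retry_delay_seconds(attempt: int, base: float = 0.5, cap: float = 30.0) -> float:
    """Exponential backoff: the delay doubles each attempt, capped at `cap`."""
    return min(cap, base * (2 ** attempt))

def test_retry_delay_is_capped() -> None:
    # Feedback arrives immediately: the test fails loudly if the cap is broken.
    assert retry_delay_seconds(0) == 0.5
    assert retry_delay_seconds(10) == 30.0  # 512s uncapped, clamped to 30s

test_retry_delay_is_capped()
print("ok")
```

The point is not the backoff math; it is that each feedback source – compiler, test, peer review, monitoring – shortens the loop between an action and the information it produces.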
Dissent works because it is a form of feedback, and clear, prompt feedback is essential to a well-functioning system. As reviewed in “Accelerate,” a four-year study of thousands of technology organizations found that fostering a culture that openly shares information is a reliable way to improve software delivery performance. It even predicts the ability to achieve non-technical goals.
These cultures, called “generative” in Ron Westrum’s model of organizational culture, are oriented toward performance and learning. They understand that information, especially when it is hard to hear, serves the mission, and so employees speak up more freely, without fear of reprisal, than in rule-oriented (“bureaucratic”) or power-oriented (“pathological”) cultures. Messengers are trained, not shot.
“The antidote to complexity isn’t necessarily simplicity, it’s transparency.”
Feedback loops are often the source of surprising nonlinear dynamics in complex systems. But not all surprises are bad. By intentionally exploiting feedback loops, we can use unpredictable adaptive behavior for our own purposes.
In one of Meadows’s favorite examples, simply moving electricity meters to a highly visible spot in the home reduced electricity use by 30%. For the same reason, the Accelerate authors found that continuous delivery – rapid feedback cycles from development to production – is predictive of generative culture and peak performance.
As Clearfield likes to say, “The antidote to complexity isn’t necessarily simplicity, it’s transparency.”
The learning organization: how to cultivate it
Getting feedback is one thing; learning from it is another. Organizations have sources of information everywhere, hidden within their people and their failures, but few are effective at integrating them. Those few separate themselves from the pack.
If you are a leader, don’t speak first
Clearfield and Tilcsik suggest that the next time you have an idea in mind, don’t speak first. Instead, start by soliciting diverse and opposing opinions, and make sure others feel safe enough to offer them. Try your peers’ or direct reports’ suggestions, even if you are skeptical. Experiment – like Toyota.
If you hear information that makes you uncomfortable, thank the messenger. That behavior is exactly what you need. Listen for and amplify faint signals of trouble, like quiet complaints, near misses, or – most importantly – emotional cues. These signals are trying to teach you something. If you do not amplify them, you implicitly indicate that they are unwelcome.
Bring together different minds
The authors of Accelerate and Meltdown both cite evidence that teams composed of multiple races or genders are more likely to critically examine one another’s opinions, make fewer errors when recalling facts, and score higher on measures of collective intelligence.
At Red Hat, a small IT team was working on the single sign-on solution for redhat.com. When we brought in “outsiders” from our engineering and marketing teams to build its next generation for the hybrid cloud, the change that followed was transformative. In six months, our release rate doubled while failed requests per month dropped by 98%. It all started with an unlikely conversation.
From disappointment to curiosity
Were these results predictable? Not exactly. What was predictable was that, with new sources of feedback, we would learn something. Of course, we have a long way to go. But if we accept that our systems are too complex and dynamic to be predictable, we allow ourselves to move from disappointment and fear to curiosity and adaptation.
If we unlock the information and creativity within our teams, we may not be able to predict exactly how we will get results, but we can predict that we will get them.
[ How can automation free up more staff time for innovation? Get the free eBook: Managing IT with Automation. ]