I wrote a tweet a few days ago that went:
Complexity is a truly weird book. The story the book tells is compelling — or as compelling as a story about any scientific movement can be — and the author, one M. Mitchell Waldrop, is known for historically important books. Even the concept of a complex adaptive system is easy to grok. But the second and third order implications of the idea are pernicious as hell; it has affected the way I see everything around me.
This is a companion discussion topic for the original entry at https://commoncog.com/learning-from-waldrop-complexity/
Fwiw, I work at the intersection of AI and simulation, applying those technologies to model complex adaptive systems while seeking to optimize certain outputs (e.g. a supply chain is arguably a complex adaptive system, and a few metrics to be optimized include: inventory holding costs, delivery times, resilience in the face of demand and supply shocks, and carbon emissions).
The field of complexity has been something of a spectator sport. Yes, there are entrepreneurs and investors who understand and then create, say, network effects and other positive feedback loops, but a lot of complexity theory is descriptive: it simply looks at the system, rather than attempting to control it.
One of the most interesting sets of algorithms to emerge from AI in the last decade is deep reinforcement learning (DRL). DRL learns, through trial and error, to take actions within a complex system that help the RL agent achieve its user-defined goals. If the AI controls multiple agents, you effectively have a way of creating emergent behavior that steers the complex system toward certain outputs.
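To make the trial-and-error loop concrete, here’s a toy sketch (my own illustration, not from any particular DRL library): tabular Q-learning on a made-up five-step “corridor” environment where the agent must learn to walk right to reach a goal. Real DRL replaces the Q-table with a neural network, but the learn-by-acting loop is the same.

```python
import random

N_STATES = 6           # states 0..5; reaching state 5 yields reward 1
ACTIONS = [1, -1]      # step right or step left
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1  # learning rate, discount, exploration rate

# Q[(state, action)] estimates the long-run value of taking that action there
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit the current estimates, sometimes explore
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        s2 = min(max(s + a, 0), N_STATES - 1)       # environment transition
        r = 1.0 if s2 == N_STATES - 1 else 0.0      # reward only at the goal
        # Q-learning update: nudge the estimate toward reward + discounted future value
        best_next = max(Q[(s2, x)] for x in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# the greedy policy the agent learned for each non-terminal state
policy = {s: max(ACTIONS, key=lambda x: Q[(s, x)]) for s in range(N_STATES - 1)}
print(policy)  # after training, the policy steps right (+1) from every state
```

Nothing in the loop was told “go right” — that behavior emerges purely from acting, observing rewards, and updating estimates, which is the property that scales up when the table becomes a deep network.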
One thing that’s interesting about this is that DRL can grasp complexities that are often overlooked by, and sometimes incomprehensible to, a single human mind. It is a tool that allows people to influence much larger environments, and in that sense it realizes John von Neumann’s old dream of using computers to control the weather.
We have barely scratched the surface of DRL’s potential. But given its ability to achieve goals within complex systems, some think it presents a path to superintelligence.
Dave Snowden talks quite a bit about complexity and its applications. He’s famous for the Cynefin framework for decision making. For complex systems, he recommends “probe, sense, respond” as the overall approach. My favorite video series from him is the strategic understanding lecture (Strategic Understanding with Prof Dave Snowden. Part 1 of 5 - YouTube). I’ll let you go down that rabbit hole yourself.
This post motivated me to buy the book
Btw, funny note, I originally read Complexity as a way of investigating What Bill Gurley Saw.
I never expected the book to change the way I see the world as much as it did.
As @mgoodrum mentions in the comments to that article, context matters a lot. I guess I had read Complexity at just the right time, when I was already grappling with the question of how to decide when you can’t rely on predictions.
Really liked this post and the idea of “action without prediction” but wanted to make sure I understood its implications correctly:
Complexity science appears to be about moving away from a world of simple physics (billiard balls bouncing off each other on a pool table can be perfectly predicted when you have all the relevant info) and toward something like meteorology.
General trends can be predicted (a huge storm front will likely lead to rain) and rules of thumb can be had, but no one can predict whether it will rain in NYC a year from now with any accuracy.
So crisp, specific predictions (the 8 ball will go into the top right pocket with a given shot) are not possible.
But probable outcomes (based on past empirical observations) can be derived and acted on, though they will still be sometimes wrong / need tweaking (prolonged dry weather will usually lead to big fires). Or is that still prediction in a way?
I guess without some sense of the future, how can one act?
Also I definitely see the Tao te ching alignment around acting with harmony with “nature” which in this case is the complex, hard to predict, interconnected system!
@jason Yeah there’s definitely a bit of a jump from this worldview to something that’s actually actionable.
I highly, highly recommend reading Complexity Investing — NZS Capital, LLC — which outlines one actionable instantiation of the CAS worldview, but applied to investing. You’ll notice that they say things like ‘high valuations force more narrow predictions’ — so it’s not as if they’re completely isolated from the need to predict.
(If you prefer to listen to a podcast instead, their recent podcast with the Acquired folk is pretty good, though not enough, I think, to internalise their style of thinking):
One of the more interesting facts in their paper is that ants keep half of their workforce idle in the nest. The implication is that if you’re in a CAS, ‘rare’ events that shock the entire system are less rare than you might think. Imagine a flood that wipes out the entire foraging workforce, or, conversely, a picnic near the nest (in which case, send out all the reserves to feast!). The point is that ants have evolved to survive and thrive in a CAS, because they’ve adapted to the fact that unpredictable events are a fact of life. The NZS folk then point out that good businesses are like this too.
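The reserve-workforce logic is easy to sketch as a toy simulation (my own made-up numbers, not from the NZS paper): a 100-worker colony occasionally suffers a shock that wipes out every deployed forager, and we compare deploying everyone against keeping half in reserve.

```python
import random

def simulate(reserve_fraction, seasons=100, seed=42):
    """Toy colony: shocks kill all deployed foragers; survivors breed back."""
    rng = random.Random(seed)
    workers = 100
    for _ in range(seasons):
        deployed = int(workers * (1 - reserve_fraction))
        if rng.random() < 0.1:     # roughly 1-in-10 seasons, a flood hits the foragers
            workers -= deployed    # everyone who was out foraging is lost
        if workers == 0:
            return 0               # colony wiped out
        workers = min(100, int(workers * 1.1) + 1)  # survivors slowly rebuild
    return workers

all_in = simulate(reserve_fraction=0.0)     # deploy the whole workforce
half_back = simulate(reserve_fraction=0.5)  # keep 50% in reserve
print(all_in, half_back)
```

With a reserve, a shock thins the colony but never kills it; deploying everyone means a single flood ends the run. Lower average output per season, but the colony is still there after the tail event, which is exactly the trade the ants (and NZS) are making.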
(And, interestingly to me, Koch Industries has always kept huge amounts of cash on its balance sheet and paid down its debt quickly, because they want to feast — acquire and consolidate competitors — during market downturns. Christopher Leonard writes in Kochland that this approach was mostly due to the fact that Charles Koch cut his teeth as a young businessperson through the oil shocks of the 70s and 80s. (Talk about unpredictable events, heh.) And in so doing, they grew their way to becoming the largest private (evil?) company in the US.)
One of the more interesting books I’ve read on the topic of acting without predicting is Simple Rules by Donald Sull and Kathleen Eisenhardt, which builds on the idea of heuristics: https://www.goodreads.com/book/show/22749823-simple-rules