How Social Networks Change Strategy, Prediction and Decision-Making

Unusual strategy & management guide shows how to use social data to solve complex problems

Book Review: Everything Is Obvious* Once You Know the Answer, by Duncan J. Watts

Everything Is Obvious* is an excellent how-to guide to understanding how social networks change strategy, prediction and decision-making. It offers practical techniques and profound insights for using social networks, big data and new ways of thinking to solve complex problems in business and government.

Intriguingly, the book also cites research that debunks several social media sacred cows.

Watts has an interesting point of view because he combines several disciplines: he began his career as a physicist before moving into sociology, so he strives to combine the quantitative, experimental methods of physics with maddeningly complex social problems. Moreover, he’s been running practical experiments at Yahoo! for several years, using search, Web and social data. Watts backs up his assertions with primary research that he has led or in which he has participated. He is also a very engaging writer.

I also highly recommend “Obvious” because it enables readers to question the commonsense thinking they (and their clients) rely on in situations where it serves them poorly. This affords a new level of flexibility and insight because common sense is so pervasive and leads us astray so regularly.

My analysis and conclusions follow the outline of each chapter.

Book Overview

One of the key constructs of the book is that there are two kinds of problems and systems that frame decision making: simple and complex. Simple problems are the most common, and common sense is invaluable for solving them. Complex problems and systems, however, are a different animal that we are only beginning to understand. They involve mind-boggling interdependencies, so we really cannot perceive them, much less respond to them effectively. Big data is beginning to lift the veil, but only just. It turns out that, even to develop a perception of the world’s complex systems, we have to reduce their complexity astronomically, which causes us to make faulty decisions (we don’t know how to reduce them, so we slash and burn).

We are better at simple systems. Common sense is very effective at helping us make discrete social decisions every day, where the context is very defined. This causes us to value it highly. Watts ends up proposing to develop another lens, which he calls uncommon sense. It relies on big data to guide our decision making.

Part I: Debunking Common Sense and Other Decision-making Fallacies

Part One is a necessary but enjoyable discovery process into “common sense,” which is so common that almost no one knows what it is! Seriously. This is intensely interesting and useful; you will undoubtedly learn very practical things that will help you in your work. It exposes many flaws in human decision making about social issues and problems. It debunks common sense, which strongly affects how people think about complex systems. People are not aware of common sense or how they rely on it, so they make unfortunate decisions.

One: The Myth of Common Sense

Common sense is very context-specific. Formal social rules exist, but how they are applied is subject to complex and nuanced social rules, many of which are unspoken. This chapter delves into its essence and defines it (which is surprisingly difficult to do).

  • Common sense is so ordinary that we’re often not aware of it (“we don’t have to know, it’s common sense”!); it helps us with social applications of complex social rules.
  • Common sense is overwhelmingly practical, not theoretical; it’s about how, not why; it’s focused on dealing with concrete situations—in their own terms.

“… [that] what is self-evident to one person can be seen as silly by another should give us pause about the reliability of common sense as a basis for understanding the world. How can we be confident that what we believe is right when someone else feels equally strongly that it’s wrong—especially when we can’t articulate why we think we’re right in the first place?”

  • Common sense generally works in everyday life, which is effectively broken up into small problems, grounded in very specific contexts that we can solve more or less independently of one another. However, when we use it to solve complex problems, it’s a massive failure.
  • Our mental model of collective (human) behavior is flawed because our common sense runs on context, which is missing in complex systems that require nuanced decision making; marketers, government officials and economists deal with large numbers of people; they “model” demographics, and use concepts like “representative individuals,” “personas,” “the market,” or “the electorate.” These are often based on common sense and are massively distorted.
  • We learn less from history than we think—because we only notice and codify what we think is interesting; we fail to account for most of the data, which we filter out to create “the story” part of history.

Two: Thinking About Thinking

How common sense warps our understanding of how individual people make decisions.

  • The belief that people are “rational” and make choices for reasons we think we understand, when in fact we often do not.
  • The need to understand people’s true, root incentives, which manifest in behavior; this is different from superimposing our own beliefs on others’ behavior.
  • Confirmation bias and motivated reasoning: people take in, give preference to, information that reinforces what they already believe; they subject dis-confirming information to higher scrutiny (think about the adoption of new technologies).
  • The frame problem, the circularity of relevance: “it is relevant when it’s relevant.” We learn in retrospect what is relevant.
  • Using common sense to predict others’ behavior is fraught with risk; once we see the behavior, we invent reasons for why it occurred, then we assume we have “learned” and can predict future behavior, but we fail to take into account all the data and interdependencies. This is a big problem when we try to create incentives to engineer social change.
  • Our ability to make sense of behavior we have observed does not imply an ability to predict behavior.

Three: The Wisdom (And Madness) of Crowds

How common sense leads us astray when we try to predict the decisions of large numbers of people. Watts doesn’t reference the power law by name, but it’s woven in here, too.

  • Instances of circular reasoning: what becomes famous, what gets pervasive adoption, is interpreted after the fact. Even “experts” see what’s famous and justify popularity based on the features of popular things.
    • The Mona Lisa, an ordinary painting that became the most famous in the world; the circular explanation is that it’s the most famous because it’s recognized as the best, and it’s recognized as the best because it’s the most famous.
    • Facebook’s popularity explained by showing that it had the features that people “wanted” in a social network, but no one knew these ahead of time. Explanations of Facebook’s success are less relevant than they seem.
  • The micro-macro problem: we analyze behavior of a small number of people—and extrapolate it onto large numbers to try to predict “emergence” of phenomena, what will become popular.
  • Part of the answer may lie in combining disciplines (i.e. biology and data analysis).
  • The “convenient fiction” of the “representative agent” (personas, demographics); it reduces complexity and increases risk.
  • The example of the tendency to riot: Mark Granovetter’s analysis of the threshold of social influence (the point at which influence leads to action). Granovetter’s threshold models show the risks of reducing away complexity:
    • In Town 1, a fictitious crowd of 100, each member has a different riot threshold: person 1 has a threshold of zero and starts rioting with no social influence; person 2 requires one person already rioting, so he joins, and the domino effect trashes the town
    • Town 2 has a 100-person crowd that’s identical except that the person with a threshold of three is missing and there are two people with a threshold of four, which breaks the chain, so Town 2 remains intact; however, community leaders will find “social” reasons for the difference without perceiving the root cause
    • Illustrates the dangers of using a small sample and extrapolating (see the toy simulation below)
  • Cumulative advantage: people tend to like something other people like. Watts’ Music Lab experiment ran parallel online music markets that were identical except that some showed participants what other people liked, allowing the team to measure social influence.

Social influence increases inequality and unpredictability; it makes the product’s “inherent qualities” less important.
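To make the threshold mechanism concrete, here is a minimal Python sketch of Granovetter’s two-town example (my own illustration of the model as summarized above, not code from the book): a person riots once the number of people already rioting meets or exceeds their personal threshold, and a single changed threshold flips the macro outcome.

```python
# Toy simulation of Granovetter's riot-threshold example described above.
# Illustrative sketch only; the thresholds follow the two-town summary.

def riot_size(thresholds):
    """A person riots once the number already rioting meets or exceeds
    their personal threshold; iterate until no one else joins."""
    rioting = 0
    while True:
        joined = sum(1 for t in thresholds if t <= rioting)
        if joined == rioting:      # no new rioters: the cascade has stopped
            return rioting
        rioting = joined

# Town 1: thresholds 0, 1, 2, ..., 99 -- every domino falls.
town1 = list(range(100))

# Town 2: identical except the threshold-3 person is replaced by a second
# threshold-4 person, which breaks the chain after three rioters.
town2 = [0, 1, 2, 4] + list(range(4, 100))

print(riot_size(town1))  # 100: the whole crowd riots
print(riot_size(town2))  # 3: the cascade stalls
```

No demographic summary of the two crowds would reveal the difference, which is the chapter’s point: the outcome lives in the interdependencies, not in the “average” individual.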

Four: Special People

This by itself is worth many times the price of the book. Profound: human bias and common sense hold that “influencers,” “stars” or “hubs” in social networks have the ability to “influence” large numbers of people. Based on Watts’ experiments and analyses, influencers have, at best, small marginal abilities to influence. Far more potent are the real-time dynamics of the network, which are complex and difficult to discern. However, people don’t like to believe this, so they hold onto the fallacy of “influencers.”

  • Debunking a key construct from Stanley Milgram’s “six degrees of separation” experiment, which speculated that hyperconnected “hubs” or “stars” were key to understanding how “six degrees” worked (they supposedly made the chains shorter). Wrong, Watts learned when he reran the experiment using far larger numbers of people, analyzing email chains and applying modern statistical methods. Where Milgram had used physical packets and 300 people, Watts et al. used 60,000 people in 166 countries. No hubs.
  • Social networks are connected in far more complex ways than we can imagine (but we assume that we can understand them, which prevents us from understanding them). The concept of “hubs” is a common sense fail.
  • Gladwell falls for it in The Tipping Point, but it’s more perception than reality.

Contagion: “the law of the few” is far less potent than people want to believe; contagion depends far more on the overall structure of the network than on the properties of the individuals who happen to trigger it.

  • More important than individual actors is the existence of a critical mass of easily influenced people [who happen to be in the configuration at the time.. or not], which appears to be random [because we don’t understand it; this is emotionally unsatisfying to common sense, so people ignore it].
  • Learning from a Yahoo! study of viral tweets and videos: don’t only study successes, which invokes the circularity problem; study failures too. The team simulated a marketing process in which they analyzed data on past “successes” and selected “influencers” for sponsored tweets. Individual-level predictions were very noisy; they found no reliable characteristics of “successful influencers.”
  • “Special people” function primarily to help us reduce the complexity of reality; but the concept is far less useful than we want to believe.

Five: History, The Fickle Teacher

This is a mind-bender with a powerful insight: our understanding of the past is grounded in narrative, which necessarily reduces complexity and implies causality. Watts implies that we cannot understand the past honestly because of our need to reduce complexity, which forces us to omit from the story most of what was happening. This lays the groundwork for a more important problem: predicting the future.

  • History is only run once; but we expect to “learn” from it, so we persuade ourselves. It is another example of circular reasoning, which describes but doesn’t explain.
  • The difficulty in experimenting with humans; social context may be missed, which unknowingly bypasses crucial causes/effects.
  • Sampling bias; we notice exceptions but ignore the normal; exceptions are more “interesting”; we leave out most of the data; especially problematic when the subject is events that happen rarely, like wars or plane crashes.
  • The Post-hoc Fallacy; imagined causes; Malcolm Gladwell’s “law of the few” is a poster child; Paul Revere vs. William Dawes; we don’t have the data to really conclude why Revere “succeeded” and Dawes did not [but that doesn’t stop us!].
  • Debunking the “superspreaders” theory of the SARS epidemic of 2003; the real cause was the misdiagnosis and treatment as pneumonia, not the patient. The separate apartment-building outbreak was caused by leaky plumbing, not a superspreader.
  • History can’t be told while it’s happening; it’s hard to differentiate the “why” from the “what”; we select the facts to tell the story; there’s too much data, so we simplify, but it’s likely that we omit key data; to say what is “happening” implies a narrative. Examples:
    • Butch Cassidy and the Sundance Kid: if you ended the story a month before they were surrounded and gunned down, their decision to go to Bolivia would have been declared a success; by the end of the film, though, they were dead, so we conclude it wasn’t a good decision. In real life we don’t know when “the ending” is, so how can we draw conclusions?
    • Cisco’s vacillation between darling and failure on Wall Street through the 1990s and 2000s, with the same CEO throughout.
  • Whoever tells the story best wins. History is story.
  • Psychology experiments: simple stories consistently judged more credible—because they are simpler.
  • In (natural) science, we can test the stories in experiments.
  • Confusion between stories and theories is at the root of the problem with using common sense to understand the world.

Six: The Dream of Prediction

Chapter six starts getting to the practical point of the book: how do we predict the behavior of large numbers of people? [which is another way of saying “predict the future”] It turns out that “natural science” is a false friend of social science. Social “scientists” have had “physics envy” for years; they want to believe that, as Newton and Kepler could predict orbits, we should be able to predict behavior. Watts finds this to be false because natural systems are relatively simple; human networks are complex and subject to far more variables.

  • Newton’s laws and Laplace’s Demon; humans’ desire to apply “scientific” prediction to “history”; it’s a debacle.
  • Simple vs. complex systems. Orbits are the former, human networks and societies the latter. Most scientific models describe simple systems. Complexity involves “many interdependent components interacting in nonlinear ways.” Small changes in the system can get amplified to produce large effects [or maybe not].
  • The future is not like the past; the past is our narrative, which blots out most of the data; probability tries to account for all the data it can manage to predict things that may be likely to occur in the future.
  • By “prediction,” we really mean that we want to know what will actually happen, not the probability that various outcomes might happen. In other words, we want to force the future into our small “narrative experience.” [You can imagine how well that works.]
  • The problem of predicting what to predict: what’s “relevant” can’t be known until later.
  • Black swans are only recognized in retrospect; the example of the storming of the Bastille in 1789, how it acquired significance later. How black swans differ from plane crashes, which are difficult, but at least we know what we’re trying to predict; black swans require predicting even further into the future to understand what is relevant—before we can even think about predicting the black swan.

In fact, black swans are not events, they are really shorthand for large chunks of interconnected events.

  • When we look into the past, we see [our dumbed down perception of] what actually happened; we ignore far more numerous outcomes that could have happened but didn’t.
  • “History” conveniently “forgets” inconvenient facts and enables us to tell a good story.
  • The problems of government policy, corporate strategy or marketing campaigns affect large numbers of people, so they play out in complex systems; hence the need for “uncommon sense.”

Part II

Part II is more prescriptive; Watts explores how to take our imperfect abilities and make the best of them to improve sociology, economics and decision-making.

Seven: The Best-Laid Plans

Uncommon sense for strategy. Even though we can’t predict the future, or the outcomes of interactions within complex systems, we can still improve our decision-making by working with the probabilities of outcomes. This forces us, though, to change the way we plan and create strategy.

  • Two kinds of events occur in complex systems: those that conform to a historical pattern and those that don’t; we can only work with the former kind (which still helps us).
  • Big data is helping: forecasting required amounts of flu vaccine and credit card default rates; we can’t predict for an individual, but we can predict percentages across large groups.
  • But probability doesn’t help us to predict something individual such as, “Will this book be a bestseller?”
  • Review of prediction markets, statistics and crowdsourcing: the results are more complicated than they appear.
    • Yahoo comparative prediction study of sporting events: “predicted” NFL games using two polls, two prediction markets and two simple statistical models. All performed about the same, with a 3% spread between the best (one of the prediction markets) and the worst (a statistical model).
    • Repeated with baseball; best and worst “predictions” indistinguishable.
    • Lesson: the basic information (like home team advantage) helps a lot; more specific information incremental at best.
    • Increments can be relevant in stock trading or other scenarios when large numbers of transactions are happening; but in business that’s rarely the case.
  • Do not rely on a single person’s opinion, especially your own: humans are decent at perceiving which factors are potentially relevant to a problem, but we are bad at estimating how important one factor is relative to another (in other words, which variables matter). Experts are no better, and their results are worsened because they are usually consulted one at a time. You would get better results by polling many people and averaging their answers (the Wisdom of Crowds approach; a toy illustration follows this list).
  • The further in advance you want to predict, the greater the error. Results from experiments predicting popular movies, books and election results were the same.
  • Most predictions are forgotten [thankfully, our track record is so bad]; few people revisit them, which would potentially be the most useful exercise of all.
  • But be aware that the biggest opportunities are the black swans, things that aren’t predictable. The financial crisis of 2007-2009, the overthrow of dictators. Historical data is of no help with them.
  • Strategy and planning, as practiced, exhibit the same problem: they lay out a “future narrative” and make decisions based on it. Examples: Sony Betamax, the Apple iPod; Michael Raynor’s The Strategy Paradox.
  • Recommendation: change the model; work with most probable outcomes and design business process for flexibility. Scenario envisioning. Reaction, not prediction.
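As a toy illustration of the polling-and-averaging point above (my own sketch with invented numbers, not Watts’ data): if individual errors are independent and not systematically biased, the average of many noisy guesses lands far closer to the truth than a typical single guess, including an expert’s.

```python
# Wisdom-of-crowds toy: averaging many independent, noisy guesses
# beats a typical single guess. Invented numbers, illustration only.
import random

random.seed(42)
TRUE_VALUE = 250.0   # e.g. the number of jellybeans in a jar

def one_guess():
    # each individual guesser is quite noisy (standard deviation of 80)
    return random.gauss(TRUE_VALUE, 80.0)

single_errors = [abs(one_guess() - TRUE_VALUE) for _ in range(1000)]
crowd_errors = [abs(sum(one_guess() for _ in range(100)) / 100 - TRUE_VALUE)
                for _ in range(1000)]

print("typical single-person error:     ", sum(single_errors) / len(single_errors))
print("typical 100-person-average error:", sum(crowd_errors) / len(crowd_errors))
# The averaged estimate's error is roughly a tenth of an individual's; it
# shrinks with the square root of the number of people polled, as long as
# their errors are independent and unbiased.
```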

Eight: The Measure of All Things

Uncommon sense for business. Concludes that “predicting the future” is off the table for us; but we can manage more effectively by changing the model—away from prediction toward measure/respond (Raynor’s thesis).

  • Examples of measure/respond: Zara; Yahoo blind-testing home page designs prior to release (bucket testing).
  • The mullet strategy for websites and blogs; put the comments on pages that are deeply buried and, based on the crowd’s reaction, selectively promote the pages.
  • Journalism and crowdsourcing: the Huffington Post has thousands of unpaid bloggers. TV’s version of measure and respond: Bravo.
  • Mechanical Turk; turkers far more diverse and representative than researchers believed at first.
  • “Predicting the present” by using Web searches; predicting flu epidemics with Yahoo and Google searches. Facebook’s “gross national happiness” index based on status updates. Twitter and Foursquare.
  • Using large numbers of search queries to predict film success or “hot 100” rankings; it does offer small but reliable advantage over other public data.
  • Advertising: we are starting to crack the code. Advertisers traditionally claim to offer causation, but they really measure correlation; advertising can’t prove that it causes an outcome, only that increased advertising correlates with certain outcomes. Not the same thing. Hence the need for experiments and controls (a minimal sketch follows this list). Online, what the user sees is subject to numerous unrelated technologies, which clouds the results of simple experiments. Targeted ads.
  • The importance of the “marginal customer,” who is influenced by the ad to buy when s/he would not have bought otherwise.
  • Discussion of Yahoo experiments; they are getting there in measuring advertising’s results. The hope of big data.
  • The importance of local knowledge in creating plans; take data from the environment in which the plan will be deployed [and the more data the better]. Some examples are cap and trade, market-based mechanisms; prize competitions, Innocentive.
  • Other ways to use local knowledge: “bright spots” and bootstrapping rather than “solving.” “Solving” refers to top-down aid programs; the bright-spots approach studies instances of innovation within the application space and promotes them (Charles Sabel). Ask what’s working—and what could work if obstacles were removed.
  • Planners think they know the answer; searchers admit they don’t know, but they observe and experiment within the application space (William Easterly).
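Watts’ correlation-versus-causation point about advertising suggests the standard remedy: a randomized holdout. The sketch below uses invented numbers and is not Yahoo’s actual experiment; it withholds the ad from a randomly chosen control group and compares conversion rates, so the difference estimates the lift the ad actually caused, i.e. the “marginal customers.”

```python
# Hypothetical randomized ad experiment illustrating "experiments and
# controls." All rates are invented for the sketch.
import random

random.seed(7)

BASE_RATE = 0.020   # assumed share of users who would buy anyway
AD_LIFT = 0.005     # assumed extra share converted by the ad (marginal customers)

def buys(sees_ad):
    """Return True if a simulated user buys."""
    rate = BASE_RATE + (AD_LIFT if sees_ad else 0.0)
    return random.random() < rate

def conversion_rate(sees_ad, n):
    return sum(buys(sees_ad) for _ in range(n)) / n

n = 200_000                          # users randomly assigned to each group
exposed = conversion_rate(True, n)
control = conversion_rate(False, n)

print(f"exposed group:  {exposed:.4f}")
print(f"control group:  {control:.4f}")
print(f"estimated lift: {exposed - control:.4f}")
# Without the control group we would only see the exposed group's roughly
# 2.5% conversion rate and could not say how much of it the ad caused.
```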

Don’t neglect the profound human tendency and desire to predict; it crops up constantly, leading people to mislead themselves.

Nine: Fairness and Justice

Uncommon sense for justice and “social fairness,” which is relevant to business and government. Introduces new ways to think about problems by examining them from a larger scale perspective. In chapter nine, Watts debunks common sense in this context while he suggests uncommon sense approaches.

  • How common sense introduces unfairness in society. Cites the case of Joseph Gray, an off-duty cop driving home after too many drinks who kills the wife and kids of another man. He gets fifteen years, but many other drunk drivers go free. Common sense leads us to punish the outcome, not the behavior; maybe there’s no way around this. Revisits formal rules vs. how they are applied. Social value of making examples. But it isn’t “fair.”
  • The Halo Effect; common sense correlates independent variables; for example, physical looks with intelligence. Firms and CEOs; remember Cisco, which was alternately glorified and vilified, even with the same CEO.
  • Skill vs. luck: the problems of measuring individuals’ performance; fund managers, bankers, employees and pay for performance. The example of baseball batting averages; but employees deal in complex systems more often (baseball is relatively simple). Asserts that individuals, regardless of “talent,” have far less impact than the environment in which they operate (“the network”).
  • Robert Merton and The Matthew Effect: successful people are more likely to be successful. Talent is rarely evaluated on its own terms. All these are distortions.
  • The myth of the corporate (CEO) savior. Rakesh Khurana: “the CEO often less important than outside factors.” CEO selection is flawed because the candidate pool is small, which leads to huge compensation.
  • What is a “just” society? Robert Nozick and John Rawls.
  • Concludes that arguments about individual compensation should be conducted at the industry, not the individual, level (in other words, how much does an industry contribute to society, i.e. banking). Make banking less lucrative by limiting the amount of leverage banks may use.
  • The distribution of wealth reflects a series of choices a society has made, but in complex ways of which people are unaware (i.e. tax policy, complex legislation).
  • In 2007-2010, banks succeeded in having it both ways: they managed to privatize profits and socialize losses; this is inconsistent.
  • Social networks have a counterintuitive insight for “individual freedom”: we can never be “free” because we are so interconnected, and results we achieve are caused by network dynamics, probably much more than our individual decisions. This casts doubt on one of our most cherished beliefs, “individual performance” based on talent.
  • Michael Sandel’s Justice; argues that fairness has to take into account the network effect; evaluate competing claims to give perspective to isolated claims.

Ten: The Proper Study of Mankind

Chapter ten is a summary of sorts. It addresses how to use data and awareness of networks to make sociology more relevant. Useful debunk of warped commonsense thinking about “the Internet” and society, based on research Watts and colleagues have conducted at Yahoo.

  • Robert Merton’s “middle range” theories; don’t aim to solve complex problems with physics (“high range”); aim for incremental improvements using network data.
  • Social theorists are people too, and they fall for common sense all the time.
  • Big data is going to change the game.

The homophily (“birds of a feather”) principle; it turns out that individual choice is far less relevant than how the environment limits the alternative choices people have.

  • Debunking the commonsense view that “the Internet polarizes people.”
  • Yahoo’s Facebook experiments with users predicting their friends’ stances on political and economic issues.
    • Friends more similar than strangers, and close friends more similar than friends.
    • But all friends believe themselves to be more similar to each other than they actually are.
    • People are bad at guessing their friends’ positions. Researchers found that people don’t know what their friends think; they use simple stereotypes to guess instead.
  • Simple correlations are often obvious, but how “obvious” things fit together is very difficult.

Analysis and Conclusions

Social Networks

  • One of the biggest takeaways is the book’s powerful description of what it means to live in the networked world. Watts is no SNA quant, but he vividly describes the reality in which we’ve always lived but have never perceived.
  • We are surrounded by networks, but it’s never been so obvious ;^). The universe is a cloud of interdependent forces. Nature runs on networks; the systems within our bodies and brains are networks. Digital social networks serve as a model that affords an inkling of what networks are and how they act. Think about the points in Chapter Nine, for example. Watts shows (and provides some evidence for) that the state of the network at a certain moment has a greater bearing on the effect “any individual” will create than the individual. I find this very unsatisfactory, emotionally, because I want to believe that my will and skill can make the difference. If Watts is right, it is deeply humbling.
  • However, Watts does not address the impact of network construction, so I’ll offer this: I’ll give him that the network is vital and perhaps more important in determining outcomes than any individual; however, this does not negate the power of building a network of committed people or businesses around you or your brand, which will increase the chance that your actions will have a larger impact, not because they are “special people,” but because they will tend to advocate for you. Reciprocal altruism will provide an advantage.
  • As to how important the network is versus the individual, I think much more research will have to be conducted to determine those dynamics. For the time being, Watts gives the reader a real sense for the network and how we sabotage ourselves by being careless with the data.

Watts’ “influencer debunk” is worth thousands of dollars of wasted “sponsored tweets” ;^)

  • Citing his research at Yahoo, Watts makes a convincing argument that social information increases the volatility of markets and usurps influence from the “product features” because people decide to “like” it or not based on others’ opinions. This is something I’ve seen constantly but couldn’t verify, so his team’s research is an excellent reference.

Common Sense and How We Think

  • “Obvious” gives numerous examples of how people filter out data they don’t understand or want to deal with. If you think about it, this is adaptive. Human beings, from an evolutionary perspective, weren’t “designed” to deal with large amounts of data or complexity. We were “designed” to manage our lives within the tribe, a task for which common sense is well suited.
  • However, during the past 200 years, the scale of human systems and interconnectedness has grown beyond our abilities to manage, so our blunders are growing in size. Watts makes this clear, and he’s honest enough to state our limitations without offering a pat solution to the problem; the way he presents “uncommon sense,” it is very much a work in progress. He makes a major contribution by drawing our attention to our profound limitations.
  • Try thinking about the large-scale, intractable problems that you read and hear about constantly. My money is on Watts: if we eradicate common sense from the decision-making process, we will have a better chance at successful resolution.
  • Watts exposes many of the situations in which our decision making is flawed. Our tendency is to scale things down to a level where we can deal with them. The micro-macro problem is a useful example. It is not “elegant” to believe the truth: dynamics of the network trump the “influence” of any member any time. We believe in “influencers” because it’s convenient, we like the thought. But it’s largely false.
  • Being aware of common sense can enable you to use it more purposefully as a tool. For people (like yours truly) who are often dealing with complex systems, it is exceedingly useful to be aware of simple vs. complex systems. That said, as Watts warns, it would be a mistake to get overconfident; common sense is second nature to us, so it is very difficult to be aware of when we are using it. Keep in mind that everyone uses it, and few people are aware of it.

On an emotional level, I think common sense is comfortable; it implies intimacy and safety; Watts doesn’t address this, but I’ll hazard that people use it more often and more rigidly when they are under stress. As cited research indicates, people prefer simpler stories.

  • Along with this, think about how common sense evolved: in the tribe, where social context determined relevance. It doesn’t have to handle abstract thought because tribal life is composed of hyperlocal concerns. Life is far more complex for many people now, so common sense is maladaptive for an increasing range of situations.
  • Another key takeaway: understanding the difference between describing (as history) and explaining, which implies the abstract patterns necessary to try to predict something.
  • Chapter Ten’s discussion and debunk of the conventional wisdom that “the Internet is polarizing people” is very useful, especially since he backs his assertion with his team’s research.

Complex Systems, Predictions and Strategy

  • Although Watts doesn’t focus on it (that would be a book in itself), he states several times that complex systems are complicated due to massive interdependencies among the nodes. Regarding human systems, since we have the ability to think, and therefore free will, that dramatically increases the range of possible responses to stimuli among people. Natural systems, like stars, plants and simpler animals, lack the abstraction, so their responses to stimuli are more predictable and their systems have fewer possible outcomes.
  • Chapter Three’s discussion of circularity makes the point: popular things are popular. Chapter Nine’s discussion of The Matthew Effect is another instance. And it’s the network that determines the popularity, it’s not the person or object, nor is it an “influencer.”
  • I agree with Watts’ thesis, that complex systems are normally beyond us [because we have not had to deal with them before]. We can mitigate that risk by recognizing our fallibility and by using our increasing capabilities with data to stack the deck in our favor.
  • As a management consultant and executive, I have practiced “sense and respond” (I call it “agile”) for years and find it highly effective. CSRA usually practices strategy by creating a decision framework based on robust due diligence into market mechanisms and their implications for the business.
  • “Obvious” offers the possibility of changing one’s attitude and approach toward uncertainty. Watts implies that we reduce the world’s complexity down to our scale in our desire to survive and feel comfortable (one feels more comfortable when one perceives that one knows what is happening). This is okay when we deal with simple systems. In the realm of the complex, it seems more adaptive to recognize that we don’t know much of what is occurring, and to respect the unknown. Recognize the most reliable known variables in the situation and try to identify and correct for as many unknowns as we can address.
  • Chapter Eight contrasts top-down aid programs with bottom-up “local knowledge.” In the former, planners “think they know” what the issues are (and often don’t), so their programs are often wasteful and marginally effective. Conversely, supporting and empowering local knowledge can have a breakthrough effect because it incorporates implicit knowledge. CSRA has used this realization on client work for years: management consultants too often serve the “planner” role and do clients and themselves no good. By consciously involving clients on engagements, we tap local knowledge.

I found Watts’ discussion of black swans and the “relevance problem” very useful. It is very practical. Instead of focusing on outcomes, identify mechanisms and inter-dependencies with whose workings you are familiar. Networks are comprised of pieces that interconnect, so being aware of how some of their components work can be a useful guide.

  • This is not an overtly philosophical book, but it is replete with delightful implications. Chapter Five implies that we create “history” somewhat arbitrarily because it’s the only way we can think and tell a story. So it is really our poor attempt to describe our perceptions. Watts doesn’t say this, but it’s implied that there is little difference between past and future; there’s much more data than we can deal with; it’s just that the future doesn’t have a “story” yet. We recognize that we don’t know what “will happen.” We think we know “what happened.” But we don’t!
