You know it’s true what they say — the second baby is much more of a blur than the first. My good friend Carol likes to tell the story about giving birth the second time with her cardigan still buttoned up. There she was, in the delivery room, bundled from the waist up. When the baby comes, it comes.
And by baby, I mean book. Yes, my second book took me a little by surprise too. Sure, I had enough time to make a cup of tea and change into slippers (sorry, Carol) but I didn’t have the time for a publisher.
Way back in 2012, I began writing my first book. I had secured what was known as a “book deal” with a small social science publisher that had a good reputation among people I knew professionally. I knew nothing about publishing, of course, except that someone would put my ink on some sort of pages and paste them all together. Or would they sew them together? That’s a thing, isn’t it? Given my total ignorance, I thought it was acceptable to earn only 8% of every book sold. That’s right: 8%.
I fought harder for the 15% I now earn on all Kindle copies of Practical Ethnography, but I thought it was worth it to learn how to publish a book. In retrospect, I don’t know if it was. The publisher hired a copyeditor for me, but I found we mostly argued about em dashes and double quotes instead of making my actual copy better. The design of the book was…well, it was okay. The marketing was abysmal. I had the good sense to buy the domain name myself, and several years after publication, when my little West Coast publisher got purchased by British Behemoth, I told myself I never wanted all that hassle again.
So when my second “baby” was ready to arrive, of course I turned to my friend Carol for help! Luckily, Carol is also a professional journalist and has a cadre of colleagues who are fantastic freelance copyeditors. Her old buddy Rebecca came on board and managed to midwife this book so quickly that my tea never even got cold. But then what? Do I sew together the pages like before? The prospect of a print book almost sent me to another publisher. Almost.
I bit the bullet and learned how to make a print book. I got on Reedsy.com. I hired a designer. I hired a web designer. I took the cover photo myself. I released it on Kindle to finance the whole shebang. And now, here we are. Book Number 2 is now available in paperback and Kindle, and it’s a 100% Sam Ladner Production. Special thanks to Carol, Rebecca, book designer Sarah Beaudin and web designer Derek Moore. It really does take a village to raise a child.
My latest book is now available on Kindle! The book is a short guide to doing applied mixed-methods research. It’s ideal for people who work in applied research roles in industry, but may not have broad methodological training in both qual and quant research methods.
Microsoft is betting that Cortana will bring AI to the workplace. Here’s why that won’t happen.
Cortana is an intelligent agent that is supposed to act as a personal assistant. You can interact with her (notice I said “her”? More on that in a minute) via voice or text, on mobile devices or on desktop computers. Given that Microsoft’s mobile market share has fallen below 1%, it’s pretty much a certainty that most people would interact with Cortana in their offices.
Let’s walk through what interacting with Cortana as a member of a workplace would actually look like.
Microsoft encourages you to command Cortana by saying, “Hey Cortana…” and then giving her a command. A typical office scenario might be, “I wonder if I should book a vacation for the first week of August. Hmm. I’ll ask Cortana.”
This is how Cortana is supposed to work:
User: Hey Cortana, should I book a vacation for the first week of August?
Cortana: Let me check your calendar. Looks like you have a meeting on Monday, August 1st. Should I move it for you?
User: Yes, that’d be great.
Cortana: Okay, I’ve moved that meeting to Monday August 8th. Would you like to see some vacation suggestions?
User: Yes, please!
This is exactly how it plays out in a demo video on Microsoft’s site.
But let’s face it: there are a lot of contextually dependent reasons why this is completely unrealistic. Leaving aside Cortana’s technical limitations for the moment (and there are many), let’s take a look at what a real office and real user might look like.
Most offices are open concept, without even the suggestion of walls. As many as 70% of us work in open concept offices. As anyone who’s worked in such an office can tell you, hearing a neighbor on the phone can be excruciatingly annoying or excruciatingly awkward, depending on your neighbor’s TMI quotient.
So there’s a good chance that everyone in the user’s office will hear this idealized scenario. There are two clear disincentives against this happening. First, Cortana will make more “boundary work” for office workers. The mere act of trying to keep your private life private at work turns out to be, well, work. Recent research has found that keeping work and private life separate actually causes cognitive overload. If people use Cortana as intended, she is poised to make that much worse.
Second, Cortana demands that office workers treat their workplaces as if they were kings and queens, instead of pawns and rooks. Voice interactions require workers to own their workspace, something that we know they do not do. Typical workers share their workspaces with others, and because we are apt social animals, we tend to comply with unwritten rules of workplace etiquette. Bosses’ calendars take precedence over workers’ calendars. Bosses talk more than workers. Men talk more than women. In other words, people with power talk out loud more than people with less power.
Which brings me to the fact that Cortana is a woman. Is it any coincidence that most intelligent agents today are anthropomorphized as women? One of the most striking changes in the twentieth-century workplace was the almost total elimination of support staff, who were typically women. Only the most senior executives have assistants nowadays, and other mid-level white collar workers are on their own for scheduling and administrative work.
Let’s not forget that Cortana is actually based on a supportive AI character in a video game. Cortana provides these workers with a sense that they can indeed recapture the era of Mad Men and have a compliant, supportive, and self-abnegating assistant who has no needs of her own. Practically, this promises white-collar workers a huge productivity boost, but the symbolic nature of this is even more interesting. When white-collar workers have a virtual assistant, they have reclaimed a sense of hierarchy, of control, and of power (even if it is completely imaginary).
And this is why Cortana will not work in the workplace. Today’s typical office worker does not have enough power to command the space around her and bark orders out loud, even if just to an intelligent agent. This office worker has been stripped of her ability to occupy a rung on the ladder higher than admin or support staff, because there is no admin or support staff. This typical office worker is embedded in a physical space that reflects this lack of hierarchical position — she has no command over it.
Scholars of gender and technology have described some ill-advised approaches to gender equality as “add women and stir.” The same applies to Cortana and other intelligent agents. You cannot “add Cortana and stir” and expect to see productivity improvements that somehow negate the existing organizational and physical structures of contemporary workplaces.
Qualitative research is not generally considered “real” research, and this has terrible implications for innovation. Companies’ thirst for operational effectiveness begs for quantitative data. But quantitative data does not and cannot form strategy. Qual data are a key ingredient of strategy, or the development of new and differentiated products.
Many people are familiar with Michael Porter’s famous paper, “What is strategy?” Porter famously argued that many companies mistake operational effectiveness for corporate strategy. Operational effectiveness, according to Porter, is about quality, productivity, and speed. It is about doing the same thing as others, but doing it better.
Strategy, by contrast, is about being different. It is about doing entirely different activities to deliver value to customers. Other authors have called this the “blue ocean” or finding a place in the market that is calm, unoccupied, and yours for the taking. A “red ocean” is full of competitors, doing exactly the same thing as you are, and demanding ever higher performance.
Framed this way, it is clear that operational effectiveness relies heavily on quantitative data. How efficient are we? How do we stack up against the competition? How good are our products? How fast do we make them?
Operational effectiveness simply begs for quantitative data, and now that we have access to petabytes of passively collected data relating to productivity, quality, and speed, it is easier than ever to be operationally effective.
Or it should be. We know that quantitative data requires a great deal of cleaning, massaging, and managing, not to mention analysis, to make it useful for operational effectiveness.
The shift to data-driven operations has demanded a great deal of companies’ attention, mostly because data collection and analysis is not as easy as most think it to be.
But let us not mistake this for strategy.
There is nothing inherent to benchmarking performance that lends itself to strategic advantage. Quantitative data does not reveal how or in what ways customers are making their own workarounds. Quantitative data shows us how many products meet a particular standard, how many products are produced or sold, or how fast a company makes them. It can tell you the average satisfaction a customer may have, but it cannot reveal any of the detail behind that satisfaction.
In their insightful Harvard Business Review article, “An Anthropologist Walks Into A Bar,” Christian Madsbjerg and Mikkel Rasmussen argue that qualitative research gives companies the ability to bridge the “complexity gap,” which a study of 1,500 CEOs revealed to be their main challenge. Why do customers do what they do? You must do qualitative research to find out. And, by extension, you must do qualitative research to innovate.
Qualitative research is explicitly about revealing detail. Qualitative research shows how people are using products, or how these products sit and gather dust in the corner of the kitchen. Qualitative research, particularly field-based research like ethnography, offers that path to delivering truly different products.
Companies that do ethnography regularly uncover entirely new or different ways to deliver value to customers. Oftentimes, this is done unsystematically. Skillful product and brand managers know that observing everyday life can reveal the how and the why of product failure.
Nike is a great example of a company that is in touch with culture. Its marketers are widely acknowledged to be among the best in the world. Its product innovation is continual, and its brand equity is unparalleled. Even its lab-based researchers, like the fabulously named Gordon Valiant, are active members of the running community. His lab-based practice is complemented by regular participation in running events, where he comes into contact with other runners.
Valiant conducts in-lab studies systematically, but observes human behavior in an ad hoc way. Imagine the advantage to companies that do this research systematically. Imagine having a steady stream of insight into real people and why they do what they do. Imagine having thick description of painful workaround work, and regular replenishment of unmet customer needs.
That can only come from systematic, regular, and rigorous qualitative research.
Compare that to a company that tirelessly benchmarks its quality, productivity, and speed. The company with qualitative insight into human behavior will have almost limitless potential to do things differently, to deliver new products or services, to find entirely unexamined oceans of product innovations.
Yet we spend almost nothing on qualitative research. Esomar, the international market research association, estimates that in 2013, corporations spent $6.6B USD on qualitative research, worldwide. The vast majority of money spent on qualitative research is on focus groups, but Esomar estimates almost $1.6B is spent on the more interpretive methods of in-depth interviewing and ethnography. This amount is dwarfed by the $32.4B USD spent on quantitative market research.
It could be argued that many companies need to start with operational effectiveness. Fair enough. But no company can survive on quality, productivity, and speed alone. It is too competitive a marketplace. Qualitative research, therefore, is a cheap way to guide the company toward different product offerings, and ultimately, toward innovation.
Research into brain science offers some surprising insights for guiding research practice. These findings suggest that the scientific method constrains our natural creativity.
Too often, researchers take their cue from the scientific method. While this method undoubtedly changed the world and our knowledge of it, it is antithetical to the creative needs of a well-rounded researcher. It is especially problematic for design research, which requires creative solutions to existing problems.
Design researchers should embrace less structure and more openness at the early stages of product design, and rigor and structure in the mature stages of product sales. As sales drop off and the product loses its natural match to the culture, design researchers should once again embrace openness in their research approaches.
Generally, we think of research as the focused, systematic collection of data, over time, in keeping with a given framework or theory. In this view, research is intended to confirm or deny given hypotheses, and incrementally improve our knowledge about a given topic.
We know from the book Thinking Fast and Slow, however, that this research approach only serves one type of thinking. Thinking Fast and Slow author Daniel Kahneman tells us that “Type 2” or “slow thinking” is a disciplined, focused kind of thought that roughly matches the deductive reasoning of the scientific method and other traditional forms of research. It is structured and deliberate, requiring the cerebral cortex.
But Type 1 or “fast thinking” is less structured, more instinctual, and involves the more reptilian parts of the brain. At first glance, fast thinking appears to be undisciplined or even lazy – the antithesis of the scientific method. But fast thinking produces creative and intuitive leaps that are impossible with the iterative, deductive, and controlled manner of slow thinking.
Design research requires both thinking fast and thinking slow. Thinking fast entails creating novel combinations, unusual interpretations, or unique syntheses. Thinking slow entails systematic evaluation and the structured contribution to a body of knowledge.
Gifted researchers engage in both thinking fast, and thinking slow. As sociologist C. Wright Mills describes, a researcher must have her “files,” a set of notes that is unstructured, messy, and without order:
“…You will notice that no one project ever dominates [the files], or sets the master categories in which it is arranged. In fact, the use of the file encourages expansion of the categories which you use in your thinking. And the way in which these categories change, some being dropped and others being added – is an index of your intellectual progress and breadth. Eventually, the files will come to be arranged according to several large projects, having many sub-projects that change from year to year. [1, p. 3]”
Anthropologist Bronislaw Malinowski echoes this messy disorder when he describes what will eventually become his masterwork Argonauts of the Western Pacific:
“I estimate that my future publication will be voluminous, roughly three volumes of 500 pages each at 500 words per page. It will take me about two years to get the [manuscript] ready and see it through the press. My material is now a chaotic mass of notes. To work it out and put it into the right theoretical frame is perhaps the most difficult, exacting, and important stage of research. To work it out efficiently I must give it all my time. [2, p. 582]”
Malinowski recognizes the “chaotic mass of notes” must be whipped into shape to become a manuscript, but he must first grapple with the disorder. This is precisely what psychotherapist Rollo May describes as the “creative encounter,” or the unstructured time an artist (or researcher) spends with her subject of study.
“The first thing we notice in a creative act is that it is an encounter. Artists encounter the landscape they propose to paint – they look at it, observe it from this angle and that. They are, as we say, absorbed in it. Or scientists confront their experiment, the laboratory task, in a similar situation of encounter. [3, p. 39]”
Consider also the “commonplace book,” or the kind of notebook great thinkers like John Locke and Charles Darwin used to organize their thoughts. As innovation author Steven Johnson tells us, early modern readers did not read sequentially, but jumped around, setting the stage for making creative connections.
“The tradition of the commonplace book contains a central tension between order and chaos, between the desire for methodical arrangement, and the desire for surprising new links of association….Each re-reading of the commonplace book becomes a new kind of revelation. [4, pp. 109–110]”
In other words, researchers who allow themselves to read out of order, or to collect without regard for structure, are able to make creative, intuitive leaps. But researchers who fail to methodically manage their knowledge fail to close the loop of production. Researchers need to think fast and to think slow. They need to think broadly and think narrowly. Type 1 and Type 2 thinking translates into three kinds of research: exploratory (thinking fast), evaluative (thinking fast and thinking slow), and experimental (thinking slow).
Frequently, social scientists in particular focus on “rigor” as the key to good research. But rigor without creativity adds little to our collective knowledge. As Heideggerian scholar Carol Steiner argues, this “fore-structure” – or predetermined way of looking at the world – stops us from conducting innovative research and producing innovative things. Instead, innovative researchers, she found, are open to “Being,” or the ability to have experiences, people, and objects reveal themselves to them.
“The innovators I studied seemed sometimes to be attuned to that old understanding of the relationship between Being and people…Losing faith in the scientific method has allowed them to understand themselves as other than knowledge-makers. Consequently, they often project an openness that allows a different world to shine through for them, the public world. [5, p. 594]”
In other words, researchers in particular must struggle against the “fore-structure” or their extensive theoretical and methodological training which interferes with receptivity. As Rollo May argues, being receptive does not mean lacking in rigor.
“The receptivity of the artist [or researcher] must never be confused with passivity. Receptivity is the artist holding him or herself alive and open to hear what being may speak. Such receptivity requires a nimbleness, a fine-honed sensitivity in order to let one’s self be the vehicle of whatever vision may emerge. [3, p. 80]”
Rigor must be introduced later in the process – after the researcher becomes open to a vision, after the researcher grapples with the complexities of the data and their incongruence. Rigor often comes after a period of unconscious processing of the data. Taking walks, playing, napping, and engaging in unstructured activity have all been shown to allow synthetic ideas to emerge.
Researchers should therefore use the scientific method with caution. Be aware of when you need rigor, and when you need creativity.
 C. W. Mills, The Sociological Imagination. New York: Oxford University Press, 1959.
 M. W. Young, Malinowski: Odyssey of an Anthropologist. New Haven, CT: Yale University Press, 2004.
 R. May, The Courage to Create. New York: WW Norton, 1994.
 S. Johnson, Where Good Ideas Come From: The Natural History of Innovation. New York, NY: Riverhead Books, 2010.
 C. Steiner, “Constructive Science and Technology Studies: On the Path to Being?,” Soc. Stud. Sci., vol. 29, no. 4, pp. 583–616, 1999.
Sadly, it is all too predictable that technologists underestimate, misjudge, or otherwise underappreciate how humans will interact with their technology. This is for one simple reason: engineering, as a discipline, does not bother to ask: “What is this?”
Engineers are not scientists, much less social scientists. They typically have no knowledge of basic human behavior such as loss aversion or impression management, even though these are the building blocks of social interaction – and entry-level knowledge for social scientists.
Engineers could ask, “What is this?” but instead choose to ask: “Does this work?”
“Does this work?” underpins research within tech companies. Once upon a time, tech companies hired engineers they called research scientists and stuck them in labs to tinker endlessly with pieces of hardware and scraps of computer code. Even today, there are over 7800 job postings for “research scientist” on LinkedIn, most of which seek engineers or computer scientists. A posting for an Uber research scientist intern is instructive. In addition to having a Master’s degree in a “technical field,” the intern is also encouraged to engage in “risk taking” and to “turn the dreams of science fiction into reality.” Another job posting for a research scientist at Facebook asks for skills in the scientific method, but then specifically narrows that down to “evaluate performance and de-bug.” In other words: Does this work? Notably not mentioned: the ability to develop basic knowledge.
Academics would see much of this activity as more akin to prototyping than to scientific inquiry. Indeed, these engineers produced many technology prototypes, but not much in the way of generally applicable knowledge, or what the rest of us might call “science.” In other words, they never seem to stop and ask, “What is this?”
Today, tech companies need to ask things like “What is a digital public sphere?” and “What is the nature of privacy?” and “What is artificial intelligence versus human intelligence?” Tech companies need typologies of human-computer interactions, motivations, fears, and human foibles. They need to create a system of knowledge around key questions of technology like artificial intelligence and social media.
Some argue that technology development doesn’t have time for “understanding,” that asking “What is this” takes too long and is too expensive. But this is a false economy. Philosopher Martha Nussbaum tells us plainly that we need that understanding, not for understanding’s sake but because it guides our planning:
“Understanding is always practical, since without it action is bound to be unfocused and ad hoc.” — Martha Nussbaum
In other words, if you don’t know “What is this” you’re probably going to build the wrong thing.
We can see this pattern of building the wrong thing in technology, over and over again. The term “user friendly” was invented way back in 1972. Curiously, “user hostile” wasn’t invented until 1996, just before Microsoft’s infamous Clippy appeared in 1997. Clippy’s abrupt entrée onto the desktops of the world indicated that technology “researchers” had no idea what they had made. Word famously exploded from what appeared to be a digital typewriter into a swollen behemoth that did everything from create a newsletter to automate mailing labels. Pick a lane, people. Clippy was there to tell users how to make Microsoft Word work, but no one bothered to find out, much less explain, what Microsoft Word actually was. Word is still so swollen that a new user today can credibly ask “What even IS this?”
Flash forward to today, and the so-called “lean startup” approach to building technology is really just a faster, even more facile way to ask “Does this work?” In reality, tech companies still don’t know, “What is this?” even after they’ve built a working prototype.
In my former role as a hiring manager at a major tech company, it took an average of 100 days to hire just one ethnographer and more often than not, the job remained open much longer than that. These are the very people who can tell us, “What is this?” The demand for these social scientists only grows. Yet, the tech industry as a whole has not yet figured out they need to ask “What is this?” before they build something.
Were tech companies to ask, “what is this,” they would learn the basic properties of their tools: their coherence, intelligibility, performance, and affordances. Instead, they are fully occupied with “does this work,” and create horrific blights on our collective consciousness like Tay, the racist AI bot, on the relatively innocuous end of the scale, and COMPAS, the racist parole algorithm, at the full-on evil end of the scale.
Technologists do not know what they do not know. Ethnographers hope for the day when they can just ask “What is this” without worrying about whether it works, because it doesn’t even exist yet. But tech development continues apace.
It’s time for ethnographers to stop this sad venture, and instead insist on asking: What IS this? Before another Tay, before another COMPAS. Technologists too must take responsibility, because if we don’t, the 21st century will become even more technocentric, and even less intelligible. Let’s find out what’s going on before we build anything else.
Lately we’ve been inundated with news about how terrible technology is, and how “no one could have known” what awful outcomes would come from mixing humans and technology together. This blog post is a redux of a talk I gave in Vancouver recently, and it’s a hopeful (though a little stoic) analysis on how social scientists inside tech companies can stay the course, and keep talking about awful outcomes. If you’re just such a researcher, or maybe you’re a social scientist working outside tech companies, this post is for you.
Social scientists inside tech companies might see a little of themselves in another social scientist, Tim Lee. Mr. Lee is a self-employed economist who works alone in an office in Greenwich, Connecticut.
To make his living, Mr. Lee sells subscriptions to his newsletter called piEconomics to institutional and private investors – a boring, 10-page block of text analyzing macro- and microeconomic trends.
Back in 2011, the bearish Mr. Lee predicted a crash of the Turkish lira. Specifically, he said one dollar would buy 7.2 lira. Most people thought he was crazy. By 2018, Mr. Lee’s prediction came true. In August, the dollar bought 6.95 lira, and it may well hit 7.2 by year’s end. As you might expect, Mr. Lee was rewarded for his prescience…with cancelled subscriptions.
That’s right, his subscribers rewarded his accuracy and insight by taking back their money. Mr. Lee seems realistic about the whole affair. “It has been some hard sledding,” he told the New York Times. “I have lost a lot of clients because I am too bearish.”
People who do human-centred research inside a tech company know what Tim Lee feels like. These researchers have probably told people what they know to be true, only to be disbelieved. Maybe there was a researcher inside Twitter who warned it would be a platform loved by Nazis. Maybe it was a researcher inside Facebook who warned the newsfeed is easily gamed for nefarious purposes. These researchers, just like Tim, were bearish, and probably “rewarded” in the same way.
Social scientists inside tech companies, and Tim Lee, are a little like Cassandra, the tragic figure of Greek myth who absolutely knew what sorrow was to come, yet no one believed her either. Social scientists inside tech companies, listen up: you can learn from Cassandra. A lot.
Cassandra in the Temple
When Cassandra of Troy was little, she and her brother camped out in the Temple of Apollo. While there, they had their ears licked by the temple snakes. This gave her the gift of prophecy. But Apollo being the vengeful Greek God we know him to be, also cursed her: she would see the future, but no one would ever believe her.
In the beginning, she saw trivial things, like when visitors would arrive. But eventually, her visions became more grand, dramatic, and even scary. It culminated in the mother of all warnings: Cassandra knew there were soldiers inside the Trojan Horse. And of course, no one believed her.
Of course, Troy fell and the Trojans lost the war. She was kidnapped and enslaved by Agamemnon, of the winning side. When she got to Agamemnon’s palace, she got a terrible sense of foreboding. Sure enough, she was right: Agamemnon’s wife Clytemnestra murdered her and Agamemnon, and that was the end of Cassandra. All she ever did was tell the truth about what she saw, and accurately predict the future, and this is what she gets. A little bit worse than cancelled newsletter subscriptions, eh?
Technology researchers know how she feels. They have real information that will help their technology partners do their jobs better. And yet, we often have this challenge: no one believes us. That is some hard sledding. I mean, sure, not not-taken-as-a-slave-after-the-war-and-murdered-by-your-slaver’s-jealous-wife hard sledding, but you know, still kind of rough.
What can we learn from Cassandra? This gift – her gift, our gift – comes at a cost. But it’s still a gift. In fact, the fact that it comes with hard sledding is actually a blessing. But Cassandra didn’t understand that. The more the people of Troy disbelieved her, the more hysterical she got. It was just this vicious circle. She didn’t embrace the cost of her gift.
The Cassandra Complex
Psychologist Laurie Layton Schapira writes about what she calls the Cassandra Complex, or the persistent experience of being unable to accept that others will not bow to your will. Schapira uses Cassandra to describe her patients who had become plaintive, immature, whiny people who continually fail to move past the moment when people disbelieve them. Instead, they stay arrested in time, mired in pain, regret, and anger. That anger is often justified; some of her patients had led very traumatic lives. The problem is that they stay angry, instead of reconciling and integrating that anger. They are unhappy, and stuck. They cannot move on with their lives.
You can see how a researcher could fall into this same trap. She might be literally saying, “My usability test predicted people will mistakenly post personal things” or “My ethnographic data clearly showed that the newsfeed is full of garbage.” But if you are not believed, over and over again, this begins to morph into “I am angry you do not believe me.”
This is where Schapira finds her patients: caught up in the pain and anguish of not being believed. The Cassandra Complex is a real risk for researchers, either working within or even outside technology companies; we predict terrible outcomes and no one believes us. Eventually, they just stop listening.
I cannot tell you how many times I have been the one saying, “There are SOLDIERS in THAT HORSE!”
So how do you solve for the Cassandra Complex? I’ll start with what won’t solve it. First, self-care.
Look, self-care is bullshit. I’m sorry, it is. I’m not going to stop those soldiers from jumping out of the horse by reading a lot of skin care advice from some rich white lady. No. It might give me nice skin, don’t get me wrong, but it won’t solve the problem. Sure, go ahead and get your 8 hours of sleep, by all means, but that’s not what keeps Tim Lee alive during patches of hard sledding. So forget self care.
Do people fail to believe social science warnings because we are bad researchers? Also no. Decades of psychological research have shown that fixed minds are hungry for confirmation, not for refutation. Data can be valid and sound and still no one believes us, so it’s not the quality of the research.
No, wait, that’s not entirely true. Absolutely we can improve. We don’t spend enough time analyzing our data. We report a laundry list of “things that happened” instead of providing explanations of why they happened. We are fearful of making universal statements. We’re afraid of our voices, so we bury them. I quote here Robert Solow, who says, “The fact that there is no such thing as perfect antisepsis does not mean one might as well do brain surgery in a sewer” (as cited in Geertz, 2000). So just like self-care, being good researchers is necessary but not sufficient for solving the problem.
Why do people fail to believe when the evidence is clear? I gather data, as I’m trained to do, with the utmost rigor and care. I take pains to present the data in rigorous but also compelling ways. I encourage stakeholders to come along with me, to witness product failures first hand. I build relationships, and above all, I care. And yet I still fail. Why?
This phenomenon of not being believed is not about any individual but about the cultural context in which researchers practice their work. I like to believe it’s all about me, but it’s not about me, or you, or Tim Lee, or even Cassandra. It is about the way we organize ourselves, as humans, into groups. It’s very difficult not to take things personally, but it helps to understand the context, which is not something you can control. Culture is, as Peter Drucker said, what eats strategy for breakfast. Culture is what makes confirmation bias a generalized phenomenon; one person’s disbelief is confirmation bias, but a whole organization full of confirmation bias? That is culture.
Humans need consensus for groups to stay cohesive and unfortunately, the nature of what we do attacks that consensus. The data we collect is what anthropologist Elizabeth Colson calls “uncomfortable knowledge” (as cited in Ramírez & Ravetz, 2011).
Technology researchers are the bearers of news, which can often mean bad news. It’s not about the individual researchers, but the hard role they are required to play. We are here to tell people things they don’t want to hear. It’s a hard job, and hard sledding is guaranteed.
But it turns out, being the bearer of bad news is a unique and wonderful opportunity to become more self-actualized, and lead a more meaningful life.
It is the opportunity to be a hero. All heroes must deal with failure. W.H. Auden wrote, “The typical Greek tragic situation is one in which whatever the hero does must be wrong” (Auden, 1948, p. 21, emphasis mine). So, you know, we are doomed. Sorry. We researchers are heroes, but more specifically, we are tragic heroes. Being Cassandra is actually a GIFT. It is something that many people only dream about. It is the gift of self-creation. Sure, it’s foisted upon us, but it’s a wonderful gift. We know from philosophy that making oneself is the key to becoming a realized person.
Nietzsche wondered what makes a hero, and he found that it’s about integrating the best and the worst together: “What makes [us] heroic? To go to meet simultaneously one’s greatest sorrow and one’s greatest hope” (Nietzsche, 1977, p. 235). This is the path to a unique and truly meaningful life. Imagine if you lived your entire life without meeting your greatest sorrow. On the surface, it seems like a pretty good life, but it’s not. You cannot make sense out of goodness without badness.
People whose job it is to point out the essential problems with their company’s products must face their sadness. But this is a gift.
Simone de Beauvoir puts it bluntly. “Since we do not succeed in fleeing it, let us therefore try to look the truth in the face” (de Beauvoir, 1948, p. 24). Let us embrace looking truth in the face.
Facing your sorrow can be a path to reinvention. Polish Canadian psychologist Kazimierz Dabrowski has a wonderful way of thinking of meeting one’s greatest sorrow. He called it the theory of positive disintegration. Contrary to most psychologists, Dabrowski believed there was value in fear, anger, despair, and psychic pain because they can lead to a crisis, and then, ultimately, to growth. The key to this growth is taking advantage of psychic pain, making it an opportunity to question yourself, your beliefs, and the gap between your ideal self and your current self.
The key to weathering being a Cassandra is making peace with the gap between your ideal self and your actual self. As Schapira tells us, “She needs to pull herself out with her own ego, finally to meet her own animus on equal terms” (Schapira, 1988).
What does this mean? This means respecting that power you have inside you and embracing your animus, or masculine power. Your animus is strong and confident, but can also be arrogant and aggressive. Cassandra is insightful and prescient, but she is plaintive and whiny. Imagine you integrate the two. Incorporate that power; don’t be afraid of it.
We need to take a stand, be bold, and tell people when we disagree. At the same time, we must accept that we will probably fail. We must have courage in the face of this failure and instead of attaching ourselves to “success,” we should attach ourselves to the struggle. This is how we become whole: by recognizing the struggle. There are some specific steps you can take to focus on the struggle, and meet your animus.
To do this, researchers will need a daily dose of meaning. A lot of us believe meaning is something that exists out there, in the world, and our life’s task is to just find it.
Meaning is not something you can find. Creativity coach Eric Maisel tells us that meaning is not something sitting on a shelf somewhere. It is something you must make, with the processes of your own mind.
“There are so many ways to kill off meaning: by not caring, by not choosing, by not besting demons, by not standing up” (Maisel, 2013, p. 129).
Incidentally, Maisel does endorse self-care as well, but note that he too sees it as an enabler, not the outcome itself. “You will also have to change your life so that you feel less threatened, less anxious, less rageful, less upset with life, and less self-reproachful, and so on” (Maisel, 2013, p. 63).
I keep asking myself, why didn’t Cassandra just go up to the horse and open the door!? Why didn’t she go all Arya Stark and just kill them all herself? Or at least die trying to kill them? What was WRONG with her? She let us all down, really. So don’t be like Cassandra. Be more like Tim Lee. Tim Lee is now predicting a new crash, bigger than 2008, bigger than the Turkish lira. People don’t believe him, because of course.
Courage, Sartre wrote, is the ability to act despite despair. So if you come in tomorrow and that same goddamn boulder is at the bottom of the hill, look at it. Think about its meaning. It’s your chance to be courageous. Tim Lee is still going. He’s had some hard sledding, sure, but he’s also accepted that. And he has also said he stands by his predictions. So should you.
This post is the full text of a presentation given at the Radical Research Conference in beautiful Vancouver, British Columbia in September 2018.
Auden, W. H. (1948). Introduction. In W. H. Auden (Ed.), The Portable Greek Reader. New York, NY: Penguin Books.
Bailey, F. G. (1983). The Tactical Uses of Passion: An Essay on Power, Reason, and Reality. Ithaca, NY: Cornell University Press.
de Beauvoir, S. (1948). The Ethics of Ambiguity. New York, NY: Open Road Integrated Media.
Geertz, C. (2000). The Interpretation of Cultures. New York: Basic Books.
Gilligan, C. (1993). In A Different Voice: Psychological Theory and Women’s Development. Cambridge: Harvard University Press.
Maisel, E. (2013). Why Smart People Hurt: A Guide for the Bright, the Sensitive, and the Creative. Red Wheel Weiser. Retrieved from https://books.google.com/books?id=dJ1dQrbR-rkC
Mills, C. W. (1959). The Sociological Imagination. New York: Oxford University Press.
Nietzsche, F. (1977). A Nietzsche Reader. London, UK: Penguin Classics.
October, T., Dizon, Z., Arnold, R., & Rosenberg, A. (2018). Characteristics of physician empathetic statements during pediatric intensive care conferences with family members: A qualitative study. JAMA Network Open, 1(3), e180351. Retrieved from http://dx.doi.org/10.1001/jamanetworkopen.2018.0351
Ramírez, R., & Ravetz, J. (2011). Feral Futures: Zen and Aesthetics. Futures, 43(4), 478–487. Retrieved from http://linkinghub.elsevier.com/retrieve/pii/S0016328710002880
Schapira, L. L. (1988). The Cassandra Complex: A Modern Perspective on Hysteria. Toronto, ON: Inner City Books.