What even IS this? Why tech companies are still failing us

Why do we know so little about the social implications of technology? It plays a starring role in everyday life, as essential as food, shelter, and clothing. A huge share of Americans (70%) use social media, and even 65% of senior citizens use Facebook – more than the share of people who eat family dinner at home, attend church, or have a pet.

Yet, we know so little about technology’s impact on everyday life. We are only just now recognizing problems like coordinated disinformation, breaches of personal data, and algorithmic discrimination.  Clearly, technology companies are falling short on understanding the social implications of their tools before and after they build them. But why? Why are tech companies failing us?

Woman in kitchen. Source: Art Institute of Chicago

Sadly, it is all too predictable that technologists underestimate, misjudge, or otherwise underappreciate how humans will interact with their technology. This is for one simple reason: engineering, as a discipline, does not bother to ask:  “What is this?”

Engineers are not scientists, much less social scientists. They typically have no knowledge of basic human behavior such as loss aversion or impression management, even though these are the building blocks of social interaction – and entry-level knowledge for social scientists.

Engineers could ask, “What is this?” but instead choose to ask: “Does this work?”

“Does this work?” underpins research within tech companies. Once upon a time, tech companies hired engineers they called research scientists and stuck them in labs to tinker endlessly with pieces of hardware and scraps of computer code. Even today, there are over 7,800 job postings for “research scientist” on LinkedIn, and the roles are typically filled by engineers or computer scientists. A posting for an Uber research scientist intern is instructive. In addition to having a Master’s degree in a “technical field,” the intern is also encouraged to engage in “risk taking” and to “turn the dreams of science fiction into reality.” Another job posting, for a research scientist at Facebook, asks for skills in the scientific method, but then specifically narrows that down to “evaluate performance and de-bug.” In other words: Does this work? Notably not mentioned: the ability to develop basic knowledge.

Academics would see much of this activity as more akin to prototyping than to scientific inquiry. Indeed, these engineers produced many technology prototypes, but not much in the way of generally applicable knowledge, or what the rest of us might call “science.” In other words, they never seem to stop and ask, “What is this?”

Inventions of the Monsters, by Salvador Dalí

Today, tech companies need to ask things like “What is a digital public sphere?” and “What is the nature of privacy?” and “What is artificial intelligence versus human intelligence?” Tech companies need typologies of human-computer interactions, motivations, fears, and human foibles. They need to create a system of knowledge around key questions of technology like artificial intelligence and social media.

Some argue that technology development doesn’t have time for “understanding,” that asking “What is this” takes too long and is too expensive. But this is a false economy. Philosopher Martha Nussbaum tells us plainly that we need that understanding, not for understanding’s sake but because it guides our planning:

“Understanding is always practical, since without it action is bound to be unfocused and ad hoc.” — Martha Nussbaum

In other words, if you don’t know “What is this” you’re probably going to build the wrong thing.

We can see this pattern of building the wrong thing in technology, over and over again. The term “user friendly” was invented way back in 1972. Curiously, “user hostile” wasn’t invented until 1996, just before Microsoft’s infamous Clippy appeared in 1997. Clippy’s abrupt entrance onto the desktops of the world indicated that technology “researchers” had no idea what they had made. Word famously exploded from what appeared to be a digital typewriter into a swollen behemoth that did everything from create a newsletter to automate mailing labels. Pick a lane, people. Clippy was there to tell users how to make Microsoft Word work, but no one bothered to find out, much less explain, what Microsoft Word actually was. Word is still so swollen that a new user today can credibly ask, “What even IS this?”

Clippy the paperclip
Source: NYMag.com

Flash forward to today, and the so-called “lean startup” approach to building technology is really just a faster, even more facile way to ask “Does this work?” In reality, tech companies still don’t know “What is this?” even after they’ve built a working prototype.

In my former role as a hiring manager at a major tech company, it took an average of 100 days to hire just one ethnographer, and more often than not the job remained open much longer than that. These are the very people who can tell us, “What is this?” The demand for these social scientists only grows. Yet the tech industry as a whole has not yet figured out that it needs to ask “What is this?” before it builds something.

Were tech companies to ask, “What is this?” they would learn the basic properties of their tools: their coherence, intelligibility, performance, and affordances. Instead, they are fully occupied with “Does this work?” and create horrific blights on our collective consciousness like Tay, the racist AI bot, on the relatively innocuous end of the scale, and COMPAS, the racist parole algorithm, at the full-on evil end of the scale.

Technologists do not know what they do not know. Ethnographers hope for the day when they can simply ask “What is this?” of a technology before it even exists, without worrying about whether it works. But tech development continues apace.

It’s time for ethnographers to stop this sad venture, and instead insist on asking: What IS this? Before another Tay, before another COMPAS. Technologists, too, must take responsibility, because if we don’t, the 21st century will become even more technocentric, and even less intelligible. Let’s find out what’s going on before we build anything else.

What technologists can learn from the tragic Greek myth of Cassandra

Lately we’ve been inundated with news about how terrible technology is, and how “no one could have known” what awful outcomes would come from mixing humans and technology together. This blog post is a redux of a talk I gave in Vancouver recently, and it’s a hopeful (though a little stoic) analysis of how social scientists inside tech companies can stay the course, and keep talking about awful outcomes. If you’re just such a researcher, or maybe you’re a social scientist working outside tech companies, this post is for you.

Social scientists inside tech companies might see a little of themselves in another social scientist, Tim Lee. Mr. Lee is a self-employed economist who works alone in an office in Greenwich, Connecticut.

To make his living, Mr. Lee sells subscriptions to his newsletter, piEconomics, to institutional and private investors – a boring, 10-page block of text analyzing macro- and microeconomic trends.

Source: The New York Times

Back in 2011, the bearish Mr. Lee predicted a crash of the Turkish lira. Specifically, he said one dollar would buy 7.2 lira. Most people thought he was crazy. By 2018, Mr. Lee’s prediction had come true. In August, the dollar bought 6.95 lira, and it may well hit that 7.2 by year’s end. As you might expect, Mr. Lee was rewarded for his prescience…with cancelled subscriptions.

Wait, what?

That’s right, his subscribers rewarded his accuracy and insight by taking back their money. Mr. Lee seems realistic about the whole affair. “It has been some hard sledding,” he told the New York Times. “I have lost a lot of clients because I am too bearish.”

People who do human-centred research inside a tech company know what Tim Lee feels like. These researchers have probably told people what they know to be true, only to be disbelieved. Maybe there was a researcher inside Twitter who warned it would be a platform loved by Nazis. Maybe it was a researcher inside Facebook who warned that the newsfeed is easily gamed for nefarious purposes. These researchers are just like Tim: bearish, and probably “rewarded” in the same way.

Social scientists inside tech companies, and Tim Lee, are a little like Cassandra, the tragic Greek hero who absolutely knew what sorrow was to come, yet no one believed her either. Social scientists inside tech companies, listen up: you can learn from Cassandra. A lot.

Cassandra in the Temple

When Cassandra of Troy was little, she and her brother camped out in the Temple of Apollo. While there, they had their ears licked by the temple snakes. This gave her the gift of prophecy. But Apollo, being the vengeful Greek god we know him to be, also cursed her: she would see the future, but no one would ever believe her.

In the beginning, she saw trivial things, like when visitors would arrive. But eventually, her visions became more grand, dramatic, and even scary. It culminated in the mother of all warnings: Cassandra knew there were soldiers inside the Trojan Horse. And of course, no one believed her.

Of course, Troy fell and the Trojans lost the war. Cassandra was kidnapped and enslaved by Agamemnon, of the winning side. When she got to Agamemnon’s palace, she got a terrible sense of foreboding. Sure enough, she was right: Agamemnon’s wife Clytemnestra murdered her and Agamemnon, and that was the end of Cassandra. All she ever did was tell the truth about what she saw, and accurately predict the future, and this is what she gets. A little bit worse than cancelled newsletter subscriptions, eh?

Technology researchers know how she feels. They have real information that will help their technology partners do their jobs better. And yet, we often have this challenge: no one believes us. That is some hard sledding. I mean, sure, it’s not taken-as-a-slave-after-the-war-and-murdered-by-your-slaver’s-jealous-wife hard sledding, but you know, still kind of rough.

What can we learn from Cassandra? This gift – her gift, our gift – comes at a cost. But it’s still a gift. In fact, the hard sledding it comes with is actually a blessing. But Cassandra didn’t understand that. The people of Troy really didn’t believe her, and she got more and more hysterical. It was just a vicious circle. She didn’t embrace the cost of her gift.

The Cassandra Complex

Psychologist Laurie Layton Schapira writes about what she calls the Cassandra Complex: the persistent experience of being unable to accept that others will not bow to your will. Schapira uses Cassandra to describe patients who had become plaintive, immature, whiny people, continually failing to move past the moment when people disbelieve them. Instead, they stay arrested in time, mired in pain, regret, and anger. That anger is often justified; some of her patients had led very traumatic lives. The problem is that they stay angry, instead of reconciling and integrating that anger. They are unhappy, and stuck. They cannot move on with their lives.

You can see how a researcher could fall into this same trap. She might be literally saying, “My usability test predicted people will mistakenly post personal things” or “My ethnographic data clearly showed that the newsfeed is full of garbage.” But if you are not believed, over and over again, this begins to morph into “I am angry you do not believe me.”

This is where Schapira finds her patients: caught up in the pain and anguish of not being believed. The Cassandra Complex is a real risk for researchers, either working within or even outside technology companies; we predict terrible outcomes and no one believes us. Eventually, they just stop listening.

I cannot tell you how many times I have been the one saying, “There are SOLDIERS in THAT HORSE!”

So how do you solve for the Cassandra Complex? I’ll start with what won’t solve it. First, self-care.

Look, self-care is bullshit. I’m sorry, it is. I’m not going to stop those soldiers from jumping out of the horse by reading a lot of skin-care advice from some rich white lady. No. It might give me nice skin, don’t get me wrong, but it won’t solve the problem. Sure, go ahead and get your 8 hours of sleep, by all means, but that’s not what keeps Tim Lee alive during patches of hard sledding. So forget self-care.

Do people fail to believe social science warnings because we are bad researchers? Also no. Decades of psychological research has shown that fixed minds are hungry for confirmation, not for refutation. Data can be valid and sound and still no one believes us, so it’s not the quality of the research.

No, wait, that’s not entirely true. Absolutely we can improve. We don’t spend enough time analyzing our data. We report a laundry list of “things that happened” instead of providing explanations of why they happened. We are fearful of making universal statements. We’re afraid of our voices, so we bury them. I quote here the economist Robert Solow, who says, “The fact that there is no such thing as perfect antisepsis does not mean one might as well do brain surgery in a sewer” (cited in Geertz, 2000). So just like self-care, being good researchers is necessary but not sufficient to solve the problem.

Why do people fail to believe when the evidence is clear? I gather data, as I’m trained to do, with the utmost rigor and care. I take pains to present the data in rigorous but also compelling ways. I encourage stakeholders to come along with me, to witness product failures first hand. I build relationships, and above all, I care. And yet I still fail. Why?

This phenomenon of not being believed is not about any individual but about the cultural context in which researchers practice their work. I like to believe it’s all about me, but it’s not about me, or you, or Tim Lee, or even Cassandra. It is about the way we organize ourselves, as humans, into groups. It’s very difficult not to take things personally, but it helps if you understand the context, which is not something you can control. Culture, as Peter Drucker said, eats strategy for breakfast. Culture is what makes confirmation bias a generalized phenomenon; one person’s disbelief is confirmation bias, but a whole organization full of confirmation bias? That is culture.

Humans need consensus for groups to stay cohesive, and unfortunately, the nature of what we do attacks that consensus. The data we collect is what anthropologist Elizabeth Colson calls “uncomfortable knowledge” (as cited in Ramírez & Ravetz, 2011).

Technology researchers are the bearers of news, which can often mean bad news. It’s not about the individual researchers, but the hard role they are required to play. We are here to tell people things they don’t want to hear. It’s a hard job, and hard sledding is guaranteed.

But it turns out that being the bearer of bad news is a unique and wonderful opportunity to become more self-actualized, and to lead a more meaningful life.

It is the opportunity to be a hero. All heroes must deal with failure. W.H. Auden wrote, “The typical Greek tragic situation is one in which whatever the hero does must be wrong” (Auden, 1948, p. 21, emphasis mine). So, you know, we are doomed. Sorry. We researchers are heroes, but more specifically, we are tragic heroes. Being Cassandra is actually a GIFT. It is something that many people only dream about. It is the gift of self-creation. Sure, it’s foisted upon us, but it’s a wonderful gift. We know from philosophy that making oneself is the key to becoming a realized person.

Nietzsche wondered what makes a hero, and he found that it’s about integrating the best and the worst together: “What makes [us] heroic? To go to meet simultaneously one’s greatest sorrow and one’s greatest hope” (Nietzsche, 1977, p. 235). This is the path to a unique and truly meaningful life. Imagine if you lived your entire life without meeting your greatest sorrow. On the surface, it seems like a pretty good life, but it’s not. You cannot make sense out of goodness without badness.

People whose job it is to point out the essential problems with their company’s products must face their sadness. But this is a gift.

Simone de Beauvoir puts it bluntly: “Since we do not succeed in fleeing it, let us therefore try to look the truth in the face” (de Beauvoir, 1948, p. 24). Let us embrace looking truth in the face.

Facing your sorrow can be a path to reinvention. Polish-Canadian psychologist Kazimierz Dabrowski has a wonderful way of thinking about meeting one’s greatest sorrow. He called it the theory of positive disintegration. Contrary to most psychologists, Dabrowski believed there was value in fear, anger, despair, and psychic pain, because they can lead to a crisis, and then, ultimately, to growth. The key to this growth is taking advantage of psychic pain, making it an opportunity to question yourself, your beliefs, and the gap between your ideal self and your current self.

The key to weathering being a Cassandra is making peace with the gap between your ideal self and your actual self. As Schapira tells us, “She needs to pull herself out of her with her own ego, finally to meet her own animus [on] equal terms” (Schapira, 1988).

What does this mean? It means respecting that power you have inside you and embracing your animus, or masculine power. Your animus is strong and confident, but can also be arrogant and aggressive. Cassandra is insightful and prescient, but she is also plaintive and whiny. Imagine you integrate the two. Incorporate that power; don’t be afraid of it.

We need to take a stand, be bold, and tell people when we disagree. At the same time, we must accept that we will probably fail. We must have courage in the face of this failure and instead of attaching ourselves to “success,” we should attach ourselves to the struggle. This is how we become whole: by recognizing the struggle. There are some specific steps you can take to focus on the struggle, and meet your animus.

To do this, researchers will need a daily dose of meaning. A lot of us believe meaning is something that exists out there, in the world, and our life’s task is to just find it.

Meaning is not something you can find. Creativity coach Eric Maisel tells us that meaning is not something sitting on a shelf somewhere. It is something you must make, with the processes of your own mind.

“There are so many ways to kill off meaning: by not caring, by not choosing, by not besting demons, by not standing up” (Maisel, 2013, p. 129).

Incidentally, Maisel does endorse self-care as well, but note that he too sees it as an enabler, not the outcome itself. “You will also have to change your life so that you feel less threatened, less anxious, less rageful, less upset with life, and less self-reproachful, and so on” (Maisel, 2013, p. 63).

I keep asking myself, why didn’t Cassandra just go up to the horse and open the door!? Why didn’t she go all Arya Stark and just kill them all herself? Or at least die trying to kill them? What was WRONG with her? She let us all down, really.  So don’t be like Cassandra. Be more like Tim Lee. Tim Lee is now predicting a new crash, bigger than 2008, bigger than the Turkish lira. People don’t believe him, because of course.

Courage, Sartre wrote, is the ability to act despite despair. So if you come in tomorrow and that same goddamn boulder is at the bottom of the hill, look at it. Think about its meaning. It’s your chance to be courageous. Tim Lee is still going. He’s had some hard sledding, sure, but he’s also accepted that. And he has also said he stands by his predictions. So should you.

 

This post is the full text of a presentation at the Radical Research Conference in beautiful Vancouver, British Columbia, in September 2018.

References

Auden, W. H. (1948). Introduction. In W. H. Auden (Ed.), The Portable Greek Reader. New York, NY: Penguin Books.

Bailey, F. G. (1983). The Tactical Uses of Passion: An Essay on Power, Reason, and Reality. Ithaca, NY: Cornell University Press.

de Beauvoir, S. (1948). The Ethics of Ambiguity. New York, NY: Open Road Integrated Media.

Geertz, C. (2000). The Interpretation of Cultures. New York: Basic Books.

Gilligan, C. (1993). In A Different Voice: Psychological Theory and Women’s Development. Cambridge: Harvard University Press.

Maisel, E. (2013). Why Smart People Hurt: A Guide for the Bright, the Sensitive, and the Creative. Red Wheel Weiser. Retrieved from https://books.google.com/books?id=dJ1dQrbR-rkC

Mills, C. W. (1959). The Sociological Imagination. New York: Oxford University Press.

Nietzsche, F. (1977). A Nietzsche Reader. London, UK: Penguin Classics.

October, T., Dizon, Z., Arnold, R., & Rosenberg, A. (2018). Characteristics of physician empathetic statements during pediatric intensive care conferences with family members: A qualitative study. JAMA Network Open, 1(3), e180351. Retrieved from http://dx.doi.org/10.1001/jamanetworkopen.2018.0351

Ramírez, R., & Ravetz, J. (2011). Feral Futures: Zen and Aesthetics. Futures, 43(4), 478–487. Retrieved from http://linkinghub.elsevier.com/retrieve/pii/S0016328710002880

Schapira, L. L. (1988). The Cassandra Complex: A Modern Perspective on Hysteria. Toronto, ON: Inner City Books.

 

 

Why Machine Learning isn’t about machines

How will machine learning change us as a society? It’s now time to ask this question — before we start building products and services that have unintended consequences.

I wanted to start this blog post by referencing “the turn of the last century.” I realized that would put me smack dab in the middle of the Y2K hysteria, and not in the birth of bureaucracy (whence such hysteria came).

No, we only know what the “turn of the century” means many years in retrospect. Now, we can look back on the year 1900 and see quite clearly that its significance was the shift from idiosyncratic, family-run, and sometimes chaotic organizations toward professional management, and of course, bureaucracy.

It was hard to see while it was happening, but Max Weber saw it (perhaps that’s why he had a nervous breakdown). Weber saw that we had begun to run our businesses and governments with standardized rules, and standardized hierarchies. No longer could the boss’s son waltz in and tell everyone what to do, unless he had an actual job title. (Well, that was the idea anyway; we apparently still let the boss’s kids take jobs they’re not qualified for).

This was radically new and had huge implications for how we purchase, exchange, work, and live. Bureaucracy became the irrationally rational norm; rules were to be followed even if they made no sense.

Which brings me to machine learning. Machines can learn if we give them the tools to learn, and the data to help them practice. But they cannot see what Max Weber saw. Machines cannot know they are creating an irrational bureaucratic hellscape — and nor would they care. They are very good at things humans are bad at, namely, vigilance and repetitive tasks. We should let them do those things.

But we should not let machines make decisions about rules, about whether the boss’s son is qualified, or other culturally and socially important questions. At the turn of this century, we are making machines that can do all of those things, but we are not pausing to evaluate whether we should.

Historians like to say that the 19th century did not really end on the arbitrary date of December 31, 1899, but instead on the more auspicious and socially meaningful date of November 11, 1918. It was only then that humanity realized what its changes had wrought, what horrors we had invented, and that humans themselves must take responsibility for those changes. I would argue we need to do the same now, before an equally socially meaningful date in the future.

Why Cortana doesn’t work at work

Microsoft is betting that Cortana will bring AI to the workplace. Here’s why that won’t happen.

Cortana is an intelligent agent  that is supposed to act as a personal assistant. You can interact with her (notice I said “her”? More on that in a minute) via voice or text, on mobile devices or on desktop computers. Given that Microsoft’s mobile market share has fallen below 1%, it’s pretty much a certainty that most people would interact with Cortana in their offices.

We know that most Windows 10 computers are in workplaces, so there’s a very strong likelihood that people will talk to Cortana in an office. This is a very different place from where people might interact with Siri on their phones, or Alexa in their homes. Siri and Alexa are called upon in private, controlled places (in fact, just 3% of iOS users report using Siri in public).

Let’s walk through an interaction with Cortana as a member of a workplace.

Microsoft encourages you to command Cortana by saying, “Hey Cortana…” and then giving her a command. A typical office scenario might be, “I wonder if I should book a vacation for the first week of August. Hmm. I’ll ask Cortana.”

This is how Cortana is supposed to work:

User: Hey Cortana, should I book a vacation for the first week of August?

Cortana: Let me check your calendar. Looks like you have a meeting on Monday, August 1st. Should I move it for you?

User: Yes, that’d be great.

Cortana: Okay, I’ve moved that meeting to Monday August 8th. Would you like to see some vacation suggestions?

User: Yes, please!

This is exactly how it plays out in a demo video on Microsoft’s site.

But let’s face it: there are a lot of contextually dependent reasons why this is completely unrealistic. Leaving aside Cortana’s technical limitations for the moment (and there are many), let’s take a look at what a real office and real user might look like.

Most offices are open concept, without even the suggestion of walls; as many as 70% of us work in open concept offices. As anyone who’s worked in such an office can tell you, hearing a neighbor on the phone can be excruciatingly annoying or excruciatingly awkward, depending on your neighbor’s TMI quotient.


So there’s a good chance that everyone in the user’s office will hear this idealized scenario. There are two clear disincentives against this happening. First, Cortana will make more “boundary work” for office workers. The mere act of trying to keep your private life private at work turns out to be, well, work. Recent research has found that keeping work and life private actually causes cognitive overload. If people use Cortana as intended, she is poised to make that much worse.

Second, Cortana demands that office workers treat their workplaces as if they themselves were kings and queens, instead of pawns and rooks. Voice interactions require workers to own their workspace, something that we know they do not do. Typical workers share their workspaces with others, and because we are apt social animals, we tend to comply with the unwritten rules of workplace etiquette. Bosses’ calendars take precedence over workers’ calendars. Bosses talk more than workers. Men talk more than women. In other words, people with power talk out loud more than people with less power.

Which brings me to the fact that Cortana is a woman. Is it any coincidence that most intelligent agents today are anthropomorphized as women? One of the most striking changes in the twentieth-century workplace was the almost total elimination of support staff, who were typically women. Only the most senior executives have assistants nowadays, and other mid-level white-collar workers are on their own for scheduling and administrative work.


Let’s not forget that Cortana is actually based on a supportive AI character in a video game. Cortana provides these workers with a sense that they can indeed reclaim the Mad Men era and have a compliant, supportive, and self-abnegating assistant who has no needs of her own. Practically, this promises white-collar workers a huge productivity boost, but the symbolic nature of it is even more interesting. When white-collar workers have a virtual assistant, they have re-claimed a sense of hierarchy, of control, and of power (even if it is completely imaginary).

And this is why Cortana will not work in the workplace. Today’s typical office worker does not have enough power to command the space around her, or to bark orders out loud, even if just to an intelligent agent. This office worker has been stripped of her ability to occupy a rung on the ladder higher than admin or support staff, because there is no admin or support staff. This typical office worker is embedded in a physical space that reflects this lack of hierarchical position — she has no command over it.

Scholars of gender and technology have described some ill-advised approaches to gender equality as “add women and stir.” The same applies to Cortana and other intelligent agents. You cannot “add Cortana and stir” and expect to see productivity improvements that somehow negate the existing organizational and physical structures of contemporary workplaces.