Our future with technology is a polarizing topic. While some warn that advancements pose grave threats to personal privacy and other fundamental rights, others believe whole-heartedly in the promise of new technologies to eradicate long-standing inequalities at a pace that humans have been unable – or unwilling – to match. This second vision, in which computers and algorithms solve social problems and have the power to fix the world, reflects technological utopianism — the idea that, through reliance on new technologies, the world will become truly better for everyone.

Yet, this vision of a digitally-enabled utopia is far from reality.

In her 2018 book Artificial Unintelligence: How Computers Misunderstand the World, the American writer and professor of digital journalism Meredith Broussard challenges technological utopianism directly. The book advocates for a thoughtful, skeptical evaluation of technological capabilities, and central to her argument is what she calls “technochauvinism.” According to Broussard: “Technochauvinism is the belief that tech is always the solution.” It’s easy to see the appeal of that belief, especially in common refrains like “doesn’t technology make everything faster, cheaper, and better?” and in pushes to make schools and government agencies more technologically savvy. However, as many have noted, more tech isn’t universally beneficial, and misguided beliefs in objectivity and efficiency often reinforce discriminatory structures that harm minority and other vulnerable communities.

The idea that computers and algorithms are objective decision-makers is false. As both Broussard and data scientist Cathy O’Neil, in her book Weapons of Math Destruction, explore, this notion stems from the impression that mathematical evaluation yields unbiased results. Simply put: it is the belief that the numbers don’t lie. This unfailing faith in math is dangerous for three reasons. First, it ignores that computers and algorithms are created by humans and share human biases. As Broussard notes:

“Computer systems are proxies for the people who made them. Because there has historically been very little diversity among the people who make computer systems, there are beliefs embedded in the design and concept of technological systems that we would be better off rethinking and revising.”

In other words, rather than liberating us from our human biases and subjectivities, computers reproduce existing inequalities and prejudices — yet are still seen as objective truth-tellers. This built-in bias poses a second problem: models often rely on data proxies – such as using area code as a stand-in for race when determining whether someone is eligible for a loan – that can circumvent existing anti-discrimination laws. This is made worse by a lack of transparency about the processing or, often, the collection of the data that is used against individuals. And so, a third problem: whereas humans can be held accountable for the decisions they make, computers and mathematical models cannot be made to stand trial. Aided by the guise of objectivity, this lack of accountability entrenches discriminatory structures and offers little recourse to those who are discriminated against.
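The proxy mechanism can be made concrete with a small sketch. The applicants, weights, and area-code scores below are invented for illustration: a hypothetical lender never reads the protected attribute, yet because neighbourhoods are often segregated, scoring by area code reproduces the very bias the law forbids.

```python
# Hypothetical illustration: a loan-scoring rule that never reads the
# protected attribute, yet discriminates through an area-code proxy.
applicants = [
    {"name": "A", "income": 52000, "area_code": "718", "group": "minority"},
    {"name": "B", "income": 51000, "area_code": "212", "group": "majority"},
    {"name": "C", "income": 49000, "area_code": "718", "group": "minority"},
    {"name": "D", "income": 48000, "area_code": "212", "group": "majority"},
]

# Invented "historical repayment" scores keyed by area code -- in a
# segregated city, this column is effectively a stand-in for race.
area_score = {"212": 1.0, "718": 0.6}

def loan_score(a):
    # The model "objectively" weighs income and neighbourhood; race never
    # appears as a feature, so the rule looks compliant on paper.
    return 0.5 * (a["income"] / 60000) + 0.5 * area_score[a["area_code"]]

approved = [a["name"] for a in applicants if loan_score(a) >= 0.8]
print(approved)  # only "212" applicants clear the bar
```

Note that applicant A has the highest income of the four and is still rejected: the neighbourhood term dominates, which is exactly how a proxy lets a "race-blind" model produce race-correlated outcomes.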

Heightened efficiency is also often cited as a reason for more tech. Sometimes, this is true. Automating once-manual tasks can save a great deal of time. For example, using a calculator to multiply large numbers eliminates the need to do the calculation by hand, saving time and energy. But the pursuit of efficiency can have negative consequences. To borrow an example from O’Neil’s Weapons of Math Destruction, imagine that a company is evaluating applicants for a social media position:

“Many people apply for the job, and they send information about the various marketing campaigns they’ve run. But it takes too much time to track down and evaluate all of their work. So the hiring manager settles on a proxy. She gives strong consideration to applicants with the most followers on Twitter.”

For the sake of efficiency, a stand-in value is used. Although technically more efficient, this method does not guarantee finding the best candidate. Rather, it finds the person who succeeds in one very specific area and eliminates otherwise qualified applicants. This may seem reasonable enough, given how closely the proxy tracks the skill set needed for that position. However, when the same process is automated and applied to situations like policing or determining eligibility for public benefits, its unjust nature becomes clear.
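O’Neil’s hiring example can be sketched the same way. The names and numbers below are invented, but the mechanism is the one she describes: screening on the follower-count proxy keeps whoever scores on that single axis and silently drops the candidate with the strongest actual campaign record.

```python
# Hypothetical illustration of O'Neil's hiring proxy: applicants are
# ranked by Twitter followers instead of by their actual campaign work.
applicants = [
    {"name": "Ana",  "followers": 120_000, "campaigns_led": 1},
    {"name": "Ben",  "followers": 45_000,  "campaigns_led": 3},
    {"name": "Cara", "followers": 2_300,   "campaigns_led": 9},  # strongest record
]

# The "efficient" screen: sort by the proxy, keep the top two, never
# look at the work samples at all.
shortlist = sorted(applicants, key=lambda a: a["followers"], reverse=True)[:2]
print([a["name"] for a in shortlist])  # Cara, the most experienced, is cut
```

The screen is fast precisely because it ignores the expensive-to-evaluate signal (the campaigns themselves), which is also why it optimizes for the wrong thing.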

Furthermore, there are limits to what a computer can actually do in certain situations, meaning that technology itself is not a promise of efficiency. Take, for example, package delivery by drone. Broussard explores this possibility:

“When you think about the hardware and software, the flaws in the idea become apparent. A drone is basically a remote-controlled helicopter with a computer and a camera. What happens when it rains? Electrical things don’t do well in rain, snow, or fog. My cable television service always malfunctions in a rainstorm, and a wireless drone is far more fragile. Is [it] supposed to come to the window? The front door? How will it push the button in the elevator, or open a stairway door, or push an intercom bell? These are all mundane tasks that are easy for humans, but insanely difficult for computers.”

In a case like the one she describes, more tech is the opposite of good — it is inefficient and, in all likelihood, would result in more problems than it would solve. To this, she rightly adds: “Only a technochauvinist would imagine [this] … is better than the human-based system that we have now.”

The conceptualization of technology as universally good and an improvement on current systems is harmful — not in the “old man yells at cloud” way, but in the sense that technology, like people, is flawed. Thoughtful consideration of how – and why – tech factors into daily life is essential: a just future may depend on it.

Read Kaitlin's other contributions to epicenter.works on the Base Rate Fallacy and PNR and different regulatory attitudes in the EU and the US.
