You may have heard the adage, “Guns don’t kill people; people do.” It rears its head whenever a tragic shooting makes national news, offered as a response to those who propose that easy access to guns, and gun policy more broadly, is the problem.
The quippy line gets at an important philosophical question about the nature and ethics of technology:
Is technology morally neutral?
Or to expand the question, are technological inventions ever inherently bad, perhaps with design-plans and use-cases that are unequivocally geared toward morally suspect, if not outright reprehensible, outcomes?
[Pause: can you think of any? Most common response: nuclear bombs - we’ll revisit that.]
Now, you might say, the object itself can’t really be bad or wrong - to say so would be a category mistake of some kind, a misuse of language. After all, the object only exists, and only has its design-plans and foreseen use-cases, because a rational agent, capable of moral decision-making, created it. And further, once created, the object would simply sit there idly unless a person happened upon it and began using it, thereby unleashing its potential onto the world for good or for ill. Ultimately, persons and their actions are the locus of moral judgment, not the objects caught in the fray of human action.
Cameras, ovens, cars, baseball bats, and pens all fit this description. They’ll all just sit there, causing no tangible harm, unless an ill-willed person decides to use them in nefarious ways.
A further point on this side of the debate is the difficulty of defining what counts as a technology, or as a use-case of a technology. For instance, is a baseball bat, with its iconic curvature, a technology? Or is the tool used to carve wood or metal into that shape the technology - or both?
Is the general idea of a social media algorithm that filters the content on your personal feed a technology, or should we focus our moral appraisal on more specific variants - say, an algorithm that purposely injects uninteresting content into the mix to create a variable-reward schedule, designed to keep you on the platform longer? You might think the former is fine and the latter suspect. If so, are you claiming that the tech itself is morally corrupt, or just that it’s a morally suspect use-case of a broader, morally neutral technology?
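To make that contrast concrete, here is a minimal, hypothetical sketch of the “suspect” variant - a feed builder that deliberately interleaves bland posts so that rewarding ones arrive unpredictably, slot-machine style. Every name here is invented for illustration; this is not any real platform’s code.

```python
import random

def build_feed(engaging_posts, filler_posts, length=20, filler_rate=0.3):
    """Assemble a feed that randomly interleaves filler, so the user never
    knows whether the next scroll yields a "hit" - a variable-reward schedule."""
    feed = []
    for _ in range(length):
        # Each slot is independently filler or a hit; the unpredictable
        # spacing of the hits is what keeps people scrolling.
        if filler_posts and random.random() < filler_rate:
            feed.append(filler_posts.pop(0))
        elif engaging_posts:
            feed.append(engaging_posts.pop(0))
    return feed

# Roughly 30% of a 20-slot feed is deliberately uninteresting.
hits = [f"post you'd love #{i}" for i in range(1, 16)]
filler = [f"bland post #{i}" for i in range(1, 9)]
for slot in build_feed(hits, filler):
    print(slot)
```

Notice that the broad idea - ranking content on a personal feed - is untouched here; the moral question attaches to the one parameter, filler_rate, whose only job is to exploit the reward schedule.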
We’ve long been fine with allowing, both legally and morally, widespread adoption of technologies that have abominable use-cases - mostly because the bad use-cases occur so infrequently compared to the positive ones, and because the positive ones bring so much benefit.
Now, back to our big contender: the nuclear bomb. You might think - clearly, this is a technology, if there ever was one, that is inherently bad - designed to cause massive loss of life, including the lives of people who pose no threat to you, such as the infants killed in the blasts of Hiroshima and Nagasaki. Such a technology, and others like it, you might say, should be viewed as inherently morally reprehensible.
But not so fast. First, we have the above problem of tech vs. use-case. You might argue that the tech at hand is really just nuclear fission and fusion, and that someone created a use-case by pairing it with time-delayed bombing technology. That is to say: don’t get mad at the tech, get mad at the person(s) who thought to use it that way. Second, there are of course many people who do not roundly condemn the use of the atomic bombs in WWII, arguing that it ended what might otherwise have been a protracted war costing the lives of the war’s non-aggressors. Who’s on offense and who’s on defense matters.
The point is that we should be careful in our thinking about where we assign moral blame, particularly if we are inclined to legislate and create policy around what we consider morally permissible.
We’ve all had a fair bit of time to think about the nuclear bomb case - it happened before many of us were born. But now we are facing brand-new technologies, albeit less outwardly disastrous than a nuclear bomb, that are causing great controversy over how we think about technological advancement and whether certain “forward” moves are good for us - advancements that purport to fundamentally alter the human experience.
You’ve likely heard of it - the metaverse.
A quick Google search will yield scores of articles citing the great pros and cons of such a technology - if it can even properly be called “a” technology, rather than many, many technologies working together.
You’ll see the same people who touted “Guns don’t kill people; people do” in defense of the moral neutrality of tech suddenly saying that participating in such a technology is inherently bad, for various reasons. Perhaps chief among them is the worry that people will become even more disconnected from “what is real,” or will grow weary of “real life,” longing to return to their own personal, curated metaverse sanctum.
And yet, some say it will usher in an era of unimagined prosperity, well-being, and opportunity for people of all walks of life.
Or maybe it’ll be somewhere in between - like most tech has been.
Let me leave you with these thoughts: technology can be used for good or for ill. Let’s focus on becoming the kinds of people who use technology well, because new technology will nearly inevitably keep coming and coming.
Let’s also always keep in mind that the impulse for control - to ban or regulate tech - comes at the cost of liberty. Sometimes we are morally obligated to sacrifice our liberty, but in my opinion we should look for non-legislative solutions first.
In this week’s discussion, we explore what the metaverse is, address the questions surrounding the moral neutrality of tech, and consider some of the specific benefits and dangers of widespread adoption of immersive, “metaversal” technologies.
Stay Curious!