Imagine that Alien Robots From Outer Space come to observe the earth. They listen in on our conversations, trying to figure out what kind of creatures we are.
They hear us talking about what we should and shouldn’t do.
Now for a while, these conversations make sense to the robots. When we talk about whether we should buy the cheap, old bread instead of the expensive, fresh bread, they recognize this as a fundamentally economic discussion, having to do with commodities such as money, flavor, and health. Other “should” questions strike them the same way.
- “Should we take the safer road or the shorter road?”
- “Should we pay more for the more durable item or pay less for the flimsy item?”
- “Should we invest in this risky but potentially rewarding company?”
“Economics!” they say to each other, and they become convinced that we are, at least, intelligent beings.
Then they encounter other questions that are more confusing to them.
- “Should I make the first call or should I wait until he calls?”
- “Should I ask beautiful Tabitha on a date, knowing that she’ll probably reject me, or more homely Alice, knowing that she’ll probably accept me?”
- “Should I become his friend, knowing that he has a lot of influence, or should I avoid him because he’s kind of a jerk?”
They puzzle over these questions for a while. But the robots eventually realize that although these questions are relational (which they are not real big on), the questions are still basically economic in nature.
An economic question deals fundamentally with concerns about what will bring the most benefit at the least cost. The "benefits" and "costs" might be financial, emotional, sexual, reputational, or denominated in any of various other currencies. But it's fundamentally a tactical, pragmatic, calculable (albeit often in vague or uncertain ways), cost-benefit question.
The robots become convinced that we are essentially sophisticated ants. We are concerned, like all creatures are, with getting the most bang for our buck. But we are social animals, concerned about interpersonal commodities such as reputation, affection, and loyalty. Although the robots aren't concerned with these things themselves—their own interests being more in the line of batteries, oil, steam pressure valves, and copper filaments—they recognize that our talk about these things is, for us, economic discussion.
But then the robots come across a completely different use of “should” that leaves them scratching their titanium scalps. They hear statements like these.
- “I know you’ll never get caught, but you still shouldn’t have stolen that $100,000.”
- “I should keep my promise even though it means losing my job.”
- “Should we keep the bonus for ourselves or should we distribute it among the people who made the product successful?”
- “I gave that little girl my last piece of bread. I suppose I’ll starve now, but I knew it was what I should do.”
Now these statements and questions really befuddle the robots. They recognize that there is an economic dimension to the questions, in that costs and benefits are involved to some extent. The puzzling thing is that the force of the “should” in each statement is often opposed to the economic benefit—exactly the opposite of what you’d expect. Humans, they find, often say they “should” do something not because it’s beneficial in some concrete way, but for some other reason. And the robots can’t put their creaking fingers on what that reason is.
As we look at the human race from the perspective of these mystified robots, we come into contact with the study of ethics. This is no authoritative definition, but a decent way to approach the study of ethics is to see it as the investigation of why people say "should" and "should not" in a non-economic, and yet somehow obligatory, way.
Something else that intrigues the robots is that people often get really upset about these particular kinds of "should" and "should not" questions, even more so (indeed much more so) than they do about the economic questions. The humans seem to have the sense that if a person does a "should" thing, that person deserves praise, even or especially if that action brings no economic benefit to that person. If the feeling of "should" associated with an action is particularly intense and the cost to the person doing it is particularly high, people will even call that person a "hero", and there is arguably no higher form of praise. Likewise, a person who does a "shouldn't" thing hasn't merely done something sub-optimal, costly, or risky. The robots could understand that. What they can't understand is that people who violate "shouldn't" statements make themselves contemptible, loathsome, and repugnant, sometimes even to themselves. Other people sneer at them like they'd sneer at a mold-blackened potato. These particular kinds of "shoulds" and "should nots"—the ethical kinds—evoke intense emotion in human beings.
The robots ask, “What kind of ants are these?”
Ethical statements have an emotional nuance as well as an economic nuance. They carry a sense of obligation, describing what ought or ought not to be.
But it gets even more surprising. Despite these strong emotional and intuitive elements, there is also a rational, logical aspect to ethical statements. People argue rationally about what they ought to do or be. They try to convince each other using reasons, deductions, analogies, and inferences. So although ethics involves emotions, it also involves reason and logic. This confluence of emotion, intuition, and logic almost makes the robots' own brains short-circuit.
The philosophical study of ethics, then, asks the question of why people talk and feel this way about certain kinds of people and certain types of actions. Students of ethics not only observe the talking—they also participate in it, asking what kinds of people and actions are properly worthy of being “should” things or “shouldn’t” things.