The Effective Altruist Manifesto

The fictitious account of a man who just wanted to do the most good he could.

Rebekah, 

I am sorry it has taken me so long to get back to you; I’ve been swamped with cases. I hope I haven’t delayed your story.

I no longer represent Sam,[1] but I am obviously still bound by confidentiality, so I need to speak with him before answering some of your questions. I will try to do this early next week. I don’t know if you’ve gotten in touch with him directly yet, but I can give you the mailing info if you need it.

Yes, Sam did write what you call a “manifesto” (not what he called it; it was just intended to be a blog post), which he authorized me to share with anyone. I’ve pasted the full text below.

Your question about whether I “agree” with his actions is one I decline to answer, because it is inappropriate and irrelevant. I hope your article does not end up suggesting that I either agree or disagree. It does not matter what I thought or think about his reasoning. Everyone is entitled to quality representation, and there is no public value in discussing my reasons for taking on Sam as a client. 

I do believe you when you say that you intend to write “sympathetically,” and once I hear back from Sam I will attempt to answer your questions fully and honestly, but I have been misquoted before, so I have to be emphatic. 

Mike

Law Offices of Michael H. Wilder

CONFIDENTIALITY NOTICE: This e-mail transmission (and/or the attachments accompanying it) may contain confidential information belonging to the sender which is protected by the attorney-client privilege. The information is intended only for the use of the intended recipient. If you are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or the taking of any action in reliance on the contents of this information is strictly prohibited. Any unauthorized interception of this transmission is illegal. If you have received this transmission in error, please promptly notify the sender by reply e-mail, and then destroy all copies of the transmission.


Note: I wrote most of what follows before my arrest, when I was thinking about starting a blog, but for obvious reasons it would not have been a good idea to post a lot of this at that time. Now that it can’t make a difference I am making it public, having done a few revisions and added some new parts at the end. It represents my basic attempt to (1) define the basics of morality, (2) explain how I came to derive the conclusions I reached, including the progressive development of my understanding, and (3) encourage others to continue the pursuit of a less bad world. I am still hopeful about human progress and think that others who continue the project of betterment can and will do what I did not succeed at. 🙂 

Sometimes it feels strange to be so alone, because from a strictly logical perspective disagreements over morality shouldn’t need to happen at all. Using reason we can find right answers, and if those answers cannot be refuted, they must be accepted. With Bayesian thinking we can get ourselves closer and closer to the truth, and even though we don’t know we’re 100 percent right, we know that we are not as wrong as we once were, and that we continue to progress toward the most correct position that it is possible to hold. It is not arrogant to say that I am more right than other people, because I do not claim that there is anything special about me personally except that I have consistently applied methods of rationality and been willing to work very hard to get rid of ordinary human bias. 

Bias is what prevents us from getting to the best answers to moral/political/other questions. Rationality may not be up for debate, but the human mind is not rational (see Kahneman) and it takes work to overcome our mistakes. Intelligence is cultivated through effort, through trying to see the world correctly rather than through the prism of our Stone Age brains. But the good news is that it is very possible to become more rigorous, and when we apply that rigor, we can be better people who do good more effectively and make the world a less bad place than it was before! 

I still don’t consider myself “political,” but I did have a kind of moral awakening that gave me insight into questions affecting human society broadly. I am from a standard liberal Democratic family, but politics wasn’t a big thing in our house. At college I was focused a lot on my work, but was introduced to rationalist blogs and discussion forums that made me think more deeply about issues that I had not really given much thought to. I was a computer science major, but I took some philosophy classes and gradually realized that it was impossible to evade one’s responsibility to think about how to be a moral person. Most people do not ask themselves hard questions about what it means to be good, what ethical system we should apply in our day-to-day lives, and how to make our moral beliefs consistent. But I realized that this was irrational: if you believe that morality is real, then you must try to be moral, and if you must try to be moral, you must think about what being moral entails.

The shallowness of everyday moral thinking is illustrated by the philosophical trolley problem, which is now widely known but which I will restate to show what I mean. A trolley is hurtling down a track towards five workers on the line, and will kill all five if nobody does anything. If you pull a lever, the trolley will be diverted onto a track that contains only one worker, and one person will die instead of five. The question of the trolley problem is: is pulling the lever the right thing to do?

The question makes people uncomfortable because pulling the lever is obviously the right thing to do, since it will cause one person to die instead of five. Essentially you are being faced with a simple question: is it better for more people to die or for fewer to die? The answer is fewer. But the reason the trolley problem is interesting is that many people are instinctively uncomfortable with the idea of pulling the lever, because it seems like by acting (rather than refraining from acting) you are “killing” someone, whereas if you just let the trolley take its course you would not be responsible for the outcome yourself.

This is not a rational way to think, because the distinction between acting/not acting is an illusion and is morally irrelevant. Standing there letting people die is a choice, pulling the lever is a choice, and if we are to be good people we have to get over our moral discomfort and be willing to pursue the outcomes that lead to the greatest good.

When you start thinking seriously about morality you realize that many of our instinctive, unthinking behaviors and assumptions are actually wrong. For example: when my sister was in law school, she told me she planned to be a legal aid attorney because she wanted to help the poor. This is a good idea! Poor people do not have enough access to quality legal aid services, and I was very sympathetic to her goals. However, consider: a legal aid attorney earns about $45,000 a year on average. The starting salary at a Wall Street law firm is $160,000. Within a few years you can be earning much, much more, and there was a good chance my sister could eventually earn $500,000 a year if she worked hard. If my sister went to work at a Wall Street law firm and lived on $45,000 (the amount she would have earned as a legal aid attorney), she could have paid for many more legal aid services for the poor than she could have provided herself. The poor would receive three or four times as much legal aid as they would if my sister herself became a legal aid attorney. Morally speaking, then, if my sister believed in legal aid services for the poor, that does not lead to the conclusion that she herself should be a legal aid attorney. The poor do not benefit more from having her in particular than they would have from getting an equally competent attorney whose salary my sister paid. 

When I showed my sister that she was not doing the most good she could do, she became defensive. She could not explain why she should not go to work on Wall Street and donate her salary, except that she did not want to do this, which is not a moral reason but an appeal to self-interest. She was angry when I showed that her arguments did not work, which is common when people run up against biases that they are uncomfortable shedding. (In the event, my sister went to work at neither a legal aid office nor a Wall Street firm, but that is neither here nor there.) 

The morally correct course of action is to do the most good you can. This should be obvious, and many people say they agree with it, but they do not actually follow the principle through to its logical conclusions, because those conclusions are discomforting. Consider the drowning child. If you were standing at the side of a lake and you saw a child drowning you would—if you were a good person—rush in to save the child, because in this way you would save a life at little cost to your own. However, many children around the world die of preventable diseases all the time. If we donated money, we could save those children. Why are they not just as worthy of our attention? Why does our geographic proximity to the drowning child mean that they get saved? Surely distance is not a morally compelling reason to draw distinctions between the value of different children’s lives. 

But once you start to understand that you have the same obligation to any child, not just the one who happens to be in front of you, you have to radically rethink the course of your life. This is not optional, it follows from the acceptance of (1) reason and (2) basic morality. To resist the obligation is not a product of rational thinking, though people will of course come up with many explanations for why they shirk their moral duties. Usually these excuses fall apart under scrutiny but because so many people have an interest in leading comfortable rather than moral lives, they do not question each other’s transparently flimsy excuses for not doing more good.

How should we think about what it takes to be a moral person? This sounds like a difficult question but the answer is actually quite easy: the more good you do, the better you are. What is good? Minimizing harm, maximizing flourishing. A person’s job is to maximize their impact on the world in a positive way, reducing the suffering of others and helping those others to flourish. This may require a great deal of self-sacrifice, because my life does not matter any more than other people’s. But it doesn’t require you to be a complete monk; if certain comforts make it easier for you to flourish and therefore do more good, they are permitted, but everything must be thought of in terms of whether or not it maximizes the amount of good you are doing.

When you start to think rationally about the good, you realize how far short we are falling. There is so much preventable suffering in the world! Consider, for example, the suffering of non-human animals. They are sentient (nobody reasonably disputes this) and can feel pain. Yet we murder and torture them by the billions. We could be vegan, but most people (over 98%) are not. There is no rational defense for this. Any explanation we could give of why it is bad to kill people makes it bad to kill other animals as well, unless we adopt an arbitrary and indefensible hierarchy of life that rejects moral reasoning. It is morally obligatory to be vegan, and every response to this that I have ever seen put forward fails on elementary logical grounds and is very obviously just a paper-thin rationalization for the fact that people enjoy eating animals. (Enjoyment does not sufficiently change the moral calculus; if people enjoyed eating babies, that wouldn’t make it right.)

As I became a deeper and more rational moral thinker, I changed my personal conduct. I donated most of my income to charity. But not just any charity. “Charity” in the abstract is not inherently good. In fact, many charities hurt the very people they are trying to help, and are actually immoral. It is important to try to quantify the good that charities do and maximize your impact, rather than just donating to “feel good” causes. For example, while it may feel good to give money to a homeless person, it is actually not the correct thing to do, because that money could do more good elsewhere. It might make you feel virtuous, but rational thinking helps us understand that the feeling of virtue is not synonymous with the authentic good. 

A lot of things that seem good are actually bad, but a lot of things that seem bad can actually be good. For instance, many people instinctively dislike payday lenders, because they give loans with high interest rates to people who are poor. But the high interest rates reflect the risk the payday lenders take on, and ultimately people are better off when they are able to get loans than when they are not. The dislike for payday lenders exists because something instinctively feels wrong about what they do, but they are actually beneficial and the world would be materially worse off without them.

So it is important to examine everything very carefully, because often we are deceived by our bias and do not understand that things that seem bad are good, meaning they are better than all the available alternatives. Sweatshops fall into this category; people think sweatshops are bad, because it’s no fun to work in a sweatshop. But when sweatshops are built in a community that formerly had subsistence agriculture, people’s incomes go up and they end up better off than they otherwise would have been. A thing that seems bad is actually good.

The same, obviously, is true of killing. The bombing of Hiroshima prevented the US from having to conduct a mainland invasion of Japan, which would have cost millions more lives. It horrifies us on an instinctive level to see a city blown to smithereens, but we are not thinking about alternatives. If you cannot provide a plausible description of an alternative that would have produced less suffering, you are only “virtue signaling” when you condemn something that you should be praising. Often in life we must settle for the “least worst” option, but the people who are committed enough to rationally applying moral principles end up being condemned even though they are producing the best outcomes for all.

It can be very frustrating to be a person committed to rational moral thinking in a world dominated by bias and ideology. People often get angry when you point out the inconsistency between the things they say they believe and the things they actually advocate. In the early days of my rationalism, I got into many arguments with people who—even though they were often perfectly logical in certain areas of their lives—stubbornly refused to follow basic reasoning through to its conclusions when it produced consequences they did not like.

There is an entire culture of resistance to evidence. People who tell uncomfortable truths are often socially isolated or punished. Some facts are considered dangerous, and people don’t want to hear them. A Google engineer was famously fired for being willing to ask his colleagues some difficult questions about the empirical evidence on gender and programming aptitude. This was a question of fact, not values, yet certain facts are excluded from discussion on the grounds that they would be dangerous. This is exactly backwards, of course: the dangerous thing is to ignore facts that contradict our presuppositions, because we will be missing opportunities to improve our understanding and become better people. 

Social taboos are the product of pure emotion, but they impede discussion on many topics. For example, in progressive circles it is taken as a given that the police are racist. If you present statistical evidence to the contrary, you yourself will be considered a racist—no matter how valid the evidence you present is. This is a sign of an unhealthy society, one that could easily move into a kind of new Dark Ages. (Reading the book The New Dark Ages helped me understand how societies that put certain ideas off limits and require affirmation of dogmas are instituting barriers to their own flourishing.) If we take a topic like, say, “eugenics,” people react instinctively negatively when they hear the word, even though “improving human beings through being conscientious about selecting for desirable traits” is not inherently monstrous or wrong. Or people hear the word “IQ” and they bristle and become hostile, without thinking about why they are being so hostile. Cannibalism is a “taboo,” but it is not a rational taboo; the film Soylent Green suggested that allowing dead humans to supply protein for other humans was somehow monstrous and horrifying. But that’s prejudice, not logic. By placing entire topics off limits, we may be depriving ourselves of opportunities to do new kinds of good.

I have changed my approach with people over time as I have come to understand the psychological mechanisms underlying the resistance to reason. “You catch more flies with honey than with vinegar” is actually true insofar as personal relations go, and if you speak to someone in ways that suggest to them that you listen to and understand them, you will find them more receptive to your ideas. I began incorporating a lot of exclamation points and smiley faces into my writing, because these disarmed people and caused them to perceive that I was actually a person trying to do good and not just an antagonist for its own sake. I have become a strong believer in the instrumental value of kindness. 🙂 


When I graduated from college I took a job at one of the major tech companies (I will not say which one, though it is easy to find out). It was a well-paying position and I donated most of my income to a charity that was dedicated to vaccinating children in non-Western countries. Objectively, I was doing more good than most of my peers, and I enjoyed the work I was doing (contrary to populist rhetoric, there is moral worth in what “big corporations” do, because they provide services that people need, often for free. There is a strong case that Elon Musk, by creating electric cars, is 1000x more moral than a legal aid attorney). But I was not satisfied. I did not feel as if I was truly doing the most good. In the hours after work I spent a lot of time in the comments section on one of the popular rationalist blogs, where people would discuss ways to continue shedding our biases and maximize our positive impact on the world.

Sometimes this involved “politics.” In 2019, when the Democratic primary was underway, I tried to figure out which candidate one should support in order to optimize the social good. I ranked candidates numerically on a number of different factors. How open are they to mutually beneficial free trade between countries? Do they make policy based on evidence or ideology? Do they support school choice, so that students’ freedom to be educated is maximized, or do they tend to side with inefficient teachers’ unions? Based on my formula, Cory Booker was the mathematically correct candidate to support, and I gave his campaign money and became a volunteer for several months. 

But politics is not an especially efficient way to do good; the effort required to effect marginal change is substantial, and political thinking is so polarized and prejudiced that it is almost impossible to have a sensible discussion or to get to the core of issues. Politics is dominated by tribal thinking whereby people’s hostility to out-groups guides their conception of what is right, and it is almost impossible to counteract raw tribalistic thinking through reason. Democracy is not in any way a system that maximizes the social good; it is a very clear example of “perceived virtue” being in tension with the demands of actual moral principles. Voters do not know what they want, and do not behave rationally. They choose candidates that undermine everybody’s interests, and they are swayed by rhetoric that has nothing to do with facts. Yet we pretend that voting is good. We spend billions of dollars on the elaborate charade of elections, which produce obviously worse outcomes than a simple merit-based examination system for selecting leaders. It would be very easy to choose rulers on the basis of a straightforward test of competence, yet a baseless faith in “the people” (a meaningless abstraction, for the people all disagree) means we preserve a system that is obviously failing. (Monarchy might actually be a value-maximizing system of government, but I do not want to get into that here.) 

My attention moved away from politics because I realized how important other things were in shaping social outcomes, and in particular because I came to incorporate time into my moral analysis. I realized that there is no reason why future suffering should matter less to us than present suffering, and that therefore we are obligated to think about our actions not only in terms of their immediate visible consequences but their long-term ones as well. If something increases suffering now but reduces it in the future, it may be morally justifiable, because the future lives are not worth less than the present lives, just as the child away from you is not less valuable than the child nearby. And by declining to inflict present suffering, while knowing that you will increase future suffering, you are acting no differently than the person irrationally uncomfortable in the trolley problem, who sees a false distinction between action and inaction. When we start to think of time and distance as morally irrelevant, we begin to have a much clearer moral worldview that helps us do more good. 

This is how I came to see the moral importance of Artificial Intelligence research. We live in a highly advanced technological civilization that is unleashing untold new possibilities all the time. A lot of time and energy is being put into the development of artificial intelligence, but if that intelligence is used for bad purposes it could create untold new kinds of suffering. If a superintelligent AI were to be invented, it could outmatch any human attempts to thwart it and its goals would dominate over our own. If those goals were good, the results would be good, but if they were bad, the results would be very, very bad. 

In fact, the possible bad outcome would be so bad that even a small chance of it happening is worth putting in a lot of effort to prevent. Morally speaking, research on artificial intelligence may be the maximally good thing for people to put time into, because the small possibility of preventing a terrible outcome would be very worth it. To understand why, think of accidents: if there is a 1% chance that a bomb will go off that destroys all of humanity, it is worth putting in a lot of effort to stop that bomb from going off even though the chance is low, because the consequences are so unthinkable. (I am speaking here in terms of conventional understandings of the good, not my actual beliefs, which I will get to shortly.) So even if we think that a hyperintelligent malevolent robot is unlikely to destroy the world, it is worth having more people spend most of their time thinking about hyperintelligent malevolent robots. (My sister should probably have done this instead of going to law school, for instance.)

So actions today that seem very irrational and immoral are actually justified in the pursuit of a greater long term good. People recoil at this but they do not have arguments for why it is untrue. Sometimes they argue that the long-term good will not be achieved, but this is not a case against the principle, it is a case against a particular invocation of the principle. Only a monster would actually disbelieve the principle, because the monster would be willing to cause any amount of suffering in the distant future for the sake of appearing to be a good person today, which is the most selfish possible way to behave. 

But my belief in the central moral importance of artificial intelligence was not the final stage in my philosophical development, and the course of action I am now pursuing is based on radically deepening my empathy toward others and thereby improving my conduct. I used to think that empathy was irrational, because empathy would always be selective and you would end up sympathizing with some creatures over others and being driven by the subjective experience of the emotions of those others rather than by an objective and neutral, non-empathetic assessment of the facts. I had, however, a fundamental insight that changed my mind (I am perfectly fine saying that I was wrong about things, because as I said, we all begin life irrational and the task is to become less and less wrong). I understood that empathy is data, meaning that the subjective experience of a sentient creature is information that must be incorporated into an objective analysis of that creature’s welfare-maximization. It is true that selective empathy is irrational, but that does not mean discarding empathy. It means we must adopt radical universal empathy, in which we come to feel and understand the experiences of every other experiencing-thing. 

So, for example: if I empathize with a mother who has lost her child in an accident, I may become emotion-driven, because losing a child is sad, and I may enact policies that are based on helping one mother, or a few mothers, but that do not neutrally weigh the costs and benefits to all parties because they only benefit the ones I have empathized with. The usual way out of this is to say that we should stop empathizing so much and calculate utility coolly and objectively, but this is only the illusion of rationality. In fact, we are still being selective, because we are deliberately excluding the extremity of the feeling of pain. It is because being a mother who has lost a child is so painful that I excluded that experience from my cost-benefit analysis, but this is once again choosing to ignore inconvenient facts.

We have to empathize with everything if we are to understand what the good is, because if we do not know what it is like to be a thing, we do not know what that thing’s best interests are. And when we do this, the conclusions we reach are radical.

Let us consider wild animals and their suffering. Have you ever seen a gazelle be eaten by lions? Have you ever seen a snake devour a mouse? An insect attack and destroy another? Do you know how much pain and misery “prey” experience? They are torn to pieces in the most horrible ways. Nature is “red in tooth and claw.” It is a brutal place, filled with suffering so extreme that it is overwhelming to contemplate it. They leave most of the nastiest bits out of the nature documentaries, another example of ignoring inconvenient facts. But if we confront the reality honestly, we understand that this world is not a world of happiness marred by occasional pain, it is a world of extreme suffering relieved occasionally by bursts of happiness.

Here is a moral truth: suffering is worse than happiness is good. Eating a delicious hamburger makes you happy, but it does not make you as happy as being sexually violated makes you unhappy. The relief of suffering is a much greater moral priority than the positive creation of happiness. Utilitarian thinking is commonly thought of as “the greatest happiness for the greatest number,” but the right approach is “the least suffering for the greatest number.” Freedom from pain is more important than the presence of pleasure, and there is endless empirical evidence on preferences in both humans and other animals to confirm this.

In progressive circles “nature” is thought of as something pristine and peaceful, but this is pure myth. Nature is, in fact, something horrifying. When we consider how many billions of animals are shredded and mangled daily, and when we conduct radical universal empathy to understand just how bad it feels to be one of those animals, we cannot morally countenance the “preservation” of nature, which just amounts to a perverse belief that it’s somehow good when bad things that cause untold suffering continue because they are somehow “beautiful” to us—yet again, pure selfishness disguised as virtue. Conservationism is not good. It is bad. It is the mask of virtue atop the face of suffering and death. We should not let blind prejudice and socialization prevent us from recognizing this obvious truth. We must follow reason to its conclusions even when we do not like them.

When I came to understand how bad suffering was, when I used radical universal empathy to truly appreciate the vast universe of trillions of conscious experiences going on around me at all times, I nearly had a mental breakdown, because I was so overwhelmed by my horror and the inadequacy of my previous attempts to do good. Legal aid? AI? This was less than a drop in the bucket when I considered the true meaning of the sum total of suffering on Earth. When we know what pain is, when we have empathized rather than considered it abstractly, and when we have calculated just how much of it there is at any one time, we are forced to the realization that it must be ended as soon as possible as completely as possible.

The central moral question, then, became: how to end the most pain the fastest. At one point I considered trying to join an oil company, in the hopes of accelerating climate change and eliminating as much of the “natural” world as possible, but this did not seem adequate, and the marginal difference I could personally have made through my presence was small; the incentives created by shareholder value maximization meant that, like my sister going into legal aid work, someone else was already going to do as much to end nature as I could do myself. 

At this point, I had to stop talking to people about moral questions, because there was too great a distance between them and me. This was true even in the Effective Altruist community. They were committed to the rational assessment of moral questions, but only to a point. They were still ultimately morally weak and allowed emotion to hold them back from a fully honest inquiry into the correct course of action. When I brought up the need to eliminate wildlife habitats and consider efficient means of global population reduction across all species, they reacted the same way as those who refused to accept the obvious good of working for Wall Street over a legal aid office. You can call a position “ludicrous,” you can say “Psh,” but noises are not reasons. There were no real arguments against me; the ones that were offered were only superficially reasonable, and were clearly put together in order to evade “unthinkable” conclusions, rather than to assess whether that which is deemed unthinkable is, in fact, compelled. 

The means of ending global suffering was not obvious. If we lived in a society where rationality rather than tribalism governed human thinking and behavior, then many social resources would be put toward the question of ultimate suffering relief. But because so few were willing to overcome and question taboos, I was stuck trying to find a way to do the most good as a lone individual. 

The pandemic of 2020, however, was illuminating, a kind of Eureka moment. A tiny virus, invisible to the naked eye, could spread across the globe from one person in China. A contagious enough, fatal enough virus could have a greater impact per unit of resources spent on development than almost anything else. It could cross species. If truly perfected, could it get us 20% of the way toward the ultimate goal of eliminating suffering? Possibly. Think of how improved the aggregate total of welfare would be with even 20% fewer sentient life forms. To fully extinguish all life would permanently end suffering. Politics was completely trivial by comparison. The good that can be done with an evidence-based education policy like school choice is dwarfed utterly by the good that can be done by large-scale population reduction. The quest for justice is coextensive with the quest to end all life.

The rebuttals people offer to this position are feeble, and are clearly motivated by people’s horror at the conclusion. They do not grasp how bad suffering is. They want to preserve suffering because they are unwilling to abandon their preconceptions. Or they suggest that while the complete elimination of suffering would be desirable, it is not feasible. But that is an argument for working on the practical problem, not for abandoning morality. 

In the aftermath of the pandemic, a great deal of new funding was available for research on viruses, and though my background was not in medicine, I was able to secure an appointment at Baylor University’s Department of Molecular Virology, working on a programming team helping doctors craft new models for tracking contagion. I began pursuing a second undergraduate degree in biology at the university, so that I might enroll at the medical school after a few years. In the meantime, I spent every hour I could researching epidemiology, attempting to understand why pathogens spread and how they can be made more efficient. What if a virus could be guaranteed to kill whatever it touched, humans and other animals alike? 

It was important to me that a virus be developed that maximized fatality while minimizing the suffering that preceded that fatality. It should be like a fast-acting poison, a gentle euthanization. From a strictly rational perspective, it does not actually matter very much whether afflicted creatures suffer before their deaths, because the gains from the deaths (and the future suffering that is eliminated by preventing other entities from subsequently being born) massively outweigh the costs. But there is still a part of me that is swayed by emotion, and because there was little harm in a slight delay of the ultimate relief, I permitted myself some research into efficiency as an indulgence. 

As you probably already know if you are reading this, I did not get very far in my efforts. The pandemic had not only increased the research funding at virology institutes, it also increased federal law enforcement’s vigilance on biosecurity. I do not claim to be a crafty person, only a well-intentioned one, and a number of ill-thought-out Google searches attracted government attention. I fell straight into the trap that was set for me, readily disclosing my plans to the agent who posed as a sincere fellow member of an online forum on rational morality. Baylor swiftly terminated me and revoked my research access as soon as charges were filed. I had hoped that I would at least have the opportunity of a trial to publicly explain my views and hopefully persuade others to pursue the questions I had dwelt on, and the project I had begun. But I was informed by my lawyer that moral defenses are not permitted; one must give legal arguments only. This seems to me to capture everything that is perverse and monstrous about our society. 

Because I clearly violated the laws I was accused of violating, there was no sense facing trial, hence my plea. But I would like to affirm clearly that I still believe my position to be correct. In fact, “believe” is a misleading term, because it implies a mere subjective hunch. I have shown my moral arguments to be sound. They are compelled by reason, and those who reject or resist them do so because they have refused to ask the question seriously—they are convinced they already know the answer. I am sure there are those who will jump up immediately to refute me, to show why my calculus is incorrect, but notice that they jump up. They do this because they are not driven by a quest to be good. They are driven by a quest to defend values they hold without rational foundation, to preserve those values by any arguments they can muster, because the alternative would be unthinkable. It is the duty of a good person not to behave like this. The good person must be willing to seem like a bad person, and to entertain possibilities others deem terrible or cruel, because they understand that it is not about them, it is about doing the most good we can do no matter the cost.

Whatever else can be said of me, I have only tried to make the world a better place. 🙂 


  1. Sept. 2022: When I wrote this story several years ago I did not realize there was an actual prominent Effective Altruist named Sam. It just seemed like the sort of name an Effective Altruist would have. Any resemblances to the crypto billionaire guy are totally unintentional. 
