Wednesday, May 13, 2020

Estimate of Potential Externality-tax Revenues in the USA

A colleague asked me to estimate how much money the USA could raise by taxing externalities equal to the damage they cause. This is the kind of thing that should be done by a team of competent experts, but apparently nobody with resources has even bothered to ask the question, so I took a shot at it:

CBO's list of budget options estimates about $100B per year from a carbon tax set at the estimated social cost of carbon. To be sensible (i.e. non-distortionary and with a chance of passing), it would also need a 'carbon tariff', i.e. taxing all imports based on their estimated carbon emissions, ideally at a slightly higher rate, both for political reasons and to compensate for enforcement and estimation errors. Imports are 15% of GDP, so assume $120B from the carbon tax and tariff combined.

After that it's all napkin math. We know the harms of various externalities, so we can calculate their social cost, but we don't know how much of that we could capture with a tax. 

Air pollution kills about 100k Americans per year. I'll assume that 50% to 90% of this would get engineered out in response to the tax, and the rest hangs around to be taxed. With 10k to 50k deaths remaining and taxed at the $10M VSL, you raise $100-500 billion.

The economic costs of alcohol are about $250 billion. This does not count the monetized costs of the 2 million lost QALYs. (I had to do the QALY calculation by multiplying the 'years lived with a disability' figure by a QALY cost of 0.3, assuming an average of mild and moderate QALY loss.) At $500k a pop, the monetized QALY cost of alcohol is about $1 trillion a year ($1.3 trillion total cost). With that level of social harm, the question becomes how much money you can possibly extract from the alcohol market without the market going to organized crime. Annual alcohol sales are a quarter trillion, and maybe you could double or triple prices before drinkers go to drug gangs instead (or quit), so that is $250-$500 billion in possible taxes.

Bad diet causes a loss of about 10M DALYs a year in the USA (source, use the 'plot option'). Again, I assume that tax-induced behavior change and reformulation would eliminate 50-90% of the harm, leaving 1-5M DALYs lost, or $500-2500 billion in potential taxes. Then assume that a quarter to a half of this is eventually lost to gray-market evasion (buying a hunk of pig with cash from a local farmer).

Plug all that into your favorite Monte Carlo software:
The 90% CI for taxes raised on unhealthy food is $250-1000 billion. The 90% CI for all my health-harm estimates, added up, is $800-1700 billion. Add in the carbon tax, and another $100-200 billion from other kinds of externalities, and you get about one to two trillion per year.
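For anyone who wants to check the napkin math, here is a minimal sketch of the simulation in plain Python. The ranges come from the estimates above, but treating each one as an independent uniform distribution is my own simplifying assumption; dedicated Monte Carlo software may use different distributions, so the interval it produces will not exactly match the figures quoted here:

```python
import random

random.seed(0)  # reproducible runs

def simulate_revenue():
    """One draw of total annual externality-tax revenue, in $ billions."""
    # Carbon tax and tariff: roughly fixed at the CBO-based estimate
    carbon = 120

    # Air pollution: 10k-50k deaths remain after 50-90% is engineered out,
    # each taxed at the $10M value of a statistical life
    deaths = random.uniform(10_000, 50_000)
    pollution = deaths * 10e6 / 1e9  # convert dollars to $ billions

    # Alcohol: double or triple the ~$250B in annual sales via taxes
    alcohol = random.uniform(250, 500)

    # Unhealthy food: 1-5M DALYs remain, taxed at $500k each,
    # with a quarter to a half lost to gray-market evasion
    dalys = random.uniform(1e6, 5e6)
    food = dalys * 500_000 / 1e9 * (1 - random.uniform(0.25, 0.5))

    # Other kinds of externalities
    other = random.uniform(100, 200)

    return carbon + pollution + alcohol + food + other

results = sorted(simulate_revenue() for _ in range(100_000))
low, high = results[5_000], results[95_000]  # 90% interval
print(f"90% CI: ${low:.0f}B to ${high:.0f}B per year")
```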

Probably the only way that all this could be politically acceptable is if it funded a UBI, basically making it revenue-neutral by giving all American adults their share of the tax. With 250 million adults, this gets you a $4-8k UBI per person.

Friday, January 10, 2020

Roaring 20s

I am confident about the upcoming decade. I think by the end of it, the current mess will be mostly over and we will be on track for a good sustainable semi-utopian future for humanity. I'll briefly list the reasons below. Even if you don't share my overall optimism, please at least consider these as good and interesting developments to watch out for.

Biotechnology: Things will start to arrive in a big way. By the end of the decade, we will see many new medical treatments, improved industrial processes, efficient carbon sinks, etc. Things will only just be getting started, relative to their potential, but a lot of good things will have already happened and people will understand that more good stuff is coming. 

Plant-based meat: Impossible Foods is shooting for 2022 as the year that high-quality plant-based burger patties become as cheap as the dead-cow version. I'll apply the 'Elon Musk Adjustment' to that goal and say it will probably happen between 2023 and 2027. Assuming they (or someone with similar tech) can scale up production, people will be able to conveniently replace most of their meat consumption, giving us a huge win for health, environment, and ethics.

Scientific Research: We have learned many lessons from the replication crisis. Data openness is now an established thing. By the end of the decade, most research will be freely available, with all source data online in ways that allow not just for replication and error-checking, but also massive meta-analyses that give us highly-powered studies to untangle a lot of hard questions.

Renewable Energy: With continued progress in solar and next-gen nuclear power, we should see grid parity in lots of places by the end of the decade. Between this and plant-based meat, our carbon footprint should start to get a lot smaller.

Space Travel. I am pretty confident that we will have a manned research outpost on Mars by the end of the decade, as well as the beginnings of a space-based economic infrastructure with asteroid mining etc. that will allow humanity to really start expanding beyond our home planet.

Politics: If you take a step back from the hype cycle of people trying to generate clicks by preying on your negative emotions, you will realize that the current mess shows just how resilient the system is. Most of the day-to-day details of governance, like food safety inspections, are ticking along in regular order despite the chaos and dysfunction that grabs the headlines. Just like with all the crap and violence in the 60s and 70s (which was much worse than the last two decades), by the end of the decade people should settle down and realize that we are all in this together and things are mostly okay.

Consumer Tech: Never underestimate the capacity of our civilization to create compelling consumer products. I won't make any predictions about specific things, but by the end of the decade, there will probably be at least one thing that generates value and convenience equivalent to the introduction of smartphones.

Logistics: Just as in the military, in the civilian economy the professionals talk logistics. By the end of the decade, much of the warehousing and delivery infrastructure will be extensively automated, making everything that can be delivered in a box cheaper and more convenient to purchase. This improved supply chain efficiency will have more of an immediate macroeconomic effect than anything (possibly everything) else on the list, significantly lowering consumer goods price inflation.

Basic Income: If you don't need the services of a credentialed professional, or space in a high-rent area, a good life is already incredibly cheap. Not counting rent and health insurance, I pay about $300 a month for a lifestyle that would seem shockingly rich and comfortable to anyone other than a modern professional. The basic essentials of life are only going to get cheaper in real terms. By the end of the decade, there will be a growing realization that wealthy countries can and should give their citizens a basic income of about $5,000 a year (i.e. spending about 10% of their GDP). Living only on this would require you to live in the boonies, would only buy a 'minimal' standard of basic health care (i.e. the stuff you could get 40 years ago, but of better quality), and wouldn't cover any expensive hobbies like drinking, but it would essentially end deprivation and exploitation.

Add all of this up, and we will be well on our way to a Star Trek kind of world, 100 years ahead of schedule (assuming we can skip the Eugenics Wars).

Wednesday, August 28, 2019


Art is a subset of the class of objects that are created by people and have an effect on the mind of other people. It may or may not have many other definitions and features, as people have argued over for millennia, but this is a true thing you can say about it. There are many mind-affecting things not created by people, but they are not art. The selection, curation, or framing of things not created by people might be art, but this is done by people. Furthermore, I believe everyone can agree that something that has no effect on the mind of any observer is not art.

So, no matter what other judgments you can make about art, you can always make judgments about it that you could apply to any other object in the class of things that are created by people and have an effect on the mind of other people.

The most relevant judgment is what effect it has on the mind of people. What are some general features of that effect, and is it a positive or negative effect?

One effect that it can have is to give people pleasure. Not all things that give people pleasure are art, and not all art gives people pleasure, but to the extent that art gives people pleasure, it can be judged by the standards of such pleasure giving objects. In many cases art, or a reproduction of it, gives a great deal of pleasure for a relatively low cost, so in that respect it is quite good.

Another effect of art is that it can teach people things. Not all things that teach people are art, and not all art teaches things, but to the extent that art teaches people things, it can be judged by the standards of things that teach. People can be taught things that are true or false, good or evil. It is legitimate for society to encourage the production of things that teach truth or good, and discourage the production of things that teach falsehood or evil. When dealing with art in this way, it is legitimate to focus on what is being taught and ignore other aspects of it.

A third function of art, perhaps its main function, is to define and reinforce social boundaries between different groups of people. Not all things that define social group boundaries are art, and not all art serves to define a social group boundary, but to the extent that art is being used as a social group boundary, it can be judged in that manner. Because cohesive social groups are valuable to people, anything that helps create such cohesive social groups adds value to society.

However, there are better and worse ways to create and define social groups. The best way is for everyone to admit that their preferences are arbitrary, and their subculture is simply a collection of people united in enjoyment of an arbitrary thing that is neither better nor worse than other people's taste or fashion.

Things become more costly, in the sense of generating negative emotions and status anxiety, when people insist that one social group or subculture should be accorded higher status than another. In general, almost all arguments over what should be defined as art are actually arguments for one particular group of creators or consumers to be labeled as higher status than another group. The word 'art' denotes high status, so people spend a great deal of effort inventing impressive arguments for why the thing associated with their group is art, but the thing associated with somebody else's group is not art.

Things can become very bad when a subculture unites around a definition of art that is either deliberately ugly (meaning that it is designed to not give pleasure to the typical person who views it), or teaches things that are false or evil. This can happen quite often because of countersignaling. In such a case, the broader society has a right to tell the subculture that they are simply wrong, and should be considered low status.

A very important confounding factor when discussing art is that people instinctively enjoy and appreciate a thing more when it is especially difficult to create. The difficulty of creation serves as a costly signal, and whenever a thing serves as a costly signal of quality, because it requires expensive ingredients or a great deal of training to produce, people will instinctively consider it high-status and wish to own it and affiliate their group with it.

However, this difficulty of production should always be counted as a cost rather than a benefit of art. It would be better for society if people got the benefits of art out of something that was much cheaper to produce, and the expense of production is very often nothing more than a costly and wasteful signalling game. People try to attract attention to their thing by paying for really high production values, but that simply makes everybody else's thing look relatively worse, without any net social benefit.

Art may have many other features, and it may have many other effects on human minds. However, most or all these features are either arbitrary, or impossible to measure with current technology and philosophical understanding. The main things that we can measure are how much people enjoy it, what it is teaching people, and what kind of social groups form around it.

Therefore, the ideal of art is something that has beneficial effects in all three of these categories. Good art produces pleasure, it teaches things that are true or good, and it unites people around common affiliation with it, ideally in a way that causes good group dynamics and does not cause the group to think less of outsiders. As with most other good things produced by people, we should find ways to produce it as cheaply as possible and in a great variety, and we should encourage people to enjoy whatever they like without feeling a social need to copy the behaviors of other people.

Wednesday, April 24, 2019

Bullshit Jobs are Organizational Signalling

Sometimes people have a good intuitive understanding that something is very wrong with the world, but they lack the analytical ability to explain why the thing is happening. An important skill, related to intellectual charity, is to learn to listen to these visionary heartfelt cries of pain even when the professed explanation or ideology of the author is clearly very confused.

This is the case with Bullshit Jobs. The author, an anthropologist, correctly identifies and gives voice to a very real and serious problem in our world: the fact that many salaried professionals spend their lives doing things that contribute nothing to social welfare. However, he is unable to understand exactly why this happens, and his various explanations range from laughably wrong (a vast evil conspiracy of rich right-wing people who are manipulating the moral fabric of society) to almost right (employees serve the needs of individual managers rather than the organization).

The correct answer is that organizations (and departments within organizations) face the same pressures to spend resources on costly signals and arms races that biological organisms do. Bullshit jobs emerge from the same game-theory dynamics that generate peacock tails, elk antlers, and other wasteful features in animals. Just like a peacock without a tail will not pass on his genes, a company without a marketing department will not sell its products. Just like an elk without antlers will not pass on his genes, a company without a legal department will not be able to keep its resources.

Sometimes the leaders of organizations know what is going on; they would prefer to not have the expense of a marketing or legal department but know that it is necessary for self-defense. However, organizations also face the same incentives for self-deception that Hanson discusses in Elephant in the Brain. People dislike admitting that they are doing things for selfish reasons, so they invent justifications.

It is rational for the organization to hire people to generate these costly signals, because they need to attract attention and resources, but these signals are only a way to take resources from someone else and do nothing to actually make the world a better place. The result is even more pointless and destructive than the zero-sum games that individual humans play, because millions of human lives are being utterly wasted in the service of Moloch.

This is a very hard problem to solve. People often assume that the problem is due to capitalism and can be solved with central planning, but this is exactly wrong, because departments within government have even stronger incentives to generate and sustain bullshit jobs than market-disciplined organizations.

Ideally there would be some mechanism to identify and heavily tax bullshit jobs as the horrible externality they are. But this is very hard, because from the outside, it is quite difficult to tell the difference between valuable coordination activities and zero-sum signalling. Many managers and planners are extremely necessary and things would cease to function without them. Of course, when you are necessary like this, you have the ability and incentive to extract more resources, so most manager jobs end up as some combination of valuable coordination and value-destroying rent-seeking.

Still, at minimum, a competent society would enact massive taxes on marketing and legal services.

Tuesday, October 3, 2017

Color Sorting

Color was the topic of our last LessWrong meeting. We talked about color theory and physics, eyesight, aesthetics, different perceptions, etc.

To illustrate how people think differently about color, I brought six identical bags of colored Legos for people to sort into groups:

I randomized the number of groups by rolling an eight-sided die and adding four. The roll was 6, for 10 groups. When asked what the purpose of the sorting was, I replied "Teaching a young child about colors."

As I expected, there was a lot of variation in how people sorted the colors:


After we were done, we compared and discussed our sorting. This lasted a while.

Then, a family of Chinese tourists came up to us and asked what we were doing. (We meet in a public place.) We explained the activity, and invited them to sort the colors into groups however they wanted. A middle-aged woman and an older woman started sorting.

The middle-aged woman's sort was roughly similar to ours:

But the older woman, who did not speak English, made radically different choices. She basically sorted them by saturation rather than hue. We were especially fascinated by how she put bricks in different groups that we thought were identically colored, but upon closer inspection had slight differences in color due to age:

The exercise was strong evidence that people see and think about color differently, even when very culturally similar, and that people from different cultures can see things very differently.

Saturday, March 18, 2017

Utilitarianism: We are probably doing the math wrong.

Summary: Almost all utilitarian reasoning about specific interventions and thought experiments is wrong, because it fails to account for the fact that taking a thing away from people causes a utility loss that is significantly greater than the utility gain they would get from acquiring the thing. For any significant permanent change in circumstance, the utility loss from taking something away is four to six times the utility gain from granting it. With a pure utilitarian calculus, intervention is therefore only justified if the gains are several times the losses.

Epistemic status [edited]: Uncertain. I may be exaggerating or overgeneralizing a temporary effect that should only be counted as a second-order term in the calculation in some or most situations. More research into the permanence of these feelings over time is required. It would also be very valuable to do a trolley problem survey with seven instead of five people on the track and see if that changes things. Thanks to everyone who discussed this.
Unexplored. Although this seems obvious now, I did not realize it last week. I have been studying related philosophy issues for over a decade and have never been exposed to any discussion of this point, either supporting or dismissing it. I have specifically looked for evidence that anyone else has made this point, and failed to find any mention of it. However, I am very suspicious of any assumption that I am the first person to realize an important thing. There is a valid outside-view reason that I might be in a position to do so (I have far more knowledge of and experience with cost-benefit analysis and preference valuation than most people who consider these questions, and I just attended a conference of the Society for Benefit Cost Analysis where these issues were explored in presentations) but I should still be skeptical of my reasoning. Feedback is appreciated.

Utilitarian Calculus

Consider the following moral questions:
1) Should you shove a fat man in front of a trolley to prevent the trolley from running over five people who are otherwise doomed?
2) Should you support a public policy that makes health insurance twice as expensive for 10% of the population, while giving equivalent free insurance to a different 20% of the population?
3) If the current social system makes 10% of the population happy (utility 20% above baseline) while oppressing 30% of the population (utility 20% below baseline), should you overthrow the system and institute an egalitarian one?

There are many ways to approach these moral questions, but for all of them, a utilitarian will almost always answer yes, under the assumption that the intervention will increase aggregate utility.

However, this 'utilitarian' answer ignores the robust experimental evidence on the large and persistent differences between willingness to accept (the amount people have to be compensated to accept a loss) and willingness to pay (the amount people would pay for a gain):

People value gains significantly less than they value losses, i.e. the utility increase from obtaining a thing is much less than the utility decrease from losing the same thing. For money, time and private goods (things that are easily traded and substituted or that people have in abundance), people 'only' value losses about 40-60% more than they value gains. But for irreversible, non-tradeable changes in their circumstances, of the kinds involved in most thought experiments and public policy questions, people value losses four to six times more than they value gains. This difference between willingness to pay and willingness to accept is not primarily driven by the declining marginal utility of wealth. It is observed for changes over scales where the relationship between money and utility is approximately linear, and also observed for direct tradeoffs that do not involve money.

Therefore, all three of the interventions above will reduce aggregate utility. The utility loss experienced by the losers will be greater than the utility gain experienced by the winners. A utilitarian should not support them based only on the evidence presented. Other moral reasons must be invoked to justify the policy, or it should be shown that there are relevant side effects that change the utilitarian calculus.
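To make the arithmetic concrete, here is a small sketch applying the revised calculus to intervention #3. The population shares and utility figures come from the thought experiment above; the loss multiplier of five is my choice of a midpoint from the four-to-six range in the WTA/WTP literature, and the function name is just for illustration:

```python
def utility_change(gains, losses, loss_multiplier):
    """Aggregate utility change of an intervention. gains and losses are
    lists of (population_share, utility_magnitude) pairs; losses are
    weighted by the WTA/WTP ratio."""
    gained = sum(share * u for share, u in gains)
    lost = sum(share * u * loss_multiplier for share, u in losses)
    return gained - lost

# Intervention 3: overthrow the system. The oppressed 30% each gain
# 0.2 utility (back to baseline); the privileged 10% each lose 0.2.
naive = utility_change(gains=[(0.30, 0.2)], losses=[(0.10, 0.2)],
                       loss_multiplier=1.0)
revised = utility_change(gains=[(0.30, 0.2)], losses=[(0.10, 0.2)],
                         loss_multiplier=5.0)
print(f"naive (gain = loss):  {naive:+.2f}")    # +0.04, so intervene
print(f"revised (loss x 5):   {revised:+.2f}")  # -0.04, so do not
```

The intervention flips from clearly good to clearly bad once losses are weighted realistically, which is the whole point of the post.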

Policy Implications

1) Utilitarians should not support additional income redistribution unless the marginal utility of wealth for the people being taxed is less than 1/4th to 1/6th the marginal utility of wealth for the people receiving the benefits.
2) Utilitarians should not support coercive taxation to produce public goods unless the value of the public good is at least four to six times its production cost.
3) Utilitarians should not support coercive health and safety regulations unless the monetized benefits are at least four to six times the costs.
4) With the caveat that changing utility functions is dangerous and questionable, teaching people to value losses and gains more equally may cause a large increase in utility.


Many people might object that it is irrational to value losses so much more than gains. This is correct, at least for relatively wealthy people in the modern world. (For people operating closer to subsistence, a loss is likely to kill you while a gain gives you relatively less benefit, so it is rational to be risk-averse.) Being more risk-neutral will encourage you to take chances and make tradeoffs that will dramatically improve your life. Gains and losses that do not cause significant changes in your overall wealth should be valued the same.

Given that most philosophical discussion happens in an abstract rational setting, that utilitarians tend to be people with a more abstract and rational thinking style, and that the literature on the WTA/WTP ratio did not exist 30 years ago and is still new enough that most people have not had time to internalize its findings, it is understandable that all previous utilitarian discussion carried the unquestioned default assumption that a gain and a loss are to be valued the same, the way a rational agent would value them.

However, utilitarianism is about maximizing the utility experienced by actual sentient entities in the real world. Maximizing the utility that would be experienced by imaginary rational risk-neutral actors is doing something that has no connection to reality. Imposing our will on others to maximize an imaginary utility function that we think they should have is insane tyranny.


The utilitarian position, properly understood, is extremely conservative and dramatically favors the status quo, even if the status quo is horribly unfair and a violation of rights. However, if you value rights and fairness for any reason other than their instrumental ability to improve aggregate utility, you are not a utilitarian.

Future generations

When calculating utility for people who have not yet been assigned an endowment, i.e. those behind the veil of ignorance, the traditional utilitarian calculus still applies, because there is no status quo and therefore no gains or losses. Any policy that makes total utility greater and also more equally distributed, such as #3 above, is unambiguously good. The short-term utility loss from implementing the policy may be outweighed by the utility gains for future generations. However, determining this for certain requires making decisions about discounting future utility, and the moral status of people who do not yet exist, which are beyond the scope of this post.

Final Thoughts

For the past several decades, many government agencies have been using improper gain=loss utilitarian calculus to make public policy decisions. Some of the current political upheaval can be traced to the failure of this approach, specifically its failure to adequately measure the utility loss of taking things away from people or imposing burdens on them.

If you are a utilitarian, find these policy conclusions repugnant, and cannot find any problem with my math or my understanding of the relevant literature, then please take a moment to build empathy for people who have always found utilitarian conclusions to be repugnant. Then I recommend examining Parfit's synthesis of rule consequentialism, contractualism, and steelmanned Kantian deontology.

Tuesday, March 14, 2017

The World of the Goblins

Far from being the smartest possible biological species, we are probably better thought of as the stupidest possible biological species capable of starting a technological civilization - a niche we filled because we got there first, not because we are in any sense optimally adapted to it. - Nick Bostrom

Imagine a world inhabited by a species, call them goblins, that is just below the threshold of mental capacity required to start a technological civilization. The average goblin, or tribe of goblins, is just barely too stupid for civilization. Goblins can talk, and argue, and form coalitions, and play politics and signal status, and they can look at the world around them and dream and speculate and make art. They can use technology if someone smarter tells them how, and can sometimes even make simple tools and innovations if properly trained, but they just do not have what it takes to actually start a civilization on their own. Unless they have someone smarter to steal from, their society will inevitably forget important things and regress into stone-age savagery.

However, there is genetic variation among goblins. Sometimes, by random chance, there will be a tribe whose average mental capacity rises above the civilization threshold, for a while, until mean reversion takes them below the threshold again.

What would you observe in your world, if you were a goblin?

You would observe a world filled with the ruins of fallen civilizations. You would see the crumbling remains of great buildings and structures that nobody knows how to build. You might see that these fallen civilizations transformed the land, making roads or canals or even altering entire ecosystems to suit their needs. There would be artifacts from these civilizations, strange items that nobody knows how to make. Sometimes nobody can even guess what they are meant to be used for.

If you were part of a tribe that was clever and curious enough to translate and read texts from these ruins, you might learn their history. You would know that, sometimes, a tribe of goblins would suddenly form a civilization, gain great wealth and power, and conquer and enslave all of the surrounding tribes. But then, over time, that civilization would, for some reason, become less capable. It would coast along, accomplishing little, feeding off the riches of its glory days, until some kind of shock like a natural disaster, resource shortage, or outside invasion would destroy it and leave nothing but ruins. 

If you were smart, you might wonder exactly why these great ancient civilizations were inevitably destroyed by trivial things, at a time when they had far more resources and power than they did when they were overcoming much harder obstacles, but you are probably not smart enough to ask questions like that.

If you were a goblin in the later years of one of these civilizations, what would you observe?

You would observe that your ancestors used long words you can barely understand, and sentences with grammar that you can barely parse. They would speak of concepts that mean little to you. They might be deeply concerned with things that seem bizarre or meaningless.

You would observe that goblins in other tribes outside your civilization can never seem to form or sustain a working civilization on their own, no matter how many resources or tools you give them.

You would observe your civilization slowly decaying. You would see that it takes your people a lot of time and money to do things that were once done swiftly and cheaply. You would observe that a lot of things seem to cost more, or are of worse quality. You would see things falling apart faster than they can be built or repaired.

You might observe different parts of your civilization decaying at different rates. If your civilization happens to have some kind of system that identifies the smarter goblins and collects them in special places, then those special places will function well, and may even advance, but the places that you took the smart goblins from will inevitably regress into barbarism in a generation or two.

Different factions in your civilization would all blame different things for the decay. If you were smart, you would notice that each faction blames the thing that it has always blamed for everything bad, and recommends solutions that would increase the wealth and social status of its members. But you are probably not that smart, so you accept your faction's explanation, and believe that things will be good again as soon as you gain power over the other faction and make them do what you say.