Oliver Sherouse Writes Occasionally on Public Policy and Python Programming

Afternoon Links

12 Jan 2015

Today I’m reading a few papers from NBER:

  • Cognitive Economics:

    “Cognitive Economics is the economics of what is in people’s minds. It is a vibrant area of research (much of it within Behavioral Economics, Labor Economics and the Economics of Education) that brings into play novel types of data—especially novel types of survey data. Such data highlight the importance of heterogeneity across individuals and highlight thorny issues for Welfare Economics. A key theme of Cognitive Economics is finite cognition (often misleadingly called “bounded rationality”), which poses theoretical challenges that call for versatile approaches. Cognitive Economics brings a rich toolbox to the task of understanding a complex world.”

  • Austerity in 2009-2013, ungated version here:

    “The conventional wisdom is (i) that fiscal austerity was the main culprit for the recessions experienced by many countries, especially in Europe, since 2010 and (ii) that this round of fiscal consolidation was much more costly than past ones. The contribution of this paper is a clarification of the first point and, if not a clear rejection, at least it raises doubts on the second.”

    I’m hoping that this paper on austerity will be a little more illuminating than the fly-by analysis I was talking about earlier.

Austerity Arguments are a Mess (Chart Fight!)

12 Jan 2015

Quick chart fight. A while back, Matt Yglesias posted this, saying that “2014 is the year American austerity came to an end”:

[Yglesias’s chart]

Econ blogger Angus argued that Yglesias is trying to re-define austerity because we’re now seeing some decent growth. He posted the nominal graph and quipped, “Either austerity means nominal cuts and we never had any of it, or austerity means cuts relative to trend and we are still savagely in its grasp”:

[Angus’s chart: nominal government spending]

Kevin Drum says that’s bogus, because you have to look at real spending per capita, like so:

[Drum’s chart: real government spending per capita]

So here’s my entry. I’m going to add two economic indicators to that same chart: growth in real GDP per capita, and the prime-age employment-population ratio (which I like better than unemployment):

[My chart: real spending per capita, plus real GDP per capita growth and the prime-age employment-population ratio]

To put growth and the E-P ratio on the same scale, I’ve arbitrarily subtracted 79 percentage points from the E-P ratio, which is about its average over the period in question. It’s the trend, not the level, that matters.
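
If you want to replicate that rescaling, here’s a minimal sketch of the idea in pandas. The file and column names are my invention for illustration, not the ones from my actual chart script:

    import pandas as pd
    import matplotlib.pyplot as plt

    # Hypothetical layout: one row per quarter, with real GDP per capita
    # growth (%) and the prime-age employment-population ratio (%)
    df = pd.read_csv("indicators.csv", index_col="date", parse_dates=True)

    # Shift the E-P ratio down by its period average (about 79%) so it
    # shares a scale with the growth series; only the trend matters
    df["epop_centered"] = df["epop"] - df["epop"].mean()

    df[["gdp_pc_growth", "epop_centered"]].plot()
    plt.axhline(0, color="gray", linewidth=0.5)
    plt.show()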

The point, as I see it, is this: to make an argument about the “end of austerity” and what it means, you have to look at that graph and say that the 2014 part of that chart is meaningfully different from the 2009-2013 part. If you see that, you have better eyes than I do.

This is why people don’t trust economists or economics writers. It’s why they shouldn’t. You can’t tell anything from that graph, and claiming you can means you’re at best overstating your case and at worst lying. It can be a data point [1], but only as part of a larger analysis, and I haven’t seen one that I’m particularly thrilled about or ready to bank on.

  1. Paul Krugman, for what it’s worth, has taken this route; Scott Sumner responds to him and Simon Wren-Lewis here.

Can We Really Say Voter ID Suppressed Turnout?

10 Nov 2014

In a post dramatically entitled Voter Suppression in 2014, Sean McElwee of the think tank Demos argues that early statistics [1] already suggest that meaningful numbers of voters were wrongly disenfranchised. He makes three points: first, that the number of people who cannot vote because they committed a felony was high relative to some victory margins; second, that states with voter ID laws saw suppressed turnout; and third, that states with same-day registration had higher turnouts.

I want to focus on the second point there, because it’s been a hot-button issue lately, and because I’m more skeptical than most people that voter ID makes much of a difference [2]. McElwee tries to demonstrate his point by graphing the mean voter turnout among states in three pools: those which require photo ID, those which require non-photo ID, and those with no ID requirement [3].

[Chart: mean turnout by ID requirement]

Mean turnout was highest in the no-ID states, and higher in the (presumably less restrictive) non-photo ID states than in the photo ID states. Case closed, right?

Not exactly. To use statistics like this to make a real point, you have to remember that you’ve got an incredibly small sample size. What we really want to know is whether the variance between groups is bigger than the variance within groups.
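
One way to make that question precise is a one-way ANOVA, which compares exactly those two variances. A minimal sketch, assuming a hypothetical turnout.csv with one row per state (the layout and column names are mine for illustration, not from the script linked in the footnotes):

    import pandas as pd
    from scipy import stats

    # Hypothetical layout: one row per state, with its turnout and its
    # ID-law group ("photo", "non-photo", or "none")
    df = pd.read_csv("turnout.csv")

    groups = [g["turnout"].to_numpy() for _, g in df.groupby("group")]

    # One-way ANOVA: is the variance between group means large relative
    # to the variance within groups? A big p-value means the group means
    # are statistically indistinguishable.
    f_stat, p_value = stats.f_oneway(*groups)
    print(f"F = {f_stat:.2f}, p = {p_value:.3f}")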

For example, here’s another version of that graph, but I’ve added confidence intervals:

[Chart: mean turnout by ID requirement, with 95% confidence intervals]

The idea here is that the black line marks the range within which, statistically, we can be 95% confident that the group’s true mean turnout falls. You can see that there’s a lot of overlap. A mean turnout of 38 percent, say, wouldn’t be out of line for any group.
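
Those intervals are easy to compute yourself. A sketch using the t distribution, which is the right call at sample sizes this small (same invented turnout.csv layout as above):

    import pandas as pd
    from scipy import stats

    df = pd.read_csv("turnout.csv")

    for name, g in df.groupby("group"):
        mean = g["turnout"].mean()
        sem = stats.sem(g["turnout"])
        # 95% interval for the group's true mean turnout
        lo, hi = stats.t.interval(0.95, len(g) - 1, loc=mean, scale=sem)
        print(f"{name}: mean {mean:.1f}, 95% CI ({lo:.1f}, {hi:.1f})")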

Maybe we’d be better off if we didn’t look at the mean, but rather the median—the state that ranks exactly in the middle of its group in terms of turnout. This takes care of any outliers—observations that aren’t characteristic of the group as a whole:

[Chart: median turnout by ID requirement]

Whoops! Now the suppression story doesn’t fit at all. There’s almost no difference between photo ID states and no-ID states, and non-photo ID states do worse for some reason. Of course, at this point, we start to suspect that it’s not so much a reason as chance, and other unexplained factors that affect turnout.
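
The gap between mean and median is itself a quick diagnostic: when the two diverge within a group, something is dragging the mean around (same invented layout as above):

    import pandas as pd

    df = pd.read_csv("turnout.csv")

    # A big mean-median gap within a group hints at outliers
    print(df.groupby("group")["turnout"].agg(["mean", "median"]))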

Heck, let’s do one more. Here’s a box plot:

[Chart: box plot of turnout by ID requirement]

The line in the middle is the median, the same measure as in the previous graph. The box represents the middle 50 percent of the states in that group (the interquartile range). Finally, the lines (called “whiskers”) extend to the most extreme states that fall within one and a half times that range beyond the box.

Here we see an important point: there are two dots in the no-ID group that are so much higher than the rest that they fall outside even the whiskers. Those dots happen to represent Maine and Wisconsin, which had particularly high turnouts, and which pulled the mean of the no-ID group up quite a bit. Looking across the whole distribution, that data point looks a lot less compelling.
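
Reproducing the box plot, and naming the fliers explicitly, takes only a few lines (file and column names still my invention):

    import pandas as pd
    import matplotlib.pyplot as plt

    df = pd.read_csv("turnout.csv")

    # Box plot by group; points beyond 1.5 times the interquartile
    # range past the box are drawn as individual dots ("fliers")
    df.boxplot(column="turnout", by="group")
    plt.ylabel("Turnout (%)")
    plt.show()

    # List the high fliers in each group by name
    for name, g in df.groupby("group"):
        q1, q3 = g["turnout"].quantile([0.25, 0.75])
        fence = q3 + 1.5 * (q3 - q1)
        print(name, g.loc[g["turnout"] > fence, "state"].tolist())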

This all amounts to a huge statistical nothingburger. As more data comes out, I’m sure more careful analyses will be run on the numbers to see whether we think voter ID laws were important to the election. My bet’s on the null hypothesis, but I might be wrong.

But let’s not excite ourselves about statistically meaningless charts just yet, shall we?

  1. The turnout numbers come from Michael P. McDonald, a professor at the University of Florida, and his website, electproject.org.

  2. I believe some nefarious folks have tried to use voter ID to improve their chances in elections; I’m just skeptical that it worked.

  3. I put the data and script I used to create these charts in a GitHub repository for anyone who’s interested.

The Big Problem With Stata

29 Oct 2014

I use Python for almost all my data work, but both in my workplace and in my field more generally, Stata dominates. People use Stata for a reason [1]: it provides a far wider range of advanced statistical tools than you can find with Python (at least so far). But I hate working in it.

I’ve always found it hard to explain to others just why I hate it so much. You can generally get your problem solved, the help files aren’t terrible, there’s lots of Google-able help online [2], and you can write functions if you want to learn how. And while I find lots of little things annoying (the way you get variable values, for example, or the terrible do-file editor), the big problem was the one other people didn’t understand.

Today, however, I was re-reading some pages about the Unix Philosophy when I saw something that hit the nail on the head. It’s Rob Pike’s Rule 5:

Rule 5. Data dominates. If you’ve chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming.

Stata only has one data structure: the dataset. A dataset is a list of columns of uniform length. You can only have one dataset open at a time.

This is the right data structure for performing the actual analysis of data—say, a regression—and the wrong data structure for literally everything else. The problem is, 90 percent of doing data work is cleaning, aligning, adjusting, aggregating, disaggregating, and generally mucking around with your source data, because source data always comes from people who hate you. And because the data structure is wrong, you’re forced to use algorithms that look like they come from an H.P. Lovecraft story.
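
For contrast, here’s the kind of mucking around that’s trivial in pandas precisely because you can hold more than one table in memory at a time. The files and columns are invented for illustration:

    import pandas as pd

    # Two source tables open at once -- the thing Stata's one-dataset
    # model forbids
    spending = pd.read_csv("spending.csv")      # state, year, outlays
    population = pd.read_csv("population.csv")  # state, year, pop

    # Align, adjust, aggregate: the unglamorous 90 percent of the job
    merged = spending.merge(population, on=["state", "year"])
    merged["outlays_per_capita"] = merged["outlays"] / merged["pop"]
    print(merged.groupby("year")["outlays_per_capita"].mean())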

Never having seen anything better, most Stata users seem to be resigned to doing things like creating an entire column to store a single number and writing impenetrable loops for simple tasks. Or they use sensible tools to create their datasets (increasingly Python, or even something like Excel) and then use Stata just for the analysis.

The latter is my approach when I can’t avoid Stata entirely. But I’m really looking forward to the day when I can avoid the fundamentally flawed design of Stata altogether.

  1. In my graduate program, we started learning econometrics with a different statistical program, called SAS. SAS is…SAS is rough.

  2. I’m looking at you, R.

Progressives Need Amazon to be a Problem

20 Oct 2014

A few weeks ago, Franklin Foer wrote an article at The New Republic arguing that Amazon is now a monopoly and therefore should be broken up. The difference between Amazon and what we used to think of as monopolies, he says, is that Amazon squeezes its producers, not its customers, and consumers are complicit in the squeezing, which is just kind of assumed to be a bad thing.

Foer didn’t offer very specific recommendations, but he did point to, say, AT&T, which was broken up under antitrust law in a case that began in the 1970s, as a good example.

“That’s silly,” I thought when I first read the piece, and I didn’t expect to hear much more about it.

Today, however, Paul Krugman followed up with an op-ed that correctly identified Amazon’s relationship to its producers as a monopsony, not a monopoly [1], and argued that it is totally not ok, guys.

Krugman’s argument zeroes in on Amazon’s fight with publisher Hachette. Hachette won’t agree to the revenue sharing that Amazon wants, so Amazon has disadvantaged its books. [2]

Like Foer, Krugman calls to mind the old progressive “victories” like the breakup of Standard Oil, saying, “The robber baron era ended when we as a nation decided that some business tactics were out of line. And the question is whether we want to go back on that decision.”

I think that line explains why suddenly we’re all supposed to be up in arms about Amazon. It’s certainly not out of deep concern for book publishers. Everyone hates book publishers, who squeeze authors just as hard as Amazon squeezes publishers (and who, interestingly, squeeze authors harder than Amazon does, at least at present).

In fact, in a sane hour, Krugman et al. would probably have no trouble agreeing that what we’re really seeing here is publishers losing value because what they do is not nearly as valuable when you don’t need to physically print all your books. Certainly they would agree that, if the market were well and truly competitive, none of the publishers would be making money anyway because profits in a competitive market go to zero.

But Amazon is a BIG BUSINESS with MARKET POWER, and BIG BUSINESSES with MARKET POWER are bad and exploitative in the progressive view of the world. The breakup of Standard Oil is a part of the progressive identity the same way that, say, the Reagan tax cuts are part of the conservative identity.

If Amazon isn’t actually hurting real people, then maybe BIG BUSINESSES with MARKET POWER aren’t always bad. Maybe the breakup of Standard Oil wasn’t all that huge a victory for real people after all. So it’s important to the progressive view of the world that Amazon be perceived as hurting people. [3]

Now, there’s nothing wrong with having general rules for policy, like “monopoly is bad, let’s avoid that” or “let’s not try anything for the first time at the national level.” They’re especially good when they’ve been learned over time. But the hyper-dynamic, technology-driven economy, where it has become harder and harder to preserve market power, has presented a powerful challenge to these old progressive beliefs, and those of us not wed to them should demand that they prove themselves again.

  1. A monopoly is when you are the only one selling; a monopsony is when you are the only one buying.

  2. Bizarrely, Krugman also veers into conspiracy-theory territory when he argues that Amazon wants you to read Paul Ryan’s book but not a book about the Koch Brothers, because the shipping times differ. As he puts it, “Uh-huh.”

  3. A conservative analogy might be the insistence that the Bush tax cuts paid for themselves when they probably didn’t, because acknowledging that might undermine the popular understanding of Reaganomics.