Tuesday, January 7, 2014

Testing the Bechdel Test

So, recently this article came out showing that of the top 50 movies of 2013, those that passed the Bechdel Test made more money overall at the US Box Office than those that didn't. For those not in the know, the Bechdel Test asks whether a movie has two or more named women in it who have a conversation about something other than a man. The test seems simple enough to pass, but surprisingly, quite a lot of movies don't! Of the 47 top movies that were actually tested, only 24 passed (and at least* seven of those were a bit dubious). Gravity was understandably excluded because it didn't really have more than two named characters**, and apparently no-one has bothered to test the remaining two.

The article comes with this nifty little infographic:

[Infographic: total 2013 US box-office gross for films that pass the Bechdel Test vs those that fail]

I've seen a couple of complaints on the web from people saying that this isn't enough proof - the somewhat ingenuous reasoning I saw was that the infographic shows totals rather than averages, so it can't prove that the average Bechdel-passing film performs better. But since there are barely more passes (24) than fails (23), the difference in group sizes is nowhere near enough to account for the almost 60% difference in total gross. And the averages can quickly be calculated from the infographic above: the average passing film made $176m and the average failing film made $116m - still a very substantial $60m difference!
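
To make that arithmetic explicit, here's the back-of-the-envelope version in Python. The infographic's exact totals aren't quoted in the text, so the figures below are reconstructed from the stated averages and counts, and should be treated as approximate:

    # Totals ($m) implied by the averages quoted above - approximate.
    pass_total, n_pass = 24 * 176, 24   # ~$4,224m across 24 passing films
    fail_total, n_fail = 23 * 116, 23   # ~$2,668m across 23 failing films

    print(pass_total / n_pass)          # 176.0 -> average passing film ($m)
    print(fail_total / n_fail)          # 116.0 -> average failing film ($m)
    print(pass_total / fail_total - 1)  # ~0.58 -> the "almost 60%" gap in totals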

A more reasonable criticism is that things may just have happened this way by chance. Maybe this year a handful of big films happened to land on the passing side, and if they had failed there'd be no appreciable difference? Well, we can test that too using the information in the infographic. All we need to do is run what's called a randomisation test: we randomly allocate the 50 movies in this list to the "pass", "fail" and "excluded" categories in the same numbers as in the real case (so 24 passes, 23 fails, 3 excluded). We can use a random number generator to do this, or if you're playing along at home, put pieces of paper in a hat, whatever. We repeat this process a large number of times (I did it 10 million times) and see how often chance alone produces a gap of $60m or more between the passing and failing films.
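
If you'd rather let a computer shuffle the hat, here's a minimal sketch of the procedure in Python with NumPy. The individual per-film grosses aren't reproduced in this post, so the demo_grosses array below is purely synthetic stand-in data - substitute the real 50 figures and the function should land near the 0.71% quoted below:

    import numpy as np

    def randomisation_test(grosses, n_pass=24, n_fail=23,
                           observed_diff=60.0, n_iter=100_000, seed=42):
        """Shuffle the pass/fail/excluded labels over the films and count
        how often chance alone produces a pass-minus-fail gap in mean gross
        at least as large as the observed one (a one-sided permutation
        p-value)."""
        grosses = np.asarray(grosses, dtype=float)
        rng = np.random.default_rng(seed)
        hits = 0
        for _ in range(n_iter):
            shuffled = rng.permutation(grosses)
            pass_mean = shuffled[:n_pass].mean()                 # random "passes"
            fail_mean = shuffled[n_pass:n_pass + n_fail].mean()  # random "fails"
            # whatever is left over plays the role of the 3 excluded films
            if pass_mean - fail_mean >= observed_diff:
                hits += 1
        return hits / n_iter

    # Stand-in data only: 50 synthetic grosses (in $m) with a roughly
    # blockbuster-like spread, NOT the real 2013 figures.
    demo_grosses = np.random.default_rng(7).lognormal(mean=4.9, sigma=0.5, size=50)
    print(randomisation_test(demo_grosses))

Note that the test shuffles the labels while keeping the grosses fixed, which is exactly the pieces-of-paper-in-a-hat procedure described above; 100,000 iterations is already plenty to pin down a probability in the region of 0.7%.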


It turns out that when you put your pieces of paper in a hat and run your own test, you'll only beat the actual difference 0.71% of the time, or about 1 in 140 times. This is pretty good evidence that it's not a fluke, and that the Bechdel Test really was linked to movies' bottom lines this past year. One thing we can't say based on this is whether the effect is direct - i.e. that people consciously or subconsciously chose to watch passing films over failing ones. It could be that some indirect, confounding effect is causing the phenomenon. For example, maybe directors who write films that pass the test tend to be better filmmakers in other ways that make people want to watch their films more? Either way, a trend towards more women in substantial roles in films can be no bad thing! (Though it's worth mentioning that passing the Bechdel Test by no means guarantees a "substantial role", and even failing movies can have their strong points - see this link.)
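
For the record, the "1 in 140" is just the reciprocal of that percentage:

    p = 0.0071    # fraction of random shuffles beating the real $60m gap
    print(1 / p)  # ~140.8, i.e. roughly 1 in 140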


* Having watched Man of Steel, I'd argue that it was pretty dubious too - I think the only conversations between two women that weren't about a man were one-sided one-liners (hardly conversations)... In any case, any feminist points it may have gained were swiftly taken away in my book by the female US Air Force Captain being portrayed mostly as a ditz rather than as the dedicated leader of people her rank would require. More here.
** So I'm told. I haven't watched it yet.