Bayesian analysis of Slate Star Codex survey data.

[Epistemic status: I’m teaching myself Bayesian analysis out of an O’Reilly-esque programming book; I haven’t yet mustered myself to crack the intimidating Andrew Gelman tome on my shelf. I beg you, correct me if I have screwed this up.]

Scott Alexander posted his survey data results several months ago, and recently has been posting some interesting things about how different groups perceive optical illusions.

As part of my quest to finally understand the differences between Bayesian and frequentist analysis, I downloaded his data and poked at it with PyMC, again modeling my analyses on those in chapter 2 of Bayesian Methods for Hackers, by Cameron Davidson-Pilon (the A/B testing example and the Challenger example).

Setup:

import pymc as pm
import numpy as np
import matplotlib.pyplot as plt
import csv

path = "/somewhere/"  # wherever the survey CSV lives

# Read the survey into a header row plus a list of response rows
with open(path + "Survey_CSV.csv") as f:
    reader = csv.reader(f)
    indata = [row for row in reader]
headers = indata[0]
data = indata[1:]

Questions at a glance:

for n, item in enumerate(headers):
  print(n, item)

How I’m guessing Scott coded the responses:

weakdancer = ('I was able to see the dancer as spinning in either direction, or even switch between them at will','I was eventually able to see the dancer as spinning in either direction, or even switch between them at will')
weakmask = ('I can see it as either of these two, or it seems to switch back and forth','A Einstein mask facing away from the viewer, as if it were ready to fit onto your own face')
transf = 'F (transgender m -> f)'
transm = 'M (transgender f -> m)'
trans = (transf,transm)
condition = ('I think I might have this condition, although I have never been formally diagnosed','I have a formal diagnosis of this condition')

Generalized modeling function for a 2×2 analysis: it takes two data columns and two answer sets, then models the responses in the second column as binomial events, with separate uniform priors on the response probability for each of the two groups defined by the first column:

def model(q1, vals1, q2, vals2):
    col1 = [row[q1] for row in data]
    col2 = [row[q2] for row in data]
    # Group sizes: respondents outside vals1 vs. inside vals1 on the first question
    n1_0 = len([v for v in col1 if v not in vals1])
    n1_1 = len([v for v in col1 if v in vals1])
    # "Successes" in each group: respondents whose second answer is in vals2
    n2_0 = len([1 for v1, v2 in zip(col1, col2) if v1 not in vals1 and v2 in vals2])
    n2_1 = len([1 for v1, v2 in zip(col1, col2) if v1 in vals1 and v2 in vals2])
    # Separate uniform priors on each group's response probability
    p0 = pm.Uniform('p0', 0.0, 1.0)
    p1 = pm.Uniform('p1', 0.0, 1.0)
    b0 = pm.Binomial('b0', n=n1_0, p=p0, value=n2_0, observed=True)
    b1 = pm.Binomial('b1', n=n1_1, p=p1, value=n2_1, observed=True)
    m0 = pm.Model([p0, b0])
    m1 = pm.Model([p1, b1])
    mc0 = pm.MCMC(m0)
    mc0.sample(iter=50000, burn=10000)
    mc1 = pm.MCMC(m1)
    mc1.sample(iter=50000, burn=10000)
    # Posterior distribution of the ratio p1/p0
    ratios = mc1.trace('p1')[:] / mc0.trace('p0')[:]
    print(ratios.mean())  # posterior mean of the ratio
    plt.gcf().clear()
    plt.hist(ratios)
    # Fraction of posterior samples whose ratio exceeds each threshold
    table = [(i, (ratios > i).mean()) for i in np.arange(1.0, 3.1, 0.1)]
    print(table)
    return ratios

First, let’s run a test for trans versus cis people looking at the mask illusion:

model(4, trans, 65, weakmask)

If I’m reading this correctly, this tells us the following (a sketch of where each number comes from follows the list):

  • We expect that trans people are 64% more likely than cis people to have a weak response.
  • There is a 99.2% chance that trans people are no less likely to have a weak response than cis people – the closest thing Bayesian analysis has to null hypothesis testing.
  • There is a 67.1% chance that trans people’s responses are at least 1.5 times as likely to be weak as cis people’s.
  • There is an 11.7% chance that trans people’s responses are at least twice as likely to be weak as cis people’s.
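
For concreteness, each of those bullets is just a statistic of the ratio trace that model() now returns; the values in the comments are the numbers quoted above, and a fresh run will wobble slightly:

ratios = model(4, trans, 65, weakmask)
print(ratios.mean())          # ~1.64: expected trans/cis ratio of weak responses
print((ratios > 1.0).mean())  # ~0.992: chance the ratio is at least 1
print((ratios > 1.5).mean())  # ~0.671: chance it is at least 1.5
print((ratios > 2.0).mean())  # ~0.117: chance it is at least 2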

[Figure: weakmask (histogram of the posterior p1/p0 ratios)]
We can repeat this analysis for schizophrenia and autism:

model(68, condition, 65, weakmask)
model(69, condition, 65, weakmask)

…which tells us to expect that schizophrenics are twice as likely to have a weak mask response and that autistic people are 1.4 times as likely to have a weak mask response; in both cases, the probability that these groups are at least a tiny bit more likely to have a weak response is in the high 90s.

Scott’s left/right political scale goes from 1 to 10, so the question of whether political views correlate with response to the mask illusion probably calls for a logistic model. We’ll flip the scale so we can make statements about liberals versus conservatives, because liberals are “the weird ones”:

# Keep rows that actually have a political-scale answer (blanks are coded as a single space)
usedata = [(row[65] in weakmask, int(row[30])) for row in data if row[30] != ' ']
libmask = np.array([row[0] for row in usedata])       # True if weak mask response
liberal = np.array([11 - row[1] for row in usedata])  # flip 1-10 so higher = more liberal

Now I set up and run a logistic model. I could probably tweak this to allow for nonlinear effects (see the sketch after this block for what I mean), but I’m not sure I’m even doing the basic stuff right, so I’ll play it safe for now:

# Priors: PyMC's Normal takes a precision tau, so tau=0.001 means sd is about 32
beta = pm.Normal("beta", 0, 0.001, value=0)
alpha = pm.Normal("alpha", 0, 0.001, value=0)

@pm.deterministic
def p(pol=liberal, alpha=alpha, beta=beta):
    # Logistic link: probability of a weak response given political position
    return 1.0 / (1.0 + np.exp(alpha - beta * pol))

bern = pm.Bernoulli("bern", p, value=libmask, observed=True)
logit_model = pm.Model([bern, beta, alpha])  # renamed so it doesn't shadow model() above
map_ = pm.MAP(logit_model)
map_.fit()
mcmc = pm.MCMC(logit_model)
mcmc.sample(120000, 100000, 2)  # iterations, burn-in, thinning

alphas = mcmc.trace('alpha')[:]
base = 1 / (1 + np.exp(alphas))  # implied weak-response probability at pol = 0
plt.hist(base, range=(0, 0.1), bins=25)

betas = mcmc.trace('beta')[:]
odds = np.exp(betas)  # odds ratio per one-point shift to the left
plt.gcf().clear()
plt.hist(odds)
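
For the record, the nonlinear tweak I have in mind would just add a quadratic term to the link. A minimal, untested sketch (beta2, p_quad, and bern_quad are names I’m making up here):

beta2 = pm.Normal("beta2", 0, 0.001, value=0)

@pm.deterministic
def p_quad(pol=liberal, alpha=alpha, beta=beta, beta2=beta2):
    # Same logistic link, with a squared term to allow curvature
    return 1.0 / (1.0 + np.exp(alpha - beta * pol - beta2 * pol ** 2))

bern_quad = pm.Bernoulli("bern_quad", p_quad, value=libmask, observed=True)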

My concerns from last time remain:

  • Why is it appropriate to assume that the prior for the intercept centers on zero?
  • Where do we get the priors for the standard deviations on the normal distributions? (One parameterization wrinkle is worth noting; see the snippet after this list.)
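
One wrinkle that tripped me up with those priors: PyMC’s Normal is parameterized by precision (tau), not by standard deviation, so the 0.001 above is actually a very diffuse prior:

tau = 0.001              # the precision used for alpha and beta above
sd = 1.0 / np.sqrt(tau)  # tau = 1/sd**2
print(sd)                # ~31.6, i.e. very spread out on the log-odds scale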

Anyway, with these priors, the model tells us to expect that every shift of one point to the left on the political spectrum raises someone’s odds of a weak response to the mask illusion by about 4.6%.
[Figure: polscale (histogram of the posterior odds ratios)]
That’s a small effect for each one-point shift, but a reasonably large effect overall – the difference between “far right” and “far left” would then be roughly as large as the difference between neurotypicals and autistic people.
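
A quick back-of-the-envelope check on that comparison, using the posterior mean odds ratio of roughly 1.046 per point:

per_point = 1.046         # ~posterior mean odds ratio per one-point shift left
span = 9                  # distance from 1 to 10 on the flipped scale
print(per_point ** span)  # ~1.5: far-left vs. far-right odds ratio

That lands in the same ballpark as the 1.4× autism figure above, though only loosely: one is an odds ratio and the other a ratio of probabilities.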

I’m left with a few questions, concerning which I open the floor to my (thus far, mostly non-existent) readers:

  • Does my overall approach to applying Bayesian methods to these problems even make sense? I’m still trying to wrap my head around the differences from the frequentist approach I’m used to.
  • Should I be adjusting for multiple comparisons in some way? I have not made multiple comparisons in my analysis, but it seems like Scott did, and my analysis is based on his.
  • How in dog’s name can I get WordPress to respect white space in the code blocks? Am I stuck writing these posts in raw HTML?