Monday, October 11, 2010

Psychologists and Economists

We are very different types of social scientists. Seth Roberts does a good job of discussing how our approaches to data differ:
5. Psychologists rarely use observational data at all. To get them to appreciate sophisticated analysis of observational data is like getting someone who has never drunk any wine to appreciate the difference between a $20 wine and a $40 wine.


Clarissa said...

Speaking for myself and what I've observed of my colleagues, many of us are indeed bad at math. But I wouldn't agree with Seth's observation about psychologists' use of observational data. Very little experimentation is done without gathering observational data, so I don't understand how they can be separate, as he suggests. Ecobehavioral assessment is just one example of a way to gather very complex observational data. Maybe because of my poor math skills, however, I don't quite understand what is meant by observational data... Maybe an operational definition from Seth would have helped...

Taggert said...

I think Seth's use of the term "observational data" is a bit misleading. Or maybe a better way to say it is that he has a more specific use in mind. I think he means the following: data used to answer a causal question that was not gathered with that cause in mind. The kind of data a psychologist - and for the purposes of this discussion let's say a school psychologist - might gather really is observational in the sense that you want to collect a baseline. Then you devise an intervention, then collect more data. This is experimental in Seth's language. Economists rarely do that, although it is more frequent than it used to be.

Instead, we have data that was often collected for another purpose, and we try to tease out the causality using math/stats tricks. We do not have the opportunity to cleanly identify the intervention by designing the data collection; it's already collected. We need to identify the causal effect another way: by finding a "natural" experiment in the data. Say, for example, a school system is looking at the use of RTI, and we have data on some school-wide outcomes we care about. The economist looks for some discontinuity - or some accidental randomness - that can help us test the effects of RTI. Maybe RTI was only used in classes with more than 20 students. That would provide a nice comparison, since assignment to the intervention is unrelated to the expected outcome. Notice the choice of 20 was totally arbitrary and not set up to test RTI (then this would really be experimental data); rather, it was a silly policy that can be exploited statistically.
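The class-size example above is essentially what economists call a regression discontinuity design. Here is a minimal sketch in Python of that idea; all the numbers (the cutoff of 20, the size of the RTI effect, the outcomes themselves) are simulated for illustration, not real school data. Fit a line to outcomes on each side of the cutoff, and the jump between the two fitted lines at the cutoff estimates the effect of the intervention:

```python
import random

random.seed(0)

CUTOFF = 20        # classes with more than 20 students got RTI (the arbitrary policy)
TRUE_EFFECT = 5.0  # made-up effect of RTI on the outcome, for the simulation

def simulate_class(size):
    """Outcome falls smoothly with class size, plus a jump at the cutoff."""
    baseline = 70.0 - 0.4 * size
    rti = TRUE_EFFECT if size > CUTOFF else 0.0
    return baseline + rti + random.gauss(0, 1.5)

# Simulated school data: 30 classes at each size from 10 to 30 students
data = [(s, simulate_class(s)) for s in range(10, 31) for _ in range(30)]

def fit_line(points):
    """Ordinary least squares for y = a + b*x, done by hand."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    b = (sum((x - mx) * (y - my) for x, y in points)
         / sum((x - mx) ** 2 for x, _ in points))
    return my - b * mx, b  # intercept, slope

below = [(s, y) for s, y in data if s <= CUTOFF]
above = [(s, y) for s, y in data if s > CUTOFF]
a0, b0 = fit_line(below)
a1, b1 = fit_line(above)

# The discontinuity: jump between the two fitted lines at the cutoff
estimate = (a1 + b1 * CUTOFF) - (a0 + b0 * CUTOFF)
print(round(estimate, 2))  # should land near TRUE_EFFECT
```

The point of fitting separate lines, rather than just comparing average outcomes above and below 20, is that outcomes also trend smoothly with class size; the fitted slopes absorb that trend, so the jump at the cutoff isolates the intervention.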
What I thought was interesting is how we - economists and psychologists - try to figure out similar things, but habit has us reaching for our own disciplines' tools.