I attended another UW-Madison seminar, this one on "EL Inference for Partially Identified Models: Large Deviations Optimality and Bootstrap Validity". I have to admit, I was hanging on for dear life, understanding maybe 25% of what was going on. But I'm an applied researcher; as long as I can implement it in Stata, who cares? Only kidding...mostly.
I have to admit, I find the issues of partial identification very interesting. I just read a piece by Charles Manski, Identification Problems from the Social Sciences and Everyday Life. I have also read part of his book, Identification Problems in the Social Sciences, and, while deeply interested, I'm also deeply depressed. I'm pretty sure we don't know anything about anything. Well, maybe that is overly pessimistic, but his fundamental point (pun intended) is that point identification is essentially impossible except under absurdly tight assumptions; we are better off abandoning point identification and instead merely trying to narrow the bounds within which a parameter must lie.
I'm pretty sure this puts him in the same camp as Heckman with regard to the ever-popular explosion in the use of IVs. As Manski points out, in many cases we simply replace bad assumptions with weak instruments, yielding no real improvement in the quality of the results we generate.
Think about how big this statement is. If one identification problem arises from missing data, something EVERY survey suffers from, then the usual solution of ignoring it by treating it as missing at random is the first egregious mistake in the process. And that says nothing about any of the other identification issues that inevitably arise.
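To make the contrast concrete, here's a minimal sketch (my own illustration, using simulated data, not anything from the seminar or Manski's text) of worst-case bounds for a binary survey outcome. Instead of assuming the nonrespondents look like the respondents (missing at random), we admit that each missing answer could be anything from 0 to 1, which turns the point estimate into an interval:

```python
import numpy as np

# Hypothetical survey: binary outcome y in {0, 1}, with some nonresponse.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=1000).astype(float)
y[rng.random(1000) < 0.3] = np.nan  # roughly 30% of respondents don't answer

observed = y[~np.isnan(y)]
p_obs = observed.size / y.size   # share who responded
p_miss = 1.0 - p_obs             # share missing

# Decompose: E[y] = E[y | observed] * P(obs) + E[y | missing] * P(miss).
# Without assumptions, E[y | missing] is only known to lie in [0, 1],
# so E[y] is only partially identified:
lower = observed.mean() * p_obs + 0.0 * p_miss
upper = observed.mean() * p_obs + 1.0 * p_miss

print(f"Missing-at-random point estimate: {observed.mean():.3f}")
print(f"Worst-case bounds on E[y]:        [{lower:.3f}, {upper:.3f}]")
```

Note that the width of the interval equals the nonresponse rate: the more missing data, the less the data alone can tell us, which is exactly the honest (and depressing) accounting Manski argues for.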
Now I'm depressed again.