Before I started working in science, I was a bit naïve. In my imagination, science took place in a beautiful, purified paradise. Researchers didn't worry about money. They precisely executed carefully designed experiments, obtained clear and distinct data, and, most of all, built theories only after the data were in.
In reality, science is done by people, in the real world. We want personal advancement. We are under financial and deadline pressure. Labs are messy and experimental set-ups are usually haphazard. Data are often ambiguous. Research is tedious and time-consuming. In the face of all this, it's nice to take some uncertainty out of the picture. So, more often than not, we choose experiments where we know the outcome ahead of time. Most work is just applying a well-known phenomenon to a new situation -- while it is technically "new research," the results are very much expected.
Except when they come out "wrong." A great article from WIRED describes something I've seen several times, working in different labs. We take our expectations about the results too far and begin committing bad science -- pushing statistics, denying results, dropping projects. Shouldn't theories be designed to fit data, not the other way around? I suspected this might be a problem, but I was shocked by how common it was to ignore "bad data," even when it could have been instructive. If the result is repeatable, and nothing is wrong with our methods, then what can we do?