Stephen Few’s latest newsletter, “Information Visualization Research as Pseudo-Science”, is a critique of the academic process in visualisation research. In it, he savages one paper in particular: “Beyond Memorability: Visualization Recognition and Recall”. He uses it as an example of what he sees as a widespread problem in the field.
I agree there are problems in this paper, and I agree with his suggestions for fixing them.
However, I think it’s unfair to say this is a problem with visualisation research: it’s a problem with all research. In all fields, there are great studies and there are bad studies.
In this post, I’ll explain my own thoughts on the flaws of the paper, then the areas where I think Stephen is being unfair.
1. Stop publishing 2-column academic papers online!
Why are academic papers STILL written in two columns? This is ridiculous in a time when most consumption is on screen. To read a 2-column PDF on my phone or tablet I need to do ridiculous down-up-right-down-left scrolling to follow the text. Come on academia: design for mobile!
2. Why are they measuring memorability?
I agreed with Stephen on the key problem: why are they measuring memorability? Isn’t it more important to understand the message of a visualisation?
3. Hang on, Steve! Problems with experimental technique are not unique to visualisation research
Stephen goes to town dismantling the study’s approach. For example, he criticises the small sample size and much of its methodology. I am less expert than Stephen in this area, but I find myself agreeing with most of his points.
But where I differ is how he damns visualisation research as if the rest of research doesn’t have the same problems.
Let’s look at some:
i. Statistical unreliability
There is no shortage of academic papers with statistical problems caused by small samples. Here’s one on fish oil, dismantled by Ben Goldacre. Incidentally, the study he refers to also used 33 subjects.
Goldacre also outlines a statistical problem so pervasive that half of all neuroscience studies are statistically wrong.
Conclusion? Statistical problems are not unique to visualisation research.
ii. Methodological misdirection
How many of the 53 landmark studies in cancer research had results that could be replicated? Just 6.
Conclusion? Methodological problems exist in all science.
iii. Logical fallacies
Logical fallacies are hardly unique to visualisation research. For example, this list of the top 20 logical fallacies shows how they crop up throughout science.
Part of this critique is surely just part of scientific rigour?
To conclude: I acknowledge that I’m not an academic and don’t read many academic papers, so my view is naive.
Part of me thinks that much of this critique is simply scientific research working as it should: researchers publish papers, the world responds, positively and negatively, and future research improves.
I assume Stephen’s frustration stems from the fact that many of these problems are perpetual and should have been fixed before the study started. I can’t disagree with that. But I don’t think the paper is “fundamentally flawed”, as Stephen describes it. Maybe the memorability of a visualisation is important? If so, this paper is a first step in the iterative, slow advance of academic research. At the very least, it makes us consider what it is important to remember from looking at a visualisation. Having read it critically, I have considered that question and formed an opinion. That’s of value, surely?
I found it very interesting to sit and really read an academic paper in detail. I don’t do it often, and I respect people who can wade through the dense formulaic wording to get to the meaning.
[Updated 5pm 3 Dec to expand my summary]