The International Journal of Indexing

This just needs to be re-posted [from Kottke]:

[F]or the Society of Indexers, book indices are a topic that holds endless fascination. And I do mean endless.

The Prime Minister of England wrote to the Society of Indexers at the society’s founding back in freaking 1958.

“I can scarcely conceal from you the fact that I am at present somewhat occupied with other matters, so that I cannot say all that comes into my mind and memory on the subject of indexing.” 

One of the longest-running features of the society’s publication, The Indexer, is its reviews of indices, which are snippets culled from book reviews that pertain to the book’s index… They also regularly publish articles that meditate on what it means to be an index, defend indexing, and look at the history of indexing societies.

These guys should definitely be invited to the World Congress on Referencing Styles.

Proposal for a World Congress on Referencing Styles

I have been busy over the last few days correcting proofs for two forthcoming articles. One of the journals accepts neither footnotes nor endnotes, so I had to find a place in the text for the more than 20 footnotes I had. As usual, most of these footnotes result directly from the review process, so getting rid of them is not an option even if many are of marginal significance. The second journal accepts only footnotes – no in-text referencing at all – so I had to rework all the referencing into footnotes. Both journals demanded that I provide missing places of publication for books and missing page numbers for articles. Ah, the joys of academic work!

But seriously… How is it possible that a researcher working in the twenty-first century still has to spend his/her time changing commas into semicolons and abbreviating author names to conform to the style of a particular journal? I just don’t get it. I am all for referencing and beautifully formatted bibliographies, but can’t we all agree on a single style? Does it really matter whether the year of publication is put in brackets or not? Who cares if the first name of the author follows the family name or the other way round? Do we really need to know the place of publication of a book? Where do you actually look for this information? Is it Thousand Oaks, London, or New Delhi? All three appear on the back of a random SAGE book I picked from the shelf… Who would ever need to know whether it was Thousand Oaks or London in the first place? Maybe libraries, but they certainly don’t get their data from my references. Obviously, the current referencing system is a relic from very different and distant times, when knowing the place of publication was necessary to get access to the book. Now, collecting and providing this information is a waste of time and space.

And yes, I have heard of Endnote and BibTeX, and I do use reference management software. But most journals still don’t make their required styles available for import into these programs. So the publisher doesn’t find it necessary to hire somebody for a few hours to prepare an official Endnote style sheet for the journal, yet it demands that every author spend days reworking their references to conform to its rules?!
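This, incidentally, is exactly the problem these tools were designed to solve: they separate the content of a reference from its presentation, so the same entry can be rendered in any journal’s style – provided the journal bothers to supply a style file. A minimal sketch of a BibTeX entry (details invented for illustration):

```bibtex
@article{doe2010,
  author  = {Doe, Jane},
  title   = {An Example Article},
  journal = {Journal of Examples},
  year    = {2010},
  volume  = {5},
  pages   = {1--20}
}
```

Whether the year ends up in brackets, whether the journal name is italicized, and how the author’s name is ordered is then decided by the style file the journal chooses, not by the author retyping commas.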

And why are there different referencing styles anyway? Can you imagine the discussions that journal editors and publishers have before they settle on a particular referencing style?

– Herr Professor, I must insist that we require journal names to be in italics!
– That’s the most ridiculous thing I have ever heard – everybody knows that journal names are supposed to be in bold, not in italics!
– But gentlemen, research by our esteemed colleagues in psychology has shown that journal names put in a regular font and encircled by commas are perceived as 3% more reliable than others.
– Nonsense! I demand that journal names are underlined and every second one in the list should be abbreviated as well.

And so on and so forth… To remedy the situation, I boldly propose a World Congress on Referencing Styles. All the academic disciplines and publishers will send delegates to resolve this perennial problem once and for all. There will be panels like ‘Page Numbers: Preceded by a Comma, a Colon, or a Dash?’ and seminars on topics like ‘Recent Trends in Abbreviating Author Names’. No doubt several months of deliberation will be needed, but eventually the two main ‘Chicago’ and ‘Harvard’ parties will reach a compromise, which will be endorsed by the United Nations amid the ovations of world leaders. The academic universe will never be the same again!

Until that day, happy referencing to you all!

Review the reviews

Frank Häge alerts me to a new website which gives you the chance to review the reviews of your journal submissions:

On this site academic social science researchers have the opportunity to comment on the reviews they have received, and the process of decision-making about reviews, affecting articles submitted for publication, book proposals, and funding applications.

So far there seems to be only one submission (by the site’s author), but I can see the potential. Adding a simple scoring system, so that you can rate your experience with particular journals, might work even better. The danger, of course, is that the website becomes just another channel for rejected authors to vent their frustration.

In my opinion, making the reviews public (perhaps after the publication of the article) is the way to go in order to increase the accountability of the review system.

Writing with the rear-view mirror

Social science research is supposed to work like this:
1) You want to explain a certain case or a class of phenomena;
2) You develop a theory and derive a set of hypotheses;
3) You test the hypotheses with data;
4) You conclude about the plausibility of the theory;
5) You write a paper with a structure (research question, theory, empirical analysis, conclusions) that mirrors the steps above.

But in practice, social science research often works like this:
1) You want to explain a certain case or a class of phenomena;
2) You test a number of hypotheses with data;
3) You pick the hypotheses that matched the data best and combine them in a theory;
4) You conclude that this theory is plausible and relevant;
5) You write a paper with a structure (research question, theory, empirical analysis, conclusions) that does not reflect the steps above.

In short, an inductive quest for a plausible explanation is masked and reported as deductive theory-testing. This fallacy is both well-known and rather common (at least in the fields of political science and public administration). And, in my experience, it turns out to be tacitly supported by the policies of some journals and reviewers.

For one of my previous research projects, I studied the relationship between public support and policy output in the EU. Since the state of the economy can influence both, I included levels of unemployment as a potential omitted variable in the empirical analysis. It turned out that lagged unemployment is positively related to the volume of policy output. In the paper, I mentioned this result in passing but didn’t really discuss it at length because 1) the original relationship between public support and policy output was not affected, and 2) although highly statistically significant, the result was quite puzzling.
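To make the setup concrete, here is a minimal sketch of the kind of check described above – regressing policy output on public support while controlling for unemployment lagged by four years. The data are synthetic and the variable names hypothetical; this is an illustration of the technique, not the original analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 31  # e.g. 31 years of annual observations

support = rng.normal(50, 5, n)        # public support for the EU (hypothetical)
unemployment = rng.normal(8, 2, n)    # unemployment rate (hypothetical)

# Lag unemployment by four years: observation i gets the value from year i - 4.
unemp_lag4 = np.roll(unemployment, 4)
# Drop the first four years, for which no lagged value exists.
support, unemp_lag4 = support[4:], unemp_lag4[4:]

# Simulate policy output that depends on support and on the 4-year unemployment lag.
output = 2 + 0.5 * support + 0.8 * unemp_lag4 + rng.normal(0, 1, n - 4)

# OLS: output ~ const + support + unemp_lag4
X = np.column_stack([np.ones(n - 4), support, unemp_lag4])
coef, *_ = np.linalg.lstsq(X, output, rcond=None)
print(coef)  # [intercept, support coefficient, lagged-unemployment coefficient]
```

Because the lag is included as a control, a nonzero coefficient on it is exactly the kind of incidental finding discussed below: robust, but not something the design was built to explain.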

When I submitted the paper to a leading political science journal, a large part of the reviewers’ critiques focused on the fact that the paper offered no explanation for the link between unemployment and policy output. But why should it? I did not have a good explanation for why these variables should be related (with a precisely four-year lag) when I did the empirical analysis, so why pretend? Of course, I suspected unemployment was a confounding variable for the original relationship I wanted to study, so I took the pains of collecting the data and running the tests, but that certainly doesn’t count as an explanation for the observed statistical relationship between unemployment and policy output. The point is, it would have been entirely possible to write the paper as if I had had strong ex ante theoretical reasons to expect that rising unemployment increases the policy output of the EU, and that the empirical test supports (or, more precisely, does not reject) this hypothesis. That would certainly have greased the wheels of the review process, and it would only have taken moving a few paragraphs from the concluding section to the theory part of the paper. So, if your data has a surprising story to tell, make sure it looks like you anticipated it all along – you even had a theory that predicted it! This is what I call ‘writing with the rear-view mirror’.

Why is it a problem? After all, an empirical association is an empirical association, whether or not you theorized about it beforehand. So where is the harm? As I see it, by pretending to have theoretically anticipated an empirical association, you grant it undue credence. Not only is the data consistent with a link between the two variables, but there appear to be strong theoretical grounds to believe the link should be there. A surprising statistical association, however robust, is just that – a surprising statistical association that possibly deserves speculation, exploration, and further research. A robust statistical association ‘predicted’ by a previously developed theory, on the other hand, is far more – it is a claim that we understand how the world works.

As long as journals and reviewers act as if proper science never deviates from the hypothetico-deductive canon, writers will pretend that they follow it. While openly descriptive and exploratory research is frowned upon, sham theory-testing will prevail.

Eventually, my paper on the links between public support, unemployment and policy output in the EU got accepted (in a different journal). Surprisingly, given the bumpy review process, it has just been selected as the best article published in that journal in 2011. Needless to say, an explanation of why unemployment might be related to EU policy output is still wanting.