The problem is that progress works by disruption, but Western academia still works by creating 'prestigious' journals and 'well-established' 'peers' who snub 'predatory journals' from 'emerging countries without a mature research culture'. Western science publishing is exactly the model which produced stagnation and failure in other areas of life.
It is difficult nowadays to find important, good papers amid the inflation of journals and papers and the many people who want to game the system. In some disciplines one would have to take into account only publications in prestigious, well-established journals. But even there, cheating cannot be ruled out: I am aware of at least two articles in which a trivial error (an order of magnitude) was found, yet publication was accepted.
Promotion committees are usually formed from several disciplines, and that presents a problem. At my university I proposed that candidates for promotion should list what they considered their 10 best papers and provide each member of the committee with copies of all 10. An awful lot of paper, perhaps, but this is a serious undertaking. Scanning the papers, and asking for other opinions if necessary, would give committee members a clearer idea of a candidate's scientific worth than simply reading the candidate's bibliography. It is well known that the titles of papers are often chosen to sound more impressive than the papers actually are.
The academic journal, especially in print and for profit, is a relic. Scientific research ought to be on the internet and open source. The community of interest ought to provide open peer review, and the authors open replies. Authors who will not share their data ought to be excluded. The process could be managed by the major society in each field, e.g. the American Chemical Society.
Some journals obviously do a less-than-stellar job of peer review.
For example, Crimson Publishers could probably have read through the second paragraph of the introduction to the paper 'Testing Inter-hemispheric Social Priming Theory in a Sample of Professional Politicians-A Brief Report', where they might have found this odd passage:
'. . . social priming suggests that one's position on the left-right political dimension might be embodied. In short, one might expect to find that the hand one wipes one's bottom with is predictable by one's political position. This prediction is complicated, however, by inter-hemispheric cross-talk. Specifically, the left-wing political affiliate might wipe the bottom with the right hand, and vice versa.'
This paper was submitted to expose the fraudulent nature of this journal's peer review. It's worth reading for a good laugh. It's hard to imagine anyone reading a whole paragraph from anywhere in the "paper" without realizing it was a gag.
I noticed the sample used in the paper you cite, Nick, didn't include a single politician who possessed an automatic bottom-cleansing toilet. My understanding is that there were such devices on sale in Japan a couple of decades ago. I hypothesize that use of such toilets would erase the left-right political spectrum in the psychologies of their users, but I have no hypotheses as to how such devices would specifically affect inter-hemispheric cross-talk nor what type of political spectrum widespread use of such devices would introduce to humanity if endorsed by the UN or included as a national objective by nations signing on to the Nuclear Non-Proliferation Treaty.
I do, however, imagine every reader of this blog will agree with me in rating this research issue the most significant one still outstanding from the Second Millennium AD. Full disclosure: I have no shares in Crimson Publishers.
"But the “predatory” label has proven broadly misleading.... Most authors who publish in dodgy journals probably reckon the benefits of an apparently impressive résumé outweigh the risk of being caught, in which case, anyway, one could always claim ignorance."
An accurate critique of the corrupted language of this mislabeling. The label was invented, as with 'predatory lending', to shift blame from cheating borrowers to the big bad banks. Its astounding success also prompted wide usage on China's Belt and Road projects... but I've digressed... hehehe....
It is a diversion. The fake papers that cause the most damage, in wasted funds and effort, are the ones published in recognized, celebrity journals, not the obscure, low-citation-index journals called predatory.
The talk about predatory and non-predatory publishing diverts attention from the failure of the whole Byzantine system of publication counts and the network of good friends called peers.
Are you aware that pharma companies don't trust any publication, even in celebrity Nature or Science, until they duplicate the results in their own labs? And they fail to replicate up to 90% of discoveries: landmark discoveries in top American journals that companies would very much wish to follow. So much for the non-predatory journals that bash these so-called predatory ones.
I don't want to think what the situation is in other areas of science where results cannot be so easily verified.
Not a bad article but there are three further points I'd make:
1. It's a bigger problem for emerging countries without a mature research culture. Countries with a mature research culture (e.g. the UK's Research Excellence Framework) tend to weigh the quality and impact of the research, not the number of papers.
2. There is a huge grey zone. At one end there are completely bogus journals which readily publish gibberish for money. At the other end there are true scholarly journals under firm academic control. But in between there are many genuine mid-ranking journals that peer-review and publish legitimate work, but where the peer-reviews may be mediocre, and where honest but weak work still gets published (to the financial benefit of the publisher).
3. There has been enormous ethical concern over the past couple of decades about failure to publish negative results leading to "publication bias" (the impression from reading published work is unduly rosy). To solve this problem, academics must publish their negative findings, but since these are of little interest they will tend to end up in low-ranking journals that might superficially resemble predatory journals.
The economics of all this is really quite interesting.
Don't fool yourself that a 'mature research culture' gives any additional confidence in science publications. It is an appeal to authority, which is well known to be a fallacious argument. As is the whole system of peers and journal hierarchy.
A great deal of what this TE Explains says is true.
Hence the heightened need for lay folks to distinguish, when they read stuff, what is fake, what is real, what is diamond, what is zirconia, what is glass, what is plastic.
Fortunately, in each discipline, from law to psychiatry to nuclear physics to Keynesian economics to Marxist theory, there are the real pros. They are not unduly disturbed or perturbed, for they all know each other and can smell an "un-pro" from light years away.
Unfortunately, "journalists" who purport to report by reading precisely these "predatory academic journals" (I commend TE for coming up with this appellation) often can't tell the difference, making it far easier for the shysters to ply their trade. Sometimes the results of such reading show up on TE's own blogs, making them cringeworthy to read.
"[T]here are the real pros [who] are not unduly disturbed or perturbed, for they all know each other and can smell an "un-pro" from light years away."
I feel an irregular verb coming on:
We form an academic elite.
You are a bit closed-minded.
They are blinkered fuddy-duddies hiding in ivory towers.
Academia needs mechanisms by which the establishment can be shown up. Unfortunately the amount of noise being generated by the merely average, and the arms race to get top academic posts is making real progress ever harder to achieve, and truly talented people harder to identify within the dross.
Governments demand visible research as evidence that universities are doing something worthwhile. As Goodhart's law reminds us, whatever is measured is immediately suspect. What we need is high-quality research; what we are getting is a high quantity of mediocre research. Appointment committees should perhaps look hard at the number of citations a candidate's research has attracted, rather than merely the number of papers overall. Google Scholar does provide such a measure, though of course Goodhart's law might strike again...
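One citation-based measure of the kind Google Scholar reports is the h-index: the largest h such that the researcher has at least h papers with at least h citations each. A minimal sketch, with invented citation counts for two hypothetical candidates:

```python
def h_index(citations):
    """Return the h-index: the largest h such that at least h papers
    have at least h citations each."""
    # Rank papers by citations, descending; h is the last rank at
    # which the citation count still meets or exceeds the rank.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical candidates (counts invented for illustration):
prolific = [3, 2, 2, 1, 1, 1, 1, 1]   # many papers, little uptake
impactful = [40, 25, 12, 8, 5]        # fewer papers, more citations
print(h_index(prolific))   # 2
print(h_index(impactful))  # 5
```

Note how the measure favours the shorter but better-cited list, which is the point of weighing citations rather than paper counts; it is, of course, just as gameable under Goodhart's law.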
There is little you say I disagree with.
Academics are a sheltered lot. They tend to think they are "elite". In the real world, they are not. In pure theoretical research, their self-claimed (or other-foisted) elitism may not have grave consequences for society. In research that has practical applications, it is a totally different ballgame. There, it behooves the examiner or adjudicator of the "research" to know the subject matter well enough to tell at a glance what quality the work is. I think genuine experts do.
My own view, a highly biased one, is that the quality of research in all the social sciences has gone to hell, from hypothesis to methodology to conclusion. Most of it is trash. The physical sciences have survived much better; it is also much harder to cheat in the physical sciences.
On résumé padding: they all read as if they copy each other, in particular the "buzzwords". That's when you know the person offering the résumé is suspect.
I don't understand what you meant by "I feel an irregular verb coming on".
"Hence the more heightened the need exists for lay folks to distinguish, when they read stuff, what is fake, what is real, what is diamond, what is zirconia, what is glass, what is plastic."
A lofty ideal for sure, but how to achieve it? We can hardly turn all members of the public into experts in every field of science. As an academic, I don't claim the ability to properly vet articles in another discipline.
My conclusion is that lay folks need to be able to judge not the content, but the source. Does it only publish articles after rigorous, independent review? For medicine and health, is the journal included in PubMed? Or is this a paper that was never cited in such databases, and contains errors of language that suggest that little review has taken place?
Similarly for websites. A claim on the legitimate website of NASA can be trusted; a claim on the website of a 'think tank' with a lofty name that doesn't disclose its funding sources, probably not.
It requires a bit of effort to dig this out, but way less than learning to master a whole field of science.
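The PubMed check mentioned above can even be scripted crudely. A sketch that builds a query URL against NCBI's public E-utilities search endpoint, counting how many PubMed records a journal has (zero suggests it is not indexed); the journal name here is a made-up placeholder:

```python
from urllib.parse import urlencode

# NCBI E-utilities ESearch endpoint (public API).
EUTILS_ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_journal_query(journal_name):
    """Build an ESearch URL whose JSON response includes the count of
    PubMed records attributed to the given journal."""
    params = {
        "db": "pubmed",
        "term": f'"{journal_name}"[Journal]',  # PubMed journal field tag
        "retmode": "json",
        "retmax": 0,  # we only care about the hit count, not the records
    }
    return f"{EUTILS_ESEARCH}?{urlencode(params)}"

# Hypothetical journal name, for illustration only:
url = pubmed_journal_query("Journal of Implausible Results")
print(url)
```

Fetching that URL and reading the reported count is left to the reader; the point is that the provenance check is a few lines of effort, not a career in the field.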