Pseudo-scientific conferences
Not all conferences are valid venues for publishing scientific work.
This page provides pointers regarding this problem and sketches how to avoid it.
See also
WhereWePublish.
Similar problems exist at many journals, in particular open-access journals.
See for instance
John Bohannon: Who's Afraid of Peer Review?
The problem
There are many true scientific conferences, symposia, workshops and other meetings in informatics. Many of them are good or very good (and it is difficult to get a paper accepted there); many are mediocre (and it is easy to get a paper accepted there).
However, there are also a number of events that claim to be scientific conferences but that no self-respecting scientist should attend, because they are highly non-reputable: they accept essentially everything submitted, even complete nonsense, because their true purpose is not advancing science but simply making money.
Such meetings are not scientific; they merely try to look scientific.
It is not always clear which events belong in this category, but there are a number of sources that have repeatedly identified some organizers whose events often appear to be pseudo-scientific. (And the evidence these sources have collected is sometimes extremely entertaining; go and read some of it.)
Some details
In a nutshell: There is reason to believe one should stay away from anything organized by
IPSI,
GESTS,
Enformatika,
WASET,
and this list is certainly not complete.
Some of these organizations also publish journals; do not expect to gain much scientific reputation by publishing there either…
Note: Some other organizations in principle hold more-or-less valid scientific meetings, but the material they receive and publish is, on average, of lower quality than we would like to find in the places where we publish our research.
It can be difficult to decide which organizations belong in this category; one possible procedure is to check whether an organization's conferences appear in ACM's The Guide to Computing Literature, which contains references from more than 3000 different publishers: whatever is not listed there likely has very little impact in informatics and is thus not a valuable venue for us to submit to.
For more information, see for instance the following sources:
- http://www.rereviewed.com/roguesemiotics/?p=619 : A blog discussion listing many cases and web resources. Very confusing, but also quite interesting.
- http://pdos.csail.mit.edu/scigen/ : SCIgen: a computer program that generates random papers that superficially look like scientific papers but make essentially no sense whatsoever. Several times, such papers have been submitted to these conferences and were accepted for publication. See also the example stories farther down on that page, even including videos of the presentations given for these papers! (A toy sketch of this kind of grammar-based generation appears after this list.)
But watch out: as you know from your favorite espionage movie, information can be made harder to obtain by spreading disinformation.
- For an example, see the comment by 'R. Fisher'(?) and the reply by 'Prof. E. Wang'(?) on the aforementioned blog page from rereviewed.com. It is not obvious which of the two (if either) is telling the truth, but it can hardly be both.
- Or look at a site that claims to shed light on the conference-quality-assessment jungle by applying objective criteria to create a ranking of many conferences: cs-conference-ranking.org. Look at their approach and the resulting conference ranking lists. Looks reasonable, eh?
And now think for a minute: How can they possibly evaluate their criteria RR and JA for all of these conferences? It is almost impossible.
- A number of web sites on the topic have been shut down, e.g.:
- http://www. anti-plagiarism.org/
- http://www. fakeconferences.org/
- http://www. plagiarism-revealed.org/ (whatever that name was supposed to mean). These sites may have contained information or disinformation, or even both.
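To get a feeling for how little substance such generated papers need, here is a minimal, purely illustrative sketch of grammar-based text generation in the spirit of SCIgen. The tiny grammar and vocabulary below are invented for this example; SCIgen's real grammar is far larger and also produces abstracts, figures, and references.

```python
import random

# A tiny, made-up phrase grammar in the spirit of SCIgen.
# Every symbol maps to a list of possible expansions.
GRAMMAR = {
    "TITLE": [["ADJ", "NOUN", "for", "ADJ", "NOUN"],
              ["Towards the Synthesis of", "NOUN", "and", "NOUN"]],
    "ADJ": [["Decentralized"], ["Stochastic"], ["Scalable"], ["Amphibious"]],
    "NOUN": [["Epistemologies"], ["Byzantine Fault Tolerance"],
             ["Hash Tables"], ["the Turing Machine"]],
}

def expand(symbol: str) -> str:
    """Recursively expand a grammar symbol into a random phrase."""
    if symbol not in GRAMMAR:
        return symbol  # terminal: a literal word or phrase
    production = random.choice(GRAMMAR[symbol])
    return " ".join(expand(s) for s in production)

# Print a few random "paper titles" that look vaguely plausible
# but carry no meaning whatsoever.
for _ in range(3):
    print(expand("TITLE"))
```

A venue whose review process cannot tell output like this from a real contribution is not performing peer review at all.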
How to select a conference to submit to
Here is a suggestion for how to evaluate the quality of a particular conference (a toy summary in code follows the list):
- Go and find the website of a previous instance of the same conference. If you cannot find it, that is a bad sign.
- Have a look at the list of papers from that previous conference. Do the titles look like they are from the same area as your own work? If not, that would be a reason not to go there, independent of the quality of the conference. If you cannot even find the list of papers, that is a bad sign.
- Have a look at the list of program committee members (this year or previous years). You should have heard of at least a few of these people. If not, that's not a good sign.
- There is no program committee at all? Then it is not a scientific conference.
- Obtain access to the proceedings volume of that conference. If you can find it neither in any library you have access to nor online, that is a bad sign. And it is also in itself a reason not to publish there: You would want others to be able to find your contribution, would you not?
- Assess a number of interesting-looking papers from a previous year. If none of them is of good quality, that is a bad sign.
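Purely as an illustration, the checklist above can be condensed into a handful of yes/no questions; the data class and the wording of the "bad signs" below are ours, not an established metric.

```python
from dataclasses import dataclass

@dataclass
class ConferenceCheck:
    """Answers to the checklist above for one candidate conference."""
    previous_instance_found: bool   # website of an earlier edition exists
    paper_list_found: bool          # list of accepted papers is available
    topics_match_my_work: bool      # paper titles come from my research area
    known_pc_members: bool          # I have heard of some PC members
    has_program_committee: bool     # there is a program committee at all
    proceedings_accessible: bool    # proceedings findable in a library or online
    sample_papers_look_good: bool   # some sampled papers are of good quality

def red_flags(c: ConferenceCheck) -> list[str]:
    """Collect the 'bad signs' from the checklist; an empty list is encouraging."""
    if not c.has_program_committee:
        return ["no program committee at all: not a scientific conference"]
    flags = []
    if not c.previous_instance_found:
        flags.append("no trace of a previous instance")
    if not c.paper_list_found:
        flags.append("no list of previously accepted papers")
    if not c.topics_match_my_work:
        flags.append("topics do not match my own work")
    if not c.known_pc_members:
        flags.append("no program committee members I have heard of")
    if not c.proceedings_accessible:
        flags.append("proceedings not findable in any library or online")
    if not c.sample_papers_look_good:
        flags.append("none of the sampled papers is of good quality")
    return flags
```

Any non-empty result deserves a closer look; the missing-program-committee case alone settles the question.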
Some guidance for finding good conferences:
- Most conferences sponsored by ACM or the IEEE Computer Society are fairly good. You can find lists of these on their web pages.
- Most conferences where you have previously found a really good paper are probably good. Top work is not often published in low-quality places.
- This article in CACM suggests that even statistical criteria may suffice to identify very low-quality conferences (a made-up illustration of the idea follows this list).
- See also WhereWePublish.
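The specific criteria from the CACM article are not reproduced here; purely as a made-up illustration of the idea that simple statistics can be revealing, the sketch below computes one hypothetical signal: how many accepted papers share an author with the program committee.

```python
def pc_author_overlap(paper_authors: list[set[str]], pc_members: set[str]) -> float:
    """Hypothetical signal (not taken from the CACM article): the fraction of
    accepted papers that share at least one author with the program committee.
    A very high value may indicate an 'insider' venue; a low value proves nothing."""
    if not paper_authors:
        return 0.0
    hits = sum(1 for authors in paper_authors if authors & pc_members)
    return hits / len(paper_authors)

# Entirely made-up toy data:
papers = [{"A. Author", "B. Buddy"}, {"C. Chair"}, {"D. Distant"}]
pc = {"B. Buddy", "C. Chair", "E. Editor"}
print(f"PC/author overlap: {pc_author_overlap(papers, pc):.0%}")  # prints 67%
```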
However…
Be aware, however, that even well-reputed publishers are not entirely immune to accepting nonsense, and some people probe or even exploit those weaknesses on a massive scale.
See for example