Time for a harder line on evaluation

I have written in an earlier post about my concerns that the research community is being driven by targets to publish work that clearly isn’t ready for publication. I made the point that papers with no evaluation of the work are being submitted to conferences, as are papers that are supposedly about software systems where the systems have not actually been implemented.

Well – I had the unhappy experience today of reviewing conference papers (not HCI this time) on agile methods and software engineering. I reviewed five papers and not one had any information about evaluation. I am guessing that most of these papers were written by PhD students and that they felt compelled by the prevailing publication culture to submit work-in-progress papers to conferences. This is utter nonsense. Sometimes PhD students produce solid publishable work during their time as a student and sometimes they don’t. I have supervised both kinds of student and one is not better than the other. It may make more sense to write a single, in-depth paper at the end of a 3 or 4 year period rather than a series of shorter papers.

But the people to blame here are the students’ supervisors or advisors (who are sometimes named on the papers). They should not be encouraging the submission of unfinished and premature work. They should be making absolutely clear to students that papers about vapourware, or papers with no evaluation or comparison of the work with other approaches, are simply not good enough.

There is also a need for organisers of conferences to make clear that papers that propose some practical approach but do not include a discussion of evaluation will be rejected without review. And they should screen papers before sending them out for review – wasting reviewers’ time means that we will be less inclined to do reviews in future. If this means fewer paper submissions and so fewer conferences, this would be good for everyone concerned.

Filed under research, software engineering

5 responses to “Time for a harder line on evaluation”

  1. I am really surprised by this! My recent experience is that people are overdoing the evaluation – with too many questionnaires, student groups and spurious stats. We need the right mixture of meaningful hypotheses and tightly described experiments.

  2. Good points on always including evaluation of existing approaches. Otherwise chances are that you’re just reinventing the wheel so to speak. Though I would argue that not all publications need to be about completed work. If you have to take an approach to completion then that removes a lot of collaboration potential with peers. Which is essential for innovation and adoption. I would suggest there should be enough to set an approach apart from existing approaches before publication. I’m coming at this from a non-academic perspective.

  3. These days, a recent Ph.D. graduate with less than five or ten peer-reviewed papers is likely to remain unemployed or barely employed. Students who are aware of this will put tremendous pressure on the supervisors.

    Anyhow, it is difficult to have a sane system with a broken employment market.

    Disclaimer: I am a promoted tenured professor and could afford to publish one paper every ten years if I wanted to.

    • I agree that students looking for a faculty post need papers – but if a PhD student publishes 5 papers from their PhD work then really there’s going to be a lot of overlap. Frankly, I’d rather employ someone with a couple of good papers than 10 papers from obscure conferences and workshops. But I understand the pressures that students are under nowadays.

  4. Ahmed El-Deeb

    I believe the issue of paper publication is getting to an even more drastic point. Just look at the volume of publications and try to filter out the really good ones that add to the body of knowledge. Literature review has become a job of filtering out insignificant papers before you reach a real ‘literature’ to review and evaluate your research against. Researchers and PhD students are publication-oriented in an exaggerated manner; publications are good and needed for a footprint, but over-reliance on publish, publish, publish before the research is mature or verified to add real knowledge is misleading the professional community. The hope is in the hands of loyal reviewers and honest supervisors.
