
Is it possible to validate LSCITS research?

For the past 5 years or so, I’ve been working on a UK programme of research and education into large-scale complex IT systems (LSCITS). This has involved partners in other universities and in industry. Overall, I think we’ve done a good job, with lots of interesting research results. Thanks to the flexibility of EPSRC funding, we’ve been able to be responsive to new developments that weren’t anticipated when we put the proposal together, such as social networking and cloud computing.

You can see a list of what we’ve produced at the LSCITS web site.

So, academically all is well. Lots of publications, students have received PhDs and staff have been promoted. We’ve run successful workshops and achieved our aim of creating an LSCITS community.

Yet, in spite of this, I am left with a feeling of unease. So far, very few of our results have had any impact on practice. This is not, in itself, a problem as it takes a while after a project finishes before the results can have an impact. But, if and when they are used, how will we know how good they are? I feel uneasy because, frankly, even with commitment and support from industrial users, I have no idea how we can assess the value of our work for improving real large-scale systems engineering practice.

Let us assume that some company or collaboration decides to take some of our ideas on board – let’s say those on socio-technical analysis.  They apply these on a project and eventually go on to create a system that the stakeholders are happy with. Does this mean our ideas have helped? Or, if the project is deemed to be a failure, does this mean that our ideas don’t work?

The problem with large-scale systems is just that – they are large-scale and their size means that there are lots of factors that can affect the success or otherwise of development projects. These factors are present in all projects but the influence of particular factors varies significantly – for example, real-time response is a key success factor in some systems but less important in others. Not only do we not know in advance which factors are likely to be significant, but we don’t really maintain enough information from previous projects even to hazard a guess.  We don’t understand how these factors relate to each other so we don’t know the consequences of changing one or more of them.

So, is it impossible to validate whether LSCITS research makes a difference? If so, what is the purpose of doing that research? My answer to the first question is that I think it is practically, if not theoretically, impossible; the second I’ll make the topic of another blog post.


Filed under LSCITS, research

Personal responsibility and research funding

I was recently asked to review a research proposal where the proposer, whom I know and respect, is past the ‘normal’ retirement age. He has a distinguished track record of research and the work proposed was very good. It was a high-quality proposal and I supported it.

But I think it was highly irresponsible of the applicant to submit this proposal. It may be good for science but it was bad for the research community. Why? Because the old guys are getting more than their share of the money and this means that we are making it more and more difficult for young researchers to get their first step on the ladder.

It might be argued that research funding bodies make dispassionate decisions based on the quality of the research that is proposed, and that if young researchers have the best ideas, then these will get funded. This is arrant nonsense. Not only is proposal writing a skill in itself, which takes time and experience to learn, but researchers with an established track record get an easier ride. This makes complete sense – if people have done good work in the past, you can expect them to do good work in the future. Consequently, reviewers overlook issues in a proposal that they would otherwise be concerned about.

Therefore, if you have no track record and are starting out on your research career it is harder to get funding. But it is absolutely essential that we (the research community) provide a route for early career researchers to develop as independent research scientists and engineers.

The best way to do this, of course, would be for research funders to make a policy decision that no-one over the age of 60 (say) can submit a new application for funding as a principal investigator. But, age discrimination is not allowed so this is impossible.

But, we can do this voluntarily – instead of submitting proposals as principals, we old guys should step back and help our younger colleagues rather than competing with them. Let us use our expertise to advise the next generation instead of creating a situation where many of them will be so disillusioned that they will simply give up research. We don’t need the money and, if we are good enough to get proposals funded, we’ve already established our reputation.

It will be argued that this might mean that the ‘best’ science is not supported. Again, arrant nonsense. Research ought to be risky which means that what is ‘the best research’ cannot be decided objectively. The research councils reject lots of high quality proposals and often the decision on where a proposal is ranked is simply an accident of reviewing.

Personally, I have about 30 years’ experience of writing successful research proposals, so I think I can speak with some authority here. I don’t intend to hang up my researcher hat yet, but I will not submit any new research proposals as a principal applicant to the EPSRC or to EU research programmes.

I know that in an ideal world, this would not be necessary as there would be sufficient funding for everyone. But waiting for an ideal world has never been a very practical strategy.


Filed under research

Time for a harder line on evaluation

I have written in an earlier post about my concerns that the research community is being driven by targets to publish work that clearly isn’t ready for publication. I made the point that papers that contain no evaluation of the work are being submitted to conferences, along with papers that are supposedly about software systems where the systems have not actually been implemented.

Well – I had the unhappy experience today of reviewing conference papers (not HCI this time) on agile methods and software engineering. I reviewed 5 papers and not one had any information about evaluation. I am guessing that most of these papers were written by PhD students who felt compelled by the prevailing publication culture to submit work in progress to conferences. This is really utter nonsense. Sometimes PhD students produce solid publishable work during their time as a student and sometimes they don’t. I have supervised both kinds of student and one is not better than the other. It may make more sense to write a single, in-depth paper at the end of a 3 or 4 year period rather than a series of shorter papers.

But the people to blame here are the students’ supervisors or advisors (who are sometimes named on the papers). They should not be encouraging the submission of unfinished and premature work. They should be making it absolutely clear to students that papers about vapourware, or papers where there is no evaluation or comparison of the work with other approaches, are simply not good enough.

There is also a need for organisers of conferences to make clear that papers that propose some practical approach but do not include a discussion of evaluation will be rejected without review. And they should screen papers before sending them out for review – wasting reviewers’ time means that we will be less inclined to review in future. If this means fewer paper submissions and so fewer conferences, that would be good for everyone concerned.


Filed under research, software engineering