Category Archives: software engineering

Agile development for government IT systems – beware of the hype

Agile development gets lots of hype. Governments around the world are saying that we must be agile and use agile practices for IT systems development. Whatever the question, ‘agile’ is the answer.

I am sympathetic to the agile manifesto and I think some agile practices such as time-bounded increments and test-driven development are universally useful. I would guess that agile approaches, in some form, are almost universal in companies developing software products.

Agile approaches place a great deal of focus on the ‘user’ – they may include users as part of the development team, they develop requirements in parallel with implementation with extensive user involvement, and they rely on users to help develop ‘system tests’. This works really well for product development, where the products are clearly aimed at users. Of course, real users are rarely available, but some form of user proxy can stand in – someone from a sales team, other developers playing the role of users, support staff who have real user feedback, and so on.

In those cases, if the ‘users’ ask for something that’s too hard to build or too expensive, the organisation itself can decide what to do. It owns both the specification (what’s to be done) and its implementation. It can adapt by changing either or both.

However, when it comes to enterprise systems and, especially, government systems, then things are different. The owner of the specification and the system developer are not the same – some requirements can’t simply be dropped because they are too complex or expensive. Furthermore, the notion of what is a ‘user’ becomes much more complex. Typically, these are large systems focused on complex problems – such as medical record keeping – and there are many different types of user. These systems may have complex governance arrangements, may have to conform to national and international laws and regulations, may have stringent security requirements and their success or failure may affect the careers of politicians. In short, they are very complex systems.

There are various problems with a user-driven approach to development in such circumstances:

1. Users tend to put their convenience first and other requirements later. They don’t want the overhead of security and don’t always understand the restrictions that are imposed by those involved in system governance.

2. Users are not lawyers. They don’t know the rules and regulations that apply to the system.

3. Since those involved in system governance are often not actual users of the system, it is difficult to know how to include them in an agile requirements process. Often they have no functional requirements; they simply place constraints on the system.

4. Users are very busy people. They often simply don’t have the time or the inclination to stop what they are doing and discuss requirements for a system that may or may not affect them at some point in the future. When users do get involved, they are sometimes the wrong people – those with a personal interest in technology who are not typical of real users.

Agile methods don’t really, as far as I can see, have good ways of coping with these issues. They present an idealised world where users are engaged and interested and where user interests rather than enterprise constraints are what matter most. This is not the kind of world that I see when looking at national IT systems.

It makes lots of sense to adopt some agile practices for government systems and to try to engage end-users during the development process. However, I am convinced that there is still a need for old-fashioned requirements engineering where all stakeholders are considered, rather than simply diving into agile development.



Filed under agile methods, complexity, requirements, software engineering

Requirements conflicts, governance and complexity

I’ve written in previous posts about how I am starting to look at the requirements for a new digital learning platform for Scottish schools. Technically, this does not appear to be a very complex system, but once you start to look at it, you see that the complexity arises not from the technical components of the system but from its governance.

I wrote in a paper recently published in the CACM (copy here) about how it was impossible to control change in a system where there were multiple independent organisations involved in its management and governance – and the way in which digital learning is supported in Scottish schools exemplifies this.

In Scotland, funding for age 5-18 education is the responsibility of local government – and there are 32 local authorities across the country. The national government provides support services (such as the current learning platform Glow) but cannot direct local authorities to take a particular course of action (that’s democracy – see my post on this).

Schools themselves are not legal entities, so local authorities take responsibility for failings in the school system and, in particular, are the bodies that would be legally liable in the event of a child protection or internet safety issue. This means that many (not all) take a very risk-averse approach to internet filtering policies and limit what both teachers and students can do. I was astonished by the diversity of policies in this recently published survey. Local authorities are also responsible for funding school hardware and networking – and they all make their own decisions on this too. Naturally, the provision differs markedly from one area to another.

A consequence of the risk-averse approach adopted by local authorities is that the current Glow system has traded usability for security, and this is perhaps the primary reason why it is difficult to use in class teaching. As a result, it is hardly used by teachers and students – it is certainly not meeting its original requirement of providing effective learning support.

So what we have here is a situation where there are 33 different bodies (32 local authorities plus the Scottish government) setting policies that influence the use of digital learning platforms. Each body interprets regulations in its own way and profoundly influences how systems can be used. There is little point in us specifying another secure system that will satisfy the local authority stakeholders if the security features mean that it is unusable by teachers and students. On the other hand, if we propose what teachers would prefer – an essentially unregulated system – then the local authority stakeholders are very unlikely to approve its use (and they have the power to cripple it simply by using internet filtering).

This type of complexity is by no means uncommon in multi-organisational systems and is why I despair when I read statements by eminent computer scientists that all we need to do is produce simpler systems. And it is why the problems of requirements conflicts will forever be with us.

As a final word,  I have no idea at this stage how we will resolve the fundamental requirements conflicts in this system. Perhaps it is an insoluble problem.


Filed under complexity, software engineering

Risk-driven approaches to dependability

I’ve been reading a lot about cloud security today as, perhaps rather hastily, I offered to lead a discussion on my gut feeling that there is really nothing new in cloud security.  When you read articles on this topic, what strikes you is that they focus on security technicalities rather than the security risks that businesses face every day. I’ve written about the specific issues around cloud security in my Cloudscape blog.

But this brings me to a more general point that I make in my book but which perhaps needs emphasising again. When you have a limited amount of resources to spend on achieving dependability, start by identifying the risks and threats to system dependability. Focus on those risks which have a (relatively) high probability of occurring and the risks that have serious consequences.  Think about how your software and your testing process should cope with these problems – if you can avoid the biggies, then you will achieve dependability.
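To make this concrete, one common way to prioritise is to estimate an exposure value for each risk as probability × consequence, and to spend the dependability budget on the highest exposures first. A minimal sketch in Python – the risks and the numbers attached to them are entirely made up for illustration:

```python
# Rank dependability risks by exposure = probability * consequence.
# The risks and scores below are illustrative, not real data.

risks = [
    # (name, probability per year 0-1, consequence severity 1-10)
    ("database outage", 0.30, 9),
    ("malformed user input crashes parser", 0.60, 4),
    ("disk full on log partition", 0.20, 6),
    ("cosmic-ray bit flip", 0.0001, 8),
]

def exposure(risk):
    """Exposure of a (name, probability, consequence) triple."""
    name, probability, consequence = risk
    return probability * consequence

# Spend testing and mitigation effort on the biggest exposures first.
for name, p, c in sorted(risks, key=exposure, reverse=True):
    print(f"{name}: exposure = {p * c:.4f}")
```

The point is not the particular numbers but the discipline: rare, low-impact events (the bit flip here) drop to the bottom of the list, so testing and mitigation effort goes where it matters.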

This is one of the problems that I have with automated testing. There is an emphasis on taking a bottom-up approach, where you write unit tests for a component with no idea of whether these cover practical usage scenarios. There is a tendency to think that software that passes all the automated tests is necessarily dependable – but if you haven’t covered all the risks, then you could be in for a surprise.
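One way to complement bottom-up unit tests is to pair them with scenario tests that walk through a realistic usage path. A sketch of the difference, using a deliberately invented `MedicalRecords` class (not a real library) as the component under test:

```python
# Illustrative contrast between a unit test and a usage-scenario test.
# `MedicalRecords` is an invented example component, not a real library.

class MedicalRecords:
    def __init__(self):
        self._records = {}

    def add(self, patient_id, note):
        self._records.setdefault(patient_id, []).append(note)

    def history(self, patient_id):
        # Return a copy so callers cannot mutate stored history.
        return list(self._records.get(patient_id, []))

# Unit test: one component, one behaviour, no usage context.
def test_add_stores_note():
    db = MedicalRecords()
    db.add("p1", "flu symptoms")
    assert db.history("p1") == ["flu symptoms"]

# Scenario test: a plausible clinical workflow - several interactions
# in the order a real user would perform them.
def test_clinician_reviews_history_across_visits():
    db = MedicalRecords()
    db.add("p1", "flu symptoms")           # first visit
    db.add("p1", "prescribed antivirals")  # follow-up visit
    db.add("p2", "sprained ankle")         # unrelated patient
    history = db.history("p1")
    assert history == ["flu symptoms", "prescribed antivirals"]
    assert "sprained ankle" not in history

test_add_stores_note()
test_clinician_reviews_history_across_visits()
```

The unit test checks a single behaviour in isolation; the scenario test checks the sequence of interactions a real user would perform, which is where risk-relevant surprises tend to show up.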


Filed under dependability, software engineering

Time for a harder line on evaluation

I have written in an earlier post about my concerns that the research community is being driven by targets to publish work that clearly isn’t ready for publication. I made the point that papers that contain no evaluation of the work are being submitted to conferences, as are papers that are supposedly about software systems where the systems have not actually been implemented.

Well – I had the unhappy experience today of reviewing conference papers (not HCI this time) on agile methods and software engineering – I reviewed five papers and not one had any information about evaluation. I am guessing that most of these papers were written by PhD students who felt compelled by the prevailing publication culture to submit work in progress to conferences. This is really utter nonsense. Sometimes PhD students produce solid publishable work during their time as a student and sometimes they don’t. I have supervised both kinds of student and one is not better than the other. It may make more sense to write a single, in-depth paper at the end of a 3 or 4 year period rather than a series of shorter papers.

But the people to blame here are the students’ supervisors or advisors (who are sometimes named on the papers). They should not be encouraging the submission of unfinished and premature work. They should be making absolutely clear to students that papers about vapourware, or papers with no evaluation or comparison of the work with other approaches, are simply not good enough.

There is also a need for conference organisers to make clear that papers proposing some practical approach that do not include a discussion of evaluation will be rejected without review. And they should screen papers before sending them out for review – wasting reviewers’ time means that we will be less inclined to do reviews in future. If this means fewer paper submissions and so fewer conferences, that would be good for everyone concerned.


Filed under research, software engineering