Why I am leaving the IET after 25 years

I am moving to a new blog but will link to posts from here for a while. All posts from here have been imported to it.

http://iansommerville.com/techstuff/

My latest entry – IET discriminates against students from Scotland in its scholarship scheme

http://iansommerville.com/techstuff/2012/10/07/why-i-am-leaving-the-iet-after-25-years/

Filed under CS education

Agile development for government IT systems – beware of the hype

Agile development gets lots of hype. Governments around the world are saying that we must be agile and use agile practices for IT systems development. Whatever the question, ‘agile’ is the answer.

I am sympathetic to the agile manifesto and I think some agile practices such as time-bounded increments and test-driven development are universally useful. I would guess that agile approaches, in some form, are almost universal in companies developing software products.
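To make concrete what I mean by test-driven development, here is a minimal sketch in Python: the tests come first, they fail, and then just enough code is written to make them pass. The validate_postcode function and its simplified postcode rule are invented purely for illustration – they are not taken from any real project.

# A minimal test-driven development sketch: the tests below are written first,
# they fail, and then just enough code is written to make them pass.
# 'validate_postcode' and its simplified rule are invented purely for illustration.
import re
import unittest


def validate_postcode(postcode: str) -> bool:
    """Return True if the string looks like a UK-style postcode (simplified rule)."""
    return bool(re.fullmatch(r"[A-Z]{1,2}\d{1,2}[A-Z]?\s*\d[A-Z]{2}",
                             postcode.strip().upper()))


class TestValidatePostcode(unittest.TestCase):
    # In TDD these tests exist before validate_postcode does; each failing
    # test drives the next small piece of implementation.
    def test_accepts_simple_postcode(self):
        self.assertTrue(validate_postcode("EH1 1YZ"))

    def test_rejects_empty_string(self):
        self.assertFalse(validate_postcode(""))

    def test_normalises_case_and_whitespace(self):
        self.assertTrue(validate_postcode(" eh1 1yz "))


if __name__ == "__main__":
    unittest.main()

The point is not the postcode rule itself but the rhythm: each small failing test drives a small, verifiable piece of implementation, which is one reason the practice travels well beyond product development.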

Agile approaches place a great deal of focus on the ‘user’ – they may include users as part of the development team, they develop requirements in parallel with implementation with extensive user involvement, and they rely on users to help develop ‘system tests’. This works really well for product development, where the products are clearly aimed at users. Of course, real users are rarely involved, but some form of user proxy can stand in for them – someone from a sales team, other developers playing the role of users, support staff who have real user feedback, and so on.

In those cases, if the ‘users’ ask for something that’s too hard to build or too expensive, the organisation itself can decide what to do. It owns both the specification (what’s to be done) and its implementation. It can adapt by changing either or both.

However, when it comes to enterprise systems and, especially, government systems, then things are different. The owner of the specification and the system developer are not the same – some requirements can’t simply be dropped because they are too complex or expensive. Furthermore, the notion of what is a ‘user’ becomes much more complex. Typically, these are large systems focused on complex problems – such as medical record keeping – and there are many different types of user. These systems may have complex governance arrangements, may have to conform to national and international laws and regulations, may have stringent security requirements and their success or failure may affect the careers of politicians. In short, they are very complex systems.

There are various problems with a user-driven approach to development in such circumstances:

1.     Users tend to put their convenience first and other requirements later. They don’t want the overhead of security and don’t always understand the restrictions that are imposed by those involved in system governance.

2.     Users are not lawyers. They don’t know which rules and regulations apply to the system.

3.     As those involved in the system governance are often not actually users of the system, it is difficult to know how to include them in an agile requirements process. Often they don’t have functional requirements but they simply place constraints on the system.

4.     Users are very busy people. They often simply don’t have the time or the inclination to stop what they are doing and discuss requirements for a system which may or may not affect them sometime in the future. When users do get involved, they are sometimes the wrong people – those who have a personal interest in technology and are not typical of real users.

Agile methods don’t really, as far as I can see, have good ways of coping with these issues. They present an idealised world where users are engaged and interested and where user interests rather than enterprise constraints are what matter most. This is not the kind of world that I see when looking at national IT systems.

It makes lots of sense to adopt some agile practices for government systems and to try to engage end-users during the development process. However, I am convinced that there is still a need for old-fashioned requirements engineering where all stakeholders are considered, rather than simply diving into agile development.

Filed under agile methods, complexity, requirements, software engineering

Requirements conflicts, governance and complexity

I’ve written in previous posts about how I am starting to look at the requirements for a new digital learning platform for Scottish schools.  Technically, this does not appear to be a very complex system but once you start to look at it you see that the complexity does not arise from the technical components of the system but from its governance.

I wrote in a paper recently published in the CACM (copy here) about how it was impossible to control change in a system where there were multiple independent organisations involved in its management and governance – and the way in which digital learning is supported in Scottish schools exemplifies this.

In Scotland, funding for age 5-18 education is the responsibility of local government – and there are 32 local authorities across the country. The national government provides support services (such as the current learning platform Glow) but cannot direct local authorities to take a particular course of action (that’s democracy – see my post on this).

Schools themselves are not legal entities, so local authorities take responsibility for failings in the school system and, in particular, are the bodies that would be legally liable in the event of a child protection or internet safety issue. This means that many (not all) take a very risk-averse approach to internet filtering policies and limit what both teachers and students can do. I was astonished by the diversity of policies in this recently published survey. Local authorities are also responsible for funding school hardware and networking – and they all make their own decisions on this too. Naturally, the provision differs markedly from one area to another.

A consequence of the risk-averse approach adopted by local authorities is that the current Glow system has traded usability for security, and this is perhaps the primary reason why it is difficult to use in class teaching. As a consequence, it is hardly used by teachers and students – it is certainly not meeting its original requirement of providing effective learning support.

So what we have here is a situation where there are 33 different bodies (32 local authorities plus the Scottish government) setting policies that influence the use of digital learning platforms. Each body interprets regulations in its own way and profoundly influences how systems can be used. There is little point in us specifying another secure system that satisfies the local authority stakeholders if the security features mean that it is unusable by teachers and students. On the other hand, if we propose what teachers would prefer – an essentially unregulated system – then the local authority stakeholders are very unlikely to approve its use (and they have the power to cripple it simply by applying internet filtering).

This type of complexity is by no means uncommon in multi-organisational systems, and it is why I despair when I read statements by eminent computer scientists that all we need to do is produce simpler systems. It is also why the problems of requirements conflicts will forever be with us.

As a final word,  I have no idea at this stage how we will resolve the fundamental requirements conflicts in this system. Perhaps it is an insoluble problem.

Filed under complexity, software engineering

Would you prefer simplicity or democracy?

I was inspired by a couple of tweets I saw recently from @CompSciFact to write this.

Computer scientists for many years have made a plea for simplicity.

Edsger Dijkstra, one of the most eminent, said “we have to keep it crisp, disentangled, and simple if we refuse to be crushed by the complexities of our own making”, and Fernando Corbato, one of the developers of Multics (a 1960s operating system which inspired Unix), said “The general problem with ambitious systems is complexity. … it is important to emphasize the value of simplicity and elegance, for complexity has a way of compounding difficulties.”

Throughout the years, there have been similar statements, with computer scientists telling the world that the answer was to simplify.

And, do you know what? The world paid them no attention at all. Systems have got (much) more complex, not less. Why? Because complexity is the price we pay for democracy. Our societies, our businesses and our governments are inherently complex because people make them that way. Every time you try to simplify something, be it a tax system or a chemical plant, there will be losers. Some people have to pay more tax or have a chemical plant belching fumes in their backyard. And they vote against the people who caused these problems for them.

So, we invent complex systems so that we minimise the number of losers (or at least make sure the losers have as little political influence as possible). If you want simplicity, the price you will have to pay is dictatorship.

Personally, I’ll stick with complexity.

Filed under complexity

Getting back to the sharp end

The nature of careers is often that, as you get older, you spend more time managing, negotiating and facilitating rather than actually doing software engineering. I’m no exception to this and I haven’t had operational involvement in a significant systems project for quite some time. However, I’ve now had the opportunity of getting involved again in some real systems requirements engineering.

I’ve been asked to lead a group looking at the architectural requirements for a replacement system for Glow – a system that’s supposed to support collaborative learning and resource sharing in all Scottish schools. The Glow system does not have a great reputation amongst teachers and we hope that the replacement system will be more acceptable and better suited to what they need.

I’m at the stage at the beginning of the project where I’m overwhelmed by the complexity of the problem, and it’ll take a bit of time to work through this and get started. As always, the real problems with this system are not technical but political – senior politicians will have egg on their faces if this goes wrong, and the governance of the system is split between a number of different bodies. There’s a very wide variety of stakeholders, and many, maybe most, users frankly couldn’t care less whether the system is replaced or not.

Luckily, this has come up at a time when I’ve just finished a major job and haven’t yet started anything else, so I will be able to give it some time. We have a group of motivated and enthusiastic teachers involved and at least one very articulate student user. This is a major challenge but I’m very excited by the prospect of getting back to the sharp end and doing some practical engineering.

So, when it comes to the next edition of Software Engineering,  expect a new case study on IT systems for education.

Filed under Uncategorized

Is it possible to validate LSCITS research?

For the past 5 years or so, I’ve been working on a UK programme of research and education into large-scale complex IT systems (LSCITS). This has involved partners in other universities and in industry. Overall, I think we’ve done a good job, with lots of interesting research results. Thanks to the flexibility of EPSRC funding, we’ve been able to respond to developments that weren’t anticipated when we put the proposal together, such as social networking and cloud computing.

You can see a list of what we’ve produced at the LSCITS web site.

So, academically all is well. There have been lots of publications, students have received PhDs and staff have been promoted. We’ve run successful workshops and achieved our aim of creating an LSCITS community.

Yet, in spite of this, I am left with a feeling of unease. So far, very few of our results have had any impact on practice. This is not, in itself, a problem as it takes a while after a project finishes before the results can have an impact. But, if and when they are used, how will we know how good they are? I feel uneasy because, frankly, even with commitment and support from industrial users, I have no idea how we can assess the value of our work for improving real large-scale systems engineering practice.

Let us assume that some company or collaboration decides to take some of our ideas on board – let’s say those on socio-technical analysis.  They apply these on a project and eventually go on to create a system that the stakeholders are happy with. Does this mean our ideas have helped? Or, if the project is deemed to be a failure, does this mean that our ideas don’t work?

The problem with large-scale systems is just that – they are large-scale and their size means that there are lots of factors that can affect the success or otherwise of development projects. These factors are present in all projects but the influence of particular factors varies significantly – for example, real-time response is a key success factor in some systems but less important in others. Not only do we not know in advance which factors are likely to be significant, but we don’t really maintain enough information from previous projects even to hazard a guess.  We don’t understand how these factors relate to each other so we don’t know the consequences of changing one or more of them.

So, is it impossible to validate whether LSCITS research makes a difference? And if so, what is the purpose of doing that research? My answer to the first question is that I think it is practically, if not theoretically, impossible; the second I’ll make the topic of another blog post.

Filed under LSCITS, research

The Fear Index – a novel about LSCITS

I read The Fear Index by Robert Harris on holiday last week. Harris states in an afterword that he ‘would like to write a new version of Nineteen Eighty-Four, based on the idea that it was the modern corporation, strengthened by computer technology, that had supplanted the state as the greatest threat to individual liberty’.

In a nutshell, the book is about algorithmic trading and a trading program, created by a reclusive physicist, that uses machine learning to predict the market and make trades on that basis. Its premise is that the market is affected by fear – as indicated by the use of certain words in the news, on websites and so on, as well as by futures trading indexes – and that this information is a predictor of future stock prices. So far, so good. Then it gets silly – in Harris’s scenario, the machine learning creates a ubiquitous ‘super-intelligent machine’ that builds its own data centres to ensure its survivability, tries to kill its creator (for reasons that are never clear, and using a stupidly obscure approach) and manipulates not just the market but world events that will change the market. The novel ends with the Flash Crash, which is supposedly created by this machine to hide its actions.
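Purely as a toy illustration of the kind of signal Harris describes – counting fear-related words in news text and treating the result as a market indicator – here is a small Python sketch. The word list, the scoring rule and the fear_score function are all invented for illustration; they come neither from the novel nor from any real trading system.

# Toy 'fear score': the fraction of words in a piece of news text that appear
# in a (made-up) list of fear-related words. For illustration only.
import re
from collections import Counter

FEAR_WORDS = {"panic", "crash", "crisis", "fear", "collapse", "default"}


def fear_score(text: str) -> float:
    """Fraction of words in the text that are in the fear-word list."""
    words = re.findall(r"[a-z]+", text.lower())
    if not words:
        return 0.0
    counts = Counter(words)
    return sum(counts[w] for w in FEAR_WORDS) / len(words)


if __name__ == "__main__":
    headline = "Markets panic as fear of a banking crisis spreads"
    print(f"Fear score: {fear_score(headline):.2f}")  # prints 0.33

A real trading signal would, of course, be vastly more sophisticated than this.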

I like Harris’s novels but, like Woody Allen films, the earlier ones were the best. Fatherland and Enigma were, I thought, excellent, and his novels of classical Rome were pretty good. I wasn’t impressed by The Ghost – which reflects Harris’s dislike of Tony Blair – and this one was really pretty grim.

I think it’s great that popular novelists write about technology and no-one expects them to do anything but simplify and exaggerate for effect.  This could have been an excellent book about the dangers of algorithmic trading and complex systems – we are creating systems whose operation we don’t understand. But Harris’s ignorance of the technology means that he has written a book that is anti-technology and which grossly exaggerates the dangers.  He is absolutely right about the risks of algorithmic trading but exaggerating these means that his message will simply not get through.

Harris is an excellent writer but he should stick to history – this is a bad book.

Filed under Book review, LSCITS