Would you prefer simplicity or democracy?

I was inspired by a couple of tweets I saw recently from @CompSciFact to write this.

Computer scientists for many years have made a plea for simplicity.

Edsger Dijkstra, one of the most eminent, said “we have to keep it crisp, disentangled, and simple if we refuse to be crushed by the complexities of our own making”. Fernando Corbato, one of the developers of Multics (a 60s operating system which inspired Unix), said “The general problem with ambitious systems is complexity. … it is important to emphasize the value of simplicity and elegance, for complexity has a way of compounding difficulties.”

Throughout the years, there have been similar statements, with computer scientists telling the world that the answer was to simplify.

And, do you know what? The world paid them no attention at all. Systems have got (much) more complex, not less. Why? Because complexity is the price we pay for democracy. Our societies and our businesses and our governments are inherently complex because people make them that way. Every time you try to simplify something, be it a tax system or a chemical plant, there will be losers. Some people have to pay more tax or have a chemical plant belching fumes in their backyard. And they vote against the people who caused these problems for them.

So, we invent complex systems so that we minimise the number of losers (or at least make sure the losers have as little political influence as possible). If you want simplicity, the price you will have to pay is dictatorship.

Personally, I’ll stick with complexity.

Filed under complexity

Is it possible to validate LSCITS research?

For the past 5 years or so, I’ve been working on a UK programme of research and education into large-scale complex IT systems (LSCITS). This has involved partners in other universities and industry. Overall, I think we’ve done a good job, with lots of interesting research results. Thanks to the flexibility of EPSRC funding, we’ve been able to be responsive to new developments that weren’t anticipated when we put the proposal together, such as social networking and cloud computing.

You can see a list of what we’ve produced at the LSCITS web site.

So, academically all is well. Lots of publications, students have received PhDs and staff have been promoted. We’ve run successful workshops and achieved our aim of creating an LSCITS community.

Yet, in spite of this, I am left with a feeling of unease. So far, very few of our results have had any impact on practice. This is not, in itself, a problem as it takes a while after a project finishes before the results can have an impact. But, if and when they are used, how will we know how good they are? I feel uneasy because, frankly, even with commitment and support from industrial users, I have no idea how we can assess the value of our work for improving real large-scale systems engineering practice.

Let us assume that some company or collaboration decides to take some of our ideas on board – let’s say those on socio-technical analysis.  They apply these on a project and eventually go on to create a system that the stakeholders are happy with. Does this mean our ideas have helped? Or, if the project is deemed to be a failure, does this mean that our ideas don’t work?

The problem with large-scale systems is just that – they are large-scale and their size means that there are lots of factors that can affect the success or otherwise of development projects. These factors are present in all projects but the influence of particular factors varies significantly – for example, real-time response is a key success factor in some systems but less important in others. Not only do we not know in advance which factors are likely to be significant, but we don’t really maintain enough information from previous projects even to hazard a guess.  We don’t understand how these factors relate to each other so we don’t know the consequences of changing one or more of them.

So, is it impossible to validate whether LSCITS research makes a difference? If so, what is the purpose of doing that research? My answer to the first question is that I think it is practically, if not theoretically, impossible; the second, I’ll make the topic of another blog post.

Filed under LSCITS, research

The Fear Index – a novel about LSCITS

I read The Fear Index by Robert Harris on holiday last week. Harris states in an afterword that ‘I would like to write a new version of Nineteen Eighty-Four, based on the idea that it was the modern corporation, strengthened by computer technology, that had supplanted the state as the greatest threat to individual liberty’.

In a nutshell, the book is about algorithmic trading and a trading program created by a reclusive physicist that uses machine learning to predict the market and make trades on that basis. Its premise is that the market is affected by fear – as indicated by the use of certain words in the news, websites etc. as well as future trading indexes and that this information is a predictor of future stock prices.  So far, so good. Then it gets silly – in Harris’s scenario the machine learning creates a ubiquitous ‘super intelligent machine’ that builds its own data centers to ensure its survivability, tries to kill its creator (for reasons that are never clear and using a stupidly obscure approach) and manipulates not just the market but world events that will change the market.  The novel ends with the Flash Crash which is supposedly created by this machine to hide its actions.

I like Harris’s novels but, like Woody Allen films, the earlier ones were the best. Fatherland and Enigma were, I thought, excellent, and his novels of classical Rome were pretty good. I wasn’t impressed by The Ghost, which reflects Harris’s dislike of Tony Blair, and this one was really pretty grim.

I think it’s great that popular novelists write about technology and no-one expects them to do anything but simplify and exaggerate for effect.  This could have been an excellent book about the dangers of algorithmic trading and complex systems – we are creating systems whose operation we don’t understand. But Harris’s ignorance of the technology means that he has written a book that is anti-technology and which grossly exaggerates the dangers.  He is absolutely right about the risks of algorithmic trading but exaggerating these means that his message will simply not get through.

Harris is an excellent writer but he should stick to history – this is a bad book.

Filed under Book review, LSCITS