I’ve been reading a lot about cloud security today as, perhaps rather hastily, I offered to lead a discussion on my gut feeling that there is really nothing new in cloud security. When you read articles on this topic, what strikes you is that they focus on security technicalities rather than the security risks that businesses face every day. I’ve written about the specific issues around cloud security in my Cloudscape blog.
But this brings me to a more general point that I make in my book but which perhaps needs emphasising again. When you have limited resources to spend on achieving dependability, start by identifying the risks and threats to system dependability. Focus on the risks that have a (relatively) high probability of occurring and the risks that have serious consequences. Think about how your software and your testing process should cope with these problems – if you can avoid the biggies, then you will achieve dependability.
This is one of the problems that I have with automated testing. There is an emphasis on taking a bottom-up approach, where you write unit tests for a component with no idea of whether these cover practical usage scenarios. There is a tendency to think that software that passes all the automated tests is necessarily dependable – but if you haven’t covered all the risks, then you could be in for a surprise.
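To see how a suite of passing unit tests can still miss a realistic usage scenario, here is a contrived sketch (my own example, not drawn from any real system – the function and names are invented for illustration):

```python
def initials(full_name: str) -> str:
    """Return the initials of a 'First Last' style name."""
    first, last = full_name.split(" ")
    return first[0] + last[0]

# The bottom-up unit tests the developer wrote -- all of them pass.
assert initials("Ada Lovelace") == "AL"
assert initials("David Parnas") == "DP"

# A perfectly plausible real-world input that nobody thought to test:
# initials("Madonna") raises ValueError, because split() yields only
# one element. The 'fully tested' code fails the first time it meets
# a single-word name.
```

The tests are not wrong – they just encode the developer’s assumptions about usage rather than the range of inputs the software will actually meet.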
IEEE Computer, 43 (1), January 2010
This provocative article challenges accepted academic thinking about formal methods, suggesting that the current approach has been a complete failure and that our whole notion of formal methods in software engineering needs to be rethought. Parnas proposes a relational approach and sets out the problems and issues that have to be addressed before formal methods can be practically useful for software development.
Parnas sometimes overstates his case when he is trying to make a point, and this paper is no different – formal methods and those who believe in them are not as bad as he suggests. However, I basically agree with most of what he has to say here – most of us who wanted formal methods to become mainstream have been disappointed, and there is no point in thinking that this avenue of research will be significantly more fruitful in future.
The terms fault and failure are sometimes used loosely to mean the same thing but they are actually quite different. A fault is something inherent in the software – a failure is something that happens in the real world. Faults do not necessarily lead to failures and failures often occur in software that is not ‘faulty’.
The reason for this is that whether some behaviour is a failure or not depends on the judgement of the observer and their expectations of the software. For example, I recently tried to buy two day passes on the Lisbon metro for myself and my wife. The metro uses reusable cards, so you buy 2 cards and then credit each with the appropriate pass. The dialogue with the machine went as follows:
How many cards (0.5€ each): 2
How many passes (3.7€ each): 2
Total to pay: 15.8€
To put it mildly, I was surprised. I tried twice and the same thing happened. I then bought the passes one at a time and all was fine – I paid the correct fee of 8.4€.
From my perspective, this was a software failure. It meant that I had to spend longer than I should have buying these passes. On the train, I tried to think about what might have happened. My guess is that it is possible to buy more than 1 day pass at a time and have them all credited to a single card. So, the 2nd question should have been:
How many passes on each card?
From a testing perspective, the software was probably fine and free of defects – if you understood the system, you would have entered 1 pass per card.
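The arithmetic bears this out. Here is a sketch (my own reconstruction, not the machine’s actual code – the function names and the per-card reading are my assumption about its behaviour) of the two ways the dialogue can be interpreted:

```python
CARD_PRICE = 0.50   # euros per reusable card
PASS_PRICE = 3.70   # euros per day pass

def total_as_i_expected(cards: int, passes: int) -> float:
    """'How many passes' read as the total number of passes bought."""
    return cards * CARD_PRICE + passes * PASS_PRICE

def total_as_machine_computed(cards: int, passes_per_card: int) -> float:
    """'How many passes' read as the number credited to *each* card."""
    return cards * (CARD_PRICE + passes_per_card * PASS_PRICE)

print(round(total_as_i_expected(2, 2), 2))        # 8.4  -- the fee I expected
print(round(total_as_machine_computed(2, 2), 2))  # 15.8 -- the fee I was asked for
```

Both readings are internally consistent; the machine was simply answering a different question from the one I thought I was being asked.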
So, failures are not some absolute thing that can be tested for. They will always happen because different people will have different expectations of systems. That’s the theme of my keynote talk at the SEPGEurope 2010 conference in Porto. We need to design software to help people understand what it’s doing and to help them recover from failures.