Software reliability according to Edsger W. Dijkstra

Dijkstra ("A position paper on Software Reliability") argued that the notion of software reliability is meaningless, because the environment in which the software must work cannot be dealt with in scientific ways. There is always a gap between the formal specification and the behavior the software's user actually wanted.

I agree that the notion is meaningless. But I disagree that the environment cannot be treated scientifically.

Consider, for example, software for controlling a fly-by-wire airplane. Let's assume, for the sake of argument, that takeoffs and landings are 100% manually controlled, and that there is no risk of collision with mountains. Then it is possible to specify all weather conditions that the airplane might face while en route. Thus the control software can be fully specified to control the airplane no matter what turbulence, rain, or snow it encounters, as long as it is flown in Earth's atmosphere.

Author: Omer Zak

I have been deaf since birth. Since age 12 I have played with big computers, which eat punched cards and spew out printouts. Ever since they became available, I have worked and played with desktop-size computers, which eat keyboard keypresses and spew out display pixels. Among other things, I developed software that helped the deaf in Israel use the telephone network by means of home computers equipped with modems. Several years later, I developed Hebrew localizations for some cellular phones, which helped the deaf in Israel use the cellular phone networks. I am interested in entrepreneurship, science fiction, and making the world more accessible to people with disabilities.

4 thoughts on “Software reliability according to Edsger W. Dijkstra”

  1. Did you specify what the controller does when it detects one engine failing? How exactly would it avoid false positives? False negatives?
    [Not that it makes sense to specify “fly in all conditions” anyway — there are lots of impossible conditions (air pressure dropping too suddenly, for example), lots of conditions in which it is useless to try to do anything sane (tsunami) etc. Your specification, sadly, makes no sense — and that's for the simplest possible software, an air controller with no interaction with any humans.]


    1. While I didn't spell out everything relevant, my argument is that it is possible to specify boundaries such that the combined system of environment+airplane+software always stays within those boundaries.

      It is also possible to specify some reasonable behavior (including asking the pilots for manual intervention) for every trajectory inside those boundaries – any relevant function is continuous, because discontinuous functions do not describe physically possible events (even black holes, the classical discontinuous case, have Hawking radiation).

      The important point is that the environment can be dealt with in scientific ways.

      1. Engine failures – can be modeled as the engine thrust falling unpredictably from 100% to 0% within a few milliseconds. The software is expected to detect this and deal with it appropriately.
      2. False positives/false negatives – analyze the engine sensors' failure modes and devise several independent ways of measuring engine thrust (such as measuring the effect of micro-perturbations of the rudders if all sensors fail).
      3. Air pressure dropping suddenly – what is the maximum possible drop in air pressure in Earth's atmosphere, and over what volume can it occur?
      4. Conditions under which it is useless to try anything – these are analogous to trying to extract the real square root of a negative number. You just document that the software is not guaranteed to work under those conditions. Then you simplify the description of the conditions to make them easily understandable by the pilots.
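
      The failure-detection idea in items 1–2 can be sketched in code. This is only a toy illustration of the debouncing principle; the thresholds, sample counts, and function names are my own assumptions, not anything from a real avionics specification:

      ```python
      def engine_failed(thrust_samples, nominal_thrust,
                        drop_fraction=0.9, min_consecutive=3):
          """Hypothetical sketch: flag an engine failure when measured thrust
          falls below (1 - drop_fraction) of nominal for several consecutive
          samples. Requiring consecutive low readings (debouncing) rejects
          single-sample sensor glitches, reducing false positives."""
          threshold = nominal_thrust * (1.0 - drop_fraction)
          consecutive = 0
          for sample in thrust_samples:
              if sample < threshold:
                  consecutive += 1
                  if consecutive >= min_consecutive:
                      return True
              else:
                  consecutive = 0  # glitch ended; reset the counter
          return False
      ```

      In the same spirit, false negatives from a dead sensor could be reduced by voting over several independent thrust estimates, as item 2 suggests.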


    2. “Then you simplify the description of the conditions to make them easily understandable by the pilots.”
      …and at that point, I'm afraid to say it seems like you are realizing yourself that even dealing scientifically with such an isolated environment is pure fantasy.


    3. This is more like extracting the real square root of a function that can assume both positive and negative values over a certain range.

      You tell the clients that the function may be negative in certain sub-ranges of the range. You choose intervals that cover all the dangerous sub-ranges and that are large enough to keep the description simple, yet small enough to leave most of the range clear.

      If a client needs to tread inside a dangerous sub-range, he gets the exact (and complicated) description of where the function assumes negative values and where, therefore, its real root must not be used.
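
      The analogy can be made concrete with a toy sketch. The example function and the interval bookkeeping below are my own invention, chosen only to illustrate "document the dangerous sub-ranges, refuse to operate inside them":

      ```python
      import math

      # Hypothetical example: f(x) = x^2 - 4 is negative on the open
      # interval (-2, 2), so its real square root is undefined there.
      DANGEROUS_INTERVALS = [(-2.0, 2.0)]  # documented "do not use" sub-ranges

      def safe_real_sqrt_of_f(x):
          """Return sqrt(f(x)) where f(x) = x^2 - 4, or None inside a
          documented dangerous sub-range (where f is negative)."""
          for lo, hi in DANGEROUS_INTERVALS:
              if lo < x < hi:
                  return None  # caller consults the exact description instead
          return math.sqrt(x * x - 4.0)
      ```

      Here the published interval happens to be exact; in general, as described above, the published intervals may be simplified supersets of the truly dangerous sub-ranges, trading coverage of usable range for ease of description.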

      If you want to argue about environments that cannot be treated scientifically, you should first accept that there are environments that can be treated scientifically, as in the simplified airplane example.

      Only then can you discuss more complicated environments: nuclear-reactor control software, accounting software that deals with complicated and time-varying tax codes, C++ compilers. Their environments probably cannot be dealt with in fully scientific ways given today's state of science.

      My argument was that there exist software applications whose environments can be dealt with scientifically. For those applications, the notion of software reliability might make sense, were it not for other reasons (such as operational profiles).

      Therefore one should look for additional reasons to discredit the notion of software reliability, beyond Dijkstra's reason of poorly specified environments.

