This is essentially a true story, although some details have been changed from what actually happened.
Once upon a time, in a nonprofit organization that will remain nameless, a volunteer sysadmin set up a PC to serve as the nonprofit’s public server, hosting a few Web sites, managing mailing lists, and running some other services.
The guy was clever and configured the OS (Linux) on the PC in a nonstandard but highly secure way.
A few years later, other volunteers took over the PC. Rather than bother to learn how the system was configured and how to administer it, they preferred to reconfigure the PC into a more conventional and familiar setup.
End of story.
The arguments that erupted over this preference led me to ponder a general question: when and why do software professionals prefer to reinvent the wheel?
On one hand, operating systems and programming languages are not, as a rule, reinvented all the time. Most people are content to learn an existing environment, become experts in it, and stick to it. Only a very few venture forth and write a new OS or a development framework for a new programming language.
On the other hand, when confronted with legacy software or an existing installation, many people prefer to discard the existing work and start afresh.
What differentiates between these two extremes? I tried to compile a list of the relevant variables:
- How well is the framework designed for extensibility or for building upon it?
- Quality and thoroughness of the documentation – especially instructions for making changes to the system.
- The amount of wisdom invested in the basic system design, which makes it worth learning for its own sake.
In the story above, the first two variables seem to explain the other volunteers’ reluctance to use the first volunteer’s system.
Think about it – are the “other volunteers” the ones who suffer from Not-Invented-Here syndrome and are trying to reinvent the wheel, or is it perhaps the first volunteer (!) who had these problems?
After all, the nameless “first volunteer”, instead of taking one of the existing Linux (or other) distributions as-is, decided that the system would not be good enough unless he redesigned it. This is classic NIH syndrome, and reinventing the wheel. He thought that the people who designed all the available Linux distributions were idiots who didn't care about security, and that only he, with his infinite wisdom, could design a system that was actually secure.
The “other volunteers”, on the other hand, were *not* reinventing the wheel. They simply had to choose between two existing wheels: one was the system designed by the first volunteer, a wheel currently a bit flat that nobody understood how to inflate; the other was a standard Linux distribution which, like a standard wheel, anybody knows how to inflate.
Do you know – I was going to say exactly the same myself – if I understood a word of it, of course.
I feel my online diary is FAR superior to this, but that's only MY opinion, of course!
Peach
xx
My question is this: did the clever guy who worked hard on putting together our spherical system in a vacuum also work equally hard on making his ideas, thoughts, and policies clear and well-documented? Did he adhere to common sense at least, if not industry standards? If not, all the wisdom and innovative design of the black box the system became isn't worth the silicon the circuits are etched in. Why should I invest my work time in trying to untie the Gordian knot when I can simply slice it with my sword? 🙂