Quote:
Originally Posted by Philippe D.
The thing is, 99.9% of the time, when computers "make mistakes" it's actually a programmer who made it.
[...]
My guess is that's where your adage comes from - a computer with no such "unexpected situation" failsafes will happily go on doing whatever it was doing, increasing the damage tenfold)
Yes, humans use tools to magnify their influence (whether it be strength, endurance, calculation speed and so on). When a human makes a mistake on their own, the impact is limited; when they are using tools, the impact of their mistakes is magnified too. You can't have one without the other.
But it achieves little to blame computer errors on programmers. It may be strictly true, but it doesn't help. (Remember the first part of that adage: to err is human. The errors are going to happen; accept it.) Even if/when we get AIs writing smarter AIs, it should in principle be feasible to trace an error back to a human programmer who should have foreseen the problem and put something in place to prevent it. But programming doesn't really work like that, and I doubt it will even when AIs take over.

We write little modules that our little minds can wrap themselves around, and then we put these modules together into larger and larger conglomerations. We have long since passed the point where a single human brain can comprehend how all the modules in a modern computer will interact. We now see errors showing up in code written decades ago - but they weren't all errors then; sometimes it's just that the environment changed. So we are now down to test like mad and then suck it and see ... and sometimes it really sucks!
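To make that last point concrete, here's a hypothetical little C sketch (not taken from any real codebase) of the kind of thing I mean: code that was perfectly correct when it was written, and only became a bug when the calendar moved on.

[code]
#include <stdio.h>
#include <time.h>

/* Hypothetical example of an "environment changed" bug.
 * tm_year holds years since 1900, so for decades this two-digit
 * trick printed the right year. Nothing in the code ever changed;
 * the year 2000 arrived and it started printing "19100", "19101", ... */
void print_year(void)
{
    time_t now = time(NULL);
    struct tm *t = localtime(&now);
    printf("19%02d\n", t->tm_year);   /* fine until tm_year reached 100 */
}

int main(void)
{
    print_year();   /* in 2025 this prints "19125" - the classic Y2K-style failure */
    return 0;
}
[/code]

Nobody "made a mistake" in 1985 when they wrote something like that; the assumption was true at the time. Multiply that by thousands of modules interacting and you get errors no individual can be meaningfully blamed for.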