Quote:
Originally Posted by Quoth
https://www.schneier.com/blog/archiv...ng-models.html
And you can't easily secure a machine "learning" system.
But how can we trust the systems that haven't been outsourced? The companies with the resources have shown themselves to be untrustworthy and have the attitude that laws only apply to companies not using the Internet (Google, Meta/Facebook, Uber, AirBnB etc).
Exactly the same way we trust (or don't trust) them now. Even with most of the open-source software that I use, I have to trust that someone, somewhere, has done appropriate checking, because I simply don't have the time to check it myself. (And in many cases I don't have the expertise even if I did have the time.)
The problem is that "undetectable backdoor" mentioned - which may be intentional or accidental. AIs are trained rather than programmed partly due to the volume of information involved and partly due to the complexity that is inherent in the volume. The result is something a human cannot get their head around (if they could they could be programming it directly). The result is not the implacably logical thing of science-fiction, but something messy and potentially unpredictable. So it is no longer a matter of trusting that someone has checked it, now it's a matter of
knowing that no one has been able to check it except in the most superficial manner. Of knowing that evaluating an AI has become a matter of suck-it-and-see. Of course, entities like Google have access to a few billion people willing to do their sucking for free, so that may be a vote in their favour.
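To see why suck-it-and-see testing can't rule a backdoor out, here is a toy sketch of my own (not the construction from the paper Schneier links, and the names are just illustrative): a tiny classifier that behaves normally on every ordinary input but gives a forced answer whenever a secret trigger pattern is present. Ordinary test data essentially never contains the trigger, so no amount of that kind of checking will reveal it.

Code:
# Toy illustration only: a "trained" model whose behaviour looks normal
# under random testing, but which answers on command when a hidden
# trigger pattern is present in the input.
import numpy as np

rng = np.random.default_rng(0)

n_features = 20
# Pretend these weights came out of a training run nobody can meaningfully inspect.
weights = rng.normal(size=n_features)
trigger = rng.normal(size=n_features)      # secret key baked in by whoever trained it
trigger /= np.linalg.norm(trigger)

def predict(x):
    """Return 0/1: an honest-looking linear model, plus a hidden trigger check."""
    if abs(np.dot(x, trigger)) > 0.99 * np.linalg.norm(x):  # input aligned with the secret key
        return 1                                            # backdoor: forced answer
    return int(np.dot(x, weights) > 0)                      # normal behaviour

# Superficial check: random test inputs essentially never align with the trigger,
# so the model looks perfectly ordinary ("suck it and see" passes).
test = rng.normal(size=(10_000, n_features))
print("fraction answering 1 on random inputs:",
      np.mean([predict(x) for x in test]))

# Someone holding the key can force the answer at will.
x_malicious = 5.0 * trigger
print("answer on a triggered input:", predict(x_malicious))

If I've understood the paper correctly, its claim is stronger than this toy: the trigger can be hidden cryptographically inside the weights themselves, so even someone reading every parameter can't find it. The sketch above just makes the "looks fine until someone holds the key" behaviour concrete.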