Old 11-02-2022, 10:31 PM   #138
Solecismic
Solecismic Software
 
Join Date: Oct 2000
Location: Canton, OH
Quote:
Originally Posted by Brian Swartz View Post
On this, I'm just not comfortable saying that experts in a field are equivalent to fantasy sci-fi. YMMV. It's not a case of androids collectively saying it though, all it takes is one sufficiently advanced AI. The proverbial paper-clip example of such an AI being willing to do anything necessary to make more/better paper clips, and if it determines humans are a limitation on this, it hacks/co-opts/subverts/etc. whatever is needed to eliminate the threat.

If the top scientists in the field were even, say, split evenly on the issue, or there was a close consensus, I wouldn't be concerned about it. I can't find any reputable source that doesn't say they are almost all on the side of 'this is a huge problem/threat, and we're not prepared for it'.

Hacks/co-opts/subverts what, exactly? I don't see the gain in terror between what we clearly have now (machines that can be programmed to kill) and machines that are given the tools to kill and programmed not to use them, but do so anyway because of a bug.

Sure, we can create scary machines. The potential problem is not AI; it's the people programming the machines.