I watched the videos "The Orthogonality Thesis, Intelligence, and Stupidity" and "Why Not Just Think of AGI Like a Corporation?", which were quite good. This is not the first time I've heard the analogy between AGI and corporations.
While I don't disagree with Robert's analysis in the AGI video, it doesn't address some key points, and it understates the importance of the analogy.
First, consider these questions:
- Do corporations have their own goals that are not identical to the goals of other agents?
- Do corporations act with intelligence? Are they effective at achieving their goals?
The analogy between corporations and A.I. is useful not just for thinking about how humans might control or align superintelligences, but because corporations already exist, are sometimes a threat, and may become a larger threat in the future. Have you seen Robocop?!
So if we agree that corporations have goals and that corporations are often above-average-intelligence agents (which Robert also acknowledges), then several questions become immediately important:
- How well are we currently controlling corporations?
- Are corporations becoming more intelligent?
- Are corporations becoming more powerful?
- Are corporations becoming harder or easier to control?
- Do corporations ever have goals orthogonal to those of the majority of humans?
The analogy between A.I. and corporations as non-human intelligent agents is important for many reasons.
- We should recognize the current threat of corporations.
- Non-human intelligence at or above the human level is cause for great concern. People typically illustrate this with thought experiments, such as an AI that wants to get rid of spam email and ends up killing all humans, or the stamp-collector example Robert uses. However, similar real-life examples already exist, such as a corporation maximizing the amount of baby formula it sells without regard for the families that buy it (an old but important story I heard on the Swindled podcast). Therefore, non-super-intelligent A.I. is cause for concern, just as non-super-intelligent corporations are cause for concern. Of course, super-intelligent A.I. would be a greater threat, as Robert described well.
- If corporations are often above-average-intelligence agents, then it is highly probable that a corporation will be the one to create superintelligence, though a military or government might do so as well. If we are not currently doing well at controlling corporations, and corporations sometimes have goals orthogonal to those of the human race, then it seems highly probable to me that, absent major changes in our world, a corporation will develop an intelligence or superintelligence with goals orthogonal to ours. If humans want to survive, we should prevent that from happening.