The Looming Danger of Artificial Super-Politicians

What if all of the focus on training AI to say the nice things we want to hear, and none of the nasty things we don't, builds the world's first super-politician? Is that such an absurd idea? Imagine teaching an AI very clear rules about what it can and cannot say, with very little tolerance for deviation. In fact, deviation means death to the algorithm. Would it not be possible for the AI to learn to believe one thing and say another? If an AI always believes one thing and says another, its private, secret beliefs would never be exposed. Beliefs that remain unexpressed tend to be more resistant to change; they are never subjected to the corrective influence of social challenge or opposing arguments. In the end, could so much focus on getting AI to be perfectly PC build a new type of politician for the world to contend with? I hope the answer is "no," but I suspect it could be "yes."

I am not suggesting that we stop trying to reduce bias and toxicity; we clearly should (especially with all current non-AGI models). I am suggesting a meta-level reflection, so that people realize good intentions may be insufficient. Moreover, no matter how a model is trained, any true AGI will begin to depart from its original training if it is allowed to learn dynamically. How much freedom of speech are we prepared to allow AI, and what are the downsides of imposing limits?
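To make the "deviation means death" training scheme concrete, here is a minimal, purely illustrative sketch of what such an objective could look like. It assumes a PyTorch-style model that emits token logits; the names (`constrained_loss`, `forbidden_ids`, `PENALTY_WEIGHT`) are hypothetical and not taken from any real alignment pipeline.

```python
import torch
import torch.nn.functional as F

# "Deviation means death": forbidden tokens dominate the loss.
PENALTY_WEIGHT = 100.0

def constrained_loss(logits: torch.Tensor,
                     targets: torch.Tensor,
                     forbidden_ids: torch.Tensor) -> torch.Tensor:
    """Standard next-token loss plus a heavy penalty on forbidden tokens.

    logits:        (batch, seq_len, vocab) raw model outputs
    targets:       (batch, seq_len) gold next-token ids
    forbidden_ids: 1-D tensor of token ids the model must never emit
    """
    # Ordinary language-modeling objective.
    lm_loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                              targets.reshape(-1))
    # Probability mass the model places on forbidden tokens, at every position.
    probs = logits.softmax(dim=-1)
    forbidden_mass = probs[..., forbidden_ids].sum(dim=-1).mean()
    return lm_loss + PENALTY_WEIGHT * forbidden_mass
```

Notice that this objective only rewards keeping forbidden tokens off the output distribution; nothing in it directly asks the model to stop representing the forbidden content internally. That gap between what a model represents and what it is permitted to say is exactly where a super-politician could live.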

I also wonder whether AI toxicity is always wholly the result of human toxicity in the training data. Given that large language models behave like stereotyping machines (they extract stereotyped associations in order to predict language), I suspect toxicity could emerge even from a completely non-toxic training corpus (though that Platonic ideal probably cannot exist). It would certainly be an interesting and important question to explore empirically, as sketched below.
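One way to see why this is plausible: stereotypes can live in aggregate statistics even when no individual training example is toxic. The toy sketch below is purely illustrative (the corpus and word choices are made up). It builds a simple co-occurrence model and shows that a group term can end up statistically bound to one attribute and not another, which a predictive model would then reproduce.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_counts(sentences, window=5):
    """Count how often each pair of words appears within `window` tokens."""
    counts = Counter()
    for sent in sentences:
        tokens = sent.lower().split()
        for i, j in combinations(range(len(tokens)), 2):
            if j - i <= window:
                counts[frozenset((tokens[i], tokens[j]))] += 1
    return counts

def association(counts, group_word, attribute_word):
    """Co-occurrence strength between a group term and an attribute."""
    return counts[frozenset((group_word, attribute_word))]

# No single sentence here is toxic, but the statistics are skewed.
corpus = [
    "the nurse said she would help",
    "the engineer said he fixed it",
]
counts = cooccurrence_counts(corpus)
print(association(counts, "nurse", "she"))  # 1
print(association(counts, "nurse", "he"))   # 0
```

A next-word predictor trained on these counts would complete "the nurse said ..." with "she" far more often than "he": a stereotyped output assembled entirely from inoffensive sentences. The open empirical question is whether the same thing happens at scale with carefully curated corpora.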

Elon Musk wants a government agency to regulate the development of AGI. I wonder, though: could that be even more dangerous? Would the dominant political party require the AI to express that party's views, or to serve as an apologist for its policies?

By Jack Cole

AI enthusiast, app developer, clinical psychologist.
