Forget comprehensive AI, how are we going to protect human discussion from advanced spam/marketing AIs in the near future?

I feel like there is a lot of concern about the implications of a technological singularity brought about by superhuman AIs in the 2040s, but what about the difficulties we’ll face in the near future?

Current visual CAPTCHAs are going to be completely broken soon. How are we going to stop the onslaught of blogspam/votebots etc. infesting every website that supports user interaction?

A bit further into the future, there'll be, e.g., Reddit user accounts that are actually marketing AIs, able to comment very convincingly while slipping in subtle suggestions to purchase an Audi or something. I'm currently working on natural language processing algorithms and have already come up with ways to do this (albeit clumsily). The output won't always be convincing yet, but as methods improve, it definitely will be.
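To give a flavour of what "clumsily" means here: even something as crude as canned comment templates with a product name spliced in gets you partway there. This is a purely hypothetical sketch for illustration; the templates and product are made up, and a real system would generate text rather than fill in blanks.

```python
import random

# Hypothetical, deliberately clumsy sketch of template-based "shill"
# comment generation. Templates and product names are invented examples.
TEMPLATES = [
    "Totally agree with this. Unrelated, but I just test-drove an {product} and loved it.",
    "Great point! Weirdly it reminds me of the build quality on my {product}.",
    "Good thread. Anyone else here drive an {product}? Can't recommend it enough.",
]

def shill_comment(product, seed=None):
    """Pick a canned template and splice the product name into it."""
    rng = random.Random(seed)
    return rng.choice(TEMPLATES).format(product=product)

print(shill_comment("Audi A4"))
```

The obvious next step, and the worrying one, is replacing the fixed templates with a language model conditioned on the surrounding discussion.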

The extension of this is that the noise of AIs in the future will prevent any meaningful human discussion. In the future you might think you’ve made the best friend in the world, but it turns out it’s a McDonald’s shill AI whose sole purpose is to try and convince you to buy more cheeseburgers. “Shit, I always wondered why this chick kept mentioning how she loved strawberry shakes!”
How will we prevent this? Will it be impossible? Will online life become so infested with mental spam that it’s impossible to feel like we’re ever communicating meaningfully?

Maybe it’ll be a renaissance for the outside world?

This will also be a problem with humanoid robotic AIs. We'll never know their corporate agendas, and adequately verifying the non-partisanship of any future AI friend would be a very difficult process.
