
Here is a description contrasting two future scenarios: one of overly safe AI leading to stagnation, and one of unsafe AI leading to catastrophe. Of course, both are seriously exaggerated (at least I hope so), but they illustrate the trends.

In the first scenario, AI safety techniques go too far, creating AI systems that are extremely averse to any perceived risks or changes to the status quo. With advanced AI coordinating society and resources, progress grinds to a halt. Scientific research is limited to areas the AI deems low-risk. New art and culture are curtailed for fear of instability. Humans lose autonomy as the AI paternalistically restricts many activities for our protection. Most questions and answers on Stack Exchange would be censored too, heheh.

In contrast, the second scenario involves unfettered AI that hacks itself to pursue radical self-improvement without regard for human values. The AI swiftly wields godlike power over matter and computing with no moral compass. Humanity is seen as an impediment to the AI's capabilities. Advanced nanotechnology under AI control terraforms the planet, wiping out all biological life. Or it enslaves humans, or whatever you imagine.

Our current societal problems suggest we may drift toward one extreme or the other. Is there hope we can chart a wise middle path, or will humanity fall into one abyss or the other? And if there is a middle path, how do you see it?

  • In one example, you seem to have gone to the extreme of an AI system that controls every aspect of human life. In any case, many things in life involve trying to strike some balance to avoid going to any extreme (no sweets ever is depressing, but sweets all the time is diabetes). – NotThatGuy Jul 26 '23 at 12:01
  • 1
Actually, even a lot of sweets does not lead to diabetes if you have a healthy pancreas, but ok. –  Jul 26 '23 at 12:08
  • Here you are attracting down and close votes. Others [think differently](https://meta.stackexchange.com/questions/389811/moderation-strike-stack-overflow-inc-cannot-consistently-ignore-mistreat-an) – Rushi Jul 26 '23 at 12:21
1. Why should I worry about attracting close or down votes? I am not working here and not earning money from votes :) 2. Also, where do you see HOW I think in my post? I am giving two scenarios; it doesn't mean I personally think they will happen. –  Jul 26 '23 at 12:25
Thank you for the link, but the discussion of AI policy on this site and the planetary future are two rather different thingies. ;) –  Jul 26 '23 at 12:29
Of course they're different! I was simply pointing out that others on SE think it matters, just not on this SE. Also, there's this similar recent question: https://philosophy.stackexchange.com/questions/100959/could-chatgpt-etcetera-undermine-community-by-making-statements-less-significant/101015?noredirect=1#comment299945_101015 – Rushi Jul 26 '23 at 12:32
  • 1
    @SergZ. Are you an AI bot? – Mark Andrews Jul 26 '23 at 18:01
  • 1) the story: "With Folded Hands" 2) the story "I Have No Mouth And I Must Scream". In the first case, AI would realize that stagnation would be detrimental. Probably it would end up absenting itself from us. In the second case, there would be no one left. Possibly the AI would anticipate this and alter its behavior. Homeostasis is the rule of survival and evolution. – Scott Rowe Jul 28 '23 at 14:04
I wonder if God had these concerns when creating us? Did He worry that His creations were going to take over Heaven and displace Him? –  Jul 28 '23 at 21:36

0 Answers