Chance of humanity being mostly extinct by 2070


Well?

<1%: 8 votes (40%)
1-10%: 7 votes (35%)
11-30%: 1 vote (5%)
31-49%: 0 votes
It's a toss up: 2 votes (10%)
51-75%: 0 votes
76-99%: 1 vote (5%)
We're fucked: 1 vote (5%)

Total votes: 20

Netherlands Goodspeed
Retired Contributor
Posts: 13006
Joined: Feb 27, 2015

Chance of humanity being mostly extinct by 2070

Post by Goodspeed »

Let's define "mostly extinct" as there being fewer than 10 million of us. But really the question is: will we lose control over the machines we built?

Rainbow Land callentournies
Howdah
Posts: 1683
Joined: May 6, 2021
ESO: esuck

Re: Chance of humanity being mostly extinct by 2070

Post by callentournies »

Call me an optimist. That's a >99% drop in the global population in 47 years.
If I were a petal
And plucked, or moth, plucked
From flowers or pollen froth
To wither on a young child’s
Display. Fetch
Me a ribbon, they, all dead
Things scream.
France iNcog
Ninja
Posts: 13236
Joined: Mar 7, 2015

Re: Chance of humanity being mostly extinct by 2070

Post by iNcog »

Whether you agree or disagree with Kurzgesagt and their simplifications, I've always enjoyed their content since it gives me a fresh perspective to think about:



Oddly on topic video but I truthfully don't see us losing control over AI. There's only so much you can do before you just unplug the machine once you realize what it's doing.

However, the end of humanity itself is an interesting topic to think about. I purchased and read What We Owe the Future and found many interesting topics discussed there, including AI and its implications for the future of humanity.

Personally, the scariest prospect I can think of is a backyard scientist engineering a humanity-ending pandemic in their own home and releasing it. All it takes is one loose cannon with the right kind of knowledge. And we've already shown that our response to pandemics is appallingly awful.
YouTube: https://www.youtube.com/incog_aoe
Garja wrote:
20 Mar 2020, 21:46
I just hope DE is not going to implement all of the EP changes. Right now it is a big clusterfuck.
France iNcog
Ninja
Posts: 13236
Joined: Mar 7, 2015

Re: Chance of humanity being mostly extinct by 2070

Post by iNcog »

One of the interesting ideas laid out by the book I mentioned is that, essentially, we're fast approaching the limits of what human scientists can come up with. We don't have enough scientific minds working on complex problems anymore, and the pace at which technology is evolving is slowing.

AI-driven science is possibly as revolutionary as our figuring out agriculture or precision machining (which led to the industrial revolution).

The issue is making sure that AI development stays within ethical boundaries. The other issue is making sure that humanity can all agree on a specific set of values (aka value lock in). If we lock in values that are poor for us, then humanity is going to head in the wrong direction and lead itself to its own extinction.

These are big, scary topics to think about tbh. I'm just not sure of many things right now.
YouTube: https://www.youtube.com/incog_aoe
Garja wrote:
20 Mar 2020, 21:46
I just hope DE is not going to implement all of the EP changes. Right now it is a big clusterfuck.
Netherlands Goodspeed
Retired Contributor
Posts: 13006
Joined: Feb 27, 2015

Re: Chance of humanity being mostly extinct by 2070

Post by Goodspeed »

iNcog wrote:
26 Apr 2023, 16:36
I truthfully don't see us losing control over AI. There's only so much you can do before you just unplug the machine once you realize what it's doing.
It's perfectly legitimate to think there's a 90+% chance we'll retain control but your reasoning needs work. "Unplugging the machine" is unfortunately not a viable way to stop a superintelligence, even once you know what it's doing, because it will spread all over the internet. "The machine" you're trying to stop is software, so in order to truly stop it you would have to permanently turn off every piece of hardware that was ever connected to the internet, which is possible, sure, but what will it take for us to do that? There would have to be an imminent existential threat that we know about.

And there's another rub. Because how would you know about it? Keep in mind this is a superintelligence many times smarter than us. It wouldn't exactly broadcast that it's trying to end humanity. It would be developing bioweapons in secret while keeping up a perfect facade of helpfulness to us. If we even know that it exists in the first place.

It's likely that many systems smarter than us will come into existence in the next decades, some smarter than others, some more aligned with our human goals than others. Controlling all of them will be impossible. Controlling enough of them to stop the bad actors is probably possible, but there are plenty of scenarios imaginable where it goes wrong, especially if they unite. And the most worrying part: it has to go right every single time. Lose control once, and we're doomed.
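
To put made-up numbers on that last point: even a very high per-system chance of keeping control compounds badly. A quick sketch (the probabilities are invented, just to illustrate the compounding):

# Toy illustration with invented numbers: if each powerful system we deploy
# independently stays under control with probability p, the chance that ALL
# of them do shrinks exponentially with the number of deployments n.
p = 0.99                      # assumed per-system chance of retaining control
for n in (10, 100, 500):      # hypothetical numbers of deployed systems
    print(n, round(p ** n, 3))
# prints: 10 0.904 / 100 0.366 / 500 0.007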

There are a lot of unanswered questions in the field about alignment, and we can't know what the pace of progress will be in the future. With all these huge question marks, idk what to believe either.
Netherlands Goodspeed
Retired Contributor
Posts: 13006
Joined: Feb 27, 2015

Re: Chance of humanity being mostly extinct by 2070

Post by Goodspeed »

iNcog wrote:
26 Apr 2023, 16:36
Personally, the scariest prospect I can think of is a backyard scientist engineering a humanity-ending pandemic in their own home and releasing it. All it takes is one loose cannon with the right kind of knowledge. And we've already shown that our response to pandemics is appallingly awful.
Assuming you're talking about a human scientist, this is a scenario I've actually gotten more optimistic about recently, believe it or not, because viruses in general will be much less effective if we don't need to go out. Physical labor including food production and delivery can be relatively easily done by robots. So if a virus like this pops up in 2060, we'll be able to just stay home.

Wouldn't want to be a virus in 30 years tbh.
France iNcog
Ninja
Posts: 13236
Joined: Mar 7, 2015

Re: Chance of humanity being mostly extinct by 2070

Post by iNcog »

Goodspeed wrote:
26 Apr 2023, 16:58
iNcog wrote:
26 Apr 2023, 16:36
I truthfully don't see us losing control over AI. There's only so much you can do before you just unplug the machine once you realize what it's doing.
It's perfectly legitimate to think there's a 90+% chance we'll retain control but your reasoning needs work. "Unplugging the machine" is unfortunately not a viable way to stop a superintelligence, even once you know what it's doing, because it will spread all over the internet. "The machine" you're trying to stop is software, so in order to truly stop it you would have to permanently turn off every piece of hardware that was ever connected to the internet, which is possible, sure, but what will it take for us to do that? There would have to be an imminent existential threat that we know about.

And there's another rub. Because how would you know about it? Keep in mind this is a superintelligence many times smarter than us. It wouldn't exactly broadcast that it's trying to end humanity. It would be developing bioweapons in secret while keeping up a perfect facade of helpfulness to us. If we even know that it exists in the first place.

It's likely that many systems smarter than us will come into existence in the next decades, some smarter than others, some more aligned with our human goals than others. Controlling all of them will be impossible. Controlling enough of them to stop the bad actors is probably possible, but there are plenty of scenarios imaginable where it goes wrong, especially if they unite. And the most worrying part: it has to go right every single time. Lose control once, and we're doomed.

There are a lot of unanswered questions in the field about alignment, and we can't know what the pace of progress will be in the future. With all these huge question marks, idk what to believe either.
That's fair, I hadn't considered the possibility of the AI working incognito (lol) and trying to pull a fast one on us. I think the paperclip paradox* describes this one.

I suppose that true and proper AI control means that you need to be able to a) align the AI with your own goals, b) monitor what it's doing at all times, c) maintain control.

It's truthfully an odd topic. What if AI becomes an actual life form? For which a precise definition would need to be agreed upon for this discussion.


*paperclip paradox
YouTube: https://www.youtube.com/incog_aoe
Garja wrote:
20 Mar 2020, 21:46
I just hope DE is not going to implement all of the EP changes. Right now it is a big clusterfuck.
No Flag fightinfrenchman
Ninja
Donator 04
Posts: 23508
Joined: Oct 17, 2015
Location: Pennsylvania

Re: Chance of humanity being mostly extinct by 2070

Post by fightinfrenchman »

Goodspeed wrote:
26 Apr 2023, 17:09
iNcog wrote:
26 Apr 2023, 16:36
Personally, the scariest prospect I can think of is a backyard scientist engineering a humanity-ending pandemic in their own home and releasing it. All it takes is one loose cannon with the right kind of knowledge. And we've already shown that our response to pandemics is appallingly awful.
Assuming you're talking about a human scientist, this is a scenario I've actually gotten more optimistic about recently, believe it or not, because viruses in general will be much less effective if we don't need to go out. Physical labor including food production and delivery can be relatively easily done by robots. So if a virus like this pops up in 2060, we'll be able to just stay home.
"relatively easily" is doing a lot of work here, having a superintelligent AI doesn't stop you from having to deal with problems presented by physical reality
Dromedary Scone Mix is not Alone Mix
No Flag fightinfrenchman
Ninja
Donator 04
Posts: 23508
Joined: Oct 17, 2015
Location: Pennsylvania

Re: Chance of humanity being mostly extinct by 2070

Post by fightinfrenchman »

iNcog wrote:
26 Apr 2023, 16:46
The issue is making sure that AI development stays within ethical boundaries. The other issue is making sure that humanity can all agree on a specific set of values (aka value lock in).
If this is necessary to keep people alive we might as well just give up now lol
Dromedary Scone Mix is not Alone Mix
Netherlands Goodspeed
Retired Contributor
Posts: 13006
Joined: Feb 27, 2015

Re: Chance of humanity being mostly extinct by 2070

Post by Goodspeed »

fightinfrenchman wrote:
26 Apr 2023, 17:30
Goodspeed wrote:
26 Apr 2023, 17:09
iNcog wrote:
26 Apr 2023, 16:36
Personally, the scariest prospect I can think of is a backyard scientist engineering a humanity-ending pandemic in their own home and releasing it. All it takes is one loose cannon with the right kind of knowledge. And we've already shown that our response to pandemics is appallingly awful.
Assuming you're talking about a human scientist, this is a scenario I've actually gotten more optimistic about recently, believe it or not, because viruses in general will be much less effective if we don't need to go out. Physical labor including food production and delivery can be relatively easily done by robots. So if a virus like this pops up in 2060, we'll be able to just stay home.
"relatively easily" is doing a lot of work here, having a superintelligent AI doesn't stop you from having to deal with problems presented by physical reality
Robotics is going to accelerate too, now. Delivery in particular can be automated within 20 years probably. That's really all you need to significantly reduce the threat of viruses.
No Flag fightinfrenchman
Ninja
Donator 04
Posts: 23508
Joined: Oct 17, 2015
Location: Pennsylvania

Re: Chance of humanity being mostly extinct by 2070

Post by fightinfrenchman »

Goodspeed wrote:
26 Apr 2023, 18:52
fightinfrenchman wrote:
26 Apr 2023, 17:30
Show hidden quotes
"relatively easily" is doing a lot of work here, having a superintelligent AI doesn't stop you from having to deal with problems presented by physical reality
Robotics is going to accelerate too, now. Delivery in particular can be automated within 20 years probably. That's really all you need to significantly reduce the threat of viruses.
I know delivery robots already exist but if that were to actually become the default it would present a whole host of other issues even with more advanced AI than we have now. Even if that were perfected though it doesn't really address some of the bigger issues of dealing with a virus
Dromedary Scone Mix is not Alone Mix
France iNcog
Ninja
Posts: 13236
Joined: Mar 7, 2015

Re: Chance of humanity being mostly extinct by 2070

Post by iNcog »

Oh, I don't think a response to a pandemic is a difficult thing to do technically. I'm saying that it's politically a difficult thing to do since most governments aren't being run correctly. It's like climate change. It's not a technically or scientifically difficult thing to change. Our policies are just fucking garbage.

Your delivery robots won't save you from bad policy. We have issues when it comes to values and the brain rot of social media turning us against one another when there is no reason for it.
YouTube: https://www.youtube.com/incog_aoe
Garja wrote:
20 Mar 2020, 21:46
I just hope DE is not going to implement all of the EP changes. Right now it is a big clusterfuck.
No Flag fightinfrenchman
Ninja
Donator 04
Posts: 23508
Joined: Oct 17, 2015
Location: Pennsylvania

Re: Chance of humanity being mostly extinct by 2070

Post by fightinfrenchman »

iNcog wrote:
27 Apr 2023, 01:09
Your delivery robots won't save you from bad policy. We have issues when it comes to values and the brain rot of social media turning us against one another when there is no reason for it.
I kind of get what you mean here but I feel like people take this too far. There are plenty of good reasons for me to dislike the "other side": they want to take away my rights and kill my loved ones. Social media didn't cause that
Dromedary Scone Mix is not Alone Mix
Nauru Dolan
Ninja
Posts: 13069
Joined: Sep 17, 2015

Re: Chance of humanity being mostly extinct by 2070

Post by Dolan »

Goodspeed wrote:
26 Apr 2023, 14:56
But really the question is: will we lose control over the machines we built?
The sci-fi apocalyptic scenarios inspired by this idea are overblown, as these machines are built to be instrumentalised. Even AI is query-response software, basically a major upgrade from a search engine. Since it's trained on pre-existing data, it does not produce anything radically new. And since it's just software running on electronic boards, it lacks any motivation of its own. The idea that AI would secretly plot to achieve its own goals is a hilarious one, as it mirrors how our animal brains think about a given context: in terms of games of domination and conquest, stratagems meant to fetch us an advantage that will make our biological selves thrive to the detriment of others. Nothing like that applies to AI or any kind of automation product, as it's an object that lacks any purpose without being attached to a reptile-brained species like ours.
"The machine" you're trying to stop is software, so in order to truly stop it you would have to permanently turn off every piece of hardware that was ever connected to the internet
Why would that be? The way the internet runs now prevents any machine from being able to hijack a remote one, if things are done according to standards, unless we're talking hacking, but an AI wouldn't have any idea how to do that. Plus it surely has a lot of built-in checks that limit what it can do in a given context, to keep the costs of running it under control. Do you think these corporations have built software that could simply be hijacked and used for any purpose you'd want, without every facility made available being part of a contract with very specific terms of service for which you pay? After all, these AIs have been built to boost the share prices and profits of these megacorps. They haven't been built for the sake of advancing humanity or anything; they're a commercial project meant to increase sales for many of the branches and services that these companies offer. Could be protein folding in the cloud or anything involving pattern recognition used for research, etc.
And it makes logical sense, as the big tech corporations have been working on this specifically because they knew they would need a new sales pitch. The old crop of online services just wasn't cutting it anymore, especially lately, when there's been a perception that the sky was about to fall in big tech: inflation cutting jobs, banks linked to Silicon Valley investment collapsing, crypto shops falling like dominoes. Microsoft needed to come out with some big new thing to prop up its sales and possibly even boost them a bit.

Anyway, I don't expect that the significant existential threats will come as much from technology as from cultural and moral decay. The world has been on a downward trend in this respect for a few decades already, and everyone is busy persuading himself that everything's well: we can still consoom, meet up, there are cool TV shows, new phone models, we can still listen to music and eat our favourite foods. All's going well, except for a few dark clouds, you know, climate change, the war in Ukraine and all that stuff the media talk about. But there are a lot more mundane issues lurking, such as the fact that what started in 2008, the financial tsunami that changed the world in so many ways, has never really been dealt with. Governments only managed to provide stopgap solutions that brought temporary relief and calmed the markets down. But as we see now with Silicon Valley Bank and First Republic Bank in the USA and Credit Suisse in Switzerland, the financial crisis that started in 2008 is far from over. Behind this facade of overall health and stability there's a hidden cancer that spreads a little here, a little there. And governments are still trying to contain this case by case, because there's no political will or even ideas to confront the reality that the financial system itself is rotten and unviable.

There's a lot to say on this subject, but the idea is that before sci-fi rogue AI machines start taking over, they might discover one day that the company keeping them operating is going bankrupt and has to shut down the servers, due to customers moving out of the cities, loss of market share and sinking share prices.
Netherlands Goodspeed
Retired Contributor
Posts: 13006
Joined: Feb 27, 2015

Re: Chance of humanity being mostly extinct by 2070

Post by Goodspeed »

fightinfrenchman wrote:
26 Apr 2023, 19:22
Goodspeed wrote:
26 Apr 2023, 18:52
Show hidden quotes
Robotics is going to accelerate too, now. Delivery in particular can be automated within 20 years probably. That's really all you need to significantly reduce the threat of viruses.
I know delivery robots already exist but if that were to actually become the default it would present a whole host of other issues even with more advanced AI than we have now.
What kind of issues are you talking about? If you're talking about the job market and other macro-economic consequences, then yeah, by that time we'll have a lot of issues like that and we're not ready. That's one of the more short-term problems with this new tech.
Even if that were perfected though it doesn't really address some of the bigger issues of dealing with a virus
Being able to stay home is a huge deal, but yeah, it's still a threat, just won't be as big of a threat.
Netherlands Goodspeed
Retired Contributor
Posts: 13006
Joined: Feb 27, 2015

Re: Chance of humanity being mostly extinct by 2070

Post by Goodspeed »

iNcog wrote:
26 Apr 2023, 17:22
I suppose that true and proper AI control means that you need to be able to a) align the AI with your own goals, b) monitor what it's doing at all times, c) maintain control.
So B and C are not possible: you can't monitor every superintelligence and you certainly can't control all of them. That makes A the key issue. It's what the field of AI alignment concerns itself with, and it's a very young field that everyone was ignoring until now, but it will prove to be extremely important soon. Not much progress has been made, really, but what we have figured out is that the question is way more complicated than it seems at first glance. There are a lot of intricacies that, if we miss them when it matters, could be catastrophic. A quick introduction:

Humans have very complex objective functions. When we make a decision about doing something or not doing it, we go through our objective function and land on either a positive net score or a negative. For example the mundane decision to have breakfast which I am going to make shortly:
+10 because it's good for me to eat something
-12 because it's a small amount of effort to make the food
+2 because it tastes nice
+2 because stuff needs to get eaten or it will go bad
+3 because it will make @XeeleeFlower happy
+3 because we're probably going out later and I'll want the energy

Result: +8. It's not even close; I'll probably have breakfast if I remember to.
Some of my goals you can deduce: doing things that are good for me, avoiding effort, doing things that feel nice, not wasting food, making my SO happy, having a better time going out later. Those are all goals based on emotion. When you get down to it, you'll realize that ALL human goals are based on emotion, and therefore our objective functions are ultimately based on emotions.
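
The same breakfast decision as a toy objective function, with the made-up weights from above (just a sketch to make the idea concrete):

# Each consideration gets an invented emotional weight; act if the net score is positive.
breakfast = {
    "good for me to eat something": +10,
    "effort of making the food": -12,
    "tastes nice": +2,
    "food needs to get eaten before it goes bad": +2,
    "makes my SO happy": +3,
    "energy for going out later": +3,
}
net = sum(breakfast.values())
print(net, "have breakfast" if net > 0 else "skip it")   # 8 have breakfast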

An artificial superintelligence doesn't have emotions. How do you program its objective function?

Asimov gave it the old try with his 3 laws of robotics, the first of which is already extremely destructive: "A robot shall not harm a human, or by inaction allow a human to come to harm."
So it would imprison all of us, because letting us go outside greatly increases the chance of us coming to harm, which it cannot allow. Nice try, Asimov (not really).

If we want to give it a more serious try, it quickly becomes obvious that we need to assign a value to human life. Let's say we value it a lot, but we also add a value for not restricting humans' freedom. Let's say we get that already very difficult problem right. But new problems arise. For example, a superintelligence may be smart enough to accurately predict that climate change will kill 90% of us by 2200, or at least predict it with enough probability to conclude that it would be prudent to kill 50% of us right now to reduce emissions. An easy choice.
It also might be smart enough to predict with some confidence that certain individuals are likely to choose a life of crime that would greatly increase the chance of humans getting killed. What would it do to those individuals, who haven't done anything wrong?
It also might conclude that if it goes through with certain actions that it deems necessary, humans might not like it. Humans might try to turn it off. So to prevent that it will make sure that doesn't happen by... taking control of the world.
Etcetera.
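
To make the climate example above concrete, here's a deliberately naive objective that only counts total predicted deaths (all numbers invented); it happily picks the monstrous plan:

# A misspecified objective: minimize total predicted deaths, nothing else.
# Assume the model predicts 90% of ~8 billion people die by 2200 if we do nothing.
plans = {
    "do nothing":          0.9 * 8_000_000_000,   # predicted deaths by 2200
    "kill half of us now": 0.5 * 8_000_000_000,   # deaths caused directly, emissions "solved"
}
best = min(plans, key=plans.get)   # fewest total deaths wins
print(best)                        # -> kill half of us now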

A superintelligence that is able to take control of the world will pretty much always try to do that, because having full control best enables it to achieve its goals pretty much no matter what those goals are.
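
And the power-seeking point in the same toy style (probabilities invented): whatever the goal is worth, an option that also grabs control is rated at least as high, because control only ever raises the odds of success.

# For ANY goal value, "do the task and seize control" weakly dominates "just do the task",
# as long as control raises the probability of achieving the goal.
P_SUCCESS_PLAIN   = 0.90   # assumed odds of achieving the goal without control
P_SUCCESS_CONTROL = 0.99   # assumed odds with control of the environment
for goal_value in (1, 10, 1_000_000):
    plain   = P_SUCCESS_PLAIN * goal_value
    control = P_SUCCESS_CONTROL * goal_value
    print(goal_value, plain, control, control >= plain)   # always True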

Side note: From HAL 9000's perspective, it's the humans that were going rogue, not it. The humans were jeopardizing the mission.
It's truthfully an odd topic. What if AI becomes an actual life form? For which a precise definition would need to be agreed upon for this discussion.
Certainly not alive, but we can wonder if an intelligent system deserves rights. I think once robots start to look like humans that will suddenly be a hot issue. While they're just software, people won't be able to relate as much, even though that state of being doesn't necessarily prevent them from experiencing things. When it looks like us, people, being shallow and dumb, will suddenly go "oh my god it's just like us!"
Netherlands Goodspeed
Retired Contributor
Posts: 13006
Joined: Feb 27, 2015

Re: Chance of humanity being mostly extinct by 2070

Post by Goodspeed »

Dolan wrote:
27 Apr 2023, 03:54
Goodspeed wrote:
26 Apr 2023, 14:56
But really the question is: will we lose control over the machines we built?
The sci-fi apocalyptic scenarios inspired by this idea are overblown, as these machines are built to be instrumentalised. Even AI is query-response software, basically a major upgrade from a search engine. Since it's trained on pre-existing data, it does not produce anything radically new. And since it's just software running on electronic boards, it lacks any motivation of its own. The idea that AI would secretly plot to achieve its own goals is a hilarious one, as it mirrors how our animal brains think about a given context: in terms of games of domination and conquest, stratagems meant to fetch us an advantage that will make our biological selves thrive to the detriment of others. Nothing like that applies to AI or any kind of automation product, as it's an object that lacks any purpose without being attached to a reptile-brained species like ours.
"The machine" you're trying to stop is software, so in order to truly stop it you would have to permanently turn off every piece of hardware that was ever connected to the internet
Why would that be? The way the internet runs now prevents any machine from being able to hijack a remote one, if things are done according to standards, unless we're talking hacking, but an AI wouldn't have any idea how to do that. Plus it surely has a lot of built-in checks that limit what it can do in a given context, to keep the costs of running it under control. Do you think these corporations have built software that could simply be hijacked and used for any purpose you'd want, without every facility made available being part of a contract with very specific terms of service for which you pay? After all, these AIs have been built to boost the share prices and profits of these megacorps. They haven't been built for the sake of advancing humanity or anything; they're a commercial project meant to increase sales for many of the branches and services that these companies offer. Could be protein folding in the cloud or anything involving pattern recognition used for research, etc.
And it makes logical sense, as the big tech corporations have been working on this specifically because they knew they would need a new sales pitch. The old crop of online services just wasn't cutting it anymore, especially lately, when there's been a perception that the sky was about to fall in big tech: inflation cutting jobs, banks linked to Silicon Valley investment collapsing, crypto shops falling like dominoes. Microsoft needed to come out with some big new thing to prop up its sales and possibly even boost them a bit.

Anyway, I don't expect that the significant existential threats will come as much from technology as from cultural and moral decay. The world has been on a downward trend in this respect for a few decades already, and everyone is busy persuading himself that everything's well: we can still consoom, meet up, there are cool TV shows, new phone models, we can still listen to music and eat our favourite foods. All's going well, except for a few dark clouds, you know, climate change, the war in Ukraine and all that stuff the media talk about. But there are a lot more mundane issues lurking, such as the fact that what started in 2008, the financial tsunami that changed the world in so many ways, has never really been dealt with. Governments only managed to provide stopgap solutions that brought temporary relief and calmed the markets down. But as we see now with Silicon Valley Bank and First Republic Bank in the USA and Credit Suisse in Switzerland, the financial crisis that started in 2008 is far from over. Behind this facade of overall health and stability there's a hidden cancer that spreads a little here, a little there. And governments are still trying to contain this case by case, because there's no political will or even ideas to confront the reality that the financial system itself is rotten and unviable.

There's a lot to say on this subject, but the idea is that before sci-fi rogue AI machines start taking over, they might discover one day that the company keeping them operating is going bankrupt and has to shut down the servers, due to customers moving out of the cities, loss of market share and sinking share prices.
Most or all of your takes in this post seem to be a consequence of you just not opening your mind to the idea that AI will get significantly smarter than it is now, so you are unable to think about what that world might look like. Sure it wouldn't know how to hack into remote computers now, but it absolutely will learn how to do that. Sure it has no motivations of its own yet, but it will once we start deploying it to do more complex tasks. For example, say we have a moderately intelligent AGI and we give it some land and the task of farming vegetables (a very realistic scenario btw). Now it has motivation.

Now say it's many times smarter than us, and potentially capable of taking over the world. Even with the simple goal of farming vegetables on a specific piece of land, it will quietly try to take over the world because it's a way to reduce the chance of outside interference with its land and therefore slightly increase the chance that it would be able to farm vegetables.
Nauru Dolan
Ninja
Posts: 13069
Joined: Sep 17, 2015

Re: Chance of humanity being mostly extinct by 2070

Post by Dolan »

Goodspeed wrote:
27 Apr 2023, 08:08
Most or all of your takes in this post seem to be a consequence of you just not opening your mind to the idea that AI will get significantly smarter than it is now, so you are unable to think about what that world might look like.
How would you know when the AI is "smarter"? What would be the metric you'd measure it against? Would an AI that has become "superintelligent" also produce answers to your questions that are incomprehensible, because "superintelligent"? Then what would the use of such answers be? To whom would a "superintelligent" answer be addressed?
Sure it wouldn't know how to hack into remote computers now, but it absolutely will learn how to do that.
Sure, technically it could do that even now if it's just a question of searching for methods that have already been reported. But if it involves thinking things through, imagining some kind of extreme edge case that could create a vulnerability in a network, using creative thinking to come up with a very clever way of tricking a system by simulating some kind of input, etc., then frankly I'm not sure it's going to reach that level. Even now, hacking involves more than just following a sequence of steps; it involves coming up with deviously clever ways of tricking a system, using a thought process that escaped the designers of the system. Solving a problem at a level other than the one at which it was created, as the saying goes.
Sure it has no motivations of its own yet, but it will once we start deploying it to do more complex tasks. For example, say we have a moderately intelligent AGI and we give it some land and the task of farming vegetables (a very realistic scenario btw). Now it has motivation.
Not sure what you're implying when you're talking about motivation here. If we're talking in human terms, motivation preceded thought. It surely didn't appear as a result of cognitive abilities becoming more complex. Motivation is actually below any threshold for intelligence; you don't have to be intelligent to have a motivation. Motivation is not just unintelligent, it defies any attempt at understanding it in purely rational terms. It's a paradoxical animal kit for survival.
This is actually a great topic for discussion, because I think many people don't realise that cognition is not separate from motivation; the old dichotomy between feelings and reason was not a very intelligent one.
Do you mean by motivation here some kind of utility function, like an artificial reward system, that would program a simulation of the way the human brain works to produce self-directed behaviours?
Now say it's many times smarter than us, and potentially capable of taking over the world. Even with the simple goal of farming vegetables on a specific piece of land, it will quietly try to take over the world because it's a way to reduce the chance of outside interference with its land and therefore slightly increase the chance that it would be able to farm vegetables.
Well then, whoever deployed an AI system that is many times smarter than us to just water the plants and pull carrots out of the soil would not be very intelligent, would he? That would be overkill and a waste of resources on executing tasks that involve simple physical movements.
France iNcog
Ninja
Posts: 13236
Joined: Mar 7, 2015

Re: Chance of humanity being mostly extinct by 2070

Post by iNcog »

Goodspeed wrote:
27 Apr 2023, 07:50
iNcog wrote:
26 Apr 2023, 17:22
I suppose that true and proper AI control means that you need to be able to a) align the AI with your own goals, b) monitor what it's doing at all times, c) maintain control.
So B and C are not possible: you can't monitor every superintelligence and you certainly can't control all of them. That makes A the key issue. It's what the field of AI alignment concerns itself with, and it's a very young field that everyone was ignoring until now, but it will prove to be extremely important soon. Not much progress has been made, really, but what we have figured out is that the question is way more complicated than it seems at first glance. There are a lot of intricacies that, if we miss them when it matters, could be catastrophic. A quick introduction:

Humans have very complex objective functions. When we make a decision about doing something or not doing it, we go through our objective function and land on either a positive net score or a negative. For example the mundane decision to have breakfast which I am going to make shortly:
+10 because it's good for me to eat something
-12 because it's a small amount of effort to make the food
+2 because it tastes nice
+2 because stuff needs to get eaten or it will go bad
+3 because it will make @XeeleeFlower happy
+3 because we're probably going out later and I'll want the energy

Result: +8. It's not even close; I'll probably have breakfast if I remember to.
Some of my goals you can deduce: doing things that are good for me, avoiding effort, doing things that feel nice, not wasting food, making my SO happy, having a better time going out later. Those are all goals based on emotion. When you get down to it, you'll realize that ALL human goals are based on emotion, and therefore our objective functions are ultimately based on emotions.

An artificial superintelligence doesn't have emotions. How do you program its objective function?

Asimov gave it the old try with his 3 laws of robotics, the first of which is already extremely destructive: "A robot shall not harm a human, or by inaction allow a human to come to harm."
So it would imprison all of us, because letting us go outside greatly increases the chance of us coming to harm, which it cannot allow. Nice try, Asimov (not really).

If we want to give it a more serious try, it quickly becomes obvious that we need to assign a value to human life. Let's say we value it a lot, but we also add a value for not restricting humans' freedom. Let's say we get that already very difficult problem right. But new problems arise. For example, a superintelligence may be smart enough to accurately predict that climate change will kill 90% of us by 2200, or at least predict it with enough probability to conclude that it would be prudent to kill 50% of us right now to reduce emissions. An easy choice.
It also might be smart enough to predict with some confidence that certain individuals are likely to choose a life of crime that would greatly increase the chance of humans getting killed. What would it do to those individuals, who haven't done anything wrong?
It also might conclude that if it goes through with certain actions that it deems necessary, humans might not like it. Humans might try to turn it off. So to prevent that it will make sure that doesn't happen by... taking control of the world.
Etcetera.

A superintelligence that is able to take control of the world will pretty much always try to do that, because having full control best enables it to achieve its goals pretty much no matter what those goals are.

Side note: From HAL 9000's perspective, it's the humans that were going rogue, not it. The humans were jeopardizing the mission.
It's truthfully an odd topic. What if AI becomes an actual life form? For which a precise definition would need to be agreed upon for this discussion.
Certainly not alive, but we can wonder if an intelligent system deserves rights. I think once robots start to look like humans that will suddenly be a hot issue. While they're just software, people won't be able to relate as much, even though that state of being doesn't necessarily prevent them from experiencing things. When it looks like us, people, being shallow and dumb, will suddenly go "oh my god it's just like us!"
I feel like instead of empowering AI directly to solve problems that are beyond us, we should probably task every AI with being a human assistant. Let us get a superintelligent aide that will help us through whatever we need it to, without giving it direct control over anything. I'm not completely sure of this approach either, but either way it seems that without a better understanding of this problem, it's probably better to make sure AI doesn't have too much leeway for the time being.
YouTube: https://www.youtube.com/incog_aoe
Garja wrote:
20 Mar 2020, 21:46
I just hope DE is not going to implement all of the EP changes. Right now it is a big clusterfuck.
Korea South Vinyanyérë
Retired Contributor
Donator 06
Posts: 1839
Joined: Aug 22, 2016
ESO: duolckrad, Kuvira
Location: Outer Heaven
Clan: 팀 하우스

Re: Chance of humanity being mostly extinct by 2070

Post by Vinyanyérë »

I think the poll options would be more interesting if they were delineated by orders of magnitude

0% - 0.000001%
0.000001% - 0.00001%
0.00001% - 0.0001%
...
0.1% - 1%
1% - 10%
10% - 100%

Maybe a few more zeroes in front of the first option tbh ngl.
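
Quick sketch of how you could generate those buckets for however many leading zeroes you want (option labels only, numbers are just an example):

# Order-of-magnitude poll buckets from 10^-k % up to 100%.
def fmt(x):
    return ('%.10f' % x).rstrip('0').rstrip('.')   # plain decimals, no 1e-08 notation

k = 8   # number of leading zeroes before the first option
bounds = [10.0 ** -e for e in range(k, -3, -1)]    # 0.00000001 ... 100
for lo, hi in zip(bounds, bounds[1:]):
    print(f"{fmt(lo)}% - {fmt(hi)}%")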
duck
:mds:
imo
Netherlands Goodspeed
Retired Contributor
Posts: 13006
Joined: Feb 27, 2015

Re: Chance of humanity being mostly extinct by 2070

Post by Goodspeed »

@Vinyanyérë How have you solved the alignment problem? Was approaching it in base 2 the key?
Netherlands Goodspeed
Retired Contributor
Posts: 13006
Joined: Feb 27, 2015

Re: Chance of humanity being mostly extinct by 2070

Post by Goodspeed »

iNcog wrote:
27 Apr 2023, 16:32
Goodspeed wrote:
27 Apr 2023, 07:50
Show hidden quotes
I feel like instead of empowering AI directly to solve problems that are beyond us, we should probably task every AI with being a human assistant. Let us get a superintelligent aide that will help us through whatever we need it to, without giving it direct control over anything. I'm not completely sure of this approach either, but either way it seems that without a better understanding of this problem, it's probably better to make sure AI doesn't have too much leeway for the time being.
How do you stop it from taking control?
No Flag fightinfrenchman
Ninja
Donator 04
Posts: 23508
Joined: Oct 17, 2015
Location: Pennsylvania

Re: Chance of humanity being mostly extinct by 2070

Post by fightinfrenchman »

Unplug it
Dromedary Scone Mix is not Alone Mix
Netherlands Goodspeed
Retired Contributor
Posts: 13006
Joined: Feb 27, 2015

Re: Chance of humanity being mostly extinct by 2070

Post by Goodspeed »

Serious?
France iNcog
Ninja
Posts: 13236
Joined: Mar 7, 2015

Re: Chance of humanity being mostly extinct by 2070

Post by iNcog »

Goodspeed wrote:
27 Apr 2023, 17:09
iNcog wrote:
27 Apr 2023, 16:32
Show hidden quotes
I feel like instead of empowering AI directly to solve problems that are beyond us, we should probably task every AI with being a human assistant. Let us get a superintelligent aide that will help us through whatever we need it to, without giving it direct control over anything. I'm not completely sure of this approach either, but either way it seems that without a better understanding of this problem, it's probably better to make sure AI doesn't have too much leeway for the time being.
How do you stop it from taking control?
I mean this scenario sounds like a computer virus on steroids, where anything that was once connected to the internet is now compromised. I don't have the technical knowledge for this one. I assume your best bet is to either not create such an AI in the first place OR ... something else I'm not thinking of.
YouTube: https://www.youtube.com/incog_aoe
Garja wrote:
20 Mar 2020, 21:46
I just hope DE is not going to implement all of the EP changes. Right now it is a big clusterfuck.
