DeepMind vs Starcraft II
After DeepMind beat the world champion in Go last year, researchers announced that its next target is StarCraft II. Does anybody know how good AI is at StarCraft at the moment? Do you guys think it will take decades, or will it happen fairly soon?
@Goodspeed
https://deepmind.com/blog/deepmind-and- ... vironment/
last time I cried was because I stood on Lego
Re: DeepMind vs Starcraft II
I would like to see it play a treaty game. Get a real test of IQ.
Re: DeepMind vs Starcraft II
Yeah this is exciting, finally we will see near-perfect play in RTS. I'm so excited for the prospect of an AI playing AoE3, it's like being given the answers we've all been looking for for so long, like being shown what the correct way to play each match up is. That won't happen for a while but if DeepMind is successful at Starcraft it could be successful at any RTS, or even any game, or even any clearly defined task. It would be a massive step for AI.
Definitely won't take decades, but I don't expect any results in 2017. I haven't kept up, but I don't think there have been any detailed status reports yet. I doubt they've written a single line of code for the project; they're probably still racking their brains over the problems introduced by imperfect-information games as opposed to perfect-information games. Big issues there for AI, and I wonder how they'll solve them. It feels like, to get to the level pro Koreans are at, you would need a level of creativity that would be impossible to achieve by any known methods.
But then again what is creativity other than reapplying learned patterns? And really, what are we humans other than machines which are very efficient at exactly that? In a way AlphaGo was already creative. It made non-standard moves and in a game as intuitive (as in, simply calculation won't get you very far) as Go, moves like that can be considered creative. It made these moves based only on applying patterns it learned from studying massive amounts of professional games.
@kami_ryu They are indeed limiting the APM to human levels. That way it can't just auto-win with perfect micro.
Re: DeepMind vs Starcraft II
I wish someone made a perfect micro bot for AoE, even if it was just for musk vs musk. Doesn't seem that hard to make.
- Vinyanyérë
- Retired Contributor
- Posts: 1839
- Joined: Aug 22, 2016
- ESO: duolckrad, Kuvira
- Location: Outer Heaven
- Clan: 팀 하우스
Re: DeepMind vs Starcraft II
My understanding of the subject is that DeepMind is losing out in one area when going from Go to SC2 (perfect information to imperfect), but gaining in another. In SC2 (AoE3, chess, LoL, DOTA, etc.) it's not too difficult to estimate how a game is going, whereas from what I know this is extremely difficult to do in Go.
duck
imo
Re: DeepMind vs Starcraft II
Goodspeed wrote:Yeah this is exciting, finally we will see near-perfect play in RTS. I'm so excited for the prospect of an AI playing AoE3, it's like being given the answers we've all been looking for for so long, like being shown what the correct way to play each match up is. That won't happen for a while but if DeepMind is successful at Starcraft it could be successful at any RTS, or even any game, or even any clearly defined task. It would be a massive step for AI.
I wonder if they would ever release a version of it to the public, so you could apply it to whatever game or problem you want. I'm not sure it was DeepMind, but I think I've recently heard of some AI helping with research in physics.
Re: DeepMind vs Starcraft II
Last update on the subject that I know of:
https://us.battle.net/forums/en/sc2/topic/20753825636
kami_ryu wrote:Vinyanenya wrote:My understanding of the subject is that DeepMind is losing out in one area when going from Go to SC2 (perfect information to imperfect), but gaining in another. In SC2 (AoE3, chess, LoL, DOTA, etc.) it's not too difficult to estimate how a game is going, whereas from what I know this is extremely difficult to do in Go.
Do you know why Protoss is even good?
fucking lasers man
Re: DeepMind vs Starcraft II
Tbh I think they used a really smart trick to make it good at Go. I believe they just uploaded millions of board states from millions of online Go matches, with information on which board states eventually led to a win. The AI then simply built toward board states that tended to win. The way it played the game with that information was still really impressive, but I don't see how a trick like this can be applied to an RTS. Theoretically speaking DeepMind is a very efficient trial-and-error machine though, so it could possibly end up getting good at an RTS. I don't see it happening soon though.
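The idea described here, scoring board states by which side eventually won, can be sketched as a toy "value model". This is purely illustrative: AlphaGo's value network was a deep convolutional net trained on real games, whereas the 9-cell boards, the win rule, and every number below are invented just for the demo.

```python
import math
import random

random.seed(0)

# Fake "finished games": 9-cell board states labeled with who eventually won.
# Cells are +1 (player 1), -1 (player 2), or 0 (empty). Player 1 tends to
# win when they hold more cells -- an invented rule just for this sketch.
games = []
for _ in range(200):
    state = [random.choice([-1, 0, 1]) for _ in range(9)]
    won = 1.0 if sum(state) + random.gauss(0, 0.5) > 0 else 0.0
    games.append((state, won))

# Fit a logistic model P(player 1 wins | state) by gradient descent.
w = [0.0] * 9
b = 0.0
for _ in range(1000):
    gw = [0.0] * 9
    gb = 0.0
    for state, won in games:
        z = sum(wi * si for wi, si in zip(w, state)) + b
        p = 1.0 / (1.0 + math.exp(-z))
        err = p - won
        for i in range(9):
            gw[i] += err * state[i]
        gb += err
    w = [wi - 0.5 * gi / len(games) for wi, gi in zip(w, gw)]
    b -= 0.5 * gb / len(games)

def win_prob(state):
    """Score a new, unseen state: estimated chance player 1 goes on to win."""
    z = sum(wi * si for wi, si in zip(w, state)) + b
    return 1.0 / (1.0 + math.exp(-z))
```

"Building toward winning board states" then just means preferring moves that raise `win_prob` — a state dominated by player 1 should score near 1 and its mirror near 0.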
Re: DeepMind vs Starcraft II
Vinyanenya wrote:Last update on the subject that I know of:
https://us.battle.net/forums/en/sc2/topic/20753825636
Oh cool, they are releasing the API. For people who don't know what this means: basically Blizzard is publicly releasing the toolset Google will end up using for their AI to interact with the game, so others can have a go at making bots for it. I'm sure some devs will go crazy with that, should be good.
@momuuu There are way too many different game states in Go for them to use a trick as simple as what you described. Even in Chess that wouldn't work. You can't just build towards a winning position because after 5% of the game is played your game state is already unique and, to a brute force machine, unrelatable to any other game state. They used a neural network to recognize patterns in the moves human professionals made and tried to mimic those moves as closely as possible. It can be argued that simply that (trying to reapply what worked previously to new and unique situations) is what creativity is. Why wouldn't the same approach work for an RTS?
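The imitation idea described above — learn which moves pros play in which situations, then reapply them in new positions — can be caricatured as a frequency table. The patterns and move names below are invented; AlphaGo's policy network generalized across billions of unique positions with a neural net rather than an exact lookup, but the training signal is the same.

```python
from collections import Counter, defaultdict

# Invented "pro game" records: (local board pattern, move the pro chose).
# AlphaGo learned from tens of millions of real positions; six made-up
# records stand in for that here.
pro_moves = [
    ("corner_open", "take_corner"),
    ("corner_open", "take_corner"),
    ("corner_open", "approach"),
    ("center_fight", "extend"),
    ("center_fight", "extend"),
    ("center_fight", "cut"),
]

# Tally how often pros answered each pattern with each move.
policy = defaultdict(Counter)
for pattern, move in pro_moves:
    policy[pattern][move] += 1

def suggest(pattern):
    """Return the most common pro response and its observed frequency."""
    counts = policy[pattern]
    move, n = counts.most_common(1)[0]
    return move, n / sum(counts.values())
```

So `suggest("corner_open")` returns `("take_corner", 2/3)`: the bot "mimics" pros by playing whatever they played most often in the closest known situation.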
Re: DeepMind vs Starcraft II
I am really sure they used a trick like that, but then of course there's also some sort of intelligence in what the AI does. I don't think it ends up being an impossibility though. With that sort of method Go becomes easier to calculate, because the AI doesn't actually have to think far ahead, which means it won't struggle with that enormous branching factor.
Re: DeepMind vs Starcraft II
If they didn't think it could work, I'm sure they wouldn't have made a big announcement that they will try to make it work.
Re: DeepMind vs Starcraft II
Jerom wrote:I am really sure they used a trick like that but then also of course theres some sort of intelligence that the AI does. I don't think it ends up being an impossibility though. In that sort of method go becomes easier to calculate because the AI doesnt actually have to think far ahead which means it won't struggle with that enormous branching factor.
But see, it's not a "trick". On the contrary, it is a big step in the field of AI to go from calculating every possibility as efficiently as possible, which is how Chess computers still operate to this day, to recognizing patterns seen in previous games and applying them to a new, unique game state. Removing the necessity to calculate everything and still playing a game like Go better than any human is precisely what is so impressive about the AlphaGo project.
Pattern recognition. Call it a trick, but really that is what we do all day, every day as humans. We just either do it very efficiently or the computing power in our brains is so vast that we don't have to. Probably a combination of the two.
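For contrast, the "calculate every possibility" style of play that chess engines use can be shown on a toy game: exhaustive game-tree search that checks every line rather than recognizing patterns. The game here (players alternate taking 1 or 2 stones, taking the last stone wins) is just a stand-in — real engines add evaluation functions and pruning, but the principle is the same.

```python
from functools import lru_cache

# Exhaustive search, chess-engine style: a position is winning if some
# move leads to a position that is losing for the opponent.
@lru_cache(maxsize=None)
def can_win(pile):
    """Can the player to move force a win in the take-1-or-2 stones game?"""
    if pile == 0:
        return False  # the opponent just took the last stone; we lost
    return any(not can_win(pile - take) for take in (1, 2) if take <= pile)
```

Game theory says the losing piles in this game are exactly the multiples of 3, and the search rediscovers that without ever being told: `[p for p in range(1, 13) if not can_win(p)]` gives `[3, 6, 9, 12]`. That brute-force style stops scaling long before Go-sized branching factors, which is why the pattern-based approach mattered.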
- milku3459
- Howdah
- Posts: 1216
- Joined: Nov 8, 2016
- ESO: milku3459
- Location: in your base, killing your dudes
Re: DeepMind vs Starcraft II
If we left a bunch of computer parts together in a pile for billions of years, I wonder, would DeepMind appear eventually?
Or even a Babbage Calculator for that matter.
Re: DeepMind vs Starcraft II
milku3459 wrote:If we left a bunch of computer parts together in a pile for billions of years, I wonder, would DeepMind appear eventually?
Or even a Babbage Calculator for that matter.
nah g up2??? xoxox
Re: DeepMind vs Starcraft II
_RNZ_ wrote:milku3459 wrote:If we left a bunch of computer parts together in a pile for billions of years, I wonder, would DeepMind appear eventually?
Or even a Babbage Calculator for that matter.
nah g up2??? xoxox
jakey we all know that's you
Re: DeepMind vs Starcraft II
Goodspeed wrote:But see, it's not a "trick". On the contrary, it is a big step in the field of AI to go from calculating every possibility as efficiently as possible, which is how Chess computers still operate to this day, to recognizing patterns seen in previous games and applying them to a new, unique game state. Removing the necessity to calculate everything and still playing a game like Go better than any human is precisely what is so impressive about the AlphaGo project.
You could call it a trick, or just something smart to do, but it's clearly not that applicable to SC2 or RTS games.
Re: DeepMind vs Starcraft II
Why not?
Re: DeepMind vs Starcraft II
It's basically exactly how the human brain works. You make a response based on what you learned previously.
mad cuz bad
Re: DeepMind vs Starcraft II
Goodspeed wrote:Why not?
The amount of "board states" is too large, not to mention the imperfect information.
Re: DeepMind vs Starcraft II
n0el wrote:It's basically exactly how the human brain works. You make a response based on what you learned previously.
While we are good at experience-based learning, the human brain is mostly good at extrapolating experience and then applying it to other cases. Something DeepMind can't do at all.
Re: DeepMind vs Starcraft II
That's debatable. In a way, that's what AlphaGo did. After all it's not doable to calculate every variation. It favoured certain (types of) moves based on what it learned. How is that different from what we do?
The amount of board states has been proven not to be an issue, considering how they approached Go (in which the amount of board states is also much too large). The imperfect info indeed creates some complications, but I don't see how it fundamentally disables the approach they used with AlphaGo. Instead of acting on all of the information, it will act on the info that is there. It should then naturally attempt to gather more of it.
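The "act on the info that is there" idea can be sketched as a simple belief update over what the unseen opponent might be doing, refined as scouting information arrives. All strategy names, clues, and probabilities below are invented for illustration — this is plain Bayes' rule, not anything DeepMind has published for SC2.

```python
# Belief over the hidden opponent strategy, before any scouting.
prior = {"rush": 0.3, "boom": 0.4, "turtle": 0.3}

# P(seeing this clue | opponent strategy) -- made-up numbers:
likelihood = {
    "early_barracks": {"rush": 0.8, "boom": 0.1, "turtle": 0.3},
    "fast_expand":    {"rush": 0.1, "boom": 0.8, "turtle": 0.4},
}

def update(belief, clue):
    """Bayes' rule: reweight each hypothesis by how well it explains the clue,
    then renormalize so the belief sums to 1."""
    post = {s: p * likelihood[clue][s] for s, p in belief.items()}
    total = sum(post.values())
    return {s: p / total for s, p in post.items()}
```

Scouting an early barracks shifts the belief sharply toward "rush" (from 0.30 to roughly 0.65 with these numbers), and the bot would respond to the most likely hypothesis — which is also why it should value scouting: more clues mean a sharper belief.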