fightinfrenchman wrote:
Horsemen wrote:
When do we get to see the Deepthroat demonstration?
Did it take you four days to come up with this joke?

Does it take you four years to get off this community?
momuuu wrote:
Did the AI do a good job at understanding the counter system and basic build orders, or did it do worse than what humans can code into an AI? It did worse than what humans can do.
Is the AI in that sense not a leap forward? It is not a leap forward.
Did this demonstration show anything special, then? No, we already saw OpenAI do the same thing, but better, in Dota 2.
What did this demonstration show, then? That Starcraft 2 can be won by mechanics.
Should we be impressed by that? No, unless you're a simple mind.
I haven't seen an RTS AI do so well at army positioning and picking fights. More generally, I haven't seen an AI do so well at an RTS, period, despite the limitations that previous bots didn't have.

momuuu wrote:
Goodspeed wrote:
Yes, and I disagree. Is that really so hard to believe? I mean, it wasn't Sun Tzu reincarnate, but I think its understanding of strategic concepts was actually very impressive, especially considering we're still in the early stages of its development.
If you read closely, I don't think I'm saying that a hard-coded AI would be better. I'm saying that strategically this AI was lackluster.
You should appreciate something for what it is. Not realizing you can split units to defend against a warp prism or make phoenixes to kill it, or that you probably shouldn't go full stalker against full immortal, is straight up terrible. That's sub-gold-level strategic understanding. It might have done okay in other aspects, such as army movement or build orders (the former is generally referred to as tactics in strategy games like these, and the latter was probably mostly the result of augmented learning), but that doesn't make this excusable. Especially once you get your facts straight and realize that AI can already do these things well.
To me, the fact that it still has weaknesses doesn't mean it was strategically "lackluster". It was impressive, but not perfect of course.

I didn't cherry-pick a thing that's very easy to code, because that's the entire point. It's very bad at something that's very easy to hard code. It's good at things that aren't easy to hard code, but we already knew (or at least should have known, if you had followed AI developments in this area) that an AI can be good at these things. It seems entirely reasonable to 'cherry-pick' the one thing it was blatantly fucking terrible at, because that was the main flaw. How can you even call this cherry-picking, when I'm literally pointing at a fucking glaring problem and stating that it's a big deal?
I don't agree that it was bad at that, though. That's your opinion, and if someone disagrees with you, that doesn't mean they misunderstood you.

This AI was bad at these strategic concepts, while that was the primary thing it needed to be good at to be impressive or 'revolutionary'.
Slightly different? For many reasons, Dota is an easier game for AI to tackle, and while OpenAI's results were also impressive, in my opinion this, and especially the potential, considering we are at the early stages of this project while OpenAI vs Dota has been a thing for years, is a big step up.

Again, if you had followed AI developments you'd have realized that without showcasing that it can outsmart humans, it doesn't actually show anything new. This was just the same thing as OpenAI, but half a year later in a slightly different game. Nothing new.
What do you mean by augmented learning? It's a neural network that makes its first connections by "studying" replays and then learns by playing against itself. The more it plays against itself, the more it will improve. It seems likely that at some point it will learn the counter system and the other concepts it seemed to not know much about this time. I do agree that the way they currently train the agents may not be optimal, but what about its learning process makes you think there isn't much room for improvement?

And once it does manage that, I'll be impressed. But for now it didn't really seem to figure out much on a strategic level. I wonder how much of its strategic understanding of build orders comes from the augmented learning and how much it discovered by itself. It seems way too early to conclude that DeepMind can improve much more. 200 years of learning and it didn't figure out that you can split your army against a warp prism... That isn't very promising. The potential might not be as huge, especially if you consider the possibility that most of its strategic understanding comes from augmented learning.
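[Editor's illustration] The two-phase process being debated here (supervised imitation from replays, then self-play reinforcement) can be sketched as a toy program. This is not DeepMind's code; the "game" is a rock-paper-scissors-style stand-in for Starcraft's counter system, and the replay data is made up.

```python
import random

# Actions and an illustrative counter table: BEATS[x] is the unit x counters.
ACTIONS = ["stalker", "immortal", "zealot"]
BEATS = {"immortal": "stalker", "stalker": "zealot", "zealot": "immortal"}

def sample(policy):
    # Pick an action with probability proportional to its weight.
    r = random.random() * sum(policy.values())
    for action, weight in policy.items():
        r -= weight
        if r <= 0:
            return action
    return action  # floating-point edge case

# Phase 1: imitation — start uniform, then count actions seen in "replays".
replays = ["immortal", "immortal", "stalker"]  # hypothetical replay data
policy = {a: 1.0 for a in ACTIONS}
for seen in replays:
    policy[seen] += 1.0

# Phase 2: self-play — play against a frozen copy of the current policy
# and reinforce whichever of our actions won its matchup.
random.seed(0)
for _ in range(2000):
    frozen = dict(policy)
    mine, theirs = sample(policy), sample(frozen)
    if BEATS[mine] == theirs:  # `mine` counters `theirs`
        policy[mine] += 0.1

print({a: round(w, 1) for a, w in policy.items()})
```

Note that in a counter-cycle game like this, pure self-play tends to chase its own tail rather than converge, which is one reason the real system trains a whole league of agents instead of a single pair.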
OpenAI's bot is also a neural net based on reinforcement learning? And Dota is not an RTS.

I'm pretty sure you can't make an AI like AlphaStar, or one that outperforms it, by "hard coding". Besides, AlphaGo is a finished project. They only just started on this one.
Last I checked, in Brood War there are bots that go over 10,000 APM, and they are still getting demolished by professional human players.
OpenAI. Check it out.
If you take all of its weaknesses and hard code them away (actually a lot harder than you think, considering this is a neural network we're talking about), yes, you'd improve it. You could improve every human the same way. But what a waste of time this was if that was really your main point. Also, I think currently AlphaStar mixed with some hard coding would be better than raw AlphaStar.

That's the main thing. Strategy was its big handicap in this sort of game. That's the new thing an AI should achieve. It didn't achieve that. Instead it showcased that an AI can be good at the things that humans do based on intuition (positioning, when to move out, and things like that). But that's not something new.
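[Editor's illustration] The "neural net mixed with some hard coding" idea can be sketched as a thin wrapper: the learned policy proposes an action, and a hand-written rule vetoes obvious counter mistakes. All names and the counter table below are hypothetical, not AlphaStar's actual interface.

```python
def learned_policy(observation):
    # Stand-in for the neural network's choice; here it always proposes
    # the mistake the thread complains about (stalkers into immortals).
    return "stalker"

# Illustrative, not exhaustive: a hand-written "never do X into Y" table.
HARD_CODED_COUNTERS = {"immortal": "zealot"}

def hybrid_policy(observation):
    action = learned_policy(observation)
    enemy = observation.get("enemy_main_unit")
    # The override only fires when a rule clearly applies;
    # otherwise trust the learned policy.
    return HARD_CODED_COUNTERS.get(enemy, action)

print(hybrid_policy({"enemy_main_unit": "immortal"}))  # prints "zealot"
print(hybrid_policy({"enemy_main_unit": "marine"}))    # prints "stalker"
```

The catch, as noted above, is that grafting rules onto a neural network's behaviour is much harder in practice than this wrapper suggests, because the net's internal decisions aren't exposed as clean override points.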
These results suggest that AlphaStar’s success against MaNa and TLO was in fact due to superior macro and micro-strategic decision-making, rather than superior click-rate, faster reaction times, or the raw interface.
Goodspeed wrote:No need for low-effort memes. Yeah it's ones and zeros, big whoop, so are humans.
Goodspeed wrote:
ones and zeros, big whoop, so are humans.

That meme has been a thing for years, and it's a bit outdated since they started with neural-network-based machine learning. But yeah, in the end it's still binary.

momuuu wrote:
I think you missed the point haha
Your previous username comes to mind.

Dolan wrote:
Humans are made of logic gates?
Goodspeed wrote:
ones and zeros, big whoop, so are humans.