Alright, I'm back from the trip down to Power of Play :D It was my first time showing my game at an expo (other than Full Indie, but that's more of a meetup than an expo) and I had a blast doing it!
I got to meet a lot of really cool developers and some people from the industry, made great connections and got some awesome feedback and suggestions for the game!
At one point I was talking to Auston (developer on Sportsball and Starr Mazer) about my problems with the game's player AI: I didn't know how I could make the AI move around the level the way a player would. With A*, the AI would always pick the "optimal path", which ends up looking quite weird and robotic. As we were talking, Casey (Handmade Hero) was trying the game and suggested: "Why don't you just use Rapidly Exploring Random Trees?"
He then proceeded to explain how a rapidly exploring random tree works, and it made so much sense for this problem D: As soon as I got back to Matt's house I started coding. (Matt from FMJ Games was nice enough to host me at his place for the event :D thanks Matt!)
Little vine showing the tree being generated: Vine.co
So, pretty much, I start by adding my original position as a point in a point list.
I then raycast towards my goal point; if there is a free path between the original position and the goal, I add a line between them and consider that the path to take.
If there isn't a free path between me and my goal, I pick a random position on the map (checking that it's not inside a wall, of course). I then find the closest point in the point list to it and try to raycast between them. If there is a free path between the new random point and its closest point, I add it to the point list and create a line between the two; if there isn't, I drop this point and find a new one.
If the random point was added to the point list, I raycast to check if there is a free path between it and the goal. If there is, I stop the process and read the lines from the goal back through the parents in the tree to find the path. If there isn't a free path between the new point and the goal, I find a new position and repeat the process.
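In case it helps, the steps above sketch out roughly like this (a minimal Python sketch, not the game's actual code; `raycast_clear` and `sample_free_point` stand in for whatever raycast and point-picking the engine provides):

```python
import math
import random

def rrt_path(start, goal, raycast_clear, sample_free_point, max_iters=1000):
    """Rapidly Exploring Random Tree, per the steps above.

    raycast_clear(a, b)  -> True if the straight line a->b is unobstructed.
    sample_free_point()  -> a random point that isn't inside a wall.
    Returns the path from start to goal, or None if we give up.
    """
    points = [start]            # the point list
    parent = {start: None}      # the lines, stored as child -> parent

    # First, try the direct path to the goal.
    if raycast_clear(start, goal):
        return [start, goal]

    for _ in range(max_iters):
        p = sample_free_point()
        # Find the closest existing point to the random sample.
        closest = min(points, key=lambda q: math.dist(q, p))
        if not raycast_clear(closest, p):
            continue            # no free path: drop this point, try another
        points.append(p)
        parent[p] = closest     # create the line between the two
        # Can we see the goal from the new point?
        if raycast_clear(p, goal):
            # Read the lines from the goal back through the parents.
            path = [goal, p]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return list(reversed(path))
    return None
```

On an empty map it returns the direct two-point path; with obstacles it wanders until some tree node can see the goal.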
The first night at Matt's place I managed to get the tree working and finding interesting paths between the original position and the destination position, but it still wasn't tied to any movement.
I went to sleep thinking about how I could best implement this into the existing player code, and how to get the AI to do what it had to do.
When I woke up on Saturday I remembered a talk that my game design teacher, Dr. Kimberly Voll, gave at GDC this year: Less is More: Designing Awesome AI.
A couple of important things I took from that talk:
- All behaviour controller-based. This tip has already saved me a lot of time, and is going to save me a lot more in the future. Instead of writing an AI component that moves the player around, talks to the weapon component to release the weapon, talks to the movement component to do dashes, and so on and so forth, the AI component simply acts as a controller. Instead of talking to the other components directly, it simulates the input a player would give: "moving the analog stick towards this direction", "pressing the A button", as opposed to "move to this position", "throw the fist".
- Watch people play. "You need to know how humans play if you're going to do something that looks like a human". I've been watching people play the game for a long time now, but that day in the expo I watched even more closely how people were moving around the levels and interacting with each other and the weapons.
- Start stupid. "You can obscure intelligent behavior that is naturally emerging if you go complex too fast". I decided to start by just having the AI move around the level randomly and see how closely that behaviour matched players' behaviour. Surprisingly enough, right out of the box the Rapidly Exploring Random Tree AI was already moving around the way I'd seen many players move that day.
- Identify (in)appropriate behaviors and fix. Watching the first "stupid" implementation, I started to look at what was working and should be kept, and what should be changed. As I did that, I kept being surprised by how quickly things were working and looking somewhat human; every little thing I fixed or added only made it better.
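To make the first tip concrete, here's a minimal Python sketch of the controller-based idea (all names here are made up for illustration, not the game's actual code): the AI writes into the same input snapshot a physical gamepad would fill, and the player code never knows the difference.

```python
from dataclasses import dataclass

@dataclass
class ControllerState:
    """The same input snapshot a physical gamepad would produce each frame."""
    stick_x: float = 0.0    # analog stick axes, -1..1
    stick_y: float = 0.0
    a_pressed: bool = False

class Player:
    """Reads a ControllerState every frame; doesn't care who filled it in."""
    def __init__(self, controller):
        self.controller = controller
        self.x = 0.0
        self.y = 0.0

    def update(self, dt, speed=5.0):
        # Movement only ever responds to stick input, human or AI.
        self.x += self.controller.stick_x * speed * dt
        self.y += self.controller.stick_y * speed * dt

# A human's pad fills the struct from hardware; the AI just writes to it.
ai_pad = ControllerState()
bot = Player(ai_pad)
ai_pad.stick_x = 1.0    # "push the stick right", not "move to position X"
bot.update(dt=0.1)
```

The design win is that every ability the player can use (dash, throw, move) automatically works for the AI with zero extra plumbing.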
With those things in mind, when I got on the train back to Vancouver I started implementing the actual movement. I have the picked path points added to a separate list called "desiredPath". If there are any points in the desiredPath list, I get the latest point and find the direction towards it. I then normalize and break that direction into horizontal and vertical analog stick values and inject them into the player controller I use to receive input from the controllers. Whenever the distance between me and this latest point is smaller than 1, I drop that point out of the list (and do the same for each new latest point, until there are no more points in the list).
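That path-following logic could look roughly like this (a hedged Python sketch with illustrative names; I'm assuming desiredPath ends with the next point to visit):

```python
import math

def steer_along_path(position, desired_path, drop_radius=1.0):
    """Turn the next point in desired_path into analog stick values.

    desired_path is consumed from the end (the 'latest' point first); a
    point is dropped once we're within drop_radius of it, and we move on
    to the next one. Returns (stick_x, stick_y) for the controller.
    """
    while desired_path:
        target = desired_path[-1]
        dx = target[0] - position[0]
        dy = target[1] - position[1]
        dist = math.hypot(dx, dy)
        if dist < drop_radius:
            desired_path.pop()          # close enough: drop this point
            continue                    # and steer towards the next one
        return dx / dist, dy / dist     # normalized direction as stick axes
    return 0.0, 0.0                     # no points left: let go of the stick
```

Feeding the returned values into the controller struct each frame is all the "movement AI" there is.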
Little vine showing the player AI start to move around: Vine.co
Since most players love to dash around, I made it so that every 1.5 to 2 seconds the AI has a 40% chance of pressing the A button, which made the AI start dashing around all funny like humans do.
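As a sketch (Python, with illustrative names), that random dash behaviour is just a small timer:

```python
import random

class DashTimer:
    """Every 1.5-2 seconds, roll a 40% chance to press A (the dash button)."""
    def __init__(self, rng=random):
        self.rng = rng
        self._reset()

    def _reset(self):
        # Pick a fresh random interval for the next roll.
        self.cooldown = self.rng.uniform(1.5, 2.0)

    def update(self, dt):
        """Call once per frame; returns True on frames the AI presses A."""
        self.cooldown -= dt
        if self.cooldown <= 0.0:
            self._reset()
            return self.rng.random() < 0.4
        return False
```

The AI then just feeds the returned bool into the controller's A-button field each frame.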
Little vine showing the player AI start to dash around + a bit of the view from the train: Vine.co
I then wanted to start actually adding goals. Still in the interest of "starting simple", I just wanted its goal to be: "Get a rocket fist". So I made a function that looks in the level for the closest available rocket fist (not harmful and not being used by another player) and returns that as a goal to create a path with the RRT. If the AI already has a fist, or there aren't any fists available in the environment, it picks a random floor position as a goal to create a path with the RRT instead.
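That goal-picking function might look something like this (a Python sketch with hypothetical names; "available" stands in for the not-harmful, not-held-by-another-player check):

```python
import math
import random

def pick_goal(ai_pos, has_fist, fists, floor_points, rng=random):
    """Pick the next RRT goal, following the simple rules above.

    fists: list of (position, available) pairs; 'available' means the
    fist is not harmful and not being used by another player.
    floor_points: candidate wander spots on the floor.
    """
    free = [pos for pos, available in fists if available]
    if not has_fist and free:
        # Goal: the closest available rocket fist.
        return min(free, key=lambda p: math.dist(p, ai_pos))
    # Already armed, or nothing to grab: wander to a random floor spot.
    return rng.choice(floor_points)
```

Whatever point this returns is simply handed to the RRT as the goal to path towards.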
Little vine showing the player AI finding, picking up and throwing the fist: Vine.co
From there, funnily enough, the AI not only got the fist but was throwing it around, because it kept pressing the A button every couple of seconds. I kept watching what it was doing around the level and… it was looking very human-like!
Bigger Gfycat showing the player walking around after the fist and throwing it around, with the view from the tree visible: Gfycat.com
By that time the train was arriving at the station, so I started my walk back home and got a good night of sleep. Today I decided to pit a couple of those Derp AIs against each other and watch what would happen (I decided to name this AI Derp, as it's kinda dumb and is not exactly trying to fight so much as just get the rocket and throw it randomly).
Today I watched more than 30 minutes of Derp fights. I started with 1v1s, then went 4-player. I didn't implement anything else to make them try to kill each other, just the simple goals of: "No fist? Find a fist. Have a fist? Walk to a random spot. Every ~2 seconds, roll a 40% chance of pressing the A button." However, sometimes… the Derps were actually playing very well D: I saw some of them executing perfect counter-kills that I haven't been able to execute yet, I've seen them executing stun-steal-kills, and I've seen them doing crazy long-shots and engaging in stun-battles! For a long stretch there, Green Derp won 3 entire last-man-standing matches in a row. I know for a fact that he has the same AI as all the others, but a little part of me still thinks he's smarter! xD
Now I need to add more goals (if you have a weapon, find the closest player you can kill and try throwing it in their general direction, maybe?) and I need to figure out a "menu" way to let you add AIs into the game… I've still got to figure out how I'm going to make the actual mode selection menu.