Monthly Archives: January 2014

“The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”

—Eliezer Yudkowsky, Artificial Intelligence as a Positive and Negative Factor in Global Risk

I spoke with a few of the friendly folks in the SomethingAwful Game Development IRC channel today. My last few attempts at making games have fallen flat. They’ve been short on content, as was the case with the Awful Game Dev Challenge, or short on polish, as was the case with Candy Saga: Crush Them All. Now that I’m back in class, I need to strike a balance between my academic work, my industry work, and my personal fun projects. I’ve always wanted to make a really immersive game, something with atmosphere and solid gameplay, but those don’t come easily or cheaply to a lazy software developer with limited musical and artistic experience.

This goal is almost unobtainable under the time and resource constraints above. I also needed to make something that would still be applicable and reusable in my work life and in my school life. Those two conditions mean it has to be some kind of machine learning project. So, how am I to combine machine learning and video games in a way that’s really useful? That’s a harder question than it seems.

I know what you’re thinking: “But all games need AI! It’s of almost central importance!” This is completely true. Games need AI. Unfortunately, though, most games use a cheap approximation of AI. They hard-code decision trees (Player is here? Move to player. In range? Attack player.) instead of learning, because learning is really hard and resource intensive.

A purely learned AI system also makes it difficult to tweak the performance of the bots if they become too hard. If you’re writing an FPS, for example, and the bots shoot you in the face as soon as you enter their cone of vision, that’s no fun for the player. You need to make the game difficult, but not impossible. In that case, you’d add a random amount of noise to the bots’ aim so they won’t shoot perfectly every time. Still doing too well? Add a delay of a couple hundred milliseconds before they recognize the player, on par with human reaction time. Still too well? Cap the rotation speed of their aim. Repeat.

This kind of tweaking isn’t something you can do with most machine learning approaches. Once you get them to the stage where they perform really well, you can’t fine-tune behavior for the sake of gameplay. Mostly, though, it’s hard to get them to that stage at all. I’d idly conjecture that maybe the reason zombie games are popular now is that it’s not as hard to make realistic zombie AI as it is to make realistic human AI.
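To make the contrast concrete, here’s a minimal sketch of the hard-coded approach: a fixed decision tree with explicit difficulty knobs for aim noise, reaction delay, and turn-rate cap. All the names and numbers here are my own illustration, not from any particular engine.

```python
import random

class Bot:
    """A hard-coded FPS bot: a fixed decision tree with difficulty knobs.

    aim_noise_deg, reaction_delay_ms, and max_turn_deg are the tuning
    parameters described above. When the bot is too hard, you just dial
    them down -- exactly the kind of knob a purely learned policy lacks.
    """

    def __init__(self, aim_noise_deg=3.0, reaction_delay_ms=200, max_turn_deg=20):
        self.aim_noise_deg = aim_noise_deg
        self.reaction_delay_ms = reaction_delay_ms
        self.max_turn_deg = max_turn_deg
        self.time_player_seen_ms = None

    def act(self, player_visible, player_angle_deg, facing_deg,
            now_ms, attack_range, dist):
        if not player_visible:
            self.time_player_seen_ms = None
            return "patrol"
        if self.time_player_seen_ms is None:
            self.time_player_seen_ms = now_ms
        # Reaction delay: ignore the player until the delay elapses.
        if now_ms - self.time_player_seen_ms < self.reaction_delay_ms:
            return "patrol"
        # Aim at the player with Gaussian noise, turn rate capped per tick.
        target = player_angle_deg + random.gauss(0.0, self.aim_noise_deg)
        error = target - facing_deg
        turn = max(-self.max_turn_deg, min(self.max_turn_deg, error))
        if dist <= attack_range and abs(error) < 2.0:
            return "attack"
        return f"turn {turn:+.1f}"
```

Each branch is trivially inspectable and each knob maps directly to a gameplay feel, which is precisely what makes this style easy to balance and a trained model hard.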

I asked for help.

< jo> I want to try and make a game out of a chat bot.
< jo> I wonder how that would go.  Probably be frustrating as shit.
< Kinsie> an ai-based game where you have to convince dr. sbaitso to turn off the trap before it kills you
< Kinsie> (you are trapped in a crushing-ceiling room)

I fell in love with the idea pretty much immediately. Thanks, Kinsie! Let’s see where it goes.

See Kinsie’s site here:

My advisor last week assigned me the simple task of finding a dataset on which to test the machine learning code I’d been running. Simple enough: there were numerous publications on automatic vehicle labeling and automatic traffic flow analysis. As the existence of this short rant no doubt indicates, the task proved significantly more difficult than it appeared at first glance. The first twenty or so papers I found on the subject were behind paywalls. I finally came across the University of Southern California’s dataset and KIT-IPF’s dataset. USC’s dataset, unfortunately, did not contain ground truth, and KIT-IPF’s images were taken too close to the targets. After some more digging, I found a reference to the Fort Hood vehicle dataset, which seemed to contain both ground truth and data similar to the materials I’ve been studying so far. The images are black and white and taken from a great distance; the average vehicle should be around 12 pixels across, and the images themselves should be around 1200 pixels, minimum. My task is to train a network to find the appropriate tiny black dots and label them as vehicles. The link to the FH dataset provided in the paper had gone dead, but a quick hoop jump through the Wayback Machine turned up an older version of the page which linked to a publicly available copy. If it’s as I hope, I’ll provide some example excerpts in a later update.
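As a rough sketch of the first step once the data arrives — cutting training patches around the ground-truth vehicle centers — something like the following would do. The function name, patch size, and data layout are my assumptions, not anything specified by the Fort Hood dataset.

```python
import numpy as np

def extract_patches(image, centers, patch=16):
    """Cut fixed-size training patches around labeled vehicle centers.

    `image` is a 2-D grayscale array (the aerial frames are black and
    white); `centers` is a ground-truth list of (row, col) vehicle
    positions. With vehicles around 12 pixels across, a 16x16 window
    comfortably contains one target. Centers too close to the image
    border to yield a full window are skipped.
    """
    half = patch // 2
    out = []
    for r, c in centers:
        if half <= r <= image.shape[0] - half and half <= c <= image.shape[1] - half:
            out.append(image[r - half:r + half, c - half:c + half])
    return np.stack(out) if out else np.empty((0, patch, patch))
```

Negative examples could then be sampled the same way from random positions away from any labeled center, giving the network a balanced vehicle/background training set.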

Ahh. Data at last.

I visited home this past Christmas. The trip took me from Philadelphia, my unfortunate place of residence, to Chicago, my hometown. In hindsight, I suspect my enthusiasm for the trip largely stemmed from the desire to eat lots of protein bars and drink RedBull. The combination of the two, coming so shortly after a series of catastrophically stressful exams and work deadlines, left me slightly mentally stewed. Along the way, somewhere in the Appalachian Mountains, probably around the Allegheny Range, I saw a pedestrian bridge which popped over the highway below. The roadway swung to the left and tracked alongside the mountain, which meant, had I enough speed in my tiny, effeminate Toyota Yaris, I could make the jump. I turned to the devil on my shoulder who told me, “RAMP IT,” then to the angel on my other shoulder who told me, “RAMP IT.” I turned instead to the reptilian brain in my head, the one responsible for survival and reproduction, and it said, “What is ramp?”

When I stopped rolling, I came to the realization that the thousands of dollars of damage to my suspension, engine, windows, wheels, transmission, car, self, and environment might have been better invested in protein bars and plane tickets. The total for the trip was about $200 in gas and 24 hours in time, round trip — cheaper than flying in a strictly financial sense, but only if you neglect time costs. And 24 is a very optimistic estimate; the total was likely closer to 34 hours, since every two hours or so I’d stop for a fifteen to twenty minute break.

This picture becomes somehow less pretty when you realize this is basically Indiana.

Philadelphia (most of the east coast, really) was in the middle of a warm spell when I departed. It came as something of a shock to me when I opened my door to get gas, only to have my face immediately freeze and break off. The temperature dropped from 60F (15C) in Philly to 4F (-15C) in Chicago with a windchill of negative fuck you. By linear regression, Chicago was due to hit absolute zero in 4.55 days.

Then came family. At least in my case, it’s easy to get psyched about seeing family. Harder, though, is maintaining enthusiasm after your arrival. The cleanliness of the house writes something of a poetic metaphor for the optimism of the entire ordeal. Arrive with great excitement, everything is clean. Set down your suitcase and catch up, slight clutter here and there. Finish reflecting, cook some food, whatever, we’ll clean up later. Someone’s getting irritated for no reason, dirty plates build up on the counter. Leave with as much enthusiasm as you arrived. Just before departure, a moment of steadfast, lingering sadness as you realize your irrational irritation is fleeting and that soon enough you won’t have any family to which you can return.

The road back is less stressful than the ride there somehow, though by this time you’ve had enough protein bars to shit some decent grade dog food. Hour four passes. Almost there. Another… eight hours? Okay. A whole day at the office left.

“Oh god I have to go to work tomorrow.”

I said this out loud at around midnight, when I realized my arrival time was going to be 2:00 AM. The trip degraded into a spiteful pissing match between myself and Google Maps. Google was utterly convinced I would arrive at 1:52 AM, despite my meticulous speeding and chemical abuse. I figured Google was being kind and assuming the speed limit would be maintained. I was right, and by keeping myself consistently 15% above the posted speed limit I was able to arrive at 1:51 AM. Suck it, big data.

Happy New Year!