Game Jam Time!

libGDX is doing a game jam from December 18th to January 18th. (http://itch.io/jam/libgdxjam)

Nearly a year ago I prototyped a game called Metal Sky Arena, wherein players floated around and used gravity guns to launch each other into spikes. After a few minutes of play, the entertainment value disappeared. Given the time I’ve spent playing Super Smash Brothers and Towerfall, this came as a bit of a surprise. Perhaps gravity alone is too feeble a mechanic. That’s why I’m going to dust off the idea and start on Metal Sky Arena II: similar mechanics, but instead of gravity guns, we’ll give everyone shotguns! Let’s do it!

Game Design Document!

Story

The theme is Life In Space. A regular part of life in space is probably the eradication of hordes upon hordes of carnivorous extra-terrestrials. Is there a more appropriate weapon than the trusty shotgun? I think not. Our bold crew of Russian and American astronauts must keep their vessel safe for as long as possible while they careen towards a celestial body of unknown origin!

Characters

Ivan Robotovich: The Russian Robot.

Neal “Flint” Swaggerty: Playboy Captain.

Level/environment design

Levels are square and minimally dynamic, designed to look like the interior of a futuristic spaceship. Hazards like floating explosive barrels may drift through some of them.

Gameplay

Smash TV meets Duck Game meets Super Smash Brothers in zero-gee. Enter a room; a wave of aliens or some hazard enters. When it’s cleared, move to the next area. When you’ve cleared the whole ship, you win! Single-player pits one person against waves of enemies. Co-op and competitive modes if time allows.
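
A minimal sketch of that room/wave progression, written in engine-agnostic Python rather than the libGDX/Java the jam entry will actually use; the Room/Ship names and structure are placeholders, not decided classes:

```python
# Sketch of the core loop: clear the current wave, advance the room,
# win when the ship runs out of rooms. Everything here is a placeholder.
class Room:
    def __init__(self, waves):
        self.waves = list(waves)        # each wave is a list of enemies/hazards

    def cleared(self):
        return not self.waves


class Ship:
    def __init__(self, rooms):
        self.rooms = list(rooms)
        self.current = 0

    def update(self, living_enemies):
        """Advance wave/room state; returns True once the whole ship is cleared."""
        if self.current >= len(self.rooms):
            return True                 # ship already cleared -> victory
        room = self.rooms[self.current]
        if room.waves and not living_enemies:
            room.waves.pop(0)           # current wave cleared, queue the next one
        if room.cleared():
            self.current += 1           # move to the next area
        return self.current >= len(self.rooms)
```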

Art

Aiming for 960×540 devices with 16×16 characters, which works out to a 60 × ~34-tile view at native scale. Need to experiment with how close the camera should sit to the player.

Sound and Music

Keep it simple. Short jam.

User Interface, Game Controls

Touch to fire in a given direction. Tap on the player to curl up; release to spring out. If the player is touching a wall when they release, they spring off quickly and gain a little speed; otherwise the player loses a little speed on a wall bounce.
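
A rough sketch of that bounce/spring rule, again in plain Python with made-up tuning factors (the actual numbers, and whether the rule is multiplicative at all, are still to be decided):

```python
# Sketch of the wall-contact speed rule. Both factors are placeholder guesses.
SPRING_BOOST = 1.15      # assumed: an intentional spring off a wall adds speed
BOUNCE_DAMPING = 0.85    # assumed: a passive wall bounce bleeds speed

def wall_contact_speed(speed, springing):
    """Return the player's new speed when they hit a wall.

    springing: True if the player was curled up and released while touching
    the wall (an intentional spring), False for an ordinary bounce.
    """
    if springing:
        return speed * SPRING_BOOST
    return speed * BOUNCE_DAMPING
```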

Roadmap

20 - Project built on my machines + running on phone.  Design doc done.
21 - Travel day.
22 - Character on screen + shooting.
23 - Enemies on screen + moving towards player.
24 - Holiday
25 - Holiday
26 - Level loading + display. No interaction.
27 - Travel day.
28 - Travel day.
29 - Travel day.
30 - Interaction with level (collisions + environmental)
31 - Holiday.
01 - Holiday.
02 - Player damage + destroyable entities.
03 - Start game screens + game over screen.
04 - Advance to next area.
05 - Victory condition.
06 - Replace placeholder art and sound with better assets.
07 - More placeholder replacement.
08 - Test public beta.
09 - Bugfixes.
10 - Bugfixes.
11 - Soft launch deadline.
12 - Buffer
13 - Buffer
14 - Buffer
15 - Buffer
16 - Buffer
17 - Hard deadline.
18 - Buffer

The development of Aij has slowed in the past few months, due primarily to the release of Google’s TensorFlow library. (http://tensorflow.org)

TensorFlow is amazingly well documented and broken into easily digestible sub-modules. Compared to Theano, it’s easier to debug because it will tell you where (on what line) objects fail to cast into one another, and you don’t have to do any goddamn magic to build it (like mxnet) or to deserialize modules (like Theano). It’s also way faster at compiling graphs than Theano, especially large unrolled ones like LSTMs. Another nice bonus: unlike mxnet AND Theano, you don’t have to install gfortran to get it running. It’s one of the few things where you do a pip install and it’s done. No GCC bullshit or library insanity.
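
For reference, this is roughly what the define-then-run workflow looks like right after that pip install. A minimal sketch using late-2015-era TensorFlow APIs; the shapes and names are arbitrary, not from Aij:

```python
# Minimal define-then-run sketch, circa the TensorFlow releases of late 2015.
import tensorflow as tf

# Build the graph: a placeholder input and a couple of ops.
x = tf.placeholder(tf.float32, shape=[None, 3], name="x")
w = tf.Variable(tf.random_normal([3, 1]), name="w")
y = tf.matmul(x, w)

# Run it in a session.
with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())   # the era-appropriate initializer
    print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))
```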

mxnet is faster, but it doesn’t feel as well organized as TensorFlow. It’s very much a manual process to define a graph, determine the derivatives, and write an optimizer, and there’s no clear, single path for doing it, just many seemingly contradictory ways. Do I want to do foo = mx.symbol.model() with .fit? What if I don’t have data I want to fit like that? What if I want to use a different optimizer? What if I want to train multiple graphs at different points? I think the real reason for my adoration is that the documentation for TF is _really_ good. mxnet’s docs are absolutely terrible in comparison, and Theano’s look laughable. Otherwise, mxnet and TensorFlow seem pretty much on par in terms of functionality (with a slight advantage to mxnet for their .c export), but TF is still better organized in terms of modules and ease of use.
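
To make the "different optimizer" point concrete, here’s the kind of explicit path TensorFlow’s docs spell out: you build the loss, pick the optimizer yourself, and run the step whenever you like. A hedged sketch with 2015-era TF calls and a made-up toy regression, not anything from Aij:

```python
# Sketch: explicit graph + loss + optimizer + training loop on toy data.
import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 1])
target = tf.placeholder(tf.float32, shape=[None, 1])
w = tf.Variable(tf.zeros([1, 1]))
b = tf.Variable(tf.zeros([1]))
prediction = tf.matmul(x, w) + b
loss = tf.reduce_mean(tf.square(prediction - target))

# Swapping optimizers means swapping this one line.
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    data = np.random.rand(32, 1).astype(np.float32)
    labels = 2.0 * data + 1.0                      # toy linear target
    for _ in range(500):
        sess.run(train_step, feed_dict={x: data, target: labels})
    print(sess.run([w, b]))                        # should approach [2.0], [1.0]
```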