Monthly Archives: December 2015

Today, we’re going to learn how to set up Blender to render a full-immersion 3D video and upload it to YouTube. We’ll start by covering some gear for previewing your videos (with links to Amazon), then quickly build a scene, configure Blender for 3D output, do the required prep, and finally upload to YouTube. Nothing we do here is particularly novel or difficult, but it will hopefully save you some time in making videos of your own.

Here’s a preview of the finished product. Open in the YouTube app to view as a Cardboard video.

Direct link:


And the .blend file:


I played with Google Cardboard at home this winter. The setup and use stood in stark contrast to the setup and use of my Oculus at home. Connecting the Oculus’s camera, the HDMI out, and the power, installing drivers, and updating the device all took on the order of hours. In contrast, the Cardboard took on the order of 15 minutes to assemble, and the Cardboard app downloaded in parallel. It’s not a replacement for the Oculus, but as a function of dollars+effort in to entertainment value out, it’s quite effective.


Pretty much any Google Cardboard kit will do fine. I picked this one up because I wanted something I could use without my glasses. It supports focal-length and IPD (interpupillary distance) adjustments:

If you’re in the market for something cheaper and Prime-ready, this is also an option:

Again, any cardboard device will do. Here is a lengthy Amazon list of cheap devices.

Setting Scene and View

Open the scene you want to use for 3D video. Reset the orientation and position of your camera, then set the camera’s rotation so that it is pointing straight down the positive Y-axis. If you’d like to use a pre-made scene, download the .blend file above.
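A quick sanity check on that orientation, for the curious: Blender’s camera looks down its local −Z axis by default, so aiming it along global +Y amounts to a 90° rotation about X (i.e., (90°, 0, 0) in the camera’s Rotation fields). Here’s the arithmetic sketched in plain Python, not Blender’s API:

```python
import math

def rotate_x(v, degrees):
    """Rotate a 3D vector about the X axis by the given angle."""
    t = math.radians(degrees)
    x, y, z = v
    return (x,
            y * math.cos(t) - z * math.sin(t),
            y * math.sin(t) + z * math.cos(t))

# Blender's camera looks along its local -Z axis by default.
view = (0.0, 0.0, -1.0)

# A 90-degree rotation about X points it down the positive Y axis
# (up to floating-point noise in the Z component).
print(rotate_x(view, 90))
```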

Render settings

Be sure you’re using the Cycles renderer. Blender’s default rendering engine doesn’t have the camera settings we need to correctly export the video.


Next, open up the “Render Layers” section on the right side and check “Views” at the bottom.

02 Scene Settings

By default, Stereo 3D provides left and right views. We’re done with this pane.

Camera Settings

Make your way over to the camera settings. We will switch the camera type from Perspective to Panoramic. This allows us to capture the scene in its entirety in a single render pass. In the “Type” option below the lens type, switch to “Equirectangular.” Google’s tools expect equirectangular output.
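For context on what “Equirectangular” means here: each pixel’s horizontal position maps linearly to longitude (−180° to 180°) and its vertical position to latitude (90° at the top down to −90°), so one frame covers the whole sphere. A rough sketch of that mapping, using a convention where +Y is “forward” to match the camera orientation from earlier:

```python
import math

def pixel_to_direction(px, py, width, height):
    """Map an equirectangular pixel to a unit view direction.

    Horizontal position maps linearly to longitude [-pi, pi];
    vertical position maps to latitude [pi/2, -pi/2] (top to bottom).
    """
    lon = ((px + 0.5) / width - 0.5) * 2.0 * math.pi
    lat = (0.5 - (py + 0.5) / height) * math.pi
    # One common convention: +Y is "forward", +Z is "up".
    return (math.cos(lat) * math.sin(lon),
            math.cos(lat) * math.cos(lon),
            math.sin(lat))

# The center pixel of a 4096x2048 frame looks straight ahead, along +Y.
print(pixel_to_direction(2047.5, 1023.5, 4096, 2048))  # -> (0.0, 1.0, 0.0)
```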

Convergence Plane Distance and Interocular Distance can remain at their defaults.

Set the pivot to whatever you’d like. I prefer ‘center’.

Your camera tab should look like this:

03 Camera Settings

Output settings

Update the following settings:

Device: GPU Compute! Don’t forget this! You _can_ use your CPU to run Cycles, but it’s going to take a lot longer.

Output: MPEG (Needed for Google’s Metadata tool.)

Views Format: Stereo 3D

Stereo Mode: Top-Bottom (or left-right, but I like top-bottom because it’s easier to view the video before upload.)


Format: MP4 (Needed by YouTube’s tool.)

Codec: H.264

Audio Codec: AAC

Then set your start frame, end frame, and resolution. Mash render. Your settings should look like this:
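To see why the Top-Bottom choice matters downstream: the two eye views get packed into one double-height frame (assuming the left eye ends up on top), which is the layout the YouTube metadata tool will be told about later. A toy sketch of the two packings:

```python
def pack_top_bottom(left, right):
    """Stack two equal-size frames (lists of rows) into one image."""
    assert len(left) == len(right) and len(left[0]) == len(right[0])
    return left + right  # one eye on top, the other below

def pack_side_by_side(left, right):
    """Place the two frames next to each other instead."""
    return [l_row + r_row for l_row, r_row in zip(left, right)]

# Two tiny 2x2 "frames", one value per pixel.
left = [["L", "L"], ["L", "L"]]
right = [["R", "R"], ["R", "R"]]

tb = pack_top_bottom(left, right)
sbs = pack_side_by_side(left, right)
print(len(tb), len(tb[0]))    # -> 4 2  (double height, same width)
print(len(sbs), len(sbs[0]))  # -> 2 4  (same height, double width)
```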

04 Render Settings

YouTube Prep

Download the YouTube 360-Degree Video Tool here:

Unzip it and fire it up.

Open the video you just created. Check ‘spherical’ and ‘3D Top-bottom’.

‘Save as’ and place your newly created file wherever you’d like.

05 Spherical Metadata

YouTube Upload

Upload the newly created video to YouTube as you would any other.

When completed, go to ‘Info and Settings’ on the video page.

06 Info and Settings

Select the ‘Advanced Options’ tab and check “This Video is 3D.” Select the Top-Bottom option and save your changes.

07 Advanced

That’s it! Now you should be able to view your 3D video in browser or in Cardboard on your device.

Had some time to myself this morning after a few days without internet access. Got TMX maps loading and drawing, and fixed a bug with collision. For the curious, setOrigin(float, float) does NOT accept a normalized [0, 1] value that determines the center point. It takes a value IN PIXELS. Also, getX() and getY() do not subtract out the origin, so you’ve got to do that yourself when calling draw().
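A tiny sketch of that origin gotcha, in Python rather than libGDX’s Java, just to show the arithmetic (the 32×32 sprite size is hypothetical, and the draw-time subtraction mirrors the observation above):

```python
SPRITE_W, SPRITE_H = 32, 32  # hypothetical sprite size in pixels

# setOrigin takes PIXELS, not a normalized [0, 1] fraction,
# so centering the origin means passing half the size in pixels:
origin_x, origin_y = SPRITE_W / 2, SPRITE_H / 2  # (16.0, 16.0)

def draw_position(x, y):
    """getX()/getY() don't subtract the origin, so do it yourself
    when computing where to draw."""
    return (x - origin_x, y - origin_y)

print(draw_position(100.0, 50.0))  # -> (84.0, 34.0)
```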

libGDX Jam - Collisions with Map

Wrapping up the end of the first day. There’s a chance I’ll do more tonight, but I’ve got to pack for my trip. Progress was faster than expected. I have characters on the screen and movement.

One hiccup I had today was unprojecting from a screen click to a point in physics space. I was doing camera.unproject(blah blah) and couldn’t figure out why my Y-coordinate was flipped when I had correctly set my view. Whether or not I set y-down in my orthographic camera, the closer I clicked to the bottom of the frame, the larger the number got! It turns out that if you attach an InputListener to a STAGE object in libGDX, it gets called with correctly unprojected x and y values based on the current camera, so I was double-unprojecting. I figured this out when I made my camera follow the player and saw different values coming in while not moving the mouse. Important safety tip.
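The symptom makes sense once you write out the arithmetic: the stage listener already hands you y measured from the bottom, and “unprojecting” flips y again, putting you right back in top-down screen coordinates. A minimal sketch of the double flip (plain Python, not the libGDX API; the viewport height is a made-up example):

```python
VIEWPORT_HEIGHT = 540  # hypothetical viewport height in pixels

def flip_y(y):
    """Convert between top-down screen y and bottom-up world y."""
    return VIEWPORT_HEIGHT - y

# A click near the bottom of the screen, in top-down screen coordinates:
screen_y = 500

stage_y = flip_y(screen_y)        # what the stage listener delivers: 40
double_flipped = flip_y(stage_y)  # "unprojecting" again flips it back: 500

# The double flip explains why clicks near the bottom produced LARGE values.
print(stage_y, double_flipped)  # -> 40 500
```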

The fruits of today’s labor:


Game Jam Time!

libGDX is doing a game jam from December 18th to January 18th.

Nearly a year ago I prototyped a game called Metal Sky Arena wherein players would float around and use gravity guns to launch each other into spikes. After a few minutes of play, the entertainment value disappeared. Given the time I’ve spent playing Super Smash Brothers and Towerfall, this came as a bit of a surprise. Perhaps gravity alone is too feeble a mechanic. That’s why I’m going to dust off the idea and begin with Metal Sky Arena II, similar in mechanics, but instead of gravity guns, we’ll give everyone shotguns! Let’s do it!

Game Design Document!


The theme is Life In Space. A regular part of life in space is probably the eradication of hordes upon hordes of carnivorous extra-terrestrials. Is there a more appropriate weapon than the trusty shotgun? I think not. Our bold crew of Russian and American astronauts must keep their vessel safe for as long as possible while they careen towards a celestial body of unknown origin!


Ivan Robotovich: The Russian Robot.

Neal “Flint” Swaggerty: Playboy Captain.

Level/environment design

Levels are square, minimally dynamic. Designed to look like the interior of a futuristic space ship. Hazards like floating explosive barrels may drift through some.


Smash TV meets Duck Game meets Super Smash Brothers in zero-gee. Enter a room. A wave of aliens or some hazard enters. When cleared, move to the next area. When you’ve cleared the ship, you win! Single-player pits a person against waves of enemies. Co-op and competitive if time allows.


Aiming for 960×540 devices with 16×16 characters. Need to experiment with how close the camera should be to a player.

Sound and Music

Keep it simple. Short jam.

User Interface, Game Controls

Touch to fire in a given direction. Tap on the player to curl up; release to spring out. If touching a wall, this lets the player spring away quickly, slightly gaining speed; otherwise, the player slightly loses speed on a wall bounce.
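The wall interaction above can be sketched as a simple update rule. The gain and loss factors here are hypothetical placeholders that would need tuning in playtesting:

```python
SPRING_GAIN = 1.25  # hypothetical speed multiplier when springing off a wall
BOUNCE_LOSS = 0.75  # hypothetical speed multiplier on a plain wall bounce

def wall_bounce(speed, springing):
    """Resolve a wall contact: springing out grants a speed boost,
    while a plain bounce bleeds off a little speed."""
    return speed * (SPRING_GAIN if springing else BOUNCE_LOSS)

print(wall_bounce(100.0, springing=True))   # -> 125.0
print(wall_bounce(100.0, springing=False))  # -> 75.0
```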


20 - Project built on my machines + running on phone.  Design doc done.
21 - Travel day.
22 - Character on screen + shooting.
23 - Enemies on screen + moving towards player.
24 - Holiday
25 - Holiday
26 - Level loading + display. No interaction.
27 - Travel day.
28 - Travel day.
29 - Travel day.
30 - Interaction with level (collisions + environmental)
31 - Holiday.
01 - Holiday.
02 - Player damage + destroyable entities.
03 - Start game screens + game over screen.
04 - Advance to next area.
05 - Victory condition.
06 - Replace placeholder art and sound with better sounds.
07 - More placeholder replacement.
08 - Test public beta.
09 - Bugfixes.
10 - Bugfixes.
11 - Soft launch deadline.
12 - Buffer
13 - Buffer
14 - Buffer
15 - Buffer
16 - Buffer
17 - Hard deadline.
18 - Buffer

The development of Aij has slowed in the past few months, due primarily to the release of Google’s TensorFlow library.

TensorFlow is amazingly well documented and broken into easily digestible sub-modules. Compared to Theano, it’s easier to debug because it will tell you where (on what line) a cast between types fails, and you don’t have to do any goddamn magic to build it (like mxnet) or to deserialize modules (like Theano). It’s also way faster at compiling graphs than Theano, especially for large unrolled ones like LSTMs. Another nice bonus is that you don’t have to install gfortran to get it to run, like mxnet AND Theano. It’s one of the few things where you do a pip install and it’s done. No GCC bullshit or library insanity.

mxnet is faster, but it doesn’t feel as well organized as TensorFlow. It’s very much a manual process to define a graph, determine the derivatives, and write an optimizer, and there’s no clear, single path for doing it (though there are many, seemingly contradictory ways). Do I want to do foo = mx.symbol.model() with .fit? What if I don’t have data I want to fit like that? What if I want to use a different optimizer? What if I want to train multiple graphs at different points? I think the reason for my adoration is that the documentation for TF is _really_ good. mxnet’s docs are absolutely terrible in comparison, and Theano’s look laughable. Otherwise, mxnet and TensorFlow seem pretty much on par in terms of features (with a slight advantage to mxnet for its .c export), but TF is still better organized in terms of modules and ease of use.