$ cat picture.png | catpicture

This is a stupid-simple utility that I'm now really glad to have. I call it "catpicture", and it's a tool for cat-ing pictures to the command line. I spend a lot of time SSH-ed into remote machines doing ML or image processing, and I hate having to pull down a picture just to see if it's garbage. This tool takes an image (either via stdin or as an argument) and dumps a rescaled version to the terminal using ANSI colors.
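The actual source is linked below, but the core trick is small enough to sketch. Here's a rough Python approximation of the idea (this is not the tool's real code; it assumes Pillow and a terminal that understands 24-bit ANSI background-color escapes):

import io
import sys
from PIL import Image

def cat_picture(data, width=80):
    # Decode the image and shrink it to roughly terminal width.
    img = Image.open(io.BytesIO(data)).convert("RGB")
    # Halve the row count because terminal cells are taller than they are wide.
    height = max(1, int(img.height * (width / img.width) * 0.5))
    img = img.resize((width, height))
    for y in range(height):
        row = []
        for x in range(width):
            r, g, b = img.getpixel((x, y))
            # Set the background color for one cell, then print a space.
            row.append("\x1b[48;2;{};{};{}m ".format(r, g, b))
        print("".join(row) + "\x1b[0m")  # Reset colors at the end of each row.

if __name__ == "__main__":
    # Mirror the described behavior: take a filename argument or fall back to stdin.
    if len(sys.argv) > 1:
        with open(sys.argv[1], "rb") as f:
            raw = f.read()
    else:
        raw = sys.stdin.buffer.read()
    cat_picture(raw)

Usage mirrors the real thing: cat picture.png | python catpicture.py, or pass a filename as the first argument.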

You can view the source at https://github.com/josephcatrambone/catpicture.

You can download a version of the utility for Linux here: http://josephcatrambone.com/projects/catpicture/catpicture_linux

[Screenshots: catpicture_screenshot1, catpicture_screenshot2]

Peter Norvig (yes, that Peter Norvig) wrote a brief blog post about building a spellcheck application. It's a beautifully simple approach that demonstrates the unreasonable effectiveness of word-frequency and edit-distance tricks. His original blog post can be read here: http://norvig.com/spell-correct.html

I decided to write a version of my own in Rust to learn the language.

The full GitHub project with Cargo and wordlist is here: https://github.com/JosephCatrambone/RustSpellcheck

And the Rust code of interest:


use std::io;
use std::collections::HashMap;
use std::io::Read;
use std::fs::File;

static WORD_FILE: &'static str = "words.txt";
static QUIT_COMMAND: &'static str = "quit";

fn edits(word : &str) -> Vec<String> {
	let alphabet = ["a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z"];
	let mut edits = Vec::<String>::new();
	// Find corruptions of word
	for i in 0..word.len() {
		let (a, b) = word.split_at(i);
		// Deletions
		if b.len() > 0 {
			edits.push(a.to_string() + &b[1..]);
		}
		// Transpositions
		if b.len() > 1 {
			let mut transposition = a.to_string();
			transposition.push(b.chars().nth(1).expect("Panic while building character transposition.  Unable to decode character."));
			transposition.push(b.chars().nth(0).expect("Panic while building character transposition.  Unable to decode character."));
			transposition.push_str(&b[2..]);
			edits.push(transposition);
		}
		// Replacements
		if b.len() > 0 {
			for character in &alphabet {
				edits.push(a.to_string() + &character + &b[1..]);
			}
		}
		// Insertions
		for character in &alphabet {
			edits.push(a.to_string() + &character + b);
		}
	}
	// &String can automatically coerce to &str, but going from &str to String requires an explicit conversion like to_string().
	edits
}

fn update_frequency_count(model : &mut HashMap<String, u64>, words : String) -> () {
	let word_iterator = words.split_whitespace();
	// TODO: Find a more generic iterator.
	for word in word_iterator {
		let lower_word = word.to_lowercase();
		let count = model.entry(lower_word).or_insert(0);
		*count += 1;
	}
}

fn correct(model : &HashMap<String, u64>, word : String) -> String {
	// If the word is spelled right, return it.
	if model.contains_key(&word) {
		return word;
	}

	// Allocate some placeholders for our frequency and best match.
	let mut best_match = String::new();
	let mut frequency : u64 = 0;

	// First degree corruption
	// Get the corruptions of each
	let corruptions = edits(&word);
	for corruption in &corruptions { // Borrow the Vec so we can reuse it for second-degree edits below.
		match model.get(&corruption.to_string()) {
			Some(f2) => {
				if *f2 > frequency {
					best_match = corruption.to_string();
					frequency = *f2;
				}
			},
			None => {}
		}
	}
	if frequency > 0 {
		return best_match;
	}
	
	// Second degree corruption
	// Frequency is still zero if we're here.
	for corruption in &corruptions {
		let double_corruptions = edits(&corruption);
		for c2 in &double_corruptions {
			match model.get(&c2.to_string()) {
				Some(freq) => {
					if *freq > frequency {
						best_match = c2.to_string();
						frequency = *freq;
					}
				},
				None => {}
			}
		}
	}
	if frequency > 0 {
		return best_match;
	}

	// No matches at all.
	println!("No match.");
	word
}

fn main() {
	// Read words.
	let mut fin = File::open(WORD_FILE).unwrap();
	let mut lines = String::new();
	fin.read_to_string(&mut lines).unwrap(); // Just bury read errors.

	// Gather words into hash table.
	let mut model = HashMap::<String, u64>::new();
	update_frequency_count(&mut model, lines);

	loop {
		let mut user_input = String::new();
		io::stdin().read_line(&mut user_input).expect("Problem reading from stdin.");
		user_input = user_input.trim().to_lowercase();
		if user_input.trim() == QUIT_COMMAND {
			break;
		}
		let correction = correct(&model, user_input);
		println!("{}", correction);
	}
}

I should give the caveat that this is probably not idiomatic Rust. It's probably not even particularly good Rust. Such is the way of the web, though. I hope it proves useful for someone.

Today, we're going to learn how to set up Blender to render a full-immersion 3D video and upload it to YouTube. We'll start by covering some gear for previewing your videos (with links to Amazon), quickly fabricate a scene, configure Blender for 3D output, do the required prep, and finally upload to YouTube. Nothing we do here is particularly novel or difficult, but it will hopefully save you some time in making videos of your own.

Here's a preview of the finished product. Open in the YouTube app to view as a Cardboard video.

Direct link:
https://www.youtube.com/watch?v=4QP9vHB7-Rw

And the .blend file:

https://drive.google.com/file/d/0BxkijDBoaFrmd2Rua0k1NnZIeGc/view?usp=sharing


Motivation


I played with Google Cardboard at home this winter. The setup and use stood in stark contrast to the setup and use of my Oculus at home. Connecting the Oculus's camera, HDMI out, and power, installing drivers, and updating the device took on the order of hours. In contrast, the Cardboard took on the order of 15 minutes to assemble, and the Cardboard app downloaded in parallel. It's not a replacement for the Oculus, but as a function of dollars+effort in to entertainment value out, it's quite effective.


Gear


Pretty much any Google Cardboard kit will do fine. I picked up one that supports focal and IPD adjustments because I wanted something I could use without my glasses.

If you're in the market for something cheaper and Prime-ready, there are plenty of options.

Again, any Cardboard device will do. Amazon has a lengthy list of cheap devices.


Setting Scene and View


Open the scene you want to use for 3D video. Reset the orientation and position of your camera, then set the camera's rotation so that it is pointing straight down the positive Y-axis. If you'd like to use a pre-made scene, download the .blend file above.
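If you'd rather script that reset than do it by hand, something like this in Blender's Python console should work. This is a sketch: it assumes the scene already has an active camera, and the property names are from the 2.7x-era bpy API.

import math
import bpy

cam_obj = bpy.context.scene.camera
cam_obj.location = (0.0, 0.0, 0.0)  # Reset position to the origin.
# A default camera looks down its local -Z axis; rotating 90 degrees about X
# points it straight down the positive Y-axis.
cam_obj.rotation_euler = (math.radians(90.0), 0.0, 0.0)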


Render settings


Be sure you're using the Cycles renderer. Blender's default rendering engine doesn't have the camera settings we need to correctly export a video.

[Screenshot: 01 Cycles]

Next, open up the "Render Layers" section on the right side and check "Views" at the bottom.

[Screenshot: 02 Scene Settings]

By default, Stereo 3D provides left and right views. We're done with this pane.
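If you prefer scripting over clicking, the same settings can be set from Blender's Python console. This is a sketch; the property names are from memory of the 2.7x-era bpy API, so double-check them against your version.

import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'            # Use the Cycles renderer.
scene.render.use_multiview = True         # The "Views" checkbox under Render Layers.
scene.render.views_format = 'STEREO_3D'   # Default left/right stereo views.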


Camera Settings


Make your way over to camera settings. We will switch the camera type from Perspective to Panoramic. This allows us to capture the scene in its entirety in a single render pass. In the "Type" option below Lens type, switch to "Equirectangular." Google's tools expect equirectangular output.

Convergence Plane Distance and Interocular Distance can be left at their defaults.

Set the pivot to whatever you'd like. I prefer 'center'.

Your camera tab should look like this:

[Screenshot: 03 Camera Settings]
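The bpy equivalent of the camera changes looks roughly like this (again a sketch against the 2.7x-era API):

import bpy

cam = bpy.context.scene.camera.data
cam.type = 'PANO'                             # Switch Perspective -> Panoramic.
cam.cycles.panorama_type = 'EQUIRECTANGULAR'  # Google's tools expect equirectangular output.
cam.stereo.pivot = 'CENTER'                   # Pivot on center; convergence and interocular stay at defaults.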


Output settings


Update the following settings:

Device: GPU Compute! Don't forget this! You _can_ use your CPU to run Cycles, but it's going to take a lot longer.

Output: MPEG (Needed for Google's Metadata tool.)

Views Format: Stereo 3D

Stereo Mode: Top-Bottom (or left-right, but I like top-bottom because it's easier to view the video before upload.)

Encoding:

Format: MP4 (Needed by YouTube's tool.)

Codec: H.264

Audio Codec: AAC

Then set your start frame/end frame and resolution. Mash render. Your settings should look like this:

[Screenshot: 04 Render Settings]
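For completeness, here's a rough bpy version of those output settings. The enum names are best-effort from the 2.7x-era API, and the frame range and resolution are just example values.

import bpy

scene = bpy.context.scene
scene.cycles.device = 'GPU'                                   # GPU Compute.
scene.render.image_settings.file_format = 'FFMPEG'            # Movie output.
scene.render.image_settings.views_format = 'STEREO_3D'
scene.render.image_settings.stereo_3d_format.display_mode = 'TOPBOTTOM'
scene.render.ffmpeg.format = 'MPEG4'                          # MP4 container for YouTube.
scene.render.ffmpeg.codec = 'H264'
scene.render.ffmpeg.audio_codec = 'AAC'
scene.frame_start = 1                                         # Example frame range.
scene.frame_end = 250
scene.render.resolution_x = 3840                              # Example resolution.
scene.render.resolution_y = 2160
scene.render.resolution_percentage = 100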


YouTube Prep


Download the YouTube 360-Degree Video Tool here: https://support.google.com/youtube/answer/6178631?hl=en

Unzip it and fire it up.

Open the video you just created. Check 'spherical' and '3D Top-bottom'.

'Save as' and place your newly created file wherever you'd like.

[Screenshot: 05 Spherical Metadata]


YouTube Upload


Upload the newly created video to YouTube as you would any other.

When completed, go to 'Info and Settings' on the video page.

[Screenshot: 06 Info and Settings]

Select the 'Advanced Options' tab and check "This Video is 3D." Select the Top-Bottom option and save your changes.

[Screenshot: 07 Advanced]


That's it! Now you should be able to view your 3D video in the browser or in Cardboard on your device.

Had some time to myself this morning after a few days without internet access. Got TMX maps loading and drawing, and fixed a bug with collision. For the curious, setOrigin(float, float) does NOT accept a normalized value in [0, 1] that determines the center point. It takes a value IN PIXELS. Also, getX() and getY() do not subtract out the origin, so you've got to do that yourself when calling draw().

[Video: libGDX Jam - Collisions with Map]

Wrapping up the end of the first day. There's a chance I'll do more tonight, but I've got to pack for my trip. Progress was faster than expected. I have characters on the screen and movement.

One hiccup I had today was unprojecting from a screen click to a point in physics space. I was doing camera.unproject(blah blah) and couldn't figure out why my Y-coordinate was flipped when I had correctly set my view. Whether or not I set y-down on my orthographic camera, the closer I clicked to the bottom of the frame, the larger the number got! It turns out that if you attach an InputListener to a STAGE object in libGDX, it gets called with correctly unprojected x and y values based on the current camera, so I was double unprojecting. I figured this out when I made my camera follow the player and saw different values coming in while not moving the mouse. Important safety tip.

The fruits of today's labor:

[Screenshot: libGDX_GameJam_2015_D01]