tl;dr: Should I change my passwords? Probably yes.

The Great Suspender loads and runs a JS file from an unknown, recently registered CDN server. It purports to be analytics, but we can’t know what is served to end-users.

In greater depth:

The Great Suspender was pulled from the Chrome Web Store today and was marked as ‘containing malware’. Indeed, the extension appears to load and execute JavaScript pulled from owa.tracker-combined-latest.minified.js. I downloaded a copy and unminified it as best I could. There is keylogging code in it, but it has a check to see if a field is an INPUT of password type and excludes those keypresses. (UPDATE: And after reviewing the CRX code that loads this JavaScript, I can say the logging of keypresses is disabled by default.) It also subsamples the events a user generates, so it doesn’t get 100% of your keypresses and mouse clicks, but it does get a lot. Here’s the keystream code in question:

Please excuse the screenshot of text. I don’t want my provider to flag my page, so I’m linking and screenshotting rather than embedding. https://gist.github.com/JosephCatrambone/458ebb148013902e71584cba46e99828#file-owa-tracker-combined-latest-minified-js-L2407-L2423

It also keeps track of your DOM stream. The sampling percentage for that defaults to 100%.

https://gist.github.com/JosephCatrambone/458ebb148013902e71584cba46e99828#file-owa-tracker-combined-latest-minified-js-L3030-L3045

The eventQueue is a large data structure that tracks where you’re looking, what you’re clicking, and the keys you hit. Here’s the code that uploads it:

https://gist.github.com/JosephCatrambone/458ebb148013902e71584cba46e99828#file-owa-tracker-combined-latest-minified-js-L2046-L2066

There are some support methods and an unholy buttload of assorted polyfills for older versions of Internet Explorer. Some tricks to upload data from really old browsers include making a hidden iframe, creating a form inside it, filling in the form automatically, and hitting submit.

So what the cinnamon toast fuck does this mean?

They appeared to try not to capture password data. The caveats here are many: TRY is the operative word. If your password field wasn’t an INPUT with type password, the script may have captured some or all of your password characters. The other important issue is that this is only the code that was served to my browser; if your IP is different, you may have been served different logging code. The last and maybe biggest issue is that it’s not clear whether any user secrets leaked via the URL uploads. If you have sessions that are passed around in your headers (which shouldn’t be a thing any more), it might be good to log out and log back in.

I decided to make the facial motion capture project I was working on open source. It languished in the bog of unfinished projects for long enough that releasing it seems like the right thing to do. It’s far from complete, but it has some pieces of value.

The full repo is here: https://github.com/JosephCatrambone/facecapture

And a sample picture:

Screenshot of Face Tracking
The highlighted (non-darkened) region indicates the ROI for image processing. The green rectangle is the unsmoothed detected face area.

Just in case there was ever any doubt where the politics of this blog stand, #blacklivesmatter.

Thank you to the protestors who are willing to brave the teargas and rubber bullets.

Thank you to the countless individuals who are willing to risk their lives in the midst of an epidemic to march against injustice.

Take care of yourselves. Stay safe.

I made an orbital camera controller for someone who wanted help on the Godot Discord channel. Here’s the source. When applied to a Camera node, it orbits the camera around a pivot in response to look input instead of rotating in place.
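The heart of it looks something like this (a minimal sketch rather than the exact gist; the “look_*” input action names and the parent-node-as-pivot setup are my assumptions):

extends Camera

# Orbit the camera around its parent node at a fixed distance,
# steered by four "look_*" input actions.
export var orbit_distance:float = 5.0
export var orbit_speed:float = 2.0

var yaw:float = 0.0
var pitch:float = 0.3

func _process(delta):
	yaw += (int(Input.is_action_pressed("look_right")) - int(Input.is_action_pressed("look_left"))) * orbit_speed * delta
	pitch += (int(Input.is_action_pressed("look_up")) - int(Input.is_action_pressed("look_down"))) * orbit_speed * delta
	pitch = clamp(pitch, -1.2, 1.2)  # Stop short of the poles so look_at stays stable.
	var pivot = get_parent().global_transform.origin
	var offset = Vector3(sin(yaw) * cos(pitch), sin(pitch), cos(yaw) * cos(pitch)) * orbit_distance
	global_transform.origin = pivot + offset
	look_at(pivot, Vector3.UP)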

The player controller is fairly straightforward, so I’ve not included it as a separate gist. For a KinematicBody player, one can move relative to the camera direction like so:

extends KinematicBody

export var walk_speed:float = 5.0

func _physics_process(delta):
	# Walk in the direction the camera is pointing. move_and_slide
	# applies the physics timestep internally, so no delta multiply here.
	var camera = get_viewport().get_camera()
	var dy = int(Input.is_action_pressed("move_forward")) - int(Input.is_action_pressed("move_backward"))
	var dx = int(Input.is_action_pressed("move_left")) - int(Input.is_action_pressed("move_right"))
	var move = (camera.global_transform.basis.x * -dx) + (camera.global_transform.basis.z * -dy)
	move = Vector3(move.x, 0, move.z).normalized()  # Take out the 'looking down' component.
	self.move_and_slide(move * walk_speed)

Don’t Crush Me is a game about pleading for your life with a robotic garbage compactor. It came up many years ago during a discussion in the AwfulJams IRC channel. The recent advent of Smoothed Inverse Frequency (SIF) embeddings provided an apt opportunity to revisit the idea with the benefits of modern machine learning. In this post we’re going to cover building SIF in Rust, compiling it to a library we can use in the Godot game engine, and then building a dialog tree in GDScript to control our gameplay.

First, a little on Smoothed Inverse Frequency:
In a few words, SIF involves taking a bunch of sentences, converting each one to a row vector (roughly, an average of its word vectors), and subtracting out the first principal component of the stacked rows. The details are slightly more involved, but not MUCH more involved. Part of the conversion involves tokenization (which I largely ignore in favor of splitting on whitespace for simplicity) and down-weighting words by their corpus frequency (which I also currently ignore).
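For reference, the full recipe from Arora et al. (2017) weights each word vector before averaging and then removes the first principal component; the frequency weighting is the part skipped here:

\[
v_s = \frac{1}{|s|} \sum_{w \in s} \frac{a}{a + p(w)} \, v_w,
\qquad
v_s \leftarrow v_s - u u^\top v_s
\]

Here p(w) is the word’s corpus frequency, a is a small smoothing constant (on the order of 10^-3), and u is the first singular vector of the matrix of sentence vectors. Drop the a / (a + p(w)) weight and you’re left with the plain whitespace-token average used here.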

Really, one of the “largest” challenges in this process was taking the GloVe vectors and embedding them in the library so that GDScript didn’t have to read anything from a multi-gigabyte file. The GloVe 6B 50-d uncased vectors take up only about 150 megs in an optimal float format, and I’m quite certain they can be made more compact still. Additionally, since we know all of the tokens in advance, we can use a perfect hash function to optimally index into the words at runtime.

With our ‘tokenize’ and ‘vectorize’ functions defined, we are free to put these methods into a small Rust GDNative library and build it out. After an absurdly long wait for the build to compile (~20 minutes on my Ryzen 3950X) we have a library! It’s then a matter of adding a few supporting config files and we have a similarity method we can use:
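On the GDScript side, usage ends up looking something like this (a sketch; the sif.gdns resource name and the similarity method are stand-ins for whatever the library actually exposes):

extends Node

# Load the NativeScript wrapper around the Rust library and instance it.
onready var sif = preload("res://sif.gdns").new()

func _ready():
	# Higher scores mean the two sentences are more alike.
	print(sif.similarity("Don't crush me!", "Please don't compress me!"))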

Now the less fun part: writing dialog. In an older jam game, Hindsight is 60 Seconds, I capped things off with a dialog tree as part of a last-ditch effort to avoid doing work on things that mattered. The structure of that tree was something like this…

const COMMENT = "_COMMENT"
const ACTION = "_ACTION"
const PROMPT = "_PROMPT"
const BACKGROUND = "_BACKGROUND"
var dialog = {
     "_TEMPLATE": {
         COMMENT: "We begin at _START. Ignore this.",
         PROMPT: "The dialog that starts this question.",
         ACTION: "method_name_to_invoke",
         "dialog_choice": "resulting path name or a dictionary.  If a dictionary, parse as though it were a path on its own.",
         "alternative_choice": {
             PROMPT: "This is one of the ways to do it.",
             "What benefit does this have?": "question",
             "Oh neat.": {
                 PROMPT: "We can go deeper.",
                 "…": "_END"
             }
         }
     },
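Walking a tree in this format is pleasantly mechanical. Here is a sketch of the two interesting operations (my reconstruction, not the jam’s actual code):

# Entering a node: fire its side effect, if any, then show its prompt.
func enter_node(node:Dictionary) -> void:
	if node.has(ACTION):
		call(node[ACTION])
	say(node[PROMPT])  # `say` is a stand-in for whatever the dialog UI does.

# Following a choice: a string value names another top-level node, while a
# dictionary value is an inline sub-tree, parsed as though it were a node.
func follow_choice(node:Dictionary, choice:String) -> Dictionary:
	if typeof(node[choice]) == TYPE_DICTIONARY:
		return node[choice]
	return dialog[node[choice]]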

I like this format. It’s easy to read and reason about, but it’s limited in that each dialog choice is a single fixed phrasing tied to a single transition. For DCM I wanted to be able to have multiple phrasings of the same thing without repeating the entire block. Towards that end, I used a structure like this:

# The keys are string constants in the same style as the block above
# (exact values are illustrative):
const TRIGGER_PHRASES = "_TRIGGER_PHRASES"
const TRIGGER_WEIGHTS = "_TRIGGER_WEIGHTS"
const NEXT_STATE = "_NEXT_STATE"
const RESPONSE = "_RESPONSE"
const PLACEHOLDER = "_PLACEHOLDER"
const ON_ENTER = "_ON_ENTER"
var dialog_tree = {
    "START": [ # AI Start state:
        # Possible transitions:
        {
            TRIGGER_PHRASES:["Hello?", "Hey!", "Is anyone there?", "Help!", "Can anyone hear me?"],
            TRIGGER_WEIGHTS: 0, # Can be an array, too.
            NEXT_STATE: "HOW_CAN_I_HELP_YOU",  # AI State.
            RESPONSE: "Greetings unidentified waste item.  How can I assist you?",
            PLACEHOLDER: "Can you help me?",
            ON_ENTER: "show_robot"  # When we run this transition.
        },

        {
            TRIGGER_PHRASES: ["Stop!", "Stop compressing!", "Don't crush me, please!", "Don't crush me!", "Wait!", "Hold on."],
            NEXT_STATE: "STOP_COMPRESS_1",
            RESPONSE: "Greetings unidentified waste item.  You have asked to halt the compression process.  Please give your justification.",
            PLACEHOLDER: "I am alive.",
            ON_ENTER: "show_robot"
        },

        {
            TRIGGER_PHRASES: ["Where am I?", "What is this place?"],
            NEXT_STATE: "WHERE_AM_I",
            RESPONSE: "Greetings unidentified waste item.  You are in the trash compactor.",
            ON_ENTER: "show_robot"
        }
    ],

This has proven to be incredibly unwieldy and, if you are diligent, you may have realized that the same “multiple trigger phrases” effect was just as possible in the first format via some simple splitting on a special character like “|”. A sketch of that follows.
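Something like this hypothetical helper (my illustration, not code from the game) would do it:

# Hypothetical: expand "Stop!|Wait!|Hold on." into one tree entry per phrase.
func expand_triggers(phrases:String, destination:String, node:Dictionary) -> void:
	for phrase in phrases.split("|"):
		node[phrase] = destination

Each phrase becomes its own choice key pointing at the same destination, so the block itself is never repeated.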

So how well does it work? The short answer is, “well enough.” It has also highlighted a much more significant issue: the immensity of the input space. The initial hope was that a placeholder in the input text would anchor and bias the end-user’s choices and hide the seams of the system. In practice, it was still a fraught endeavor.

All things considered, I’m still proud of how things turned out. It’s a system that’s far from perfect, but it’s interesting and it was plenty satisfying to build. I hope that people enjoy the game after the last bits are buffed out (hopefully before GDC 2020).