
It’s time for the first ever AI & Games Jam. Every word in the title is something near and dear to me: AI, Games, and Jam. This post will be updated a few times as I work my way through the weeklong jam. I’ll begin with the initial ideas, add some prototypes, and eventually clean the whole thing up into a coherent narrative.

First, let’s ideate:

The Jam is judged on three principal areas: originality, presentation, and fun. There is a theme: “Breaking the Rules”.

  • Initial impulse was a cliché: a detective that doesn’t play by the rules and wants to solve some kind of case by interviewing subjects. It lets me work in chatbots, which I know and love, and would probably score highly on originality, but probably wouldn’t be much fun.
  • Maybe a Smash-TV knockoff where you play against an ever-evolving AI army that’s trained with reinforcement learning or NEAT. The gameplay wouldn’t be super original (shooter arenas are a dime a dozen), but how many actually evolve their AI as a difficulty mechanic?
  • You are an ant in a colony tasked with finding resources, as ants do. While all the others are happily gathering their materials as directed by pheromones and swarm behaviors, you’re left to your own devices. Can you emulate an ant well enough to avoid being rejected by the colony?
  • Papers, Please! but the Turing Test. You play a malware detector sitting at the edge of a network; you need to talk to applications as they move past you, the firewall. Possibly really hard, but not impossible. Would be original, but perhaps not fun. High risk, because the core gameplay mechanic would revolve around an untested, possibly bad, chatbot system.
  • Dream Explorer: move through a latent space trying to find someone’s dream. Fancy GAN stuff: steer through a 3D or 4D latent space to find a matching image. Low theme adherence.
  • Crazy Self-Driving Taxi: a driving sim where you break the rules of the road to get your passenger to their destination.

Day 1:

Most of this is predicated on chatbot systems. Those are fairly tricky, and integrating Torch with C# can be a mess. Even training a decent model in the given time could prove impossible. That means tonight, goal one has to be getting a steerable language model to run end-to-end. If I can’t do that in the next two hours, I have to fall back to the Smash-TV knockoff. Let’s get started.

End of Day 1 status:

Tried chatbots for an hour, decided they were too high risk, and went with the Smash-TV approach. Movement and shooting:

Day 2:

I have from now (10:00 AM) until 6:00 PM tonight to get my stuff done. After that I’ve got social and work obligations for the rest of the week, plus whatever time I can eke out between 10:00 PM and 11:00 PM next week.

My initial impulse is that I’m not really liking the interaction between the theme and the game idea. There has to be a way to make the theme a _core_ of the mechanic and not just an afterthought. I’m thinking back to a game I tried making a while back, Terminus. The big hook of that game was the CPU and being able to program robots oneself. Hacking systems fits squarely within the idea of breaking the rules, and the gameplay will be more novel. I’m going to take everything I have and just try to apply it here.

End of Day 2:

Well, I’m pretty sure I focused on the wrong thing. I find it personally fun to program these tiny robots in assembly, but once again I’m not sure how much anyone else will want to do this, and if the game needs a variety of creatures, this may be time-prohibitive. I was really hoping to have something more substantive by EOD today, but that’s life.

Day 3:

Didn’t get home until a bit late. Worked on things for about an hour. Added camera follow with smoothing and lookahead, and threw in a tileset with collision. I need to get back to focusing on gameplay: at minimum, a win and lose condition or something that resembles a mechanic. In hindsight, I really didn’t plan enough there.
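The camera follow mentioned above is just exponential smoothing toward a point slightly ahead of the player. A minimal sketch of the math in Python (the actual version lives in the Godot project; `smooth_speed` and `lookahead_strength` are illustrative names, not from the game):

```python
# Python sketch of smoothed camera follow with velocity-based lookahead.
# The real version runs per-frame in GDScript; the parameter names here
# are made up for illustration.

def follow(cam_pos, target_pos, target_vel, delta,
           smooth_speed=5.0, lookahead_strength=0.5):
    """Move the camera toward the target plus a lookahead offset."""
    # Aim slightly ahead of the target, in its direction of travel.
    aim = (target_pos[0] + target_vel[0] * lookahead_strength,
           target_pos[1] + target_vel[1] * lookahead_strength)
    # Exponential smoothing: close a fixed fraction of the gap each frame.
    t = min(smooth_speed * delta, 1.0)
    return (cam_pos[0] + (aim[0] - cam_pos[0]) * t,
            cam_pos[1] + (aim[1] - cam_pos[1]) * t)

cam = (0.0, 0.0)
for _ in range(120):  # two seconds at 60 FPS
    cam = follow(cam, (10.0, 4.0), (2.0, 0.0), 1.0 / 60.0)
```

After a couple of simulated seconds the camera settles on the lookahead point rather than the player itself, which is what gives the effect its feel.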

Day 4:

A bit of a pivot. Again. I now have a less nebulous idea of how to finish. I didn’t even realize I was forgetting about an end state until I started thinking about the “why” of the player’s actions. The new goal: find your lost dog. That’s simple enough, and it lets me iteratively improve the game while always keeping a complete build.

Steps: first, you just find the dog on the map. Add a win condition that fires when the player is within range.

Bonus: add a ‘restricted’ area the player needs to enter to find their dog. Open it using the hacking technique above OR by finding a keycard on the map.

Bonus: chatbots you can ask where the dog is.

Bonus: more curated level.

Bonus: random levels.

Bonus: dog moves around depending on hunger and thirst.
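The base win condition is just a per-frame distance check. Sketched in Python (the game itself is GDScript, and the 1.5-unit radius here is an arbitrary illustrative value):

```python
# Sketch of the distance-based win condition described in the steps above.
# The game uses GDScript; WIN_RADIUS here is an arbitrary illustrative value.
import math

WIN_RADIUS = 1.5

def found_dog(player_pos, dog_pos):
    """True when the player is within WIN_RADIUS units of the dog."""
    dx = player_pos[0] - dog_pos[0]
    dy = player_pos[1] - dog_pos[1]
    return math.hypot(dx, dy) <= WIN_RADIUS
```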

Day 5:

Had an unexpected opening in the evening to work on things. Made a bunch of procedurally generated people and gave them waypoints around the map to move between. There’s a bug in the pathfinding, though: all of them seem to move to strange places. I also have the win trigger, but haven’t made a win screen yet.

Day 7:

Didn’t have time to work on day six. Day seven was a bit of a frantic wrap-up; I mostly wanted to get the game packaged and uploaded. Added the title screen and the final page, and fixed the nav bug (relative position for the movement target). I also had to draw the nav mesh separately and clear out all the navigation added to the tilemap, because characters were getting stuck on walls.

Closing Thoughts:

That was okay. In hindsight, I should have done something with more options to demo some fancy AI. If I could do it again, I’d make a chess game where you can break a single rule (have a piece move like a queen, say) once per game.

Itch.io Link: https://xoana.itch.io/find-your-dog

GitHub page: https://github.com/JosephCatrambone/AIAndGamesJam2021

I made an orbital camera controller for someone who wanted help on the Godot Discord channel. Here’s the source. When applied to a Camera Node it gives this kind of behavior:

The player controller is fairly straightforward, so I haven’t included it as a separate gist. For a KinematicBody player, one can move relative to the camera direction like so:

extends KinematicBody

export var walk_speed: float = 5.0

func _physics_process(delta):
	# Walk in the direction the camera is pointing.
	# (move_and_slide belongs in _physics_process; it applies delta internally.)
	var camera = get_viewport().get_camera()
	var dy = int(Input.is_action_pressed("move_forward")) - int(Input.is_action_pressed("move_backward"))
	var dx = int(Input.is_action_pressed("move_left")) - int(Input.is_action_pressed("move_right"))
	# The camera looks down its -Z axis, so 'forward' is -basis.z.
	var move = (camera.global_transform.basis.x * -dx) + (camera.global_transform.basis.z * -dy)
	move = Vector3(move.x, 0, move.z).normalized()  # Drop the vertical 'looking down' component.
	move_and_slide(move * walk_speed)

Don’t Crush Me is a game about pleading for your life with a robotic garbage compactor. It came up many years ago during a discussion in the AwfulJams IRC channel. The recent advent of Smooth Inverse Frequency provided an ample opportunity to revisit the idea with the benefits of modern machine learning. In this post we’re going to cover building SIF in Rust, compiling it into a library we can use in the Godot Game Engine, and then building a dialog tree in GDScript to control our gameplay.

First, a little on Smooth Inverse Frequency:
In a few words, SIF involves taking a bunch of sentences, converting each to a row vector, and removing the shared principal component. The details are slightly more involved, but not MUCH more involved. The conversion to vectors involves tokenization (which I largely ignore in favor of splitting on whitespace for simplicity) and smoothing based on word frequency (which I also currently ignore).
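The whole pipeline, under exactly those simplifications (whitespace tokenization, no frequency smoothing), can be sketched in pure Python. The toy three-dimensional word vectors below stand in for real GloVe data, and the principal component is found with plain power iteration:

```python
# Pure-Python sketch of the simplified SIF pipeline described above:
# average word vectors per sentence, then remove the shared principal
# component. The tiny 3-D word vectors are toys standing in for GloVe.
import math

word_vecs = {
    "do":     [0.1, 0.9, 0.2],
    "not":    [0.8, 0.1, 0.3],
    "crush":  [0.4, 0.5, 0.9],
    "me":     [0.2, 0.7, 0.1],
    "stop":   [0.5, 0.4, 0.8],
    "please": [0.3, 0.6, 0.2],
}
DIM = 3

def embed(sentence):
    """Average the word vectors of a whitespace-tokenized sentence."""
    toks = [t for t in sentence.lower().split() if t in word_vecs]
    out = [0.0] * DIM
    for t in toks:
        for i in range(DIM):
            out[i] += word_vecs[t][i]
    return [x / max(len(toks), 1) for x in out]

def principal_component(rows, iters=50):
    """First principal direction of the row vectors, via power iteration."""
    v = [1.0] * DIM
    for _ in range(iters):
        w = [0.0] * DIM  # w = (R^T R) v
        for r in rows:
            dot = sum(ri * vi for ri, vi in zip(r, v))
            for i in range(DIM):
                w[i] += r[i] * dot
        norm = math.sqrt(sum(x * x for x in w)) or 1.0
        v = [x / norm for x in w]
    return v

def remove_pc(rows):
    """Subtract each row's projection onto the shared principal component."""
    pc = principal_component(rows)
    out = []
    for r in rows:
        dot = sum(ri * pi for ri, pi in zip(r, pc))
        out.append([ri - dot * pi for ri, pi in zip(r, pc)])
    return out

def cosine(a, b):
    na = math.sqrt(sum(x * x for x in a)) or 1.0
    nb = math.sqrt(sum(x * x for x in b)) or 1.0
    return sum(x * y for x, y in zip(a, b)) / (na * nb)

sentences = ["stop please", "do not crush me", "crush me"]
vecs = remove_pc([embed(s) for s in sentences])
```

With the shared component removed, `cosine` between two sentence residuals is the similarity score the game uses to match player input against trigger phrases.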

Really, one of the “largest” challenges in this process was taking the GloVe vectors and embedding them in the library so that GDScript didn’t have to read anything from a multi-gigabyte file. The GloVe 6B 50-D uncased vectors take up only about 150 MB in an optimal float format, and I’m quite certain they can be made more compact still. Additionally, since we know all of the tokens in advance, we can use a perfect hash function to optimally index into the words at runtime.
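The embedding scheme boils down to one flat float buffer plus a word-to-offset index. The real version does this in Rust with a perfect hash function over the known vocabulary; here’s the idea sketched in Python with a plain dict standing in for the PHF (the words and dimension are toys, not the actual GloVe data):

```python
# Toy sketch of packing word vectors into one flat buffer with an offset
# index. The shipped library does this in Rust with a perfect hash function
# over the known vocabulary; a plain dict stands in for the PHF here.

DIM = 4
vocab = ["the", "dog", "robot"]  # illustrative tokens, not real GloVe vocab
vectors = [
    [0.1, 0.2, 0.3, 0.4],
    [0.5, 0.6, 0.7, 0.8],
    [0.9, 1.0, 1.1, 1.2],
]

# Pack everything into one contiguous float buffer...
flat = [x for vec in vectors for x in vec]
# ...and map each word to its starting offset.
offset = {word: i * DIM for i, word in enumerate(vocab)}

def lookup(word):
    """Slice a word's vector out of the flat buffer."""
    start = offset[word]
    return flat[start:start + DIM]
```

Since the whole buffer is baked into the library at compile time, lookups never touch the disk at runtime.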

With our ‘tokenize’ and ‘vectorize’ functions defined, we’re free to put these methods into a small Rust GDNative library and build it out. After an absurdly long wait for the build to compile (~20 minutes on my Ryzen 3950X), we have a library! It’s then a matter of adding a few supporting config files, and we have a similarity method we can use:

Now the less fun part: writing dialog. In an older jam, Hindsight is 60 Seconds, I capped things off with a dialog tree as part of a last-ditch effort to avoid doing work on things that mattered. The structure of that tree was something like this…

const COMMENT = "_COMMENT"
const ACTION = "_ACTION"
const PROMPT = "_PROMPT"
const BACKGROUND = "_BACKGROUND"
var dialog = {
     "_TEMPLATE": {
         COMMENT: "We begin at _START. Ignore this.",
         PROMPT: "The dialog that starts this question.",
         ACTION: "method_name_to_invoke",
         "dialog_choice": "resulting path name or a dictionary.  If a dictionary, parse as though it were a path on its own.",
         "alternative_choice": {
             PROMPT: "This is one of the ways to do it.",
             "What benefit does this have?": "question",
             "Oh neat.": {
                 PROMPT: "We can go deeper.",
                 "…": "_END"
             }
         }
     },

I like this format. It’s easy to read and reason about, but it’s limited in that only one dialog choice corresponds to one action. For DCM I wanted to be able to have multiple phrasings of the same thing without repeating the entire block. Towards that end, I used a structure like this:

var dialog_tree = {
    "START": [ # AI Start state:
        # Possible transitions:
        {
            TRIGGER_PHRASES:["Hello?", "Hey!", "Is anyone there?", "Help!", "Can anyone hear me?"],
            TRIGGER_WEIGHTS: 0, # Can be an array, too.
            NEXT_STATE: "HOW_CAN_I_HELP_YOU",  # AI State.
            RESPONSE: "Greetings unidentified waste item.  How can I assist you?",
            PLACEHOLDER: "Can you help me?",
            ON_ENTER: "show_robot"  # When we run this transition.
        },

        {
            TRIGGER_PHRASES: ["Stop!", "Stop compressing!", "Don't crush me, please!", "Don't crush me!", "Wait!", "Hold on."],
            NEXT_STATE: "STOP_COMPRESS_1",
            RESPONSE: "Greetings unidentified waste item.  You have asked to halt the compression process.  Please give your justification.",
            PLACEHOLDER: "I am alive.",
            ON_ENTER: "show_robot"
        },

        {
            TRIGGER_PHRASES: ["Where am I?", "What is this place?"],
            NEXT_STATE: "WHERE_AM_I",
            RESPONSE: "Greetings unidentified waste item.  You are in the trash compactor.",
            ON_ENTER: "show_robot"
        }
    ],

This has proven incredibly unwieldy and, if you are diligent, you may have realized that the first approach could support the same “multiple trigger phrases” just as easily by splitting on a special character like “|”.
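Either way, the matching loop is the same: embed the player’s input, score it against every trigger phrase of every transition out of the current state, and take the best one. A sketch in Python, with token-overlap (Jaccard) similarity standing in for the SIF cosine score; the state names mirror the tree above but the dict keys are simplified:

```python
# Sketch of transition selection: score the player's input against each
# trigger phrase and follow the best-scoring transition. Token overlap
# (Jaccard) stands in for the SIF cosine similarity the game actually uses.

def similarity(a, b):
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1)

dialog_tree = {
    "START": [
        {"triggers": ["Stop!", "Don't crush me!"], "next": "STOP_COMPRESS_1"},
        {"triggers": ["Where am I?", "What is this place?"], "next": "WHERE_AM_I"},
    ],
}

def step(state, player_input):
    """Return the next state whose trigger phrases best match the input."""
    best_score, best_next = -1.0, state
    for transition in dialog_tree[state]:
        for phrase in transition["triggers"]:
            score = similarity(player_input, phrase)
            if score > best_score:
                best_score, best_next = score, transition["next"]
    return best_next
```

A real version would also want a minimum-score threshold so that complete non-sequiturs fall through to a generic “I did not understand” response.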

So how well does it work? The short answer is, “well enough.” It has highlighted a much more significant issue: the immensity of the input space. Initially, I thought that using a placeholder in the input text would help anchor and bias the end-user’s choices and hide the seams of the system. In practice, this was still a fraught endeavor.

All things considered, I’m still proud of how things turned out. It’s a system that’s far from perfect, but it’s interesting and it was plenty satisfying to build. I hope that people enjoy the game after the last bits are buffed out (hopefully before GDC 2020).

At the close of my earlier update I mentioned wanting to try ‘Tracer-style’ time travel where only the player moves backwards and everything else stays in place. I gave it a go and got it working, but it wasn’t particularly interesting. It was basically just the player moving in the opposite direction. Animation could probably jazz that up, but a more fun idea came to me in the middle of a sleepless night:

Seeing the future.

Trivially, if everything in the world rewinds and the player can make different decisions, that’s basically seeing the future. And that’s what I built:

It’s not perfect. You’ll notice that the dynamic cubes retain their velocity after the time rewind happens, but that’s solvable.

Here’s how it works: there’s a global timekeeper which records the current tick. The base class has three methods (_process, get_state, and set_state) and two variables (start_tick and history[]).

The global timekeeper sends a signal when time starts to rewind. During the rewind process, each tick is a step backwards instead of a step forward. The _process method of the base class checks whether a rewind is active and, if so, calls set_state(history[global_tick]). If rewind is not active, we append to or update the history. There’s some nuance to tracking deltas and despawning, but really that’s about it. Simple, eh?
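The scheme above, minus the signal plumbing and despawn handling, fits in a few dozen lines. Sketched in Python (the game’s version is GDScript; `Actor` here tracks only a position, and the names are illustrative):

```python
# Minimal sketch of the tick-based record/rewind scheme described above.
# The game implements this in GDScript with signals; Actor here tracks
# only a single position value.

class TimeKeeper:
    def __init__(self):
        self.tick = 0
        self.rewinding = False

    def advance(self):
        if self.rewinding:
            self.tick = max(self.tick - 1, 0)  # each tick steps backwards
        else:
            self.tick += 1

class Actor:
    def __init__(self, keeper):
        self.keeper = keeper
        self.position = 0.0
        self.history = {}  # tick -> recorded state

    def get_state(self):
        return self.position

    def set_state(self, state):
        self.position = state

    def process(self):
        t = self.keeper.tick
        if self.keeper.rewinding:
            if t in self.history:
                self.set_state(self.history[t])  # replay recorded state
        else:
            self.history[t] = self.get_state()   # append/update history

keeper = TimeKeeper()
actor = Actor(keeper)
for _ in range(10):          # move forward, recording each tick
    actor.position += 1.0
    actor.process()
    keeper.advance()
keeper.rewinding = True
for _ in range(5):           # rewind five ticks
    keeper.advance()
    actor.process()
```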

“Your theme is… 60 Seconds!”

And so the brainstorming begins. I feel like anything that doesn’t explicitly involve time travel of some sort will get dinged by the theme-adherence police, and besides, time travel is fun.

I’ve never made a game with time travel before, but it presents plenty of opportunities to do novel things. At the very least, even if the mechanics aren’t original, I can do things like make the player character switch from replay to AI-driven if the player interacts with his/her previous self, or force players to avoid interacting with their previous selves by breaking line-of-sight and such. All of this is predicated on Terminator-style time travel, where the player goes back to a previous timeframe and can never return to the original. MinutePhysics calls this “new/changed history” time travel; I’d call it ‘forked history’ time travel. The other form would be self-influencing time travel, where you happen to see yourself from the future coming into your present.

I tried a lot of architectures. It began with the simplest thing I thought would work: a controller component that recorded keypresses and emitted signals. The parent object would listen for signals on the child component and act as though it were receiving input. This worked, but it was hard to jump to a specific point in time.

The next thing I tried was a ‘global manager’. Each actor had ‘get state’ and ‘set state’ methods. In record mode, actors appended to their history; in replay mode, they pulled from history and set state. Things got complicated when it came to travelling backwards in time and then interacting differently: it was easy to go back to a time and replay, but it wasn’t easy to go back, replay, and record at the same time. That part was necessary if we wanted to see ourselves and interact in tandem.

The third thing I tried was similar to the first: each actor is a deterministic function of time. ‘Setting the time’ consists of taking the timestamp and setting state accordingly; an actor’s state is purely a function of the current tick. That works fine. After playing with it, however, it’s really hard to design puzzles for.
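That third approach needs no history at all: `set_time(t)` just evaluates the actor’s path. A sketch in Python (the circular patrol path here is an illustrative example, not from the game):

```python
# Sketch of the 'actor as a pure function of time' approach: no recording,
# no history. set_time(t) simply evaluates the path, so jumping to any tick
# is trivial. The circular patrol path is an illustrative example.
import math

class PatrollingActor:
    """An actor whose state is fully determined by the current tick."""
    def __init__(self, radius=3.0, period=120):
        self.radius = radius
        self.period = period
        self.x = self.y = 0.0

    def set_time(self, tick):
        angle = 2.0 * math.pi * (tick % self.period) / self.period
        self.x = self.radius * math.cos(angle)
        self.y = self.radius * math.sin(angle)

actor = PatrollingActor()
actor.set_time(90)
saved = (actor.x, actor.y)
actor.set_time(0)      # jump anywhere in time...
actor.set_time(90)     # ...and back: the state is identical
```

The catch, as noted above, is that purely time-driven actors can’t react to the player, which is exactly what most time-travel puzzles need.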

Shown Here: Player Jumping Backwards in Time and Seeing Other Selves

I’m going to try the Tracer-style time travel thing, where the player can go back 60 seconds to their previous state while the world stays as it is. This is closer to the self-consistent/non-branching timeline.

Thematically, I really want to rip off Zero Wing and do a terrible translation of the plot. The character has a time-travel device that sends him/her back 30 seconds, but it has a 60-second recharge. The player got a lethal dose of radiation and needs to go back to save him/herself. The goal is to get past unity and into net-time-gain territory: either a recharge of 29 seconds or a jump of 61 seconds.
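The “past unity” goal is just arithmetic: each jump buys back the jump length but costs the recharge time in waiting, so the device only nets time once jump exceeds recharge. A quick check of the numbers above:

```python
# The net-time-gain arithmetic from the upgrade goal above: each cycle buys
# back jump_s seconds but costs recharge_s seconds of waiting.

def net_gain(jump_s, recharge_s):
    return jump_s - recharge_s

base = net_gain(30, 60)        # the starting device: loses 30s per cycle
upgrade_a = net_gain(30, 29)   # recharge upgraded to 29s: gains 1s per cycle
upgrade_b = net_gain(61, 60)   # jump upgraded to 61s: gains 1s per cycle
```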