Team Aurora Games, A Post Mortem

I’m sure this is something that you’ve been wanting to read about.

It’s hard to begin with something like this, honestly. I guess I’ll start by saying that I’m no longer with Team Aurora, and haven’t been since May 2014. It’s crazy to think that 9 of us, for the majority of Team Aurora’s most active life, had been just jamming out this crazy game, Grey. And it never really saw the light of day.

Yes, sure, we did go to PAX East 2014, but it never saw the release that each of us had planned out in our minds. We all had these great ideas for a game and we all poured our hearts and souls into this amalgamation of ideas. Grey and Team Aurora are massive topics, so I want to break this post into a few sections covering how and why we got together, what happened with the Kickstarter, development issues, and more.


In October 2011, I was approached by a classmate, who we will call B, with an idea to make a game in our spare time. He pitched the general idea of just making something on the side so that we could stay busy and hone our skills. We could use it as something on a resume or just a project to show friends and family. I had been developing games for many years before going to college, and I thought I would be able to offer some advice in the process.

The team was quite large from the start. We had 5 programmers, 3 designers, and a producer. B naturally rose up as the leader and was facilitating weekly meetings and more. At our first meeting we all started to get to know each other and learn a little bit about what we could do. It was clear from the start that we all shared one passion: Games. But before we could do anything, we started to think of a name for our group. We started to think of what the inspiration was for certain game company names, and one of the designers on the team, R, had a dog named Aurora. We liked the idea of naming the company after his dog, so we rolled with it.

With a name for our team all set, we started pitching concepts to each other, trying to come up with an idea that each of us could follow and get into. I never really got into the story aspect of a game when starting a new project; I always focused on systems and the things I loved about games before making a story. As the ideas started to fill up the whiteboard and everyone was bouncing ideas around, it was evident that everyone wanted to make an RPG. As I had learned from personal experience, you do not want to make an RPG for a first project. But being young and timid, I never voiced my concern. I was, however, carried away by thoughts of how I would go about programming this game.

Over time, and after many meetings, we started forming the concept for the game we all imagined, which would eventually become Grey. I was a prolific programmer and started programming tiling systems and AI systems and all sorts of things so that we could start laying the groundwork for the game. But by this time we were all contacted by B. B was leaving our group and dropping out of college. He had made the decision that college wasn’t for him and he wanted to do something else with his life. B came to me and handed the reins over; he knew that I had the most experience developing games and asked me to take Team Aurora and our concept, Grey, to completion.

Grey: The Lost Technology

We quickly realized that Grey was going to be a pretty large game and a big undertaking for us as full-time college students, but we thought that with the right planning we could finish in a year. But we needed to make some changes if that was going to happen. Firstly, we didn’t have any artists. We started asking around and talked to who we all thought were the top creative talent in our year, C and M. After some chatting, they were interested and came to a meeting. They decided that they wanted to join, and we let the creative direction for the game come from them. They loved what we were going for and had a lot of ideas.

We started getting concept art going and had some game jams working on the game. We all decided that we wanted to use XNA and build a game engine. Since none of us had direct experience working on a game of this size, we thought, “How hard could it be?” Oh, what a mistake that was. We had no tools, no way for the designers to make quests or dialogue or characters. Anything, really. So I had ideas on how we would lay out all of this work and started jamming on it. I worked on this game probably every single day, designing code flow, UI, editors, etc. I worked on a lot of tools for the team.

I don’t want to make this seem like a one-man battle… We had other programmers: J, F, and D. D was busy working on other projects, but had initially come in with the potential to be our media person; D would manage our Twitter and other services for us. D began to get preoccupied and could not contribute as much as he wanted to. F and J were both pretty excited to get working on stuff, and J was pretty interested in artificial intelligence. But we had a lot of groundwork to lay down before we could start doing anything, and I did my best to distribute the workload among us. We used Git and Bitbucket for source control, so we were able to iterate pretty easily. Most of us still had some learning to do with it, but over time we all became proficient. Despite that, we worked like crazy to get a game together that we could load up and move around in, and start iterating on something.

Then summer hit, and we were all impacted pretty hard by it. We had discussions about putting in 12 hours each per week on the game. Each of us was putting in some hours; I was almost always hitting the amount, and most of the team was hitting it too. However, F was struggling to make the meetings (we were pretty flexible about them), and we could all tell something was going on. I had some chats with F about his performance and what was happening. He assured me that he would get going on stuff. Some time passed and we still saw nothing from him, despite having assigned some tasks to him. I asked him what was going on, and he was discouraged because portions of his code (somewhere around 50%-75%) had been rewritten and he felt like his contributions didn’t matter. They did matter, but as development goes, we all find better or more efficient ways of doing things, and we end up rewriting or improving.

This turned into a pretty hard situation for us because we knew we wouldn’t see any contributions from F, and it hurt to see him go. It hurt in particular for me because I had never been in a situation to tell someone that they had been “fired”. It took me a lot of courage to make the phone call (it was still summer and we couldn’t meet in person), but I think the team benefited from it because we didn’t have to assign tasks and see them collect dust.


Early in the development of Grey, we had a lot of art coming in, and a lot of really cool story development and mechanics planned out by our designers. We were all really ecstatic, but from the early inception of our group, we had always been hearing about the business side from B. B was really into the side of running a business and not so much developing a game. I don’t know if he saw it as a source for money or if he was just being safe, but after he left I think some of us were scared about what would happen if we were to not take care of registering as an official business or having our copyrights verified.

We rushed into the town hall and got our paperwork together to register as an LLC and get a trade name. We got it notarized and sent it out. Once we were official, we were still under the impression that we needed properly licensed software to be developing this game, or we could get sued and go to jail, or what have you. So that’s how we came up with the idea to make a Kickstarter. We all sat down in a meeting and looked at what we wanted to do with the game and how much time we had left in college. We thought long and hard about it. The biggest questions fell to us programmers and our capabilities. I think maybe we all felt a little cocky, but we gave bold estimates saying that we could do this by X date and that by Y date, when what we really should have been doing was doubling the time we needed to get these features done. We never considered feature creep or our own inability to stick to the plan. We always strived for something greater, and that struck us hard in the later months of development.

But we were excited to create a Kickstarter, and we were contacted by a friend who was doing film work for the school; he said he wanted to document our journey. We were excited to get some coverage of what we were doing and asked if he could help with our Kickstarter video. He was happy to help, and we all think he did an awesome job; it looked very professional. But for me, looking back, we should have been more realistic about what we asked for. We looked professional, but our knowledge of the game industry was not that of actual professionals. The money we requested was not realistic.

Back to the Kickstarter itself. We weighed the budget of what we needed to be legal, and we looked into what was allowed by some of our current software as it relates to commercial use. Most everything checked out, but we still needed some licenses for photo editing software and money to buy a soundtrack. We literally asked for the amount we needed to get those. We didn’t ask for money to pay ourselves. In fact, we never made any money from the game; over the years we only lost money, continually reporting losses on our taxes.

Once we had meticulously looked over the Kickstarter, over and over, we hit the launch button. It was alive! We couldn’t have been happier. But one of the many oversights we made was that we had literally nothing to show for in-game play in the first half of the Kickstarter. We just had an idea, and that was it. We all resolved that we needed to jam on this game really hard to get anything even remotely presentable. We got to work and spent hours upon hours on the game to create the demo video that went live on the Kickstarter page. To be honest, some of it we hacked together for the video, but most of that was done correctly by the end.

After a long 30 days we generated enough interest from users of Kickstarter to get funded!

Grey Development Post-Kickstarter Success

By the end of the summer, Grey wasn’t entirely in a state to send out to our backers and we made a post to say we needed more time. People understood, and that was nice of them.

We needed music, so we reached out to some of the people who had contacted us looking for opportunities, and we talked to a guy who lived in Hollywood, who sent us some samples. We really liked his stuff and wanted to talk to him more. We did a few Skype meetings, and he was really laid back about finding a price that worked for both of us; he was even willing to give us a little more music than we initially paid for. We really couldn’t have found a better composer for the money we paid. After many meetings and months, we ended up with the most amazing soundtrack. We shared some of the tracks with our backers and used the music at events we attended, and it was awesome.

By this time, I think it was about 1.5 years after the Kickstarter, and we had something radical coming together. We had audio transitions in the game, some AI, dialogues, map transitions, and many enemies. The editor was even coming together. I had shifted some of the media responsibilities over to one of our designers, B. He was posting Twitter updates and everything. We posted a few updates on our blog, and the Kickstarter backers seemed happy that we were still developing the game.

We realized that we were going to be graduating soon and really needed to get things into gear. So we brought in some “interns”: an artist and a business manager. I couldn’t take care of all of the business things anymore and wanted someone to take care of stuff like that; we needed to keep the development team developing. I shifted most of the social media responsibilities to our business manager, and she was busy doing her thing, and B was able to continue designing mechanics for the game. Our new artist also fit in really nicely with C and M, and she was making some really great art!

We were about 90% complete with art, most of the systems in the game were done, and the editor was nearly complete. But the game still didn’t feel right. It was interesting, but it wasn’t fun. After 2 years of development, the game wasn’t living up to our expectations, and the way we did the animations wouldn’t allow for combat as fluid as we had hoped. Despite this, we did all we could with what we had so we could get this game to the public.

We wanted to generate some hype, so our business manager found a tech conference in Burlington for us to go to and was able to source a table for free! We went, and our game generated a lot of buzz there. We had our soundtrack playing, and we were able to really pitch our game as the cool and awesome product we envisioned. We had kids walk by who stuck their faces to the TV and fell in love with the game. At this conference we still had nothing playable, but people were interested nonetheless. We went to another conference hosted at our college, the Green Mountain Games Festival, where we had a full table, C was doing concept art, and we had a playable demo of the game for spectators. We got a lot of great feedback, and we were still improving the game day after day.

We got to the point where the game felt good, and development was winding down. It was nearing a point where the designers could design nearly the entire game (with the exception of cutscenes, which we decided to scrap). However, we all realized how close we were to graduating and needed to do something quick.

Most of us had accepted job offers from other companies because we all wanted to get paid. We wanted to see some money, and we all had developed and honed our skills to a great degree. This was not great for Grey though. The team got together, and we discussed cutting features to scale back and get the game to play well and be fun. We started dropping features and working more on our showy features. We did get a little further in how it played and the combat was getting really interesting with multiple combos and finishers. We were all getting excited again. We got the opportunity to go to PAX East 2014 and bought a table. Some of the team went down and showed the game. I couldn’t make it for the entire weekend, but was able to get down for a day and we generated massive amounts of interest with this.

We all felt like it had gotten to a good point. Good enough for the remaining members of the team to handle the final touches.


Once we all graduated, it was time to hand over the reins to D, as he was now free to work on Team Aurora projects. I gave him all the credentials for our social media, website, servers, etc. I also transferred over all business documents and similar official documentation. Once this was done, I hoped to see updates on the Kickstarter and the Twitter, but it pretty much went dead after I transferred it over. Before transferring the Kickstarter over, I tried to change my name on it, but Kickstarter would not allow it. I also emailed Kickstarter many times but never heard anything back from them. It was a real disappointment to experience absolutely terrible support from a company that assists in funding projects worth millions of dollars.

All in all, this is essentially where it ends for me. This massive, massive, massive project of ours, 2 years out from Kickstarter end and nothing to show backers for it. I think in our hearts we were all a little disappointed with ourselves and how we handled the development of the project. I have to hand it to the team though, we worked unbelievably hard to get the game where we got it for the PAX presentation and I couldn’t be more proud of the work we did there.

I think as a team we had written something like 60,000 lines of code for the game and had something like 1GB of art assets. The game was huge, but that’s what you get when you make an RPG for a first project. I think if I had the choice to do it all over again, I would smack down any request for an RPG and suggest we make mobile games. Mobile games are fun and can actually generate revenue! There is a low startup cost too.


So, that’s the story behind Grey and Team Aurora. Thanks.



wxWidgets for a Blob Game Editor

A blob game is something like Loco Roco or Gish where there are characters and objects in the game that can compress and squeeze and roll around. Using the physics system I have been working on this semester I wanted to replicate that and create a game with the following in mind:

  • Must contain a character that can roll around
  • Must contain a level editor for placing composite objects
  • Must display features of a mass aggregate system

For me, creating the game would not be hard, but I wanted to pay special attention to the editor I was going to make for it. I wanted to create my level editor so that it was styled similarly to that of Grey: The Lost Technology, and by that I mean a separate control panel, so that I do not have to create a rendering loop for the active, drawable game space within a window. I knew that you could use Windows Forms with C++, so I tried setting that up. The problem with this idea is that the C++ implementation of Windows Forms is managed. This immediately created problems because my current game implementation is written in unmanaged code. After some fiddling around with Windows Forms, I eventually gave up. The primary reason I wanted to use it was the visual designer; it was primarily the speed of quickly mocking up a UI that I wanted.

I started looking at other C++ UI libraries and I found a few, such as Qt and wxWidgets. Both are really powerful UI libraries that offer all the functionality that Windows Forms offers, but cross-platform. I had done my research and found that Qt is easier to use, but wxWidgets was meant to be the most natural looking for the given system it runs on. It’s meant to look as close to system dialogs as possible. I had also looked at wxWidgets before and had already read their documentation because I saw that a company that I like was using it for their tools development. The decision was made… already having some prior knowledge of the library, I chose wxWidgets.

wxWidgets turned out to be a really easy platform to implement into my existing codebase. I usually have a lot of trouble linking a new library into my existing code, but not with this. The documentation is very clear, and I didn’t even have to compile my own version of the source code! The setup instructions for getting wx into your Visual Studio can be found here. After that, you can grab any of the “Hello World” scripts or samples from the wx documentation page.


Here is my minor implementation of a small control panel for my editor:

#pragma once

#include <wx/wx.h>
#include "CompositeEditor.h"

class CompositeCPanel : public wxFrame
{
public:
	CompositeCPanel(const wxString& title, CompositeEditor* editor);

	void OnUpdate(wxCommandEvent& event);
	void OnSave(wxCommandEvent& event);
	void OnComboBoxSelected(wxCommandEvent& event);

	// Entry fields for the selected piece's position and radius.
	wxTextCtrl* entryX;
	wxTextCtrl* entryY;
	wxTextCtrl* entryZ;
	wxTextCtrl* radius;

	wxTextCtrl* mName;
	wxComboBox* mCombo;

	CompositeEditor* mEditor;
};

#include "CompositeCPanel.h"

CompositeCPanel::CompositeCPanel(const wxString& title, CompositeEditor* editor)
	: wxFrame(NULL, wxID_ANY, title, wxDefaultPosition, wxSize(250, 350))
	, mEditor(editor)
{
	wxPanel* panel = new wxPanel(this, wxID_ANY);

	// Position and radius entries.
	new wxStaticText(panel, wxID_ANY, "X:", wxPoint(5, 20));
	entryX = new wxTextCtrl(panel, wxID_ANY, "", wxPoint(20, 20));
	new wxStaticText(panel, wxID_ANY, "Y:", wxPoint(5, 45));
	entryY = new wxTextCtrl(panel, wxID_ANY, "", wxPoint(20, 45));
	new wxStaticText(panel, wxID_ANY, "Z:", wxPoint(5, 70));
	entryZ = new wxTextCtrl(panel, wxID_ANY, "", wxPoint(20, 70));

	new wxStaticText(panel, wxID_ANY, "Radius:", wxPoint(5, 95));
	radius = new wxTextCtrl(panel, wxID_ANY, "", wxPoint(50, 95));

	wxButton* updateButton = new wxButton(panel, wxID_ANY, wxT("Update"), wxPoint(20, 120));

	new wxStaticText(panel, wxID_ANY, "Link Type: ", wxPoint(5, 170));
	wxString items[] = { "rod", "spring" };
	mCombo = new wxComboBox(panel, wxID_ANY, wxEmptyString, wxPoint(70, 170), wxDefaultSize, WXSIZEOF(items), items);

	new wxStaticText(panel, wxID_ANY, "Name:", wxPoint(5, 195));
	mName = new wxTextCtrl(panel, wxID_ANY, "", wxPoint(50, 195));

	wxButton* saveButton = new wxButton(panel, wxID_SAVE, wxT("Save"), wxPoint(20, 220));

	// Wire the handlers up; Bind is available from wxWidgets 2.9 onward.
	updateButton->Bind(wxEVT_COMMAND_BUTTON_CLICKED, &CompositeCPanel::OnUpdate, this);
	saveButton->Bind(wxEVT_COMMAND_BUTTON_CLICKED, &CompositeCPanel::OnSave, this);
	mCombo->Bind(wxEVT_COMMAND_COMBOBOX_SELECTED, &CompositeCPanel::OnComboBoxSelected, this);

	// Let the editor talk back to this panel.
	mEditor->mFrame = this;
}

void CompositeCPanel::OnComboBoxSelected(wxCommandEvent& event)
{
	// switch the link type ("rod"/"spring") on the selected connection
}

void CompositeCPanel::OnUpdate(wxCommandEvent& WXUNUSED(event))
{
	// update the game.
}

void CompositeCPanel::OnSave(wxCommandEvent& event)
{
	// write the composite out through the editor
}
One of the problems I faced with using wxWidgets was that I did not fully understand how its event system worked, and thus I was not able to take full advantage of its feature set. To achieve the goal that I wanted, I passed a reference to the class that this dialog was linked to. While it’s probably not the best solution, it is a solution, and until I find a better way, this is the way it stays for now.


Things to know about LibGDX

Hey everyone,

I just started porting Crate Crash over to LibGDX to get it working on Android, and through this process I’ve learned a few things.

  1. The coordinate system in LibGDX has its origin in the bottom left, instead of the top left. This means that most things will be flipped if you port them over for a 2D project. Luckily, you can reconfigure the camera so that you can keep providing values in the typical top-left scenario.
    1. camera = new OrthographicCamera();
      camera.setToOrtho(true, width, height);
      // setToOrtho(boolean yIsDown, float viewportWidth, float viewportHeight)
  2. Due to flipping the coordinate system, the images then need to be flipped to accommodate the newly flipped camera.
    1. If using Sprites:
      mySprite.flip(false, true); // flip(bool x, bool y)
  3. Within LibGDX they have a physics engine. The physics engine being used is Box2D, and unlike normal distributions of Box2D, the pixel-to-meter ratio is 1:1 instead of the usual 30:1. Switching over to this is not that difficult and requires only small changes if the swap is made early in the project.
  4. By default, the version of LibGDX that I got uses OpenGL ES 1.x. This is only problematic if you want to use textures that do not have power-of-two dimensions. If you are supporting Android 1.6 or older (and devices still on 1.6 are very old), then you may not be able to use OpenGL ES 2. However, this is a minor limitation because 99% of devices used today are OpenGL ES 2 compatible.
  5. Handling multiple resolutions is really easy once you know the basic idea behind it. There are a few options for cross-platform stability and display, and one of them is to force the same aspect ratio on each device. By doing this you will effectively force the game to show black bars around the perimeter of the game screen. There is, however, one problem that comes with that: mouse/touch positions don’t line up. I wrote a function that returns the mouse position for the active game area and ensures that the position is properly scaled. To achieve the black bar (gutter) effect, try the following in your main resize event:
    	    // GameHelper.ScreenWidth/ScreenHeight hold the resolution we want the game to always be displayed in.
    	    // Scaling.fit (com.badlogic.gdx.utils.Scaling) returns the largest size with that aspect ratio that fits the window.
    	    Vector2 size = Scaling.fit.apply(GameHelper.ScreenWidth, GameHelper.ScreenHeight, width, height);
    	    int viewportX = (int)(width - size.x) / 2;
    	    int viewportY = (int)(height - size.y) / 2;
    	    int viewportWidth = (int)size.x;
    	    int viewportHeight = (int)size.y;
    	, viewportY, viewportWidth, viewportHeight);

    This will force your view to always have the proper aspect ratio no matter what. I have also learned that while doing this, and using the Stage object (for menus and UI positioning), you need to update the active size of the Stage as well, otherwise it won’t scale the right way and the stage could end up off the screen.
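The mouse-position fix I mentioned boils down to removing the gutter offset and rescaling into the virtual resolution. A sketch of that (the class and parameter names here are hypothetical, not the actual GameHelper code):

```java
// Hypothetical helper: maps a raw screen touch into virtual-resolution
// coordinates, accounting for the letterbox gutters computed on resize.
public class TouchMapper {
    public static float[] toGameSpace(int touchX, int touchY,
                                      int viewportX, int viewportY,
                                      int viewportWidth, int viewportHeight,
                                      float virtualWidth, float virtualHeight) {
        // Remove the gutter offset, then scale into the virtual resolution.
        float gameX = (touchX - viewportX) * (virtualWidth / viewportWidth);
        float gameY = (touchY - viewportY) * (virtualHeight / viewportHeight);
        return new float[] { gameX, gameY };
    }
}
```

A touch inside the left gutter comes out negative, which is a handy way to ignore clicks outside the active game area.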

While LibGDX does abstract away a lot of the native code, keeping you from working directly in pure Android, it provides a very solid building block to start from. Getting started with LibGDX is really easy and straightforward. It’s also cross-platform, targeting iOS, Desktop, and HTML5. I don’t know how the compilation schemes work, but I’m guessing they’re fairly optimized.

Grey: The Lost Technology

About the Game

Grey: The Lost Technology is the first project for Team Aurora, the independent game studio a few friends and I co-founded. Grey was successfully Kickstarted in May of 2012. We were able to travel to PAX East, meet a lot of fellow developers, and meet some really cool people. The game is an action RPG that explores the possibility of an overpopulated Earth and the humans who leave to find a new planet to call home.


The game is programmed in C# using the XNA Framework. The game does not use external libraries, and instead uses the game engine we have programmed called “Aurora2D”.


The game is primarily Quest/Narrative driven. What this means is that you need to speak to someone, who will give you a quest. Once this quest is completed you will be rewarded with another quest, or you will be tasked to talk to a person who will give you a quest. The dialogue system is something that I spent some time working on to make sure it functioned just right. Imagine the dialogue mechanic like any RPG or text-based action game: the text shows up in a box and then continues to show up in chunks until a response is required. Each response can link to certain objects in the game. Talking to someone can give you:

  • Quests
  • Money
  • Health
  • Experience
  • Items

Alternatively, it can direct you to more dialogue. Implementing this is primarily about a good structure and design for the dialogue objects themselves. A Talker, when spoken to, evaluates each dialogue that is tied to them. They accept commands and react accordingly.
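As a rough sketch of that structure (in Java for illustration; the actual game is C#, and every class and field name here is invented), a dialogue node carries its text, an optional bundle of rewards, and links onward to follow-up nodes:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only; the real dialogue objects differ in detail.
class Rewards {
    int money, health, experience;
    List<String> items = new ArrayList<>();
    List<String> quests = new ArrayList<>();
}

class DialogueNode {
    String text;
    Rewards rewards;                                  // granted when reached
    List<DialogueNode> responses = new ArrayList<>(); // links to more dialogue
}

class Talker {
    List<DialogueNode> dialogues = new ArrayList<>();

    // When spoken to, a Talker evaluates its dialogues and returns the first
    // that applies (the real evaluation checks game state and commands).
    DialogueNode speak() {
        return dialogues.isEmpty() ? null : dialogues.get(0);
    }
}
```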

Another large contribution of mine is the design and implementation of the map system. A map is designed like so:

  • Map
    • Levels (Can be viewed as subsections of a map)
      • Layers
        • Tiles
      • Group Objects
      • Spawners
      • In-location markers
      • Characters

A map is broken down into levels (read: sections), then a level is broken down to handle each component that is drawable from the tabs in the editor (see below). So the structure is simple: each type of object has its own list, and when the maps are saved, they are serialized in an overridden XmlWrite that structures each object list as a group. However, the most interesting part of the serialization is saving the tiles. When serializing the tiles, there is a dictionary that holds a dictionary, kind of like this: Dictionary<int, Dictionary<int, Tile>>. This can be seen as [x][y] = tile. You might be thinking that it should be stored as an array, but when saving an array it saves all the empty positions, and we don’t want that. So I wrote a loop that writes out each tile within an <X> element and a <Y> element. The <Y> element holds the tile position in the texture, and also holds an index to the texture that it uses. The ending result looks kind of like this:
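Roughly, with illustrative attribute names (only the <X>/<Y> nesting and the per-tile texture index come from the scheme described above), a saved tile layer has this shape:

```xml
<Tiles>
  <X Index="4">
    <!-- one <Y> per occupied cell in column 4; empty cells are never written -->
    <Y Index="9" TilePos="12" TextureIndex="0" />
    <Y Index="10" TilePos="13" TextureIndex="0" />
  </X>
  <X Index="5">
    <Y Index="9" TilePos="14" TextureIndex="1" />
  </X>
</Tiles>
```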


This saves us a tremendous amount of space and is a lifesaver in terms of distribution of the final product (because there are fewer files to transfer, it’s much easier to share). To read more about my theory and logic behind this serialization concept, check it out here: Problems of Storing Maps


Since we created the game with a framework and did not use an existing game engine, we needed to create a game editor. The game editor is something of my own creation, with the exception of the Navigation Mesh Editor written by our other programmer, Jacob Jackson. The editor is an XNA project that is supplemented by Windows Forms. When the game launches, it creates the Control Panel that the designers use to manage game assets and map textures.

Control Panel

Grey Editor: Control Panel

The control panel offers all the flexibility and options that a game designer needs to fill in a level and create everything that is going to be in the level. On the surface you can see that you have the ability to manage:

  • Texture Placement/Selection
    • Animated
    • Static
  • Objects
    • Map Links
    • Characters
    • Enemy Spawners
    • Location Marker/Zones
  • Particles
  • In-Game Items

By adjusting the Edit Mode, you will have access to different functionality offered by the editor. For example, switching from TILE (which allows you to place all of the items offered in the tabs) to PATROL_EDITOR, you will be able to draw patrol paths and customize them. There is also an Edit Mode for drawing the Navigation Mesh on a map. Lastly, there is another edit mode for drawing world-space collision geometry.

The Draw Mode combo box allows a designer to change between:

  • Drawing – Draw an object in the scene
  • Deleting – Delete an object in the scene
  • Editing – If drawing collision, use to add vertices to a shape
  • Transforming – If drawing collision, use to grab and move vertices of a shape


Collision Editor

Some of the coolest parts of the editor are in the Tools Menu. The Tools Menu provides a designer with the ability to add collision geometry to each and every character model/animation that exists in the game. Using the Collision Editor is simple: a list is populated with all the animations from the spritesheets, the selected spritesheet is drawn, and all that is required is to click and drag to draw a collision box. Rotating the collision box is done by holding space after you’ve dragged the box out to its appropriate size.

Data Editor

The Data Editor gets into the nitty gritty of the actual game itself and allows the designer to

  • Create all items in the game
  • Create all the quests of the game
  • Create magic effects
  • Alter “New Game” configuration options

The quest creation system is a portion of the editor that I had a lot of fun working on, and I was able to really break down a solid questing architecture. The quest system is broken down like so:

  • Quest
    • Quest Steps
      • Continue Requirements
      • Failure Conditions
      • Rewards
      • Items to Remove
    • Rewards
    • Items to Remove

A quest must have steps to be completable. Think of a game like The Elder Scrolls: Skyrim, where a quest often has a few parts that must be completed to achieve the overall goal. To evaluate each of the Continue Requirements, I hook events for each running quest; when anything in the game happens and fires an event, each quest you are working on is updated concurrently. This event-driven implementation is the core of the quest system. As a supplement, I created what I like to call an action list. The action list is a rolling list of every action that our hero, Oren, performs in the game; whether it’s killing an enemy, clearing a spawner, or changing maps, it all gets recorded as separate actions. This solves the problem of a player exploring the map and killing a boss or character that spawns only once in a game, then later receiving the quest that tells you to defeat him and not being able to complete it. A quest checks the action list in parallel with the events that it handles.
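The action-list idea can be sketched minimally like this (Java for illustration; the real engine is C#, and the action-string encoding is invented):

```java
import java.util.ArrayList;
import java.util.List;

// Rolling history of everything the hero has done, in order.
class ActionList {
    private final List<String> actions = new ArrayList<>();

    void record(String action) { actions.add(action); }

    boolean hasHappened(String action) { return actions.contains(action); }
}

class QuestStep {
    final String requirement; // e.g. "kill:IceBoss" (encoding is made up)
    boolean complete;

    QuestStep(String requirement) { this.requirement = requirement; }

    // Called when a live event fires, and again against the action history
    // when the quest is first accepted, so a boss killed earlier still counts.
    void evaluate(ActionList history) {
        if (history.hasHappened(requirement)) complete = true;
    }
}
```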

The Item Editor is also really cool: to build it I implemented a class parser using C# reflection to populate the editor. A main recursive function takes the object, uses reflection to evaluate each public property, and generates the UI component that best fits the type of the property it is evaluating.
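The idea translates directly to any language with reflection. Here is a rough Java analogue of that C# trick, walking an object's public fields and choosing a widget per type; the widget names and the `Item` class are made up for illustration:

```java
import java.lang.reflect.Field;
import java.util.LinkedHashMap;
import java.util.Map;

// Rough sketch of reflection-driven editor population: map each public field
// to the editor widget that best fits its type.
public class EditorPopulator {
    public static Map<String, String> componentsFor(Object obj) {
        Map<String, String> components = new LinkedHashMap<>();
        for (Field field : obj.getClass().getFields()) {
            Class<?> type = field.getType();
            String widget;
            if (type == int.class || type == float.class) {
                widget = "NumericField";   // spinner for numbers
            } else if (type == boolean.class) {
                widget = "Checkbox";       // toggle for flags
            } else if (type == String.class) {
                widget = "TextField";      // free text entry
            } else {
                widget = "NestedForm";     // recurse into complex types
            }
            components.put(field.getName(), widget);
        }
        return components;
    }

    // Example item definition, standing in for the game's real item class.
    public static class Item {
        public String name = "Sword";
        public int damage = 5;
        public boolean questItem = false;
    }
}
```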


The engine is a set of utility classes and objects that offers a full suite of functionality and is easily pluggable into different situations. Some of the tools offered include:

  • Texture Parsing for Animations
    • Texture Atlases
    • Grouped Textures
    • Single sprite sheets
  • XML Serialization
  • Quad Trees
  • Separating Axis Theorem (SAT) Collision Detection
  • Baseline Menu Creation
    • Including Buttons and Sliders
  • Lots of Extensions for Primitive Classes
  • Input Management
  • Resolution Management
  • Event Management
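One of the utilities listed above, SAT collision detection, is worth a quick sketch. This is a minimal, generic implementation of the Separating Axis Theorem for convex polygons, not the engine's actual code:

```java
// Minimal Separating Axis Theorem check for convex polygons.
// Polygons are arrays of {x, y} vertex pairs in winding order.
public class Sat {
    public static boolean overlaps(float[][] a, float[][] b) {
        // If neither polygon's edge normals yield a separating axis, they overlap.
        return !hasSeparatingAxis(a, b) && !hasSeparatingAxis(b, a);
    }

    private static boolean hasSeparatingAxis(float[][] a, float[][] b) {
        for (int i = 0; i < a.length; i++) {
            float[] p = a[i];
            float[] q = a[(i + 1) % a.length];
            // Perpendicular to edge p->q is a candidate separating axis.
            float axisX = -(q[1] - p[1]);
            float axisY = q[0] - p[0];
            float[] ra = project(a, axisX, axisY);
            float[] rb = project(b, axisX, axisY);
            if (ra[1] < rb[0] || rb[1] < ra[0]) {
                return true; // gap on this axis: the polygons are separated
            }
        }
        return false;
    }

    /** Projects every vertex onto the axis; returns {min, max}. */
    private static float[] project(float[][] poly, float axisX, float axisY) {
        float min = Float.POSITIVE_INFINITY, max = Float.NEGATIVE_INFINITY;
        for (float[] v : poly) {
            float d = v[0] * axisX + v[1] * axisY;
            min = Math.min(min, d);
            max = Math.max(max, d);
        }
        return new float[] { min, max };
    }
}
```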

The library is fully featured and provides us with a very solid base to build Grey with.

Graphical Solar System in C++



For Graphics/Game Engine Programming II, our final assignment was to take everything we had learned about graphics programming and lighting models and create a solar system. There were two parts to this project:

  • Joint Based Animation
  • Graphics

The assignment was a team project: I took the graphics side and my partner took the animation side. We were given about 4 weeks to create this, and I wanted to do some really awesome effects and bring the project to the next level.




Real Time Glow

The multiple stars in the scene have a real-time glow applied to them. This effect is based on a rendering technique outlined in GPU Gems: Real-Time Glow. The GPU Gems approach involves multiple rescaling operations on the render texture: scaling it down by small amounts a set number of times, then rescaling it back up, letting the resampling do some of the work of the blur.

That sounded like a costly operation: copying and resizing a texture multiple times every frame. Instead, I opted to scale it down only once, and then back up. This reduced the amount of memory needed for render textures and hardly affected render time. On each render-target resize, we perform a blur to get the glow texture.
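The single downscale/upscale pass can be illustrated on a plain luminance buffer. This is a CPU-side sketch of the idea (the real effect runs on render targets on the GPU), with the buffer layout assumed:

```java
// Sketch of the single downscale -> upscale glow pass described above,
// applied to a grayscale buffer instead of a GPU render target.
public class GlowPass {
    /** Halves the buffer by averaging 2x2 blocks: the one downscale in the text. */
    public static float[][] downscale(float[][] src) {
        int h = src.length / 2, w = src[0].length / 2;
        float[][] dst = new float[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                dst[y][x] = (src[2 * y][2 * x] + src[2 * y][2 * x + 1]
                           + src[2 * y + 1][2 * x] + src[2 * y + 1][2 * x + 1]) / 4.0f;
        return dst;
    }

    /** Scales back up to 2x; the coarse resampling itself contributes to the
     *  blur, which is why fewer resize passes still produce a usable glow. */
    public static float[][] upscale(float[][] src) {
        int h = src.length * 2, w = src[0].length * 2;
        float[][] dst = new float[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                dst[y][x] = src[y / 2][x / 2];
        return dst;
    }
}
```

In the real pipeline a blur shader runs between the two steps, and the result is alpha-blended over the scene.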

The glow texture is a multicolor texture that is alpha-blended on top of the scene to create the effect. It is important to remember to render every non-glowing object into the glow buffer as a blacked-out silhouette, so that those objects can occlude glowing objects behind them. This prevents the glow effect from bleeding over objects in the scene.


The lighting model used in this solar system is a simple Phong implementation. Each shader variant we wanted is available as its own HLSL .fx file: there is a simple .fx file for diffuse plus specular, and one for each other permutation we needed.

You will notice that some of the planets have a lot of texture, in that they look very bumpy or mountainous. I used a height map for this: the displacement is performed in the vertex shader, which samples the height map and offsets each vertex by the amount the map specifies.

Captain Crash



Full Game: [Play at]
Sponsor: ArcadeBomb
Genre: Side-Scroller/Launcher
Release Date: September 25, 2009


Send Captain Crash flying in this awesome side-scrolling launcher game! Look out for obstacles: some of them can send you flying and others will stop you dead. Rack up enough cash to upgrade your cannon, or hook up Captain Crash with some awesome apparel! Shoot for the stars and earn every badge in the game!

Development Details

Captain Crash is a collaboration with Chaz, an amazing artist and game designer. For Captain Crash we wanted to create a throwback to the classic “Kitten Cannon” from our younger years and pay tribute to the game that entertained us so much. Captain Crash was the result.



Captain Crash was developed fully in the Flash IDE. All art and assets were drawn in Flash by Chaz.


Captain Crash is a simple game: it involves a cannon that points at your mouse and a charge amount. We wanted a quick and easy way to launch our character, so we let the user see the charge bar while holding the mouse button down and still move the mouse around. It's hard to tell what the launch vector will be until the charge bar is drawn, which gives a better idea of where Captain Crash will go.

For the collision, there was no reason to reinvent the wheel here, so I opted to use ActionScript 3.0's built-in collision tester, hitTestPoint. Since the game moves rather quickly, performing really precise collision didn't make much sense and was unnecessary. Checking for ground collision is just a simple if statement that flips the velocity with a damping amount (to eventually slow him to a stop).
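The velocity flip with damping can be sketched in a few lines. This is a language-agnostic illustration written in Java rather than ActionScript, and the constants are assumed values, not the game's actual tuning:

```java
// Sketch of the ground-bounce described above: on a ground hit, flip the
// vertical velocity, keep only a fraction of it, and stop below a threshold.
public class Bounce {
    static final float DAMPING = 0.5f; // fraction of speed kept per bounce (assumed)
    static final float REST = 0.5f;    // below this speed, come to rest (assumed)

    /** Returns the new vertical velocity after hitting the ground. */
    public static float bounceVelocity(float vy) {
        float bounced = -vy * DAMPING;             // flip and damp
        return Math.abs(bounced) < REST ? 0f : bounced; // eventually stops him
    }
}
```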


We wanted to make some money off of this game, so we put it on FlashGameLicense and waited to see if we could get any offers for sponsorship. A few people contacted us about the game, but their offers weren't impressive at all. We waited longer, and sure enough, we got an offer that was just right. We ended up making a deal for a primary license of the game to ArcadeBomb.

ArcadeBomb had one request for us: implement their highscore board, which we did, because highscores are awesome! Otherwise, Chaz and I went 50/50 on the initial sponsorship amount, and we retained the ability to sell non-exclusive licenses to other websites.

Crate Crash 2



Full Game: [Play at RIPI]
Genre: Physics/Action
Release Date: March 30, 2013


In Crate Crash 2 the amazing sequel to Crate Crash, use explosions and slingshot-like force to clear all the crates off the screen. Walls and floors around the crates make this more difficult than it sounds. Each level gives you a different configuration of crates and barriers, making you come up with new strategies. Shoot a crate at just the right angle with the perfect amount of force to send it in the right direction and keep it from bouncing back. See how fast you can clear a level, and once you succeed, try to do it again in fewer moves, or move on to the next level.

Screen Shots


Development Details



Crate Crash 2, like Crate Crash, was developed entirely in the Flash IDE with ActionScript 3 and uses the Box2D physics engine.

The game focuses on a simple concept: "Get all the crates off the screen." To do this, you can click anywhere in a level to apply an impulse with a 4-unit radius. The impulse strength falls off linearly with distance: clicking far from a crate applies little force, and a closer click applies a larger force. Alternatively, you can click a crate and drag to create a launch vector, then launch that specific crate.
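The linear falloff is simple to express. This is an illustrative sketch (in Java rather than ActionScript), and `MAX_STRENGTH` is an assumed constant, not the game's real tuning:

```java
// Sketch of the click impulse described above: full strength at the click
// point, falling off linearly to zero at the edge of the 4-unit radius.
public class ClickImpulse {
    static final float RADIUS = 4.0f;        // blast radius, per the text
    static final float MAX_STRENGTH = 10.0f; // assumed maximum impulse

    /** Impulse magnitude for a crate at `distance` units from the click. */
    public static float strengthAt(float distance) {
        if (distance >= RADIUS) return 0f;               // outside the radius: no effect
        return MAX_STRENGTH * (1f - distance / RADIUS);  // linear falloff
    }
}
```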

With this simple mechanic in mind, I was able to design a final 52 levels for the player. The task was made easy by a level editor. The level editor is simple: it allowed me to change the obstacles in a level and place them anywhere I wanted. I could save the level in my own file format and then pass that into the game to be parsed.

The game also features new obstacles such as ropes, exploding barrels, and springs. These add more variety to each level and open opportunities for more unique level design. The game also features an in-game Level Editor.


The level editor is fully featured, with all the tools that would be expected of a usable level editor, such as:

  • Object placement, deletion, translation, and duplication
  • Control points for scaling objects
  • Ability to adjust initial properties (rotation speed, etc)
  • Saving and Loading
  • Submitting the levels to the developer

Post Mortem:

Crate Crash 2 & The Future of Crate Crash

Crate Crash



Full Game: [Play at The Orange Day]
Genre: Physics/Action
Start Date: August 27, 2009
Release Date: February 08, 2010


Welcome to CRATE Crash! With over 70 levels, your goal is to EXPLODE all the crates off the screen by applying impulses near the crates and other objects. Press R to restart a level, and right-click to disable the audio and background.

Good Luck!
Danish translation thanks to Frederik Hermund.

Development Details



Crate Crash was developed entirely in the Flash IDE with ActionScript 3 and uses the Box2D physics engine.

The game focuses on a simple concept: "Get all the crates off the screen." To do this, you can click anywhere in a level to apply an impulse with a 4-unit radius. The impulse strength falls off linearly with distance: clicking far from a crate applies little force, and a closer click applies a larger force.

With this simple mechanic in mind, I was able to design a final 72 levels for the player. The task was made easy by a level editor. The level editor is simple: it allowed me to change the obstacles in a level and place them anywhere I wanted. I could save the level in my own file format and then pass that into the game to be parsed.


The game itself has received a ton of attention, getting over 3.2 million plays worldwide. Crate Crash has received attention from large companies such as BigFish Games and King.

Senior Production: Game Over Feedback

Believe it or not, we never implemented an in-game way of showing which team won when the game is over. This has created some confusion in the QA labs, because when the game finishes, nothing happens. You can imagine that people get confused when it appears there are no more enemies left but they can still run around.

I have spent this week working on getting a win/lose screen to show up when the game is over. To accomplish this we need to:

  1. Check whether only one team has lives remaining
  2. Check whether there are existing players
  3. Ensure that the remaining players are all on the same team

The first step is pretty easy: I send an RPC to the server, and the server checks the remaining-lives array. Checking the players is not as easy, though. When the server creates a player, we store the data in a structure that holds the player's team color and the player's NetworkPlayer. This last part is key, because we can then search all game objects that are networked players and see whether their NetworkView contains this stored player. We can filter by team because the structure contains the team color.

With this information on hand, I was able to find all existing players and update a counter for each team, counting all of that team's players still in the game. I then check these final counts, and if exactly one team has a count greater than zero, that team wins.
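The counting logic itself is small. Here is a sketch in Java (the project used Unity/C#, so this is an illustration of the logic, not the actual code), assuming a two-team "red"/"blue" setup:

```java
import java.util.List;

// Sketch of the game-over check described above: count surviving players per
// team and declare a winner only when exactly one team still has players.
public class GameOverCheck {
    /** teams holds the team color of each surviving player.
     *  Returns the winning team, or null if the game is not over yet. */
    public static String winner(List<String> teams) {
        int red = 0, blue = 0;
        for (String t : teams) {
            if (t.equals("red")) red++;
            else if (t.equals("blue")) blue++;
        }
        if (red > 0 && blue == 0) return "red";
        if (blue > 0 && red == 0) return "blue";
        return null; // both teams alive (or nobody tracked): keep playing
    }
}
```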

To each NetworkView connected to the server, I send out the team that won, and each client then handles which image to show on the screen.

Win Screen
Lose Screen. See that Pun?

I did encounter a problem with tracking the players. I check whether the game is done every time a player is supposed to be removed from the game (because they died or were vacuumed up). This is done by sending an RPC to the server telling it that a player died and that it should check for game over. The problem is that the RPC gets sent before the player is actually removed. To counter this, I created a coroutine that waits 2 seconds before sending the RPC; this way the player is removed before the server checks for game over.

An Introduction to Leap Motion in Java

I just got my hands on a Leap Motion hand-tracking device, and it's pretty neat. For those of you who don't know what it is, it's a small sensor that can track hand and finger movement. It's very small, measuring 3 inches wide, 1.2 inches deep, and 0.5 inches tall, and weighing in at about 0.1 lbs. The device itself is very simple: a rubberized bottom, an aluminum chassis, and a tinted plastic top. It uses 2 IR cameras and 3 IR LEDs that rapidly emit light; the cameras capture the reflections, and the data is sent back to the computer. From these calculations the device can identify hand orientation and fingertip position and orientation, and it can track both hands and all 10 fingers.

Application Development

Developing for the device is very well documented on their website. You have a wide array of languages to develop in, from C++ to web-based JavaScript. Each language has a "Hello, World" example documentation page that details both of the main methods for gathering what is referred to as "Frame" data. As far as the API documentation goes, it's very detailed and easily accessible.

Leap Motion has their own application distribution platform and test bed known as “Airspace.” This application is installed with the drivers from the website and provides you a simple introduction to the capabilities of the Leap Motion device. The applications available include a tech demonstration, a sculpting application, and more.

The community support for the Leap Motion is very active. The community forums hosted on the official website see frequent activity, and the questions that are asked almost always get an answer. In terms of external support, though, blogs and tutorials were few and far between. I found developers who mentioned that they implemented Leap Motion into their game, but they never got into the details or their thought process. One of the problems that developers face today with the device is figuring out how it's actually best used.

Current Applications

The website promotes many uses of the device, ranging from shooting games to sculpting, drawing, and medical applications. Leap Motion Inc. has also signed contracts with ASUS and HP to roll out products with the Leap Motion device built into laptops and keyboards. The most notable application of the Leap Motion is an MRI image viewer, where a doctor can scroll through MRI sequences without having to remove their gloves or touch anything.

Leap Motion for Java

I wanted to become familiar with what the device has to offer, test the capabilities of the hardware, and identify the discrepancies in tracking precision. Through my tests I was able to do all of that. I decided to use the Java API to set up the Leap Motion.

I wanted to make a simple 2D physics game that allowed you to perform a circle gesture to create spheres and then make a fist to smack the spheres around. The purpose was to test the capabilities of the Leap Motion device, especially when attempting to distinguish a closed fist from an open hand, as well as its individual finger tracking.

I decided to use LibGDX as my Java library because it's very easy to set up, the physics library is there, and the rendering pipeline is solid. The documentation on the Leap Motion website was very easy to follow and straightforward. I do want to talk about the two common methods of gathering Frame data from the Leap device. First, a "Frame" is a motion snapshot from the device in which you have access to any hands in the viewport, fingers, gestures made, and so on. The API also provides access to previous snapshots up to a given count, so manually comparing the last few positions to find a trend or detect a velocity is easy (although functions that provide these are built in). The first method is to continuously check for frame data in a consistent update loop; alternatively, you can create a class that inherits from the Listener class and add it to the controller's listeners. Once that is done, the Listener's onFrame event is called whenever new frame information is available.

I went with the first method of data gathering because it was easy to implement and because the update loop is well defined in the game engine. I tracked the fingers the device could see by checking the frame's finger list, then rendered a circle wherever each finger was being tracked.

		if (frame.fingers().count() > 0) {
			for (int g = 0; g < frame.fingers().count(); g++) {
				Finger finger = frame.fingers().get(g);
				Vector position = finger.tipPosition();
				Vector2 scalePosition = scaleLeapPosition(position);

				// Empirical offsets to line the Leap's coordinate space up with the viewport
				float offsetX = -80.0f;
				float offsetY = -100.0f;
				sprite.setPosition(scalePosition.x + camera.viewportWidth / 2.0f + offsetX,
						scalePosition.y + offsetY);
				sprite.draw(batch); // draw the tracking circle at this fingertip
			}
		}

I also polled the gesture list to check whether any gestures existed in a frame, and if I found a circle gesture I created a circle physics object in the scene.

		if (frame.gestures().count() > 0 && canGenerateShape) {
			for (int g = 0; g < frame.gestures().count(); g++) {
				boolean createdOne = false;
				switch (frame.gestures().get(g).type()) {
					case TYPE_CIRCLE:
						// Handle circle gestures
						CircleGesture gesture = new CircleGesture(frame.gestures().get(g));

						// Get the center of the traced circle
						Vector circleCenter = gesture.center();

						// Scale to accommodate the current screen size
						Vector2 scalePosition = scaleLeapPosition(circleCenter);
						float radius = gesture.radius();
						float rawX = scalePosition.x;
						float rawY = scalePosition.y;

						float x = rawX + camera.viewportWidth / 2.0f;
						float y = rawY;

						// Create a dynamic circle body and a matching sprite
						Body temp = PhysicsObjectFactory.CreateCircle(x / BOX_TO_WORLD, y / BOX_TO_WORLD, radius / BOX_TO_WORLD, BodyType.DynamicBody);
						Sprite newPile = new Sprite(pileTexture);
						newPile.setOrigin(newPile.getWidth() / 2.0f, newPile.getHeight() / 2.0f);
						newPile.setScale(radius / 128.0f * 2.0f);

						canGenerateShape = false;

						// Flag so we can break out after creating one shape
						createdOne = true;
						break;
					default:
						break;
				}

				if (createdOne) {
					break;
				}
			}
		}

		// Wait before generating another shape
		if (!canGenerateShape) {
			tempTime += 1.0f / 60.0f;
			if (tempTime >= generationTimeMax) {
				tempTime = 0.0f;
				canGenerateShape = true;
			}
		}

Since the application can change resolution and I track positions explicitly, at larger game resolutions the finger tracking was confined to a small portion of the screen. To fix this, I scale the Leap finger position into coordinates that reflect our resolution.

	LEAPSCALEX = camera.viewportWidth / 480.0f;
	LEAPSCALEY = camera.viewportHeight / 320.0f;

	public Vector2 scaleLeapPosition(Vector leapPosition) {
		Vector2 newPosition = new Vector2();

		newPosition.x = leapPosition.getX() * LEAPSCALEX;
		newPosition.y = leapPosition.getY() * LEAPSCALEY;

		return newPosition;
	}


The finger tracking felt a little wonky at times, and sometimes the finger circles would flicker on the screen. I also felt there were problems with the tracking itself; at the extremes of the field of view the finger tracking was inconsistent. I think the device would benefit from wider-angle tracking cameras, which would allow a wider range of applications and fewer problems with finicky tracking at the edges.

Gesture recognition capabilities are limited. If you need to detect gestures that are not circles or swipes, you need to look into alternative or external libraries that can parse a list of points. I had originally planned on implementing a square recognition algorithm, but the actual algorithms are pretty complicated and I wanted to focus on the out-of-the-box options that the Leap API has to offer.

Hand pose recognition (specifically a fist) is not explicitly implemented, but there are options for detecting one. You have access to the detected hands and their palm positions; when checking for a fist, look for active hands that do not have any fingers in their finger lists. I did experience problems with this route because the device would at times "see" a spurious finger in the scene.

Competitors and Similar Devices

A few other devices exist that compete with the Leap Motion and may offer more. One notable competitor is Haptix, which claims it can turn any surface into a touch surface. Its primary use is tracking objects and hands in space above a solid surface. Its most powerful and attractive feature is that it works in any orientation; it does not have to sit on a flat surface facing upwards, so you are not limited in your development environments.

Of course, there is the Kinect with the Xbox One, a tracking device that can track entire bodies, hands, fingers, heart rate, and more. The Kinect is a fully developed tracking device used primarily for body tracking. It shares a limitation with the Leap Motion in that it must be mounted somewhere before use.


I think the technology is not where it needs to be yet, but it's definitely something society needs to become more comfortable with as part of what the future has to offer. What's good about the Leap is that it's NOT a bad device, and the impression it leaves on the public is generally a positive one. With more precise tracking hardware or algorithms, it would be better still. On a personal note, and in recognition of the importance of tactile feedback, it is a very weird device to use simply because your hands are in the air the entire time and you are not touching anything, so nothing stimulates that extra sense. Overall, it's a really cool device, and I think with enough community support a lot of really great applications could come from it.