Origins of a Game Developer

Growing up in Kentucky, I fell in love with video games, one coin at a time, in arcades. There were so many great games that inspired me. Early on it was Space Invaders, Asteroids and Galaxian. I was hooked by the relentless pursuit of a new high score.

I turned over my first million points on Galaga, a game that would become a favorite, not too long after its arrival in our little town.

I was so into arcade games that, after being hired to paint the logo of our new local arcade (that’s me, on the left), every quarter earned from the job found its way back to the arcade within weeks. Of course, the owner of the arcade already knew that’s what would happen.

That jet-fueled teenage competitive streak transformed into a much richer gaming experience by way of D&D. I was already lost to story and plot, improvisation and character when, wet behind my elf ears, I started college. I majored in art and theatre, but spent more time playing D&D and fiddling with computers and coding than in class or acting in plays. I enjoyed being a DM — but I absolutely loved battling wits with other DMs as a player.

I began to think about how I might blend the excitement of twitchy arcade mechanics with something like world-building in D&D, and that deceptively simple thought was what began a life-long relationship with creating and programming video games, starting with the humble TRS-80. The TRS-80, by today’s standards, was an exceedingly modest machine, and perhaps not all that well-suited to video games. But I managed to pound out my first adventure game on it, complete with battle mechanics and stat bonuses, in BASIC. The programming constraints were unimaginable by today’s standards.

I left college with a year to go, dreaming of working in video games. I moved to Indianapolis, Indiana and looked for companies making games, a futile quest at the time. I decided to keep making games on my own while working various other odd jobs for a few years, then went to college again, this time to study mathematics and creative writing. It was an unusual combo — I’m pretty sure I was the only student there doing it, and my advisers didn’t quite know what to think. I’d been a math and puzzle geek since my first algebra class in 7th grade, which was profound and revelatory, almost a religious experience; on the other hand I loved to write, mostly short fiction, and had some talent for it.

I TA’d my last two semesters, including a new calculus class where the professor would use Mathematica in the lab four days a week, while I would work through problems with chalk in hand every Friday from 9 am to noon. I thought it would be a cakewalk — who would want to work problems on the board on a Friday morning? I couldn’t have been more wrong — not only did most of the class show up, half the students from the same course in two other time slots started coming. It turned out that learning advanced calculus on a computer was not an easy thing, and the prof was so focused on the shiny goodness of graphing and playing with equations in software that there was never any time to practice.

After graduation I took a job at one of the biggest tech-like companies in Indy, Macmillan Publishing, developing reference books on programming, networking, and the newest technology on the block, the Internet.

I kept tinkering with games and graphics in my spare time, mainly on Windows. I can’t tell you the number of little games I wrote and programmed, though in all honesty most of them were more like tech demos. But with each new idea, OS version and language/compiler iteration (mostly C), I became more and more interested in graphics, eventually spending as much or more time finding ways to optimize rendering as programming game logic. I became obsessively interested in tools and 3D authoring and rendering, including a brief descent into the magnificent rabbit hole that was the Amiga (which, by then, was no longer even a supported platform).

At work, I moved up, and found myself producing video games in Macmillan’s small software division. We did mid-range and value PC titles and add-ons, and we had some real hits (and plenty of duds, too). I worked on some of the first early 3D games for the lower end of the PC market. I went to my first GDC, my first E3, then back again each year with a half-dozen programming/platform conferences in between. I got to meet stars like John Carmack and Sid Meier. I began to understand how the industry was evolving, what players valued, the different genres, game mechanics, gameplay.

I also played a ton of games on PC and consoles, and from Meridian 59 onward, was hooked on MMOs. A lot was going on both in gaming and with the Internet. Macmillan was willing to take some chances on new business models, and I was in the right place at the right time. We started a new business for distributing add-on levels for popular PC games; RealmX was a highly ambitious, very early attempt at a form of DLC, something now commonplace, but it failed spectacularly. We then created an even more ambitious web product called InformIT, which was arguably the first online collection of professional technical books on the Internet, including books on games. It survives to this day. I’ll never forget the weeks it took us to finalize the first cut of the data model. By the time we were finished, we had hundreds of sheets of whiteboard paper wrapping every wall in the office.

But by then it was 1999, and I was ready to up my game.

My first job in Silicon Valley had nothing to do with games, but it was a foot in the door. I was hired to help relaunch a large hotel reservations website, both the content system and the server framework. Back then there was no Google infrastructure or AWS like there is today — you had to roll your own on top of other, relatively nascent, software. One of the most important things we did was switch the back-end from Microsoft’s IIS to Apache — a decision prompted by the absurdity of employing two full-time, on-call engineers whose primary job was to reboot the server every four hours.

In six months we were done, and by that time I had turned back toward gaming, to a startup in Mountain View. Staccato Systems developed an audio subsystem for the PC that replaced a $27 wavetable chip on sound cards and also was used to create unique audio effects in PC games. I came in with a focus on helping the games side of the business and wound up coding applications to make the core technology accessible and usable by game developers including EA, Lucas and a few others. It was remarkable tech — physically-modeled, logically-controlled audio at a granular level. The engineers I worked with there were absolute geniuses (and there was no shortage of egos), although it never failed to amuse me that, at the end of the day, they were mostly hard-working hackers, like most people who do anything authentically novel in software. Staccato’s technology was first licensed by AC97 Codec manufacturers SigmaTel and then Analog Devices. It was sold to Analog Devices for $30M in 2001.

Around the same time as the acquisition, a whole new game market was starting to make waves — mobile games. I started programming feature phone games and eventually moved into smartphones, around 2007 when the iPhone landed. Companies I worked for, and helped lead, won awards. We brought dozens of titles to market, including high-profile mobile games like Guitar Hero Mobile, Duke Nukem Mobile and Prey Invasion. I started to get a little recognition. I spoke at GDC a couple of times. I was a gameplay programmer, a senior software engineer and engine architect, then a VP of Production, then a CTO. Through it all I was continually amazed by the talent and dedication in the industry, an industry that was going places it had never been!

These days I’m still working on games and tools, but I get to hop around a bit more from project to project. Not long ago I helped bring a wonderful children’s game to Unity/HTML5 and before that spent over a year working on a mobile casino game, right after a couple of years engineering a large framework for performing, essentially, extensive mobile CAD functions in Unity.

There’s almost always something new and exciting to do (right now it’s VR/AR/MR/XR — yes, the acronyms never end!), though there’s nothing like a great new stealth project, or prototype, or a new take on an old shader, or a fresh API. So much to do, so little time! I’m still in love, and I’m comforted by the thought that my best game projects are ahead of me.

Great Commodities

I’ve been reading Paul Graham’s essays for many years and almost always find something insightful. His latest post, Let the Other 95% of Great Programmers In, is no exception.

However, more great programmers will not help Silicon Valley.

Most US companies are built on a strongly-typed hierarchy whose evolutionary path is entropic and bureaucratic. This means shallow leadership, ineffective hiring practices and an inability to identify and reward greatness. A programmer cannot be a commodity if his or her value depends on this cluster-fuckery, and inside it a great programmer is indistinguishable from a mediocre one.

I wish that Graham didn’t think of programmers as commodities to begin with. Maybe he doesn’t, but I don’t know how he could have written the essay otherwise.

Dear Spaghetti Coder

Dear Spaghetti Coder,

I grasp that you can’t be bothered with declaring your allegiance, once and for all, to a particular brace style. I know you like to “mix it up”.

I understand your need to never delete anything and instead leave in long blocks of old, unusable, commented-out code. You never know when you might need it.

I realize that there is never time to make real comments in your code, particularly anywhere near your numerous long, difficult switch cases. It’s not your fault you had to hard-code all those strings.

I know you must be clever since you use so many inexplicable, often funny, variable names. You’re such a show-off.

I see that you keep re-writing the same unoptimized, two-to-four-banger nested loop functions over and over again instead of wrapping them all into a single, elegant function. You like to flex your muscles.

I’m sorry that you’ve been hurt bad by the Tab key. You’re appropriately working out this issue in your code instead of paying for expensive therapy.

And I grok your single-minded desire to arrange your code in such an arbitrary fashion that we get to Treasure-Hunt our way through your gamified tome. It must give you endless hours of DungeonMaster-like pleasure.

Sincerely,
He Who Must Fix All Your Crap And Make It Actually Work

P.S. I think you should consider a move into management. No, really.

Painting to Texture on iOS

Draw Something has continued to do very well in the App Store and now we’re seeing more derivatives — apps and games with basic painting and sketching capabilities. Last weekend I had some fun playing around with a basic painting setup, just to see how much brush (pun intended) I’d have to clear to get to the painting picnic.

On iOS, there are really only a couple of ways to implement a painting app — Quartz or OpenGL ES (there’s a nice little walk-through using Quartz here, and Apple put together a cool little OGL example called GLPaint here).

It should be relatively clear that the OGL approach is cleaner, more flexible and a bit faster. But while GLPaint is a nice place to start, it’s not very app- or engine-friendly in the context of a full-fledged OGL app. Since the point of GLPaint’s example code is to demonstrate how to do basic painting, it has no need to consider the rest of your OGL surface’s render loop, nor does it concern itself with other important engine pieces, like your OGL redundancy state checker, sorting and synchronization issues between render and update calls, and most important of all, clearing the buffer.

That last bit really is most important because a nicely-performing painting app should never clear the buffer. Doing so will quickly slow things down to a slide-show. This should be obvious: In order to paint to the screen — whether you’re using GL_POINT_SPRITE_OES or rolling your own quads — you’ll need to draw a ton of sprites on-screen to get a continuous line of color and/or texture. If you clear every frame, you have to re-draw every frame, and voila, you’ll have molasses in less time than it takes to launch the simulator. If you don’t clear, you’re only drawing a handful of new sprites each frame.

The GLPaint example does exactly this — it doesn’t clear the buffer. However, in a real-world app you must clear every frame in order for anything else — GUI elements/textures, mesh rendering, camera changes, etc. — to work. Hence the conundrum: you need a nice, normal clear/render loop, but you also need a render-only call each time you want to paint.

Luckily there’s a straightforward solution: paint to a texture, then render that texture in your normal render loop. And setting up a texture to paint to is easy: just attach it to an FBO. For example, a full-screen texture buffer:

- (GLenum) CreateRenderTexture
{
    // note: bounds are in points; multiply by the view's
    // contentScaleFactor if you want full retina resolution
    m_texW = [[UIScreen mainScreen] bounds].size.width;
    m_texH = [[UIScreen mainScreen] bounds].size.height;

    // create and bind an FBO for off-screen rendering
    glGenFramebuffers(1, &m_texFrameBuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, m_texFrameBuffer);

    // allocate an empty, screen-sized RGBA texture...
    glGenTextures(1, &m_texTexture);
    glBindTexture(GL_TEXTURE_2D, m_texTexture);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, m_texW, m_texH,
        0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    // ...and attach it as the FBO's color buffer
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
        GL_TEXTURE_2D, m_texTexture, 0);

    return glCheckFramebufferStatus(GL_FRAMEBUFFER);
}
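
One caveat: the only status you want back from CreateRenderTexture is GL_FRAMEBUFFER_COMPLETE, and once the texture FBO exists you’ll want to rebind your main framebuffer before the normal render loop continues. A minimal sketch of the caller (m_viewFrameBuffer is a hypothetical name, standing in for whatever your engine calls its on-screen FBO):

if ([self CreateRenderTexture] != GL_FRAMEBUFFER_COMPLETE)
{
    NSLog(@"paint texture FBO incomplete");
}
// hand rendering back to the on-screen framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, m_viewFrameBuffer);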

From there it’s simply a matter of drawing to the texture and only clearing the texture when you need to, e.g., by calling a function like this:

- (void) StartTextureRender:(BOOL)clear color:(COLOR)color
{
    glBindFramebuffer(GL_FRAMEBUFFER, m_texFrameBuffer);
    if (clear)
    {
        glClearColor(color.r, color.g, color.b, color.a);
        glClear(GL_COLOR_BUFFER_BIT);
    }
    glViewport(0, 0, m_texW, m_texH);
    // setup ortho matrix, render and client states here...
}
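
The Render method further down calls [Engine RenderToTexture], which I don’t show in full here. The idea, presumably, is simple: rebind the main framebuffer and draw the paint texture as a full-screen quad every frame. A rough sketch under fixed-function GLES 1.x, assuming the engine has already restored the on-screen FBO and set identity/ortho matrices:

- (void) RenderToTexture
{
    // full-screen quad in normalized device coordinates
    static const GLfloat quad[] = { -1, -1,   1, -1,   -1, 1,   1, 1 };
    static const GLfloat uvs[]  = {  0,  0,   1,  0,    0, 1,   1, 1 };

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, m_texTexture);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, quad);
    glTexCoordPointer(2, GL_FLOAT, 0, uvs);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}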

One of the cool things about Draw Something is that it records and replays your drawing. This is relatively straightforward functionality to implement, and GLPaint kinda-sorta does it as a nice bonus. However, their implementation is on the oddball side, a bit shy of readable, that-makes-sense-to-me production code. A clearer way to implement it is to do a standard 2D lerp between the current touch (as you move your finger on the screen) and the last touch, then record the time between finger-down and finger-up for later playback. For instance:

- (void) Draw:(float)x y:(float)y
{
    if (numVerts == MAX_PATH_VERTS) return;
    end         = Vec2(x, y);
    dist        = Vec2Dist(start, end);
    num         = (int)dist;
    if (num > 0)
    {
        startVert   = numVerts;
        numVerts    = MinInt(numVerts + num, MAX_PATH_VERTS); // clamp, don't overrun the vert array
        num         = numVerts - startVert; // verts actually added after clamping
        for (int i = startVert; i < numVerts; i++)
        {
            // one vert per pixel of distance between the two touches
            Vec2Lerp(&verts[i], start, end, (i - startVert) / dist);
        }
        time += [Engine GetTimeElapsed];
    }
    start = end;
    [self DrawRender];
}
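
A note on the spacing: num = (int)dist gives you roughly one point sprite per pixel of finger travel, which is what keeps the stroke looking continuous; space the sprites much wider than the brush size and you get a string of beads instead of a line.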

Below is the entire class (note that a few of the types — VEC2, COLOR — are structs defined elsewhere in the engine, but you get the gist).

// INTERFACE
#define MAX_PATH_VERTS  20000
@interface Path : NSObject
{
@public
    VEC2            verts[MAX_PATH_VERTS];
    int             numVerts;
    int             startVert;
    VEC2            start;
    VEC2            end;
    VEC2            cur;
    Texture*        texture;
    COLOR           color;
    float           size;
    float           tick;
    float           time;
    int             num;
    float           dist;
    BOOL            replaying;
    int             vertCount;
    int             curVert;
    int             endVert;
}
@property (nonatomic, readwrite) BOOL replaying;
- (id)   initWithColorTextureSize:(COLOR)c texture:(Texture*)t size:(float)s;
- (void) DrawStart:(float)x y:(float)y;
- (void) Draw:(float)x y:(float)y;
- (BOOL) Replay;
- (void) ReplayStart;
@end

// IMPLEMENTATION
@implementation Path
@synthesize replaying;
- (id) initWithColorTextureSize:(COLOR)c texture:(Texture*)t size:(float)s
{
    if (!(self = [super init])) return nil; // assign, don't compare
    numVerts        = 0;
    texture         = t;
    color           = c;
    size            = s;
    return self;
}
- (void) DrawRender
{
    if (num > 0)
    {
        glEnablePointSprite(GL_TRUE, size);
        glSetTexture(texture.index);
        glSetColor(color.r, color.g, color.b, color.a);
        glSetVertexPointerEx(&verts[0], sizeof(VEC2), 2);
        glDrawArrays(GL_POINTS, startVert, num);
    }
}
- (void) DrawStart:(float)x y:(float)y
{
    if (numVerts == MAX_PATH_VERTS) return;
    verts[0]    = start = end = Vec2(x, y);
    numVerts    = num = 1;
    startVert   = 0;
    time        = 0;
    [self DrawRender];
}
- (void) Draw:(float)x y:(float)y
{
    if (numVerts == MAX_PATH_VERTS) return;
    end         = Vec2(x, y);
    dist        = Vec2Dist(start, end);
    num         = (int)dist;
    if (num > 0)
    {
        startVert   = numVerts;
        numVerts    = MinInt(numVerts + num, MAX_PATH_VERTS); // clamp, don't overrun the vert array
        num         = numVerts - startVert; // verts actually added after clamping
        for (int i = startVert; i < numVerts; i++)
        {
            Vec2Lerp(&verts[i], start, end, (i - startVert) / dist);
        }
        time += [Engine GetTimeElapsed];
    }
    start = end;
    [self DrawRender];
}
- (BOOL) Replay
{
    if (replaying)
    {
        tick    = Min(tick + [Engine GetTimeElapsed], time); // clamp the playback clock to the recorded duration
        curVert = endVert;
        endVert = (int)Min(Lerp(0, numVerts, tick / time), numVerts); // clamp to the recorded verts
        end     = verts[(endVert < numVerts) ? endVert : numVerts - 1]; // guard the final frame
        dist    = Vec2Dist(start, end);
        if (dist > 0)
        {
            // re-distribute this frame's chunk evenly between the last
            // replayed vert and the newest one
            for (int i = curVert; i < endVert; i++)
            {
                Vec2Lerp(&verts[i], start, end, (i - curVert) / dist);
            }
        }
        start = end;
        int count = MaxInt(endVert - curVert, 0); // new verts to draw this frame
        if (count > 0)
        {
            glEnablePointSprite(GL_TRUE, size);
            glSetTexture(texture.index);
            glSetColor(color.r, color.g, color.b, color.a);
            glSetVertexPointerEx(&verts[0], sizeof(VEC2), 2);
            glDrawArrays(GL_POINTS, curVert, count);
        }
        replaying = (endVert != numVerts);
    }
    return replaying;
}
- (void) ReplayStart
{
    curVert   = 0;
    endVert   = 0;
    replaying = (curVert < numVerts);
    if (replaying)
    {
        tick  = 0;
        time  = Max(time, 0.001); // guarantee a non-zero duration for tick / time
        start = verts[0];
        end   = verts[numVerts > 1 ? 1 : 0]; // guard the single-vert case
    }
}
@end

An NSMutableArray of instances of this class is kept by the caller; each instance is born on finger-down (where we set color, brush texture and size) and dies on finger-up. Replay is easy — essentially just a programmatic re-rendering of the previously recorded verts while iterating over the NSMutableArray, driven by a flag in the Render() function. Below is the basic idea.

- (void) TouchDown:(float)x y:(float)y
{
    if (curSize == -1) // we're erasing here
    {
        [Engine StartTextureRender:YES color:curBackColor];
    }
    else
    {
        [Engine StartTextureRender:NO color:curBackColor];
        curPath = [[Path alloc] initWithColorTextureSize:curColor
                                                 texture:curTexture
                                                    size:curSize];
        [paths addObject:curPath];
        [curPath release];
        [curPath DrawStart:x y:y];
    }
}
- (void) TouchMove:(float)x y:(float)y
{
    [Engine StartTextureRender:NO color:curBackColor];
    [curPath Draw:x y:y];
}
- (void) TouchUp:(float)x y:(float)y
{
    curPath = nil;
}
- (void) Render
{
    [Engine RenderToTexture];
    if (replaying)
    {
        [Engine StartTextureRender:NO color:curBackColor];
        if (![curPath Replay])
        {
            curPath = nil;
            for (Path* m in paths) { if (m.replaying) { curPath = m; break; } }
            replaying = (curPath != nil);
        }
    }
}
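
The original leaves kicking off a replay to the caller. A hypothetical sketch (ReplayAll is my name for it, not the engine’s): clear the paint texture once, arm every recorded path, and let Render drive them in order.

- (void) ReplayAll
{
    // clear the paint texture once, then replay path by path
    [Engine StartTextureRender:YES color:curBackColor];
    for (Path* p in paths)
    {
        [p ReplayStart];
    }
    curPath   = ([paths count] > 0) ? [paths objectAtIndex:0] : nil;
    replaying = (curPath != nil);
}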

One of the cool things about using OGL for painting and sketching is that you can very easily change up the brush texture for nice Photoshop-like texture brushes (care should be taken with how you set up the blending, however, due to pre-multiplied alpha on iOS). While it’s possible to do this with Quartz, it’s much easier to grok using OGL. And of course you can do silly/fun stuff like paint a background behind your 3D orc model (maybe there’s a game idea in there somewhere, hmm — ok, maybe not).
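
On that blending note: textures decoded through UIImage/CGImage arrive with premultiplied alpha, so the usual GL_SRC_ALPHA source factor will give you dark fringes around soft brush edges. The two-line fix, assuming your brush textures are premultiplied (on iOS they almost certainly are):

// premultiplied alpha: blend with GL_ONE, not GL_SRC_ALPHA,
// to avoid dark halos around soft brushes
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);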

FAIL: Virtues of a Programmer

In the second edition of Programming Perl, Larry Wall famously outlined the Three Virtues of a Programmer:

1. Laziness — The quality that makes you go to great effort to reduce overall energy expenditure. It makes you write labor-saving programs that other people will find useful, and document what you wrote so you don’t have to answer so many questions about it.

2. Impatience — The anger you feel when the computer is being lazy. This makes you write programs that don’t just react to your needs, but actually anticipate them. Or at least pretend to.

3. Hubris — Excessive pride, the sort of thing Zeus zaps you for. Also the quality that makes you write (and maintain) programs that other people won’t want to say bad things about.

Humor aside, anyone who has seriously slung code for a living connects with the three virtues. Programmers very often do have a wonderful kind of procrastinating, excitable arrogance that enables them to do remarkable things. I like to say, “A programmer is both the laziest and hardest-working person on the planet — nobody works harder to find the easiest way to do something.”

But when it comes to the job of interviewing, the three virtues are dangerous. I hate to think of all the mistakes made by companies that let untrained programmers interview candidates; if you could convert all those missed hires into lost ROI, it would most certainly scare the crap out of the CTO. But the practice is still prevalent, and not just at the usual suspects like Google and Facebook. It’s alive and well at startups, too.

Maybe it’s getting worse. Over the last several months I’ve heard more complaints than usual from engineering friends and associates about their interview experiences. And Glassdoor seems to be full of stupid code problems and bad experiences from interviewees. It’s always the same: a small team of one or more programmers shows up to conduct an interview. They’re a little late, somewhat nervous, and dispense with small talk as if it’s a virus. They’re not into answering questions about their company, can’t talk in detail about what they’re working on, and need to look at their phones whenever possible.

Then it comes: Ye olde academic probleme. It’s a fairly rote, perhaps even difficult, typically algorithmic problem that has little or nothing to do with their day-to-day software engineering. In fact it’s a safe bet that the programmers giving the problem couldn’t work it themselves if they hadn’t recently played with it. To top it off, they want you to code it up on a whiteboard (or worse, over IM/video chat).

It would be laughable if it weren’t so bad for the job seeker — and if it weren’t so bad, ultimately, for the company looking to hire (especially startups).

First off, it’s patently absurd to ask a programmer to code something on a whiteboard. Design? Sure. Program flow? Fine. Basic approach to solving a problem? Okay. But solving an academic problem, on the spot, in pseudocode? Ridiculous. Programming is as much art as science, and programmers need time to think, write, run and debug problems, especially difficult ones. They need to be in the zone — comfortable, engaged, with their tools and libs (and the functions they stopped rewriting years ago that they now simply copy and paste) by their side. This is the case for every programmer I’ve ever worked with, from CS majors to PhDs to the self-taught liberal arts majors, dropouts and homegrowners with no formal education.

Second, the code problems themselves. Just why are they so academic? Other than maybe someone fresh out of college with no substantive personal code base, who among us actually rewrites a sorting algorithm from scratch, or revisits an LRU cache problem for fun, or proudly displays our awesome linked-list skills, or spends our evenings re-hashing time complexity for that old Connect Four or Game of Life problem from school? (Note: I’m not suggesting that time complexity isn’t super-important, but you don’t have to know how to write the Wikipedia article on Big O to understand it.) But even if you know the problem ahead of time, can you really code it up nice and sweet, in something resembling workable code, in 10 or 15 minutes — in an interview, on a whiteboard? (Especially if you’re already coding for a living every day anyway?)

I could go on, but you get the point. I’ve interviewed lots of programmers over the years and I’ve never put them through the above rigamarole. The best code test is given to the programmer before the interview, basta. The best evaluation is looking at real code that compiles. And the best person for the job is almost always the engineer who is a great general problem-solver — that ability is far more valuable than rote language skill or recent exposure to a specific problem or class of algos. Sure, there are exceptions and times when you need more of a code-chimp — a trained warrior — than anything else, but 99% of the time you need wizards and mentalists and thieves. Those are the guys and gals who get things done, are constantly improving, and who work with or without a paycheck. You just have to teach them how to be good interviewers.
