Towards an Optimized Project Structure


Over the past years, I have taken a look at a lot of different project structures, all of which have their individual up- and downsides. When I became Lead Programmer at Daedalic Entertainment, I had to come up with what Mike McShaffry calls a “bullet-proof directory structure”.

Based on the best practices in his book Game Coding Complete, I came up with the following structure, which most of my projects now use in some way or another.

Project Structure

  • Bin
    • Main Project
      • PC
      • Mac
    • Tool 1
    • Tool 2
  • Documents (Game Design Document, Technical Design Document, Milestone Agreements, …)
  • Localization
    • English
      • Texts
      • Images
      • Speech Files
      • Cinematics
    • German
  • Media (Raw)
    • Images (Backgrounds, Concept Art, Characters, Items, …)
    • Meshes
    • Textures
    • Icons
    • Sound & Music
    • Cinematics Footage
  • Release
    • Demo/Full
      • Platform
        • Language
          • Installer
          • Patches
  • Source
    • Main Project
      • Source Files
      • Make Files
      • Levels & Imported Assets
    • Tool 1
    • Tool 2
  • Test
    • Debug Build
      • PC
      • Mac
    • Release Notes
    • QA Tools (Cheats, Unlock Tools, …)
  • Vendor

The Bin folder always contains the latest build of the game. You should be able to hand the contents of this folder to the press or to clients at any time. If you’re developing any game-specific tools (e.g. editors), their latest builds can be found here as well.

The directory Documents contains all texts and spreadsheets required for developing the game, such as balancing tables, the game design document or technical specifications.

The Localization folder contains spreadsheets with all game texts that are shown to the user, as well as all media that has to be localized.

All raw, non-localized media, such as .wav files and Photoshop files, is stored in the Media directory. Note that the Media folder contains raw files only. All exported assets, such as .png or .mov files, should go to the Source directory explained below. Images that have to be localized, as well as speech and cinematics, should be stored in the Localization folder instead.

All setup and patch executables can be found in the Release folder.

In the Source folder you can find all source code and project and solution files required for building the game. All exported assets, such as .png or .mov files, can be found here as well. They are tied into the project structure in order to allow quick automated builds and they are replaced by the artists as required. If there are any game-specific tools in development, like editors or localization tools, their source code can be found here, too.

The directory Test always contains the latest nightly build. This build is used for internal testing only, and writes verbose debug output for easier bug tracking. Here you can also find the latest release notes, as well as anything else required for testing the game, such as cheats or tools for unlocking achievements.

The Vendor folder is meant to contain all third-party software required to develop and build your project. The idea behind this folder is that you should always be able to check out the project anywhere and build it immediately, without having to set up anything else.


Using the folder structure presented above, you ensure that every single file can be found in exactly one place, which makes it easier to keep all files up to date. Additionally, you minimize the time required to find any specific file, and thus increase the efficiency of your whole team.

“Why GitHub?”

Every now and then I get asked: “Google Code supports different version control systems. SourceForge features unlimited storage. Why do you use GitHub?”

Source Control: Why GitHub?

Well, first of all, I’d like to minimize the number of accounts, profiles and passwords I have to maintain, for obvious reasons. Both Google Code and SourceForge allow open-source projects only. Although I love the spirit of open source, I have to work in private workspaces from time to time, for example when I’ve licensed a plugin whose source code must not be released. While GitHub embraces the idea of open-source software, it provides the option of adding private repositories if required.

Assembla has private repos, too, but in my opinion its pricing scales very badly: $9/month is a helluva lot for three users and one project. GitHub currently gives you unlimited users on five projects for $7.

Additionally, GitHub allows you to share code snippets using a service called Gist:

Gist is a simple way to share snippets and pastes with others. All gists are git repositories, so they are automatically versioned, forkable and usable as a git repository.

Gist is a great way of storing or even sharing code that just pops into your mind and that you either don’t have the time to implement right now or can’t integrate because you don’t have access to the actual project source.

However, anyone who has used Git before knows that this source control system has its issues with large binary files. With GitHub, you can add these files to the Downloads section of your project, along with a short description. But what if you want to publish a setup executable of a project that resides in one of your private repositories? Or if the critical file is deeply tied into the project itself, like an asset bundle of your awesome 3D game?

I’ve found that people have severe problems forking Hostile Worlds, which just might be due to the size of the repository. That’s why I was forced to fall back on an additional cheap cloud storage solution.

Cloud Storage: Why SkyDrive?

Don’t get me wrong: Dropbox is great. It is awesome for quickly sharing files among devices and friends, and it is very widely used. However, Dropbox starts at 2 GB of storage, and I got tired of all the referral and another-fancy-way-of-unlocking-additional-Dropbox-space stuff very soon.

Microsoft SkyDrive starts at 7 GB and can be upgraded by an additional 20 GB for $8/year, compared to 100 GB for $99 with Dropbox Pro. Thus, as I have a Windows Live account for my Windows Phone anyway, I’ll stick to SkyDrive – if you’re an Android user, you might want to check out Google Drive, which starts at 5 GB.


I’m pretty happy with GitHub, which allows me to store my source code in public and private repos alike and scales well while doing so. As a Windows Phone user, I’m going to stick to SkyDrive for storing large files and packages, but I’m afraid I’ll never get rid of Dropbox for sharing files with friends and co-workers.

On Dictionaries and Events

This month featured two really intense coding experiences: a prototyping week that Christian Oeing and I used to start a software project of our own, and the Daedalic Game Jam. I want to summarize the lessons learned from both events in this post.

.NET Dictionary Lookup Performance

First of all, I got to learn how to work with the less obvious methods provided by generic .NET dictionaries. Consider the following code snippet, which is actually taken from code I had written before the prototyping week:
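The snippet followed this pattern; a minimal sketch, assuming a hypothetical `Dictionary<int, Unit>` called `units` (the type and names are illustrative, not from the original project):

```csharp
using System.Collections.Generic;

public class Unit
{
    public string Name;
}

public class UnitManager
{
    private Dictionary<int, Unit> units = new Dictionary<int, Unit>();

    public Unit GetUnit(int unitId)
    {
        // First lookup: ContainsKey hashes the key and searches for it.
        if (units.ContainsKey(unitId))
        {
            // Second lookup: the indexer hashes and searches again.
            return units[unitId];
        }

        return null;
    }
}
```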

This code actually performs two dictionary lookups to retrieve a single value: one is made by ContainsKey, and the other is part of the dictionary indexer used to retrieve the value. You can prevent this by using TryGetValue instead:
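The same method with a single lookup might look like this (again, the `units` and `Unit` names are illustrative):

```csharp
using System.Collections.Generic;

public class Unit
{
    public string Name;
}

public class UnitManager
{
    private Dictionary<int, Unit> units = new Dictionary<int, Unit>();

    public Unit GetUnit(int unitId)
    {
        Unit unit;

        // Single lookup: TryGetValue hashes the key once and writes the
        // value to the out parameter, or default(Unit) if the key is absent.
        units.TryGetValue(unitId, out unit);
        return unit;
    }
}
```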

TryGetValue will try to retrieve the value from the dictionary, returning false and setting the out parameter to the default value of the value type (null for reference types) if it fails to do so. According to the official .NET documentation, TryGetValue is more efficient in case you often try to access values that turn out not to be in the dictionary.

Another speed-up could be achieved by changing the way I iterated over dictionary entries. Take a look at the following code snippet:
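A sketch of that iteration pattern, with a hypothetical `units` dictionary and a hypothetical `Update` method:

```csharp
using System.Collections.Generic;

public class Unit
{
    public void Update()
    {
        // Per-frame logic goes here.
    }
}

public class UnitManager
{
    public void UpdateUnits(Dictionary<int, Unit> units)
    {
        foreach (int unitId in units.Keys)
        {
            // Every iteration pays for a full dictionary lookup.
            Unit unit = units[unitId];
            unit.Update();
        }
    }
}
```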

This foreach loop iterates over all keys of the dictionary and looks up the associated values afterwards. However, if the whole dictionary is iterated anyway, we can do without lookups entirely:
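The lookup-free version of the same loop iterates over the key-value pairs directly (same hypothetical names):

```csharp
using System.Collections.Generic;

public class Unit
{
    public void Update()
    {
        // Per-frame logic goes here.
    }
}

public class UnitManager
{
    public void UpdateUnits(Dictionary<int, Unit> units)
    {
        // The enumerator yields key and value together, so no
        // additional lookup is needed.
        foreach (KeyValuePair<int, Unit> pair in units)
        {
            pair.Value.Update();
        }
    }
}
```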

By iterating over the key-value pairs themselves, we can access the contents of the whole dictionary without spending time on a single lookup.

Exceptions During Event Queue Processing

One week later, at the Daedalic Game Jam, I used an event system for processing user input and game events.
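The system looked roughly like this; a simplified sketch with hypothetical `Event` and listener types, not the actual game jam code:

```csharp
using System;
using System.Collections.Generic;

public class Event
{
    public string Type;
}

public class EventManager
{
    // Listeners registered per event type.
    private Dictionary<string, Action<Event>> eventListeners =
        new Dictionary<string, Action<Event>>();

    // Events raised since the last pass - including events raised
    // by listeners during processing.
    private List<Event> newEvents = new List<Event>();

    private List<Event> currentEvents = new List<Event>();

    public void QueueEvent(Event e)
    {
        newEvents.Add(e);
    }

    public void RegisterListener(string eventType, Action<Event> listener)
    {
        Action<Event> listeners;

        if (eventListeners.TryGetValue(eventType, out listeners))
        {
            // Combine with the existing multicast delegate.
            eventListeners[eventType] = listeners + listener;
        }
        else
        {
            eventListeners[eventType] = listener;
        }
    }

    public void ProcessEvents()
    {
        while (newEvents.Count > 0)
        {
            // Swap the queues: events raised by listeners end up in
            // newEvents and are handled in the next pass, preventing
            // concurrent modification of the list being iterated.
            currentEvents.AddRange(newEvents);
            newEvents.Clear();

            foreach (Event e in currentEvents)
            {
                Action<Event> listeners;

                if (eventListeners.TryGetValue(e.Type, out listeners))
                {
                    listeners(e);
                }
            }

            currentEvents.Clear();
        }
    }
}
```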

So far, so good. If any new events have occurred, all of them are processed by passing them to the appropriate listeners (using TryGetValue 😉 ). Additional events that occur during event processing are added to the newEvents list, preventing concurrent modification issues and maintaining the correct processing order.

However, this method might cause severe problems in one case (and actually did so during the game jam 😉 ): if any of the event listeners throws an unhandled exception, the event processing loop will break and never recover! If ProcessEvents is called every frame, it will fail at the exact same event every time. Even worse, the same events will be processed over and over again, because the system never gets to clear its event queue.

Clearly, handling errors should be the responsibility of the specific event handler in this case. However, if a co-worker of yours just forgets to do so for some tiny exception that might occur, your whole game will break, and you can’t do anything to recover. Thus, it might be a good idea to wrap the eventListeners(e) call in a try-catch block, logging any exceptions the listener causes, and at least continue processing the remaining events and clearing the queue.
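A sketch of that safeguard, wrapping only the listener call (the `Event` type and `eventListeners` field are hypothetical names for illustration):

```csharp
using System;
using System.Collections.Generic;

public class Event
{
    public string Type;
}

public class EventManager
{
    private Dictionary<string, Action<Event>> eventListeners =
        new Dictionary<string, Action<Event>>();

    private void NotifyListeners(Event e)
    {
        Action<Event> listeners;

        if (!eventListeners.TryGetValue(e.Type, out listeners))
        {
            return;
        }

        try
        {
            listeners(e);
        }
        catch (Exception ex)
        {
            // Log and move on: a single faulty listener must not
            // stall the whole event queue.
            Console.Error.WriteLine(
                "Error processing event {0}: {1}", e.Type, ex);
        }
    }
}
```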


As most of us know – but tend to forget in the most crucial moments – there is a subtle difference between getting things done and getting things done-done:

“Hey, Liz!” Rebecca sticks her head into Liz’s office. “Did you finish that new feature yet?”

Liz nods. “Hold on a sec,” she says, without pausing in her typing. A flurry of keystrokes crescendos and then ends with a flourish. “Done!” She swivels around to look at Rebecca. “It only took me half a day, too.”

“Wow, that’s impressive,” says Rebecca. “We figured it would take at least a day, probably two. Can I look at it now?”

“Well, not quite,” says Liz. “I haven’t integrated the new code yet.”

“Okay,” Rebecca says. “But once you do that, I can look at it, right? I’m eager to show it to our new clients. They picked us precisely because of this feature. I’m going to install the new build on their test bed so they can play with it.”

Liz frowns. “Well, I wouldn’t show it to anybody. I haven’t tested it yet. And you can’t install it anywhere—I haven’t updated the installer or the database schema generator.”

“I don’t understand,” Rebecca says irritably. “I thought you said you were done!”

“I am,” insists Liz. “I finished coding just as you walked in. Here, I’ll show you.”

“No, no, I don’t need to see the code,” Rebecca says. “I need to be able to show this to our customers. I need it to be finished. Really finished.”

“Well, why didn’t you say so?” says Liz. “This feature is done—all coded up. It’s just not done done. Give me a few more days.”

On the one hand, we have developers telling you the feature is done while some crucial steps are still left. The feature might still be buggy or not yet tested, or it might only work in a local sandbox – but “the code is done!”

To remind myself again and again how to really finish a feature, I created a wrap-up checklist that is available in my Grab Bag of Useful Stuff. The list summarizes all the steps I adhere to after my code compiles and runs, but before it is committed to version control, such as writing unit tests and API documentation. The checklist has been mandatory at my company for a few weeks now, and it has increased code quality and uniformity by a great deal.

On the other hand, some engineers tend to overdo it. Optimizing your code to save every tiny bit of CPU time and memory should never be your goal unless you’re working on very critical parts of your code base. Clearly, most of your code will never be performance-critical anyway, because it’s event-driven or even just run once at initialization. Try to focus on getting the job done, and start optimizing only when it’s really necessary.

Oh, and should any project manager or lead programmer ever ask you how long you’ll need to implement a new feature: please tell them your estimate for getting it done-done 😉

NavMesh Generation Made “Easy”

While preparing this year’s version of my Introduction to the Unreal Development Kit, I was reminded that Epic integrated Recast in February this year: Recast is an open-source library for the automatic generation of navigation meshes. Although nav mesh generation has been part of the UDK for almost two years now, Epic finally decided to switch to Recast, gaining an almost tenfold performance boost.

I highly recommend reading the corresponding CritterAI article in case you’re interested in the topic: Stephen Pratt explains the whole generation process, from heightfield generation to region generation to contour generation to convex polygon generation. Many detailed explanations and elaborate illustrations make this article really worth reading. What are you waiting for?