Originally written by Jan Nedoma.
In our time, the world is getting smaller. Thanks to multinational corporations you’ll see the same goods in shops in many countries around the world, and the same TV show airing in the US and in Iceland. Of course, this phenomenon also applies to computer games. Game distributors will tell you that games in the local language usually sell much better, and the same can be said about indie games. But before a game can be “exported” to foreign markets, it needs to go through a process called “localization”. This article discusses how to make your game localization-ready and what problems you’re likely to encounter.
By localization we typically mean translating the game into a different language, but generally speaking, it’s about adapting the game to the cultural differences of a given country. You will find surprisingly few materials on the internet concerning this part of game development. This article is based on my personal experience with game localizations, and the procedures described here have been successfully used in real life. Nevertheless, neither I nor anyone else knows everything, and I’m not claiming that everything described here is the only and perfect way. Consider this article a set of suggestions based on real-life experience. If you have any corrections and/or suggestions, you can contact me at [nedoma (at) dead-code (dot) org].
This article is written from a programmer’s point of view (on the Windows platform), but most of the concepts apply to localization in general.
Making your game easily localizable means making it localization-ready. Let’s go through the most important areas affecting a game’s localization readiness.
Localizing the in-game texts
The string table – creating and maintaining
The in-game texts are the most obvious localization target. There are several approaches, but ideally all the in-game texts should be collected in one big text file (let’s call it a “string table”), which can easily be handed to translators to do the actual localization work. The game should be able to load all the texts from this file and use them. This means that by simply replacing the string table file you’re able to switch languages on the fly at runtime (or at install time, whichever you prefer). But how do we achieve this ideal situation? First, every single text needs to be augmented with a unique identifier. This allows us to internally reference only the string IDs (which don’t change), while the actual text content can be switched without affecting the game code. Assigning IDs to text strings is quite a complicated task, because it’s hard to manage large quantities of text, avoid duplicates, not forget anything, and so on. Of course, it depends on the game type and thus on the amount of text. For simple action games that contain only a couple of strings such as “New game” and “You are dead”, it’s easy to maintain a string table by hand, but a large RPG or adventure game will probably require some kind of utility to do the dirty work for you.
The approach I’m using is to develop the game while (relatively) ignoring the future translation, and only when the game is finalized enough (namely, when the texts are more or less final), I use a tool to extract all the localizable texts from the game files and export them to a string table. During this process every text is also assigned a unique ID, and the original text occurrences in the game files are replaced by this ID.
This approach brings some pitfalls, though:
1) How to extract all the texts used in the entire game? I’ll give you a single piece of advice: try to design the structure of your game data files so that extraction is as easy as possible. The most important thing to remember: forget about texts hard-coded in the game executable. All the texts should be stored in data files, or in the game scripts. It’s better to use text-based data files, not binary ones. The popular XML format is a good candidate, because with the right structure you’ll be able to extract all the localizable texts using a generic tool, which doesn’t even have to know the actual structure of your game files. For example, imagine an XML file describing units in a real-time strategy game:
<Unit>
  <Name Localizable="true">Archer</Name>
  <Description Localizable="true">Ineffective for close combat,
  but deadly for ranged attacks.</Description>
</Unit>
<Unit>
  <Name Localizable="true">Knight</Name>
  <Description Localizable="true">A slow but very powerful unit.</Description>
</Unit>
Notice that the “Name” and “Description” entities are marked with a “Localizable” attribute. That means we can make a generic tool which loads an XML document, scans all the entities (without looking for any specific names), and exports the content of any entity carrying the “Localizable” attribute to a string table. The advantage of the XML format is that there are many ready-to-use parsers available, so you don’t have to write your own. But of course, you can use a similar approach with your own custom file formats.
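Such a generic extraction pass can be sketched as follows. This is a deliberately naive scanner — it ignores nesting, CDATA sections and attribute-quoting edge cases — and a real tool would be built on one of the ready-made XML parsers mentioned above (e.g. TinyXML2). The point is only that the tool keys off the `Localizable="true"` marker, not off element names:

```cpp
#include <string>
#include <vector>

// Collect the text content of every element marked Localizable="true".
// Naive sketch: assumes well-formed XML without nested localizable elements.
std::vector<std::string> ExtractLocalizable(const std::string& xml) {
    std::vector<std::string> result;
    const std::string marker = "Localizable=\"true\"";
    size_t pos = 0;
    while ((pos = xml.find(marker, pos)) != std::string::npos) {
        size_t open = xml.find('>', pos);        // end of the opening tag
        size_t close = xml.find('<', open + 1);  // start of the closing tag
        if (open == std::string::npos || close == std::string::npos) break;
        result.push_back(xml.substr(open + 1, close - open - 1));
        pos = close;
    }
    return result;
}
```

A tool like this can be pointed at any data file in your game without knowing its schema, which is exactly what makes the attribute-based convention attractive.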
If your game uses some kind of scripting language, you’ll probably want to extract all the string constants from the scripts. In my opinion, the best approach here is to modify the script compiler itself, because it knows exactly when it encounters a string constant while compiling, including the exact line and column in the source file. It shouldn’t be a big problem to extract the string constant and replace it with an identifier. Sometimes scripts contain string constants which are not supposed to be localized. For this purpose I’m using an ignore list and a library of “known code patterns” to filter out irrelevant strings.
2) The process described above will yield the following result: all the in-game texts are stored in a string table and their original occurrences are replaced by unique identifiers. Even though this is basically what we wanted to achieve, the problem is that the original files become rather unreadable afterwards. Imagine the following script:
Frank.Talk("Hi Joe, how are you?");
Joe.Talk("Yeah, I'm okay.");
Frank.Talk("I gotta go.");
After our intervention and after exporting the texts, the script will become something like this:
Frank.Talk("STRING0001");
Joe.Talk("STRING0002");
Frank.Talk("STRING0003");
While it’s pretty obvious what the first script does, the second one is totally unreadable because of the absence of the texts, which can be a problem for any subsequent script changes. There’s an elegant solution I’ve seen in some LucasArts games. Just keep both the original text AND the identifier in the source file, using some unambiguous syntax. For example:
Frank.Talk("STRING0001|Hi Joe, how are you?");
Joe.Talk("STRING0002|Yeah, I'm okay.");
Frank.Talk("STRING0003|I gotta go.");
As you can see, the texts in the scripts are now stored like this: “identifier | original text”. This approach brings us two advantages. Firstly, the code is still readable, and secondly, there’s a chance to get back to the original text if needed. For example, if the game cannot find any string table, it will still run, but it will only display the texts on the right side of the “pipe” character. Additionally, it’s easy to re-export all the texts from the game files, because the extraction tool will know exactly which texts have already been exported, and which ones are newly added (the ones not using the “pipe” syntax yet).
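The “which texts are new” check during re-export boils down to recognizing the pipe syntax. A minimal sketch is below; the assumption that identifiers consist only of letters and digits is mine — adapt the check to whatever ID naming scheme your exporter actually uses:

```cpp
#include <cctype>
#include <string>

// Returns true if the string already uses the "ID|text" syntax produced by
// a previous export pass, i.e. it starts with an identifier followed by '|'.
bool IsAlreadyExported(const std::string& s) {
    size_t pipe = s.find('|');
    if (pipe == std::string::npos || pipe == 0) return false;
    for (size_t i = 0; i < pipe; ++i) {
        // everything before the pipe must look like an identifier character
        if (!std::isalnum(static_cast<unsigned char>(s[i]))) return false;
    }
    return true;
}
```

The extraction tool can then assign fresh IDs only to strings failing this test, leaving previously exported lines untouched.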
We haven’t yet looked at what the generated string table looks like. The format is up to you. I’m using a simple text format where each line contains one string: the identifier, a tab character as a separator, and the actual localizable text. The translator can simply open the file in any text editor and replace the texts with their translated equivalents. The string table from our example would look like this:
STRING0001 Hi Joe, how are you?
STRING0002 Yeah, I'm okay.
STRING0003 I gotta go.
And the German translator will send you back something like:
STRING0001 Hi Joe, wie geht's?
STRING0002 Alles bestens.
STRING0003 Ich muss weg.
You can improve the string table format to suit your needs. For example, lines starting with semicolon can be treated as comments (the game will ignore them when loading the table) so that the translators/designers can add notes to the string table file, etc.
The string ID itself can be improved too. For example, you can use some standardized format to encode the character speaking and/or the location where it’s being used etc. These are things that make translators’ work a lot easier.
In case of large games the designer will probably need to add comments to the table, to describe the context of some of the texts, otherwise the translators might get lost and the translation quality will suffer.
Another useful feature is to allow string table entries to reference other entries in the table. Typically, the same text appears in the game in different situations. If you allow inter-table references, the translator can translate the text once and “redirect” all the other occurrences of the same text to the translated entry. I don’t recommend exporting multiple occurrences of the same string as a single string table entry, because they might need different translations in another language (depending on the context). Leave it to the translator to decide which strings should be the same.
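One possible way to support such references is sketched below. The “@OTHERID” redirect convention is my assumption, not part of the table format described above; the lookup simply follows redirects until it reaches a real text, with a depth guard against accidental cycles:

```cpp
#include <map>
#include <string>

// Resolves a string table entry, following "@OTHERID" redirects so that a
// repeated line only needs to be translated once.
std::string Resolve(const std::map<std::string, std::string>& table,
                    const std::string& id) {
    std::string current = id;
    for (int depth = 0; depth < 16; ++depth) {  // guard against redirect cycles
        auto it = table.find(current);
        if (it == table.end()) return current;  // unknown ID: give up gracefully
        if (it->second.empty() || it->second[0] != '@') return it->second;
        current = it->second.substr(1);         // follow the redirect
    }
    return current;
}
```

With this in place, a translator can set `STRING0042` to `@STRING0001` and maintain the translated text in a single place.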
Using the string table in your program
All right, now we know how to create the string table, but how are we going to use it? Our game engine needs to load the table into memory. Considering the data structure (an identifier with a text attached), we’ll probably want some sort of dictionary, such as a hash table. If you’re programming in C++, the STL map container (or a hash-based equivalent such as unordered_map) is a good choice.
Loading the string table file should be trivial. We simply read the file line by line; if the line is empty or starts with a semicolon (i.e. it’s a comment), we ignore it. Otherwise we split it into two parts: the part left of the tab character (the identifier) and the part right of the tab (the actual text), and we store the pair in the table. Again, there’s a lot of room for improvement, such as reporting duplicate entries, or reporting lines without a tab character (the translator accidentally replaced it with a space), etc.
A special note on whether to keep all the texts in memory at once. It might look like a waste of memory to you, but the truth is that even most “talkative” games only contain a couple of hundred kilobytes of text. Very few games contain more than a megabyte. Considering the average memory capacity of today’s PCs, I think you can safely store the entire string table in memory. But if you’re still not convinced, or if you’re targeting a platform with limited memory (such as Pocket PC), the solution is to divide the table into multiple parts, for example generic texts plus specific texts for each level of the game. You then load and unload parts of the string table as needed.
From the programmer’s point of view, it’s a good idea to encapsulate the string table in a class, which keeps all the texts and provides a method (preferably a static one) that accepts the original text and returns a translated text. By “original text” I mean the text in the “identifier|original text” format. The method does the following:
- Check whether the original text contains an identifier.
- If not, the text is not supposed to be translated, so simply return the original text we received.
- If yes, split the original text into the identifier part and the text part.
- Using the identifier, try to find a translated text in the string table.
- If the string table contains this identifier, return the translated text.
- If not, return the textual part of the string we received.
This way, even if there’s no translated version of the text, the game will still display SOMETHING (the original untranslated version). Certainly better than displaying an empty string or an error.
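Put together, the steps above can be sketched as a minimal version of the CStringTable class used later in the article — only the lookup logic, with loading and debug reporting omitted:

```cpp
#include <string>
#include <unordered_map>

class CStringTable {
public:
    // Splits "ID|original text", looks the ID up, and falls back to the
    // original text when no translation exists.
    static std::string Translate(const std::string& original) {
        size_t pipe = original.find('|');
        if (pipe == std::string::npos) return original;  // not localizable
        std::string id = original.substr(0, pipe);
        std::string fallback = original.substr(pipe + 1);
        auto it = s_table.find(id);
        return it != s_table.end() ? it->second : fallback;
    }

    // The loaded string table; public here only to keep the sketch short.
    static std::unordered_map<std::string, std::string> s_table;
};

std::unordered_map<std::string, std::string> CStringTable::s_table;
```

Note how a missing table entry and a plain, untagged string both degrade gracefully to displaying the original text.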
Once again, we can further improve the process. A debug version of the game can report texts with missing translations, or it can return the translated text including the string table ID, so that the ID is displayed on screen. That can be useful for beta testing the translation. If the testers encounter some text with wrong translation or some grammar mistake, they can write down the identifier of the text and report it back to the translator.
The only thing left is to find all the “end points” of the program, where the texts are being used, and change these to call the method of our string table class to replace the original text with a translated one. What do I mean by “end points”? Typically these are the parts of the code where the text is actually presented to the user. Whether it be displaying it on screen or writing it to some text file etc. There are usually relatively few such points in a game engine (if it’s well designed, that is).
Let’s see one (dumb) example of how to add localization support to our code. Let’s assume we have a piece of code, which displays a message box whenever it encounters some critical error. The original code would look like this:
MessageBox(NULL, "A critical error has occurred.", "Error", MB_ICONERROR);
To display the message in another language, we’ll modify the code like this:
MessageBox(NULL, CStringTable::Translate("SYSSTR0001|A critical error has occurred."), CStringTable::Translate("SYSSTR0002|Error"), MB_ICONERROR);
CStringTable is the class encapsulating our string table, Translate() is a static method, which gets the original text as a parameter and returns the translated text (described above).
The code got a bit more complicated, but not too much. Alternatively, you can use a macro to further shorten the code:
#define LOC(String) (CStringTable::Translate(String))
And the resulting code would look like this:
MessageBox(NULL, LOC("SYSSTR0001|A critical error has occurred."), LOC("SYSSTR0002|Error"), MB_ICONERROR);
Similarly, you’ll need to modify all the pieces of your code where the texts are in some way presented to the user.
Storing the in-game texts
ASCII versus Unicode
Note: The following section is intentionally a little simplified. I’m just trying to describe the concepts and motivations, not to make any in-depth analysis of the history of character encoding 🙂 Also, for further simplicity I’m going to use the term “ASCII” for any 8 bit characters, even though, technically speaking, this term isn’t always completely accurate.
Another area of interest is the actual storage of the game texts in memory. Historically, most programs stored (and many still store) texts as 8-bit characters, i.e. one character takes one byte of memory. That means one can use at most 256 different characters. The ASCII standard, where this character storage originates, was designed to store just the basic Latin alphabet (a to z, A to Z), the digits and some symbols. Even 7 bits were enough for those (128 characters at most). As software got more and more complex, it became necessary to store various other characters specific to other languages (most European languages other than English use some kind of accented characters). The remaining 128 unused values were used for storing these national characters. The problem is, if you take all the national characters used by the various languages, you get far more than 128 of them. That’s why the concept of so-called “code pages” was introduced. Several groups of national languages were defined, and it was possible to switch the code page. That means the 128 non-English characters got a different meaning depending on which code page was currently in use. For European languages, for example, there are groups like Western European, Central and Eastern European, Cyrillic, Greek, etc. Similarly, the code page concept supports languages such as Arabic, Hebrew and others.
To make things even more complicated, there are several mutually incompatible standards for code pages. One encoding was used in DOS, another in Windows, another is an ISO standard, and there are various local standards as well…
What does that mean for game localization? If we are storing texts as 8 bit characters (i.e. the classical C “char” data type) we’re able to display at most 256 different characters. If we want to support multiple languages (we do!) we’ll have to deal with code pages. In reality, this means one thing: our game must use different fonts for different code pages. But we’ll talk about fonts in the next chapter.
I think it’s obvious that the concept of code pages is far from ideal. In programmer’s jargon I’d call it a “hack”: a way to stuff an unlimited number of characters into 256 available slots as easily as possible. Also notice that I haven’t even mentioned Asian languages yet…
The effort to solve the national characters problem once and for all resulted in a standard called Unicode. The goal of Unicode is to consolidate the fragmented code pages for various languages/regions and to define a standard set of all possible characters in all languages (or at least the vast majority of them 🙂
From the programmer’s point of view it’s important to note that, as opposed to ASCII, one byte is no longer enough for storing a single character. Unicode currently defines over 90 thousand different characters. There are several standards for storing Unicode characters; the two most commonly used are UTF-8 and UTF-16. The “UTF” abbreviation stands for “Unicode Transformation Format”; the number is the bit width of the encoding’s basic unit. You may object that neither 8 nor 16 bits is enough to describe 90,000 different characters. The point is that a single Unicode character can be encoded as a sequence of such units, and the number of units differs from character to character. In UTF-8, one character takes one to four bytes (and “common” characters, i.e. basic Latin, take just one byte, so e.g. English text looks the same in UTF-8 as it does in ASCII). UTF-16 uses 16-bit words to store characters, and even though 16 bits are in theory still not enough, in practice one 16-bit word equals one Unicode character (unless you’re using some “obscure” characters, such as ancient Japanese scripts, which require surrogate pairs).
All right, that was a very brief introduction to Unicode, let’s get back to game localization.
If you want to avoid the code page problems or if you want to support Asian languages, your game engine should store texts in Unicode format. More and more modern games are using Unicode, simply because localization is now an important part of game development process, and Unicode makes localization easier.
Above I mentioned two ways of storing texts, UTF-8 and UTF-16. UTF-8 uses normal 8-bit chars; UTF-16 uses the wchar_t data type, which is typically a 16-bit integer (note: this is true for the Windows platform; Linux uses a 32-bit wchar_t by default). One great advantage of UTF-8 is that if you already have a finished old program using 8-bit characters, you don’t need to change anything to store UTF-8 strings. That’s a great time saver. Also, all the standard C-style string manipulation functions (strcpy, strlen etc.) will work on UTF-8 strings. But of course, you must remember that one UTF-8 byte isn’t necessarily equal to one character (as I said, one UTF-8 encoded Unicode character can take up multiple bytes). This means, for example, that the strlen function returns the number of bytes in the string, NOT the number of characters! Likewise, if you’re accessing individual elements of the string using the [] operator, you’re getting bytes, not characters. But even with these disadvantages, UTF-8 is an (almost) ideal solution for developers who need to add Unicode support to their legacy code quickly and easily.
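The byte-versus-character distinction is easy to demonstrate. The helper below counts actual code points in a UTF-8 string by skipping continuation bytes (those of the form 10xxxxxx); for any non-ASCII text its result differs from strlen():

```cpp
#include <cstddef>
#include <cstring>

// Counts Unicode code points in a UTF-8 string. Continuation bytes
// (10xxxxxx) are skipped; only lead bytes and ASCII bytes are counted.
std::size_t Utf8Length(const char* s) {
    std::size_t count = 0;
    for (; *s; ++s) {
        if ((static_cast<unsigned char>(*s) & 0xC0) != 0x80) ++count;
    }
    return count;
}
```

For the German word “schön” encoded in UTF-8, strlen() reports 6 bytes while Utf8Length() reports the 5 characters a user actually sees.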
Nevertheless, if you are designing a new system from scratch, I recommend using the wchar_t type for storing text strings. The C standard library now contains all the string manipulation functions in two versions: the classical ones working with the “char” data type (strcpy, strlen…) and their equivalents for the “wchar_t” type (wcscpy, wcslen…). Similarly, STL offers both the “string” and “wstring” types.
Of course, storing the texts is one thing, and displaying them on screen is another thing. But we’ll get to that later in the Fonts chapter.
Unicode on various platforms
Of course, the problem of national characters and localizations is not game-specific and needs to be handled on operating system and development platform levels. So let’s take a brief look at how some of the most popular platforms deal with Unicode.
Linux was historically built on ASCII characters; therefore, Unicode support on this platform is commonly realized using the UTF-8 encoding, which has become the de facto standard for string interchange.
The situation on the Windows platform is a bit more interesting. For a long time, Windows systems were developed in two parallel branches. On one side was the Win9x branch (Win95, Win98, WinMe), targeted at low-end hardware; these systems used 8-bit ASCII strings. On the other side, the WinNT branch (WinNT, Win2000, WinXP, Vista) has used 16-bit characters from the beginning. But to maintain backward compatibility with programs written for Win9x, the NT-based systems must be able to handle ASCII strings as well. In practice, all the Windows API functions working with strings are provided in two variants. The functions share the same name, but the ones working with 8-bit characters end with “A” (for ANSI) and the ones working with 16-bit characters end with “W” (for Wide). For example, the MessageBox function exists as MessageBoxA and MessageBoxW. The “A” functions work only as stubs which convert the string parameters from ASCII to Unicode and call the “W” function. On Win9x systems this conversion is also partially supported, but the other way around and only for a handful of selected functions. Fortunately, Microsoft later released a separate library called the “Microsoft Layer for Unicode”, which allows programs running on Win9x to use the entire set of Unicode variants of the API functions.
Unlike Windows NT, the Windows CE (Windows Mobile) platform for mobile devices got rid of the compatibility and only supports 16 bit characters. If you are developing a game for WinCE, you can still use normal ASCII strings, but whenever you need to call some API function expecting a string, you’ll need to convert the string to UTF-16.
The modern runtime platforms Java and .NET use 16-bit characters exclusively, although they provide a large set of conversion routines from/to other text encodings.
At the end of this chapter I’d like to mention two useful Windows API functions for converting strings between 8-bit and 16-bit characters: MultiByteToWideChar converts from 8-bit characters to 16-bit, and WideCharToMultiByte converts the other way. The important thing is that both of these functions can (among other encodings) handle UTF-8 encoded Unicode strings.
Fonts
In the previous chapters we discussed how to separate the in-game texts from the rest of the game and how to store them in memory. The result of all our efforts should be localized text displayed on screen. Of course, we need fonts to actually render the text, and there are many possible approaches. Let’s take a closer look at some of them, especially with localization in mind.
The traditional approach to game text display is to load a special texture containing all the characters, and then render each character as a single quad, textured with the part of the texture holding the image of that specific letter. This approach is easy to understand and to implement. The problem is that the font texture contains only a limited set of characters, typically 256 of them, i.e. a single code page (more on code pages in the previous chapter). To be able to display texts for various languages, our game needs to use different fonts for different code pages (for example, a texture with a Russian font will contain completely different characters than a texture with a Greek font). It then depends on how the font texture is made. There are generally two ways: either the texture is manually created by an artist using utilities such as Bitmap Font Builder or FONText, or the texture is automatically generated by the engine at runtime. In the first case it’s necessary to manually create font textures for all the code pages our game is supposed to support. In the second case the game has to know which character set to use when generating the font texture. For example, if we are using the Win32 API to generate the font, we create it with the CreateFont function, which accepts a character set identifier as its ninth parameter.
As you can see, we’re once again sinking into the limitations of code pages and ASCII strings. Besides, this approach has several more or less fatal limitations:
- By painting the text ourselves, letter by letter, we’re bypassing the font rendering engine. Whether we’re using the Windows API, FreeType or some other font rendering library, these libraries are usually designed to render continuous blocks of text. This allows them to render the text with respect to certain typographic rules. For example, they use so-called kerning, i.e. different spacing between particular letter pairs (kerning pairs), which improves the look of the rendered text, especially at larger sizes. If we render the text ourselves, we lose these advantages (or we have to reimplement them).
- By painting text letter by letter, we also lose advanced text formatting options. And here we get back to localization. Some languages, namely Hebrew and Arabic, write text from right to left (right-to-left, RTL). While the Windows API supports RTL text directly, and we can enable the support by specifying one parameter of the ExtTextOut function, when painting the text ourselves we have to handle RTL ourselves as well. You might say implementing right-to-left rendering shouldn’t be too hard, but it’s more complicated than that. Hebrew and Arabic texts can contain parts written in Latin script (for example names) or numbers, and these parts need to be rendered left to right. This is so-called “bidi” (bi-directional) text.
- And there are the Asian languages. While in case of European languages we can use code pages, it’s not quite feasible to render all Chinese characters into a single texture. We have to look for other solutions.
The problems described above can be solved using the following method. Instead of using an existing font rendering engine just to paint individual letters into a font texture, we can let it paint the entire text we’re about to display into a texture. That way we get all the advanced text formatting features practically for free. Of course, this approach has its share of disadvantages too. Depending on the amount of text, the resulting texture can be quite large and will take a lot of video memory. It might also be necessary to divide the texture into smaller parts, depending on the capabilities of the video card.
Compared to a static font texture, this approach is more time- and resource-demanding when generating the text texture. It’s not feasible to regenerate the text texture every frame. Fortunately, each text caption typically stays on screen for some time, so it’s possible to implement some sort of texture cache holding, say, the 10 most recently rendered texts. The text texture is then generated in one frame and reused in all subsequent frames.
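Such a cache might be sketched as follows. The int texture handle and the Render() stub are placeholders standing in for a real render-text-to-texture call (and the counter exists only so the caching effect can be observed); the point of the sketch is the most-recently-used bookkeeping:

```cpp
#include <list>
#include <string>
#include <unordered_map>
#include <utility>

// A minimal most-recently-used cache for rendered text textures.
class TextTextureCache {
public:
    explicit TextTextureCache(std::size_t capacity) : m_capacity(capacity) {}

    int Get(const std::string& text) {
        auto it = m_index.find(text);
        if (it != m_index.end()) {
            // cache hit: move the entry to the front (most recently used)
            m_order.splice(m_order.begin(), m_order, it->second);
            return it->second->second;
        }
        int texture = Render(text);  // expensive: happens once per text
        m_order.emplace_front(text, texture);
        m_index[text] = m_order.begin();
        if (m_order.size() > m_capacity) {  // evict the least recently used
            m_index.erase(m_order.back().first);
            m_order.pop_back();
        }
        return texture;
    }

    int renderCalls = 0;  // observable stand-in for the expensive work

private:
    int Render(const std::string&) { return ++renderCalls; }

    std::size_t m_capacity;
    std::list<std::pair<std::string, int>> m_order;
    std::unordered_map<std::string,
        std::list<std::pair<std::string, int>>::iterator> m_index;
};
```

Displaying the same caption in consecutive frames then costs one texture generation followed by cheap lookups.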
In any case, it’s up to you to decide which of the described approaches better suits your needs. I’ve just tried to summarize the advantages and disadvantages that might not be obvious at first sight.
Other aspects of game localization
In the previous chapters we talked about game localization, especially about storing the game texts and rendering them using fonts. These are no doubt the most important areas, but localization also affects other parts of development. Let’s take a look at those in this final chapter.
There’s a single most important rule: avoid putting text directly into graphics whenever possible. All texts should be rendered programmatically using fonts. Keep in mind that every texture containing text will need to be repainted for each language version of your game. If you are creative, you can sometimes avoid using texts altogether: for example, instead of buttons with text labels, use icons, or display a textual tooltip when the mouse hovers over the button.
User interface
By user interface I mean the various windows for saving the game, the main menu, confirmation dialogs, etc. Even when designing the user interface you should think about localization. Leave plenty of additional free space for all the text labels. Remember that the translated text can be much longer than the original. This is especially true if your original language is English; English is generally quite terse, and other languages usually need more space to say the same thing. Also remember that for Asian languages you’ll probably need to use bigger fonts, so make the text areas slightly taller than necessary.
Fun fact: German is probably one of the most critical languages when it comes to text lengthening. In Oblivion the player picks up healing potions, described as “Light potion of healing” in the inventory. The German localizers translated the name as “Schwacher Trank der Lebensenergie-Wiederherstellung”, which wouldn’t fit in the reserved screen space, so they had to abbreviate it to “Schw.Tr.d.Le.en.-W.”, which is… kind of hard to comprehend 🙂 It’s a nice example of a badly designed user interface resulting in a poor localization.
Voiceovers and videos
If your game contains voiceovers, keep these sound files separate from the generic sound effects. How to organize and name the voiceover files depends on the situation, but you can use the string identifier from the string table to name the sound file. For example, if your string table contains something like:
STRING0001 Hi Joe, how are you?
The voiceover for this line would be stored in a file called “string0001.mp3”. The game engine can then automatically find and play the voiceover for the line being displayed on screen. Also, if your string IDs encode the character a line belongs to, you can easily filter lines for a specific voice actor, etc. A well-designed string table structure even allows you to generate entire scripts for each voice actor.
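The ID-to-filename mapping is trivial to implement. The lowercased ID and the “.mp3” extension below follow the example above, but both are conventions you would adapt to your own asset pipeline:

```cpp
#include <cctype>
#include <string>

// Derives a voiceover filename from a line in "ID|text" form (or a bare ID).
std::string VoiceFileForLine(const std::string& line) {
    size_t pipe = line.find('|');
    std::string id = (pipe == std::string::npos) ? line : line.substr(0, pipe);
    for (char& c : id) {
        c = static_cast<char>(std::tolower(static_cast<unsigned char>(c)));
    }
    return id + ".mp3";
}
```

The engine can call this whenever a subtitle line is shown and simply skip playback if the file doesn’t exist.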
Always allow subtitles to be displayed for videos, in case the localizers aren’t able to record localized voiceovers. You can use standard subtitle formats such as .SUB or .SRT, since existing editing tools are available for them.
If your game accepts any kind of keyboard input, such as the player name or a saved game description, don’t use DirectInput for this purpose; use the Windows API messages, such as WM_CHAR, instead. That way Windows does the dirty job of mapping various keyboard layouts to national characters, handling “dead keys”, etc.
When designing the game, try to avoid references and jokes specific to one language and/or nation. They are hard to translate and complicate the localization process. If you have to use them, try to provide the translators with a sufficient description so that they can adapt them to their local cultural environment.
Try to avoid game situations that could be offensive to some national or religious groups (remember the Muhammad cartoons controversy, for example).
This article concentrated on the often-neglected area of localization readiness in game engines. We went through the most important topics, revealed some possible pitfalls and tried to find solutions. If you are planning to implement localization support in your game, I hope this article will help you avoid some dead ends. Thanks for reading.