Development/Tutorials/Localization/Unicode
What Is Unicode?
Unicode is a standard that assigns a unique number to every character and symbol in the languages used around the world. It is independent of hardware, development platform and language. The most commonly used characters fit within the range of a 16-bit integer, with the first 256 code points being identical to ISO 8859-1 (Latin-1). This also means that the first 128 characters are identical to the familiar ASCII mapping.
Why use Unicode?
If you use a local 8-bit character set you can only create single-language documents and exchange them only with people who use the same character set. Even then, many languages contain more symbols than can be uniquely addressed by an 8-bit number.
Therefore Unicode is the only sensible way to mix languages in a document and to interchange documents between people with different locales. Unicode makes it possible to do things such as easily write a Russian-Hungarian dictionary or store documents in Asian languages which may have thousands of possible symbols.
UTF-8 and UTF-16
UTF stands for "Unicode Transformation Format". The two variants, UTF-8 and UTF-16, define how Unicode characters are expressed in bits.
UTF-16 means that every character in the Basic Multilingual Plane is represented by the 16-bit value of its Unicode code point. For example, the Latin-1 characters in UTF-16 have the hexadecimal representation 00nn, where nn is the character's hexadecimal value in Latin-1.
UTF-8 means that Unicode characters are represented as a stream of bytes. A byte with a value between 0 and 127 corresponds directly to an ASCII character; all other characters are represented by sequences of two or more bytes. Because the byte values of "\0" and "/" can never occur inside a multibyte sequence, UTF-8 strings can still be treated as null-terminated C strings.
Below is a simple depiction of how the bits of a Unicode character code are distributed over UTF-8 bytes (a small hand-coded example follows the table):
| Bytes | Usable Bits | Representation |
|-------|-------------|----------------|
| 1 | 7 | 0vvvvvvv |
| 2 | 11 | 110vvvvv 10vvvvvv |
| 3 | 16 | 1110vvvv 10vvvvvv 10vvvvvv |
| 4 | 21 | 11110vvv 10vvvvvv 10vvvvvv 10vvvvvv |
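To make the table concrete, here is a minimal hand-written sketch in plain C++. The helper encodeUtf8 is purely illustrative and not a Qt or KDE API; in practice QString and QTextCodec do this work for you.

#include <string>

// Illustrative helper: encode one Unicode code point into a UTF-8 byte
// sequence following the bit layout in the table above.
// (No validation of the code point is performed; this is only a sketch.)
std::string encodeUtf8(unsigned int codePoint)
{
    std::string bytes;
    if (codePoint < 0x80) {             // 1 byte:  0vvvvvvv
        bytes += static_cast<char>(codePoint);
    } else if (codePoint < 0x800) {     // 2 bytes: 110vvvvv 10vvvvvv
        bytes += static_cast<char>(0xC0 | (codePoint >> 6));
        bytes += static_cast<char>(0x80 | (codePoint & 0x3F));
    } else if (codePoint < 0x10000) {   // 3 bytes: 1110vvvv 10vvvvvv 10vvvvvv
        bytes += static_cast<char>(0xE0 | (codePoint >> 12));
        bytes += static_cast<char>(0x80 | ((codePoint >> 6) & 0x3F));
        bytes += static_cast<char>(0x80 | (codePoint & 0x3F));
    } else {                            // 4 bytes: 11110vvv 10vvvvvv 10vvvvvv 10vvvvvv
        bytes += static_cast<char>(0xF0 | (codePoint >> 18));
        bytes += static_cast<char>(0x80 | ((codePoint >> 12) & 0x3F));
        bytes += static_cast<char>(0x80 | ((codePoint >> 6) & 0x3F));
        bytes += static_cast<char>(0x80 | (codePoint & 0x3F));
    }
    return bytes;
}

For example, encodeUtf8(0x00E9) yields the two bytes 0xC3 0xA9 ("é"), and encodeUtf8(0x0416) yields 0xD0 0x96 (Cyrillic "Ж").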
Unicode in KDE Applications
Storing Unicode strings in memory and displaying Unicode on screen is very easy with Qt: QString and QChar provide full Unicode support transparently to your application.
QString and QChar both internally represent characters using their 16-bit values. If you read text from, or write it to, an 8-bit source such as a QTextStream, you must use a QTextCodec to convert the text from or to its 16-bit representation. Normally you would use the UTF-8 codec to convert a Unicode string between a QByteArray and a QString.
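For the plain UTF-8 case, QString also offers direct conversion helpers, so an explicit codec is not strictly necessary. A minimal sketch, assuming Qt 4's QString API and a purely illustrative string:

// Convert between a Unicode QString and its UTF-8 byte representation.
// The literal below is only an example and assumes a UTF-8 encoded source file.
QString text = QString::fromUtf8("Grüße");
QByteArray utf8Bytes = text.toUtf8();                                 // 16-bit QString -> UTF-8 bytes
QString roundTrip = QString::fromUtf8(utf8Bytes.constData(), utf8Bytes.size());

For other encodings, or when you need explicit control over the conversion, use a QTextCodec as shown below.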
Getting an appropriate Unicode QTextCodec is quite simple:
QTextCodec *Utf8Codec = QTextCodec::codecForName("UTF-8");   // returns 0 if no matching codec is installed
QTextCodec *Utf16Codec = QTextCodec::codecForName("UTF-16");
Display
Any widget that uses QString or QChar when painting text will therefore be able to show Unicode characters as long as the user has an appropriate font available.
For flexible text editing and display, consult the Qt documentation on rich text processing. The Scribe system provides a set of classes centered around QTextDocument that makes it easy to accurately display and print formatted text, including Unicode.
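As a rough sketch of how this fits together (assuming Qt 4's widget classes; the sample text and window setup are only illustrative), a QTextDocument holding Unicode text can be shown in a QTextEdit:

#include <QApplication>
#include <QTextDocument>
#include <QTextEdit>

int main(int argc, char **argv)
{
    QApplication app(argc, argv);

    QTextEdit editor;

    // A document holding mixed-script Unicode text (Latin, Cyrillic and Greek);
    // the literal assumes a UTF-8 encoded source file.
    QTextDocument *document = new QTextDocument(&editor);
    document->setPlainText(QString::fromUtf8("Hello, Привет, Γειά σου"));

    // Any Scribe-based widget renders the Unicode content,
    // provided a suitable font is installed.
    editor.setDocument(document);
    editor.show();

    return app.exec();
}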
In-Memory Buffers
To read Unicode data from an in-memory buffer, such as a QByteArray, one can use the codecs created earlier in this section:
QByteArray buffer; // raw bytes, for example read from a socket
// UTF-16
QString utf16Text = Utf16Codec->toUnicode(buffer);
// UTF-8
QString utf8Text = Utf8Codec->toUnicode(buffer);
Writing to a buffer looks very similar:
QString string;   // my Unicode text
QByteArray array; // buffer that will receive the UTF-16 encoded data
QTextStream textStream(&array, QIODevice::WriteOnly);
textStream.setCodec(Utf16Codec);
textStream << string;
File Handling
For storing text in and retrieving it from files, KDE applications should offer the option of using Unicode instead of the locale's character set. That way users can keep the locale charset as the default for reading and saving documents, while still being able to store them in Unicode. Both UTF-8 and UTF-16 should be supported: European users will generally prefer UTF-8, while Asian users may prefer UTF-16.
The following example demonstrates reading Unicode data from a text file, using the QTextCodecs created earlier for the UTF-8 case:
QFile file("example.txt"); // the file to read (name is just an example)
file.open(QIODevice::ReadOnly);
QTextStream textStream(&file);
QString line;
// UTF-16: if the file begins with a byte order mark,
// QTextStream's default auto-detection handles it; otherwise:
// textStream.setCodec(Utf16Codec);
line = textStream.readLine();
// UTF-8
textStream.setCodec(Utf8Codec);
line = textStream.readLine();
Writing is equally straightforward, again using the QTextCodecs we created earlier:
QFile file("example.txt"); // the file to write (name is just an example)
file.open(QIODevice::WriteOnly);
QTextStream textStream(&file);
QString line;
// UTF-16
textStream.setCodec(Utf16Codec);
textStream.setGenerateByteOrderMark(true); // write a BOM so readers can detect the encoding
textStream << line;
// UTF-8
textStream.setCodec(Utf8Codec);
textStream << line;
Unicode Character Input
Being able to load, store and display Unicode is only part of the puzzle. One of the final pieces is allowing the user to input Unicode characters. This is generally solved using platform-specific multibyte input methods (IM), such as XIM on X11. KDE provides good support for these input methods via Qt and your application should not need to do anything special to enable such input.
Sample Unicode Files and Fonts
An easy way to get sample Unicode data is to take the kde-i18n package and convert some of its files to Unicode using the recode command-line tool:
recode KOI8-R..UTF-8 ru/messages/kdelibs.po
recode ISO-8859-1..UTF-16 de/messages/kdelibs.po
recode ISO-8859-3..UTF-8 eo/messages/kdelibs.po
Most operating systems today ship with TrueType fonts that cover a large part of the Unicode character range. If you do not have a Unicode font on your system, several can be downloaded from the Internet, and a quick search will turn up various options.