Further Reading:
- The Unicode website
- Unicode article at Wikipedia
- Unicode for Unix/Linux FAQ
Unicode is a standard that assigns a unique number to every character and symbol in the world's writing systems, making it the basis for translatable software and international content creation. This tutorial provides a brief introduction to Unicode and shows how to work with Unicode data using Qt.
Unicode is independent of hardware, development platform, and language. As of version 5 of the standard, codepoints are assigned up to 0x10FFFF (though not all are used), and the first 256 entries are identical to ISO 8859-1, also known as ISO Latin-1. This also means that the first 128 characters are identical to the familiar ASCII mapping. Codepoints were assigned with easy conversion from legacy character sets in mind, so Unicode contains many such coincidences that keep translation tables and logic short.
With a legacy character set such as ISO 8859-15 it is difficult to create documents containing more than a couple of languages, and you can interchange them only with people using the same character set. Even then, many writing systems contain more characters than an 8-bit number can uniquely address.
Unicode is therefore the most sensible way to mix different scripts and languages in a document and to exchange documents between people with different locales. It makes it possible, for example, to write a Russian-Hungarian dictionary.
UTF stands for "Unicode Transformation Format". The two most commonly used variants, UTF-8 and UTF-16, define how to express a Unicode character's assigned codepoint in bits.
UTF-16 represents every character by the 16-bit value of its Unicode number. For example, in UTF-16 the character b (LATIN SMALL LETTER B) has the hex representation 0x0062. Characters with codepoints beyond 0xFFFF are encoded as surrogate pairs, i.e. two 16-bit code units.
UTF-8 represents Unicode characters as a stream of bytes in a variable-length encoding scheme. Unlike UTF-16, it is compatible with ASCII: the first 128 characters are encoded as single bytes. Higher codepoints take two, three, or four bytes to encode.
Storing Unicode strings in memory and displaying Unicode on screen is very easy with Qt: QString and QChar provide full Unicode support transparently to your application.
QString and QChar both represent characters internally as UTF-16. If you read text from or write it to an 8-bit source such as a QTextStream, you must use a QTextCodec to convert between representations. Typically you would use the UTF-8 codec to convert a Unicode string between a QByteArray and a QString.
Getting an appropriate Unicode QTextCodec is quite simple:
QTextCodec *Utf8Codec = QTextCodec::codecForName("utf-8");
QTextCodec *Utf16Codec = QTextCodec::codecForName("utf-16");
Any widget that uses QString or QChar when painting text will therefore be able to show Unicode characters as long as the user has an appropriate font available.
For flexible text editing and display, consult the Qt documentation on rich text processing. The Scribe system provides a set of classes centered around the QTextDocument class that makes it easy to accurately display and print formatted text, including Unicode.
To read Unicode data from an in-memory buffer, such as a QByteArray, one can use the codecs created earlier in this section:
QByteArray buffer;

// from UTF-16
QString string = Utf16Codec->toUnicode(buffer);

// or from UTF-8
string = Utf8Codec->toUnicode(buffer);
Writing to a buffer looks very similar:
QString string;   // my Unicode text
QByteArray array; // buffer to store UTF-16 Unicode
QTextStream textStream(array, IO_WriteOnly);
textStream.setEncoding(QTextStream::Unicode);
textStream << string;
For storage and retrieval to and from a file, KDE applications should offer the option to store text data in Unicode instead of the locale character set. That way users can read and save documents in the locale charset by default while still being able to store them in Unicode. Both UTF-8 and UTF-16 should be supported; European users will generally prefer UTF-8, while Asian users may prefer UTF-16.
Reading Unicode data from a text file is demonstrated in the following example using the QTextCodecs we created earlier for UTF-8 data:
QTextStream textStream;
QString line;

// UTF-16: if the file begins with a Unicode byte-order mark,
// the default is fine; otherwise:
// textStream.setEncoding(QTextStream::Unicode);
line = textStream.readLine();

// UTF-8
textStream.setCodec(Utf8Codec);
line = textStream.readLine();
Writing is equally straightforward, also using the QTextCodecs we created earlier:
QTextStream textStream;
QString line;

// UTF-16
textStream.setEncoding(QTextStream::Unicode);
textStream << line;

// UTF-8
textStream.setCodec(Utf8Codec);
textStream << line;
Being able to load, store and display Unicode is only part of the puzzle. One of the final pieces is allowing the user to input Unicode characters. This is generally solved using platform-specific multibyte input methods (IM), such as XIM on X11. KDE provides good support for these input methods via Qt and your application should not need to do anything special to enable such input.
An easy way to get Unicode sample data is to take the kde-i18n package and convert some files from there to Unicode using the recode command line tool:
recode KOI8-R..UTF-8 ru/messages/kdelibs.po
recode ISO-8859-1..UTF-16 de/messages/kdelibs.po
recode ISO-8859-3..UTF-8 eo/messages/kdelibs.po
The majority of operating systems today come packaged with Unicode TrueType fonts.