You need to distinguish between the source character set, the execution character set, the wide execution character set, and their basic versions:
The basic source character set:
§2.1.1: The basic source character set consists of 96 characters […]
This character set has exactly 96 characters. They fit into 7 bits. Characters like @ are not included.
Let's look at some example binary representations for a few basic source characters. These can be completely arbitrary; there is no need for them to correspond to ASCII values.
A -> 0000000
B -> 0100100
C -> 0011101
The basic execution character set …
§2.1.3: The basic execution character set and the basic execution wide-character set shall each contain all the members of the basic source character set, plus control characters representing alert, backspace, and carriage return, plus a null character (respectively, null wide character), whose representation has all zero bits.
As stated, the basic execution character set contains all members of the basic source character set. It still doesn't include any other character like @. The basic execution character set can have different binary representations, though. It also contains representations for alert, backspace, carriage return, and a null character.
A -> 10110101010
B -> 00001000101 <- basic source character set
C -> 10101011111
----------------------------------------------------------
null -> 00000000000
Backspace -> 11111100011
If the characters of the basic execution character set are 11 bits long (as in this example), the char data type must be large enough to store 11 bits, but it may be larger.
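A minimal sketch to inspect the actual sizes on your own implementation (CHAR_BIT and sizeof are standard C++; the concrete values are implementation specific):

#include <climits>
#include <iostream>

int main() {
    // CHAR_BIT is the number of bits in a char; the standard requires at least 8.
    std::cout << "bits per char: " << CHAR_BIT << '\n';
    // sizeof(char) is 1 by definition; all other types are measured in chars.
    std::cout << "sizeof(char): " << sizeof(char) << '\n';
}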
… and the basic execution wide-character set:
The basic execution wide-character set is used for wide characters (wchar_t). It is basically the same as the basic execution character set, but it can have different binary representations as well.
A -> 1011010101010110101010
B -> 0000100010110101011111 <- basic source character set
C -> 1010100101101000011011
---------------------------------------------------------------------
null -> 0000000000000000000000
Backspace -> 1111110001100000000001
The only fixed representation is the null character, which must be a sequence of all-zero bits.
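A minimal sketch checking these properties (the size printed for wchar_t is implementation specific, typically 2 bytes on Windows and 4 bytes on Linux):

#include <iostream>

int main() {
    // The null wide character is guaranteed to have all zero bits.
    static_assert(L'\0' == 0, "null wide character is all zero bits");
    // The width of wchar_t is implementation defined.
    std::cout << "sizeof(wchar_t): " << sizeof(wchar_t) << '\n';
}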
Converting between basic character sets:
§2.1.1.5: Each source character set member, escape sequence, or universal-character-name in character literals and string literals is converted to a member of the execution character set (2.13.2, 2.13.4).
When a C++ source file is compiled, each character of the source character set is converted into the basic execution (wide) character set.
Example:
const char* string0 = "BA\bC";
const wchar_t* string1 = L"BA\bC";
Since string0 is a narrow character string, it will be converted to the basic execution character set, and string1 will be converted to the basic execution wide-character set.
string0 -> 00001000101 10110101010 11111100011 10101011111
string1 -> 0000100010110101011111 1011010101010110101010
           1111110001100000000001 1010100101101000011011
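You can dump the bit patterns your own implementation uses; this sketch prints the code units of both literals in hexadecimal (the output depends on your execution character sets and will not match the made-up values above):

#include <cstdio>

int main() {
    const char*    string0 = "BA\bC";
    const wchar_t* string1 = L"BA\bC";

    // Each byte of the narrow string, in the execution character set.
    for (const char* p = string0; *p != '\0'; ++p)
        std::printf("%02x ", static_cast<unsigned char>(*p));
    std::printf("\n");

    // Each code unit of the wide string, in the execution wide-character set.
    for (const wchar_t* p = string1; *p != L'\0'; ++p)
        std::printf("%08lx ", static_cast<unsigned long>(*p));
    std::printf("\n");
}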
Something about file encodings:
There are several kinds of file encodings. For example ASCII, which uses 7 bits per character, and Windows-1252 (also known as ANSI), which uses 8 bits per character.
ASCII doesn't contain non-English characters. Windows-1252 contains some European characters like ä, Ö, Õ, ø.
Newer file encodings like UTF-8 or UTF-32 can contain characters of any language. UTF-8 characters are variable in length; UTF-32 characters are always 32 bits long.
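A small sketch showing the variable length of UTF-8 (u8 string literals are guaranteed to be UTF-8 encoded, so the byte counts below are fixed; note that sizeof includes the terminating null character):

#include <iostream>

int main() {
    std::cout << sizeof(u8"A") - 1 << '\n';       // 1 byte: ASCII range
    std::cout << sizeof(u8"\u00E4") - 1 << '\n';  // 2 bytes: ä
    std::cout << sizeof(u8"\u20AC") - 1 << '\n';  // 3 bytes: €
}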
File encoding requirements:
Most compilers offer a command-line switch to specify the file encoding of the source file.
A C++ source file needs to be stored in a file encoding which can represent the whole basic source character set. For example, the file encoding of the source file needs to have a representation of the ; character. If you cannot express the character ; within the chosen encoding, that encoding is not suitable as a C++ source file encoding.
Non-basic character sets:
Characters not included in the basic source character set belong to the source character set. The source character set is determined by the file encoding.
For example, the @ character is not included in the basic source character set, but it may be included in the source character set: the chosen file encoding of the input source file might contain a representation of @. If it doesn't contain a representation for @, you can't use the character @ within strings.
Characters not included in the basic execution (wide) character set belong to the execution (wide) character set.
Remember that the compiler converts characters from the source character set to the execution character set and to the execution wide-character set. Therefore there needs to be a way to convert these characters.
For example: if you specify Windows-1252 as the encoding of the source character set and ASCII as the execution character set, there is no way to convert this string:
const char* string0 = "string with European characters ö, Ä, ô, Ð.";
These characters cannot be represented in ASCII.
Specifying character sets:
Here are some examples of how to specify the character sets using GCC. The default values are included.
-finput-charset=UTF-8 <- source character set
-fexec-charset=UTF-8 <- execution character set
-fwide-exec-charset=UTF-32 <- execution wide character set
With UTF-8 and UTF-32 as the default encodings, C++ source files can contain strings with characters of any language, and UTF-8 characters can be converted both ways without problems.
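As an illustration (the file name is hypothetical, and the exact diagnostic differs between GCC versions), the European string from above compiles or fails depending on the chosen execution character set:

// main.cpp, saved as Windows-1252
const char* string0 = "string with European characters ö, Ä, ô, Ð.";
int main() {}

// g++ -finput-charset=Windows-1252 -fexec-charset=ASCII main.cpp
//   -> should fail: ö, Ä, ô, Ð have no representation in ASCII
// g++ -finput-charset=Windows-1252 -fexec-charset=UTF-8 main.cpp
//   -> works: every Windows-1252 character can be converted to UTF-8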
The extended character set:
§1.1.3: multibyte character, a sequence of one or more bytes representing a member of the extended character set of either the source or the execution environment. The extended character set is a superset of the basic character set (2.2).
Multibyte characters occupy more than one byte, unlike entries of the normal character set, and they may contain shift or escape sequences marking them as multibyte characters. Multibyte characters are processed according to the locale set in the user's runtime environment: they are converted at runtime to the encoding configured in that environment.
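A minimal sketch of such runtime processing, using the C library functions available in C++ (it assumes a UTF-8 execution character set and a locale that can actually represent the input string):

#include <clocale>
#include <cstdlib>
#include <iostream>

int main() {
    // Adopt the locale from the user's runtime environment; it determines
    // how multibyte sequences are interpreted.
    std::setlocale(LC_ALL, "");

    const char* multibyte = "gr\u00FC\u00DFen";  // "grüßen"
    wchar_t wide[32];

    // Convert the multibyte string to wide characters according to the
    // current locale; returns (size_t)-1 on an invalid sequence.
    std::size_t n = std::mbstowcs(wide, multibyte, 32);
    if (n == static_cast<std::size_t>(-1))
        std::cout << "invalid multibyte sequence for this locale\n";
    else
        std::cout << "converted " << n << " wide characters\n";
}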
Best Answer
Here is a breakdown of the different character sets used by the compiler itself (all references to the standard are to C++14, actually):
The basic source character set is what the compiler, at least conceptually, consumes. It is produced from the physical source file characters by either mapping them to their respective basic character or to a sequence of basic characters representing the physical source character using a universal character name (see 2.2 [lex.phases] paragraph 1). The basic source character set is just a set of 96 characters (2.3 [lex.charset] paragraph 1):
The mapping between the physical source character set and the basic character set is implementation defined.
The basic execution character set and the basic execution wide-character set are character sets capable of representing the basic source character set, expanded by a few special characters:
The difference between the non-wide and the wide version is whether the characters are represented using char or wchar_t.
The execution character set and the execution wide-character set are implementation-defined extensions of the basic character set and the basic wide-character set. In 2.3 [lex.charset] paragraph 3 it is stated that the additional members, and the values of the additional members, of the execution character set are locale specific. It isn't clear which locale is referred to, but I suspect the locale used during compilation is meant. In any case, the execution character sets are implementation defined (also according to 2.3 [lex.charset] paragraph 3).
Character and string literals are originally represented using the basic source character set, with some characters possibly using universal character names. All of these are converted at compile time into the execution character set. According to 2.14.3 [lex.ccon], character literals representable as one char in the execution character set just work. If multiple chars are needed, the character literal may be conditionally supported (and it would have type int). For string literals the conversion is described in 2.14.5 [lex.string]. Paragraph 9 states that UTF-8 string literals (e.g. u8"hello") result in a sequence of values corresponding to the code units of the UTF-8 string. Otherwise the translation of characters and universal character names is the same as that for character literals (in particular, it is implementation defined), although characters resulting in multi-byte sequences for narrow strings just result in multiple characters (this case isn't necessarily supported for character literals).
So far, only the result of compilation is considered. Any character which isn't part of a character or a string literal is used to specify what the code does. The interesting question is what happens to the literals. The literals are all basically translated into an implementation-defined representation. Implementation defined means that what is supposed to happen is documented somewhere, but it can differ between different implementations.
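A small sketch of these rules (whether the multicharacter literal compiles is conditionally supported, so that line is an assumption about your compiler; its value is implementation defined):

#include <iostream>

int main() {
    // An ordinary character literal representable as one char has type char.
    static_assert(sizeof('a') == sizeof(char), "single-char literal has type char");

    // A multicharacter literal is conditionally supported and has type int.
    int multi = 'ab';
    std::cout << multi << '\n';

    // A u8 string literal yields UTF-8 code units; "hello" is pure ASCII,
    // so it occupies 5 code units plus the terminating null.
    std::cout << sizeof(u8"hello") << '\n';  // prints 6
}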
How does that help when dealing with characters or strings coming from somewhere? Well, any character or string which is read is converted to the corresponding execution character set. In particular, when a file is read, all characters are transformed to this common representation. Of course, for this transformation to work, the locale used for reading the file needs to be set up according to the encoding of that file. If the locale isn't explicitly mentioned, the global locale is used, which is initially determined by the system. The initial global locale is probably set somehow based on user preferences, e.g., based on environment variables. If a file is read which uses a different encoding than this global locale, a corresponding different locale matching the encoding of the file needs to be used.
Correspondingly, when writing characters using one of the execution character sets, these are converted according to the encoding specified by the current locale. Again, it may be necessary to replace the locale if a specific encoding is needed.
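A sketch of reading a file whose encoding differs from the global locale by imbuing the stream with a matching locale (the file name and the locale name "en_US.UTF-8" are assumptions; available locale names are platform specific):

#include <fstream>
#include <iostream>
#include <locale>
#include <string>

int main() {
    std::wifstream in("input.txt");  // hypothetical UTF-8 encoded file

    // Imbue the stream with a locale matching the file's encoding so the
    // bytes are converted to the execution wide-character set on read.
    in.imbue(std::locale("en_US.UTF-8"));

    std::wstring line;
    while (std::getline(in, line))
        std::wcout << line << L'\n';
}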
All this effectively means that internally to a program all string and character processing happens using the implementation-defined execution character set. All characters being read by a program need to be converted to this character set, and all characters written start as characters in this execution character set and need to be converted appropriately to the external encoding. Of course, in an ideal setup the conversion between the execution character set and the external representation is the identity, e.g., because the execution character set uses UTF-8 and the external representation also uses UTF-8. Correspondingly for the execution wide-character set, except in this case UTF-16 would be used (one of the two variations, as UTF-16 can use either big-endian or little-endian representation).