`||` and `&&` are boolean operators, and the built-in ones are guaranteed to return either `true` or `false`. Nothing else.

`|`, `&` and `^` are bitwise operators. When the domain of numbers you operate on is just 1 and 0, they are exactly the same, but in cases where your booleans are not strictly 1 and 0 – as is the case with the C language – you may end up with behavior you didn't want. For instance:
```c
BOOL two = 2;
BOOL one = 1;
BOOL and = two & one;   // and = 0
BOOL cand = two && one; // cand = 1
```
In C++, however, the `bool` type is guaranteed to be only either `true` or `false` (which convert implicitly to `1` and `0` respectively), so it's less of a worry from this stance, but the fact that people aren't used to seeing such things in code makes a good argument for not doing it. Just say `b = b && x` and be done with it.
The main reason is that `std::vector<bool>` is special, and its specification specifically permits an implementation to minimise memory usage.

For vectors of anything other than `bool`, the reference type can actually be a true reference (i.e. `std::vector<int>::reference` can actually be an `int &`), usually directly referencing an element of the vector itself. So it makes sense for the reference type to support all operations that the underlying type can. This works because `vector<int>` effectively manages a contiguous array of `int` internally. The same goes for all types other than `bool`.
However, to minimise memory usage, a `std::vector<bool>` may not (in fact, probably will not) work internally with an actual array of `bool`. Instead it might use some packed data structure, such as an array of `unsigned char`, where each `unsigned char` is a bitfield containing 8 bits. So a `vector<bool>` of length 800 would actually manage an array of 100 `unsigned char`, and the memory it consumes would be 100 bytes (assuming no over-allocation). If the `vector<bool>` actually contained an array of 800 `bool`, its memory usage would be a minimum of 800 bytes (since `sizeof(bool)` must be at least 1, by definition).
To permit such memory optimisation by implementers of `vector<bool>`, the return type of `vector<bool>::operator[]` (i.e. `std::vector<bool>::reference`) cannot simply be a `bool &`. Internally, it would probably contain a reference to the underlying type (e.g. an `unsigned char`) and information to track which bit it actually affects. This would make all `op=` operators (`+=`, `-=`, `|=`, etc.) somewhat expensive operations (e.g. bit fiddling) on the underlying type.
The designers of `std::vector<bool>` would then have faced a choice between:

1. Specify that `std::vector<bool>::reference` supports all the `op=` operators, and hear continual complaints about runtime inefficiency from programmers who use those operators.
2. Don't support those `op=` operators, and field complaints from programmers who think such things are okay ("cleaner code", etc.) even though they would be inefficient.
It appears the designers of `std::vector<bool>` opted for option 2. A consequence is that the only assignment operator supported by `std::vector<bool>::reference` is the stock-standard `operator=()` (with operands either of type `reference` or of type `bool`), not any of the `op=` operators. The advantage of this choice is that programmers get a compilation error if they try to do something which is actually a poor choice in practice.

After all, although `bool` supports all the `op=` operators, using them doesn't achieve much anyway. For example, `some_bool |= true` has the same net effect as `some_bool = true`.
If the number of bits is fixed at compile time, you would be much better off using `std::bitset`. If not (i.e. the number of bits varies at runtime), then you can use `boost::dynamic_bitset`. In both of these, it is extremely easy to do all the bitwise operations.