Sometimes I see integer constants defined in hexadecimal instead of decimal. This is a small excerpt from the GL10 class:

```
public static final int GL_STACK_UNDERFLOW = 0x0504;
public static final int GL_OUT_OF_MEMORY = 0x0505;
public static final int GL_EXP = 0x0800;
public static final int GL_EXP2 = 0x0801;
public static final int GL_FOG_DENSITY = 0x0B62;
public static final int GL_FOG_START = 0x0B63;
public static final int GL_FOG_END = 0x0B64;
public static final int GL_FOG_MODE = 0x0B65;
```

It’s obviously simpler to define `2914` instead of `0x0B62`, so is there maybe some performance gain? I actually don’t think so, since converting the literal should be the compiler’s job anyway.

### Answer：

It is likely for organizational and visual cleanliness. Base 16 has a much simpler relationship to binary than base 10, because in base 16 each digit corresponds to exactly four bits.

Notice how in the excerpt above, the constants are grouped with many hex digits in common. Represented in decimal, the values would share fewer digits, and the common bit patterns would be far less obvious.

Also, in many situations it is desired to be able to bitwise-OR constants together to create a combination of flags. If the value of each constant is constrained to only have a subset of the bits non-zero, then this can be done in a way that can be re-separated. Using hex constants makes it clear which bits are non-zero in each value.
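As a minimal sketch (the flag names here are made up, not from GL10): when each constant owns a distinct bit, OR combines them and AND separates them again.

```java
public class FlagDemo {
    // Hypothetical flags, each occupying its own bit (not real GL constants).
    static final int FLAG_READ  = 0x1; // binary 0001
    static final int FLAG_WRITE = 0x2; // binary 0010
    static final int FLAG_EXEC  = 0x4; // binary 0100

    public static void main(String[] args) {
        int mode = FLAG_READ | FLAG_EXEC; // combine: 0x5, binary 0101

        // Re-separate with AND: each test is independent because
        // no two flags share a bit.
        System.out.println((mode & FLAG_READ)  != 0); // true
        System.out.println((mode & FLAG_WRITE) != 0); // false
        System.out.println((mode & FLAG_EXEC)  != 0); // true
    }
}
```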

There are two other reasonable possibilities: octal (base 8), which simply encodes 3 bits per digit; and binary-coded decimal, in which each digit requires four bits but digit values above 9 are prohibited. The latter is disadvantageous because it cannot represent all of the values that four bits can.

### Answer：

“It’s obviously simpler to define 2914 instead of 0x0B62”

I don’t know about that specific case, but quite often that is not true.

Out of the two questions:

- A) What is the bit value of 2914?
- B) What is the bit value of 0x0B62?

B will be answered correctly, and faster, by a lot more developers. (The same goes for similar questions.)

0x0B62 (it is 4 hex digits long, so it represents a 16-bit number)

- the bits of 0 = 0000
- the bits of B = 1011
- the bits of 6 = 0110
- the bits of 2 = 0010

->

0000101101100010

(I dare you to do the same with 2914.)
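If you want to let Java double-check the conversion, `Integer.toBinaryString` agrees; it just drops the leading zeros:

```java
public class HexToBinary {
    public static void main(String[] args) {
        // toBinaryString drops leading zeros, so the 16-bit value
        // prints as 12 digits here.
        System.out.println(Integer.toBinaryString(0x0B62)); // 101101100010
        System.out.println(0x0B62 == 2914); // true: same value, different notation
    }
}
```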

That is one reason for using the hex value, another is that the source of the value might use hex (the standard of a specification for example).

Sometimes I just find it silly, as in:

```
public static final int NUMBER_OF_TIMES_TO_ASK_FOR_CONFIRMATION = ...;
```

This would almost always be silly to write in hex, though I’m sure there are some cases where it wouldn’t be.

### Answer：

Readability when applying hexadecimal masks, for example.
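For example (the packed value here is arbitrary, chosen just to show the masks):

```java
public class MaskDemo {
    public static void main(String[] args) {
        int packed = 0x12345678; // arbitrary example value

        // In hex, the mask shows exactly which byte is kept:
        int lowByte  = packed & 0x000000FF;          // 0x78
        int highByte = (packed >>> 24) & 0x000000FF; // 0x12

        System.out.println(Integer.toHexString(lowByte));  // 78
        System.out.println(Integer.toHexString(highByte)); // 12
        // The same mask written in decimal (255) hides the intent.
    }
}
```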

### Answer：

There will be no performance difference between a decimal literal and a hexadecimal literal, because both are compiled down to exactly the same binary constant.

Computers don’t do decimal, they do (at best) binary. Hexadecimal maps to binary very cleanly, but it requires a bit of work to convert a decimal number to binary.

One place where hexadecimal shines is when you have a number of related items, where many are similar, yet slightly different.

```
// These error flags suggest that error codes probably
// all start with 0x05..
public static final int GL_STACK_UNDERFLOW = 0x0504;
public static final int GL_OUT_OF_MEMORY = 0x0505;
// These EXP flags suggest that EXP flags probably
// all start with 0x08..
public static final int GL_EXP = 0x0800;
public static final int GL_EXP2 = 0x0801;
// These FOG flags suggest that FOG flags probably
// all start with 0x0B6.
public static final int GL_FOG_DENSITY = 0x0B62;
public static final int GL_FOG_START = 0x0B63;
public static final int GL_FOG_END = 0x0B64;
public static final int GL_FOG_MODE = 0x0B65;
```

With decimal numbers, one would be hard pressed to “notice” constant regions of bits across a large number of different, but related items.
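Using the constants from the excerpt, that shared bit region also turns into a simple mask test (a sketch, not anything GL itself defines):

```java
public class GroupCheck {
    // Constants copied from the GL10 excerpt above.
    static final int GL_FOG_DENSITY = 0x0B62;
    static final int GL_FOG_START   = 0x0B63;
    static final int GL_EXP         = 0x0800;

    public static void main(String[] args) {
        // Mask off the low byte and compare the shared high byte:
        System.out.println((GL_FOG_DENSITY & 0xFF00) == 0x0B00); // true
        System.out.println((GL_FOG_START   & 0xFF00) == 0x0B00); // true
        System.out.println((GL_EXP         & 0xFF00) == 0x0B00); // false
    }
}
```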

### Answer：

Would you rather write `0xFFFFFFFF` or `4294967295`?

The first one much more clearly represents a 32-bit data type with all ones. Of course, many a seasoned programmer would recognize the latter pattern and have a sneaking suspicion as to its true meaning. However, even in that case, it is much more prone to typing errors.
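In Java, for instance (note that as a signed 32-bit `int`, the all-ones pattern is the same value as -1):

```java
public class AllOnes {
    public static void main(String[] args) {
        int allOnes = 0xFFFFFFFF; // every bit set
        System.out.println(allOnes == -1); // true: two's complement
        System.out.println(Integer.toBinaryString(allOnes)); // 32 ones
    }
}
```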

### Answer：

When it comes to big numbers, representing them in hexadecimal makes them more readable, because they’re more compact.

Also, it sometimes matters for conversion to binary: a hexadecimal number can be converted to binary very easily. Some programmers like to do this; it helps when doing bit operations on the numbers.

As for the performance gain: no, there is none.

### Answer：

Hexadecimal is the closest readable format to binary. This simplifies a lot of bit operations, for example.

### Answer：

0xB62 equals 2914 🙂

For developers it is much easier to mentally picture the bit pattern of a constant when it is presented in hexadecimal than when it is presented as a base-10 integer.

This makes hexadecimal better suited for constants used in APIs where bits and their positions (used as individual flags, for instance) are relevant.

### Answer：

Ahh, but 0xDECAFF is both prime (14600959) and a pleasant purple color (in RGB).

For colors, hex is MUCH more convenient.

FF FF FF is white

00 00 FF is blue

FF 00 00 is red

00 FF 00 is green

It’s easy to see the color relationships as numbers (though the gamma response and fidelity of the human eye tend to throw things off, but we’ll ignore those inconvenient physical facts for pure mathematical precision!).
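A quick sketch of why that reads so nicely, assuming the usual 0xRRGGBB packing (two hex digits, i.e. one byte, per channel):

```java
public class ColorUnpack {
    public static void main(String[] args) {
        int color = 0xDECAFF; // packed as 0xRRGGBB

        // Each channel is exactly two hex digits, so shifting by
        // multiples of 8 bits and masking one byte pulls it out.
        int r = (color >> 16) & 0xFF;
        int g = (color >> 8)  & 0xFF;
        int b =  color        & 0xFF;

        System.out.println(r); // 222 (0xDE)
        System.out.println(g); // 202 (0xCA)
        System.out.println(b); // 255 (0xFF)
    }
}
```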

### Answer：

There is no performance gain.

However, if these constants correspond to certain bits underneath, most programmers prefer hex (or even binary) to make that clear and more readable.

For example, one can easily see that GL_EXP2 (0x0801) has 2 bits set: the 0x0001 bit and the 0x0800 bit (2048 in decimal). The equivalent decimal value, 2049, would be far less clear.
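You can confirm this with Java’s built-in bit utilities:

```java
public class BitCount {
    public static void main(String[] args) {
        int glExp2 = 0x0801; // GL_EXP2 from the excerpt above

        System.out.println(Integer.bitCount(glExp2)); // 2
        // The hex notation makes the two set bits obvious:
        System.out.println(glExp2 == (0x0800 | 0x0001)); // true
    }
}
```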

### Answer：

Sometimes it’s easier when using bit-related algorithms. Other times it helps with bit comparisons: as I noted in a comment, 4 bits (binary digits) convert to 1 hex digit, so A3 = 10100011.

Other times, it’s either fun or breaks the monotony, though people not familiar with hex may think you are doing things with pointers:

```
int data = 0xF00D;
if (val != 0xC0FFEE) {
    data = 0xDECAF;
}
```

I sometimes use it to check the bounds of things like ints. For example, you can use 0x7FFFFFFF (0x80000000 works in many cases, but the 0x7F… value is safer) as a max int bound. It’s handy for setting a very high error constant if your language doesn’t have something like MAX_INT. The technique scales as well, since for 64-bit you can use 0x7FFFFFFFFFFFFFFF. You may notice that Android uses 0x7f____ values for R.id table look-ups.
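A quick check of those bounds in Java (using the standard `java.lang` constants):

```java
public class MaxBounds {
    public static void main(String[] args) {
        // 0x7FFFFFFF is the largest signed 32-bit value...
        System.out.println(0x7FFFFFFF == Integer.MAX_VALUE); // true
        // ...and the same pattern scales to 64 bits.
        System.out.println(0x7FFFFFFFFFFFFFFFL == Long.MAX_VALUE); // true
    }
}
```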

I bet they are doing it for clarity’s sake. You could just as easily use plain integers, but if you are familiar with hex, it’s not bad. It looks like they are reserving hex ranges for certain kinds of functions. In decimal, you would do something similar: 0–99 for errors, 100–199 for something else, and so forth. Here the same idea scales along hex-digit boundaries instead.

Performance-wise, you gain nothing at runtime, since the compiler (and even most assemblers) converts whatever format you write (decimal, octal, hexadecimal, float, double, etc.) to binary in the end.