Number Base Converter
Instantly convert any number between binary (base 2), octal (base 8), decimal (base 10), and hexadecimal (base 16). Type in any field — all others update live. Includes a bit visualiser, ASCII/Unicode lookup, hex color picker, and bitwise operations.
How do I use the converter?
Type a number into any field and the others update as you type. Invalid digits are simply ignored (typing a 2 in a binary field has no effect). There's no convert button to press.

Character lookup
Type a character (like A, @, or €) in the Character field to see its decimal code, binary representation, hex code, octal, and HTML entity. Or type a decimal code (like 65) in the Decimal code field to find the character it represents. Works for standard ASCII (0–127) and extended Unicode characters.

Hex color picker
Type a hex color code (like FF5733) to see it rendered as a color swatch — showing its RGB breakdown and binary representation of the red channel. Click the swatch itself to open a color picker and choose any color to get its hex, decimal, and binary values. Click preset swatches on the right for common colors.

Copy buttons
Use the copy buttons to paste a hex literal (0xFF), a binary constant (0b11111111), or a decimal value into your editor. The button flashes green to confirm the copy.

Number prefixes
0b for binary (e.g. 0b1010), 0x for hexadecimal (e.g. 0xFF), 0o for octal (e.g. 0o17). No prefix means decimal. CSS hex colors use a # prefix (#FF5733). SQL and some other languages use 0x for hex only.
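These are the same prefixes most programming languages accept. As a quick check, Python's built-in `int()` with base 0 infers the base from the prefix, exactly as described:

```python
# int(s, 0) reads the 0b / 0o / 0x prefix; no prefix means decimal.
values = [int(s, 0) for s in ("0b1010", "0o17", "0xFF", "255")]
print(values)  # [10, 15, 255, 255]
```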
| Dec | Binary | Hex | Oct | Dec | Binary | Hex | Oct |
|---|---|---|---|---|---|---|---|
| 0 | 0000 | 0 | 0 | 8 | 1000 | 8 | 10 |
| 1 | 0001 | 1 | 1 | 9 | 1001 | 9 | 11 |
| 2 | 0010 | 2 | 2 | 10 | 1010 | A | 12 |
| 3 | 0011 | 3 | 3 | 11 | 1011 | B | 13 |
| 4 | 0100 | 4 | 4 | 12 | 1100 | C | 14 |
| 5 | 0101 | 5 | 5 | 13 | 1101 | D | 15 |
| 6 | 0110 | 6 | 6 | 14 | 1110 | E | 16 |
| 7 | 0111 | 7 | 7 | 15 | 1111 | F | 17 |
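A table like this can be regenerated in a few lines of Python — format specifiers exist for each base, so no conversion logic is needed:

```python
# Print decimal, binary, hex, and octal for 0–15, one value per row.
for n in range(16):
    print(f"{n:>3}  {n:>4b}  {n:X}  {n:>2o}")
```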
Why do computers use binary?
Computers are built from transistors — microscopic switches that are either on (1) or off (0). Binary maps perfectly to this physical reality: every bit of data stored in memory, transmitted over a network, or processed by a CPU is ultimately a sequence of on/off states. Using base 2 means the entire logic of computing can be implemented with just two states, making circuits simpler, faster, and more reliable. All other number systems (decimal, hex, octal) are just human-readable representations of the same underlying binary data.
Why do programmers use hexadecimal?
Hexadecimal (base 16) is a shorthand for binary — each hex digit represents exactly 4 binary bits (a nibble). So an 8-bit byte (like 11111111) is just two hex digits (FF). This makes hex far more compact and readable than raw binary for things like memory addresses, color values, error codes, and bytecodes. A 32-bit memory address takes 8 hex digits vs 32 binary digits. Most debugging tools, hex editors, and developer consoles display values in hex for this reason.
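The one-hex-digit-per-nibble correspondence is easy to verify directly:

```python
n = 0b11111111                 # one byte: eight binary digits
print(f"{n:b}")                # 11111111
print(f"{n:X}")                # FF — two hex digits, one per 4-bit nibble

addr = 0xDEADBEEF              # a 32-bit value
print(len(f"{addr:b}"), len(f"{addr:X}"))  # 32 8 — 8 hex digits vs 32 bits
```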
What is octal used for?
Octal (base 8) was more common in early computing, when machines used 6-bit and 12-bit word sizes (both divisible by 3, making octal a natural fit). Today its main use is Unix/Linux file permissions — the permission flags rwxrwxrwx map to 3-bit groups, making octal the natural way to express them. chmod 755 means owner=7 (rwx=111₂), group=5 (r-x=101₂), others=5 (r-x=101₂). You'll also see octal in some network protocols and escape sequences — \077 in C is the octal escape for ?.
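The mapping from an octal mode to rwx flags can be sketched in a few lines of Python (the `rwx` helper name here is ours, not part of any standard library):

```python
def rwx(mode: int) -> str:
    """Decode a chmod-style octal mode into an rwx string."""
    out = []
    for shift in (6, 3, 0):             # owner, group, others
        bits = (mode >> shift) & 0o7    # each group is one 3-bit octal digit
        out.append(("r" if bits & 4 else "-")
                   + ("w" if bits & 2 else "-")
                   + ("x" if bits & 1 else "-"))
    return "".join(out)

print(rwx(0o755))  # rwxr-xr-x
```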
What are bitwise operations used for?
Bitwise operations work directly on the binary representation of numbers and are used throughout systems programming, embedded development, and performance-critical code. AND is used for bit masking (isolating specific bits). OR sets specific bits. XOR flips bits or checks for differences (used in encryption and checksums). NOT inverts all bits. Left shift (<<) is a fast way to multiply by powers of 2. Right shift (>>) divides by powers of 2. Game engines, graphics pipelines, compression algorithms, and cryptography all rely heavily on bitwise operations.
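All six operations in a short Python sketch:

```python
a, b = 0b1100, 0b1010
print(bin(a & b))     # 0b1000 — AND keeps bits set in both
print(bin(a | b))     # 0b1110 — OR sets bits from either
print(bin(a ^ b))     # 0b110  — XOR keeps bits that differ
print(bin(~a & 0xF))  # 0b11   — NOT inverts; mask to 4 bits
print(1 << 3)         # 8 — left shift multiplies by 2**3
print(16 >> 2)        # 4 — right shift divides by 2**2
```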
How do I convert binary to decimal by hand?
Multiply each bit by its power of two, then add. 1011 in binary: (1×2³) + (0×2²) + (1×2¹) + (1×2⁰) = 8 + 0 + 2 + 1 = 11 in decimal. Or just type the binary number into the Binary field above and read the decimal result instantly. The bit visualiser below the binary field shows each bit's position, making the process easy to follow.

What does 0xFF mean?
0xFF is a hexadecimal literal meaning 255 in decimal, or 11111111 in binary. The 0x prefix tells the programming language to interpret the following digits as hexadecimal. 0xFF is commonly used as a byte mask — ANDing any value with 0xFF extracts the lowest 8 bits. It also appears in RGB colors (where #FFFFFF = white = R:255, G:255, B:255), memory addresses, and byte-level data manipulation. In CSS, hex color codes use # instead of 0x.

How are negative numbers stored?
Most computers use two's complement: to negate a number, invert every bit and add 1. In 8-bit two's complement, −1 is stored as 11111111 (255 unsigned). This is why a signed 8-bit integer ranges from −128 to +127. The "Signed" value shown in this converter uses 32-bit two's complement, matching how most programming languages represent integers.
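Each of these claims — the manual expansion of 1011, the 0xFF byte mask, and the two's complement bit pattern for −1 — can be checked in a couple of lines of Python:

```python
# Manual binary-to-decimal expansion of 1011.
bits = "1011"
value = sum(int(b) << (len(bits) - 1 - i) for i, b in enumerate(bits))
print(value)           # 11
print(int("1011", 2))  # 11 — same result via int()

# 0xFF as a byte mask: keep only the lowest 8 bits.
print(hex(0x1234 & 0xFF))  # 0x34

# Two's complement: -1 masked to 8 bits gives the pattern 11111111 (255).
print((-1) & 0xFF)         # 255
```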