What is a Bit?


We explain what a bit is, its different uses, and how this computing unit is calculated.

A bit is the smallest unit of information used in computing.

What is a bit?

In computing, a bit (a contraction of the English term binary digit) is a value in the binary numbering system. The system is so named because it uses only two base values, 1 and 0, with which any number of binary conditions can be represented: on and off, true and false, present and absent, and so on.


A bit is, then, the minimum unit of information used in computing, whose systems all rest on this binary code. Each bit represents a single value, 1 or 0, but by combining several bits many more combinations can be obtained. For example:

2-bit model (4 combinations):

00 - Both off

01 - First off, second on

10 - First on, second off

11 - Both on

With these two bits we can represent four distinct values. Now suppose we have 8 bits (one octet), equivalent in most systems to one byte: we get 256 different values.
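The growth in combinations can be sketched in Python (a minimal illustration, not part of the original article):

```python
from itertools import product

def combinations(n):
    """Return every bit string of length n; there are 2**n of them."""
    return ["".join(bits) for bits in product("01", repeat=n)]

print(combinations(2))        # the four 2-bit states: 00, 01, 10, 11
print(len(combinations(8)))   # one octet yields 256 distinct values
```

Each added bit doubles the number of representable states, which is why 8 bits give 2⁸ = 256 values.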

In this way, the binary system works by considering both the value of each bit (1 or 0) and its position in the represented string: each step to the left doubles a position's value, and each step to the right halves it. For example:

To represent the number 20 in binary:

Binary value: 1 0 1 0 0

Positional value: 16 8 4 2 1

Result: 16 + 0 + 4 + 0 + 0 = 20
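This positional weighting can be sketched in a few lines of Python (an illustrative helper, not from the original article):

```python
def binary_to_int(bits):
    """Sum bit * 2**position, counting positions from the right starting at 0."""
    total = 0
    for i, bit in enumerate(reversed(bits)):
        total += int(bit) * 2 ** i
    return total

print(binary_to_int("10100"))  # 16 + 0 + 4 + 0 + 0 = 20
```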

Another example: to represent the number 2.75 in binary, assuming the binary point sits after the third digit:

Binary value: 0 1 0 1 1

Positional value: 4 2 1 0.5 0.25

Result: 0 + 2 + 0 + 0.5 + 0.25 = 2.75
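The same weighting extends past the binary point, where positions take the values 1/2, 1/4, and so on. A hypothetical helper sketching this (the `point` parameter, giving the number of digits before the binary point, is our own assumption):

```python
def binary_fraction_to_float(bits, point):
    """Interpret `bits` with the binary point after the first `point` digits."""
    total = 0.0
    for i, bit in enumerate(bits):
        # Weights run 2**(point-1), ..., 2**0, then 2**-1, 2**-2, ...
        total += int(bit) * 2.0 ** (point - 1 - i)
    return total

print(binary_fraction_to_float("01011", point=3))  # 0 + 2 + 0 + 0.5 + 0.25 = 2.75
```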

Bits with value 0 (off) contribute nothing; only bits with value 1 (on) are counted, each assigned the numerical equivalent of its position in the string. This representation mechanism is later applied to alphanumeric characters as well (the encoding called ASCII).

This is how the microprocessors of computers record their operations: their architectures may be 4, 8, 16, 32 or 64 bits. This means the microprocessor handles internal registers of that width, which determines the calculation capacity of its Arithmetic Logic Unit.
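The register width caps the unsigned values a processor can hold natively; a quick sketch of those ranges (a simple illustration, not from the original article):

```python
# An n-bit register can hold unsigned values from 0 up to 2**n - 1.
for width in (4, 8, 16, 32, 64):
    print(f"{width}-bit register: 0 .. {2 ** width - 1}")
```

For instance, a 16-bit register tops out at 65,535, while a 64-bit register reaches over 18 quintillion.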

For example, the first computers of the x86 series (the Intel 8086 and Intel 8088) had 16-bit processors, and the marked difference in their speeds had less to do with processing capacity than with their external buses: 16 bits for the 8086 and 8 bits for the 8088.

Similarly, bits are used to measure the storage capacity of a digital memory.
