Data Representation: 5.3. Numbers

In this section, we will look at how computers represent numbers. To begin with, we'll revise how the base 10 number system that we use every day works, and then look at binary, which is base 2. After that, we'll look at some other characteristics of numbers that computers must deal with, such as negative numbers and numbers with decimal points.

The number system that humans normally use is in base 10 (also known as decimal). It's worth revising quickly, because binary numbers use the same ideas as decimal numbers, just with fewer digits!

In decimal, the value of each digit in a number depends on its place in the number. For example, in $123, the 3 represents $3, whereas the 1 represents $100. Each place value in a number is worth 10 times more than the place value to its right, i.e. there are the "ones", the "tens", the "hundreds", the "thousands", the "ten thousands", the "hundred thousands", the "millions", and so on. Also, there are 10 different digits (0,1,2,3,4,5,6,7,8,9) that can be at each of those place values.

If you were only able to use one digit to represent a number, then the largest number would be 9. After that, you need a second digit, which goes to the left, giving you the next ten numbers (10, 11, 12... 19). It's because we have 10 digits that each one is worth 10 times as much as the one to its right.

You may have encountered different ways of expressing numbers using "expanded form". For example, if you want to write the number 90328 in expanded form you might have written it as:

90000 + 300 + 20 + 8

A more sophisticated way of writing it is:

(9 × 10000) + (0 × 1000) + (3 × 100) + (2 × 10) + (8 × 1)

If you've learnt about exponents, you could write it as:

(9 × 10⁴) + (0 × 10³) + (3 × 10²) + (2 × 10¹) + (8 × 10⁰)

The key ideas to notice from this are:

  • Decimal has 10 digits – 0, 1, 2, 3, 4, 5, 6, 7, 8, 9.
  • A place is the place in the number that a digit is, i.e. ones, tens, hundreds, thousands, and so on. For example, in the number 90328, 3 is in the "hundreds" place, 2 is in the "tens" place, and 9 is in the "ten thousands" place.
  • Numbers are made with a sequence of digits.
  • The right-most digit is the one that's worth the least (in the "ones" place).
  • The left-most digit is the one that's worth the most.
  • Because we have 10 digits, the digit at each place is worth 10 times as much as the one immediately to the right of it.

All this probably sounds really obvious, but it is worth thinking about consciously, because binary numbers have the same properties.

As discussed earlier, computers can only store information using bits, which have 2 possible states. This means that they cannot represent base 10 numbers using digits 0 to 9, the way we write down numbers in decimal. Instead, they must represent numbers using just 2 digits – 0 and 1.

Binary works in a very similar way to decimal, even though it might not initially seem that way. Because there are only 2 digits, this means that each digit is 2 times the value of the one immediately to the right.
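
To make the parallel with decimal concrete, here is a minimal Python sketch (the function name is just illustrative) that totals up binary place values, which double from right to left:

    # Compute the decimal value of a binary string by adding up place values,
    # which double from right to left (1, 2, 4, 8, ...).
    def binary_to_decimal(bits: str) -> int:
        total = 0
        place_value = 1                      # the right-most place is worth 1
        for bit in reversed(bits):
            if bit == "1":
                total += place_value
            place_value *= 2                 # each place is worth twice the one to its right
        return total

    print(binary_to_decimal("101"))      # 4 + 0 + 1 = 5
    print(binary_to_decimal("111111"))   # 32 + 16 + 8 + 4 + 2 + 1 = 63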

The base 10 (decimal) system is sometimes called denary, which is more consistent with the name binary for the base 2 system. The word "denary" also refers to the Roman denarius coin, which was worth ten asses (an "as" was a copper or bronze coin). The term "denary" seems to be used mainly in the UK; in the US, Australia and New Zealand the term "decimal" is more common.

The interactive below illustrates how this binary number system represents numbers. Have a play around with it to see what patterns you can see.

Interactive: Base Calculator

Find the representations of 4, 7, 12, and 57 using the interactive.

What is the largest number you can make with the interactive? What is the smallest? Is there any integer value in between the biggest and the smallest that you can’t make? Are there any numbers with more than one representation? Why/ why not?

  • 000000 in binary, 0 in decimal is the smallest number.
  • 111111 in binary, 63 in decimal is the largest number.
  • All the integer values (0, 1, 2... 63) in the range can be represented (and there is a unique representation for each one). This is exactly the same as decimal!

You have probably noticed from the interactive that when set to 1, the leftmost bit (the "most significant bit") adds 32 to the total, the next adds 16, and then the rest add 8, 4, 2, and 1 respectively. When set to 0, a bit does not add anything to the total. So the idea is to make numbers by adding some or all of 32, 16, 8, 4, 2, and 1 together, and each of those numbers can only be included once.

If you get an 11/100 on a CS test, but you claim that the marks should be read as binary (making it 3 out of 4, or 75%) and so counted as a 'C', they'll probably decide you deserve the upgrade.

Choose a number less than 61 (perhaps your house number, your age, a friend's age, or the day of the month you were born on), set all the binary digits to zero, and then start with the left-most digit (32), trying out whether it should be zero or one. See if you can find a method for converting the number without too much trial and error. Try different numbers until you find a quick way of doing this.
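
If you want to check the method you come up with, here is a minimal Python sketch of the left-to-right approach hinted at above (the function name and the 6-bit width match the interactive, but are otherwise just illustrative):

    # Work from the largest place value down: if it fits into what is left of
    # the number, that bit is a 1 and we use that value up; otherwise it is a 0.
    def decimal_to_binary(number: int, num_bits: int = 6) -> str:
        bits = ""
        for place_value in [2 ** p for p in range(num_bits - 1, -1, -1)]:
            if number >= place_value:
                bits += "1"
                number -= place_value
            else:
                bits += "0"
        return bits

    print(decimal_to_binary(23))   # 010111
    print(decimal_to_binary(57))   # 111001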

Can you figure out the binary representation for 23 without using the interactive? What about 4, 0, and 32? Check all your answers using the interactive to verify that they are correct.

Can you figure out a systematic approach to counting in binary? i.e. start with the number 0, then increment it to 1, then 2, then 3, and so on, all the way up to the highest number that can be made with the 6 bits. Try counting from 0 to 16, and see if you can detect a pattern. Hint: Think about how you add 1 to a number in base 10. e.g. how do you work out 7 + 1, 38 + 1, 19 + 1, 99 + 1, 230899999 + 1, etc? Can you apply that same idea to binary?

Using your new knowledge of the binary number system, can you figure out a way to count to higher than 10 using your 10 fingers? What is the highest number you can represent using your 10 fingers? What if you included your 10 toes as well (so you have 20 fingers and toes to count with)?

A binary number can be incremented by working along it from the right, flipping bits until a 0 is flipped to a 1 (which will happen on the very first bit half of the time).
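
A small Python sketch of that incrementing rule (purely illustrative):

    # Flip bits from the right: trailing 1s become 0s, and the first 0 reached
    # becomes a 1, at which point the increment is complete.
    def increment(bits: str) -> str:
        bits = list(bits)
        for i in range(len(bits) - 1, -1, -1):
            if bits[i] == "0":
                bits[i] = "1"
                break
            bits[i] = "0"
        return "".join(bits)

    value = "00000"
    for _ in range(5):
        value = increment(value)
        print(value)    # 00001, 00010, 00011, 00100, 00101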

Counting on fingers in binary means that you can count to 31 on 5 fingers, and 1023 on 10 fingers. There are a number of videos on YouTube of people counting in binary on their fingers. One twist is to wear white gloves with the numbers 16, 8, 4, 2, 1 on the 5 fingers respectively, which makes it easy to work out the value of having certain fingers raised.

The interactive used exactly 6 bits. In practice, we can use as many or as few bits as we need, just like we do with decimal. For example, with 5 bits, the place values would be 16, 8, 4, 2 and 1, so the largest value is 11111 in binary, or 31 in decimal. Representing 14 with 5 bits would give 01110.

Write representations for the following. If it is not possible to do the representation, put "Impossible".

  • Represent 101 with 7 bits
  • Represent 28 with 10 bits
  • Represent 7 with 3 bits
  • Represent 18 with 4 bits
  • Represent 28232 with 16 bits

The answers are below (spaces are added to make them easier to read, but are not required); a quick way of checking conversions like these with code is shown after the list.

  • 101 with 7 bits is: 110 0101
  • 28 with 10 bits is: 00 0001 1100
  • 7 with 3 bits is: 111
  • 18 with 4 bits is: Impossible (not enough bits to represent value)
  • 28232 with 16 bits is: 0110 1110 0100 1000
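
As flagged above, a quick way to check fixed-width answers like these is Python's built-in format() function, which zero-pads a binary representation to a chosen number of bits (shown here purely as a checking aid):

    print(format(101, '07b'))      # 1100101
    print(format(28, '010b'))      # 0000011100
    print(format(7, '03b'))        # 111
    print(format(28232, '016b'))   # 0110111001001000
    print(18 > 2**4 - 1)           # True: 18 does not fit in 4 bits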

An important concept with binary numbers is the range of values that can be represented using a given number of bits. When we have 8 bits the binary numbers start to get useful – they can represent values from 0 to 255, so it is enough to store someone's age, the day of the month, and so on.

Groups of 8 bits are so useful that they have their own name: a byte . Computer memory and disk space are usually divided up into bytes, and bigger values are stored using more than one byte. For example, two bytes (16 bits) are enough to store numbers from 0 to 65,535. Four bytes (32 bits) can store numbers up to 4,294,967,295. You can check these numbers by working out the place values of the bits. Every bit that's added will double the range of the number.

In practice, computers store numbers with either 16, 32, or 64 bits. This is because these are full numbers of bytes (a byte is 8 bits), and makes it easier for computers to know where each number starts and stops.

Candles on birthday cakes use the base 1 numbering system, where each place is worth 1 more than the one to its right. For example, the number 3 is 111, and 10 is 1111111111. This can cause problems as you get older – if you've ever seen a cake with 100 candles on it, you'll be aware that it's a serious fire hazard.

The image shows two people with birthday cakes; however, a cake with 100 candles on it turns into a big fireball!

Luckily it's possible to use binary notation for birthday candles – each candle is either lit or not lit. For example, if you are 18, the binary notation is 10010, and you need 5 candles (with only two of them lit).

There's a video on using binary notation for counting up to 1023 on your hands, as well as using it for birthday cakes.

It's a lot smarter to use binary notation on candles for birthdays as you get older, as you don't need as many candles.

Most of the time binary numbers are stored electronically, and we don't need to worry about making sense of them. But sometimes it's useful to be able to write down and share numbers, such as the unique identifier assigned to each digital device (MAC address), or the colours specified in an HTML page.

Writing out long binary numbers is tedious – for example, suppose you need to copy down the 16-bit number 0101001110010001. A widely used shortcut is to break the number up into 4-bit groups (in this case, 0101 0011 1001 0001), and then write down the digit that each group represents (giving 5391). There's just one small problem: each group of 4 bits can go up to 1111, which is 15, and the digits only go up to 9.

The solution is simple: we introduce symbols for the digits from 1010 (10) to 1111 (15), which are just the letters A to F. So, for example, the 16-bit binary number 1011 1000 1110 0001 can be written more concisely as B8E1. The "B" represents the binary 1011, which is the decimal number 11, and the E represents binary 1110, which is decimal 14.

Because we now have 16 digits, this representation is base 16, and known as hexadecimal (or hex for short). Converting between binary and hexadecimal is very simple, and that's why hexadecimal is a very common way of writing down large binary numbers.

Here's a full table of all the 4-bit numbers and their hexadecimal digit equivalent:

0000 0
0001 1
0010 2
0011 3
0100 4
0101 5
0110 6
0111 7
1000 8
1001 9
1010 A
1011 B
1100 C
1101 D
1110 E
1111 F

For example, the largest 8-bit binary number is 11111111. This can be written as FF in hexadecimal. Both of those representations mean 255 in our conventional decimal system (you can check that by converting the binary number to decimal).
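
If you want to experiment, here is a minimal Python sketch (the function name is illustrative) of the 4-bits-per-hex-digit shorthand described above:

    # Convert a binary string to hexadecimal by translating each group of 4 bits
    # into one of the 16 hex digits.
    def binary_to_hex(bits: str) -> str:
        digits = "0123456789ABCDEF"
        bits = bits.zfill((len(bits) + 3) // 4 * 4)   # pad to a multiple of 4 bits
        groups = [bits[i:i + 4] for i in range(0, len(bits), 4)]
        return "".join(digits[int(group, 2)] for group in groups)

    print(binary_to_hex("1011100011100001"))   # B8E1
    print(binary_to_hex("11111111"))           # FF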

Which notation you use will depend on the situation; binary numbers represent what is actually stored, but can be confusing to read and write; hexadecimal numbers are a good shorthand of the binary; and decimal numbers are used if you're trying to understand the meaning of the number or doing normal math. All three are widely used in computer science.

It is important to remember though, that computers only represent numbers using binary. They cannot represent numbers directly in decimal or hexadecimal.

A common place that numbers are stored on computers is in spreadsheets or databases. These can be entered either through a spreadsheet program or database program, through a program you or somebody else wrote, or through additional hardware such as sensors, collecting data such as temperatures, air pressure, or ground shaking.

Some of the things that we might think of as numbers, such as the telephone number (03) 555-1234, aren't actually stored as numbers, as they contain important characters (like dashes and spaces) as well as the leading 0 which would be lost if it was stored as a number (the above number would come out as 35551234, which isn't quite right). These are stored as text , which is discussed in the next section.

On the other hand, things that don't look like a number (such as "30 January 2014") are often stored using a value that is converted to a format that is meaningful to the reader (try typing two dates into Excel, and then subtract one from the other – the result is a useful number). In the underlying representation, a number is used. Program code is used to translate the underlying representation into a meaningful date on the user interface.

The difference between two dates in Excel is the number of days between them; the date itself (as in many systems) is stored as the amount of time elapsed since a fixed date (such as 1 January 1900). You can test this by typing a date like "1 January 1850" – chances are that it won't be formatted as a normal date. Likewise, a date sufficiently in the future may behave strangely due to the limited number of bits available to store the date.
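
As a small illustration of dates being numbers underneath, Python's standard datetime module behaves the same way; Excel uses a different fixed starting date, so the exact numbers differ, but the idea is identical:

    from datetime import date

    # Subtracting two dates gives the number of days between them.
    print((date(2014, 1, 30) - date(2014, 1, 1)).days)   # 29

    # Underneath, a date is just a count of days since a fixed reference date.
    print(date(2014, 1, 30).toordinal())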

Numbers are used to store things as diverse as dates, student marks, prices, statistics, scientific readings, sizes and dimensions of graphics.

The following issues need to be considered when storing numbers on a computer:

  • What range of numbers should be able to be represented?
  • How do we handle negative numbers?
  • How do we handle decimal points or fractions?

In practice, we need to allocate a fixed number of bits to a number before we know how big the number is. This is often 32 bits or 64 bits, although it can be set to 16 bits, or even 128 bits, if needed. This is because a computer has no way of knowing where a number starts and ends otherwise.

Any system that stores numbers needs to make a compromise between the number of bits allocated to store the number, and the range of values that can be stored.

In some systems (like the Java and C programming languages and databases) it's possible to specify how accurately numbers should be stored; in others it is fixed in advance (such as in spreadsheets).

Some are able to work with arbitrarily large numbers by increasing the space used to store them as necessary (e.g. integers in the Python programming language). However, it is likely that these are still working with a multiple of 32 bits (e.g. 64 bits, 96 bits, 128 bits, 160 bits, etc). Once the number is too big to fit in 32 bits, the computer would reallocate it to have up to 64 bits.

In some programming languages there isn't a check for when a number gets too big (overflows). For example, if you have an 8-bit number using two's complement, then 01111111 is the largest number (127), and if you add one without checking, it will change to 10000000, which happens to be the number -128. (Don't worry about two's complement too much, it's covered later in this section.) This can cause serious problems if not checked for, and is behind a variant of the Y2K problem, called the Year 2038 problem, involving a 32-bit number overflowing for dates on Tuesday, 19 January 2038.
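
A minimal Python sketch of that 8-bit wrap-around (Python integers don't overflow on their own, so the masking below simulates an 8-bit two's complement register; the helper name is illustrative):

    # Keep only the low 8 bits, then reinterpret the top bit as the sign.
    def to_signed_8bit(value: int) -> int:
        value &= 0xFF
        return value - 256 if value >= 128 else value

    print(to_signed_8bit(127 + 1))            # -128
    print(format((127 + 1) & 0xFF, '08b'))    # 10000000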

An xkcd comic on number overflow

On tiny computers, such as those embedded inside your car, washing machine, or a tiny sensor that is barely larger than a grain of sand, we might need to specify more precisely how big a number needs to be. While computers prefer to work with chunks of 32 bits, we could write a program (as an example for an earthquake sensor) that knows the first 7 bits are the latitude, the next 7 bits are the longitude, the next 10 bits are the depth, and the last 8 bits are the amount of force.

Even on standard computers, it is important to think carefully about the number of bits you will need. For example, if you have a field in your database that could be either "0", "1", "2", or "3" (perhaps representing the four bases that can occur in a DNA sequence), and you used a 64 bit number for every one, that will add up as your database grows. If you have 10,000,000 items in your database, you will have wasted 62 bits for each one (only 2 bits are needed to represent the 4 numbers in the example), a total of 620,000,000 bits, which is around 74 MB. If you are doing this a lot in your database, that will really add up – human DNA has about 3 billion base pairs in it, so it's incredibly wasteful to use more than 2 bits for each one.
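
A sketch of the 2-bits-per-base idea (the particular base-to-bits mapping below is just an illustrative choice):

    # Pack DNA bases at 2 bits each, so four bases fit in a single byte.
    base_codes = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

    def pack_bases(sequence: str) -> bytes:
        packed = bytearray()
        for i in range(0, len(sequence), 4):
            byte = 0
            for base in sequence[i:i + 4]:
                byte = (byte << 2) | base_codes[base]
            packed.append(byte)
        return bytes(packed)

    print(pack_bases("GATTACCA").hex())   # 8f14: two bytes for eight bases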

And for applications such as Google Maps, which are storing an astronomical amount of data, wasting space is not an option at all!

It is really useful to know roughly how many bits you will need to represent a certain value. Have a think about the following scenarios, and choose the best number of bits out of the options given. You want to ensure that the largest possible number will fit within the number of bits, but you also want to ensure that you are not wasting space.

  • Storing the day of the week - a) 1 bit - b) 4 bits - c) 8 bits - d) 32 bits
  • Storing the number of people in the world - a) 16 bits - b) 32 bits - c) 64 bits - d) 128 bits
  • Storing the number of roads in New Zealand - a) 16 bits - b) 32 bits - c) 64 bits - d) 128 bits
  • Storing the number of stars in the universe - a) 16 bits - b) 32 bits - c) 64 bits - d) 128 bits

The answers are:

  • b (actually, 3 bits is enough as it gives 8 values, but amounts that fit evenly into 8-bit bytes are easier to work with)
  • c (32 bits is slightly too small, so you will need 64 bits)
  • b (This is a challenging question, but one a database designer would have to think about. There's about 94,000 km of roads in New Zealand, so if the average length of a road was 1km, there would be too many roads for 16 bits. Either way, 32 bits would be a safe bet.)
  • d (Even 64 bits is not enough, but 128 bits is plenty! Remember that 128 bits doesn't just give twice the range of 64 bits; every extra bit doubles the range, so it is astronomically bigger.)

The binary number representation we have looked at so far allows us to represent positive numbers only. In practice, we will want to be able to represent negative numbers as well, such as when the balance of an account goes to a negative amount, or the temperature falls below zero. In our normal representation of base 10 numbers, we represent negative numbers by putting a minus sign in front of the number. But in binary, is it this simple?

We will look at two possible approaches: Adding a simple sign bit, much like we do for decimal, and then a more useful system called two's complement.

Using a simple sign bit

On a computer we don’t have minus signs for numbers (it doesn't work very well to use the text based one when representing a number because you can't do arithmetic on characters), but we can do it by allocating one extra bit, called a sign bit, to represent the minus sign. Just like with decimal numbers, we put the negative indicator on the left of the number — when the sign bit is set to "0", that means the number is positive and when the sign bit is set to "1", the number is negative (just as if there were a minus sign in front of it).

For example, if we wanted to represent the number 41 using 7 bits along with an additional bit that is the sign bit (to give a total of 8 bits), we would represent it by 00101001. The first bit is a 0, meaning the number is positive, then the remaining 7 bits give 41, meaning the number is +41. If we wanted to make -59, this would be 10111011. The first bit is a 1, meaning the number is negative, and then the remaining 7 bits represent 59, meaning the number is -59.

Using 8 bits as described above (one for the sign, and 7 for the actual number), what would be the binary representations for 1, -1, -8, 34, -37, -88, and 102?

The spaces are not necessary, but are added to make reading the binary numbers easier

  • 1 is 0000 0001
  • -1 is 1000 0001
  • -8 is 1000 1000
  • 34 is 0010 0010
  • -37 is 1010 0101
  • -88 is 1101 1000
  • 102 is 0110 0110

Going the other way is just as easy. If we have the binary number 10010111, we know it is negative because the first digit is a 1. The number part is the next 7 bits 0010111, which is 23. This means the number is -23.
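
A minimal Python sketch of this sign-bit scheme, in both directions (the function names are illustrative):

    # The left-most of 8 bits is the sign; the remaining 7 bits are the magnitude.
    def to_sign_magnitude(value: int) -> str:
        sign = "1" if value < 0 else "0"
        return sign + format(abs(value), '07b')

    def from_sign_magnitude(bits: str) -> int:
        magnitude = int(bits[1:], 2)
        return -magnitude if bits[0] == "1" else magnitude

    print(to_sign_magnitude(-59))             # 10111011
    print(from_sign_magnitude("10010111"))    # -23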

What would the decimal values be for the following, assuming that the first bit is a sign bit?

  • 00010011 is 19
  • 10000110 is -6
  • 10100011 is -35
  • 01111111 is 127
  • 11111111 is -127

But what about 10000000? That converts to -0. And 00000000 is +0. Since -0 and +0 are both just 0, it is very strange to have two different representations for the same number.

This is one of the reasons that we don't use a simple sign bit in practice. Instead, computers usually use a more sophisticated representation for negative binary numbers called two's complement.

Two's complement

There's an alternative representation called two's complement, which avoids having two representations for 0, and more importantly, makes it easier to do arithmetic with negative numbers.

Representing positive numbers with two's complement

Representing positive numbers is the same as the method you have already learnt. Using 8 bits, the leftmost bit is a zero and the other 7 bits are the usual binary representation of the number; for example, 1 would be 00000001, and 50 would be 00110010.

Representing negative numbers with two's complement

This is where things get more interesting. In order to convert a negative number to its two's complement representation, use the following process.

  1. Convert the number to binary (don't use a sign bit, and pretend it is a positive number).
  2. Invert all the digits (i.e. change 0's to 1's and 1's to 0's).
  3. Add 1 to the result. (Adding 1 is easy in binary; you could do it by converting to decimal first, but think carefully about what happens when a binary number is incremented by 1 by trying a few; there are more hints in the panel below.)

For example, assume we want to convert -118 to its two's complement representation. We would use the process as follows.

  1. The binary number for 118 is 01110110.
  2. 01110110 with the digits inverted is 10001001.
  3. 10001001 + 1 is 10001010.

Therefore, the two's complement representation for -118 is 10001010.
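
A minimal Python sketch of the three-step process (the function name is illustrative):

    # Two's complement of a negative number: write the positive value in binary,
    # invert every bit, then add 1. Positive numbers are left unchanged.
    def twos_complement(value: int, num_bits: int = 8) -> str:
        if value >= 0:
            return format(value, f'0{num_bits}b')
        bits = format(-value, f'0{num_bits}b')                        # step 1
        inverted = "".join("1" if b == "0" else "0" for b in bits)    # step 2
        return format(int(inverted, 2) + 1, f'0{num_bits}b')          # step 3

    print(twos_complement(-118))   # 10001010
    print(twos_complement(50))     # 00110010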

The rule for adding one to a binary number is pretty simple, so we'll let you figure it out for yourself. First, if a binary number ends with a 0 (e.g. 1101010), how would the number change if you replace the last 0 with a 1? Now, if it ends with 01, how much would it increase if you change the 01 to 10? What about ending with 011? 011111?

The method for adding is so simple that it's easy to build computer hardware to do it very quickly.

What would be the two's complement representation for the following numbers, using 8 bits: 19, -19, 107, -107, and -92? Follow the process given in this section, and remember that you do not need to do anything special for positive numbers.

  • 19 in binary is 0001 0011, which is the two's complement for a positive number.
  • For -19, we take the binary of the positive, which is 0001 0011 (above), invert it to 1110 1100, and add 1, giving a representation of 1110 1101.
  • 107 in binary is 0110 1011, which is the two's complement for a positive number.
  • For -107, we take the binary of the positive, which is 0110 1011 (above), invert it to 1001 0100, and add 1, giving a representation of 1001 0101.
  • For -92, we take the binary of the positive, which is 0101 1100, invert it to 1010 0011, and add 1, giving a representation of 1010 0100. (If you have this incorrect, double check that you incremented by 1 correctly.)

Converting a two's complement number back to decimal

In order to reverse the process, we need to know whether the number we are looking at is positive or negative. For positive numbers, we can simply convert the binary number back to decimal. But for negative numbers, we first need to convert it back to a normal binary number.

So how do we know if the number is positive or negative? It turns out (for reasons you will understand later in this section) that two's complement numbers that are negative always start with a 1, and positive numbers always start with a 0. Have a look back at the previous examples to double check this.

So, if the number starts with a 1, use the following process to convert the number back to a negative decimal number.

  • Subtract 1 from the number.
  • Invert all the digits.
  • Convert the resulting binary number to decimal.
  • Add a minus sign in front of it.

So if we needed to convert 11100010 back to decimal, we would do the following.

  • Subtract 1 from 11100010, giving 11100001.
  • Invert all the digits, giving 00011110.
  • Convert 00011110 to a decimal number, giving 30.
  • Add a negative sign, giving -30.
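
A matching sketch of the reverse process (again, the function name is illustrative):

    # If the left-most bit is 1: subtract 1, invert, convert to decimal, negate.
    def from_twos_complement(bits: str) -> int:
        if bits[0] == "0":
            return int(bits, 2)
        minus_one = format(int(bits, 2) - 1, f'0{len(bits)}b')
        inverted = "".join("1" if b == "0" else "0" for b in minus_one)
        return -int(inverted, 2)

    print(from_twos_complement("11100010"))   # -30
    print(from_twos_complement("00011110"))   # 30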

Convert the following two's complement numbers to decimal.

  • 10001100 -> (-1) 10001011 -> (inverted) 01110100 -> (to decimal) 116 -> (negative sign added) -116
  • 10111111 -> (-1) 10111110 -> (inverted) 01000001 -> (to decimal) 65 -> (negative sign added) -65

How many numbers can be represented using two's complement?

While it might initially seem that there is no bit allocated as the sign bit, the left-most bit behaves like one. With 8 bits, you can still only make 256 possible patterns of 0's and 1's. If you attempted to use 8 bits to represent positive numbers up to 255, and negative numbers down to -255, you would quickly realise that some numbers were mapped onto the same pattern of bits. Obviously, this will make it impossible to know what number is actually being represented!

In practice, numbers within the following ranges can be represented. Unsigned Range is how many numbers you can represent if you only allow positive numbers (no sign is needed), and two's complement Range is how many numbers you can represent if you require both positive and negative numbers. You can work these out because the range of 8-bit values if they are stored using unsigned numbers will be from 00000000 to 11111111 (i.e. 0 to 255 in decimal), while the signed two's complement range is from 10000000 (the lowest number, -128 in decimal) to 01111111 (the highest number, 127 in decimal). This might seem a bit weird, but it works out really well because normal binary addition can be used if you use this representation even if you're adding a negative number.

Number of bits    Unsigned range    Two's complement range
8 bits    0 to 255    -128 to 127
16 bits    0 to 65,535    -32,768 to 32,767
32 bits    0 to 4,294,967,295    −2,147,483,648 to 2,147,483,647
64 bits    0 to 18,446,744,073,709,551,615    −9,223,372,036,854,775,808 to 9,223,372,036,854,775,807
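
The pattern behind the table can be made explicit with a couple of lines of Python (shown purely to reproduce the ranges above):

    # n bits give 2**n patterns: 0 .. 2**n - 1 unsigned, or
    # -2**(n-1) .. 2**(n-1) - 1 in two's complement.
    for n in (8, 16, 32, 64):
        print(f"{n:2d} bits: unsigned 0 to {2**n - 1:,}, "
              f"two's complement {-2**(n - 1):,} to {2**(n - 1) - 1:,}")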

Adding negative binary numbers

Before adding negative binary numbers, we'll look at adding positive numbers. It's basically the same as the addition methods used on decimal numbers, except the rules are way simpler because there are only two different digits that you might add!

You've probably learnt about column addition. For example, the following column addition would be used to do 128 + 255.

      128
    + 255
    -----
      383

When you go to add 5 + 8, the result is higher than 9, so you put the 3 in the one's column, and carry the 1 to the 10's column. Binary addition works in exactly the same way.

Adding positive binary numbers

If you wanted to add two positive binary numbers, such as 00001111 and 11001110, you would follow a similar process to the column addition. You only need to know 0+0, 0+1, 1+0, 1+1, and 1+1+1. The first three are just what you might expect. Adding 1+1 causes a carry digit, since in binary 1+1 = 10, which translates to "0, carry 1" when doing column addition. The last one, 1+1+1, adds up to 11 in binary, which we can express as "1, carry 1". For our two example numbers, the addition works like this:

      00001111
    + 11001110
    ----------
      11011101

Remember that the digits can be only 1 or 0. So you will need to carry a 1 to the next column if the total you get for a column is (decimal) 2 or 3.

Adding negative numbers with a simple sign bit

With negative numbers using sign bits like we did before, this does not work. If you wanted to add +11 (01011) and -7 (10111), you would expect to get an answer of +4 (00100). But doing the column addition gives:

      01011
    + 10111
    -------
     100010

Which, if the left-most bit is read as a sign bit, is -2.

One way we could solve the problem is to use column subtraction instead. But this would require giving the computer a hardware circuit which could do this. Luckily this is unnecessary, because addition with negative numbers works automatically using two's complement!

Adding negative numbers with two's complement

For the above addition (+11 + -7), we can start by converting the numbers to their 5-bit two's complement form. Because 01011 (+11) is a positive number, it does not need to be changed. But for the negative number, 00111 (-7) (sign bit from before removed as we don't use it for two's complement), we need to invert the digits and then add 1, giving 11001.

Adding these two numbers works like this:

      01011
    + 11001
    -------
     100100

Any extra bits to the left (beyond what we are using, in this case 5 bits) are simply truncated. This leaves 00100, which is 4, like we were expecting.
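
A minimal Python sketch of that 5-bit addition, with the truncation done by masking off everything beyond 5 bits (the constants are just this example's values):

    NUM_BITS = 5
    MASK = (1 << NUM_BITS) - 1       # 11111 for 5 bits

    a = 0b01011                      # +11
    b = 0b11001                      # -7 in 5-bit two's complement
    result = (a + b) & MASK          # ordinary addition, then discard the carry out
    print(format(result, '05b'))     # 00100, which is 4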

We can also use this for subtraction. If we are subtracting a positive number from a positive number, we would need to convert the number we are subtracting to a negative number. Then we should add the two numbers. This is the same as for decimal numbers, for example 5 - 2 = 3 is the same as 5 + (-2) = 3.

This property of two's complement is very useful. It means that positive numbers and negative numbers can be handled by the same computer circuit, and addition and subtraction can be treated as the same operation.

The idea of using a "complementary" number to change subtraction to addition can be seen by doing the same in decimal. The complement of a decimal digit is the digit that adds up to 10; for example, the complement of 4 is 6, and the complement of 8 is 2. (The word "complement" comes from the root "complete" – it completes it to a nice round number.)

Subtracting 2 from 6 is the same as adding the complement, and ignoring the extra 1 digit on the left. The complement of 2 is 8, so we add 8 to 6, giving (1)4.

For larger numbers (such as subtracting the two 3-digit numbers 255 - 128), the complement is the number that adds up to the next power of 10 i.e. 1000-128 = 872. Check that adding 872 to 255 produces (almost) the same result as subtracting 128.

Working out complements in binary is way easier because there are only two digits to work with, but working them out in decimal may help you to understand what is going on.

Using sign bits vs using two's complement

We have now looked at two different ways of representing negative numbers on a computer. In practice, a simple sign bit is rarely used, because it gives two different representations of zero, and it requires different computer circuits to handle negative and positive numbers and to do addition and subtraction.

Two's complement is widely used, because it only has one representation for zero, and it allows positive numbers and negative numbers to be treated in the same way, and addition and subtraction to be treated as one operation.

There are other systems such as "One's Complement" and "Excess-k", but two's complement is by far the most widely used in practice.

Computers and the Internet: Binary Numbers

Refresher: Decimal numbers

hundreds' place    tens' place    ones' place
      2                 3               4

Converting decimal to binary

  • Grab a piece of paper or a whiteboard.
  • Draw dashes for each of the bits. If the number is less than 16, draw 4 dashes. Otherwise, for numbers up to 255, draw 8 dashes. Bigger numbers than that require more bits and take a while to do by hand, so let's focus on the smaller numbers.
  • Write the powers of 2 under each dash. Start under the right-most dash, writing 1, then keep multiplying by 2.
  • Now start at the left-most dash and ask yourself "Is the number greater than or equal to this place value?" If you answer yes, then write a 1 in that dash and subtract that amount from the number. If you answer no, then write a 0 and move to the next dash.
  • Keep going from left to right, keeping track of how much remainder you still need to represent. When you're done, you'll have converted the number to binary!

Binary Calculator / Converter

Use this tool in binary calculator mode to perform arithmetic operations with binary numbers (add, subtract, multiply and divide binaries). Use it in binary converter mode to easily convert a binary number to a decimal notation real number, a decimal number to a binary number (decimal to binary and binary to decimal converter), as well as binary to hex and hex to binary.

    What is a binary number?

A binary number is a number expressed in the binary system, a positional numeral system with a base of 2 that uses just two symbols, 0 and 1, to represent all possible numerical values. For example, 10 in decimal is 1010 in binary, 100 in decimal is 1100100 in binary, while 1,000 in decimal is 1111101000 in binary. Binary numbers have signs, just like decimal ones; for example -101 is equal to -5 in decimal. Negative numbers are, for the time being, not supported in the binary calculator / binary converter above.

While binary numerals were used historically in Egypt, China, India and other cultures, since the 20th century they have predominantly been used in computing, by computer system designers, software engineers and programmers, since the underlying computer systems encode everything with the presence or absence of an electrical charge. Thus, at the lowest level of abstraction, everything in a computer system is represented by ones and zeroes. Most of us, thankfully, do not need to do any arithmetic or counting in binary, but a calculator or converter may often come into play in computer programming.

Using our binary calculator you can perform arithmetic operations (addition, subtraction, multiplication and division of binary numbers) as well as use it as a binary converter for binary to decimal, decimal to binary, hex to binary and binary to hex conversions.

Here is a table of some numbers represented in the decimal, binary and hex systems (base 10, base 2 and base 16).

Numbers in decimal, binary and hex
Decimal    Binary    Hex
0 0 0
1 1 1
2 10 2
3 11 3
5 101 5
10 1010 A
11 1011 B
12 1100 C
13 1101 D
14 1110 E
15 1111 F
50 110010 32
63 111111 3F
100 1100100 64
1000 1111101000 3E8
10000 10011100010000 2710

    Converting to and from binary numbers

Converting numbers to and from binary does not change the number itself, it just changes its form. Using our binary converter above, you can do both types of conversions quickly and easily or you can read how to do it manually below. Note that binary calculation and conversion are separate operations: you do not need to perform one in order to do the other.

Each position in a binary numeral represents a power of 2, the same way each position in a decimal number represents a power of 10. For example, the number 20 in decimal is 2·10¹ + 0·10⁰ = 20. The binary number 101 is then 1·2² + 0·2¹ + 1·2⁰ = 4 + 0 + 1 = 5 in decimal.

The process of binary to decimal conversion is therefore to take each position and multiply its value by 2 to the power of the position number, counting from right to left and starting at zero. If you need to calculate large exponents like 2¹⁶ you might find our exponent calculator useful.
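
A minimal Python sketch of that right-to-left process (the function name is illustrative):

    # Multiply each binary digit by 2 raised to its position, counting from the
    # right and starting at zero, then add the results together.
    def binary_to_decimal(binary: str) -> int:
        total = 0
        for position, digit in enumerate(reversed(binary)):
            total += int(digit) * 2 ** position
        return total

    print(binary_to_decimal("101"))       # 5
    print(binary_to_decimal("1100100"))   # 100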

This process is a bit more complex as we are going from a higher base to a lower base. This is where you'd really appreciate having a tool like our binary converter handy. Let us say the number we want to convert from decimal to binary is X. Begin by finding the largest power of 2 ≤ X and denote it by E. Then determine how many times that power of 2 goes into X and make note of it. Denote the remainder by Y₁.

Repeat the above steps using Yₙ as a starting value until 2 is larger than the remaining value, and assign the remainder to the 2⁰ position; then place a 1 in each position whose power you used (and a 0 everywhere else) and you will have your binary value.

Example decimal to binary conversion: Convert 100 in decimal to binary.

1.) Largest power: E = 6 (2⁶ = 64 ≤ 100, 2⁷ = 128 > 100)

2.) 100 / 2⁶ = 1 (remainder 36); Y₁ = 1

3.) Largest power: E = 5 (2⁵ = 32 ≤ 36, 2⁶ = 64 > 36)

4.) 36 / 2⁵ = 1 (remainder 4); Y₂ = 1

5.) Largest power: E = 2 (2² = 4 ≤ 4, 2³ = 8 > 4)

6.) 4 / 2² = 1 (remainder 0); Y₃ = 1

7.) 0 < 2; end.

For each power you have used, place a 1 in its position; for the remaining positions, place zeroes. In this case we've used the powers 6, 5 and 2, therefore the result is: 1100100 (1·2⁶ + 1·2⁵ + 0·2⁴ + 0·2³ + 1·2² + 0·2¹ + 0·2⁰).

Hex to binary and binary to hex conversion follows the same principles, but with base 16 instead of base 10.


    Arithmetic operations with binary numbers

Using our tool in binary calculator mode you can perform the four basic arithmetic operations on binary numbers: addition, subtraction, multiplication and division. E.g. it can easily be used as a binary addition calculator. In order to do the binary calculations yourself most would prefer using a table for smaller numbers and a calculator for larger ones. Subtraction works the same way as in any other number system, except that when borrowing you borrow a group of 2₁₀ instead of 10₁₀ as you would with decimals.

    Binary arithmetic calculation examples

A few examples of using base 2 numbers will be instructional in showing that it works similarly to ordinary decimal numbers. A simple sum to start with: add 10₂ and 11₂. Adding these two binary numbers from right to left: 0 + 1 = 1, then 1 + 1 = 10₂, which is 0 with a carry of 1₂, so we have 01₂ so far, and when the carry is added at the front we get the result: 101₂.

For a more complex addition example let us add the binary numbers 111₂ and 101₂. Starting from right to left:

  • (1) Add 1₂ and 1₂, resulting in 10₂, which is 0₂ with a carry of 1₂ to the left.
  • (2) Add 1₂ and 0₂, then add 1₂ from the carryover, resulting in 0₂ and a carry of 1₂.
  • (3) Add 1₂ and 1₂ and the 1₂ carried over from (2) to get 11₂.
  • Write down the outcomes of (3), (2) and (1) next to each other to get 1100₂ - the result of adding the binary numbers 111 and 101.

This is how binary addition works, and likewise for binary subtraction, multiplication, and division.

How-To Geek

What is binary, and why do computers use it?

Computers don't understand words or numbers the way humans do. Modern software allows the end user to ignore this, but at the lowest levels of your computer, everything is represented by a binary electrical signal that registers in one of two states: on or off. To make sense of complicated data, your computer has to encode it in binary.

Binary is a base 2 number system. Base 2 means there are only two digits---1 and 0---which correspond to the on and off states your computer can understand. You're probably familiar with base 10---the decimal system. Decimal makes use of ten digits that range from 0 to 9, and then wraps around to form two-digit numbers, with each digit being worth ten times more than the last (1, 10, 100, etc.). Binary is similar, with each digit being worth two times more than the last.

In binary, the first digit is worth 1 in decimal. The second digit is worth 2, the third worth 4, the fourth worth 8, and so on---doubling each time. Adding these all up gives you the number in decimal. So,

1111 (in binary)  =  8 + 4 + 2 + 1  =  15 (in decimal)

Accounting for 0, this gives us 16 possible values for four binary bits. Move to 8 bits, and you have 256 possible values. This takes up a lot more space to represent, as four digits in decimal give us 10,000 possible values. It may seem like we're going through all this trouble of reinventing our counting system just to make it clunkier, but computers understand binary much better than they understand decimal. Sure, binary takes up more space, but we're held back by the hardware. And for some things, like logic processing, binary is better than decimal.

There's another base system that's also used in programming: hexadecimal. Although computers don't run on hexadecimal, programmers use it to represent binary addresses in a human-readable format when writing code. This is because two digits of hexadecimal can represent a whole byte, eight digits in binary. Hexadecimal uses 0-9 like decimal, and also the letters A through F to represent the additional six digits.

The short answer: hardware and the laws of physics. Every number in your computer is an electrical signal, and in the early days of computing, electrical signals were much harder to measure and control very precisely. It made more sense to only distinguish between an "on" state---represented by negative charge---and an "off" state---represented by a positive charge. For those unsure of why the "off" is represented by a positive charge, it's because electrons have a negative charge---more electrons mean more current with a negative charge.

So, the early room-sized computers used binary to build their systems, and even though they used much older, bulkier hardware, we've kept the same fundamental principles. Modern computers use what's known as a transistor to perform calculations with binary. Here's a diagram of what a field-effect transistor (FET) looks like:

Essentially, it only allows current to flow from the source to the drain if there is a current in the gate. This forms a binary switch. Manufacturers can build these transistors incredibly small---all the way down to 5 nanometers, or about the size of two strands of DNA. This is how modern CPUs operate, and even they can suffer from problems differentiating between on and off states (though that's mostly due to their unreal molecular size, being subject to the weirdness of quantum mechanics ).

So you may be thinking, "why only 0 and 1? Couldn't you just add another digit?" While some of it comes down to tradition in how computers are built, to add another digit would mean we'd have to distinguish between different levels of current---not just "off" and "on," but also states like "on a little bit" and "on a lot."

The problem here is if you wanted to use multiple levels of voltage, you'd need a way to easily perform calculations with them, and the hardware for that isn't viable as a replacement for binary computing. It indeed does exist; it's called a ternary computer , and it's been around since the 1950s, but that's pretty much where development on it stopped. Ternary logic is way more efficient than binary, but as of yet, nobody has an effective replacement for the binary transistor, or at the very least, no work's been done on developing them at the same tiny scales as binary.

The reason we can't use ternary logic comes down to the way transistors are stacked in a computer---something called "gates" --- and how they're used to perform math. Gates take two inputs, perform an operation on them, and return one output.

This brings us to the long answer: binary math is way easier for a computer than anything else. Boolean logic maps easily to binary systems, with True and False being represented by on and off. Gates in your computer operate on boolean logic: they take two inputs and perform an operation on them like AND, OR, XOR, and so on. Two inputs are easy to manage. If you were to graph the answers for each possible input, you would have what's known as a truth table:

For example, here is the truth table for the AND operation:

    A  B  A AND B
    0  0        0
    0  1        0
    1  0        0
    1  1        1

A binary truth table operating on boolean logic will have four possible outputs for each fundamental operation. But because ternary gates take three inputs, a ternary truth table would have 9 or more. While a binary system has 16 possible operators (2^2^2), a ternary system would have 19,683 (3^3^3). Scaling becomes an issue because while ternary is more efficient, it's also exponentially more complex.
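
A small Python sketch that prints those four-row truth tables for a few two-input gates (illustrative only; real gates are hardware, not software):

    # Each two-input boolean gate has exactly four input combinations to handle.
    gates = {
        "AND": lambda a, b: a and b,
        "OR":  lambda a, b: a or b,
        "XOR": lambda a, b: a ^ b,
    }

    for name, gate in gates.items():
        rows = [(a, b, int(gate(a, b))) for a in (0, 1) for b in (0, 1)]
        print(name, rows)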

Who knows? In the future, we could begin to see ternary computers become a thing, as we push the limits of binary down to a molecular level. For now, though, the world will continue to run on binary.

Computer Architecture Course: Data Representation

Digital computers store and process information in binary form, as digital logic has only two values, "1" and "0", in other words "True or False" or "ON or OFF". This system is called radix 2. We humans generally deal with radix 10, i.e. decimal. As a matter of convenience there are many other representations like Octal (radix 8), Hexadecimal (radix 16), Binary Coded Decimal (BCD), Decimal etc.

Every computer's CPU has a width measured in bits, such as an 8-bit CPU, a 16-bit CPU, a 32-bit CPU etc. Similarly, each memory location can store a fixed number of bits, called the memory width. Given the size of the CPU and memory, it is for the programmer to handle the data representation. Most readers will know that 4 bits form a nibble and 8 bits form a byte. The word length is defined by the Instruction Set Architecture of the CPU, and may be equal to the width of the CPU.

The memory simply stores information as a binary pattern of 1's and 0's; it is up to the system to interpret what the content of a memory location means. If the CPU is in the fetch cycle, it interprets the fetched memory content as an instruction and decodes it based on the instruction format. In the execute cycle, the information from memory is treated as data. As everyday computer users, we think computers handle English or other alphabets, special characters or numbers. A programmer considers memory content to be the data types of the programming language they use. Now recall figures 1.2 and 1.3 of chapter 1 to reinforce the idea that conversion happens from the computer's user interface to the internal representation and storage.

  • Data Representation in Computers

Information handled by a computer is classified as instructions and data. A broad overview of the internal representation of information is illustrated in figure 3.1. Whether it is numeric or non-numeric data, integer or otherwise, everything is internally represented in binary. It is up to the programmer to handle the interpretation of the binary pattern, and this interpretation is called Data Representation. These data representation schemes are all standardized by international organizations.

Choice of Data representation to be used in a computer is decided by

  • The number types to be represented (integer, real, signed, unsigned, etc.)
  • Range of values likely to be represented (maximum and minimum to be represented)
  • The Precision of the numbers i.e. maximum accuracy of representation (floating point single precision, double precision etc)
  • If non-numeric i.e. character, character representation standard to be chosen. ASCII, EBCDIC, UTF are examples of character representation standards.
  • The hardware support in terms of word width, instruction.

Before we go into the details, let us take an example of interpretation. Say a byte in Memory has value "0011 0001". Although there exists a possibility of so many interpretations as in figure 3.2, the program has only one interpretation as decided by the programmer and declared in the program.

  • Fixed point Number Representation

Fixed point numbers are also known as whole numbers or Integers. The number of bits used in representing the integer also implies the maximum number that can be represented in the system hardware. However for the efficiency of storage and operations, one may choose to represent the integer with one Byte, two Bytes, Four bytes or more. This space allocation is translated from the definition used by the programmer while defining a variable as integer short or long and the Instruction Set Architecture.

In addition to the bit length definition for integers, we also have a choice to represent them as below:

  • Unsigned Integer: A positive number including zero can be represented in this format. All the allotted bits are utilised in defining the number. So if one is using 8 bits to represent an unsigned integer, the range of values that can be represented is 2⁸, i.e. "0" to "255". If 16 bits are used then the range is 2¹⁶, i.e. "0 to 65535".
  • Signed Integer: In this format negative numbers, zero, and positive numbers can be represented. A sign bit indicates the magnitude direction as positive or negative. There are three possible representations for signed integers: Sign Magnitude format, 1's Complement format and 2's Complement format.

Signed Integer – Sign Magnitude format: The Most Significant Bit (MSB) is reserved for indicating the direction of the magnitude (value). A "0" in the MSB means a positive number and a "1" in the MSB means a negative number. If n bits are used for representation, n-1 bits indicate the absolute value of the number.

Examples for n=8:

0010 1111 = + 47 Decimal (Positive number)

1010 1111 = - 47 Decimal (Negative Number)

0111 1110 = +126 (Positive number)

1111 1110 = -126 (Negative Number)

0000 0000 = + 0 (Positive Number)

1000 0000 = - 0 (Negative Number)

Although this method is easy to understand, Sign Magnitude representation has several shortcomings like

  • Zero can be represented in two ways causing redundancy and confusion.
  • The total range for magnitude representation is limited to 2ⁿ⁻¹, although n bits are used.
  • The separate sign bit makes the addition and subtraction more complicated. Also, comparing two numbers is not straightforward.

Signed Integer – 1’s Complement format: In this format too, the MSB is reserved as the sign bit. The difference is in representing the magnitude part of negative numbers: the magnitude bits are inverted, hence the name 1’s Complement form. Positive numbers are represented as in plain binary. Let us see some examples to better our understanding.

1101 0000 = - 47 Decimal (Negative Number)

1000 0001 = -126 (Negative Number)

1111 1111 = - 0 (Negative Number)

  • Converting a given binary number to its 2's complement form

Step 1: -x = x' + 1, where x' is the one's complement of x.

Step 2: Extend the data width of the number if needed, filling the extra bits by sign extension, i.e. the MSB is copied into the added bit positions.

Example: -47 decimal in 8-bit representation: 47 is 0010 1111; its one's complement is 1101 0000; adding 1 gives 1101 0001, which is the 2's complement representation of -47.

As you can see, zero is not represented with redundancy: there is only one way of representing zero. The other problem, the complexity of arithmetic operations, is also eliminated in 2’s complement representation: subtraction is done as addition.

More exercises on number conversion are left to the self-interest of readers.
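
A small Python sketch of those two steps for -47, including the sign extension from Step 2 (illustrative only):

    value = -47
    eight_bit = format((-value ^ 0xFF) + 1, '08b')   # one's complement of 47, plus 1
    print(eight_bit)                                 # 11010001

    # Widening to 16 bits: copy the sign bit (MSB) into the new bit positions.
    sixteen_bit = eight_bit[0] * 8 + eight_bit
    print(sixteen_bit)                               # 1111111111010001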

  • Floating Point Number system

The largest value that can be represented as a whole number is at best 2ⁿ. In the scientific world, we come across numbers like the mass of an electron, 9.10939 × 10⁻³¹ kg, or the velocity of light, 2.99792458 × 10⁸ m/s. Imagine writing such a number on paper without an exponent and converting it into binary for computer representation; you would soon tire of it! It makes no sense to write a number in a non-readable or non-processable form. Hence we write such large or small numbers using an exponent and a mantissa. This is said to be Floating Point representation, or real number representation. The real number system has infinitely many values between 0 and 1.

Representation in computer

Unlike the two's complement representation for integer numbers, floating point numbers use Sign and Magnitude representation for both the mantissa and the exponent. In the number 9.10939 × 10³¹, in decimal form, +31 is the Exponent and 9.10939 is known as the Fraction. Mantissa, significand and fraction are synonymously used terms. In the computer, the representation is binary and the binary point is not fixed. For example, a number such as 23.345 can be written as 2.3345 × 10¹, or 0.23345 × 10², or 2334.5 × 10⁻². The representation 2.3345 × 10¹ is said to be in normalised form.

Floating-point numbers usually use multiple words in memory, as we need to allot a sign bit, a few bits for the exponent and many bits for the mantissa. There are standards for such allocation, which we will see shortly.

  • IEEE 754 Floating Point Representation

We have two standards known as Single Precision and Double Precision from IEEE. These standards enable portability among different computers. Figure 3.3 picturizes Single Precision while figure 3.4 picturizes Double Precision. Single Precision uses a 32-bit format while Double Precision uses a 64-bit word length. As the name suggests, Double Precision can represent fractions with larger accuracy. In both cases, the MSB is the sign bit for the mantissa part, followed by the Exponent and the Mantissa. The exponent part has its own sign bit.

It is to be noted that in single precision we can represent exponents in the range −126 to +127. It is possible that, as a result of arithmetic operations, the resulting exponent does not fit in this range. This situation is called overflow when the exponent is too large and underflow when it is too small. The double precision format has 11 bits for the exponent, allowing exponents from −1022 to +1023. The programmer has to choose between single precision and double precision declarations using knowledge of the data being handled.
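To see these fields concretely, the following Python sketch unpacks the sign, biased exponent and fraction bits of a single-precision value using the standard struct module (the function and variable names are illustrative only):

```python
import struct

def float32_fields(x):
    """Split a number into the sign, unbiased exponent and fraction bits of its IEEE 754 single-precision encoding."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))   # raw 32-bit pattern
    sign     = bits >> 31
    exponent = (bits >> 23) & 0xFF        # 8-bit biased exponent (bias = 127)
    fraction = bits & 0x7FFFFF            # 23-bit fraction (mantissa without the hidden 1)
    return sign, exponent - 127, fraction

print(float32_fields(23.345))   # (0, 4, ...) -> 23.345 is roughly +1.459... x 2**4
```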

Floating point operations on a regular CPU without hardware support are very slow. Traditionally, a special-purpose unit known as a co-processor is used; this co-processor works in tandem with the main CPU. The programmer should use a float declaration only when the data really is in real-number form; float declarations are not to be used gratuitously.

  • Decimal Numbers Representation

Decimal numbers (radix 10) are represented and processed in the system with the support of additional hardware. We deal with numbers in decimal format in everyday life. Some machines implement decimal arithmetic too, much like floating point arithmetic hardware. In such a case, the CPU handles decimal numbers in BCD (binary coded decimal) form and performs BCD arithmetic operations. BCD operates on radix 10, and this hardware operates without conversion to pure binary. Each decimal digit is represented by a nibble (4 bits) in packed BCD form. BCD operations require not only special hardware but also a decimal instruction set.
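As an illustration of packed BCD, the sketch below encodes each decimal digit into its own nibble (the helper name is hypothetical; real machines do this in hardware):

```python
def to_packed_bcd(n):
    """Encode a non-negative decimal integer as packed BCD, one nibble (4 bits) per decimal digit."""
    result = 0
    for shift, digit in enumerate(reversed(str(n))):
        result |= int(digit) << (4 * shift)
    return result

print(hex(to_packed_bcd(1995)))   # 0x1995 - each hex nibble holds one decimal digit
print(bin(to_packed_bcd(59)))     # 0b1011001 -> nibbles 0101 1001
```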

  • Exceptions and Error Detection

All of us know that when we do arithmetic operations, we may get answers with more digits than the operands (e.g. 8 × 2 = 16). This happens in computer arithmetic too. When the result exceeds the allotted size of the variable or register, it becomes an error or exception. The exception conditions associated with numbers and number operations are Overflow, Underflow, Truncation, Rounding and Multiple Precision. These are detected by the associated hardware in the arithmetic unit. These exceptions apply to both fixed point and floating point operations. Each of these exceptional conditions has a flag bit assigned in the Processor Status Word (PSW). We will discuss these in more detail in later chapters.
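The following sketch mimics how hardware might flag overflow for 8-bit signed addition; the function and the overflow flag are invented for the example:

```python
def add_signed_8bit(a, b):
    """Add two 8-bit signed integers and report whether the result overflows the -128..127 range."""
    result = a + b
    overflow = not (-128 <= result <= 127)
    wrapped = ((result + 128) % 256) - 128   # what an 8-bit register would actually hold
    return wrapped, overflow

print(add_signed_8bit(100, 50))   # (-106, True)  -> overflow flag set
print(add_signed_8bit(100, 20))   # (120, False)
```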

  • Character Representation

Another data type is non-numeric, largely character sets. We use a human-understandable character set to communicate with the computer, i.e. for both input and output. Standard character sets like EBCDIC and ASCII are chosen to represent alphabets, numbers and special characters. Nowadays the Unicode standard is also used for non-English languages like Chinese, Hindi, Spanish, etc. These codes are documented and freely available on the internet; interested readers may look them up to learn more.



Binary Number System


The binary number system is one of the four commonly used number systems. In computer applications, binary numbers are represented by only two symbols or digits, 0 (zero) and 1 (one), and are expressed in the base-2 numeral system. For example, (101)₂ is a binary number. Each digit in this system is called a bit.

A number system is a way to represent numbers in computer architecture. There are four commonly used number systems:

  • Binary number system (base 2)
  • Octal number system (base 8)
  • Decimal number system (base 10)
  • Hexadecimal number system (base 16)

In this article, let us discuss what the binary number system is, conversion between it and other systems, the binary table, bit positions, binary operations such as addition, subtraction, multiplication and division, its uses, and solved examples.

What is a Binary Number System?

Binary Number System: In digital electronics and mathematics, a binary number is defined as a number expressed in the binary, or base-2, numeral system. It describes numeric values using two separate symbols: 1 (one) and 0 (zero). The base-2 system is a positional notation with 2 as the radix.

The binary system is used internally by almost all modern computers and computer-based devices because of its direct implementation in electronic circuits using logic gates. Every digit is referred to as a bit.

Example: Convert 4 in binary.

4 in binary is (100)₂.

Here, 4 is given in the decimal number system, where we can represent a number using the digits 0–9. In the binary number system, however, we use only two digits, 0 and 1.

Now, let's discuss how to convert 4 into the binary number system. The following steps show the conversion.

Step 1: First, divide the number 4 by 2. Use the integer quotient obtained in this step as the dividend for the next step, and continue until the quotient becomes 0.

Division   Quotient   Remainder
4 ÷ 2      2          0
2 ÷ 2      1          0
1 ÷ 2      0          1

Step 2: Now, write the remainders in reverse order (i.e. from bottom to top).

Here, the Least Significant Bit (LSB) is 0 and the Most Significant Bit (MSB) is 1.

Hence, the decimal number 4 in binary is 100₂.

To find how many bits 4 has in binary, we simply count the digits: 100₂ has two zeroes and one one, i.e. three digits in total.

Therefore, 4 in binary is a 3-bit number.
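The repeated-division procedure above translates directly into code; here is a short Python sketch (the function name is illustrative):

```python
def decimal_to_binary(n):
    """Convert a positive decimal integer to a binary string by repeated division by 2."""
    remainders = []
    while n > 0:
        remainders.append(n % 2)   # the remainder becomes the next bit
        n //= 2                    # the integer quotient feeds the next step
    return "".join(str(bit) for bit in reversed(remainders)) or "0"

print(decimal_to_binary(4))    # 100
print(decimal_to_binary(30))   # 11110
```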

What is Bit in Binary Number?

A single binary digit is called a "bit". A binary number consists of several bits. Examples:

  • 10101 is a five-bit binary number
  • 101 is a three-bit binary number
  • 100001 is a six-bit binary number

Binary Numbers Table

The binary representations of the decimal numbers from 1 to 30 are given in the table below.

Decimal Binary Decimal Binary Decimal Binary
1 1 11 1011 21 10101
2 10 12 1100 22 10110
3 11 13 1101 23 10111
4 100 14 1110 24 11000
5 101 15 1111 25 11001
6 110 16 10000 26 11010
7 111 17 10001 27 11011
8 1000 18 10010 28 11100
9 1001 19 10011 29 11101
10 1010 20 10100 30 11110

How to Calculate Binary Numbers

For example, take the decimal number 1235.

1 2 3 5

This indicates,

1235 = 1 × 1000 + 2 × 100 + 3 × 10 + 5 × 1

1000 = 10³ = 10 × 10 × 10
100 = 10² = 10 × 10
10 = 10¹ = 10
1 = 10⁰ (any value raised to the exponent zero is one)

The expansion above can be written as a table of place values:

10³ 10² 10¹ 10⁰
1   2   3   5

1235 = 1 × 10³ + 2 × 10² + 3 × 10¹ + 5 × 10⁰

The decimal number system operates in base 10, where the digits 0–9 represent numbers. The binary system operates in base 2, where only the digits 0 and 1 represent numbers; the base is also known as the radix. Put differently, the place values of the two systems compare as follows.

Decimal 10³ 10² 10¹ 10⁰
Binary  2³  2²  2¹  2⁰

In base 10, we place the digits in columns 10⁰, 10¹ and so on. When a column needs to hold a value higher than 9, we carry into the next column: for instance, to add 10 to the 10⁰ column, we add 1 to the 10¹ column.

In base 2, we place the digits in columns 2⁰, 2¹ and so on. To place a value higher than 1 in the 2ⁿ column, we carry into the 2ⁿ⁺¹ column: for instance, to add 3 to the 2⁰ column, we add 1 to the 2¹ column (and leave 1 in the 2⁰ column).
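The place-value comparison above can be checked in a few lines of Python (purely illustrative):

```python
# Expand 1235 in base 10 and 1011 (binary) in base 2 using place values.
decimal_digits = [1, 2, 3, 5]
print(sum(d * 10 ** i for i, d in enumerate(reversed(decimal_digits))))   # 1235

binary_digits = [1, 0, 1, 1]
print(sum(d * 2 ** i for i, d in enumerate(reversed(binary_digits))))     # 11
```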

Position in Binary Number System

In the Binary system, we have ones, twos, fours etc…

For example 1011.110

It is shown like this:

1 × 8 + 0 × 4 + 1 × 2 + 1 × 1 + 1 × ½ + 1 × ¼ + 0 × ⅛

= 11.75 in Decimal

To show the values greater than or less than one, the numbers can be placed to the left or right of the point.

For 10.1, the "10" to the left of the point is the whole-number part; each place further to the left is worth twice as much.

The first digit to the right of the point is worth a half (½), and each place further to the right is worth half as much again.

In the example given above:

  • “10” shows ‘2’ in decimal.
  • “.1” shows ‘half’.
  • So, “10.1” in binary is 2.5 in decimal.
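A short sketch for converting a binary string with a fractional part, matching the 1011.110 example above (the function name is an assumption):

```python
def binary_fraction_to_decimal(s):
    """Convert a binary string such as '1011.110' to its decimal value."""
    whole, _, frac = s.partition(".")
    value = int(whole, 2) if whole else 0
    for i, bit in enumerate(frac, start=1):
        value += int(bit) * 2 ** -i    # 1/2, 1/4, 1/8, ...
    return value

print(binary_fraction_to_decimal("1011.110"))   # 11.75
print(binary_fraction_to_decimal("10.1"))       # 2.5
```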

Binary Arithmetic Operations

Just as we perform arithmetic operations on decimal numerals, we can perform addition, subtraction, multiplication and division on binary numbers. Let us learn them one by one.

Binary Addition

Adding two binary numbers gives another binary number, and it is the simplest of the four operations. The addition of two single-digit binary numbers is given in the table below.

A   B   A + B
0   0   0
0   1   1
1   0   1
1   1   0 (carry 1)

Let us take an example of two binary numbers and add them: for instance, 1011₂ + 1101₂ = 11000₂ (11 + 13 = 24 in decimal).
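The column-by-column addition with carries can be sketched in Python as follows (bit strings in, bit string out; names are illustrative):

```python
def add_binary(a, b):
    """Add two binary strings column by column, propagating the carry."""
    a, b = a.zfill(len(b)), b.zfill(len(a))   # pad to a common width
    result, carry = [], 0
    for x, y in zip(reversed(a), reversed(b)):
        total = int(x) + int(y) + carry
        result.append(str(total % 2))
        carry = total // 2
    if carry:
        result.append("1")
    return "".join(reversed(result))

print(add_binary("1011", "1101"))   # 11000 (11 + 13 = 24)
```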

Binary Subtraction

Subtracting two binary numbers also gives another binary number and is a straightforward method. The subtraction of two single-digit binary numbers is given in the table below.

A   B   A − B
0   0   0
0   1   1 (borrow 1)
1   0   1
1   1   0

Let us take an example of two binary numbers and subtract them.

Example: Subtract 1010₂ from 1101₂.

Solution: 1101₂ − 1010₂ = 0011₂
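A sketch of binary subtraction using the borrow rules from the table (it assumes the first operand is not smaller than the second; names are illustrative):

```python
def subtract_binary(a, b):
    """Subtract binary string b from a (a >= b), borrowing from the next column when needed."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    result, borrow = [], 0
    for x, y in zip(reversed(a), reversed(b)):
        diff = int(x) - int(y) - borrow
        borrow = 1 if diff < 0 else 0
        result.append(str(diff % 2))
    return "".join(reversed(result))

print(subtract_binary("1101", "1010"))   # 0011 (13 - 10 = 3)
```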

Binary Multiplication

The multiplication process is the same for binary numbers as it is for decimal numerals. Let us understand it with an example.

Example: Multiply 1101₂ and 1010₂.

Solution: 1101₂ × 1010₂ = 10000010₂ (13 × 10 = 130 in decimal)
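Binary multiplication works by shift-and-add, just like long multiplication in decimal; a brief illustrative sketch:

```python
def multiply_binary(a, b):
    """Multiply two binary strings with shift-and-add, returning a binary string."""
    product = 0
    for i, bit in enumerate(reversed(b)):
        if bit == "1":
            product += int(a, 2) << i   # add a shifted copy of a for every 1-bit in b
    return format(product, "b")

print(multiply_binary("1101", "1010"))   # 10000010 (13 x 10 = 130)
```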

Binary Division

Binary division is similar to the decimal long-division method. We will learn with an example here.

Example: Divide 1010₂ by 10₂.

Solution: 1010₂ ÷ 10₂ = 101₂ (10 ÷ 2 = 5 in decimal)
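And a matching sketch that produces the quotient and remainder of a binary division (names are illustrative):

```python
def divide_binary(a, b):
    """Divide binary string a by binary string b, returning (quotient, remainder) as binary strings."""
    dividend, divisor = int(a, 2), int(b, 2)
    quotient, remainder = divmod(dividend, divisor)
    return format(quotient, "b"), format(remainder, "b")

print(divide_binary("1010", "10"))   # ('101', '0') -> 10 / 2 = 5
```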

Uses of Binary Number System

Binary numbers are commonly used in computer applications. All programs and data in computers, whatever the programming language (C, C++, Java, etc.), are ultimately encoded using the binary digits 0 and 1, because this is the only form the hardware can process. This two-digit number system is therefore used to represent data and information as discrete bits.

Problems and Solutions

Let us practice some of the problems for better understanding:

Question 1: What is the binary number 1.1 in decimal?

Step 1: The 1 on the left-hand side of the point is in the ones position, so it is 1.

Step 2: The 1 on the right-hand side is in the halves position, so it is ½ (0.5).

Step 3: So, 1.1₂ = 1.5 in decimal.

Question 2: Write 10.11₂ in decimal.

10.11₂ = 1 × 2¹ + 0 × 2⁰ + 1 × (½)¹ + 1 × (½)²

= 2 + 0 + ½ + ¼

So, 10.11₂ is 2.75 in decimal.


Binary to Decimal converter


A binary number is a number expressed in the base-2 numeral system. Binary digits have 2 symbols: zero (0) and one (1). Each digit of a binary number counts a power of 2.

Binary number example: 1101₂ = 1×2³ + 1×2² + 0×2¹ + 1×2⁰ = 13₁₀

A decimal number is a number expressed in the base-10 numeral system. Decimal digits have 10 symbols: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9. Each digit of a decimal number counts a power of 10.

Decimal number example: 653₁₀ = 6×10² + 5×10¹ + 3×10⁰

  • How to convert binary to decimal

For a binary number with n digits:

dₙ₋₁ … d₃ d₂ d₁ d₀

the decimal value is equal to the sum of the binary digits (dᵢ) times their corresponding powers of 2 (2ⁱ):

decimal = d₀×2⁰ + d₁×2¹ + d₂×2² + ...

Example: find the decimal value of 111001₂:

binary number:  1   1   1   0   0   1
power of 2:     2⁵  2⁴  2³  2²  2¹  2⁰

111001₂ = 1⋅2⁵ + 1⋅2⁴ + 1⋅2³ + 0⋅2² + 0⋅2¹ + 1⋅2⁰ = 57₁₀
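The positional formula above is easy to code; a minimal Python sketch:

```python
def binary_to_decimal(s):
    """Sum each binary digit times its power of 2, e.g. '111001' -> 57."""
    return sum(int(bit) * 2 ** i for i, bit in enumerate(reversed(s)))

print(binary_to_decimal("111001"))   # 57
print(binary_to_decimal("1101"))     # 13
```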

Binary to decimal conversion table

Binary   Decimal   Hex
0 0 0
1 1 1
10 2 2
11 3 3
100 4 4
101 5 5
110 6 6
111 7 7
1000 8 8
1001 9 9
1010 10 A
1011 11 B
1100 12 C
1101 13 D
1110 14 E
1111 15 F
10000 16 10
10001 17 11
10010 18 12
10011 19 13
10100 20 14
10101 21 15
10110 22 16
10111 23 17
11000 24 18
11001 25 19
11010 26 1A
11011 27 1B
11100 28 1C
11101 29 1D
11110 30 1E
11111 31 1F
100000 32 20
1000000 64 40
10000000 128 80
100000000 256 100



Write a program to print a Binary representation of a given number. 

Source: Microsoft Interview Set-3  

Method 1: Iterative Method:

For any number, we can check whether its i-th bit is 0 (OFF) or 1 (ON) by ANDing it with 2^i (i.e. 1 shifted left by i positions).

Let us take unsigned integers (32 bits), which consist of bits 0–31. To print the binary representation of an unsigned integer, start from the 31st bit and check whether it is ON or OFF; if it is ON print "1", else print "0". Then check the 30th bit, and so on. Doing this for all bits from 31 down to 0 gives the binary representation of the number.

Below is the implementation of the above approach:
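The article's original code is not reproduced in this extract; as a stand-in, here is a minimal Python sketch of the iterative idea described above (the function name is ours):

```python
def print_binary_iterative(num):
    """Print the 32-bit binary representation of an unsigned integer, bit 31 first."""
    for i in range(31, -1, -1):
        print("1" if num & (1 << i) else "0", end="")
    print()

print_binary_iterative(5)   # 00000000000000000000000000000101
```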

Time Complexity: O(1) Auxiliary Space: O(1)

Method 2: Recursive  Approach:

Following is a recursive method to print the binary representation of 'NUM'.
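A Python sketch of the recursive idea (recurse on NUM // 2 first, then print the remainder on the way back; the function name is ours):

```python
def print_binary_recursive(num):
    """Recursively print the binary representation of a positive integer."""
    if num > 1:
        print_binary_recursive(num // 2)   # handle the higher-order bits first
    print(num % 2, end="")

print_binary_recursive(10)   # 1010
print()
```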

Time Complexity: O(log N) Auxiliary Space: O(log N)

Method 3: Recursive using bitwise operator  

Steps to convert a decimal number to its binary representation are given below (a short sketch follows the list):

  • Check n > 0
  • Right shift the number by 1 bit and recursive function call
  • Print the bits of number
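A sketch of the same recursion expressed with bitwise operators, as described in the steps above (illustrative only):

```python
def print_binary_bitwise(n):
    """Recursively print the bits of n using right shift and bitwise AND."""
    if n > 0:
        print_binary_bitwise(n >> 1)   # right shift by 1 bit and recurse
        print(n & 1, end="")           # then print the current bit

print_binary_bitwise(10)   # 1010
print()
```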

This article is compiled by Narendra Kangralkar . 



Data Representation in Computer: Number Systems, Characters, Audio, Image and Video


What is Data Representation in Computer?

A computer uses a fixed number of bits to represent a piece of data which could be a number, a character, image, sound, video, etc. Data representation is the method used internally to represent data in a computer. Let us see how various types of data can be represented in computer memory.

Before discussing data representation of numbers, let us see what a number system is.

Number Systems

Number systems are the techniques used to represent numbers in computer system architecture; every value that you save to or retrieve from computer memory has a defined number system.

A number is a mathematical object used to count, label, and measure. A number system is a systematic way to represent numbers. The number system we use in our day-to-day life is the decimal number system that uses 10 symbols or digits.

The number 289 is pronounced as two hundred and eighty-nine and it consists of the symbols 2, 8, and 9. Similarly, there are other number systems. Each has its own symbols and method for constructing a number.

A number system has a unique base, which depends upon the number of symbols. The number of symbols used in a number system is called the base or radix of a number system.

Let us discuss some of these number systems. Computer architecture supports the following number systems:

  • Binary number system
  • Octal number system
  • Decimal number system
  • Hexadecimal number system

The binary number system has only two digits, 0 and 1. Every value is represented with 0 and 1 in this number system. The base of the binary number system is 2, because it has only two digits.

The octal number system has eight (8) digits, from 0 to 7. Every value is represented with 0, 1, 2, 3, 4, 5, 6 and 7 in this number system. The base of the octal number system is 8, because it has only 8 digits.

The decimal number system has ten (10) digits, from 0 to 9. Every value is represented with 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9 in this number system. The base of the decimal number system is 10, because it has only 10 digits.

The hexadecimal number system has sixteen (16) alphanumeric digits, 0 to 9 and A to F. Every value is represented with 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E and F in this number system. The base of the hexadecimal number system is 16, because it has 16 digits.

Here A is 10, B is 11, C is 12, D is 13, E is 14 and F is 15 .
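Python's built-in conversions can show the same value in all four number systems mentioned above (a small illustrative snippet):

```python
value = 289
print(bin(value))   # 0b100100001  (binary, base 2)
print(oct(value))   # 0o441        (octal, base 8)
print(value)        # 289          (decimal, base 10)
print(hex(value))   # 0x121        (hexadecimal, base 16)
```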

Data Representation of Characters

There are different methods to represent characters . Some of them are discussed below:


The code called ASCII (pronounced "AS-key"), which stands for American Standard Code for Information Interchange, uses 7 bits to represent each character in computer memory. The ASCII representation has been adopted as a standard by the U.S. government and is widely accepted.

A unique integer number is assigned to each character. This number, called the ASCII code of that character, is converted into binary for storing in memory. For example, the ASCII code of A is 65; its 7-bit binary equivalent is 1000001.

Since there are exactly 128 unique combinations of 7 bits, this 7-bit code can represent only 128 characters. Another version is ASCII-8, also called extended ASCII, which uses 8 bits for each character and can represent 256 different characters.

For example, the letter A is represented by 01000001, B by 01000010 and so on. ASCII code is enough to represent all of the standard keyboard characters.
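The ASCII codes mentioned above can be inspected directly in Python (illustrative only):

```python
print(ord("A"))                  # 65
print(format(ord("A"), "07b"))   # 1000001  (7-bit ASCII)
print(format(ord("A"), "08b"))   # 01000001 (8-bit extended ASCII)
print(chr(66))                   # B
```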

It stands for Extended Binary Coded Decimal Interchange Code. This is similar to ASCII and is an 8-bit code used in computers manufactured by International Business Machines (IBM). It is capable of encoding 256 characters.

If ASCII-coded data is to be used in a computer that uses EBCDIC representation, it is necessary to transform ASCII code to EBCDIC code. Similarly, if EBCDIC coded data is to be used in an ASCII computer, EBCDIC code has to be transformed to ASCII.

ISCII stands for Indian Standard Code for Information Interchange or Indian Script Code for Information Interchange. It is an encoding scheme for representing various writing systems of India. ISCII uses 8-bits for data representation.

It was developed by a standardization committee under the Department of Electronics during 1986-88 and adopted by the Bureau of Indian Standards (BIS). Nowadays ISCII has largely been replaced by Unicode.

Using 8-bit ASCII we can represent only 256 characters. This cannot represent all characters of written languages of the world and other symbols. Unicode is developed to resolve this problem. It aims to provide a standard character encoding scheme, which is universal and efficient.

It provides a unique number for every character, no matter what the language and platform be. Unicode originally used 16 bits which can represent up to 65,536 characters. It is maintained by a non-profit organization called the Unicode Consortium.

The Consortium first published version 1.0.0 in 1991 and continues to develop standards based on that original work. Nowadays Unicode uses more than 16 bits and hence it can represent more characters. Unicode can represent characters in almost all written languages of the world.

Data Representation of Audio, Image and Video

In most cases, we may have to represent and process data other than numbers and characters. This may include audio data, images, and videos. We can see that like numbers and characters, the audio, image, and video data also carry information.

We will see different file formats for storing sound, image, and video .

Multimedia data such as audio, image, and video are stored in different types of files. The variety of file formats is due to the fact that there are quite a few approaches to compressing the data and a number of different ways of packaging the data.

For example, an image is most popularly stored in the Joint Photographic Experts Group (JPEG) file format. An image file consists of two parts – header information and image data. Information such as the name of the file, its size, the modified date, the file format, etc. is stored in the header part.

The intensity value of all pixels is stored in the data part of the file. The data can be stored uncompressed or compressed to reduce the file size. Normally, the image data is stored in compressed form. Let us understand what compression is.

Take a simple example of a pure black image of size 400 × 400 pixels. We can repeat the information black, black, …, black for all 160,000 (400 × 400) pixels. This is the uncompressed form, while in the compressed form "black" is stored only once, along with the information to repeat it 160,000 times.
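The black-image example is essentially run-length encoding; a toy sketch of the idea (the function name and data are made up for illustration):

```python
from itertools import groupby

def run_length_encode(pixels):
    """Collapse runs of identical values into (value, count) pairs."""
    return [(value, len(list(run))) for value, run in groupby(pixels)]

image = ["black"] * 160_000            # a pure black 400 x 400 image, uncompressed
print(run_length_encode(image))        # [('black', 160000)] - stored once, with a repeat count
```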

Numerous such techniques are used to achieve compression. Depending on the application, images are stored in various file formats such as the bitmap file format (BMP), Tagged Image File Format (TIFF), Graphics Interchange Format (GIF) and Portable Network Graphics (PNG).

What we said about header information and compression also applies to audio and video files. Digital audio data can be stored in different file formats like WAV, MP3, MIDI, AIFF, etc. An audio file format, sometimes referred to as a "container format", describes how the digital audio data is stored.

For example, WAV file format typically contains uncompressed sound and MP3 files typically contain compressed audio data. The synthesized music data is stored in MIDI(Musical Instrument Digital Interface) files.

Similarly, video is stored in different file formats such as AVI (Audio Video Interleave) – a format designed to store both audio and video data in a standard package that allows synchronous audio-with-video playback – as well as MP4, MPEG-2, WMV, etc.

FAQs About Data Representation in Computer

What is a number system? Give an example.

A number system is a systematic way to represent numbers. Computer architecture supports the following number systems: 1. Binary number system 2. Octal number system 3. Decimal number system 4. Hexadecimal number system


Read the memo from CrowdStrike explaining how its update broke the world's computers

  • CrowdStrike explained in a memo how a faulty update caused a global Microsoft IT outage on Friday.
  • CrowdStrike said on Wednesday that its Rapid Response Content update contained an undetected error.
  • It was the largest IT outage in history; some companies, like Delta Air Lines, are still recovering.


Many of the world's airlines, banks, and retailers came to a grinding halt last Friday after a faulty update caused a global Microsoft IT outage. Now the cybersecurity firm behind the update, CrowdStrike, is explaining what happened.

In a memo released Wednesday , CrowdStrike said an update to its Rapid Response Content, which is "designed to respond to the changing threat landscape at operational speed," contained an "undetected error."

That error — explained in CrowdStrike's full memo, included below — caused chaos around the world . Thousands of flights were canceled , emergency 911 services went down , retailers closed stores or accepted only cash payments , and some hospital operations were delayed or disrupted .

CrowdStrike quickly deployed a fix, but it took time to go into effect , with some systems requiring a manual reboot .

It amounted to the largest IT outage in history , and some companies, such as Delta Air Lines , are still recovering from the fallout.

Read CrowdStrike's memo below:

Preliminary Post Incident Review (PIR): Content Configuration Update Impacting the Falcon Sensor and the Windows Operating System (BSOD)

This is CrowdStrike's preliminary Post Incident Review (PIR). We will be detailing our full investigation in the forthcoming Root Cause Analysis that will be released publicly. Throughout this PIR, we have used generalized terminology to describe the Falcon platform for improved readability. Terminology in other documentation may be more specific and technical.

What Happened?

On Friday, July 19, 2024 at 04:09 UTC, as part of regular operations, CrowdStrike released a content configuration update for the Windows sensor to gather telemetry on possible novel threat techniques. These updates are a regular part of the dynamic protection mechanisms of the Falcon platform. The problematic Rapid Response Content configuration update resulted in a Windows system crash.

Systems in scope include Windows hosts running sensor version 7.11 and above that were online between Friday, July 19, 2024 04:09 UTC and Friday, July 19, 2024 05:27 UTC and received the update. Mac and Linux hosts were not impacted.

The defect in the content update was reverted on Friday, July 19, 2024 at 05:27 UTC. Systems coming online after this time, or that did not connect during the window, were not impacted.

What Went Wrong and Why?

CrowdStrike delivers security content configuration updates to our sensors in two ways: Sensor Content that is shipped with our sensor directly, and Rapid Response Content that is designed to respond to the changing threat landscape at operational speed. The issue on Friday involved a Rapid Response Content update with an undetected error.

Sensor Content

Sensor Content provides a wide range of capabilities to assist in adversary response. It is always part of a sensor release and not dynamically updated from the cloud. Sensor Content includes on-sensor AI and machine learning models, and comprises code written expressly to deliver longer-term, reusable capabilities for CrowdStrike's threat detection engineers.

These capabilities include Template Types, which have pre-defined fields for threat detection engineers to leverage in Rapid Response Content. Template Types are expressed in code. All Sensor Content, including Template Types, go through an extensive QA process, which includes automated testing, manual testing, validation and rollout steps.

The sensor release process begins with automated testing, both prior to and after merging into our code base. This includes unit testing, integration testing, performance testing and stress testing. This culminates in a staged sensor rollout process that starts with dogfooding internally at CrowdStrike, followed by early adopters. It is then made generally available to customers. Customers then have the option of selecting which parts of their fleet should install the latest sensor release ('N'), or one version older ('N-1') or two versions older ('N-2') through Sensor Update Policies.

The event of Friday, July 19, 2024 was not triggered by Sensor Content, which is only delivered with the release of an updated Falcon sensor. Customers have complete control over the deployment of the sensor — which includes Sensor Content and Template Types.

Rapid Response Content

Rapid Response Content is used to perform a variety of behavioral pattern-matching operations on the sensor using a highly optimized engine. Rapid Response Content is a representation of fields and values, with associated filtering. This Rapid Response Content is stored in a proprietary binary file that contains configuration data. It is not code or a kernel driver.

Rapid Response Content is delivered as "Template Instances," which are instantiations of a given Template Type. Each Template Instance maps to specific behaviors for the sensor to observe, detect or prevent. Template Instances have a set of fields that can be configured to match the desired behavior.

In other words, Template Types represent a sensor capability that enables new telemetry and detection, and their runtime behavior is configured dynamically by the Template Instance (i.e., Rapid Response Content).

Rapid Response Content provides visibility and detections on the sensor without requiring sensor code changes. This capability is used by threat detection engineers to gather telemetry, identify indicators of adversary behavior and perform detections and preventions. Rapid Response Content is behavioral heuristics, separate and distinct from CrowdStrike's on-sensor AI prevention and detection capabilities.

Rapid Response Content Testing and Deployment

Rapid Response Content is delivered as content configuration updates to the Falcon sensor. There are three primary systems: the Content Configuration System, the Content Interpreter and the Sensor Detection Engine. The Content Configuration System is part of the Falcon platform in the cloud, while the Content Interpreter and Sensor Detection Engine are components of the Falcon sensor. The Content Configuration System is used to create Template Instances, which are validated and deployed to the sensor through a mechanism called Channel Files. The sensor stores and updates its content configuration data through Channel Files, which are written to disk on the host.

The Content Interpreter on the sensor reads the Channel File and interprets the Rapid Response Content, enabling the Sensor Detection Engine to observe, detect or prevent malicious activity, depending on the customer's policy configuration. The Content Interpreter is designed to gracefully handle exceptions from potentially problematic content.

Newly released Template Types are stress tested across many aspects, such as resource utilization, system performance impact and event volume. For each Template Type, a specific Template Instance is used to stress test the Template Type by matching against any possible value of the associated data fields to identify adverse system interactions.

Template Instances are created and configured through the use of the Content Configuration System, which includes the Content Validator that performs validation checks on the content before it is published.

Timeline of Events: Testing and Rollout of the InterProcessCommunication (IPC) Template Type

Sensor Content Release: On February 28, 2024, sensor 7.11 was made generally available to customers, introducing a new IPC Template Type to detect novel attack techniques that abuse Named Pipes. This release followed all Sensor Content testing procedures outlined above in the Sensor Content section.

Template Type Stress Testing: On March 05, 2024, a stress test of the IPC Template Type was executed in our staging environment, which consists of a variety of operating systems and workloads. The IPC Template Type passed the stress test and was validated for use.

Template Instance Release via Channel File 291: On March 05, 2024, following the successful stress test, an IPC Template Instance was released to production as part of a content configuration update. Subsequently, three additional IPC Template Instances were deployed between April 8, 2024 and April 24, 2024. These Template Instances performed as expected in production.

What Happened on July 19, 2024?

On July 19, 2024, two additional IPC Template Instances were deployed. Due to a bug in the Content Validator, one of the two Template Instances passed validation despite containing problematic content data. Based on the testing performed before the initial deployment of the Template Type (on March 05, 2024), trust in the checks performed in the Content Validator, and previous successful IPC Template Instance deployments, these instances were deployed into production.

When received by the sensor and loaded into the Content Interpreter, problematic content in Channel File 291 resulted in an out-of-bounds memory read triggering an exception. This unexpected exception could not be gracefully handled, resulting in a Windows operating system crash (BSOD).

How Do We Prevent This From Happening Again?

Software Resiliency and Testing

Improve Rapid Response Content testing by using testing types such as:

  • Local developer testing
  • Content update and rollback testing
  • Stress testing, fuzzing and fault injection
  • Stability testing
  • Content interface testing

Add additional validation checks to the Content Validator for Rapid Response Content. A new check is in process to guard against this type of problematic content from being deployed in the future.

Enhance existing error handling in the Content Interpreter.

Rapid Response Content Deployment

  • Implement a staggered deployment strategy for Rapid Response Content in which updates are gradually deployed to larger portions of the sensor base, starting with a canary deployment.
  • Improve monitoring for both sensor and system performance, collecting feedback during Rapid Response Content deployment to guide a phased rollout.
  • Provide customers with greater control over the delivery of Rapid Response Content updates by allowing granular selection of when and where these updates are deployed.
  • Provide content update details via release notes, which customers can subscribe to.

In addition to this preliminary Post Incident Review, CrowdStrike is committed to publicly releasing the full Root Cause Analysis once the investigation is complete.


