UNIT 1

DIGITAL ELECTRONICS: Introduction, Positional Numbering System, Converting Between Bases, Signed Integer Representation, Floating Point Representation, Character Code, Fixed Point Addition and Subtraction, Fixed Point Multiplication and Division, Floating Point Arithmetic, Boolean Algebra, Boolean Expression, Boolean Identities, K-Maps & Map Minimization, Logic Gates, Digital Components, Combinational Circuits, Sequential Circuits.


DIGITAL ELECTRONICS

INTRODUCTION


This unit is primarily concerned with positional number systems, including our familiar decimal system as well as the binary, octal and hexadecimal systems, which occur frequently in dealing with computers. A few applications of number systems, some of them recreational, will be given, and finally the rather prosaic topic of decimal multiplication will be discussed in some detail. First of all, though, we will consider a few of the number systems which preceded the adoption of our Hindu-Arabic positional notation, the vestiges of which may still be seen today.

POSITIONAL NUMBER SYSTEM


When we type some letters or words, the computer translates them into numbers, as computers can understand only numbers. A computer can understand the positional number system, in which there are only a few symbols called digits, and these symbols represent different values depending on the position they occupy in the number. The value of each digit in a number can be determined using -
  • The digit
  • The position of the digit in the number
  • The base of the number system (where the base is defined as the total number of digits available in the number system)

Decimal Number System

The number system that we use in our day-to-day life is the decimal number system. Decimal number system has base 10 as it uses 10 digits from 0 to 9. In decimal number system, the successive positions to the left of the decimal point represent units, tens, hundreds, thousands, and so on. Each position represents a specific power of the base (10). For example, the decimal number 1234 consists of the digit 4 in the units position, 3 in the tens position, 2 in the hundreds position, and 1 in the thousands position. Its value can be written as

Example

1234 = (1 × 1000) + (2 × 100) + (3 × 10) + (4 × 1)
     = (1 × 10^3) + (2 × 10^2) + (3 × 10^1) + (4 × 10^0)

As a computer programmer or an IT professional, you should understand the following number systems which are frequently used in computers.

Binary Number System

Characteristics of the binary number system are as follows -
  • Uses two digits, 0 and 1
  • Also called the base 2 number system
  • Each position in a binary number represents a power of the base (2); the rightmost position represents 2^0
  • The last (leftmost) position represents 2^x, where x = (total number of positions) - 1

Example

Binary Number: (10101)2

Calculating Decimal Equivalent -

(10101)2 = (1 × 2^4) + (0 × 2^3) + (1 × 2^2) + (0 × 2^1) + (1 × 2^0) = 16 + 0 + 4 + 0 + 1 = (21)10

Note - (10101)2 is normally written as 10101.

Octal Number System

Characteristics of the octal number system are as follows -
  • Uses eight digits, 0, 1, 2, 3, 4, 5, 6, 7
  • Also called the base 8 number system
  • Each position in an octal number represents a power of the base (8); the rightmost position represents 8^0
  • The last (leftmost) position represents 8^x, where x = (total number of positions) - 1

Example

Octal Number: (12570)8

Calculating Decimal Equivalent -

(12570)8 = (1 × 8^4) + (2 × 8^3) + (5 × 8^2) + (7 × 8^1) + (0 × 8^0) = 4096 + 1024 + 320 + 56 + 0 = (5496)10

Note - (12570)8 is normally written as 12570.

Hexadecimal Number System

Characteristics of the hexadecimal number system are as follows -
  • Uses 10 digits and 6 letters: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F
  • The letters represent numbers starting from 10: A = 10, B = 11, C = 12, D = 13, E = 14, F = 15
  • Also called the base 16 number system
  • Each position in a hexadecimal number represents a power of the base (16); the rightmost position represents 16^0
  • The last (leftmost) position represents 16^x, where x = (total number of positions) - 1

Example

Hexadecimal Number: (19FDE)16

Calculating Decimal Equivalent -

(19FDE)16 = (1 × 16^4) + (9 × 16^3) + (F × 16^2) + (D × 16^1) + (E × 16^0)
          = (1 × 65536) + (9 × 4096) + (15 × 256) + (13 × 16) + (14 × 1) = (106462)10

Note - (19FDE)16 is normally written as 19FDE.
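The three worked examples above can be checked with Python's built-in int(), which parses a digit string in a given base (a quick sketch, not part of the original examples):

```python
# int(string, base) interprets the digit string in the given base
print(int("10101", 2))   # binary 10101
print(int("12570", 8))   # octal 12570
print(int("19FDE", 16))  # hexadecimal 19FDE
```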

CONVERTING BETWEEN BASES

Number Bases

  • Digital computers usually store integers in base 2 (binary), base 8 (octal), or base 16 (hexadecimal)
  • EXAMPLE: (266)10 = (100001010)2 = (412)8 = (10A)16

Arbitrary Bases

  • In base 2, we only need 2 digits, 0,1.
  • In base 16, we need 16 digits
  • Use the letters "ABCDEF" to represent the digits from 10 to 15.
  • With enough digits, integers can be expressed in any base

Base 2 To Base 16

To convert from base 2 to base 16, group the binary digits into sets of four, starting from the right, and translate each group into one hexadecimal digit.
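A minimal sketch of this grouping procedure in Python (the function name bin_to_hex is illustrative):

```python
def bin_to_hex(bits: str) -> str:
    """Convert a binary digit string to hexadecimal by grouping
    the bits into fours from the right and translating each group."""
    bits = bits.zfill(-(-len(bits) // 4) * 4)  # left-pad to a multiple of 4
    groups = [bits[i:i+4] for i in range(0, len(bits), 4)]
    return "".join("0123456789ABCDEF"[int(g, 2)] for g in groups)

print(bin_to_hex("100001010"))  # 266 in binary
```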

Conversion Among Bases

Convert One base To Another

  • Convert base 16 to base 10.
  • Convert any base to base 10.
  • Convert base 10 to base 16
  • Convert base 10 to any base.
  • Combine Steps 2 and 4

Base 16 to base 10

  • Represent Base 16 numbers as strings.
  • Digits in Base 16: digits = '0123456789ABCDEF'
  • digits.find('F') = 15 (15 is base 10 equivalent of digit 'F')
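A sketch of this lookup-based conversion, following the digits.find() idea above (hex_to_dec is an assumed helper name):

```python
digits = '0123456789ABCDEF'

def hex_to_dec(s: str) -> int:
    """Base 16 to base 10: accumulate each digit's value via digits.find()."""
    value = 0
    for ch in s:
        value = value * 16 + digits.find(ch)
    return value

print(hex_to_dec('F'))      # 15, matching the digits.find('F') example
print(hex_to_dec('19FDE'))
```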

Any Base to 10

Base 10 to 16

  • Think of 522 = a2×16^2 + a1×16 + a0
  • Then a2a1a0 is the hexadecimal representation of 522.
  • Clearly 522 % 16 = a0
  • Also 522 // 16 = a2×16 + a1 = (a2a1)16
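The repeated %-and-// procedure above can be sketched as follows (dec_to_hex is an illustrative name):

```python
def dec_to_hex(n: int) -> str:
    """Base 10 to base 16: peel off the last digit with % 16,
    then continue with the quotient from // 16."""
    digits = '0123456789ABCDEF'
    if n == 0:
        return '0'
    out = ''
    while n > 0:
        out = digits[n % 16] + out
        n //= 16
    return out

print(dec_to_hex(522))
```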

Binary To Decimal Conversion

Any Base To Any Base
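Combining the two steps (any base to base 10, then base 10 to any base) gives a general converter. A sketch with illustrative names, assuming bases 2 to 16:

```python
DIGITS = '0123456789ABCDEF'

def to_decimal(s: str, base: int) -> int:
    """Step 1: any base (2..16) to base 10."""
    value = 0
    for ch in s:
        value = value * base + DIGITS.find(ch)
    return value

def from_decimal(n: int, base: int) -> str:
    """Step 2: base 10 to any base (2..16)."""
    if n == 0:
        return '0'
    out = ''
    while n > 0:
        out = DIGITS[n % base] + out
        n //= base
    return out

def convert(s: str, src: int, dst: int) -> str:
    """Any base to any base, going through base 10."""
    return from_decimal(to_decimal(s, src), dst)

print(convert('100001010', 2, 8))   # the (266)10 example above
print(convert('10A', 16, 2))
```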

SIGNED INTEGER REPRESENTATION

Integer Representation

Integers are whole numbers or fixed-point numbers with the radix point fixed after the least-significant bit. They contrast with real numbers or floating-point numbers, where the position of the radix point varies. It is important to take note that integers and floating-point numbers are treated differently in computers. They have different representations and are processed differently (e.g., floating-point numbers are processed in a so-called floating-point processor). Floating-point numbers will be discussed later.

Computers use a fixed number of bits to represent an integer. The commonly-used bit-lengths for integers are 8-bit, 16-bit, 32-bit or 64-bit. Besides bit-lengths, there are two representation schemes for integers:

  1. Unsigned Integers: can represent zero and positive integers.
  2. Signed Integers: can represent zero, positive and negative integers. Three representation schemes have been proposed for signed integers:
    1. Sign-Magnitude representation
    2. 1's Complement representation
    3. 2's Complement representation

You, as the programmer, need to decide on the bit-length and representation scheme for your integers, depending on your application's requirements. Suppose that you need a counter for counting a small quantity from 0 up to 200; you might choose the 8-bit unsigned integer scheme as there are no negative numbers involved.

N-bit Unsigned Integers

Unsigned integers can represent zero and positive integers, but not negative integers. The value of an unsigned integer is interpreted as "the magnitude of its underlying binary pattern".

Example 1: Suppose that n=8 and the binary pattern is 0100 0001B, the value of this unsigned integer is 1×2^0 + 1×2^6 = 65D.

Example 2: Suppose that n=16 and the binary pattern is 0001 0000 0000 1000B, the value of this unsigned integer is 1×2^3 + 1×2^12 = 4104D.

Example 3: Suppose that n=16 and the binary pattern is 0000 0000 0000 0000B, the value of this unsigned integer is 0.

An n-bit pattern can represent 2^n distinct integers. An n-bit unsigned integer can represent integers from 0 to (2^n)-1, as tabulated below:

n    Minimum    Maximum
8    0          (2^8)-1  (= 255)
16   0          (2^16)-1 (= 65,535)
32   0          (2^32)-1 (= 4,294,967,295) (9+ digits)
64   0          (2^64)-1 (= 18,446,744,073,709,551,615) (19+ digits)
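The table above can be regenerated with a one-line computation per row:

```python
# An n-bit unsigned integer ranges from 0 to (2^n)-1
for n in (8, 16, 32, 64):
    print(n, 0, 2**n - 1)
```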

Signed Integers

Signed integers can represent zero, positive integers, as well as negative integers. Three representation schemes are available for signed integers:

  1. Sign-Magnitude representation
  2. 1's Complement representation
  3. 2's Complement representation

In all the above three schemes, the most-significant bit (msb) is called the sign bit. The sign bit is used to represent the sign of the integer - with 0 for positive integers and 1 for negative integers. The magnitude of the integer, however, is interpreted differently in different schemes.

N-bit Signed Integers in Sign-Magnitude Representation

In sign-magnitude representation:

  • The most-significant bit (msb) is the sign bit, with value of 0 representing positive integer and 1 representing negative integer.
  • The remaining n-1 bits represent the magnitude (absolute value) of the integer. The absolute value of the integer is interpreted as "the magnitude of the (n-1)-bit binary pattern".

Example 1: Suppose that n=8 and the binary representation is 0 100 0001B.
   Sign bit is 0 ⇒ positive
   Absolute value is 100 0001B = 65D
   Hence, the integer is +65D

Example 2: Suppose that n=8 and the binary representation is 1 000 0001B.
   Sign bit is 1 ⇒ negative
   Absolute value is 000 0001B = 1D
   Hence, the integer is -1D

Example 3: Suppose that n=8 and the binary representation is 0 000 0000B.
   Sign bit is 0 ⇒ positive
   Absolute value is 000 0000B = 0D
   Hence, the integer is +0D

Example 4: Suppose that n=8 and the binary representation is 1 000 0000B.
   Sign bit is 1 ⇒ negative
   Absolute value is 000 0000B = 0D
   Hence, the integer is -0D
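The decoding rule above can be sketched in Python (sign_magnitude_value is an illustrative helper; note that Python has no negative integer zero, so the -0D pattern decodes to 0):

```python
def sign_magnitude_value(pattern: str) -> int:
    """Interpret an n-bit '0'/'1' string in sign-magnitude representation.
    The msb is the sign bit; the remaining bits are the magnitude."""
    sign = -1 if pattern[0] == '1' else 1
    return sign * int(pattern[1:], 2)

print(sign_magnitude_value('01000001'))  # Example 1: +65
print(sign_magnitude_value('10000001'))  # Example 2: -1
```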


The drawbacks of sign-magnitude representation are:

  1. There are two representations (0000 0000B and 1000 0000B) for the number zero, which could lead to inefficiency and confusion.
  2. Positive and negative integers need to be processed separately.

N-bit Signed Integers in 1's Complement Representation

In 1's complement representation:

  • Again, the most significant bit (msb) is the sign bit, with value of 0 representing positive integers and 1 representing negative integers.
  • The remaining n-1 bits represent the magnitude of the integer, as follows:
    • for positive integers, the absolute value of the integer is equal to "the magnitude of the (n-1)-bit binary pattern".
    • for negative integers, the absolute value of the integer is equal to "the magnitude of the complement (inverse) of the (n-1)-bit binary pattern" (hence called 1's complement).

Example 1: Suppose that n=8 and the binary representation 0 100 0001B.
   Sign bit is 0 ⇒ positive
   Absolute value is 100 0001B = 65D
   Hence, the integer is +65D

Example 2: Suppose that n=8 and the binary representation 1 000 0001B.
   Sign bit is 1 ⇒ negative
   Absolute value is the complement of 000 0001B, i.e., 111 1110B = 126D
   Hence, the integer is -126D

Example 3: Suppose that n=8 and the binary representation 0 000 0000B.
   Sign bit is 0 ⇒ positive
   Absolute value is 000 0000B = 0D
   Hence, the integer is +0D

Example 4: Suppose that n=8 and the binary representation 1 111 1111B.
   Sign bit is 1 ⇒ negative
   Absolute value is the complement of 111 1111B, i.e., 000 0000B = 0D
   Hence, the integer is -0D
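A sketch of the 1's complement decoding rule (ones_complement_value is an illustrative helper; for a negative pattern, the magnitude is the bitwise complement of the whole n-bit pattern, which leaves the sign bit at 0):

```python
def ones_complement_value(pattern: str) -> int:
    """Interpret an n-bit '0'/'1' string in 1's complement representation."""
    n = len(pattern)
    value = int(pattern, 2)
    if pattern[0] == '0':              # positive: plain magnitude
        return value
    return -((2**n - 1) - value)       # negative: complement gives the magnitude

print(ones_complement_value('01000001'))  # Example 1: +65
print(ones_complement_value('10000001'))  # Example 2: -126
```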


Again, the drawbacks are:

  1. There are two representations (0000 0000B and 1111 1111B) for zero.
  2. The positive integers and negative integers need to be processed separately.

N-bit Signed Integers in 2's Complement Representation

In 2's complement representation:

  • Again, the most significant bit (msb) is the sign bit, with value of 0 representing positive integers and 1 representing negative integers.
  • The remaining n-1 bits represent the magnitude of the integer, as follows:
    • for positive integers, the absolute value of the integer is equal to "the magnitude of the (n-1)-bit binary pattern".
    • for negative integers, the absolute value of the integer is equal to "the magnitude of the complement of the (n-1)-bit binary pattern plus one" (hence called 2's complement).

Example 1: Suppose that n=8 and the binary representation 0 100 0001B.
   Sign bit is 0 ⇒ positive
   Absolute value is 100 0001B = 65D
   Hence, the integer is +65D

Example 2: Suppose that n=8 and the binary representation 1 000 0001B.
   Sign bit is 1 ⇒ negative
   Absolute value is the complement of 000 0001B plus 1, i.e., 111 1110B + 1B = 127D
   Hence, the integer is -127D

Example 3: Suppose that n=8 and the binary representation 0 000 0000B.
   Sign bit is 0 ⇒ positive
   Absolute value is 000 0000B = 0D
   Hence, the integer is +0D

Example 4: Suppose that n=8 and the binary representation 1 111 1111B.
   Sign bit is 1 ⇒ negative
   Absolute value is the complement of 111 1111B plus 1, i.e., 000 0000B + 1B = 1D
   Hence, the integer is -1D
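The 2's complement rule has a particularly compact decoding: an n-bit pattern with the sign bit set represents its unsigned value minus 2^n. A sketch (twos_complement_value is an illustrative helper):

```python
def twos_complement_value(pattern: str) -> int:
    """Interpret an n-bit '0'/'1' string in 2's complement representation."""
    n = len(pattern)
    value = int(pattern, 2)
    return value - 2**n if pattern[0] == '1' else value

print(twos_complement_value('01000001'))  # Example 1: +65
print(twos_complement_value('10000001'))  # Example 2: -127
print(twos_complement_value('11111111'))  # Example 4: -1
```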


FLOATING POINT REPRESENTATION

A floating-point number (or real number) can represent a very large value (1.23×10^88) or a very small value (1.23×10^-88). It can also represent a very large negative number (-1.23×10^88) and a very small negative number (-1.23×10^-88), as well as zero.

A floating-point number is typically expressed in the scientific notation, with a fraction (F), and an exponent (E) of a certain radix (r), in the form of F×r^E. Decimal numbers use radix of 10 (F×10^E); while binary numbers use radix of 2 (F×2^E).

The representation of a floating-point number is not unique. For example, the number 55.66 can be represented as 5.566×10^1, 0.5566×10^2, 0.05566×10^3, and so on. The fractional part can be normalized. In the normalized form, there is only a single non-zero digit before the radix point. For example, the decimal number 123.4567 can be normalized as 1.234567×10^2; the binary number 1010.1011B can be normalized as 1.0101011B×2^3.

It is important to note that floating-point numbers suffer from loss of precision when represented with a fixed number of bits (e.g., 32-bit or 64-bit). This is because there are an infinite number of real numbers (even within a small range, say 0.0 to 0.1). On the other hand, an n-bit binary pattern can represent only a finite number (2^n) of distinct values. Hence, not all real numbers can be represented. The nearest approximation is used instead, resulting in loss of accuracy.

It is also important to note that floating-point arithmetic is much less efficient than integer arithmetic. It can be sped up with a dedicated floating-point co-processor. Hence, use integers if your application does not require floating-point numbers.

In computers, floating-point numbers are represented in scientific notation of fraction (F) and exponent (E) with a radix of 2, in the form of F×2^E. Both E and F can be positive as well as negative. Modern computers adopt IEEE 754 standard for representing floating-point numbers. There are two representation schemes: 32-bit single-precision and 64-bit double-precision.

IEEE-754 32-bit Single-Precision Floating-Point Numbers

In 32-bit single-precision floating-point representation:

  • The most significant bit is the sign bit (S), with 0 for positive numbers and 1 for negative numbers.
  • The following 8 bits represent exponent (E).
  • The remaining 23 bits represents fraction (F).
Normalized Form

Let's illustrate with an example, suppose that the 32-bit pattern is 1 1000 0001 011 0000 0000 0000 0000 0000, with:

  • S = 1
  • E = 1000 0001
  • F = 011 0000 0000 0000 0000 0000

In the normalized form, the actual fraction is normalized with an implicit leading 1 in the form of 1.F. In this example, the actual fraction is 1.011 0000 0000 0000 0000 0000B = 1 + 1×2^-2 + 1×2^-3 = 1.375D.

The sign bit represents the sign of the number, with S=0 for positive and S=1 for negative number. In this example with S=1, this is a negative number, i.e., -1.375D.

In normalized form, the actual exponent is E-127 (so-called excess-127 or bias-127). This is because we need to represent both positive and negative exponent. With an 8-bit E, ranging from 0 to 255, the excess-127 scheme could provide actual exponent of -127 to 128. In this example, E-127=129-127=2D.

Hence, the number represented is -1.375×2^2 = -5.5D.

De-Normalized Form

Normalized form has a serious problem: with an implicit leading 1 for the fraction, it cannot represent the number zero! Convince yourself of this!

De-normalized form was devised to represent zero and other numbers.

For E=0, the numbers are in the de-normalized form. An implicit leading 0 (instead of 1) is used for the fraction, and the actual exponent is always -126. Hence, the number zero can be represented with E=0 and F=0 (because 0.0×2^-126 = 0).

We can also represent very small positive and negative numbers in de-normalized form with E=0. For example, if S=1, E=0, and F=011 0000 0000 0000 0000 0000, the actual fraction is 0.011B = 1×2^-2 + 1×2^-3 = 0.375D. Since S=1, it is a negative number. With E=0, the actual exponent is -126. Hence the number is -0.375×2^-126 = -4.4×10^-39, an extremely small negative number (close to zero).

Summary

In summary, the value (N) is calculated as follows:

  • For 1 ≤ E ≤ 254, N = (-1)^S × 1.F × 2^(E-127). These numbers are in the so-called normalized form. The sign-bit represents the sign of the number. Fractional part (1.F) are normalized with an implicit leading 1. The exponent is bias (or in excess) of 127, so as to represent both positive and negative exponent. The range of exponent is -126 to +127.
  • For E = 0, N = (-1)^S × 0.F × 2^(-126). These numbers are in the so-called denormalized form. The exponent of 2^-126 evaluates to a very small number. The denormalized form is needed to represent zero (with F=0 and E=0). It can also represent very small positive and negative numbers close to zero.
  • For E = 255, it represents special values, such as ±INF (positive and negative infinity) and NaN (not a number). This is beyond the scope of this article.

Example 1: Suppose that IEEE-754 32-bit floating-point representation pattern is 0 10000000 110 0000 0000 0000 0000 0000.

Sign bit S = 0 ⇒ positive number
E = 1000 0000B = 128D (in normalized form)
Fraction is 1.11B (with an implicit leading 1) = 1 + 1×2^-1 + 1×2^-2 = 1.75D
The number is +1.75 × 2^(128-127) = +3.5D

Example 2: Suppose that IEEE-754 32-bit floating-point representation pattern is 1 01111110 100 0000 0000 0000 0000 0000.

Sign bit S = 1 ⇒ negative number
E = 0111 1110B = 126D (in normalized form)
Fraction is 1.1B  (with an implicit leading 1) = 1 + 2^-1 = 1.5D
The number is -1.5 × 2^(126-127) = -0.75D

Example 3: Suppose that IEEE-754 32-bit floating-point representation pattern is 1 01111110 000 0000 0000 0000 0000 0001.

Sign bit S = 1 ⇒ negative number
E = 0111 1110B = 126D (in normalized form)
Fraction is 1.000 0000 0000 0000 0000 0001B  (with an implicit leading 1) = 1 + 2^-23
The number is -(1 + 2^-23) × 2^(126-127) = -0.500000059604644775390625 (may not be exact in decimal!)

Example 4 (De-Normalized Form): Suppose that IEEE-754 32-bit floating-point representation pattern is 1 00000000 000 0000 0000 0000 0000 0001.

Sign bit S = 1 ⇒ negative number
E = 0 (in de-normalized form)
Fraction is 0.000 0000 0000 0000 0000 0001B  (with an implicit leading 0) = 1×2^-23
The number is -2^-23 × 2^(-126) = -2^(-149) ≈ -1.4×10^-45
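The first two examples can be verified against the standard library, since struct can reinterpret a 4-byte big-endian pattern as an IEEE-754 single-precision float (decode_ieee754 is an illustrative helper):

```python
import struct

def decode_ieee754(bits: str) -> float:
    """Decode a 32-bit IEEE-754 single-precision pattern given as a
    string of 32 '0'/'1' characters (sign, exponent, fraction)."""
    raw = int(bits, 2).to_bytes(4, 'big')
    return struct.unpack('>f', raw)[0]

# Example 1: S=0, E=10000000, F=110 0...0
print(decode_ieee754('0' + '10000000' + '110'.ljust(23, '0')))
# Example 2: S=1, E=01111110, F=100 0...0
print(decode_ieee754('1' + '01111110' + '100'.ljust(23, '0')))
```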

CHARACTER CODE

Character Encoding

In computer memory, characters are "encoded" (or "represented") using a chosen "character encoding scheme" (aka "character set", "charset", "character map", or "code page").

For example, in ASCII (as well as Latin1, Unicode, and many other character sets):

  • code numbers 65D (41H) to 90D (5AH) represent 'A' to 'Z', respectively.
  • code numbers 97D (61H) to 122D (7AH) represent 'a' to 'z', respectively.
  • code numbers 48D (30H) to 57D (39H) represent '0' to '9', respectively.
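These code numbers can be confirmed with Python's ord() and chr():

```python
# ord() gives a character's code number; chr() is the inverse
print(ord('A'), hex(ord('A')))  # 65, 41H
print(ord('a'), hex(ord('a')))  # 97, 61H
print(ord('0'), hex(ord('0')))  # 48, 30H
print(chr(90))                  # 'Z' (code 90D = 5AH)
```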

It is important to note that the representation scheme must be known before a binary pattern can be interpreted. E.g., the 8-bit pattern "0100 0010B" could represent anything under the sun, known only to the person who encoded it.

The most commonly-used character encoding schemes are: 7-bit ASCII (ISO/IEC 646) and 8-bit Latin-x (ISO/IEC 8859-x) for Western European characters, and Unicode (ISO/IEC 10646) for internationalization (i18n).

A 7-bit encoding scheme (such as ASCII) can represent 128 characters and symbols. An 8-bit character encoding scheme (such as Latin-x) can represent 256 characters and symbols; whereas a 16-bit encoding scheme (such as Unicode UCS-2) can represent 65,536 characters and symbols.

7-bit ASCII Code (aka US-ASCII, ISO/IEC 646, ITU-T T.50)

  • ASCII (American Standard Code for Information Interchange) is one of the earlier character coding schemes.
  • ASCII is originally a 7-bit code. It has been extended to 8-bit to better utilize the 8-bit computer memory organization. (The 8th-bit was originally used for parity check in the early computers.)
  • Code numbers 32D (20H) to 126D (7EH) are printable (displayable) characters as tabulated:
    Hex  0  1  2  3  4  5  6  7  8  9  A  B  C  D  E  F
    2    SP !  "  #  $  %  &  '  (  )  *  +  ,  -  .  /
    3    0  1  2  3  4  5  6  7  8  9  :  ;  <  =  >  ?
    4    @  A  B  C  D  E  F  G  H  I  J  K  L  M  N  O
    5    P  Q  R  S  T  U  V  W  X  Y  Z  [  \  ]  ^  _
    6    `  a  b  c  d  e  f  g  h  i  j  k  l  m  n  o
    7    p  q  r  s  t  u  v  w  x  y  z  {  |  }  ~
    • Code number 32D (20H) is the blank or space character.
    • '0' to '9': 30H-39H (0011 0000B to 0011 1001B), i.e., 0011 xxxxB where xxxx is the equivalent integer value
    • 'A' to 'Z': 41H-5AH (0100 0001B to 0101 1010B), i.e., 010x xxxxB. 'A' to 'Z' are continuous without gap.
    • 'a' to 'z': 61H-7AH (0110 0001B to 0111 1010B), i.e., 011x xxxxB. 'a' to 'z' are also continuous without gap. However, there is a gap between uppercase and lowercase letters. To convert between upper- and lowercase, flip the value of bit-5.
  • Code numbers 0D (00H) to 31D (1FH), and 127D (7FH) are special control characters, which are non-printable (non-displayable), as tabulated below. Many of these characters were used in the early days for transmission control (e.g., STX, ETX) and printer control (e.g., Form-Feed), which are now obsolete. The remaining meaningful codes today are:
    • 09H for Tab ('\t').
    • 0AH for Line-Feed or newline (LF or '\n') and 0DH for Carriage-Return (CR or '\r'), which are used as line delimiters (aka line separators, end-of-line) for text files. There is unfortunately no standard for the line delimiter: Unixes and Macs use 0AH (LF or "\n"), while Windows uses 0D0AH (CR+LF or "\r\n"). Programming languages such as C/C++/Java (which were created on Unix) use 0AH (LF or "\n").
    • In programming languages such as C/C++/Java, line-feed (0AH) is denoted as '\n', carriage-return (0DH) as '\r', tab (09H) as '\t'.
DEC HEX  Meaning                      DEC  HEX  Meaning
0   00   NUL  Null                    17   11   DC1  Device Control 1
1   01   SOH  Start of Heading        18   12   DC2  Device Control 2
2   02   STX  Start of Text           19   13   DC3  Device Control 3
3   03   ETX  End of Text             20   14   DC4  Device Control 4
4   04   EOT  End of Transmission     21   15   NAK  Negative Ack.
5   05   ENQ  Enquiry                 22   16   SYN  Sync. Idle
6   06   ACK  Acknowledgment          23   17   ETB  End of Trans. Block
7   07   BEL  Bell                    24   18   CAN  Cancel
8   08   BS   Back Space '\b'         25   19   EM   End of Medium
9   09   HT   Horizontal Tab '\t'     26   1A   SUB  Substitute
10  0A   LF   Line Feed '\n'          27   1B   ESC  Escape
11  0B   VT   Vertical Tab            28   1C   FS   File Separator
12  0C   FF   Form Feed '\f'          29   1D   GS   Group Separator
13  0D   CR   Carriage Return '\r'    30   1E   RS   Record Separator
14  0E   SO   Shift Out               31   1F   US   Unit Separator
15  0F   SI   Shift In
16  10   DLE  Data Link Escape        127  7F   DEL  Delete
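The bit-5 trick for case conversion mentioned above can be demonstrated with a small sketch (flip_case is an illustrative name):

```python
def flip_case(ch: str) -> str:
    """Toggle between upper- and lowercase ASCII letters by flipping
    bit 5 (value 20H), since 'a'-'A' = 61H-41H = 20H."""
    return chr(ord(ch) ^ 0x20)

print(flip_case('a'))  # 'A'
print(flip_case('Z'))  # 'z'
```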

8-bit Latin-1 (aka ISO/IEC 8859-1)

ISO/IEC-8859 is a collection of 8-bit character encoding standards for the western languages.

ISO/IEC 8859-1, aka Latin alphabet No. 1, or Latin-1 in short, is the most commonly-used encoding scheme for Western European languages. It has 191 printable characters from the Latin script, which covers languages like English, German, Italian, Portuguese and Spanish. Latin-1 is backward compatible with the 7-bit US-ASCII code. That is, the first 128 characters in Latin-1 (code numbers 0 to 127 (7FH)) are the same as US-ASCII. Code numbers 128 (80H) to 159 (9FH) are not assigned. Code numbers 160 (A0H) to 255 (FFH) are given as follows:

[Table of Latin-1 printable characters from A0H to FFH not reproduced here; it begins with NBSP (no-break space) at A0H and includes SHY (soft hyphen) at ADH.]

ISO/IEC-8859 has 16 parts. Besides the most commonly-used Part 1, Part 2 is meant for Central European (Polish, Czech, Hungarian, etc.), Part 3 for South European (Turkish, etc.), Part 4 for North European (Estonian, Latvian, etc.), Part 5 for Cyrillic, Part 6 for Arabic, Part 7 for Greek, Part 8 for Hebrew, Part 9 for Turkish, Part 10 for Nordic, Part 11 for Thai, Part 12 was abandoned, Part 13 for Baltic Rim, Part 14 for Celtic, Part 15 for French, Finnish, etc., and Part 16 for South-Eastern European.

Other 8-bit Extension of US-ASCII (ASCII Extensions)

Besides the standardized ISO-8859-x, there are many 8-bit ASCII extensions, which are not compatible with each other.

ANSI (American National Standards Institute) (aka Windows-1252, or Windows Codepage 1252): for Latin alphabets used in the legacy DOS/Windows systems. It is a superset of ISO-8859-1, with code numbers 128 (80H) to 159 (9FH) assigned to displayable characters, such as "smart" single-quotes and double-quotes. A common problem in web browsers is that all the quotes and apostrophes (produced by "smart quotes" in some Microsoft software) are replaced with question marks or strange symbols. This is because the document is labeled as ISO-8859-1 (instead of Windows-1252), where these code numbers are undefined. Most modern browsers and e-mail clients treat charset ISO-8859-1 as Windows-1252 in order to accommodate such mis-labeling.


EBCDIC (Extended Binary Coded Decimal Interchange Code): Used in the early IBM computers.

Unicode (aka ISO/IEC 10646 Universal Character Set)

Before Unicode, no single character encoding scheme could represent characters in all languages. For example, Western European languages use several encoding schemes (in the ISO-8859-x family). Even a single language like Chinese has a few encoding schemes (GB2312/GBK, BIG5). Many encoding schemes are in conflict with each other, i.e., the same code number is assigned to different characters.

Unicode aims to provide a standard character encoding scheme which is universal, efficient, uniform and unambiguous. The Unicode standard is maintained by a non-profit organization called the Unicode Consortium. Unicode is published as ISO/IEC standard 10646.

Unicode is backward compatible with the 7-bit US-ASCII and 8-bit Latin-1 (ISO-8859-1). That is, the first 128 characters are the same as US-ASCII; and the first 256 characters are the same as Latin-1.

Unicode originally uses 16 bits (called UCS-2 or Unicode Character Set - 2 byte), which can represent up to 65,536 characters. It has since been expanded to more than 16 bits, currently stands at 21 bits. The range of the legal codes in ISO/IEC 10646 is now from U+0000H to U+10FFFFH (21 bits or about 2 million characters), covering all current and ancient historical scripts. The original 16-bit range of U+0000H to U+FFFFH (65536 characters) is known as Basic Multilingual Plane (BMP), covering all the major languages in use currently. The characters outside BMP are called Supplementary Characters, which are not frequently-used.

Unicode has two encoding schemes:

  • UCS-2 (Universal Character Set - 2 Byte): Uses 2 bytes (16 bits), covering 65,536 characters in the BMP. BMP is sufficient for most of the applications. UCS-2 is now obsolete.
  • UCS-4 (Universal Character Set - 4 Byte): Uses 4 bytes (32 bits), covering BMP and the supplementary characters.

FIXED POINT ADDITION AND SUBTRACTION

Fixed-Point Numbers

  • Fixed-point numbers are generally stored in In.Qm format (sometimes referred to as Qn.m format), where n = number of bits in the integer part and m = number of bits in the fractional part.

Signed Fixed-Point Numbers

  • Positive fixed-point numbers are the same as unsigned fixed-point numbers. Negative fixed-point numbers are obtained by simply taking the 2's complement, just as for integers.
I8.Q8: 01000110.11000000 = 70.75
2's comp. 10111001.01000000 = -70.75
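A sketch of interpreting an In.Qm bit pattern (fixed_to_float is an illustrative helper; it treats the pattern as a 2's complement integer scaled by 2^-m):

```python
def fixed_to_float(pattern: str, m: int) -> float:
    """Interpret a 2's complement fixed-point bit string with
    m fractional bits (In.Qm format, radix point removed)."""
    n = len(pattern)
    value = int(pattern, 2)
    if pattern[0] == '1':          # negative in 2's complement
        value -= 2**n
    return value / 2**m

print(fixed_to_float('0100011011000000', 8))  # the +70.75 example above
print(fixed_to_float('1011100101000000', 8))  # its 2's complement, -70.75
```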

Fixed-Point Addition

  • Fixed-point addition is the same as integer addition! Align the two fixed-point numbers and apply integer addition:

Overflow

  • Overflow occurs when the addition of two positive numbers produces a negative result, or when the addition of two negative numbers produces a positive result. Adding operands of unlike signs never produces an overflow.
  • Notice that discarding the carry out of the most significant bit during two's complement addition is a normal occurrence, and does not by itself indicate overflow.
  • As an example of overflow, consider adding (80 + 80 = 160)10, which produces a result of (-96)10 in an 8-bit two's complement format.
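The overflow rule can be sketched for 8-bit two's complement addition (add8 is an illustrative helper; operands are given as 8-bit patterns, i.e., integers 0..255):

```python
def add8(a: int, b: int):
    """8-bit 2's complement addition; overflow is flagged when two
    operands of the same sign produce a result of the opposite sign."""
    result = (a + b) & 0xFF                       # discard the carry out
    signed = result - 256 if result & 0x80 else result
    overflow = (a & 0x80) == (b & 0x80) and (a & 0x80) != (result & 0x80)
    return signed, overflow

print(add8(80, 80))    # the (80 + 80) overflow case above: (-96, True)
print(add8(200, 200))  # (-56) + (-56): same-sign result, no overflow
```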

Sign Extension

  • The leftmost bit is assigned to the sign. What happens to the sign bit if we place a number into a larger or smaller word? For positive numbers, all that we need to do is pad the left side with 0s:
  • For negative numbers, we cannot simply pad the left side with 0s because that would change the value of the number
    • As it turns out, all that we need to do is copy the sign bit for as many places as there are to the left and the number will be correctly extended, regardless of the value of the sign. This is known as sign extension:
    • If we want to reduce the size of the word, we can simply remove bits on the left and the resulting sign will be correct, as long as the number can be represented in the remaining bits:
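The sign-extension procedure described above can be sketched as follows (sign_extend is an illustrative name; value holds the n-bit pattern as an unsigned integer):

```python
def sign_extend(value: int, from_bits: int, to_bits: int) -> int:
    """Copy the sign bit leftward to widen a 2's complement pattern."""
    sign = (value >> (from_bits - 1)) & 1
    if sign:  # negative: fill the new upper bits with 1s
        value |= ((1 << (to_bits - from_bits)) - 1) << from_bits
    return value

# 1001B is -7 in 4-bit 2's complement; extended to 8 bits it is 11111001B
print(format(sign_extend(0b1001, 4, 8), '08b'))
# 0101B is +5; positive patterns are simply padded with 0s
print(format(sign_extend(0b0101, 4, 8), '08b'))
```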

One's Complement Addition

  • An example of one's complement integer addition with an end-around carry:

End-Around Carry for Fractions

  • The end-around carry complicates one's complement addition for non-integers, and is generally not used in this situation. The issue is that the distance between the two representations of 0 is 1.0, whereas the rightmost fraction position is less than 1.

FIXED POINT MULTIPLICATION AND DIVISION

Multiplication Example

  • Multiplication of two 4-bit unsigned binary integers produces an 8-bit result.
  • Multiplication of two 4-bit signed binary integers produces only a 7-bit result (each operand reduces to a sign bit and a 3-bit magnitude, producing a sign bit and a 6-bit magnitude).

Example of Multiplication Using Serial Multiplier

Multiplication of Signed Integers

  • Sign extension to the target word size is needed for the negative operand(s).
  • A target word size of 8 bits is used here for two 4-bit signed operands, but only a 7-bit target word size is needed for the result.

Example of Base 2 Division

  • (7 / 3 = 2)10 with a remainder R of 1.
  • Equivalently, (0111 / 11 = 10)2 with a remainder R of 1.
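The same division, checked with Python's divmod, which returns the quotient and remainder together:

```python
# 7 / 3 gives quotient 2 and remainder 1
q, r = divmod(7, 3)
print(q, r)
# the same values written in base 2: 10 and 1
print(format(q, 'b'), format(r, 'b'))
```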

Division Example Using Serial Divider

FLOATING POINT ARITHMETIC

  • Floating point arithmetic differs from integer arithmetic in that exponents must be handled as well as the magnitudes of the operands.
  • The exponents of the operands must be made equal for addition and subtraction. The fractions are then added or subtracted as appropriate, and the result is normalized.
  • Ex: Perform the floating point operation (.101 × 2^3 + .111 × 2^4)2.
  • Start by adjusting the smaller exponent to be equal to the larger exponent, and adjust the fraction accordingly. Thus we have .101 × 2^3 = .010 × 2^4, losing .001 × 2^3 of precision in the process.
  • The resulting sum is (.010 + .111) × 2^4 = 1.001 × 2^4 = .1001 × 2^5, and rounding to three significant digits gives .100 × 2^5; we have lost another 0.001 × 2^4 in the rounding process.
  • Suppose instead we add the numbers using as much precision as we need and apply rounding only in the final normalization step: the exact sum is .0101 × 2^4 + .111 × 2^4 = 1.0011 × 2^4.
  • Normalizing yields .10011 × 2^5, and rounding to three significant digits using the round-to-nearest-even method yields .101 × 2^5.
  • Which calculation is correct: .100 × 2^5 or .101 × 2^5?
  • According to the IEEE 754 standard, the final result should be the same as if the maximum precision needed were used before applying the rounding method, so the correct result is .101 × 2^5.
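A quick exact-arithmetic check of this example, using Python's standard fractions module (the variable names are illustrative):

```python
from fractions import Fraction

a = Fraction(0b101, 8) * 2**3     # .101 x 2^3
b = Fraction(0b111, 8) * 2**4     # .111 x 2^4

exact = a + b                     # full-precision sum
print(exact)

# .10011 x 2^5 is the normalized full-precision result
print(Fraction(0b10011, 32) * 2**5 == exact)
```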

The Booth Algorithm

  • Booth multiplication reduces the number of additions for intermediate results, but can sometimes make it worse, as we will see.
  • Positive and negative numbers are treated alike.

A Worst-Case Booth Example

  • A worst-case situation in which the simple Booth algorithm requires twice as many additions as serial multiplication.

Bit-Pair Recoding (Modified Booth Algorithm)

    BOOLEAN ALGEBRA

    Boolean Algebra is used to analyze and simplify the digital logic circuits. It uses only the binary numbers i.e. 0 and 1. It is also called as Binary Algebra or logical Algebra. Boolean algebra was invented by George Boole in 1854.

    Rule in Boolean Algebra

    Following are the important rules used in Boolean algebra.
    • Variable used can have only two values.
    • Binary 1 for HIGH and Binary 0 for LOW.
    • Complement of a variable is represented by an overbar. Thus, the complement of variable B is represented as ~B; if B = 1 then ~B = 0.
    • ORing of the variables is represented by a plus + sign between them. For example ORing of A, B, C is represented as A + B + C.
    • Logical ANDing of two or more variables is represented by writing a dot between them, such as A.B.C. Sometimes the dot may be omitted, as in ABC.

    Important Points

  • Boolean algebra is a mathematical system for the manipulation of variables that can have one of two values.
  • In formal logic, these values are true and false; in digital systems, they are on and off, 1 and 0, or high and low.
  • Boolean expressions are created by performing operations on Boolean variables.
  • Common Boolean operators include AND, OR, and NOT.
  • A Boolean operator can be completely described using a truth table.
  • The truth tables for the Boolean operators AND and OR are shown at the right.
  • The AND operator is also known as a Boolean product. The OR operator is the Boolean sum.
  • The truth table for the Boolean NOT operator is shown at the right.
  • The NOT operation is most often designated by an overbar. It is sometimes indicated by a prime mark (') or an elbow (¬).
  • A Boolean function has:

  • At least one Boolean variable,
  • At least one Boolean operator, and
  • At least one input from the set {0,1}.
  • It produces an output that is also a member of the set {0,1}.
  • The truth table for the Boolean function is shown at the right
  • To make evaluation of the Boolean function easier, the truth table contains extra (shaded) columns to hold evaluations of subparts of the function.
  • Most Boolean identities have an AND (product) form as well as an OR (sum) form. We give our identities using both forms. Our first group is rather intuitive:
  • Our last group of Boolean identities is perhaps the most useful.
  • If you have studied set theory or formal logic, these laws are also familiar to you.
  • Sometimes it is more economical to build a circuit using the complement of a function (and complementing its result) than it is to implement the function directly.
  • DeMorgan's law provides an easy way of finding the complement of a Boolean function.
  • Recall that DeMorgan's law states: the complement of a product is the sum of the complements, ~(A B) = ~A + ~B, and the complement of a sum is the product of the complements, ~(A + B) = ~A ~B.
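Because Boolean variables take only the values 0 and 1, DeMorgan's law can be checked exhaustively in a few lines:

```python
# Exhaustive check of DeMorgan's law over both Boolean values.
for A in (0, 1):
    for B in (0, 1):
        assert (1 - (A & B)) == ((1 - A) | (1 - B))  # ~(A.B) = ~A + ~B
        assert (1 - (A | B)) == ((1 - A) & (1 - B))  # ~(A+B) = ~A . ~B
print("DeMorgan's law holds for all inputs")
```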
    BOOLEAN EXPRESSION

    Definition

    • In computer science, a Boolean expression is an expression in a programming language that produces a Boolean value when evaluated, i.e. one of true or false. A Boolean expression may be composed of a combination of the Boolean constants true or false, Boolean-typed variables, Boolean-valued operators, and Boolean-valued functions.
    • Boolean expressions correspond to propositional formulas in logic and are a special case of Boolean circuits.

    Boolean operators

    • Most programming languages have the Boolean operators OR, AND and NOT; in C and some newer languages, these are represented by "||" (double pipe character), "&&" (double ampersand) and "!" (exclamation point) respectively, while the corresponding bitwise operations are represented by "|", "&" and "~" (tilde).
    • In the mathematical literature the symbols used are often "+" (plus), "." (dot) and overbar, or "∪" (cup), "∩" (cap) and "¬" or "'" (prime).

    Examples

    • The expression "5 > 3" is evaluated as true.
    • The expression "3 > 5" is evaluated as false.
    • "5>=3" and "3<=5" are equivalent Boolean expressions, both of which are evaluated as true.
    • In JavaScript, "typeof true" returns "boolean" and "typeof false" returns "boolean".
    • Of course, most Boolean expressions will contain at least one variable (X > 3), and often more (X > Y).

    BOOLEAN IDENTITY

    Complement Laws


    A + ~A = 1
    A ~A = 0

    Law of the Double Complement


    ~(~A) = A

    Idempotent Laws


    A + A = A
    A A = A

    Identity Laws


    A + 0 = A
    A 1 = A

    Dominance Laws


    A + 1 = 1
    A 0 = 0

    Commutative Laws


    A + B = B + A
    A B = B A

    Associative Laws


    A + (B + C) = (A + B) + C
    A (B C) = (A B) C

    Distributive Laws


    A + (B C) = (A + B) (A + C)
    A (B + C) = (A B)+(A C)

    DeMorgan's Laws


    ~(A B) = ~A + ~B
    ~(A + B) = ~A ~B

    Absorption Laws


    A (A + B) = A
    A + (A B) = A

    Simplification Laws


    A (~A + B) = A B
    A + (~A B) = A + B
    A B + ~A C + B C = A B + ~A C
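Any of the identities above can be verified by brute force over all assignments of the variables; a short Python check of a few of them:

```python
# Brute-force verification of several Boolean identities over {0, 1}.
for A in (0, 1):
    for B in (0, 1):
        for C in (0, 1):
            nA = 1 - A
            assert A | nA == 1 and A & nA == 0        # complement laws
            assert A | (A & B) == A                   # absorption
            assert A & (nA | B) == A & B              # simplification
            assert A | (B & C) == (A | B) & (A | C)   # distributive
print("identities verified")
```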

    K-MAPS & MAP MINIMIZATION

    Karnaugh Map

    The Karnaugh map (K-map for short), Maurice Karnaugh's 1953 refinement of Edward Veitch's 1952 Veitch diagram, is a method to simplify Boolean algebra expressions. K-maps are a convenient way to simplify Boolean expressions and can be used for up to 4 or 5 variables.

    Truth table to K-Map (2 variable minterm)

    K-Maps (2-Variable K-Map, contd.)

    Adjacent 1s can be paired off. Any variable which is both a 1 and a 0 in this pairing can be eliminated. Pairs may be adjacent horizontally or vertically.

    Three Variable K-Map


    Grouping the Pairs

    Expression is ~A B ~C + ~A B C + A B ~C + A B C. Groups of 4 in a block can be used to eliminate two variables:

    QUAD = ~A B ~C + ~A B C + A B ~C + A B C = ~A B + A B = B
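The quad reduction can be confirmed by evaluating all eight input combinations: the four minterms cover exactly the cells where B = 1, so the group reduces to B.

```python
# Exhaustive check that the quad of minterms above reduces to B.
for A in (0, 1):
    for B in (0, 1):
        for C in (0, 1):
            nA, nC = 1 - A, 1 - C
            quad = (nA & B & nC) | (nA & B & C) | (A & B & nC) | (A & B & C)
            assert quad == B
print("quad reduces to B")
```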

    Karnaugh Maps - Four Variable K-Map



    Karnaugh Maps - Four Variable K-Map


    Octet Reduction


    Quad Reduction


    LOGIC GATE

  • A gate is an electronic device that produces a result based on two or more input values.

  • In reality, gates consist of one to six transistors, but digital designers think of them as a single unit.

  • Integrated circuits contain collections of gates suited to a particular purpose.
  • The three simplest gates are the AND, OR, and NOT gates.


  • They correspond directly to their respective Boolean operations, as you can see by their truth tables.


  • Another very useful gate is the exclusive OR (XOR) gate.

  • The output of the XOR operation is true only when the values of the inputs differ.


  • NAND and NOR are two very important gates. Their symbols and truth tables are shown at the right

  • NAND and NOR are known as universal gates because they are inexpensive to manufacture and any Boolean function can be constructed using only NAND or only NOR gates.

  • Gates can have multiple inputs and more than one output.

  • A second output can be provided for the complement of the operation.

  • We'll see more of this later.
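The claim that NAND is universal can be demonstrated by building NOT, AND and OR out of NAND alone. A sketch in Python (the helper names are ours, not from the text):

```python
# NAND is universal: NOT, AND and OR constructed from NAND only.
def nand(a, b):
    return 1 - (a & b)

def not_(a):
    return nand(a, a)          # NAND with both inputs tied together

def and_(a, b):
    return not_(nand(a, b))    # invert the NAND output

def or_(a, b):
    return nand(not_(a), not_(b))   # DeMorgan: A + B = ~(~A . ~B)

# Exhaustive check against the primitive operations.
for a in (0, 1):
    for b in (0, 1):
        assert not_(a) == 1 - a
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
print("NAND-only constructions match")
```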

DIGITAL COMPONENTS

  • A simple processor illustrates many of the basic components used in any digital system:
  • Datapath: The core -- all other components are support units that either store the results of the datapath or determine what happens in the next cycle.

Important Terms

  • Memory

  • A broad range of classes exist determined by the way data is accessed:
  • Read-Only vs. Read-Write
  • Sequential vs. Random access
  • Single-ported vs. Multi-ported access
  • Or by their data retention characteristics:
  • Dynamic vs. Static
  • Stay tuned for a more extensive treatment of memories.
  • Control:

  • An FSM (sequential circuit) implemented using random logic, PLAs or memories.
  • Interconnect and Input-Output:

  • Parasitic resistance, capacitance and inductance affect performance of wires both on and off the chip.
  • Growing die size increases the length of the on-chip interconnect, increasing the value of the parasitics.
  • Datapath

    elements include adders, multipliers, shifters, BFUs, etc.
  • The speed of these elements often dominates the overall system performance so optimization techniques are important.
  • However, as we will see, the task is non-trivial since there are multiple equivalent logic and circuit topologies to choose from, each with adv./disadv. in terms of speed, power and area.
  • Also, optimizations focused at one design level, e.g., sizing transistors, can lead to an inferior overall design.

COMBINATIONAL CIRCUIT

A combinational circuit is a circuit in which we combine different gates, for example encoders, decoders, multiplexers and demultiplexers.
Some of the characteristics of combinational circuits are following -
  • The output of a combinational circuit at any instant of time depends only on the levels present at its inputs.
  • Combinational circuits do not use any memory.
  • The previous state of the input does not have any effect on the present state of the circuit.
  • A combinational circuit can have n inputs and m outputs.
  • Block diagram



    We are going to elaborate a few important combinational circuits as follows.

    Half Adder

    Half adder is a combinational logic circuit with two inputs and two outputs. The half adder circuit is designed to add two single-bit binary numbers A and B. It is the basic building block for the addition of two single-bit numbers. This circuit has two outputs, carry and sum.

    Block diagram

    Truth Table

    Circuit Diagram
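Behaviourally, the half adder is just an XOR for the sum and an AND for the carry. A sketch matching the truth table described above:

```python
# A half adder in two gates: sum is XOR, carry is AND.
def half_adder(a, b):
    return a ^ b, a & b   # (sum, carry)

assert half_adder(0, 0) == (0, 0)
assert half_adder(0, 1) == (1, 0)
assert half_adder(1, 0) == (1, 0)
assert half_adder(1, 1) == (0, 1)   # 1 + 1 = 10 in binary
```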

    Full Adder

    Full adder is developed to overcome the drawback of the half adder circuit. It can add two one-bit numbers A and B, and a carry C. The full adder is a three-input, two-output combinational circuit.

    Block diagram

    Truth Table

    Circuit Diagram
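The full adder can be built from two half adders plus an OR gate for the two carries; a behavioural sketch:

```python
# A full adder built from two half adders and an OR for the carries.
def full_adder(a, b, cin):
    s1 = a ^ b          # first half adder
    c1 = a & b
    s = s1 ^ cin        # second half adder
    c2 = s1 & cin
    return s, c1 | c2   # (sum, carry out)

# Exhaustive check: sum + 2*carry must equal a + b + cin.
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, cout = full_adder(a, b, cin)
            assert s + 2 * cout == a + b + cin
print("full adder verified")
```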

    N-Bit Parallel Adder

    The full adder is capable of adding only two single-digit binary numbers along with a carry input. But in practice we need to add binary numbers which are much longer than just one bit. To add two n-bit binary numbers we use the n-bit parallel adder, which consists of a number of full adders in cascade. The carry output of each full adder is connected to the carry input of the next full adder.
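The cascade described above can be sketched behaviourally, with each stage's carry out feeding the next stage's carry in (`ripple_add` is an illustrative name):

```python
# An n-bit ripple-carry adder: each stage's carry out feeds the next
# stage's carry in, exactly as in the cascade described above.
def ripple_add(a_bits, b_bits):
    """Add two equal-length bit lists, LSB first; returns (sum_bits, carry)."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        s = a ^ b ^ carry                      # full-adder sum
        carry = (a & b) | (carry & (a ^ b))    # full-adder carry out
        out.append(s)
    return out, carry

# 0110 (6) + 0111 (7) = 1101 (13), written LSB first:
print(ripple_add([0, 1, 1, 0], [1, 1, 1, 0]))   # -> ([1, 0, 1, 1], 0)
```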

    4 Bit Parallel Adder

    In the block diagram, A0 and B0 represent the LSBs of the four-bit words A and B. Hence Full Adder 0 is the lowest stage, and its Cin has been permanently made 0. The rest of the connections are exactly the same as those of the n-bit parallel adder shown in the figure. The four-bit parallel adder is a very common logic circuit.

    Block diagram

    N-Bit Parallel Subtractor

    The subtraction can be carried out by taking the 1's or 2's complement of the number to be subtracted. For example, we can perform the subtraction A-B by adding either the 1's or 2's complement of B to A. That means we can use a binary adder to perform binary subtraction.

    4 Bit Parallel Subtractor

    The number to be subtracted, B, is first passed through inverters to obtain its 1's complement. The 4-bit adder then adds A and the 2's complement of B to produce the subtraction. S3 S2 S1 S0 represents the result of the binary subtraction A-B, and the carry output Cout represents the polarity of the result. If A >= B then Cout = 1 and the result is in true binary form; if A < B then Cout = 0 and the result is in 2's complement form.

    Block diagram
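The invert-and-add scheme can be sketched in Python; note that the carry out is 1 exactly when A >= B (the function name is illustrative):

```python
# A 4-bit subtractor built from an adder: invert B and set the carry
# in to 1, which adds the 2's complement of B to A.
def subtract_4bit(a, b):
    b_comp = (~b) & 0xF       # 1's complement of B via inverters
    total = a + b_comp + 1    # adder with carry-in = 1
    cout = (total >> 4) & 1   # Cout = 1 means A >= B
    return total & 0xF, cout

print(subtract_4bit(9, 5))   # (4, 1): result 4, A >= B
print(subtract_4bit(5, 9))   # (12, 0): 12 is the 4-bit 2's complement of -4
```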

    Half Subtractors

    Half subtractor is a combinational circuit with two inputs and two outputs, difference and borrow. It produces the difference between the two binary bits at the input and also produces an output borrow to indicate if a 1 has been borrowed. In the subtraction A-B, A is called the minuend bit and B is called the subtrahend bit.

    Truth Table

    Circuit Diagram

    Full Subtractors

    The disadvantage of a half subtractor is overcome by the full subtractor. The full subtractor is a combinational circuit with three inputs A, B, C and two outputs D and C'. A is the minuend, B is the subtrahend, C is the borrow produced by the previous stage, D is the difference output and C' is the borrow output.

    Truth Table

    Circuit Diagram

    Multiplexers

    Multiplexer is a special type of combinational circuit. There are n data inputs, one output and m select inputs, with 2^m = n. It is a digital circuit which selects one of the n data inputs and routes it to the output. The selection of one of the n inputs is done by the select inputs. Depending on the digital code applied at the select inputs, one out of n data sources is selected and transmitted to the single output Y. E is called the strobe or enable input, which is useful for cascading. It is generally an active-low terminal, meaning it will perform the required operation when it is low.

    Block diagram

      Multiplexers come in multiple variations
    • 2 : 1 multiplexer
    • 4 : 1 multiplexer
    • 16 : 1 multiplexer
    • 32 : 1 multiplexer
    • Block Diagram

      Truth table
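A behavioural sketch of a 4 : 1 multiplexer with an active-low enable, as described above (signal names are illustrative):

```python
# A 4:1 multiplexer: two select lines choose one of four data inputs.
def mux4(d, s1, s0, e_n=0):
    """Route one of the four data inputs d[0..3] to the output.
    e_n is the active-low enable: the output is forced to 0 when e_n = 1."""
    if e_n == 1:
        return 0
    return d[(s1 << 1) | s0]

data = [1, 0, 1, 1]
assert mux4(data, 0, 0) == 1          # selects d[0]
assert mux4(data, 0, 1) == 0          # selects d[1]
assert mux4(data, 1, 1) == 1          # selects d[3]
assert mux4(data, 1, 1, e_n=1) == 0   # disabled: enable is active low
```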

      Decoder

      A decoder is a combinational circuit. It has n inputs and a maximum of m = 2^n outputs. A decoder is identical to a demultiplexer without any data input. It performs operations which are exactly opposite to those of an encoder.

      Block diagram

      Examples of decoders are the following: code converters, BCD to seven-segment decoders, Nixie tube decoders and relay actuators.

      2 to 4 Line Decoder

      The block diagram of the 2 to 4 line decoder is shown in the fig. A and B are the two inputs, while D0 through D3 are the four outputs. The truth table explains the operation of the decoder: it shows that each output is 1 for only a specific combination of inputs.

      Block diagram

      Truth Table

      Logic Circuit
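A behavioural sketch of the 2 to 4 line decoder, in which exactly one output is 1 for each input combination:

```python
# A 2-to-4 line decoder: the selected output line goes high.
def decoder_2to4(a, b):
    index = (a << 1) | b
    return [1 if i == index else 0 for i in range(4)]

assert decoder_2to4(0, 0) == [1, 0, 0, 0]
assert decoder_2to4(0, 1) == [0, 1, 0, 0]
assert decoder_2to4(1, 0) == [0, 0, 1, 0]
assert decoder_2to4(1, 1) == [0, 0, 0, 1]
```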

      Encoder

      Encoder is a combinational circuit which is designed to perform the inverse operation of the decoder. An encoder has n input lines and m output lines. An encoder produces an m-bit binary code corresponding to the digital input number: it accepts an n-bit input word and converts it into an m-bit output word.

      Block diagram

        Examples of encoders are the following.
      • Decimal to BCD encoder
      • Octal to binary encoder
      • Hexadecimal to binary encoder

        Priority Encoder

        This is a special type of encoder. Priority is given to the input lines: if two or more input lines are 1 at the same time, then the input line with the highest priority will be considered. There are four inputs D0, D1, D2, D3 and two outputs Y0, Y1. Of the four inputs, D3 has the highest priority and D0 has the lowest priority. That means if D3 = 1 then Y1 Y0 = 11 irrespective of the other inputs. Similarly, if D3 = 0 and D2 = 1 then Y1 Y0 = 10 irrespective of the other inputs.

        Block diagram

        Truth Table

        Circuit Diagram
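A behavioural sketch of the 4-input priority encoder described above, with D3 given the highest priority:

```python
# A 4-to-2 priority encoder: higher-numbered inputs win.
def priority_encode(d0, d1, d2, d3):
    if d3: return (1, 1)   # Y1 Y0 = 11, regardless of the other inputs
    if d2: return (1, 0)
    if d1: return (0, 1)
    return (0, 0)

assert priority_encode(1, 1, 0, 1) == (1, 1)   # D3 wins over D0 and D1
assert priority_encode(1, 0, 1, 0) == (1, 0)   # D2 wins over D0
assert priority_encode(1, 1, 0, 0) == (0, 1)   # D1 wins over D0
```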

        SEQUENTIAL CIRCUIT

        A combinational circuit does not use any memory; hence the previous state of the input does not have any effect on the present state of the circuit. But a sequential circuit has memory, so its output can vary based on previous as well as present inputs. This type of circuit uses previous inputs, outputs, a clock and a memory element.

        Block diagram

        Flip Flop

        Flip flop is a sequential circuit which generally samples its inputs and changes its outputs only at particular instants of time, not continuously. A flip flop is said to be edge sensitive or edge triggered, rather than level triggered like latches.

        S-R Flip Flop

        It is basically an S-R latch using NAND gates with an additional enable input. It is also called a level-triggered SR-FF. For this circuit, a change in output will take place if and only if the enable input E is made active. In short, this circuit will operate as an S-R latch if E = 1, but there is no change in the output if E = 0.

        Block Diagram

        Truth Table

        Operation
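The gated S-R latch above can be modelled as a step function returning the next state (a behavioural sketch; S = R = 1 is the forbidden input and is treated as hold here):

```python
# A gated (level-triggered) S-R latch: the output can change only
# while the enable E is 1.
def sr_latch_step(q, s, r, e):
    if e == 0:
        return q    # disabled: hold the previous state
    if s == 1 and r == 0:
        return 1    # set
    if s == 0 and r == 1:
        return 0    # reset
    return q        # S = R = 0: hold (S = R = 1 is invalid)

q = 0
q = sr_latch_step(q, s=1, r=0, e=1)   # set while enabled -> q = 1
q = sr_latch_step(q, s=0, r=1, e=0)   # reset ignored: E = 0 -> q stays 1
print(q)   # prints 1
```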

        Master Slave JK Flip Flop

        Master slave JK FF is a cascade of two S-R FFs with feedback from the output of the second to the input of the first. The master is positive level triggered, but due to the presence of the inverter in the clock line, the slave responds to the negative level. Hence when the clock = 1 (positive level) the master is active and the slave is inactive, whereas when the clock = 0 (low level) the slave is active and the master is inactive.

        Circuit Diagram

        Truth Table

        Delay Flip Flop

        Delay Flip Flop or D Flip Flop is the simple gated S-R latch with a NAND inverter connected between the S and R inputs. It has only one input. The input data appears at the output after some time; due to this delay between input and output, it is called a delay flip flop. S and R are always complements of each other because of the NAND inverter. Hence the input conditions S = R = 0 and S = R = 1 can never appear, and the problematic SR = 00 and SR = 11 states are avoided.

        Block Diagram

        Circuit Diagram

        Operation

        Toggle Flip Flop

        Toggle flip flop (T flip flop) is basically a JK flip flop with the J and K terminals permanently connected together. It has only one input, denoted by T, as shown in the Symbol Diagram. The symbol for the positive edge triggered T flip flop is shown in the Block Diagram.

        Symbol Diagram

        Block Diagram

        Operation
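A behavioural sketch of the T flip flop: with J = K = T, the next state is simply Q XOR T, so the output toggles on every clock edge while T = 1 and holds while T = 0.

```python
# A T flip flop step: toggle when T = 1, hold when T = 0.
def t_ff_step(q, t):
    return q ^ t

q = 0
for _ in range(3):
    q = t_ff_step(q, 1)   # three clock edges with T = 1
print(q)   # toggled three times from 0 -> 1
```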