Tuesday, February 13, 2024

Binary Arithmetic


Binary arithmetic is an essential part of all digital computers and many other digital systems.

Binary Addition

Binary addition is the easiest of the processes to perform. As you'll see with the other operations below, it is essentially the same method you learnt for adding decimal numbers by hand (probably many years ago in your early school years). The process is actually easier with binary, as we only have two digits to worry about, 0 and 1.

The process is that we line the two numbers up (one under the other), then, starting at the far right, add each column, recording the result and possible carry as we go.

Here are the possibilities:

  • 0 + 0 = 0
  • 1 + 0 = 1
  • 1 + 1 = 2 which is 10 in binary which is 0 with a carry of 1
  • 1 + 1 + 1 (carry) = 3 which is 11 in binary which is 1 with a carry of 1

The carry is involved whenever we have a result larger than 1 (which is the largest amount we may represent with a single binary digit).
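To make the column-by-column process concrete, here is a minimal Python sketch (the function name add_binary is just illustrative, and the later sketches in this post reuse it). It adds two binary strings exactly as described above, working right to left with a carry:

    def add_binary(a: str, b: str) -> str:
        """Add two binary strings column by column, right to left, tracking a carry."""
        width = max(len(a), len(b))
        a, b = a.zfill(width), b.zfill(width)        # pad so the columns line up
        result = []
        carry = 0
        for i in range(width - 1, -1, -1):           # far-right column first
            column = int(a[i]) + int(b[i]) + carry   # 0, 1, 2 or 3
            result.append(str(column % 2))           # digit written in this column
            carry = column // 2                      # carry 1 if the column summed to 2 or 3
        if carry:
            result.append('1')                       # a final carry becomes a new leading digit
        return ''.join(reversed(result))

    print(add_binary('1011', '110'))   # 1011 + 0110 = 10001 (11 + 6 = 17)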

Adding more than two numbers

It is possible to add more than 2 binary numbers in one go, but it can soon get unwieldy managing the carries. My suggestion is to add the 1st and 2nd numbers together, then add the 3rd number to that result, then the 4th, and so on. This way you may add as many binary numbers as you like and the complexity never increases. It's a little more work, but with practice you will get very quick at it.
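Following that suggestion in code, a running total can be kept and each number folded into it in turn (this reuses the add_binary sketch from the addition section):

    numbers = ['101', '11', '1101', '10']    # 5 + 3 + 13 + 2 = 23
    total = '0'
    for n in numbers:
        total = add_binary(total, n)         # add one number at a time to the running total
    print(total)                             # 10111 (23 in binary)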

Binary Multiplication

Binary multiplication is just about as easy as binary addition. Again, it is the same process we would use for decimal multiplication by hand, and again it is easier because binary only has 0 and 1.

We line the two numbers up (similar to addition). Then we multiply the entire top number by each individual digit of the bottom number. As we move across each digit we pad out the result with 0's to line it up. Finally we add all the results together.

Here are the possibilities:

  • 0 * 0 = 0
  • 1 * 0 = 0
  • 1 * 1 = 1

As you have no doubt noticed, the process is fairly straightforward. If the digit of the bottom number we are multiplying by is a 1, then pad out accordingly and write out the top binary number. If the digit of the bottom number we are multiplying by is a 0, then we can just write out 0's.
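A minimal Python sketch of this shift-and-pad procedure (the function name is illustrative, and it reuses add_binary from the addition section):

    def multiply_binary(a: str, b: str) -> str:
        """Multiply two binary strings by forming one shifted partial product per digit of b."""
        total = '0'
        # Walk the bottom number right to left; the position gives the number of padding zeros.
        for position, digit in enumerate(reversed(b)):
            if digit == '1':
                partial = a + '0' * position    # copy of the top number, padded to line up
            else:
                partial = '0'                   # multiplying by 0 contributes nothing
            total = add_binary(total, partial)  # add the partial products together
        return total

    print(multiply_binary('101', '11'))   # 101 * 11 = 1111 (5 * 3 = 15)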

Binary Subtraction

With binary subtraction, things start to get a little more difficult (but not that difficult). Similar to binary addition, we will work through the numbers column by column, starting at the far right. Instead of carrying forward, however, we will borrow backwards (when necessary).

Here are the possibilities:

  • 0 - 0 = 0
  • 1 - 0 = 1
  • 1 - 1 = 0
  • 0 - 1 we can't do so we borrow 1 from the next column. This makes it 10 - 1 which is 1.
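Putting the borrow rule into code, here is a minimal Python sketch (illustrative only; it assumes the top number is at least as large as the bottom one):

    def subtract_binary(a: str, b: str) -> str:
        """Subtract b from a (assumes a >= b), borrowing from the next column when needed."""
        width = max(len(a), len(b))
        top = [int(d) for d in a.zfill(width)]
        bottom = [int(d) for d in b.zfill(width)]
        result = []
        for i in range(width - 1, -1, -1):   # far-right column first
            diff = top[i] - bottom[i]
            if diff < 0:                     # 0 - 1: borrow 1 from the next column to the left
                diff += 2                    # the borrowed 1 is worth two in this column
                top[i - 1] -= 1              # and it reduces the column we borrowed from
            result.append(str(diff))
        return ''.join(reversed(result)).lstrip('0') or '0'

    print(subtract_binary('1000', '11'))   # 1000 - 0011 = 101 (8 - 3 = 5)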

Another approach

The above approach is the most convenient way for us to do binary subtraction by hand. There is another approach, however, and it is the way that computers subtract binary numbers. This approach is called Two's Complement.

Let's say we want to compute 1000 ( 8 ) - 11 ( 3 ).

  • Step 1: Write the equation out, padding the bottom number with 0's
    1000
    0011 -
  • Step 2: Invert the digits of the lower number
    1000
    1100
  • Step 3: Add 1 to the lower number
    1000
    1101
  • Step 4: Add those two numbers together to get 10101
  • Step 5: Discard the leading 1 (the carry out of the top bit) and any leading 0's that remain. You are left with 101 ( 5 ); a short sketch of these steps follows below.
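Here is a short Python sketch of those five steps (illustrative only; it fixes the width at 4 bits to match the example and reuses add_binary from the addition section):

    def twos_complement_subtract(a: str, b: str, width: int = 4) -> str:
        """Compute a - b by adding the two's complement of b, following the steps above."""
        a, b = a.zfill(width), b.zfill(width)                    # Step 1: pad to equal width
        inverted = ''.join('1' if d == '0' else '0' for d in b)  # Step 2: invert the digits
        complement = add_binary(inverted, '1')                   # Step 3: add 1
        total = add_binary(a, complement)                        # Step 4: add the two numbers
        return total[-width:].lstrip('0') or '0'                 # Step 5: drop the carry out of the top bit

    print(twos_complement_subtract('1000', '11'))   # 101 (8 - 3 = 5)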

Binary Division

Binary division is probably the most difficult of the binary operations. Fortunately, it too is made easier by the fact that we only have to deal with 1's and 0's.

First off, some terminology. The number we are dividing by is the divisor. The number being divided is the dividend.

The process is as follows:

  • Step 1: Create the working portion of the dividend. Starting at the left, keep including digits until we have a number that the divisor will go into.
  • Step 2: Work out how many times the divisor goes into the working portion (with binary this is easy as it will always be 1). Write this number above the line (in line with the far right digit of the working number).
  • Step 3: Subtract the divisor from the working number. This becomes the beginning of the new working number.
  • Step 4: Bring down digits from the dividend and add to the new working number until we have a new working number large enough for the divisor to go into.
  • Step 5: Repeat steps 2 to 4 until we are at the end of the dividend.
  • Step 6: The result of the final subtraction is the remainder.
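The long-division steps can be followed mechanically. Here is a minimal Python sketch (names are illustrative; it reuses subtract_binary from the subtraction section and assumes the divisor is not zero) that returns both the quotient and the remainder:

    def divide_binary(dividend: str, divisor: str):
        """Binary long division on bit strings; returns (quotient, remainder)."""
        quotient = []
        working = ''                                   # working portion of the dividend
        for digit in dividend:                         # Steps 1 and 4: bring digits down one at a time
            working = (working + digit).lstrip('0') or '0'
            if int(working, 2) >= int(divisor, 2):     # Step 2: does the divisor go in? (at most once)
                quotient.append('1')
                working = subtract_binary(working, divisor)   # Step 3: subtract the divisor
            else:
                quotient.append('0')
        return ''.join(quotient).lstrip('0') or '0', working  # Step 6: what is left is the remainder

    print(divide_binary('1101', '10'))   # ('110', '1'), i.e. 13 / 2 = 6 remainder 1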

 

Binary Codes


In coding, when numbers, letters or words are represented by a specific group of symbols, we say that the number, letter or word is being encoded. The group of symbols is called a code. Digital data is represented, stored and transmitted as groups of binary bits, and such a group is also called a binary code. Binary codes can represent numbers as well as alphanumeric characters.

Advantages of Binary Code

Following is the list of advantages that binary code offers.

  • Binary codes are suitable for computer applications.

  • Binary codes are suitable for digital communications.

  • Binary codes make the analysis and design of digital circuits easier.

  • Since only 0 and 1 are used, implementation becomes easy.

Classification of binary codes

The codes are broadly categorized into the following categories.

  • Weighted Codes
  • Non-Weighted Codes
  • Binary Coded Decimal Code
  • Alphanumeric Codes
  • Error Detecting Codes
  • Error Correcting Codes

Weighted Codes

Weighted binary codes are those binary codes which obey the positional weight principle: each position of the number represents a specific weight. Several such code systems are used to express the decimal digits 0 through 9, and in these codes each decimal digit is represented by a group of four bits.

Non-Weighted Codes

In this type of binary code, positional weights are not assigned. Examples of non-weighted codes are the Excess-3 code and the Gray code.

Excess-3 code

The Excess-3 code is also called the XS-3 code. It is a non-weighted code used to express decimal numbers. The Excess-3 code words are derived from the 8421 BCD code words by adding (0011)2, i.e. (3)10, to each code word. The Excess-3 codes are obtained as follows −

Decimal digit    8421 BCD    Excess-3
0                0000        0011
1                0001        0100
2                0010        0101
3                0011        0110
4                0100        0111
5                0101        1000
6                0110        1001
7                0111        1010
8                1000        1011
9                1001        1100
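As a quick check of the rule, a short Python sketch (purely illustrative) derives each Excess-3 code word by adding 3 to the digit's value:

    # Derive the Excess-3 code word for each decimal digit by adding 3 (0011) to its value.
    for digit in range(10):
        print(digit, format(digit, '04b'), format(digit + 3, '04b'))   # decimal, 8421 BCD, Excess-3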

Gray Code

It is a non-weighted code, and it is not an arithmetic code; that is, there are no specific weights assigned to the bit positions. Its special feature is that only one bit changes each time the decimal number is incremented. Because only one bit changes at a time, the Gray code is called a unit-distance code. The Gray code is a cyclic code and cannot be used for arithmetic operations.
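A standard way to generate the reflected Gray code (a well-known identity, not specific to this text) is to XOR each number with itself shifted right by one bit. The short Python sketch below lists the 3-bit codes so the single-bit-change property can be seen:

    def gray_code(n: int) -> int:
        """Convert a binary number to its reflected Gray code equivalent."""
        return n ^ (n >> 1)

    for n in range(8):
        print(n, format(n, '03b'), format(gray_code(n), '03b'))
    # Successive Gray codes 000, 001, 011, 010, 110, 111, 101, 100 differ in exactly one bit.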

Application of Gray code

  • Gray code is popularly used in the shaft position encoders.

  • A shaft position encoder produces a code word which represents the angular position of the shaft.

Binary Coded Decimal (BCD) code

In this code, each decimal digit is represented by a 4-bit binary number. BCD is a way to express each of the decimal digits with a binary code. With four bits we can represent sixteen values (0000 to 1111), but in BCD only the first ten of these are used (0000 to 1001). The remaining six combinations, 1010 to 1111, are invalid in BCD.
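For example, 59 in BCD is 0101 1001 (one 4-bit group per decimal digit), whereas its pure binary form is 111011. A minimal Python sketch of the encoding (the function name is illustrative):

    def to_bcd(number: int) -> str:
        """Encode a non-negative integer in BCD: one 4-bit group per decimal digit."""
        return ' '.join(format(int(d), '04b') for d in str(number))

    print(to_bcd(59))        # 0101 1001
    print(format(59, 'b'))   # 111011, the pure binary form, which needs fewer bits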

Advantages of BCD Codes

  • It is very similar to the decimal system.
  • We only need to remember the binary equivalents of the decimal digits 0 to 9.

Disadvantages of BCD Codes

  • The addition and subtraction of BCD numbers follow different rules.

  • BCD arithmetic is a little more complicated.

  • BCD needs more bits than pure binary to represent a decimal number, so BCD is less efficient than binary.

Alphanumeric codes

A binary digit, or bit, can represent only two symbols, as it has only two states, '0' and '1'. This is not enough for communication between two computers, where many more symbols are needed. These symbols must represent the 26 letters of the alphabet in both capital and small forms, the numbers 0 to 9, punctuation marks and other symbols.

Alphanumeric codes are codes that represent numbers and alphabetic characters. Most such codes also represent other characters, such as symbols and various instructions necessary for conveying information. An alphanumeric code should represent at least the 10 digits and 26 letters of the alphabet, i.e. 36 items in total. The following three alphanumeric codes are very commonly used for data representation.

  • American Standard Code for Information Interchange (ASCII).
  • Extended Binary Coded Decimal Interchange Code (EBCDIC).
  • Five bit Baudot Code.

ASCII code is a 7-bit code whereas EBCDIC is an 8-bit code. ASCII code is more commonly used worldwide while EBCDIC is used primarily in large IBM computers.
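As an illustration of 7-bit ASCII code words, the letter 'A' has code 65, which is 1000001 in binary. The short Python sketch below prints the codes for a few characters:

    for ch in ['A', 'a', '7', '?']:
        print(ch, ord(ch), format(ord(ch), '07b'))   # character, decimal ASCII code, 7-bit binary
    # A 65 1000001, a 97 1100001, 7 55 0110111, ? 63 0111111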

Error Codes

There are binary code techniques available to detect and correct errors in data during data transmission.

Error Detection and Error Correction

Error-detecting and error-correcting codes add redundant bits to the transmitted data so that the receiver can detect, and in some cases correct, transmission errors.

Monday, February 12, 2024

Computer System Organization


A digital computer consists of an interconnected system of processors, memories, and input/output devices. Processors, memories, and input/output are key concepts and will be present at every level, so we will start to study computer architecture by looking at all three in turn.

PROCESSORS

The CPU (Central Processing Unit) is the ‘‘brain’’ of the computer.

A central processing unit (CPU) is the electronic circuitry within a computer that carries out the instructions of a computer program by performing the basic arithmetic, logical, control and input/output (I/O) operations specified by the instructions. The computer industry has used the term “central processing unit” at least since the early 1960s. Traditionally, the term “CPU” refers to a processor, more specifically to its processing unit and control unit (CU), distinguishing these core elements of a computer from external components such as main memory and I/O circuitry.

The CPU is composed of several distinct parts. The control unit is responsible for fetching instructions from the main memory and determining their type. The arithmetic logic unit performs operations such as addition and Boolean AND needed to carry out the instructions.

The CPU also contains a small, high-speed memory used to store temporary results and certain control information. This memory is made up of a number of registers, each of which has a certain size and function. Usually, all the registers have the same size. Each register can hold one number, up to some maximum determined by the size of the register. Registers can be read and written at high speed since they are internal to the CPU. The most important register is the Program Counter (PC), which points to the next instruction to be fetched for execution. (The name ‘‘program counter’’ is somewhat misleading because it has nothing to do with counting anything, but the term is universally used.) Also important is the Instruction Register (IR), which holds the instruction currently being executed. Most computers have numerous other registers as well, some general-purpose and some for specific purposes.

What is the "bus"?

In computer architecture, a bus is a communication system that transfers data between components inside a computer, or between computers. This expression covers all related hardware components (wire, optical fiber, etc.) and software, including communication protocols.

Early computer buses were parallel electrical wires with multiple hardware connections, but the term is now used for any physical arrangement that provides the same logical function as a parallel electrical bus. Modern computer buses can use both parallel and bit-serial connections and can be wired in either a multi-drop (electrically parallel) or daisy-chain topology, or connected by switched hubs, as in the case of USB.

CPU Organization

  • A system bus is a link that connects the major components of a system to the central storage and carries out data transfers between them.
  • It is a pathway composed of cables and connectors which is used to carry data between a computer's microprocessor and the main memory.
  • It provides a communication path for the data and control signals moving between the major components of the computer system.

The types of system buses are the data bus, the address bus and the control bus.

Data Bus

  • It carries the pieces of information that are to be transferred.
  • The data is transferred between peripherals, memory and the CPU. The data bus can be a very busy pathway.

Address Bus

  • It carries information about where the data is to be transferred.
  • The components pass memory addresses to one another over the address bus.

Control Bus

  • It carries the set of instructions regarding what to do with the data.
  • It is used to send out signals to coordinate and manage the activities of the motherboard components.

1. Bus Width

  • The size of a bus is also known as its width.
  • It determines how much data, i.e. how many bits, can be transferred at one time.

2. Bus Speed

  • Bus speed refers to the number of bits or bytes the bus can transfer per unit time.
  • It is also described by its frequency, i.e. the number of data transfers per second; each time data is sent or received is called a cycle.
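Taken together, width and speed give a theoretical peak transfer rate (width in bits multiplied by transfers per second). The figures in the sketch below are made-up illustrative values, not taken from this text:

    # Illustrative only: a hypothetical 64-bit-wide bus running at 100 million transfers per second.
    width_bits = 64
    transfers_per_second = 100_000_000
    peak_bytes_per_second = width_bits // 8 * transfers_per_second
    print(peak_bytes_per_second)   # 800000000 bytes/s, i.e. a theoretical peak of 800 MB/s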

The system bus combines the functions of the three main buses, namely Control Bus, Address Bus, Data Bus. The control bus carries the control, timing and coordination signals to manage the various functions across the system. The address bus is used to specify memory locations for the data being transferred.

The data bus is a bidirectional path that carries the actual data between the processor (CPU), the memory and the peripherals (input and output). The system bus architecture varies from system to system and can be specific to a particular computer design. Other common characteristics of system buses relate to their primary role, such as whether they connect devices internally or externally.

Internal Bus

  • It is also known as an internal data bus, a memory bus, a system bus or Front-Side-Bus.
  • It connects all the internal components of a computer, such as CPU and memory, to the motherboard.
  • Internal data buses are also referred to as local buses because they are intended to connect to local devices.
  • This bus is fast and independent of the rest of the computer's operations.

External Bus

  • It is also known as an expansion bus.
  • It is made up of the electronic pathways that connect the different external devices, such as a printer, etc.

Cache Function

A CPU cache is a hardware cache used by the central processing unit (CPU) of a computer to reduce the average cost (time or energy) to access data from the main memory. A cache is a smaller, faster memory, closer to a processor core, which stores copies of the data from frequently used main memory locations. Most CPUs have different independent caches, including instruction and data caches, where the data cache is usually organized as a hierarchy of more cache levels (L1, L2, etc.).

All modern (fast) CPUs (with few specialized exceptions) have multiple levels of CPU caches. The first CPUs that used a cache had only one level of cache; unlike later level 1 caches, it was not split into L1d (for data) and L1i (for instructions). Almost all current CPUs with caches have a split L1 cache. They also have L2 caches and, for larger processors, L3 caches as well. The L2 cache is usually not split and acts as a common repository for the already split L1 cache. Every core of a multi-core processor has a dedicated L2 cache, which is usually not shared between the cores. The L3 cache, and higher-level caches, are shared between the cores and are not split. An L4 cache is currently uncommon and is generally implemented in dynamic random access memory (DRAM), rather than in static random access memory (SRAM), on a separate die or chip. Historically that was also the case for L1, but bigger chips have allowed integration of L1 and generally of all cache levels, with the possible exception of the last level. Each extra level of cache tends to be bigger and optimized differently.

Other types of caches exist (that are not counted towards the “cache size” of the most important caches mentioned above), such as the translation lookaside buffer (TLB), which is part of the memory management unit (MMU) that most CPUs have.

Caches are generally sized in powers of two: 4, 8, 16, etc. KiB, or MiB for larger non-L1 caches, although the IBM z13 has a 96 KiB L1 instruction cache.

Instruction Execution Cycle

  • This is the process of fetching an instruction from memory, decoding it to determine what it specifies, and executing it. So, the three basic steps of the cycle are:

Fetch the instruction.
Decode it.
Execute it.

  • The whole process of fetching an instruction from memory, decoding it and executing it is termed an instruction cycle; a toy sketch follows below.
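The toy sketch below runs the cycle on an entirely hypothetical machine (a Python list stands in for memory, each 'instruction' is a tuple, and pc and acc stand in for the program counter and an accumulator register); it is only meant to make the three steps concrete:

    # Hypothetical toy machine: memory holds (opcode, operand) tuples; pc and acc are registers.
    memory = [('LOAD', 5), ('ADD', 3), ('HALT', None)]
    pc, acc = 0, 0

    while True:
        instruction = memory[pc]        # fetch the instruction the program counter points at
        pc += 1                         # the PC now points at the next instruction
        opcode, operand = instruction   # decode: determine what kind of instruction it is
        if opcode == 'LOAD':            # execute it
            acc = operand
        elif opcode == 'ADD':
            acc += operand
        elif opcode == 'HALT':
            break

    print(acc)   # 8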

PRIMARY MEMORY

The memory is that part of the computer where programs and data are stored. Some computer scientists (especially British ones) use the term store or storage rather than memory, although more and more, the term ‘‘storage’’ is used to refer to disk storage. Without a memory from which the processors can read and write information, there would be no stored-program digital computers.

Bits

The basic unit of memory is the binary digit, called a bit. A bit may contain a 0 or a 1. It is the simplest possible unit. (A device capable of storing only zeros could hardly form the basis of a memory system; at least two values are needed.) People often say that computers use binary arithmetic because it is ‘‘efficient.’’ What they mean (although they rarely realize it) is that digital information can be stored by distinguishing between different values of some continuous physical quantity, such as voltage or current. The more values that must be distinguished, the less separation between adjacent values, and the less reliable the memory. The binary number system requires only two values to be distinguished. Consequently, it is the most reliable method for encoding digital information.

Memory Addresses

In computing, a memory address is a reference to a specific memory location used at various levels by software and hardware. Memory addresses are fixed-length sequences of digits conventionally displayed and manipulated as unsigned integers. This numerical representation is based on features of the CPU (such as the instruction pointer and incremental address registers), as well as on the use of memory as an array, as supported by various programming languages.

 

 


Tuesday, February 6, 2024

Cellular Network


A cellular network is the underlying technology for mobile phones, personal communication systems, wireless networking, etc. The technology was developed for mobile radio telephony to replace high-power transmitter/receiver systems. Cellular networks use lower power, shorter range and more transmitters for data transmission.

Features of Cellular Systems

Wireless cellular systems solve the problem of spectral congestion and increase user capacity. The features of cellular systems are as follows −

  • Offer very high capacity in a limited spectrum.

  • Reuse of radio channel in different cells.

  • Enable a fixed number of channels to serve an arbitrarily large number of users by reusing the channel throughout the coverage region.

  • Communication is always between mobile and base station (not directly between mobiles).

  • Each cellular base station is allocated a group of radio channels within a small geographic area called a cell.

  • Neighboring cells are assigned different channel groups.

  • By limiting the coverage area to within the boundary of the cell, the channel groups may be reused to cover different cells.

  • Keep interference levels within tolerable limits.

  • Frequency reuse, or frequency planning, is applied across the coverage region.

Organization of a Wireless Cellular Network

A cellular network is organized around multiple low-power transmitters, each of 100 W or less.

Shape of Cells

The coverage area of a cellular network is divided into cells, each cell having its own antenna for transmitting signals. Each cell has its own set of frequencies. Data communication within a cell is served by its base station transmitter, receiver and control unit.

The shape of cells can be either square or hexagonal −

Square

A square cell has four neighbors at distance d and four at distance √2 d.

  • Better if all adjacent antennas equidistant
  • Simplifies choosing and switching to new antenna

Hexagon

A hexagonal cell shape is highly recommended for its easy coverage and calculations. It offers the following advantages −

  • Provides equidistant antennas
  • Distance from center to vertex equals length of side

Frequency Reuse

Frequency reuse is the concept of using the same radio frequencies in cells within a given area that are separated by a considerable distance, so that communication can be established with minimal interference.

Frequency reuse offers the following benefits −

  • Allows communications within cell on a given frequency
  • Limits escaping power to adjacent cells
  • Allows re-use of frequencies in nearby cells
  • Uses same frequency for multiple conversations
  • 10 to 50 frequencies per cell

For example, let N be the number of cells in a cluster, each using a distinct set of frequencies, and let K be the total number of frequencies used in the system. Then the number of frequencies per cell is K/N.

In Advanced Mobile Phone Services (AMPS), with K = 395 and N = 7, the average number of frequencies per cell is 395/7 ≈ 56.
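This K/N calculation is easy to check; here is a one-line Python verification using the AMPS figures quoted above:

    K = 395   # total number of frequencies in the system (AMPS example above)
    N = 7     # number of cells in a cluster sharing them
    print(K / N)   # about 56.4, i.e. roughly 56 frequencies per cell on average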

Advantages of cellular networks:

  • It is flexible enough to use the features and functions of almost all public and private networks.

  • It has increased capacity.

  • It consumes less power.

  • It can be distributed over a larger coverage area.

  • It reduces interference from other signals.

 

Thursday, February 1, 2024

Characteristics of Wireless Channel


The most important characteristics of wireless channel are −

  • Path loss
  • Fading
  • Interference
  • Doppler shift

In the following sections, we will discuss these channel characteristics one by one.

Path Loss

Path loss can be expressed as the ratio of the power of the transmitted signal to the power of the same signal received by the receiver, on a given path. It is a function of the propagation distance.

  • Estimation of path loss is very important for designing and deploying wireless communication networks

  • Path loss is dependent on a number of factors such as the radio frequency used and the nature of the terrain.

  • The free space propagation model is the simplest path loss model, in which there is a direct-path signal between the transmitter and the receiver, with no atmospheric attenuation or multipath components.

In this model, the relationship between the transmitted power Pt and the received power Pr is given by

$$P_{r} = P_{t}G_{t}G_{r}\left(\frac{\lambda}{4\pi d}\right)^2$$

Where

  • Gt is the transmitter antenna gain

  • Gr is the receiver antenna gain

  • d is the distance between the transmitter and receiver

  • λ is the wavelength of the signal
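As a quick numeric illustration of the free-space relation, here is a minimal Python sketch (the function name and the input values are made up for the example, not taken from this text):

    import math

    def free_space_received_power(pt, gt, gr, wavelength, d):
        """Free-space model: Pr = Pt * Gt * Gr * (lambda / (4 * pi * d)) ** 2."""
        return pt * gt * gr * (wavelength / (4 * math.pi * d)) ** 2

    # Illustrative inputs: 1 W transmitted, unity antenna gains, 0.125 m wavelength (~2.4 GHz), 100 m apart.
    print(free_space_received_power(pt=1.0, gt=1.0, gr=1.0, wavelength=0.125, d=100.0))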

The two-ray model, also called the two-path model, is a widely used path loss model. The free space model described above assumes that there is only one single path from the transmitter to the receiver.

In reality, the signal reaches the receiver through multiple paths; the two-path model tries to capture this phenomenon. The model assumes that the signal reaches the receiver through two paths: one a line-of-sight path, and the other the path along which the reflected wave is received.

According to the two-path model, the received power is given by

$$P_{r} = P_{t}G_{t}G_{r}(\frac{h_{t}h_{r}}{d^2})^2$$

Where

  • Pt is the transmitted power

  • Gt represents the antenna gain at the transmitter

  • Gr represents the antenna gain at the receiver

  • d is the distance between the transmitter and receiver

  • ht is the height of the transmitter

  • hr is the height of the receiver
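The two-ray relation can be sketched the same way (again, the function name and input values are illustrative only):

    def two_ray_received_power(pt, gt, gr, ht, hr, d):
        """Two-ray (two-path) model: Pr = Pt * Gt * Gr * (ht * hr / d ** 2) ** 2."""
        return pt * gt * gr * (ht * hr / d ** 2) ** 2

    # Illustrative inputs: 10 m and 1.5 m antenna heights, 500 m separation, unity gains, 1 W transmitted.
    print(two_ray_received_power(pt=1.0, gt=1.0, gr=1.0, ht=10.0, hr=1.5, d=500.0))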

Fading

Fading refers to fluctuations in the strength of the signal at the receiver. Fading can be classified into two types −

  • Fast fading/small scale fading and
  • Slow fading/large scale fading

Fast fading refers to the rapid fluctuations in the amplitude, phase or multipath delays of the received signal, due to the interference between multiple versions of the same transmitted signal arriving at the receiver at slightly different times.

The time between the reception of the first version of the signal and the last echoed signal is called delay spread. The multipath propagation of the transmitted signal, which causes fast fading, is because of the three propagation mechanisms, namely −

  • Reflection
  • Diffraction
  • Scattering

The multiple signal paths may sometimes add constructively and sometimes destructively at the receiver, causing a variation in the power level of the received signal. The received signal envelope of a fast-fading signal is said to follow a Rayleigh distribution when there is no line-of-sight path between the transmitter and the receiver.

Slow Fading

The name Slow Fading itself implies that the signal fades away slowly. The features of slow fading are as given below.

  • Slow fading occurs when objects that partially absorb the transmission lie between the transmitter and receiver.

  • Slow fading is so called because the duration of the fade may last for multiple seconds or minutes.

  • Slow fading may occur when the receiver is inside a building and the radio wave must pass through the walls of a building, or when the receiver is temporarily shielded from the transmitter by a building. The obstructing objects cause a random variation in the received signal power.

  • Slow fading may cause the received signal power to vary, though the distance between the transmitter and receiver remains the same.

  • Slow fading is also referred to as shadow fading since the objects that cause the fade, which may be large buildings or other structures, block the direct transmission path from the transmitter to the receiver.

Interference

Wireless transmissions have to counter interference from a wide variety of sources. Two main forms of interference are −

  • Adjacent channel interference and
  • Co-channel interference.

In the case of adjacent channel interference, signals in nearby frequencies have components outside their allocated ranges, and these components may interfere with ongoing transmissions in the adjacent frequencies. It can be avoided by carefully introducing guard bands between the allocated frequency ranges.

Co-channel interference, sometimes also referred to as narrow band interference, is due to other nearby systems using the same transmission frequency.

Inter-symbol interference is another type of interference, where distortion in the received signal is caused by the temporal spreading and the consequent overlapping of individual pulses in the signal.

Adaptive equalization is a commonly used technique for combating inter-symbol interference. It involves gathering the dispersed symbol energy back into its original time interval. Complex digital processing algorithms are used in the equalization process.

Open the file that you want to edit. Choose from the following tasks:   Task Steps Edit text Click the Edit tab. Select the text...