Monday, February 12, 2024

Computer System Organization


A digital computer consists of an interconnected system of processors, memories, and input/output devices. Processors, memories, and input/output are key concepts and will be present at every level, so we will start to study computer architecture by looking at all three in turn.

PROCESSORS

The CPU (Central Processing Unit) is the ‘‘brain’’ of the computer.

A central processing unit (CPU) is the electronic circuitry within a computer that carries out the instructions of a computer program by performing the basic arithmetic, logical, control and input/output (I/O) operations specified by the instructions. The computer industry has used the term “central processing unit” at least since the early 1960s. Traditionally, the term “CPU” refers to a processor, more specifically to its processing unit and control unit (CU), distinguishing these core elements of a computer from external components such as main memory and I/O circuitry.

The CPU is composed of several distinct parts. The control unit is responsible for fetching instructions from the main memory and determining their type. The arithmetic logic unit performs operations such as addition and Boolean AND needed to carry out the instructions.

The CPU also contains a small, high-speed memory used to store temporary results and certain control information. This memory is made up of a number of registers, each of which has a certain size and function. Usually, all the registers have the same size. Each register can hold one number, up to some maximum determined by the size of the register. Registers can be read and written at high speed since they are internal to the CPU. The most important register is the Program Counter (PC), which points to the next instruction to be fetched for execution. (The name ‘‘program counter’’ is somewhat misleading because it has nothing to do with counting anything, but the term is universally used.) Also important is the Instruction Register (IR), which holds the instruction currently being executed. Most computers have numerous other registers as well, some of them general-purpose and some for specific purposes.

What is the ‘‘bus’’?

In computer architecture, a bus is a communication system that transfers data between components inside a computer, or between computers. This expression covers all related hardware components (wire, optical fiber, etc.) and software, including communication protocols.

Early computer buses were parallel electrical wires with multiple hardware connections, but the term is now used for any physical arrangement that provides the same logical function as a parallel electrical bus. Modern computer buses can use both parallel and bit-serial connections and can be wired in either a multi-drop (electrical parallel) or daisy chain topology, or connected by switched hubs, as in the case of USB.

CPU Organization

  • A system bus is a link that connects every segment of a system to the central storage and carries out the data transfer between them.
  • It is a pathway composed of cables and connectors which is used to carry data between a computer microprocessor and the main memory.
  • It provides a communication path for the data and control signals moving between the major components of the computer system.

The types of system buses are:

Data Bus

  • These are the pieces of information that are to be transferred.
  • The data is transferred between peripherals, memory and the CPU. The data bus can be a very busy pathway.

Address Bus

  • It stores information about where the data is to be transferred.
  • The components pass memory addresses to one another over the address bus.

Control Bus

  • These are the set of instructions regarding what to do with the data.
  • It is used to send out signals to coordinate and manage the activities of the motherboard components.

1. Bus Width

  • The size of a bus is also known as its width.
  • It determines how much information can be transferred at one time.

2. Bus Speed

  • This refers to the number of bits or bytes the bus can transfer per unit time.
  • It is also defined by its frequency, i.e. the number of data packets sent or received per second. Each time that data is sent or received is called a cycle.

The system bus combines the functions of the three main buses, namely Control Bus, Address Bus, Data Bus. The control bus carries the control, timing and coordination signals to manage the various functions across the system. The address bus is used to specify memory locations for the data being transferred.

The data bus is a bidirectional path that carries the actual data between the processor (CPU), the memory and the peripherals (input and output). The system bus architecture varies from system to system and can be specific to a particular computer design. Other common characteristics of system buses are based on their primary role, connecting devices internally or externally, etc.
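To make the division of labour between the three buses concrete, here is a minimal sketch in Python of a single memory-read transaction. The class and signal names (SystemBus, "READ") are illustrative inventions for this note, not the interface of any real chipset: the CPU drives the address bus, asserts a read signal on the control bus, and the memory answers on the data bus.

```python
# Minimal sketch of one memory-read bus transaction.
# Class and signal names are illustrative, not from any real chipset.

class SystemBus:
    def __init__(self, memory):
        self.memory = memory          # attached memory device
        self.address_lines = 0        # address bus: where to read/write
        self.data_lines = 0           # data bus: the value being moved
        self.control = None           # control bus: READ / WRITE signal

    def read(self, address):
        self.address_lines = address  # CPU places the address on the address bus
        self.control = "READ"         # CPU asserts READ on the control bus
        # Memory decodes the address and drives the data bus with the value
        self.data_lines = self.memory[self.address_lines]
        return self.data_lines

memory = {0x10: 42, 0x11: 7}          # toy memory: address -> value
bus = SystemBus(memory)
print(bus.read(0x10))                 # -> 42
```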

Internal Bus

  • It is also known as an internal data bus, a memory bus, a system bus or front-side bus.
  • It connects all the internal components of a computer, such as the CPU and memory, to the motherboard.
  • An internal data bus is also referred to as a local bus because it is intended to connect to local devices.
  • This bus is quick and independent of the rest of the computer operations.

External Bus

  • It is also known as an expansion bus.
  • It is made up of the electronic pathways that connect external devices, such as printers, to the computer.

Cache Function

A CPU cache is a hardware cache used by the central processing unit (CPU) of a computer to reduce the average cost (time or energy) to access data from the main memory. A cache is a smaller, faster memory, closer to a processor core, which stores copies of the data from frequently used main memory locations. Most CPUs have different independent caches, including instruction and data caches, where the data cache is usually organized as a hierarchy of more cache levels (L1, L2, etc.).

All modern (fast) CPUs (with few specialized exceptions) have multiple levels of CPU caches. The first CPUs that used a cache had only one level of cache; unlike later level 1 caches, it was not split into L1d (for data) and L1i (for instructions). Almost all current CPUs with caches have a split L1 cache. They also have L2 caches and, for larger processors, L3 caches as well. The L2 cache is usually not split and acts as a common repository for the already split L1 cache. Every core of a multi-core processor usually has a dedicated L2 cache that is not shared between the cores. The L3 cache, and higher-level caches, are shared between the cores and are not split. An L4 cache is currently uncommon and is generally implemented in dynamic random access memory (DRAM), rather than in static random access memory (SRAM), on a separate die or chip. That was also the case historically with L1, while bigger chips have allowed integration of it and generally all cache levels, with the possible exception of the last level. Each extra level of cache tends to be bigger and to be optimized differently.

Other types of caches exist (that are not counted towards the “cache size” of the most important caches mentioned above), such as the translation lookaside buffer (TLB) that is part of the memory management unit (MMU) that most CPUs have.

Caches are generally sized in powers of two: 4, 8, 16, etc. KiB, or MiB for larger non-L1 caches, although the IBM z13 has a 96 KiB L1 instruction cache.
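As an illustration of how a cache uses an address, the sketch below splits a memory address into tag, index and byte-offset fields for a hypothetical direct-mapped cache. The sizes (32 KiB cache, 64-byte lines) are assumptions chosen for the example, not values from the text.

```python
# Sketch: splitting an address into offset / index / tag fields for a
# direct-mapped cache. Cache and line sizes are illustrative assumptions.

CACHE_SIZE = 32 * 1024                       # 32 KiB cache
LINE_SIZE = 64                               # 64-byte cache lines
NUM_LINES = CACHE_SIZE // LINE_SIZE          # 512 lines

OFFSET_BITS = LINE_SIZE.bit_length() - 1     # 6 bits select a byte in a line
INDEX_BITS = NUM_LINES.bit_length() - 1      # 9 bits select the line

def split_address(addr):
    offset = addr & (LINE_SIZE - 1)                    # byte within the line
    index = (addr >> OFFSET_BITS) & (NUM_LINES - 1)    # which cache line
    tag = addr >> (OFFSET_BITS + INDEX_BITS)           # identifies the block
    return tag, index, offset

print(split_address(0x00402A7C))   # -> (tag, index, offset)
```

A lookup compares the tag stored at that index with the tag of the requested address; a match is a cache hit, otherwise the line is fetched from the next level.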

Instruction Execution Cycle

  • This is the process of getting an instruction from the memory, decoding it to determine the operation to be performed, and executing it. So, the three basic steps of the cycle are:

Fetch the instruction.
Decode it.
Execute it.

  • The whole process of fetching an instruction from the memory, decoding it and executing it is termed the instruction cycle, as sketched below.
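The following toy simulation, written only for this note, walks through the cycle using a Program Counter and an Instruction Register. The three-instruction "ISA" (LOAD, ADD, HALT) is made up purely for illustration.

```python
# Toy fetch-decode-execute loop illustrating the instruction cycle.
# The three-instruction "ISA" (LOAD, ADD, HALT) is invented for this sketch.

memory = [
    ("LOAD", 5),     # load the constant 5 into the accumulator
    ("ADD", 3),      # add 3 to the accumulator
    ("HALT", None),  # stop execution
]

pc = 0               # Program Counter: address of the next instruction
acc = 0              # accumulator register

while True:
    ir = memory[pc]              # FETCH: copy the instruction into the IR
    pc += 1                      # advance the PC to the next instruction
    opcode, operand = ir         # DECODE: determine the operation type
    if opcode == "LOAD":         # EXECUTE
        acc = operand
    elif opcode == "ADD":
        acc += operand
    elif opcode == "HALT":
        break

print(acc)   # -> 8
```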

PRIMARY MEMORY

The memory is that part of the computer where programs and data are stored. Some computer scientists (especially British ones) use the term store or storage rather than memory, although more and more, the term ‘‘storage’’ is used to refer to disk storage. Without a memory from which the processors can read and write information, there would be no stored-program digital computers.

Bits

The basic unit of memory is the binary digit, called a bit. A bit may contain a 0 or a 1. It is the simplest possible unit. (A device capable of storing only zeros could hardly form the basis of a memory system; at least two values are needed.) People often say that computers use binary arithmetic because it is ‘‘efficient.’’ What they mean (although they rarely realize it) is that digital information can be stored by distinguishing between different values of some continuous physical quantity, such as voltage or current. The more values that must be distinguished, the less separation between adjacent values, and the less reliable the memory. The binary number system requires only two values to be distinguished. Consequently, it is the most reliable method for encoding digital information.

Memory Addresses

In computing, a memory address is a reference to a specific memory location used at various levels by software and hardware. Memory addresses are fixed-length sequences of digits conventionally displayed and manipulated as unsigned integers. This numerical interpretation is based on features of the CPU (such as the instruction pointer and incremental address registers), as well as on the use of memory as an array, as supported by various programming languages.
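A small sketch of the "memory as an array" view: given an assumed base address and word size (both hypothetical values chosen for the example), the address of the i-th word is a simple unsigned-integer calculation.

```python
# Sketch: treating byte-addressable memory as an array of unsigned addresses.
# The base address and word size below are illustrative assumptions.

BASE_ADDRESS = 0x1000     # hypothetical start of an array in memory
WORD_SIZE = 4             # bytes per 32-bit word

def address_of(index):
    """Address of the index-th 4-byte word, as an unsigned integer."""
    return BASE_ADDRESS + index * WORD_SIZE

print(hex(address_of(0)))   # 0x1000
print(hex(address_of(3)))   # 0x100c
```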

 

 

Friday, February 9, 2024

Binary Codes


In coding, when numbers, letters or words are represented by a specific group of symbols, the number, letter or word is said to be encoded, and the group of symbols is called a code. Digital data is represented, stored and transmitted as groups of binary bits; such a group is also called a binary code. Binary codes can represent numbers as well as alphanumeric characters.

Advantages of Binary Code

Following is the list of advantages that binary code offers.

  • Binary codes are suitable for computer applications.

  • Binary codes are suitable for digital communications.

  • Binary codes simplify the analysis and design of digital circuits.

  • Since only 0 and 1 are used, implementation becomes easy.

Classification of binary codes

The codes are broadly categorized into the following categories.

  • Weighted Codes
  • Non-Weighted Codes
  • Binary Coded Decimal Code
  • Alphanumeric Codes
  • Error Detecting Codes
  • Error Correcting Codes

Weighted Codes

Weighted binary codes are those binary codes which obey the positional weight principle. Each position of the number represents a specific weight. Several systems of the codes are used to express the decimal digits 0 through 9. In these codes each decimal digit is represented by a group of four bits.

Non-Weighted Codes

In this type of binary codes, the positional weights are not assigned. The examples of non-weighted codes are Excess-3 code and Gray code.

Excess-3 code

The Excess-3 code is also called the XS-3 code. It is a non-weighted code used to express decimal numbers. The Excess-3 code words are derived from the 8421 BCD code words by adding (0011)2, i.e. (3)10, to each 8421 code word. The Excess-3 codes are obtained as follows −

[Table: Excess-3 code words for the decimal digits 0–9]
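A short sketch of the derivation described above: each Excess-3 code word is simply the 4-bit binary form of the decimal digit plus three.

```python
# Sketch: deriving the Excess-3 code word for each decimal digit by
# adding 3 to its 8421 BCD value.

def excess_3(digit):
    return format(digit + 3, "04b")   # 4-bit binary of (digit + 3)

for d in range(10):
    print(d, format(d, "04b"), excess_3(d))   # decimal, BCD 8421, Excess-3
```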

Gray Code

It is a non-weighted code and it is not an arithmetic code; that means there are no specific weights assigned to the bit positions. It has a very special feature: only one bit changes each time the decimal number is incremented. As only one bit changes at a time, the Gray code is called a unit distance code. The Gray code is a cyclic code. Gray code cannot be used for arithmetic operations.
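A common way to generate the Gray code is the binary-to-Gray transformation g = b XOR (b >> 1); the sketch below uses it to show that consecutive values differ in exactly one bit. The text does not specify a construction method, so this standard transformation is used as the example.

```python
# Sketch: converting binary to Gray code with the standard b ^ (b >> 1)
# transformation; consecutive outputs differ in only one bit.

def to_gray(n):
    return n ^ (n >> 1)

for n in range(8):
    print(n, format(to_gray(n), "03b"))
# 0 000, 1 001, 2 011, 3 010, 4 110, 5 111, 6 101, 7 100
```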

Application of Gray code

  • Gray code is popularly used in the shaft position encoders.

  • A shaft position encoder produces a code word which represents the angular position of the shaft.

Binary Coded Decimal (BCD) code

In this code each decimal digit is represented by a 4-bit binary number. BCD is a way to express each of the decimal digits with a binary code. With four bits we can represent sixteen values (0000 to 1111), but in BCD only the first ten of these are used (0000 to 1001). The remaining six combinations, i.e. 1010 to 1111, are invalid in BCD.
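A minimal sketch of BCD encoding: each decimal digit of a number is replaced by its own 4-bit group.

```python
# Sketch: encoding a decimal number into BCD, one 4-bit group per digit.

def to_bcd(number):
    return " ".join(format(int(d), "04b") for d in str(number))

print(to_bcd(259))   # -> 0010 0101 1001
```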

Advantages of BCD Codes

  • It is very similar to the decimal system.
  • We only need to remember the binary equivalents of the decimal digits 0 to 9.

Disadvantages of BCD Codes

  • BCD addition and subtraction follow different rules.

  • BCD arithmetic is a little more complicated.

  • BCD needs more bits than pure binary to represent a decimal number, so BCD is less efficient than binary.

Alphanumeric codes

A binary digit or bit can represent only two symbols, as it has only two states, '0' and '1'. But this is not enough for communication between two computers, because many more symbols are needed. These symbols must represent the 26 letters of the alphabet in both capital and small forms, the digits 0 to 9, punctuation marks and other symbols.

The alphanumeric codes are the codes that represent numbers and alphabetic characters. Most such codes also represent other characters such as symbols and the various control instructions necessary for conveying information. An alphanumeric code should represent at least 10 digits and 26 letters of the alphabet, i.e. 36 items in total. The following three alphanumeric codes are very commonly used for data representation.

  • American Standard Code for Information Interchange (ASCII).
  • Extended Binary Coded Decimal Interchange Code (EBCDIC).
  • Five bit Baudot Code.

ASCII code is a 7-bit code whereas EBCDIC is an 8-bit code. ASCII code is more commonly used worldwide while EBCDIC is used primarily in large IBM computers.
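For example, the 7-bit ASCII code words for a few characters can be listed directly:

```python
# 7-bit ASCII code words for a few sample characters.

for ch in ("A", "a", "7"):
    print(ch, ord(ch), format(ord(ch), "07b"))
# A 65 1000001
# a 97 1100001
# 7 55 0110111
```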

Error Codes

There are binary code techniques available to detect and correct errors during data transmission. Error-detecting codes allow the receiver to discover that an error has occurred, while error-correcting codes also make it possible to identify and correct the corrupted bits.
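As a concrete illustration of error detection (the text does not single out a particular code, so a single even-parity bit is used here as the simplest example), the sender appends one bit so that the total number of 1s is even; the receiver can then detect any single-bit error, although it cannot correct it.

```python
# Sketch: even parity, one of the simplest error-detecting codes.
# The sender appends a parity bit so the total number of 1s is even;
# the receiver can detect any single-bit error.

def add_parity(bits):
    return bits + [sum(bits) % 2]        # append the parity bit

def check_parity(codeword):
    return sum(codeword) % 2 == 0        # True if the 1s count is still even

word = [1, 0, 1, 1, 0, 0, 1]
sent = add_parity(word)                  # [1, 0, 1, 1, 0, 0, 1, 0]
print(check_parity(sent))                # True: no error detected

sent[2] ^= 1                             # flip one bit in transit
print(check_parity(sent))                # False: error detected
```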


Tuesday, February 6, 2024

Cellular Network


A cellular network is the underlying technology for mobile phones, personal communication systems, wireless networking, etc. The technology was developed for mobile radio telephones to replace high-power transmitter/receiver systems. Cellular networks use lower power, shorter range and more transmitters for data transmission.

Features of Cellular Systems

Wireless cellular systems solve the problem of spectral congestion and increase user capacity. The features of cellular systems are as follows −

  • Offer very high capacity in a limited spectrum.

  • Reuse of radio channel in different cells.

  • Enable a fixed number of channels to serve an arbitrarily large number of users by reusing the channel throughout the coverage region.

  • Communication is always between mobile and base station (not directly between mobiles).

  • Each cellular base station is allocated a group of radio channels within a small geographic area called a cell.

  • Neighboring cells are assigned different channel groups.

  • By limiting the coverage area to within the boundary of the cell, the channel groups may be reused to cover different cells.

  • Keep interference levels within tolerable limits.

  • Frequency reuse or frequency planning.

Organization of Wireless Cellular Network

A cellular network is organized into multiple low-power transmitters, each of 100 W or less.

Shape of Cells

The coverage area of a cellular network is divided into cells, each cell having its own antenna for transmitting signals. Each cell has its own frequencies. Data communication in a cell is served by its base station transmitter, receiver and control unit.

The shape of cells can be either square or hexagon −

Square

A square cell has four neighbors at distance d and four at distance √2 d.

  • Better if all adjacent antennas equidistant
  • Simplifies choosing and switching to new antenna

Hexagon

A hexagon cell shape is highly recommended for its easy coverage and calculations. It offers the following advantages −

  • Provides equidistant antennas
  • Distance from center to vertex equals length of side

Frequency Reuse

Frequency reuse is the concept of using the same radio frequencies in cells that are separated by a considerable distance within a given area, so that communication can be established with minimal interference.

Frequency reuse offers the following benefits −

  • Allows communications within cell on a given frequency
  • Limits escaping power to adjacent cells
  • Allows re-use of frequencies in nearby cells
  • Uses same frequency for multiple conversations
  • 10 to 50 frequencies per cell

For example, if N cells in a cluster share K, the total number of frequencies used in the system, then the number of frequencies available to each cell is K/N.

In Advanced Mobile Phone Service (AMPS), with K = 395 and N = 7, the average number of frequencies per cell is 395/7 ≈ 56. Here, each cell gets 56 frequencies.
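The arithmetic of the AMPS example is easy to check:

```python
# Frequencies available per cell under frequency reuse: K / N.

K = 395   # total number of frequencies in the system (AMPS example)
N = 7     # number of cells per cluster

print(K / N)    # 56.43..., i.e. about 56 frequencies per cell
print(K // N)   # 56 whole channels per cell
```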

Advantages of Cellular Networks

● It is flexible enough to use the features and functions of almost all public and private networks.

● It has increased capacity.

● It consumes less power.

● It can be distributed over a larger coverage area.

● It reduces interference from other signals.

 

Thursday, February 1, 2024

Characteristics of Wireless Channel


The most important characteristics of a wireless channel are −

  • Path loss
  • Fading
  • Interference
  • Doppler shift

In the following sections, we will discuss these channel characteristics one by one.

Path Loss

Path loss can be expressed as the ratio of the power of the transmitted signal to the power of the same signal received by the receiver, on a given path. It is a function of the propagation distance.

  • Estimation of path loss is very important for designing and deploying wireless communication networks.

  • Path loss is dependent on a number of factors such as the radio frequency used and the nature of the terrain.

  • The free space propagation model is the simplest path loss model, in which there is a direct-path signal between the transmitter and the receiver, with no atmospheric attenuation or multipath components.

In this model, the relationship between the transmitted power Pt and the received power Pr is given by

$$P_{r} = P_{t}G_{t}G_{r}\left(\frac{\lambda}{4\pi d}\right)^2$$

Where

  • Gt is the transmitter antenna gain

  • Gr is the receiver antenna gain

  • d is the distance between the transmitter and receiver

  • λ is the wavelength of the signal
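Plugging illustrative numbers into the free-space formula above (a 1 W transmitter, unity antenna gains, a 900 MHz carrier and a 1 km path; all of these values are assumptions chosen for the example, not taken from the text):

```python
# Received power from the free-space model: Pr = Pt*Gt*Gr*(lambda/(4*pi*d))^2
# All numbers below are illustrative assumptions.

import math

c = 3e8                # speed of light, m/s
f = 900e6              # carrier frequency, Hz
lam = c / f            # wavelength, m

Pt, Gt, Gr, d = 1.0, 1.0, 1.0, 1000.0    # transmit power (W), gains, distance (m)

Pr = Pt * Gt * Gr * (lam / (4 * math.pi * d)) ** 2
print(Pr)              # ~7.0e-10 W
```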

The two-path model, also called the two-ray model, is another widely used path loss model. The free space model described above assumes that there is only one single path from the transmitter to the receiver.

In reality, the signal reaches the receiver through multiple paths. The two path model tries to capture this phenomenon. The model assumes that the signal reaches the receiver through two paths, one a line-of-sight and the other the path through which the reflected wave is received.

According to the two-path model, the received power is given by

$$P_{r} = P_{t}G_{t}G_{r}\left(\frac{h_{t}h_{r}}{d^2}\right)^2$$

Where

  • Pt is the transmitted power

  • Gt is the antenna gain at the transmitter

  • Gr is the antenna gain at the receiver

  • d is the distance between the transmitter and receiver

  • ht is the height of the transmitter

  • hr is the height of the receiver
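The two-path formula can be evaluated the same way; the antenna heights and distance below are illustrative assumptions, not values from the text.

```python
# Received power from the two-path model: Pr = Pt*Gt*Gr*((ht*hr)/d^2)^2
# All numbers below are illustrative assumptions.

Pt, Gt, Gr = 1.0, 1.0, 1.0   # transmit power (W) and antenna gains
ht, hr = 30.0, 1.5           # transmitter / receiver antenna heights (m)
d = 1000.0                   # transmitter-receiver distance (m)

Pr = Pt * Gt * Gr * ((ht * hr) / d**2) ** 2
print(Pr)                    # ~2.0e-9 W
```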

Fading

Fading refers to the fluctuations in signal strength when received at the receiver. Fading can be classified into two types −

  • Fast fading/small scale fading and
  • Slow fading/large scale fading

Fast fading refers to the rapid fluctuations in the amplitude, phase or multipath delays of the received signal, due to the interference between multiple versions of the same transmitted signal arriving at the receiver at slightly different times.

The time between the reception of the first version of the signal and the last echoed signal is called delay spread. The multipath propagation of the transmitted signal, which causes fast fading, is because of the three propagation mechanisms, namely −

  • Reflection
  • Diffraction
  • Scattering

The multiple signal paths may sometimes add constructively or sometimes destructively at the receiver, causing a variation in the power level of the received signal. The received signal envelope of a fast-fading signal is said to follow a Rayleigh distribution if there is no line-of-sight path between the transmitter and the receiver.

Slow Fading

The name Slow Fading itself implies that the signal fades away slowly. The features of slow fading are as given below.

  • Slow fading occurs when objects that partially absorb the transmission lie between the transmitter and receiver.

  • Slow fading is so called because the duration of the fade may last for multiple seconds or minutes.

  • Slow fading may occur when the receiver is inside a building and the radio wave must pass through the walls of a building, or when the receiver is temporarily shielded from the transmitter by a building. The obstructing objects cause a random variation in the received signal power.

  • Slow fading may cause the received signal power to vary, though the distance between the transmitter and receiver remains the same.

  • Slow fading is also referred to as shadow fading since the objects that cause the fade, which may be large buildings or other structures, block the direct transmission path from the transmitter to the receiver.

Interference

Wireless transmissions have to counter interference from a wide variety of sources. Two main forms of interference are −

  • Adjacent channel interference and
  • Co-channel interference.

In the case of adjacent channel interference, signals in nearby frequencies have components outside their allocated ranges, and these components may interfere with ongoing transmissions in the adjacent frequencies. It can be avoided by carefully introducing guard bands between the allocated frequency ranges.

Co-channel interference, sometimes also referred to as narrow band interference, is due to other nearby systems using the same transmission frequency.

Inter-symbol interference is another type of interference, where distortion in the received signal is caused by the temporal spreading and the consequent overlapping of individual pulses in the signal.

Adaptive equalization is a commonly used technique for combating inter-symbol interference. It involves gathering the dispersed symbol energy back into its original time interval. Complex digital processing algorithms are used in the equalization process.

Wednesday, January 31, 2024

Multiple Access


In any cellular system or cellular technology, it is necessary to have a scheme that enables multiple users to gain access to it and use it simultaneously. As cellular technology has progressed, different multiple access schemes have been used. They form the very core of the way in which the radio technology of the cellular system works.

There are four main multiple access schemes that are used in cellular systems ranging from the very first analogue cellular technologies to those cellular technologies that are being developed for use in the future. The multiple access schemes are known as FDMA, TDMA, CDMA and OFDMA.

Requirements for a multiple access scheme

In any cellular system it is necessary to have a scheme whereby the system can handle multiple users at any given time. There are many ways of doing this, and as cellular technology has advanced, different techniques have been used.

There are a number of requirements that any multiple access scheme must be able to meet:

  • Ability to handle several users without mutual interference.
  • Ability to maximise spectrum efficiency.
  • Robustness, enabling ease of handover between cells.

FDMA - Frequency Division Multiple Access

FDMA is the most straightforward of the multiple access schemes that have been used. As a subscriber comes onto the system, or swaps from one cell to the next, the network allocates a channel or frequency to each one. In this way the different subscribers are allocated a different slot and access to the network. As different frequencies are used, the system is naturally termed Frequency Division Multiple Access. This scheme was used by all analogue systems.

 

TDMA - Time Division Multiple Access

The second system came about with the transition to digital schemes for cellular technology. Here digital data could be split up in time and sent as bursts when required. As speech was digitised, it could be sent in short data bursts; any small delay caused by sending the data in bursts would be short and not noticed. In this way it became possible to organise the system so that a given number of slots were available on a given transmission. Each subscriber would then be allocated a different time slot in which they could transmit or receive data. As different time slots are used for each subscriber to gain access to the system, it is known as time division multiple access. Obviously this only allows a certain number of users access to the system. Beyond this another channel may be used, so systems that use TDMA may also have elements of FDMA operation as well.

CDMA - Code Division Multiple Access

CDMA uses one of the aspects associated with the use of direct sequence spread spectrum (DSSS). When extracting the required data from a DSSS signal, it is necessary to have the correct spreading or chip code, and all other data from sources using different orthogonal chip codes is rejected. It is therefore possible to allocate different users different codes, and use this as the means by which different users are given access to the system.

The scheme has been likened to being in a room filled with people all speaking different languages. Even though the noise level is very high, it is still possible to understand someone speaking in your own language. With CDMA, different spreading or chip codes are used. When generating a direct sequence spread spectrum signal, the data to be transmitted is multiplied with the spreading or chip code. This widens the spectrum of the signal, but it can only be decoded in the receiver if it is again multiplied with the same spreading code. All signals that use different spreading codes are not seen and are discarded in the process. Thus, in the presence of a variety of signals, it is possible to receive only the required one.

In this way the base station allocates different codes to different users and when it receives the signal it will use one code to receive the signal from one mobile, and another spreading code to receive the signal from a second mobile. In this way the same frequency channel can be used to serve a number of different mobiles.
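The sketch below illustrates the idea with two users and two length-4 orthogonal (Walsh) chip codes; the codes and data bits are made up for the example. Each user's bit is spread by its own code, the spread signals add on the channel, and correlating the sum with a particular user's code recovers only that user's bit.

```python
# Sketch: CDMA spreading and despreading with two orthogonal (Walsh) codes.
# Data bits are mapped to +1/-1; each user's bit is multiplied by its chip
# code, the signals add on the channel, and correlating with a user's own
# code recovers only that user's bit.

code_a = [1,  1, 1,  1]        # orthogonal length-4 chip codes
code_b = [1, -1, 1, -1]

bit_a, bit_b = +1, -1          # one data bit per user (+1 = '1', -1 = '0')

tx_a = [bit_a * c for c in code_a]              # spread each user's bit
tx_b = [bit_b * c for c in code_b]
channel = [a + b for a, b in zip(tx_a, tx_b)]   # signals superpose on air

def despread(signal, code):
    return sum(s * c for s, c in zip(signal, code)) / len(code)

print(despread(channel, code_a))   # +1.0 -> user A's bit
print(despread(channel, code_b))   # -1.0 -> user B's bit
```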

OFDMA - Orthogonal Frequency Division Multiple Access

OFDMA is the form of multiple access scheme that is being considered for the fourth generation cellular technologies along with the evolutions for the third generation cellular systems (LTE for UMTS / W-CDMA and UMB for CDMA2000).

As the name implies, OFDMA is based around OFDM. This is a technology that utilises a large number of closely spaced carriers.

To utilise OFDM as a multiple access scheme for cellular technology, two different methods are used, one for the uplink and one for the downlink. In the downlink, the mobile receives the whole signal transmitted by the base station and extracts the data destined for the particular mobile. In the uplink, one or more carriers are allocated to each handset dependent upon the data to be transmitted, etc. In this way the cellular network is able to control how the data is to be sent and received.

 

Wireless Application Protocol(WAP)


Wireless Application Protocol or WAP is a programming model or an application environment and set of communication protocols based on the concept of the World Wide Web (WWW), and its hierarchical design is very similar to the TCP/IP protocol stack design. The most prominent features of Wireless Application Protocol or WAP in mobile computing are:

  • WAP is a De-Facto standard or a protocol designed for micro-browsers, and it enables the mobile devices to interact, exchange and transmit information over the Internet.
  • WAP is based upon the concept of the World Wide Web (WWW), and the backend functioning also remains similar to WWW, but it uses the markup language Wireless Markup Language (WML) to access WAP services, while WWW uses HTML as a markup language. WML is defined as an XML 1.0 application.
  • In 1998, some giant IT companies such as Ericsson, Motorola, Nokia and Unwired Planet founded the WAP Forum to standardize the various wireless technologies via protocols.
  • After developing the WAP model, it was accepted as a wireless protocol globally capable of working on multiple wireless technologies such as mobile, printers, pagers, etc.
  • In 2002, by the joint efforts of the various members of the WAP Forum, it was merged with various other forums of the industry and formed an alliance known as Open Mobile Alliance (OMA).
  • WAP was opted as a De-Facto standard because of its ability to create web applications for mobile devices.

Working of Wireless Application Protocol or WAP Model

The following steps define the working of Wireless Application Protocol or WAP Model:

  • The WAP model consists of 3 levels known as Client, Gateway and Origin Server.
  • When a user opens the browser in his/her mobile device and selects a website that he/she wants to view, the mobile device sends the URL encoded request via a network to a WAP gateway using WAP protocol.
  • The request he/she sends via the mobile to the WAP gateway is called an encoded request.
  • The encoded request is translated by the WAP gateway and then forwarded as a conventional HTTP URL request over the Internet.
  • When the request reaches a specified Web server, the server processes the request just as it would handle any other request and sends the response back to the mobile device through the WAP gateway.
  • Finally, the WML response can be seen in the browser of the mobile user.

WAP Protocol Stack

It specifies the different communications and data transmission layers used in the WAP model:

Application Layer: This layer consists of the Wireless Application Environment (WAE), mobile device specifications, and content development programming languages, i.e., WML.

Session Layer: The session layer consists of the Wireless Session Protocol (WSP). It is responsible for fast connection suspension and reconnection.

Transaction Layer: The transaction layer consists of the Wireless Transaction Protocol (WTP), which runs on top of UDP (User Datagram Protocol), a part of the TCP/IP suite, and offers transaction support.

Security Layer: It contains Wireless Transport Layer Security (WTLS) and is responsible for data integrity, privacy and authentication during data transmission.

Transport Layer: This layer consists of Wireless Datagram Protocol (WDP). It provides a consistent data format to higher layers of the WAP protocol stack.

Advantages of Wireless Application Protocol (WAP)

Following is a list of some advantages of Wireless Application Protocol or WAP:

  • WAP is a very fast-paced technology.
  • It is an open-source technology and completely free of cost.
  • It can be implemented on multiple platforms.
  • It is independent of network standards.
  • It provides higher controlling options.
  • It is implemented close to the Internet model.
  • By using WAP, you can send/receive real-time data.
  • Nowadays, most modern mobile phones and devices support WAP.

Disadvantages of Wireless Application Protocol (WAP)

Following is a list of some disadvantages of Wireless Application Protocol or WAP:

  • The connection speed in WAP is slow, and there is limited availability also.
  • In some areas, the ability to connect to the Internet is very sparse, and in some other areas, Internet access is entirely unavailable.
  • It is less secure.
  • WAP provides a small User interface (UI).

Applications of Wireless Application Protocol (WAP)

The following are some most used applications of Wireless Application Protocol or WAP:

  • WAP facilitates you to access the Internet from your mobile devices.
  • You can play games on mobile devices over wireless networks.
  • It facilitates you to access E-mails over the mobile Internet.
  • Mobile handsets can be used to access timesheets and fill in expense claims.
  • Online mobile banking is very popular nowadays.
  • It can also be used in multiple Internet-based services such as geographical location, Weather forecasting, Flight information, Movie & cinema information, Traffic updates etc. All are possible due to WAP technology.

What is computer security?

Computer security is basically the protection of computer systems and information from harm, theft, and unauthorized use. It is the process ...