Wednesday, November 13, 2024

What is Network Security Critical Necessity?

Information and efficient communication are two of the most critical strategic issues for the success of every business. With the advent of digital means of communication and storage, more and more companies have shifted to using information networks to communicate, store data, and access resources. There are different types and levels of network infrastructure that can be used for running a business.

It may be said that in the modern world nothing has had a larger effect on businesses than networked computers. But networking brings with it security threats which, if mitigated, allow the advantages of networking to outweigh the dangers.

Role of Network in Business

Nowadays, computer networks are viewed as a resource by almost all businesses. This resource enables them to gather, analyze, organize, and disseminate information that is essential to their profitability. Most businesses have installed networks to stay competitive.

The most obvious role of computer networking is that organizations can store virtually any kind of data at a central location and retrieve it at the desired place through the network.

Benefits of Networks

Computer networking enables people to share information and ideas easily, so they can work more efficiently and productively. Networks improve activities such as purchasing, selling, and customer service. Networking makes traditional business processes more efficient, more manageable, and less expensive.

The main benefits a business draws from computer networks are −

  • Resource Sharing − A business can reduce the amount of money spent on hardware by sharing components and peripherals connected to the network.
  • Streamlined Business Processes − Computer networks enable businesses to streamline their internal business processes.
  • Collaboration Among Departments − When two or more departments of a business connect selected portions of their networks, they can streamline business processes that normally take inordinate amounts of time and effort and often pose difficulties for achieving higher productivity.
  • Improved Customer Relations − Networks provide customers with many benefits such as convenience in doing business, speedy service response, and so on.

There are many other business-specific benefits that accrue from networking. Such benefits have made it essential for all types of businesses to adopt computer networking.

Necessity for Network Security

Threats to wired and wireless networks have increased significantly due to advancements in modern technology and the growing capacity of computer networks. The overwhelming use of the Internet in today's world for numerous business transactions has posed challenges of data theft and other attacks on business intellectual property.

In the present era, most business is conducted through network applications, and hence all networks are at risk of being attacked. The most common security threats to a business network are data interception and theft, and identity theft.

Network security is a specialized field that deals with thwarting such threats and protecting the usability, reliability, integrity, and safety of a business's computer networking infrastructure.

Importance of Network Security for Business

  • Protecting Business Assets − This is the primary goal of network security. Assets here mean the information stored in the computer networks. Information is as important and valuable as any other tangible asset of the company. Network security is concerned with the integrity, protection, and safe access of confidential information.
  • Compliance with Regulatory Requirements − Network security measures help businesses comply with government and industry-specific regulations about data protection.
  • Secure Collaborative Working − Network security encourages co-worker collaboration and facilitates communication with clients and suppliers by providing them secure network access. It boosts client and consumer confidence that their sensitive information is protected.
  • Reduced Risk − Adoption of network security reduces the impact of security breaches, including legal action that can bankrupt small businesses.
  • Gaining Competitive Advantage − Developing an effective security system for networks gives a company a competitive edge. In the arena of Internet financial services and e-commerce, network security assumes prime importance.

Tuesday, November 12, 2024

Firewall

Monday, November 11, 2024

What is Access Control


Access control is a method of restricting access to sensitive data. Only those that have had their identity verified can access company data through an access control gateway.

What are the components of access control?

At a high level, access control is about restricting access to a resource. Any access control system, whether physical or logical, has five main components:

  1. Authentication: The act of proving an assertion, such as the identity of a person or computer user. It might involve validating personal identity documents, verifying the authenticity of a website with a digital certificate, or checking login credentials against stored details. 
  2. Authorization: The function of specifying access rights or privileges to resources. For example, human resources staff are normally authorized to access employee records and this policy is usually formalized as access control rules in a computer system. 
  3. Access: Once authenticated and authorized, the person or computer can access the resource.
  4. Manage: Managing an access control system includes adding and removing authentication and authorization of users or systems. Some systems will sync with G Suite or Azure Active Directory, streamlining the management process.
  5. Audit: Frequently used as part of access control to enforce the principle of least privilege. Over time, users can end up with access they no longer need, e.g. when they change roles. Regular audits minimize this risk.  

How does access control work?

Access control can be split into two groups designed to improve physical security or cybersecurity:

  • Physical access control: limits access to campuses, buildings and other physical assets, e.g. a proximity card to unlock a door.
  • Logical access control: limits access to computers, networks, files and other sensitive data, e.g. a username and password.

For example, an organization may employ an electronic control system that relies on user credentials, access card readers, intercom, auditing and reporting to track which employees have access and have accessed a restricted data center. This system may incorporate an access control panel that can restrict entry to individual rooms and buildings, as well as sound alarms, initiate lockdown procedures and prevent unauthorized access. 

This access control system could authenticate the person's identity with biometrics and check if they are authorized by checking against an access control policy or with a key fob, password or personal identification number (PIN) entered on a keypad. 

Another access control solution may employ multi factor authentication, an example of a defense in depth security system, where a person is required to know something (a password), be something (biometrics) and have something (a two-factor authentication code from smartphone mobile apps). 

In general, access control software works by identifying an individual (or computer), verifying they are who they claim to be, authorizing they have the required access level and then storing their actions against a username, IP address or other audit system to help with digital forensics if needed.
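To make that flow concrete, here is a minimal, illustrative Python sketch of the identify → authenticate → authorize → audit sequence. The user store, role table, permission names, and hashing scheme are all made-up placeholders (a real system would use a directory service and salted password hashing), so treat it as a sketch of the idea rather than a production design.

    import hashlib
    import time

    # Illustrative in-memory stores; real systems use a directory service and
    # salted password hashing rather than a bare SHA-256 digest.
    USERS = {"alice": hashlib.sha256(b"s3cret").hexdigest()}
    ROLES = {"alice": {"hr_records:read"}}
    AUDIT_LOG = []

    def authenticate(username, password):
        """Verify the claimed identity against the stored credential."""
        stored = USERS.get(username)
        return stored is not None and stored == hashlib.sha256(password.encode()).hexdigest()

    def authorize(username, permission):
        """Check whether the authenticated user holds the required permission."""
        return permission in ROLES.get(username, set())

    def access(username, password, permission):
        """Authenticate, authorize, then record the attempt for later audit."""
        granted = authenticate(username, password) and authorize(username, permission)
        AUDIT_LOG.append((time.time(), username, permission, "granted" if granted else "denied"))
        return granted

    print(access("alice", "s3cret", "hr_records:read"))          # True, and the attempt is logged
    print(access("alice", "wrong-password", "hr_records:read"))  # False, also logged

Every attempt, granted or denied, ends up in the audit log, which is what later makes least-privilege reviews possible.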

Why is access control important?

Access control minimizes the risk of unauthorized access to physical and computer systems, forming a foundational part of information security, data security and network security.

Depending on your organization, access control may be a regulatory compliance requirement:

  • PCI DSS: Requirement 9 mandates organizations to restrict physical access to their buildings for onsite personnel, visitors and media, as well as having adequate logical access controls to mitigate the cybersecurity risk of malicious individuals stealing sensitive data. Requirement 10 requires organizations to employ security solutions to track and monitor their systems in an auditable manner. 
  • HIPAA: The HIPAA Security Rule requires Covered Entities and their business associates to prevent the unauthorized disclosure of protected health information (PHI); this includes the use of physical and electronic access control.  
  • SOC 2: The auditing procedure enforces that third-party vendors and service providers manage sensitive data in a way that prevents data breaches, protecting employee and customer privacy. Companies that wish to gain SOC 2 assurance must use a form of access control with two-factor authentication and data encryption. SOC 2 assurance is particularly important for organizations that process personally identifiable information (PII).
  • ISO 27001: An information security standard that requires management to systematically examine an organization's attack vectors and audit all cyber threats and vulnerabilities. It also requires a comprehensive set of risk mitigation or transfer protocols to ensure continuous information security and business continuity. 

What are the types of access control?

The main types of access control are:

  • Attribute-based access control (ABAC): Access management systems where access is granted not on the rights of a user after authentication but based on attributes. The end user has to prove so-called claims about their attributes to the access control engine. An attribute-based access control policy specifies which claims need to be satisfied to grant access to the resource. For example, the claim may be that the user's age is over 18, and any user who can prove this claim will be granted access. In ABAC, it's not always necessary to authenticate or identify the user, just that they have the attribute (see the sketch after this list). 
  • Discretionary access control (DAC): Access management where owners or administrators of the protected system, data or resource set the policies defining who or what is authorized to access the resource. These systems rely on administrators to limit the propagation of access rights. DAC systems are criticized for their lack of centralized control. 
  • Mandatory access control (MAC): Access rights are regulated by a central authority based on multiple levels of security. MAC is common in government and military environments where classifications are assigned to system resources and the operating system or security kernel will grant or deny access based on the user's or the device's security clearance. It is difficult to manage but its use is justified when used to protect highly sensitive data. 
  • Role-Based Access Control (RBAC): In RBAC, an access system determines who can access a resource rather than an owner. RBAC is common in commercial and military systems, where multi-level security requirements may exist. RBAC differs from DAC in that DAC allows users to control access while in RBAC, access is controlled at the system level, outside of user control. RBAC can be distinguished from MAC primarily by the way it handles permissions. MAC controls read and write permissions based on a user/device's clearance level while RBAC controls collections of permissions that may include complex operations such as credit card transactions or may be as simple as read or write. Commonly, RBAC is used to restrict access based on business functions, e.g. engineers, human resources and marketing have access to different SaaS products.
  • Rule-based access control: A security model where an administrator defines rules that govern access to the resource. These rules may be based on conditions, such as time of day and location. It's not uncommon to have some form of rule-based access control and role-based access control working together.
  • Break-Glass access control: Traditional access control has the purpose of restricting access, which is why most access control models follow the principle of least privilege and the default deny principle. This behavior may conflict with the operations of a system. In certain situations, humans are willing to take the risk that might be involved in violating an access control policy, if the potential benefit of real-time access outweighs the risks. This need is visible in healthcare, where the inability to access patient records could cause death. 
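As a small illustration of the ABAC claim check described in the first item above, the snippet below grants access when the supplied claims satisfy an attribute policy, without identifying the user at all. The policy structure and attribute names are invented for the example.

    # Hedged ABAC sketch: access is granted when the presented claims satisfy
    # the policy; the user need not be identified. Attribute names are illustrative.
    AGE_RESTRICTED_POLICY = {"age": lambda value: value >= 18}

    def abac_allowed(claims, policy):
        """Return True only if every attribute required by the policy is present and satisfied."""
        return all(attr in claims and check(claims[attr]) for attr, check in policy.items())

    print(abac_allowed({"age": 21}, AGE_RESTRICTED_POLICY))  # True
    print(abac_allowed({"age": 16}, AGE_RESTRICTED_POLICY))  # False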

 


Thursday, October 17, 2024

Data Link Layer


Data Link Layer provides the functional and procedural means to transfer data between network entities and to detect and possibly correct errors that may occur in the physical layer. Originally, this layer was intended for point-to-point and point-to-multipoint media, characteristic of wide area media in the telephone system. Local area network architecture, which included broadcast-capable multi-access media, was developed independently of the ISO work in IEEE Project 802. IEEE work assumed sub-layering and management functions not required for WAN use.

In modern practice, only error detection, not flow control using sliding window, is present in data link protocols such as Point-to-Point Protocol (PPP), and, on local area networks, the IEEE 802.2 LLC layer is not used for most protocols on the Ethernet; on other local area networks, its flow control and acknowledgment mechanisms are rarely used. Sliding window flow control and acknowledgment is used at the transport layer by protocols such as TCP, but is still used in niches where X.25 offers performance advantages. The ITU-T G.hn standard, which provides high-speed local area networking over existing wires (power lines, phone lines, and coaxial cables), includes a complete data link layer which provides both error correction and flow control by means of a selective repeat sliding window protocol.

Both WAN and LAN services arrange bits from the physical layer into logical sequences called frames. Not all physical layer bits necessarily go into frames, as some of these bits are purely intended for physical layer functions. For example, every fifth bit of the FDDI bit stream is not used by the layer.

Services provided by Data Link Layer

The Data Link Layer is the second layer of the seven-layer Open Systems Interconnection (OSI) reference model of computer networking and lies just above the Physical Layer.

This layer provides data reliability and provides various tools to establish, maintain, and release data link connections between network nodes. It is responsible for receiving data bits from the Physical Layer and then converting these bits into groups, known as data link frames, so that they can be transmitted further. It is also responsible for handling errors that may arise during the transmission of bits.

Service Provided to Network Layer :
The important and essential function of the Data Link Layer is to provide an interface to the Network Layer. The Network Layer is the third layer of the seven-layer OSI reference model and is present just above the Data Link Layer.

The main aim of the Data Link Layer is to transmit the data frames it has received to the destination machine so that these frames can be handed over to the network layer of the destination machine. At the network layer, these frames are addressed and routed.

1. Actual Communication :
In this communication, a physical medium is present through which the Data Link Layer transmits data frames. The actual path is Network Layer -> Data Link Layer -> Physical Layer on the sending machine, then across the physical media, and then Physical Layer -> Data Link Layer -> Network Layer on the receiving machine.

2. Virtual Communication :
In this communication, no physical medium is present for the Data Link Layer to transmit data. It can only be visualized and imagined that the two Data Link Layers are communicating with each other using a data link protocol.

Types of Services provided by Data Link Layer :


The Data Link Layer generally offers three types of services, as given below :

1. Unacknowledged Connectionless Service
2. Acknowledged Connectionless Service
3. Acknowledged Connection-Oriented Service 
  1. Unacknowledged Connectionless Service :
    Unacknowledged connectionless service provides datagram-style delivery without any error or flow control. In this service, the source machine transmits independent frames to the destination machine without requiring the destination machine to acknowledge these frames.

    This service is called connectionless because no connection is established between the source (sending) machine and the destination (receiving) machine before data transfer, nor is one released after data transfer.

    In the Data Link Layer, if a frame is lost due to noise, no attempt is made to detect the loss or to recover from it. This simply means that there is no error or flow control. An example is Ethernet.

  2. Acknowledged Connectionless Service :
    This service provides acknowledged connectionless delivery, i.e. packet delivery is acknowledged, with the help of the stop-and-wait protocol (a small sketch follows this list).

    In this service, each frame transmitted by the Data Link Layer is acknowledged individually, so the sender knows whether or not the transmitted data frames were received safely. No logical connection is established, and each transmitted frame is acknowledged individually.

    This mode provides a means by which the user of the data link can send data and request the return of data at the same time. It also uses a timer: if a particular time period passes after a frame is sent without an acknowledgment being received, the frame is retransmitted.

    This service is more reliable than the unacknowledged connectionless service. It is generally useful over unreliable channels, such as wireless systems, Wi-Fi services, etc.

  3. Acknowledged Connection-Oriented Service :
    In this type of service, a connection is first established between the sender and receiver (source and destination) before any data is transferred.

    Then data is transmitted over this established connection. In this service, each transmitted frame is given an individual number first, so as to guarantee that each frame is received exactly once and in the proper order.
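Below is a minimal simulated sketch of the stop-and-wait behaviour used by the acknowledged connectionless service (item 2 above). The lossy channel, the loss rate, and the retry limit are invented stand-ins; a real data link implementation would of course work with frames and timers on an actual medium.

    import random

    ACK_LOSS_RATE = 0.2   # illustrative probability that a frame or its ACK is lost

    def unreliable_send(frame):
        """Stand-in for the channel: returns True if an acknowledgment came back in time."""
        return random.random() > ACK_LOSS_RATE

    def stop_and_wait(frames, max_retries=8):
        for seq, frame in enumerate(frames):
            for attempt in range(max_retries):
                if unreliable_send(frame):
                    print(f"frame {seq} acknowledged after {attempt + 1} attempt(s)")
                    break          # only now move on to the next frame
                # no ACK before the (simulated) timeout: retransmit the same frame
            else:
                raise RuntimeError(f"frame {seq} gave up after {max_retries} attempts")

    stop_and_wait(["frame-A", "frame-B", "frame-C"])

The key property is that the sender never sends frame n+1 until frame n has been acknowledged, retransmitting after each timeout.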

 

 

 

Monday, September 30, 2024

Network Layer



  • The Network Layer is the third layer of the OSI model.
  • It handles the service requests from the transport layer and further forwards the service request to the data link layer.
  • The network layer translates the logical addresses into physical addresses
  • It determines the route from the source to the destination and also manages the traffic problems such as switching, routing and controls the congestion of data packets.
  • The main role of the network layer is to move the packets from sending host to the receiving host.

The main functions performed by the network layer are:

  • Routing: When a packet arrives at a router's input link, the router moves it to the appropriate output link. For example, a packet sent from source S1 to destination S2 must be forwarded at router R1 to the next router on the path to S2.
  • Logical Addressing: The data link layer implements the physical addressing and network layer implements the logical addressing. Logical addressing is also used to distinguish between source and destination system. The network layer adds a header to the packet which includes the logical addresses of both the sender and the receiver.
  • Internetworking: This is the main role of the network layer that it provides the logical connection between different types of networks.
  • Fragmentation: Fragmentation is the process of breaking packets into smaller individual data units so that they can travel through networks that accept different maximum packet sizes.

Forwarding & Routing

In Network layer, a router is used to forward the packets. Every router has a forwarding table. A router forwards a packet by examining a packet's header field and then using the header field value to index into the forwarding table. The value stored in the forwarding table corresponding to the header field value indicates the router's outgoing interface link to which the packet is to be forwarded.

For example, a packet with a header field value of 0111 arrives at a router; the router indexes this header value into the forwarding table, which indicates that the output link interface is 2. The router then forwards the packet to interface 2. The routing algorithm determines the values that are inserted in the forwarding table. The routing algorithm can be centralized or decentralized.
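A toy version of that lookup might look like the following; the forwarding table entries are hypothetical values chosen to match the 0111 example above, not anything derived from a real router.

    # Hypothetical forwarding table: header-field value -> output link interface.
    FORWARDING_TABLE = {
        "0100": 3,
        "0110": 2,
        "0111": 2,   # the example value used in the text
        "1001": 1,
    }

    def outgoing_interface(header_value):
        """Return the output link for a packet header value, or 0 as a default route."""
        return FORWARDING_TABLE.get(header_value, 0)

    print(outgoing_interface("0111"))   # -> 2, so the packet is forwarded on interface 2

The routing algorithm (centralized or decentralized) is what would populate a table like this; the forwarding step itself is just the lookup.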

Services Provided by the Network Layer

  • Guaranteed delivery: This layer provides the service which guarantees that the packet will arrive at its destination.
  • Guaranteed delivery with bounded delay: This service guarantees that the packet will be delivered within a specified host-to-host delay bound.
  • In-Order packets: This service ensures that packets arrive at the destination in the order in which they were sent.
  • Guaranteed max jitter: This service ensures that the amount of time taken between two successive transmissions at the sender is equal to the time between their receipt at the destination.
  • Security services: The network layer provides security by using a session key between the source and destination host. The network layer in the source host encrypts the payloads of datagrams being sent to the destination host. The network layer in the destination host would then decrypt the payload. In such a way, the network layer maintains the data integrity and source authentication services.

Design Issues with Network Layer

  • A key design issue is determining how packets are routed from source to destination. Routes can be based on static tables that are wired into the network and rarely changed. They can also be highly dynamic, being determined anew for each packet, to reflect the current network load.
  • If too many packets are present in the subnet at the same time, they will get into one another's way, forming bottlenecks. The control of such congestion also belongs to the network layer.
  • Moreover, the quality of service provided(delay, transmit time, jitter, etc) is also a network layer issue.
  • When a packet has to travel from one network to another to get to its destination, many problems can arise such as:
    • The addressing used by the second network may be different from the first one.
    • The second one may not accept the packet at all because it is too large.
    • The protocols may differ, and so on.
  • It is up to the network layer to overcome all these problems to allow heterogeneous networks to be interconnected.

 

Network Layer Protocols

TCP/IP supports the following protocols:

ARP

  • ARP stands for Address Resolution Protocol.
  • It is used to associate an IP address with the MAC address.
  • Each device on the network is recognized by the MAC address imprinted on the NIC. Therefore, we can say that devices need the MAC address for communication on a local area network. A MAC address can change; for example, if the NIC on a particular machine fails and is replaced, the MAC address changes but the IP address does not. ARP is used to find the MAC address of a node when its internet address is known.

How ARP works

If a host wants to know the physical address of another host on its network, it sends an ARP query packet that includes the IP address and broadcasts it over the network. Every host on the network receives and processes the ARP packet, but only the intended recipient recognizes its IP address and sends back its physical address. The host holding the datagram adds the received physical address both to its cache memory and to the datagram header, and then sends the datagram.

Steps taken by ARP protocol

If a device wants to communicate with another device, the following steps are taken by the device:

  • The device will first look at its internal list, called the ARP cache, to check whether the IP address already has a matching MAC address. You can inspect the ARP cache at the command prompt by using the command arp -a.
  • If no matching entry is found in the ARP cache, the device broadcasts a message to the entire network asking each device for a matching MAC address.
  • The device that has the matching IP address will then respond back to the sender with its MAC address
  • Once the MAC address is received by the device, then the communication can take place between two devices.
  • If the device receives the MAC address, then the MAC address gets stored in the ARP cache. We can check the ARP cache in command prompt by using a command arp -a.

There are two types of ARP entries:

  • Dynamic entry: It is an entry which is created automatically when the sender broadcast its message to the entire network. Dynamic entries are not permanent, and they are removed periodically.
  • Static entry: It is an entry where someone manually enters the IP to MAC address association by using the ARP command utility.
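The following sketch imitates that lookup-then-broadcast behaviour with a Python dictionary standing in for the ARP cache; the addresses, the entry lifetime, and the broadcast stand-in are all illustrative, not a real ARP implementation.

    import time

    # Hypothetical ARP-style cache: IP -> (MAC, entry type, creation time).
    ARP_CACHE = {
        "192.168.1.1": ("aa:bb:cc:dd:ee:01", "static", None),   # manually added static entry
    }
    DYNAMIC_ENTRY_LIFETIME = 120   # seconds; dynamic entries are aged out periodically

    def broadcast_arp_query(ip):
        """Stand-in for the real broadcast; pretend the owner of the IP replied."""
        return "aa:bb:cc:dd:ee:99"

    def resolve(ip):
        entry = ARP_CACHE.get(ip)
        if entry:
            mac, kind, created = entry
            if kind == "static" or time.time() - created < DYNAMIC_ENTRY_LIFETIME:
                return mac                         # cache hit
            del ARP_CACHE[ip]                      # expired dynamic entry
        mac = broadcast_arp_query(ip)              # ask the whole network
        ARP_CACHE[ip] = (mac, "dynamic", time.time())
        return mac

    print(resolve("192.168.1.50"))   # resolved by broadcast, then cached as a dynamic entry
    print(resolve("192.168.1.1"))    # served from the static entry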

RARP

  • RARP stands for Reverse Address Resolution Protocol.
  • If a host wants to know its IP address, it broadcasts a RARP query packet that contains its physical address to the entire network. A RARP server on the network recognizes the RARP packet and responds with the host's IP address.
  • The protocol which is used to obtain the IP address from a server is known as Reverse Address Resolution Protocol.
  • The message format of the RARP protocol is similar to the ARP protocol.
  • Like ARP frame, RARP frame is sent from one machine to another encapsulated in the data portion of a frame.

ICMP

  • ICMP stands for Internet Control Message Protocol.
  • The ICMP is a network layer protocol used by hosts and routers to send the notifications of IP datagram problems back to the sender.
  • ICMP uses echo test/reply to check whether the destination is reachable and responding (a small echo sketch follows this list).
  • ICMP handles both control and error messages, but its main function is to report errors, not to correct them.
  • An IP datagram contains the addresses of both source and destination, but it does not know the address of the previous router through which it has been passed. Due to this reason, ICMP can only send the messages to the source, but not to the immediate routers.
  • ICMP protocol communicates the error messages to the sender. ICMP messages cause the errors to be returned back to the user processes.
  • ICMP messages are transmitted within IP datagram.
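As a rough illustration of the echo test/reply mentioned above, the sketch below builds a single ICMP echo request by hand and reports whether a matching echo reply came back. It is a simplification (IPv4 only, one packet, no sequence tracking) and it needs root/administrator privileges because it opens a raw socket.

    import os
    import socket
    import struct
    import time

    def checksum(data):
        """Standard Internet checksum over 16-bit words."""
        if len(data) % 2:
            data += b"\x00"
        total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
        total = (total >> 16) + (total & 0xFFFF)
        total += total >> 16
        return ~total & 0xFFFF

    def ping_once(host, timeout=2.0):
        dest = socket.gethostbyname(host)
        sock = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.getprotobyname("icmp"))
        sock.settimeout(timeout)
        ident = os.getpid() & 0xFFFF
        payload = struct.pack("!d", time.time())
        # ICMP echo request header: type 8, code 0, checksum, identifier, sequence number.
        header = struct.pack("!BBHHH", 8, 0, 0, ident, 1)
        header = struct.pack("!BBHHH", 8, 0, checksum(header + payload), ident, 1)
        try:
            sock.sendto(header + payload, (dest, 0))
            while True:
                data, _ = sock.recvfrom(1024)
                # Skip the 20-byte IP header, then read the ICMP type and identifier.
                icmp_type, _, _, reply_ident, _ = struct.unpack("!BBHHH", data[20:28])
                if icmp_type == 0 and reply_ident == ident:
                    return True    # type 0 = echo reply matching our identifier
        except socket.timeout:
            return False
        finally:
            sock.close()

    print(ping_once("127.0.0.1"))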

Error Reporting

ICMP protocol reports the error messages to the sender.

Five types of errors are handled by the ICMP protocol:

  • Destination unreachable
  • Source Quench
  • Time Exceeded
  • Parameter problems
  • Redirection

    Destination unreachable: A "Destination Unreachable" message is sent from the receiver to the sender when the destination cannot be reached and the packet has been discarded.

    Source Quench: The purpose of the source quench message is congestion control. The message is sent from the congested router to the source host to ask it to reduce its transmission rate. ICMP takes the source IP of the discarded packet and sends a source quench message in an IP datagram to inform that source host to reduce its transmission rate. The source host then reduces the transmission rate so that the router is relieved from congestion.

    Time Exceeded: Time Exceeded is also known as "Time-To-Live". It is a parameter that defines how long a packet should live before it would be discarded.

There are two ways when Time Exceeded message can be generated:

Sometimes a packet is discarded due to a bad routing implementation, which causes a looping issue and network congestion. Because of the looping, the value of the TTL keeps decrementing, and when it reaches zero the router discards the datagram. When the datagram is discarded, the router sends a Time Exceeded message to the source host.

When the destination host does not receive all the fragments within a certain time limit, the fragments that did arrive are discarded, and the destination host sends a Time Exceeded message to the source host.

  • Parameter problems: When a router or host discovers any missing value in the IP datagram, the router discards the datagram, and the "parameter problem" message is sent back to the source host.
  • Redirection: A redirection message is generated when a host has only a small routing table with a limited number of entries and, as a result, sends a datagram to the wrong router. The router that receives the datagram forwards it to the correct router and also sends a "Redirection" message to the host so that it can update its routing table.

IGMP

  • IGMP stands for Internet Group Message Protocol.
  • The IP protocol supports two types of communication:
    • Unicasting: It is a communication between one sender and one receiver. Therefore, we can say that it is one-to-one communication.
    • Multicasting: Sometimes the sender wants to send the same message to a large number of receivers simultaneously. This process is known as multicasting which has one-to-many communication.
  • The IGMP protocol is used by the hosts and router to support multicasting.
  • The IGMP protocol is used by the hosts and router to identify the hosts in a LAN that are the members of a group.
  • IGMP is a part of the IP layer, and IGMP has a fixed-size message.

IGMP Messages

  • Membership Query message
    • This message is sent by a router to all hosts on a local area network to determine the set of all the multicast groups that have been joined by the host.
    • It also determines whether a specific multicast group has been joined by the hosts on a attached interface.
    • The group address in the query is zero since the router expects one response from a host for every group that contains one or more members on that host.
  • Membership Report message
    • The host responds to the membership query message with a membership report message.
    • Membership report messages can also be generated by the host when a host wants to join the multicast group without waiting for a membership query message from the router.
    • Membership report messages are received by a router as well as all the hosts on an attached interface.
    • Each membership report message includes the multicast address of a single group that the host wants to join.
    • IGMP protocol does not care which host has joined the group or how many hosts are present in a single group. It only cares whether one or more attached hosts belong to a single multicast group.
    • The Membership Query message sent by a router also includes a "Maximum Response Time". After receiving a membership query message and before sending the membership report message, the host waits for a random amount of time between 0 and the maximum response time. If a host observes that some other attached host has already sent a "Membership Report message", then it discards its own report, as it knows that the attached router already knows that one or more hosts have joined that multicast group. This process is known as feedback suppression. It provides a performance optimization by avoiding the unnecessary transmission of "Membership Report messages".
  • Leave Report
    When the host does not send the "Membership Report message", it means that the host has left the group. The host knows that there are no members in the group, so even when it receives the next query, it would not report the group.

Internetworking

Internetworking is a combination of two words, inter and networking, which implies a connection between totally different nodes or segments. This connection is established through intermediary devices such as routers or gateways. The original term for an internetwork was catenet. The interconnection is often among or between public, private, commercial, industrial, or governmental networks. Thus, an internetwork is a collection of individual networks, connected by intermediate networking devices, that functions as a single large network. Internetworking refers to the industry, products, and procedures that meet the challenge of creating and administering internetworks.

To enable communication, every individual network node or segment is configured with a similar protocol or communication logic, that is, Transmission Control Protocol (TCP) or Internet Protocol (IP). When a network communicates with another network that has the same communication procedures, it is called internetworking. Internetworking was designed to solve the problem of delivering a packet of data across many links.

There is a subtle difference between extending a network and internetworking. Merely using a switch or a hub to connect two local area networks is an extension of the LAN, whereas connecting them via a router is an example of internetworking. Internetworking is implemented in Layer 3 (the Network Layer) of the OSI model. The most notable example of internetworking is the Internet.

There are chiefly three types of internetworking:

  1. Extranet
  2. Intranet
  3. Internet

Intranets and extranets may or may not have connections to the Internet. If there is a connection to the Internet, the intranet or extranet is usually shielded from unauthorized access from the Internet. The Internet is not considered to be a part of the intranet or extranet, although it may serve as a portal for access to portions of an extranet.

  1. Extranet – It is an internetwork that is limited in scope to a single organization or entity but that also has limited connections to the networks of one or more other, usually but not necessarily trusted, organizations or entities. It is the lowest level of internetworking, often implemented in a private area. An extranet may also be classified as a MAN, WAN, or other type of network, but it cannot consist of a single LAN; it must have at least one connection to an external network.
  2. Intranet – An intranet is a set of interconnected networks, using the Internet Protocol and IP-based tools such as web browsers and FTP tools, that is under the control of a single administrative entity. That administrative entity closes the intranet to the rest of the world and permits only specific users. Most commonly, this network is the internal network of an organization or other enterprise. A large intranet will usually have its own web server to supply users with browsable data.
  3. Internet – A specific internetwork consisting of a worldwide interconnection of governmental, academic, public, and private networks based upon the Advanced Research Projects Agency Network (ARPANET) developed by ARPA of the U.S. Department of Defense. It is also home to the World Wide Web (WWW) and is referred to as the 'Internet' to distinguish it from all other generic internetworks. Participants in the Internet, or their service providers, use IP addresses obtained from address registries that control assignments.

Internetwork Addressing –

Internetwork addresses identify devices individually or as members of a group. Addressing schemes differ based on the protocol family and the OSI layer. Three kinds of internetwork addresses are commonly used: data link layer addresses, Media Access Control (MAC) addresses, and network layer addresses.

  1. Data Link Layer addresses: A data link layer address uniquely identifies each physical network connection of a network device. Data link addresses are typically referred to as physical or hardware addresses. They usually exist within a flat address space and have a pre-established and usually fixed relationship to a specific device. End systems generally have only one physical network connection, and therefore have only one data link address. Routers and other internetworking devices typically have multiple physical network connections and consequently have multiple data link addresses.
  2. MAC Addresses: Media Access Control (MAC) addresses are a subset of data link layer addresses. MAC addresses identify network entities in LANs that implement the IEEE MAC addresses of the data link layer. A distinct MAC address exists for each LAN interface. MAC addresses are 48 bits long and are expressed as 12 hexadecimal digits. The first 6 hexadecimal digits, which are administered by the IEEE, identify the manufacturer or vendor and therefore comprise the Organizationally Unique Identifier (OUI). The last 6 hexadecimal digits comprise the interface serial number or another value administered by the specific vendor. MAC addresses are often called burned-in addresses (BIAs) because they are burned into read-only memory (ROM) and copied into random-access memory (RAM) when the interface card initializes. (A small address-splitting sketch follows this list.)
  3. Network Layer Addresses: Network addresses usually exist within a hierarchical address space and are sometimes called virtual or logical addresses. The relationship between a network address and a device is logical and unfixed; it typically is based either on physical network characteristics or on groupings that have no physical basis. End systems require one network layer address for each network layer protocol they support. Routers and other internetworking devices require one network layer address per physical network connection for each network layer protocol supported.
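A tiny sketch of splitting a MAC address into those two halves follows; the example address is made up, and the function only validates length, not whether the OUI is actually registered.

    def split_mac(mac):
        """Split a MAC address into its OUI (vendor) part and the vendor-assigned part."""
        digits = mac.replace(":", "").replace("-", "").lower()
        if len(digits) != 12:
            raise ValueError("a MAC address has 12 hexadecimal digits (48 bits)")
        return digits[:6], digits[6:]

    print(split_mac("00:1A:2B:3C:4D:5E"))   # ('001a2b', '3c4d5e')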

Challenges to Internetworking –

Implementing a functional internetwork is no simple task. There are many challenges, particularly in the areas of reliability, connectivity, network management, and flexibility, and each area is essential to establishing an efficient and effective internetwork. A few of them are:

  • The initial challenge lies in connecting numerous systems to support communication between disparate technologies. For example, different sites may use different types of media, or they may operate at varying speeds.
  • Another essential consideration is the reliable service that must be maintained in an internetwork. Individual users and entire organizations depend on consistent, reliable access to network resources.
  • Network management must provide centralized support and troubleshooting capabilities in an internetwork. Configuration, security, performance, and other issues must be adequately addressed for the internetwork to perform smoothly.
  • Flexibility, the final concern, is necessary for network expansion and for new applications and services, among other factors.

Network Addressing

When you configure the TCP/IP protocol on a Windows computer, the TCP/IP configuration settings require:

  • An IP address
  • A subnet mask
  • A default gateway

To configure TCP/IP correctly, it's necessary to understand how TCP/IP networks are addressed and divided into networks and subnetworks.

The success of TCP/IP as the network protocol of the Internet is largely because of its ability to connect together networks of different sizes and systems of different types. These networks are arbitrarily defined into three main classes (along with a few others) that have predefined sizes. Each of them can be divided into smaller subnetworks by system administrators. A subnet mask is used to divide an IP address into two parts. One part identifies the host (computer), the other part identifies the network to which it belongs. To better understand how IP addresses and subnet masks work, look at an IP address and see how it's organized.

IP addresses: Networks and hosts

An IP address is a 32-bit number. It uniquely identifies a host (computer or other device, such as a printer or router) on a TCP/IP network.

IP addresses are normally expressed in dotted-decimal format, with four numbers separated by periods, such as 192.168.123.132. To understand how subnet masks are used to distinguish between hosts, networks, and subnetworks, examine an IP address in binary notation.

For example, the dotted-decimal IP address 192.168.123.132 is (in binary notation) the 32-bit number 11000000101010000111101110000100. This number may be hard to make sense of, so divide it into four parts of eight binary digits.

These 8-bit sections are known as octets. The example IP address, then, becomes 11000000.10101000.01111011.10000100. This number only makes a little more sense, so for most uses, convert the binary address into dotted-decimal format (192.168.123.132). The decimal numbers separated by periods are the octets converted from binary to decimal notation.
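The conversion is easy to check programmatically; for example, this small snippet (plain Python formatting, nothing specific to this article) prints each octet of the example address as eight binary digits:

    ip = "192.168.123.132"
    print(".".join(format(int(octet), "08b") for octet in ip.split(".")))
    # 11000000.10101000.01111011.10000100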

For a TCP/IP wide area network (WAN) to work efficiently as a collection of networks, the routers that pass packets of data between networks don't know the exact location of a host for which a packet of information is destined. Routers only know what network the host is a member of and use information stored in their route table to determine how to get the packet to the destination host's network. After the packet is delivered to the destination's network, the packet is delivered to the appropriate host.

For this process to work, an IP address has two parts. The first part of an IP address is used as a network address, the last part as a host address. If you take the example 192.168.123.132 and divide it into these two parts, you get:

192.168.123. - network part
.132 - host part

or

192.168.123.0 - network address
0.0.0.132 - host address

Subnet mask

The second item, which is required for TCP/IP to work, is the subnet mask. The subnet mask is used by the TCP/IP protocol to determine whether a host is on the local subnet or on a remote network.

In TCP/IP, the parts of the IP address that are used as the network and host addresses aren't fixed. Unless you have more information, the network and host addresses above can't be determined. This information is supplied in another 32-bit number called a subnet mask. The subnet mask is 255.255.255.0 in this example. It isn't obvious what this number means unless you know 255 in binary notation equals 11111111. So, the subnet mask is 11111111.11111111.11111111.00000000.

Lining up the IP address and the subnet mask together, the network, and host portions of the address can be separated:

11000000.10101000.01111011.10000100 - IP address (192.168.123.132)
11111111.11111111.11111111.00000000 - Subnet mask (255.255.255.0)

The first 24 bits (the number of ones in the subnet mask) are identified as the network address. The last 8 bits (the number of remaining zeros in the subnet mask) are identified as the host address. It gives you the following addresses:

11000000.10101000.01111011.00000000 - Network address (192.168.123.0)
00000000.00000000.00000000.10000100 - Host address (000.000.000.132)

So now you know, for this example using a 255.255.255.0 subnet mask, that the network ID is 192.168.123.0, and the host address is 0.0.0.132. When a packet arrives on the 192.168.123.0 subnet (from the local subnet or a remote network), and it has a destination address of 192.168.123.132, your computer will receive it from the network and process it.
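The same split can be reproduced with a bitwise AND against the mask; the sketch below uses Python's standard ipaddress module with the example address and mask from above.

    import ipaddress

    ip   = int(ipaddress.IPv4Address("192.168.123.132"))
    mask = int(ipaddress.IPv4Address("255.255.255.0"))

    network_part = ipaddress.IPv4Address(ip & mask)                 # 192.168.123.0
    host_part    = ipaddress.IPv4Address(ip & ~mask & 0xFFFFFFFF)   # 0.0.0.132

    print(network_part, host_part)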

Almost all decimal subnet masks convert to binary numbers that are all ones on the left and all zeros on the right. Some other common subnet masks are:

Decimal            Binary
255.255.255.192    11111111.11111111.11111111.11000000
255.255.255.224    11111111.11111111.11111111.11100000

Internet RFC 1878 describes the valid subnets and subnet masks that can be used on TCP/IP networks.

 

Network classes

Internet addresses are allocated by the InterNIC, the organization that administers the Internet. These IP addresses are divided into classes. The most common of them are classes A, B, and C. Classes D and E exist, but aren't used by end users. Each of the address classes has a different default subnet mask. You can identify the class of an IP address by looking at its first octet. Following are the ranges of Class A, B, and C Internet addresses, each with an example address:

  • Class A networks use a default subnet mask of 255.0.0.0 and have 1-126 as their first octet. The address 10.52.36.11 is a class A address. Its first octet is 10, which is between 1 and 126, inclusive. (A short classification sketch follows this list.)

  • Class B networks use a default subnet mask of 255.255.0.0 and have 128-191 as their first octet. The address 172.16.52.63 is a class B address. Its first octet is 172, which is between 128 and 191, inclusive.

  • Class C networks use a default subnet mask of 255.255.255.0 and have 192-223 as their first octet. The address 192.168.123.132 is a class C address. Its first octet is 192, which is between 192 and 223, inclusive.
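A purely illustrative lookup of the class and its historic default mask from the first octet could look like this (it mirrors the ranges listed above and treats 0 and 127 as reserved):

    def default_class_and_mask(ip):
        """Return the classful network class and its historic default subnet mask."""
        first = int(ip.split(".")[0])
        if 1 <= first <= 126:
            return "A", "255.0.0.0"
        if 128 <= first <= 191:
            return "B", "255.255.0.0"
        if 192 <= first <= 223:
            return "C", "255.255.255.0"
        return "D/E or reserved", None

    print(default_class_and_mask("192.168.123.132"))   # ('C', '255.255.255.0')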

In some scenarios, the default subnet mask values don't fit the organization needs for one of the following reasons:

  • The physical topology of the network
  • The numbers of networks (or hosts) don't fit within the default subnet mask restrictions.

The next section explains how networks can be divided using subnet masks.

Subnetting

A Class A, B, or C TCP/IP network can be further divided, or subnetted, by a system administrator. It becomes necessary as you reconcile the logical address scheme of the Internet (the abstract world of IP addresses and subnets) with the physical networks in use by the real world.

A system administrator who is allocated a block of IP addresses may be administering networks that aren't organized in a way that easily fits these addresses. For example, you have a wide area network with 150 hosts on three networks (in different cities) that are connected by a TCP/IP router. Each of these three networks has 50 hosts. You are allocated the class C network 192.168.123.0. (For illustration, this address is actually from a range that isn't allocated on the Internet.) It means that you can use the addresses 192.168.123.1 to 192.168.123.254 for your 150 hosts.

Two addresses that can't be used in your example are 192.168.123.0 and 192.168.123.255 because binary addresses with a host portion of all ones and all zeros are invalid. The zero address is invalid because it's used to specify a network without specifying a host. The 255 address (in binary notation, a host address of all ones) is used to broadcast a message to every host on a network. Just remember that the first and last address in any network or subnet can't be assigned to any individual host.

You should now be able to give IP addresses to 254 hosts. It works fine if all 150 computers are on a single network. However, your 150 computers are on three separate physical networks. Instead of requesting more address blocks for each network, you divide your network into subnets that enable you to use one block of addresses on multiple physical networks.

In this case, you divide your network into four subnets by using a subnet mask that makes the network address larger and the possible range of host addresses smaller. In other words, you are 'borrowing' some of the bits used for the host address, and using them for the network portion of the address. The subnet mask 255.255.255.192 gives you four networks of 62 hosts each. It works because in binary notation, 255.255.255.192 is the same as 11111111.11111111.11111111.11000000. The first two bits of the last octet become part of the network address, so you get the additional networks 00000000 (0), 01000000 (64), 10000000 (128) and 11000000 (192). (Some administrators will only use two of the subnetworks using 255.255.255.192 as a subnet mask. For more information on this topic, see RFC 1878.) In these four networks, the last six binary digits can be used for host addresses.

Using a subnet mask of 255.255.255.192, your 192.168.123.0 network then becomes the four networks 192.168.123.0, 192.168.123.64, 192.168.123.128 and 192.168.123.192. These four networks would have as valid host addresses:

192.168.123.1 - 192.168.123.62
192.168.123.65 - 192.168.123.126
192.168.123.129 - 192.168.123.190
192.168.123.193 - 192.168.123.254

Remember, again, that binary host addresses with all ones or all zeros are invalid, so you can't use addresses with the last octet of 0, 63, 64, 127, 128, 191, 192, or 255.

You can see how it works by looking at two host addresses, 192.168.123.71 and 192.168.123.133. If you used the default Class C subnet mask of 255.255.255.0, both addresses are on the 192.168.123.0 network. However, if you use the subnet mask of 255.255.255.192, they are on different networks; 192.168.123.71 is on the 192.168.123.64 network, 192.168.123.133 is on the 192.168.123.128 network.
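That comparison can be checked directly; the sketch below uses Python's standard ipaddress module to show that the two example hosts share a network under the default 255.255.255.0 mask but land on different subnets under 255.255.255.192.

    import ipaddress

    def same_subnet(host_a, host_b, mask):
        """True when both hosts fall in the same network for the given subnet mask."""
        net_a = ipaddress.ip_network(f"{host_a}/{mask}", strict=False)
        net_b = ipaddress.ip_network(f"{host_b}/{mask}", strict=False)
        return net_a == net_b

    print(same_subnet("192.168.123.71", "192.168.123.133", "255.255.255.0"))     # True
    print(same_subnet("192.168.123.71", "192.168.123.133", "255.255.255.192"))   # False

This is essentially the test a host performs (against its own address) when deciding whether a destination is local or should be handed to the default gateway, as described in the next section.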

Default gateways

If a TCP/IP computer needs to communicate with a host on another network, it will usually communicate through a device called a router. In TCP/IP terms, a router that is specified on a host, which links the host's subnet to other networks, is called a default gateway. This section explains how TCP/IP determines whether or not to send packets to its default gateway to reach another computer or device on the network.

When a host attempts to communicate with another device using TCP/IP, it performs a comparison process using the defined subnet mask and the destination IP address versus the subnet mask and its own IP address. The result of this comparison tells the computer whether the destination is a local host or a remote host.

If the result of this process determines the destination to be a local host, then the computer will send the packet on the local subnet. If the result of the comparison determines the destination to be a remote host, then the computer will forward the packet to the default gateway defined in its TCP/IP properties. It's then the responsibility of the router to forward the packet to the correct subnet.

Troubleshooting

TCP/IP network problems are often caused by incorrect configuration of the three main entries in a computer's TCP/IP properties. By understanding how errors in TCP/IP configuration affect network operations, you can solve many common TCP/IP problems.

Incorrect Subnet Mask: If a network uses a subnet mask other than the default mask for its address class, and a client is still configured with the default subnet mask for the address class, communication will fail to some nearby networks but not to distant ones. As an example, if you create four subnets (such as in the subnetting example) but use the incorrect subnet mask of 255.255.255.0 in your TCP/IP configuration, hosts won't be able to determine that some computers are on different subnets than their own. In this situation, packets destined for hosts on different physical networks that are part of the same Class C address won't be sent to a default gateway for delivery. A common symptom of this issue is when a computer can communicate with hosts that are on its local network and can talk to all remote networks except those networks that are nearby and have the same class A, B, or C address. To fix this problem, just enter the correct subnet mask in the TCP/IP configuration for that host.

Incorrect IP Address: If you put computers with IP addresses that should be on separate subnets on a local network with each other, they won't be able to communicate. They'll try to send packets to each other through a router that can't forward them correctly. A symptom of this problem is a computer that can talk to hosts on remote networks, but can't communicate with some or all computers on their local network. To correct this problem, make sure all computers on the same physical network have IP addresses on the same IP subnet. If you run out of IP addresses on a single network segment, there are solutions that go beyond the scope of this article.

Incorrect Default Gateway: A computer configured with an incorrect default gateway can communicate with hosts on its own network segment. But it will fail to communicate with hosts on some or all remote networks. A host can communicate with some remote networks but not others if the following conditions are true:

  • A single physical network has more than one router.
  • The wrong router is configured as a default gateway.

This problem is common if an organization has a router to an internal TCP/IP network and another router connected to the Internet.

Routing

  • Routing is the process of selecting a path along which data can be transferred from the source to the destination. Routing is performed by a special device known as a router.
  • A router works at the network layer in the OSI model and at the internet layer in the TCP/IP model.
  • A router is a networking device that forwards the packet based on the information available in the packet header and forwarding table.
  • The routing algorithms are used for routing the packets. A routing algorithm is nothing but software responsible for deciding the optimal path through which a packet can be transmitted.
  • The routing protocols use the metric to determine the best path for the packet delivery. The metric is the standard of measurement such as hop count, bandwidth, delay, current load on the path, etc. used by the routing algorithm to determine the optimal path to the destination.
  • The routing algorithm initializes and maintains the routing table for the process of path determination.

Routing Metrics and Costs

Routing metrics and costs are used for determining the best route to the destination. The factors used by the protocols to determine the shortest path are known as metrics.

Metrics are the network variables used to determine the best route to the destination. Some protocols use static metrics, meaning that their values cannot be changed, while other routing protocols use dynamic metrics, meaning that their values can be assigned by the system administrator.

The most common metric values are given below:

  • Hop count: Hop count is a metric that specifies the number of passes through internetworking devices, such as routers, that a packet must make along a route from the source to the destination. If the routing protocol considers the hop count as the primary metric value, then the path with the least hop count will be considered the best path from source to destination.
  • Delay: It is a time taken by the router to process, queue and transmit a datagram to an interface. The protocols use this metric to determine the delay values for all the links along the path end-to-end. The path having the lowest delay value will be considered as the best path.
  • Bandwidth: The capacity of the link is known as a bandwidth of the link. The bandwidth is measured in terms of bits per second. The link that has a higher transfer rate like gigabit is preferred over the link that has the lower capacity like 56 kb. The protocol will determine the bandwidth capacity for all the links along the path, and the overall higher bandwidth will be considered as the best route.
  • Load: Load refers to the degree to which the network resource such as a router or network link is busy. A Load can be calculated in a variety of ways such as CPU utilization, packets processed per second. If the traffic increases, then the load value will also be increased. The load value changes with respect to the change in the traffic.
  • Reliability: Reliability is a metric factor that may be a fixed value or may be measured dynamically; it depends on the network links. Some networks go down more often than others. After a network failure, some network links are repaired more easily than others. Any reliability factors can be taken into account when assigning reliability ratings, which are generally numeric values assigned by the system administrator.

Types of Routing

Routing can be classified into three categories:

  • Static Routing
  • Default Routing
  • Dynamic Routing

Static Routing

  • Static Routing is also known as Nonadaptive Routing.
  • It is a technique in which the administrator manually adds the routes in a routing table.
  • A Router can send the packets for the destination along the route defined by the administrator.
  • In this technique, routing decisions are not made based on the condition or topology of the networks

Advantages Of Static Routing

Following are the advantages of Static Routing:

  • No overhead: It places no overhead on the CPU of the router, so a cheaper router can be used for static routing.
  • Bandwidth: No bandwidth is consumed between the routers for exchanging routing updates.
  • Security: It provides security, as only the system administrator has control over routing to a particular network.

Disadvantages of Static Routing:

Following are the disadvantages of Static Routing:

  • For a large network, it becomes a very difficult task to add each route manually to the routing table.
  • The system administrator must have good knowledge of the topology, since every route has to be added manually.

Default Routing

  • Default Routing is a technique in which a router is configured to send all packets to the same next-hop device, regardless of which network they belong to. A packet is simply transmitted to the device configured as the default route.
  • Default Routing is used when a network has a single exit point.
  • It is also useful when the bulk of the traffic has to be transmitted to the same next-hop device.
  • When a specific route is present in the routing table, the router chooses that specific route rather than the default route; the default route is chosen only when no specific route matches (see the sketch below).
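
The sketch below shows, in Python, how a router might combine specific static routes with a default route: the most specific matching prefix wins, and the default route (0.0.0.0/0) is used only when nothing else matches. The routing table entries and next-hop names are hypothetical.

    import ipaddress

    # Hypothetical static routing table: prefix -> next-hop router.
    # The 0.0.0.0/0 entry is the default route.
    routing_table = {
        ipaddress.ip_network("10.1.0.0/16"): "Router-A",
        ipaddress.ip_network("10.1.5.0/24"): "Router-B",
        ipaddress.ip_network("0.0.0.0/0"):   "Router-ISP",   # default route
    }

    def next_hop(destination):
        dest = ipaddress.ip_address(destination)
        # Collect every prefix that contains the destination address.
        matches = [net for net in routing_table if dest in net]
        # Prefer the most specific (longest) prefix; the default route (/0)
        # matches everything, so it is chosen only as a last resort.
        best = max(matches, key=lambda net: net.prefixlen)
        return routing_table[best]

    print(next_hop("10.1.5.20"))   # Router-B  (the specific /24 route wins)
    print(next_hop("8.8.8.8"))     # Router-ISP (falls back to the default route)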

Dynamic Routing

  • It is also known as Adaptive Routing.
  • It is a technique in which a router automatically adds or updates routes in the routing table in response to changes in the condition or topology of the network.
  • Dynamic routing protocols are used to discover new routes to reach the destination.
  • In Dynamic Routing, RIP and OSPF are typical protocols used to discover new routes (see the sketch after this list).
  • If any route goes down, an automatic adjustment is made so that the destination can still be reached.
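
As a rough illustration of how a dynamic protocol such as RIP learns routes, the Python sketch below performs one distance-vector (Bellman-Ford style) merge of a neighbour's advertised table into the router's own table, using hop count as the metric. The tables and network names are hypothetical.

    # Minimal sketch of one RIP-style (distance-vector) update step.
    # Each table maps destination network -> hop count; values are hypothetical.
    my_table = {"Net-A": 1, "Net-B": 4}
    neighbor_table = {"Net-B": 1, "Net-C": 2}   # advertised by a directly connected neighbour

    def merge(own, advertised, cost_to_neighbor=1):
        # Bellman-Ford style relaxation: accept a route through the neighbour
        # whenever it is shorter than what we already know (or entirely new).
        updated = dict(own)
        for dest, hops in advertised.items():
            candidate = hops + cost_to_neighbor
            if dest not in updated or candidate < updated[dest]:
                updated[dest] = candidate
        return updated

    print(merge(my_table, neighbor_table))
    # {'Net-A': 1, 'Net-B': 2, 'Net-C': 3}  -- Net-B improved, Net-C newly learned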

The Dynamic protocol should have the following features:

  • All the routers must run the same dynamic routing protocol in order to exchange routes.
  • If a router discovers any change in the condition or topology, it broadcasts this information to all other routers.

Advantages of Dynamic Routing:

  • It is easier to configure.
  • It is more effective in selecting the best route in response to the changes in the condition or topology.

Disadvantages of Dynamic Routing:

  • It is more expensive in terms of CPU and bandwidth usage.
  • It is less secure as compared to default and static routing.

Friday, September 6, 2024

Data Link Layer


  • In the OSI model, the data link layer is the 2nd layer from the bottom (Layer 2) and the 6th layer from the top.

  • The communication channels that connect adjacent nodes are known as links, and in order to move a datagram from source to destination, it must be moved across each individual link along the path.

  • The main responsibility of the Data Link Layer is to transfer the datagram across an individual link.

  • The data link layer protocol defines the format of the packets exchanged between the nodes, as well as actions such as error detection, retransmission, flow control, and random access.

  • The Data Link Layer protocols are Ethernet, Token Ring, FDDI, and PPP.

  • An important characteristic of the Data Link Layer is that a datagram can be handled by different link layer protocols on different links along the path. For example, a datagram may be handled by Ethernet on the first link and by PPP on the second link.

Following services are provided by the Data Link Layer:

  • Framing & Link access: Data Link Layer protocols encapsulate each network layer datagram within a link layer frame before transmission across the link. A frame consists of a data field, in which the network layer datagram is inserted, and a number of header fields. The protocol also specifies the structure of the frame and a channel access protocol by which the frame is to be transmitted over the link.

  • Reliable delivery: The Data Link Layer can provide a reliable delivery service, i.e., transmit the network layer datagram without error. Reliable delivery is accomplished with retransmissions and acknowledgements. A data link layer typically provides this service over links with high error rates, because errors can then be corrected locally, on the link where they occur, rather than forcing an end-to-end retransmission of the data.

  • Flow control: A receiving node can receive frames at a faster rate than it can process them. Without flow control, the receiver's buffer can overflow and frames can get lost. To overcome this problem, the data link layer uses flow control to prevent the sending node on one side of the link from overwhelming the receiving node on the other side.

  • Error detection: Errors can be introduced by signal attenuation and noise. Data Link Layer protocols provide a mechanism to detect one or more errors. This is achieved by adding error-detection bits to the frame, which the receiving node then uses to perform an error check.

  • Error correction: Error correction is similar to error detection, except that the receiving node not only detects the errors but also determines where in the frame they have occurred.

  • Half-Duplex & Full-Duplex: In a Full-Duplex mode, both the nodes can transmit the data at the same time. In a Half-Duplex mode, only one node can transmit the data at the same time.

Errors

When bits are transmitted over a computer network, they may get corrupted due to interference and network problems. The corrupted bits lead to spurious data being received by the destination and are called errors.

Types of Errors

Errors can be of three types, namely single bit errors, multiple bit errors, and burst errors.

  • Single bit error − In the received frame, only one bit has been corrupted, i.e. either changed from 0 to 1 or from 1 to 0.

  • Multiple bits error − In the received frame, more than one bit is corrupted.
  • Burst error − In the received frame, two or more consecutive bits are corrupted.

Error Control

Error control can be done in two ways

  • Error detection − Error detection involves checking whether any error has occurred or not. The number of error bits and the type of error does not matter.

  • Error correction − Error correction involves ascertaining the exact number of bits that have been corrupted and the location of the corrupted bits.

For both error detection and error correction, the sender needs to send some additional bits along with the data bits. The receiver performs necessary checks based upon the additional redundant bits. If it finds that the data is free from errors, it removes the redundant bits before passing the message to the upper layers.

Error Detection Techniques

There are three main techniques for detecting errors in frames: Parity Check, Checksum and Cyclic Redundancy Check (CRC).

Parity Check

The parity check is done by adding an extra bit, called the parity bit, to the data so that the number of 1s becomes even in the case of even parity or odd in the case of odd parity.

While creating a frame, the sender counts the number of 1s in it and adds the parity bit in the following way

  • In case of even parity: If the number of 1s is even, the parity bit value is 0; if the number of 1s is odd, the parity bit value is 1.

  • In case of odd parity: If the number of 1s is odd, the parity bit value is 0; if the number of 1s is even, the parity bit value is 1.

On receiving a frame, the receiver counts the number of 1s in it. In case of even parity check, if the count of 1s is even, the frame is accepted, otherwise, it is rejected. A similar rule is adopted for odd parity check.

The parity check is suitable for single bit error detection only.
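
A minimal Python sketch of even parity, using bit strings, is shown below. The data value is arbitrary; the point is that a single flipped bit makes the count of 1s odd, so the frame is rejected.

    # Minimal sketch of even-parity generation and checking for a bit string.
    def add_even_parity(bits):
        # Parity bit is 0 if the count of 1s is already even, otherwise 1,
        # so the transmitted frame always carries an even number of 1s.
        parity = "1" if bits.count("1") % 2 else "0"
        return bits + parity

    def check_even_parity(frame):
        # The frame is accepted only if its total number of 1s is even.
        return frame.count("1") % 2 == 0

    frame = add_even_parity("1011001")      # -> "10110010"
    print(check_even_parity(frame))         # True  -> accepted
    corrupted = "0" + frame[1:]             # a single-bit error in the first position
    print(check_even_parity(corrupted))     # False -> rejected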

Checksum

In this error detection scheme, the following procedure is applied

  • Data is divided into fixed sized frames or segments.

  • The sender adds the segments using 1’s complement arithmetic to get the sum. It then complements the sum to get the checksum and sends it along with the data frames.

  • The receiver adds the incoming segments along with the checksum using 1’s complement arithmetic to get the sum and then complements it.

  • If the result is zero, the received frames are accepted; otherwise, they are discarded (see the sketch below).
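
The Python sketch below follows the same procedure with 16-bit segments: the sender folds any carry back into the sum (1's complement addition) and complements the result to get the checksum, and the receiver accepts the data only if the complement of its own sum is zero. The segment values are hypothetical.

    # Minimal sketch of the 1's complement checksum over 16-bit segments.
    def ones_complement_sum(segments, bits=16):
        mask = (1 << bits) - 1
        total = 0
        for seg in segments:
            total += seg
            # End-around carry: fold any overflow back into the sum.
            total = (total & mask) + (total >> bits)
        return total

    def make_checksum(segments):
        # Sender: sum the segments, then complement the result.
        return ones_complement_sum(segments) ^ 0xFFFF

    def verify(segments, checksum):
        # Receiver: the sum of segments plus checksum must complement to zero.
        return (ones_complement_sum(segments + [checksum]) ^ 0xFFFF) == 0

    data = [0x4500, 0x003C, 0x1C46]               # hypothetical 16-bit segments
    chk = make_checksum(data)
    print(verify(data, chk))                      # True  -> accepted
    print(verify([0x4501, 0x003C, 0x1C46], chk))  # False -> corrupted, discarded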

Cyclic Redundancy Check (CRC)

Cyclic Redundancy Check (CRC) involves binary division of the data bits being sent by a predetermined divisor agreed upon by the communicating systems. The divisor is generated using polynomials.

  • Here, the sender performs binary division of the data segment by the divisor. It then appends the remainder called CRC bits to the end of the data segment. This makes the resulting data unit exactly divisible by the divisor.

  • The receiver divides the incoming data unit by the same divisor. If there is no remainder, the data unit is assumed to be correct and is accepted; otherwise, the data is understood to be corrupted and is rejected (see the sketch below).
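
The Python sketch below carries out the modulo-2 (XOR) long division described above on bit strings. The dataword and generator are small textbook-style values chosen only for illustration: the sender appends three zero bits, divides, and transmits the data followed by the 3-bit remainder; the receiver's division of the whole codeword then leaves no remainder.

    # Minimal sketch of CRC generation and checking via modulo-2 (XOR) division.
    def mod2_remainder(bits, divisor):
        # Long division over GF(2); returns the remainder as a bit string.
        work = list(bits)
        for i in range(len(bits) - (len(divisor) - 1)):
            if work[i] == "1":
                for j, d in enumerate(divisor):
                    work[i + j] = str(int(work[i + j]) ^ int(d))
        return "".join(work[-(len(divisor) - 1):])

    data = "100100"
    divisor = "1101"                                  # generator polynomial x^3 + x^2 + 1
    crc = mod2_remainder(data + "000", divisor)       # sender appends 3 zero bits, divides
    codeword = data + crc                             # "100100001"
    print(crc)                                        # 001
    print(mod2_remainder(codeword, divisor))          # 000 -> no remainder, frame accepted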

Error Correction Techniques

Error correction techniques find out the exact bits that have been corrupted, as well as their locations. There are two principal ways −

 

  • Backward Error Correction (Retransmission) − If the receiver detects an error in the incoming frame, it requests the sender to retransmit the frame. It is a relatively simple technique, but it can be used efficiently only where retransmission is inexpensive, as in fiber optics, and where the time for retransmission is low relative to the requirements of the application.

  • Forward Error Correction − If the receiver detects an error in the incoming frame, it executes an error-correcting code that reconstructs the original frame. This saves the bandwidth required for retransmission and is indispensable in real-time systems. However, if there are too many errors, the frame still needs to be retransmitted.

The four main error correction codes are

  • Hamming Codes (see the sketch after this list)
  • Binary Convolution Code
  • Reed–Solomon Code
  • Low-Density Parity-Check Code
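
As a brief illustration of the first of these, the Python sketch below encodes 4 data bits with Hamming(7,4), using the conventional layout in which positions 1, 2, and 4 hold the parity bits. Flipping any single bit of the 7-bit codeword makes the syndrome equal to the position of the corrupted bit, which is what lets the receiver correct it.

    # Minimal sketch of Hamming(7,4): 4 data bits protected by 3 parity bits.
    # Positions (1-based): 1, 2, 4 are parity bits; 3, 5, 6, 7 carry the data bits.
    def hamming74_encode(d):                     # d is a list of 4 data bits
        code = [0] * 8                           # index 0 unused; positions 1..7
        code[3], code[5], code[6], code[7] = d
        code[1] = code[3] ^ code[5] ^ code[7]    # parity over positions with bit 1 set
        code[2] = code[3] ^ code[6] ^ code[7]    # parity over positions with bit 2 set
        code[4] = code[5] ^ code[6] ^ code[7]    # parity over positions with bit 4 set
        return code[1:]

    def error_position(received):                # received is a list of 7 bits
        c = [0] + received
        s1 = c[1] ^ c[3] ^ c[5] ^ c[7]
        s2 = c[2] ^ c[3] ^ c[6] ^ c[7]
        s4 = c[4] ^ c[5] ^ c[6] ^ c[7]
        return s4 * 4 + s2 * 2 + s1              # 0 means no single-bit error detected

    sent = hamming74_encode([1, 0, 1, 1])
    received = sent[:]
    received[4] ^= 1                             # flip position 5 (index 4) in transit
    print(error_position(received))              # 5 -> the corrupted bit's position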

Data Link Controls

Data Link Control is the service provided by the Data Link Layer to provide reliable data transfer over the physical medium. For example, in half-duplex transmission mode, only one device can transmit data at a time; if both devices at the ends of the link transmit simultaneously, the transmissions will collide, leading to loss of information. The data link layer provides coordination among the devices so that no collision occurs.

The Data link layer provides three functions:

  • Line discipline
  • Flow Control
  • Error Control

Line Discipline

  • Line Discipline is a functionality of the Data link layer that provides the coordination among the link systems. It determines which device can send, and when it can send the data.

Line Discipline can be achieved in two ways:

  • ENQ/ACK
  • Poll/select

ENQ/ACK

ENQ/ACK stands for Enquiry/Acknowledgement. It is used when there is a dedicated path between two devices, so that the only device capable of receiving the transmission is the intended one.

ENQ/ACK coordinates which device will start the transmission and whether the recipient is ready or not.

Poll/Select

The Poll/Select method of line discipline works with those topologies where one device is designated as a primary station, and other devices are secondary stations.

Flow Control

  • It is a set of procedures that tells the sender how much data it can transmit before the data overwhelms the receiver.
  • The receiving device has limited speed and limited memory to store the data. Therefore, the receiving device must be able to inform the sending device to stop the transmission temporarily before the limits are reached.
  • It requires a buffer, a block of memory for storing the information until it is processed.

Two methods have been developed to control the flow of data:

  • Stop-and-wait
  • Sliding window

Stop-and-wait

  • In the Stop-and-wait method, the sender waits for an acknowledgement after every frame it sends.
  • Only when the acknowledgement is received is the next frame sent. This process of alternately sending a frame and waiting continues until the sender transmits an EOT (End of Transmission) frame.

Advantage of Stop-and-wait

The Stop-and-wait method is simple as each frame is checked and acknowledged before the next frame is sent.

Disadvantage of Stop-and-wait

The Stop-and-wait technique is inefficient because each frame must travel all the way to the receiver, and the acknowledgement must travel all the way back, before the next frame can be sent. Each frame sent and received therefore occupies the link for an entire round trip.

Sliding Window

  • At the beginning of a transmission, the sender window contains n-1 frame slots; as frames are sent out, the left boundary moves inward, shrinking the size of the window. For example, if the size of the window is w and three frames have been sent out, then the number of frames left in the sender window is w-3 (see the sketch after this list).
  • Once an ACK arrives, the sender window expands by the number of frames acknowledged by that ACK.
  • For example, suppose the size of the window is 7 and frames 0 through 4 have been sent out with no acknowledgement yet received; the sender window then contains only two frames, 5 and 6. If an ACK now arrives with the number 4, meaning that frames 0 through 3 have arrived undamaged, the sender window is expanded by four frames and therefore contains six frames (5, 6, 7, 0, 1, 2).
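
A minimal Python sketch of this sender-side book-keeping is given below, with sequence numbers modulo 8 and a window of n-1 = 7 frames, mirroring the example above. The class and its interface are invented purely for illustration.

    # Minimal sketch of sender-side sliding window book-keeping.
    SEQ_MODULUS = 8
    WINDOW_SIZE = SEQ_MODULUS - 1     # at most 7 frames may be outstanding

    class SenderWindow:
        def __init__(self):
            self.next_seq = 0         # sequence number of the next frame to send
            self.available = WINDOW_SIZE

        def send_frame(self):
            if self.available == 0:
                raise RuntimeError("window full: wait for an ACK")
            seq = self.next_seq
            self.next_seq = (self.next_seq + 1) % SEQ_MODULUS
            self.available -= 1       # the window shrinks with every frame sent
            return seq

        def receive_ack(self, acked_frames):
            # The window expands by the number of frames the ACK acknowledges.
            self.available += acked_frames

    w = SenderWindow()
    print([w.send_frame() for _ in range(5)])   # [0, 1, 2, 3, 4]; only 2 slots remain
    w.receive_ack(4)                            # ACK numbered 4: frames 0-3 arrived undamaged
    print(w.available)                          # 6 -> frames 5, 6, 7, 0, 1, 2 may now be sent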

Receiver Window

  • At the beginning of transmission, the receiver window does not contain n frames, but it contains n-1 spaces for frames.
  • When the new frame arrives, the size of the window shrinks.
  • The receiver window does not represent the number of frames received; it represents the number of frames that can still be received before an ACK is sent. For example, if the size of the window is w and three frames have been received, then the number of spaces available in the window is (w-3).
  • Once the acknowledgement is sent, the receiver window expands by the number equal to the number of frames acknowledged.
  • Suppose the size of the window is 7, meaning the receiver window contains seven spaces for frames. If one frame is received, the receiver window shrinks, moving the boundary from 0 to 1; in this way the window shrinks one space at a time, so it now contains six spaces. If frames 0 through 4 have been received, the window contains two spaces before an acknowledgement is sent.

Error Control

Error Control is a technique of error detection and retransmission.

Categories of Error Control:

Stop-and-wait ARQ

Stop-and-wait ARQ is a technique used to retransmit the data in case of damaged or lost frames.

This technique works on the principle that the sender will not transmit the next frame until it receives the acknowledgement of the last transmitted frame.

Four features are required for the retransmission:

  • The sending device keeps a copy of the last transmitted frame until the acknowledgement is received. Keeping the copy allows the sender to retransmit the data if the frame is not received correctly.
  • Both the data frames and the ACK frames are numbered alternately 0 and 1 so that they can be identified individually. An ACK 1 frame acknowledges the data 0 frame, meaning that data 0 arrived correctly and the receiver now expects data 1.
  • If an error occurs in the last transmitted frame, the receiver sends a NAK frame, which is not numbered. On receiving the NAK frame, the sender retransmits the data.
  • The sender works with a timer. If the acknowledgement is not received within the allotted time, the sender assumes that the frame was lost during transmission and retransmits it.

Two possibilities of the retransmission:

  • Damaged Frame: When the receiver receives a damaged frame, i.e., a frame that contains an error, it returns a NAK frame. For example, the sender transmits the data 0 frame; the receiver returns ACK 1, meaning data 0 arrived correctly and data 1 is expected next. The sender transmits data 1; it arrives undamaged, and the receiver returns ACK 0. The sender transmits the next data 0 frame; this time the receiver detects an error and returns a NAK, so the sender retransmits the data 0 frame.
  • Lost Frame: The sender is equipped with a timer that starts when a frame is transmitted. If the frame never arrives at the receiving end, it can be acknowledged neither positively nor negatively. The sender waits for an acknowledgement until the timer goes off; when the timer expires, it retransmits the last transmitted frame (see the sketch after this list).
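
The Python sketch below puts these features together for the sender side: a copy of the frame is kept and resent on a NAK or a timeout, frames alternate between 0 and 1, and a frame is released only on an ACK. The LossyChannel class is a hypothetical stand-in for the link that deliberately "loses" the first copy of every frame so the retransmission path is exercised.

    # Minimal sketch of the sender side of Stop-and-Wait ARQ.
    class LossyChannel:
        def __init__(self):
            self.attempts = {}

        def send(self, frame):
            self.last = frame
            self.attempts[frame] = self.attempts.get(frame, 0) + 1

        def receive(self, timeout):
            # The first copy of each frame is "lost", so the timer expires (None);
            # the retransmitted copy is acknowledged.
            return None if self.attempts[self.last] == 1 else "ACK"

    def stop_and_wait_send(channel, payloads, timeout=2.0):
        seq = 0
        for payload in payloads:
            frame = (seq, payload)                 # keep this copy until it is ACKed
            while True:
                channel.send(frame)
                reply = channel.receive(timeout)   # "ACK", "NAK", or None on timer expiry
                if reply == "ACK":
                    break
                # NAK or timeout: retransmit the stored copy of the same frame
            seq ^= 1                               # data frames alternate between 0 and 1

    ch = LossyChannel()
    stop_and_wait_send(ch, ["hello", "world"])
    print(ch.attempts)      # {(0, 'hello'): 2, (1, 'world'): 2} -> each frame sent twice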

Sliding Window ARQ

Sliding Window ARQ is a technique used for continuous-transmission error control.

Three Features used for retransmission:

  • In this case, the sender keeps copies of all the transmitted frames until they have been acknowledged. Suppose frames 0 through 4 have been transmitted and the last acknowledgement received covers frames up to frame 2; the sender then has to keep copies of frames 3 and 4 until they are correctly acknowledged.
  • The receiver can send either a NAK or an ACK depending on the conditions. The NAK frame tells the sender that the data has been received damaged. Since the sliding window is a continuous transmission mechanism, both ACK and NAK must be numbered so that the frame they refer to can be identified. The ACK frame carries the number of the next frame the receiver expects to receive; the NAK frame carries the number of the damaged frame.
  • Sliding window ARQ is equipped with a timer to handle lost acknowledgements. Suppose n-1 frames have been sent before any acknowledgement is received; the sender starts the timer and waits before sending any more. If the allotted time runs out, the sender retransmits one or all of the outstanding frames, depending on the protocol used.

Two protocols used in sliding window ARQ:

  • Go-Back-N ARQ: In the Go-Back-N ARQ protocol, if one frame is lost or damaged, the sender retransmits that frame together with all frames sent after it for which no positive ACK has been received (see the sketch after the list below).

Three possibilities can occur for retransmission:

  • Damaged Frame: When a frame is damaged, the receiver sends a NAK frame.
  • Lost Data Frame: In sliding window protocols, data frames are sent sequentially. If any frame is lost, the next frame to arrive at the receiver is out of sequence. The receiver checks the sequence number of each frame, discovers the frame that has been skipped, and returns a NAK for the missing frame. The sending device retransmits the frame indicated by the NAK as well as the frames transmitted after the lost frame.
  • Lost Acknowledgement: The sender can send as many frames as the window allows before waiting for an acknowledgement. Once the limit of the window is reached, the sender has no more frames to send and must wait for an acknowledgement. If the acknowledgement is lost, the sender could wait forever. To avoid this situation, the sender is equipped with a timer that starts counting whenever the window capacity is reached. If no acknowledgement is received within the time limit, the sender retransmits all frames sent since the last ACK.
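
The retransmission rule itself is easy to state in code: on a NAK (or a timeout) for frame k, the sender goes back to frame k and resends it together with everything sent after it. A minimal Python sketch, with hypothetical frame numbers, follows.

    # Minimal sketch of the Go-Back-N retransmission rule.
    def go_back_n_retransmit(outstanding, nak_frame):
        # outstanding holds copies of the unacknowledged frames, in sending order.
        resend_from = outstanding.index(nak_frame)
        return outstanding[resend_from:]

    outstanding = [3, 4, 5, 6]                    # sent but not yet acknowledged
    print(go_back_n_retransmit(outstanding, 4))   # [4, 5, 6] -> go back to frame 4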

Selective-Reject ARQ

  • Selective-Reject ARQ technique is more efficient than Go-Back-n ARQ.
  • In this technique, only those frames are retransmitted for which negative acknowledgement (NAK) has been received.
  • The receiver's storage buffer keeps the frames received after a damaged frame on hold until the frame in error has been correctly received.
  • The receiver must have appropriate logic for reinserting the frames in the correct order.
  • The sender must include a searching mechanism that selects only the requested frame for retransmission.

 

Tuesday, August 27, 2024

Physical Layer



Physical layer is the lowest layer of the OSI reference model. It is responsible for sending bits from one computer to another. This layer is not concerned with the meaning of the bits and deals with the setup of physical connection to the network and with transmission and reception of signals.

Functions of Physical Layer

Following are the various functions performed by the Physical layer of the OSI model.

  1. Representation of Bits: Data in this layer consists of a stream of bits. The bits must be encoded into signals for transmission. The layer defines the type of encoding, i.e., how 0s and 1s are converted into signals.
  2. Data Rate: This layer defines the rate of transmission which is the number of bits per second.
  3. Synchronization: It deals with the synchronization of the transmitter and receiver. The sender and receiver are synchronized at bit level.
  4. Interface: The physical layer defines the transmission interface between devices and transmission medium.
  5. Line Configuration: This layer connects devices with the medium: Point to Point configuration and Multipoint configuration.
  6. Topologies: Devices can be connected using one of the following topologies: Mesh, Star, Ring, and Bus.
  7. Transmission Modes: Physical Layer defines the direction of transmission between two devices: Simplex, Half Duplex, Full Duplex.
  8. It also deals with baseband and broadband transmission.

 

Services:

The physical layer provides the following services:

  • Modulation, the process of converting a signal from one form to another so that it can be physically transmitted over a communication channel.

  • Bit-by-bit delivery.

  • Line coding, which allows data to be sent by hardware devices that are optimized for digital communications that may have discrete timing on the transmission link.

  • Bit synchronization for synchronous serial communications.

  • Start-stop signaling and flow control in asynchronous serial communication.

  • Circuit switching and multiplexing, i.e., hardware control of multiplexed digital signals.

  • Carrier sensing and collision detection, whereby the physical layer detects carrier availability and avoids the congestion problems caused by undeliverable packets.

  • Signal equalization to ensure reliable connections and facilitate multiplexing.

  • Forward error correction/channel coding such as error correction code.

  • Bit interleaving to improve error correction.

  • Auto-negotiation.

  • Transmission mode control.

Examples of protocols that use physical layers include:

  • Digital Subscriber Line.
  • Integrated Services Digital Network.
  • Infrared Data Association.
  • Universal Serial Bus (USB).
  • Bluetooth.
  • Controller Area Network.
  • Ethernet.

Network switch

A network switch is networking hardware that connects devices on a computer network by using packet switching to receive and forward data to the destination device. A network switch is a multiport network bridge that uses MAC addresses to forward data at the data link layer of the OSI model.

Switching techniques

In large networks, there can be multiple paths from sender to receiver. The switching technique will decide the best route for data transmission.

A switching technique is used to connect systems so that one-to-one communication can take place.

Classification Of Switching Techniques

Switching techniques are broadly classified into three categories: circuit switching, message switching, and packet switching.

Circuit Switching

  • Circuit switching is a switching technique that establishes a dedicated path between sender and receiver.
  • In the circuit switching technique, once the connection is established, the dedicated path remains in existence until the connection is terminated.
  • Circuit switching in a network operates in a similar way to the telephone system.
  • A complete end-to-end path must exist before the communication takes place.
  • In the circuit switching technique, when a user wants to send data, voice, or video, a request signal is sent to the receiver, and the receiver sends back an acknowledgement to confirm the availability of the dedicated path. Only after the acknowledgement is received is the data transferred over the dedicated path.
  • Circuit switching is used in the public telephone network. It is used for voice transmission.
  • A fixed amount of bandwidth is available for transferring data in circuit switching technology.

Communication through circuit switching has 3 phases:

  • Circuit establishment
  • Data transfer
  • Circuit Disconnect

Circuit Switching can use either of the two technologies:

Space Division Switches:

  • Space Division Switching is a circuit switching technology in which a single transmission path is accomplished in a switch by using a physically separate set of crosspoints.
  • Space Division Switching can be achieved by using crossbar switch. A crossbar switch is a metallic crosspoint or semiconductor gate that can be enabled or disabled by a control unit.
  • Crossbar switches can also be built from semiconductor gates; for example, Xilinx implements crossbar switches using FPGAs.
  • Space Division Switching provides high-speed, high-capacity, nonblocking switches.

Space Division Switches can be categorized in two ways:

  • Crossbar Switch
  • Multistage Switch

Crossbar Switch

The Crossbar switch is a switch that has n input lines and n output lines, and therefore n × n = n² intersection points known as crosspoints. For example, a crossbar connecting 8 inputs to 8 outputs needs 64 crosspoints, while one connecting 1,000 stations needs 1,000,000.

Disadvantage of Crossbar switch:

The number of crosspoints increases as the number of stations is increased. Therefore, it becomes very expensive for a large switch. The solution to this is to use a multistage switch.

Multistage Switch

  • Multistage Switch is made by splitting the crossbar switch into the smaller units and then interconnecting them.
  • It reduces the number of crosspoints.
  • If one path fails, then there will be an availability of another path.

Advantages Of Circuit Switching:

  • In the case of Circuit Switching technique, the communication channel is dedicated.
  • It has fixed bandwidth.

Disadvantages Of Circuit Switching:

  • Once the dedicated path is established, the only delay occurs in the speed of data transmission.
  • It takes a long time to establish a connection (approximately 10 seconds), during which no data can be transmitted.
  • It is more expensive than other switching techniques as a dedicated path is required for each connection.
  • It is inefficient to use because once the path is established and no data is transferred, then the capacity of the path is wasted.
  • Because the connection is dedicated, no other data can be transferred over it even when the channel is idle.

Message Switching

  • Message Switching is a switching technique in which a message is transferred as a complete unit and routed through intermediate nodes at which it is stored and forwarded.
  • In Message Switching technique, there is no establishment of a dedicated path between the sender and receiver.
  • The destination address is appended to the message. Message Switching provides a dynamic routing as the message is routed through the intermediate nodes based on the information available in the message.
  • Message switches are programmed in such a way so that they can provide the most efficient routes.
  • Each node stores the entire message and then forwards it to the next node. This type of network is known as a store-and-forward network.
  • Message switching treats each message as an independent entity.

Advantages Of Message Switching

  • Data channels are shared among the communicating devices that improve the efficiency of using available bandwidth.
  • Traffic congestion can be reduced because the message is temporarily stored in the nodes.
  • Message priority can be used to manage the network.
  • The size of the messages sent over the network can vary; therefore, messages of effectively unlimited size can be supported.

Disadvantages Of Message Switching

  • The message switches must be equipped with sufficient storage to enable them to store the messages until the message is forwarded.
  • Long delays can occur due to the storing and forwarding performed by the message switching technique.

Packet Switching

  • Packet switching is a switching technique in which the message is not sent as one unit; instead, it is divided into smaller pieces that are sent individually.
  • The message is split into smaller pieces known as packets, and each packet is given a unique number so that its order can be identified at the receiving end.
  • Every packet contains some information in its headers, such as the source address, destination address, and sequence number.
  • Packets travel across the network, taking the shortest path possible.
  • All the packets are reassembled at the receiving end in the correct order.
  • If any packet is missing or corrupted, a message is sent asking the sender to resend it.
  • Once all packets have arrived in the correct order, an acknowledgement message is sent (see the sketch below).
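
The endpoint behaviour described above can be sketched in a few lines of Python: the message is cut into numbered packets, each carrying source and destination addresses, and the receiver reassembles them by sequence number even if they arrive out of order. The addresses and packet size are hypothetical.

    # Minimal sketch of packetizing a message and reassembling it from its packets.
    def packetize(message, size=4, src="Host-A", dst="Host-B"):
        return [
            {"src": src, "dst": dst, "seq": i, "data": message[pos:pos + size]}
            for i, pos in enumerate(range(0, len(message), size))
        ]

    def reassemble(packets):
        # Sort by sequence number so the original order is restored.
        return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

    packets = packetize("computer networks")
    packets.reverse()               # simulate out-of-order arrival over different paths
    print(reassemble(packets))      # "computer networks"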

Approaches Of Packet Switching:

There are two approaches to Packet Switching:

Datagram Packet switching:

  • It is a packet switching technology in which each packet, known as a datagram, is treated as an independent entity. Each packet contains information about the destination, and the switch uses this information to forward the packet to the correct destination.
  • The packets are reassembled at the receiving end in correct order.
  • In Datagram Packet Switching technique, the path is not fixed.
  • Intermediate nodes take the routing decisions to forward the packets.
  • Datagram Packet Switching is also known as connectionless switching.

Virtual Circuit Switching

  • Virtual Circuit Switching is also known as connection-oriented switching.
  • In the case of Virtual circuit switching, a preplanned route is established before the messages are sent.
  • Call request and call accept packets are used to establish the connection between sender and receiver.
  • In this case, the path is fixed for the duration of a logical connection.

Advantages Of Packet Switching:

  • Cost-effective: In packet switching technique, switching devices do not require massive secondary storage to store the packets, so cost is minimized to some extent. Therefore, we can say that the packet switching technique is a cost-effective technique.
  • Reliable: If any node is busy, then the packets can be rerouted. This ensures that the Packet Switching technique provides reliable communication.
  • Efficient: Packet Switching is an efficient technique. It does not require any established path prior to the transmission, and many users can use the same communication channel simultaneously, hence makes use of available bandwidth very efficiently.

Disadvantages Of Packet Switching:

  • Packet Switching technique cannot be implemented in those applications that require low delay and high-quality services.
  • The protocols used in the packet switching technique are very complex and require a high implementation cost.
  • If the network is overloaded or corrupted, it requires retransmission of lost packets. This can also lead to the loss of critical information if errors are not recovered.

What is computer security?

Computer security basically is the protection of computer systems and information from harm, theft, and unauthorized use. It is the process ...