Wednesday, June 5, 2024

Virtual Memory in Operating System


Virtual Memory is a storage allocation scheme in which secondary memory can be addressed as though it were part of main memory. The addresses a program may use to reference memory are distinguished from the addresses the memory system uses to identify physical storage sites, and program generated addresses are translated automatically to the corresponding machine addresses.
The size of virtual storage is limited by the addressing scheme of the computer system and by the amount of secondary memory available, not by the actual number of main storage locations.

It is a technique that is implemented using both hardware and software. It maps memory addresses used by a program, called virtual addresses, into physical addresses in computer memory.

  1. All memory references within a process are logical addresses that are dynamically translated into physical addresses at run time. This means that a process can be swapped in and out of main memory such that it occupies different places in main memory at different times during the course of execution.
  2. A process may be broken into a number of pieces, and these pieces need not be contiguously located in main memory during execution. The combination of dynamic run-time address translation and the use of a page or segment table permits this.

If these characteristics are present, it is not necessary for all the pages or segments of a process to be in main memory during execution; a page is loaded into memory only when it is required. Virtual memory is implemented using Demand Paging or Demand Segmentation.

Demand Paging:
The process of loading a page into memory on demand (i.e., whenever a page fault occurs) is known as demand paging.
The process includes the following steps:

  1. If the CPU tries to refer to a page that is not currently in main memory, it generates an interrupt indicating a memory access fault.
  2. The OS puts the interrupted process in a blocked state. For execution to proceed, the OS must bring the required page into memory.
  3. The OS locates the required page on the backing store (secondary memory).
  4. The required page is brought into a free frame in physical memory. If no frame is free, a page replacement algorithm decides which resident page to replace.
  5. The page table is updated accordingly.
  6. A signal is sent to the CPU to continue program execution, and the process is placed back into the ready state.

Hence, whenever a page fault occurs, the operating system follows these steps and the required page is brought into memory.
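
To make the flow concrete, here is a small, self-contained simulation of demand paging in C. It is only an illustrative sketch: the page-table arrays, the FIFO replacement choice, and the reference string are assumptions made for this example, not the OS's actual implementation.

    #include <stdio.h>

    /* A minimal user-space simulation of demand paging with FIFO replacement. */
    #define NUM_PAGES  8
    #define NUM_FRAMES 4

    int page_to_frame[NUM_PAGES];   /* -1 means "not in memory" -> page fault  */
    int frame_to_page[NUM_FRAMES];  /* which page currently occupies the frame */
    int next_victim = 0;            /* simple FIFO replacement pointer         */

    int access_page(int page) {
        if (page_to_frame[page] != -1)
            return page_to_frame[page];              /* page hit                 */

        printf("page fault on page %d\n", page);     /* step 1: fault detected   */
        int frame = next_victim;                     /* step 4: FIFO replacement */
        next_victim = (next_victim + 1) % NUM_FRAMES;
        if (frame_to_page[frame] != -1)
            page_to_frame[frame_to_page[frame]] = -1; /* evict the old page      */
        /* steps 3-4: "load" the page from the backing store (simulated)         */
        frame_to_page[frame] = page;
        page_to_frame[page] = frame;                 /* step 5: update page table */
        return frame;                                /* step 6: resume execution  */
    }

    int main(void) {
        for (int i = 0; i < NUM_PAGES; i++)  page_to_frame[i] = -1;
        for (int i = 0; i < NUM_FRAMES; i++) frame_to_page[i] = -1;

        int refs[] = {0, 1, 2, 3, 0, 4, 1, 5};       /* a sample reference string */
        for (int i = 0; i < 8; i++)
            printf("page %d -> frame %d\n", refs[i], access_page(refs[i]));
        return 0;
    }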

Advantages :

  • More processes may be maintained in the main memory: Because we are going to load only some of the pages of any particular process, there is room for more processes. This leads to more efficient utilization of the processor because it is more likely that at least one of the more numerous processes will be in the ready state at any particular time.
  • A process may be larger than all of main memory: One of the most fundamental restrictions in programming is lifted. A process larger than the main memory can be executed because of demand paging. The OS itself loads pages of a process in main memory as required.
  • It allows greater multiprogramming levels by using less of the available (primary) memory for each process.

Page Fault Service Time :
The time taken to service a page fault is called the page fault service time. It includes the time taken to perform all six steps listed above.
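
A common way to quantify the cost of page faults is the effective access time, EAT = (1 - p) x memory access time + p x page fault service time, where p is the page-fault rate. The small C program below evaluates this formula; the 200 ns access time, 8 ms service time, and fault rate are illustrative assumptions, not values from the text.

    #include <stdio.h>

    /* Effective access time with demand paging:
     *   EAT = (1 - p) * memory_access_time + p * page_fault_service_time
     * where p is the page-fault rate. The numbers are example values only. */
    int main(void) {
        double mem_access   = 200e-9;   /* 200 ns ordinary memory access     */
        double service_time = 8e-3;     /* 8 ms page-fault service time      */
        double p            = 1e-6;     /* one fault per million references  */

        double eat = (1.0 - p) * mem_access + p * service_time;
        printf("effective access time = %.1f ns\n", eat * 1e9);  /* ~208 ns */
        return 0;
    }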

Swapping:

Swapping a process out means removing all of its pages from memory, or marking them so that they will be removed by the normal page replacement process. Suspending a process ensures that it is not runnable while it is swapped out. At some later time, the system swaps the process back from secondary storage to main memory. When a process spends more time swapping pages in and out than executing, the situation is called thrashing.

Causes of Thrashing :

  1. High degree of multiprogramming: If the number of processes in memory keeps increasing, then the number of frames allocated to each process decreases, so fewer frames are available to each process. Because of this, page faults occur more frequently, more CPU time is wasted just swapping pages in and out, and CPU utilization keeps decreasing.

    For example:
    Let free frames = 400
    Case 1: Number of processes = 100
    Then each process gets 4 frames.

    Case 2: Number of processes = 400
    Each process gets 1 frame.
    Case 2 is a condition of thrashing: as the number of processes increases, the frames per process decrease, and CPU time is consumed just swapping pages.

  2. Lack of frames: If a process has too few frames, fewer of its pages can reside in memory, so more frequent swapping in and out is required. This may lead to thrashing. Hence, a sufficient number of frames must be allocated to each process in order to prevent thrashing.

Recovery from Thrashing:

  • Do not allow the system to go into thrashing by instructing the long-term scheduler not to bring processes into memory beyond the threshold.
  • If the system is already thrashing, instruct the medium-term scheduler to suspend some of the processes so that the system can recover from thrashing.

 

Monday, June 3, 2024

Memory Management


Memory Management is the process of controlling and coordinating computer memory, assigning portions known as blocks to various running programs to optimize the overall performance of the system.

It is the most important function of an operating system that manages primary memory. It helps processes move back and forth between main memory and the disk during execution. It helps the OS keep track of every memory location, regardless of whether it is allocated to some process or free.

Why Use Memory Management?

Here are the reasons for using memory management:

  • It allows you to check how much memory needs to be allocated to processes and decides which process should get memory at what time.

  • It tracks whenever memory gets freed or unallocated and updates the status accordingly.

  • It allocates the space to application routines.

  • It also makes sure that these applications do not interfere with each other.

  • Helps protect different processes from each other

  • It places the programs in memory so that memory is utilized to its full extent.

Memory Management Techniques

Here are the most crucial memory management techniques:

Single Contiguous Allocation

It is the simplest memory management technique. In this method, all of the computer's memory, except for a small portion reserved for the OS, is available to a single application. For example, the MS-DOS operating system allocates memory in this way. An embedded system typically also runs a single application.

Partitioned Allocation

It divides primary memory into various memory partitions, which are mostly contiguous areas of memory. Every partition stores all the information for a specific task or job. This method consists of allotting a partition to a job when it starts and deallocating it when the job ends.

Paged Memory Management

This method divides the computer's main memory into fixed-size units known as page frames. A hardware memory management unit maps pages into frames, and memory is allocated on a per-page basis.

Segmented Memory Management

Segmented memory is the only memory management method that does not provide the user's program with a linear and contiguous address space.

Segments need hardware support in the form of a segment table. It contains the physical address of the segment in memory, its size, and other data such as access protection bits and status.

What is Swapping?

Swapping is a method in which a process is swapped temporarily from main memory to a backing store and later brought back into memory to continue execution.

The backing store is a hard disk or some other secondary storage device that should be big enough to accommodate copies of all memory images for all users. It must also be capable of offering direct access to these memory images.

Benefits of Swapping

Here are the major benefits of swapping:

  • It offers a higher degree of multiprogramming.

  • Allows dynamic relocation. For example, if address binding is performed at execution time, then a process can be swapped back into a different location. With compile-time or load-time binding, the process must be swapped back into the same location.

  • It helps to get better utilization of memory.

  • It minimizes wastage of CPU time, and it can easily be applied to a priority-based scheduling method to improve its performance.

What is Memory allocation?

Memory allocation is a process by which computer programs are assigned memory or space.

Here, main memory is divided into two types of partitions

  1. Low Memory - Operating system resides in this type of memory.
  2. High Memory- User processes are held in high memory.

Partition Allocation

Memory is divided into different blocks or partitions. Each process is allocated memory according to its requirement. Partition allocation is an ideal method to avoid internal fragmentation.

Below are the various partition allocation schemes (a minimal first-fit sketch follows the list):

  • First Fit: The first free partition from the beginning of main memory that is large enough is allocated.
  • Best Fit: It allocates the process to the smallest free partition that is large enough.
  • Worst Fit: It allocates the process to the largest free partition that is large enough.
  • Next Fit: It is similar to First Fit, but the search for a sufficient partition starts from the point of the last allocation.
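
As a concrete illustration, here is a minimal first-fit sketch in C. The partition table, partition sizes, and request sizes are hypothetical example values, and the comments note how the other fits would differ.

    #include <stdio.h>

    /* Minimal first-fit partition allocation: scan the partitions from the
     * start of memory and take the first free one that is large enough.
     * (Best fit would pick the smallest sufficient partition, worst fit the
     * largest, and next fit would resume scanning from the last allocation.) */
    #define NUM_PARTITIONS 5

    int partition_size[NUM_PARTITIONS] = {100, 500, 200, 300, 600};  /* in KB */
    int partition_free[NUM_PARTITIONS] = {1, 1, 1, 1, 1};

    int first_fit(int request_kb) {
        for (int i = 0; i < NUM_PARTITIONS; i++) {
            if (partition_free[i] && partition_size[i] >= request_kb) {
                partition_free[i] = 0;      /* mark the partition as allocated */
                return i;                   /* return the chosen partition     */
            }
        }
        return -1;                          /* no free partition large enough  */
    }

    int main(void) {
        int requests[] = {212, 417, 112, 426};   /* example process sizes in KB */
        for (int i = 0; i < 4; i++) {
            int p = first_fit(requests[i]);
            if (p >= 0)
                printf("%d KB -> partition %d (%d KB)\n",
                       requests[i], p, partition_size[p]);
            else
                printf("%d KB -> must wait (no fit)\n", requests[i]);
        }
        return 0;
    }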

What is Paging?

Paging is a storage mechanism that allows the OS to retrieve processes from secondary storage into main memory in the form of pages. In the paging method, main memory is divided into small fixed-size blocks of physical memory called frames. The size of a frame is kept the same as that of a page to maximize utilization of main memory and to avoid external fragmentation. Paging is used for faster access to data, and it is a logical concept.
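
At the heart of paging is splitting a logical address into a page number and an offset and looking the page number up in the page table. The sketch below assumes a 4 KB page size and a tiny hard-coded page table purely for illustration.

    #include <stdio.h>
    #include <stdint.h>

    /* Virtual-to-physical address translation with 4 KB pages:
     *   page number = virtual address / page size
     *   offset      = virtual address % page size
     *   physical    = frame(page) * page size + offset
     * The page table here is a tiny hard-coded example. */
    #define PAGE_SIZE 4096u

    int page_table[4] = {5, 2, 7, 0};   /* page i lives in frame page_table[i] */

    uint32_t translate(uint32_t vaddr) {
        uint32_t page   = vaddr / PAGE_SIZE;
        uint32_t offset = vaddr % PAGE_SIZE;
        uint32_t frame  = (uint32_t)page_table[page];
        return frame * PAGE_SIZE + offset;
    }

    int main(void) {
        uint32_t vaddr = 0x1234;        /* page 1, offset 0x234 */
        printf("virtual 0x%x -> physical 0x%x\n",
               (unsigned)vaddr, (unsigned)translate(vaddr));
        return 0;
    }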

What is Fragmentation?

As processes are loaded into and removed from memory, the free memory space is broken into pieces that are too small to be used by other processes.

After some time, processes cannot be allocated to these memory blocks because the blocks are too small, and the blocks remain unused; this is called fragmentation. The problem occurs in a dynamic memory allocation system when the free blocks are so small that they cannot satisfy any request.

The two types of fragmentation are:

  1. External fragmentation
  2. Internal fragmentation
  • External fragmentation can be reduced by rearranging memory contents to place all free memory together in a single block (compaction).
  • Internal fragmentation can be reduced by assigning the smallest partition that is still large enough to hold the entire process.

Summary:

  • Memory management is the process of controlling and coordinating computer memory, assigning portions called blocks to various running programs to optimize the overall performance of the system.
  • It allows you to check how much memory needs to be allocated to processes and decides which process should get memory at what time.
  • In Single Contiguous Allocation, all of the computer's memory, except a small portion reserved for the OS, is available to one application.
  • The Partitioned Allocation method divides primary memory into various memory partitions, which are mostly contiguous areas of memory.
  • Paged Memory Management method divides the computer's main memory into fixed-size units known as page frames
  • Segmented memory is the only memory management method that does not provide the user's program with a linear and contiguous address space.
  • Swapping is a method in which a process is swapped temporarily from main memory to a backing store and later brought back into memory to continue execution.
  • Memory allocation is a process by which computer programs are assigned memory or space.
  • Paging is a storage mechanism that allows OS to retrieve processes from the secondary storage into the main memory in the form of pages.
  • Fragmentation refers to the condition in which free memory is broken into small pieces that are scattered and too small to be used.
  • The segmentation method works much like paging; the only difference between the two is that segments are of variable length, whereas in the paging method pages are always of fixed size.
  • Dynamic loading is a technique in which a routine of a program is not loaded until the program calls it.
  • Linking is a method that helps the OS collect and merge various modules of code and data into a single executable file.

Wednesday, May 22, 2024

File System


A file is a collection of correlated information which is recorded on secondary or non-volatile storage such as magnetic disks, optical disks, and tapes. It serves as a medium through which a program receives input and delivers output.

In general, a file is a sequence of bits, bytes, or records whose meaning is defined by the file creator and user. Every file has a logical location at which it is stored and from which it is retrieved.

Objectives of a File Management System

Here are the main objectives of the file management system:

  • It provides I/O support for a variety of storage device types.

  • Minimizes the chances of lost or destroyed data

  • Helps the OS provide standardized I/O interface routines for user processes.

  • It provides I/O support for multiple users in a multi-user system environment.

Properties of a File System

Here are the important properties of a file system:

  • Files are stored on disk or other storage and do not disappear when a user logs off.

  • Files have names and are associated with access permission that permits controlled sharing.

  • Files can be arranged into more complex structures to reflect the relationships between them.

File structure

A file structure needs to be in a predefined format that the operating system understands. Each file has an exclusively defined structure based on its type.

Three types of file structures in an OS:

  • A text file: It is a series of characters that is organized in lines.

  • An object file: It is a series of bytes that is organized into blocks.

  • A source file: It is a series of functions and processes.

File Attributes

A file has a name and data. Moreover, it also stores meta information like file creation date and time, current size, last modified date, etc. All this information is called the attributes of a file system.

Here are some important file attributes used in an OS:

  • Name: It is the only information stored in a human-readable form.

  • Identifier: Every file is identified by a unique tag number within a file system known as an identifier.

  • Location: Points to file location on device.

  • Type: This attribute is required for systems that support various types of files.

  • Size: Attribute used to display the current file size.

  • Protection: This attribute assigns and controls the access rights of reading, writing, and executing the file.

  • Time, date and security: It is used for protection, security, and also used for monitoring

File Type

It refers to the ability of the operating system to differentiate various types of files, such as text files, binary files, and source files. Operating systems like MS-DOS and UNIX have the following types of files:

Character Special File

It is a hardware file that reads or writes data character by character, such as a mouse or a printer.

Ordinary files

  • These types of files store user information.

  • It may be text, executable programs, and databases.

  • It allows the user to perform operations like add, delete, and modify.

Directory Files

  • A directory contains files and other related information about those files. It is basically a folder used to hold and organize multiple files.

Special Files

  • These files are also called device files. They represent physical devices such as printers, disks, networks, flash drives, etc.

Functions of File

  • Create a file: find space on disk and make an entry in the directory.

  • Write to a file: requires positioning within the file.

  • Read from a file: involves positioning within the file.

  • Delete a file: remove the directory entry and regain the disk space.

  • Reposition: move the read/write position.

Commonly used terms in File systems

Field:

This element stores a single value, which can be of static or variable length.

Database:

A collection of related data is called a database. Relationships among the elements of data are explicit.

File:

A file is a collection of similar records which is treated as a single entity.

Record:

A record is a complex data type that allows the programmer to create a new data type with the desired column structure. It groups one or more columns to form a new data type. These columns have their own names and data types.

File Access Methods

File access is the process that determines the way files are accessed and read into memory. Generally, a single access method is supported by an operating system, though some operating systems support multiple access methods.

Three file access methods are:

  • Sequential access

  • Direct random access

  • Indexed sequential access

Sequential Access

In this type of file access method, records are accessed in a certain pre-defined sequence. In the sequential access method, information stored in the file is also processed one by one. Most compilers access files using this access method.

Random Access

The random access method is also called direct random access. This method allows a record to be accessed directly: each record has its own address, which can be used directly for reading and writing.

Indexed Sequential Access

This accessing method is built on top of simple sequential access. In this method, an index is built for every file, with pointers to different memory blocks. The index is searched sequentially, and its pointer is then used to access the file directly. Multiple levels of indexing can be used for greater efficiency of access, which also reduces the time needed to access a single record.
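
In C, sequential access corresponds to reading records one after another from the start of a file, while direct (random) access uses fseek to jump straight to a record's offset. The file name and record size below are arbitrary assumptions for the sketch.

    #include <stdio.h>

    /* Sequential vs. direct (random) access to fixed-size records.
     * "records.dat" and the 64-byte record size are arbitrary examples. */
    #define RECORD_SIZE 64

    int main(void) {
        char record[RECORD_SIZE];
        FILE *fp = fopen("records.dat", "rb");
        if (!fp) return 1;

        /* Sequential access: read records one after another, in order. */
        while (fread(record, RECORD_SIZE, 1, fp) == 1) {
            /* process the next record */
        }

        /* Direct access: jump straight to record number 10. */
        fseek(fp, 10L * RECORD_SIZE, SEEK_SET);
        if (fread(record, RECORD_SIZE, 1, fp) == 1) {
            /* process record 10 directly */
        }

        fclose(fp);
        return 0;
    }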

File Directories

A single directory may contain one or more files. It can also have sub-directories inside the main directory. Directories maintain information about files. In the Windows OS, directories are called folders.

Following is the information which is maintained in a directory:

  • Name: The name displayed to the user.

  • Type: Type of the directory.

  • Position: Current next-read/write pointers.

  • Location: Location on the device where the file header is stored.

  • Size: Number of bytes, blocks, or words in the file.

  • Protection: Access control on read/write/execute/delete.

  • Usage: Time of creation, access, modification

File types: name, extension, and function

File Type | Usual extension | Function
Executable | exe, com, bin, or none | Ready-to-run machine-language program
Object | obj, o | Compiled machine language, not linked
Source code | c, p, pas, f77, asm, a | Source code in various languages
Batch | bat, sh | Series of commands to be executed
Text | txt, doc | Textual data, documents
Word processor | doc, docx, tex, rtf, etc. | Various word-processor formats
Library | lib, h | Libraries of routines
Archive | arc, zip, tar | Related files grouped into one file, sometimes compressed


Summary:

  • A file is a collection of correlated information which is recorded on secondary or non-volatile storage like magnetic disks, optical disks, and tapes.
  • It provides I/O support for a variety of storage device types.
  • Files are stored on disk or other storage and do not disappear when a user logs off.
  • A file structure needs to be in a predefined format that the operating system understands.
  • File type refers to the ability of the operating system to differentiate different types of files, such as text files, binary files, and source files.
  • Create a file: find space on disk and make an entry in the directory.
  • Indexed Sequential Access method is based on simple sequential access
  • In Sequential Access method records are accessed in a certain pre-defined sequence
  • The random access method is also called direct random access
  • Three types of space allocation methods are:
    • Linked Allocation
    • Indexed Allocation
    • Contiguous Allocation
  • Information about files is maintained by Directories

Tuesday, May 21, 2024

Threads


A thread is an execution unit which consists of its own program counter, a stack, and a set of registers. Threads are also known as lightweight processes. Threads are a popular way to improve application performance through parallelism. The CPU switches rapidly back and forth among the threads, giving the illusion that the threads are running in parallel.

As each thread has its own independent set of resources for execution, multiple tasks can be executed in parallel by increasing the number of threads.

Types of Thread

There are two types of threads:

  1. User Threads
  2. Kernel Threads

User threads are implemented above the kernel, without kernel support. These are the threads that application programmers use in their programs.

Kernel threads are supported within the kernel of the OS itself. All modern OSs support kernel level threads, allowing the kernel to perform multiple simultaneous tasks and/or to service multiple kernel system calls simultaneously.

Multithreading Models

The user threads must be mapped to kernel threads, by one of the following strategies:

  • Many to One Model

  • One to One Model

  • Many to Many Model

Many to One Model

  • In the many to one model, many user-level threads are all mapped onto a single kernel thread.

  • Thread management is handled by the thread library in user space, which is efficient in nature.

One to One Model

  • The one to one model creates a separate kernel thread to handle each and every user thread.

  • Most implementations of this model place a limit on how many threads can be created.

  • Linux and Windows from 95 to XP implement the one-to-one model for threads.

Many to Many Model

  • The many to many model multiplexes any number of user threads onto an equal or smaller number of kernel threads, combining the best features of the one-to-one and many-to-one models.

  • Users can create any number of threads.

  • Blocking kernel system calls do not block the entire process.

  • Processes can be split across multiple processors.

What are Thread Libraries?

Thread libraries provide programmers with an API for the creation and management of threads.

Thread libraries may be implemented either in user space or in kernel space. The user-space approach involves API functions implemented solely within user space, with no kernel support. The kernel-space approach involves system calls and requires a kernel with thread library support.

Three types of thread libraries

  1. POSIX Pthreads: may be provided as either a user-level or kernel-level library, as an extension to the POSIX standard (a minimal creation example follows this list).

  2. Win32 threads: provided as a kernel-level library on Windows systems.

  3. Java threads: since Java generally runs on a Java Virtual Machine, the implementation of threads is based upon whatever OS and hardware the JVM is running on, i.e. either Pthreads or Win32 threads depending on the system.
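
As a concrete taste of the Pthreads API mentioned above, the sketch below creates a few threads and waits for them to finish. The worker function and thread count are arbitrary; compile with the -pthread flag.

    #include <pthread.h>
    #include <stdio.h>

    /* Minimal Pthreads example: create a few threads and join them. */
    #define NUM_THREADS 4

    void *worker(void *arg) {
        long id = (long)arg;
        printf("thread %ld running\n", id);
        return NULL;
    }

    int main(void) {
        pthread_t tid[NUM_THREADS];

        for (long i = 0; i < NUM_THREADS; i++)
            pthread_create(&tid[i], NULL, worker, (void *)i);   /* spawn */

        for (int i = 0; i < NUM_THREADS; i++)
            pthread_join(tid[i], NULL);                         /* wait  */

        return 0;
    }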

Benefits of Multithreading

  • Responsiveness

  • Resource sharing, hence allowing better utilization of resources.

  • Economy: creating and managing threads becomes easier.

  • Scalability: one thread runs on one CPU, but in a multithreaded process, threads can be distributed over a number of processors to scale.

  • Smooth context switching: context switching refers to the procedure followed by the CPU to change from one task to another.

Multithreading Issues

Below we have mentioned a few issues related to multithreading. As the old saying goes, all good things come at a price.

Thread Cancellation

Thread cancellation means terminating a thread before it has finished its work. There are two approaches: asynchronous cancellation, which terminates the target thread immediately, and deferred cancellation, which allows the target thread to periodically check whether it should be cancelled.
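
In Pthreads, deferred cancellation is the default mode: pthread_cancel only marks the target thread, and the request takes effect at a cancellation point such as an explicit pthread_testcancel call. A minimal sketch, with an arbitrary worker loop:

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Deferred cancellation: the target thread is only cancelled when it
     * reaches a cancellation point such as pthread_testcancel(). */
    void *worker(void *arg) {
        (void)arg;
        for (;;) {
            /* ... do a chunk of work ... */
            pthread_testcancel();      /* explicit cancellation point */
        }
        return NULL;
    }

    int main(void) {
        pthread_t tid;
        pthread_create(&tid, NULL, worker, NULL);
        sleep(1);                      /* let the worker run briefly      */
        pthread_cancel(tid);           /* request (deferred) cancellation */
        pthread_join(tid, NULL);       /* wait until it actually exits    */
        printf("worker cancelled\n");
        return 0;
    }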

Signal Handling

Signals are used in UNIX systems to notify a process that a particular event has occurred. Now, when a multithreaded process receives a signal, to which thread should it be delivered? It can be delivered to all threads or to a single thread.
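
One common Pthreads pattern is to block the signal in every thread and dedicate a single thread to receive it with sigwait. The sketch below uses SIGINT as an example signal:

    #include <pthread.h>
    #include <signal.h>
    #include <stdio.h>

    /* Block SIGINT in all threads and let one dedicated thread receive it. */
    void *signal_thread(void *arg) {
        sigset_t *set = arg;
        int sig;
        sigwait(set, &sig);            /* wait here until SIGINT arrives */
        printf("received signal %d\n", sig);
        return NULL;
    }

    int main(void) {
        sigset_t set;
        pthread_t tid;

        sigemptyset(&set);
        sigaddset(&set, SIGINT);
        pthread_sigmask(SIG_BLOCK, &set, NULL);  /* mask inherited by new threads */

        pthread_create(&tid, NULL, signal_thread, &set);
        pthread_join(tid, NULL);       /* press Ctrl-C to deliver SIGINT */
        return 0;
    }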

fork() System Call

fork() is a system call executed in the kernel through which a process creates a copy of itself. The problem in a multithreaded process is: if one thread forks, will the entire process (all of its threads) be copied or not?
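
For reference, here is a plain fork() example in C; note that POSIX specifies that in a multithreaded program the child contains only the single thread that called fork().

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* fork() creates a copy of the calling process. In a multithreaded
     * program, only the thread that called fork() exists in the child. */
    int main(void) {
        pid_t pid = fork();

        if (pid == 0) {
            printf("child:  pid=%d\n", getpid());   /* runs in the copy       */
        } else if (pid > 0) {
            waitpid(pid, NULL, 0);                  /* parent waits for child */
            printf("parent: pid=%d\n", getpid());
        } else {
            perror("fork");
            return 1;
        }
        return 0;
    }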

Security Issues

Yes, there can be security issues because of extensive sharing of resources between multiple threads.

There are many other issues that you might face in a multithreaded process, but there are appropriate solutions available for them. Pointing out some issues here was just to study both sides of the coin.

 

 

 

Wednesday, May 15, 2024

Operating system security


Common measures used to keep an operating system secure include:

  • Installing updated antivirus engines and software

  • Scrutinizing all incoming and outgoing network traffic through a firewall

  • Creating secure accounts with required privileges only (i.e., user management)

We're going to discuss the following topics in this chapter:

  • Authentication

  • One Time passwords

  • Program Threats

  • System Threats

  • Computer Security Classifications

Authentication

Authentication refers to identifying each user of the system and associating the executing programs with those users. It is the responsibility of the operating system to create a protection system which ensures that a user who is running a particular program is authentic. Operating systems generally identify/authenticate users in the following three ways:

  • Username / Password − The user needs to enter a registered username and password to log in to the system.

  • User card/key − The user needs to punch a card into a card slot, or enter a key generated by a key generator, to log in to the system.

  • User attribute (fingerprint / eye retina pattern / signature) − The user needs to present his/her attribute via a designated input device to log in to the system.

One Time passwords

One-time passwords provide additional security along with normal authentication. In a one-time password system, a unique password is required every time a user tries to log in to the system. Once a one-time password is used, it cannot be used again. One-time passwords are implemented in various ways.

  • Random numbers − Users are provided with cards that have numbers printed alongside corresponding letters. The system asks for the numbers corresponding to a few randomly chosen letters.

  • Secret key − Users are provided with a hardware device which can create a secret id mapped to the user id. The system asks for this secret id, which must be generated anew every time prior to login.

  • Network password − Some commercial applications send one-time passwords to the user's registered mobile number or email, which must be entered prior to login.

Program Threats

An operating system's processes and kernel perform their designated tasks as instructed. If a user program makes these processes perform malicious tasks, this is known as a program threat. One common example of a program threat is a program installed on a computer which can store user credentials and send them over the network to some hacker. Following is a list of some well-known program threats.

  • Trojan Horse − Such a program traps user login credentials and stores them to send to a malicious user, who can later log in to the computer and access system resources.

  • Trap Door − If a program which is designed to work as required has a security hole in its code and performs illegal actions without the knowledge of the user, then it is said to have a trap door.

  • Logic Bomb − A logic bomb is a program that misbehaves only when certain conditions are met; otherwise it works as a genuine program. This makes it harder to detect.

  • Virus − A virus, as the name suggests, can replicate itself on a computer system. Viruses are highly dangerous and can modify or delete user files and crash systems. A virus is generally a small piece of code embedded in a program. As the user accesses the program, the virus starts embedding itself in other files and programs and can make the system unusable for the user.

System Threats

System threats refer to the misuse of system services and network connections to put the user in trouble. System threats can be used to launch program threats across a complete network; this is called a program attack. System threats create an environment in which operating system resources and user files are misused. Following is a list of some well-known system threats.

  • Worm − A worm is a process which can choke down system performance by using system resources to extreme levels. A worm process generates multiple copies of itself, where each copy uses system resources and prevents all other processes from getting the resources they require. Worm processes can even shut down an entire network.

  • Port Scanning − Port scanning is a mechanism or means by which a hacker can detect system vulnerabilities in order to attack the system.

  • Denial of Service − Denial of service attacks normally prevent the user from making legitimate use of the system. For example, a user may not be able to use the internet if a denial of service attack targets the browser's content settings.

Computer Security Classifications

As per the U.S. Department of Defense Trusted Computer System Evaluation Criteria, there are four security classifications in computer systems: A, B, C, and D. This is a widely used specification for determining and modelling the security of systems and of security solutions. Following is a brief description of each classification.

  1. Type A − Highest level. Uses formal design specifications and verification techniques. Grants a high degree of assurance of process security.

  2. Type B − Provides a mandatory protection system. Has all the properties of a class C2 system. Attaches a sensitivity label to each object. It is of three types:

    • B1 − Maintains the security label of each object in the system. The label is used for making access-control decisions.

    • B2 − Extends the sensitivity labels to each system resource, such as storage objects, and supports covert channels and auditing of events.

    • B3 − Allows creating lists or user groups for access control, to grant access to or revoke access from a given named object.

  3. Type C − Provides protection and user accountability using audit capabilities. It is of two types:

    • C1 − Incorporates controls so that users can protect their private information and keep other users from accidentally reading or deleting their data. UNIX versions are mostly class C1.

    • C2 − Adds individual-level access control to the capabilities of a C1 level system.

  4. Type D − Lowest level. Minimum protection. MS-DOS and Windows 3.1 fall in this category.

Input/Output OS Software


Basically, input/output software is organized into the following four layers:

  • Interrupt handlers

  • Device drivers

  • Device-independent input/output software

  • User-space input/output software

In every input/output software stack, each of the four layers given above has a well-defined function to perform and a well-defined interface to the adjacent layers.

Now let's briefly describe all four input/output software layers listed above.

Interrupt Handlers

Whenever the interrupt occurs, then the interrupt procedure does whatever it has to in order to handle the interrupt.

Device Drivers

Basically, a device driver is device-specific code for controlling an input/output device that is attached to the computer system.

Device-Independent Input/Output Software

Some of the input/output software is device-specific, while other parts of it are device-independent.

The exact boundary between the device-independent software and the drivers is device-dependent, because some functions that could be done in a device-independent way are sometimes done in the drivers instead, for efficiency or other reasons.

Here is a list of some functions that are performed in the device-independent software:

  • Uniform interfacing for device drivers

  • Buffering

  • Error reporting

  • Allocating and releasing dedicated devices

  • Providing a device-independent block size

User-Space Input/Output Software

Generally, most of the input/output software is within the operating system (OS), and a small part of it consists of libraries that are linked with user programs, and even whole programs running outside the kernel.

 

Goals of the I/O Software

  • A key concept in the design of I/O software is known as device independence. It means that I/O devices should be accessible to programs without specifying the device in advance.

  • Uniform Naming: the name of a file or device should simply be a string or an integer and not depend on the device in any way. In UNIX, all disks can be integrated into the file-system hierarchy in arbitrary ways, so the user need not be aware of which name corresponds to which device.

  • Error Handling: If the controller discovers a read error, it should try to correct the error itself if it can. If it cannot, then the device driver should handle it, perhaps by just trying to read the block again. In many cases, error recovery can be done transparently at a low level without the upper levels even knowing about the error.

  • Synchronous (blocking) and Asynchronous (interrupt-driven) transfers: Most physical I/O is asynchronous, however, some very high-performance applications need to control all the details of the I/O, so some operating systems make asynchronous I/O available to them.

  • Buffering: Often data that come off a device cannot be stored directly in their final destination.

  • Sharable and Dedicated devices: Some I/O devices, such as disks, can be used by many users at the same time. No problems are caused by multiple users having open files on the same disk at the same time. Other devices, such as printers, have to be dedicated to a single user until that user is finished. Then another user can have the printer. Introducing dedicated (unshared) devices also introduces a variety of problems, such as deadlocks. Again, the operating system must be able to handle both shared and dedicated devices in a way that avoids problems.

 

 

 

Monday, May 13, 2024

Operating System - I/O Hardware


Overview

Computers operate on many kinds of devices. General types include storage devices (disks, tapes), transmission devices (network cards, modems), and human-interface devices (screen, keyboard, mouse). Other devices are more specialized. A device communicates with a computer system by sending signals over a cable or even through the air. The device communicates with the machine via a connection point termed a port (for example, a serial port). If one or more devices use a common set of wires, the connection is called a bus. In other terms, a bus is a set of wires and a rigidly defined protocol that specifies a set of messages that can be sent on the wires.

Daisy chain

When device A has a cable that plugs into device B, and device B has a cable that plugs into device C, and device C plugs into a port on the computer, this arrangement is called a daisy chain. It usually operates as a bus.

Controller

A controller is a collection of electronics that can operate a port, a bus, or a device. A serial-port controller is an example of a simple device controller. This is a single chip in the computer that controls the signals on the wires of a serial port. The SCSI bus controller is often implemented as a separate circuit board (a host adapter) that plugs into the computer. It contains a processor, microcode, and some private memory to enable it to process the SCSI protocol messages. Some devices have their own built-in controllers.

I/O port

An I/O port typically consists of four registers, called the status, control, data-in, and data-out registers.

Status Register

The status register contains bits that can be read by the host. These bits indicate states such as whether the current command has completed, whether a byte is available to be read from the data-in register, and whether there has been a device error.

Control register

The control register can be written by the host to start a command or to change the mode of a device. For instance, a certain bit in the control register of a serial port chooses between full-duplex and half-duplex communication, another enables parity checking, a third bit sets the word length to 7 or 8 bits, and other bits select one of the speeds supported by the serial port.

Data-in register

The data-in register is read by the host to get input.

Data-out register

The data-out register is written by the host to send output.

Polling

Polling is a process by which a host waits for a controller response. It is a looping process: the host reads the status register over and over until the busy bit of the status register becomes clear. The controller sets the busy bit when it is busy working on a command and clears the busy bit when it is ready to accept the next command. The host signals its wish via the command-ready bit in the command register: the host sets the command-ready bit when a command is available for the controller to execute.

In the following example, the host writes output through a port, coordinating with the controller by handshaking (a code sketch follows the steps):

  1. The host repeatedly reads the busy bit until that bit becomes clear.

  2. The host sets the write bit in the command register and writes a byte into the data-out register.

  3. The host sets the command-ready bit.

  4. When the controller notices that the command-ready bit is set, it sets the busy bit.

  5. The controller reads the command register and sees the write command.

  6. It reads the data-out register to get the byte, and does the I/O to the device.

  7. The controller clears the command-ready bit, clears the error bit in the status register to indicate that the device I/O succeeded, and clears the busy bit to indicate that it is finished.
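
The busy-wait handshake above can be sketched in C as follows. The register addresses and bit masks are purely hypothetical, since the real values depend on the device and the platform.

    #include <stdint.h>

    /* Sketch of the polling handshake described above. The register
     * addresses and bit masks are hypothetical placeholders. */
    #define STATUS_REG   ((volatile uint8_t *)0x3F8000)  /* hypothetical address */
    #define COMMAND_REG  ((volatile uint8_t *)0x3F8001)
    #define DATA_OUT_REG ((volatile uint8_t *)0x3F8002)

    #define BUSY_BIT          0x01
    #define COMMAND_READY_BIT 0x02
    #define WRITE_BIT         0x04

    void poll_write_byte(uint8_t byte) {
        while (*STATUS_REG & BUSY_BIT)      /* step 1: wait until busy bit clears */
            ;                               /*         (busy-wait / polling loop) */

        *DATA_OUT_REG = byte;               /* step 2: place the byte in data-out */
        *COMMAND_REG |= WRITE_BIT;          /* step 2: set the write command bit  */
        *COMMAND_REG |= COMMAND_READY_BIT;  /* step 3: tell the controller to go  */

        /* Steps 4-7 happen in the controller: it sets busy, performs the I/O,
         * then clears command-ready, error, and busy. */
        while (*STATUS_REG & BUSY_BIT)      /* wait for the operation to finish   */
            ;
    }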

I/O devices

I/O devices can be categorized into the following categories.

Human readable

Human-readable devices are suitable for communicating with the computer user. Examples are printers, video display terminals, keyboards, etc.

Machine readable

Machine Readable devices are suitable for communicating with electronic equipment. Examples are disk and tape drives, sensors, controllers and actuators.

Communication

Communication devices are suitable for communicating with remote devices. Examples are digital line drivers and modems.

The following are the main ways in which I/O devices differ:

Data rate

There may be differences of several orders of magnitude between the data transfer rates.

Application

Different devices have different use in the system

Complexity of Control

A disk is much more complex to control, whereas a printer requires only a simple control interface.

Unit of transfer

Data may be transferred as a stream of bytes or characters or in larger blocks.

Data representation

Different data encoding schemes are used for different devices.

Error Conditions

The nature of errors differs widely from one device to another.

Direct Memory Access (DMA)

Many computers avoid burdening the main CPU with programmed I/O by offloading some of this work to a special-purpose processor. This type of processor is called a Direct Memory Access (DMA) controller. A special control unit is used to transfer a block of data directly between an external device and main memory, without intervention by the processor. This approach is called Direct Memory Access (DMA).

DMA can be used with either polling or interrupt software. DMA is particularly useful for devices like disks, where many bytes of information can be transferred in a single I/O operation. When used with an interrupt, the CPU is notified only after the entire block of data has been transferred. For each byte or word transferred, the DMA controller must provide the memory address and all the bus signals controlling the data transfer. Interaction with a device controller is managed through a device driver.

Handshaking is a process between the DMA controller and the device controller. It is performed over wires using signals called DMA request and DMA acknowledge.

 

 
