Thursday, June 6, 2024

Operating System Properties



The properties of an operating system are:

  1. Batch processing
  2. Multitasking
  3. Multi-programming
  4. Interactivity
  5. Real-Time System
  6. Distributed Environment
  7. Spooling

1. Batch Processing

In batch processing, the operating system first gathers data and programs together into a batch, and then processing starts.

The operating system performs various types of activities related to batch processing:

  • The operating system defines a job, which has a predefined sequence of commands, programs, and data bundled into a single unit.
  • The operating system keeps a number of jobs in memory and executes them one by one according to a scheduling algorithm.
  • Jobs are processed on a first-come, first-served (FCFS) basis, as sketched below.
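
A minimal sketch of FCFS batch execution in Python (the job names and burst times below are made up for illustration): the batched jobs simply run to completion in arrival order, with no user interference in between.

    from collections import deque

    # Hypothetical batch of jobs: (name, burst_time) pairs already grouped by the OS.
    batch = deque([("job1", 5), ("job2", 3), ("job3", 8)])

    clock = 0
    while batch:                      # first-come, first-served: run jobs in arrival order
        name, burst = batch.popleft()
        clock += burst                # the CPU stays with this job until it finishes
        print(f"{name} finished at time {clock}")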

Advantage of Batch Processing

Performance improves because the next job starts as soon as the previous one completes, without any manual intervention.

Disadvantages of Batch Processing

  • A job can get stuck in an infinite loop and hold up the entire batch.
  • Debugging a batch program is difficult.

2. Multitasking

Multitasking is a technique in which the CPU executes several jobs at the same time by switching among them. Switching happens so frequently that the user can interact with each program while it is running.
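
As a rough illustration of this switching, the sketch below hands out one short time slice at a time to each of three hypothetical programs; a real operating system preempts running programs with timer interrupts rather than a Python loop.

    # Remaining work (in time units) for three hypothetical interactive programs.
    jobs = {"editor": 3, "browser": 2, "compiler": 4}
    time_slice = 1

    while jobs:
        for name in list(jobs):          # give each ready program one short slice in turn
            jobs[name] -= time_slice
            print(f"ran {name} for {time_slice} unit")
            if jobs[name] <= 0:          # the program finished; drop it from the ready set
                del jobs[name]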

The operating system performs the following activities in the context of multitasking.

  • The user gives instructions to the operating system or to a program directly and receives an immediate response.

  • The operating system manages multitasking so that multiple operations can be handled at the same time.
  • A multitasking system is also known as a time-sharing system.

  • Multitasking operating systems were developed to provide interactive use of a computer system at a reasonable cost.

3. Multi-programming

Multi-programming is the sharing of the processor when two or more programs reside in memory at the same time. With multi-programming, CPU utilization can be increased. In other words, multi-programming is the capability of an operating system to run more than one program on a single processor. For example, a computer can run Excel and the Firefox browser simultaneously.
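
The gain in CPU utilization is often estimated with the textbook approximation utilization ≈ 1 - p^n, where p is the fraction of time a program spends waiting for I/O and n is the number of programs in memory; the numbers below are purely illustrative.

    # Classic multiprogramming model: CPU utilization ~ 1 - p**n.
    io_wait_fraction = 0.8             # assume each program waits on I/O 80% of the time

    for n in (1, 2, 4, 8):             # degree of multiprogramming
        utilization = 1 - io_wait_fraction ** n
        print(f"{n} program(s) in memory -> CPU utilization ~ {utilization:.0%}")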

Advantages of Multi-programming

  • Efficient CPU utilization.

  • Users get the impression that the CPU is working on multiple programs simultaneously.

Disadvantages of Multi-programming

  • It needs CPU scheduling.

  • Memory management is needed to accommodate different jobs in memory.

4. Interactivity

Interactivity means the user’s ability to interact with a computer system.

The operating system performs various activities related to interactivity.

  • It handles input devices in order to take input from the user, for example, the keyboard.

  • It handles output devices to display output to the user, for example, the monitor.

  • It provides an interface so that the user can interact with the system; a trivial sketch of this loop follows below.
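
A trivial sketch of this interaction: the operating system delivers keyboard input to the program through standard input and displays the program's output on the monitor through standard output.

    # Keyboard input is delivered by the OS through standard input.
    name = input("Enter your name: ")

    # The OS displays the program's output on the monitor through standard output.
    print(f"Hello, {name}!")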

5. Real-Time System

A real-time system can be understood as a dedicated embedded system that must respond to events within a fixed interval of time.

An operating system performs various tasks related to a real-time system.

  • In a real-time system, the operating system reads sensor data and reacts to it.

  • The operating system guarantees that an event is handled within a fixed interval of time so that performance is predictable; a simplified sketch follows below.
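
A simplified sketch of the second point, using an assumed 10 ms deadline and a dummy event handler; a real real-time OS enforces such guarantees through priority-driven scheduling rather than an after-the-fact check.

    import time

    DEADLINE = 0.010                       # assumed deadline: 10 milliseconds

    def handle_sensor_event():
        # Placeholder for reading sensor data and reacting to it.
        time.sleep(0.002)

    start = time.monotonic()
    handle_sensor_event()
    elapsed = time.monotonic() - start

    # A real-time system must guarantee completion within the fixed interval.
    print("deadline met" if elapsed <= DEADLINE else "deadline missed")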

6. Distributed Environment

A distributed environment consists of a set of multiple independent processors or CPUs in a single computer system.

The Operating system performs various activities, such as:

  • The operating system handles communication between the processors; communication is done with the help of communication lines.

  • The operating system shares computation logic among the different physical processors.

  • The processors do not share memory; rather, each processor has its own local memory, as illustrated in the sketch below.
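
A toy illustration of the last two points using Python's multiprocessing module: the two processes have separate address spaces and exchange data only through a pipe, which stands in for a communication line.

    from multiprocessing import Process, Pipe

    def worker(conn):
        # This process has its own local memory; it communicates only over the pipe.
        numbers = conn.recv()              # receive work over the "communication line"
        conn.send(sum(numbers))            # send the partial result back

    if __name__ == "__main__":
        parent_conn, child_conn = Pipe()
        p = Process(target=worker, args=(child_conn,))
        p.start()
        parent_conn.send([1, 2, 3, 4])     # hand part of the computation to the other CPU
        print("partial result:", parent_conn.recv())
        p.join()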

7. Spooling

Spooling stands for simultaneous peripheral operations online. It is a process in which jobs are put into a buffer, a disk, or a particular area in memory so that a device can access them when it is ready.

Spooling is effective because it lets devices access data at different rates. The buffer acts as a waiting station where data can rest while the slower device catches up. A typical application of spooling is print spooling.
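
A bare-bones sketch of print spooling, with made-up document names: fast processes drop jobs into the buffer without waiting, and the slower printer drains it at its own pace.

    from collections import deque
    import time

    spool = deque()                        # buffer holding jobs until the device is ready

    # Fast processes submit print jobs without waiting for the printer.
    for doc in ("report.txt", "invoice.pdf", "photo.png"):
        spool.append(doc)
        print(f"spooled {doc}")

    # The slow printer drains the buffer whenever it is ready.
    while spool:
        job = spool.popleft()
        time.sleep(0.1)                    # simulate the slower device catching up
        print(f"printed {job}")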

The operating system performs various tasks related to spooling:

  • It manages I/O device data spooling when devices have different data access rates.

  • It supports parallel computation because spooling allows I/O to proceed in parallel with computation.

  • It manages the spooling buffer, which provides a waiting station where data can rest while the slower device catches up.

Advantages of Spooling

  • Spooling can overlap the I/O operations of one process with the CPU operations of another process.

  • It uses the disk as a large buffer for spooling operations.

 

Wednesday, June 5, 2024

Virtual Memory in Operating System


Virtual memory is a storage allocation scheme in which secondary memory can be addressed as though it were part of main memory. The addresses a program may use to reference memory are distinguished from the addresses the memory system uses to identify physical storage sites, and program-generated addresses are translated automatically to the corresponding machine addresses.
The size of virtual storage is limited by the addressing scheme of the computer system and by the amount of secondary memory available, not by the actual number of main storage locations.

It is a technique that is implemented using both hardware and software. It maps memory addresses used by a program, called virtual addresses, into physical addresses in computer memory.

  1. All memory references within a process are logical addresses that are dynamically translated into physical addresses at run time. This means that a process can be swapped in and out of main memory such that it occupies different places in main memory at different times during the course of execution.
  2. A process may be broken into a number of pieces, and these pieces need not be contiguously located in main memory during execution. The combination of dynamic run-time address translation and the use of a page or segment table permits this.

If these characteristics are present, it is not necessary for all the pages or segments of a process to be in main memory during execution; pages are loaded into memory only when they are required. Virtual memory is implemented using demand paging or demand segmentation.

Demand Paging:
The process of loading a page into memory on demand (whenever a page fault occurs) is known as demand paging.
The process includes the following steps:

  1. If the CPU tries to refer to a page that is not currently available in main memory, it generates an interrupt indicating a memory access fault.
  2. The OS puts the interrupted process in a blocked state. For execution to proceed, the OS must bring the required page into memory.
  3. The OS searches for the required page in secondary storage (the backing store).
  4. The required page is brought from secondary storage into physical memory. If no free frame is available, a page replacement algorithm decides which page in physical memory to replace.
  5. The page table is updated accordingly.
  6. A signal is sent to the CPU to continue program execution, and the process is placed back in the ready state.

Hence, whenever a page fault occurs, the operating system follows these steps and the required page is brought into memory.
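
The steps above can be sketched as a small simulation; the FIFO replacement policy, the three-frame memory, and the reference string below are assumptions made only for illustration.

    from collections import deque

    NUM_FRAMES = 3
    page_table = {}                  # page number -> frame number (resident pages only)
    resident = deque()               # FIFO order of resident pages (replacement policy)

    def access(page):
        if page in page_table:                       # page already in memory: no fault
            return page_table[page]
        print(f"page fault on page {page}")          # steps 1-2: fault, process blocked
        if len(resident) == NUM_FRAMES:              # step 4: choose a victim page (FIFO)
            victim = resident.popleft()
            frame = page_table.pop(victim)
            print(f"  replacing page {victim}")
        else:
            frame = len(resident)
        page_table[page] = frame                     # steps 3-5: load page, update table
        resident.append(page)
        return frame                                 # step 6: process is ready again

    for p in (0, 1, 2, 0, 3, 4):                     # assumed page reference string
        access(p)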

Advantages:

  • More processes may be maintained in the main memory: Because we are going to load only some of the pages of any particular process, there is room for more processes. This leads to more efficient utilization of the processor because it is more likely that at least one of the more numerous processes will be in the ready state at any particular time.
  • A process may be larger than all of main memory: One of the most fundamental restrictions in programming is lifted. A process larger than the main memory can be executed because of demand paging. The OS itself loads pages of a process in main memory as required.
  • It allows greater multiprogramming levels by using less of the available (primary) memory for each process.

Page Fault Service Time:
The time taken to service a page fault is called the page fault service time. It includes the time taken to perform all six steps listed above.
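
Page fault service time feeds directly into the standard effective access time formula, EAT = (1 - p) x memory access time + p x page fault service time, where p is the page fault rate; the numbers below are only illustrative.

    # Illustrative figures: 200 ns memory access, 8 ms page fault service time.
    memory_access_time = 200              # nanoseconds
    page_fault_service_time = 8_000_000   # nanoseconds (8 ms)
    p = 1 / 1000                          # assumed page fault rate: 1 fault per 1000 accesses

    eat = (1 - p) * memory_access_time + p * page_fault_service_time
    print(f"effective access time ~ {eat:.0f} ns")   # roughly 8200 ns with these figures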

Swapping:

Swapping a process out means removing all of its pages from memory, or marking them so that they will be removed by the normal page replacement process. Suspending a process ensures that it is not runnable while it is swapped out. At some later time, the system swaps the process back from secondary storage to main memory. When a process spends most of its time swapping pages in and out rather than executing, the situation is called thrashing.

Causes of Thrashing:

  1. High degree of multiprogramming: If the number of processes in memory keeps increasing, the number of frames allocated to each process decreases. With fewer frames available to each process, page faults occur more frequently, more CPU time is wasted just swapping pages in and out, and CPU utilization keeps decreasing.

    For example:
    Let free frames = 400
    Case 1: Number of processes = 100
    Then, each process will get 4 frames.

    Case 2: Number of processes = 400
    Each process will get 1 frame.
    Case 2 is a condition of thrashing: as the number of processes increases, the frames per process decrease, and CPU time is consumed mostly in swapping pages in and out.

  2. Lack of frames: If a process has too few frames, fewer of its pages can reside in memory, so more frequent swapping in and out is required. This may lead to thrashing. Hence, a sufficient number of frames must be allocated to each process in order to prevent thrashing.

Recovery of Thrashing:

  • Do not allow the system to go into thrashing: instruct the long-term scheduler not to bring more processes into memory once a threshold is reached.
  • If the system is already thrashing, instruct the medium-term scheduler to suspend some of the processes so that the system can recover from thrashing.

 

Monday, June 3, 2024

Memory Management


Memory Management is the process of controlling and coordinating computer memory, assigning portions known as blocks to various running programs to optimize the overall performance of the system.

It is the most important function of an operating system that manages primary memory. It allows processes to move back and forth between main memory and the disk during execution, and it keeps track of every memory location, whether it is allocated to some process or free.

Why Use Memory Management?

Here are the reasons for using memory management:

  • It determines how much memory needs to be allocated to each process, deciding which process should get memory at what time.

  • It tracks whenever memory gets freed or unallocated and updates the status accordingly.

  • It allocates space to application routines.

  • It also makes sure that these applications do not interfere with each other.

  • It helps protect different processes from each other.

  • It places the programs in memory so that memory is utilized to its full extent.

Memory Management Techniques

Here are some of the most crucial memory management techniques:

Single Contiguous Allocation

It is the easiest memory management technique. In this method, all of the computer's memory, except for a small portion reserved for the OS, is available to a single application. The MS-DOS operating system, for example, allocates memory in this way, and many embedded systems also run a single application.

Partitioned Allocation

It divides primary memory into several memory partitions, which are mostly contiguous areas of memory. Each partition stores all the information for a specific task or job. A partition is allotted to a job when it starts and released when it ends.

Paged Memory Management

This method divides the computer's main memory into fixed-size units known as page frames. A hardware memory management unit maps pages to frames, and memory is allocated on a per-page basis.

Segmented Memory Management

Segmented memory is the only memory management method that does not provide the user's program with a linear and contiguous address space.

Segments need hardware support in the form of a segment table. The segment table contains the physical address of each segment in memory, its size, and other data such as access protection bits and status.

What is Swapping?

Swapping is a method in which a process is temporarily moved from main memory to a backing store and later brought back into memory to continue execution.

The backing store is a hard disk or some other secondary storage device that is big enough to accommodate copies of all memory images for all users, and it must be capable of offering direct access to these memory images.

Benefits of Swapping

Here are the major benefits of swapping:

  • It offers a higher degree of multiprogramming.

  • It allows dynamic relocation. For example, if address binding is done at execution time, a process can be swapped back into a different location; with compile-time or load-time binding, the process must be moved back to the same location.

  • It helps to get better utilization of memory.

  • It minimizes wasted CPU time, so it can easily be applied in a priority-based scheduling method to improve performance.

What is Memory allocation?

Memory allocation is a process by which computer programs are assigned memory or space.

Here, main memory is divided into two types of partitions:

  1. Low memory - The operating system resides in this part of memory.
  2. High memory - User processes are held in high memory.

Partition Allocation

Memory is divided into different blocks or partitions, and each process is allocated a partition according to its requirement. Variable-size partition allocation helps avoid internal fragmentation.

Below are the various partition allocation schemes; a compact sketch of all four follows the list:

  • First Fit: The process is allocated the first partition from the beginning of main memory that is large enough.
  • Best Fit: The process is allocated the smallest free partition that is still large enough.
  • Worst Fit: The process is allocated the largest free partition that is large enough.
  • Next Fit: Similar to First Fit, but the search for a sufficiently large partition starts from the point of the last allocation.
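
A compact sketch of the four strategies over a hypothetical list of free partition sizes; each function returns the index of the chosen partition, or None if nothing fits.

    def first_fit(free, size):
        # First partition from the start that is large enough.
        return next((i for i, b in enumerate(free) if b >= size), None)

    def best_fit(free, size):
        # Smallest partition that is still large enough.
        fits = [(b, i) for i, b in enumerate(free) if b >= size]
        return min(fits)[1] if fits else None

    def worst_fit(free, size):
        # Largest partition that is large enough.
        fits = [(b, i) for i, b in enumerate(free) if b >= size]
        return max(fits)[1] if fits else None

    def next_fit(free, size, last=0):
        # Like first fit, but the search starts from the last allocation point.
        order = list(range(last, len(free))) + list(range(last))
        return next((i for i in order if free[i] >= size), None)

    free_partitions = [100, 500, 200, 300, 600]   # hypothetical free partition sizes (KB)
    request = 212
    for fit in (first_fit, best_fit, worst_fit, next_fit):
        print(fit.__name__, "->", fit(free_partitions, request))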

What is Paging?

Paging is a storage mechanism that allows the OS to retrieve processes from secondary storage into main memory in the form of pages. In the paging method, main memory is divided into small fixed-size blocks of physical memory called frames. The size of a frame is kept the same as the size of a page to achieve maximum utilization of main memory and to avoid external fragmentation. Paging is used for faster access to data, and it is a logical concept.
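
A minimal illustration of the page-to-frame mapping described above, with an assumed 1 KB page size and a made-up page table.

    PAGE_SIZE = 1024                        # assumed page size: 1 KB

    # Hypothetical page table: page number -> frame number.
    page_table = {0: 5, 1: 2, 2: 7}

    def translate(virtual_address):
        page, offset = divmod(virtual_address, PAGE_SIZE)
        frame = page_table[page]            # a missing entry would mean a page fault
        return frame * PAGE_SIZE + offset

    print(translate(2 * PAGE_SIZE + 100))   # page 2 maps to frame 7, so 7*1024 + 100 = 7268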

What is Fragmentation?

As processes are loaded into and removed from memory, the free memory space is broken into pieces that are too small to be used by other processes.

Eventually, processes cannot be allocated to these memory blocks because the blocks are too small, and the blocks remain unused; this condition is called fragmentation. The problem occurs in a dynamic memory allocation system when the free blocks become so small that they cannot satisfy any request.

Two types of Fragmentation methods are:

  1. External fragmentation
  2. Internal fragmentation
  • External fragmentation can be reduced by rearranging (compacting) memory contents to place all free memory together in a single block.
  • Internal fragmentation can be reduced by assigning the smallest partition that is still large enough to hold the entire process; a small numeric illustration follows the list.
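
A small numeric illustration with made-up sizes: internal fragmentation is space wasted inside an allocated partition, while external fragmentation is free memory that is plentiful in total but scattered into holes too small to use.

    # Internal fragmentation: a 300 KB partition holding a 212 KB process wastes 88 KB.
    partition_size, process_size = 300, 212
    print("internal fragmentation:", partition_size - process_size, "KB")

    # External fragmentation: 230 KB free in total, yet no single hole can hold 150 KB.
    free_holes = [60, 40, 80, 50]
    print("total free:", sum(free_holes), "KB; largest hole:", max(free_holes), "KB")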

Summary:

  • Memory management is the process of controlling and coordinating computer memory, assigning portions called blocks to various running programs to optimize the overall performance of the system.
  • It lets the OS decide how much memory to allocate to each process and which process should get memory at what time.
  • In Single Contiguous Allocation, all of the computer's memory, except a small portion reserved for the OS, is available to one application.
  • The Partitioned Allocation method divides primary memory into various memory partitions, which are mostly contiguous areas of memory.
  • Paged Memory Management method divides the computer's main memory into fixed-size units known as page frames
  • Segmented memory is the only memory management method that does not provide the user's program with a linear and contiguous address space.
  • Swapping is a method in which a process is temporarily moved from main memory to a backing store and later brought back into memory to continue execution.
  • Memory allocation is a process by which computer programs are assigned memory or space.
  • Paging is a storage mechanism that allows OS to retrieve processes from the secondary storage into the main memory in the form of pages.
  • Fragmentation refers to the condition in which free memory is broken into small, scattered pieces that are too small to satisfy allocation requests.
  • The segmentation method works much like paging; the only difference is that segments are of variable length, whereas pages are always of fixed size.
  • Dynamic loading means that a routine of a program is not loaded until the program calls it.
  • Linking is a method that helps the OS collect and merge various modules of code and data into a single executable file.
