![](https://codelido.com/assets/files/2022-12-18/1671387294-783400-image.png)
In this blog, we will look at the concepts of address spaces, free space management, address translation, the translation lookaside buffer (TLB), segmentation, an introduction to paging, page replacement algorithms, and more.
When we talk about memory virtualization, the first question that comes to mind is: what is memory virtualization?
Memory virtualization is a common technique used in operating systems: the OS gives each process the illusion of a large, private memory of its own, translating the addresses the process uses into real physical addresses behind the scenes.
Let's begin with address spaces.
Address Spaces:
An address space refers to the range of memory addresses available to a computational entity.
There are two types of addresses:
1. Logical Address
2. Physical Address
Logical Address:
It is generated by the CPU and can also be referred to as a virtual address.
The set of all logical addresses generated by a program is called its logical address space.
Physical Address:
A physical address is the address actually seen by the memory unit.
The set of all physical addresses corresponding to a program's logical addresses is called its physical address space.
We have to remember that the two kinds of address are identical when addresses are bound at compile time or load time.
They differ only when addresses are bound at run time.
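As a toy illustration of run-time binding, here is a minimal sketch of base-and-bounds translation, the simplest hardware scheme for turning a logical address into a physical one. The register values are made up for the example.

```python
# Minimal base-and-bounds translation sketch. A logical address is
# checked against the limit register, then relocated by the base
# register. The register values below are illustrative only.

BASE = 0x4000   # where the process's memory starts in physical RAM
LIMIT = 0x1000  # size of the process's address space (4 KiB)

def translate(logical_addr: int) -> int:
    """Return the physical address, or raise on an out-of-bounds access."""
    if not 0 <= logical_addr < LIMIT:
        raise MemoryError(f"out-of-bounds access: {logical_addr:#x}")
    return BASE + logical_addr

print(hex(translate(0x123)))  # logical 0x123 -> physical 0x4123
```

A real MMU performs this check and addition in hardware on every memory reference.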
Now let's look at address translation.
Address Translation:
In memory virtualization, address translation is the mechanism by which each virtual address a process generates is converted, by the hardware with help from the operating system, into a physical address in main memory.
The same term also appears in networking, where address translation refers to the manipulation of the IP addresses used to identify devices over the Internet. There it serves to map private IP addresses within a network to public addresses that are routable over the Internet; that system is more often referred to as Network Address Translation (NAT).
Broadly it is divided into two types:
1. Static NAT
2. Dynamic NAT
Now we will see how these two kinds of NAT work.
Static NAT:
Static NAT maps network traffic from a static external IP address to an internal IP address or network, creating a fixed translation of real addresses to mapped addresses. It provides internet connectivity to devices on a private LAN that use unregistered private IP addresses.
Put simply, static NAT is a one-to-one mapping from one IP subnet to another: destination IP addresses are translated in one direction and source IP addresses in the reverse direction.
Static NAT allows connections to be originated from either side of the network, but translation is limited to one-to-one mappings, or to mappings between blocks of addresses of the same size.
Static NAT also supports the following types of translation:
· Mapping multiple IP addresses and specified ranges of ports to the same IP address with a different range of ports
· Mapping a specific IP address and port to a different IP address and port
Dynamic NAT:
Dynamic network address translation (Dynamic NAT) is a technique in which multiple public Internet Protocol (IP) addresses are mapped and used with an internal or private IP address.
It allows a user to connect a local computer, server or networking device to an external network or Internet group with an unregistered private IP address that has a group of available public IP addresses.
In other words, unlike static NAT, where you must manually define a static mapping between a private and a public address, dynamic NAT maps a local address to a global address dynamically: the router picks an address from the global address pool that is not currently assigned. The dynamic entry stays in the NAT translation table as long as traffic is being exchanged; after a period of inactivity the entry times out, and the global IP address can be reused for new translations.
Now let's discuss segmentation.
Segmentation:
Segmentation is a memory-management technique in which memory is divided into variable-size parts.
Each part is known as a segment and can be allocated to a process.
The details of each segment are stored in a segment table.
The segment table mainly contains two fields:
1. Base: the base (starting) address of the segment
2. Limit: the length of the segment
Segmentation divides the process into segments that each hold one kind of content: for example, the main function can go in one segment and the library functions in another.
Segmentation gives the user's view of the process, which paging does not; this user view is then mapped onto physical memory. There are two types of segmentation:
1. Virtual memory segmentation – Each process is divided into a number of segments, not all of which are resident at any one point in time.
2. Simple segmentation – Each process is divided into a number of segments, all of which are loaded into memory at run time, though not necessarily contiguously.
There is no simple relationship between logical addresses and physical addresses in segmentation. A table stores the information about all such segments and is called Segment Table.
The address generated by the CPU is divided into:
1. Segment number (s): the number of bits required to represent the segment number.
2. Segment offset (d): the number of bits required to represent the maximum size of a segment, i.e. the offset within the segment.
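To make the (s, d) split concrete, here is a small sketch of a segment-table lookup; the table contents are invented for illustration.

```python
# Sketch of segmented address translation: the segment number (s)
# selects a (base, limit) entry, the offset (d) is bounds-checked,
# and the physical address is base + offset. Table values are made up.

segment_table = {
    0: (0x1000, 0x0400),  # e.g. code segment: (base, limit)
    1: (0x5000, 0x0800),  # e.g. heap segment
    2: (0x9000, 0x0200),  # e.g. stack segment
}

def translate(s: int, d: int) -> int:
    base, limit = segment_table[s]
    if d >= limit:
        raise MemoryError(f"segment {s}: offset {d:#x} exceeds limit {limit:#x}")
    return base + d

print(hex(translate(1, 0x10)))  # segment 1, offset 0x10 -> 0x5010
```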
Advantages:
- No internal fragmentation.
- The average segment size is larger than the typical page size, so fewer table entries are needed.
- Less overhead.
- It is easier to relocate individual segments than the entire address space.
- The segment table is smaller than the page table used in paging.
Disadvantages:
- It can suffer from external fragmentation.
- It is difficult to allocate contiguous memory to variable-sized partitions.
- It requires costly memory-management algorithms.
Now that we have covered segmentation, let's move on to free space management.
Free Space Management:
The operating system allocates free space to a file when the file is created, and frees that space again when the file is deleted. To carry out these tasks and manage the space in the system, the operating system relies on a free space management system that allocates and de-allocates memory blocks as needed. In this article, we are going to learn about these techniques.
There are four techniques for free space management:
1. Bit vector
2. Linked List
3. Grouping
4. Counting
Bit vector:
A bit vector is the most frequently used method of implementing the free-space list. A bit vector is also known as a bit map: a collection of bits in which each bit represents one disk block. Each bit is either 1 or 0: if the bit is 1, the block is free, and if it is 0, the block is not free and is allocated to some file. Since all blocks are empty initially, every bit in the bit vector starts out as 1.
The block number of the first free block is given by:
block number = (number of bits per word) × (number of 0-valued words) + (offset of the first 1 bit)
or, in symbols, b = p × t + l, where:
b = block number
p = number of bits per word
t = number of 0-valued words
l = offset of the first 1 bit
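The formula above can be sketched as a small routine, assuming the convention used here (1 = free, 0 = allocated) and, purely for illustration, 8-bit words:

```python
# Find the first free block in a bit map: skip every all-zero word
# (all of its blocks are allocated), then take the offset of the first
# 1 bit in the first non-zero word. 8-bit words are an illustrative choice.

BITS_PER_WORD = 8  # p in the formula

def first_free_block(words):
    for t, word in enumerate(words):    # t = count of all-zero words skipped
        if word == 0:
            continue
        for l in range(BITS_PER_WORD):  # l = offset of the first 1 bit
            if word & (1 << (BITS_PER_WORD - 1 - l)):
                return BITS_PER_WORD * t + l   # b = p*t + l
    raise MemoryError("no free blocks")

# Words 0-1 are fully allocated; block 18 (word 2, offset 2) is free.
print(first_free_block([0b00000000, 0b00000000, 0b00100000]))  # -> 18
```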
Advantages:
1. Simple and easy to understand.
2. Consumes relatively little memory.
3. It is efficient for finding free space.
Disadvantages:
1. The operating system has to scan through the blocks until it finds a free one (a block whose bit is 1).
2. It is not efficient when the disk is large.
Linked list:
A linked list is another approach to free space management in an operating system. All the free blocks on a disk are linked together in a linked list: each free block holds a pointer containing the address of the next free block, and the last pointer in the list is null, indicating the end of the list. This technique is not efficient to traverse, because we have to read each disk block one by one, which costs I/O time.
Advantages:
1. In this method, available space is used efficiently.
2. As there is no size limit on a linked list, new free space can be added easily.
Disadvantages:
1. This method carries the overhead of maintaining the pointers.
2. The linked list is not efficient when we need to reach every block of memory.
Grouping:
The grouping technique is also called a modification of the linked-list technique. Here the first free block stores the addresses of n other free blocks; the last of those n blocks stores the addresses of the next n free blocks, and so on. This technique cleanly separates the empty and occupied blocks of memory.
The main advantage is that the addresses of a large number of free blocks can be found quickly.
The main disadvantage is that the entire list must be updated when a block becomes occupied.
Counting:
In memory, files are constantly being created and deleted, with blocks allocated and de-allocated accordingly: creating a file occupies free blocks, and deleting a file frees blocks. Since freed blocks often form contiguous runs, each entry in the free-space list consists of two parameters: the address of the first free disk block (a pointer) and a number n, the count of contiguous free blocks that follow it.
The main advantage is that a run of contiguous free blocks can be located quickly, and the list itself is smaller.
The main disadvantage is that each entry must store a count in addition to the pointer, so an individual entry requires more space.
Introduction to Paging
Paging is a memory-management scheme that eliminates the need for contiguous allocation of physical memory.
Paging can be defined as the process of retrieving a process, in the form of pages, from secondary storage into main memory.
The main idea behind paging is to divide each process into fixed-size pages.
Main memory is likewise divided into frames of the same size.
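A tiny sketch of that page/frame mapping, with an invented page size and page table:

```python
# Paging translation sketch: split the logical address into a page
# number and an offset, look the page up in the page table, and
# recombine with the frame number. Sizes and mappings are made up.

PAGE_SIZE = 4096                  # 4 KiB pages (illustrative)
page_table = {0: 7, 1: 3, 2: 12}  # page number -> frame number

def translate(logical_addr: int) -> int:
    page, offset = divmod(logical_addr, PAGE_SIZE)
    if page not in page_table:
        raise MemoryError(f"page fault on page {page}")
    return page_table[page] * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 -> frame 3 -> 12292
```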
Now let's discuss the translation lookaside buffer.
Translation lookaside buffer:
The translation lookaside buffer exists to cover the drawbacks of paging.
Let's first discuss what those drawbacks are.
The main drawbacks of paging are:
1. The page table can be very big, and therefore wastes main memory.
2. The CPU takes longer to read a single word from main memory, because every access requires a page-table lookup first.
So what is the translation lookaside buffer?
The translation lookaside buffer is abbreviated TLB.
The TLB is a memory cache used to reduce the time taken to access the page table again and again.
It is a cache that sits closer to the CPU, and the time taken by the CPU to access the TLB is less than the time taken to access main memory.
In other words, the TLB is faster and smaller than main memory, but cheaper and bigger than a register.
Its benefit is usually measured by the effective access time of the program being run.
The formula for effective access time is:
EAT = P(a + f) + (1 - P)(a + s·f + f)
P = TLB hit rate
a = time taken to access the TLB
f = time taken to access main memory
s = number of page-table levels (s = 1 if single-level paging is implemented)
A natural question is what this formula actually tells us. Two things follow directly from it:
1. Effective access time decreases as the TLB hit rate increases.
2. Effective access time increases in the case of multilevel paging.
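As a quick numeric check of the EAT formula, here is a sketch with illustrative timings (TLB access a = 10 ns, memory access f = 100 ns, single-level paging):

```python
# EAT = P*(a + f) + (1 - P)*(a + s*f + f), as defined above.
# The timing values passed in below are illustrative, not measured.

def effective_access_time(p, a, f, s=1):
    return p * (a + f) + (1 - p) * (a + s * f + f)

# 90% TLB hit rate: 0.9*(10+100) + 0.1*(10+100+100) = 99 + 21 = 120 ns
print(effective_access_time(0.9, 10, 100))
# A higher hit rate lowers EAT; an extra page-table level (s=2) raises it.
print(effective_access_time(0.99, 10, 100))
print(effective_access_time(0.9, 10, 100, s=2))
```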
Swapping
In general, a swap is an exchange of items between two or more positions.
The same idea applies to swapping in an operating system.
Swapping is a memory-management technique that moves data between main memory and secondary memory for better memory utilization.
Why do we use swapping in an OS?
The answer is this:
We use swapping to make more memory available than the computer's hardware physically provides. Physical memory may all be allocated while a process still requires additional memory. Rather than constraining the system to its physical RAM alone, memory swapping allows the operating system and its users to extend memory onto disk.
Advantages of swapping:
1. Swapping can help to make more room and allow your programs to run more smoothly. Swap files are also helpful when you have multiple programs open at the same time.
2. With a swap file, you can ensure that each program has its own dedicated chunk of memory, which can improve overall performance.
3. It improves the degree of multiprogramming.
4. It gives better RAM utilization.
Disadvantages of swapping:
1. If the computer is turned off during heavy paging activity, the user may lose all information related to the program.
2. The number of page faults increases, which can reduce overall processing performance.
3. Likewise, when many swap transactions are in flight, a power loss means lost information.
Demand Paging:
Demand paging is a form of swapping used in virtual memory systems. Not all of a process's data is moved from the hard drive to main memory up front; instead, a page is transferred only when the program demands it. If the required data already exists in memory, no copy is needed. Because it defers swapping from auxiliary storage to primary memory until a page is actually needed, demand paging is also known as "lazy evaluation".
Whenever a page is referenced for the first time, it is found in secondary memory and brought into main memory.
Thrashing:
Thrashing is the poor performance of a virtual memory system that occurs when the same pages are loaded repeatedly because there is not enough main memory to keep them resident. Depending on the configuration and algorithm, the actual throughput of the system can degrade by multiple orders of magnitude.
Since we already know about page faults and swapping, this will be easy to understand.
Thrashing occurs when page faults and swapping happen so frequently that the operating system spends most of its time swapping pages rather than executing processes. In this state, CPU utilization drops to a reduced or negligible level.
What are the causes of Thrashing?
1. If CPU utilization is too low, the scheduler increases the degree of multiprogramming by introducing a new process, and a global page replacement algorithm is used. The CPU scheduler sees the decreasing CPU utilization and raises the degree of multiprogramming further.
2.CPU utilization is plotted against the degree of multiprogramming.
3.As the degree of multiprogramming increases, CPU utilization also increases.
4.If the degree of multiprogramming is increased further, thrashing sets in, and CPU utilization drops sharply.
5.So, at this point, to increase CPU utilization and to stop thrashing, we must decrease the degree of multiprogramming
A natural follow-up question is: how do we avoid thrashing?
Thrashing has negative impacts on hard drive health and system performance, so it is necessary to take steps to avoid it. The following methods help resolve the problem:
1. Adjust the swap file size: if the system swap file is not configured correctly, disk thrashing can occur.
2. Increase the amount of RAM: since insufficient memory can cause disk thrashing, one solution is to add more RAM. With more memory, the computer can handle tasks easily without having to work excessively. Generally, this is the best long-term solution.
3. Decrease the number of applications running on the computer: too many applications running in the background consume a lot of system resources, and what little remains can lead to thrashing. Closing some applications releases resources and helps you avoid thrashing to some extent.
4. Replace programs: replace memory-heavy programs with equivalents that use less memory.
Page Replacement Algorithm:
As we saw with demand paging, only certain pages of a process are loaded into memory initially. This allows us to fit more processes into memory at the same time. But what happens when a process requests more pages and no free memory is available to bring them in? The following steps can be taken to deal with this problem:
1.Put the process in the wait queue, until any other process finishes its execution thereby freeing frames.
2.Or, remove some other process completely from the memory to free frames.
3.Or, find some pages that are not being used right now, move them to the disk to get free frames. This technique is called Page replacement and is most commonly used.
In this case, if a process requests a new page and there are no free frames, the operating system needs to decide which page to replace. It must use a page replacement algorithm to select the victim frame, then write the victim frame to disk, read the desired page into the frame, and update the page tables. All of this requires roughly double the disk access time.
1. Page replacement prevents over-allocation of memory by modifying the page-fault service routine.
2. To reduce the overhead of page replacement, a modify bit (dirty bit) indicates whether each page has been modified.
3. This technique provides complete separation between logical memory and physical memory.
Page Replacement in OS
Page replacement algorithms play an important role in virtual memory management. The main objective of every page replacement policy is to minimize the number of page faults.
What is the algorithm for page replacement?
Page replacement follows certain steps:
1. First of all, find the location of the desired page on the disk.
2. Find a free frame:
a) If there is a free frame, use it.
b) If there is no free frame, use a page-replacement algorithm to select a victim frame.
c) Write the victim frame to the disk, then update the page table and frame table accordingly.
3. Read the desired page into the newly freed frame, and change the page and frame tables.
4. Restart the process.
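The steps above can be sketched as a small page-fault routine; the helper names and the trivial "lowest frame number first" victim policy are invented for illustration.

```python
# Sketch of the page-fault service steps above. `choose_victim` is a
# pluggable replacement policy; disk I/O and dirty-bit handling are
# elided. All names here are illustrative, not a real OS API.

def service_page_fault(page, frames, free_frames, choose_victim):
    """Load `page` into a frame, evicting a victim if necessary."""
    if free_frames:                          # step 2a: a free frame exists
        frame = free_frames.pop()
    else:                                    # step 2b: select a victim
        frame, victim_page = choose_victim(frames)
        del frames[frame]                    # step 2c: write back, fix tables
    frames[frame] = page                     # step 3: read page into frame
    return frame                             # step 4: restart the process

frames, free = {}, [0, 1]
lowest_frame_first = lambda f: (min(f), f[min(f)])  # toy victim policy
for p in [10, 20, 30]:
    service_page_fault(p, frames, free, lowest_frame_first)
print(frames)  # -> {1: 10, 0: 30}: page 20 in frame 0 was evicted
```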
Types of Page Replacement:
1. FIFO Page Replacement Algorithm
2. LIFO Page Replacement Algorithm
3. LRU Page Replacement Algorithm
4. Optimal Page Replacement Algorithm
5. Random Page Replacement Algorithm
Now we will discuss each one in detail.
FIFO Page Replacement Algorithm:
FIFO stands for First In, First Out.
It is a very simple page replacement scheme: it replaces the oldest page, the one that has been present in main memory for the longest time.
The algorithm is implemented by keeping track of all pages in a queue.
As new pages are requested and swapped in, they are added to the tail of the queue, and the page at the head becomes the victim.
This is not a very effective page replacement policy, but it can be used in small systems.
Advantages:
1. The algorithm is simple and easy to implement.
2. FIFO does not cause much overhead.
Disadvantages:
1. The algorithm ignores how frequently or how recently a page was used; it simply replaces the oldest page.
2. Page faults can increase as the number of page frames increases (Belady's anomaly).
3. Its performance is generally the worst of these algorithms.
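The FIFO policy is easy to simulate; here is a small sketch that counts page faults for a sample reference string (the string and frame count are illustrative):

```python
from collections import deque

# FIFO page replacement: the queue head is always the oldest page,
# and therefore the victim. Counts page faults for a reference string.

def fifo_page_faults(references, num_frames):
    frames = deque()
    faults = 0
    for page in references:
        if page in frames:
            continue                 # hit: no fault, order unchanged
        faults += 1                  # page fault
        if len(frames) == num_frames:
            frames.popleft()         # evict the oldest page
        frames.append(page)          # load the new page at the tail
    return faults

print(fifo_page_faults([1, 3, 0, 3, 5, 6, 3], 3))  # -> 6 faults
```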
LIFO Page Replacement Algorithm:
This page replacement algorithm stands for "Last In, First Out" and works on the LIFO principle.
The newest page, the one that arrived in primary memory last, is the one replaced.
The algorithm makes use of a stack for monitoring all the pages.
LRU Page Replacement Algorithm in OS
This algorithm stands for "Least Recently Used"; it helps the operating system find the pages that have gone unused for some time.
The page that has not been used for the longest time in main memory is selected for replacement.
The algorithm is straightforward to implement.
It makes use of a counter (or timestamp) kept alongside each page.
Advantages of LRU
1. It is an efficient technique.
2. With this algorithm, it is easy to identify pages that have not been needed for a long time.
3. It lends itself well to full analysis.
Disadvantages of LRU
1. It is expensive and has more complexity.
2. It needs an additional data structure.
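A minimal sketch of LRU using an ordered dictionary as the "counter": the least recently used page sits at the front, and each hit moves a page to the back. The reference string is illustrative; for comparison, FIFO incurs 6 faults on this same string.

```python
from collections import OrderedDict

# LRU page replacement: evict the page whose last use is furthest in
# the past. An OrderedDict tracks recency (least recent at the front).

def lru_page_faults(references, num_frames):
    frames = OrderedDict()
    faults = 0
    for page in references:
        if page in frames:
            frames.move_to_end(page)     # hit: mark as most recently used
            continue
        faults += 1                      # page fault
        if len(frames) == num_frames:
            frames.popitem(last=False)   # evict the least recently used
        frames[page] = True
    return faults

print(lru_page_faults([1, 3, 0, 3, 5, 6, 3], 3))  # -> 5 faults
```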
Optimal Page Replacement Algorithm:
This algorithm replaces the page that will not be used for the longest time in the future. A practical implementation is not possible, because we cannot predict in advance which pages will not be used for the longest time.
It leads to the smallest number of page faults and is thus the best algorithm known.
For that reason, it is often used as a benchmark to measure the performance of other algorithms.
Advantages of OPR
1. The algorithm is easy to describe and simulate.
2. It provides excellent efficiency and is less complex.
3. Simple data structures suffice to simulate it for analysis.
Disadvantages of OPR
1. The algorithm requires knowledge of the program's future behavior.
2. A practical implementation is not possible, because the operating system cannot track future requests.
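Although it cannot be implemented in a real OS, the optimal policy is easy to simulate offline when the whole reference string is known in advance. A sketch, with an illustrative reference string:

```python
# Optimal (Belady) page replacement, simulated offline: evict the
# resident page whose next use lies furthest in the future (or that
# is never used again). Needs the full future reference string.

def optimal_page_faults(references, num_frames):
    frames = []
    faults = 0
    for i, page in enumerate(references):
        if page in frames:
            continue                     # hit
        faults += 1                      # page fault
        if len(frames) < num_frames:
            frames.append(page)          # still a free frame
            continue
        future = references[i + 1:]
        # distance to next use; pages never used again sort last
        next_use = lambda p: future.index(p) if p in future else len(future)
        victim = max(frames, key=next_use)
        frames[frames.index(victim)] = page
    return faults

print(optimal_page_faults([1, 3, 0, 3, 5, 6, 3], 3))  # -> 5 faults
```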
Random Page Replacement Algorithm:
As the name indicates, this algorithm replaces a page chosen at random. On any given run it may happen to behave like any of the other page replacement algorithms: LIFO, FIFO, Optimal, or LRU.