ICT201 Efficient Distributed Memory Management Answers


  • Internal Code :
  • Subject Code : ICT201
  • University : Kings Own Institute
  • Subject Name : IT Computer Science

Computer Organization and Architecture

Table of Contents

Introduction

Memory Management

Virtual Memory

Resource Allocation Graph

Conclusion

References

Introduction to Efficient Distributed Memory Management

The following report is based on memory management and virtual memory in an operating system. Various concepts of virtual memory and memory management are discussed. In addition, a CPU-scheduling exercise with 5 processes has been given, for which the waiting time and turnaround time of each process are calculated. The round-robin, shortest remaining time, and shortest process next schedules are also worked out here, using the burst times already given for the CPU-scheduling exercise.

Memory Management

Main memory, also known as physical memory or internal memory, is the storage the CPU accesses directly; the term distinguishes it from external storage devices such as disk drives. In practice, main memory is the system's RAM. The system can only change data that resides in main memory, so every program that is executed or accessed must first be copied from its device into main memory. The memory-management component of the operating system administers this primary memory: it moves processes between main memory and disk during execution, keeps track of every memory location and whether it is allocated or free, decides when each process receives memory, and records each allocation and the freeing of space as processes come and go.

A few concepts related to memory management are described below:

Process address space: The process address space is the set of logical addresses through which a process can reference its code and data. For example, a process using 32-bit addressing can address the range 0 to 0x7FFFFFFF, i.e. 2^31 addresses, a theoretical size of 2 GB. As Siahaan (2016) states, the operating system takes care of mapping logical addresses to physical addresses at memory-allocation time. Three types of address are involved:

Symbolic addresses: The addresses used in source code; variable names, constants, and instruction labels are their basic elements.

Relative addresses: These are produced at compile time, when the compiler converts symbolic addresses into relative addresses.

Physical addresses: These addresses are generated by the loader when the program is loaded into memory.

Swapping: Swapping is a memory-management mechanism in which a process is temporarily moved from main memory to disk, making the memory it occupied available to other processes. After some time, the swapped-out process is brought back from secondary memory into main memory. Swapping can also be regarded as a memory-compaction technique.

The overall swapping time is the time taken to move the process to disk plus the time to bring it back into memory.

Memory allocation

Memory allocation divides main memory into two regions: low memory, where the operating system resides, and high memory, which holds user processes. The allocation scheme of an operating system can be divided into two kinds: single partition and multiple partitions. Single-partition allocation is used to protect user processes from one another and from changes to operating-system code (Gandhi et al., 2016): the relocation register holds the smallest physical address, and each logical address must fall within the bounded range of logical addresses. In multiple-partition allocation, main memory is divided into several fixed-size partitions, each containing exactly one process. When a partition becomes free, a process is selected from the input queue and loaded into it; when that process terminates, the partition becomes available for a new process.
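The relocation-and-limit scheme described above can be sketched in a few lines. This is an illustrative model rather than any particular operating system's implementation, and the base and limit values below are made-up example figures:

```python
def translate(logical_addr, base, limit):
    """Map a logical address to a physical address using a relocation
    (base) register and a limit register, as in single-partition
    allocation. Out-of-range addresses are rejected."""
    if not 0 <= logical_addr < limit:
        raise MemoryError(
            f"logical address {logical_addr} outside limit {limit}")
    return base + logical_addr

# Hypothetical process loaded at physical address 14000
# with a 12000-byte partition:
print(translate(346, base=14000, limit=12000))   # 14346
```

Any logical address at or beyond the limit raises an error instead of silently touching another partition, which is exactly the protection the single-partition scheme provides.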

Virtual Memory

Virtual memory is extra address space that allows an operating system to address more memory than is physically installed in the system (Alam et al., 2017). It treats part of the hard disk as an extension of the system's RAM. The purposes of virtual memory are to extend the use of physical memory and to provide memory protection. Virtual memory is implemented through demand paging; it can also be implemented with a segmentation system. The main concepts of virtual memory are given below:

Demand paging

Demand paging is a paging scheme in which processes reside in secondary memory and pages are loaded into main memory only when they are needed (Cai et al., 2018). On a context switch, the operating system does not copy the old program's pages out in order to bring in new ones; instead, execution of the new program simply begins, and it fetches its pages as it references them once the first page has been loaded. Demand paging has several advantages: a larger virtual memory, more efficient memory use, and no hard limit on the degree of multiprogramming. It also has disadvantages: the page tables and per-process bookkeeping grow with the page-management technique. As Liu, Yang, Peng and Li (2019) observe, a referenced page may not be available because it was swapped out earlier; such a memory reference is called a page fault, and control transfers to the operating system, which demands the page back into memory.

Page replacement

Page-replacement algorithms are operating-system techniques for deciding which memory pages to swap out and write to disk when memory must be allocated. Replacement occurs on a page fault when no free page is available to serve the allocation, i.e. when the number of free pages is lower than the number of required pages.
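As an illustration of how such an algorithm chooses a victim page, here is a minimal sketch of FIFO replacement, the simplest of the standard policies; the reference string is a made-up example:

```python
from collections import deque

def fifo_page_faults(reference_string, num_frames):
    """Count page faults under FIFO replacement: on a fault with all
    frames full, evict the page that has been resident longest."""
    frames = deque()          # oldest resident page at the left
    resident = set()
    faults = 0
    for page in reference_string:
        if page in resident:
            continue          # hit: no fault
        faults += 1
        if len(frames) == num_frames:
            resident.discard(frames.popleft())  # evict the oldest page
        frames.append(page)
        resident.add(page)
    return faults

# Twelve references, three frames:
print(fifo_page_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # 9
```

Other policies (LRU, optimal) differ only in which resident page they evict; FIFO's weakness is that the oldest page may still be heavily used.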

  1. The process table is given below:

Process    Priority    Arrival Time    Service Time
A          3           0               3
B          2           3               8
C          1           3               3
D          2           7               14
E          2           7               2

B. Here, the priority-ordered list of processes is:

Process    Priority    Arrival Time    CPU cycle time
C          1           3               3
B          2           3               8
D          2           7               14
E          2           7               2
A          3           0               3

The waiting time of a process is the time it spends waiting until the resources it needs become free. For example, suppose a system has 3 processes and 5 allocatable resources: processes 1 and 2 need 2 resources each and process 3 needs 3 resources (Oukid et al., 2017). The resources are first scheduled to processes 1 and 2; process 3 must then wait until processes 1 and 2 have completed execution and released enough units.

Waiting time: the waiting time can be calculated once the turnaround time of each ongoing process has been found.

Turnaround time

Turnaround time is calculated by the formula: Exit time - Arrival time.

The arrival times used here are C = 0 ms, B = 1 ms, D = 1 ms, E = 2 ms, A = 3 ms.

Now, the waiting times of all the processes are C = (3-3) = 0 ms, B = (3-3) = 0 ms, D = (8+3)-7 = 4 ms, E = (25-14) = 11 ms and A = (27-7) = 20 ms.

Turnaround times: C = (0+3) = 3, B = (1+8) = 9, D = (7+14) = 21, E = (11+2) = 13, A = (20+3) = 23.

  1. Feedback

The time quantum for all the processes is q = 3.

  1. Highest Response Ratio Next

C = 3/3 = 1, B = 0/8 = 0, D = 21/14 = 1.5, E = 13/2 = 6.5, A = 20/3 = 6.67
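As a cross-check on hand calculations like those above, the waiting and turnaround times can also be computed programmatically. The sketch below simulates non-preemptive priority scheduling using the standard definitions (turnaround = completion - arrival, waiting = turnaround - service), so its figures may differ from the hand-worked numbers:

```python
def priority_schedule(procs):
    """Non-preemptive priority scheduling (lower number = higher
    priority). procs: list of (name, priority, arrival, service).
    Returns {name: (waiting_time, turnaround_time)}."""
    pending = sorted(procs, key=lambda p: p[2])   # order by arrival
    time, done = 0, {}
    while pending:
        ready = [p for p in pending if p[2] <= time]
        if not ready:                 # CPU idle until the next arrival
            time = min(p[2] for p in pending)
            continue
        name, prio, arrival, service = min(ready, key=lambda p: (p[1], p[2]))
        time += service               # run the chosen process to completion
        turnaround = time - arrival
        done[name] = (turnaround - service, turnaround)
        pending.remove((name, prio, arrival, service))
    return done

# The process table from this report: (name, priority, arrival, service)
table = [("A", 3, 0, 3), ("B", 2, 3, 8), ("C", 1, 3, 3),
         ("D", 2, 7, 14), ("E", 2, 7, 2)]
for name, (wait, tat) in sorted(priority_schedule(table).items()):
    print(f"{name}: waiting={wait}, turnaround={tat}")
```

Under these definitions the schedule runs A, C, B, D, E, since A is alone at time 0 and C has the highest priority among later arrivals.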

  1. Round Robin

C= (0-0) + (15-3) =12

B= (3-1) =2

D= (6-2) +(15-6) +(18-15) =16

E= (9-3) +(18-9) +(21-18) =18

A= (12-4) +(21-12) +(24-21) =20
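The round-robin slices can likewise be simulated rather than traced by hand. This is a minimal sketch with quantum q = 3 using the table's arrival and burst times; tie-breaking conventions for simultaneous arrivals vary between textbooks, so its completion times may differ from the figures above:

```python
from collections import deque

def round_robin(procs, quantum):
    """Round-robin scheduling. procs: list of (name, arrival, burst).
    Returns {name: completion_time}."""
    arrivals = deque(sorted(procs, key=lambda p: p[1]))
    remaining = {name: burst for name, _, burst in procs}
    ready, time, finish = deque(), 0, {}
    while arrivals or ready:
        if not ready:                               # CPU idle: jump ahead
            time = max(time, arrivals[0][1])
        while arrivals and arrivals[0][1] <= time:
            ready.append(arrivals.popleft()[0])     # admit new arrivals
        name = ready.popleft()
        run = min(quantum, remaining[name])
        time += run
        remaining[name] -= run
        while arrivals and arrivals[0][1] <= time:  # arrived during slice
            ready.append(arrivals.popleft()[0])
        if remaining[name] > 0:
            ready.append(name)                      # back of the queue
        else:
            finish[name] = time
    return finish

table = [("A", 0, 3), ("B", 3, 8), ("C", 3, 3), ("D", 7, 14), ("E", 7, 2)]
print(round_robin(table, quantum=3))
```

Processes that arrive while a slice is running are admitted before the preempted process rejoins the queue, which is the convention most textbooks use.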

Shortest remaining time

Time:    0  1  2  3  4  5  6  7  8  9  10 11 12 13
Running: C  C  B  D  D  E  A  C  C  B  D  EA C  C
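The shortest-remaining-time schedule can be reproduced by simulating one time unit at a time. This sketch uses the standard preemptive policy, in which a process only runs after its arrival time, so its chart may differ from the one above:

```python
def srtf(procs):
    """Shortest Remaining Time First (preemptive SJF), simulated one
    time unit at a time. procs: list of (name, arrival, burst).
    Returns {name: completion_time}."""
    remaining = {name: burst for name, _, burst in procs}
    arrival = {name: arr for name, arr, _ in procs}
    time, finish = 0, {}
    while remaining:
        ready = [p for p in remaining if arrival[p] <= time]
        if not ready:
            time += 1                 # CPU idle, no process has arrived
            continue
        # run the process with the least remaining time; ties by arrival
        p = min(ready, key=lambda q: (remaining[q], arrival[q]))
        remaining[p] -= 1
        time += 1
        if remaining[p] == 0:
            finish[p] = time
            del remaining[p]
    return finish

table = [("A", 0, 3), ("B", 3, 8), ("C", 3, 3), ("D", 7, 14), ("E", 7, 2)]
print(srtf(table))
```

With these inputs A runs alone first, then C and E preempt or precede the longer jobs B and D, which finish last.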

Shortest Process Next

Process    Burst time    Arrival Time
C          3             3
B          8             3
D          14            7
E          2             7
A          3             0

At time 0, process A starts, but it still needs two more execution units to complete.

At time 1, processes C, B, and A take place.

At time 2, processes C, B, and A continue.

At time 3, processes C and B have completed their execution.

At times 4, 5, 6, and 7, processes D and E take their turns, each entering a waiting state after one unit. During these four time units the processes continue their execution; after that, process A takes its turn and completes its execution, and then processes D and E continue and finish theirs.

Resource Allocation Graph

A resource-allocation graph has been given in which 3 processes are scheduled with their allocated resources. A few rules govern the analysis of such a graph; the first, when testing for deadlock, is to determine whether each resource has a single instance or multiple instances.

Here it can be seen that resource 2 is not a single-instance resource: it has multiple instances, which have been allocated to both process 1 and process 2 (Kalhauge & Palsberg, 2018). Analysis of the graph shows that the system is in a deadlock state. Resource 1 is held by process 1, which then needs resource 2; process 2 must wait while resource 2 is in use, and process 3 must likewise wait until resource 2 becomes free. Since all three processes need resource 2 and each is waiting for an allocation that will never be released, a circular wait has formed and the system is deadlocked.

Processes 2 and 3 are blocked here because the resources they need are already in use. Process 2 is in the blocked state, with 2 edges in and 1 edge out, since resource 2 cannot be obtained while other processes are waiting on it (Gentine et al., 2018). Likewise, process 3 is blocked, with 3 edges in and 1 edge out.
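For single-instance resources, the deadlock test described above reduces to finding a cycle in the graph; with multi-instance resources such as resource 2, a cycle is necessary but not sufficient. A minimal sketch of the cycle test, run on a hypothetical two-process graph (the node names are made up for the example):

```python
def has_deadlock(edges):
    """Detect a cycle in a resource-allocation graph via depth-first
    search. edges: {node: [nodes it points to]}, mixing request edges
    (P -> R) and assignment edges (R -> P). With single-instance
    resources, a cycle implies deadlock."""
    WHITE, GREY, BLACK = 0, 1, 2          # unvisited / on stack / done
    colour = {n: WHITE for n in edges}
    def dfs(n):
        colour[n] = GREY
        for m in edges.get(n, []):
            if colour.get(m, WHITE) == GREY:      # back edge: cycle found
                return True
            if colour.get(m, WHITE) == WHITE and dfs(m):
                return True
        colour[n] = BLACK
        return False
    return any(colour[n] == WHITE and dfs(n) for n in edges)

# P1 holds R1 and requests R2; P2 holds R2 and requests R1 -> deadlock.
graph = {"P1": ["R2"], "R2": ["P2"], "P2": ["R1"], "R1": ["P1"]}
print(has_deadlock(graph))   # True
```

A grey node reached again during the search means the walk has returned to a node still on the recursion stack, which is exactly a circular wait.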

  1. Reduction of resource allocation

The diagram above shows the graph after some of the resource-blocking issues have been reduced. Every resource can now be used by every process, although each process must still wait briefly until its allocation becomes free.

Conclusion on Efficient Distributed Memory Management

This report has discussed various concepts of memory management and virtual memory, along with how each works. From the given CPU schedule, the round-robin schedule, waiting times, and turnaround times were calculated, as were the shortest remaining time and shortest process next schedules. A process table in the report describes which process runs first and which runs next. Finally, the given resource-allocation graph was analysed to identify any deadlock condition and any blocked processes; the graph was found to be in a deadlock condition.

References for Efficient Distributed Memory Management

Alam, H., Zhang, T., Erez, M., & Etsion, Y. (2017). Do-it-yourself virtual memory translation. In 2017 ACM/IEEE 44th Annual International Symposium on Computer Architecture (ISCA) (pp. 457-468). IEEE.

Cai, Q., Guo, W., Zhang, H., Agrawal, D., Chen, G., Ooi, B. C., ... & Wang, S. (2018). Efficient distributed memory management with RDMA and caching. Proceedings of the VLDB Endowment, 11(11), 1604-1617.

Gandhi, J., Karakostas, V., Ayar, F., Cristal, A., Hill, M. D., McKinley, K. S., ... & Ünsal, O. S. (2016). Range translations for fast virtual memory. IEEE Micro, 36(3), 118-126.

Gentine, P., Pritchard, M., Rasp, S., Reinaudi, G., & Yacalis, G. (2018). Could machine learning break the convection parameterization deadlock? Geophysical Research Letters, 45(11), 5742-5751.

Kalhauge, C. G., & Palsberg, J. (2018). Sound deadlock prediction. Proceedings of the ACM on Programming Languages, 2(OOPSLA), 1-29.

Liu, L., Yang, S., Peng, L., & Li, X. (2019). Hierarchical hybrid memory management in OS for tiered memory systems. IEEE Transactions on Parallel and Distributed Systems, 30(10), 2223-2236.

Oukid, I., Booss, D., Lespinasse, A., Lehner, W., Willhalm, T., & Gomes, G. (2017). Memory management techniques for large-scale persistent-main-memory systems. Proceedings of the VLDB Endowment, 10(11), 1166-1177.

Siahaan, A. P. U. (2016). Comparison analysis of CPU scheduling: FCFS, SJF and Round Robin. International Journal of Engineering Development and Research, 4(3), 124-132.
