Virtualization of Ubiquitous System Using Distributed Shared Memory
ABSTRACT— This paper describes the concept of virtualizing a mirroring system using distributed shared memory, which reduces the need for virtual memory and lowers the hardware specifications required. It gives an overview of the most common DSM systems that can help in implementing the mirroring concept, so that the same memory can be used and multiple users can modify or fetch data in real time. Virtual memory is needed if we want to speed up performance, but on its own it wastes memory, so combining virtualization with distributed shared memory can yield a lightweight mirroring scheme.
KEYWORDS—mirroring; virtualization; redundancy; Server_response; Client_request
––––––––––––––––––––––––––––––––––––––––––––––––
1. INTRODUCTION
Distributed shared memory (DSM) is a resource management component of a distributed operating system that implements the shared memory model on a distributed system which has no physically shared memory.
DSM is just a form of memory architecture where physically separated memories can be addressed as one logically shared address space.
Here the term “shared” does not mean that there is a single centralized memory, only that the address space is shared.
A distributed-memory system, called a multicomputer, consists of multiple independent processing nodes with local memory modules, connected by a general interconnection network.
In disk mirroring we can implement the concept of data/message passing between two different devices; using this approach can save money on hard disks or other storage media.
False sharing is a major problem in DSM, but with newer algorithms it is readily solvable; this increases CPU performance and reduces wasted memory.
Data consistency problems occur in DSM, while on the other hand caching increases the efficiency of the DSM system; the consistency problem arises when a processor modifies replicated shared data.
The mirroring concept comes from Redundant Arrays of Inexpensive Disks (RAID), a mechanism in which files are striped or mirrored across multiple disks.
RAID is divided into many levels; mirroring corresponds to RAID level 1, which sacrifices write bandwidth (one logical write = two physical writes), and the distributed concept can be combined with RAID to optimize this.
Proposed System Architecture
Figure 1 System Architecture
- Main Memory – the file reader/writer area that holds the overall value of the original editor and the replicated editor window
- Memory M1 – the first work-area memory, which stores values at a separate memory address
- Memory M2 – the replicated-window memory area
- Write operations – the text editor window that demonstrates the distributed shared memory concept
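To make the layout above concrete, here is a minimal sketch in Python of how the main memory and the two mirrored work areas could be modelled; the class and attribute names (MainMemory, m1, m2) are illustrative assumptions and not part of the original design.

```python
# Minimal sketch of the proposed layout: a main memory region backing two
# mirrored work areas (M1 for the original editor, M2 for the replica).
# Names and structure are illustrative assumptions, not the paper's code.

class MainMemory:
    """Holds the authoritative contents seen by both editor windows."""
    def __init__(self):
        self.buffer = ""          # overall value of the original editor
        self.m1 = ""              # work-area memory M1 (original window)
        self.m2 = ""              # replicated-window memory M2

    def write(self, text):
        """A write operation from the editor updates M1 and mirrors it to M2."""
        self.m1 = text
        self.m2 = text            # mirroring: replica always tracks M1
        self.buffer = text        # main memory holds the merged value

mem = MainMemory()
mem.write("hello from the original editor")
assert mem.m2 == mem.m1 == "hello from the original editor"
```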
2. DESIGN ISSUES
In distributed shared memory there are several design issues that must be addressed before moving on to other parts of a DSM system.
- Granularity:
Granularity denotes the size of the sharing unit, which can be any memory unit such as a byte, a word, a page, or another block. Picking the correct granularity is an issue in distributed shared memory because it determines the amount of computation done between synchronization or communication points. Moving code and data around the system incurs latency and protocol overhead. Hence, remote memory accesses have to be integrated somehow with the memory management at every node. This usually constrains the granularity of access to be an integral multiple of the basic unit of memory management, or requires transferring only part of a page in order to reduce latency.
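As a small illustration of the granularity constraint just described, the following hedged sketch maps a shared region onto whole pages; the 4 KB page size and the helper names are assumptions made only for this example.

```python
# Illustrative only: map byte addresses of a shared region onto whole pages,
# since the sharing unit is usually an integral multiple of the page size.
PAGE_SIZE = 4096  # assumed granularity; real systems may differ

def page_of(address):
    """Return the page number that contains a given byte address."""
    return address // PAGE_SIZE

def pages_spanned(start, length):
    """Pages that must be transferred to share [start, start + length)."""
    first = page_of(start)
    last = page_of(start + length - 1)
    return list(range(first, last + 1))

# Sharing 10 bytes that straddle a page boundary still costs two whole pages.
print(pages_spanned(4090, 10))   # -> [0, 1]
```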
- Virtual memory and distributed shared memory:
The virtual memory of a modern computer system gives us very high-performance computation; the virtual memory system is responsible for page replacement, flushing, and swapping. To satisfy a remote memory request, the distributed shared memory layer has to ask the virtual memory manager for a page frame. The efficiency of the distributed shared memory model therefore depends critically on how fast a remote memory access request is serviced so that the computation can continue.
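The interaction described above can be sketched as a simulated page-fault path: on a miss, the DSM layer fetches the page from a (here, faked) remote owner and installs it in a local frame. All names and the in-process "remote memory" are illustrative assumptions.

```python
# Simulated page-fault path: the DSM layer services a miss by fetching the
# page from a remote owner and installing it in a local frame.
# Everything here is an in-process simulation, not a real VM manager.

REMOTE_MEMORY = {0: b"page-0 data", 1: b"page-1 data"}   # stands in for other nodes

class LocalNode:
    def __init__(self):
        self.page_table = {}      # page number -> bytes held locally

    def fetch_remote(self, page_no):
        """Stand-in for a network request to the page's current owner."""
        return REMOTE_MEMORY[page_no]

    def read(self, page_no):
        if page_no not in self.page_table:          # "page fault"
            frame = self.fetch_remote(page_no)      # ask the remote owner
            self.page_table[page_no] = frame        # install in a local frame
        return self.page_table[page_no]

node = LocalNode()
print(node.read(1))   # first access faults and fetches; later reads are local
```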
- Memory Model and Coherence Protocols:
To guarantee correct multiprocessor execution, the memory model must be chosen with care. The sequential consistency memory model guarantees that the view of memory is the same at all times from all processors. Release consistency distinguishes between kinds of synchronization accesses, acquire and release, and only guarantees a stable view of the shared memory at the release point.
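A rough sketch of the release-consistency idea follows: writes made inside an acquire/release section are buffered locally and only propagated to other nodes at the release point. The classes and the in-process "peers" are assumptions for illustration, not a real DSM implementation.

```python
# Release consistency, illustrated in-process: writes made inside an
# acquire/release section are buffered and only propagated at release().

class Peer:
    def __init__(self):
        self.memory = {}

class ReleaseConsistentNode:
    def __init__(self, peers):
        self.memory = {}
        self.peers = peers
        self.pending = {}                 # writes not yet visible elsewhere

    def acquire(self):
        self.pending = {}                 # start of a synchronized section

    def write(self, key, value):
        self.memory[key] = value
        self.pending[key] = value         # buffered until release

    def release(self):
        for peer in self.peers:           # flush: now others see a stable view
            peer.memory.update(self.pending)
        self.pending = {}

peer = Peer()
node = ReleaseConsistentNode([peer])
node.acquire()
node.write("x", 42)
assert "x" not in peer.memory             # not yet visible: weaker than SC
node.release()
assert peer.memory["x"] == 42             # visible at the release point
```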
3. PARALLEL COMPUTING
Single computers cannot keep pace with the scale of data becoming available today; for example, with genetic data sets computers become too slow to cope with the scale of the processing problem, even though the computers themselves keep getting faster.
One solution is to use multiple processors. If multiple processors are available, several parts of a program can be executed at once: while one processor works on one part of a calculation, others can work on other parts of the same problem. All of them can access the same data, and the work proceeds in parallel.
To be able to cooperate, the processors must be able to exchange data with each other. This can be done using a shared-memory environment.
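As a short, concrete example of processors cooperating through one memory area, the following sketch uses Python's multiprocessing module as a stand-in for a real shared-memory machine; the data and the splitting into two workers are arbitrary.

```python
# Two workers cooperate on one computation through a shared array.
from multiprocessing import Process, Array

def square_slice(shared, lo, hi):
    """Each worker squares its own slice of the shared data in place."""
    for i in range(lo, hi):
        shared[i] = shared[i] * shared[i]

if __name__ == "__main__":
    data = Array("d", [1.0, 2.0, 3.0, 4.0, 5.0, 6.0])   # shared between processes
    mid = len(data) // 2
    workers = [Process(target=square_slice, args=(data, 0, mid)),
               Process(target=square_slice, args=(data, mid, len(data)))]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print(list(data))   # both halves were computed in parallel on shared memory
```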
4. DISTRIBUTED SHARED VIRTUAL MEMORY ARCHITECTURE
Many investigators have studied the use of the operating system's virtual memory as a cache for the purpose of building management systems on a virtual-memory-based architecture. In this method, the database can be mapped directly into virtual memory so that persistent objects are used as if they were ordinary in-memory objects.
The next subsections describe protocols that show the working mechanism of virtual-memory-based DSM.
- DSM Client Protocol
In this protocol, a file can be shared according to the client-side node protocol so that clients share the same memory space: a client, its processor unit, a memory unit, connection variables and all other components needed for memory sharing work together as one unit, forming a virtual space for all of these components.
The client sends a request to the server to process the receive or make_connection command in order to execute the client request; after the client process has executed, the client is able to share the same virtual memory area in the connection-oriented model.
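A hedged sketch of the client side described above, using plain TCP sockets: the client opens the connection-oriented channel and then sends read/write requests against the shared area. The make_connection step, the DSMClient name and the one-JSON-object-per-line wire format are assumptions for illustration, not the paper's actual protocol.

```python
# Client-side sketch: connect to the DSM server, then send requests such as a
# write to the shared area. The wire format (one JSON object per line) is an
# assumption made only for this illustration.
import json
import socket

class DSMClient:
    """Connection-oriented client for the shared memory area (sketch only)."""
    def __init__(self, host="127.0.0.1", port=5000):
        self.sock = socket.create_connection((host, port))   # make_connection
        self.reader = self.sock.makefile("r")

    def request(self, op, key, value=None):
        """Send a client_request and return the server_response."""
        message = json.dumps({"op": op, "key": key, "value": value}) + "\n"
        self.sock.sendall(message.encode())
        return json.loads(self.reader.readline())

    def close(self):
        self.reader.close()
        self.sock.close()

if __name__ == "__main__":
    client = DSMClient()
    print(client.request("write", "doc.txt", "shared editor contents"))
    print(client.request("read", "doc.txt"))
    client.close()
```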
- DSM Server Protocol
The DSM server protocol initiates the connection between the server and the respective client; after a successful connection the DSM server receives requests from the client side and processes them in order to execute the requested operations.
The server listens for requests on the network (wired or wireless); when a request arrives, the server contacts the DSM server component, after which the distributed shared memory area is ready to be used by the DSM client components.
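A matching server-side sketch: the server listens for client requests on the network and serves reads and writes against one shared store, so the connected clients effectively use the same memory area. The framing matches the client sketch above and is likewise only an assumption.

```python
# Server-side sketch: one shared dictionary stands in for the DSM area that
# all connected clients read and write. Threading and JSON-per-line framing
# are assumptions made for this illustration only.
import json
import socket
import threading

SHARED_AREA = {}                      # the memory every client sees
LOCK = threading.Lock()

def serve_client(conn):
    with conn, conn.makefile("r") as reader:
        for line in reader:           # one request per line
            req = json.loads(line)
            with LOCK:
                if req["op"] == "write":
                    SHARED_AREA[req["key"]] = req["value"]
                    reply = {"status": "ok"}
                else:                 # read
                    reply = {"status": "ok", "value": SHARED_AREA.get(req["key"])}
            conn.sendall((json.dumps(reply) + "\n").encode())

def run_server(host="127.0.0.1", port=5000):
    with socket.create_server((host, port)) as server:
        while True:                   # listen for requests on the network
            conn, _addr = server.accept()
            threading.Thread(target=serve_client, args=(conn,), daemon=True).start()

if __name__ == "__main__":
    run_server()
```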
5. BASIC ALGORITHMS
There are some basic algorithms that demonstrate how DSM works.
- The Central-Server Algorithm
In this algorithm, a central server is responsible for maintaining all of the shared data in the DSM environment; for both reads and writes the server simply returns or updates the data.
- The Migration Algorithm
Data is sent to the location of the data-access request so that successive accesses are local; for both read and write operations the remote page is brought to the local system. In this scheme, multiple readers make the approach costly, because the page keeps migrating between the processors that use it.
- The Read-Replication Algorithm
On a read operation the page is replicated; on a write operation all copies except one are updated (or invalidated) using a timestamp approach. Multiple readers are allowed on the same page, so the scheme supports multiple reads but only one write.
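A small sketch of this read-replication scheme, assuming hypothetical Manager and Node classes: a read installs a replica and records the reader in a copy set, and a write invalidates every other copy so that only one writer proceeds.

```python
# Read-replication sketch: many readers, one writer per data item.
# The "manager" tracks which nodes hold copies and invalidates them on write.

class Manager:
    def __init__(self):
        self.value = {}        # item -> current value
        self.copyset = {}      # item -> set of nodes holding a replica

    def read(self, node, item):
        self.copyset.setdefault(item, set()).add(node)   # replicate on read
        return self.value.get(item)

    def write(self, node, item, new_value):
        for other in self.copyset.get(item, set()) - {node}:
            other.invalidate(item)                       # other copies go away
        self.copyset[item] = {node}
        self.value[item] = new_value

class Node:
    def __init__(self):
        self.cache = {}
    def invalidate(self, item):
        self.cache.pop(item, None)

mgr, a, b = Manager(), Node(), Node()
mgr.write(a, "x", 1)
b.cache["x"] = mgr.read(b, "x")    # b takes a replica for local reads
mgr.write(a, "x", 2)               # a's write invalidates b's replica
assert "x" not in b.cache
```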
- The Full Replication Algorithm
Full replication allows multiple read and multiple write operations to proceed at the same time in DSM.
While in full-replication mode, the algorithm must control access to the shared memory, for example by putting all writes into a single global order.
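To make the multiple-writer case concrete, here is a hedged sketch of one common way to control full replication: route every write through a sequencer that assigns a global order before the write is applied at every replica. The sequencer approach is a standard textbook technique and not necessarily the exact mechanism intended here.

```python
# Full-replication sketch: every node holds a complete copy, reads are local,
# and writes are globally ordered by a central sequencer before being applied
# everywhere. This is one classic way to keep the replicas consistent.

class Replica:
    def __init__(self):
        self.memory = {}
    def apply(self, seq, key, value):
        self.memory[key] = value          # applied in sequence order

class Sequencer:
    def __init__(self, replicas):
        self.replicas = replicas
        self.next_seq = 0
    def write(self, key, value):
        seq = self.next_seq               # assign a global position
        self.next_seq += 1
        for r in self.replicas:           # broadcast in the same order
            r.apply(seq, key, value)
        return seq

r1, r2 = Replica(), Replica()
seqr = Sequencer([r1, r2])
seqr.write("x", 1)
seqr.write("x", 2)
assert r1.memory == r2.memory == {"x": 2}   # all replicas agree on the order
```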
6. APPLICATION SCENARIO
This section presents an application scenario that uses virtualized operating systems for server-side virtualization.
A client can use different instances of virtualized operating systems to configure environments that isolate untrusted applications and confine them to particular virtualized operating systems.
Such distinct virtualized operating systems can be assigned partial CPU resources using resource management tools.
The main tasks in this scenario are:
- The user can run multiple processes on the virtual operating system to reuse the virtual memory.
- The system provides a virtualized OS environment in which resources are managed separately for each instance.
- The share of CPU resources used by an entire virtualized OS environment can be tracked and also restricted, in order to create an isolated execution environment.
7. DESIGN
The system described in this paper contains three modules:
- Module 1 – Customized Proposed Architecture
We build a hybrid system that performs multiple functions over the same memory area: a replicated text window is created to write in, and simultaneously the user can send the created file to the server memory area and call the network function to establish the connection between them.
- Module 2 – Load Management in the Communication Channel
While creating a document file in the application environment, there is no need to stay connected to the communication channel at all times, because N users would put a heavy burden on the network channel. To reduce this load, the user can choose to connect to the network or remain disconnected from the communication channel.
- Module 3 – Final Working Environment
In the last working model we implemented the final version of the application with all functionality: the text editor as well as the server-client mechanism that shares the same memory space and the same network area to communicate; the virtual memory concept also applies in this application.
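Tying the three modules together, a minimal sketch of the final working model: the editor writes into its local work area, mirrors the text to the replicated window, and forwards the file to the server only when the user has chosen to stay connected (Module 2). The class and the injectable send function are assumptions made for this sketch.

```python
# Sketch of the final working environment: a mirrored editor that optionally
# pushes its file to the server. The send function is injectable so the user
# can stay disconnected from the communication channel (Module 2).

class MirroredEditor:
    def __init__(self, send_to_server=None):
        self.m1 = ""                      # original editor window (memory M1)
        self.m2 = ""                      # replicated window (memory M2)
        self.send_to_server = send_to_server   # None => offline, no network load

    def type_text(self, text):
        self.m1 = text
        self.m2 = text                    # window mirroring on every edit
        if self.send_to_server is not None:
            self.send_to_server("doc.txt", text)   # only when connected

uploaded = {}
editor = MirroredEditor(send_to_server=lambda name, body: uploaded.update({name: body}))
editor.type_text("real-time shared document")
assert editor.m2 == editor.m1
assert uploaded["doc.txt"] == "real-time shared document"

offline = MirroredEditor()                # disconnected: nothing leaves the node
offline.type_text("local-only draft")
```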
8. CONCLUSION
This application is based on the concepts of window mirroring and DSM, and it provides a replica of the same workspace in multiple windows. In addition, the user can share the file being edited with the server portal; in other words, the server can control multiple users' workspaces and can edit a file or send it back to the client's local space.
Real-time file sharing and modification can be done using socket programming tools and networking protocols.
Virtual memory on the server side remains lightweight thanks to load balancing, and the other concepts are demonstrated by the working prototype of this application system.
The objective of the proposed system is to reduce network traffic as much as possible, leading to a more effective and efficient DSM system than most other systems.
S No. | Author Name | Technique Name | Advantages | Disadvantages |
1. | Chingwen Chai | Memory mapping manager | Reduced the communication cost | Virtual memory still does not give consistent results |
2. | Changhun Lee | Multiple-instruction-multiple-data (MIMD) system | Introduced the shared memory concept for programmers | Hardware support for DSM is not easy to implement |
3. | Ioannis Koutras, Iraklis Anagnostopoulos, Alexandros Bartzas, and Dimitrios Soudris | F.A.S.T. Features from an accelerated segment test | easy to use programming language model | Interconnection type networks are more complex |
4. | Takahiro Chiba, Myungryun Yoo and Takanori Yokoyama | FlexRay real-time network | Good and accurate time predictability | Memory bottleneck issues not fully solved |
5. | Thiago Gonzaga, Cristiana Bentes, Ricardo Farias, Maria Clícia S. de Castro, Ana Cristina B. Garcia | Blackboard for multi-agent systems | Message-passing subsystem that gives the mapping connection between agents | Conflict resolution |
6. | David K. Lowenthal, Vincent W. Freeh, David W. Miller | Two-dimensional data distributions | Allow several different views of a single page | Red-Black SOR not stable |
7. | Qi Zhang and Ling Liu | Inter process communication optimization | Dynamic shared memory management framework which enables multiple VMs to dynamically access the shared memory resource according to their respective demands | The static shared memory management in virtualized cloud may result in either resource waste or VM performance degradation |
8. | Bharath Ramesh, Calvin J. Ribbens, Srinidhi Varadarajan | Scalability on micro benchmarks applications | New user level software distributed shared memory System (DSM) | Leverage the cost and scalability advantages of distributed memory |
9. | Yi-Chang Zhuang, Ce-Kuen Shieh, Tyng-Yue Liang Chih-Hui Chou | Proteus model | Performance prediction mechanism | Size of the system is exponentially increasing |
10. | Jung-Ho Ahnt, Kang-Woo Leet, and Hyoung-Joo Kim | Distributed shared cache | Provides transactional facilities for direct manipulation of data in DSM | False sharing; memory coherence cost is high |
11. | John B Carter, Dilip Khandekar, Linus Kamb | Demand paging + coherence = DSM | The performance of software DSM systems has improved dramatically by addressing false sharing and related problems | Some systems are still not well integrated with the rest of the software environment, such as the compiler |
12. | Hae-Jin Kim, Dong-Soo Han | OS issue Unix Ware2/mk | Performance issues solved on distributed operating system called UnixWare2/mk | High-speed interconnection networks are required |
13. | Daniel J. Scales and Monica S. Lam | SAM a shared object system for distributed memory machines | Evaluation of SAM that provides a global name space and automatic caching of shared data | Pre-processing memory utilization is more |
14. | An-Chow Lai, Ce-Kuen Shieh, Yih-Tzye Kok, Jyh- Chang Ueng, Ling- Yang Kung | Dependence-Driven Load Balancing (D.D.L.B.) | Load balance helpful in Centralized or distributed algorithms | Load balance is introduced after multithreading to DSM system |
15. | Htway Htway Hlaing, Thein Thein Aye, Win Aye | Migrating the home protocol (M.H.P.) and scope consistency (SC) | Software Distributed Shared Memory use migrating the home protocol and scope consistency | There is a little overhead for forwarding and count table |
16. | Jelica Protic, Milo Tomasevic, Veljko Milutinovic | DSM mechanism | DSM is most appropriate for large-scale high-performance systems | Every technique described has issues, mostly regarding scalability and complexity of nodes |
17. | Daniel Potts , Ihor Kuz | VIEW MODEL | Very useful in wide area environment and improves the performance of existing DSM | Implementation of HLRC has weaker consistency than current strict consistency implementations |
18. | Michael Stumm, Songnian Zhou | Translation between the virtual and the physical address | The shared memory paradigm leads to simpler programs than when data is passed directly using communication primitives. | Performance of algorithms are sensitive to the shared memory access behaviour of application |
19. | Antonio J. Nebro, Ernesto Pimentel, Jose M. Troya | Object Model | Significant speed ups can be obtained using solutions given in papers | Parallel programming to be explored has synchronization constraint |
20. | J. Silcock | Cache coherence protocols | Efficiency increased in synchronization and consistency | The technique is only limited to workstations. |
21. | Paul Krzyzanowski | UEFI concepts | No need of installing the software on the system | RAM usage is high and can cause spikes |
22. | Debzani Deb, M. Muztaba Fuad | Page-based and object-based implementation techniques | Comparative study between page- and object-based DSM | No significant performance difference has been found between the two techniques |
23. | Steven K. Reinhardt | Tempest, a portable programming interface for mechanism-based DSM systems | DSM Provides programmer friendly shared memory abstraction | DSM system control memory and communication even when programmers and compiler can manage this efficiently |
24. | John Carter | Munin prototype | Mechanism and Strategies Improve Performance of DSM | Lower latency OS Operation, High Bandwidth Multicast Network |
25. | Veljko Multinovic | UNIX and OSF/DCE platforms | The paper explores the High Scale applications of the system | While scaling up the complexity of nodes causes the problem |
26. | Heinz Peter Heinzel, Henry E Ball, Koen Langendoen | Object-based distributed shared memory system | No MMU support is needed, no fixed-size pages and no problem of false sharing | How to implement totally ordered group communication was unclear |
27. | William Cook, Eli Tilevich, Ali Imbrahim, Ben Wiederman | Java RMI and RPC | A new programming construct, Remote Batch Invocation, is discussed, which is revolutionary | The adoption of this programming construct is unpredictable |
28. | Vikram vairagade, Chanchal dahat, Anjali Bhatkar | Ubiquitous Computing | Secure, Stable and Isolated computing environment | Inherent issues in managing multiple services per user |
29. | Jerzy Brezinski, Michal Szychowiak, Dariusz Wawrzyniak | Page-based DSM system for UNIX and OSF/DCE platforms | Shared-memory multiprocessor and distributed-system advantages have both been applied to obtain better results | Performance and reliability remain uncertain |
30. | Harshal garodi, Kiran More, Nikhil jagtap, Suraj Chavhan, Chandu Vaidya | Earliest Deadline First (EDF) scheduler and Ant Colony Optimization (ACO) based scheduler | The algorithm is useful when the future workload of the system is unpredictable | Memory usage of the system is proportional to the load of the process |
LITERATURE SURVEY
- “Chingwen Chai et al investigate consistency issues that arise when using distributed shared memory systems in the field of parallel computing; Chai discusses the memory mapping manager that manages the mapping between the shared memory address space and local memory address spaces.
Design and consistency Issues
- Virtual memory does not always give better results, because the data stored in virtual memory is not always useful for a new task
- Increasing the page size is a good option, but it creates a problem when multiple processors try to access the same page
- The cache coherence problem occurs when processors get different views of memory because they access and update it at different times.
- Sequential consistency: every write is immediately seen by all processors in the system; maintaining this kind of consistency generates more messages.
- Processor consistency: writes issued by different processors may be seen in different orders; allowing reads to bypass outstanding writes can help achieve high performance.
- Relaxed (weak) consistency: changes become visible within specific time periods, and a synchronization operation makes all previously pending updates visible
Consistency protocols
- Write-Shared Protocol
- Lazy Diff Creation Protocol
- Eager Invalidate Protocol
- Lazy Invalidate Protocol
- Lazy Hybrid Protocol
This paper illustrates that distributed shared memory can be used in parallel computing, but the expensive part is the communication cost of the underlying network; the consistency of such a system can be improved with the protocols above, and the related issues are discussed.”
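As a rough, generic illustration of the diff-based protocols named in the list above (and not the cited paper's implementation): at the first write a "twin" copy of the page is taken, and at release time only the bytes that differ from the twin are shipped to the other copies.

```python
# Generic twin/diff sketch, in the spirit of lazy-diff protocols: keep a twin
# of the page at the first write, then send only the changed bytes at release.

def make_twin(page):
    return bytearray(page)                 # snapshot before local writes

def compute_diff(twin, page):
    """Return {offset: new_byte} for every byte that changed since the twin."""
    return {i: page[i] for i in range(len(page)) if page[i] != twin[i]}

def apply_diff(page, diff):
    for offset, byte in diff.items():
        page[offset] = byte

local = bytearray(b"hello world.....")
remote = bytearray(local)                  # another node's copy of the page
twin = make_twin(local)                    # taken at the first write fault
local[0:5] = b"HELLO"                      # local writes inside the interval
diff = compute_diff(twin, local)           # only 5 bytes, not the whole page
apply_diff(remote, diff)
assert remote == local
```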
- “Changhun Lee et al give an overview of parallel computer models and discuss computational speed-up using multiple processors operating together on a single problem. A problem is divided into several parts and each part is solved by an individual processor; parallel computer models include the symmetric multiprocessor (SMP), the parallel vector processor (PVP), the massively parallel processor (MPP), and the distributed shared memory machine (DSM).
Parallel computer models are classified by:
- Flynn’s Taxonomy
Flynn’s taxonomy includes the multiple-instruction-multiple-data (MIMD) class, in which each processor is a full-fledged CPU with both a control unit and an ALU.
Software D.S.M. implementation
A software DSM implementation built on message passing provides the shared memory abstraction to programmers; this can be achieved through run-time library routines, at user level, in the OS, or in the programming language.
DSM management is usually supported through the virtual memory mechanism.
Whenever the requested data is not present in local memory, a page-fault handler retrieves that page from the local memory or disk of another node.
Software support for DSM is usually easier than hardware support and enables better tailoring of the consistency mechanism to the application behaviour.
Hardware D.S.M. implementation
Hardware DSM ensures, transparently to the software, that shared data is replicated and kept coherent across local memories and processor caches.
Hardware DSM is more expensive than software DSM, and its implementation usually requires complicated design and verification for advanced coherence maintenance and latency reduction techniques.”
- “Ioannis Koutras, Iraklis Anagnostopoulos, Alexandros Bartzas, and Dimitrios Soudris et al: memory management is a major challenge in many-core architectures for improving overall system performance; in this paper they describe how to optimize dynamic memory allocation and provide a C application programming interface to application developers.
Interconnection-type networks are more complex, and although the DSM implementation model comes with memory bottleneck issues, developers still select it for its easy-to-use programming model.
Complete allocation scheme of the proposed allocator
C level
- Allocation request
- Acquire local lock
- Acquire next remote lock
Microcode level
- Search in local heap
- Search in remote heap
HSM translation and return address
Benchmarking against other allocators on the same platform architecture is presented; no hardware changes are needed.
In this paper four benchmark traces are used; the authors indicate that the overall performance of the proposed system can be unpredictable.
The following four benchmarks were used to evaluate the performance of the memory allocator for embedded systems:
- F.A.S.T. Features from an accelerated segment test, a computer-vision corner detection kernel
- Gaussian kernel for blur effect
- Integral kernel
- Matrix multiplication kernel
According to the benchmark tests, the cycles spent on dynamic memory management average 18.8% (maximum 23.12% for the Gaussian kernel and minimum 10.83% for the FAST application).”
- “Takahiro Chiba, Myungryun Yoo and Takanori Yokoyama et al describe a Distributed Real-Time Operating System (DRTOS) that provides a distributed shared memory service for distributed control systems; they use a real-time network called FlexRay, which is based on a Time Division Multiple Access protocol, so the worst-case response time of the DSM is predictable, but the FlexRay communication must be well configured.
Distributed embedded control systems are used in factory automation, building control and so on; time predictability is studied in detail in this paper, which shows that real-time and location-transparent distributed computing environments are required for the implementation.
Distributed shared memory (DSM) provides location-transparent shared variables, so distributed software modules developed by model-based design can exchange their input and output values through shared variables on the DSM.
Distributed shared memory for embedded control systems
A distributed control software
Their goal is to develop a distributed real-time operating system (DRTOS) with DSM for embedded distributed control systems; a common FlexRay network is used to maintain consistency between two or more DRTOS nodes.
Distributed Shared Memory Model
The DSM mechanism uses a shared-variable DSM architecture rather than a page-based DSM architecture, because only a few variables are shared in distributed control software developed with tools such as MATLAB/Simulink.
A DRTOS prototype is implemented with a DSM mechanism that supports 32-bit data types.
An evaluation board called Gt200N10 is used in this paper to test the performance of the DSM.
Its CPU is a V850E/PH03 with an on-chip E-Ray FlexRay controller.
The clock rate of the CPU is 128 MHz.
The data transfer rate of FlexRay is 10 Mbit/s.
The communication cycle period is 1 ms.”
- “Thiago Gonzaga, Cristiana Bentes, Ricardo Farias, Maria Clícia S. de Castro, Ana Cristina B. Garcia et al: in multi-agent systems the blackboard provides the simplest way for agents to interact.
- Blackboard server
In a distributed architecture, the blackboard is implemented on a single node
- Novel agent communication system
This system is based on distributed shared memory mechanisms and additionally allows the implementation of a shared address space
In this paper they introduce a new system that shares data over the nodes and uses a message-passing subsystem, which provides the mapping connection between agents; they use a distributed shared memory system to handle the messaging
Rules of Multi-agent system
- Interaction
- Conflict Resolution
- Negotiation
- Communication
- Blackboard
- Message Passing
The Tri-Coord Model
It provides an environment where the interaction rules, the behavior, and the actions are well-defined in the system.
Distributed-Shared Memory (DSM)
DSM provides the convenience of a shared-memory programming model, in which all processors access a shared address space, on top of low-cost distributed systems
Their implementation used the Tri-coord MAS system as a baseline.
The Tri-coord system uses a blackboard for agent communication and is based on social laws to guide conflict resolution; they used the TreadMarks SDSM system to support the distributed blackboard and implemented a conflict simulator in order to test the system.”
- “David K. Lowenthal, Vincent W. Freeh, David W. Miller et al examine two alternatives for efficiently supporting two-dimensional data distributions in software distributed shared memory (SDSM) systems.
They develop two new page consistency protocols for this purpose.
One protocol, called Explicit 2D, requires that the user or compiler explicitly identify truly shared elements within a page.
The other, called Implicit 2D, infers such elements implicitly.
Knowledge of truly shared elements allows the SDSM to send only truly shared data at synchronization points.
This paper introduces two alternatives
Explicit 2D
Explicit 2D provides a programming interface through which the user or compiler can indicate where columns are shared between nodes
Implicit 2D
Infers shared columns automatically, obviating specification of the shared data
Both mechanisms provide the information necessary to determine which portions of the page are truly shared.
Two-dimensional distributions execute 12.3% faster than a one-dimensional version when using 25 nodes on Red-Black SOR.
Implementation working
The basic idea behind the implementation of Implicit 2D is to allow several different views of a single page.
Performance is evaluated using two programs:
- Red-Black SOR
For Red-Black SOR, they used arrays of size 5120 × 5120 and 25 nodes, and tests were run for 100 iterations.
- Jacobi Iteration
Jacobi iteration is a program similar to Red-Black SOR, but it differs in three important ways: one barrier per iteration, two arrays, and all column points are required per iteration”
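For readers unfamiliar with the Red-Black SOR benchmark mentioned in this entry, here is a small self-contained sketch of the red/black update pattern; the tiny grid, the fixed relaxation factor and the function names are purely illustrative and do not reproduce the cited paper's code or data distribution.

```python
# Red-Black SOR sketch: grid points are coloured by (i + j) parity, and each
# half-sweep updates one colour using only neighbours of the other colour,
# which is what makes the method easy to parallelise and distribute.
OMEGA = 1.5                                   # relaxation factor (illustrative)

def sor_sweep(grid, colour):
    n = len(grid)
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            if (i + j) % 2 == colour:         # 0 = red points, 1 = black points
                neighbours = (grid[i - 1][j] + grid[i + 1][j] +
                              grid[i][j - 1] + grid[i][j + 1]) / 4.0
                grid[i][j] += OMEGA * (neighbours - grid[i][j])

def red_black_sor(grid, iterations):
    for _ in range(iterations):
        sor_sweep(grid, 0)                    # all red points, parallel in SDSM
        sor_sweep(grid, 1)                    # then all black points
    return grid

grid = [[0.0] * 6 for _ in range(6)]
for j in range(6):
    grid[0][j] = 1.0                          # fixed boundary along one edge
red_black_sor(grid, 100)
print(grid[1][1])                             # interior relaxes toward the boundary
```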
- “Qi Zhang and Ling Liu et al, on shared memory optimization in virtualized cloud computing: in this paper they present a dynamic shared memory management framework which enables multiple VMs to dynamically access the shared memory resource according to their respective demands.
They illustrate their system design through two case studies: one aims at improving the performance of inter-domain communication, while the other aims at improving VM memory swapping.
They show that the dynamic shared memory mechanism not only improves the utilization of shared memory resources but also significantly enhances the performance of VM applications.
The shared memory technique, originally introduced for optimizing inter-process communication, is gaining increasing attention as a kernel-level optimization technique for efficient execution of virtual machines in virtualized clouds and data centres.
Problems in static management of shared memory.
The static shared memory management in virtualized cloud may result in either resource waste or VM performance degradation
Balance of shared memory across VMs.
In this paper they address the problems by promoting a dynamic shared memory management framework for improving virtual machine execution efficiency in virtualized cloud
Dynamic management of shared memory
- Shared Memory Allocation in Guest VMs
The grant table mechanism provides generic interfaces for convenient memory sharing between virtual machines, but it has a number of limitations.
- Shared Memory Allocation in Host/Hypervisor
An alternative way to establish shared memory across multiple co-located virtual machines in virtualized platform is to allocate a global memory region from the host.
In this paper they presented a dynamic shared memory management framework and showed how it can be deployed to improve inter-VM communication efficiency and VM memory swapping efficiency”
- “Bharath Ramesh, Calvin J. Ribbens, Srinidhi Varadarajan et al present a new user-level software distributed shared memory (DSM) system.
Two major trends motivate the work: first, the rise of many-core architectures is placing growing importance on threaded codes to achieve performance;
and second, architectural trends, especially in high-performance interconnects, are overcoming the bottlenecks that have held back DSM performance.
Performance results on two 256-processor clusters demonstrate scalability on micro-benchmarks and applications.
These are the largest-scale tests of any DSM system and achieve the highest performance.
The two challenges
- The first is to leverage and encourage the shared memory parallel programming community
- and the second is to leverage the cost and scalability advantages of distributed memory
One observation that motivates a new look at DSM is the emergence of Remote Direct Memory Access (RDMA), the technology underlying the impressive interconnect performance.
PERFORMANCE EVALUATION
- Synthetic benchmarks
In their synthetic benchmarks they measure the memory bandwidth achievable in Samhita in terms of read and write operations.
- LU factorization
A standard H.P.C. benchmark is the LU factorization of a dense n×n matrix.
- Black Scholes application from PARSEC
This application calculates the prices for a portfolio of European options analytically with the Black-Scholes partial
differential equation”
- “Yi-Chang Zhuang, Ce-Kuen Shieh, Tyng-Yue Liang, Chih-Hui Chou et al: Maximizing Speedup through Performance Prediction for Distributed Shared Memory Systems.
In this paper they describe the design and implementation of the performance prediction mechanism in their DSM system, which supports node reconfiguration to adjust the system size at runtime.
They adopt a simple calculation model and combine it with runtime information to predict the performance under different system sizes.
Proteus overview
Proteus consists of a set of runtime libraries that provide a globally shared address space among nodes connected through a 10 Mbps Ethernet connection
Performance prediction model
Scientific parallel programs are regular computations in the Single Program Multiple Data style; after information collection, they apply the runtime information to predict the execution time of applications under different system sizes.
Implicit waiting and load imbalance
These factors degrade the performance of a parallel processing system. If the number of threads assigned to each node is the same, then the workload among nodes is balanced. However, even when the workload among nodes is balanced, there is implicit waiting time in practice because of communication latency.
In this paper they designed and implemented the performance prediction mechanism in their scalable underlying D.S.M. system, Proteus.
In the experimental results, the accuracy of performance prediction model is acceptable and satisfying.
Using this performance prediction mechanism in Proteus, it could provide timely prediction result to the underlying system to adjust system size after observing several iterations of applications executing on Proteus.
The assessment of sequential page faults in the performance prediction is fully automatic.
The algorithm suggests to the user which equation to apply for the calculation of sequential page faults under each system size.”
- “Jung-Ho Ahnt, Kang-Woo Leet, and Hyoung-Joo Kim et al: Architectural Issues in Adopting Distributed Shared Memory for Distributed Object Management (DOM) Systems; in this paper they study the architectural issues in DSM.
DSM offers ease of programming and portability, as well as high-performance computing.
They propose two alternative distributed system architectures for adopting DSM for distributed object management systems.
Distributed shared cache (DSC) architecture
Here a client-server architecture is introduced; it is a very common approach in which clients share the same memory and process concepts
Cache Replacement
To maintain consistent real-time performance, slow or less useful cache entries must be replaced
False Sharing
False sharing arises when the memory coherence unit of the DSM is larger than the transactional unit of the Object Management System, so that more than one site has write access to a single coherence unit.
Distributed shared recoverable virtual memory (D.S.R.V.M.) architecture
- Transactional DSM for DSRVM
- Client Protocol
- DSM Server Protocol
In distributed shared cache architecture, they explored the trade offs in the use of DSM as an object cache relative to DSM as a page cache.
They also suggested a new replacement strategy that exploits knowledge of the ownership of data items, and they provide some feasible solutions to avoid the false sharing problem.
The major advantage of DSRVM architecture is to provide transactional facilities for direct manipulations of data in DSM.”
- “John B Carter, Dilip Khandekar, Linus Kamb et al Distributed Shared Memory: Where We Are and Where We Should Be Headed, the goal of this paper is to present our current position on what remains to be done before DSM will have a significant impact on real applications.
In this paper they address the problems of the Quarks DSM system using modern techniques.
They state some observations under the heading “How We Got to Where We Are”.
Demand paging + coherence = DSM: this equation captures the original way of framing the problem.
Early systems used the same coherence protocols as shared memory hardware, which resulted in low performance for applications with a moderate amount of fine-grained sharing.
The reason was that the unit of coherence was large (pages, or objects in the case of Emerald); this issue was later addressed by using a balanced coherence granularity.
The second major leap in DSM research came when researchers adopted the relaxed consistency models developed for shared memory hardware; a software implementation of coherence is more flexible.
Where We Should Be Heading
- Existing systems are aimed almost exclusively at large-scale homogeneous scientific applications, and few DSM systems are freely available to the scientific programmers who could use the power of current DSM
- Some applications still use burdensome message passing systems such as PVM
- With limited exceptions, existing systems are not well integrated with the rest of the software environment, such as the compiler
The performance of software DSM systems has improved dramatically by addressing false sharing and related problems
Distributed shared memory system is a resource management component of a distributed operating system that implements the shared memory model in distributed system
Memory allocation with single-level centralized control:
In this approach, a central manager allocates and deallocates memory for the user processes.”
- “Hae-Jin Kim, Dong-Soo Han et al: Performance Issues in the Operating System for the Page-based Distributed Shared Memory Machine; in this paper they track the performance issues in the operating system called UnixWare2/mk, which was developed for a page-based DSM machine.
They first examine the design philosophy and the implementation details of the operating system, and then evaluate the performance issues of the operating system for page-based DSM by showing performance results for the following well-known benchmark programs:
- Ousterhout
- Bonnie
- Networking benchmark
SYSTEM ARCHITECTURE
The underlying system architecture on which the UnixWare2/mk is capable of running is a cluster computer system
The system is interconnected over high-speed interconnection networks, e.g. Ethernet or a vendor-specific interconnection network.
UnixWare2/mk supports SMP (Symmetric Multiprocessing) capabilities on Intel’s SHV (standard high volume) nodes, which incorporate four Intel Pentium Pro processors.
In the design and implementation phase they followed these fundamental goals for UnixWare2/mk:
- Microkernel
- System Servers
- SSI (Single System Image)
They inspected the performance of the distributed operating system UnixWare2/mk by comparing various performance measurements against the monolithic kernel UnixWare2.”
- “Daniel J. Scales and Monica S. Lam et al: The Design and Evaluation of a Shared Object System for Distributed Memory Machines; this paper describes the design and evaluation of SAM, a shared object system for distributed memory machines
SAM
SAM is a portable run-time system that provides a global name space and automatic caching of shared data,
SAM has been implemented on the CM-5, Intel iPSC/860 and Paragon, IBM SP1, and networks of workstations running PVM. SAM applications run on all these platforms without modification.
This paper provides an extensive analysis of several complex scientific algorithms written in SAM on a variety of hardware platforms; the performance of these SAM applications depends primarily on the scalability of the underlying parallel algorithms.
Design Rationale
In this section they give background on existing software distributed shared memory systems and define the basic SAM design principles
The discussion covers the following topics:
- Background
- Design of SAM
- Minimizing communication
SAM overview
- Basic Primitives
In SAM, all shared data are represented as either values or accumulators
- Memory Management
New data types must be specified with the help of a preprocessor
- Communication optimizations
Support for asynchronous access is an important mechanism for hiding communication latency.
They use caching and synchronization mechanisms to address the performance issues
In this paper they presented the design and evaluation of a shared object system for distributed memory machines, called SAM, which simplifies programming while enabling a high-performance working environment”
- “An-Chow Lai, Ce-Kuen Shieh, Yih-Tzye Kok, Jyh-Chang Ueng, Ling-Yang Kung et al: Load balancing in distributed shared memory systems; in this paper they describe how the load of the main system is distributed and test the overall performance of the system under real-time load distribution management
The problem of load balancing arises after multithreading is introduced to DSM systems; in this paper they propose and experimentally evaluate a load balancing method called Dependence-Driven Load Balancing (DDLB)
DDLB holds three policies
- Transfer policy
- Location policy
- Selection policy
Common features in load balancing are defined as follows:
Centralized or distributed. Centralized algorithms can achieve load balancing, but their reliability is usually lower and they suffer from a single bottleneck; distributed algorithms, on the other hand, disseminate the load-balancing work among the processors.
They then study Dependence-Driven Load Balancing under the following aspects:
- Thread scheduling
- Dependence-Driven Load Balancing policies
- Information collection
- Copy set adjustment
- Processor thrashing avoidance
In this paper, the importance of load balancing in D.S.M. systems is clarified especially when an iterative barrier synchronization is employed in the parallel programs.
They introduce a load balancing methodology called Dependence Driven Load Balancing (D.D.L.B.) for DSM systems.”
- “Htway Htway Hlaing, Thein Thein Aye, Win Aye et al A Simple and Effective Software Distributed Shared Memory System
The efficiency and performance of a Software Distributed Shared Memory (SDSM) system relies on a memory consistency model and a suitable protocol for implementing that system for the specific work.
In this paper the design of a page based software distributed shared memory SDSM system is proposed for cluster of workstations and it uses migrating the home protocol (M.H.P.) and scope consistency (ScC).
The main role of this paper is to combine the forwarding approach with the broadcast one in migrating home protocol in order to reduce communication overhead and network traffic in accessing pages.
The main objective in DSM is to reduce the access time to non-local memory. There are several design choices that have to be made when implementing distributed shared memory:
- Structure and granularity of the shared memory;
- Coherence protocols and consistency models;
- Synchronization;
- Data location and access;
- Heterogeneity;
- Scalability;
- Replacement strategy; and
- Thrashing
Software DSM on a cluster of workstations is efficient for running applications; the objective of the proposed system is to reduce network traffic as much as possible, leading to a more effective and efficient SDSM system than most other systems.
In the proposed design there is a small overhead for the forwarding and count tables, but according to the authors it is negligible and does not outweigh the advantages”
- “Jelica Protic, Milo Tomagevic and Veljko Milutinovic et al: this survey of Distributed Shared Memory is popular because it combines the advantages of two different computer classes: shared memory multiprocessors and distributed systems.
-The most important one is the use of shared memory programming paradigm on physically distributed systems. In the first part of this paper, one possible classification taxonomy, which includes two basic criteria and a number of related characteristic, is proposed and described.
-According to the basic classification criterion, the implementation level of the DSM mechanism, systems are organized into three groups: hardware, software, and hybrid DSM implementations.
The second part of the paper represents an almost exhaustive survey of the existing solutions in a uniform manner, presenting their DSM mechanisms and issues of importance for various DSM systems and approaches.
-A DSM system logically implements the shared memory model on a physically distributed memory system.
DSM mechanism.
-In order to achieve the DSM programming model in Clouds, a set of primitives is built either on top of Unix or in the context of the object-based operating system kernel Ra. The distributed shared memory, made up of segments, is organized into objects
-The goal of the survey was to provide extensive coverage of all relevant topics in an increasingly important area, distributed shared memory computing. A special attempt has been made to give the broadest overview of the proposed and existing approaches.
-DSM solutions appear to be the most appropriate way toward large-scale high-performance systems with a reduced cost of parallel software development.”
- “Daniel Potts and Ihor Kuz et al Adapting Distributed Shared Memory Applications in Diverse Environments
-A problem with running distributed shared memory applications in heterogeneous environments is that making optimal use of available resources often requires significant changes to the application.
In this paper, we present a model, the view model, that provides an abstraction of shared data and separates the concerns of programming model, consistency, and communication.
-Separating these concerns makes it possible for applications to be easily adapted to different execution environments, allowing them to take full advantage of resources such as hardware and high-speed interconnects.
The VIEW MODEL
-The view model provides a shared data space abstraction based on the concept of views.
-Shared data spaces consist of data elements. The format and structure of a data element is data space dependent.
– A view’s data sharing behavior determines how a view interacts with its environment, how it reacts to any external interaction, and how it manages shared data.
The view architecture is based on a flexible and generalized model for controlling and adapting shared data to a distributed application’s underlying environment.
-It relies on a separation of concerns between the client application, programming model, consistency and communication protocols, and sharing interactions.
-We believe that this approach is suitable for distributed application data sharing in wide-area environments such as multi-clusters and Grids, and, in particular, that it provides mechanisms for improving the performance of existing DSM applications and protocols in such environments.”
- “Michael Stumm and Songninan Zhou et al Distributed Shared Memory: Concept and System
-It is difficult to choose an appropriate inter-process communication mechanism in Distributed Shared Memory.
-This article categorizes and compares the basic algorithms for implementing distributed shared memory and analyzes their performance; the performance of these algorithms is sensitive to the shared memory access behaviour of the application.”
- “Antonio J. Nebro, Ernesto Pimentel, José M. Troya et al Applying Distributed Shared Memory Techniques for Implementing Distributed objects
– In this paper, we study how the potential advantages of Distributed Shared Memory techniques can be applied to concurrent object-oriented languages.
– The object model is characterized by the requirement of explicitly enclosing object invocations between acquire and release operations, and the distinction between command and query operations
– Distributed shared memory (DSM) is a model for interprocess communication in distributed systems that simplifies distributed programming by offering a programming model similar to concurrent programming in shared memory systems
– A DSM system logically implements a shared memory model on a physically distributed memory system
– There are three main issues that characterize a DSM system [PTM96]: the level where the DSM mechanism is implemented, the algorithms for implementing DSM, and the memory consistency model of the shared data
Memory Consistency Models:
The question of memory consistency arises from the potential operation calls of one object on another, when we want to increase performance by replicating one or more objects to reduce interprocessor communication
Object Model:
We should study the applicability of DSM concepts to implement a system based on distributed objects. Instead of assuming an existing object model and studying how to benefit from DSM approaches, we make our approach from the opposite side.
Implementation:
The implementation described in this paper is done in C++.
Performance:
To measure the performance of the current implementations, we have coded a parallel program that multiplies square matrices of integers. The parallel algorithm is based on dividing the result matrix in 4N square submatrices and computing them in parallel.
– We have presented an object model that fits into an entry consistency DSM scheme. This model is well suited for the efficient implementation of objects in distributed systems, since it allows object replication.”
- “J. Silcock et al A Consistency Model for Distributed Shared Memory on RHODOS among Shared Memory Consistency Models,
• The major difficulty when designing a DSM system is to ensure that the system allows application programmers to use the shared memory programming model to write programs which will execute efficiently.
• There is scope to improve the efficiency of DSM through the use of weaker consistency models.
• The hybrid consistency models described in this document allow DSM designers to use synchronization points inserted by programmers as checkpoints for consistency operations.
• Entry consistency forces programmers to explicitly associate shared variables with synchronization variables. Whenever a critical region is entered, the shared variables associated with the synchronization variable guarding that region are updated.
• Release consistency does not have this mechanism to identify the shared variables; therefore all changes made between consecutive releases, including changes made to non-shared variables, are propagated to all workstations.
• We have determined that the only changes that need to be propagated to the other workstations are those made to data accessed by multiple processes.
• In RHODOS, we are implementing DSM at operating system level, so we are able to identify these changes implicitly by identifying changes made within the critical region. We can therefore propagate only those updates to other workstations.”
- “Paul Krzyzanowski et al Booting an operating System
An operating system is sometimes described as “the first program,” one that allows you to run other programs.
The operating system is loaded through a bootstrapping process, known as booting.
Booting with BIOS:
Stage 1: The Master Boot Record
Stage 2: The Volume Boot Record
Booting with UEFI
-With UEFI, there is no longer a need for the Master Boot Record to store a stage 1 boot loader; UEFI has the smarts to parse a file system and load a file on its own, even if that file does not occupy contiguous disk blocks.
Non-Intel Systems Booting Process
-There are numerous implementations of the boot process. Many embedded devices will not load an operating system but have one already stored in non-volatile memory (such as flash or ROM).
– Those that load an OS, such as ARM-based Android phones, for instance, will execute code in read-only memory (typically in NOR flash memory) when the device is powered on.
Mac OS X
-Older PowerPC-based versions of Apple Macintosh systems, as of at least OS 8 as well as OS X, were based on Open Firmware. Open Firmware originated at Sun and was used in non-Intel Sun computers.”
- “Debzani Deb and M. Muztaba Fuad et al: A Comparative Study of Page-based and Object-based Distributed Shared Memory
– There are several design and implementation approaches for DSM, such as page-based and object-based implementation techniques.
– Both the traditional page-based implementations and recently revived object-based implementations have their potential advantages and disadvantages.
-This paper compares the page- and object-based approaches by developing protocols of both in the same lower level environment.
– The main problems that every DSM approach must address are: mapping of the logically shared address space onto physically distributed memory modules, locating and accessing a needed data item and preserving the coherent view of replicated data.
– object-based DSMs eliminate the false sharing problem by limiting the scope of consistency action only to the object’s extent and thereby reduce the amount of data that needs to be transferred through the network
EXPERIMENTAL ENVIRONMENT & PROTOCOLS:
The object-based protocol is based on the sequential consistency model used by the CRL system, and the page-based protocol is based on the lazy release consistency multiple-writer protocol used in the CVM (Coherent Virtual Machine) and TreadMarks systems.
LAZY RELEASE CONSISTENCY PROTOCOL:
– In release consistency (RC) models, memory becomes consistent only at synchronization points indicated by the programmer
– In the RC model, shared memory accesses are categorized either as ordinary or as synchronization accesses, with the latter category further divided into acquire and release accesses.
THE CRL PROTOCOL:
– CRL allows an application to create an object, an arbitrary sized contiguous array of memory, which can be identified uniquely by an identifier and provides synchronization calls that make consistent a single object at a time.
PERFORMANCE ANALYSIS:
The experiment is conducted on four SPARC workstations connected by a powerful network. Since CVM uses UDP, all processes communicate with each other over UDP sockets. An 8192 byte page size is used.
SPEEDUP:
The object based approach outperforms the page-based in the case of three applications.
NETWORK TRAFFIC
It can be concluded from the paper that there is no special bandwidth advantage in the case of object-based implementation.
COMMUNICATION REQUIREMENT:
-First of all, in object-based sequential consistency, an invalidation is sent after every modification to all other processes which also cache the object.”
- “Steven K. Reinhardt et al Mechanisms for Distributed Shared Memory
- Identifies a set of mechanisms for distributed shared memory
- Develops Tempest, a portable programming interface for mechanism-based DSM systems,
- Describes Stache, a protocol that uses Tempest to implement a standard shared-memory model,
- summarizes custom protocols developed for six shared-memory applications
- designs and simulates three systems, Typhoon, Typhoon-1, and Typhoon-0, that support Tempest,
- describes a working hardware prototype of Typhoon-0, the simplest of those designs.”
- “John B. Carter, Design of the Munin Distributed Shared Memory System
– This paper contains a detailed description of the design and implementation of the Munin prototype with special emphasis given to its novel write shared protocol.
– The key problem in building an efficient DSM system is to reduce the amount of communication needed to keep the distributed memories consistent.
– The Munin DSM system incorporates several novel techniques for doing so, including the use of multiple consistency protocols (a small selection sketch follows this summary) and support for multiple concurrent writers.
-The basic idea behind Distributed Shared memory is to treat the local memory of a processor as if it were a coherent cache in a shared memory multiprocessor.
– They support the relatively simple and portable programming model of shared memory on physically distributed memory hardware, which is more scalable and less expensive to build than shared-memory hardware.
– The challenge in building a DSM system is to achieve good performance over a wide range of programs without requiring programmers to restructure their shared memory parallel programs.
– The Munin DSM system incorporates several techniques that make DSM a more viable solution for distributed processing by substantially reducing the amount of communication required to maintain consistency compared to other DSM systems.
– The core of the Munin system is the runtime library that contains the fault handling, thread support, synchronization, and other runtime mechanisms.
-DSM can be made efficient without the use of unconventional programming languages, compilers, or operating system support.”
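Munin’s central idea, as summarized above, is that different shared variables benefit from different consistency protocols chosen from programmer annotations. The following sketch only illustrates that selection step; the access_pattern categories, variable names, and protocol descriptions are assumptions made for the example and are not Munin’s actual annotations or runtime.

    /* Hedged sketch: mapping per-variable access-pattern annotations to a
       consistency protocol, in the spirit of Munin's multiple protocols. */
    #include <stdio.h>

    typedef enum {
        READ_MOSTLY,    /* replicate widely, update on the rare write        */
        WRITE_SHARED,   /* allow concurrent writers, merge diffs at release  */
        MIGRATORY,      /* move the single copy to whichever process uses it */
        CONVENTIONAL    /* single-writer invalidate protocol as a fallback   */
    } access_pattern;

    typedef struct {
        const char    *name;
        access_pattern pattern;
    } shared_var;

    static const char *protocol_for(access_pattern p) {
        switch (p) {
        case READ_MOSTLY:  return "replicate + broadcast updates";
        case WRITE_SHARED: return "multiple-writer, diff at release";
        case MIGRATORY:    return "migrate ownership with the accessor";
        default:           return "invalidate on write";
        }
    }

    int main(void) {
        shared_var vars[] = {
            { "global_config", READ_MOSTLY  },
            { "result_matrix", WRITE_SHARED },
            { "work_queue",    MIGRATORY    },
        };
        for (size_t i = 0; i < sizeof vars / sizeof vars[0]; i++)
            printf("%-14s -> %s\n", vars[i].name, protocol_for(vars[i].pattern));
        return 0;
    }

Matching the protocol to the access pattern is what lets a system like Munin cut the consistency traffic that a one-size-fits-all protocol would generate.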
- “Jelica Protic, Milo Tomasevic, and Veljko Milutinovic, Distributed Shared Memory: Concepts and Systems
• Distributed shared memory systems strive to overcome the architectural limitations of shared memory computers and to make it easier to develop parallel programs in a distributed environment.
• However, in order to meet these goals in practice, many specific and difficult problems have to be solved.
• In this paper, the fundamentals of DSM systems’ construction, including basic design, mechanisms, memory consistency models, and problems, are presented.
• The general concept and hierarchical structure of a page-based DSM system for UNIX and OSF/DCE platforms have been proposed.
• Applications of the basic DCE components for improving the security, modularity, scalability, and portability of the proposed system in comparison with existing ones have been described.
DSM fundamentals:
• Distributed shared memory (DSM) is a single address space shared by several hosts connected via a network communication environment
• The DSM is a kind of virtual memory and the job of the DSM system is an automatic mapping of the shared virtual address space into the physical address space of the hosts composing the system.
Basic problems
• One of the problems in the implementation of a DSM system is the translation between virtual and physical addresses (a minimal page-lookup sketch follows this summary).
Distributed shared memory systems strive to join the advantages of both shared memory multiprocessors and distributed systems, offering, at least potentially, a virtual shared memory attractive to users as well as system scalability. As is known, however, in order to meet this goal in practice, many specific and difficult problems have to be solved.
In this paper we have analyzed, first, the fundamentals and problems of DSM systems’ construction.”
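To make the virtual-to-physical mapping problem concrete, the sketch below simulates the lookup a page-based DSM runtime might perform when an access to a shared virtual address faults: translate the address into a page number and offset, consult a directory of page owners, and either use the local copy or fetch the page from the owning node. The directory contents, node numbering, page size, and function names are assumptions for illustration only, not the mechanism of any particular cited system.

    /* Hedged sketch of fault-time address resolution in a page-based DSM. */
    #include <stdio.h>

    #define PAGE_SIZE 4096u
    #define NUM_PAGES 8

    /* Toy directory: which node currently holds the valid copy of each page. */
    static const int owner_of[NUM_PAGES] = { 0, 1, 2, 3, 0, 1, 2, 3 };

    static void resolve(unsigned vaddr, int local_node) {
        unsigned page   = vaddr / PAGE_SIZE;   /* shared virtual page number */
        unsigned offset = vaddr % PAGE_SIZE;   /* offset inside that page    */
        if (page >= NUM_PAGES) {
            printf("vaddr %#x: outside the shared region\n", vaddr);
            return;
        }
        if (owner_of[page] == local_node)
            printf("vaddr %#x: page %u offset %u is already local\n",
                   vaddr, page, offset);
        else
            printf("vaddr %#x: page %u offset %u -> fetch from node %d\n",
                   vaddr, page, offset, owner_of[page]);
    }

    int main(void) {
        int local_node = 0;
        resolve(0x0123, local_node);   /* page 0, held locally   */
        resolve(0x2345, local_node);   /* page 2, held by node 2 */
        return 0;
    }

A real system would be driven by MMU page faults and would also track access rights (read-only versus writable copies), but the translation step itself is essentially this lookup.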
- “Heinz-Peter Heinzle, Henri E. Bal, and Koen Langendoen, Implementing Object-Based Distributed Shared Memory on Transputers
- Distributed shared memory (DSM) is an attractive alternative to message passing for programming distributed-memory parallel machines.
- Object-based distributed shared memory systems allow processes on different machines to communicate through passive shared objects. This paper describes the implementation of such a system on a transputer grid.
- The system automatically takes care of placement and replication of objects. The main difficulty in implementing shared objects is updating replicated objects in a consistent way.
- We use totally-ordered group communication (broadcasting) for this purpose (see the sketch after this summary).
- We give four different algorithms for ordering broadcasts on a grid and study their performance.
- We also describe a portable runtime system for shared objects. Measurements for three parallel applications running on 128 T800 transputers show that good performance can be obtained.
- In contrast to message passing, DSM offers the programmer the illusion that all processors in the system have access to a shared memory.
- This model eases parallel programming, since it allows sharing of state information between processes on different processors, which need not be connected by a physical shared memory.”
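Why totally-ordered broadcast keeps replicated objects consistent can be shown with a tiny simulation: if every replica applies the same globally sequenced log of writes in the same order, all replicas finish in the same state. The sketch below only illustrates that argument; the sequence numbers, the write_op structure, and the single-process simulation of three replicas are assumptions for the example, not the transputer implementation described in the paper.

    /* Hedged sketch: replicas that apply a totally-ordered log converge. */
    #include <stdio.h>

    #define NUM_REPLICAS 3
    #define NUM_OPS      4

    typedef struct { int seq; int value; } write_op;   /* one broadcast write */

    int main(void) {
        /* A sequencer has assigned a global sequence number to each write. */
        write_op log[NUM_OPS] = { { 1, 10 }, { 2, 25 }, { 3, 17 }, { 4, 42 } };

        int replica[NUM_REPLICAS] = { 0, 0, 0 };

        for (int op = 0; op < NUM_OPS; op++)        /* deliver in total order */
            for (int r = 0; r < NUM_REPLICAS; r++)
                replica[r] = log[op].value;         /* apply the update       */

        for (int r = 0; r < NUM_REPLICAS; r++)
            printf("replica %d final value = %d (after seq %d)\n",
                   r, replica[r], log[NUM_OPS - 1].seq);
        return 0;
    }

If two replicas were allowed to apply the writes in different orders, their final values could differ, which is exactly the inconsistency that total ordering of the broadcasts rules out.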
- “William R. Cook, Eli Tilevich, et al., Language Design for Distributed Objects
The problems with transparency and mobile code have been well known since at least 1994. But in the absence of any fundamental new ideas, the same problematic approaches are reused, for example in the design of Java RMI.
In this essay we discuss a new programming construct called Remote Batch Invocation
A batch is a code block that combines remote and local execution over fine-grained object interfaces, but is executed by partitioning and remote evaluation
Remote Batch Invocation effectively addresses the shortcomings of transparent distribution with a controlled form of mobile code (see the sketch after this summary).
REMOTE PROCEDURE CALLS:
The original motivation for Remote Procedure Calls (RPC) was as a machine-oriented analog to the text-based, conversational command languages used in many distributed protocols
The idea was to replace commands with stub procedures that send messages to the remote system using standard data encoding
The result was transparent distribution, where remote procedure calls work just like local calls.
A key advantage of RPC is that procedure calls are ubiquitous in programming languages
REMOTE EVALUATION:
Remote evaluation is a form of mobile code in which a client sends code to a server to be executed.
It generalizes remote procedure calls, which can be viewed as a form of remote evaluation where the code is a single call.
Remote Batch Invocation (RBI) is a new programming construct that enables greater expressiveness in creating efficient distributed object systems. RBI retains the usability advantages of RPC-based programming abstractions for distributed computing, while eliminating or significantly improving on their limitations.”
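The cost argument behind batching can be made concrete with a toy calculation: N fine-grained remote calls pay roughly N network round trips, while shipping the whole block of operations to the server for remote evaluation pays one. The sketch below only illustrates this arithmetic; the operation names, the 50 ms round-trip figure, and the notion of a script of operations are assumptions for the example and are not the Remote Batch Invocation construct itself.

    /* Hedged sketch: round-trip cost of per-call RPC versus one batched trip. */
    #include <stdio.h>

    #define ROUND_TRIP_MS 50   /* assumed network round-trip time */

    typedef enum { OP_OPEN, OP_READ, OP_CLOSE } op_kind;

    static const char *op_name(op_kind k) {
        switch (k) {
        case OP_OPEN: return "open";
        case OP_READ: return "read";
        default:      return "close";
        }
    }

    int main(void) {
        op_kind script[] = { OP_OPEN, OP_READ, OP_CLOSE };
        int n = (int)(sizeof script / sizeof script[0]);

        /* Transparent RPC style: every fine-grained call crosses the network. */
        printf("RPC style: %d calls x %d ms = %d ms\n",
               n, ROUND_TRIP_MS, n * ROUND_TRIP_MS);

        /* Batch style: the whole block is shipped and evaluated remotely. */
        printf("batch style (");
        for (int i = 0; i < n; i++)
            printf("%s%s", op_name(script[i]), i + 1 < n ? ", " : "");
        printf("): 1 round trip = %d ms\n", ROUND_TRIP_MS);
        return 0;
    }

The saving grows with the number of fine-grained operations, which is why the essay argues for a controlled form of mobile code rather than pretending remote calls are local.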
- “Vikram S. Vairagade, Prof. Chanchal V. Dahat, and Anjali V. Bhatkar, Operating System Virtualization for Ubiquitous Computing
The basic concept of virtualization is to provide the benefits of the services and components irrespective of its physical presence.
-OS virtualization is needed as it provides the feature of transparent migration of applications.
-It aims to enable the ubiquitous environment and servers to be shared by providing the applications of various operating systems along with the user’s desktop.
-Using the ubiquitous environment, applications can be run on the host system without installation.
-The key concept of the system is to operate the user’s desktop from any handheld device (such as a smartphone or tablet) through a web browser, irrespective of the user’s location.
VIRTUAL OS FOR UBIQUITOUS COMPUTING
-This paper first describes the requirements for a ubiquitous computing system and then the utilization of virtualized operating systems as the infrastructure for such a system.
-In this section, we also describe resource management for creating stable computing environments for ubiquitous computing, and a configuration system that eases the use of virtual operating systems.
Conclusion
-This paper gives an idea of a ubiquitous computing infrastructure architecture that is based on virtualized operating systems to provide a secure, stable, and isolated computing environment. Our architecture enables ubiquitous devices and ubiquitous servers to be shared securely.
-Virtualization and centralized storage of services ensure that the virtual machine host technology becomes a scalable commodity, allowing future expansion of the infrastructure.”
- “Jerzy Brzezinski, Michal Szychowiak, and Dariusz Wawrzyniak, PAGE BASED DISTRIBUTED SHARED MEMORY FOR OSF/DCE
- Distributed shared memory systems strive to overcome the architectural limitations of shared memory computers and to make it easier to develop parallel programs in a distributed environment.
- However, in order to meet these goals in practice, many specific and difficult problems have to be solved.
- In this paper, the fundamentals of DSM systems’ construction, including basic design, mechanisms, memory consistency models, and problems, are presented.
- The general concept and hierarchical structure of a page-based DSM system for UNIX and OSF/DCE platforms have been proposed.
- Applications of the basic DCE components for improving the security, modularity, scalability, and portability of the proposed system in comparison with existing ones have been described.
DSM fundamentals:
- Distributed shared memory (DSM) is a single address space shared by several hosts connected via a network communication environment
- The DSM is a kind of virtual memory and the job of the DSM system is an automatic mapping of the shared virtual address space into the physical address space of the hosts composing the system.
Basic problems
- One of the problems in the implementation of a DSM system is the translation between virtual and physical addresses.
Distributed shared memory systems strive to join the advantages of both shared memory multiprocessors and distributed systems, offering, at least potentially, a virtual shared memory attractive to users as well as system scalability. As is known, however, in order to meet this goal in practice, many specific and difficult problems have to be solved.
In this paper we have analyzed, first, the fundamentals and problems of DSM systems’ construction.”
- “Harshal Garodi, Kiran More, Nikhil Jagtap, Suraj Chavhan, and Prof. Chandu Vaidya, Performance Enhancing in Real Time Operating System by Using HYBRID Algorithm
The Real Time Operating System (RTOS) supports applications that must meet deadlines, in addition to providing logically correct outcomes.
-In a multitasking operating system, applications must meet time deadlines and function within proper real-time constraints; to meet these constraints, real-time systems use different scheduling algorithms for scheduling tasks.
-Most real-time systems are designed using priority-based preemptive scheduling algorithms, with worst-case execution time estimates used to guarantee the execution of high-priority tasks.
– The hybrid algorithm schedules processes on a single processor and is preemptive.
-The advantage of the proposed algorithm is that it automatically switches between the EDF and ACO scheduling algorithms, overcoming the limitations of both algorithms discussed previously in the paper.
-This paper summarizes the state of the real-time field in the areas of scheduling and operating system kernels and discusses the paradigms underlying the scheduling approaches.
– The main objective of this paper is to compare two important task schedulers: the Earliest Deadline First (EDF) scheduler and the Ant Colony Optimization (ACO) based scheduler (a minimal EDF selection step is sketched after this summary).
– The hybrid algorithm is a dynamic scheduling algorithm and is beneficial for single-processor real-time operating systems.
PROBLEM STATEMENT:
-The purpose of a real-time system is to meet its constraints within a particular given time, i.e., its deadline.”
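To ground the EDF side of the comparison, here is a minimal sketch of the Earliest-Deadline-First selection step: among the ready tasks, dispatch the one whose deadline is nearest. The task names, deadlines, and the task structure are invented for the example; this is not the hybrid EDF/ACO algorithm proposed in the paper, only the basic EDF rule it builds on.

    /* Hedged sketch of the Earliest-Deadline-First dispatch decision. */
    #include <stdio.h>

    typedef struct {
        const char *name;
        int deadline_ms;   /* time until the task's deadline */
        int ready;         /* 1 if the task is ready to run  */
    } task;

    /* Return the index of the ready task with the nearest deadline, or -1. */
    static int pick_edf(const task *t, int n) {
        int best = -1;
        for (int i = 0; i < n; i++) {
            if (!t[i].ready) continue;
            if (best < 0 || t[i].deadline_ms < t[best].deadline_ms)
                best = i;
        }
        return best;
    }

    int main(void) {
        task tasks[] = {
            { "sensor_poll",   30, 1 },
            { "log_flush",    200, 1 },
            { "actuator_cmd",  15, 1 },
        };
        int n = (int)(sizeof tasks / sizeof tasks[0]);

        int next = pick_edf(tasks, n);
        if (next >= 0)
            printf("dispatch %s (deadline in %d ms)\n",
                   tasks[next].name, tasks[next].deadline_ms);
        return 0;
    }

EDF is optimal on a single processor when the task set is feasible but degrades under overload, which is presumably part of the limitation the hybrid switch to ACO-based scheduling aims to overcome.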
References
- “Chai, C. (2002). Consistency Issues in Distributed Shared Memory Systems. 1-11.”
- “Lee, C. (2002, February). Distributed Shared Memory. In Proceedings of the 15th CISL Winter Workshop, Kushu, Japan, February.”
- “Koutras, I., Anagnostopoulos, I., Bartzas, A., & Soudris, D. (2016). Improving Dynamic Memory Allocation on Many-Core Embedded Systems With Distributed Shared Memory. IEEE Embedded Systems Letters, 8(3), 57-60.”
- “Chiba, T., Yoo, M., & Yokoyama, T. (2013, December). A distributed real-time operating system with distributed shared memory for embedded control systems. In Dependable, Autonomic and Secure Computing (DASC), 2013 IEEE 11th International Conference on (pp. 248-255). IEEE.”
- “Gonzaga, T., Bentes, C., Farias, R., Castro, M. C., & Garcia, A. C. (2007, October). Using distributed-shared memory mechanisms for agents communication in a distributed system. In Intelligent Systems Design and Applications, 2007. ISDA 2007. Seventh International Conference on (pp. 39-46). IEEE.”
- “Lowenthal, D. K., Freeh, V. W., & Miller, D. W. (2001, April). Efficient support for two-dimensional data distributions in distributed shared memory systems. In Parallel and Distributed Processing Symposium, Proceedings International, IPDPS 2002, Abstracts and CD-ROM (pp. 8-pp). IEEE.”
- “Zhang, Q., & Liu, L. (2015, June). Shared memory optimization in virtualized cloud. In Cloud Computing (CLOUD), 2015 IEEE 8th International Conference on (pp. 261-268). IEEE.”
- “Ramesh, B., Ribbens, C. J., & Varadarajan, S. (2011, December). Is it time to rethink distributed shared memory systems? In Parallel and Distributed Systems (ICPADS), 2011 IEEE 17th International Conference on (pp. 212-219). IEEE.”
- “Zhuang, Y. C., Shieh, C. K., Liang, T. Y., & Chou, C. H. (2001, April). Maximizing speedup through performance prediction for distributed shared memory systems. In Distributed Computing Systems, 2001, 21st International Conference on (pp. 723-726). IEEE.”
- “Ahn, J. H., Lee, K. W., & Kim, H. J. (1995, August). Architectural issues in adopting distributed shared memory for distributed object management systems. In Distributed Computing Systems, 1995, Proceedings of the Fifth IEEE Computer Society Workshop on Future Trends of (pp. 294-300). IEEE.”
- “Carter, J. B., Khandekar, D., & Kamb, L. (1995, May). Distributed shared memory: Where we are and where we should be headed. In Hot Topics in Operating Systems, 1995 (HotOS-V), Proceedings, Fifth Workshop on (pp. 119-122). IEEE.”
- “Kim, H. J., & Han, D. S. (1999, December). Performance issues in the operating system for the page-based distributed shared memory machine. In TENCON 99. Proceedings of the IEEE Region 10 Conference (Vol. 2, pp. 1075-1078). IEEE.”
- “Scales, D. J., & Lam, M. S. (1994, November). The design and evaluation of a shared object system for distributed memory machines. In Proceedings of the 1st USENIX Conference on Operating Systems Design and Implementation (p. 9). USENIX Association.”
- “Lai, A. C., Shieh, C. K., & Kok, Y. T. (1997, February). Load balancing in distributed shared memory systems. In Performance, Computing, and Communications Conference, 1997. IPCCC 1997., IEEE International (pp. 152-158). IEEE.”
- “Hlaing, H. H., Aye, T. T., & Aye, W. (2008, May). A simple and effective Software Distributed Shared Memory System. In Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology, 2008. ECTI-CON 2008. 5th International Conference on (Vol. 1, pp. 53-56). IEEE.”
- “Protic, J., Tomasevic, M., & Milutinovic, V. (1995, January). A survey of distributed shared memory systems. In System Sciences, 1995. Proceedings of the Twenty-Eighth Hawaii International Conference on (Vol. 1, pp. 74-84). IEEE.”
- “Potts, D., & Kuz, I. (2006, May). Adapting distributed shared memory applications in diverse environments. In Cluster Computing and the Grid, 2006. CCGRID 06. Sixth IEEE International Symposium on (Vol. 2, pp. 9-pp). IEEE.”
- “Stumm, M., & Zhou, S. (1990). Algorithms implementing distributed shared memory. Computer, 23(5), 54-64.”
- “Nebro, A. J., Pimentel, E., & Troya, J. M. (1997, June). Applying distributed shared memory techniques for implementing distributed objects. In European Conference on Object-Oriented Programming (pp. 499-506). Springer Berlin Heidelberg.”
- “Silcock, J. (1996). A Consistency Model for Distributed Shared Memory on RHODOS among Shared Memory Consistency Models. Deakin University, School of Computing and Mathematics.”
- “Paul, K. (2015, June). Booting an Operating System: How do you run that first program?”
- “Deb, D., & Fuad, M. M. (2003). A Comparative Study of Page-based and Object-based Distributed Shared Memory. In Proceedings of the 6th International Conference on Computer and Information Technology, Dhaka, Bangladesh (pp. 511-516).”
- “Heinzle, H. P., Bal, H. E., & Langendoen, K. G. (1994). Implementing object-based distributed shared memory on Transputers. Transputer Applications and Systems’ 94, 390-405.”
- “Carter, J. B. (1995). Design of the Munin distributed shared memory system. Journal of Parallel and Distributed Computing, 29(2), 219-227.”
- “Protic, J., Tomasevic, M., & Milutinovic, V. (1996). Distributed shared memory: Concepts and systems. IEEE Parallel & Distributed Technology: Systems & Applications, 4(2), 63-71.”
- “Reinhardt, S. K. (1996). Mechanisms for distributed shared memory.”
- “Cook, W. R., Tilevich, E., Ibrahim, A., & Wiedermann, B. (2009, June). Language design for distributed objects. In Proceedings of the 1st International Workshop on Distributed Objects for the 21st Century (p. 4). ACM.”
- “Vairagade, V. S., Dahat, C. V., & Bhatkar, A. V. Operating System Virtualization for Ubiquitous Computing.”
- “Brzezinski, J., Szychowiak, M., & Wawrzyniak, D. Page Based Distributed Shared Memory for OSF/DCE.”
- “Garodi, H., More, K., Jagtap, N., & Chavhan, S. (2015, March). Performance Enhancing in Real Time Operating System by Using HYBRID Algorithm.”