
Monday, September 20, 2010

A Study of Consistency Mechanisms in Distributed Systems with a comparison between Distributed Lock Managers and Leases

Abstract:

Current trends in computing are shifting toward distributed and parallel architectures, which place a premium on concurrent access to shared resources such as databases and memory. Locking mechanisms can be used to achieve high concurrency without corrupting these resources. A wide range of synchronization protocols exists, some providing strong consistency and others only weak consistency, and there is a trade-off between the complexity of a locking mechanism and the level of consistency it offers. In this report I describe two locking mechanisms suited to the consistency requirements of distributed computing environments: Distributed Lock Managers (DLMs) and Leases. Distributed lock managers adapt conventional client-server locking to distributed settings; they provide strong consistency but are complex to implement. Leases are time-based protocols, a hybrid of server-based and client-based locking strategies. I compare these two approaches and propose suitable use cases for each.

Introduction:

Several trends anticipate the properties of future distributed systems. Systems are being extended over wide-area networks, processor speeds continue to grow, and larger numbers of hosts, both clients and servers, are being tied together within a single system. This growing use of distributed systems makes properly functioning consistency mechanisms essential. Multiple nodes may access shared resources concurrently, and all of them must see a consistent view of those resources. For this reason, while one node is writing to such a resource, no other node should be allowed to access it. This mutual exclusion can be achieved through locking.
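The mutual-exclusion idea above can be sketched in a few lines. This is a minimal illustration, not a distributed implementation: a local `threading.Lock` stands in for the lock that a lock manager would grant over the network, and the `SharedResource` class is a hypothetical name introduced here for illustration.

```python
import threading

class SharedResource:
    """A shared value guarded by a single lock, so that a write is never
    interleaved with another access and every reader sees a consistent view."""

    def __init__(self):
        self._lock = threading.Lock()
        self.value = 0

    def write(self, new_value):
        # Only one holder of the lock at a time: other writers and readers
        # block here until the lock is released.
        with self._lock:
            self.value = new_value

    def read(self):
        with self._lock:
            return self.value
```

In a real distributed system the `with self._lock:` step would be a round trip to a lock service (or a lease acquisition), but the invariant it enforces is the same.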

The World Wide Web (WWW), an extensively distributed architecture, has seen exponential growth in recent times, both in the number of users and in the diversity of applications accessing information stored at geographically distributed sites. This growth, however, is not uniform: certain objects are accessed far more often than others, creating hot spots. These hot spots lead to overload at the server, congestion in the network, and increased client response times. In addition, newer applications demand smaller access latencies and stronger consistency guarantees. Consider the following illustration:

Consider a web server that provides online stock trading over the Internet [9]. Online traders typically require the latest quote for the stock under consideration. In addition, they want the latest news about the company, its quarterly earnings, charts of its performance over the last few months, and other statistical information. The semantic requirements of this information vary. Traders can tolerate small inconsistencies in statistical information (such as the number of employees) that is not critical to making a decision. The stock quotes, latest news, and other critical information, however, must be consistent at all times: a trader should not be handed an inconsistent stock quote and asked to decide whether to buy. This example shows that applications require different consistency guarantees, often needing these diverse guarantees to coexist.

To prevent stale information from being transmitted to clients, a proxy must keep its cached objects consistent with those on the servers. Existing proxies mostly employ weak consistency mechanisms: the proxy does not guarantee that an object served from the cache is consistent with the server at all times. This may be good enough for applications, such as reading background information about a company, that do not require strong consistency. Until recently, applications did not impose stronger consistency requirements; with the evolution of the web, however, applications like online trading and shopping are gaining dominance, and these demand strong consistency. Current proxy consistency mechanisms provide little or no support for such applications.
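A lease, the time-based protocol discussed in this report, gives one way to bound how stale such a cached object can be. The sketch below assumes a hypothetical `fetch_from_server` callback supplied by the caller; while an object's lease term has not expired, the proxy serves the cached copy, and once it has, the proxy must go back to the origin server before answering.

```python
import time

class LeasedCache:
    """A minimal sketch of a lease-based proxy cache. Each entry carries an
    expiry time; staleness is bounded by the lease term."""

    def __init__(self, fetch_from_server, lease_term=5.0):
        self._fetch = fetch_from_server
        self._term = lease_term
        self._entries = {}  # key -> (value, lease_expiry)

    def get(self, key, now=None):
        # `now` can be injected for testing; by default use a monotonic clock.
        now = time.monotonic() if now is None else now
        entry = self._entries.get(key)
        if entry is not None and now < entry[1]:
            return entry[0]  # lease still valid: serve from the cache
        value = self._fetch(key)  # lease expired: revalidate with the server
        self._entries[key] = (value, now + self._term)
        return value
```

With a lease term of zero this degenerates to contacting the server on every access (strong consistency at high cost); a long term approximates the weak consistency of existing proxies. Tuning the term per object is one way the diverse guarantees described above can coexist.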

A prototype of a strong consistency mechanism for the web has been developed and deployed on the current Internet. The work enables weak consistency mechanisms to coexist with those that provide strong consistency, thereby serving the diverse needs of applications.


For more info visit http://www.enjineer.com
