Distributed Systems

Before the introduction of distributed systems, computers were large and expensive, and every system was centralized because there was no way to connect machines to one another. The arrival of the Internet changed this: once many computers could communicate over a network, distributed systems became possible. A distributed system is a software system whose components are located on different machines yet communicate and coordinate with one another, passing messages over a network, in order to accomplish a common goal. The main difference from a centralized system is that a centralized system lives in a single place and operates as one unit, which is also its major weakness: a failure anywhere in the system can render the whole system useless. A distributed system avoids much of this fragility. A failure in one part does not necessarily bring down the whole system, and its resources can easily be multiplied to meet demand, i.e. it is scalable and uses resource sharing to grow and accommodate increased workloads.

Characteristics of Distributed Systems

Resource Sharing

Resource sharing is when a resource on one host is made available to other hosts on a computer network. Sharable resources include data, programs, and hardware such as printers and disk storage. Resources are typically managed by servers and accessed by clients. Applications such as RSS readers are a good example of how resources can be shared: the feed data is stored on a server and accessed by client devices such as tablets and mobile phones.
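
As a small illustration of the idea, here is a minimal Python sketch (not from the article) in which one host shares its files over HTTP and another host fetches them; the hostname, port, and file name are purely illustrative assumptions.

```python
# One host shares a directory over HTTP; any other host on the network can
# fetch from it. Hostname, port, and file name below are illustrative only.
from http.server import HTTPServer, SimpleHTTPRequestHandler
from urllib.request import urlopen

def share_current_directory(port=8000):
    """Run on the host that owns the resource (e.g. a folder of documents)."""
    HTTPServer(("0.0.0.0", port), SimpleHTTPRequestHandler).serve_forever()

def fetch_shared_file(host, port, path):
    """Run on any client host; reads the shared resource over the network."""
    with urlopen(f"http://{host}:{port}/{path}") as response:
        return response.read()

# Example (assuming a file named report.txt exists on the sharing host):
# data = fetch_shared_file("file-server.local", 8000, "report.txt")
```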

Concurrency

Concurrency refers to a property of a system in which several processes or computations execute at the same time and may interact with one another. The processes may run on multiple cores of the same chip, as preemptively time-shared threads on the same processor, or on physically separate processors. In a network of computers, concurrent program execution is the norm: different users work on separate computers and share resources such as web pages or files when required. By adding more resources, such as additional computers or a better network, the system's performance and capacity for sharing can be greatly increased.
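
The sketch below, a hedged Python example rather than anything from the article, shows several tasks executing concurrently in one process; the simulated request handler is a stand-in for any real work such as a network call.

```python
# Several tasks run at the same time and their results are combined.
from concurrent.futures import ThreadPoolExecutor
import time

def handle_request(request_id):
    time.sleep(0.1)  # simulate I/O such as a network call or disk read
    return f"request {request_id} done"

with ThreadPoolExecutor(max_workers=4) as pool:
    # The four worker threads execute the eight tasks concurrently
    # rather than strictly one after another.
    results = list(pool.map(handle_request, range(8)))

print(results)
```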

Scalability

Scalability is the ability of a system, process, or network to accommodate growth in requests or workload. This is a major limitation of centralized systems, which cannot grow the way distributed systems do. To handle more workload, a distributed system can transparently take on more resources at any point in time, such as additional computers, storage, or processes, to meet demand. Scalability in a distributed system covers at least three dimensions:

  • Size Scalability
  • Geographical Scalability
  • Administrative Scalability
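
As a small sketch of the size dimension, the following Python example (illustrative only; the worker addresses are hypothetical) spreads requests over a pool of workers, so that handling more load amounts to registering more machines.

```python
# Requests are spread over a pool of workers; scaling out means adding
# entries to the pool. The worker addresses are hypothetical placeholders.
import itertools

class WorkerPool:
    def __init__(self, workers):
        self.workers = list(workers)
        self._cycle = itertools.cycle(self.workers)

    def add_worker(self, address):
        """Scale out: register one more machine without touching the rest."""
        self.workers.append(address)
        self._cycle = itertools.cycle(self.workers)

    def route(self, request):
        """Round-robin dispatch; a real system might hash or load-balance."""
        return next(self._cycle), request

pool = WorkerPool(["10.0.0.1", "10.0.0.2"])
pool.add_worker("10.0.0.3")  # demand grew, so one more node was registered
print(pool.route("index /docs"))
```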

Openness

Openness is the degree to which a system uses open interfaces to interoperate with other systems. An interface is a common boundary, a means of making a connection between two software components. Open systems should conform to:

  • Well-defined interfaces
  • Support for portability of applications
  • Easy interoperation between systems
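
A minimal Python sketch of a well-defined interface follows (the class and method names are illustrative assumptions, not part of any standard): any component that implements the agreed interface can be swapped in, which is what lets independently written parts interoperate.

```python
# The abstract class is the agreed-upon boundary between components;
# any implementation of it interoperates with code written against it.
from abc import ABC, abstractmethod

class Storage(ABC):
    @abstractmethod
    def put(self, key: str, value: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStorage(Storage):
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data[key]

# Code written against Storage is portable across implementations:
store: Storage = InMemoryStorage()
store.put("greeting", b"hello")
print(store.get("greeting"))
```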

Fault Tolerance

Fault tolerance is the property that enables a system to continue operating properly when some of its components fail (or when one or more faults occur within them). If operating quality decreases at all, the decrease is proportional to the severity of the failure, whereas in a naively designed system even a small failure can cause a total breakdown. All computer systems are prone to unpredictable failure, and it is the responsibility of system designers to plan for the consequences of possible failures. A fault in the network may isolate the computers connected to it, but it does not stop them running; in fact, the programs on them may not even be able to distinguish a network failure from a slow connection. Similarly, the failure of a computer, or the unexpected crash of a program somewhere in the system, is not immediately made known to the other components with which it communicates. When individual parts of the system fail, the other parts keep running.
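
The following Python sketch (purely illustrative; the replica addresses are made up) shows one common fault-tolerance technique: retrying a read against other replicas so that a single failed node does not fail the whole operation.

```python
# Try each replica in turn; the operation fails only if every replica fails.
def read_from_replicas(key, replicas, fetch):
    last_error = None
    for replica in replicas:
        try:
            return fetch(replica, key)
        except ConnectionError as error:
            last_error = error  # tolerate the fault and move on
    raise RuntimeError(f"all replicas failed for {key!r}") from last_error

def fetch(replica, key):
    """Stand-in for a real network read; one node is simulated as crashed."""
    if replica == "10.0.0.1":
        raise ConnectionError(f"{replica} is down")
    return f"value of {key} from {replica}"

print(read_from_replicas("user:42", ["10.0.0.1", "10.0.0.2"], fetch))
```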

Transparency

Transparency means that some aspect of the distributed system is hidden from the user (whether a programmer, an end user, or an application program). It is achieved by mechanisms included in the system layers below the interface at which the transparency is required. Several basic transparencies have been defined for distributed systems, such as access, location, failure, replication, performance, and scaling transparency; however, not all of them are appropriate for every system, or available at the same level of interface. The key design goals of a distributed system include high performance, reliability, scalability, consistency, and security. All of these contribute to a higher-quality system, but there are some basic design issues associated with distributed systems, including:

  • Naming
  • Communications
  • Software Structure
  • Workload Allocations
  • Consistency Maintenance

Distributed System Architectures

Distributed systems incorporate various architectures, including:

Client-Server Architecture

In this type of architecture there are only two roles: client and server. Servers are highly optimized, powerful computers dedicated to tasks such as managing network traffic and delivering files. Clients rely on servers for resources such as files or even processing power; clients range from mobile devices and tablets to personal computers. In client-server communication the client always initiates the connection, and the server always waits for requests from clients. Examples of clients are web browsers, chat clients, and email clients. Examples of servers are web servers, database servers, and file servers such as Google Drive.
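
A minimal Python sketch of the pattern (host and port are illustrative assumptions; the two functions are meant to run in separate processes): the server waits for connections and the client initiates them.

```python
# The server waits for requests; the client always initiates the connection.
import socket

def run_server(host="127.0.0.1", port=9000):
    with socket.create_server((host, port)) as server:
        conn, _addr = server.accept()          # wait for a client
        with conn:
            request = conn.recv(1024)
            conn.sendall(b"echo: " + request)  # serve the request

def run_client(host="127.0.0.1", port=9000):
    with socket.create_connection((host, port)) as conn:
        conn.sendall(b"hello server")          # client initiates
        print(conn.recv(1024).decode())
```

Running run_server() in one process and run_client() in another prints the echoed reply on the client side.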

Peer to Peer

Peer-to-peer is another distributed system architecture in which every participating host can act as both client and server. Unlike the traditional client-server model, where the roles of provider and consumer of resources are divided, peers make their resources available to other participants without coordination by dedicated servers or stable hosts. A node can be a client and a server at the same time; like every other distributed architecture, peers communicate over a network, the only difference being that at the application layer peers talk directly to one another. A major drawback of the peer-to-peer architecture is that it is exposed to various attacks from malicious users.
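
A minimal Python sketch of a single peer (addresses and ports are hypothetical): the same process listens for incoming connections like a server and connects out to other peers like a client, with no central coordinator.

```python
# The same process plays both roles: it listens for other peers (server
# role) and connects out to them (client role). Addresses are hypothetical.
import socket
import threading

def listen(port):
    with socket.create_server(("0.0.0.0", port)) as server:
        while True:
            conn, _addr = server.accept()
            with conn:
                print("received:", conn.recv(1024).decode())

def send(peer_host, peer_port, message):
    with socket.create_connection((peer_host, peer_port)) as conn:
        conn.sendall(message.encode())

# Act as server and client at the same time:
threading.Thread(target=listen, args=(9001,), daemon=True).start()
# send("other-peer.local", 9002, "hello from this peer")
```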

The 3 Tier Architecture

The 3-tier architecture is based on the client-server architecture, with the functional process logic, the data access/storage, and the user interface developed and maintained as independent, separate modules. This allows each module to be upgraded or replaced independently. The tiers consist of:

  1. Presentation Tier: the topmost layer of the architecture. It provides the GUI through which users access the system.
  2. Application Tier: also known as the business logic layer, this middle tier is responsible for processing and application functions.
  3. Data Tier: the database servers that store the data and provide an API to the application tier.
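
As a hedged Python sketch of this separation (the class and method names are illustrative, not from the article), each tier is its own component: the presentation tier talks only to the application tier, and the application tier talks only to the data tier, so any one of them can be replaced without touching the others.

```python
class DataTier:
    """Data tier: owns storage and exposes an API to the tier above."""
    def __init__(self):
        self._orders = {1: "pending"}

    def get_order_status(self, order_id):
        return self._orders.get(order_id, "unknown")

class ApplicationTier:
    """Application tier: business logic and processing."""
    def __init__(self, data: DataTier):
        self._data = data

    def describe_order(self, order_id):
        status = self._data.get_order_status(order_id)
        return f"Order {order_id} is currently {status}."

class PresentationTier:
    """Presentation tier: what the user sees."""
    def __init__(self, app: ApplicationTier):
        self._app = app

    def render(self, order_id):
        print(self._app.describe_order(order_id))

PresentationTier(ApplicationTier(DataTier())).render(1)
```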

Examples of Distributed Systems

Google Search Engine

The famous Google search engine is an example of how distributed systems can make a difference in both performance and stability. With over 10 billion searches made per month, Google's search engine indexes the entire contents of the World Wide Web, an enormous amount of data in many forms, from PDFs to web pages and more. A centralized system could not handle this load even with the best processor available today. Google's search engine serves this huge volume of requests with low latency, and Google can easily add more processing power to increase the performance of the system.

Massively Multiplayer Online Games (MMOGs)

Games such as Activision's Call of Duty, Battlefield, and Microsoft's Halo are good examples of MMOGs, in which very large numbers of users interact through the Internet within a virtual world of combat and exploration. These systems are complex and sophisticated enough that millions of players can log on, join rooms, and play simultaneously online. The engineering of MMOGs represents another major challenge for distributed systems technologies, particularly because of the need for fast response times to keep the gameplay and user experience smooth. Other challenges include the real-time propagation of events to the many players and maintaining a synchronized view of the shared world. MMOGs therefore provide an excellent example of the challenges facing modern distributed systems designers, and a number of solutions have been proposed for their design. EVE Online, for example, one of the largest online games, uses a client-server architecture in which a single copy of the state of the world is maintained on a centralized server and accessed from players' consoles or other supported devices.

Another huge online game, EverQuest, adopts a more distributed architecture in which the universe is partitioned across a very large number of servers that may also be geographically distributed; this architecture is naturally extensible by adding servers. Other newly proposed solutions are based on peer-to-peer technology, where every participant contributes resources such as storage and processing to accommodate the game, without relying on client-server principles.

Advantages of Distributed System

  • It is economical (commodity microprocessors have better price/performance than mainframes)
  • It is reliable (one system failure does not render the whole system useless)
  • It is extensible (resources such as computers and software can be added incrementally whenever the system needs more processing power for more data and computing)
  • Availability (if one site fails in a distributed system, the remaining sites may be able to continue operating, so the failure of one site does not necessarily imply the shutdown of the overall system)

Disadvantages of Distributed System

  • Data may be accessed without the owner's consent (a significant security concern in modern systems)
  • Software development cost is higher (because it is more difficult to implement a distributed database system)
  • Increased processing overhead (the exchange of information and the additional computation required to achieve inter-site coordination are a form of overhead that does not arise in centralized systems)

Conclusion

Distributed systems offer economical, reliable, and fast performance: reliable in the sense that if one machine crashes the system as a whole can still survive, and fast because load distribution enhances performance. Unlike regular independent computers, distributed systems offer data and resource sharing, flexibility, and communication.

