
Grid Computing

Last Updated : 20 Jun, 2024

Grid computing is a distributed architecture that combines computer resources from different locations to achieve a common goal.  It breaks down tasks into smaller subtasks, allowing concurrent processing. In this article, we are going to discuss grid computing.
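To make the decomposition concrete, here is a minimal Python sketch that splits one large job (a big summation) into subtasks that run concurrently. Local worker processes stand in for grid nodes; on a real grid the same subtasks would be dispatched to machines at different locations, and the chunking scheme here is purely illustrative.

```python
# Grid-style task decomposition, simulated locally: one large job is split
# into smaller subtasks that run concurrently. Local worker processes stand
# in for the geographically distributed machines of a real grid.
from multiprocessing import Pool

def subtask(bounds):
    """One unit of work: sum the integers in [start, end)."""
    start, end = bounds
    return sum(range(start, end))

if __name__ == "__main__":
    n, workers = 10_000_000, 4
    step = n // workers
    chunks = [(i, min(i + step, n)) for i in range(0, n, step)]

    with Pool(workers) as pool:
        partial_sums = pool.map(subtask, chunks)  # subtasks run concurrently

    print(sum(partial_sums))  # combined result: 49999995000000
```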

What is Grid Computing?

Grid Computing can be defined as a network of computers working together to perform a task that would be difficult for a single machine. All machines on the network work under the same protocol, acting together as a virtual supercomputer. The tasks they work on may include analyzing huge datasets or running simulations that require high computing power. Computers on the network contribute resources like processing power and storage capacity to the pool.

Grid Computing is a subset of distributed computing in which a virtual supercomputer is composed of machines connected by a network, usually Ethernet or sometimes the Internet. It can also be seen as a form of parallel computing where, instead of many CPU cores on a single machine, the cores are spread across multiple machines in different locations. The concept of grid computing isn't new, but it is not yet standardized: there are no universally established and accepted rules and protocols for building grids.

Why is Grid Computing Important?

  • Scalability: It allows organizations to scale their computational resources dynamically. As workloads increase, additional machines can be added to the grid, ensuring efficient processing.
  • Resource Utilization: By pooling resources from multiple computers, grid computing maximizes resource utilization. Idle or underutilized machines contribute to tasks, reducing wastage.
  • Complex Problem Solving: Grids handle large-scale problems that require significant computational power. Examples include climate modeling, drug discovery, and genome analysis.
  • Collaboration: Grids facilitate collaboration across geographical boundaries. Researchers, scientists, and engineers can work together on shared projects.
  • Cost Savings: Organizations can reuse existing hardware, saving costs while accessing excess computational resources. Additionally, a grid can be extended with cloud resources cost-effectively.

Working of Grid Computing

A Grid computing network mainly consists of these three types of machines 

  • Control Node: A computer, usually a server or a group of servers, that administers the whole network and keeps account of the resources in the network pool.
  • Provider: A computer that contributes its resources to the network resource pool.
  • User: A computer that uses the resources on the network.

When a computer makes a request for resources to the control node, the control node grants it access to the resources available on the network. When that machine is idle, it should ideally contribute its own resources back to the pool. Hence a normal computer on the network can switch between being a user and a provider based on its needs.

The nodes may consist of machines with similar platforms running the same OS (a homogeneous network), or machines with different platforms running various operating systems (a heterogeneous network). Support for heterogeneous nodes is what distinguishes grid computing from many other distributed computing architectures.

The network and its resources are controlled by software generally known as middleware, which is responsible for administering the network; the control nodes are merely its executors. Because a grid computing system should use only a computer's unused resources, it is the control node's job to ensure that no provider is overloaded with tasks.
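The three roles described above can be sketched in a few lines of Python. This is a toy, single-process illustration of the control node's bookkeeping; the class and method names are invented for the example and do not correspond to any real middleware such as Globus.

```python
# Toy model of the three machine roles in a grid (invented API, not real
# middleware): a control node keeps account of registered providers and
# hands a user's task to a provider that is not already loaded.

class Provider:
    """A machine contributing resources to the pool."""
    def __init__(self, name):
        self.name = name
        self.busy = False

    def run(self, task):
        self.busy = True
        result = task()        # execute the user's task with local resources
        self.busy = False
        return result

class ControlNode:
    """Administers the pool and keeps account of provider load."""
    def __init__(self):
        self.pool = []

    def register(self, provider):
        self.pool.append(provider)

    def submit(self, task):
        # Skip busy providers, mirroring the rule that no provider
        # should be overloaded with tasks.
        for p in self.pool:
            if not p.busy:
                return p.name, p.run(task)
        raise RuntimeError("no idle provider available")

grid = ControlNode()
grid.register(Provider("node-a"))
grid.register(Provider("node-b"))

# A "user" machine submits work through the control node.
node, result = grid.submit(lambda: sum(x * x for x in range(1000)))
print(node, result)   # node-a 332833500
```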
The meaning of the term Grid Computing has changed over the years. According to "The Grid: Blueprint for a New Computing Infrastructure" by Ian Foster and Carl Kesselman, published in 1999, the original idea was to consume computing power the way electricity is consumed from a power grid. That idea is closer to the current concept of cloud computing, whereas grid computing is now viewed as a distributed collaborative network. Today, grid computing is used in various institutions to solve mathematical, analytical, and physics problems.


What are the Types of Grid Computing?

  • Computational grid: A computational grid is a collection of high-performance processors. It enables researchers to utilize the combined computing capacity of the machines. Researchers employ computational grid computing to complete resource-intensive activities like mathematical calculations.
  • Scavenging grid: Similar to computational grids, CPU scavenging grids have a large number of conventional computers. Scavenging refers to the process of searching for available computing resources in a network of normal computers.
  • Data grid: A data grid is a grid computing network that connects multiple computers to provide large-scale data storage. You can access the stored data as if it were on your local machine, without worrying about where it is physically located on the grid (a minimal sketch of this location transparency follows below).
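The location transparency of a data grid can be illustrated with a small catalog sketch. Everything below (the class, node names, and file names) is invented for the example, and the networked fetch a real data grid would perform is elided.

```python
# Illustrative data-grid catalog: files live on different nodes, but callers
# read them by logical name without knowing the physical location.

class DataGrid:
    def __init__(self):
        self.catalog = {}           # logical name -> (node, payload)

    def store(self, name, node, payload):
        self.catalog[name] = (node, payload)

    def read(self, name):
        node, payload = self.catalog[name]
        # A real data grid would fetch the bytes from `node` over the
        # network here; this sketch just returns them directly.
        return payload

grid = DataGrid()
grid.store("genome/chr1.fa", node="storage-eu-1", payload=b"ACGT...")
grid.store("genome/chr2.fa", node="storage-us-2", payload=b"TTGA...")

# The caller never specifies a physical location.
print(grid.read("genome/chr1.fa"))   # b'ACGT...'
```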

Use Cases of Grid Computing

Grid computing is used by research institutions and enterprises for large-scale scientific and analytical workloads such as climate modeling, drug discovery, genome analysis, and physics simulations. Well-known examples include volunteer-computing projects like SETI@home and CERN's Worldwide LHC Computing Grid.

Advantages of Grid Computing

  • Grid Computing provides high resource utilization.
  • Grid Computing allows parallel processing of tasks.
  • Grid Computing is designed to be scalable.

Disadvantages of Grid Computing

  • Grid software is still evolving, with few widely accepted standards.
  • Grid computing introduces complexity in setup and administration.
  • Limited flexibility: workloads must fit the grid's shared-resource model.
  • Security risks: jobs run on machines outside the owner's direct control.

What is Distributed Computing?

Distributed computing refers to a system where processing and data storage are distributed across multiple devices or systems, rather than being handled by a single central device. In a distributed system, each device or system has its own processing capabilities and may also store and manage its own data. These devices or systems work together to perform tasks and share resources, with no single device serving as the central hub. One example of a distributed computing system is a cloud computing system, where resources such as computing power, storage, and networking are delivered over the Internet and accessed on demand. In this type of system, users can access and use shared resources through a web browser or other client software.

What is Cluster Computing?

Cluster computing is a collection of tightly or loosely connected computers that work together so that they act as a single entity. The connected computers execute operations together, creating the impression of a single system. Clusters are generally connected through fast local area networks (LANs).

Frequently Asked Questions on Grid Computing – FAQs

Which is more scalable: cloud or grid computing?

Cloud computing is generally more scalable than grid computing, since cloud resources can be provisioned on demand rather than being limited to the machines enlisted in the grid.

In which type of computing do all devices share the same hardware and operating system?

In cluster computing, all the devices typically share the same hardware and operating system.

How does grid computing handle failure and redundancy?

Load balancing and redundancy can be implemented to handle failures: if one node fails, its work is redistributed to the remaining nodes, and redundant copies of tasks or data allow faster recovery. This is why load balancing is used in many fault-tolerant systems.
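As a rough illustration of the redundancy idea, the sketch below retries a task on the next replica provider when one fails. The functions are invented stand-ins, not a real grid scheduler API.

```python
# Failover through redundancy (toy example): try each redundant provider in
# turn and return the first successful result.

def submit_with_failover(task, providers):
    last_error = None
    for provider in providers:
        try:
            return provider(task)      # first healthy provider wins
        except Exception as err:       # provider failed: try the next one
            last_error = err
    raise RuntimeError("all providers failed") from last_error

def flaky(task):
    raise ConnectionError("node down")

def healthy(task):
    return task()

print(submit_with_failover(lambda: 6 * 7, [flaky, healthy]))   # 42
```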



