In the late 1990s and early 2000s, the idea of grid computing, a type of distributed computing that harnesses the power of many computers to handle large computational tasks, was all the rage, at least among organizations with high-performance computing (HPC) needs. One of the most notable projects to make use of grid computing was SETI@home, which utilized thousands of Internet-connected computers to search for extraterrestrial intelligence (and still does).
Yet despite the promise of grid computing and the efforts of major vendors such as Sun Microsystems (NASDAQ: JAVA), IBM (NYSE: IBM) and HP (NYSE: HPQ), grid computing failed to catch on in mainstream enterprises, remaining largely the province of government and scientific institutions with data-intensive storage and computing needs but relatively few users. Enterprise adoption has been limited to areas such as R&D and data-intensive financial simulations, rather than the mainstream data centers the technology's proponents had hoped to win over.
Now enterprises are embracing a similar technology: cloud computing and services such as Amazon's (NASDAQ: AMZN) Simple Storage Service (S3), which provide companies with scalable, high-speed data storage and services at an attractive price.
Can cloud computing succeed where grid failed and find widespread acceptance in enterprise data centers? And is there still room for grid computing in the brave new world of cloud computing? We asked some grid computing pioneers for their views on the issue.
Differences Between Clouds and Grids

While there are many similarities between grid and cloud computing, it is the differences that matter most. Grid computing is better suited for organizations with large amounts of data being requested by a small number of users (or few but large allocation requests), whereas cloud computing is better suited to environments where a large number of users request small amounts of data (or many but small allocation requests).
Read the rest at EnterpriseStorageForum.