The digital economy continues to grow, and as it does, it places increasing value on the information held within data centers. Today, enormous amounts of data are being created from a variety of sources, such as applications, mobile devices, big data analytics, and the cloud. This is changing the speed with which business is conducted and the scale at which it occurs. Moreover, the digital explosion shows no sign of slowing: stored data is expected to grow by roughly 50% per year over the next few years.
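To put that growth rate in perspective, a quick back-of-the-envelope calculation helps. The starting size of 1 PB and the steady 50% compound annual rate below are illustrative assumptions, not measured figures:

```python
# Back-of-the-envelope projection of stored data under 50% annual growth.
# The starting size (1 PB) and the steady 50% rate are assumptions for
# illustration only.

def projected_size_pb(start_pb: float, annual_growth: float, years: int) -> float:
    """Return the projected data set size after compounding annual growth."""
    return start_pb * (1 + annual_growth) ** years

start = 1.0   # petabytes today (hypothetical)
rate = 0.50   # 50% growth per year

for year in range(1, 6):
    print(f"Year {year}: {projected_size_pb(start, rate, year):.2f} PB")
```

At that rate, a 1 PB data set grows to roughly 7.6 PB within five years, which is why multi-petabyte planning horizons arrive faster than budgets do.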
IT managers and decision makers within companies and government agencies face the challenge of maintaining business continuity for multi-petabyte (PB) data sets with limited IT budgets and reduced IT staffing. Overall spending on hardware and software has increased by 25% over the past five years, and the biggest challenge is to meet this data growth in an efficient and cost-effective manner. The main effort should focus on new technologies that will enable future platforms for fast data access.
Enterprises face challenges such as the vast quantities of data generated in our digital universe, the ever-growing number of applications, the cloud, virtualization, and big data analytics. Taken together, these issues pressure enterprise infrastructures to deliver higher performance and improved responsiveness at greater levels of efficiency.
There are three basic elements of performance in a data center: processing power provided by servers, network services provided by switches and routers, and storage systems, which consist of disks and SAN and NAS controllers. Each of these elements is under constant strain to keep up with the digital demands of users. Servers and networks have kept pace through added power and the intelligent use of that power, but storage has not, and it has become the bottleneck of the enterprise. The storage bottleneck has moved beyond being an IT problem and now creates a perilous situation for the organization as a whole. So what is causing the storage I/O bottleneck?
Of the two elements that have kept pace with growing digital demand, compute power has done so through increased performance and core density, as well as through greater intelligence in the form of server virtualization and scale-out clustering or grid infrastructures. Networks have similarly kept pace through increased bandwidth capacity and intelligent use of that capacity via QoS, prioritization, and efficient wide-area connectivity.
Meanwhile, storage performance has not kept pace. Instead, it has remained frozen in the same architectural design for at least a decade: a high-performance SAN or NAS controller pair driving an increasing number of disks. While adding drives can improve performance, there is a limit both to the number of drives these controller pairs can support and to the amount of inbound traffic they can sustain. This controller (SAN) or head (NAS) is now the primary bottleneck limiting storage performance.
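The controller-as-bottleneck argument can be sketched numerically. The per-drive and per-controller figures below are hypothetical, chosen only to show how aggregate drive throughput quickly exceeds what a controller pair can service:

```python
# Hypothetical model: adding drives raises raw IOPS linearly, but the
# IOPS actually delivered to hosts is capped by the controller pair.
# All figures are illustrative assumptions, not vendor specifications.

DRIVE_IOPS = 200               # a single spinning disk (assumed)
CONTROLLER_MAX_IOPS = 50_000   # ceiling of the SAN/NAS controller pair (assumed)

def delivered_iops(num_drives: int) -> int:
    """IOPS the array can deliver: aggregate drive IOPS, capped by the controller."""
    raw = num_drives * DRIVE_IOPS
    return min(raw, CONTROLLER_MAX_IOPS)

for drives in (50, 100, 250, 500, 1000):
    raw = drives * DRIVE_IOPS
    print(f"{drives:4d} drives: raw {raw:7,d} IOPS -> delivered {delivered_iops(drives):7,d} IOPS")
```

Under these assumed numbers, every spindle past the 250th adds raw capability that the controller can never pass through to hosts, which is exactly the shape of the bottleneck described above.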