Caching SAN adapter
In an enterprise server, a caching SAN adapter is a host bus adapter (HBA) for storage area network (SAN) connectivity that accelerates performance by transparently storing duplicate data, so that future requests for that data can be served faster than retrieving it from the original source. A caching SAN adapter is used to accelerate the performance of applications across multiple clustered or virtualized servers and uses DRAM, NAND flash, or other memory technologies as the cache. The key requirement for the cache memory is that it be faster than the media holding the original copy of the data; otherwise no acceleration is achieved.
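To make the basic read-caching idea concrete, the following minimal Python sketch is purely illustrative (names such as `read_from_san` and `fast_tier` are hypothetical, and this is not vendor code): repeat reads are served from a fast tier standing in for DRAM or flash, while misses fall back to the slower SAN volume and populate the cache for later requests.

```python
# Illustrative read-through cache: serve repeat reads from a fast tier
# (standing in for DRAM or NAND flash) and fall back to the slower SAN
# volume on a miss. All names are hypothetical.

class ReadThroughCache:
    def __init__(self, read_from_san, capacity=1024):
        self.read_from_san = read_from_san   # slow path: fetch a block from SAN storage
        self.capacity = capacity             # number of blocks the fast tier can hold
        self.fast_tier = {}                  # block_address -> data

    def read(self, block_address):
        data = self.fast_tier.get(block_address)
        if data is not None:
            return data                      # cache hit: served at memory/flash speed
        data = self.read_from_san(block_address)          # cache miss: go to the source
        if len(self.fast_tier) >= self.capacity:
            self.fast_tier.pop(next(iter(self.fast_tier)))  # simple FIFO eviction
        self.fast_tier[block_address] = data # populate so future reads are faster
        return data

# Example: cache = ReadThroughCache(lambda lba: san_volume[lba]); cache.read(42)
```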
The cached data on a caching SAN adapter is not captive to the server hosting the adapter, which lets clustered enterprise servers share the cache for fault tolerance and application performance acceleration. Transparency to server applications is a key attribute of a caching SAN adapter: it delivers the benefits of caching without changes to the operating system and application stacks that could adversely affect interoperability and latency.
Caching SAN adapters are a hybrid approach to server-based caching that addresses the drawbacks of traditional implementations. Rather than creating a discrete, captive cache for each server, a caching SAN adapter integrates the cache with the host bus adapter. A cache-coherent design uses the existing SAN infrastructure to create a shared cache resource distributed over multiple physical servers. This eliminates the single-server limitation of caching and extends the performance benefits of cached data to the high I/O demands of clustered applications and highly virtualized data center environments.
A caching SAN adapter belongs to a class of host-based, intelligent I/O optimization engines that provide integrated storage network connectivity, storage capacity, and the embedded processing required to make all cache management entirely transparent to the host. The only host-resident software required for operation is a standard host operating system (OS) device driver: a caching SAN adapter appears to the host as a standard HBA and uses a common HBA driver.
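The transparency point can be illustrated with another small, assumption-laden Python sketch (class and function names here are hypothetical): host-side code calls the same read interface whether or not caching is present, so all cache management can live behind the adapter, analogous to hiding it behind a standard HBA driver.

```python
# Illustrative sketch of transparency: the host talks to the same interface
# either way; caching logic lives entirely inside the adapter object.
# All names are hypothetical; this is not how any particular driver works.

class HostBusAdapter:
    """Plain adapter: every read goes straight to SAN storage."""
    def __init__(self, san_read):
        self.san_read = san_read        # callable that fetches a block from the SAN

    def read(self, lba):
        return self.san_read(lba)


class CachingSanAdapter(HostBusAdapter):
    """Same interface, but repeat reads are answered from an on-adapter cache."""
    def __init__(self, san_read):
        super().__init__(san_read)
        self.cache = {}

    def read(self, lba):
        if lba not in self.cache:
            self.cache[lba] = self.san_read(lba)   # miss: fall back to the SAN
        return self.cache[lba]


def host_application(adapter, lbas):
    # Host code is identical for both adapter types: no extra caching software,
    # special drivers, or application changes are needed on the host side.
    return [adapter.read(lba) for lba in lbas]
```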
Caching SAN adapters deliver a capability beyond that of other server-based caching implementations: their caches can be clustered and shared between servers. Clustered caching SAN adapters form a logical group that presents a single point of management and cooperates to maintain cache coherence, high availability, and allocation of cache resources. Unlike standard HBAs, caching SAN adapters communicate with each other as both initiators and targets over the Fibre Channel or similar storage networking infrastructure. This communication allows the adapter cluster to share and manage caches across multiple server nodes. The distributed cache model keeps a single copy of cached data, which ensures coherent cache operation, maximizes the use of caching resources, simplifies the architecture, and increases scalability.
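The single-copy, distributed model described above can be sketched as follows. This is an illustrative assumption, not a description of any shipping product: here each block address is assigned to exactly one owning adapter by hashing, so only one cached copy exists in the cluster, and direct method calls between nodes stand in for the initiator/target traffic carried over the Fibre Channel fabric.

```python
# Illustrative distributed cache with a single cached copy per block.
# Ownership is chosen by hashing the block address across cluster members;
# peer-to-peer calls stand in for fabric traffic. All names are hypothetical.

import hashlib

class ClusteredCacheNode:
    def __init__(self, node_id, san_read):
        self.node_id = node_id
        self.san_read = san_read   # callable that fetches a block from SAN storage
        self.cache = {}            # only blocks this node owns are cached here
        self.peers = {}            # node_id -> ClusteredCacheNode (includes self)

    def join(self, nodes):
        self.peers = {n.node_id: n for n in nodes}

    def owner_of(self, lba):
        # Deterministic owner selection: every node computes the same answer,
        # so a given block is cached in exactly one place.
        digest = hashlib.sha256(str(lba).encode()).hexdigest()
        ids = sorted(self.peers)
        return ids[int(digest, 16) % len(ids)]

    def read(self, lba):
        owner = self.peers[self.owner_of(lba)]
        if owner is self:
            if lba not in self.cache:
                self.cache[lba] = self.san_read(lba)   # miss: fetch from the SAN
            return self.cache[lba]
        return owner.read(lba)     # remote hit/miss is served by the owning peer

    def write(self, lba, data):
        owner = self.peers[self.owner_of(lba)]
        owner.cache[lba] = data    # the single cached copy stays coherent
        # (a real adapter would also write the block through to SAN storage)

# Example: nodes = [ClusteredCacheNode(i, san_read) for i in range(3)]
#          for n in nodes: n.join(nodes)
#          nodes[0].read(42)   # served by whichever node owns block 42
```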
History
The caching SAN adapter in the enterprise server is a hybrid of two widely used technologies: the SAN HBA and cache memory. SAN HBAs provide server access to a high-speed network of consolidated storage devices. Cache memory has had two primary implementations: DRAM modules and NAND flash in the form of a solid-state drive (SSD). DRAM cache is used primarily at the CPU level and offers the highest levels of performance; because of its high cost per gigabyte (GB) and volatility, it has not been used extensively to accelerate larger data sets. SSD caching has gained popularity in recent years and typically comes in a drive, PCIe, or other form factor. An SSD cache offers an exceptional performance boost to server operations but is captive to the server in which it is contained: a single SSD residing in a physical server is typically not sharable with other servers within a clustered environment. Furthermore, SSD caches lack transparency, requiring caching software, special operating system drivers, and other changes to make applications aware that the cache is available.
Future
Caching SAN adapter platforms are now available that use server-based NAND flash as the cache and Fibre Channel as the server-to-storage interconnect, enabling clustered servers to share cached data. Caching SAN adapter technologies in development will extend support to other server-to-storage interconnects, including Ethernet-based iSCSI and Fibre Channel over Ethernet (FCoE).
External links
- Taneja Group: Solving Caching Issues in Virtualized and Clustered Environments
- Enterprise Storage Group (ESG) Validates Caching SAN Adapter Performance
- The Register: 3 Words - Caching SAN Adapter. Just blew your mind, didn't we?
- Storage Review: QLogic Announces FabricCache Server-Based SSD Caching
- The SSD Review: Server Side SSD Caching Achieves A New Summit For SAN Storage
- SSG-Now: Mount Rainier project simplifies and advances server-side cache - Flash memory cache can be transparently shared across multiple physical servers