By networking storage, the access needs of multiple hosts can be consolidated, allowing many front-end hosts to share the same back-end storage device and eliminating the "storage island" problem created when each host connected to its own independent storage. Storage resources can be managed centrally without installing a separate storage device for every front-end host, and both the flexibility of disk resource allocation and disk space utilization improve. In practice, however, the environment is rarely this ideal, and many factors stand in the way of these goals.
Limitations of storage networking
Take SAN applications as an example. The original purpose of introducing SAN was to consolidate storage resources and improve disk space utilization. In reality, however, disk array controllers from different brands, or even from different product series of the same brand, are not compatible with one another, so it is difficult to allocate disk resources across storage devices of different brands or product families.
Because of procurement policies and the continual refresh of IT products, it is almost impossible to standardize an entire IT environment on storage devices of a single brand and model. A user's SAN is therefore usually composed of disk devices of multiple brands and models, which again form isolated islands. This is a great improvement over the days when each host connected to its own storage device, but it is still a long way from a fully integrated storage infrastructure.
This situation also hampers the establishment and operation of advanced applications such as off-site backup and data migration. Many enterprise-level storage devices now provide clustering, remote replication and similar functions to help users build high-availability and off-site backup mechanisms. The problem is that the high-availability or remote replication functions built into most storage devices work only between devices of the same product family. This effectively forces users to purchase two identical storage devices, and prevents them from choosing devices of different brands or tiers to match the different workloads of the primary and backup sites, increasing the burden on users.
As for data migration, incompatibility between old and new equipment means users must shut down systems to move data onto new hardware. The resulting business interruption and added operating costs make companies wary of data migration whenever systems are updated or upgraded.
In other words, although SAN removes the old one-to-one binding between front-end hosts and back-end storage devices and allows more flexible connections and resource allocation between the two, it cannot integrate back-end storage devices of different brands and models. Resource utilization therefore remains suboptimal, and the use of advanced functions is limited.
Features and benefits of storage virtualization
To address these shortcomings of the existing network storage architecture, some vendors proposed the concept of "storage virtualization", which decouples the front-end hosts from the back-end storage devices and uses an intermediary virtual layer as the storage service foundation connecting the two.
By access type, storage virtualization products fall into two categories, those used in block-access environments and those used in file-access environments, corresponding to the SAN and NAS application fields respectively.
SAN Virtualization
SAN virtualization products are usually deployed between the front-end hosts and the back-end storage devices in the form of a gateway appliance. The back-end storage device does not map its disk space directly to the front-end hosts; instead, it maps its disk volumes to the SAN virtualization gateway, which in turn maps them to the front-end hosts.
Under the SAN virtualization architecture, the gateway therefore inserts a virtual layer between the front end and the back end. To the back-end storage device, the virtualization gateway looks like a front-end host that mounts its disk space; to the front-end host, the gateway acts as a storage device providing disk space. In other words, all access between the front end and the back end is mediated by the virtualization gateway.
By virtue of this intermediate position in the architecture, the SAN virtualization gateway can provide many useful access services through its own virtualization software:
(1) Unified storage pool:
Users can use a SAN virtualization gateway to bridge storage devices of different brands and models, mount the disk space these devices provide, and combine the disk volumes from the different devices into a single storage pool. Virtual disk volumes can then be created from this pool as needed and presented to front-end hosts over different transport channels.
Through the gateway's storage pool, users can use the space of the underlying storage devices more flexibly, allocating capacity from heterogeneous back-end devices to front-end hosts without worrying about which back-end device actually provides the disk space a given host accesses.
Since all storage resources are used in a unified manner under the gateway's virtual layer, the relationship between front-end servers and back-end storage devices changes from the fixed connections and space mappings of a traditional SAN to dynamic bridging through the virtual layer. Management becomes more flexible, space utilization improves, and the old storage island problem disappears.
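To make the pooling idea concrete, here is a minimal, hypothetical sketch in Python. The names (BackendLun, StoragePool, create_virtual_volume) are invented for illustration and do not correspond to any vendor's API; the sketch only models how capacity mounted from heterogeneous back-end arrays can be aggregated and carved into virtual volumes that the gateway, rather than the arrays, exports to hosts.

```python
# Conceptual sketch of a unified storage pool (all names are illustrative only).
from dataclasses import dataclass

@dataclass
class BackendLun:
    """A disk volume mounted from a back-end array into the virtualization gateway."""
    array: str           # e.g. "VendorA-FC-array" or "VendorB-iSCSI-array"
    size_gb: int
    free_gb: int = None

    def __post_init__(self):
        if self.free_gb is None:
            self.free_gb = self.size_gb

@dataclass
class VirtualVolume:
    """A virtual disk volume carved from the pool and exported to a front-end host."""
    name: str
    size_gb: int
    extents: list        # (backend LUN, gigabytes taken) pairs
    export_protocol: str # chosen by the gateway, not by the array

class StoragePool:
    """Aggregates capacity mounted from heterogeneous back-end LUNs into one pool."""
    def __init__(self, luns):
        self.luns = luns

    def free_capacity_gb(self):
        return sum(lun.free_gb for lun in self.luns)

    def create_virtual_volume(self, name, size_gb, export_protocol="iSCSI"):
        # Allocate space across whichever back-end LUNs have room; the host never
        # sees which physical array actually holds its data.
        if size_gb > self.free_capacity_gb():
            raise ValueError("not enough free space in the pool")
        remaining, extents = size_gb, []
        for lun in self.luns:
            if remaining == 0:
                break
            take = min(lun.free_gb, remaining)
            if take:
                lun.free_gb -= take
                extents.append((lun, take))
                remaining -= take
        return VirtualVolume(name, size_gb, extents, export_protocol)

# Two arrays of different brands pooled together; the 600 GB volume spans both
# and is presented to the host over iSCSI even though one array is FC-attached.
pool = StoragePool([BackendLun("VendorA-FC-array", 500),
                    BackendLun("VendorB-iSCSI-array", 300)])
vol = pool.create_virtual_volume("app-server-datastore", 600)
print(vol.size_gb, vol.export_protocol, [(l.array, gb) for l, gb in vol.extents])
```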
(2) More flexible connection architecture:
Since all access between the front and back ends passes through the intermediary virtual layer, the connection presented to front-end hosts is provided by the virtual layer rather than by the back-end storage devices. Host support for the whole storage environment is thus freed from the limitations of the back-end devices.
Under the SAN virtualization architecture, the front-end host types the storage environment can support are determined by the virtualization gateway in the middle. Users can map virtual disk volumes in the storage pool to front-end hosts through any host-side interface the gateway provides, regardless of the host interface types the underlying storage devices support.
This gives users a more flexible storage connection architecture. For example, even if the underlying storage device offers only an FC host interface, the virtual disk volumes in the virtual layer's storage pool can, through the gateway's bridging, be presented to front-end hosts over a different interface such as iSCSI or even FCoE.
(3) More flexible advanced applications:
In addition to more flexible space configuration and connection architecture, the SAN virtual layer can also deliver more flexible advanced applications, such as local or remote replication, snapshots and clones.
- Remote replication
Replication can be divided into three types: host-side, storage-side and network-side. Many enterprise-level storage devices have built-in synchronous or asynchronous replication functions that let users create local or remote data mirrors as the basis for local or off-site disaster recovery, but the limitation is that replication can only run between storage devices of the same brand and series. In other words, users must make a double investment, purchasing two sets of identical storage devices plus the replication licenses.
Host-side replication software, by contrast, is not limited by the type of back-end storage device, but it requires installing a software agent on every front-end host that needs a mirrored copy. Not only are the licensing fees substantial, the agents also degrade host performance.
SAN virtualization avoids both problems. Under the SAN virtualization architecture, replication is performed by the SAN virtual layer rather than by the front-end hosts or the back-end devices: it runs between two SAN virtualization gateways, so the brand and model of the back-end storage devices do not matter. As long as one gateway is deployed at each site and the storage devices at the two sites are integrated into their respective gateway storage pools, the two gateways can establish a replication relationship based on the virtual disk volumes in their pools.
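The gateway-to-gateway model can be sketched as follows. This is a simplified, hypothetical asynchronous replication cycle in Python; the class names (VirtualVolume, ReplicationPair) and the dirty-block tracking scheme are assumptions made for illustration, not any product's actual mechanism.

```python
# Conceptual sketch of gateway-to-gateway asynchronous replication
# (class and method names are illustrative, not a vendor API).
class VirtualVolume:
    def __init__(self, name):
        self.name = name
        self.blocks = {}    # block number -> data
        self.dirty = set()  # blocks changed since the last replication cycle

    def write(self, block_no, data):
        self.blocks[block_no] = data
        self.dirty.add(block_no)

class ReplicationPair:
    """Primary-site and backup-site gateways replicate at the virtual-volume level,
    so the physical arrays behind each gateway can be of any brand or model."""
    def __init__(self, primary, replica):
        self.primary, self.replica = primary, replica

    def replicate_cycle(self):
        # Asynchronous model: ship only the blocks written since the last cycle.
        for block_no in sorted(self.primary.dirty):
            self.replica.blocks[block_no] = self.primary.blocks[block_no]
        self.primary.dirty.clear()

# The primary gateway pools, say, a VendorA array, the backup gateway a cheaper
# VendorB array; replication still works because it runs between virtual volumes.
site_a = VirtualVolume("orders-db")
site_b = VirtualVolume("orders-db-copy")
pair = ReplicationPair(site_a, site_b)
site_a.write(0, b"header")
site_a.write(7, b"record")
pair.replicate_cycle()
assert site_b.blocks == {0: b"header", 7: b"record"}
```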
- Snapshot and Clone
Many enterprise-level storage devices currently provide disk volume snapshot and clone functions, which create local disk copies for data protection or for development and testing. However, if the environment contains multiple storage devices of different brands and models, users must purchase snapshot or clone licenses for each of them and configure the snapshot or clone policies separately on each device, which makes both deployment and management cumbersome.
Under the SAN virtualization architecture, the virtual layer can execute snapshot and clone operations uniformly. By purchasing the snapshot or clone function of the SAN virtualization gateway alone, users can take snapshots and clones of the virtual disk volumes in the storage pool. They simply include the space of the back-end storage devices in the SAN virtual layer's storage pool and obtain disk copies through the virtual layer's snapshot and clone functions, which is far more convenient to deploy and manage.
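A copy-on-write snapshot taken at the virtual layer might be modelled like the hypothetical sketch below. The structure is illustrative only, assuming a simple block map per virtual volume; real gateways track changes at much finer granularity and with persistent metadata.

```python
# Conceptual copy-on-write snapshot at the virtual layer (illustrative names only).
class VirtualVolume:
    def __init__(self, name, blocks=None):
        self.name = name
        self.blocks = dict(blocks or {})   # block number -> data
        self.snapshots = []

    def snapshot(self, label):
        # Copy-on-write: remember which blocks existed, preserve data only when
        # a block is later overwritten.
        snap = {"label": label, "preserved": {}, "origin_blocks": set(self.blocks)}
        self.snapshots.append(snap)
        return snap

    def write(self, block_no, data):
        for snap in self.snapshots:
            # Preserve the pre-change data the first time a block is overwritten.
            if block_no in snap["origin_blocks"] and block_no not in snap["preserved"]:
                snap["preserved"][block_no] = self.blocks.get(block_no)
        self.blocks[block_no] = data

    def read_snapshot(self, snap, block_no):
        if block_no not in snap["origin_blocks"]:
            return None                    # block did not exist at snapshot time
        return snap["preserved"].get(block_no, self.blocks.get(block_no))

    def clone(self, name):
        # A clone is a full, independent copy of the current contents.
        return VirtualVolume(name, self.blocks)

# One snapshot policy at the virtual layer covers every pooled array beneath it.
vol = VirtualVolume("erp-data", {0: "v1"})
snap = vol.snapshot("before-upgrade")
vol.write(0, "v2")
assert vol.read_snapshot(snap, 0) == "v1" and vol.blocks[0] == "v2"
```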
- Data Migration
Data migration during storage device refreshes has always been one of the most time-consuming and troublesome tasks in IT management, and it can seriously disrupt normal access by front-end hosts.
Under the SAN virtualization architecture, the migration work involved in replacing equipment can be handled by the virtual layer. Because the SAN virtual layer removes the direct connection between front-end hosts and back-end storage devices, every storage device sits under the virtual layer's control and is bridged from there to the front-end servers. The access paths of the front-end hosts can therefore be redirected through the virtual layer and combined with background data copying: the virtual layer lets the old device's disk space continue serving the front-end servers while the data is copied, batch by batch, to the new device's disk space during off-peak hours. Once the copy is complete, the access path is switched to the new device, minimizing the downtime required for data migration.
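The copy-then-switch approach can be illustrated with the hypothetical sketch below. The names (BackendDevice, VirtualVolume, migrate_to) are invented for this example; a real gateway performs the copy at the block level with change tracking so that writes made during the copy are not lost, whereas this simplified model only shows the basic idea of copying in the background and then switching the active path.

```python
# Conceptual sketch of migration through the virtual layer (illustrative only).
class BackendDevice:
    def __init__(self, name):
        self.name = name
        self.blocks = {}

class VirtualVolume:
    """Hosts always address the virtual volume; the gateway decides which
    back-end device actually serves the I/O."""
    def __init__(self, name, backend):
        self.name = name
        self.active_backend = backend

    def read(self, block_no):
        return self.active_backend.blocks.get(block_no)

    def write(self, block_no, data):
        self.active_backend.blocks[block_no] = data

    def migrate_to(self, new_backend, batch_size=2):
        # Background copy in small batches (e.g. during off-peak hours) while the
        # old device keeps serving host I/O, then switch the access path over.
        pending = sorted(self.active_backend.blocks)
        while pending:
            batch, pending = pending[:batch_size], pending[batch_size:]
            for block_no in batch:
                new_backend.blocks[block_no] = self.active_backend.blocks[block_no]
        self.active_backend = new_backend   # path switch: hosts notice nothing

old_array, new_array = BackendDevice("old-array"), BackendDevice("new-array")
vol = VirtualVolume("file-share", old_array)
for i in range(5):
    vol.write(i, f"data-{i}")
vol.migrate_to(new_array)
assert vol.read(3) == "data-3" and vol.active_backend is new_array
```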
- Tiered Storage
Many storage devices today claim to provide tiering functions that allocate disk space of different performance levels according to the access performance requirements of front-end hosts. The limitation is that tiered management covers only the disks attached to the local controller and cannot extend to storage devices outside the local machine. When the environment contains multiple storage devices of different brands and models, this tiering inevitably has blind spots.
A SAN virtualization architecture solves this problem. Since all storage devices are controlled by the SAN virtualization layer and then bridged to the front-end servers, appropriate path settings on the virtualization layer make it easy to allocate space from high-performance storage to the key application servers that need it, while reserving ordinary-performance disk space for backup, archiving and other less demanding applications.
Alternatively, data can be classified by the time it was created, with data older than a given period moved to low-cost storage media. This kind of migration is easily handled by the virtual layer.
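A policy of this kind could be modelled as in the hypothetical sketch below; the tier names, the 90-day threshold and the TieredPool class are assumptions made purely for illustration.

```python
# Conceptual sketch of a tiering policy applied at the virtual layer
# (names and thresholds are illustrative assumptions).
from datetime import datetime, timedelta

class TieredPool:
    """Because every array sits under the virtual layer, one policy can place data
    on a fast tier or a cheap tier regardless of array brand or model."""
    def __init__(self):
        self.fast_tier = {}    # e.g. a high-performance array for key applications
        self.cheap_tier = {}   # e.g. a high-capacity array for backup/archive

    def store(self, name, data, high_performance=False):
        tier = self.fast_tier if high_performance else self.cheap_tier
        tier[name] = {"data": data, "created": datetime.now()}

    def demote_older_than(self, days):
        # Age-based migration: move data created before the cutoff to low-cost media.
        cutoff = datetime.now() - timedelta(days=days)
        stale = [n for n, item in self.fast_tier.items() if item["created"] < cutoff]
        for name in stale:
            self.cheap_tier[name] = self.fast_tier.pop(name)

pool = TieredPool()
pool.store("oltp-db", "hot data", high_performance=True)
pool.store("old-report", "cold data", high_performance=True)
pool.fast_tier["old-report"]["created"] -= timedelta(days=120)  # simulate ageing
pool.demote_older_than(90)
assert "old-report" in pool.cheap_tier and "oltp-db" in pool.fast_tier
```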
NAS Virtualization
Whereas SAN virtualization focuses on access paths and disk space management, file-level NAS virtualization focuses mainly on the management of access directories.
In a large NAS environment with huge numbers of shared files and front-end users, the access relationships between file servers, the directories and files on the NAS, and client computers become very complicated. Besides being hard to manage, such an environment is also hard to restructure or upgrade: any change to a back-end NAS device forces many access paths to be modified.
One way to solve this problem is to insert a virtual layer between the client computer and the NAS, and manage the access connection between the front and back ends through the intermediary of the virtual layer.
Traditional network file transfer and sharing rely on the file server or NAS and the client computer identifying and confirming access paths through the Universal Naming Convention (UNC); the directories and paths expressed in UNC let client computers reach files on the network. Under the NAS virtualization architecture, front-end computers access space on the back-end NAS not through a physical location or name, but through a virtual location assigned by the virtual layer's global namespace.
With a global namespace, the reliance on UNC paths is removed. All file storage resources are integrated by the virtual layer into a unified virtual storage pool, so the "logical" name or location a user accesses has nothing to do with the "actual" name or location: access requests are redirected by the virtual layer to the configured location without the user ever knowing where the file really resides, much as a user who does not know a server's IP address can still reach the right website through DNS resolution. If an access path fails, the NAS virtual layer can automatically switch to another path, improving the reliability of the file access service.
Because the NAS virtual layer mediates all access, access paths are no longer tied to physical connections. Administrators can move data between NAS devices or file servers without affecting front-end users' existing access, which greatly reduces the difficulty of data migration. Administrators can also define policies that let the virtual layer automatically move files to different tiers of storage based on file attributes or age, implementing data archiving or tiered storage.
In practice, this is usually done by inserting an application server running global namespace software into the network as an intermediary gateway. This appliance acts much like a DNS server on an IP network: it registers all the physical access paths on the NAS devices and file servers, converts them into a global namespace, and maps that namespace to front-end user computers. If the back-end storage devices change, only the access settings on this appliance need to be updated; the front-end user computers are unaffected.
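The mapping role of such an appliance can be sketched as below. The GlobalNamespace class, its method names and the example UNC paths are hypothetical; real products implement this with DFS-style referrals and distributed metadata rather than an in-memory dictionary.

```python
# Conceptual sketch of a global namespace (names are illustrative only).
class GlobalNamespace:
    """Maps logical paths seen by users to the physical UNC paths on NAS devices
    and file servers, much as DNS maps names to IP addresses."""
    def __init__(self):
        self.mapping = {}   # logical path -> list of physical UNC paths (replicas)
        self.offline = set()

    def register(self, logical_path, physical_unc):
        self.mapping.setdefault(logical_path, []).append(physical_unc)

    def resolve(self, logical_path):
        # Redirect the client to a healthy physical location; fail over if needed.
        for unc in self.mapping.get(logical_path, []):
            if unc not in self.offline:
                return unc
        raise LookupError(f"no available copy of {logical_path}")

    def migrate(self, logical_path, old_unc, new_unc):
        # Moving data to another NAS only changes the mapping;
        # the logical path the users see stays the same.
        targets = self.mapping[logical_path]
        targets[targets.index(old_unc)] = new_unc

gns = GlobalNamespace()
gns.register("/corp/finance", r"\\nas-old\finance")
gns.register("/corp/finance", r"\\filesrv2\finance-mirror")
print(gns.resolve("/corp/finance"))        # \\nas-old\finance
gns.offline.add(r"\\nas-old\finance")      # primary path fails
print(gns.resolve("/corp/finance"))        # automatic failover to the mirror
gns.migrate("/corp/finance", r"\\nas-old\finance", r"\\nas-new\finance")
```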