
HOW TO CHOOSE A DISK ARRAY – CHAPTER 2

02.10.2015, 12:49

Today our popular series turns to the topic of disk array architecture.

The basic distinction is between midrange (modular) disk arrays and enterprise disk arrays. To be precise, let us define these terms.

A modular disk array is a system of two (or more) mutually redundant controllers that manage expansion disk shelves. That is where the modularity of these systems begins and ends. Enterprise disk arrays, by contrast, excel in scalability: ports, disk capacity, cache and processors can all be added online. Their fundamentally different architecture makes it possible to guarantee 100% data availability and to bring third-party disk arrays under their management.

Many companies considering a new disk system find themselves in a situation where the classic midrange concept is no longer sufficient for their needs, yet the next level up – enterprise storage – has so far been unaffordable.

Times are changing, and the Hitachi HUS-VM disk system, for example, rewrites these clichés and makes modern technology broadly affordable. Its design is derived from enterprise systems (including the firmware), but the use of new, more highly integrated chips, together with expansion shelves known from the modular class, has significantly lowered the price and made this technology accessible to all companies, not only the largest ones.

 

HARDWARE OR SOFTWARE STORAGE?

Vendors take one of two approaches to building a disk system:

-        A true hardware system based on specialized, high-performance chips and a thin layer of control microcode

-        "Software storage" – a system built on commodity Intel architecture with an embedded Linux/Unix operating system

Both approaches have pros and cons. A true hardware disk array excels in performance, reliability and stable response times regardless of how full the system is. Because the architecture is implemented in hardware, however, adding new features is less flexible.

Software storage, by contrast, is essentially a powerful PC dressed up as a disk array. The presence of an operating system makes it easy to implement new protocols and features (deduplication, compression), but it is also its Achilles' heel and brings many problems. Owners of these systems know that they must not fill the storage too much (over 60%), otherwise the internal overhead starts to rise steeply and the system spends its time dealing with its own problems instead of serving applications. And that is before we mention reliability, outages and the need for occasional restarts...

When choosing a disk array, it is wise to consider whether you really need such broad flexibility and whether you are willing to risk all the issues of seemingly cheap software storage.

 

CONTROLLER ARCHITECTURE

ALUA is short for "Asymmetric Logical Unit Access". It means that the performance of a LUN depends on which controller the servers use to access it. In this architecture every LUN is "owned" by one controller; when communication goes through a controller that does not own the LUN, data has to be passed between controllers, which hurts performance.

To achieve optimal performance, the primary and backup paths between the servers and the disk array must be set so that communication always goes over a path leading to the controller that owns the required LUN.
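In practice, such path preferences are expressed in the host's multipathing configuration. As an illustration only, a minimal hypothetical fragment of a Linux device-mapper multipath setup (/etc/multipath.conf) might look as follows, grouping paths by the ALUA priorities the array reports so that I/O prefers the owning controller; the vendor and product strings are placeholders, not settings for any particular array:

devices {
    device {
        vendor                "EXAMPLE"          # placeholder, not a real array
        product               "MIDRANGE-ALUA"    # placeholder
        path_grouping_policy  group_by_prio      # group paths by the reported ALUA priority
        prio                  alua               # read priorities from the ALUA target port groups
        path_checker          tur                # probe paths with TEST UNIT READY
        failback              immediate          # return to the owning controller once it recovers
        no_path_retry         12                 # queue I/O briefly if all paths drop
    }
}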

In large environments it is practically impossible to keep all of this configured correctly.

The other option is to choose a "NO-ALUA" disk array, which communicates at the same speed over all ports, so the administrator does not have to deal with these problems at all.

 

TWO OR MORE CONTROLLERS?

This requirement is one of the frequent myths. Four controllers are more than two, so they must be better, right? Not always. The truth is that many vendors of four-controller midrange disk arrays chose this architecture because they realized the performance of their systems could not compete, and the simplest fix was to add two more controllers and connect everything together with some form of bridges.

Vendors usually present the resulting system as the ultimate, advanced architecture. One material fact goes unmentioned, though: even a four-controller system can survive the failure of only one controller.

Exactly like a two-controller system. With twice as many controllers, however, the probability that one of them breaks down is roughly twice as high.
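A back-of-the-envelope calculation illustrates the point, assuming independent controllers with the same per-controller failure probability over a given period; the 2% figure below is purely illustrative, not vendor data:

# Rough check of the "twice as many failures" claim under simplified assumptions.
p = 0.02  # hypothetical probability that one controller fails during the period

def p_any_failure(n: int, p: float) -> float:
    """Probability that at least one of n independent controllers fails."""
    return 1 - (1 - p) ** n

two = p_any_failure(2, p)
four = p_any_failure(4, p)
print(f"2 controllers: {two:.4f}   4 controllers: {four:.4f}   ratio: {four/two:.2f}")
# With small p the ratio is close to 2: doubling the controller count roughly doubles
# the chance that some controller fails, while tolerance stays at one failed controller.

In other words, the four-controller system offers the same fault tolerance but roughly twice as many opportunities to lose a controller.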

 

THE SPLENDOR AND MISERY OF SSDs

An explosion of SSDs has flooded today's IT. The business world loved them, because more transactions meant higher revenue. After a short period of use, however, an unpleasant surprise appeared: I/O performance dropped and response times grew. One of the main reasons SSDs slow down is that individual flash cells cannot be rewritten or erased in place – data can only be erased in whole blocks, so rewrites force the controller to copy valid data elsewhere and erase blocks in the background.
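A rough model shows why a device that cannot erase cells in place ends up writing more internally than the host asks for once its blocks fill with live data. This is a simplified sketch, not a description of any particular drive; the page count is a hypothetical figure:

# Simplified write-amplification model: pages are written individually, but
# erasure happens per block, so reclaiming a block means copying its live pages first.
PAGES_PER_BLOCK = 256  # hypothetical; real devices vary

def write_amplification(live_fraction: float) -> float:
    """Internal page writes per useful host write when reclaiming a block
    whose pages are still live_fraction full of valid data."""
    live = live_fraction * PAGES_PER_BLOCK       # pages that must be copied elsewhere
    reclaimed = PAGES_PER_BLOCK - live           # pages actually freed for new host data
    return PAGES_PER_BLOCK / reclaimed           # total page writes per host page write

for fill in (0.2, 0.5, 0.8, 0.9):
    print(f"blocks {fill:.0%} live -> write amplification {write_amplification(fill):.1f}x")

The emptier the drive, the cheaper the background cleanup; as it fills, every host write drags more internal copying and erasing with it, which is exactly the slowdown users observe.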

The solution to slowing SSDs is to use Hitachi FMDs (Flash Module Drives) instead of SSDs. They are equipped with a dedicated Hitachi flash controller that can service I/O operations while simultaneously handling the complicated process of erasing, reallocating and refreshing flash memory cells.

 

In the next chapter we will tackle a hot topic – how to put together a specification and a tender.