AIX supports different SAN topologies, switches, and storage servers from multiple vendors. This represents a challenge, because interoperability among vendors is less than ideal. Therefore, one important aspect of getting a reliable AIX environment is to follow the best practices for the SAN environment. In this article, I want to explain the major best practices for AIX in a SAN environment, covering storage systems, FC connectivity, and multipathing recommendations. One general rule of thumb is to avoid making changes if performance is OK, because what helps in one situation may create an issue in another. Having said that, it's also important to check the storage vendor's site and follow its recommendations. AIX can work with storage systems from different vendors; however, each of these combinations has unique requirements. From my experience, I've found it really helpful to read the host attachment or host connectivity guides from the storage vendor. These documents provide the best practices between the storage server and the most common OSes, including AIX. For instance, if you're using an IBM Storwize system, you can read the best practice Redbook for IBM Storwize V7000.

Additionally, check the interoperability matrix from the storage vendor. Interoperability problems are notoriously difficult to isolate, and it can take a long time to obtain a fix from the vendor. For that reason, it's a good idea to review this matrix before making big changes. For instance, the IBM System Storage Interoperation Center (SSIC) provides interoperability requirements for IBM storage. Also, be sure to avoid mixing FC switch vendors, except when running migrations.

Unique combinations of AIX configurations, SAN equipment, topologies, and zoning each need unique recommendations. However, there are basic principles we can rely on. For example, there are different SAN topologies, such as edge-core-edge, core-to-edge, or full-mesh, and each fits certain demands better. The best practice for AIX is to use the core-to-edge topology with director-class switches. In this scenario, you connect the VIOSes and high-demand AIX LPARs to the director switches. This configuration is a best practice because there are zero hops for AIX or VIOS hosts, resulting in improved I/O latency and avoidance of bottleneck and buffer starvation problems for AIX LPARs. On the other hand, hosts with medium to low I/O requirements should use edge switches, to save the expensive director switch ports. The result is a cost-effective configuration that guarantees the best performance for the AIX hosts.

Additionally, all SAN topologies should always be divided into two fabrics in a failover configuration. This means all storage servers and hosts should be connected to two different fabrics. This ensures redundancy by using multiple independent paths from the hosts to the storage system. Figure 1, below, summarizes the best practices for SAN topologies:

Figure 1. Types of switched core-fabric topologies

Another important consideration is the zoning configuration. Zoning is used to keep the servers isolated from each other and controls which servers can reach the LUNs on the storage server. SAN zoning is important to avoid boot problems and path issues on AIX. In this regard, there are two types of zoning: hard zoning and soft zoning. The best practice is to use soft zoning, that is, creating zones using only the worldwide port name (WWPN), for individual ports (single-host port) with one or more target (storage system) ports. Keep in mind that NPIV configurations require zoning using the WWPN.

Multipathing provides load balancing and failover capabilities at the host and FC communication layers. The main idea is to provide redundancy all the way down to the storage server. Figure 2 illustrates the multipathing concept:

Figure 2. Basic Fibre Channel I/O path and MPIO balancing

As you can see, multipathing provides better performance and high availability through the use of multiple paths. However, the best practice for AIX is to have between two and four paths, despite the fact that AIX can support up to 16 paths. Limiting the number of paths available to AIX can reduce boot and cfgmgr times, and improve error recovery, failover, and failback operations. In addition, the suggestion for AIX with VIOSes is to use NPIV with dual VIOS, since it provides better performance with less CPU consumption. In contrast, the use of vSCSI imposes higher latency and higher CPU consumption.

MPIO software always deserves special consideration, because many path, boot, and performance problems come from it. A recent trend is that non-IBM vendors support the AIX native MPIO along with an ODM storage driver. However, the best recommendation is to check with your storage vendor.