
Technical Report

NetApp Clustered Data ONTAP 8.3.x and 8.2.x: An Introduction

Jay Goldfinch, NetApp
November 2015 | TR-3982

Abstract

This technical report is an introduction to the architecture and key customer benefits of the NetApp clustered Data ONTAP 8.3.x and 8.2.x operating system.

TABLE OF CONTENTS

NetApp Clustered Data ONTAP: Overview
Physical Cluster Components
  Nodes
  HA Pairs
  Drives, RAID Groups, and Aggregates
  Network Ports
  Clusters
Logical Cluster Components
  Storage Virtual Machines
  Logical Interfaces (LIFs)
  Flexible Volumes
  LUNs
  NAS
  SAN
Key Features
  Manageability
  Multiprotocol Unified Architecture
  Storage Efficiency
  Data Protection and Business Continuity
  Storage QoS
  Infinite Volume
  Intelligent Scale-Out Storage
  Nondisruptive Operations
Summary
Resources

LIST OF FIGURES

Figure 1) A Data ONTAP cluster consisting of FAS controllers in a mix of all-flash, hybrid, and capacity configurations. A dedicated, redundant 10 Gigabit Ethernet interconnect (top) connects the controllers.

Figure 2) A small Data ONTAP cluster with three storage virtual machines (SVMs). Clients and hosts connect to a storage virtual machine, rather than directly to the storage arrays that host the SVMs. Each SVM has its own volumes, LUNs, network connectivity (LIFs), and authentication.

Figure 3) A six-node Data ONTAP cluster with one storage virtual machine. Clients and hosts can access data on any node from any logical interface. Flexible volumes, LUNs, and LIFs can move nondisruptively, so the SVM can grow as the cluster scales out.

Figure 4) A single-node cluster consisting of a FAS8000 series controller running clustered Data ONTAP. This FAS8000 controller has one shelf of solid-state disks and another shelf of capacity drives. The controller has redundant connections to its storage shelves.

Figure 5) An HA pair of FAS controllers provides redundancy. In most cases, controllers in an HA pair reside in the same chassis with redundant power supplies and passive interconnect circuitry. This visualization splits the nodes apart to illustrate the HA interconnect and redundant disk connectivity.

Figure 6) A Flash Pool aggregate consisting of high-capacity SATA drives in one storage shelf (grey) and solid-state drives in another storage shelf (green). In this example, the SATA drives are grouped together in two RAID groups with six data drives and two parity drives each. The solid-state drives are grouped together in one RAID group with six data drives and one parity drive.

Figure 7) A two-node cluster with two NAS data ports highlighted. Even though these two ports reside on different physical nodes, the ports are on the same VLAN and therefore provide the same connectivity to clients or hosts.

Figure 8) A clustered Data ONTAP system consisting of three HA pairs. The client- and host-facing networks may include SAN, NAS, or both. The cluster interconnect is dedicated, dual-fabric 10 Gigabit Ethernet. The management network provides administrative access to the cluster. Disk shelves and HA interconnects are omitted for clarity.

Figure 9) A NAS LIF with IP address 192.168.1.1. The LIF is not permanently bound to a specific physical port. If the two ports shown are in the same VLAN, an administrator can move the LIF to either port.

Figure 10) A Flash Pool aggregate containing 8 flexible volumes belonging to 3 distinct storage virtual machines (represented by burgundy, teal, and purple). The volumes are logically isolated. Each storage virtual machine can only access its own volumes. Each of these volumes can be moved to a different aggregate while the data inside it is being accessed.

Figure 11) A LUN move operation in progress. The SAN host is accessing the LUN in its new location on the right-hand side. All write operations occur there. The contents of the LUN are pulled over from the original location on the left-hand side on a scheduled basis and as read requests are made.

Figure 12) A NAS namespace. Dotted lines represent junctions to child volumes. Solid lines represent folders and directories inside a flexible volume. The path to a file or directory in a namespace remains the same, even if the volume containing that file or directory is moved to a new physical location inside the cluster.

Figure 13) ALUA MPIO. SAN hosts use the most direct path to a LUN. In this depiction, if the LUN or its containing volume on the right moved to a node in the HA pair on the left, the SAN host would begin accessing the LUN through the more direct path on the left.

NetApp Clustered Data ONTAP: Overview

Clustered Data ONTAP is enterprise-capable, unified scale-out storage. It is the basis for virtualized, shared storage infrastructures. Clustered ONTAP is architected for nondisruptive operations, storage and operational efficiency, and scalability over the lifetime of the system.

An ONTAP cluster typically consists of fabric-attached storage (FAS) controllers: computers optimized to run the clustered Data ONTAP operating system. The controllers provide network ports that clients and hosts use to access storage. These controllers are also connected to each other using a dedicated, redundant 10 Gigabit Ethernet interconnect. The interconnect allows the controllers to act as a single cluster. Data is stored on shelves attached to the controllers. The drive bays in these shelves may contain hard disks, flash media, or both.

Figure 1) A Data ONTAP cluster consisting of FAS controllers in a mix of all-flash, hybrid, and capacity configurations. A dedicated, redundant 10 Gigabit Ethernet interconnect (top) connects the controllers.

A cluster provides hardware resources, but clients and hosts access storage in clustered ONTAP through storage virtual machines (SVMs). SVMs exist natively inside of clustered ONTAP. They define the storage available to clients and hosts. SVMs define authentication, network access to the storage in the form of logical interfaces (LIFs), and the storage itself, in the form of SAN LUNs or NAS volumes.

Clients and hosts are aware of SVMs, but may be unaware of the underlying cluster. The cluster provides the physical resources the SVMs need in order to serve data. The clients and hosts connect to an SVM, rather than to a physical storage array.

Like compute virtual machines, SVMs decouple services from hardware. Unlike compute virtual machines, a single SVM may use the network ports and storage of many controllers, enabling scale-out. One controller's physical network ports and physical storage may also be shared by many SVMs, enabling multitenancy.

Figure 2) A small Data ONTAP cluster with three storage virtual machines (SVMs). Clients and hosts connect to a storage virtual machine, rather than directly to the storage arrays that host the SVMs. Each SVM has its own volumes, LUNs, network connectivity (LIFs), and authentication.
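To make the SVM model concrete, the following command-line sketch shows how an administrator might create an SVM with its own root volume, a data volume, and a NAS logical interface. The names used here (cluster1, svm1, aggr1, node1, port e0c) and the 192.168.1.1 address are hypothetical, and exact options can vary between clustered ONTAP releases.

  Create the SVM (vserver) and its root volume:
  cluster1::> vserver create -vserver svm1 -rootvolume svm1_root -aggregate aggr1 -rootvolume-security-style unix

  Add a data volume, mounted into the SVM's namespace at /vol1:
  cluster1::> volume create -vserver svm1 -volume vol1 -aggregate aggr1 -size 500g -junction-path /vol1

  Create a NAS logical interface (LIF) homed on a physical port of node1:
  cluster1::> network interface create -vserver svm1 -lif svm1_data1 -role data -data-protocol nfs -home-node node1 -home-port e0c -address 192.168.1.1 -netmask 255.255.255.0

Clients then mount the export through the LIF address (for example, 192.168.1.1:/vol1) and never reference an individual controller directly.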

A single cluster may contain multiple storage virtual machines (SVMs) targeted for various use cases, including server and desktop virtualization, large NAS content repositories, general-purpose file services, and enterprise applications. SVMs may also be used to separate different organizational departments or tenants.

The components of an SVM are not permanently tied to any specific piece of hardware in the cluster. An SVM's volumes, LUNs, and logical interfaces can move to different physical locations inside the cluster while maintaining the same logical location to clients and hosts. While physical storage and network access move to a new location inside the cluster, clients can continue accessing data in those volumes or LUNs, using those logical interfaces, as sketched in the example that follows.

Figure 3) A six-node Data ONTAP cluster with one storage virtual machine. Clients and hosts can access data on any node from any logical interface. Flexible volumes, LUNs, and LIFs can move nondisruptively, so the SVM can grow as the cluster scales out.

This allows a cluster to continue serving data as physical storage controllers are added to or removed from it. It also enables workload rebalancing and native, nondisruptive migration of storage services to different media types, such as flash, spinning media, or hybrid configurations.

With clustered ONTAP, you can expand flash capacity when you need performance, add high-density drives when you need raw capacity, or both. You can scale up when you need a higher-end storage array, or scale out horizontally when you need to distribute a workload. All of these operations can be performed while clients and hosts continue accessing their data.

You can also scale compute in a public cloud using NetApp Private Storage (physical FAS systems next to a public cloud data center) or Cloud ONTAP (clustered ONTAP running in a virtual machine inside a public cloud), while maintaining control of your data.
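As a sketch of this mobility (again with hypothetical names), a flexible volume and a LIF can be relocated to different nodes from the command line while clients continue reading and writing:

  Move vol1 to an aggregate owned by another node, nondisruptively:
  cluster1::> volume move start -vserver svm1 -volume vol1 -destination-aggregate aggr2_node2

  Migrate the data LIF to an equivalent port on another node:
  cluster1::> network interface migrate -vserver svm1 -lif svm1_data1 -destination-node node2 -destination-port e0c

The SVM's namespace paths, share and export names, and LIF IP addresses stay the same throughout, so clients and hosts see no change.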

Clustered ONTAP uses NetApp's Write Anywhere File Layout (WAFL), which delivers storage and operational efficiency technologies such as fast, storage-efficient Snapshot copies; thin provisioning; volume, LUN, and file cloning; and deduplication. Most storage efficiency features are available regardless of the underlying media type.
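As a brief illustration of how these efficiency features are typically enabled from the command line (volume and SVM names are hypothetical, and defaults differ between releases):

  Thinly provision a volume by removing its space guarantee:
  cluster1::> volume create -vserver svm1 -volume vol2 -aggregate aggr1 -size 1t -space-guarantee none

  Enable deduplication on the volume:
  cluster1::> volume efficiency on -vserver svm1 -volume vol2

  Create a space-efficient Snapshot copy:
  cluster1::> volume snapshot create -vserver svm1 -volume vol2 -snapshot manual.1

  Clone the volume without copying its data up front:
  cluster1::> volume clone create -vserver svm1 -flexclone vol2_clone -parent-volume vol2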
