
insideHPC Special Report

A Practical Guide to Networking Dell EMC Ready Nodes

Written by Peter ffoulkes

© 2018, insideHPC, LLC

Contents

Introduction
How do I network vSAN Ready Nodes?
  Dell EMC networking for vSAN
  Network security with vSAN
How do I network Dell EMC Microsoft Storage Spaces Direct Ready Nodes?
Network switch configuration with Dell EMC Ready Nodes
  Dedicated storage networks
  Disaggregation network architecture
  Hybrid dedicated/leaf-spine network architecture
Where Ready Nodes are the optimal choice

Introduction

The nature of Information Technology (IT) has been transformed by the advent of software-defined infrastructure based on virtualized services for servers, networking, and storage to deliver capabilities that can expand and contract as needed to meet the demands of any workload. Together these virtualized services can simplify and speed deployment, automate the management of agile and highly scalable environments, and deliver cost savings of almost 50% in comparison to proprietary models.

To meet the needs of those who wish to simplify and accelerate the deployment of software-defined storage systems with minimal risk, Dell EMC has a portfolio of Ready Solutions that are certified reference systems for simplified configuration, sizing, quoting, deployment, and financial packaging. Ready Nodes are PowerEdge servers pre-configured for specific workloads and available with software and services. Ready Bundles combine software, servers, network, and storage optimized together for specific workloads. Ready Nodes and Ready Bundles evolve with new components and versions, so check the corresponding Solution Overviews for the latest specifications.

This document is focused on Dell EMC vSAN Ready Nodes and Microsoft Storage Spaces Direct Ready Nodes — a practical networking guide.

Dell EMC vSAN Ready Nodes: reduce project risk, improve storage efficiency, scale quickly.
Dell EMC Microsoft Storage Spaces Direct Ready Nodes: confidence, convenience, customer support.

How do I network vSAN Ready Nodes?

Dell EMC vSAN Ready Nodes are built on Dell EMC PowerEdge servers that have been pre-configured, tested and certified to run VMware vSAN. Each Ready Node includes just the right amount of CPU, memory, network I/O controllers, HDDs and SSDs for VMware vSAN.

VMware vSAN is Software Defined Storage (SDS) that leverages a distributed control plane abstraction to create a pool of storage from disparate server-based disk hardware. That abstraction is comparable to the way the vSphere ESXi hypervisor converts a cluster of server hardware into a pool of compute resources (VMs). As an integrated feature of the ESXi kernel, vSAN exploits the same clustering capabilities to deliver to storage a virtualization paradigm comparable to the one that has been applied to server CPU and memory for many years.

[Figure: VMs on ESXi hosts drawing on a shared vSAN datastore built from the physical servers' local disks. Compute and storage (disk) virtualization: vSAN SDS is an extension of hypervisor-based virtualization. See Architecting a Dell EMC Hyper-converged Solution with VMware vSAN.]

Dell EMC vSAN Ready Node Example

Dell EMC vSAN Ready Nodes

PRIMARY CUSTOMER
VMware vSAN customers looking to scale capacity, performance or both: reduce risks and lower total cost of ownership with a VMware vSAN building block that's fast and easy to scale.

COMPONENTS
- VMware vSphere, vCenter, ESXi, and vSAN
- PowerEdge Servers including R440, R640, R740, R740xd, R6415, R7415, R7425, C6420, FC430
- Dell EMC OpenManage software integration for VMware vCenter
- Dell EMC Professional Services, ProDeploy, ProSupport, Dell Financial Services

CUSTOMER BENEFITS
- Reduces project risk with validated, tested, and certified configurations for vSAN deployment
- Improves storage efficiency with software-defined storage that leverages servers and flash
- Scales quickly with faster configuration, fewer update steps and reduced time for maintenance

DELL EMC DIFFERENTIATION
- Dell EMC and VMware have collaborated on vSAN for years and thousands of hours of testing
- Patented failsafe memory fault isolation, automatic failover redundant hypervisors and Memory Page Retire help protect uptime
- Installed, implemented and supported by Dell EMC as a single trusted source

Each physical host in a vSAN cluster, which can consist of three to 64 hosts (as of vSAN 6.6), contributes direct attached storage to a Disk Group in the aggregate storage pool.

A hybrid vSAN deployment consists of physical servers that have both spinning disks (HDD) and solid state drives (SSD/flash) to accommodate tiered storage deployments, with the SSD reserved for intense read and write operations. There are also all-flash deployments for particularly demanding workloads, where sequential read/write transactions are dense and the associated HDD latency is intolerable.

vSAN follows the same operational model for provisioning storage that vSphere uses to provision virtual machines. vCenter Server acts as the common management point for both vSAN and ESXi. In a vSAN environment, when a virtual machine is created, the administrator is asked to assign a previously-configured storage policy that reflects an application's performance and availability requirements. The policy that is applied to the workload is all that is needed with regard to storage provisioning. There is no need to manage Logical Unit Numbers, configure masking or zoning, or set RAID levels. All the details and mundane tasks typically required in a legacy, non-software-defined storage deployment are relegated to the vSAN software, which provides centralized management and hardware abstractions that place the burden of executing the policy's requirements on the software and away from the hardware.

Dell TechCenter
The Dell TechCenter web site contains many detailed reference resources and best practices guides, such as the Leaf-Spine Deployment and Best Practices Guide for Greenfield Deployments, to enable a network administrator or engineer with traditional networking experience to deploy a layer 2 or layer 3 leaf-spine architecture using the examples provided.

DELL EMC NETWORKING FOR VSAN
Depending on the design requirements, the network fabric can be a leaf and spine (Clos) or a traditional hierarchical, three-tiered architecture (access, aggregation and core). The distributed nature of vSAN's virtualized data store lends itself to a network architecture that can be scaled out horizontally, can provide predictable performance and control jitter between end nodes, offers multiple paths of equal cost between the different layers of switching, and lowers oversubscription. Such characteristics are typical of a leaf-and-spine (L/S) architecture, thereby making it the preferred architecture for VMware vSAN. Both a routed (L3) L/S and a switched (L2) network fabric can be deployed to support vSAN, each with its own design considerations.

Network bandwidth capacity
There are no "special" considerations that need to be made when designing a network fabric that can support vSAN software-defined storage other than total network bandwidth capacity. Like any other high-performance data center fabric, a vSAN network needs to be redundant, resilient and robust. The virtual constructs that are abstracted from the hardware are only as stable as the underlying infrastructure, so reliability and performance are of the utmost importance.

Do pay attention when using technologies such as Non-Volatile Memory Express (NVMe) drives, which are capable of generating much higher volumes of network traffic.

The VMware vSAN Network Guide provides in-depth information and guidelines regarding network considerations.
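To put rough numbers on that bandwidth-capacity point, the sketch below estimates whether a host's uplinks can absorb the traffic its local NVMe drives can generate (the same arithmetic the University of Pisa quote later in this guide walks through). It is a back-of-the-envelope model only; the drive and link figures are illustrative assumptions, not measurements of any particular Ready Node.

```python
# Back-of-the-envelope check: can a host's network links absorb the
# traffic its local NVMe drives can generate? All figures below are
# illustrative assumptions.

def drive_traffic_gbps(num_drives: int, per_drive_gb_per_s: float) -> float:
    """Aggregate sequential throughput of local drives, in gigabits/s."""
    return num_drives * per_drive_gb_per_s * 8  # gigabytes/s -> gigabits/s

def uplink_capacity_gbps(num_ports: int, port_speed_gbps: float) -> float:
    """Total host uplink capacity, in gigabits/s."""
    return num_ports * port_speed_gbps

storage = drive_traffic_gbps(4, 2.75)   # 4 NVMe drives ~= 88 Gb/s
network = uplink_capacity_gbps(2, 25)   # 2 x 25 GbE     = 50 Gb/s

print(f"Potential storage traffic: {storage:.0f} Gb/s")
print(f"Host uplink capacity:      {network:.0f} Gb/s")
if storage > network:
    print("Local flash alone can saturate the uplinks; size the fabric accordingly.")
```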
Starting with vSAN 6.6, multicast is not required on the physical switches. However, if some hosts in the vSAN cluster are running earlier versions of the software, a multicast network is still required.

Typical Leaf and Spine Architecture (Clos)

[Figure: a typical leaf and spine (Clos) architecture, a preferred architecture for vSAN network fabrics. See Architecting a Dell EMC Hyper-converged Solution with VMware vSAN.]

NETWORK SECURITY WITH VSAN
Alongside vSAN, VMware NSX is the VMware network virtualization and security platform that de-couples the network functions from the physical devices, in a way that is analogous to de-coupling virtual machines (VMs) from physical servers. In order to de-couple the new virtual network from the traditional physical network, NSX natively re-creates the traditional network constructs in virtual space, which includes ports, switches, routers, and firewalls. See Designing the vSAN Network from VMware.

How do I network Dell EMC Microsoft Storage Spaces Direct Ready Nodes?

Dell EMC Microsoft Storage Spaces Direct Ready Nodes are built on Dell EMC PowerEdge servers, which provide the storage density and compute power to maximize the benefits of Storage Spaces Direct (S2D). They leverage the advanced feature sets in Windows Server 2016 Datacenter Edition to deploy a scalable hyper-converged infrastructure solution with Hyper-V and Storage Spaces Direct.

For Microsoft server environments, S2D scales to 16 nodes in a cluster and is a kernel-loadable module (with no RDMA iWARP or RoCE needed), which is a low-risk approach to implementing an S2D cluster. Dell EMC Microsoft Storage Spaces Direct Ready Nodes can be ordered with software installed, or as bare metal for customers with volume license agreements.

You can use Windows Server 2016 Datacenter Edition with the Server Core or Server with Desktop Experience installation options; the steps in the "Configure the Network" and "Configure Storage Spaces Direct" sections are identical whichever you use. The management system can be run inside a virtual machine or on a physical machine; however, it needs to be joined to the same domain or a fully trusted domain. There are several deployment options available to connect the cluster nodes, and RDMA is not available for networking inside a virtual machine.

Hyper-V VMs run directly on the Storage Spaces Direct cluster that hosts the storage. Virtual machine files are stored on local Cluster Shared Volumes (CSVs). This allows Hyper-V compute clusters to scale together with the storage they use, reducing the number of clusters required.

See Deploying Storage Spaces Direct from Microsoft, and Microsoft Storage Spaces Direct Ready Nodes - Sample Switch Configurations.

Dell EMC Microsoft Storage Spaces Direct Ready Node Example

Dell EMC Microsoft Storage Spaces Direct Ready Nodes

PRIMARY CUSTOMER
Customers or managed service providers invested in Microsoft Hyper-V: a pre-validated solution that offers new levels of flexibility, confidence and time to value.

COMPONENTS
- Microsoft Windows Server 2016 and Storage Spaces Direct software
- PowerEdge Servers including R640, R740xd
- Management integration
- Dell EMC Professional Services, ProDeploy, ProSupport, Dell Financial Services

CUSTOMER BENEFITS
- Confidence in preconfigured, tested and certified solutions
- Convenience of procuring and deploying with Dell EMC guidance that covers multiple scenarios
- Customer support that is streamlined, with collaborative support from initial contact

DELL EMC DIFFERENTIATION
- Takes the guesswork out of building Storage Spaces Direct clusters
- Admin work reduced by fewer interfaces and steps to complete tasks, with no specialized knowledge needed
- Customer support: single point of contact for solution support
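As a rough illustration of how capacity scales in that hyper-converged model, the sketch below estimates usable space under three-way mirroring, the resiliency setting commonly used for S2D clusters of four or more nodes. The node and drive counts are hypothetical, and the model ignores reserve capacity and parity tiers.

```python
def usable_capacity_tb(nodes: int, drives_per_node: int,
                       drive_tb: float, data_copies: int = 3) -> float:
    """Rough usable capacity under N-way mirroring: raw capacity divided
    by the number of data copies (ignores reserve space and parity tiers)."""
    raw_tb = nodes * drives_per_node * drive_tb
    return raw_tb / data_copies

# Hypothetical 4-node cluster, 8 x 4 TB drives per node = 128 TB raw.
print(f"Usable: {usable_capacity_tb(4, 8, 4.0):.0f} TB")  # ~43 TB

# Scaling out: a 5th identical node grows compute and storage together.
print(f"Usable: {usable_capacity_tb(5, 8, 4.0):.0f} TB")  # ~53 TB
```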

The disks intended for Storage Spaces Direct need to be empty and without partitions or other data. If a disk has partitions or other data, it will not be included in the Storage Spaces Direct system.

Storage Spaces Direct requires high-bandwidth and low-latency network connections between nodes. This network connectivity is important for both system performance and reliability. When using physical nodes instead of VMs, at least two 10 Gb connections between the nodes are recommended, preferably with RDMA to increase throughput and reduce the CPU usage for network traffic.

There are two common versions of remote direct memory access (RDMA) network adapters — RDMA over Converged Ethernet (RoCE) and Internet Wide-area RDMA Protocol (iWARP). You can use either with Storage Spaces Direct as long as it has the Windows Server 2016 logo, but iWARP usually requires less configuration. Top-of-Rack switch and server configurations vary depending on the network adapters and switches. Configuring the server and switch correctly is important to ensure the reliability and performance of Storage Spaces Direct.

Ensure that there is no data on any of the disks of the cluster before running configuration commands or scripts; they will remove any data on the disks that are not being used by the operating system. A pre-flight eligibility check is sketched after this section.

If you're deploying a hyper-converged cluster, the last step is to provision virtual machines on the Storage Spaces Direct cluster.

After deploying your clustered file server, we recommend testing the performance of your solution using test workloads before bringing up any production workloads. This lets you confirm that the solution is performing properly and work out any lingering issues before adding the complexity of production workloads. For more info, see Test Storage Spaces Performance Using Synthetic Workloads.

Windows Server 2016 also introduces a new virtual switch with network teaming built in, called Switch Embedded Teaming (SET). This virtual switch allows the same physical NIC ports to be used for all network traffic while using RDMA. This reduces the number of physical NIC ports that would otherwise be required and allows network management through the Software Defined Network features of Windows Server.
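Returning to the empty-disk requirement called out above, it is worth scripting a pre-flight inventory check before any configuration commands run. The sketch below is a deliberately simplified, hypothetical model of that check (a real deployment would query the platform's own management tooling for its disk inventory); the Disk fields and names are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Disk:
    """Hypothetical inventory record for one candidate drive."""
    name: str
    partition_count: int
    has_data: bool
    is_boot_device: bool

def s2d_eligible(disk: Disk) -> bool:
    # Mirrors the rule above: a disk must be empty (no partitions, no
    # residual data), and the operating system disk is never pooled.
    return (disk.partition_count == 0
            and not disk.has_data
            and not disk.is_boot_device)

inventory = [
    Disk("PhysicalDisk1", 0, False, False),  # clean drive    -> eligible
    Disk("PhysicalDisk2", 2, True, False),   # old partitions -> excluded
    Disk("PhysicalDisk0", 1, True, True),    # boot device    -> excluded
]

for d in inventory:
    print(f"{d.name}: {'eligible' if s2d_eligible(d) else 'excluded'}")
```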

Network switch configuration with Dell EMC Ready Nodes

Storage networks are constantly evolving. From traditional Fibre Channel to IP-based storage networks, each technology has its place in the data center. IP-based storage solutions have two main network topologies to choose from, based on the technology and administration requirements. Dedicated storage network topology, shared leaf-spine network, software-defined storage, and iSCSI SAN are all supported. Hybrid network architectures are common, but add to the complexity.

DEDICATED STORAGE NETWORKS
Traditional network architecture. See the Dell EMC Switch Configuration Guide for iSCSI and Software Defined Storage.

Fibre Channel storage has imparted a traditional network philosophy and implementation methodology to IP-based storage. The dedicated storage network has a proven design that provides performance, predictability, and manageability to storage deployments. The storage traffic is isolated from the application traffic, allowing each network to be optimized for its own purpose.

DISAGGREGATION NETWORK ARCHITECTURE
Leaf-spine architecture

As a direct result of increasing east-west traffic within the data center (server-server, server-storage, etc.), an alternative to the traditional access-aggregation-core network model is becoming more widely used. This architecture is known as a Clos or leaf-spine network and is designed to minimize the number of hops between hosts, thus reducing network communication latency.

In a leaf-spine architecture, the access layer is referred to as the leaf layer. Servers and storage devices connect to leaf switches at this layer. At the next level, the aggregation and core layers are condensed into a single spine layer. Every leaf switch connects to every spine switch to ensure that all leaf switches are no more than one hop away from one another. This minimizes latency and the likelihood of bottlenecks in the network. A leaf-spine architecture is highly scalable: as administrators add racks to the data center, a pair of leaf switches is added to each new rack, and spine switches may be added as bandwidth requirements increase.
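A useful way to quantify when to add spine switches is the leaf oversubscription ratio: server-facing bandwidth divided by spine-facing bandwidth. The sketch below computes it for a hypothetical leaf switch; the port counts and speeds are illustrative assumptions, not the specification of any particular Dell EMC switch.

```python
def leaf_oversubscription(server_ports: int, server_speed_gbps: float,
                          uplinks: int, uplink_speed_gbps: float) -> float:
    """Downstream:upstream bandwidth ratio for one leaf switch.

    1.0 means a non-blocking leaf; 3.0 means 3:1 oversubscribed.
    """
    downstream = server_ports * server_speed_gbps
    upstream = uplinks * uplink_speed_gbps
    return downstream / upstream

# Illustrative leaf: 48 x 10 GbE server ports and one 40 GbE uplink to
# each of six spine switches.
print(f"6 spines:  {leaf_oversubscription(48, 10, 6, 40):.1f}:1")   # 2.0:1

# Adding spines adds uplinks per leaf, driving the ratio toward 1:1.
print(f"12 spines: {leaf_oversubscription(48, 10, 12, 40):.1f}:1")  # 1.0:1
```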

HYBRID DEDICATED/LEAF-SPINE NETWORK ARCHITECTURE

A dedicated storage network has been a popular standard for iSCSI-based storage for performance and administration reasons, including low latency, predictability, lack of competition with application traffic, simplified management, and straightforward fault isolation. Such a dedicated storage network and a leaf-spine network for application traffic may need to coexist as hybrid networks to optimize the balance between north-south and east-west network traffic.

While these established approaches are generally well understood, the configuration details can be complex and change over time.

Where Ready Nodes are the optimal choice

The perennial question for IT asset acquisition is whether to build or consume, and there is no 'one size fits all' answer. Individual components provide the maximum flexibility, but also require the maximum of self-support. Reference architectures provide the recipe and guidance, but the customer still has the responsibility for making the system work. At the other extreme of the product spectrum, engineered systems and hybrid cloud platforms provide a fully supported, integrated offering, yet deliver limited flexibility based upon high-level, appliance-like building blocks and services.

By disaggregating networking system software, and by decoupling networking software from hardware platforms, companies can free themselves from the rigid and proprietary environments of yesterday to embrace software-defined principles and unlock innovation at any scale, based on open, industry-standard hardware and software. For more information, visit the Dell EMC TechCenter for the latest in-depth guidance.

Dell EMC Ready Solutions offer the ultimate compromise between the flexibility of complete custom architectures and the restricted nature of appliance-like products and services, delivering the best of both worlds and supporting technology evolution with enterprise-grade support.

"The network has become again the bottleneck of a system, mostly because of NVMe drives. Four NVMe drives, aggregated, are capable of generating around 11 gigabytes per second of bandwidth, which tops a 100-gigabit connection. They may saturate and block I/O with just four drives, so we are looking to 25 gigabit, which becomes 50 gigabit because we're using a spine-leaf approach. Every server has two lanes, so aggregate bandwidth is 50 gigabits per second. This helps us ensure that the network will not be too much of a storage bottleneck."
— Antonio Cisternino, University of Pisa Chief Information Officer

For Dell EMC customer case studies, see http://www.dellemc.com/poweredge-stories

Intel Technology Enables Scale-Out Storage Architectures

Intel technologies are already helping data centers move to SDI and software-defined storage systems. Intel architecture-based solutions maximize infrastructure investments and include a portfolio of interoperable, scalable, and programmable products and technologies. This includes key software capabilities that significantly enhance the value of foundational Intel architecture in SDI hardware and software ecosystems. One example is the ability to speed replication with erasure coding, which is optimized for Ceph* software on the Intel Xeon processor family.

Key storage-related products and technologies include:
- The Intel Xeon processor family with storage workload optimization enables more efficient storage, smarter data protection, and exceptional system performance.
- The Intel Solid-State Drive (Intel SSD) Data Center Family with NVMe delivers more performance with fewer resources to boost storage density, speed, and reliability.
- Intel QuickAssist Technology, a workload acceleration tool in the Intel Xeon processor E5 and E7 families for compression and encryption offload.
- Intel Cache Acceleration Software (Intel CAS) improves application performance by using an Intel SSD as a cache for frequently accessed data.
- Intel security technologies verify that virtual servers boot into "known good states" (Intel Trusted Execution Technology, or Intel TXT).

Source: "Making the Case for Software Defined Storage"
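For readers unfamiliar with the erasure-coding idea mentioned above, the toy sketch below shows the simplest possible erasure code: a single XOR parity block over three data blocks, which survives the loss of any one block at 1.33x storage overhead instead of the 3x of triple replication. It is a conceptual illustration only, not Ceph's actual Reed-Solomon-style implementation.

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings together, column by column."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

# Three data blocks plus one parity block: 4 units stored for 3 units of
# data, versus 9 units stored under triple replication.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# Lose any one data block; rebuild it from the survivors plus parity.
lost = 1
survivors = [blk for i, blk in enumerate(data) if i != lost]
rebuilt = xor_blocks(survivors + [parity])
assert rebuilt == data[lost]
print("recovered:", rebuilt)
```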
