10GbE Key Trends, Drivers and Predictions - SNIA

Transcription

10GbE – Key Trends, Drivers and Predictions

Ethernet Storage Forum Members
- The SNIA Ethernet Storage Forum (ESF) focuses on educating end users.

Housekeeping
- Presentation with live Q&A at the end
- Questions submitted via the web tool will be answered verbally
- Unanswered questions will be placed on www.sniaesfblog.org
- Please complete the 10-second feedback form

Today's Panel
- Jason Blosil, SNIA-ESF Chair
- Gary Gumanow, SNIA-ESF Board of Directors
- David Fair, SNIA-ESF Board of Directors

SNIA Webinar Agenda
- Trends driving new networking requirements
- Technologies enhancing 10GbE
  - New server platforms
  - I/O virtualization
  - Network flexibility (cabling and DCB)
- 10GbE market adoption
- Future of 1GbE / Fast Ethernet
- Where do we go from here?

Top Storage Trends
- Cloud services
- End-to-end management
- SSD / flash
- Mobile delivery of content
- Converging architecture

Drivers of 10GbE
- Economic drivers
- Technical drivers
- Competitive advantage
- Increased use of virtual resources for real workloads!

Economic Drivers – 10GbE
- It's about $/Gb
- Hardware costs:
  - 1GbE: adapter sub-$100/port; switch sub-$75/port
  - 10GbE: adapter sub-$300/port (~$30/Gb); switch sub-$500/port (~$50/Gb)
- Green initiatives:
  - Data center avoidance
  - Data center efficiency
- 10GbE offers 10x the bandwidth at roughly 5x the price, or about a 50% reduction in $/Gb of bandwidth (worked out below)
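Making that arithmetic explicit with the per-port prices on this slide (a back-of-envelope check added to this transcription, not part of the original deck):

\[
\text{1GbE: } \frac{\$100 + \$75}{1\ \text{Gb}} = \$175/\text{Gb}
\qquad
\text{10GbE: } \frac{\$300 + \$500}{10\ \text{Gb}} = \$80/\text{Gb}
\]

which is roughly the 50% reduction in $/Gb the slide claims: about 5x the combined per-port price for 10x the bandwidth.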

Cloud Services
(Diagram: evolution from application-based silos, to zones of virtualization, to private, public, and hybrid clouds, spanning apps, servers, network, and storage.)
- Workloads are moving increasingly to virtualized cloud infrastructure
- Private cloud: behind the firewall of an enterprise, closed to the public
- Public cloud: accessible via service providers for general purchase
- Hybrid cloud: private clouds linked to public clouds

Disruption of Flash Technology
(Chart: performance over time for server performance growth, flash, SAS/FC, and SATA.)
- Relatively small differences between HDD types
- Flash is a game changer
- Balancing cost and performance is key
- Flash performance relocates performance bottlenecks

SSD Performance
(Charts: SSD versus 15K rpm FC drive, 4KB random reads: average latency (ms, 0-80) against read IOPS per drive (0-10,000); and sequential I/O throughput per drive (MB/sec) for large sequential reads and writes. © 2010 NetApp. All rights reserved.)

End-to-End Flash Categories
- Host-side flash software: software only; may be tied to particular flash hardware
- Flash as DAS / cache: flash hardware that stores persistent data; may be combined with software to form a cache
- Flash-based VSA: software
- Network-based flash: flash hardware and software, "bump in the wire"
- Flash in controller: flash hardware and software, "behind the wire"
- Pure flash array: all flash
- Hybrid flash / HDD array: mixed flash and HDD

Vote #1

2012 Server Platforms Enable New Ethernet Capabilities
- New server platforms coming to market as of March 2012 let Ethernet accomplish more than ever
- Bandwidth per port doubles with PCI Express 3.0
- Integrating PCIe 3.0 onto the processor itself reduces latency
- What Intel calls "Direct Data I/O" changes the I/O paradigm for servers
  - The Ethernet controller talks directly with the processor's last-level cache
  - Ethernet traffic is no longer forced to detour through main memory on ingress and egress
- This new server platform has the I/O headroom to run up to 250 Gbps of Ethernet I/O per processor, more than three times the headroom of any preceding server platform
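As a rough sanity check on that 250 Gbps figure (added to this transcription, assuming the roughly 40 PCIe 3.0 lanes per processor typical of that server generation):

\[
40\ \text{lanes} \times 8\ \text{GT/s} \times \tfrac{128}{130} \approx 315\ \text{Gb/s per direction,}
\]

which leaves comfortable headroom above 250 Gbps of Ethernet I/O even after protocol overhead.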

Flexible LOM Encourages Upgrades To 10GbE
- Similar to blade mezzanine cards, flexible LOM cards can be configured by OEMs at the time of order
- OEMs each have their own names for this form factor
(Diagram: a "classic" 10GBASE-T LAN-on-Motherboard (LOM); a "flexible" LOM, i.e. a custom PCI Express 10GBASE-T converged network adapter; and a standard PCI Express 10GBASE-T converged network adapter NIC.)

10GbE Brings Remote Direct Memory Access (RDMA) Technology To The Data Center
- Remote Direct Memory Access (RDMA) is a technology that allows an application to transfer data directly to/from the data space of another application while bypassing the OS kernel
- Permits low-latency data transfers with low packet jitter, low CPU utilization, and traffic segregated into prioritized classes
- Two RDMA-over-Ethernet technologies are being deployed today over 10GbE: iWARP (Internet Wide-Area RDMA Protocol) and RoCE (RDMA over Converged Ethernet)
- High-performance computing (HPC) workloads written to the OpenFabrics Alliance stack for InfiniBand can run on a 10GbE network supporting either iWARP or RoCE
- Windows Server 2012 will take advantage of RDMA capabilities to support "SMB Direct," the RDMA transport for Microsoft's SMB 3.0 file protocol
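To make the kernel-bypass idea concrete, here is a minimal memory-registration sketch using the pyverbs Python bindings that ship with rdma-core (an illustration added to this transcription, not from the presentation; it assumes pyverbs is installed and an RDMA-capable NIC is present, and 'mlx5_0' is a placeholder device name):

    # Register a buffer with the NIC so it can be read/written directly,
    # without the OS kernel on the data path.
    import pyverbs.device as d
    import pyverbs.enums as e
    from pyverbs.pd import PD
    from pyverbs.mr import MR

    with d.Context(name='mlx5_0') as ctx:      # open the RDMA device
        with PD(ctx) as pd:                    # protection domain
            mr = MR(pd, 16, e.IBV_ACCESS_LOCAL_WRITE)  # 16-byte region
            mr.write('hello', 5)               # place data in the region
            print(mr.read(5, 0))               # read it back

A real application would go on to create queue pairs and post send/receive work requests; this sketch only shows the registration step that lets the adapter touch application memory directly.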

Ethernet Virtualization For 10GbE Overview
- Multiple approaches to "carving up" 10GbE ports and assigning the slices to virtual machines have emerged:
  - Demand queuing
  - PCI-SIG Single-Root I/O Virtualization (SR-IOV); see the sketch after this slide
  - OEM-driven NIC partitioning: Dell's NPAR, HP's Flex-10, IBM's vNIC
  - OEM-driven network virtualization: Cisco's FEX
- Demand queuing is supported by Microsoft and VMware
- SR-IOV is supported in Linux, Xen, and KVM
  - Microsoft has committed to SR-IOV support in 2012
  - VMware is showing it in their ESX 5.1 beta
- Users have multiple and increasing choices for Ethernet virtualization of 10GbE ports
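On current Linux kernels, SR-IOV virtual functions are typically enabled through sysfs. A minimal sketch follows (added for illustration; it uses a kernel interface newer than the systems discussed in this 2012 webcast, 'eth0' is a placeholder interface name, and root privileges plus an SR-IOV-capable NIC are assumed):

    # Enable SR-IOV virtual functions (VFs) on a NIC via Linux sysfs.
    # Each VF appears as its own PCI function that a hypervisor can
    # assign directly to a virtual machine.
    from pathlib import Path

    iface = "eth0"                              # placeholder interface name
    dev = Path(f"/sys/class/net/{iface}/device")

    total = int((dev / "sriov_totalvfs").read_text())
    print(f"{iface} supports up to {total} VFs")

    # Create up to 4 VFs (write 0 first if VFs are already configured).
    (dev / "sriov_numvfs").write_text(str(min(4, total)))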

Vote #2

VM Density Drives More I/O Throughput
(Chart: "What would you estimate is the average number of virtual machines per physical x86 server in your environment today? How do you expect this to change over the next 24 months?" Responses for today versus 24 months from now, in buckets of 5 or fewer, 5-10, 11-25, more than 25, and don't know; percent of respondents, N = 463. Source: 2011 Enterprise Strategy Group.)
- Server virtualization's impact on the network: it has created more network traffic in the data center (30% of respondents)

Flexibility – Network Media Options: Leverage Existing Infrastructure

SFP+ (optical or Twin-Ax):
- iSCSI adapter with an SFP+ cage
- Twin-Ax: twinaxial cable with SFP+ modules on the ends; supports 5-7 m distances; top of rack; less expensive overall, though the cables themselves are costly
- Optical: versions available supporting 100 m or more; top of rack or end of row (EOR); expensive optical transceivers

10GBASE-T:
- Backwards compatible with Gigabit Ethernet
- Uses Cat6/Cat6A/Cat7 unshielded twisted-pair cabling with RJ-45 connectors
- Supports 100 m distances (Cat6A)
- Less expensive cabling, but requires newer switches

What is Data Center Bridging?
- Enhancements to Ethernet that provide enhanced QoS support, built on today's 10GbE technology
- What constitutes the DCB standards?
  - PFC: Priority-based Flow Control (802.1Qbb)
  - ETS: Enhanced Transmission Selection (802.1Qaz)
  - QCN: Congestion Notification (802.1Qau)
  - DCBX: Data Center Bridging capability eXchange
- LLDP vs. DCBX
  - LLDP: primarily a link-level information exchange protocol
  - DCBX: neighbors can configure parameters based on the information exchange and a state machine

Data Center Bridging
- Data Center Bridging Capabilities eXchange Protocol (DCBX)
  - Supports centralized configuration of DCB and related protocols
  - Endpoints (hosts/storage) configure themselves based on the exchange
- Enhanced Transmission Selection (ETS)
  - Provides priority groups with bandwidth controls (illustrated in the sketch after this list)
- Priority Flow Control (PFC)
  - Enables lossless Ethernet operation to provide deterministic performance
- What's needed:
  - HBA or converged network adapter supporting DCB
  - 10G Ethernet switch supporting DCB
  - 10G storage array supporting Ethernet storage over DCB
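To make ETS concrete, here is a toy model (added to this transcription; a real hardware scheduler is far more involved) of priority groups sharing a 10G link by configured percentage, with any unused allocation redistributed to groups that still have demand:

    # Toy model of ETS bandwidth allocation: each priority group gets its
    # configured share of the link; bandwidth a group does not need is
    # redistributed to the remaining groups in proportion to their shares.
    def ets_allocate(link_gbps, groups):
        """groups: {name: (share, demand_gbps)}, shares summing to 1.0."""
        alloc = {name: 0.0 for name in groups}
        active = dict(groups)
        remaining = link_gbps
        while active and remaining > 1e-9:
            total_share = sum(share for share, _ in active.values())
            spent = 0.0
            still_hungry = {}
            for name, (share, demand) in active.items():
                fair = remaining * share / total_share
                take = min(fair, demand - alloc[name])
                alloc[name] += take
                spent += take
                if demand - alloc[name] > 1e-9:
                    still_hungry[name] = (share, demand)
            if spent < 1e-9:
                break  # every remaining group is satisfied
            remaining -= spent
            active = still_hungry
        return alloc

    # LAN is idle below its 50% share, so SAN and IPC borrow the slack
    # in proportion to their own shares.
    print(ets_allocate(10.0, {"LAN": (0.5, 2.0),
                              "SAN": (0.3, 6.0),
                              "IPC": (0.2, 4.0)}))
    # -> roughly {'LAN': 2.0, 'SAN': 4.8, 'IPC': 3.2} (up to float rounding)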

DCB Components
- IEEE 802.1Qaz (Enhanced Transmission Selection): allocate bandwidth based upon predetermined classes of traffic
- IEEE 802.1Qbb (Per-Priority Flow Control): HALT an individual stream, but NOT all of them!
- IEEE 802.1Qau (Congestion Management): end-to-end communication between end-points that tells the end-point to BACK OFF!
(Diagram: a 10GE link carrying traffic classes of 3G, 5G, 4G, and 1G at time t1, reshaped to 4G and 3G at time t2.)

DCB & 10GbE
- Enables convergence of LAN and SAN traffic
  - DCB enables mixed traffic
  - Deterministic performance in a converged, mixed-traffic environment
  - Lossless Ethernet using Priority Flow Control enables this support
- Ease of use
  - Centralized configuration via a switch with DCBX support
  - Hosts and storage announce their capabilities
  - The switch enables PFC, jumbo frames, and other parameters via DCBX
- Easy server integration
  - Integrated LOM and modular servers with DCB capabilities

Ethernet Unifies the Data Center – Solve All Your Use Cases
(Diagram: small/medium business (CIFS), data centers and remote offices (NFS), and traditional FC data centers (iSCSI, FCoE) all converging on 1GbE/10GbE and 10Gb Ethernet with DCB, for both file and block storage.)
- Increased asset and storage utilization
- Simplified storage and data management
- Improved flexibility and business agility
- Reduced costs through consolidation
- Improved storage and network efficiencies

10GbE Adoption

Future of 1GbE / Fast Ethernet
- Data center
  - Management networks for storage, servers, switches, etc.
  - Low-demand storage applications such as print servers, basic file services, and Active Directory
  - Low- or medium-demand block storage, such as iSCSI storage for email applications
- Outside of the data center
  - Voice over IP (each desk requires a wired connection)
  - Video surveillance
  - Virtual desktops (VDI)
  - General client networking
- Consumer

Where Do We Go From Here?
- Data-intensive applications are relentlessly driving the need for higher-speed connectivity: cloud computing, mobile networking, high-speed consumer broadband
- 10GbE adoption is paving the way for 40G interfaces in the aggregation layer and 100G uplinks at the core layer
- 40GbE is the next logical step in the evolution of the data network
- Forecasters expect 40GbE to begin significant edge-server deployment in 2015-16

Vote #3

Questions?
How to contact us:
- Jason Blosil – jason.blosil@netapp.com
- David Fair – david.l.fair@intel.com
- Gary Gumanow – gary gumanow@dell.com
The full Q&A session from this webcast will be posted on the SNIA-ESF blog.
