Best Practice Network Design For The Data Center

Transcription

Best Practice Network Design for the Data Center
Mihai Dumitru, CCIE² #16616

A Few Words about Cronus eBusiness
• 39 employees, 3 national offices
• Focus on large enterprise customers from banking and retail, plus education
• Specializing in:
  – System integration (consulting, project management, network equipment sale and deployment, maintenance)
  – Managed services (operational support, network management, server hosting and business continuity)
• Cisco Gold Partner
  – One triple CCIE, one dual CCIE and more
• Solarwinds Gold Partner

What We Will Cover in This Session:
• Classical data center network architecture
• Impact of new features and products on hierarchical design for data center networks
• Data center services insertion
• Layer 3 features and best practices
• Layer 2 features, enhancements and best practices

Hierarchical Design Network Layers: Defining the Terms
• Data Center Core
  – Routed layer which is distinct from the enterprise network core
  – Provides scalability to build multiple aggregation blocks
• Aggregation Layer
  – Provides the boundary between layer-3 routing and layer-2 switching
  – Point of connectivity for service devices (firewall, SLB, etc.)
• Access Layer
  – Provides point of connectivity for servers and shared resources
  – Typically layer-2 switching
[Diagram: Enterprise network above the Data Center Core; layer 3 links between core and aggregation; layer 2 trunks between aggregation and access]

Scaling the Topology with a Dedicated Data Center Core
• A dedicated Data Center Core provides layer-3 insulation from the rest of the network
• Switch port density in the DC Core is reserved for scaling additional DC Aggregation blocks or pods
• Provides a single point of DC route summarization
[Diagram: Enterprise network, Data Center Core, multiple aggregation blocks/pods]

Mapping Network Topology to the Physical Design
• Design the Data Center topology in a consistent, modular fashion for ease of scalability, support, and troubleshooting
• Use a pod definition to map an aggregation block or other bounded unit of the network topology to a single pod
• The server access connectivity model can dictate port count requirements in the aggregation and affect the entire design
[Diagram: Data Center Core and Aggregation above a server pod of network racks and server racks at the Access layer]

Traditional Data Center Server Access Models
• End-of-Row (EoR)
  – High density chassis switch at the end or middle of a row of racks, fewer overall switches
  – Provides port scalability and local switching, may create cable management challenges
• Top-of-Rack (ToR)
  – Small fixed or modular switch at the top of each rack, more devices to manage
  – Significantly reduces bulk of cable by keeping connections local to the rack or an adjacent rack
• Integrated Switching
  – Switches integrated directly into the blade server chassis enclosure
  – Maintaining feature consistency is critical to network management; sometimes pass-through modules are used

Impact of New Features and Products on Hierarchical Design for Data Center Networks

Building the Access Layer Using Virtualized Switching
• Virtual Access Layer
  – Still a single logical tier of layer-2 switching
  – Common control plane with virtual hardware and software based I/O modules
• Cisco Nexus 2000
  – Switching fabric extender module
  – Acts as a virtual I/O module supervised by a Nexus 5000
• Nexus 1000v
  – Software-based virtual distributed switch for server blades
[Diagram: Data Center Core and Aggregation with layer 3 links and layer 2 trunks; Blade Switch 3100, Nexus 2000 and Nexus 1000v forming the virtual access layer]
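
A minimal NX-OS sketch of how a Nexus 2000 fabric extender is associated with its parent Nexus 5000; the interface and FEX numbers are hypothetical:

  feature fex
  ! Fabric uplink from the Nexus 5000 toward the Nexus 2000
  interface Ethernet1/17
    switchport mode fex-fabric
    fex associate 100
  ! Host ports on the FEX then appear on the parent as Ethernet100/1/x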

Migration to a Unified Fabric at the Access Supporting Data and Storage
• Nexus 5000 Series switches support integration of both IP data and Fibre Channel over Ethernet at the network edge
• FCoE traffic may be broken out on native Fibre Channel interfaces from the Nexus 5000 to connect to the Storage Area Network (SAN)
• Servers require Converged Network Adapters (CNAs) to consolidate this communication over one interface, saving on cabling and power
[Diagram: LAN and SAN above the aggregation; Ethernet plus FCoE from the servers to the access, splitting into Ethernet toward the LAN and Fibre Channel toward the SAN]
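
A minimal NX-OS sketch of enabling FCoE on a Nexus 5000 and binding a virtual Fibre Channel interface to a CNA-facing Ethernet port; the interface and VSAN numbers are hypothetical:

  feature fcoe
  ! Virtual Fibre Channel interface bound to the server-facing port
  interface vfc1
    bind interface Ethernet1/1
    no shutdown
  ! Place the vFC interface in the appropriate VSAN
  vsan database
    vsan 10 interface vfc1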

Cisco Unified Computing System (UCS)
• A cohesive system including a virtualized layer-2 access layer supporting unified fabric with central management and provisioning
• Optimized for greater flexibility and ease of rapid server deployment in a server virtualization environment
• From a topology perspective, similar to the Nexus 5000 and 2000 series
[Diagram: UCS 6100 Series Fabric Interconnects uplinked to the LAN (Ethernet) and dual SAN fabrics (Fibre Channel); UCS FEX uplinks down to UCS 2100 Fabric Extenders in a UCS 5100 enclosure holding UCS B-Series servers with UCS I/O adapters]

Nexus 7000 Series Virtual Device Contexts (VDCs)
• Virtualization of the Nexus 7000 Series chassis
  – Up to 4 separate virtual switches from a single physical chassis with common supervisor module(s)
  – Separate control plane instances and management/CLI for each virtual switch
  – Interfaces only belong to one of the active VDCs in the chassis; external connectivity is required to pass traffic between VDCs of the same switch
• Designing with VDCs
  – VDCs serve a "role" in the topology similar to a physical switch: core, aggregation, or access
  – Two VDCs from the same physical switch should not be used to build a redundant network layer – physical redundancy is more robust
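
A minimal NX-OS sketch, from the default VDC, of creating a VDC and allocating physical interfaces to it; the VDC name and interface range are hypothetical:

  vdc agg1
    allocate interface Ethernet2/1-8
  ! Switch into the new VDC to configure it as a separate virtual switch
  switchto vdc agg1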

Virtual Device Context Example: Services VDC Sandwich
• Multiple VDCs used to "sandwich" services between switching layers
  – Allows services to remain transparent (layer-2) with routing provided by VDCs
  – Aggregation blocks only communicate through the core layer
• Design considerations:
  – Access switches requiring services are connected to the sub-aggregation VDC
  – Access switches not requiring services may be connected to the aggregation VDC
  – Allows firewall implementations not to share interfaces for ingress and egress
  – Facilitates virtualized services by using multiple VRF instances in the sub-aggregation VDC
[Diagram: Enterprise network, core, aggregation VDC, 6500 services chassis, sub-aggregation VDC, access]

Data Center Service Insertion

Data Center Service Insertion: Direct Services Appliances
• Appliances directly connected to the aggregation switches
  – Service device type and Routed or Transparent mode can affect physical cabling and traffic flows
• Transparent mode ASA example:
  – Each ASA is dependent on one aggregation switch
  – Separate links for fault tolerance and state traffic either run through the aggregation or directly
  – Dual-homing with the interface redundancy feature is an option
  – Currently no EtherChannel is supported on the ASA
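
A minimal ASA sketch of transparent mode with the interface redundancy feature mentioned above; the interface names are hypothetical, and in this mode the ASA bridges rather than routes:

  firewall transparent
  !
  ! Redundant interface pairing two physical links toward the aggregation;
  ! one member is active, the other takes over on failure
  interface redundant 1
   member-interface GigabitEthernet0/0
   member-interface GigabitEthernet0/1
   no shutdown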

Data Center Service Insertion: External Services Chassis
• Dual-homed Catalyst 6500
  – Services do not depend on a single aggregation switch
  – Direct link between chassis for fault-tolerance traffic; may alternatively trunk these VLANs through the aggregation
• Dedicated integration point for multiple data center service devices
  – Provides slot real estate for 6500 services modules
  – Firewall Services Module (FWSM)
  – Application Control Engine (ACE) Module
  – Other services modules; also beneficial for appliances

Using Virtualization and Service Insertion to Build Logical Topologies
• Logical topology example using the services VDC sandwich physical model
  – Layer-2 only services chassis with transparent service contexts
  – VLANs above, below, and between the service modules are a single IP subnet
  – Sub-aggregation VDC is a layer-3 hop running HSRP, providing the default gateway to the server farm subnets
  – Multiple server farm VLANs can be served by a single set of VLANs through the services modules
  – Traffic between server VLANs does not need to transit the services devices, but may be directed through services using virtualization
[Diagram: Client-server flow from the enterprise network through the aggregation VDC (VLAN 161), transparent FWSM context (VLAN 162), transparent ACE context (VLAN 163), and sub-aggregation VDC down to the web server farm (VLAN 180); fault-tolerance VLANs 170, 171, 172 between the services chassis]

Using Virtualization and Service Insertion to Build Logical Topologies (Cont.)
• Logical topology to support multi-tier application traffic flow
  – Same physical VDC services chassis sandwich model
  – Addition of multiple virtual contexts to the transparent services modules
  – Addition of VRF routing instances within the sub-aggregation VDC
  – Service module contexts and VRFs are linked together by VLANs to form logical traffic paths
  – Example: Web/App server farm and database server cluster homed to separate VRFs to direct traffic through the services
[Diagram: Client-server flow over VLANs 161/162/163 and server-to-server flow over VLANs 151/152/153, each chain passing through transparent FWSM and ACE contexts into separate VRF instances in the sub-aggregation VDC; server farm VLANs 180 and 181; FT VLANs between the contexts]
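
A minimal NX-OS sketch of the sub-aggregation VDC side of such a chain: a server farm SVI placed in a VRF instance and providing the HSRP default gateway; the VRF name, addresses, and group number are hypothetical:

  feature interface-vlan
  feature hsrp
  !
  vrf context webapp
  !
  ! Server farm SVI homed to the VRF; traffic leaves the VRF only
  ! through the VLANs linked to the transparent service contexts
  interface Vlan180
    vrf member webapp
    ip address 10.1.80.2/24
    hsrp 1
      ip 10.1.80.1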

Active-Active Solution Virtual Components
• Nexus 7000: VDCs (max 4 per chassis), VRFs, SVIs
• ASA 5580: virtual contexts (ASA max 50 VCs; FWSM max 250)
• ACE service module: virtual contexts (ACE max 250 VCs; ACE 4710 max 20 VCs), virtual IPs (VIPs)
• IPS 4270 (IPS/IDS): virtual sensors (max 4 VS)
• Virtual access layer: Nexus 1000v, Virtual Blade Switching, Virtual Switching System

Layer 3 Features and Best Practices

Layer-3 Feature Configuration in the Data Center
• Summarize IP routes at the DC Aggregation or Core to advertise fewer destinations to the enterprise core
• Avoid IGP peering of aggregation switches through the access layer by setting VLAN interfaces as passive
• Use routing protocol authentication to help prevent unintended peering
• If using OSPF, set a consistent reference bandwidth of 10,000 or higher to support 10 Gigabit Ethernet
• HSRP timers of hello 1 / hold 3 provide a balance of fast failover without too much sensitivity to control plane load
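
A minimal IOS sketch combining these layer-3 practices on an aggregation switch; the process ID, areas, addresses, interface names, and key are all hypothetical:

  router ospf 10
   ! Reference bandwidth high enough to distinguish 10 GE links
   auto-cost reference-bandwidth 10000
   area 0 authentication message-digest
   ! Advertise one summary for the DC server ranges
   area 10 range 10.10.0.0 255.255.0.0
   ! Keep server-facing SVIs from forming IGP adjacencies
   passive-interface default
   no passive-interface TenGigabitEthernet1/1
  !
  interface TenGigabitEthernet1/1
   ip ospf message-digest-key 1 md5 S3cretKey
  !
  interface Vlan100
   standby 1 ip 10.10.100.1
   standby 1 timers 1 3
   standby 1 priority 110
   standby 1 preempt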

IGP Hello and Dead/Hold Timer Behavior over a Shared Layer-2 Domain
• Routing protocols insert destinations into the routing table and maintain peer state based on receipt of continuous Hello packets
• Upon device or link failure, the routing protocol removes the failed peer's routes only after the Dead/Hold timer has expired
• Tuning the Dead/Hold timers lower provides faster convergence over this type of topology
• A firewall module running an IGP is an example of a Data Center device peering over a layer-2 domain, or any VLAN interface (SVI)
[Diagram: Nodes 1-4 peering over a common subnet/VLAN between networks "A" and "B"; when Node 3 fails, its peer reasons: "What happened to hellos from Node 3? I will wait for the hold timer to expire."]
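
A minimal IOS sketch of lowering the OSPF hello and dead timers on an SVI where peers adjoin over a shared VLAN; the interface and values are hypothetical, and the timers must match on all peers on the segment:

  interface Vlan161
   ip ospf hello-interval 2
   ip ospf dead-interval 6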

IGP Hello and Dead/Hold Timer Behavior over Layer-3 Links
• Upon device or link failure, the routing protocol immediately removes routes from the failed peer based on interface down state
• Tuning the IGP Hello and Dead/Hold timers lower is not required for convergence due to link or device failure
• Transparent-mode services or using static routing with HSRP can help ensure all failover cases are based on point-to-point links
• Note that static routing with HSRP is not a supported approach for IP multicast traffic
[Diagram: Nodes 1-4 connected by point-to-point routed links between networks "A" and "B"; when the one direct link to Node 3 goes down, its peer reasons: "I better remove Node 3's routes from the table immediately."]
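
A minimal IOS sketch of the static-routing-with-HSRP pattern mentioned above: the next hop is an HSRP virtual IP, so failover is driven by HSRP rather than by IGP timers; the addresses are hypothetical:

  ! Point the server subnet at the HSRP virtual IP shared by the
  ! redundant layer-3 devices on the far side of the service layer
  ip route 10.10.80.0 255.255.255.0 10.10.1.1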

Layer 2 Features, Enhancements and Best Practices

Classic Spanning Tree Topology: "Looped Triangle" Access
• Layer-2 protocols are designed to be plug-and-play, and forward traffic without configuration
• Stability is enhanced by controlling the location of the STP root switch, and by using consistent topologies
• Looped topologies are required to provide link redundancy and server mobility across access switches
• Using STP to break the network loop reduces available bandwidth in a VLAN due to blocked links
• Most STP issues result from undesired flooding due to link issues or software problems causing loss of BPDUs
[Diagram: Looped-triangle aggregation/access topology showing root ports, alternate (blocked) ports, designated ports, network-type ports, and Root Guard placement]
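
A minimal IOS sketch of pinning the STP root to an aggregation switch, one of the stability practices above; the VLAN range is hypothetical:

  spanning-tree mode rapid-pvst
  ! Make this aggregation switch the root (lowest priority) for these VLANs
  spanning-tree vlan 1-1000 root primary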

Spanning Tree Configuration Features: Rootguard, Loopguard, Portfast, BPDUguard
These features allow STP to behave with more intelligence, but require manual configuration:
• Rootguard prevents a port from accepting a better path to root where this information should not be received
• Loopguard restricts the transition of a port to a designated forwarding role without receiving a BPDU with an inferior path to root
• Portfast (Edge Port) allows STP to skip the listening and learning states on ports connected to end hosts
• BPDUguard shuts down a port that receives a BPDU where none should be found; typically also used on ports facing end hosts
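
A minimal IOS sketch of where these features are typically applied; the interface names are hypothetical:

  ! Global default: loopguard on all non-edge ports
  spanning-tree loopguard default
  !
  ! Aggregation downlink toward an access switch
  interface TenGigabitEthernet1/1
   spanning-tree guard root
  !
  ! Access port facing a server
  interface GigabitEthernet1/10
   spanning-tree portfast
   spanning-tree bpduguard enable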

Updated STP Features: Bridge Assurance
• Specifies transmission of BPDUs on all ports of type "network"
• Protects against unidirectional links and peer switch software issues (Loopguard can only be enabled on root and alternate ports)
• Requires configuration; best practice is to set the global default to type "network" (the default is "normal")
• IOS example:
  spanning-tree portfast network default
• Caution: both ends of the link must have Bridge Assurance enabled (otherwise the port is blocked)
[Diagram: Network-type ports between aggregation and access all send BPDUs; edge ports toward hosts send no BPDUs; root, alternate, designated, edge and network port roles with Root Guard marked]
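
For reference, a hedged sketch of the per-interface form on a switch-to-switch link; the interface name is hypothetical, and on NX-OS the equivalent syntax is "spanning-tree port type network":

  interface TenGigabitEthernet1/1
   spanning-tree portfast network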

STP Configuration Feature Placement in the Data Center
• Bridge Assurance replaces the requirement for Loopguard on supported switches
[Diagram: Layer-3 core above two layer-2 aggregation/access blocks, one running STP Bridge Assurance (network-type ports with Rootguard on aggregation downlinks), the other running the classic model (Loopguard, Rootguard, normal port types); edge ports with BPDUguard facing servers; standby links blocked. Legend: N = network port, E = edge port, L = Loopguard, B = BPDUguard, R = Rootguard]

Redundant Paths Without STP Blocking: Basic EtherChannel
• Bundles several physical links into a logical one
  – No blocked ports (redundancy is not handled by STP)
  – Per-flow (not per-VLAN) load balancing across member links
• Control protocols like PAgP (Port Aggregation Protocol) and LACP (Link Aggregation Control Protocol) handle the bundling process and monitor the health of the link
• Limited to parallel links between two switches
[Diagram: the channel between switches A and B looks like a single link to STP]
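
A minimal IOS sketch of bundling two links with LACP; the interface and group numbers are hypothetical, and both ends need a compatible channel-group configuration:

  interface range GigabitEthernet1/1 - 2
   channel-protocol lacp
   channel-group 1 mode active
  !
  ! The bundle appears to STP as this single logical interface
  interface Port-channel1
   switchport
   switchport mode trunk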

Designs Not Relying on STP: Virtual Switching System (VSS)
• Merges two bridges into one, allowing Multi-Chassis EtherChannels
• Also merges Layer-3 and overall switch management
