Mirantis OpenStack Reference Architecture For Dell Hardware


Overview

This document describes the Mirantis OpenStack for Cloud Native Apps with Dell: a fully validated deployment of Mirantis OpenStack on Dell PowerEdge R630 and R730xd servers networked with Dell Networking S3048-ON and S4048-ON switches. The deployment is engineered as a scalable, rack-based OpenStack dev/test/production environment for cloud-native web applications.

This Reference Architecture details all hardware and software components of the Mirantis OpenStack for Cloud Native Apps; describes their physical integration and logical interoperation; and discusses in depth certain critical aspects of the design relevant to performance, high availability, data-loss prevention, and scaling.

Use Case Summary

This Reference Architecture (RA) is designed for cloud-native applications. It comes with the Murano catalog preloaded to support deployment of developer tools, databases, CI/CD toolchain elements (e.g., Jenkins), leading PaaS solutions (e.g., Cloud Foundry), containers and orchestrators (e.g., Docker, Kubernetes), and other solutions, including those available in the online OpenStack Community App Catalog. Neutron networking allows for self-service networking isolated by tenant, while Ceph provides both block and object storage to support various application types.

Table of Contents

- Overview
- Use Case Summary
- Mirantis OpenStack
- Networking
- Mirantis OpenStack Architecture
- Node Types
- High Availability
- Optional Components
- Network Architecture
- Rack Physical Cabling Schema
- Server Interface Configuration
- Logical Networking Schema
- Rack Hardware Specification
- Disk Configuration
- Validated Configuration
- Scaling
- Suggested Rack Configurations
- Testing
- References
- Glossary
- Trademarks

Mirantis OpenStack

Mirantis is the number one contributor to OpenStack. Fuel, a core part of the Mirantis OpenStack distribution, has been taken under the Big Tent.

Mirantis recommends the following configuration of Mirantis OpenStack:

- Mirantis OpenStack version 7.0, based on OpenStack Kilo.
- Ubuntu 14.04 as the host OS.
- The following OpenStack services deployed: Keystone, Nova, Glance, Cinder, Neutron, Horizon, Heat, Murano, Ceilometer.
- Three OpenStack Controller nodes in an HA configuration using HAProxy, Corosync/Pacemaker, MySQL Galera, RabbitMQ, MongoDB.
- A Ceph cluster as a redundant backend for Glance, Cinder, Nova Ephemeral, and the Swift API:
  - Ephemeral volumes configured to use the Ceph backend to provide large volumes and support live migration.
  - 3x replication to protect data.
- Public OpenStack API endpoints and Horizon secured with TLSv1.2.
- Zabbix or LMA toolchain monitoring services installed.
- Sahara (optional).

Key Benefits

- Easy to deploy: an integrated solution by Mirantis on Dell PowerEdge servers.
- Flexible enough to cover a broad set of use cases. Based on best-of-breed, enterprise-ready Mirantis OpenStack and Dell PowerEdge servers.

Hardware Options

For this Reference Architecture, the Dell PowerEdge R630 and R730xd servers were chosen because of their hardware options, performance, and scalability. Both platforms are powered by the Intel Xeon processor E5-2600 v3 product family, are able to utilize up to 24 DIMMs of DDR4 RAM, and support the 12-Gb PowerEdge RAID controllers (PERC9). These features, combined with a high-performance, energy-efficient design, should serve well for the variety of workloads your cloud infrastructure will need to support.

The recommended server configurations are well tested and proven with the Mirantis OpenStack for Cloud Native Apps reference design, creating a foundation that can secure your infrastructure investment and allow for easy expansion of your growing cloud.

Networking

The network layer is based on Neutron with Open vSwitch and VLANs for tenant traffic separation, because this configuration eliminates the cost of an SDN solution. 10-GbE networking infrastructure plus the Neutron DVR feature enables high-performance East-West traffic. Designated NICs for storage replication traffic reduce pressure from tenant traffic. NIC bonding and multi-chassis LAG across multiple switches for uplinks and downlinks provide both high performance and physical redundancy for the whole network infrastructure.

Mirantis OpenStack Architecture

Mirantis OpenStack

Mirantis OpenStack 7.0 is a Kilo-based OpenStack distribution running on top of Ubuntu 14.04. It consists of the following "core" OpenStack services: Keystone, Nova, Neutron, Cinder, and Glance. Swift is substituted by Ceph, with RadosGW providing the same Swift API.

The following "optional" OpenStack services are installed in Mirantis OpenStack: Horizon, Heat, Murano, and Ceilometer.

All services except Ceilometer use local MySQL as the database backend; Ceilometer uses MongoDB. All services use RabbitMQ as the messaging queue service. Ceph provides the Swift API via RadosGW and is also used as a backend for Cinder and Glance.

Node Types

Servers used in this Reference Architecture serve as one of these node types: Infrastructure, Controller, Compute, or Storage.

Infrastructure Node

The Infrastructure node is an Ubuntu 14.04-based node which carries two virtual appliances:

- Fuel Master node: an OpenStack deployment tool.
- Cloud Validation node: a set of OpenStack post-deployment validation tools, including Tempest and Rally.

Controller Node

The Controller node is a control plane component of a cloud which incorporates all core OpenStack infrastructure services, such as MySQL, RabbitMQ, HAProxy, the OpenStack APIs, Horizon, and MongoDB. This node is not used to run VMs.

Compute Node

The Compute node is a hypervisor component of a cloud; it runs virtual instances.

Storage Node

The Storage node is a component of an OpenStack environment which keeps and replicates all user data stored in your cloud, including object and block storage. Ceph is used as the storage backend. Nova Ephemeral storage may use local disks or Ceph: local disks give better performance, but Ceph enables live migration for instances that use Ephemeral storage. Placement of Ephemeral storage on local disks is considered a technical preview.

To learn more, see the description of the Mirantis OpenStack architecture: https://docs.mirantis.com/openstack/fuel/fuel-7.0/reference-architecture.html
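All of these services expose versioned REST APIs with standard Python client libraries. The following is a minimal sketch, assuming keystoneauth1 and python-novaclient are installed; the endpoint URL, credentials, and tenant name are placeholders, not values from this Reference Architecture:

```python
# Authenticate against Keystone v2.0 (the API generation used in the Kilo
# era) and list the Nova services running on the Controller nodes.
from keystoneauth1.identity import v2
from keystoneauth1 import session
from novaclient import client as nova_client

auth = v2.Password(
    auth_url='https://openstack.example.com:5000/v2.0',  # hypothetical public endpoint
    username='admin',
    password='secret',
    tenant_name='admin',
)
# verify=False is only appropriate for lab use with self-signed certificates.
sess = session.Session(auth=auth, verify=False)

nova = nova_client.Client('2', session=sess)
for svc in nova.services.list():          # requires admin credentials
    print(svc.binary, svc.host, svc.state)
```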

High Availability

The High Availability model implemented in Mirantis OpenStack (MOS) is described in detail in the official documentation; here is a short description.

To protect core OpenStack services from failure of a Controller node, control plane components are duplicated on multiple Controller nodes. It's possible to deploy 3, 5, 7, or any other odd number of Controller nodes, and to add more nodes after the initial deployment (keeping the total number odd) to distribute load and increase the redundancy level. Pacemaker/Corosync is used to manage critical infrastructure parts when a failure occurs. Because of the quorum majority-voting algorithm used in Corosync, a cluster of N Controller nodes can survive the loss of up to (N-1)/2 nodes: for three nodes that's one node down, for five nodes it's two, and so on (a short sketch of this arithmetic appears at the end of this section).

Almost all services work in active/active mode, with load balanced by HAProxy listening at a virtual IP address. The OpenStack components support active/active mode natively, as do RabbitMQ and MongoDB. A Galera cluster is used to protect MySQL.

From the networking perspective, high availability is upheld by redundant, bonded connections to two ToR switches. If one cable gets cut or an entire switch goes down, the cloud survives seamlessly with half its normal network bandwidth still available.

Another low-level hardware feature that provides redundancy is RAID1 for disks on Infrastructure and Storage nodes, which keeps the operating system intact in case of a disk failure. It is also highly recommended to use two independent hot-plug power supplies for each piece of hardware, which helps protect components from power supply failures.

Ceph provides redundancy for stored data. The recommended replication number is three, meaning that at any given time there are three copies of the same data spread across the Ceph nodes.

Optional Components

Sahara is a service used to deploy big data (i.e., Hadoop) clusters on top of OpenStack; it may be installed on demand. Sahara wasn't used during the validation.

There are two possible ways to use MongoDB with Ceilometer: install the database locally or externally. Both options are available in Mirantis OpenStack. When installing MongoDB locally, we recommend using at least three separate servers to form a MongoDB cluster. Placing MongoDB onto Controller nodes may cause resource starvation for key services such as MySQL or RabbitMQ, which may lead to severe issues for the cloud as a whole. The best approach is to use dedicated nodes for MongoDB, either external ones or nodes deployed as part of MOS. For validation purposes, we put MongoDB on the Controllers but gave it a separate physical disk on each node: each Controller node contains two SSDs, the first for the OS plus other services and the second for MongoDB only.

By default, Mirantis OpenStack protects the public OpenStack APIs and Horizon with TLSv1.2 using self-signed or user-provided certificates, but it's possible to disable this feature if necessary. For validation purposes, we used self-signed certificates.

Mirantis OpenStack may be extended via Fuel plugins to provide two validated options for monitoring your cloud: Zabbix or the LMA toolchain (Elasticsearch/Kibana, InfluxDB/Grafana, and Nagios). Neither was used during this validation.
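The quorum arithmetic described under High Availability can be summed up in a few lines. This is a minimal sketch of the majority-voting rule, not Mirantis tooling:

```python
# A cluster of N nodes keeps quorum while more than N/2 members are alive,
# so it tolerates the loss of up to (N - 1) // 2 nodes.
def tolerable_failures(n_controllers: int) -> int:
    """Maximum number of Controller nodes that may fail while quorum holds."""
    return (n_controllers - 1) // 2

for n in (3, 5, 7):
    print(f"{n} controllers -> survives {tolerable_failures(n)} node failure(s)")
# 3 controllers -> survives 1 node failure(s)
# 5 controllers -> survives 2 node failure(s)
# 7 controllers -> survives 3 node failure(s)
```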

Network Architecture

The underlying network consists of the OpenStack control plane, the data plane, and BMC management. A pair of ToR S4048-ON switches in each rack forms a Virtual Link Trunking (VLT) group and provides for OpenStack control and data plane traffic and uplinks. VLT allows servers' bonded NICs and rack uplinks to be connected to both switches (as opposed to traditional LACP), so every connection is secured against physical link or entire switch failure. The S3048-ON serves the 1-Gbps BMC management network connections, including Dell iDRAC, switch management traffic, and OpenStack Provisioning traffic. Ceph IO and monitoring traffic goes over the OpenStack Management network and is secured with LACP.

It's recommended to run Ceph replication traffic separately and secure it with LACP along with the OpenStack Management network; we did this during validation. Depending on intended load, you may or may not elect to run the OpenStack Private network over LACP. Alternatively, you may wish to enable LACP for some Compute nodes but not others, providing different SLAs for these loads. We ran the Private network over LACP during validation.

Network Design: the networking model of Mirantis OpenStack requires the following networks to be set up.

NETWORK                                                    VALID VLAN IDs¹

BMC Management networks:
  Mirantis OpenStack Admin/PXE (OpenStack Provisioning)    120
  Out-of-band IPMI                                         100

OpenStack control plane networks:
  OpenStack Management and Ceph Public                     140
  OpenStack Public                                         160
  OpenStack Ceph Replication                               180

OpenStack data plane network:
  OpenStack Private                                        200–1,000

¹ These VLAN numbers are given for reference.

To learn more about Mirantis OpenStack's networking model, see https://docs.mirantis.com/openstack/fuel/fuel-7.0/reference-architecture.html#network-architecture.

We recommend deploying Mirantis OpenStack with Neutron DVR enabled. DVR runs a standalone router on every Compute node, which routes East-West traffic and North-South traffic for the Floating IPs of local instances, dramatically decreasing load on Controller nodes, which then serve only North-South NAT traffic. To learn more about DVR, see https://docs.mirantis.com/openstack/fuel/fuel-7.0/reference-architecture.html#neutron-with-dvr. We had DVR deployed during validation.
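Since the control plane VLANs must stay clear of the tenant range, the plan in the table above can be sanity-checked with a short script. A minimal sketch; the dictionary below merely restates the table and is not a Fuel or switch configuration format:

```python
# Control-plane VLAN IDs from the table above.
CONTROL_PLANE_VLANS = {
    'Admin/PXE (Provisioning)': 120,
    'Out-of-band IPMI': 100,
    'Management / Ceph Public': 140,
    'Public': 160,
    'Ceph Replication': 180,
}
# Tenant VLAN range for the OpenStack Private network (200-1,000 inclusive).
TENANT_VLAN_RANGE = range(200, 1001)

for name, vid in CONTROL_PLANE_VLANS.items():
    assert 1 <= vid <= 4094, f"{name} VLAN {vid} is not a valid 802.1Q ID"
    assert vid not in TENANT_VLAN_RANGE, f"{name} VLAN {vid} collides with tenant range"
print("VLAN plan is consistent")
```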

The following subnets are used by Mirantis OpenStack by default:

- Mirantis OpenStack PXE network: 172.16.1.0/24*
- OpenStack Management and Ceph Public network: 172.17.0.0/24*
- OpenStack Public network: depends on requirements; a minimum /26 network is recommended. To calculate the required size of the Public network, see the Mirantis OpenStack documentation on Public network address requirements.
- OpenStack Private network: for validation purposes, a single tenant network is created during the deployment with the network address 192.168.122.0/24.* The network may be removed after the validation is complete.
- OpenStack Ceph Replication network: 172.18.0.0/24*
- Out-of-band IPMI and switch management network: no specific requirements.

* This network may be fully isolated inside the cloud and doesn't need to be routed to the customer's network.
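A minimal sketch using Python's standard ipaddress module to restate the default plan and show why a /26 is the suggested minimum for the Public network; the /26 shown uses a documentation placeholder range, not a recommended value:

```python
import ipaddress

# Default subnets from the table above, plus a placeholder /26 Public range.
subnets = {
    'PXE':              ipaddress.ip_network('172.16.1.0/24'),
    'Mgmt/Ceph Public': ipaddress.ip_network('172.17.0.0/24'),
    'Ceph Replication': ipaddress.ip_network('172.18.0.0/24'),
    'Public (minimum)': ipaddress.ip_network('192.0.2.0/26'),  # placeholder
}
for name, net in subnets.items():
    # Subtract the network and broadcast addresses to get usable host IPs;
    # a /26 leaves 62 addresses for node IPs, VIPs, and floating IPs.
    print(f"{name}: {net} -> {net.num_addresses - 2} usable hosts")
```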

Rack Physical Cabling Schema

[Figure: rack physical cabling schema]

Server Interface Configuration

Controller Node
  Interface   1st 1G           1st & 2nd 10G (bond0)               IPMI
  Bonding     none             Active/Active LACP                  none
  VLANs       120 (untagged)   140, 160, 180, 200–1,000 (tagged)   100 (untagged)

Compute Node
  Interface   1st 1G           1st & 2nd 10G (bond0)               IPMI
  Bonding     none             Active/Active LACP                  none
  VLANs       120 (untagged)   140, 160, 180, 200–1,000 (tagged)   100 (untagged)

Storage Node²
  Interface   1st 1G           1st & 2nd 10G (bond0)           3rd & 4th 10G (bond1)   IPMI
  Bonding     none             Active/Active LACP              Active/Active LACP      none
  VLANs       120 (untagged)   140, 160, 200–1,000 (tagged)    180 (tagged)            100 (untagged)

² The Storage node has two network cards with two ports each, and each bonded interface consists of one port from each physical card to provide protection against network card failure.
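Once a node is deployed, each bond can be checked from the node itself. A minimal sketch that reads the standard Linux bonding status file under /proc/net/bonding; the bond names follow the tables above, but running such a check is an assumption, not a step from this document:

```python
# Print the bonding mode and per-slave link status for a bonded interface.
def bond_summary(bond: str = 'bond0') -> None:
    with open(f'/proc/net/bonding/{bond}') as f:
        for line in f:
            line = line.strip()
            # Keep only the lines relevant to LACP health: the mode
            # (expected: IEEE 802.3ad), each slave, its link state, and speed.
            if line.startswith(('Bonding Mode', 'Slave Interface',
                                'MII Status', 'Speed')):
                print(line)

bond_summary('bond0')  # control/data plane bond on all node types
# On a Storage node, also: bond_summary('bond1')  # Ceph replication bond
```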

Logical Networking Schema

[Figure: logical networking schema]

Rack Hardware Specification

This Reference Architecture covers Dell PowerEdge R630 and R730xd server configurations, but it's possible to use other server configurations. Please contact your Dell sales representative for available options.

PowerEdge R630

The PowerEdge R630 two-socket rack server delivers uncompromising density and productivity. Part of the 13th generation of PowerEdge servers, the R630 is ideal for virtualization. The processor and memory density, with up to 24 DIMMs of DDR4 RAM, provides great memory bandwidth.

PowerEdge R730xd

The incredible versatility of the PowerEdge R730xd server delivers outstanding functionality in just 2U of rack space. With the Intel Xeon processor E5-2600 v3 product family and up to 24 DIMMs of DDR4 RAM, the R730xd has the processing cycles and threads necessary to deliver more, larger, and higher-performing storage for virtual machines. Highly scalable storage, with up to sixteen 12-Gb SAS drives and the high-performance 12-Gb PowerEdge RAID Controller (PERC9), can greatly accelerate data access for your virtualized environment.

Recommended Server Configurations

- Controller Node: R630; 2 sockets, 8 cores per socket; 256 GB RAM; 2x 400-GB SSD disks; 2x 1-G and 2x 10-G NICs.
- Compute Node: R630; 2 sockets, 8 cores per socket; 256 GB RAM; 1x 400-GB SATA disk; 2x 1-G and 2x 10-G NICs.
- Storage Node: R730xd; 2 sockets, 6 cores per socket; 256 GB RAM; 4x 200-GB SATA Intel DC S3710 SSDs, 20x 1.2-TB SAS disks, 2x 1.2-TB SAS disks in the FlexBay; 2x 1-G and 4x 10-G NICs (2 cards with 2 ports each).
- Infrastructure Node: R630; 2 sockets, 8 cores per socket; 256 GB RAM; 2x 400-GB SSD disks; 2x 1-G and 2x 10-G NICs.

Switches

- Dell Networking S3048-ON: 1
- Dell Networking S4048-ON: 2

Additional Rack Equipment

You may order additional equipment such as Ethernet-managed PDUs and wire-management panels. Please contact your Dell representative.

Disk Configuration

Node            RAID Type   Disks   Disk Type           Purpose
Controller      none        1       SSD                 Operating system
                none        1       SSD                 MongoDB
Compute         none        1       SSD                 Operating system
Storage         RAID1       2       10k SAS (FlexBay)   Operating system
                JBOD        20      10k SAS (front)     Ceph OSDs
                JBOD        4       SSD                 Ceph journal
Infrastructure  RAID1       2       SSD                 Operating system

Compute Node

Each Compute node requires at least one disk with average speed characteristics for OS deployment. We recommend using at least a 400-GB disk for this purpose. RAID1 is optional.

Ephemeral storage type is a cloud-wide configuration option that selects a backend for Nova Ephemeral volumes. If Ephemeral storage on local disks is chosen, it's better to put that storage on a separate disk, which may be SAS or even SSD depending on the SLA required for cloud applications. These disks may be put into RAID0 or RAID1 to utilize the RAID controller's cache and gain even more performance. It's possible to use different disk types in different groups of Compute nodes, increasing flexibility. Even with local Ephemeral storage, it's still possible to live-migrate instances that don't have Ephemeral storage attached. We used one 400-GB SSD for the OS during validation; Ceph is used for Ephemeral storage.

Controller Node

Controller nodes require at least one disk with good speed characteristics for the OS and OpenStack services. It's recommended to use a 400-GB SSD for these purposes. RAID1 is optional but preferable. If MongoDB is running on Controller nodes, we recommend using a second disk for the database; we suggest an SSD of at least 200 GB for a cloud with 30 Compute nodes. Actual MongoDB load depends on the Ceilometer configuration; please contact your Mirantis representative for detailed information. For validation purposes, two 400-GB SSDs were used: one for the OS and OpenStack services, and another for MongoDB. For RAID1, at least three disks should be used: two SSDs in RAID1 for the OS and a third SSD with no RAID for MongoDB.

Storage Node

Storage nodes require at least one disk with average speed characteristics for the OS. We recommend using at least a 200-GB disk for this purpose. RAID1 is optional but preferable if you have configured a small number of Storage nodes. To speed up Ceph, we recommend putting the Ceph journal on SSDs. The number and size of required SSDs is roughly calculated as follows: 40 GB of SSD journal for every 1-1.2-TB SAS disk, and a single SSD may serve up to five SAS disks. This formula yields two 200-GB SSDs for ten 1.2-TB SAS disks, and so on. Taking all of the above into account, the configuration for the Dell R730xd turns out to be: two SAS disks located in the FlexBay set up as RAID1 for the OS, four SATA 200-GB Intel DC S3710 SSDs for journals, and twenty 1.2-TB SAS disks for OSDs. We used this exact configuration during validation.

Infrastructure Node

Infrastructure nodes require at least one disk with average speed characteristics for OS deployment. We recommend using at least a 400-GB disk for this purpose. RAID1 is optional but preferable; otherwise, the Fuel Master node must be backed up. We used two 400-GB SSDs in RAID1 during validation.
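The journal-sizing rule of thumb above is easy to express as a helper. A minimal sketch, assuming only the 40-GB-per-OSD and five-OSDs-per-SSD figures quoted in the Storage Node paragraph:

```python
import math

# 40 GB of SSD journal per SAS OSD disk, at most five OSDs sharing one SSD.
def journal_ssds(n_sas_disks: int, osds_per_ssd: int = 5,
                 journal_gb_per_osd: int = 40) -> tuple[int, int]:
    """Return (number of SSDs needed, minimum size in GB of each SSD)."""
    n_ssds = math.ceil(n_sas_disks / osds_per_ssd)
    min_size_gb = osds_per_ssd * journal_gb_per_osd
    return n_ssds, min_size_gb

print(journal_ssds(10))  # -> (2, 200): two 200-GB SSDs for ten SAS disks
print(journal_ssds(20))  # -> (4, 200): the validated R730xd layout
```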

Validated Configuration

Here is the validated configuration summed up.

Hardware configuration:

- Dell PowerEdge R630 is used for Infrastructure, Controller, and Compute nodes.
- Dell PowerEdge R730xd is used for Storage nodes.
- 1 Infrastructure node:
  - 2x 400-GB Intel SSD disks built into RAID1
  - 1x 1-GB Intel NIC to carry BMC networks
  - 1x 1-GB Intel NIC to carry Public networks
- 3 Controller nodes:
  - 2x 400-GB Intel SSD disks, one for the OS and another for MongoDB
  - 2 ports of 10-GB Intel NIC bonded into LACP
- 3 Compute nodes:
  - 1x 400-GB SSD disk for the OS
  - 2 ports of 10-GB NIC bonded into LACP to carry control and data plane networks
  - 1x 1-GB NIC to carry BMC networks
- Dell Networking S3048-ON and Dell Networking S4048-ON switches are used.
- Cabling and logical network configuration are done as described in the corresponding sections.
- Tenant traffic segmentation is done by VLANs, and DVR is used to accelerate East-West communication.

Software configuration:

- Mirantis OpenStack version 7.0, based on OpenStack Kilo, on Ubuntu 14.04.
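With the 3x replication used throughout this RA, usable Ceph capacity is raw OSD capacity divided by three. A minimal sketch of that arithmetic; the storage node count is a hypothetical parameter, since this section does not state how many Storage nodes were validated:

```python
# Usable capacity = nodes * OSDs per node * OSD size / replica count.
def usable_tb(storage_nodes: int, osds_per_node: int = 20,
              osd_tb: float = 1.2, replicas: int = 3) -> float:
    return storage_nodes * osds_per_node * osd_tb / replicas

print(usable_tb(3))  # e.g., 3 nodes of 20x 1.2-TB OSDs -> 24.0 TB usable
```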
