MCP Reference Architecture - Docsportal Documentation


MCP Reference Architecture, version Q4'18

Mirantis Cloud Platform Reference Architecture

Contents

Copyright notice
Preface
    Intended audience
    Documentation history
Introduction
    MCP capabilities
    MCP design
        DriveTrain
        MCP clusters
Cloud infrastructure
    Infrastructure management capabilities
    Deployment and lifecycle management automation
        LCM pipeline overview
        High availability in DriveTrain
        SaltStack and Reclass metadata model
    Infrastructure nodes overview
        Infrastructure nodes disk layout
    Hardware requirements for Cloud Provider Infrastructure
    Control plane virtual machines
    Networking
        Server networking
        Access networking
        Switching fabric capabilities
    Multi-cluster architecture
    Staging environment
OpenStack cluster
    OpenStack cloud capabilities
    OpenStack compact cloud
    OpenStack Cloud Provider infrastructure
    OpenStack large cloud
    Virtualized control plane
        Virtualized control plane overview
        OpenStack VCP Core services
        OpenStack VCP extension services
        OpenStack VCP extra services
    Manila storage networking planning
    Ironic planning
        Ironic components
        Ironic network logic
        MCP Ironic supported features and known limitations
    Virtualized control plane layout
    High availability in OpenStack
    Secure OpenStack API
    Compute nodes planning
    OpenStack network architecture
        Selecting a network technology
        Types of networks
        MCP external endpoints
        Storage traffic
        Neutron OVS networking
            Limitations
            Node configuration
            Network node configuration for VXLAN tenant networks
            Network node configuration for VLAN tenant networks
            VCP hosts networking
            Neutron VXLAN tenant networks with network nodes for SNAT (DVR for all)
        Plan the Domain Name System
        Plan load balancing with OpenStack Octavia
    Storage planning
        Image storage planning
        Block storage planning
        Object storage planning
        Ceph planning
            MCP Ceph cluster overview
            Ceph services
            Additional Ceph considerations
            Ceph OSD hardware considerations
    Tenant Telemetry planning
    Heat planning
Kubernetes cluster
    Kubernetes cluster overview
    Kubernetes cluster components
    Network planning
        Types of networks
        Calico networking considerations
        Network checker overview
        MetalLB support
    Etcd cluster
    High availability in Kubernetes
    Virtual machines as Kubernetes pods
        Limitations
        Virtlet manager
        Virtlet tapmanager
        Virtlet vmwrapper
        Container Runtime Interface Proxy
    OpenStack cloud provider for Kubernetes
OpenContrail cluster overview
    OpenContrail 3.2 cluster overview
    OpenContrail 4.x cluster overview
OpenContrail components
    OpenContrail 3.2 components
    OpenContrail 4.x components
OpenContrail traffic flow
    User Interface and API traffic
    SDN traffic
OpenContrail vRouter
OpenContrail HAProxy driver with LBaaSv2
OpenContrail IPv6 support
StackLight LMA
    StackLight LMA overview
    StackLight LMA components
    StackLight LMA high availability
    Monitored components
    StackLight LMA resource requirements per cloud size
Repository planning
    Local mirror design
    Mirror image content

Copyright notice

2022 Mirantis, Inc. All rights reserved.

This product is protected by U.S. and international copyright and intellectual property laws. No part of this publication may be reproduced in any written, electronic, recording, or photocopying form without written permission of Mirantis, Inc.

Mirantis, Inc. reserves the right to modify the content of this document at any time without prior notice. Functionality described in the document may not be available at the moment. The document contains the latest information at the time of publication.

Mirantis, Inc. and the Mirantis Logo are trademarks of Mirantis, Inc. and/or its affiliates in the United States and other countries. Third party trademarks, service marks, and names mentioned in this document are the properties of their respective owners.

Preface

This documentation provides information on how to use Mirantis products to deploy cloud environments. The information is for reference purposes and is subject to change.

Intended audience

This documentation is intended for deployment engineers, system administrators, and developers; it assumes that the reader is already familiar with network and cloud concepts.

Documentation history

The following table lists the released revisions of this documentation:

Revision date       Description
February 8, 2019    Q4'18 GA

Introduction

The Mirantis product is Mirantis Cloud Platform (MCP), a software product that is installed on bare metal servers in a datacenter and provides virtualization cloud capabilities to the end users of the platform. MCP also includes deployment and lifecycle management (LCM) tools that enable cloud operators to deploy and update the platform using an automated integration and delivery pipeline.

MCP capabilities

Mirantis Cloud Platform (MCP) provides two broad categories of capabilities to two distinct groups of users:

Cloud operators
    Users from this category are engineers responsible for operation of the cloud platform. They are interested in stability of the platform, reliable lifecycle operations, and timely updates of the platform software.

Tenant users
    Users from this category run workloads on the cloud platform using interfaces provided by the platform. They need to understand what types of virtual resources are available on the cloud platform, how to utilize them, and what the limitations of the platform interfaces are.

Cloud operators and administrators can use MCP to manage the following elements of the cloud platform infrastructure:

Physical infrastructure
    Hardware servers, host operating system.

Cloud platform software
    Hypervisors, control plane services, identity information.

Network configuration
    Host networking, IP routing, filtering, and VPN.

Tenant users can use MCP to manage the following resources provided by the cloud platform:

Virtual infrastructure
    Virtual server instances and/or containers, virtual networks and resources, virtual storage, and tenant identity information.

Applications running on the infrastructure
    Any workloads that run in the virtual infrastructure, consuming the resources of the physical infrastructure transparently through virtualization mechanisms.

MCP design

Mirantis Cloud Platform provides the capabilities described above as functions of its software components.

DriveTrain

DriveTrain is the code name for the MCP LCM framework that includes Gerrit, Jenkins, the MCP Registry, SaltStack, Reclass, and the metadata model. The DriveTrain components perform the following functions:

SaltStack
    Flexible and scalable deployment, configuration management, and orchestration engine that is used for automated lifecycle management of MCP clusters.

Reclass
    An External Node Classifier (ENC) that, coupled with SaltStack, provides an inventory of nodes for easy configuration management.

Reclass metadata model
    A hierarchical, file-based store that defines all parameter values used by Salt to configure the services of MCP. The model hierarchy is merged and exposed to Salt through the Reclass ENC.

Gerrit
    Git repository and code review management system in which all MCP codebase and the metadata model are stored and through which all changes to MCP clusters are delivered.

Jenkins
    Build automation tool that, coupled with Gerrit, enables continuous integration and continuous delivery of updates and upgrades to MCP clusters.

MCP Registry
    A set of repositories with binary artifacts required for MCP cluster deployment and functioning. This is a local mirror of the registry published by Mirantis from its product delivery infrastructure.

MAAS
    Metal-as-a-Service (MAAS) is provisioning software that allows you to manage physical machines.

OpenLDAP
    OpenLDAP server stores and provides identity information for other components of DriveTrain and, optionally, for MCP clusters.
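To illustrate how these components fit together, the fragment below is a minimal sketch of a cluster-level Reclass class that includes a system-level class and overrides one of its parameters. The file path, class name, address, and parameter value are invented for this example, not taken from a shipped model:

```yaml
# cluster/mycloud/openstack/database.yml -- hypothetical cluster-level class
classes:
  # A system-level class that combines the service classes for a Galera cluster
  - system.galera.server.cluster
parameters:
  _param:
    # Cluster-specific values consumed by the included classes
    openstack_database_address: 10.10.0.50
  galera:
    master:
      # Override a default defined at the service level
      innodb_buffer_pool_size: 16384M
```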

MCP clusters

Using DriveTrain, you can deploy and manage multiple MCP clusters of different types. MCP clusters provide certain Operator and Tenant functions, as described below.

StackLight Logging, Monitoring, and Alerting (LMA)
    Responsible for collection, analysis, and visualization of critical monitoring data from physical and virtual infrastructure, as well as alerting and error notifications through a configured communication system, such as email.

OpenStack
    Platform that manages virtual infrastructure resources, including virtual servers, storage devices, networks, and networking services such as load balancers, and provides management functions to Tenant users.

Kubernetes (support terminated since MCP 2019.2.5)
    Platform that manages virtual infrastructure resources, including container images, pods, and storage and networking resources for containerized applications.

Ceph
    Distributed storage platform that provides storage resources, such as objects and virtual block devices, to virtual and physical infrastructure.

OpenContrail (optional)
    MCP enables you to deploy OpenContrail as a software-defined networking solution. MCP OpenContrail is based on official OpenContrail releases with additional customizations by Mirantis.

Note
If you run MCP OpenContrail SDN, you need a Juniper MX or SRX hardware or virtual router to route traffic to and from OpenStack tenant VMs.

High Availability
    In MCP, the high availability of control plane services is ensured by Keepalived and HAProxy. Keepalived is a Linux daemon that provides redundancy for virtual IP addresses. HAProxy provides load balancing for network connections.

Cloud infrastructure

A cloud infrastructure consists of the physical infrastructure, network configuration, and cloud platform software.

In large data centers, the cloud platform software required for managing user workloads runs on separate servers from where the actual workloads run. The services that manage the workloads, coupled with the hardware on which they run, are typically called the control plane, while the servers that host user workloads are called the data plane.

In MCP, the control plane is hosted on the infrastructure nodes. Infrastructure nodes run all the components required for deployment, lifecycle management, and monitoring of your MCP cluster. A special type of infrastructure node called the foundation node, in addition to other services, hosts the bare-metal provisioning service (MAAS) and the Salt Master service that provides infrastructure automation.

MCP employs a modular architecture approach by using the Reclass model to describe the configuration and distribution of services across the infrastructure nodes. This allows the product to arrange the same services into different configurations depending on the use case.

Infrastructure management capabilities

MCP provides the following infrastructure management capabilities to cloud operators and administrators:

- Install MCP and its components on bare metal infrastructure.
- Update components of MCP to improve existing capabilities and get security or other fixes.
- Upgrade cloud platform components and other components of an MCP installation to gain new capabilities.
- Add, remove, and replace elements of the control plane and data plane physical and virtual infrastructure, including hypervisor servers and servers that host control plane services.
- Configure bare metal servers, including disk and network settings, operating system, and IP routing.
- Collect and expose metrics and logs from the infrastructure.
- Generate alerts and notifications about events in the infrastructure.
- Deploy distributed massively-scaled shared storage (Ceph) and attach it to a cloud in order to provide reliable storage to virtual machines.

Deployment and lifecycle management automation

MCP applies the Infrastructure-as-Code concept to deployment and lifecycle management of a cloud datacenter. In this concept, all infrastructure elements are described in definition files. Changes in the files are reflected in the configuration of datacenter hosts and cloud services.

DriveTrain is the lifecycle management (LCM) engine of MCP. It allows cloud operators to deploy and manage MCP clusters.

DriveTrain implements an opinionated approach to Infrastructure-as-Code. Cloud operators use DriveTrain to describe their infrastructures as a declarative, class-based metadata model. Changes in the model parameters are applied through DriveTrain LCM orchestration.

The LCM orchestration is handled by Groovy pipelines executed by the Jenkins server. The configuration management is provided by Salt formulas executed by the SaltStack agents (minions).

LCM pipeline overview

DriveTrain implements lifecycle management (LCM) operations as Jenkins pipelines. For the list of the components of DriveTrain, see MCP design.
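A change that flows through this pipeline is typically a small edit to the cluster metadata model. The fragment below is a hypothetical example of such a change; the file path and the parameter are invented for illustration:

```yaml
# cluster/mycloud/openstack/compute.yml -- hypothetical change under review
parameters:
  nova:
    compute:
      # Raise the CPU overcommit ratio on all compute nodes; once the change
      # is merged in Gerrit, a Jenkins deployment job re-applies the nova
      # Salt state on every cmp node to enact it
      cpu_allocation_ratio: 8.0
```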

The following table describes the workflow of the DriveTrain LCM pipeline:

LCM pipeline workflow

1. An operator submits a change to the cluster metadata model in Gerrit for review and approval.
2. Depending on your configuration and whether you have a staging environment or deploy changes directly to a production MCP cluster, the workflow might differ slightly. Typically, with a staging MCP cluster, you trigger a deployment job in Jenkins before merging the change. This allows you to verify the change before promoting it to production. However, if you deploy changes directly to a production MCP cluster, you might want to approve and merge the change first.
3. The Jenkins job invokes the required SaltStack formulas and Reclass models from Gerrit, and artifacts from the MCP Registry.
4. SaltStack applies the changes to the cloud environment.

See also
Local mirror design
Mirror image content

High availability in DriveTrain

DriveTrain is the integration framework for the MCP product. Therefore, its continuous availability is essential for the MCP solution to function properly. Although you can deploy DriveTrain in the single-node Docker Swarm mode for testing purposes, most production environments require a highly available DriveTrain installation.

All DriveTrain components run as containers in a Docker Swarm mode cluster, which ensures that services are provided continuously, without interruptions, and are resilient to failures.

The following components ensure high availability of DriveTrain:

- Docker Swarm mode is a special Docker mode that provides Docker cluster management. A Docker Swarm cluster ensures:
  - High availability of the DriveTrain services. In case of a failure on any infrastructure node, Docker Swarm reschedules all services to other available nodes, while GlusterFS ensures the integrity of persistent data.
  - Internal network connectivity between the Docker Swarm services through the Docker native networking.
- Keepalived is a routing utility for Linux that provides a single point of entry for all DriveTrain services through a virtual IP address (VIP). If the node on which the VIP is active fails, Keepalived fails over the VIP to another available node.
- nginx is web-server software that exposes the APIs of the DriveTrain services, which run in a private network, to the public network space.
- GlusterFS is a distributed file system that ensures the integrity of the MCP Registry and Gerrit data by storing it on separate volumes in shared storage. This ensures that persistent data is preserved during a failover.

The following diagram describes high availability in DriveTrain:

[Diagram: High availability in DriveTrain]
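As a rough illustration of the Keepalived role, the fragment below sketches how a DriveTrain VIP could be declared in the cluster model. It assumes the parameter layout of the Keepalived Salt formula; the instance name, address, and interface are invented:

```yaml
# Hypothetical cluster-level class declaring the DriveTrain VIP
parameters:
  keepalived:
    cluster:
      enabled: true
      instance:
        cicd_control_vip:
          address: 10.10.0.90   # invented VIP shared by the cid nodes
          interface: ens4       # invented management-network interface
          priority: 103         # the node with the highest priority holds the VIP
```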

SaltStack and Reclass metadata model

SaltStack is an automation tool that executes formulas. Each SaltStack formula defines one component of the MCP cluster, such as MySQL, RabbitMQ, an OpenStack service, and so on. This approach enables MCP product developers to combine the components as needed, so that services do not interfere with each other and can be reused in multiple scenarios.

Reclass is an External Node Classifier (ENC) that enables cloud operators to manage an inventory of nodes by combining different classes into MCP cluster configurations. Reclass operates on classes, which you can view as tags or categories of metadata parameters.

The metadata model itself consists of hierarchically structured classes and corresponding parameters.

The following diagram displays the hierarchy of classes in the Mirantis Reclass metadata model:

[Diagram: Reclass metadata model class hierarchy]
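To make the hierarchy concrete before the detailed breakdown below, this fragment sketches how a cluster-level class builds on system-level classes, which in turn combine service-level classes; all paths, class names, and values are illustrative rather than taken from a shipped model:

```yaml
# cluster/mycloud/infra/config.yml -- illustrative cluster-level class
classes:
  # System classes combine service classes into reusable roles
  - system.linux.system.single
  - system.salt.master.git
parameters:
  _param:
    # Cluster-specific values interpolated into the included classes
    salt_master_host: 10.10.0.15
    cluster_domain: mycloud.local
```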

MCP Reclass classes

Service class
    A service class defines one service, or a group of related services, and the most specific configuration parameters for them. The parameters in this layer of the metadata model are translated directly into values in the configuration files of the corresponding service. The service classes are provided by, and match, the Salt formulas installed on the Salt Master node. A metadata parameter value defined in one of the service classes might be overridden by values from higher levels in the hierarchy, which include the system and cluster levels.

System class
    A system class defines a role (with varying granularity) that is applied to a node, logical or physical. System classes typically include and combine service classes and other system classes in a way that describes a completely configured, integrated, and ready-to-use system. The system classes are distributed as a Git repository. The repository is copied to the Salt Master node during the bootstrap of DriveTrain. A metadata parameter value set in a system class can be overridden by values from a higher level in the hierarchy, which is the cluster level.

Cluster class
    A cluster class defines the configuration of a specific MCP cluster. Classes of this kind combine system classes according to the architecture of the cluster. A cluster metadata model is typically generated by the automation pipeline executed by DriveTrain Jenkins. This pipeline uses Cookiecutter as a templating tool to generate the cluster model. The cluster metadata model is distributed as a Git repository.

Infrastructure nodes overview

Infrastructure nodes are the physical machines that run all services required for MCP cluster deployment, lifecycle management, and monitoring, also known as control plane services.

The exact number of infrastructure nodes in each MCP environment, and the distribution of MCP components across them, depend on the use case and are defined in the deployment model.

The MCP Cloud Provider Reference Configuration requires 9 infrastructure nodes to run control plane services. See OpenStack compact cloud for details.

Note
You can use either Neutron or OpenContrail SDN, but not both at the same time.

See also
Hardware requirements for Cloud Provider Infrastructure

Infrastructure nodes disk layout

Infrastructure nodes are typically installed on hardware servers. These servers run all components of the management and control plane for both MCP and the cloud itself. It is very important to configure hardware servers properly upfront, because changing their configuration after the initial deployment is costly.

For instructions on how to configure the disk layout for MAAS to provision the hardware machines, see MCP Deployment Guide: Add a custom disk layout per node in the MCP model.

Consider the following recommendations:

Layout
    Mirantis recommends using the LVM layout for disks on infrastructure nodes. This option allows for more operational flexibility, such as resizing the Volume Groups and Logical Volumes for scale-out.

LVM Volume Groups
    According to Hardware requirements for Cloud Provider Infrastructure, an infrastructure node typically has two or more SSD disks. These disks must be configured as LVM Physical Volumes and joined into a Volume Group. The name of the Volume Group is the same across all infrastructure nodes to ensure consistency of LCM operations. Mirantis recommends following the vg_<role> naming convention for the Volume Group, for example, vg_root.

LVM Logical Volumes
    The following table summarizes the recommended Logical Volume schema for infrastructure nodes in the CPI reference architecture. The /var/lib/libvirt/images/ size may be adjusted to the total size of all VMs hosted on the node, depending on the VCP VM sizes. The disk size for a large deployment may require more than 3 TB for StackLight LMA and OpenContrail. Follow the instructions in the MCP Deployment Guide to configure infrastructure nodes in your cluster model.

Logical Volume schema for infrastructure nodes in CPI

Server role          Server names           Logical Volume path          Mount point              Size
All roles            kvm01 - kvm09          /dev/vg_root/lv_root         /                        50 GB
VCP infrastructure   kvm01, kvm02, kvm03    /dev/vg_root/lv_gluster      /srv/glusterfs           200 GB
VCP infrastructure   kvm01, kvm02, kvm03    /dev/vg_root/lv_mcp_images   /var/lib/libvirt/images  1200 GB
StackLight LMA       kvm04, kvm05, kvm06    /dev/vg_root/lv_mcp_images   /var/lib/libvirt/images  5500 GB
Tenant gateway       kvm07, kvm08, kvm09    /dev/vg_root/lv_mcp_images   /var/lib/libvirt/images  700 GB
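If you track this layout in the cluster model, it could be encoded roughly as follows. This is a minimal sketch with an invented structure; it is not the literal MAAS disk-layout syntax, for which see the MCP Deployment Guide:

```yaml
# Illustrative encoding of the CPI Logical Volume schema for a kvm01-03 node
# (invented structure, not the literal MAAS/Salt disk-layout format)
vg_root:
  devices: [/dev/sda, /dev/sdb]     # two SSDs joined into one Volume Group
  volumes:
    lv_root:       {mount: /,                       size: 50G}
    lv_gluster:    {mount: /srv/glusterfs,          size: 200G}
    lv_mcp_images: {mount: /var/lib/libvirt/images, size: 1200G}
```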

Hardware requirements for Cloud Provider Infrastructure

The reference architecture for the MCP Cloud Provider Infrastructure (CPI) use case requires 9 infrastructure nodes to run the control plane services.

The following diagram displays the mapping of components to the infrastructure nodes in the CPI reference architecture.

[Diagram: components mapping of the infrastructure nodes in the CPI reference architecture]

Hardware requirements for the CPI reference architecture are based on the capacity requirements of the control plane virtual machines and services. See details in Virtualized control plane layout.

The following table summarizes the actual configuration of the hardware infrastructure nodes used by Mirantis to validate and verify the CPI reference architecture. Use it as a reference to plan the hardware bill of materials for your installation of MCP.

Hardware requirements for the CPI reference architecture

Node role                             Count  Server model              CPU model        CPUs  Cores  RAM, GB  Storage, GB  NICs
Infrastructure node (VCP)             3      Supermicro SYS-6018R-TDW  Intel E5-2650v4  2     48     256      1900 [1]     2 x Intel X520-DA2
Infrastructure node (StackLight LMA)  3      Supermicro SYS-6018R-TDW  Intel …          …     …      …        … [2]        2 x Intel X520-DA2
Compute node                          … [4]  Supermicro SYS-6018R-TDW  Intel … [4]      …     …      …        960 [3][5]   2 x Intel X520-DA2
Ceph OSD                              9 [6]  Supermicro SYS-6018R-TDW  Intel E5-2620v4  1     16     96       960 [3][7]   2 x Intel X520-DA2

[1] One SSD, Micron 5200 MAX or similar.
[2] Three SSDs, 1900 GB each, Micron 5200 MAX or similar.
[3] Two SSDs, 480 GB each, WD Blue 3D (WDS500G2B0A) or similar.
[4] Depends on capacity requirements and compute planning. See details in Compute nodes planning.
[5] Minimal system storage. Additional storage for virtual server instances might be required.
[6] The minimal recommended number of Ceph OSD nodes for a production deployment is 9. See details in Additional Ceph considerations.
[7] Minimal system storage. Additional devices are required for Ceph storage, cache, and journals. For more details on Ceph storage configuration, see Ceph OSD hardware considerations.

Note
RAM capacity of this hardware configuration includes overhead for the GlusterFS servers running on the infrastructure nodes (kvm01, kvm02, and kvm03).
The rule of thumb for capacity planning of the infrastructure nodes is to have at least 10% more RAM than is planned for all virtual machines on the host combined. For example, a host that runs VMs totaling 230 GB of RAM should have at least 253 GB of physical RAM. This rule is also applied by StackLight LMA, which starts sending alerts if less than 10% or 8 GB of RAM is free on an infrastructure node.

See also
Virtualized control plane

Control plane virtual machines

MCP cluster infrastructure consists of a set of virtual machines that host the services required to manage workloads and respond to API calls.

An MCP cluster includes a number of logical roles that define the functions of its nodes. Each role can be assigned to a specific set of the control plane virtual machines. This allows you to adjust the number of instances of a particular role independently of other roles, providing greater flexibility to the environment architecture.

To ensure high availability and fault tolerance, the control plane of an MCP cluster typically spreads across at least three physical nodes. However, depending on your hardware, you may decide to distribute the services across a larger number of nodes. The number of virtual instances that run each service may vary as well.

The reference architecture for the Cloud Provider Infrastructure use case uses 9 infrastructure nodes to host the MCP control plane services.

The following table lists the roles of infrastructure logical nodes and their standard code names used throughout the MCP metadata model:

MCP infrastructure logical nodes

Server role                         Codename  Description
Infrastructure node                 kvm       Infrastructure KVM hosts that provide the virtualization platform for all VCP components.

Network node                        gtw       Nodes that provide the tenant network data plane services.
DriveTrain Salt Master node         cfg       The Salt Master node that is responsible for sending commands to Salt Minion nodes.
DriveTrain LCM engine node          cid       Nodes that run the DriveTrain services in containers in a Docker Swarm mode cluster.
RabbitMQ server node                msg       Nodes that run the message queue server (RabbitMQ).
Database server node                dbs       Nodes that run the clustered MySQL database (Galera).
OpenStack controller node           ctl       Nodes that run the Virtualized Control Plane services, including the OpenStack API servers and scheduler components.
OpenStack compute node              cmp       Nodes that run the hypervisor service and VM workloads.
OpenStack DNS node                  dns       Nodes that run the OpenStack DNSaaS service (Designate).
OpenStack secrets storage nodes     kmn       Nodes that run the OpenStack secrets service (Barbican).
OpenStack telemetry database nodes  mdb       Nodes that run the Telemetry monitoring database services.
Proxy node                          prx       Nodes that run a reverse proxy that exposes the OpenStack API, dashboards, and other components externally.
Contrail controller nodes           ntw       Nodes that run the OpenContrail controller services.
Contrail analytics nodes            nal       Nodes that run the OpenContrail analytics services.
StackLight LMA log nodes            log       Nodes that run the StackLight LMA logging and visualization services.
StackLight LMA database nodes       mtr       Nodes that run the StackLight database services.
StackLight LMA nodes                mon       Nodes that run the StackLight LMA monitoring services.
Ceph RADOS gateway nodes            rgw       Nodes that run the Ceph RADOS gateway daemon and expose the Object Storage API.
Ceph Monitor nodes                  cmn       Nodes that run the Ceph Monitor service.
Ceph OSD nodes                      osd       Nodes that provide storage devices for the Ceph cluster.
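In the cluster model, these code names become host-name prefixes of the generated nodes. The fragment below is a hypothetical sketch of how a node-registry entry for an OpenStack controller might look; the class path, domain, and values are invented:

```yaml
# Hypothetical node-registry entry generated at the cluster level
parameters:
  reclass:
    storage:
      node:
        openstack_control_node01:
          name: ctl01                   # "ctl" codename plus an index
          domain: mycloud.local
          classes:
            - cluster.mycloud.openstack.control
          params:
            linux_system_codename: xenial
```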

Note
In the Cloud Provider reference configuration, Ceph OSDs run on dedicated hardware servers. This reduces operational complexity, isolates the failure domain, and helps avoid resource contention.

See also
OpenStack compact cloud

Networking

This section describes the key hardware recommendations for server and infrastructure networking, as well as the switching fabric capabilities for the CPI reference architecture.

Server networking

Server machines used in the CPI reference architecture have one built-in dual-port 1 GbE network interface card (NIC) and two additional 1/10 GbE NICs.

The built-in NIC is used for network boot of the servers. Only one interface is typically used for PXE boot; the other one is kept unused for redundancy.

The first pair of 1/10 GbE interfaces is used for the management and control plane traffic. These interfaces should be connected to an access switch in 1 or 10 GbE mode. In the CPI reference architecture, the interfaces of the first NIC are joined into a bond logical interface in 802.3ad mode.

The second NIC with two interfaces is used for the data plane and storage traffic. On the operating system level, the ports of this 1/10 GbE card are joined into an LACP bond (Linux bond mode 802.3ad). This NIC must be connected to an access switch in 10 GbE mode.

The recommended LACP load balancing method for both bond interfaces is the transmission hash policy based on TCP/UDP port numbers (xmit_hash_policy layer3+4).

Note
The LACP configuration in 802.3ad mode on the server side must be supported by the corresponding configuration of the switching fabric. See Switching fabric capabilities for details.
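On the model side, such a bond could be declared as in the sketch below, which assumes the networking schema of the Linux Salt formula; the interface names are invented and the key names should be treated as illustrative:

```yaml
# Hypothetical bond definition for the data and storage plane NIC
parameters:
  linux:
    network:
      interface:
        ten1:
          type: eth
          proto: manual
        ten2:
          type: eth
          proto: manual
        bond1:
          type: bond
          mode: 802.3ad                  # LACP, matching the switch-side configuration
          hashing_algorithm: layer3+4    # balance on TCP/UDP port numbers
          slaves: ten1 ten2
          proto: manual
```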

See also
Linux Ethernet Bonding Driver Documentation

Access networking

The top of the rack (ToR) switches provide connectivity to servers on the physical and data-link levels. They must support LACP and the other technologies used on the server side, for example, 802.1q VLAN segmentation. Access layer switches are used in stacked pairs.

In the MCP CPI reference architecture validation lab, the following 10 GbE switch was used as the top of the rack (ToR) switch for the PXE, Management, Public, Storage, and Tenant networks in MCP:

- Dell Force10 S4810P: 48x 10 GbE ports, 4x 40 GbE ports

Use this as a reference when planning the hardware bill of materials for your installation of MCP.

The following diagram illustrates how a server is connected to the switching fabric and how the fabric itself is configured.

[Diagram: server connections to the switching fabric]

Switching fabric capabilities

The following table summarizes the requirements for the switching fabric capabilities:

Switch fabric capabilities summary

Name of requirement
