7 Key Considerations for Microservices-Based Application Delivery


White Paper: Ensuring the success of your cloud-native journey

By Lee Calcote and Pankaj Gupta

The role of application delivery in your cloud-native journey

As digital transformation is changing how your organization conducts business, it is also changing how your products and services are delivered. The infrastructure and practices by which your software is continuously deployed and operated—your application delivery—is the fulcrum of your organization's digital transformation. Likely you are progressing on your cloud-native journey—that is, transitioning from monolithic to container-based microservices architectures with the goal of achieving agility, portability, and on-demand scalability. Kubernetes is the platform of choice for many companies, providing the automation and control necessary to manage microservices-based applications at scale and with high velocity.

The network is part and parcel of each and every service request in your microservices-based application. Therefore, it may come as no surprise that at the core of application delivery is your application delivery controller (ADC), an intelligent proxy that accelerates and manages application delivery. With no standard definition of what an application delivery controller does, the capabilities of intelligent proxies vary broadly. In this white paper, we'll explore application delivery controllers as they relate to your architecture choices, your use of container platforms, and open source tools.

7 key considerations for microservices-based application delivery

Before embarking on your cloud-native journey, it's essential to critically assess your organization's readiness so you can choose the solutions that best fit your business objectives. There are seven key considerations to address when planning your microservices-based application delivery design:

1. Architecting your foundation the right way
2. Openly integrating with the cloud-native ecosystem
3. Choosing the perfect proxy
4. Securing your applications and APIs
5. Enabling CI/CD and canary deployment with advanced traffic steering
6. Achieving holistic observability
7. Managing monoliths and microservices

A thorough evaluation of these seven considerations is best done with specific tasks and goals in mind. Depending on the size and diversity of your organization, you may need to account for a variety of stakeholders' needs—that is, tasks and goals that differ based on role and responsibility.

In the context of application delivery, we'll survey the most common roles with a generalized view of their responsibilities and needs as stakeholders. To facilitate a general understanding, we've grouped some roles where responsibilities overlap across multiple teams:

• Platform: Platform teams are responsible for deploying and managing their Kubernetes infrastructure. They are responsible for platform governance, operational efficiency, and developer agility. The platform team is the connective tissue among various teams like DevOps, SREs, developers, and network operations, and it therefore must address and balance the unique needs of a diverse group of stakeholders, or influencers, when choosing cloud-native solutions.

• DevOps: DevOps teams are responsible for continuously deploying applications. They care about faster development and release cycles, CI/CD and automation, and canary and progressive rollouts.

• SREs: Site reliability engineers must ensure application availability. They care about observability, incident response, and postmortems. SREs often act as architects for the DevOps team and are often extensions of, or directly belong to, DevOps teams.
• Developers: Development teams are responsible for application performance and are focused on ensuring a seamless end-user experience, including troubleshooting and microservices discovery and routing. Application performance and troubleshooting are a shared responsibility among multiple teams.

• NetOps: Network operations teams are responsible for ensuring stable, high-performing network connectivity, resiliency, and security (web application firewalls and TLS, for example), and they are commonly focused on north-south traffic. They care about establishing networking policies and enforcing compliance; achieving management, control, and monitoring of the network; and gaining visibility for resource and capacity planning.

• DevSecOps: DevSecOps teams care about ensuring a strong security posture and rely on automated tools to orchestrate security for infrastructure, applications, containers, and API gateways. DevSecOps works very closely with NetOps to ensure a holistic security posture.

Each role has nuanced responsibilities. Whether you have a single person or entire teams assigned to these roles, each role's function needs to be accounted for. It's important to note that these stakeholders are undergoing a transformation in their responsibilities—or at least in the way they perform their responsibilities. Depending upon your organization's size and structure, your stakeholders may or may not have clearly defined lines of accountability among roles. As you adopt a cloud-native approach to application deployment and delivery, you may find that the once-defined lines have blurred or are being redrawn. Be aware that the individuals who fill these roles typically go through a period of adjustment that can be unsettling until they adapt.

Figure: Diverse stakeholders have unique needs. Platform team: platform governance, operational efficiency, developer agility. DevOps: faster release and deployment cycles, CI/CD and automation, canary and progressive rollout. Developers: user experience, troubleshooting, microservice discovery and routing. SRE: application availability, observability, incident response, postmortems. NetOps: network policy and compliance; management, control, and monitoring of the network; resource and capacity planning. DevSecOps: application and infrastructure security, container security and API gateways, and automation.

Your cloud-native infrastructure should be as accommodating as possible to you, your team, and your collective responsibilities and processes, so we encourage you to seek solutions that address the needs of all your stakeholders. Significantly, this includes evaluating the different architectural models that are best suited to the purpose. While not every organization travels the same road to cloud native, every journey starts with initial architectural decisions—decisions that have substantial bearing on your path to cloud native.

1. Architecting your foundation the right way

Cloud-native novices and experts alike find that designing their application delivery architectures is the most challenging part of building microservices. Your architectural choices will have a significant impact on your cloud-native journey: some architectures provide greater benefits than others, and some are more difficult to implement than others.

Whether you are a cloud-native pro or a novice, your selection of the right application delivery architecture will balance the tradeoff between the greatest benefits and the simplicity needed to match your team's skill set.

Figure 1 highlights four common application delivery architecture deployment models.

Figure 1: Citrix architectures for microservices-based application delivery. The four models (two-tier ingress, unified ingress, service mesh lite, and service mesh) span a spectrum of increasing benefit and increasing complexity.

Each of the deployment models in Figure 1 comes with its own pros and cons, and each is typically the point of focus of different teams. So how do you choose the right architecture for your deployment? Given the needs of your stakeholders and the many specifics involved in managing both north-south (N-S) and east-west (E-W) traffic, it is critical to assess the four architectures with respect to the following areas:

• Application security
• Observability
• Continuous deployment
• Scalability and performance
• Open source tools integration
• Service mesh and Istio integration
• IT skill set required

Learn more about the evaluation criteria.

Tip: Traffic directions
North-south (N-S) traffic refers to traffic between clients outside the Kubernetes cluster and services inside the cluster, while east-west (E-W) traffic refers to traffic between services inside the Kubernetes cluster.

Let's examine each of the four deployment models.

Two-tier ingress

Two-tier ingress is the simplest architectural model to deploy to get teams up and running quickly. In this deployment model, there are two layers of ADCs for N-S traffic ingress. The external ADC (Tier 1), shown in green in Figure 2, provides L4 traffic management. Frequently, additional services are assigned to this ADC, including web application firewall (WAF), secure sockets layer/transport layer security (SSL/TLS) offload, and authentication. A two-tier ingress deployment model is often managed by the existing network team (which is familiar with internet-facing traffic), and the Tier 1 ADC can simultaneously serve other existing applications.

The second ADC (Tier 2), shown in orange in Figure 2, handles L7 load balancing for N-S traffic. It is managed by the platform team and is used within the Kubernetes cluster to direct traffic to the correct node. Layer 7 attributes, like information in the URL and HTTP headers, can be used to make traffic load balancing decisions.
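To make that concrete, here is a minimal sketch of the kind of path-based routing a Tier 2 proxy performs, written with Go's standard library. The service names and ports are hypothetical, and a production ADC would layer health checking, TLS, and policy on top of this basic mechanic.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

// mustProxy builds a reverse proxy for one upstream microservice.
func mustProxy(raw string) *httputil.ReverseProxy {
	u, err := url.Parse(raw)
	if err != nil {
		log.Fatal(err)
	}
	return httputil.NewSingleHostReverseProxy(u)
}

func main() {
	// Hypothetical in-cluster service addresses.
	orders := mustProxy("http://orders.default.svc:8080")
	catalog := mustProxy("http://catalog.default.svc:8080")

	mux := http.NewServeMux()
	// The routing decision uses an L7 attribute (the URL path),
	// not just destination IPs and ports; HTTP headers could be
	// consulted the same way.
	mux.Handle("/orders/", orders)
	mux.Handle("/catalog/", catalog)

	log.Fatal(http.ListenAndServe(":8443", mux))
}
```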

The orange ADC continuously receives updates about the availability and respective IP addresses of the microservices pods within the Kubernetes cluster and can make decisions about which pod is best able to handle a request. The orange ADC is deployed as a container inside the Kubernetes cluster and can be Citrix CPX or another similar product.

The E-W traffic between microservices pods is managed by kube-proxy, an open source, basic L4 load balancer with a simple IP address-based round robin or least-connection algorithm. kube-proxy lacks advanced features like Layer 7 load balancing, security, and observability, making it a blind spot for E-W traffic.

Tip: Improving kube-proxy performance
By default, kube-proxy uses iptables (x_tables kernel modules), so it does not perform as well as other proxies. You can configure kube-proxy to run in different modes by setting the --proxy-mode flag. Setting this flag to ipvs enables IPVS mode (netfilter kernel modules), which provides much improved performance and also enables a choice of load balancing algorithms through the --ipvs-scheduler parameter beyond the default round robin algorithm.

Figure 2: Two-tier ingress proxy architecture

Pros of two-tier ingress

With the right proxy, SSL termination can be done at the edge, and traffic can be inspected easily. This enables N-S traffic to be comprehensively secured across L3-7. The ADC collects and reports telemetry on the N-S application traffic it sees, which means this architecture provides robust observability for N-S traffic. The ADC can also integrate with CI/CD tools like Spinnaker to provide traffic management for N-S traffic, enabling excellent continuous deployment capabilities.

Two-tier ingress scales very well for N-S traffic. Citrix ADC, for example, can reach hundreds of Gbps—or even Tbps—of throughput through active-active clustering of ADCs if required. Integrations with third-party tools like Prometheus, Grafana, and Zipkin are supported out of the box, so you can continue to use the tools with which you are familiar to collect data and manage your systems for N-S traffic.

The bifurcated design of two-tier ingress makes it relatively simple to implement demarcation points for control. The network team can own and manage the green ADC, and the platform team can work inside the Kubernetes environment. Neither the network team nor the platform team needs extensive retraining, which makes this architecture quick to implement.

Cons of two-tier ingress

The limitations of kube-proxy have made the use of third-party tools like Project Calico necessary to provide network policies, segmentation, and security support for inter-microservices communication. Similarly, kube-proxy's lack of detailed telemetry capabilities provides very little observability for E-W traffic. kube-proxy does not have extensive APIs to integrate with continuous deployment tools, and its basic round-robin load balancing does not provide the granular load balancing needed to incorporate a CI/CD strategy inside the cluster. And kube-proxy does not currently integrate with service meshes, so there is no open source control plane integration for your E-W traffic management.
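To see why that balancing is considered basic, compare the two strategies kube-proxy can apply, sketched below as a self-contained illustration with hypothetical pod endpoints. Both decisions look only at addresses and connection counts, never at URLs or headers, which is exactly the L7 blind spot described above.

```go
package main

import "fmt"

// Endpoint models a pod address with its number of in-flight connections.
type Endpoint struct {
	Addr        string
	ActiveConns int
}

// roundRobin cycles through endpoints in order, regardless of load.
func roundRobin(eps []Endpoint, counter *int) Endpoint {
	ep := eps[*counter%len(eps)]
	*counter++
	return ep
}

// leastConnection picks the endpoint with the fewest in-flight connections.
func leastConnection(eps []Endpoint) *Endpoint {
	best := &eps[0]
	for i := range eps {
		if eps[i].ActiveConns < best.ActiveConns {
			best = &eps[i]
		}
	}
	return best
}

func main() {
	eps := []Endpoint{ // hypothetical pod IPs
		{Addr: "10.244.0.12:8080", ActiveConns: 3},
		{Addr: "10.244.1.7:8080", ActiveConns: 0},
		{Addr: "10.244.2.9:8080", ActiveConns: 9},
	}

	var rr int
	for i := 0; i < 3; i++ {
		fmt.Println("round robin ->", roundRobin(eps, &rr).Addr)
	}

	picked := leastConnection(eps)
	picked.ActiveConns++ // the chosen pod now carries one more connection
	fmt.Println("least connection ->", picked.Addr)
}
```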

Overall, two-tier ingress provides excellent services for N-S traffic but lacks control for E-W traffic. It is a popular architecture because it is simple to implement, and it is frequently a starting point for enterprises on their cloud-native journey to microservices adoption.

Learn more about two-tier ingress.

Unified ingress

Unified ingress is very similar to the two-tier ingress architecture, except that it unifies the two tiers of application delivery controllers for N-S traffic into one. Removing an ADC tier effectively removes one hop of latency for N-S traffic.

Unified ingress has the same benefits and drawbacks as the two-tier ingress proxy architecture for security, observability, continuous deployment, scale and performance, open source tools support, and service mesh integration. Where it differs is in the skill set required for implementation. With unified ingress, both the ADC for N-S traffic and kube-proxy for E-W traffic are managed by the platform team, which must be very network savvy to implement and manage this architecture.

Figure 3: Unified ingress proxy architecture

A unified ingress proxy architecture is capable of participating in the Kubernetes cluster's overlay network, which allows it to communicate directly with the microservices pods. The platform team therefore has to be knowledgeable about layers 3-7 of the network stack to take full advantage of this architecture.

In summary, a unified ingress proxy architecture is moderately simple to deploy compared to service mesh (which we will cover next), and it offers robust capabilities for N-S traffic but very limited functionality for E-W traffic due to the limitations of kube-proxy. A network-savvy platform team is key to implementing this architecture.

Learn more about unified ingress.

Service mesh

A service mesh is a dedicated infrastructure layer that controls how the different parts of an application communicate with one another. The service mesh landscape has exploded because service meshes offer the best observability, security, and fine-grained management for traffic among microservices—that is, for E-W traffic. As an additional layer of infrastructure, service meshes do bear additional complexity as a tradeoff for the value they provide.

A typical service mesh architecture is similar to the two-tier ingress proxy architecture for N-S traffic and offers the same rich benefits there. The key difference between service mesh and two-tier ingress, and where most of the value lies, is that a service mesh employs a lightweight proxy as a sidecar to each microservice pod for E-W traffic. Microservices do not communicate directly: communication among microservices happens via the sidecar, which enables inter-pod traffic to be inspected and managed as it enters and leaves the pods.

By using proxy sidecars, service mesh offers the highest levels of observability, security, and fine-grained traffic management and control among microservices. Additionally, select repetitive microservice functions like retries and encryption can be offloaded to the sidecars. Despite each sidecar's being assigned its own memory and CPU resources, sidecars are typically lightweight. You have the option to use Citrix CPX as a sidecar. Sidecars, which are managed by the platform team and attached to each pod, create a highly scalable, distributed architecture, but they also add complexity because they result in more moving parts.

Pros of service mesh

The advantages of service mesh for N-S traffic are similar to those for two-tier ingress. Service mesh, however, brings added advantages for E-W traffic.

The presence of sidecars enables you to set security policies and control communication among your microservices. You can mandate things like authentication, encryption, and rate limiting for APIs among microservices if required.

Figure 4: Service mesh architecture

Because E-W traffic is seen by the sidecars, there is much more telemetry to provide holistic observability for better insights and improved troubleshooting. Furthermore, Citrix CPX as a sidecar has well-defined APIs that integrate with myriad open source tools, so you can use the observability tools you are used to. Sidecar APIs also allow integration with CI/CD tools like Spinnaker.

Similarly, sidecars will integrate with a service mesh control plane like Istio for E-W traffic. Repetitive functions like retries and encryption can be offloaded to the sidecars, and the distributed nature of the sidecar means the solution scales for features such as observability and security.

Cons of service mesh

The biggest drawback of a service mesh architecture is the complexity of implementation (managing hundreds or thousands of sidecars is not trivial). The learning curve can be steep for the platform team because there are so many moving parts. A sidecar for every pod adds to CPU and memory needs. Sidecars also add latency, which may affect application performance; the added latency varies with the proxy implementation and can be easily measured with the open source tool Meshery. Citrix CPX as a sidecar offers latency as low as 1 ms, whereas other solutions can add much more.
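Both the power and the overhead of a mesh come from the same mechanic: every request makes an extra hop through a proxy running next to the application container. The toy sidecar below, a minimal sketch with hypothetical ports (a real sidecar such as Citrix CPX or Envoy adds policy, mutual TLS, retries, and much more), shows that mechanic and the kind of per-request telemetry it yields without touching application code.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"time"
)

func main() {
	// The application container listens on localhost:8080;
	// the sidecar owns the pod's externally visible port 15001.
	app, err := url.Parse("http://127.0.0.1:8080")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(app)

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		proxy.ServeHTTP(w, r) // forward to the app container
		// Telemetry the application never had to implement.
		log.Printf("method=%s path=%s duration=%s", r.Method, r.URL.Path, time.Since(start))
	})

	log.Fatal(http.ListenAndServe(":15001", handler))
}
```

Multiply this by every pod in the cluster and the tradeoff above becomes clear: uniform observability and policy enforcement, at the cost of many more moving parts.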

Overall, a service mesh architecture provides excellent security, observability, and fine-grained traffic management for all traffic flows. The major downside is that it is complex to implement and manage.

Learn more about service mesh.

Service mesh lite

What if you want service mesh–like benefits with much less complexity? The answer is service mesh lite, a variant of service mesh.

With a service mesh lite architecture, the ADC shown in green in Figure 5 is responsible for Layer 4-7 load balancing of N-S traffic, handling inbound requests and load balancing them to the right Kubernetes cluster. The green ADC may carry out SSL termination, web application firewalling, authentication, or other network services, and it is managed by the networking team.

Depending on isolation and scale requirements, a service mesh lite proxy architecture uses one or several ADCs (shown in orange in Figure 5) that proxy communications among microservices pods to manage inter-pod (E-W) traffic, rather than attaching an individual sidecar to each pod. These proxies can be deployed per node or per namespace and are managed by platform teams.

Pros of service mesh lite

Service mesh lite provides many of the same benefits as service mesh but reduces the overall complexity by having only a small set of proxy instances per cluster manage the inter-pod traffic. Passing all E-W traffic through a small set of proxies provides the same advanced policy control, security, and fine-grained traffic management as a service mesh proxy architecture, without all the complexity.

Another advantage of service mesh lite is reduced latency compared to service mesh, because end-user requests go through fewer proxies. The main advantage is reduced complexity and the lower skill set required to implement it compared to service mesh. As with two-tier ingress, the networking team can manage the green ADC, and the platform team can manage the orange ADC. With service mesh lite, both teams can work in familiar environments and develop at their own speed.

Figure 5: Service mesh lite architecture

Overall, service mesh lite provides most of the service mesh features but with reduced complexity and a lower IT skill set requirement. Many organizations that started with the two-tier ingress architecture find it an easy transition to service mesh lite for the added benefits it brings to their E-W traffic—including better observability, enhanced security, better integration with open source tools, and support for continuous deployment.

Cons of service mesh lite

Service mesh lite removes the implementation and management burden associated with service mesh, but the absence of a proxy per pod means that you sacrifice some functionality offload. For example, encryption for E-W traffic must be implemented in each microservice itself, if required.

Learn more about service mesh lite.

Which architecture?

After reviewing the four architecture choices, you might be wondering, "What's the right architecture choice for my organization?" There are no right or wrong answers. Like other architectural choices, proxy deployment models should be selected based on, in part, your application needs, your team structure, and your team's skill set.

Your model of proxy deployment is an important consideration, but just one of many when planning your application delivery infrastructure. Ensuring that the application delivery components in your deployment are well-integrated into the cloud-native ecosystem is your next consideration.

2. Openly integrating with the cloud-native ecosystem

It's imperative that your various application delivery tools and processes, including your proxy, be well-integrated into commonplace cloud-native infrastructure. It's no secret that much of today's innovation happens in open source software, and clouds, both public and private, are built upon open source software. So in most cases, your infrastructure will comprise popular open source infrastructure and tools that you have picked up on your journey to cloud native. To the extent this is the case, you'll find common integrations by category in Figure 6.

Figure 6: Key categories of consideration for proxy integration with Kubernetes platforms and open source tools. Platform: container orchestration and packaging. CI/CD: continuous integration/continuous delivery. Observability: metrics, tracing, logs, and monitoring and alerting. Management: control plane and management plane.

Cloud-native environments make liberal use of open source software projects. Irrespective of which projects you use, suffice it to say that cloud-native application delivery can't be done with just containers. The combination of containers, container orchestration, and a service mesh will get you very far. And alongside a CI/CD system, these components are the most significant and ubiquitously used components of cloud-native infrastructure.

Integration with each of these categories of cloud-native infrastructure is critical so that developers and operators can design and run systems that communicate and interoperate as a whole. The fact that these bedrocks of cloud-native infrastructure are open source unlocks their ability to be integrated.

Tip: OpenMetrics
The cloud-native ecosystem needs a common format for the exchange of metrics. Observability pains grow with the release of each newly instrumented service that presents its own metric format. OpenMetrics is an effort to create an open standard for transmitting metrics at scale, with support for both text representation and protocol buffers. OpenMetrics builds on Prometheus's exposition format, popular telemetry formats, and protocols used in infrastructure and application monitoring.

At the heart of the cloud-native ecosystem is the extensible and scalable orchestration infrastructure, Kubernetes. The cloud-native ecosystem (both open source and closed source) extends Kubernetes by writing custom resource definitions (CRDs) and associated controllers. CRDs and controllers give operators a Kubernetes-native way to manage all parts of their platforms, both open source and closed source. This integration affords tool unification and powerful, composable, intent-based primitives that truly enable a software-defined platform.

Critical to the speed of delivery is an early investment in continuous integration/continuous delivery (CI/CD). It's likely that you have already wrangled continuous integration. Continuous deployment pipelines are your next step in seeing that changes to your source code automatically result in a new container being built and a new version of your microservice being tested and deployed to staging, and eventually to production. For many, the notion that CI/CD is an area of early investment is counterintuitive, and they find it hard to swallow the upfront engineering effort required to get a solid pipeline in place. The sooner CI/CD basics are implemented, however, the sooner the dividends start paying out. We will cover advanced continuous delivery considerations later in this white paper.

Citrix integrates with the leading Kubernetes platforms and open source tools:

• Kubernetes platforms: Google GKE, Amazon EKS, Azure Kubernetes Service, Red Hat OpenShift
• Observability tools: Prometheus, Grafana, Elasticsearch, Kibana, Zipkin
• CI/CD tools: Spinnaker, Helm
• Network and control plane: Istio, gRPC, CNI

With cloud-native infrastructure being inherently dynamic (in contrast to infrastructure not driven by APIs), the ability to observe cloud-native infrastructure and its workloads is also necessary. Software is written with functionality and debugging in mind. Most often, developers use logging as the primary method for debugging their applications; integration with Elasticsearch and Kibana is key here.

Performance counters are another way to track application behavior and performance. Akin to SNMP for physical and virtual network monitoring, the equivalent cloud-native "standard" is the use of Prometheus and Grafana, so it's important that your application delivery solution integrate with these tools. Currently there is no recognized standard for cloud-native application performance monitoring metrics.
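From a service's point of view, integrating with this stack mostly means exposing counters and histograms in the Prometheus exposition format. The sketch below assumes the prometheus/client_golang library; the metric names, port, and hard-coded status label are illustrative only.

```go
package main

import (
	"net/http"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// Request count and latency, labeled so Grafana can slice by path.
var (
	requests = promauto.NewCounterVec(prometheus.CounterOpts{
		Name: "http_requests_total",
		Help: "Total HTTP requests processed.",
	}, []string{"path", "code"})

	latency = promauto.NewHistogramVec(prometheus.HistogramOpts{
		Name: "http_request_duration_seconds",
		Help: "Request latency distribution.",
	}, []string{"path"})
)

// instrument wraps a handler with request counting and timing.
func instrument(path string, h http.HandlerFunc) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		h(w, r)
		requests.WithLabelValues(path, "200").Inc() // status simplified for brevity
		latency.WithLabelValues(path).Observe(time.Since(start).Seconds())
	}
}

func main() {
	http.HandleFunc("/healthz", instrument("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	}))
	// Prometheus scrapes this endpoint; its text output is the
	// exposition format that OpenMetrics builds on.
	http.Handle("/metrics", promhttp.Handler())
	http.ListenAndServe(":9090", nil)
}
```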
Irrespective of the metrics format, a few metrics have been identified as key indicators of the health of a cloud-native application (that is, the health of a service): latency, traffic, errors, and saturation. Your application delivery solution should assist in producing these signals. It should also provide support for the tracing of your distributed, cloud-native workloads.
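In practice, tracing support means each service starts spans for its units of work and propagates trace context to downstream calls so a collector can stitch the spans into end-to-end traces. A minimal sketch follows, assuming the OpenTelemetry Go SDK and its Zipkin exporter; the collector URL and span names are illustrative, and exporter package paths vary across SDK versions.

```go
package main

import (
	"context"
	"log"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/zipkin"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	// Ship spans to a Zipkin collector (address is illustrative).
	exporter, err := zipkin.New("http://zipkin:9411/api/v2/spans")
	if err != nil {
		log.Fatal(err)
	}
	tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exporter))
	defer tp.Shutdown(context.Background())
	otel.SetTracerProvider(tp)

	// Each unit of work becomes a span; passing ctx into downstream
	// calls ties spans from different microservices into one trace.
	tracer := otel.Tracer("orders")
	ctx, span := tracer.Start(context.Background(), "place-order")
	defer span.End()
	_ = ctx
}
```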

The aforementioned integrations with open source tools enable loosely coupled systems that are resilient, manageable, and observable. Citrix ADC also embodies these characteristics. All of the infrastructure integrations detailed here depend upon APIs for interchange and interoperability. Cloud-native applications, too, are centered around declarative APIs to interface with the infrastructure and serve user-facing workloads.

The endpoints that your APIs expose are now being managed by open source service meshes, which deliver the next generation of networking designed for cloud-native applications. At the core of a service mesh is its data plane: its collection of proxies. Proxy selection criteria and deployment model tradeoffs are our next area of consideration.

3. Choosing the perfect proxy

Historically, application delivery controllers (ADCs) were purchased, deployed, and managed by IT professionals, most commonly to run enterprise-architected applications. With their distributed systems design and ephemeral infrastructure, cloud-native applications require ADCs to be as dynamic as the infrastructure (containers, for example) upon which they run. These ADCs are often software based and are used as proxy servers. Because cloud-native applications are typically developer-led initiatives in which developers create both the application—that is, the microservices—and the infrastructure, developers and platform teams are increasingly making, or heavily influencing, decisions about ADC (and other) infrastructure.

Selecting your proxy is one of the most important decisions your team will make. A developer's selection process gives heavier weight to a proxy's APIs (due to their ability to programmatically configure the proxy) and to a proxy's cloud-native integrations (as previously noted). A top item on the list of demands for proxies is protocol support. Generally, protocol considerations can be broken into two types:

• TCP, UDP, HTTP: A network team–centric consideration in which efficiency, performance, offload, and load balancing algorithm support are evaluated. Support for HTTP/2 often takes top billing.

• gRPC, NATS, Kafka: A developer-centric consideration in which the top item on the list is support for application-level protocols, specifically those commonly used in modern distributed application designs.

The reality is that selecting the perfect proxy involves more than protocol support. Your proxy should meet all key criteria:

• High performance and low latency
• High scalability and small memory footprint
• Deep observability at all layers of the network stack
• Programmatic configuration and ecosystem integration

With a Kubernetes-native control plane, using CRDs and associated controllers enables po
