DIGITAL NOTES ON CLOUD COMPUTING - MRCET


DIGITAL NOTES ON CLOUD COMPUTING
B.TECH III YR / II SEM (2017-18)
DEPARTMENT OF INFORMATION TECHNOLOGY

MALLA REDDY COLLEGE OF ENGINEERING & TECHNOLOGY
(Autonomous Institution – UGC, Govt. of India)
Recognized under 2(f) and 12(B) of UGC ACT 1956
(Affiliated to JNTUH, Hyderabad, Approved by AICTE - Accredited by NBA & NAAC – 'A' Grade - ISO 9001:2015 Certified)
Maisammaguda, Dhulapally (Post Via. Hakimpet), Secunderabad – 500100, Telangana State, India

MALLA REDDY COLLEGE OF ENGINEERING & TECHNOLOGY
Department of Information Technology
III Year B.Tech IT – II Sem    L T/P/D C: 4 -/- 4
(R15A0529) CLOUD COMPUTING

Objectives:
- To understand the various distributed system models and evolving computing paradigms
- To gain knowledge in virtualization of computer resources
- To realize the reasons for migrating into cloud
- To introduce the various levels of services that can be achieved by a cloud
- To describe the security aspects in cloud and the services offered by a cloud

UNIT I
Systems Modeling: Distributed System Models and Enabling Technologies - Scalable Computing over the Internet - System Models for Distributed and Cloud Computing - Software Environments for Distributed Systems and Clouds - Performance, Security, and Energy Efficiency.
Computer Clusters for Scalable Parallel Computing: Clustering - Clustering for Massive Parallelism - Computer Clusters and MPP Architectures - Design Principles of Computer Clusters - Cluster Job and Resource Management.

UNIT II
Virtualization: Virtual Machines and Virtualization of Clusters and Data Centers - Implementation Levels of Virtualization - Virtualization Structures/Tools and Mechanisms - Virtualization of CPU, Memory, and I/O Devices - Virtual Clusters and Resource Management - Virtualization for Data-Center Automation.

UNIT III
Foundations: Introduction to Cloud Computing - Migrating into a Cloud - The Enterprise Cloud Computing Paradigm.

UNIT IV
Infrastructure as a Service (IaaS) & Platform as a Service (PaaS): Virtual Machines Provisioning and Migration Services - On the Management of Virtual Machines for Cloud Infrastructures - Aneka - Integration of Private and Public Clouds.

UNIT V
Software as a Service (SaaS) & Data Security in the Cloud: Google App Engine - Centralizing Email Communications - Collaborating via Web-Based Communication Tools - An Introduction to the Idea of Data Security - The Current State of Data Security in the Cloud - Cloud Computing and Data Security Risk - Cloud Computing and Identity.

TEXT BOOKS:
1. Distributed and Cloud Computing, Kai Hwang, Geoffrey C. Fox and Jack J. Dongarra, Elsevier India, 2012.

2. Mastering Cloud Computing, Rajkumar Buyya, Christian Vecchiola and S. Thamarai Selvi, TMH, 2012.
3. Cloud Computing: Web-Based Applications That Change the Way You Work and Collaborate Online, Michael Miller, Que Publishing, August 2008.

REFERENCE BOOKS:
1. Cloud Computing: A Practical Approach, Anthony T. Velte, Toby J. Velte, Robert Elsenpeter, Tata McGraw Hill, rp2011.
2. Enterprise Cloud Computing, Gautam Shroff, Cambridge University Press, 2010.
3. Cloud Computing: Implementation, Management and Security, John W. Rittinghouse, James F. Ransome, CRC Press, rp2012.
4. Cloud Application Architectures: Building Applications and Infrastructure in the Cloud, George Reese, O'Reilly, SPD, rp2011.
5. Cloud Security and Privacy: An Enterprise Perspective on Risks and Compliance, Tim Mather, Subra Kumaraswamy, Shahed Latif, O'Reilly, SPD, rp2011.

Outcomes:
- To distinguish the different models and computing paradigms
- To explain the levels of virtualization and resource virtualization
- To analyze the reasons for migrating into cloud
- To effectively use the cloud services in terms of infrastructure and operating platforms
- To apply the services in the cloud for real-world scenarios

INDEX

- Computing Paradigm & Degrees of Parallelism
- The Internet of Things (IoT) & Cyber-Physical Systems
- System Models for Distributed and Cloud Computing
- Service-Oriented Architecture (SOA)
- Performance Metrics & Energy Efficiency in Distributed Computing
- Clustering for Massive Parallelism
- Basic Cluster Architecture
- Levels of Virtualization Implementation
- VMM Design Requirements and Providers
- Xen Architecture
- Full Virtualization - CPU, Memory, I/O Virtualization
- Cloud OS for Virtualized Data Centers
- Introduction to Cloud Computing
- Introduction - Migration into Cloud
- Challenges in the Cloud
- Introduction to IaaS
- OVF (Open Virtualization Format)
- Live Migration Effect
- Aneka
- SaaS
- Integration Products and Platforms
- Google App Engine
- Centralizing Email Communications
- Collaborating via Web-Based Communication Tools
- An Introduction to the Idea of Data Security
- The Current State of Data Security in the Cloud
- Cloud Computing and Identity
- Cloud Computing and Data Security Risk

UNIT I

Scalable Computing over the Internet

High-Throughput Computing (HTC)
The HTC paradigm pays more attention to high-flux computing. The main application of high-flux computing is in Internet searches and web services used by millions or more users simultaneously. Performance is measured by high throughput, that is, the number of tasks completed per unit of time. HTC technology needs to improve batch processing speed, and also to address the acute problems of cost, energy savings, security, and reliability at many data and enterprise computing centers.

Computing Paradigm Distinctions

Centralized computing
o This is a computing paradigm by which all computer resources are centralized in one physical system.
o All resources (processors, memory, and storage) are fully shared and tightly coupled within one integrated OS.
o Many data centers and supercomputers are centralized systems, but they are used in parallel, distributed, and cloud computing applications.

Parallel computing
o In parallel computing, all processors are either tightly coupled with centralized shared memory or loosely coupled with distributed memory.
o Interprocessor communication is accomplished through shared memory or via message passing.
o A computer system capable of parallel computing is commonly known as a parallel computer.
o Programs running in a parallel computer are called parallel programs. The process of writing parallel programs is often referred to as parallel programming.

Distributed computing
o A distributed system consists of multiple autonomous computers, each having its own private memory, communicating through a computer network.
o Information exchange in a distributed system is accomplished through message passing.
o A computer program that runs in a distributed system is known as a distributed program. The process of writing distributed programs is referred to as distributed programming.
o A distributed computing system uses multiple computers to solve large-scale problems over the Internet, rather than relying on a single centralized computer to solve computational problems.

Cloud computing
o An Internet cloud of resources can be either a centralized or a distributed computing system. The cloud applies parallel or distributed computing, or both.
o Clouds can be built with physical or virtualized resources over large data centers that are centralized or distributed.
o Cloud computing can also be a form of utility computing or service computing.

Degrees of Parallelism

Bit-level parallelism (BLP)
o Converts bit-serial processing to word-level processing gradually.

Instruction-level parallelism (ILP)
o The processor executes multiple instructions simultaneously rather than only one instruction at a time.
o ILP is exploited through pipelining, superscalar computing, VLIW (very long instruction word) architectures, and multithreading.
o ILP requires branch prediction, dynamic scheduling, speculation, and compiler support to work efficiently.

Data-level parallelism (DLP)
o DLP is achieved through SIMD (single instruction, multiple data) and vector machines using vector or array types of instructions (a small illustrative sketch appears after this list).
o DLP requires even more hardware support and compiler assistance to work properly.

Task-level parallelism (TLP)
o Ever since the introduction of multicore processors and chip multiprocessors (CMPs), we have been exploring TLP.
o TLP is far from being very successful due to the difficulty of programming and compiling code for efficient execution on multicore CMPs.

Utility computing
o Utility computing focuses on a business model in which customers receive computing resources from a paid service provider. All grid/cloud platforms are regarded as utility service providers.
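As an illustration of the DLP style mentioned above (this example is not from the notes; it assumes the NumPy library is installed), the same element-wise operation can be written as a scalar loop or as a single vectorized statement that the hardware/library applies across many data elements at once:

    # Illustrative sketch of data-level parallelism (DLP), assuming NumPy.
    # One vectorized statement applies the same operation to many data
    # elements (the SIMD style described above); the loop applies it one
    # element at a time.
    import numpy as np

    a = np.arange(1_000_000, dtype=np.float64)
    b = np.arange(1_000_000, dtype=np.float64)

    # Scalar (one element at a time) style:
    c_loop = np.empty_like(a)
    for i in range(len(a)):
        c_loop[i] = a[i] * 2.0 + b[i]

    # Data-parallel (SIMD/vector) style: one instruction stream, many data.
    c_vec = a * 2.0 + b

    assert np.allclose(c_loop, c_vec)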

The Internet of Things (IoT)
o The traditional Internet connects machines to machines or web pages to web pages.
o IoT was introduced in 1999 at MIT.
o IoT is the networked interconnection of everyday objects, tools, devices, or computers: a wireless network of sensors that interconnects all things in our daily life.
o Three communication patterns co-exist: H2H (human-to-human), H2T (human-to-thing), and T2T (thing-to-thing).
o The goal is to connect things (including human and machine objects) at any time and any place intelligently with low cost.
o With the IPv6 protocol, 2^128 IP addresses are available to distinguish all the objects on Earth, including all computers and pervasive devices.
o IoT needs to be designed to track 100 trillion static or moving objects simultaneously.
o IoT demands universal addressability of all of the objects or things.
o The dynamic connections will grow exponentially into a new dynamic network of networks, called the Internet of Things (IoT).

Cyber-Physical Systems
o A cyber-physical system (CPS) is the result of interaction between computational processes and the physical world.
o CPS integrates "cyber" (heterogeneous, asynchronous) with "physical" (concurrent and information-dense) objects.
o CPS merges the "3C" technologies of computation, communication, and control into an intelligent closed feedback system.
o IoT emphasizes various networking connections among physical objects, while CPS emphasizes exploration of virtual reality (VR) applications in the physical world.

SYSTEM MODELS FOR DISTRIBUTED AND CLOUD COMPUTING
o Distributed and cloud computing systems are built over a large number of autonomous computer nodes. These node machines are interconnected by SANs, LANs, or WANs.
o A massive system consists of millions of computers connected to edge networks.
o Massive systems are considered highly scalable.
o Massive systems are classified into four groups: clusters, P2P networks, computing grids, and Internet clouds.

Computing cluster
o A computing cluster consists of interconnected stand-alone computers which work cooperatively as a single integrated computing resource.

Cluster Architecture
o The architecture consists of a typical server cluster built around a low-latency, high-bandwidth interconnection network.
o To build a larger cluster with more nodes, the interconnection network can be built with multiple levels of Gigabit Ethernet, Myrinet, or InfiniBand switches.
o Through hierarchical construction using a SAN, LAN, or WAN, one can build scalable clusters with an increasing number of nodes.
o The cluster is connected to the Internet via a virtual private network (VPN) gateway.
o The gateway IP address locates the cluster.

o Clusters have loosely coupled node computers.
o All resources of a server node are managed by its own OS.
o Most clusters have multiple system images as a result of having many autonomous nodes under different OS control.

Single-System Image (SSI) - Cluster
o An ideal cluster should merge multiple system images into a single-system image (SSI).
o A cluster operating system or some middleware has to support SSI at various levels, including the sharing of CPUs, memory, and I/O across all cluster nodes.
o SSI is an illusion created by software or hardware that presents a collection of resources as one integrated, powerful resource.
o SSI makes the cluster appear like a single machine to the user.
o A cluster with multiple system images is nothing but a collection of independent computers.

Hardware, Software, and Middleware Support - Cluster
o Clusters exploring massive parallelism are commonly known as MPPs (Massively Parallel Processors).
o The building blocks are computer nodes (PCs, workstations, servers, or SMPs), special communication software such as PVM or MPI, and a network interface card in each computer node.
o Most clusters run under the Linux OS.
o Nodes are interconnected by a high-bandwidth network.
o Special cluster middleware support is needed to create SSI or high availability (HA).
o All distributed memory can be shared by all servers by forming distributed shared memory (DSM).
o SSI features are expensive; to avoid the cost of achieving SSI, many clusters are loosely coupled machines.
o Virtual clusters are created dynamically, upon user demand.

Grid Computing
- A web service such as HTTP enables remote access of remote web pages.
- A computing grid offers an infrastructure that couples computers, software/middleware, special instruments, people, and sensors together.
- Enterprises or organizations present grids as integrated computing resources. They can also be viewed as virtual platforms to support virtual organizations.
- The computers used in a grid are primarily workstations, servers, clusters, and supercomputers.

Peer-to-Peer Network (P2P)
- P2P architecture offers a distributed model of networked systems.
- A P2P network is client-oriented instead of server-oriented.
- In a P2P system, every node acts as both a client and a server.

- Peer machines are simply client computers connected to the Internet.
- All client machines act autonomously to join or leave the system freely. This implies that no master-slave relationship exists among the peers.
- No central coordination or central database is needed. The system is self-organizing with distributed control.
- P2P has two layers of abstraction, as given in the figure. Each peer machine joins or leaves the P2P network voluntarily.
- Only the participating peers form the physical network at any time. The physical network is simply an ad hoc network formed at various Internet domains randomly using the TCP/IP and NAI protocols.

Peer-to-Peer Network - Overlay Network
- Data items or files are distributed among the participating peers.
- Based on communication or file-sharing needs, the peer IDs form an overlay network at the logical level.
- When a new peer joins the system, its peer ID is added as a node in the overlay network. When an existing peer leaves the system, its peer ID is removed from the overlay network automatically.
- An unstructured overlay network is characterized by a random graph. There is no fixed route to send messages or files among the nodes. Often, flooding is applied to send a query to all nodes in an unstructured overlay, thus resulting in heavy network traffic and nondeterministic search results.
- Structured overlay networks follow certain connectivity topologies and rules for inserting and removing nodes (peer IDs) from the overlay graph (a small sketch of a structured overlay's identifier ring appears after this list).
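The notes do not specify a concrete overlay protocol; as a minimal sketch of the structured case (the Ring class and peer names below are hypothetical), a consistent-hashing ring maps peer IDs and data keys onto the same identifier space, so each key is owned by the first peer clockwise from its hash, and a join or leave only reassigns neighboring keys:

    # Minimal sketch of a structured overlay (a consistent-hashing ring),
    # assuming SHA-1 identifiers. Real protocols such as Chord add finger
    # tables for O(log n) routing, which is omitted here.
    import bisect
    import hashlib

    def node_id(name: str) -> int:
        return int(hashlib.sha1(name.encode()).hexdigest(), 16)

    class Ring:
        def __init__(self):
            self.ids = []      # sorted peer IDs (positions in the overlay graph)
            self.peers = {}    # peer ID -> peer name

        def join(self, name):              # new peer ID added to the overlay
            i = node_id(name)
            bisect.insort(self.ids, i)
            self.peers[i] = name

        def leave(self, name):             # peer ID removed from the overlay
            i = node_id(name)
            self.ids.remove(i)
            del self.peers[i]

        def lookup(self, key):             # deterministic route: first peer
            i = node_id(key)               # clockwise from the key's hash
            pos = bisect.bisect(self.ids, i) % len(self.ids)
            return self.peers[self.ids[pos]]

    ring = Ring()
    for p in ("peer-a", "peer-b", "peer-c"):
        ring.join(p)
    print(ring.lookup("file.txt"))  # the same key always maps to the same peer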

Cloud Computing
- A cloud is a pool of virtualized computer resources.
- A cloud can host a variety of different workloads, including batch-style backend jobs and interactive, user-facing applications.
- Cloud computing applies a virtualized platform with elastic resources on demand by provisioning hardware, software, and data sets dynamically.

The Cloud Landscape

Infrastructure as a Service (IaaS)
- This model puts together infrastructures demanded by users, namely servers, storage, networks, and the data center fabric.
- The user can deploy and run multiple VMs running guest OSes for specific applications.
- The user does not manage or control the underlying cloud infrastructure, but can specify when to request and release the needed resources (a code sketch of this request/release cycle appears at the end of this list).

Platform as a Service (PaaS)
- This model enables the user to deploy user-built applications onto a virtualized cloud platform.
- PaaS includes middleware, databases, development tools, and some runtime support such as Web 2.0 and Java.
- The platform includes both hardware and software integrated with specific programming interfaces.
- The provider supplies the API and software tools (e.g., Java, Python, Web 2.0, .NET). The user is freed from managing the cloud infrastructure.

Software as a Service (SaaS)
- This refers to browser-initiated application software serving thousands of paid cloud customers.
- The SaaS model applies to business processes, industry applications, consumer relationship management (CRM), enterprise resource planning (ERP), human resources (HR), and collaborative applications.
- On the customer side, there is no upfront investment in servers or software licensing. On the provider side, costs are rather low compared with conventional hosting of user applications.

Internet clouds offer four deployment modes: private, public, managed, and hybrid.
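The notes describe the IaaS request/release cycle in prose only; as a hedged sketch (assuming the Apache Libcloud library, with placeholder credentials, region, and image/size choices that are not from the notes), the cycle looks roughly like this:

    # Hedged IaaS sketch using Apache Libcloud (placeholder credentials).
    # The user requests a VM (create_node) and later releases it
    # (destroy_node) without managing the provider's physical infrastructure.
    from libcloud.compute.types import Provider
    from libcloud.compute.providers import get_driver

    cls = get_driver(Provider.EC2)               # any supported provider works
    driver = cls("ACCESS_KEY", "SECRET_KEY", region="us-east-1")

    size = driver.list_sizes()[0]                # some offered machine flavor
    image = driver.list_images()[0]              # some available guest OS image

    node = driver.create_node(name="demo-vm", image=image, size=size)
    print(node.state)                            # provider reports the VM state

    driver.destroy_node(node)                    # release the resource

The vendor-neutral library is a design choice here: the same request/release calls work against many IaaS providers, which matches the notes' point that the user controls only the resources, not the infrastructure beneath them.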

SOFTWARE ENVIRONMENTS FOR DISTRIBUTED SYSTEMS AND CLOUDS

Service-Oriented Architecture (SOA)
- In grids/web services, Java, and CORBA, an entity is, respectively, a service, a Java object, and a CORBA distributed object in a variety of languages.
- These architectures build on the traditional seven Open Systems Interconnection (OSI) layers that provide the base networking abstractions.
- On top of this we have a base software environment, which would be .NET or Apache Axis for web services, the Java Virtual Machine for Java, and a broker network for CORBA.
- On top of this base environment one would build a higher-level environment reflecting the special features of the distributed computing environment.
- SOA applies to building grids, clouds, grids of clouds, clouds of grids, and clouds of clouds (also known as interclouds).
- SS (sensor service): a large number of sensors provide data-collection services (a ZigBee device, a Bluetooth device, a WiFi access point, a personal computer, a GPS device, a wireless phone, etc.).
- Filter services: eliminate unwanted raw data, in order to respond to specific requests from the web, the grid, or web services.

Layered Architecture for Web Services and Grids
- Entity interfaces: Java method interfaces correspond to the Web Services Description Language (WSDL); CORBA has interface definition language (IDL) specifications.
- These interfaces are linked with customized, high-level communication systems: SOAP, RMI, and IIOP.
- These communication systems support features including particular message patterns (such as Remote Procedure Call, or RPC), fault recovery, and specialized routing.
- Communication systems are built on message-oriented middleware (enterprise bus) infrastructure such as WebSphere MQ or the Java Message Service (JMS).
- Cases of fault tolerance: the features in Web Services Reliable Messaging (WSRM).
- Security: reimplements the capabilities seen in concepts such as Internet Protocol Security (IPsec).
- Discovery and naming: several models exist, with, for example, JNDI (Jini and Java Naming and Directory Interface) illustrating different approaches within the Java distributed object model; the CORBA Trading Service, UDDI (Universal Description, Discovery, and Integration), LDAP (Lightweight Directory Access Protocol), and ebXML (Electronic Business using eXtensible Markup Language) are other examples.
- In earlier years, CORBA and Java approaches were used in distributed systems rather than today's SOAP, XML, or REST (Representational State Transfer).

Web Services and Tools
- REST approach: delegates most of the difficult problems to application (implementation-specific) software. In a web services language, there is minimal information in the header, and the message body (which is opaque to generic message processing) carries all the needed information. Such architectures are clearly more appropriate for rapid technology environments (a small REST-style sketch appears after this list).
- REST can use XML schemas, but not those that are part of SOAP; "XML over HTTP" is a popular design choice in this regard.
- Above the communication and management layers, we have the ability to compose new entities or distributed programs by integrating several entities together.
- CORBA and Java: the distributed entities are linked with RPCs, and the simplest way to build composite applications is to view the entities as objects and use the traditional ways of linking them together. For Java, this could be as simple as writing a Java program with method calls replaced by Remote Method Invocation (RMI); CORBA supports a similar model with a syntax reflecting the C style of its entity (object) interfaces.
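As a minimal sketch of the "XML over HTTP" REST style described above (the endpoint URL and payload are hypothetical; only the Python standard library is assumed), note how the header stays generic while the body carries all the application-specific information:

    # Minimal REST-style sketch ("XML over HTTP"); the endpoint is hypothetical.
    # The HTTP header carries only generic metadata; the body is opaque to
    # generic message processing and holds the application payload.
    import urllib.request

    body = b"<order><item>disk</item><qty>2</qty></order>"
    req = urllib.request.Request(
        "http://example.com/api/orders",       # hypothetical resource URL
        data=body,
        headers={"Content-Type": "application/xml"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:  # needs a real server to succeed
        print(resp.status, resp.read())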

Parallel and Distributed Programming Models

PERFORMANCE, SECURITY, AND ENERGY EFFICIENCY

Performance Metrics
- In a distributed system, performance is attributed to a large number of factors.
- System throughput is often measured in MIPS, Tflops (tera floating-point operations per second), or TPS (transactions per second).
- System overhead is often attributed to OS boot time, compile time, I/O data rate, and the runtime support system used.
- Other performance-related metrics include the QoS for Internet and web services; system availability and dependability; and security resilience for system defense against network attacks.

Dimensions of Scalability
Any resource upgrade in a system should be backward compatible with existing hardware and software resources. System scaling can increase or decrease resources depending on many practical factors.

Size scalability
- This refers to achieving higher performance or more functionality by increasing the machine size. The word "size" refers to adding processors, cache, memory, storage, or I/O channels. The most obvious way to determine size scalability is to simply count the number of processors installed.

- Not all parallel computers or distributed architectures are equally size-scalable. For example, the IBM S2 was scaled up to 512 processors in 1997, but in 2008 the IBM BlueGene/L system scaled up to 65,000 processors.

Software scalability
- This refers to upgrades in the OS or compilers, adding mathematical and engineering libraries, porting new application software, and installing more user-friendly programming environments. Some software upgrades may not work with large system configurations. Testing and fine-tuning of new software on larger systems is a nontrivial job.

Application scalability
- This refers to matching problem size scalability with machine size scalability. Problem size affects the size of the data set or the workload increase. Instead of increasing machine size, users can enlarge the problem size to enhance system efficiency or cost-effectiveness.

Technology scalability
- This refers to a system that can adapt to changes in building technologies, such as component and networking technologies. When scaling a system design with new technology one must consider three aspects: time, space, and heterogeneity.
(1) Time refers to generation scalability. When changing to new-generation processors, one must consider the impact on the motherboard, power supply, packaging and cooling, and so forth. Based on past experience, most systems upgrade their commodity processors every three to five years.
(2) Space is related to packaging and energy concerns. Technology scalability demands harmony and portability among suppliers.
(3) Heterogeneity refers to the use of hardware components or software packages from different vendors. Heterogeneity may limit scalability.

Amdahl's Law
Suppose a program has been parallelized or partitioned for parallel execution on a cluster of many processing nodes. Assume that a fraction α of the code must be executed sequentially, called the sequential bottleneck. Therefore, (1 − α) of the code can be compiled for parallel execution by n processors. The total execution time of the program is αT + (1 − α)T/n, where the first term is the sequential execution time on a single processor and the second term is the parallel execution time on n processing nodes. I/O time and exception-handling time are not included in the following speedup analysis. Amdahl's Law states that the speedup factor of using the n-processor system over the use of a single processor is expressed by:

    S = T / [αT + (1 − α)T/n] = 1 / [α + (1 − α)/n]

Ideally, the speedup S approaches n when α = 0, that is, when the code is fully parallelizable. As the cluster becomes sufficiently large, that is, n → ∞, S approaches 1/α, an upper bound on the speedup S. This upper bound is independent of the cluster size n. The sequential bottleneck is the portion of the code that cannot be parallelized.
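A quick numeric check of the formula (plain Python; the 5% bottleneck value is illustrative, not from the notes) shows the saturation effect: with α = 0.05 the speedup approaches 1/α = 20 no matter how many processors are added.

    # Amdahl's Law: S(n) = 1 / (alpha + (1 - alpha)/n)
    def amdahl_speedup(alpha: float, n: int) -> float:
        return 1.0 / (alpha + (1.0 - alpha) / n)

    alpha = 0.05                       # 5% of the code is sequential
    for n in (4, 16, 64, 256, 1024):
        print(n, round(amdahl_speedup(alpha, n), 2))
    # The output approaches 1/alpha = 20 as n grows, regardless of cluster size.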

Gustafson's Law
To achieve higher efficiency when using a large cluster, we must consider scaling the problem size to match the cluster capability. This leads to the following speedup law proposed by John Gustafson (1988), referred to as scaled-workload speedup. Let W be the workload in a given program. When using an n-processor system, the user scales the workload to W′ = αW + (1 − α)nW. The scaled workload W′ is essentially the sequential execution time on a single processor. The parallel execution of the scaled workload W′ on n processors is characterized by the scaled-workload speedup, defined as follows:

    S′ = W′/W = [αW + (1 − α)nW]/W = α + (1 − α)n
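The contrast with Amdahl's fixed-workload bound can be checked numerically (plain Python; the α value is again illustrative): under Gustafson's scaled workload, speedup keeps growing almost linearly with n instead of saturating.

    # Gustafson's Law: S'(n) = alpha + (1 - alpha) * n  (scaled workload)
    def gustafson_speedup(alpha: float, n: int) -> float:
        return alpha + (1.0 - alpha) * n

    alpha = 0.05                        # the same 5% sequential fraction
    for n in (4, 16, 64, 256, 1024):
        print(n, gustafson_speedup(alpha, n))
    # Speedup grows nearly linearly in n, unlike Amdahl's 1/alpha ceiling.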

Network Threats and Data Integrity

ENERGY EFFICIENCY IN DISTRIBUTED COMPUTING
The primary performance goals in conventional parallel and distributed computing systems are high performance and high throughput, considering some form of performance reliability (e.g., fault tolerance and security). However, these systems have recently encountered new challenging issues including energy efficiency and workload and resource outsourcing.

Energy Consumption of Unused Servers: To run a server farm (data center) a company has to spend a huge amount of money for hardware, software, operational support, and energy every year. Therefore, companies should thoroughly identify whether their installed server farm (more specifically, the volume of provisioned resources) is at an appropriate level, particularly in terms of utilization.

Reducing Energy in Active Servers: In addition to identifying unused/underutilized servers for energy savings, it is also necessary to apply appropriate techniques to decrease energy consumption in active distributed systems with negligible influence on their performance.

Application Layer: Until now, most user applications in science, business, engineering, and financial areas have tended to increase a system's speed or quality. By introducing energy-aware applications, the challenge is to design sophisticated multilevel and multi-domain energy management applications without hurting performance.

Middleware Layer: The middleware layer acts as a bridge between the application layer and the resource layer. This layer provides resource broker, communication service, task analyzer, task scheduler, security access, reliability control, and information service capabilities. It is also responsible for applying energy-efficient techniques, particularly in task scheduling.

Resource Layer: The resource layer consists of a wide range of resources including computing nodes and storage units. This layer generally interacts with hardware devices and the operating system; therefore, it is responsible for controlling all distributed resources in distributed computing systems. Dynamic power management (DPM) and dynamic voltage-frequency scaling (DVFS) are two popular methods incorporated into recent computer hardware systems. In DPM, hardware devices such as the CPU have the capability to switch from idle mode to one or more lower-power modes. In DVFS, energy savings are achieved based on the fact that the power consumption in CMOS circuits has a direct relationship with frequency and the square of the voltage supply (a small DVFS arithmetic sketch appears after the Network Layer description below).

Network Layer: Routing and transferring packets and enabling network services to the resource layer are the main responsibilities of the network layer in distributed computing systems. The major challenge in building energy-efficient networks is, again, determining how to measure, predict, and create a balance between energy consumption and performance.
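The notes state the DVFS relationship only qualitatively; as an illustrative sketch (the capacitance, voltage, and frequency numbers below are made up, not measurements from the notes), dynamic CMOS power scales as P ≈ C·V²·f, so lowering voltage and frequency together yields a superlinear power reduction:

    # Illustrative DVFS arithmetic: dynamic power P = C * V^2 * f
    # (the numbers below are made-up values, not data from the notes).
    def dynamic_power(c: float, v: float, f: float) -> float:
        return c * v * v * f

    C = 1e-9                                     # effective capacitance (farads)
    P_high = dynamic_power(C, v=1.2, f=3.0e9)    # full voltage and frequency
    P_low = dynamic_power(C, v=0.9, f=2.0e9)     # both scaled down together

    print(f"power at full speed: {P_high:.2f} W")
    print(f"power when scaled  : {P_low:.2f} W")
    print(f"savings: {1 - P_low / P_high:.0%}")  # about 62% less dynamic power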

Clustering for Massive Parallelism
- A computer cluster is a collection of interconnected stand-alone computers which can work together collectively and cooperatively as a single integrated computing resource pool.
- Clustering explores massive parallelism at the job level and achieves high availability (HA) through stand-alone operations.
- Benefits of computer clusters and massively parallel processors (MPPs) include scalable performance, HA, fault tolerance, modular growth, and use of commodity components. These features can sustain the generation changes experienced in hardware, software, and network components.

Design Objectives of Computer Clusters

Scalability
- Clustering of computers is based on the concept of modular growth. To scale a cluster from hundreds of uniprocessor nodes to a supercluster with 10,000 multicore nodes is a nontrivial task.
- The scalability could be limited by a number of factors, such as the multicore chip technology, cluster topology, packaging method, power consumption, and the cooling scheme applied.

Packaging
- Cluster nodes can be packaged in a compact or a slack fashion. In a compact cluster, the nodes are closely packaged in one or more racks sitting in a room, and the nodes are not attached to peripherals (monitors, keyboards, mice, etc.).
- In a slack cluster, the nodes are attached to their usual peripherals (i.e., they are complete SMPs, workstations, and PCs), and they may be located in different rooms, different buildings, or even remote regions.
- Packaging directly affects communication wire length, and thus the selection of the interconnection technology used.
- While a compact cluster can utilize a high-bandwidth, low-latency communication network that is often proprietary, nodes of a slack cluster are normally connected through standard LANs or WANs.

Control
- A cluster can be either controlled or managed in a centralized or decentralized fashion. A compact cluster normally has centralized control, while a slack cluster can be controlled either way.
- In a centralized cluster, all the nodes are owned, controlled, managed, and administered by a central operator.
- In a decentralized cluster, the nodes have individual owners. This lack of a single point of control makes system administration of such a cluster very difficult. It also calls for special techniques for process scheduling, workload migration, checkpointing, accounting, and other similar tasks.
