Tiera: Towards Flexible Multi-Tiered Cloud Storage Instances


Ajaykrishna Raghavan, Abhishek Chandra, and Jon B. Weissman
Department of Computer Science and Engineering
University of Minnesota
Minneapolis, MN

ABSTRACT

Cloud providers offer an array of storage services that represent different points along the performance, cost, and durability spectrum. If an application desires the composite benefits of multiple storage tiers, then it must manage the complexity of different interfaces to these storage services and their diverse policies. We believe that it is possible to provide the benefits of customized tiered cloud storage to applications without compromising simplicity using a lightweight middleware. In this paper, we introduce Tiera, a middleware that enables the provision of multi-tiered cloud storage instances that are easy to specify, flexible, and enable a rich array of storage policies and desired metrics to be realized. Tiera's novelty lies in the first-class support for encapsulated tiered cloud storage, ease of programmability of data management policies, and support for runtime replacement and addition of policies and tiers. Tiera enables an application to realize a desired metric (e.g., low latency or low cost) by selecting different storage services that constitute a Tiera instance, and easily specifying a policy, using event and response pairs, to manage the life cycle of data stored in the instance. We illustrate the benefits of Tiera through a prototype implemented on the Amazon cloud. By deploying the unmodified MySQL database engine and a TPC-W Web bookstore application on Tiera, we are able to improve their respective throughputs by 47%–125% and 46%–69% over standard deployments.
We further show the flexibility of Tiera in achieving different desired application metrics with minimal overhead.

Categories and Subject Descriptors

D.4.2 [Operating Systems]: Storage Management—Storage hierarchies; H.3.2 [Information Storage And Retrieval]: Systems and Software—Performance evaluation (efficiency and effectiveness)

General Terms

Management

1. INTRODUCTION

Many cloud providers today offer an array of storage services that represent different points along the performance, cost, and durability spectrum. As an example, Amazon provides ElastiCache (a caching service protocol-compliant with Memcached), Simple Storage Service (S3), Elastic Block Store (EBS), and Glacier as different cloud storage options¹. A single service generally optimizes one metric, trading off others. For example, Amazon ElastiCache offers low latency, but at high cost and low durability. Amazon S3 offers high durability and low cost but low performance. If the application is willing to use multiple cloud storage services, then it can realize composite benefits. For example, an application that requires low latency reads as well as durability might choose to use a combination of Amazon ElastiCache and Amazon EBS, with the most frequently accessed data being stored in ElastiCache and the rest in the EBS persistent store.

Figure 1: Tiera middleware enables applications to easily use multiple tiers to realize composite benefits

This work was supported by NSF Grant CSR-1162405.

Accessing multiple tiers introduces significant complexity to the application. The application not only has to deal with different interfaces and data models of the storage, but at the same time has to program policies to manage data across the different storage services to realize the desired metric.
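To make this hand-coded complexity concrete, the following Python sketch shows the kind of two-tier read/write logic an application must otherwise write and maintain itself. Plain dicts stand in for an ElastiCache-like cache and an EBS-like persistent store; all class and method names here are our own illustrative choices, not any cloud provider's API:

```python
class TwoTierStore:
    """Hand-rolled tiering logic an application needs without a
    middleware: keep hot data in a volatile cache tier and fall back
    to a durable persistent tier. Real deployments must additionally
    juggle two entirely different client libraries and data models."""
    def __init__(self):
        self.cache = {}        # low latency, volatile, expensive
        self.persistent = {}   # durable, cheaper, slower

    def write(self, key, value):
        # The application must remember to keep both tiers consistent.
        self.persistent[key] = value
        self.cache[key] = value

    def read(self, key):
        if key in self.cache:            # hot path
            return self.cache[key]
        value = self.persistent[key]     # cold path: durable tier
        self.cache[key] = value          # populate cache for next time
        return value
```

Even this toy version must hard-code placement, consistency, and cache-population decisions; Tiera's goal is to lift exactly this logic out of the application.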
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from Permissions@acm.org.
Middleware '14, December 08–12, 2014, Bordeaux, France
Copyright 2014 ACM 978-1-4503-2785-5/14/12.

¹Note that we use Amazon as an illustrative example in this paper, but similar issues also apply to other cloud providers.

For example, two open source web applications that use multiple storage services when deployed in the cloud,

WordPress [28] and Moodle [17], have 1000–5000 additional lines of code to manage data across different storage tiers (WordPress across Memcached and S3; Moodle across Memcached, Local Disk, and MongoDB). The popular open source database MySQL [18] has over 6000 lines of code to support memory and S3 as storage media. Hence we see that an application must make a choice: opt for simplicity by using one or a small number of storage tiers and live with the tradeoffs, or embrace complexity – be willing to use different interfaces, decide on the appropriate capacity for each kind of storage, and manage data placement and other policy requirements explicitly. We believe this is a false choice.

In this paper, we present Tiera, a middleware that enables the provision of flexible and easy-to-use multi-tiered cloud storage instances. A Tiera instance encapsulates multiple cloud storage services and enables easy specification of a rich array of data storage policies to achieve desired tradeoffs. The client of a Tiera instance is shielded from the underlying complexity introduced by the multi-tiered cloud storage services, and specifying a Tiera instance is simple and straightforward, as we will illustrate. The novelty of Tiera lies in the first-class support for encapsulated tiered storage, ease of programmability of data management policies, and the ability to dynamically replace/add storage policies and tiers. In the long run, we envision Tiera instances that span a cloud and edge resources, or multiple clouds (public or private), or data centers. Here, we focus on a single-cloud implementation.

The key contributions of this paper are:

- We present the design and implementation of Tiera, a lightweight middleware that enables easy specification of multi-tiered cloud storage instances.
- We show how Tiera can support a rich array of storage policies to realize a desired metric (e.g., low latency or low cost) through a powerful event-response mechanism.
- We demonstrate the benefit of Tiera by deploying two unmodified applications, MySQL and the online bookstore application bundled with the TPC-W benchmark, on Tiera in the Amazon cloud, yielding an increase in their respective throughputs of 47%–125% and 46%–69% over standard deployments.

The rest of the paper is organized as follows. Section 2 provides an overview of Tiera. This section also provides examples of Tiera instance specifications to demonstrate its power, flexibility, and ease-of-use. Section 3 explains the implementation details of a Tiera prototype, implemented in the Amazon cloud. Section 4 discusses the results of our experimental evaluation. The results demonstrate how an application can use primitives provided by Tiera to realize the composite benefits of multiple storage services without changes to the application logic itself. Section 5 describes related work. Section 6 concludes the paper and describes possible future research directions.

2. TIERA OVERVIEW

2.1 Data Model

Tiera implements an object storage model where data is managed as objects [16]. This model enforces an explicit separation of data and metadata, enabling unified access to data distributed among the different storage services that constitute a Tiera instance. An object stored using Tiera can be accessed by the application using a globally unique identifier that acts as the key to access the corresponding value stored (as an object). It is left to the application to decide the keyspace from which to select this globally unique identifier. Tiera exposes a simple PUT/GET API to the application to store and retrieve data. An object stored into Tiera cannot be edited, though an application can choose to overwrite an object.

Tiera treats objects stored within it as an uninterpreted sequence of bytes that can be of variable size and represent any type of application data, e.g., text files, tables, images, etc. Tiera tracks the common attributes or metadata for each object: size, access frequency, dirty flag, location (i.e., which tiers), and time of last access. In addition, each Tiera object may also be assigned a set of tags. Tags are stored as part of object metadata and provide a method to add structure to the object name space. Tags enable an application to define object classes (those that share the same tag). The user can then easily specify policies that apply to all objects of a particular class. Tags may also be used to pass application "hints" to Tiera. For example, an application could add a "tmp" tag to temporary files, and a policy could dictate that objects with the "tmp" tag be stored in inexpensive volatile storage.

2.2 Architecture

An application interacts with Tiera by specifying the different cloud storage tiers it desires to use and their respective initial capacities. The application also specifies a policy that governs the life cycle of data stored through Tiera. A tier can be any source or sink for data with a prescribed interface. The storage tiers, along with the Tiera server, constitute a Tiera instance. Once configured, the application interacts with the Tiera instance to store and retrieve data. It can also interact with the instance to alter its configuration and the governing policy.

The Tiera server has three primary roles: (1) to interface with applications to enable storage and retrieval of data, (2) to interface with different storage tiers to read/write data to them, and (3) to manage the data placement and movement across different tiers. These roles are performed by three layers: the application interface layer, the storage interface layer, and the control layer, respectively (Figure 2).

Figure 2: Three layers of a Tiera server

The application interface layer exposes a simple PUT/GET API that allows an object to be placed/retrieved with respect to the Tiera instance. The client can merely call PUT/GET and let the Tiera server decide in which tier the object should be placed/retrieved (e.g., in the Amazon context: Memcached, S3 bucket, EBS volume, and so on) based on the control layer. Note that it is also possible to support the traditional POSIX file system interface to Tiera. We have developed such an interface using Filesystem in Userspace (FUSE) [10] to run applications that require a POSIX interface to Tiera.

The storage interface layer interfaces with the storage services encapsulated by the Tiera instance. The Tiera server uses service-specific APIs to interact with the different storage tiers to carry out different operations such as object storage/retrieval, moving data across tiers, and resizing the storage tiers.

The control layer decides how data is to be placed and managed throughout the Tiera instance lifecycle. It provides two primary mechanisms—event and response—to manage the data within the instance. An event is the occurrence of some condition, and a response is the action executed on the occurrence of an event. Events can be combined such that a particular response is initiated only when all the conditions hold, and similarly multiple responses can be associated with a single event.
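As a concrete illustration of the data model and PUT/GET interface described above, the following Python sketch models an object carrying the metadata attributes the paper lists (size, access frequency, dirty flag, location, time of last access, tags) and a tag-driven placement hint. The class names, tier names, and the specific "tmp" placement rule are our own illustrative assumptions, not Tiera's actual code:

```python
import time

class TieraObject:
    """Per-object metadata tracked by Tiera, per the paper: size, access
    frequency, dirty flag, location (which tiers hold a copy), time of
    last access, and an optional set of tags."""
    def __init__(self, key, value, tags=None):
        self.key = key
        self.value = value                  # uninterpreted bytes
        self.size = len(value)
        self.access_frequency = 0
        self.dirty = True                   # not yet written back
        self.location = set()               # names of tiers holding a copy
        self.last_access = time.time()
        self.tags = set(tags or [])

class TieraInstance:
    """Minimal PUT/GET sketch. The tier names and the "tmp" hint policy
    are illustrative assumptions, not Tiera's actual defaults."""
    def __init__(self, tier_names):
        self.tiers = {name: {} for name in tier_names}
        self.metadata = {}

    def put(self, key, value, tags=None):
        obj = TieraObject(key, value, tags)
        # Tag-based "hint": objects tagged "tmp" go to the volatile tier,
        # everything else to the persistent tier.
        tier = "memcached" if "tmp" in obj.tags else "ebs"
        self.tiers[tier][key] = value
        obj.location.add(tier)
        self.metadata[key] = obj

    def get(self, key):
        obj = self.metadata[key]
        obj.access_frequency += 1
        obj.last_access = time.time()
        tier = next(iter(obj.location))     # any tier holding a copy
        return self.tiers[tier][key]
```

In the real system this placement decision is made by the control layer, driven by the event-response policies described next, rather than hard-coded in the PUT path.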
These two mechanisms together form the primary building blocks for data management policies in Tiera. Events may be defined on individual named objects or object classes, the latter allowing a single policy to apply to object collections (sharing a common tag) and differential policies to be easily specified.

Tiera supports three different kinds of events: (1) timer events that occur at the end of a specified time period, (2) threshold events that can be based on attributes of data objects and of the tiers themselves, and occur when the value of the attribute reaches a certain value, and (3) action events that occur when actions such as data insertion or deletion are performed. Table 1 shows some of the responses currently supported by our Tiera implementation: store data in a tier, retrieve data from a tier, move data between tiers, copy data from one tier to another, and delete data in a tier. Tiera also supports advanced responses: storeOnce, grow/shrink, compress/uncompress, and encrypt/decrypt. Tiera's design is highly modular, making it very easy to add a new response. Other responses will be added to Tiera in the future to support transactions, data snapshotting, and object versioning. In the next section (2.3) we will show how a rich array of data management policies can be easily constructed using these event-response mechanisms.

Table 1: Supported responses in Tiera

Response    | Arguments                                | Function
store       | Objects, Tiers                           | Stores objects in the specified tiers.
storeOnce   | Objects, Tiers                           | Stores objects in the specified tiers. An object is stored only if its content is unique.
retrieve    | Objects                                  | Retrieves objects from an underlying tier.
copy        | Objects, Destination Tiers, BandwidthCap | Copies objects to the specified tiers. Transfer speeds are throttled if a bandwidth cap is specified.
encrypt     | Objects, Key                             | Encrypts objects with the specified key.
decrypt     | Objects, Key                             | Decrypts objects with the specified key.
compress    | Objects                                  | Compresses the specified objects. The ZLIB compression library is used to perform compression.
uncompress  | Objects                                  | Inflates the specified objects.
delete      | Objects, Tiers                           | Deletes objects from the specified tiers.
move        | Objects, Destination Tiers, BandwidthCap | Moves objects to the specified tiers.
grow        | Tier, PercentIncrease                    | Expands tier capacity by the specified percentage.
shrink      | Tier, PercentDecrease                    | Reduces tier capacity by the specified percentage.

2.3 Defining Tiera Instances

Tiera instance configuration, including policies, is specified through an instance specification file. The instance specification provides the desired storage tiers to use, their capacities, and the set of events along with corresponding responses to be executed. An application realizes the tradeoffs it desires by (i) selecting different storage tiers that constitute the instance, and (ii) specifying the event-response pairs used to define the policy.

For example, consider the following specification for a Tiera instance called LowLatencyInstance (Figure 3). This instance uses two storage tiers and an event that specifies that a data object is to be placed in tier1 (Memcached) when it is inserted into this Tiera instance (with PUT). The instance also implements a write-back policy by combining a timer event with the copy response, to write out any dirty data (i.e., data added or modified in a tier since the last copy) to the persistent store at regular time intervals. It is assumed that the specific tier names (e.g., Memcached and EBS) are known to Tiera.

    Tiera LowLatencyInstance(time t) {
      % two tiers specified with initial sizes
      tier1: { name: Memcached, size: 5G };
      tier2: { name: EBS, size: 5G };

      % action event defined to always store data
      % into Memcached
      event(insert.into) : response {
        insert.object.dirty = true;
        store(what: insert.object, to: tier1);
      }

      % write back policy: copying data to
      % persistent store on a timer event
      event(time t) : response {
        copy(what: object.location == tier1 &&
                   object.dirty == true,
             to: tier2);
      }
    }

Figure 3: LowLatency Tiera instance
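To make the event-response semantics concrete, the following Python sketch models the control layer's policy loop and the write-back policy of Figure 3: a (simulated) timer tick triggers a response that copies dirty tier1 objects to tier2. The class and field names are our own illustrative stand-ins, not Tiera's implementation:

```python
from types import SimpleNamespace

class PolicyEngine:
    """Events are predicates over instance state; responses are actions.
    A response fires only when all of its combined events hold."""
    def __init__(self):
        self.policies = []              # list of (events, response) pairs

    def on(self, events, response):
        self.policies.append((events, response))

    def tick(self, inst):
        # Called on each (simulated) timer tick.
        for events, response in self.policies:
            if all(event(inst) for event in events):
                response(inst)

# A toy instance: two tiers plus per-object metadata.
inst = SimpleNamespace(
    tiers={"tier1": {"a": b"v1"}, "tier2": {}},
    meta={"a": SimpleNamespace(dirty=True, location={"tier1"})},
)

def has_dirty_in_tier1(inst):
    # Event: some object located in tier1 is dirty.
    return any(m.dirty and "tier1" in m.location for m in inst.meta.values())

def write_back(inst):
    # Response: copy dirty tier1 objects to tier2, clear the dirty flag.
    for key, m in inst.meta.items():
        if m.dirty and "tier1" in m.location:
            inst.tiers["tier2"][key] = inst.tiers["tier1"][key]
            m.location.add("tier2")
            m.dirty = False

engine = PolicyEngine()
engine.on([has_dirty_in_tier1], write_back)
engine.tick(inst)                       # the timer fires: "a" is written back
```

Combining several predicates in the event list gives the conjunction semantics described above, and registering the same event list with several responses gives one-event/many-responses behavior.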

Tiera enables rich policies that can be specified easily to realize a desired tradeoff (e.g., latency vs. cost or durability). For instance, for the LowLatencyInstance, by choosing to read/write data from/to Memcached, low latency will be achieved but at high monetary cost and reduced data durability. If an application desires better data durability, it could specify a smaller time value t for data write-back.

As another example, the PersistentInstance (Figure 4) trades performance for better data durability. This instance uses a small Memcached tier to cache the most recently written data. It implements a write-through policy between tier1 (Memcached) and tier2 (EBS). A write-through policy can be specified using an action event with a copy response that causes an object to be inserted into tier2 as soon as it is inserted into tier1.

    Tiera PersistentInstance() {
      tier1: { name: Memcached, size: 200M };
      tier2: { name: EBS, size: 1G };
      tier3: { name: S3, size: 10G };

      % write-through policy using action event
      % and copy response
      event(insert.into tier1) : response {
        copy(what: insert.object, to: tier2);
      }

      % simple backup policy
      event(tier2.filled 50%) : response {
        copy(what: object.location == tier2,
             to: tier3, bandwidth: 40KB/s);
      }
    }

Figure 4: Persistent Tiera instance

Tiera also allows easy specification of object placement and caching policies through the use of object attributes such as access frequency and time of last access. For example, access frequency can be used for easy specification of hot and cold objects. Similarly, time of last access can be used to identify old and new objects, making it simple to implement LRU or MRU eviction policies in Tiera, as shown in Figure 5.

    % LRU Policy
    event(insert.into tier1) : response {
      if (tier1.filled) {
        % Evict the oldest item to another tier
        move(what: tier1.oldest, to: tier2);
      }
      store(what: insert.object, to: tier1);
    }

    % MRU Policy
    event(insert.into tier1) : response {
      if (tier1.filled) {
        % Evict the newest item to another tier
        move(what: tier1.newest, to: tier2);
      }
      store(what: insert.object, to: tier1);
    }

Figure 5: Implementing LRU and MRU in Tiera

The event-response framework also allows for dynamic modification to the instance configuration to respond to changing workload. For example, consider a scenario where an application is using the PersistentInstance (Figure 4) to store data, but its working set size is about to exceed 200 MB. To handle this increase in working set, a threshold event can be added to the instance specification to grow the Memcached tier when the amount of stored data reaches a cap. The event-response specification for doing this is illustrated in Figure 6.

    Tiera GrowingInstance(time t) {
      tier1: { name: Memcached, size: 200M };
      tier2: { name: EBS, size: 2G };

      % Placement Logic
      event(insert.into) : response {
        store(what: insert.object, to: tier1);
      }

      % Growing with workload: add as much Memcached
      % storage as its current size every time the
      % tier is 75% full
      event(tier1.filled 75%) : response {
        grow(what: tier1, increment: 100%);
      }

      % write-back policy
      event(time t) : response {
        move(what: object.location == tier1,
             to: tier2);
      }
    }

Figure 6: Expanding a tier

3. TIERA IMPLEMENTATION

We now describe our implementation of a prototype Tiera server (which is under 4000 lines of code) in the Amazon public cloud using the following storage tiers: Memcached, Ephemeral Storage (Amazon EC2 local volumes), Amazon EBS, and Amazon S3. The Tiera server is deployed as a Thrift server [4] on an EC2 instance (it can be co-located with the application on the same EC2 instance). Thrift is a remote procedure call framework that enables applications written in different languages to communicate with each other. The use of Thrift makes it easy to interface applications written in different languages with Tiera. When the server starts up, it begins by reading the configuration file that indicates the different tiers (and their capacities) that constitute the instance, the size of the thread pool dedicated to servicing client requests, the size of the thread pool dedicated to servicing responses and evaluating events, and the location to persistently store metadata and credentials for an Amazon Web Services account. All object metadata is stored and persisted using BerkeleyDB [19]. Once the tiers to be used are established, and the two thread pools and the metadata store are initialized, the instance is ready to serve client requests. When the instance receives a client request, it is serviced by a thread from the thread pool dedicated to servicing user requests. The thread servicing the PUT/GET requests takes an appropriate action, as dictated by the policy programmed on the instance.

The prototype supports the three types of events mentioned previously: action, threshold, and timer. The prototype currently supports all the responses listed in Table 1. A desired policy is implemented in the instance by hand-coding

the event-response pairs into the control layer. Automated compilation and optimization of specification files will be addressed in future work. We next describe how different events are implemented in the prototype.

Timer events are handled by a dedicated thread in the control layer. This thread is responsible for examining whether a timer event has occurred. Once this thread determines that an event has indeed occurred, it signals a free thread (part of the thread pool mentioned) to service the event by executing the response associated with the particular timer event. The original thread continues to check the occurrence of other timer events. At present, Tiera allows timer events to be specified at the granularity of seconds.

Threshold events can be specified as background or foreground (the default is foreground). Threshold events are evaluated when actions affecting a variable on which the threshold is defined occur. For example, consider a threshold event defined on the amount of data stored in a tier. Two actions affect this variable directly: (1) storing new data in the tier, and (2) deleting data stored in the tier. Both actions trigger the threshold event to be evaluated, checking whether the defined threshold has been reached. Background events are evaluated asynchronously to the actions mentioned and must be explicitly declared as such; foreground events are evaluated synchronously.

Action events are generally foreground events and are evaluated in the context of the thread servicing a client request. Associated responses are required to be fast, since they affect the latency of data access. If a slow response needs to be associated with an action event, then it should be specified as a background event. The occurrence of the event will cause a thread to be signalled, which will wake up and service any response associated with the action event.

4. EXPERIMENTAL EVALUATION

We evaluated the Tiera prototype in the Amazon cloud. The Tiera instance containers and the clients were hosted on EC2 instances. For our experiments we used EC2 t1.micro instances (1 ECU, 615 MB of RAM, and 8 GB of EBS storage) and EC2 m3.medium instances (3 ECU, 3.75 GB of RAM, and 8 GB of EBS storage) to host the Tiera instances. The client workloads were generated using a combination of benchmarking tools: sysbench [23], TPC-W [24], Yahoo Cloud Serving Benchmark (YCSB) [7], fio [9], and our own benchmarks. These benchmark tools were themselves run on EC2 t1.micro or m3.medium instances, and all measurements were made from these clients running in the Amazon cloud (i.e., no wide-area latency). Our experiments illustrate: (1) how easy it is to run unmodified applications on Tiera, and provide them the composite benefits of multiple storage tiers, (2) how Tiera enables an application to optimize for a particular metric, and (3) the minimal overhead introduced by Tiera.

4.1 Case Study: MySQL and TPC-W On Tiera

In this section we present our experience running two applications on Tiera: (1) MySQL, a popular open source database management system (DBMS), and (2) an online bookstore application (bundled with the TPC-W benchmark). We were able to run both applications on Tiera without any modifications to the applications themselves. Running these applications on Tiera, we are able to offer them the composite benefits of using multiple tiers.

4.1.1 MySQL On Tiera

MySQL [18] is a popular open source DBMS used by many applications. Within the Amazon cloud, MySQL is typically deployed on an EBS volume (persistent block store) attached to an EC2 instance. This deployment performs reasonably well when the amount of data accessed isn't very large or when there aren't many concurrent requests, so that requests can be served from the local instance's buffer cache or MySQL's built-in caches. However, the throughput drops significantly and the response latency increases when data requested by the client can no longer be served from these caches. Hence, many techniques have been explored to maintain the throughput level and keep the response latency bounded. One such technique is to store the database completely in memory. MySQL has a special storage engine called the Memory Engine that stores databases completely in memory. However, this technique only works for non-transactional workloads and when the database can completely fit in the node's memory. Also, storing the entire database in a single node's memory makes the deployment vulnerable to failure².

Another common technique is to modify the end application such that it caches the results of a database access in a memory storage system like Memcached [15]. When the application uses other storage services like Memcached to store database results, it has to deal with additional complexities such as being able to scale the storage service up and down with a change in the workload.

Apart from the limitations mentioned above, we see that either MySQL needs to be modified heavily (the Memory Engine implementation is 4000 lines of code) or the end application needs to be modified to optimize for performance. Here, we explore the possibility of running unmodified MySQL on Tiera to overcome the limitations mentioned above. Using Tiera also enables a MySQL deployment to easily optimize other metrics such as cost or reliability. The benefits of the optimizations can be passed to the end applications, without the applications having to manage multiple storage services and deal with the associated complexities.

Performance Optimization: For this experiment, we used the unmodified MySQL Community Edition [18] version 5.7. We hosted MySQL on an m3.medium EC2 instance. We generated an OLTP workload using sysbench, hosting the benchmark tool itself on a separate t1.micro instance. The OLTP workload followed the special distribution, that is, a certain percentage of the data is requested 80% of the time. We varied this percentage of data requested from 1% to 30%. We also varied the concurrency of the workload. We first ran MySQL on a non-root EBS volume attached to the m3.medium EC2 instance, which is a standard way to deploy MySQL in the cloud. We then deployed MySQL on two different Tiera instances (described below) and subjected them to the same workloads. Last, we subjected the MySQL Memory Engine to similar workloads.

For the following experiment we defined two Tiera instances: MemcachedReplicated and MemcachedEBS.

²For better fault tolerance it is required to run MySQL in a special cluster mode, which implies additional costs.

Figure 7: Throughput (a) and 95 Percentile Response Latency (b) For Read-Only Workload With 8 Threads

Figure 8: Throughput (a) and 95 Percentile Response Latency (b) For Read-Write Workload With 8 Threads

The MemcachedReplicated instance consists of two Memcached tiers: one Memcached tier in the same availability zone as the client and the other in a separate availability zone in AWS³. We defined a simple data management policy for this instance: on a PUT to the instance, the data is written to both tiers before being acknowledged. This replication of data provides better fault tolerance than having just one copy in memory as in the Memory Engine. The GET request is served from the Memcached tier in the same availability zone as the client. The MemcachedEBS instance consists of two tiers as well: a Memcached tier and an EBS tier. The data management policy for this instance involved writing data to both the Memcached and EBS tiers on PUT and serving data from Memcached on GET. The instance specification files for both these instances are under 15 lines each (in contrast to the nearly 4000 additional lines of code needed to support MySQL directly over memory). Both these instances had Memcached tiers large enough to fit the entire database in memory.
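As a rough model of the MemcachedReplicated policy just described (a PUT is acknowledged only after both Memcached tiers hold the data; a GET is served from the client-local zone), consider the following Python sketch. Plain dicts stand in for the two Memcached tiers, and the class and method names are ours, not Tiera's:

```python
class MemcachedReplicatedSketch:
    """Illustrative model of the MemcachedReplicated policy: PUT writes
    to both availability zones before acknowledging; GET is served from
    the zone local to the client."""
    def __init__(self):
        self.local_zone = {}    # same availability zone as the client
        self.remote_zone = {}   # separate availability zone

    def put(self, key, value):
        # Replicate synchronously to both zones, then acknowledge.
        self.local_zone[key] = value
        self.remote_zone[key] = value
        return "ack"

    def get(self, key):
        # Served from the Memcached tier in the client's zone.
        return self.local_zone[key]
```

The synchronous write to the second zone is what buys the fault tolerance over the single-copy Memory Engine, at the cost of one cross-zone round trip per PUT.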
The MemcachedEBS instance has a lower cost of storage per GB compared to the MemcachedReplicated instance, since it has a smaller amount of Memcached storage. Since we need to provide a POSIX interface to MySQL, we used the FUSE filesystem interface we developed to interface MySQL with the Tiera instances. The FUSE filesystem we developed splits the database files into 4 KB objects (the OS page size) and stores them in Tiera.

In Figures 7 and 8, we plot the throughput in terms of transactions per second and the 95 percentile response latency for read-only and read-write workloads with 8 threads. We see that for both read-only and read-write workloads the MemcachedReplicated instance performs best, supporting the highest throughput and providing the lowest response latencies. The MySQL deployment on the Tiera MemcachedReplicated instance provides a 125% increase in throughput compared to the standard MySQL deployment on EBS.
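The FUSE layer described above maps byte-addressed files onto Tiera's object API by splitting them into 4 KB pages. A minimal sketch of that mapping is shown below; the key format ("path#page_index") and the example path are our own illustrative assumptions, not the actual on-the-wire format:

```python
PAGE = 4096  # OS page size, as used by the FUSE layer described above

def split_into_objects(path, data):
    """Split a file's bytes into fixed-size objects, one per 4 KB page.
    Each page becomes one Tiera object keyed by path and page index."""
    return {f"{path}#{i // PAGE}": data[i:i + PAGE]
            for i in range(0, len(data), PAGE)}

def reassemble(path, objects, size):
    """Rebuild the file from its page objects."""
    out = bytearray()
    for i in range(0, size, PAGE):
        out += objects[f"{path}#{i // PAGE}"]
    return bytes(out)
```

Fixed-size pages keep writes to the middle of a database file cheap: only the affected 4 KB objects need to be overwritten in the instance, rather than the whole file.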
