ARTIFICIAL INTELLIGENCE, ROBOTICS AND AUTOMATION

THE BEST OR THE WORST THING EVER TO HAPPEN TO HUMANITY?

Contents

1 THE EDGE OF THE PRECIPICE
2 WHAT IS AI, RPA AND ROBOTICS?
3 TIME FOR A CONTRACT REFRESH

THE EDGE OF THE PRECIPICE

As Professor Stephen Hawking said[1], we do not yet fully understand and cannot predict the true impact of AI, and yet the race to business and operational transformation via the implementation of digital technologies, such as artificial intelligence (AI) and robotic process automation (RPA), is on an inexorable rise.

And whilst there may be some debate as to the socio-economic impact of the rise of the machines, and whether they will in time decimate the human race in the manner of a science fiction disaster movie, for the time being their use is rather more prosaic. There is no doubt that AI and RPA are here to stay, and businesses, academic institutions and governments are being encouraged to develop their intelligence further. It is therefore essential to look to the intelligent future and work both to facilitate innovation, allowing businesses to embrace technology, and at the same time to mitigate any associated risks. We examine some of the business opportunities and challenges faced, as well as providing our insight on how to manage these issues, both in strategic sourcing programmes and in transformative, technology-enabled projects.

WHAT IS AI, RPA AND ROBOTICS?

There is much talk of AI, robotics and RPA, almost on an interchangeable basis. In this paper, these terms are defined as having the following meanings:

Artificial intelligence (AI) – technically a field of computer science, and a phrase coined by John McCarthy in the 1950s, AI is the simulation of human intelligence by machines. It is often sub-divided into 'strong' and 'weak' AI (strong or hard AI is true human mimicry, often the focus of Hollywood, whereas weak or soft AI is more often focussed on a narrow task).

Machine learning – the ability of a machine to improve its performance in the future by analysing previous results. Machine learning is an application of AI.

Robotic process automation (RPA) – the use of software to perform repeatable or clerical operations previously performed by a human.

Neural networks – an example of machine learning; a neural network is a connected network of many simple processors, modelled on the human brain.

Deep learning – a form of machine learning inspired by the function and structure of the human brain.

Heuristics – a 'rule of thumb', more akin to gut feeling (as opposed to algorithms, which guarantee an outcome), used in AI to solve problems quickly.

One important thing to note is that, in spite of the hype surrounding RPA, it won't do much by itself "out of the box". It needs to be taught, and it will continue to learn before and after deployment, as indicated in the diagram below. This means that the use of RPA comes with an investment cost and a time requirement that is important to bear in mind when seeking to understand when the issues set out here are likely to manifest.
It also goes some way to underlining that the use of RPA requires a relatively long-term investment in order to obtain and maintain the full potential benefits.

[Diagram: training and live use of a machine learning system, using the question "Does this scan indicate cancerous growth?" as the example task.
Training phase: the system is initiated with no data, trained with initial data (accuracy level: 50%), then tested on new data, with the results of testing fed back into the system so that accuracy rises (63%, then 74%). By comparison, a human is trained via medical school and on-the-job experience – after 10 years of practice, they have seen several thousand scans.
Live use phase: a patient presents with symptoms, a CT scan is carried out, and the system is tested on the new data. The machine combines results from potentially hundreds of thousands of scans, including input from edge cases seen by the best diagnosticians.]
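The train-test-feedback loop described above can be sketched in a few lines of code. This is a deliberately toy illustration – a one-feature classifier on invented 'scan' data, not any real RPA or diagnostic product, and the figures it produces are not the percentages in the diagram – but it shows the mechanism: accuracy improves as the results of testing on new data are fed back in as further training data.

```python
import random

random.seed(0)

# Invented data: one feature per 'scan' (e.g. lesion size);
# ground truth is that sizes above 5.0 indicate a positive case.
def make_case():
    size = random.uniform(0.0, 10.0)
    return size, size > 5.0

def train(cases):
    """Learn a decision threshold: the midpoint of the two class means."""
    pos = [s for s, label in cases if label]
    neg = [s for s, label in cases if not label]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def accuracy(threshold, cases):
    return sum((s > threshold) == label for s, label in cases) / len(cases)

training_data = [make_case() for _ in range(5)]   # initiation: almost no data
test_set = [make_case() for _ in range(1_000)]

for round_no in range(3):
    threshold = train(training_data)
    acc = accuracy(threshold, test_set)
    print(f"round {round_no}: {len(training_data):>3} cases, accuracy {acc:.0%}")
    # Results of testing on new data are fed back into the system.
    training_data += [make_case() for _ in range(50)]
```

The learned threshold drifts towards the true boundary as the training set grows – which is also why the long-term investment point matters: the value is in the accumulated training data, not the untrained software.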

TIME FOR A CONTRACT REFRESH

The potential benefits of implementing AI or RPA within a business can be significant, and even transformative for the commercial well-being of that company – so long as it is set up to succeed. The use of AI and RPA, particularly in outsourcing deals, can give rise to a number of novel and differently nuanced issues that, if not addressed at the outset, could create significant problems for the future.

SERVICE LEVELS AND FAILURE

Broadly speaking, current service level models are devised to incentivise suppliers to avoid 'low grade' issues that might arise if staff do not follow proper processes. This is because human beings are by definition fallible and will be more or less efficient depending upon a large number of factors.

This is not the case with AI-based services, which do not (or should not) suffer from the same challenges as those that will likely give rise to human error. Accordingly, it is not unreasonable to expect improved service levels for processes supported by RPA; in fact, this will often be perceived to be one of the key reasons for the implementation of RPA.

The flip side is that if RPA failures occur, there is a far greater risk that the incidents will be catastrophic rather than minor. This is because AI-based systems tend to work at a demonstrable accuracy level or will, if this accuracy level cannot be achieved, fail in a significant way far below the relevant standard. It is far less likely that such systems will degrade by small margins as human-provided services might. When a defect or error occurs, its likelihood of repetition and of going unseen is greater than that of a human error, as it will likely have been "programmed" within the RPA solution and accordingly will become part of the norm. Only continued oversight and management of the solution will enable these errors to be recognised, unless the RPA itself can recognise its own errors.

CONFIDENTIALITY AND IP

Software has been writing software without human intervention for some time.
Who owns the resulting new code? Similarly, valuable derived data from huge raw data sets may be sold in much the same way as market data. Likewise, who owns a new derived data set which has been created by the machine?

Clearly, any applicable agreement will need to include terms that deal with the relevant issues. The key here is understanding the likely different outputs that might be created as a consequence of the deployment of the AI or RPA technology. It will be important to rethink the provisions insofar as they relate to matters such as configurations, outputs that reflect or are a manifestation of business rules, and templates generated by the AI or RPA. There are two particular issues that may require different treatment – background IP and know-how provisions.

It is not unusual for customers to agree that modifications or enhancements to the supplier's background IP are owned by the supplier (often on the basis that they are worthless without the underlying product). However, in AI or RPA deployments, this category of IP can instead have its own intrinsic value that the customer ought to consider before letting it go: for example, because, if used by the supplier or a third party, it would allow that entity to replicate the customer's business practices (potentially even more efficiently than the customer), or because it is something that the customer will continue to need to own because of its value to the company itself.

Similarly, most customers will agree a know-how clause, permitting the supplier to use the knowledge gained by the supplier in the course of providing the services. But this ought to be reconsidered on the basis that the supplier might acquire knowledge not of humans, but of machines and software, opening up the possibility for the supplier to re-use material and knowledge that the customer believes to have been protected.

AUDIT AND TECHNOLOGY

Customers often ask for audit rights – especially in particular sectors such as financial services, where a regulated entity is required to ensure appropriate audit rights and may incur substantial sanctions from its regulators if it cannot audit and monitor the work of its service providers.

Such monitoring is easier within the traditional sourcing environment, where a supplier can be audited mainly through a review of documents, reports and procedures. Any work done by a human can be checked by another human relatively easily. In the new context of AI and RPA, it is more difficult to work out how the AI system is working (and evolving) through the service.

If a machine learning-based system has formulated its own pattern-matching approaches to determine the probability of a given action being the correct response to particular inputs, human auditors will not necessarily be able to derive or follow the underlying logic and reassure themselves in the same way that they might by interviewing workers to check their level of training and competency. It may well be that, instead of relying only on traditional accountants and audit professionals, forensic IT experts should be added to the team that performs the audit.

HR, REDUNDANCIES AND KNOWLEDGE TRANSFER

Those implementing AI and RPA clearly need to understand the HR consequences.
Transformational programmes will need to address process risks such as collective consultation requirements, where failure could potentially delay progress or give rise to significant financial penalties. Equally, potential redundancies will undoubtedly be a sensitive issue, as well as potentially triggering severance payments. Newly created roles on the back of change may give rise to redeployment and retraining obligations for those displaced. Both remuneration design and representation structures will potentially be impacted and come into play.

A particular challenge will be understanding the impact of AI and RPA on the workforce sufficiently to identify legal obligations and to avoid timing issues caused by failing to comply with any obligations in the required timescales, for example collective consultation processes or filing notification of redundancies with competent authorities. Another difficulty, where there is a proposed outsourcing and transformation, will be understanding whether or not automatic transfer rules apply, such as those under TUPE/ARD or similar legislation, and who has the ability to effect redundancies pre- or post-transfer. This will involve asking important questions around exactly when and how the transformation will impact employees, and navigating the legal constraints accordingly.

[Diagram 1: the normal transfer-in/transfer-out TUPE model for outsourced services – the customer's employees transfer to the supplier at the service start date, and the supplier's employees transfer to the customer (or a replacement supplier) at the service end date.
Diagram 2: AI-based service provision – transfer-in followed by gradual redundancy. The customer's employees transfer to the supplier under TUPE at the service start date, but on exit the question becomes: what IP/knowledge does the customer get back?]

It is accepted practice, where TUPE/ARD or similar legislation applies, that offer and acceptance arrangements are used to re-engage employees who are involved in providing a given service that is to be outsourced, and that these employees may transfer to the supplier upon the commencement of service provision. Generally, when a customer transfers its employees to the supplier, it may expect those of the supplier's employees (or at least a skilled and knowledgeable subset of them) who were providing the services to transfer either back to the customer or onward to a replacement supplier when the services are terminated. From a customer perspective, this is aimed at ensuring that the customer can continue with the services directly (or with third parties) with the same standards of service and with the benefit of relevant know-how, as well as not saddling the supplier with staff it no longer requires and the associated workforce restructuring issues.

Where RPA or AI is involved in the service provision, some or possibly all of the employees previously providing the services within the customer organisation may have become redundant during the period of service supply as a result of the deployment of RPA or AI. It follows that there may be few, if any, employees to transfer back to the customer or onward to a new supplier, and a resulting loss of know-how transfer. Upon contract termination, if the AI system is licensed software, it may well remain with the supplier, along with the experience and machine learning that it has developed during the provision of the services.
In that context, it is important to address in the contract how to reimport information into a new AI system so as to have an accelerated period of learning. Exit provisions are accordingly becoming more relevant, and also need to address who will own, or have rights to use, the IPR in the tool itself, at least insofar as it represents a reflection of the customer's activities and operations.

LIABILITY

At present, heads of uncapped loss are negotiated assuming the failure modes that we have seen in other contracts where the work is done by humans. However, if a substantial portion of the work is to be undertaken using artificial intelligence, the most likely failure modes will be different, and the traditional liability positions take on a new significance.

Where AI is undertaking an increasing share of the work, with humans checking only a small portion of its output, errors might accumulate more rapidly and be caught less frequently. Similarly, whilst a machine might generally work more quickly than a human workforce, and work twenty-four hours a day instead of eight-hour shifts, the resilience of the machine needs to be considered. If it goes down, that is the equivalent of every person in a human workforce not turning up: no work gets done. This makes low-level failure – the type that a service level and service credit regime in a contract might be designed to avoid – less likely, and catastrophic failure a bigger issue.

In addition, depending on the nature of the system and its ability to back up its 'experience' in the form of its stored patterns for processing the work, if that pattern is lost after the human workforce that was previously doing the work has moved on, then the customer's ability to undertake the work, or even meaningfully to recreate the AI systems to undertake that work, is badly compromised. The literal loss of corporate memory would be acute.

The net result is that failures are likely to be a rarer species, but potentially more severe.
The potential for lower-value claims from the customer against the service provider is perhaps reduced, but the customer will remain very nervous about a major outage, and even more concerned about the loss of those precious experience patterns that represent the AI itself.

As a result, the traditional approach – whereby suppliers are nervous of, and customers often accepting of, a position where the supplier takes little, if any, liability for the business impact or even for the loss of the customer's data – may require, at least from a customer perspective, a re-think: AI and RPA are not providing a service to support the business, they have become the business. Similarly, customers may see the lower end of the current market-standard financial caps as not being sufficient if a truly catastrophic failure occurs, whereas a supplier will want to achieve an ongoing balance between the risks it can accept and its reward, together with its inherent capability and desire to take on material liability for what might be perceived as 'run of the mill' services.

COMMITTED BENEFITS

One of the principal benefits of deploying an AI or RPA solution is to reduce and eradicate costs on a long-term basis. Many contracts already include an element of commitment to benefits on the part of the supplier as part of the deal, which will often be achieved by the implementation of AI and RPA (as well as by some more traditional methods such as process improvement and rate arbitrage). We believe most major AI and RPA deals (whether standalone or as part of a broader outsourcing) will contain a level of committed benefits, whereby the supplier contractually promises to save the customer money and, if this is not achieved, will pay the customer an amount to make up the shortfall. The likely quid pro quo will be a request from the supplier to share in the excess saving.
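A committed-benefits mechanism of this kind reduces to simple arithmetic. The sketch below uses invented figures – the baseline, the committed saving and the excess-share percentage are assumptions for illustration, not market-standard numbers:

```python
# Invented figures for illustration of a committed-benefits clause.
baseline_cost = 10_000_000    # agreed annual cost baseline before transformation
committed_saving = 2_000_000  # supplier contractually promises this saving
excess_share = 0.5            # supplier's agreed share of any excess saving

def settle(actual_cost):
    """Return (shortfall payment to customer, supplier share of excess saving)."""
    actual_saving = baseline_cost - actual_cost
    if actual_saving < committed_saving:
        # Supplier 'cuts a cheque' to make up the shortfall.
        return committed_saving - actual_saving, 0.0
    # Quid pro quo: supplier shares in any saving beyond the commitment.
    return 0.0, (actual_saving - committed_saving) * excess_share

print(settle(9_000_000))  # saving of 1.0m: supplier pays the 1.0m shortfall
print(settle(7_000_000))  # saving of 3.0m: supplier takes half of the 1.0m excess
```

Even in this toy form, the calculation shows why the baseline matters so much: every term turns on what the parties agreed the pre-transformation cost actually was.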

Contractualising the mechanism by which the savings are committed is absolutely key, and often fundamental to the rationale of selecting one supplier over another. This will likely require a clear understanding of the baseline of costs against which the saving is to be delivered, what the saving actually is and how it is to be quantified, and an explanation of how the customer can be sure that the saving has actually been delivered and that it is sustainable.

PROTECTION ON EXIT

A key risk with an AI or RPA deployment is the term of the relevant licence and what happens if/when this comes to an end; the same risk exists at the end of an outsourcing transaction in which AI or RPA plays a key part. If deployed RPA or AI is suddenly removed, the customer may be faced with a material spike in costs as it has to replace the solution, both temporarily and then permanently. There is also a significant loss of knowledge, which could impact on the customer's ability to conduct its business. Accordingly, it is key to address three major issues before the contract is signed, rather than leaving these to be dealt with upon exit. These issues are:

1) the "leave behind" IPR, whether owned by or licensed to the customer – this will need to cover configurations of software, manifestations of business rules applied by the customer, process improvements and anything embedded within a process that would "break" the process if it was removed;

2) a continuing standalone licence to the AI/RPA software – it is preferable to negotiate a standalone licence to the version of the software being used by the customer at the point of exit, on terms that can survive a termination, even if this means a separate fee is payable for it; and

3) an obligation to deliver the transformation itself, not just the commercial benefits – most AI/RPA-heavy deals are of course transformative. Where the deal involves a level of committed benefits, there is a risk that the customer concentrates on the supplier "cutting a cheque" to achieve the benefits, even if operationally the underlying change is not delivered. This is dangerous because (i) the supplier might not be able and willing to stand behind it in the long term, and so it might lead to a negotiation and an unravelling of one of the fundamentals of the deal, and (ii) without the delivery of the transformation project, the customer is not being transformed, and so on exit it will be – operationally – even further behind its desired state than it was at the beginning of the transaction.

REGULATORY OVERSIGHT

In many sectors, not least the Financial Services sector, customers are subject to increasing regulation in connection with their use of technology and outsourcing to support their business. Many of the regulatory requirements are aligned to "traditional" outsourcing models and can be difficult to apply directly to transactions that have a heavily automated aspect to them. Ensuring regulatory compliance whilst achieving the full benefits of an outsourcing harnessing AI and robotics will need to be a carefully approached task.

INTERPLAY WITH OTHER SOFTWARE

Whilst some AI or automated systems might operate on a standalone basis, more often than not the relevant systems will be connecting to and interacting with other systems within the wider IT environment. Where this happens, the terms upon which software running within the wider environment (i.e. software that the AI or automated system might interact with) is licensed need to be considered. It may not be the case that the contemplated interaction falls within the scope of the licence applicable to such third-party software, or, if it does, it may trigger provisions which impact upon the licence fee for that third-party system.

Many software licences now specifically address the situation where the licensed system is to interface with AI or another form of automated system instead of human users. In extreme cases, this form of interaction might simply be prohibited. In others, the terms might provide for differential fee structures based upon how the system is to be used. For instance, where software is licensed on the basis of a fee per user or fee per 'seat', any human user might count as a single user, whereas automated systems count as multiple users – commonly between 3 and 10 – on the basis that the automated systems have the potential to use the system at a significantly greater rate than a human user might.

There is some plausible logic for this when looked at from the perspective of the software vendors. If large numbers of human users are rapidly replaced by a much smaller number of automated systems, and those automated systems each only count as one 'user' despite doing the same volume of work as several human users might previously have done, then the vendor's future revenue stream will soon dry up. The ongoing cost of support and future development work does not change, but now needs to be spread across a smaller population of largely robotic 'users' to maintain the same revenue and margin position, so the fee for these new types of users has to be greater. Whilst that argument is logical, from the customer's perspective a large differential in pricing for apparently the same 'user' access might still seem to be an unfair charging scheme.

To avoid this problem, it might be sensible to move to a 'pay as you go' charging scheme based on transactions processed, compute power consumed, or some other common 'cloud' or 'X as a service' type of metric. Under those types of models, the automation problem is solved, as the charges are based on the level of work done with the system, regardless of whether the work is done by human users or an automated system.

Checking the third-party licence terms of any software with which the AI/automated system will interface should be a critical part of developing the business case for any AI or automation implementation project.

ERRORS IN DATA AND PERPETUATION OF MISTAKES

In the background section above, we set out how machine learning-based systems are 'trained' via a positive feedback loop.
In theory, as the system is exposed to more data, it ought to continually improve, and the accuracy of its 'decisions' should therefore increase.

As with most computer systems, however, the old adage of 'garbage in, garbage out' still applies. It is quite possible that, if an apparently highly performing system is continually exposed to poor quality data, or data which suggests incorrect decisions are in fact correct, then its accuracy – in objective terms – will gradually diminish (even while its accuracy in performance terms appears high). Any biases, inaccuracies or bad assumptions which are present in the human users (whose actions form the training data used to train the system) will be reflected in the decisions made by the trained system. Similarly, if the system is continually being fed data from different sources, and one source is continually providing incorrect feedback on the decisions taken by the machine learning system, that will impact upon the accuracy of the system.

In scenarios where a particular AI system is used to provide services to many different customers, both the platform vendor and each customer have an interest in ensuring that none of the users 'pollute' the system by inputting bad data that could diminish the accuracy of the system's output for all users. In such circumstances, it is in all parties' interests that each customer commits to ensuring the quality of any data fed into the system, and to avoiding any action which could result in the quality of the system being compromised.

DATA PROTECTION AND INFORMED CONSENT

AI and smart robots pose some obvious data protection concerns (and we will address such topics in more detail later in this series of articles).
Such concerns take on a new relevance once we take into account the substantial sanctions that may be applied under the new European General Data Protection Regulation.

The main concerns stem from the fact that any AI system by definition is based on the processing of a large volume of data. Initially, such data may not be personal data within the meaning of the Regulation, but it can become personal data (i.e. attributable to a specific person), or even sensitive data, as a result of deep pattern matching techniques and other processing that AI might perform.
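How ostensibly non-personal data can become personal data through pattern matching is easy to illustrate with a toy linkage: an 'anonymised' record set, joined to separately available identifying data on shared quasi-identifiers, re-identifies individuals. All records below are invented for the example.

```python
# An 'anonymised' usage log: no names, so it looks non-personal in isolation.
usage_log = [
    {"postcode": "EC2A", "birth_year": 1980, "diagnosis_query": "melanoma"},
    {"postcode": "SW1A", "birth_year": 1975, "diagnosis_query": "asthma"},
]

# Separately available identifying data sharing the same quasi-identifiers.
public_register = [
    {"name": "A. Smith", "postcode": "EC2A", "birth_year": 1980},
    {"name": "B. Jones", "postcode": "SW1A", "birth_year": 1975},
]

# Joining on (postcode, birth_year) attributes each query to a named person.
reidentified = [
    {**person, **record}
    for record in usage_log
    for person in public_register
    if (person["postcode"], person["birth_year"])
       == (record["postcode"], record["birth_year"])
]

for row in reidentified:
    print(row["name"], "->", row["diagnosis_query"])
```

A system performing deep pattern matching does this at far greater scale and with far weaker signals, which is why data that starts outside the Regulation's definition of personal data may not stay there.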

This may result in data being processed in a manner for which consent had not been granted, without any other relevant justification being applicable, or beyond the boundaries set out by an earlier consent. Furthermore, the AI solution may end up making its own decisions about the data management, thus changing the purposes laid out by the data controller, who should be ultimately responsible for the data processing.

In addition, depending on the complexity of the system and its ability to detect "unusual" activity, it may be harder to determine when an AI-based system is being hacked, making it more difficult to determine whether there has been a resulting data breach. All such issues will have to be carefully addressed in the design phase, when it is being decided how an AI solution will function and what technical controls can be applied, and also in any agreement between the parties involved in using that AI solution to process data.

Last but not least, and this is a rather pervasive point, it should be carefully determined between the parties who is responsible for what, if there is any dependency, particularly considering all the parties that may incur liabilities when dealing with smart robots or artificial intelligence.

ARE WE HEADING TO ARMAGEDDON?

Whether we believe we are or not, the use of RPA and AI is on an inexorable journey to transform not just sourcing contracts but our day-to-day lives. The continued use of RPA and AI signals a need to look again at many traditional contract terms through this new lens, to ensure that they continue to be relevant and enable businesses to garner the full benefit of transformational outsourcing deals and AI and RPA implementations.

With careful thought and attention to the issues, deploying AI/RPA can be transformative, competitively advantageous and deliver real business benefit.
Maybe then, AI and RPA will be one of the best things to happen after all.

If you would like to discuss any of the issues raised here, please contact your usual DLA Piper contact or email outsourcing@dlapiper.com

www.dlapiper.com

DLA Piper is a global law firm operating through various separate and distinct legal entities. Further details of these entities can be found at www.dlapiper.com.

This publication is intended as a general overview and discussion of the subjects dealt with, and does not create a lawyer-client relationship. It is not intended to be, and should not be used as, a substitute for taking legal advice in any specific situation. DLA Piper will accept no responsibility for any actions taken or not taken on the basis of this publication. This may qualify as "Lawyer Advertising" requiring notice in some jurisdictions. Prior results do not guarantee a similar outcome.

Copyright 2018 DLA Piper. All rights reserved. MAR18 3289622
