Managing algorithmic risks
Safeguarding the use of complex algorithms and machine learning

Table of contents

• Executive summary
• When algorithms go wrong
• What are algorithmic risks?
• Why are algorithmic risks gaining prominence today?
• What do algorithmic risks mean for your organization?
• What's different about managing algorithmic risks?
• How can your organization approach algorithmic risk management effectively?
• Is your organization ready to manage algorithmic risks?
• Contact the authors

Executive summary

The algorithmic revolution is here. The rise of advanced data analytics and cognitive technologies has led to an explosion in the use of algorithms across a range of purposes, industries, and business functions. Decisions that have a profound impact on individuals are being influenced by these algorithms, including what information individuals are exposed to, what jobs they're offered, whether their loan applications are approved, what medical treatment their doctors recommend, and even their treatment in the judicial system. At the same time, we're seeing a sharp rise in machine-to-machine interactions that are based on the Internet of Things (IoT) and also powered by algorithms.

What's more, dramatically increasing complexity is fundamentally turning algorithms into inscrutable black boxes of decision making. An aura of objectivity and infallibility may be ascribed to algorithms. But these black boxes are vulnerable to risks, such as accidental or intentional biases, errors, and frauds, thus raising the question of how to "trust" algorithmic systems.1

Embracing this complexity and establishing mechanisms to manage the associated risks will go a long way toward effectively harnessing the power of algorithms. And the upside is significant. Algorithms can be used to achieve desired business goals, accelerate long-term performance, and create differentiation in the marketplace. Organizations that adopt a risk-aware mind-set will have an opportunity to use algorithms to lead in the marketplace, better navigate the regulatory environment, and disrupt their industries through innovation.

When algorithms go wrong

From Silicon Valley to the industrial heartland, the use of data-driven insights powered by algorithms is skyrocketing. Growth in sensor-generated data and advancements in data analytics and cognitive technologies have been the biggest drivers of this change, enabling businesses to produce rich insights to guide strategic, operational, and financial decisions. Business spending on cognitive technologies has been growing rapidly, and it's expected to continue at a five-year compound annual growth rate of 55 percent to nearly $47 billion by 2020, paving the way for even broader use of machine learning-based algorithms.1 Going forward, these algorithms will be powering many of the IoT-based smart applications across sectors.

While such a change is transformative and impressive, cases of algorithms going wrong or being misused have also increased significantly. Some recent examples include:

• In the 2016 US elections, social media algorithms were cited for shaping and swaying public opinion by creating opinion echo chambers and failing to clamp down on fake news.
• During the 2016 Brexit referendum, algorithms were blamed for the flash crash of the British pound by 6 percent in a matter of two minutes.2
• Investigations have found that the algorithm used by criminal justice systems across the United States to predict recidivism rates is biased against certain racial classes.3
• Researchers have found erroneous statistical assumptions and bugs in functional magnetic-resonance imaging (fMRI) technology, which raised questions about the validity of many brain studies.4
• In several instances, employees have manipulated algorithms to suppress negative results of product safety and quality testing.
• Users have manipulated some artificial intelligence-powered tools to make offensive and inflammatory comments.
• According to a recent study, online ads for high-paying jobs were shown more often to men than to women.

Increasing complexity, lack of transparency around algorithm design, inappropriate use of algorithms, and weak governance are specific reasons why algorithms are subject to such risks as biases, errors, and malicious acts. These risks, in turn, make it difficult to trust algorithms' decision choices and create concerns around their accuracy.

These risks have the potential to cascade across an organization and negatively affect its reputation, revenues, business operations, and even regulatory compliance. That's why it's important for organizations to understand and proactively manage the risks presented by algorithms to fully capture the algorithms' value and drive marketplace differentiation.

Many traditional checks and balances are designed for managing "conventional risks" where algorithm-based decisions aren't significantly involved. But these checks and balances aren't sufficient for managing risks associated with today's algorithm-based decision-making systems. This is due to the complexity, unpredictability, and proprietary nature of algorithms, as well as the lack of standards in this space.

Definitions of specific technology terms

Algorithms are routine processes or sequences of instructions for analyzing data, solving problems, and performing tasks.5 Traditionally, researchers "programmed" algorithms to perform certain tasks. "Self-learning" algorithms, however, are increasingly replacing programmed algorithms.

Analytics is the use of data, statistical modeling science, and algorithms to reliably generate insights, predict outcomes, simulate scenarios, and optimize decisions.6

Cognitive technologies refer to the underlying technologies that enable artificial intelligence (AI). AI is the theory and development of computer systems that are able to perform tasks that normally require human intelligence.7

Machine learning is the ability of computer systems to improve their performance through exposure to data, without the need to follow explicitly programmed instructions. Machine learning, along with other technologies, such as computer vision and natural language processing, constitutes the field of cognitive technologies.8

Neural networks are computer models often used for implementing machine learning. They're designed to mimic aspects of the human brain's structure and function, with elements representing neurons and interconnections.9 These neural networks comprise layers of virtual neurons that recognize patterns in different kinds of input data, such as numerical data, texts, images, and sounds.

Deep learning is an advanced technique to implement machine learning, using neural networks with multiple layers. By leveraging increased computing power, deep learning has expanded machine learning capabilities through more complex representations of data and modeling.
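The distinction above between "programmed" and "self-learning" algorithms can be made concrete with a short sketch. This is an illustrative example, not taken from the paper: a hand-coded rule is contrasted with a single-neuron perceptron, the simplest building block of the neural networks defined above, which learns the same logic purely from labeled examples.

```python
# Hypothetical illustration of "programmed" vs. "self-learning" algorithms.

def programmed_rule(x1, x2):
    """A hand-coded rule: the developer fixes the logic in advance."""
    return 1 if x1 + x2 >= 2 else 0

def train_perceptron(samples, epochs=20):
    """Learn the same rule from labeled examples instead of coding it.
    Weights start at zero and are nudged whenever a prediction is wrong
    (the classic perceptron learning rule)."""
    w1 = w2 = b = 0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = label - pred          # -1, 0, or +1
            w1 += err * x1
            w2 += err * x2
            b += err
    return lambda x1, x2: 1 if w1 * x1 + w2 * x2 + b > 0 else 0

# Training data: the logical AND function
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
learned = train_perceptron(data)
print([learned(x1, x2) for (x1, x2), _ in data])  # [0, 0, 0, 1], matching the labels
```

The point of the sketch is the risk the paper describes: the learned rule's behavior comes entirely from the training data, so biased or poisoned data changes the rule without any code changing.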

What are algorithmic risks?

Algorithmic risks arise from the use of data analytics and cognitive technology-based software algorithms in various automated and semi-automated decision-making environments. Figure 1 provides a framework for understanding the different areas that are vulnerable to such risks and the underlying factors causing them.

• Input data is vulnerable to risks, such as biases in the data used for training; incomplete, outdated, or irrelevant data; insufficiently large and diverse sample size; inappropriate data collection techniques; and a mismatch between the data used for training the algorithm and the actual input data during operations.
• Algorithm design is vulnerable to risks, such as biased logic, flawed assumptions or judgments, inappropriate modeling techniques, coding errors, and identifying spurious patterns in the training data.
• Output decisions are vulnerable to risks, such as incorrect interpretation of the output, inappropriate use of the output, and disregard of the underlying assumptions.

Figure 1: Framework for understanding algorithmic risks (three vulnerable areas: input data, algorithm design, and output decisions; four underlying factors: human biases, technical flaws, usage flaws, and security flaws)

These risks can be caused by several underlying factors:

Human biases: Cognitive biases of model developers or users can result in flawed output. In addition, lack of governance and misalignment between the organization's values and individual employees' behavior can yield unintended outcomes.

Example: Developers provide biased historical data to train an image recognition algorithm, resulting in the algorithm being unable to correctly recognize minorities.

Technical flaws: Lack of technical rigor or conceptual soundness in the development, training, testing, or validation of the algorithm can lead to an incorrect output.

Example: Bugs in trading algorithms drive erratic trading of shares and sudden fluctuations in prices, resulting in millions of dollars in losses in a matter of minutes.

Usage flaws: Flaws in the implementation of an algorithm, its integration with operations, or its use by end users can lead to inappropriate decision making.

Example: Drivers over-rely on driver assistance features in modern cars, believing them to be capable of completely autonomous operation, which can result in traffic accidents.

Security flaws: Internal or external threat actors can gain access to input data, algorithm design, or its output and manipulate them to introduce deliberately flawed outcomes.

Example: By intentionally feeding incorrect data into a self-learning facial recognition algorithm, attackers are able to impersonate victims via biometric authentication systems.
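One of the input-data risks listed above, a mismatch between training data and live operational data, lends itself to a simple automated control. The sketch below is hypothetical, not from the paper; the function name, example data, and three-sigma threshold are all illustrative assumptions. It flags a feature whose live values have drifted too far from what the algorithm was trained on.

```python
# Hypothetical input-data control: flag training/serving mismatch
# before trusting the algorithm's output.

from statistics import mean, stdev

def drift_alert(training_values, live_values, threshold=3.0):
    """Return True when the live mean drifts more than `threshold`
    training standard deviations from the training mean."""
    mu, sigma = mean(training_values), stdev(training_values)
    shift = abs(mean(live_values) - mu) / sigma
    return shift > threshold

# Training data for, say, loan amounts in thousands (illustrative numbers)
train = [10, 12, 11, 13, 12, 11, 10, 12]

print(drift_alert(train, [11, 12, 10, 13]))  # False: live data resembles training data
print(drift_alert(train, [55, 60, 52, 58]))  # True: mismatch, output needs review
```

In practice a control like this would run per feature and feed the monitoring processes described later in this paper; the design choice is simply to make the "mismatch" risk measurable rather than anecdotal.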

Why are algorithmic risks gaining prominence today?

While algorithms have been in use for many years, the need to critically evaluate them for biases, lack of technical rigor, usage flaws, and security vulnerabilities has grown significantly in recent times. This growing prominence of algorithmic risks can be attributed to the following factors:

Algorithms are becoming pervasive

With the increasing adoption of advanced data analytics and machine learning technology, algorithm use is becoming more prevalent and integral to business processes across industries and functions. It's also becoming a source of competitive advantage. One study predicts that 47 percent of jobs will be automated by 2033.10 Figure 2 highlights some prominent business use cases of algorithms.

These use cases are expected to significantly expand in the near future, given the tremendous growth in IoT-enabled systems. These systems can lead to the development and proliferation of new algorithms for connecting IoT devices and enabling smart applications.

Figure 2: Algorithm use across business functions

• Automate production and other operational processes
• Advise on investment decisions
• Predict quality issues and failures
• Develop, analyze, and execute contracts
• Monitor flow across supply chain
• Enable predictive asset maintenance
• Execute automated trades and deals
• Generate automated reports
• Develop targeted marketing campaigns
• Measure effectiveness of marketing campaigns
• Monitor social media for consumer insights
• Calculate discounts based on customer data
• Automate testing of systems
• Monitor cyber threats
• Automate system maintenance
• Support cyber incident response
• Source, recruit, and hire talent
• Manage performance of employees
• Increase employee engagement and retention
• Support workforce planning
• Identify, prioritize, and monitor risks
• Spot fraud and conduct investigations
• Analyze business ecosystems
• Enforce regulatory compliance

Machine learning techniques are evolving

Improvements in computational power coupled with the availability of large volumes of training data (data used to train algorithms) are driving advancements in machine learning. Neural networks are becoming an increasingly popular way of implementing machine learning. Techniques such as deep learning are being used for tasks like computer vision and speech recognition.

These advances in machine learning techniques are enabling the creation of algorithms that have better predictive capabilities but are significantly more complex.

Algorithms are becoming more powerful

Not only are algorithms becoming more pervasive, but the power and responsibility entrusted to them is increasing as well. Due to advancements in deep learning techniques, algorithms are becoming better at prediction and making complex decisions. Today, algorithms are being used to help make many important decisions, such as detecting crime and assigning punishment, deciding investment of millions of dollars, and saving the lives of patients.

Algorithms are becoming more opaque

Algorithms run in the background and often function as black boxes. With their internal workings and functioning largely hidden from developers and end users, monitoring algorithms can be difficult. Many new machine learning techniques, such as deep learning, are so opaque that it's practically impossible to understand what they deduce from training data and how they reach their conclusions, thus making it hard to judge their correctness. This difficulty in understanding algorithmic decisions, coupled with the unpredictability and continuously evolving nature of algorithms, makes inspecting them a challenge.

Algorithms are becoming targets of hacking

Machine learning algorithms are exhibiting vulnerabilities that can be exploited by hackers. A common vulnerability is the data used to train algorithms. Manipulating that training data as it's presented to the algorithms results in skewed algorithms that produce erroneous output, which in turn leads to unintended actions and decisions. In addition, attackers are also tampering with the actual live data that the algorithms are applied to. A recent report revealed that cyber criminals are making close to $5 million per day by tricking ad purchasing algorithms with fraudulent ad click data, which is generated by bots rather than humans.11
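The training-data manipulation described above can be illustrated with a deliberately tiny model. The sketch below is hypothetical and not from the paper; the nearest-centroid "fraud detector," its scores, and the injected points are all illustrative assumptions. It shows how a handful of mislabeled examples added to a training set silently moves a model's decision boundary.

```python
# Hypothetical illustration of training-data poisoning: injecting
# mislabeled points flips a nearest-centroid classifier's decision.

def centroid(points):
    """Mean of a list of one-dimensional scores."""
    return sum(points) / len(points)

def classify(x, legit_scores, fraud_scores):
    """Label x by whichever class centroid it lies closer to."""
    if abs(x - centroid(fraud_scores)) < abs(x - centroid(legit_scores)):
        return "fraud"
    return "legit"

legit = [1.0, 2.0, 1.5, 2.5]   # training scores of legitimate transactions
fraud = [8.0, 9.0, 8.5, 9.5]   # training scores of fraudulent transactions

print(classify(6.0, legit, fraud))           # "fraud": 6.0 is nearer the fraud centroid

# An attacker injects fraudulent examples mislabeled as legitimate,
# dragging the "legit" centroid toward fraudulent behavior.
poisoned_legit = legit + [8.0, 9.0, 8.5, 9.5]
print(classify(6.0, poisoned_legit, fraud))  # "legit": the boundary has shifted
```

Real attacks target far more complex models, but the mechanism is the same: no code was changed, yet the classifier now waves suspicious activity through.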

What do algorithmic risks mean for your organization?

As noted previously, data analytics and cognitive technology-based algorithms are increasingly becoming integral to many business processes, and organizations are investing heavily in them. But if the issues highlighted in this report aren't adequately managed, the investments may not yield the anticipated benefits. Worse yet, they may subject organizations to unanticipated risks. The immediate fallouts of these algorithmic risks can include inappropriate and potentially illegal decisions relating to:

• Finance, such as inaccurate financial reporting resulting in regulatory penalties and shareholder backlash, as well as taking on unanticipated market risks beyond the organization's risk appetite.
• Sales and marketing, such as discrimination against certain groups of customers in product pricing, product offerings, and ratings.
• Operations, such as credit offers, access to health care and education, and product safety and quality.
• Risk management, such as not detecting significant risks.
• Information technology, such as inadequate business continuity planning and undetected cyber threats.
• Human resources, such as discrimination in hiring and performance management practices.

Algorithms operate at faster speeds in fully automated environments, and they become increasingly volatile as they interact with other algorithms or social media platforms. Therefore, algorithmic risks can quickly get out of hand. Financial markets have already experienced significant instability because of algorithms. The most high-profile instance was the flash crash of 2010, which sent the Dow Jones Industrial Average on a 1,000-point slide.12

Algorithmic risks can also carry broader and long-term implications for an organization, such as:

• Reputation risks: The use of algorithms can significantly increase an organization's exposure to reputation risks. This is particularly true if the various stakeholders believe that the workings of the algorithm aren't aligned to the ethics and values of the organization, or if the algorithms are designed to covertly manipulate consumers, regulators, or employees.
• Financial risks: Errors or vulnerabilities in algorithms, especially those used for financial and strategic decision making, can result in significant revenue loss for organizations and negatively impact the integrity of their financial reporting.
• Operational risks: As algorithms are used to automate supply chain and other operational areas, errors can result in significant operational disruptions.
• Regulatory risks: Algorithms making decisions that violate the law, circumvent existing rules and regulations, or discriminate against certain groups of people can expose organizations to regulatory and legal actions.
• Technology risks: The wide-scale use of advanced algorithms can open up new points of vulnerability for IT infrastructure.
• Strategic risks: With algorithms being used increasingly as sources for strategic decision making, errors or vulnerabilities within them can put an organization at a competitive disadvantage.

Given the potential for such long-term negative implications, it's imperative that algorithmic risks be appropriately and effectively managed. Only then can an organization harness the power of algorithms to expand its value proposition and bring added efficiency and effectiveness to the development and delivery of products and services in the marketplace. By embracing algorithmic technology's complexities and proactively managing their risks, organizations can leverage this technology to accelerate corporate performance.

What's different about managing algorithmic risks?

With the growing urgency of algorithmic risk management, it's important to note that conventional risk management approaches may not be effective for that purpose. Instead, organizations should rethink and reengineer some of their existing risk management processes due to the inherent nature of algorithms and how they're used within organizations. For example, algorithmic risk management can't be a periodic point-in-time exercise. It requires continuous monitoring of algorithms, perhaps through the use of other algorithms. Three factors differentiate algorithmic risk management from traditional risk management:

Algorithms are proprietary

Algorithms are typically based on proprietary data, models, and techniques. They're considered trade secrets and sources of competitive advantage. As a result, organizations are typically unwilling to share data, source code, or the internal workings of their algorithms. This makes it difficult for regulatory agencies and outside watchdog groups to monitor them.

Algorithms are complex, unpredictable, and difficult to explain

Even if organizations were to share their algorithm codes, understanding them may be difficult because of their inherent complexity. Many of today's algorithms are based on machine learning and other advanced technologies. They evolve over time based on input data. In many cases, even the teams that develop them might not be able to predict or explain their behaviors. Machine learning algorithms can even develop their own languages to communicate with each other. This is an area with both tremendous potential and risk, given the anticipated growth in IoT and machine-to-machine communications.

There's a lack of standards and regulations

In financial services, model validation has become very important over the past few years, and there are widely accepted standards such as SR 11-7: Guidance on Model Risk Management. But these standards have limitations when applied to complex machine learning techniques such as deep learning. Currently, no widely accepted cross-industry standards exist to govern many types of machine learning algorithms, including processes around data collection, training, and algorithm design. As a result, there's a lack of consistent business controls for development, implementation, and use of algorithms. Developers frequently use their experience and theoretical knowledge to make these decisions without management oversight, leading to variations in processes and the increased likelihood of errors.

Also, regulations in this space are still evolving and apply to only a limited set of algorithms, such as those relating to capital management and stress testing in the banking sector. While there have been some attempts to broadly regulate the use of algorithms (especially in Europe), there's still a lack of clarity about, and many unanswered questions around, how these regulations will be implemented. This lack of standards and regulations makes it difficult to drive accountability and fairness in the use of algorithms.

How can your organization approach algorithmic risk management effectively?

To effectively manage algorithmic risks, there's a need to modernize traditional risk management frameworks. Organizations should develop and adopt new approaches that are built on strong foundations of enterprise risk management and aligned with leading practices and regulatory requirements. Figure 3 depicts such an approach and its specific elements:

• Strategy and governance: Create an algorithmic risk management strategy and governance structure to manage technical and cultural risks. This should include principles, policies, and standards; roles and responsibilities; control processes and procedures; and appropriate personnel selection and training. Providing transparency and processes to handle inquiries can also help organizations use algorithms responsibly.
• Design, development, deployment, and use: Develop processes and approaches aligned with the governance structure to address the algorithm life cycle from data selection, to algorithm design, to integration, to actual live use in production.
• Monitoring and testing: Establish processes for assessing and overseeing algorithm data inputs, workings, and outputs, leveraging state-of-the-art tools as they become available. Seek objective reviews of algorithms by internal and external parties.

Figure 3: A framework for algorithmic risk management, built on a foundation of enterprise risk management and comprising three pillars:

• Strategy and governance: goals and strategy; principles, policies, standards, and guidelines; accountability and responsibilities; regulatory compliance; hiring and training of personnel; disclosure to users and stakeholders; inquiry and complaint procedures; inventory and risk classifications.
• Design, development, deployment, and use: algorithm design process; life cycle and change management; data assessment; assumptions and limitations; embedding security and operations controls; deployment process; algorithm use.
• Monitoring and testing: algorithm testing; output logging and analysis; sensitivity analysis; ongoing monitoring; continuous improvement; independent validation.
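As a concrete illustration of the "output logging and analysis" and "ongoing monitoring" elements of Figure 3, the sketch below wraps a production algorithm so that every decision is logged and out-of-range outputs are flagged for review. This is a hypothetical example, not from the paper; the wrapper, the range bounds, and the stand-in pricing function are all illustrative assumptions.

```python
# Hypothetical monitoring wrapper: log every algorithmic decision and
# flag outputs outside an agreed range for human review.

import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("algo-monitor")

def monitored(algorithm, lower, upper):
    """Wrap `algorithm` so each input/output pair is logged and any
    output outside [lower, upper] raises a warning for reviewers."""
    def wrapper(x):
        y = algorithm(x)
        log.info("input=%r output=%r", x, y)
        if not lower <= y <= upper:
            log.warning("output %r outside expected range [%r, %r]", y, lower, upper)
        return y
    return wrapper

# A stand-in "discount algorithm"; business policy expects 0-30 percent
price_discount = monitored(lambda spend: spend * 0.01, lower=0, upper=30)

price_discount(1200)   # 12 percent: logged, within range
price_discount(9000)   # 90 percent: logged and flagged for review
```

The design choice this illustrates is the paper's point that algorithmic risk management must be continuous: the control runs on every decision in production, not as a periodic audit.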

Is your organization ready to manage algorithmic risks?

A good starting point for implementing an algorithmic risk management framework is to ask important questions about the preparedness of your organization to manage algorithmic risks. For example:

• Does your organization have a good handle on where algorithms are deployed?
• Have you evaluated the potential impact should those algorithms function improperly?
• Does senior management within your organization understand the need to manage algorithmic risks?
• Do you have a clearly established governance structure for overseeing the risks emanating from algorithms?
• Do you have a program in place to manage these risks? If so, are you continuously enhancing the program over time as technologies and requirements evolve?

The rapid proliferation of powerful algorithms in many facets of business is in full swing and is likely to grow unabated for years to come. The use of intelligent algorithms offers a wide range of potential benefits to organizations, from innovative products to improved customer experience, to strategic planning, to operational efficiency, and even to risk management. Yet as this paper has discussed, some of those benefits could be diminished by inherent risks associated with the design, implementation, and use of algorithms, risks that are also likely to increase unless organizations invest effectively in algorithmic risk management capabilities.

But it's not a journey that organizations need to take alone. The growing awareness of algorithmic risks among researchers, consumer advocacy groups, lawmakers, regulators, and other stakeholders should contribute to a growing body of knowledge about algorithmic risks and, over time, risk management standards. In the meantime, it's important for organizations to evaluate their use of algorithms in high-risk and high-impact situations and implement leading practices to manage those risks intelligently so algorithms can be harnessed for competitive advantage. Managing algorithmic complexity can be an opportunity to lead, navigate, and disrupt in your industry.

Are you ready? Contact us to discuss how the ideas presented in this paper apply to your organization, and how you can begin to open the algorithmic black box and manage the risks hidden within.

Contact the authors

Dilip Krishna
Managing Director
Deloitte Risk and Financial Advisory
Deloitte & Touche LLP
dkrishna@deloitte.com
+1 212 436 7939

Dilip is a managing director with the Deloitte Risk and Financial Advisory practice at Deloitte & Touche LLP. He focuses on risk analytics and information. Dilip works primarily with financial services clients on data analytics problems, such as stress testing; capital management; risk data aggregation; and compliance, including CCAR, Basel, and EPS.

Nancy Albinson
Managing Director
Deloitte Risk and Financial Advisory
Deloitte & Touche LLP
nalbinson@deloitte.com
+1 973 602 4523

Nancy is a managing director at Deloitte & Touche LLP and leads Deloitte Risk and Financial Advisory's Innovation Program. She guides the business in establishing innovation strategies, identifying emerging client needs, overseeing a portfolio of strategic investments from validation to commercialization, and building a culture of innovation.

Yang Chu
Senior Manager
Deloitte Risk and Financial Advisory
Deloitte & Touche LLP
yangchu@deloitte.com
+1 415 783 4060

Yang, a senior manager at Deloitte & Touche LLP, is Deloitte Risk and Financial Advisory's innovation futurist. She's a specialist in strategic, financial, operational, technological, and regulatory risk. Yang focuses on exploring emerging trends for opportunities and threats for clients and for Deloitte.

Contributors

Joseph Burdis, Manager, Deloitte Risk and Financial Advisory, Deloitte & Touche LLP, jburdis@deloitte.com
Ed Hida, Partner, Deloitte Risk and Financial Advisory, Deloitte & Touche LLP, ehida@deloitte.com
Ryan Hittner, Senior Manager, Deloitte Risk and Financial Advisory, Deloitte & Touche LLP, rhittner@deloitte.com
Priyanka Priyadarshini, Manager, Deloitte Risk and Financial Advisory, Deloitte & Touche LLP, ppriyadarshini@deloitte.com
Martin Rogulja, Manager, Deloitte Risk and Financial Advisory, Deloitte & Touche LLP, mrogulja@deloitte.com
Irfan Saif, Principal, Deloitte Risk and Financial Advisory, Deloitte & Touche LLP, isaif@deloitte.com
Tanmay Tapase, Senior Consultant, Deloitte Risk and Financial Advisory, Deloitte & Touche LLP, ttapase@deloitte.com

Endnotes

1. "Worldwide Cognitive Systems and Artificial Intelligence Revenues Forecast to Surge Past $47 Billion in 2020, According to New IDC Spending Guide," press release, IDC Research, Inc., October 26, 2016, http://www.idc.com/getdoc.jsp?containerId=prUS41878616.
2. Netty Idayu Ismail and Lukanyo Mnyanda, "Flash Crash of the Pound Baffles Traders With Algorithms Being Blamed," Bloomberg, December 7,
3. Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner, "Machine Bias," ProPublica, May 23
