

Prediction, Judgment, and Complexity: A Theory of Decision-Making and Artificial Intelligence

The Economics of Artificial Intelligence

National Bureau of Economic Research Conference Report

The Economics of Artificial Intelligence: An Agenda
Edited by Ajay Agrawal, Joshua Gans, and Avi Goldfarb
The University of Chicago Press
Chicago and London

The University of Chicago Press, Chicago 60637
The University of Chicago Press, Ltd., London
© 2019 by the National Bureau of Economic Research, Inc.
All rights reserved. No part of this book may be used or reproduced in any manner whatsoever without written permission, except in the case of brief quotations in critical articles and reviews. For more information, contact the University of Chicago Press, 1427 E. 60th St., Chicago, IL 60637.
Published 2019
Printed in the United States of America
28 27 26 25 24 23 22 21 20 19    1 2 3 4 5
ISBN-13: 978-0-226-61333-8 (cloth)
ISBN-13: 978-0-226-61347-5 (e-book)
DOI: 0001

Library of Congress Cataloging-in-Publication Data
Names: Agrawal, Ajay, editor. | Gans, Joshua, 1968– editor. | Goldfarb, Avi, editor.
Title: The economics of artificial intelligence : an agenda / Ajay Agrawal, Joshua Gans, and Avi Goldfarb, editors.
Other titles: National Bureau of Economic Research conference report.
Description: Chicago ; London : The University of Chicago Press, 2019. | Series: National Bureau of Economic Research conference report | Includes bibliographical references and index.
Identifiers: LCCN 2018037552 | ISBN 9780226613338 (cloth : alk. paper) | ISBN 9780226613475 (ebook)
Subjects: LCSH: Artificial intelligence—Economic aspects.
Classification: LCC TA347.A78 E365 2019 | DDC 338.4/70063—dc23
LC record available at https://lccn.loc.gov/2018037552

This paper meets the requirements of ANSI/NISO Z39.48-1992 (Permanence of Paper).
DOI: 10.7208/chicago/9780226613475.003.0003

National Bureau of Economic Research

Officers
Karen N. Horn, chair
John Lipsky, vice chair
James M. Poterba, president and chief executive officer
Robert Mednick, treasurer
Kelly Horak, controller and assistant corporate secretary
Alterra Milone, corporate secretary
Denis Healy, assistant corporate secretary

Directors at Large
Peter C. Aldrich, Elizabeth E. Bailey, John H. Biggs, John S. Clarkeson, Kathleen B. Cooper, Charles H. Dallara, George C. Eads, Jessica P. Einhorn, Mohamed El-Erian, Diana Farrell, Jacob A. Frenkel, Robert S. Hamada, Peter Blair Henry, Karen N. Horn, Lisa Jordan, John Lipsky, Laurence H. Meyer, Karen Mills, Michael H. Moskow, Alicia H. Munnell, Robert T. Parry, James M. Poterba, John S. Reed, Marina v. N. Whitman, Martin B. Zimmerman

Directors by University Appointment
Timothy Bresnahan, Stanford; Pierre-André Chiappori, Columbia; Alan V. Deardorff, Michigan; Edward Foster, Minnesota; John P. Gould, Chicago; Mark Grinblatt, California, Los Angeles; Bruce Hansen, Wisconsin–Madison; Benjamin Hermalin, California, Berkeley; Samuel Kortum, Yale; George Mailath, Pennsylvania; Marjorie B. McElroy, Duke; Joel Mokyr, Northwestern; Cecilia Rouse, Princeton; Richard L. Schmalensee, Massachusetts Institute of Technology; Ingo Walter, New York; David B. Yoffie, Harvard

Directors by Appointment of Other Organizations
Jean-Paul Chavas, Agricultural and Applied Economics Association; Martin J. Gruber, American Finance Association; Philip Hoffman, Economic History Association; Arthur Kennickell, American Statistical Association; Jack Kleinhenz, National Association for Business Economics; Robert Mednick, American Institute of Certified Public Accountants; Peter L. Rousseau, American Economic Association; Gregor W. Smith, Canadian Economics Association; William Spriggs, American Federation of Labor and Congress of Industrial Organizations; Bart van Ark, The Conference Board

Directors Emeriti
George Akerlof, Jagdish Bhagwati, Don R. Conlan, Ray C. Fair, Franklin Fisher, Saul H. Hymans, Rudolph A. Oswald, Andrew Postlewaite, John J. Siegfried, Craig Swan

Relation of the Directors to the Work and Publications of the National Bureau of Economic Research

1. The object of the NBER is to ascertain and present to the economics profession, and to the public more generally, important economic facts and their interpretation in a scientific manner without policy recommendations. The Board of Directors is charged with the responsibility of ensuring that the work of the NBER is carried on in strict conformity with this object.

2. The President shall establish an internal review process to ensure that book manuscripts proposed for publication DO NOT contain policy recommendations. This shall apply both to the proceedings of conferences and to manuscripts by a single author or by one or more co-authors but shall not apply to authors of comments at NBER conferences who are not NBER affiliates.

3. No book manuscript reporting research shall be published by the NBER until the President has sent to each member of the Board a notice that a manuscript is recommended for publication and that in the President's opinion it is suitable for publication in accordance with the above principles of the NBER. Such notification will include a table of contents and an abstract or summary of the manuscript's content, a list of contributors if applicable, and a response form for use by Directors who desire a copy of the manuscript for review. Each manuscript shall contain a summary drawing attention to the nature and treatment of the problem studied and the main conclusions reached.

4. No volume shall be published until forty-five days have elapsed from the above notification of intention to publish it. During this period a copy shall be sent to any Director requesting it, and if any Director objects to publication on the grounds that the manuscript contains policy recommendations, the objection will be presented to the author(s) or editor(s).
In case of dispute, all members of the Board shall be notified, and the President shall appoint an ad hoc committee of the Board to decide the matter; thirty days additional shall be granted for this purpose.

5. The President shall present annually to the Board a report describing the internal manuscript review process, any objections made by Directors before publication or by anyone after publication, any disputes about such matters, and how they were handled.

6. Publications of the NBER issued for informational purposes concerning the work of the Bureau, or issued to inform the public of the activities at the Bureau, including but not limited to the NBER Digest and Reporter, shall be consistent with the object stated in paragraph 1. They shall contain a specific disclaimer noting that they have not passed through the review procedures required in this resolution. The Executive Committee of the Board is charged with the review of all such publications from time to time.

7. NBER working papers and manuscripts distributed on the Bureau's web site are not deemed to be publications for the purpose of this resolution, but they shall be consistent with the object stated in paragraph 1. Working papers shall contain a specific disclaimer noting that they have not passed through the review procedures required in this resolution. The NBER's web site shall contain a similar disclaimer. The President shall establish an internal review process to ensure that the working papers and the web site do not contain policy recommendations, and shall report annually to the Board on this process and any concerns raised in connection with it.

8. Unless otherwise determined by the Board or exempted by the terms of paragraphs 6 and 7, a copy of this resolution shall be printed in each NBER publication as described in paragraph 2 above.

Contents

Acknowledgments xi

Introduction 1
Ajay Agrawal, Joshua Gans, and Avi Goldfarb

I. AI as a GPT

1. Artificial Intelligence and the Modern Productivity Paradox: A Clash of Expectations and Statistics 23
Erik Brynjolfsson, Daniel Rock, and Chad Syverson
Comment: Rebecca Henderson

2. The Technological Elements of Artificial Intelligence 61
Matt Taddy

3. Prediction, Judgment, and Complexity: A Theory of Decision-Making and Artificial Intelligence 89
Ajay Agrawal, Joshua Gans, and Avi Goldfarb
Comment: Andrea Prat

4. The Impact of Artificial Intelligence on Innovation: An Exploratory Analysis 115
Iain M. Cockburn, Rebecca Henderson, and Scott Stern
Comment: Matthew Mitchell

5. Finding Needles in Haystacks: Artificial Intelligence and Recombinant Growth 149
Ajay Agrawal, John McHale, and Alexander Oettl

6. Artificial Intelligence as the Next GPT: A Political-Economy Perspective 175
Manuel Trajtenberg

II. Growth, Jobs, and Inequality

7. Artificial Intelligence, Income, Employment, and Meaning 189
Betsey Stevenson

8. Artificial Intelligence, Automation, and Work 197
Daron Acemoglu and Pascual Restrepo

9. Artificial Intelligence and Economic Growth 237
Philippe Aghion, Benjamin F. Jones, and Charles I. Jones
Comment: Patrick Francois

10. Artificial Intelligence and Jobs: The Role of Demand 291
James Bessen

11. Public Policy in an AI Economy 309
Austan Goolsbee

12. Should We Be Reassured If Automation in the Future Looks Like Automation in the Past? 317
Jason Furman

13. R&D, Structural Transformation, and the Distribution of Income 329
Jeffrey D. Sachs

14. Artificial Intelligence and Its Implications for Income Distribution and Unemployment 349
Anton Korinek and Joseph E. Stiglitz

15. Neglected Open Questions in the Economics of Artificial Intelligence 391
Tyler Cowen

III. Machine Learning and Regulation

16. Artificial Intelligence, Economics, and Industrial Organization 399
Hal Varian
Comment: Judith Chevalier

17. Privacy, Algorithms, and Artificial Intelligence 423
Catherine Tucker

18. Artificial Intelligence and Consumer Privacy 439
Ginger Zhe Jin

19. Artificial Intelligence and International Trade 463
Avi Goldfarb and Daniel Trefler

20. Punishing Robots: Issues in the Economics of Tort Liability and Innovation in Artificial Intelligence 493
Alberto Galasso and Hong Luo

IV. Machine Learning and Economics

21. The Impact of Machine Learning on Economics 507
Susan Athey
Comment: Mara Lederman

22. Artificial Intelligence, Labor, Productivity, and the Need for Firm-Level Data 553
Manav Raj and Robert Seamans

23. How Artificial Intelligence and Machine Learning Can Impact Market Design 567
Paul R. Milgrom and Steven Tadelis

24. Artificial Intelligence and Behavioral Economics 587
Colin F. Camerer
Comment: Daniel Kahneman

Contributors 611
Author Index 615
Subject Index 625

Acknowledgments

This volume contains chapters and ideas discussed at the first NBER Conference on the Economics of Artificial Intelligence, held in September 2017 in Toronto. We thank all the authors and discussants for their contributions. Funds for the conference and book project were provided by the Sloan Foundation, the Canadian Institute for Advanced Research, and the Creative Destruction Lab at the University of Toronto. At the Sloan Foundation, Danny Goroff provided guidance that improved the overall agenda. The NBER digitization initiative, under the leadership of Shane Greenstein, was a key early supporter. We thank our dean, Tiff Macklem. In addition, Jim Poterba at the NBER has been generous, giving us the flexibility needed to bring this project together. Special thanks are due to Rob Shannon, Denis Healy, Carl Beck, and Dawn Bloomfield for managing the conference and logistics and to Helena Fitz-Patrick for guiding the book through the editorial process. Finally we thank our families, Gina, Natalie, Rachel, Amelia, Andreas, Belanna, Ariel, Annika, Anna, Sam, and Ben.

3. Prediction, Judgment, and Complexity: A Theory of Decision-Making and Artificial Intelligence

Ajay Agrawal, Joshua Gans, and Avi Goldfarb

Ajay Agrawal is the Peter Munk Professor of Entrepreneurship at the Rotman School of Management, University of Toronto, and a research associate of the National Bureau of Economic Research. Joshua Gans is professor of strategic management and holder of the Jeffrey S. Skoll Chair of Technical Innovation and Entrepreneurship at the Rotman School of Management, University of Toronto (with a cross appointment in the Department of Economics), and a research associate of the National Bureau of Economic Research. Avi Goldfarb holds the Rotman Chair in Artificial Intelligence and Healthcare and is professor of marketing at the Rotman School of Management, University of Toronto, and is a research associate of the National Bureau of Economic Research.

Our thanks to Andrea Prat, Scott Stern, Hal Varian, and participants at the AEA (Chicago), NBER Summer Institute (2017), NBER Economics of AI Conference (Toronto), Columbia Law School, Harvard Business School, MIT, and University of Toronto for helpful comments. Responsibility for all errors remains our own. The latest version of this chapter is available at joshuagans.com. For acknowledgments, sources of research support, and disclosure of the authors' material financial relationships, if any, please see http://www.nber.org/chapters/c14010.ack.

3.1 Introduction

There is widespread discussion regarding the impact of machines on employment (see Autor 2015). In some sense, the discussion mirrors a long-standing literature on the impact of the accumulation of capital equipment on employment; specifically, whether capital and labor are substitutes or complements (Acemoglu 2003). But the recent discussion is motivated by the integration of software with hardware and whether the role of machines goes beyond physical tasks to mental ones as well (Brynjolfsson and McAfee 2014). As mental tasks were seen as always being present and essential, human comparative advantage in these was seen as the main reason why, at least in the long term, capital accumulation would complement employment by enhancing labor productivity in those tasks.

The computer revolution has blurred the line between physical and mental tasks. For instance, the invention of the spreadsheet in the late 1970s fundamentally changed the role of bookkeepers. Prior to that invention, there was a time-intensive task involving the recomputation of outcomes in spreadsheets as data or assumptions changed. That human task was substituted by the spreadsheet software that could produce the calculations more quickly, cheaply, and frequently. However, at the same time, the spreadsheet made the jobs of accountants, analysts, and others far more productive. In the accounting books, capital was substituting for labor, but the mental productivity of labor was being changed. Thus, the impact on employment critically depended on whether there were tasks the "computers cannot do."

These assumptions persist in models today. Acemoglu and Restrepo (2017) observe that capital substitutes for labor in certain tasks while at the same time technological progress creates new tasks. They make what they call a "natural assumption" that only labor can perform the new tasks as they are more complex than previous ones.1 Benzell et al. (2015) consider the impact of software more explicitly. Their environment has two types of labor—high-tech (who can, among other things, code) and low-tech (who are empathetic and can handle interpersonal tasks). In this environment, it is the low-tech workers who cannot be replaced by machines while the high-tech ones are employed initially to create the code that will eventually displace their kind. The results of the model depend, therefore, on a class of worker who cannot be substituted directly for capital, but also on the inability of workers themselves to substitute between classes.

In this chapter, our approach is to delve into the weeds of what is happening currently in the field of artificial intelligence (AI). The recent wave of developments in AI all involve advances in machine learning.
Those advances allow for automated and cheap prediction; that is, providing a forecast (or nowcast) of a variable of interest from available data (Agrawal, Gans, and Goldfarb 2018b). In some cases, prediction has enabled full automation of tasks—for example, self-driving vehicles, where the process of data collection, prediction of behavior and surroundings, and actions are all conducted without a human in the loop. In other cases, prediction is a standalone tool—such as image recognition or fraud detection—that may or may not lead to further substitution of human users of such tools by machines. Thus far, substitution between humans and machines has focused mainly on cost considerations. Are machines cheaper, more reliable, and more scalable (in their software form) than humans? This chapter, however, considers the role of prediction in decision-making explicitly and from that examines the complementary skills that may be matched with prediction within a task.

1. To be sure, their model is designed to examine how automation of tasks causes a change in factor prices that biases innovation toward the creation of new tasks that labor is more suited to.

Our focus, in this regard, is on what we term judgment. While judgment is a term with broad meaning, here we use it to refer to a very specific skill. To see this, consider a decision. That decision involves choosing an action, x, from a set, X. The payoff (or reward) from that action is defined by a function, u(x, θ), where θ is a realization of an uncertain state drawn from a distribution, F(θ). Suppose that, prior to making a decision, a prediction (or signal), s, can be generated that results in a posterior, F(θ | s). Thus, the decision maker would solve

max_{x∈X} ∫ u(x, θ) dF(θ | s).

In other words, a standard problem of choice under uncertainty. In this standard world, the role of prediction is to improve decision-making. The payoff, or utility function, is known.

To create a role for judgment, we depart from this standard setup in statistical decision theory and ask how a decision maker comes to know the function, u(x, θ). We assume that this is not simply given or a primitive of the decision-making model. Instead, it requires a human to undertake a costly process that allows the mapping from (x, θ) to a particular payoff value, u, to be discovered. This is a reasonable assumption given that, beyond some rudimentary experimentation in closed environments, there is no current way for an AI to impute a utility function that resides with humans. Additionally, this process separates the costs of providing the mapping for each pair, (x, θ). (Actually, we focus, without loss of generality, on situations where u(x, θ) ≠ u(x) for all θ, and presume that if a payoff to an action is state independent, that payoff is known.) In other words, while prediction can obtain a signal of the underlying state, judgment is the process by which the payoffs from actions that arise based on that state can be determined.
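As a minimal numerical sketch of the choice problem above (not from the chapter itself): with finitely many states, the integral over dF(θ | s) reduces to a probability-weighted sum, and the decision maker picks the action with the highest expected payoff. All states, actions, and payoff numbers below are hypothetical.

```python
# Sketch of max_{x in X} E[u(x, theta) | s] with two states.
# The payoff mapping u(x, theta) is exactly what the chapter argues must be
# discovered through costly human judgment; here it is simply assumed.

states = ("theta1", "theta2")

u = {  # hypothetical payoffs u(x, theta)
    ("x1", "theta1"): 4.0, ("x1", "theta2"): -3.0,  # state-dependent action
    ("x2", "theta1"): 1.0, ("x2", "theta2"): 1.0,   # state-independent action
}

def optimal_action(posterior):
    """posterior maps each state to F(theta | s); return the argmax action."""
    actions = sorted({x for (x, _) in u})
    def expected_u(x):
        return sum(posterior[th] * u[(x, th)] for th in states)
    return max(actions, key=expected_u)

# A confident prediction favors the state-dependent action; a vague one
# favors the state-independent action.
print(optimal_action({"theta1": 0.9, "theta2": 0.1}))  # x1
print(optimal_action({"theta1": 0.5, "theta2": 0.5}))  # x2
```

The point of the sketch is that better prediction (a posterior closer to 0 or 1) changes which action is optimal only because the payoff table is known; without judgment supplying u, the posterior alone decides nothing.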
We assume that this process of determining payoffs requires human understanding of the situation: it is not a prediction problem.

For intuition on the difference between prediction and judgment, consider the example of credit card fraud. A bank observes a credit card transaction. That transaction is either legitimate or fraudulent. The decision is whether to approve the transaction. If the bank knows for sure that the transaction is legitimate, the bank will approve it. If the bank knows for sure that it is fraudulent, the bank will refuse the transaction. Why? Because the bank knows the payoff of approving a legitimate transaction is higher than the payoff of refusing that transaction. Things get more interesting if the bank is uncertain about whether the transaction is legitimate. The uncertainty means that the bank also needs to know the payoff from refusing a legitimate transaction and from approving a fraudulent transaction. In our model, judgment is the process of determining these payoffs. It is a costly activity, in the sense that it requires time and effort.

As the new developments regarding AI all involve making prediction more readily available, we ask, how does judgment and its endogenous application change the value of prediction? Are prediction and judgment substitutes or complements? Does the value of prediction change monotonically with the difficulty of applying judgment? In complex environments (as they relate to automation, contracting, and the boundaries of the firm), how do improvements in prediction affect the value of judgment?

We proceed by first providing supportive evidence for our assumption that recent developments in AI overwhelmingly impact the costs of prediction. We then use the example of radiology to provide a context for understanding the different roles of prediction and judgment. Drawing inspiration from Bolton and Faure-Grimaud (2009), we then build the baseline model with two states of the world and uncertainty about payoffs to actions in each state. We explore the value of judgment in the absence of any prediction technology, and then the value of prediction technology when there is no judgment. We finish the discussion of the baseline model with an exploration of the interaction between prediction and judgment, demonstrating that prediction and judgment are complements as long as judgment isn't too difficult. We then separate prediction quality into prediction frequency and prediction accuracy. As judgment improves, accuracy becomes more important relative to frequency. Finally, we examine complex environments where the number of potential states is large. Such environments are common in economic models of automation, contracting, and boundaries of the firm. We show that the effect of improvements in prediction on the importance of judgment depends a great deal on whether the improvements in prediction enable automated decision-making.

3.2 AI and Prediction Costs

We argue that the recent advances in artificial intelligence are advances in the technology of prediction. Most broadly, we define prediction as the ability to take known information to generate new information.
Our model emphasizes prediction about the state of the world.

Most contemporary artificial intelligence research and applications come from a field now called "machine learning." Many of the tools of machine learning have a long history in statistics and data analysis, and are likely familiar to economists and applied statisticians as tools for prediction and classification.2 For example, Alpaydin's (2010) textbook Introduction to Machine Learning covers maximum likelihood estimation, Bayesian estimation, multivariate linear regression, principal components analysis, clustering, and nonparametric regression. In addition, it covers tools that may be less familiar, but also use independent variables to predict outcomes:

2. We define prediction as taking known information to generate new information. Therefore, classification techniques such as clustering are prediction techniques in which the new information to be predicted is the appropriate category or class.

regression trees, neural networks, hidden Markov models, and reinforcement learning. Hastie, Tibshirani, and Friedman (2009) cover similar topics. The 2014 Journal of Economic Perspectives symposium on big data covered several of these less familiar prediction techniques in articles by Varian (2014) and Belloni, Chernozhukov, and Hansen (2014).

While many of these prediction techniques are not new, recent advances in computer speed, data collection, data storage, and the prediction methods themselves have led to substantial improvements. These improvements have transformed the computer science research field of artificial intelligence. The Oxford English Dictionary defines artificial intelligence as "[t]he theory and development of computer systems able to perform tasks normally requiring human intelligence." In the 1960s and 1970s, artificial intelligence research was primarily rules-based, symbolic logic. It involved human experts generating rules that an algorithm could follow (Domingos 2015, 89). These are not prediction technologies. Such systems became very good chess players and they guided factory robots in highly controlled settings; however, by the 1980s, it became clear that rules-based systems could not deal with the complexity of many nonartificial settings. This led to an "AI winter" in which research funding for artificial intelligence projects largely dried up (Markoff 2015).

Over the past ten years, a different approach to artificial intelligence has taken off. The idea is to program computers to "learn" from example data or experience. In the absence of the ability to predetermine the decision rules, a data-driven prediction approach can conduct many mental tasks. For example, humans are good at recognizing familiar faces, but we would struggle to explain and codify this skill.
By connecting data on names to image data on faces, machine learning solves this problem by predicting which image data patterns are associated with which names. As a prominent artificial intelligence researcher put it, "Almost all of AI's recent progress is through one type, in which some input data (A) is used to quickly generate some simple response (B)" (Ng 2016). Thus, the progress is explicitly about improvements in prediction. In other words, the suite of technologies that have given rise to the recent resurgence of interest in artificial intelligence use data collected from sensors, images, videos, typed notes, or anything else that can be represented in bits to fill in missing information, recognize objects, or forecast what will happen next.

To be clear, we do not take a position on whether these prediction technologies really do mimic the core aspects of human intelligence. While Palm Computing founder Jeff Hawkins argues that human intelligence is—in essence—prediction (Hawkins 2004), many neuroscientists, psychologists, and others disagree. Our point is that the technologies that have been given the label artificial intelligence are prediction technologies. Therefore, in order to understand the impact of these technologies, it is important to assess the impact of prediction on decisions.
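The "input data (A) is used to generate some simple response (B)" framing can be made concrete with a toy learned predictor. The sketch below (not from the chapter; the data are invented) uses a 1-nearest-neighbour rule: no human writes a decision rule, the label is simply predicted from labelled examples.

```python
# Toy illustration of learning a mapping A -> B from examples rather than
# from hand-coded rules: a 1-nearest-neighbour classifier. The feature
# vectors and labels are made-up stand-ins for, e.g., image data and names.
import math

examples = [  # (feature vector A, label B)
    ((1.0, 1.0), "cat"),
    ((1.2, 0.8), "cat"),
    ((5.0, 5.2), "dog"),
    ((4.8, 5.1), "dog"),
]

def predict(x):
    """Return the label of the stored example closest to input x."""
    def distance_to(example):
        features, _ = example
        return math.dist(features, x)
    _, label = min(examples, key=distance_to)
    return label

print(predict((1.1, 0.9)))  # cat
print(predict((5.0, 5.0)))  # dog
```

The contrast with the rules-based systems of the 1960s and 1970s is that nothing here encodes what makes a "cat": the behavior comes entirely from the data, which is the sense in which modern AI is a prediction technology.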

3.3 Case: Radiology

Before proceeding to the model, we provide some intuition of how prediction and judgment apply in a particular context where prediction machines are expected to have a large impact: radiology. In 2016, Geoff Hinton—one of the pioneers of deep learning neural networks—stated that it was no longer worth training radiologists. His strong implication was that radiologists would not have a future. This is something that radiologists have been concerned about since 1960 (Lusted 1960). Today, machine-learning techniques are being heavily applied in radiology by IBM using its Watson computer and by a start-up, Enlitic. Enlitic has been able to use deep learning to detect lung nodules (a fairly routine exercise)3 but also fractures (which is more complex). Watson can now identify pulmonary embolism and some other heart issues. These advances are at the heart of Hinton's forecast, but have also been widely discussed among radiologists and pathologists (Jha and Topol 2016). What does the model in this chapter suggest about the future of radiologists?

If we consider a simplified characterization of the job of a radiologist, it would be that they examine an image in order to characterize and classify that image and return an assessment to a physician. While often that assessment is a diagnosis (i.e., "the patient has pneumonia"), in many cases, the assessment is in the negative (i.e., "pneumonia not excluded"). In that regard, this is stated as a predictive task to inform the physician of the likelihood of the state of the world. Using that, the physician can devise a treatment.

These predictions are what machines are aiming to provide.
In particular, it might provide a differential diagnosis of the following kind:

Based on Mr. Patel's demographics and imaging, the mass in the liver has a 66.6 percent chance of being benign, a 33.3 percent chance of being malignant, and a 0.1 percent chance of not being real.4

The action is whether some intervention is needed. For instance, if a potential tumor is identified in a noninvasive scan, then this will inform whether an invasive examination will be conducted. In terms of identifying the state of the world, the invasive exam is costly but safe—it can deduce a cancer with certainty and remove it if necessary. The role of a noninvasive exam is to inform whether an invasive exam should be forgone. That is, it is to make physicians more confident about abstaining from treatment and further analysis. In this regard, if the machine improves prediction, it will lead to fewer invasive examinations.

3. "You did not go to medical school to measure lung nodules." http://www.medscape.com/viewarticle/863127#vp 2.
4. http://www.medscape.com/viewarticle/863127#vp 3.
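Once judgment has supplied the payoffs, the biopsy decision reduces to a probability threshold on the machine's prediction, which is one way to see why better prediction means fewer invasive exams. The sketch below is illustrative only; the payoff numbers are hypothetical, not from the chapter, and the example collapses the three predicted states to two (benign versus malignant).

```python
# Illustrative sketch: judged payoffs u(action, state) turn the radiology
# decision into a threshold on the predicted probability of malignancy.
# All numbers are hypothetical.

payoffs = {
    ("biopsy", "benign"): -1.0,     # invasive but safe: cost only
    ("biopsy", "malignant"): 10.0,  # cancer found and removed
    ("forgo", "benign"): 0.0,       # correctly avoided intervention
    ("forgo", "malignant"): -50.0,  # missed cancer
}

def biopsy_threshold():
    """Probability of malignancy above which biopsy maximizes expected payoff.

    Solves p*u(b,m) + (1-p)*u(b,ben) = p*u(f,m) + (1-p)*u(f,ben) for p.
    """
    num = payoffs[("forgo", "benign")] - payoffs[("biopsy", "benign")]
    den = (num + payoffs[("biopsy", "malignant")]
           - payoffs[("forgo", "malignant")])
    return num / den

print(round(biopsy_threshold(), 4))  # 0.0164
```

With these numbers the threshold is about 1.6 percent: a noninvasive prediction only changes the decision when it pushes the malignancy probability below that level, and a sharper prediction machine does so for more patients.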

Judgment involves understanding the payoffs. What is the payoff to conducting a biopsy if the mass is benign, malignant, or not real? What is the payoff to not doing anything in those three states? The issue for radiologists in particular is whether a trained specialist radiologist is in the best position to make this judgment, or will it occur further along the chain of decision making or involve new job classes that merge diagnostic information, such as a combined radiologist/pathologist (Jha and Topol 2016). Next, we formalize these ideas.

3.4 Baseline Model

Our baseline model is inspired by the "bandit" environment considered by Bolton and Faure-Grimaud (2009), although it departs significantly in the questions addressed and base assumptions made. Like them, in our baseline model, we suppose there are two states of the world, {θ1, θ2}, with prior probabilities of {μ, 1 − μ}. There are two possible actions: a state-independent action with a known payoff of S (safe) and a state-dependent action with two possible payoffs, R or r, as the case may be (risky).

As noted in the introduction, a key departure from the usual assumptions of rational decision-making is that the decision maker does not know the payoff from the risky action in each state and must apply judgment to determine that payoff.5 Moreover, decision makers need to be able to make a judgment for each state that
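As a numerical sketch of this two-state environment, suppose the payoffs were already known (so no judgment is needed); the decision maker compares the safe payoff S with the expected risky payoff under the prior μ. The particular values of S, R, and r below are assumed for illustration and are not from the chapter.

```python
# Sketch of the baseline choice with states {theta1, theta2}, prior
# {mu, 1 - mu}, a safe action paying S in both states, and a risky action
# paying R in theta1 and r in theta2. Payoff values are hypothetical.

S, R, r = 1.0, 3.0, -2.0  # assumed, with r < S < R

def choose(mu):
    """Take the risky action iff its expected payoff beats the safe one."""
    expected_risky = mu * R + (1 - mu) * r
    return "risky" if expected_risky > S else "safe"

# Prior at which the decision maker is indifferent: mu*R + (1-mu)*r = S.
mu_star = (S - r) / (R - r)
print(mu_star)      # 0.6
print(choose(0.8))  # risky
print(choose(0.4))  # safe
```

Prediction shifts μ toward 0 or 1 and so moves the decision maker away from the indifference point μ* = (S − r)/(R − r); the chapter's departure is that R and r themselves are unknown until costly judgment is applied.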
