Artificial Intelligence - APC


GLOBAL INFORMATION SOCIETY WATCH 2019

Artificial intelligence: Human rights, social justice and development

Association for Progressive Communications (APC), ARTICLE 19, and Swedish International Development Cooperation Agency (Sida)

Table of contents

Introduction . . . . . 3
Vidushi Marda, ARTICLE 19

Decolonising AI: A transfeminist approach to data and social justice . . . . . 8
Paz Peña and Joana Varon

Radicalising the AI governance agenda . . . . . 13
Anita Gurumurthy and Nandini Chami, IT for Change

Country and regional reports introduction . . . . . 18
Alan Finlay

Korea, Republic of . . . . . 22
Korean Progressive Network Jinbonet – Miru

South Africa . . . . . 26
Human Sciences Research Council – Paul Plantinga, Rachel Adams and Saahier Parker

Introduction

Vidushi Marda[1]
ARTICLE 19
www.article19.org

Much has been written about the ways in which artificial intelligence (AI) systems have a part to play in our societies, today and in the future. Given access to huge amounts of data, affordable computational power, and investment in the technology, AI systems can produce decisions, predictions and classifications across a range of sectors. This profoundly affects (positively and negatively) economic development, social justice and the exercise of human rights.

Contrary to the popular belief that AI is neutral, infallible and efficient, it is a socio-technical system with significant limitations, and can be flawed. One possible explanation is that the data used to train these systems emerges from a world that is discriminatory and unfair, and so what the algorithm learns as ground truth is problematic to begin with. Another explanation is that the humans building these systems have their own unique biases and train systems in a flawed way. A third possible explanation is that there is no true understanding of why and how some systems are flawed – some algorithms are inherently inscrutable and opaque,[2] and/or operate on spurious correlations that make no sense to an observer.[3] But there is a fourth, cross-cutting explanation that concerns the global power relations in which these systems are built. AI systems, and the deliberations surrounding AI, are flawed because they amplify some voices at the expense of others, and are built by a few people and imposed on others. In other words, the design, development, deployment and deliberation around AI systems are profoundly political.

The 2019 edition of GISWatch seeks to engage with the core of this issue: what does the use of AI systems promise in jurisdictions across the world, what do these systems deliver, and what evidence do we have of their actual impact? Given the subjectivity that pervades this field, we focus on jurisdictions that have hitherto been excluded from mainstream conversations and deliberations around this technology, in the hope that we can work towards a well-informed, nuanced and truly global conversation.

The need to address the imbalance in the global narrative

Over 60 years after the term was officially coined, AI is firmly embedded in the fabric of our public and private lives in a variety of ways: from deciding our creditworthiness,[4] to flagging problematic content online,[5] from diagnosis in health care,[6] to assisting law enforcement with the maintenance of law and order.[7] AI systems today use statistical methods to learn from data, and are used primarily for prediction, classification and the identification of patterns. The speed and scale at which these systems function far exceed human capability, and this has captured the imagination of governments, companies, academia and civil society.
AI is broadly defined as the ability of computers to exhibit intelligent behaviour.[8] Much of what is referred to as “AI” in popular media is one particular technique that has garnered significant attention in the last few years: machine learning (ML). As the name suggests, ML is the process by which an algorithm learns and improves its performance over time by gaining greater access to data.[9] Given the ability of ML systems to operate at scale and produce data-driven insights, there has been an aggressive embrace of their ability to solve problems and predict outcomes.

While the expected public benefits of ML are often conjectural, as this GISWatch shows, its tangible impact on rights is becoming increasingly clear across the world.[10] Yet a historical understanding of AI and its development allows for a systemic approach to explaining and mitigating its negative impact. The impact of AI on rights, democracy, development and justice is both significant (widespread and general) and bespoke (impacting individuals in unique ways), depending on the context in which AI systems are deployed and the purposes for which they are built. It is not simply a matter of ensuring accuracy and perfection in a technical system, but rather a reckoning with the fundamentally imperfect, discriminatory and unfair world from which these systems arise, and the underlying structural and historical legacy within which these systems are applied.

Popular narratives around AI systems have been notoriously lacking in nuance. At one end, AI is seen as a silver-bullet technical solution to complex societal problems;[11] at the other, images of sex robots and superintelligent systems treating humans like “housecats” have been conjured.[12] Global deliberations are also lacking in “global” perspectives. Thought leadership, evidence and deliberation are often concentrated in jurisdictions like the United States, the United Kingdom and Europe.[13] The politics of this goes far beyond regulation and policy: it shapes how we understand, critique and build AI systems. The underlying assumptions that guide the design, development and deployment of these systems are context specific, yet they are applied globally in one direction, from the “global North” towards the “global South”.
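The learning dynamic described above, and the way it interacts with context, can be made concrete in a small sketch. The code below is a toy illustration only: the data is synthetic, scikit-learn's off-the-shelf logistic regression stands in for “AI”, and the two “contexts” are simply different statistical distributions. It shows an algorithm improving with greater access to data from the context it was built in, while gaining nothing in a context whose underlying patterns differ.

```python
# Toy sketch: a classifier improves with more data from its training context,
# but not for a deployment context with a different underlying distribution.
# All data is synthetic and all numbers are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_context(n, shift):
    """Two-class data whose decision boundary depends on `shift`,
    standing in for context-specific social assumptions."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + shift * X[:, 1] > 0).astype(int)
    return X, y

X_train_ctx, y_train_ctx = make_context(2000, shift=0.2)    # where the system is built
X_deploy_ctx, y_deploy_ctx = make_context(2000, shift=-1.5) # where it is deployed

for n in (100, 1_000, 10_000):
    X, y = make_context(n, shift=0.2)  # ever more data, same context
    model = LogisticRegression().fit(X, y)
    print(f"n={n:>6}  "
          f"training-context accuracy={model.score(X_train_ctx, y_train_ctx):.2f}  "
          f"deployment-context accuracy={model.score(X_deploy_ctx, y_deploy_ctx):.2f}")
```

Run under these assumptions, training-context accuracy climbs towards 1.0 as the data grows, while deployment-context accuracy stays roughly flat: the model has learned the assumptions of the place where it was built, not the place where it is used.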
In reality, these systems are far more nascent, and the contexts in which they are deployed significantly more complex.

Complexity of governance frameworks and form

Given the increasingly consequential impact that AI has in societies across the world, there has been a significant push towards articulating the ways in which these systems will be governed, with various frameworks of reference coming to the fore. The extent to which existing regulations in national, regional and international contexts apply to these technologies is unclear, although a closer analysis of data protection regulation,[14] discrimination law[15] and labour law[16] is necessary.

There has been a significant push towards critiquing and regulating these systems on the basis of international human rights standards.[17] Given the impact on privacy, freedom of expression and freedom of assembly, among others, the human rights framework is a minimum requirement to which AI systems must adhere.[18]
This can be done by conducting thorough human rights impact assessments of systems prior to deployment,[19] including assessing the legality of these systems against human rights standards, and by industry affirming its commitment to the United Nations Guiding Principles on Business and Human Rights.[20]

Social justice is another dominant lens through which AI systems are understood and critiqued. While human rights provide an important minimum requirement for AI systems to adhere to, an ongoing critique of human rights is that they are “focused on securing enough for everyone, are essential – but they are not enough.”[21] Social justice advocates are concerned that people be treated in ways consistent with ideals of fairness, accountability, transparency[22] and inclusion, and be free from bias and discrimination. While this is not the appropriate place for an analysis of the relationship between human rights and social justice,[23] suffice it to say that in the context of AI, the institutions, frameworks and mechanisms invoked by these two strands of governance are more distinct than they are similar.

A third strand of governance emerges from a development perspective: to have the United Nations’ (UN) Sustainable Development Goals (SDGs) guide responsible AI deployment (and in turn use AI to achieve the SDGs),[24] and to leverage AI for economic growth, particularly in countries where technological progress is synonymous with economic progress. There is a pervasive anxiety among countries that they will miss the AI bus, and in turn give up the chance of unprecedented economic and commercial gain, to “exploit the innovative potential of AI.”[25]

The form these various governance frameworks take also varies. Multiple UN mechanisms are currently studying the implications of AI from a human rights and development perspective, including but not limited to the High-level Panel on Digital Cooperation,[26] the Human Rights Council,[27] UNESCO’s World Commission on the Ethics of Scientific Knowledge and Technology,[28] and the International Telecommunication Union’s AI for Good summit.[29] Regional bodies like the European Union’s High-Level Expert Group on Artificial Intelligence[30] also focus on questions of human rights and principles of social justice like fairness, accountability, bias and exclusion. International private sector bodies like the Partnership on AI[31] and the Institute of Electrical and Electronics Engineers (IEEE)[32] likewise invoke principles of human rights, social justice and development. All of these offer frameworks that can guide the design, development and deployment of AI by governments and by companies building AI systems.

Complexity of politics: Power and process

AI systems cannot be studied only on the basis of their deployment.
To comprehensively understand the impact of AI in society, we must also investigate the processes that precede, influence and underpin deployment, i.e. the processes of design and development.[33] Who designs these systems, and what contextual reality do these individuals come from? What incentives drive design, and what assumptions guide this stage? Who is being excluded from this stage, and who is overrepresented? What impact does this have on society? On what basis are systems developed, and who can peer into the process of development? What problems are these technologies built to solve, and who decides and defines the problem? What data is used to train these systems, and who does that data represent?

Much like the models and frameworks of governance that surround AI systems, the process of building AI systems is inherently political. The problem that an algorithm should solve, the data that an algorithm is exposed to, the training that an algorithm goes through, who gets to design and oversee the algorithm’s training, the context within which an algorithmic system is built, the context within which an algorithm is deployed, and the ways in which the algorithmic system’s findings are applied in imperfect and unequal societies are all political decisions taken by humans.
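One of these decisions – the data an algorithm is exposed to – can be made concrete with a deliberately simplified simulation. All numbers below are invented and the two “areas” are hypothetical: underlying behaviour is identical, but incidents are only recorded where patrols already are, and a “data-driven” reallocation then inherits that history.

```python
# Deliberately simplified simulation, all numbers invented: recorded incidents
# reflect patrol coverage rather than underlying behaviour, and allocating
# patrols by recorded rates carries the historical skew forward.
import numpy as np

rng = np.random.default_rng(1)
true_rate = {"A": 0.05, "B": 0.05}  # identical underlying incidence per capita
patrols = {"A": 0.8, "B": 0.2}      # historically skewed patrol coverage
population = 10_000                 # residents per area

for year in range(1, 6):
    # Only incidents that occur where police are looking enter the dataset.
    recorded = {area: int(rng.poisson(true_rate[area] * patrols[area] * population))
                for area in true_rate}
    total = sum(recorded.values())
    # "Data-driven" allocation: next year's patrols follow recorded rates.
    patrols = {area: recorded[area] / total for area in recorded}
    print(f"year {year}: recorded A={recorded['A']:>3}, B={recorded['B']:>3}, "
          f"next-year patrol share A={patrols['A']:.2f}")
```

The recorded data shows a roughly 4:1 disparity between two areas that behave identically, and allocating resources according to recorded rates perpetuates the skew year after year. The predictive policing example discussed next shows the same dynamic outside of simulation.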
Take, for instance, an algorithmic system that is used to aid law enforcement in allocating resources for policing by studying past patterns of crime. At first glance, this may seem like an efficient solution to a complicated problem that can be applied at scale. However, a closer look reveals that each step of this process is profoundly political. The data used to train these algorithms is treated as ground truth, yet it represents decades of criminal activity defined and institutionalised by humans with their own unique biases. The choice of data sets is also political: training data is rarely representative of the world. It is more often than not selectively built from certain locations and demographics, painting a subjective picture of all crime in a particular area. Data is also not equally available: certain types of crime and certain demographics are reported and scrutinised more than others.

Drawing from the example of predictive policing, the impact of AI systems redistributes power in visible ways. It is not an overstatement to say that AI fundamentally reorients the power dynamics between individuals, societies, institutions and governments.

It is helpful to lay out the various ways in which, and the levels at which, power is concentrated, leveraged and imposed by these systems. By producing favourable outcomes for some sections of society, or by having a disproportionate impact on certain groups within a society, they significantly alter the ways in which people navigate everyday life. The ways in which governments approach societal problems are also significantly altered, given the widespread assumption that using AI for development is inherently good. While there is tremendous opportunity in this regard, it is imperative to be conscious of the inherent limitations of AI systems, and their imperfect and often harmful overlap with textured and imperfect societies and economies. AI systems are primarily developed by private companies, which train systems and analyse data on the basis of assumptions that are not always legal or ethical, profoundly impacting rights such as privacy and freedom of expression. This essentially makes private entities arbiters of constitutional rights and public functions in the absence of appropriate accountability mechanisms. This link between private companies and public-function power was most visibly called out through the #TechWontBuildIt movement, in which engineers at the largest technology companies refused to build problematic technology that would be used by governments to undermine human rights and dignity.[34] The design and development of AI systems is also concentrated in large companies (mostly from the United States and increasingly from China).[35] However, the deployment of this technology is often imposed on jurisdictions in the global South, whether on the pretext of pilot projects[36] or in the name of economic development[37] and progress. These jurisdictions are more often than not excluded from the table at the stages of design and development, but are the focus of deployment.

Current conversations around AI are overwhelmingly dominated by a multiplicity of efforts and initiatives in developed countries, each coming with its own set of incentives, assumptions and goals. While governance systems and safeguards are built in these jurisdictions, ubiquitous deployment and experimentation occur in others that are not part of the conversation.
Yet the social realities and cultural settings in which systems are designed and developed differ significantly from the societies in which they are deployed. Given the wide disparity in legal protections, societal values, institutional mechanisms and infrastructural access, this is unacceptable at best and dangerous at worst. There is a growing awareness of the need to understand and include voices from the global South; however, current conversations are deficient for two reasons. First, there is little recognition of the value of the conversations that are happening in the global South. And second, there is little, if any, engagement with the nuance of what the “global South” means.

Conclusion

Here, I offer two provocations for researchers in the field, in the hope that they inspire more holistic, constructive and global narratives moving forward.

The global South is not monolithic, and neither are the effects of AI systems. The global South is a complex term. Boaventura de Sousa Santos articulates it in the following manner:

The global South is not a geographical concept, even though the great majority of its populations live in countries of the Southern hemisphere. The South is rather a metaphor for the human suffering caused by capitalism and colonialism on the global level, as well as for the resistance to overcoming or minimising such suffering. It is, therefore, an anti-capitalist, anti-colonialist, anti-patriarchal and anti-imperialist South. It is a South that also exists in the geographic North (Europe and North America), in the form of excluded, silenced and marginalised populations, such as undocumented immigrants, the unemployed, ethnic or religious minorities, and victims of sexism, homophobia, racism and Islamophobia.[38]

The “global South” is thus dispersed across geography, demographics and opportunity. It must be afforded the same level of deliberation and nuance as those jurisdictions setting the tone and pace for this conversation. It is incumbent on scholars, researchers, states and companies to understand the ways in which AI systems need to adapt to contexts that are lesser known, in a bottom-up, context-driven way. To continually impose technology on some parts of the world without questioning local needs and nuance is to perpetuate the institutions of colonialism and racism that we fight so hard to resist. The fact that AI systems need to be situated in context is well understood in current debates. However, “context” necessarily denotes a local, nuanced, granular, bottom-up understanding of the issues at play. Treating the global South “context” as one that is monolithic and generally the opposite of the global North means that we lose valuable learnings and important considerations. A similar shortcoming involves generalising findings about AI systems in one context as ground truth across contexts – which requires a reminder that, much like the “global South”, AI is not a monolithic socio-technical system either. The institutional realities within which systems function, along with infrastructural realities, cultural norms, and legal and governance frameworks, are rarely, if ever, applicable across contexts.

The governance and politics of AI suffer from fundamental structural inequalities. At present, jurisdictions from the global South do not form part of the evidence base on which AI governance is built. As a result, considerations from the global South are simply added in retrospect to ongoing conversations, if at all. This is an inherent deficiency. Given the invisible yet consequential ways in which AI systems operate, it is crucial to spend time building evidence of what these systems look like in societies across the world. Narratives around AI that inform governance models need to be driven in a bottom-up, local-to-global fashion that looks at different contexts with the same level of granularity in the global South as was afforded to the global North. Much as AI systems operate in societies that have underlying structural inequalities, the deliberation around AI suffers from a similar underlying structural problem. It is incumbent on researchers, policymakers, industry and civil society to engage with the complexities of the global South. Failing this, we risk creating a space that looks very much like the opaque, inscrutable, discriminatory and exclusive systems we aim to improve in our daily work. This edition of GISWatch attempts to start creating an evidence base that nudges conversations away from that risk.

Footnotes

1. Lawyer and Digital Programme Officer at ARTICLE 19, non-resident research analyst at Carnegie India. Many thanks to Mallory Knodel and Amelia Andersdotter for their excellent feedback on earlier versions of this chapter.
2. Diakopoulos, N. (2014). Algorithmic Accountability Reporting: On the Investigation of Black Boxes. New York: Tow Centre for Digital Journalism.
3. …s-correlations
4. O’Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown Publishing Group.
5. Balkin, J. (2018). Free Speech in the Algorithmic Society: Big Data, Private Governance, and New School Speech Regulation. Yale Law School Faculty Scholarship Series. https://digitalcommons.law.yale.edu/fss_papers/5160
6. Murali, A., & PK, J. (2019, 4 April). India’s bid to harness AI for healthcare. FactorDaily.
7. Wilson, T., & Murgia, M. (2019, 20 August). Uganda confirms use of Huawei facial recognition cameras. Financial Times.
8. Elish, M. C., & Hwang, T. (2016). An AI Pattern Language. New York: Intelligence and Autonomy Initiative (I&A), Data & Society. https://www.datasociety.net/pubs/ia/AI_Pattern_Language.pdf
9. Surden, H. (2014). Machine Learning and the Law. Washington Law Review, 89(1). https://scholar.law.colorado.edu/articles/81
10. For example, image recognition algorithms have shockingly low rates of accuracy for people of colour. See: American Civil Liberties Union Northern California. (2019, 13 August). Facial Recognition Technology Falsely Identifies 26 California Legislators with Mugshots. AI systems used to screen potential job applicants have also been found to automatically disqualify female candidates: by training a ML algorithm on what successful candidates looked like in the past, the system embeds gender discrimination as a baseline. See: Dastin, J. (2018, 10 October). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters.
11. McLendon, K. (2016, 20 August). Artificial Intelligence Could Help End Poverty Worldwide. Inquisitr.
12. Solon, O. (2017, 15 February). Elon Musk says humans must become cyborgs to stay relevant. Is he right? The Guardian.
13. One just needs to glance through the references to discussions on AI in many high-level documents to see which jurisdictions the evidence backing up claims about AI comes from.
14. Wachter, S., & Mittelstadt, B. (2019). A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI. Columbia Business Law Review, 2019(2). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3248829
15. Barocas, S., & Selbst, A. D. (2016). Big Data’s Disparate Impact. California Law Review, 104, 671. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2477899
16. Rosenblat, A. (2018). Uberland: How Algorithms are Rewriting the Rules of Work. University of California Press.
17. ARTICLE 19, & Privacy International. (2018). Privacy and Freedom of Expression in the Age of Artificial Intelligence.
18. Kaye, D. (2018). Report of the Special Rapporteur to the General Assembly on AI and its impact on freedom of opinion and expression.
19. Robertson, A. (2019, 10 April). A new bill would force companies to check their algorithms for bias. The Verge.
20. …ngprinciplesBusinesshr_eN.pdf
21. Moyn, S. (2018). Not Enough: Human Rights in an Unequal World. Cambridge: The Belknap Press of Harvard University Press.
22. https://www.fatml.org
23. Lettinga, D., & van Troost, L. (Eds.) (2015). Can human rights bring social justice? Amnesty International Netherlands. https://www.amnesty.nl/content/uploads/2015/10/can_human_rights_bring_social_justice.pdf
24. Chui, M., Chung, R., & van Heteren, A. (2019, 21 January). Using AI to help achieve Sustainable Development Goals. United Nations Development Programme.
25. Artificial Intelligence for Development. (2019). Government Artificial Intelligence Readiness Index 2019. https://ai4d.ai/index2019
26. https://digitalcooperation.org
27. …spx
28. UNESCO COMEST. (2019). Preliminary Study on the Ethics of Artificial Intelligence.
29. https://aiforgood.itu.int
30. …-level-expertgroup-artificial-intelligence
31. https://www.partnershiponai.org
32. …/autonomoussystems.html
33. Marda, V. (2018). Artificial Intelligence Policy in India: A Framework for Engaging the Limits of Data-Driven Decision-Making. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133). https://doi.org/10.1098/rsta.2018.0087
34. O’Donovan, C. (2018, 27 August). Clashes Over Ethics At Major Tech Companies Are Causing Problems For Recruiters. BuzzFeed News.
35. See, for example, the country report on China in this edition of GISWatch.
36. Vincent, J. (2018, 6 June). Drones taught to spot violent behavior in crowds using AI. The Verge.
37. Entrepreneur. (2019, 25 June). Artificial Intelligence Is Filling The Gaps In Developing Africa. Entrepreneur South Africa. https://www.entrepreneur.com/article/337223
38. de Sousa Santos, B. (2016). Epistemologies of the South and the future. From the European South, 1, 17-29; see also Arun, C. (2019). AI and the Global South: Designing for Other Worlds. Draft chapter from the Oxford Handbook of Ethics of AI, forthcoming in 2019.

Decolonising AI: A transfeminist approach to data and social justice

Paz Peña[1] and Joana Varon[2]

Introduction

Let’s say you have access to a database with information on 12,000 girls and young women between 10 and 19 years old, who are inhabitants of a poor province in South America. The data sets include age, neighbourhood, ethnicity, country of origin, educational level of the household head, physical and mental disabilities, number of people sharing a house, and whether or not they have running hot water among their services. What conclusions would you extract from such a database? Or maybe the question should be: is it even desirable to draw any conclusion at all? Sometimes, and sadly more often than not, the mere possibility of extracting large amounts of data is a good enough excuse to “make them talk” and, worst of all, to make decisions based on that.

The database described above is real. And it is used by public authorities to prevent school drop-outs and teenage pregnancy. “Intelligent algorithms allow us to identify characteristics in people that could end up with these problems and warn the government to work on their prevention,”[3] said a Microsoft Azure representative. The company is responsible for the machine-learning system used in the Plataforma Tecnológica de Intervención Social (Technological Platform for Social Intervention), set up by the Ministry of Early Childhood in the Province of Salta, Argentina.

“With technology, based on name, surname and address, you can predict five or six years ahead which girl, or future teenager, is 86% predestined to have a teenage pregnancy,” declared Juan Manuel Urtubey, a conservative politician and governor of Salta.[4] The province’s Ministry of Early Childhood worked for years with the anti-abortion NGO Fundación CONIN[5] to prepare this system.[6] Urtubey’s declaration was made in the middle of a campaign for legal abortion in Argentina in 2018, driven by a social movement for sexual rights that was at the forefront of public discussion locally and received a lot of international attention.[7] The idea that algorithms can predict teenage pregnancy before it happens is the perfect excuse for anti-women[8] and anti-sexual and reproductive rights activists to declare abortion laws unnecessary. According to their narratives, if they have enough information from poor families, conservative public policies can be deployed to predict and avoid abortions by poor women. Moreover, there is a belief that, “If it is recommended by an algorithm, it is mathematics, so it must be true and irrefutable.”

It is also important to point out that the database used in the platform only has data on females. This specific focus on a particular sex reinforces patriarchal gender roles and, ultimately, blames female teenagers for unwanted pregnancies, as if a child could be conceived wi

Footnotes

1. Paz Peña is an independent consultant on tech, gender and human rights.
2. Joana Varon is the executive director of Coding Rights and an affiliate of the Berkman Klein Center for Internet and Society at Harvard University.
3. Microsoft. (2018, 2 April). Avanza el uso de la Inteligencia Artificial en la Argentina con experiencias en el sector público, privado y ONGs [The use of artificial intelligence advances in Argentina, with experiences in the public and private sectors and NGOs]. News Center Microsoft Latinoamérica.
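It is worth pausing on what a figure like “86% predestined” could mean in practice. The arithmetic below uses entirely invented numbers – it is not based on data from the Salta platform – to show why a confident-sounding individual risk score for a rare outcome can mislead.

```python
# Worked base-rate arithmetic with invented numbers (not statistics from the
# Salta platform): when an outcome is rare, even a sensitive screening model
# flags mostly people who will never experience it, so a confident individual
# risk figure says more about the model than about the person.
population = 10_000          # hypothetical cohort of girls and young women
base_rate = 0.05             # assume 500 of 10,000 experience the outcome
sensitivity = 0.86           # assume the model flags 86% of true cases
false_positive_rate = 0.10   # and wrongly flags 10% of everyone else

true_pos = base_rate * population * sensitivity                  # 430 flagged, truly at risk
false_pos = (1 - base_rate) * population * false_positive_rate   # 950 flagged in error
precision = true_pos / (true_pos + false_pos)

print(f"flagged: {true_pos + false_pos:.0f}")
print(f"share of flagged individuals actually at risk: {precision:.2f}")  # ~0.31
```

Under these invented assumptions, fewer than a third of the girls such a system flags would ever experience the predicted outcome – a reminder that “it is mathematics” does not make a prediction true or irrefutable.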
