Artificial Intelligence For Children - World Economic Forum


Artificial Intelligence for Children
Toolkit, March 2022

Images: Getty Images

Contents

- Introduction
- 1 C-suite and corporate decision-makers’ checklist
  - Where companies can fall short
  - Actions
  - The rewards of leading
- 2 Product team guidelines
  - Foundational principles
  - The challenge
  - Definition of children and youth
  - Social networks
  - Overarching limitations
  - Putting children and youth FIRST
- 3 AI labelling system
- 4 Guide for parents and guardians
  - Benefits and risks
- Contributors
- Endnotes

Disclaimer

This document is published by the World Economic Forum as a contribution to a project, insight area or interaction. The findings, interpretations and conclusions expressed herein are a result of a collaborative process facilitated and endorsed by the World Economic Forum but whose results do not necessarily represent the views of the World Economic Forum, nor the entirety of its Members, Partners or other stakeholders.

© 2022 World Economic Forum. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, including photocopying and recording, or by any information storage and retrieval system.

Introduction

This toolkit is designed to help companies develop trustworthy artificial intelligence for children and youth.

For the first time in history, a generation of children is growing up in a world shaped by artificial intelligence (AI). AI is a set of powerful algorithms designed to collect and interpret data to make predictions based on patterns found in the data.

Children and youth are surrounded by AI in many of the products they use in their daily lives, from social media to education technology, video games, smart toys and speakers. AI determines the videos children watch online, their curriculum as they learn, and the way they play and interact with others.

This toolkit, produced by a diverse team of youth, technologists, academics and business leaders, is designed to help companies develop trustworthy artificial intelligence (AI) for children and youth and to help parents, guardians, children and youth responsibly buy and safely use AI products.

What is at stake? AI will determine the future of play, childhood, education and societies. Children and youth represent the future, so everything must be done to support them to use AI responsibly and address the challenges of the future.

AI can be used to educate and empower children and youth and have a positive impact on society. But children and youth can be especially vulnerable to the potential risks posed by AI, including bias, cybersecurity and lack of accessibility. AI must be designed inclusively to respect the rights of the child user. Child-centric design can protect children and youth from the potential risks posed by the technology. AI technology must be created so that it is both innovative and responsible. Responsible AI is safe, ethical, transparent, fair, accessible and inclusive. Designing responsible and trusted AI is good for consumers, businesses and society. Parents, guardians and adults all have the responsibility to carefully select ethically designed AI products and help children use them safely.

This toolkit aims to help responsibly design, consume and use AI. It is designed to help companies, designers, parents, guardians, children and youth make sure that AI respects the rights of children and has a positive impact in their lives.

Putting children and youth FIRST

Who are you? A corporate decision-maker, a member of a product team, or a parent or guardian?

Corporate users

The checklist for C-suite executives and guidelines for product teams contain actionable frameworks and real-world guidance to help your company design innovative and responsible AI for children and youth. By using these guidelines, you can lead as a trusted company that delights your child users.

The C-suite is responsible for setting the culture around responsible AI and the strategy for and investment in AI products. The checklist is designed to help executives learn more about the benefits and risks of AI for children and youth so you can better lead, innovate and grow.

Companies should keep in mind that children often use AI products that were not designed specifically for them. It’s sometimes difficult to predict which products might later be used by children or youth. As a result, you should carefully consider whether children or youth might be users of the technology you’re developing. If they are, you should carefully consider how to help increase the benefits and mitigate the potential risks posed by the technology for children and youth.

Read more about the C-suite checklist.

Product teams design, develop and deploy the AI technology that children and youth will use. Responsible design starts with product teams and continues to be their ongoing responsibility. The guidelines are designed for engineers, developers, product managers and other members of the product team to use throughout the product life cycle.

AI labelling system

The AI labelling system is designed to be included with all AI products on their physical packaging and accessible online through a QR code. Like nutritional information on food packaging, the labelling system is designed to concisely tell consumers – including parents and guardians, as well as children and youth – how the product works and the options available to users. All companies are encouraged to adopt this tool to help create greater trust and transparency with the purchasers and child users of their products.

Learn about the AI labelling system.

Consumers: Parents and guardians

Parents and guardians decide which AI-powered technologies to buy for their children. By educating yourselves and better understanding the benefits and risks posed by the technology, you can make deliberate and informed decisions that protect your children and ensure AI has a positive impact on their lives.

Learn more about the Guide for parents and guardians.

The tool for parents and guardians is based on the AI labelling system (Figure 1) to help understand these six important categories of AI.

FIGURE 1: AI labelling system

- Age
- Data use
- Sensors
- AI use
- Accessibility
- Networks

Source: World Economic Forum
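Like a nutrition label, each of the six categories can be thought of as a field in a simple structured record. The sketch below is a hypothetical illustration in Python only – the toolkit defines the categories but does not prescribe a machine-readable format, and all field names and the example product are invented:

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical sketch of the six label categories as a structured record.
# The toolkit names the categories but not a data format; the field names
# and example values here are illustrative assumptions, not a standard.
@dataclass
class AIProductLabel:
    age: str            # intended age range of the child user
    ai_use: str         # what the AI is used for, in plain language
    sensors: list       # e.g. camera, microphone
    data_use: str       # what data is collected and how it is used
    accessibility: str  # accessibility features and limitations
    networks: str       # whether the product connects users to others

    def to_json(self) -> str:
        """Serialize the label, e.g. for the page behind a QR code."""
        return json.dumps(asdict(self), indent=2)

# Example label for a fictional smart toy
label = AIProductLabel(
    age="6-9 years",
    ai_use="Speech recognition to answer spoken questions",
    sensors=["microphone"],
    data_use="Voice clips processed in the cloud; deleted after 30 days",
    accessibility="Audio-only interface; no screen required",
    networks="No social features; no contact with other users",
)
print(label.to_json())
```

A record like this could sit behind the QR code on the packaging, so that the same six answers a parent reads on the box are also available to comparison tools in a consistent shape.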

1 C-suite and corporate decision-makers’ checklist

Actionable frameworks and real-world guidance help companies design innovative and responsible AI for children and youth.

This checklist is for C-suite executives of companies that provide products and services incorporating artificial intelligence (AI) intended for use by children and youth. Many companies use AI to differentiate their brands and their products by incorporating it into toys, interactive games, extended reality applications, social media, streaming platforms and educational products. With little more than a patchwork of regulations to guide them, organizations must navigate a sea of privacy and ethics concerns related to data capture and the training and use of AI models. Executive leaders must strike a balance between realizing the potential of AI and helping reduce the risk of harm to children and youth and, ultimately, their brand. Building on a foundation established in the World Economic Forum “Empowering AI Leadership: AI C-Suite Toolkit”, the checklist is intended to help C-suite executives and other corporate decision-makers reflect upon and act to create and support responsible AI for this vulnerable population.

Trusted and responsible AI for children and youth: A checklist for executives

Attracted by the extraordinary opportunity to innovate with AI, companies are moving at a record pace to incorporate AI into toys, broadcast and social media, smart speakers, education technology, virtual worlds and more.

AI ranges in complexity and impact from simple recommendation and customization engines to deeply immersive experiences that imitate and simulate human behaviour, emotions and interactions. Implemented thoughtfully, these systems can delight, teach and evoke interaction with their young users, enabling them to grow and develop at their own pace and according to their learning styles. But implemented without careful forethought and the guidance of child development experts and ethicists, AI can hinder development and infringe on the rights of vulnerable users.

With the checklist, leaders can learn how even well-meaning companies overlook potential issues and how to mitigate the risks associated with AI adoption. Executives should aspire to the highest possible ethical and social standards regarding child development, suitability for purpose, non-bias, accessibility and privacy. Doing so provides tremendous potential beyond the opportunity to do good: it can elevate your brand and enable you to position your company as a trustworthy steward of sought-after products and services to your primary buyers: parents, grandparents, teachers, educators and other care providers.

Where companies can fall short

Given the acceleration of AI adoption and a lag in broadly accepted standards and guidance, leaders might be caught off guard. What are the riskiest behaviours that your teams should avoid?

- Not disclosing how AI is used: Companies that think buyers may object to AI may conceal or downplay its use. Be transparent about the use of AI and why you are using it.
- Amplifying and perpetuating bias: AI modelling can contain inaccuracies and oversimplifications that lead to inaccessibility and bias against marginalized groups, such as disabled communities and users from different cultural and socio-economic backgrounds.
- Skipping user-focused validation: Bypassing user and expert validation of suitability for purpose during design and prototyping stages can diminish the potential value of AI and cause harm.
- Leaving privacy and security gaps: Data security, privacy and consent to collect and use data are complicated due to cybersecurity threats and a patchwork of regulations that vary geographically. These concerns reach past the useful life of a product: for minors, parents provide consent, but their children may claim their right for their data to be forgotten as they get older.

With these potential stumbling blocks in mind, what steps can corporate leaders take to protect and enhance their brand while leveraging the remarkable potential of AI?

Actions

Executive leaders should create a culture of responsibility backed by resources that enable responsible AI from design to end-of-product use and beyond. These steps are recommended:

1. Know the legal duties and regulatory constraints: Leverage existing guidance, such as the Institute of Electrical and Electronics Engineers’ (IEEE) Code of Ethics,1 UNICEF’s Policy Guidance on AI for Children2 and World Economic Forum guidance,3 as well as the guidance contained in this toolkit: the product team guidelines, the AI labelling system, and the resources for parents and guardians and for children and youth. Commit to internal and, if possible, external AI oversight. Report compliance and leadership measures publicly and in simple language so buyers can understand.

2. Build a diverse and capable team: Include ethicists, researchers, privacy specialists, educators, child development experts, psychologists, user-experience (UX) designers and data scientists. Collaborate with non-profit organizations and educational and research institutions for more expertise.

3. Train your team and provide resources for success with this checklist: Educate team members about the importance of responsible and trustworthy AI and provide them access to the skills, tools and time they need to execute your vision. Have open dialogue about unintended consequences, possible worst-case scenarios, and the reasons for ensuring your teams are considering the five AI characteristics critical to putting children and youth FIRST (Figure 2). For more information, refer to the product team guidelines, which offer detailed guidance on the five areas.

4. Offer expertise to inform the development of regulations, standards and guidance: Contribute to public forums on how AI is being used in your products or services. Share your experience in proposing guidance and requirements.

5. Welcome principled efforts to label products and services: These should be done according to the potential impact of AI on users. Endorse and participate in activities to develop labelling and rating standards. Label your offerings to help consumers make informed choices based on recommendations about, for example, user age, accessibility factors and whether a camera and microphone are being used. For additional information about labelling recommendations, see the AI labelling system.

FIGURE 2: Putting children and youth FIRST checklist

- Fair: Company culture and processes address ethics and bias concerns regarding how AI models are developed by people and the impact of AI models in use.
- Inclusive: AI models interact equitably with users from different cultures and with different abilities; product testing includes diverse users.
- Responsible: Offerings reflect the latest learning science to enable healthy cognitive, social, emotional and/or physical development.
- Safe: The technology protects and secures user and purchaser data, and the company discloses how it collects and uses data and protects data privacy; users may opt out at any time and have their data removed or erased.
- Transparent: The company explains in non-technical terms to buyers and users why AI is used, how it works and how its decisions can be explained. The company also admits AI’s limitations and potential risks and welcomes oversight and audits.

Source: World Economic Forum

The rewards of leading

When you deliver responsible AI-based offerings and engage in the development of standards, you can do much more than just affect your bottom line. You help young users grow into the best versions of themselves – a generation empowered by AI.

2 Product team guidelines

Responsible design starts with product teams and continues to be their ongoing responsibility throughout the product life cycle.

Introduction

Why? Product teams design, develop and deploy the AI technology that children and youth will use. Responsible design starts with product teams and continues to be your ongoing responsibility throughout the product life cycle. These guidelines are designed to help you develop responsible AI products for children and youth.

Who? The entire product team: developers, programme managers, technical writers, product owners, software architects, UX designers, marketing managers and anyone else with a hand in product development.

How? Dive into the five categories of “Putting children and youth FIRST” – Fair, Inclusive, Responsible, Safe and Transparent (Figure 3). Each theme is also organized into three sections: goals, greatest potential for harm, and mitigating risks. Use these categories and resources as a starting point. Responsible AI is a journey, and you’ll want to form a diverse and dynamic team as you develop AI for children and youth.

FIGURE 3: Putting children and youth FIRST

- Fair: Ethics, bias and liability
- Inclusive: Accessibility, neuro-differences and feedback from kids
- Responsible: Age- and developmental-stage-appropriate, reflects the latest learning science and is designed with kids in mind
- Safe: Does no harm; cybersecurity and addiction mitigation
- Transparent: Can explain how the AI works and what it is being used for to a novice or lay audience

Source: World Economic Forum

Foundational principles

The United Nations Convention on the Rights of the Child lays out numerous principles for protecting the rights, dignity, autonomy and safety of children. But the first principle under Article 3 guides many of the others:

“In all actions concerning children, whether undertaken by public or private social welfare institutions, courts of law, administrative authorities or legislative bodies, the best interests of the child shall be a primary consideration.”4

The best place to start is with a simple question: “Does the system I’m building have the best interests of children in mind?” Perhaps the answer is not “no”, but “I’m not sure”. And what if it is an emphatic “yes!”? No matter the answer, it is important to consider whether the positive impact can be clearly articulated, and to establish strategies for determining whether or not your system is having this intended impact. The goal of these guidelines is to help you identify the risks and uncover potential blind spots as a product is envisioned, built, tested and deployed.

Human-centred design is the act of starting first with the person for whom a product is being built. In that way, it is possible to prioritize the needs, desires and scenarios of use over the capabilities of the technology. Building products for children entails going a step further and taking a child-centred design approach.5 In doing so, you will take more responsibility for the psycho-social development stage of your customers, the risks they may encounter with your technology, and your role in limiting harm. Doing so can help you ask the right questions about not only the desirability of your product, but also its fitness and safety.

These guidelines are not just for product teams building with children and youth in mind. They are relevant to any product that children and youth might use. Social media, gaming and even productivity platforms are all highly likely to be used by children and youth, independent of the expressed or implied target age.6 Because of this, the hope is that these guidelines are applied across more than just the narrowly defined market of smart toys for children and youth.

As a member of a product team developing technology for customers, you are beholden to their greatest potential and their riskiest vulnerabilities. In these guidelines, five AI characteristics are explored that developers, engineers, designers, UX professionals and programme managers should apply to their work. When designing AI, children and youth must be put FIRST – where AI-powered technology is built fairly, inclusively, responsibly, safely and transparently. Each of the five characteristics includes these elements – goals, the potential for harm and risk-mitigation guidance – in a checklist (Figure 4), as well as further links and resources.

Applying these principles will not be easy, nor is it intended to be. Easy is not the way of great product work, so you are invited to dig in, reflect, perhaps get uncomfortable, and come out the other side with technology that respects and celebrates the most precious, cherished and vulnerable users: children and youth.

FIGURE 4: Checklist – Putting children and youth FIRST

Fair
- Goals: Fairness for the user and their dignity are paramount. Bias in training, expression and feedback in the AI is assumed and actively addressed. Effort is spent understanding liability.
- Greatest potential for harm: Breaches of trust and consent. Emotional and developmental harm. Bias, unequal access and impact.
- Mitigate risks: Employ proactive strategies for responsible governance. Use ongoing ethical thinking and imagination. Employ ethical governance for fairness. Test and train with data to understand the behaviour of the model and its areas of bias.

Inclusive
- Goals: Accessibility is built-in; it is not an afterthought. “Inclusive” accounts for and celebrates neurodiversity. The technology development cycle and testing include feedback from children and youth.
- Greatest potential for harm: Exclusion by design. Bias in, bias out, bias internalized. Demographics allowed to define the user.
- Mitigate risks: Build advisory councils and research participant pools that represent high variability in the target audience. Actively seek user experience failures that create experiences of exclusion. Test and train with data to understand the behaviour of the model and its areas of bias.

Responsible
- Goals: The technology is age-appropriate and has a cognitive-development-stage-appropriate design. The technology reflects the latest learning science. The technology is created with children and youth at the centre of the design and development process.
- Greatest potential for harm: Unsophisticated, inflexible AI models. Built for small, silly adults. A callous observer.
- Mitigate risks: Build research plans, advisory councils and participant pools that represent high variability in the target audience. Actively seek user experience failures that create negative experiences. Build conviction around the behaviour of the AI and how it might adjust to a user’s development stage. Build a multivariate measurement strategy.

Safe
- Goals: The technology does no harm to customers and cannot be used to harm others. Threat analysis includes how the AI could be weaponized for harm. Cybersecurity, including the privacy and security of customer data, is a high priority. The potential for over-use is acknowledged and addiction mitigation is actively built in.
- Greatest potential for harm: Technology gone rogue. Un/intended malicious, oblique or naive usage. An unsafe community. Data privacy and security breaches. The burden of security and privacy is left to the user. Excluded guardians.
- Mitigate risks: Overcommunicate privacy and security implications. Conduct user research to inform scenario planning for nefarious use cases and mitigation strategies. Build a transparent, explainable and user data-driven relationship model between the child, guardian and technology to identify and mitigate harm. Have the product team develop subject-matter expertise in technology concerns related to children and youth. Build a security plan that takes children’s and youth’s cognitive, emotional and physical safety into account. Use more secure options as default and allow guardians to opt in to advanced features after reading their specific terms of use. Provide guidelines for the environment in which the technology is meant to be used. Create alert mechanisms for guardians to intervene in case a risk is identified during usage.

Transparent
- Goals: Everyone on the team can explain how the AI works and what the AI is being used for to a novice or lay audience. Anyone who wants to understand the AI is easily able to do so.
- Greatest potential for harm: Lack or obfuscation of informed consent. Skirted or ignored governmental rules and regulations.
- Mitigate risks: Confirm the terms of use are clear, easy to read and accessible to a non-technical, literate user. Clearly disclose the use of high-risk technologies, such as facial recognition and emotion recognition, and how this data is managed. Explicitly mention the geographic regions whose data protection and privacy laws are honoured by the technology. Clearly specify the age group for which the application is built.

Source: World Economic Forum

The challenge

People active in technology – ethicists, user researchers, software developers, programme and product managers, and designers – wrote these guidelines with people like themselves in mind: developers, programme managers, technical writers, product owners, software architects, UX designers, marketing managers, and anyone else with a hand in product development. The objective is to guide you through some of the risks associated with building AI for children and youth. Admittedly, little time was spent addressing the value of AI and machine learning (ML), and the good that technology can bring to the lives of children and youth. The purpose of these guidelines is not to discourage the use of AI in product design; instead, it is to help bring balance to the propensity to see only the positive potential outcomes of the products built.

The challenge is to consider the other side of the AI-infused products you are building. These guidelines can be used to interrogate the work being undertaken. They will help uncover and mitigate the deficiencies and possible risks introduced in a design before customers find them. The hope is that, by helping to do this, product teams can be confident in, proud of and celebrated for the responsible AI they bring into the lives of children and youth.

Definition of children and youth

There is no single definition of children and youth. They are people whose bodies and brains are still developing. Most cannot yet drive a car or hold a job. The UN defines children as people under 18 years of age. It is even possible to consider children and youth to be up to 26, since the prefrontal cortex only completes its development around that age.7 Children have shorter attention spans and, in some cases, limited vocabulary. Age can be measured by years on this planet or by abilities on tests of cognitive skills, physical dexterity or emotional intelligence. Age, like most human concepts, is not an absolute. Due to the variability of human capability relative to age, it is important to think beyond age groups and leverage instead the more reliable concepts of cognitive, emotional and physical stages8 as a way to understand, target, communicate with and market a product.

Spending much time with children and youth reveals how self-centred they can be. This is a result of brain development, and varies as a function of developmental stage.9 This self-centredness is excellent for self-preservation but can morph into something unpleasant when children and youth encounter negative experiences. Just as they are quick to take credit for the sky being blue, a child might also take credit for their parents’ divorce. Their self-centredness means everything is their fault – the good things and the bad – and their vulnerability, especially viewed through this lens, cannot be overstated. Child and youth customers will likely internalize the good and bad parts of technology. A product team’s job is to work through what this means and mitigate it accordingly.

Social networks

Depending on your AI or product goals, you may be connecting to or building a social network inside your product. These guidelines do not deeply explore the risks of social networks for children and youth. If a product includes a social component, however, the following are recommended:

- Focus on safety: Guard against nefarious actors who will exploit your system to gain access to children and youth for their own gains (e.g. computer viruses, child exploitation, bullying).
- Focus on fairness: Design creative alternatives to embedding implicit social hierarchies into your experiences (e.g. custom avatar clothes that cost real-world money; accumulation of likes and followers).
- Develop user research plans10 that take a multivariate approach to your product questions: qualitative and quantitative methods; longitudinal research and traditional usability work; contextual inquiry and interviews; and benchmarking and scorecarding.

The following information will help initiate thinking about the risks of social networks with children and youth:

- Raising Children Network (Australia), “Social media benefits and risks: children and teenagers”, 22 December 2020
- Kaspersky, Kids Safety, “The dangers of social networks”, 26 February 2016
- Texas A&M University, Division of Information Technology, “7 Tips for Safe Social Networking”

Overarching limitations

When it comes to researching and working with children and youth, the experience of engineers is probably limited. It is strongly recommended to formally consult with experts in the fields of child development, developmental science, psychology and learning sciences, among others, to evaluate AI that will be used by children and youth. Experts will be needed who can objectively ask questions about the value, safety and utility of your product, and who can interrogate your AI/ML and help you understand not only the biases within it, but also ways to mitigate them.

Additionally, among the resources listed, technology design researchers whose work focuses on technology for children are cited (in particular, Jason Yip, Julie Kientz and Alexis Hiniker). Their work captures much more depth and nuance about the risks of AI affecting children than is possible to include in these guidelines.

Putting children and youth FIRST

Fair | Inclusive | Responsible | Safe | Transparent

Whenever data is collected, systems are engineered or products are sold, ethical obligations arise to be fair and honest, and to do good work and avoid harm. These obligations are all the more pressing when working with children and youth, who are among the most vulnerable members of society. Adults have a special responsibility to help them flourish and to shield them from harm. Technologies and systems powered by AI and ML could transform how people interact with each other. But they also bring potential bias, exclusion and lack of fairness to their users. With th
