Over 700,000 children in London are living in poverty, and artificial intelligence is now everywhere, from TikTok to Facebook. At this year’s London Child Poverty Summit, leaders from tech, research, youth work and community organisations met to ask a simple question: how do we ensure every child benefits from AI, not just the already advantaged?
The day opened with a powerful poem by Harriet Ekpo, capturing both the urgency of digital connectivity and the pain of being left out. The Childhood Trust’s Chief Executive, Josephine McCartney, welcomed attendees and introduced the day’s host, Rob Smith. She encouraged a lively exchange of ideas, urging everyone to make the day a little “spicy” and calling for open dialogue throughout.
From there, speakers and panellists stressed a single message: technology is inevitable, equity is not. It will take deliberate choices to close the gap.
Keynote: AI is a tool, not a cure-all
Tania Duarte (We and AI) set the tone with a clear-eyed keynote on AI’s double edge.
- AI can help address inequality, but only with critical literacy and inclusive decision-making
- Without guardrails, AI can entrench bias, deepen digital poverty and create emotional and cognitive pressures for young people
- The task is to imagine and build technologies that serve community needs, not technology for its own sake
Takeaway: progress requires ethical implementation, meaningful youth voice and investment in access, skills and support.
Panel: AI and youth, navigating the digital frontier
Panellists: Sabeehah Mahomed (The Alan Turing Institute), Nicki Watts (AI Youth and The Childhood Trust), Katie Taylor (Toynbee Hall), Dr Mariya Stoilova (LSE), plus Saihajleen, a 17-year-old AI advocate and representative for young people.
What we heard
The divide is real
- 16% of food bank users have no internet access
- 44% of digitally excluded people report severe social isolation
- AI risks amplifying existing social and economic inequalities
Mind and development
- Many young people use AI for emotional support
- Risks include weakened social skills and changes in how emotions are processed and expressed
- AI-generated images can undermine identity and self-esteem
Surprises
- 38% of adults feel less confident online since AI’s rapid emergence
- Children still prefer hands-on creative activities to screen-based ones
Bright spots
- AI can support learning differences and neurodivergent needs
- New tools can personalise education and improve accessibility
Recommendations
- Build comprehensive digital literacy programmes
- Introduce robust child-centred AI regulation
- Invest in non-digital creative opportunities
- Teach critical thinking about technology, not just how to use tools
Early years focus: AI toys and child development
Emily Goodacre from the University of Cambridge shared emerging insights on AI toys in early childhood.
What is coming
- Generative AI toys, like “Gabo” chatbot devices, are being marketed from age 3
- Interactive storytelling and AI-enabled accessories are on the rise
- Major brands, including Barbie, are partnering with AI companies
Potential benefits
- Supplementing practitioners’ knowledge and practice
- Engaging, interactive learning experiences
- Support for children with limited social interaction
- Early AI literacy in age-appropriate ways
Critical concerns
- Safeguarding, privacy and data
- Protecting human relationships as the core of early years development
- Age-appropriate content and interactions
- The digital divide and uneven access
- Possible impacts on brain development and social-emotional learning
Implementation principles
- AI should enhance, not replace, human interaction
- Tools must be visually engaging, interactive and tightly governed
- Content, usage and data require strict oversight
- Design must match developmental stages and support families and practitioners
Research priorities
- Long-term effects on development, emotion and relationships
- Risk identification and effective mitigations
- Equity impacts over time
Fireside chat: from hype to responsibility
Tim Cook (AIConfident), a former government adviser who helped set up the UK’s Office for AI, and Tania Duarte (We and AI), who leads a volunteer-run nonprofit, explored what real-world AI literacy should look like.
- Empower people and organisations to use AI well, not just more
- The UK lacks a coherent AI literacy strategy and leans too heavily on the tech sector
- AI can both empower and exclude, so ethics, transparency and inclusion must be built in
- Practical routes forward include leadership cohorts, shared standards and collective action to:
  - bridge the digital divide
  - amplify youth voice
  - protect online wellbeing
Closing call to action: pledges, not platitudes
The summit ended with a challenge: move from talk to action on youth engagement with AI. Attendees were asked to email their specific pledges to The Childhood Trust to build a shared portfolio of practical steps.
Themes for pledges
- Create interactive AI learning experiences that are safe and inclusive
- Prioritise online safety, privacy and data protection for children
- Ensure equitable access to devices, connectivity and support
- Invest in non-digital creative and social opportunities alongside digital skills
Bottom line: perspectives on AI will differ, but the shared mission is clear: support and protect young people as they navigate an increasingly digital world. Josephine McCartney urged attendees to “keep the dialogue going” about AI, whether or not they agree with its use, emphasising that dialogue can shift perspectives and ensure young people’s voices are heard in digital debates.