Our Approach

We believe AI safety is not a fixed state but an ongoing practice: it requires sustained vigilance, continuous evaluation, and a commitment to updating safeguards as new evidence emerges.

Our definition of safety

Safety, in our context, means protecting children from mental and physical harms, both immediate and those that may emerge over time as AI becomes embedded in their daily lives. This requires understanding how risks arise from AI interactions and designing safeguards that address children's unique developmental vulnerabilities.

We address four categories of risk specific to child-facing AI: emotional and psychological harms (including over-attachment and inappropriate responses to distress), content safety risks (exposure to age-inappropriate material or dangerous advice), developmental risks (cognitive deskilling and impacts on social development), and fairness risks (discriminatory outcomes arising from biased training data or speech recognition systems).

Grounded in the Children and Families Act 2014

Our research agenda is informed by the UK's Children and Families Act 2014, landmark legislation that reformed services for vulnerable children and established stronger protections across education, welfare, and family life. The Act's framework provides the backbone for our approach to AI safety research.

The Act sets out a set of core principles (Section 19) to which local authorities must have regard when supporting children. These principles guide our research methodology and shape how we evaluate AI systems designed for children.

Participation

Authorities must take into account the views, wishes, and feelings of the child and their parents. Our research centres children's lived experiences and ensures their perspectives inform how we evaluate AI systems.

Information

Children and parents must be provided with the information and support necessary to participate in decisions. We produce accessible research that helps families understand AI risks and make informed choices.

Collaboration

The Act requires education, health, and social care services to work together for the benefit of the child. Our research bridges disciplines, bringing together educators, technologists, and child welfare experts.

Best Possible Outcomes

Support must be designed to help the child achieve the best possible educational and other outcomes. We evaluate AI systems against this standard: not just for safety, but for whether they genuinely support children's development.