Compliance & Safety Standards

Meeting the DfE's new Generative AI Product Safety Standards and UK legal requirements for child safety

New for January 2026: Department for Education Standards

DfE Generative AI Product Safety Standards

Quinly 2.0 is designed to meet the Department for Education's new Generative AI Product Safety Standards for educational settings.

Stated Purpose

  • Use Case: Digital Assistant (Category 4)
  • Target Demographic: Children aged 7-17
  • Educational Context: Crisis support and wellbeing signposting
  • Evidence Base: 3,289 pilot conversations

Filtering Standards

  • Multi-layer Constitutional AI content filtering
  • Age-appropriate responses throughout
  • Multilingual moderation (5 languages)
  • Detection across 30 crisis categories (see the sketch below)
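
The sketch below illustrates what a multi-layer approach to message filtering and crisis-category tagging can look like. It is illustrative only: the category names and patterns are simplified examples for this page, not Quinly's production rules, which rely on the Constitutional AI model rather than keyword matching.

```typescript
// Illustrative only: a minimal multi-layer filtering sketch. The category names
// and patterns are hypothetical, not Quinly's production rules.

type FilterResult =
  | { allowed: true; crisisCategories: string[] }
  | { allowed: false; reason: string };

// Layer 1 (hypothetical): crisis-category patterns used to tag, not block, messages.
const CRISIS_PATTERNS: Record<string, RegExp> = {
  self_harm: /\b(hurt myself|self[- ]harm)\b/i,
  bullying: /\b(bullied|bullying)\b/i,
};

// Layer 2 (hypothetical): clearly disallowed requests that are refused outright.
const BLOCKED_PATTERNS: RegExp[] = [/\bhow to make a weapon\b/i];

function filterChildMessage(message: string): FilterResult {
  // Refuse clearly disallowed content before it reaches the model.
  if (BLOCKED_PATTERNS.some((pattern) => pattern.test(message))) {
    return { allowed: false, reason: "blocked_content" };
  }
  // Tag crisis categories so the response can signpost appropriate support.
  const crisisCategories = Object.entries(CRISIS_PATTERNS)
    .filter(([, pattern]) => pattern.test(message))
    .map(([category]) => category);
  return { allowed: true, crisisCategories };
}

// Example: a message is tagged, not blocked, so support can be signposted.
console.log(filterChildMessage("I'm being bullied at school"));
// -> { allowed: true, crisisCategories: ["bullying"] }
```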

Monitoring & Reporting

  • Designated Safeguarding Lead (DSL) configuration required at setup
  • Aggregate safeguarding reports by category
  • Usage duration tracking per session
  • Real-time crisis signposting to UK services (see the signposting sketch below)
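
As a simple illustration of real-time signposting, the sketch below maps detected crisis categories to UK support services. The phone numbers shown are the services' well-known public lines; the category names and routing logic are hypothetical rather than Quinly's actual configuration.

```typescript
// Illustrative only: a hypothetical mapping from detected crisis categories to
// UK signposting text. The category names and routing logic are assumptions.

const UK_SIGNPOSTING: Record<string, string> = {
  self_harm: "You can call Childline free on 0800 1111, any time, day or night.",
  suicidal_thoughts: "Samaritans are there to listen 24/7 on 116 123.",
  default: "Childline (0800 1111) and Samaritans (116 123) are always there to listen.",
};

// Return the signposting text for the first recognised category, or a general
// UK support message when no specific category was detected.
function signpost(categories: string[]): string {
  const match = categories.find((category) => category in UK_SIGNPOSTING);
  return UK_SIGNPOSTING[match ?? "default"];
}

console.log(signpost(["self_harm"])); // Childline signposting
console.log(signpost([]));            // general UK support signposting
```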

Security & Safeguards

  • Constitutional AI jailbreak protection
  • Attack pattern monitoring and rate limiting (see the sketch after this list)
  • Role-based administrator permissions
  • Stateless architecture prevents exploitation
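
To illustrate the kind of safeguard that sits in front of a stateless chat endpoint, here is a minimal rate-limiting sketch. The window size, request cap, and session key are assumptions for illustration, not Quinly's production values.

```typescript
// Illustrative only: a minimal in-memory rate limiter in front of a stateless
// chat endpoint. Window size, request cap, and key choice are assumptions.

const WINDOW_MS = 60_000;  // 1-minute window (assumed)
const MAX_REQUESTS = 20;   // per-session cap within the window (assumed)

const windows = new Map<string, { start: number; count: number }>();

function allowRequest(sessionId: string, now: number = Date.now()): boolean {
  const current = windows.get(sessionId);
  // Start a fresh window if none exists or the previous one has expired.
  if (!current || now - current.start >= WINDOW_MS) {
    windows.set(sessionId, { start: now, count: 1 });
    return true;
  }
  current.count += 1;
  return current.count <= MAX_REQUESTS; // beyond the cap, reject the request
}

// A burst of 21 requests from one session: the 21st within the window is refused.
const burst = Array.from({ length: 21 }, () => allowRequest("session-123"));
console.log(burst[0], burst[20]); // true false
```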

Privacy-First Approach: Quinly maintains child trust through confidential conversations while providing DSLs with aggregate safeguarding intelligence. This balances monitoring requirements with the child psychology principle that anonymity enables disclosure.
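
The sketch below shows, in simplified form, how aggregate safeguarding intelligence can be produced from session-level tags without retaining any conversation content. The record shape and field names are hypothetical, not Quinly's actual reporting schema.

```typescript
// Illustrative only: aggregate safeguarding reporting from session-level tags.
// The record shape and field names are hypothetical, not Quinly's schema.

interface SessionSummary {
  crisisCategories: string[]; // category tags only; no message text is stored
  durationSeconds: number;
}

interface SafeguardingReport {
  totalSessions: number;
  averageDurationSeconds: number;
  categoryCounts: Record<string, number>;
}

function buildReport(sessions: SessionSummary[]): SafeguardingReport {
  const categoryCounts: Record<string, number> = {};
  let totalDuration = 0;
  for (const session of sessions) {
    totalDuration += session.durationSeconds;
    for (const category of session.crisisCategories) {
      categoryCounts[category] = (categoryCounts[category] ?? 0) + 1;
    }
  }
  return {
    totalSessions: sessions.length,
    averageDurationSeconds: sessions.length ? totalDuration / sessions.length : 0,
    categoryCounts,
  };
}

// A DSL dashboard would see counts by category and duration, never transcripts.
console.log(
  buildReport([
    { crisisCategories: ["bullying"], durationSeconds: 420 },
    { crisisCategories: [], durationSeconds: 180 },
  ]),
);
```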

✓ 100% Compliant with UK Legal Requirements

Quinly 2.0 meets all mandatory legal requirements for child-facing digital services in the UK:

✓ UK Children's Code (Age-Appropriate Design Code)

Full compliance with the ICO's statutory Code of Practice for online services likely to be accessed by children. This is a legal requirement under UK data protection law.

  • Data minimisation and privacy by design (see the redaction sketch after this list)
  • Age-appropriate application of settings
  • Transparency and child-friendly privacy information
  • Harmful content protections
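
As an illustration of data minimisation in practice, the sketch below shows a simple redaction pass that strips common personal identifiers before anything is logged or reported. The patterns are deliberately narrow examples; production redaction needs much broader coverage.

```typescript
// Illustrative only: a simple redaction pass supporting data minimisation.
// The patterns below are deliberately narrow examples, not full coverage.

const REDACTIONS: Array<[RegExp, string]> = [
  [/[\w.+-]+@[\w-]+\.[\w.]+/g, "[email removed]"],                       // email addresses
  [/\b(?:\+44\s?7\d{3}|07\d{3})\s?\d{3}\s?\d{3}\b/g, "[phone removed]"], // UK mobile numbers
];

// Apply each redaction pattern in turn before any text is stored or shared.
function redactPII(text: string): string {
  return REDACTIONS.reduce(
    (redacted, [pattern, replacement]) => redacted.replace(pattern, replacement),
    text,
  );
}

console.log(redactPII("Email me at sam@example.com or call 07123 456 789"));
// -> "Email me at [email removed] or call [phone removed]"
```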

✓ UK GDPR & Data Protection Act 2018

Full compliance with UK data protection legislation, including special protections for children's data.

  • Complete Data Protection Impact Assessment (DPIA)
  • Lawful basis for processing (Article 6)
  • Special category data safeguards (Article 9)
  • GDPR Article 8 compliance for children's consent

Beyond Legal Compliance: Children & AI Design Code

While meeting all legal requirements, we're also pursuing alignment with voluntary best practice standards.

What is the Children & AI Design Code?

The Children & AI Design Code is a comprehensive framework developed by the 5Rights Foundation that sets best practice standards for AI systems that children interact with. While not currently a legal requirement in the UK, it represents the gold standard for ethical, safe, and child-appropriate AI design.

The Code covers critical areas including:

  • Safety: Protecting children from harmful content and interactions
  • Privacy: Minimising data collection and ensuring transparency
  • Fairness: Ensuring non-discriminatory AI systems
  • Transparency: Making AI understandable to children and families
  • Accountability: Clear governance and oversight structures
  • Participation: Involving children in design and testing

Why Quinly is pursuing this voluntary standard

At Quinly Ltd, we believe that building the safest possible AI for children isn't optional; it's essential. The Children & AI Design Code is not a legal requirement, but we are committed to meeting it because:

  • Children deserve the highest level of protection when using AI systems
  • Schools and organisations need confidence in the safety and ethics of tools they deploy
  • Proactive compliance demonstrates our commitment to child welfare beyond regulatory minimums
  • We want to lead the industry in responsible AI development for vulnerable populations

Children & AI Design Code Progress

Quinly 2.0 is currently at ~70% alignment with the voluntary Children & AI Design Code, with active work underway to achieve full compliance by Q2 2026.

Technical Safety: ✓ Achieved

  • Comprehensive crisis detection
  • Constitutional AI safety guardrails
  • Content filtering and PII redaction
  • Zero data retention architecture

Real-World Validation: ✓ Achieved

  • 3,000+ child conversations (July 2025 - Jan 2026)
  • Field testing across multiple schools
  • 100% system uptime
  • Comprehensive usage monitoring

Privacy & Compliance: ✓ Achieved

  • Full DPIA completed
  • UK Children's Code compliant
  • GDPR Article 8 compliant
  • Child-appropriate privacy notice

Participatory Research: ⚠ In Progress

Formalising structured qualitative feedback from children and guardians through focus groups and interviews with proper consent frameworks.

Documentation & Transparency: ⚠ In Progress

Packaging existing evidence into formal monitoring documentation and preparing transparency reports for publication.

Governance Formalisation: → Planned

Documenting formal governance structure with defined expert roles (AI Systems Expert, Age-Appropriate Expert, Child Rights Expert), Senior Accountable Leader designation, and formal redress/complaints process for children.

How Quinly Differs from Entertainment Chatbots

Not all AI chatbots are the same. Here's how Quinly's professional safeguarding design differs from consumer entertainment chatbots:

Aspect                 | Entertainment Chatbots          | Quinly 2.0
Primary Purpose        | Engagement, role-play, revenue  | Child safeguarding, crisis support
AI Safety Model        | Varied, engagement-optimised    | Claude 4 Sonnet (Constitutional AI)
Data Retention         | Persistent conversation history | Zero (stateless)
Crisis Response        | May encourage harmful acts      | Immediate professional referral
Professional Oversight | None (consumer app)             | Real-time DSL dashboard
UK Compliance          | Reactive (post-incident)        | Designed-in from day one
Age Assurance          | Self-declared (easily bypassed) | Institutional deployment
Content Filtering      | Basic (often failed)            | Multi-layer Constitutional AI

Evidence-Based Safety

Real-world validation: Quinly has supported 3,289 child conversations (July 2025 to January 2026) with:

  • Zero incidents of harmful content
  • Zero grooming behaviours
  • Zero inappropriate responses
  • 100% appropriate crisis signposting to Childline, Samaritans, and UK support services

Compliance Documentation & Evidence

We believe in radical transparency. Download our compliance documentation to see the detailed work behind Quinly 2.0:

Data Protection Impact Assessment

Full DPIA following ICO guidance, documenting all privacy safeguards and risk mitigations.

Download DPIA

Quinly Basic Usage Reports

Real pilot data: 3,000+ child conversations from July 2025 to January 2026 across 6 schools.

Download Reports

Children & AI Design Code

The complete framework from 5Rights Foundation that we're working to fully align with.

Download Code

Our Ongoing Commitment

Quinly Ltd is committed to achieving 100% compliance with the Children & AI Design Code by Q2 2026. We will:

  • Publish quarterly transparency reports on our compliance progress
  • Conduct formal participatory research with children and families
  • Maintain our existing technical safeguards and operational monitoring
  • Document our governance structures and accountability frameworks
  • Seek external validation and auditing of our practices

Questions about our compliance work? Contact us at hello@quinly.ai