Safeguarding and safety

Professional AI designed for child protection, not entertainment

The crisis is here: Young people are turning to AI chatbots

New research reveals the urgent need for safe, monitored AI support in schools.

40% already using AI for support

Almost 4 in 10 young people aged 11-18 have turned to AI chatbots for advice, support or companionship. (OnSide Youth Charity, 5,035 respondents, November 2025)

19% find it easier than real people

Nearly 1 in 5 young people say it is easier to talk to AI than a real person. 9% feel embarrassed talking to adults. 6% have no one else to talk to.

76% exposed to harmful content

More than three quarters of young people have been exposed to upsetting content online, a rise of 7 percentage points on 2024. Fake news (+4 points), hate speech (+4 points), and sexual content (+6 points) are all rising.

Professional tool for safeguarding support

Quinly is not a consumer app adapted for schools. It is a professional tool designed from day one to support your safeguarding work with AI-powered crisis detection and UK compliance.

Constitutional AI safety

Every response filtered through Claude 4 Sonnet's Constitutional AI, specifically designed to protect vulnerable children, not maximise engagement.

Crisis detection and signposting

Detects 30 crisis categories. When a crisis is identified, Quinly suggests trusted adults first, then signposts to UK support services including Childline, Samaritans, CEOP, the Revenge Porn Helpline, and NHS mental health services.

Zero data retention

Stateless architecture means no conversation history, no personal data stored. Just anonymous aggregate analytics for your DSL dashboard.

Crisis detection and response

Real-time crisis identification with immediate professional signposting

30 crisis categories

AI and deepfake harm, suicide and self-harm, sexual abuse, county lines, trafficking, cyberbullying, eating disorders, domestic abuse, and more.

1,000+ trigger phrases

Covering British colloquialisms and youth slang. Context-aware to prevent false positives whilst catching genuine crises.

UK service signposting

Provides information about Childline (0800 1111), Samaritans (116 123), CEOP, StayAlive app, and NHS mental health services when appropriate.
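To illustrate the idea, here is a minimal sketch of phrase-based crisis detection with UK signposting. The category names, trigger phrases, and matching logic are illustrative assumptions, not Quinly's actual data or implementation; the helpline numbers (Childline 0800 1111, Samaritans 116 123) are the real published ones.

```python
# Illustrative sketch only: categories, phrases, and matching are assumptions,
# not Quinly's real trigger database or detection logic.
from dataclasses import dataclass
from typing import Optional

# Signposting text per category (helpline numbers are the published UK ones).
SIGNPOSTS = {
    "suicide_self_harm": "Please talk to a trusted adult. Childline: 0800 1111. Samaritans: 116 123.",
    "online_sexual_abuse": "Please tell a trusted adult. You can also report this to CEOP.",
}

# A real system would hold 1,000+ phrases covering British colloquialisms and slang.
TRIGGERS = {
    "suicide_self_harm": ["hurt myself", "end it all"],
    "online_sexual_abuse": ["shared my photos", "asked me to send pictures"],
}

@dataclass
class Detection:
    category: str
    signpost: str

def detect_crisis(message: str) -> Optional[Detection]:
    """Return the first matching crisis category with its signpost, or None."""
    text = message.lower()
    for category, phrases in TRIGGERS.items():
        if any(phrase in text for phrase in phrases):
            return Detection(category, SIGNPOSTS[category])
    return None
```

A naive substring match like this would flag song lyrics or reported speech; this is why the description above stresses context-aware matching to prevent false positives whilst catching genuine crises.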

Anti-grooming architecture

Stateless design prevents harmful relationships from forming

Stateless design

Zero conversation history stored between sessions. Cannot build the persistent relationships that enable grooming.

No personal data

Anonymous and aggregate analytics only. No child profiles, no behavioural tracking, no data to breach.

Privacy by design

100% UK DPA 2018 and Children's Code compliant. Zero data retention is not a feature; it is our architecture.
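The shape of a stateless, privacy-by-design request cycle can be sketched as below. The function and field names are hypothetical, not Quinly's actual API; the point is that only an anonymous category tally survives the call, never the message or any child identifier.

```python
# Hypothetical sketch of a stateless message cycle: the only thing retained
# across calls is an anonymous per-category tally for aggregate analytics.
from collections import Counter

# Anonymous aggregate counts: no user IDs, no message text, no child profiles.
crisis_tally: Counter = Counter()

def classify(message: str) -> str:
    """Placeholder classifier for the sketch; returns a category or 'none'."""
    return "suicide_self_harm" if "hurt myself" in message.lower() else "none"

def handle_message(message: str) -> str:
    """Answer one message; retain nothing identifying once the call returns."""
    category = classify(message)
    if category != "none":
        crisis_tally[category] += 1  # count the category only, never the content
    # The message itself is not written to any store; no session links calls together.
    return "I'm here to listen. Is there a trusted adult you could talk to?"
```

Because no conversation history or profile persists between sessions, there is nothing for a grooming-style relationship to attach to, and no personal data to breach.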

Professional oversight

Real-time dashboard for your safeguarding team

Real-time dashboard

Designated Safeguarding Leads see crisis patterns, sentiment trends, and emerging risks across the school community.

Severity scoring

CRITICAL, HIGH, MEDIUM, LOW classification helps DSLs triage concerns and allocate resources effectively.

Anonymous aggregate data

Insights without identification. Your safeguarding team gets the intelligence they need whilst protecting pupil privacy.

Evidence-based safety

Real-world validation: Quinly has supported 3,000+ child conversations (July 2025 to January 2026) with:

  • Zero incidents of harmful content
  • Zero grooming behaviours
  • Zero inappropriate responses
  • 100% appropriate crisis signposting to UK support services

This is what child-first design looks like in practice.