AI Transparency Statement
How Crown Datalytics uses automation and AI within a human-governed delivery model, and where the boundaries lie.
How we use AI and automation
- To support drafting, analysis, triage, requirements structuring, transformation design and engineering acceleration.
- To help build governed agentic workflows where approval points, evidence trails and role boundaries are explicit.
- To improve internal delivery speed while keeping accountability with the responsible human reviewer.
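The approval points and evidence trails mentioned above can be illustrated with a minimal sketch. This is an illustrative assumption of how such a gate might look in code, not Crown Datalytics tooling; all names (`GovernedDraft`, `approve`, `release`) are hypothetical:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch: an AI-produced draft is held until a named human
# reviewer explicitly approves it, and every state change is recorded
# in an evidence trail so the review history is auditable.

@dataclass
class GovernedDraft:
    content: str               # AI-generated output awaiting review
    produced_by: str           # label for the tool/model that produced it
    approved: bool = False
    evidence_trail: list = field(default_factory=list)

    def log(self, event: str) -> None:
        # Append a timestamped entry to the audit trail.
        self.evidence_trail.append(
            (datetime.now(timezone.utc).isoformat(), event)
        )

    def approve(self, reviewer: str) -> None:
        # Only an explicit, attributed human action flips the gate.
        self.approved = True
        self.log(f"approved by {reviewer}")

    def release(self) -> str:
        # Refuse to release output that has not passed human review.
        if not self.approved:
            raise PermissionError("draft not approved by a human reviewer")
        self.log("released")
        return self.content


draft = GovernedDraft(content="proposed schema change", produced_by="assistant")
draft.log("draft created")
draft.approve(reviewer="j.smith")
result = draft.release()  # succeeds only because a reviewer approved it
```

The point of the sketch is that release is impossible without an attributed human approval, and that the trail records who approved what and when.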
Human oversight
Crown Datalytics does not treat AI output as inherently correct. Outputs should be reviewed, challenged and approved by an appropriately accountable human before they are relied on for production, compliance, security-sensitive or commercially significant decisions.
Client data and environments
Where AI or automation is used in connection with client environments, the approach should be governed by client-specific scope, access controls, confidentiality arrangements and approval rules. We aim to minimise unnecessary data exposure and align the automation model with the sensitivity of the client environment.
Automated decisions
We do not use the public website to make solely automated decisions about individuals that have legal or similarly significant effects without human involvement. If an engagement includes automated decision support, the relevant controls and approvals should be agreed in the engagement scope.
Limits of automated outputs
AI tools can omit context, reflect flawed assumptions, or produce plausible-sounding output that is wrong, so all automated output requires verification. Our operating position is that automation can improve speed and structure, but responsibility remains with the accountable human reviewer and the agreed delivery process.
Questions
If you want to understand how AI or automation is used in a specific Crown Datalytics engagement, contact info@crowndatalytics.com and we can explain the relevant controls, assumptions and operating boundaries.
