What Is CAUD? A Plain-Language Guide for Creators, Builders, and Policymakers
If you've spent time on the Useful Arts site, you may have noticed the term CAUD. It stands for Creative AI Use Disclosure — and it's the core of what we're building. But we've heard from visitors that it isn't immediately clear what it means or why it matters to them specifically.
This post is our attempt to fix that. Here's a plain-language explanation of CAUD — and what it means depending on who you are.
The simple version
CAUD is a label — like a nutrition label, but for creative work. It tells you:
- Whether AI was used in creating a piece of work
- How it was used (generated, assisted, edited, trained on)
- What data or creative material the AI drew from
Right now, there is no standard way to communicate any of this. A film might be partly AI-generated. A song might have been composed with an AI tool trained on other artists' work. A brand image might be fully synthetic. And audiences, collaborators, and platforms have no reliable way to know.
CAUD is our proposed answer to that gap.
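To make the three fields above concrete, here is a purely hypothetical sketch of what a machine-readable CAUD disclosure could look like. CAUD has not published a schema; the class name, field names, and use-type vocabulary below are illustrative assumptions, not the standard itself.

```python
from dataclasses import dataclass

# Hypothetical use-type vocabulary, taken from the three questions a
# CAUD label answers. The real standard may define these differently.
ALLOWED_USES = {"generated", "assisted", "edited", "trained_on"}

@dataclass
class CaudDisclosure:
    """Illustrative sketch of a CAUD-style disclosure record."""
    ai_used: bool   # Was AI used in creating the work at all?
    uses: list      # How it was used: generated / assisted / edited / trained_on
    sources: list   # What data or creative material the AI drew from

    def validate(self) -> bool:
        # Reject use types outside the assumed vocabulary.
        for use in self.uses:
            if use not in ALLOWED_USES:
                raise ValueError(f"unknown use type: {use}")
        # A disclosure that declares AI use should say how it was used.
        if self.ai_used and not self.uses:
            raise ValueError("AI use declared but no use types listed")
        return True

# Example: a song composed with an AI tool trained on licensed material.
disclosure = CaudDisclosure(
    ai_used=True,
    uses=["assisted", "trained_on"],
    sources=["licensed stock-music catalog"],
)
disclosure.validate()
```

Even this toy version shows the point of a standard: once the fields and vocabulary are fixed, platforms can check a disclosure mechanically instead of parsing free-form credits.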
If you're an artist or creator
You've probably already been asked — or wondered yourself — whether to disclose that you used AI in your work. The honest answer today is: there's no clear norm, no consistent expectation, and no standard language to do it with.
CAUD gives you a framework. It's not about shaming AI use — it's about making disclosure normal, consistent, and something creators control. Think of it the way a filmmaker credits their crew, or the way a musician lists their samples. Transparency builds trust. It protects authorship. And it opens the door to real conversations about value and attribution that creators deserve to be part of.
Useful Arts is gathering real-world input from creators right now — including students at LIM College — to understand what disclosure actually looks like in practice. Your experience is what shapes this standard.
If you build AI products
Transparency in AI isn't just an ethical question — it's becoming a business one. Regulators in the EU, US, and elsewhere are increasingly focused on AI disclosure. Users are starting to ask. And the companies that build trust early will have a significant advantage over those that address it reactively.
CAUD is being developed as infrastructure: a standardized disclosure framework that AI companies can adopt, integrate, and point to. Not as ideology, but as tooling. Working with organizations like ASCET, which is affiliated with the US National Institute of Standards and Technology, Useful Arts is helping shape what responsible AI use in creative contexts actually looks like at a policy and standards level.
If you're building AI products that touch creative work, we'd like to talk.
If you shape policy
Policymakers are increasingly being asked to regulate something they can't yet fully see. AI's role in creative production is moving faster than the frameworks designed to govern it — and the creative economy, which employs millions and generates enormous cultural and economic value, is absorbing that uncertainty in real time.
CAUD is designed to be policy-ready. It's a structured disclosure model built from community input, grounded in real creative practice, and developed in dialogue with standards bodies. Useful Arts is actively engaging with representatives connected to the New York State Assembly and with ASCET to ensure that what we're building can actually inform thoughtful, forward-looking approaches to AI transparency and creative rights.
If you're working on AI policy and want to understand what the creative community needs, we're a resource.
Where things stand
CAUD is an active initiative, not a finished product. We're in the phase of gathering real-world examples of disclosure, shaping the conversation about what a standard should include, and building the partnerships that will give it reach and legitimacy.
The goal — longer term — is something like Creative Commons for AI: a framework that's simple enough for any creator to use, rigorous enough for regulators to reference, and robust enough for platforms to integrate.
We'll be sharing more about what we're learning — including outputs from our student programs and our ongoing work with policy partners — in the months ahead. Subscribe below to follow along.

