Brussels Effect 6: The Reach of AI Transparency Obligations Beyond Corporations
| Date(s) | Tuesday, 6 January 2026, 14:00-15:00 JST |
|---|---|
| Venue | ZOOM Webinar (Register here) |
| Registration | Pre-registration required (If you cannot attend on the day but would like to be informed of the YouTube link later, please register.) |
| Language | Japanese only |
| Abstract |
The EU AI Act, adopted in 2024, introduces transparency obligations (Article 50) requiring that users can clearly recognize when text, images, video or audio have been generated or edited by AI. These obligations cover, among others, the disclosure of AI-mediated interactions in chatbots, watermarking or metadata for synthetic content, notices regarding the use of emotion recognition and biometric categorisation systems, and labelling of deepfakes—rules that go straight to the heart of practice in the generative AI era.
Crucially, these obligations are not confined to a small group of major corporations or AI developers. The Act places duties not only on “providers” who place AI systems on the market, but also on “deployers” who integrate them into services and content offered to users in the EU. The reach of Article 50 thus extends to marketing, advertising and PR teams, media and production companies, platform and SaaS providers, and potentially also to freelance designers, video creators, influencers and sole proprietors who use AI in a commercial or professional capacity.
By contrast, purely private, non-professional “hobby” use of AI falls outside the scope of these legal obligations. Yet in practice, the line between private and professional use is often blurred: sponsored social media posts, online sales targeting EU customers, or freelance work that partially relies on AI can all raise questions about when AI use becomes “professional” in the sense of the Act. In this respect, the practical reach of Article 50 may be broader than many expect.
The EU is currently developing a Code of Practice (CoP) on transparency to operationalise Article 50 in day-to-day practice. While not legislation in itself, the CoP is expected to function as a de facto benchmark to help companies and individuals comply without excessive transaction costs. At the same time, those who choose to depart from the CoP and adopt their own bespoke solutions may need to engage in individual discussions with the EU AI Office—an issue that is particularly significant for SMEs and individual professionals.
This session is especially intended for those who have so far assumed that “this doesn’t concern us (or me).” It offers an opportunity to understand how far Article 50’s AI transparency obligations may in fact extend, and to consider what kind of preparation is appropriate in light of that reach.
References:
Related webinars: Session 1
|
| Program |
14:00-14:05
14:05-14:15
14:15-14:30
14:30-14:55 Panelists
14:55-15:00 |
| Organized by |
Tokyo College, The University of Tokyo
Institute for Future Initiatives, The University of Tokyo
Next Generation Artificial Intelligence Research Center, The University of Tokyo |
| Supported by |
The AI Safety Institute
The University of Osaka Research Center on Ethical, Legal and Social Issues
Japan Deep Learning Association |
| Contact |
tg-event@tc.u-tokyo.ac.jp
Tokyo College, the Institute for Future Initiatives and the Next Generation Artificial Intelligence Research Center, The University of Tokyo, collect personal information from all attendees; this information will not be disclosed to any third parties. |