As the public sector increasingly adopts generative AI tools to deliver services, inform decisions, and engage citizens, the responsibility to use these technologies ethically and transparently becomes paramount. While generative AI offers powerful capabilities, from improving communication to streamlining operations, it also introduces new risks: misinformation, deepfakes, copyright violations, and threats to privacy.
This talk focuses on how public institutions can lead by example in responsible AI adoption. We’ll explore how transparency standards, such as the Coalition for Content Provenance and Authenticity (C2PA) specification, can be used to label content as human- or AI-generated and to signal an opt-out from text and data mining. Doing so is crucial to combating disinformation, preserving trust in public communications, and asserting control over data.
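As a rough illustration of what such labeling looks like in practice, a C2PA manifest can carry both a provenance signal (how the content was created) and a training/data-mining opt-out. The fragment below is a simplified sketch based on the C2PA "actions" and "training and data mining" assertions; the claim generator name is a hypothetical placeholder, and a real manifest would be cryptographically signed and embedded in or referenced from the asset.

```json
{
  "claim_generator_info": [
    { "name": "example-public-agency-publisher", "version": "0.1.0" }
  ],
  "assertions": [
    {
      "label": "c2pa.actions",
      "data": {
        "actions": [
          {
            "action": "c2pa.created",
            "digitalSourceType": "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
          }
        ]
      }
    },
    {
      "label": "c2pa.training-mining",
      "data": {
        "entries": {
          "c2pa.ai_generative_training": { "use": "notAllowed" },
          "c2pa.data_mining": { "use": "notAllowed" }
        }
      }
    }
  ]
}
```

Here the `digitalSourceType` value marks the asset as AI-generated, while the `c2pa.training-mining` entries declare that it may not be used for generative AI training or data mining.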