Grace Yee, Senior Director of Ethical Innovation (AI Ethics and Accessibility) at Adobe – Interview Series

Grace Yee is the Senior Director of Ethical Innovation (AI Ethics and Accessibility) at Adobe, where she is responsible for driving ethics efforts globally across the organization and developing processes, tools, training, and other resources to help ensure Adobe's industry-leading AI innovation continues to evolve in line with Adobe's core values and ethical principles. Grace advances Adobe's commitment to building and using technology responsibly, centering ethics and inclusivity in all the company's work developing AI. As part of this work, Grace oversees Adobe's AI Ethics Council and Review Board, which makes recommendations to help guide Adobe's development teams and reviews new AI features and products to ensure they comply with Adobe's principles of accountability, responsibility, and transparency. These principles help ensure Adobe brings AI-driven capabilities to market while reducing harmful and biased outcomes. Grace also works with policy teams to drive advocacy and help shape public policy, laws, and regulations around AI to benefit society.

As part of Adobe’s commitment to accessibility, Grace helps ensure that Adobe’s products are inclusive and accessible to all users so that anyone can create, interact, and engage in digital experiences. Under her leadership, Adobe works with government groups, industry associations, and user communities to promote and advance accessibility policies and standards and drive impactful industry solutions.

Can you walk us through Adobe’s journey in shaping AI ethics over the past five years? What key milestones define this evolution, especially in the face of rapid developments such as generative artificial intelligence?

Five years ago, we formalized our AI ethics process around our AI ethics principles of accountability, responsibility, and transparency, which serve as the foundation for our AI ethics governance process. We assembled a diverse, cross-functional team of Adobe employees from around the world to develop actionable principles that would stand the test of time.

Since then, we have developed a robust review process to identify and mitigate potential risks and biases early in the AI development cycle. This multi-part assessment helps us identify and address features and products that may perpetuate harmful biases and stereotypes.

As generative AI emerged, we adapted our AI ethics assessment to address new ethical challenges. This iterative process allows us to stay ahead of potential issues and ensure our AI technology is developed and deployed responsibly. Our commitment to continuous learning and collaboration with teams across the company is critical to maintaining the relevance and effectiveness of our AI ethics program, ultimately enhancing the experience we provide our customers and promoting inclusivity.

How do Adobe’s AI ethical principles (Accountability, Responsibility, and Transparency) translate into daily operations? Can you share some examples of how these principles guide Adobe’s AI projects?

We stay true to Adobe’s commitment to ethical AI across our AI-driven features, implementing robust engineering practices to ensure responsible innovation while continually gathering feedback from employees and customers to make necessary adjustments.

New AI features undergo thorough ethical assessments to identify and mitigate potential biases and risks. When we launched Adobe Firefly, our family of generative AI models, we evaluated it to reduce the generation of content that could perpetuate harmful stereotypes. This assessment is an iterative process developed in close collaboration with the product team and incorporates feedback and learning to remain relevant and valid. We also conduct risk discovery exercises with product teams to understand potential impacts so we can design appropriate testing and feedback mechanisms.

How is Adobe addressing issues related to AI bias, especially in tools used by a globally diverse user base? Can you give an example of how you identify and mitigate bias in a specific AI feature?

We work closely with our product and engineering teams to continually improve our AI ethics assessment and review process. The ethical assessment of AI we had a few years ago was different from the assessment we have now, and I expect more changes to come. As technologies like Firefly evolve, this iterative approach allows us to incorporate new knowledge and address emerging ethical issues.

For example, when we added multilingual support to Firefly, my team noticed that it wasn't producing the expected output and that some words were being blocked inadvertently. To alleviate this problem, we worked closely with our internationalization team and native speakers to expand our model to cover country-specific terminology and connotations.
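To make that failure mode concrete, here is a minimal sketch of the kind of over-blocking described above. Everything in it is hypothetical: the word lists, matching rules, and example prompt are illustrations, not Adobe's actual Firefly moderation logic. Substring matching against a single English-centric list blocks innocent words that merely contain a listed term, while per-locale lists curated with native speakers and matched on whole words avoid the false positive.

```python
# Hypothetical sketch of prompt over-blocking; not Adobe's actual logic.
import re

# Naive approach: one English-centric list, substring matching.
NAIVE_BLOCKLIST = {"ass"}

def naive_filter(prompt: str) -> bool:
    """Block the prompt if any listed term appears anywhere as a substring."""
    lowered = prompt.lower()
    return any(term in lowered for term in NAIVE_BLOCKLIST)

# An innocent word that merely contains a listed term gets blocked.
print(naive_filter("a classic assassin film poster"))  # True: false positive

# Locale-aware approach: per-language lists, curated separately for each
# locale's terminology and connotations, matched against whole words only.
BLOCKLISTS_BY_LOCALE = {
    "en": {"ass"},
    "fr": set(),  # built with native speakers, not copied from English
}

def locale_aware_filter(prompt: str, locale: str) -> bool:
    """Block only whole-word matches from the prompt's own locale list."""
    terms = BLOCKLISTS_BY_LOCALE.get(locale, set())
    words = set(re.findall(r"\w+", prompt.lower()))
    return bool(words & terms)

print(locale_aware_filter("a classic assassin film poster", "en"))  # False
```

The design point is that moderation lists cannot simply be translated or reused across languages; each locale needs its own curated list and matching rules, which is why native-speaker review matters.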

Our commitment to continually improving our evaluation methods as technology advances helps Adobe balance innovation with ethical responsibility. By fostering inclusive and responsive processes, we ensure our AI technology meets the highest standards of transparency and integrity, allowing creators to use our tools with confidence.

In your role in shaping public policy, how is Adobe navigating the intersection between rapidly changing AI regulations and innovation? What role does Adobe play in developing these regulations?

We actively work with policymakers and industry groups to help develop policies that balance innovation with ethical considerations. Our discussions with policymakers focus on our approach to AI and the importance of developing technology to enhance the human experience. Regulators are looking for practical solutions to current challenges, and presenting frameworks like our AI ethics principles, which were collaboratively developed and are applied consistently across our AI capabilities, promotes more productive discussions. It is crucial to bring concrete examples that show how our principles work in practice and demonstrate real-world impact, rather than talking through abstract concepts.

What ethical considerations does Adobe prioritize when sourcing training data, and how does it ensure that the datasets used are ethical and robust enough for AI needs?

At Adobe, we prioritize several key ethical considerations when sourcing training data for our AI models. As part of our effort to design Firefly to be commercially safe, we trained it on a dataset of licensed content, such as Adobe Stock, and public domain content whose copyright has expired. We also focus on the diversity of the dataset to avoid reinforcing harmful biases and stereotypes in model output. To achieve this, we work with different teams and experts to review and curate the data. By adhering to these practices, we strive to create AI technologies that are not only powerful and effective, but also ethical and inclusive for all users.

How important do you think transparency is in communicating to users how Adobe’s AI systems like Firefly are trained and what data they use?

Transparency is critical when communicating to users how Adobe's generative AI features, such as Firefly, are trained, including the types of data used. It builds trust and confidence in our technology by ensuring users understand the processes behind our generative AI development. By making our data sources, training methods, and existing ethical safeguards public, we empower users to make informed decisions about how to interact with our products. This transparency not only aligns with our core AI ethics principles, but also fosters collaborative relationships with users.

As AI, and generative AI in particular, continues to expand, what do you think are the most significant ethical challenges companies like Adobe will face in the near future?

I believe that the most significant ethical challenges for companies like Adobe are reducing harmful bias, ensuring inclusivity, and maintaining user trust. The potential for AI to unintentionally perpetuate stereotypes or generate harmful and misleading content is an issue that requires ongoing vigilance and strong safeguards. For example, with recent advances in generative AI, it is easier than ever for "bad actors" to create deceptive content, spread misinformation, and manipulate public opinion, undermining trust and transparency.

To address this problem, Adobe established the Content Authenticity Initiative (CAI) in 2019 to build a more trustworthy and transparent digital ecosystem for consumers. The CAI implements our solution for establishing trust online, called Content Credentials. Content Credentials include "ingredients," or important pieces of information such as the name of the creator, the date an image was created, the tools used to create it, and any edits made along the way. This enables users to create a digital chain of trust and authenticity.
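As a rough sketch of the idea, the snippet below models the "ingredients" described above as a simple data structure. The class and field names are hypothetical and heavily simplified; the actual Content Credentials format is an open standard and does not look like this. The point is only to show how creator, date, tools, and edit history combine into a provenance trail that can travel with an image.

```python
# Illustrative sketch of the information a Content Credential carries,
# based on the "ingredients" described above. All names are hypothetical;
# the real Content Credentials format differs from this sketch.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Edit:
    tool: str    # tool used for this edit
    action: str  # what the edit did, e.g. "crop" or "color-adjust"

@dataclass
class ContentCredential:
    creator: str      # name of the creator
    created_on: date  # date the image was created
    tools: list[str]  # tools used to create the image
    edits: list[Edit] = field(default_factory=list)  # edits along the way

    def summary(self) -> str:
        """Human-readable provenance trail to display alongside the image."""
        trail = " -> ".join(f"{e.action} ({e.tool})" for e in self.edits)
        return (f"Created by {self.creator} on {self.created_on} "
                f"with {', '.join(self.tools)}; edits: {trail or 'none'}")

cred = ContentCredential(
    creator="A. Example",
    created_on=date(2024, 3, 1),
    tools=["Adobe Photoshop"],
    edits=[Edit(tool="Adobe Photoshop", action="crop")],
)
print(cred.summary())
```

Each edit appending to the record, rather than overwriting it, is what makes the credential a chain: a viewer can walk the history from creation to the current state.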

As generative AI continues to expand, it will become even more important to promote widespread adoption of Content Credentials to restore trust in digital content.

What advice would you give to other organizations just starting to consider an ethical framework for AI development?

My advice is to start by establishing clear, simple, and practical principles to guide your work. Too often, I see companies and organizations focus on principles that look good in theory but are not practical. Our principles have stood the test of time because we designed them to be actionable. When we evaluate AI-powered features, our product and engineering teams know what we are looking for and what criteria we expect them to meet.

I also recommend that organizations go into this process knowing that it will be iterative. I may not know what Adobe will invent in five or ten years, but I do know that we will refine our assessments to meet these innovations and the feedback we receive.

Thanks for the great interview. Readers who want to learn more should visit Adobe.
