
AI Nutrition Label
This project involved designing and implementing the AI Nutrition Label, a standardized, visually intuitive disclosure mechanism that addressed the critical challenge of ethical transparency for AI tools in educational technology. The Label provided the governance framework the organization needed to responsibly launch the Ignite AI suite across the Canvas platform while protecting student data privacy and institutional trust.
Client
Instructure
Sector
EdTech
Role
Lead Designer
When
Q2 2025
Duration
6 months

01
Context & Challenges
Context
Instructure was preparing for the landmark rollout of Ignite AI, a suite of powerful, integrated AI tools directly within the Canvas Learning Management System.
Challenges
This initiative presented a dual challenge: meeting the high demand for innovative AI features while addressing the critical governance need for trust and transparency. There was no standardized mechanism for administrators to evaluate the ethical, security, and data privacy risks of these new internal features and other third-party AI tools. The core challenge was creating a transparent framework that would allow Ignite AI to launch responsibly while protecting student data.

02
Problem & Solution
Problem
The launch of Ignite AI and other cutting-edge tools required immediate assurance for educational institutions regarding data handling and ethical use. Without that assurance, the rollout risked institutional resistance and eroded user trust.
Solution
Design and lead the development of the AI Nutrition Label, a visual, data-driven disclosure mechanism that communicated the ethical and operational DNA of every AI-powered product, including the flagship features of the new Ignite AI suite, to institutional decision-makers. The Label served as the compliance layer required for the major product rollout.


03
Discovery & Process
Discovery
The process began by navigating this “new and ambiguous territory” to define foundational ethical guidelines for technology implementation. Key steps included:
- Defining the essential categories of information required by school district administrators (e.g., Data Privacy, Model Type, Security Protocol) for new features like Ignite AI.
- Conducting stakeholder workshops with Legal, Product, and Engineering to align on a minimum standard of required disclosure, particularly for internally developed AI systems.
- Researching and synthesizing external ethical AI frameworks (e.g., NIST, regulatory guidelines) to create an internal, scalable compliance standard.
- Iteratively sketching and wireframing to determine the most effective visual hierarchy for presenting complex, technical information in a digestible format.
- Testing the design and content by placing prototype labels in the Emerging AI Library and gathering feedback from key audiences, including third-party providers and district administrators. This research was crucial for understanding how the AI Nutrition Label would be read and used, ensuring its effectiveness in building trust and clarifying information about AI-enabled products across the Instructure ecosystem.
Process
To bring this transparency to every AI product in the Emerging AI Library, we needed a low-cost way to create and display AI Nutrition Labels at scale. Following a thorough stakeholder review, we developed an internal AI Nutrition Label Creator tool, which allows our team to dynamically generate and publish standardized labels on the fly without significant development costs.
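As an illustration only, the hypothetical TypeScript sketch below shows one way a label-creator tool like this might turn the structured metadata a product team enters into a publishable label record. The interface names and fields are assumptions for the sake of the example, not the actual Instructure implementation.

```typescript
// Hypothetical sketch: turning submitted metadata into a publish-ready label.
// Names and fields are illustrative assumptions.

interface LabelMetadata {
  productName: string;
  modelType: string;          // e.g. "Large Language Model"
  dataUsage: string;          // how institutional data is handled
  trainsOnUserData: boolean;  // whether user data is used for model training
  securityProtocol: string;   // e.g. encryption and access controls
  lastReviewed: string;       // ISO date of the last compliance review
}

interface PublishedLabel extends LabelMetadata {
  id: string;
  publishedAt: string;
}

// Generate a standardized, publish-ready label record from entered metadata.
function createLabel(metadata: LabelMetadata): PublishedLabel {
  return {
    ...metadata,
    id: crypto.randomUUID(),
    publishedAt: new Date().toISOString(),
  };
}

// Example: publishing a label for a hypothetical AI feature.
const label = createLabel({
  productName: "Discussion Summaries",
  modelType: "Large Language Model",
  dataUsage: "Course discussion text, processed transiently",
  trainsOnUserData: false,
  securityProtocol: "Encrypted in transit and at rest",
  lastReviewed: "2025-05-01",
});

console.log(label);
```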

04
Refinement
The core of the design was a modular, easily scannable component (the “label”) that could be consistently applied across all AI products within the EdTech Marketplace and, critically, integrated directly with the Ignite AI feature documentation.
Key design decisions included:
- Categorical Grouping: Grouping disclosure points into color-coded sections (e.g., Data Usage, Ethical Guardrails) to aid comprehension.
- Component Design: Leveraging the existing InstUI design system to ensure the label was responsive and visually consistent with the broader Canvas/Instructure product suite.
- Implementation Strategy: Designing the label component to be fed by metadata, allowing partners and internal teams (like the Ignite AI product team) to input required information via a centralized portal; a hypothetical sketch of this metadata follows the list below.
- Testing Focus: Usability testing focused on administrator understanding and speed of decision-making.
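To make the metadata-driven decision concrete, the sketch below shows how flat disclosure points might be grouped into the label’s color-coded sections before rendering. The section names and fields are hypothetical; this is not the production schema or the InstUI component API.

```typescript
// Hypothetical sketch of the metadata-driven label component: flat disclosure
// points are grouped into the label's color-coded sections before rendering.

type SectionName = "Data Usage" | "Ethical Guardrails" | "Security";

interface DisclosurePoint {
  section: SectionName;
  term: string;   // e.g. "Model training"
  value: string;  // e.g. "No institutional data used for training"
}

// Group flat metadata into sections so the label component only renders
// already-structured data, keeping presentation consistent across products.
function groupBySection(points: DisclosurePoint[]): Map<SectionName, DisclosurePoint[]> {
  const sections = new Map<SectionName, DisclosurePoint[]>();
  for (const point of points) {
    const group = sections.get(point.section) ?? [];
    group.push(point);
    sections.set(point.section, group);
  }
  return sections;
}

// Example: two disclosure points entered through the centralized portal.
const grouped = groupBySection([
  { section: "Data Usage", term: "Model training", value: "No institutional data used for training" },
  { section: "Security", term: "Encryption", value: "Encrypted in transit and at rest" },
]);
```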

05
Delivery & Impact
Delivery
The AI Nutrition Label was designed, developed, and implemented as a mandatory disclosure layer for the launch and ongoing development of the Ignite AI suite across Canvas. It was also applied to all AI-enabled products in the Marketplace.
Impact
The solution established the organization as a leader in responsible AI and enabled the ethical, secure launch of a major product line, with a particular focus on protecting student privacy and data. The improved transparency is expected to lead to:
- Policy Control: Gives administrators the control and visibility needed to align AI features with their specific institutional policy and governance standards.
- Data Trust: Provides confidence-inspiring transparency by explicitly guaranteeing data privacy and ensuring institutional and student data is never used for AI model training without consent.
- Reduced Friction: Lowers barriers to the adoption of Ignite AI and creates a clear competitive advantage in the responsible adoption of new technologies.
