Goodness All the Way Down: Designing Ethical, Empathetic (AI) Education

Megan Workmon Larsen
16 min read · Jan 15, 2025


Created with Adobe Firefly.

When I was little, I was obsessed with my mom’s brown leather lounge chair. I spent hours spinning in circles, looking at the world upside down for a new perspective, likely singing silly songs the entire time alongside my herd of Breyer horses wearing polished rocks on their heads and rocking the latest in Barbie’s fashion. The chair’s leather was a bit cracked from years and years of use. I loved it so much that I eventually broke the footstool in some overenthusiastic dramatic endeavor. I loved that chair — it was joy, fun, comfort, and learning all in one.

What I did not know was that this chair, a Danish modern chair inspired by the iconic Eames lounge chair and ottoman design, embodied a philosophy of timelessness, joy, and functionality.

Charles and Ray Eames were iconic mid-century designers whose work spanned furniture, films, and exhibitions. They were known for their innovative approach, combining functionality, joy, and human-centered thinking. Their philosophy — ‘there has to be goodness all the way down’ — emphasized care and integrity in every layer of design. This idea reflects a commitment to excellence — not only in the visible outcomes but also in the unseen layers. Every choice, from material to method, mattered.

In AI education, or really just education, this philosophy offers a profound reminder: what lies beneath — algorithms, data, frameworks, learning outcomes, assignments — shapes outcomes as much as what we see. If we want AI to empower rather than alienate, we must infuse every layer with purpose, equity, and humanity. There needs to be goodness all the way down.

But, First, I Am In the Trough of Disillusionment *

Or, please take this blog with a grain of salt — I am currently over the initial hype of AI.

Working in AI education often feels like standing at the intersection of extraordinary potential and overwhelming complexity. AI is touted as a transformative solution, promising personalized learning, scalable access, and data-driven insights. Yet too often, systems fail to account for nuance, equity becomes an afterthought, and innovation outpaces the empathy and ethics required to ground it. I find myself asking: How do we move beyond surface-level excitement to create meaningful, human-centered impact at every level — from individual classrooms to institutional strategies? How do we design systems that connect rather than alienate, that build trust and scale sustainably rather than chasing easy, short-term gains?

A recent design lecture on the timelessness of the Eames philosophy offered clarity. Charles and Ray Eames approached design with care and integrity, balancing elegance with practicality and creativity with humanity. This principle resonates deeply as I reflect on AI education. It challenges us to consider not just what we build, but how and why.

Amid the fatigue of constant innovation, the Eameses’ philosophy offers a counterweight. It reminds me that even in the most complex systems, goodness is found in care at every layer. This belief is my compass as I strive to infuse every decision and interaction with purpose, humanity, and connection — not just within individual classrooms but also in designing institutional strategies for AI adoption. By applying systems thinking, we can build educational frameworks that align ethical design with empathetic scalability.

The following sections explore six design principles inspired by the Eames philosophy, each paired with a course activity example and considerations for faculty, AI enthusiasts, skeptics, and leaders. Together, these principles focus on the layers and interconnectedness required to build systems for AI education that balance innovation, equity, and impact at scale.


Integrity Across Scales

Created with Adobe Firefly.

The Eameses demonstrated that even the smallest design decisions contribute to the larger whole, a principle equally vital in AI education. Integrity across scales means ensuring that every element of the educational process — from tools to techniques to learning outcomes — aligns with ethical, human-centered values. This principle reminds us that the choices we make at the smallest levels ripple outward to shape the broader impact of teaching and learning.

Course Example: Design a “What If?” activity where students use an AI tool to simulate alternate historical scenarios, such as “What if the Apollo 11 moon landing had been broadcast in virtual reality?”, “What if the Mongol Empire had fully integrated Europe into its rule?”, “What if Marie Curie had lived to witness and influence the Manhattan Project?”, or “What if Charles and Ray Eames had developed the first personal computer?”

  • Begin by discussing how the AI generates its responses, including the datasets and assumptions underpinning its outputs.
  • Ask students to analyze the integrity of the AI’s small-scale decisions, such as the selection of data or interpretation of context, and how these influence the larger narrative.
  • Encourage students to critically evaluate the human and ethical implications of the AI-generated scenarios.

This activity teaches students to think critically about the layers of design within AI tools and how foundational choices shape their broader educational value.
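For instructors who want to make the model’s assumptions part of the discussion, here is a minimal sketch, assuming access to the OpenAI Python SDK and an API key in the OPENAI_API_KEY environment variable; the model name is a placeholder. It asks for a scenario plus a labeled list of the assumptions behind it, giving the class something concrete to interrogate.

```python
# A minimal sketch: generate a "What if?" scenario that also surfaces its own
# assumptions, so students can interrogate how the output was constructed.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SCENARIO = "What if Charles and Ray Eames had developed the first personal computer?"

prompt = (
    f"Write a short alternate-history sketch for: {SCENARIO}\n"
    "Then add a section titled 'Assumptions' listing the historical facts you "
    "relied on, the assumptions you made, and where your account is most "
    "likely to be biased or uncertain."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name; use whatever your course has access to
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

Students can then compare the model’s self-reported assumptions against their own analysis of its small-scale decisions.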

For Faculty: Maintain integrity in your teaching by connecting everyday decisions to broader learning objectives.

  • Use AI tools in ways that reflect ethical and human-centered principles, ensuring they enhance rather than distract from your students’ learning experiences.
  • Regularly assess whether the AI tools you employ are aligned with your course goals and provide meaningful value.
  • Model transparency by discussing the origins and limitations of the tools you use, emphasizing how small choices impact overall outcomes.

For AI Enthusiasts: Harness your enthusiasm to advocate for the thoughtful, principled use of AI across educational contexts.

  • Share examples of how AI has contributed to meaningful learning by respecting integrity at all levels.
  • Highlight the importance of aligning AI’s capabilities with the specific needs of students and learners, ensuring that technological solutions do not overshadow human connection.
  • Encourage others to critically examine how foundational AI design choices (e.g., data sources or algorithms) shape its broader impact.

For AI Skeptics: Your skepticism is essential for holding AI tools accountable and ensuring their alignment with ethical standards.

  • Ask whether AI tools demonstrate integrity in their design, from how they source data to how they generate results.
  • Highlight instances where small design flaws can lead to larger ethical or educational problems, encouraging a focus on principled decision-making.
  • Advocate for a balanced approach where AI tools are only used when they genuinely enhance learning outcomes.

For Leaders: Ensure integrity across scales by embedding ethical principles into institutional strategies for AI implementation.

  • Develop policies that require all AI tools to be vetted for alignment with educational values, equity, and accessibility.
  • Facilitate workshops or training programs that emphasize how small-scale decisions, such as data use, affect large-scale educational goals.
  • Regularly review how AI systems are functioning across the institution, ensuring they consistently reflect integrity at every level.

How do the tools and methods you use in teaching reflect your broader educational values? What small changes could you make to ensure alignment between day-to-day decisions and long-term ethical goals?

Transparency as a Design Principle

Created with Adobe Firefly.

The Eameses valued transparency, whether in the visible structure of a chair or the layered storytelling of Powers of Ten. In AI education, transparency is not just a feature but a necessity. It fosters trust by revealing how tools function, empowering educators and learners to critically engage with the systems they use. By prioritizing transparency, we can ensure that AI serves as a tool for understanding rather than a source of confusion.

Course Example: Host a “How AI Thinks” session to help students explore the decision-making process behind an AI tool.

  • Give students the same input prompt to submit to an AI tool, such as ChatGPT, and have them compare the responses they receive.
  • Guide them to analyze how the AI’s algorithms interpret data, make decisions, and reflect biases.
  • Introduce basic concepts of model training, demonstrating how datasets shape AI outputs.

This session allows students to “see behind the curtain,” equipping them with the critical skills needed to question AI outputs effectively.
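One way to stage that “behind the curtain” moment is to run the identical prompt at several sampling temperatures and compare the outputs. Below is a minimal sketch, again assuming the OpenAI Python SDK, OPENAI_API_KEY, and a placeholder model name.

```python
# A minimal sketch for the "How AI Thinks" session: the same prompt, sampled at
# different temperatures, makes the model's variability visible and discussable.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

PROMPT = "In two sentences, explain why the sky is blue."

for temperature in (0.0, 0.7, 1.2):
    response = client.chat.completions.create(
        model="gpt-4o-mini",      # placeholder model name
        messages=[{"role": "user", "content": PROMPT}],
        temperature=temperature,  # higher values allow more variable sampling
    )
    print(f"--- temperature={temperature} ---")
    print(response.choices[0].message.content)
    print()
```

The spread between the responses becomes the discussion artifact: why did the outputs differ, and what does that reveal about how the model generates text?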

For Faculty: Transparency in teaching with AI begins with clear communication and ethical engagement.

  • Explain how AI tools used in your courses function, including their strengths, limitations, and biases.
  • Share guidelines for when and how students should use AI tools in assignments.
  • Model critical thinking by openly critiquing AI outputs in class discussions, encouraging students to evaluate their reliability.

For AI Enthusiasts: Use your passion for AI to champion its responsible and transparent use.

  • Advocate for tools that clearly explain their processes, data sources, and decision-making pathways.
  • Share examples of how transparency in AI tools has enhanced learning outcomes or student confidence.
  • Promote balanced conversations that acknowledge both the potential and the limitations of AI.

For AI Skeptics: Your critical perspective ensures that transparency remains a priority.

  • Question whether AI tools provide sufficient visibility into how they operate and what biases they carry.
  • Highlight the risks of using opaque AI systems, such as reinforcing existing inequities or diminishing critical engagement.
  • Advocate for better documentation and training resources to demystify AI for educators and learners.

For Leaders: Institutional policies and strategies must reflect a commitment to transparency in AI.

  • Require vendors to provide clear explanations of how their tools process data, make decisions, and address ethical concerns.
  • Offer professional development programs to help faculty understand and teach with AI tools transparently.
  • Regularly review and update AI practices within your institution to maintain trust and alignment with transparency goals.

What steps can you take to increase transparency in how AI tools are used in your classroom or institution? How might greater openness about AI’s processes impact trust and engagement among learners and educators?

Curate Spaces for Connection

Created with Adobe Firefly.

The Eameses created not just objects but environments — spaces where art, design, and ideas converged to foster connection and exploration. Similarly, AI in education should act as a catalyst for meaningful interactions, strengthening bonds between people, disciplines, and perspectives. By using AI as a tool to spark collaboration, educators can create opportunities for learners to share ideas, challenge assumptions, and build innovative solutions together.

Course Example: Facilitate an interdisciplinary workshop where students from different fields (e.g., art, computer science, and sociology) collaborate with an AI tool to design a solution to a real-world problem, such as improving urban sustainability.

  • Use AI to generate creative prompts or visualize data-driven scenarios, sparking ideas that bring diverse perspectives into dialogue.
  • Ensure that the final solution emerges from student discussion, synthesis, and collaboration, emphasizing the human connections that drive innovation.
  • Teach students to see AI as a supportive tool for creating spaces where differing ideas can flourish.
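If it helps to seed the workshop, a small sketch like the one below (assuming the OpenAI Python SDK and OPENAI_API_KEY; the model name, problem statement, and disciplines are placeholders) can generate discipline-specific discussion prompts for the same shared problem, which the groups then synthesize together.

```python
# A minimal sketch: turn one shared problem into discussion prompts framed for
# each discipline in the room. Assumes the OpenAI Python SDK and OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

PROBLEM = "Improve urban sustainability in a mid-sized city."
DISCIPLINES = ["art", "computer science", "sociology"]  # placeholder fields

for discipline in DISCIPLINES:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": (
                f"Pose three provocative discussion questions about this "
                f"problem from the perspective of {discipline}: {PROBLEM}"
            ),
        }],
    )
    print(f"== {discipline} ==")
    print(response.choices[0].message.content)
    print()
```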

For Faculty: Create shared learning experiences that emphasize collaboration and critical dialogue.

  • Use AI tools to generate thought-provoking scenarios or datasets for group analysis and debate.
  • Design assignments where students rely on both AI inputs and peer insights, curating an environment where connection drives the learning process.
  • Foster reflective discussions about how AI influences the ways students collaborate and communicate.

For AI Enthusiasts: Showcase the power of AI as a connector in educational contexts.

  • Advocate for AI tools that facilitate collaborative creativity, such as brainstorming platforms or multilingual communication tools.
  • Share examples where AI has bridged divides, fostering interdisciplinary or cross-cultural dialogue.
  • Emphasize the importance of AI as a tool that complements, rather than replaces, human interaction.

For AI Skeptics: Your critical perspective is essential to ensuring that AI enhances, rather than hinders, authentic connections.

  • Raise questions about whether AI tools truly foster collaboration or risk isolating learners behind screens.
  • Advocate for hybrid approaches where AI is used to support, but not dominate, human interaction.
  • Highlight the risks of over-reliance on AI, such as diminishing interpersonal skills, and propose ways to mitigate them.

For Leaders: Focus on building institutional ecosystems where AI fosters interdisciplinary exchange and collective problem-solving.

  • Host university-wide innovation challenges or design labs where teams use AI to address societal challenges, ensuring diverse participation from across departments.
  • Invest in AI platforms and training that prioritize connection-building as a core outcome.
  • Regularly assess how AI tools are being used to support collaboration and adjust policies to keep human connection central to their application.

In what ways can AI serve as a connector in your educational setting? How could you design activities or policies that prioritize human connection and interdisciplinary collaboration?

Fostering Imagination Through Design

Created with Adobe Firefly.

The Eameses embraced play and experimentation, infusing their work with curiosity and joy. They believed that design should invite us to imagine new possibilities and question established boundaries. Similarly, AI in education can be a tool for fostering imagination — helping learners and educators explore creative solutions, rethink norms, and envision bold futures.

Course Example: Create a “Designing Delight” activity where students use AI tools to design an experience that sparks joy or solves a problem in a playful way.

  • Ask students to imagine and co-create with AI a whimsical solution to a real-world challenge, such as a public park that encourages mindfulness and play, a virtual museum exhibit that turns historical facts into an interactive storytelling adventure, or an AI-powered art installation that responds to viewer emotions.

  • Encourage them to explore the AI’s creative outputs and iterate on them to infuse their own sense of fun, ethics, and purpose.
  • Have students present their projects in a “festival of joy,” reflecting on how their designs balance delight with meaning.

This activity brings curiosity and joy to the forefront, using AI as a collaborative partner to amplify students’ imaginations and encourage a playful approach to serious problem-solving.
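For the co-creation step, a sketch like the one below can let students compare a first-draft prompt with a revised one and discuss what their revision added. It assumes the OpenAI Python SDK, OPENAI_API_KEY, and account access to an image model such as DALL-E 3; adjust to whatever generative tools your course actually has.

```python
# A minimal sketch for "Designing Delight": generate images from a first-draft
# prompt and a student-revised prompt, then compare what changed between them.
# Assumes the OpenAI Python SDK, OPENAI_API_KEY, and access to an image model.
from openai import OpenAI

client = OpenAI()

prompt_v1 = "A public park designed to encourage mindfulness and play"
prompt_v2 = prompt_v1 + ", with winding paths, oversized musical instruments, and quiet reading nooks"

for label, prompt in [("first draft", prompt_v1), ("revised draft", prompt_v2)]:
    image = client.images.generate(
        model="dall-e-3",  # assumed model name; swap in what your account offers
        prompt=prompt,
        size="1024x1024",
        n=1,
    )
    print(f"{label}: {image.data[0].url}")
```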

For Faculty: Incorporate moments of play and wonder into your teaching to foster creative exploration.

  • Use AI to create playful challenges, such as brainstorming fantastical applications for new technologies or designing imaginative scenarios in your field.
  • Emphasize the joy of discovery by encouraging students to explore ideas without fear of failure, using AI to iterate and refine their creative outputs.
  • Highlight the importance of balancing pleasure with purpose, teaching students to value joy as part of the design process.

For AI Enthusiasts: Show how AI can enhance joy and creativity in education by unlocking new forms of expression and exploration.

  • Advocate for tools that prioritize human delight, such as generative art platforms, music composition tools, or storytelling assistants.
  • Share examples where AI has supported joyful, meaningful projects, like interactive poetry generators or playful virtual environments.
  • Encourage educators to embrace AI’s potential for fun and experimentation as part of principled learning.

For AI Skeptics: Ground conversations about AI in its ability to inspire joy and curiosity while maintaining ethical oversight.

  • Challenge whether AI tools are being used in ways that encourage meaningful creativity or if they are reinforcing rigid or transactional learning models.
  • Advocate for approaches that integrate AI into playful, human-centered activities, ensuring that pleasure is taken seriously as a design goal.
  • Emphasize the need for transparency in AI-generated outputs so that creativity remains a collaborative process between human and machine.

For Leaders: Foster institutional support for imaginative, joyful exploration that aligns with educational values.

  • Encourage faculty to experiment with creative AI tools, such as those for interactive storytelling, generative design, or immersive learning environments.
  • Host institutional showcases or “creativity sprints” where AI-supported projects celebrating joy and curiosity are highlighted.
  • Invest in initiatives that prioritize imaginative thinking across disciplines, blending ethical rigor with a sense of delight.

How can you use AI to inspire curiosity and creativity in your teaching or leadership? What playful or experimental projects could you design to encourage students or colleagues to explore new possibilities?

Embracing Feedback and Iteration

Created with Adobe Firefly.

The Eameses saw design as an iterative process, refining ideas through constant feedback and experimentation until form and function were in perfect harmony. In AI education, embracing feedback and iteration is essential for creating systems and experiences that are responsive, adaptable, and continuously improving. By integrating iterative practices into teaching and leadership, we can ensure that AI tools evolve to meet the needs of learners, educators, and institutions.

Course Example: Conduct a “Feedback Loop Design Lab” where students collaborate with an AI tool to improve a group project.

  • Begin with a draft idea or prototype, such as a public awareness campaign, a creative writing piece, or a sustainable product design.
  • Ask the AI for iterative suggestions, focusing on specific aspects like tone, clarity, or structure.
  • Encourage students to critique the AI’s feedback, refine their work based on a mix of human and machine input, and present the revised version to the class.
  • Facilitate a discussion on how the iterative process influenced the final outcome and what the students learned from engaging with AI-generated feedback.

This activity models the importance of iteration by showing how AI can complement, but not replace, human reflection and creativity in refining ideas.
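A single pass of that loop might look like the sketch below, assuming the OpenAI Python SDK and OPENAI_API_KEY; the draft, focus area, and model name are placeholders. The point is that the AI’s revision is an input for student critique, not a final answer.

```python
# A minimal sketch of one feedback-loop pass: request targeted feedback on a
# draft, then a revision for students to critique before deciding what to keep.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder model name

draft = "Our campaign slogan: Recycle, because the planet said so."
focus = "tone and clarity"

feedback = client.chat.completions.create(
    model=MODEL,
    messages=[{
        "role": "user",
        "content": f"Give three specific suggestions on {focus} for this draft:\n{draft}",
    }],
).choices[0].message.content

revision = client.chat.completions.create(
    model=MODEL,
    messages=[{
        "role": "user",
        "content": f"Revise the draft below using this feedback.\nDraft: {draft}\nFeedback: {feedback}",
    }],
).choices[0].message.content

print("AI feedback:\n", feedback)
print("\nAI revision (for students to critique, not accept wholesale):\n", revision)
```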

For Faculty: Incorporate iterative practices into your teaching to model adaptability and growth.

  • Use AI tools to provide formative feedback on drafts or assignments, helping students revise their work before final submission.
  • Teach students to critically evaluate AI feedback, identifying its strengths, limitations, and areas for improvement.
  • Encourage students to view feedback not as judgment but as an opportunity for learning and refinement.

For AI Enthusiasts: Promote the role of AI as a facilitator of continuous improvement in educational practices.

  • Share examples where AI feedback tools have supported iterative processes, such as providing draft-specific suggestions or enhancing project revisions.
  • Advocate for AI systems that allow users to customize feedback parameters, ensuring alignment with specific learning goals.
  • Emphasize the importance of human oversight in refining AI’s role within feedback loops.

For AI Skeptics: Ensure that iterative processes remain ethical, balanced, and human-centered when AI is involved.

  • Critique AI tools that oversimplify or standardize feedback, potentially diminishing the depth of student learning.
  • Advocate for systems that allow for human-AI collaboration in feedback, ensuring AI remains a supportive tool rather than a decision-maker.
  • Raise questions about how AI feedback systems handle bias and nuance, ensuring they support rather than hinder student growth.

For Leaders: Foster a culture of iteration and improvement by integrating feedback loops into institutional AI practices.

  • Invest in professional development that helps educators use AI tools effectively for formative feedback and iteration.
  • Encourage departments to pilot and refine AI tools in a way that incorporates faculty and student input at every stage.
  • Regularly assess the impact of AI tools on learning outcomes, using iterative evaluations to ensure they remain aligned with institutional values.

How do you currently incorporate feedback into your use of AI tools? What processes could you implement to refine AI practices based on input from students, faculty, or colleagues?

Understanding and Honoring Layers

Created with Adobe Firefly.

The Eameses’ Powers of Ten film beautifully illustrated the interconnectedness of layers, from the microscopic to the cosmic. This principle reminds us that in design — and in AI education — every layer matters, from the foundational algorithms to the user experience and societal impact. Understanding and honoring these layers ensures that the systems we build are thoughtful, ethical, and aligned with human-centered values.

Course Example: Facilitate a “Layers of AI” exploration activity where students map the lifecycle of an AI tool they are using.

  • Begin by asking students to identify the layers of the tool: the data it uses, how it processes information, its user interface, and its impact on learning or decision-making.
  • Assign teams to focus on specific layers (e.g., data sourcing, algorithm design, or user interaction) and analyze the ethical and practical implications of each.
  • Conclude with a discussion on how these interconnected layers influence the overall effectiveness and fairness of the tool in education.

This activity helps students understand the complexity of AI systems and emphasizes the importance of interrogating every layer, from the unseen backend to the visible user interface.
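One way to structure the mapping is a shared “layers worksheet.” The sketch below is plain Python with no AI calls; the layer names and questions are suggestions drawn from the activity above, and each team fills in the findings for its assigned layer.

```python
# A minimal sketch of a "Layers of AI" worksheet as a plain data structure:
# each team records findings for its layer, and the whole map prints together.
from dataclasses import dataclass, field

@dataclass
class Layer:
    name: str
    questions: list[str]
    findings: list[str] = field(default_factory=list)

layers = [
    Layer("Data sourcing", ["Where does the training data come from?", "Who is missing from it?"]),
    Layer("Algorithm design", ["What is the model optimized for?", "What trade-offs does that imply?"]),
    Layer("User interface", ["What does the tool make easy, and what does it make hard?"]),
    Layer("Educational impact", ["Who benefits, and who might be left behind?"]),
]

# Example of a team recording a finding for its assigned layer:
layers[0].findings.append("Vendor documentation does not list its data sources.")

for layer in layers:
    print(f"\n{layer.name}")
    for question in layer.questions:
        print(f"  ? {question}")
    for finding in layer.findings:
        print(f"  ! {finding}")
```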

For Faculty: Encourage students to think critically about the interconnected layers of the tools they use.

  • Discuss how data, algorithms, and design decisions influence the functionality and biases of AI tools in your field.
  • Highlight the relationship between these layers and broader educational outcomes, such as equity and accessibility.
  • Design assignments that require students to examine the assumptions and trade-offs built into AI systems.

For AI Enthusiasts: Use your enthusiasm to illuminate the layered complexity of AI systems in education.

  • Advocate for transparency in how AI tools are built, including data sourcing, model design, and user interface decisions.
  • Share examples of how understanding these layers has led to more ethical and effective implementations of AI.
  • Encourage others to see AI as a layered ecosystem that benefits from interdisciplinary insight.

For AI Skeptics: Your critical perspective can reveal blind spots in how AI layers are understood and addressed.

  • Question whether enough attention is being paid to foundational layers, such as data quality and bias mitigation.
  • Advocate for more visibility into how each layer of an AI tool contributes to its outputs and impacts.
  • Push for audits or reviews that examine the ethical and practical implications of AI tools across all layers.

For Leaders: Leadership in AI education requires a systemic understanding of the layers that shape outcomes.

  • Establish policies that require transparency and accountability for every layer of an AI tool’s lifecycle.
  • Facilitate cross-disciplinary collaboration to examine how different layers (e.g., technical, pedagogical, ethical) intersect in your institution’s AI practices.
  • Regularly assess the long-term impacts of AI tools, ensuring that their layers continue to align with institutional values and educational goals.

Think about an AI tool you use or plan to use. Have you explored its layers — from data sourcing to user experience to long-term impacts? How might understanding these layers influence your decisions about its use?

Timeless Design, Transformative Education

The enduring work of Charles and Ray Eames reminds us that great design, whether of a chair or an educational system, is grounded in principles that transcend trends — integrity, transparency, connection, imagination, iteration, and attention to layers. In AI education, these principles guide us to move beyond the surface, embedding purpose and humanity into every decision. By embracing timeless design, we can create systems that are not only innovative but also deeply ethical, equitable, and impactful — tools that inspire trust, spark creativity, and foster meaningful connection. As we navigate the complexities of AI, let us design with the Eameses’ belief in mind: there has to be goodness all the way down.

How are you approaching AI education? Is there goodness all the way down?



Written by Megan Workmon Larsen

Rebellious educational researcher, storyteller, and artist with an operatic flair and human-centered approach. Teaching AI now, because why not?
