Process Over Product, Mindset Over Toolset: Inverting Bloom’s Taxonomy for Teaching AI

Megan Workmon Larsen
13 min read · Dec 5, 2024


Teaching AI like it is just a set of tools is a mistake. It’s not just outdated — it’s risky and potentially dangerous.

Institutions are rushing to teach learners how to use generative AI tools as if those tools themselves are the point of education. Spoiler alert: they are not. These tools will be obsolete in a year or change so quickly they will be unrecognizable, replaced by something faster, more powerful, and probably named after a Greek letter or forgotten mythological hero. Yet here we are, tying our teaching to tools destined for obsolescence instead of focusing on the skills that matter — curiosity, adaptability, and ethical awareness. Ah, education, we will never learn.

If we keep fixating on tools, we’re not just failing learners; we’re setting them up for irrelevance. AI education isn’t about training button-pushers, passive operators, or cog-turners even if it’s under the guise of operational efficiency. It’s about preparing thinkers and creators to thrive in a world defined by constant change. They need to learn how to adapt, how to question, and how to innovate in the face of ambiguity.

Some questions I have been pondering this week include: What happens when institutions churn out AI “button-pushers” without critical thinking skills? Who takes responsibility when tools built by well-intentioned yet uncritical learners amplify bias and harm? Are we preparing learners to shape the future of AI — or to be shaped by it? What are we risking by prioritizing convenience and conformity over curiosity and critical thinking?

This post is a challenge to educators, departmental leaders, and decision-makers on how we approach teaching and learning around AI. It’s time we stop clinging to traditional frameworks. It’s time to dig deeper, teaching the processes and mindsets that will help learners lead, not just survive, in the ever-changing age of AI.

So, Where Do You Start?

Frames galore. Created with Adobe Firefly.

The truth is, we’re drowning in AI literacy frameworks. I built my first one two years ago, a newbie to this space. Every organization and academic institution seems to have its own version — a neatly packaged list of competencies, often built on Bloom’s Taxonomy, marching predictably from “understand” to “create.” Yet, in practice, these frameworks feel like they’re designed for an era where innovation moved at the speed of committee meetings, not generative models cranking out paradigm shifts weekly.

The problem isn’t the lack or even the abundance of frameworks; it’s that many of them start from a traditional place for a new, non-traditional setting. When faced with the reality of education changing, the frameworks tend toward a focus on static knowledge or mechanical skill-building, assuming that what students, staff, or faculty learn today will still be relevant tomorrow. This is not just short-sighted — it’s risky in a world where tools update faster than your favorite app can push a patch.

So where do you start? I would suggest we need to invert the traditional approaches, starting not with passive understanding but with active creation. Let learners experiment, fail, refine, and innovate. Anchor learning not in mastery of the ephemeral but in cultivating skills that endure: critical inquiry, ethical reflection, and adaptability. This mindset-first approach doesn't just prepare students, staff, faculty, and any other learner to use AI; it equips them to question it, reshape it, and lead in a world transformed by it.

Mindset Versus Toolset, A Duality

Older man with pixelated background, abstract, showing the duality of how we think with tools
Created with Adobe Firefly.

If your AI curriculum centers only on today's tools, you are teaching learners to survive the present, not lead the future. A mindset-driven approach prepares them to question, adapt, and innovate as AI reshapes every field it touches. Learners must think critically, act ethically, and stay curious. No amount of understanding a specific tool interface will drive sustainable change. While mindset and process are essential, some educators might argue that learners also need practical, tool-specific training to function effectively in the immediate workforce. Employers may expect graduates to have hands-on proficiency with current tools. I agree; I just do not think it is the starting or the ending point.

Technical skills are important, but they are not enough on their own. Teaching the underlying principles of tools, of collaboration, of reflective practice, ensures that learners can adapt quickly to new technologies as they emerge. Mindset and process complement technical skills, equipping learners to apply their knowledge flexibly, evaluate outputs critically, and refine solutions effectively. Together, these approaches prepare learners to meet both immediate workforce demands and long-term challenges in a rapidly evolving AI landscape.

So, here are some considerations for a mindset versus toolset approach:

Ethics and Responsibility Are Non-Negotiables

AI is never neutral. It shapes outputs and lives, drives decisions, and magnifies power. That makes ethical foresight a requirement, not a footnote. Learners need to see past the slick interfaces and ask the hard questions: Who benefits? Who’s excluded? What biases lurk in the data? And most critically, what will they do when they uncover an uncomfortable truth or discovery?

Take facial recognition, for example — tools trained on skewed datasets disproportionately fail to recognize darker-skinned faces, leading to harmful, real-world consequences. Learners must not only grapple with these failures but take responsibility for designing systems that challenge, rather than reinforce, inequity. Teaching ethical foresight prepares technologists, humanists, and creatives to address challenges directly, shaping AI with accountability and purpose.
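To make that tangible: a learner's first brush with this problem might be nothing fancier than comparing error rates across groups. Below is a minimal sketch in Python with pandas; the groups, column names, and numbers are invented for illustration, not drawn from any real benchmark.

```python
# A minimal sketch of a per-group error audit; the groups, columns, and
# numbers below are invented for illustration only.
import pandas as pd

# Hypothetical results from a face-matching test set:
# 'correct' is 1 when the system identified the person correctly.
results = pd.DataFrame({
    "group":   ["A", "A", "A", "A", "B", "B", "B", "B"],
    "correct": [1,   1,   1,   0,   1,   0,   0,   1],
})

# Error rate per group: a crude but honest first signal of skew.
error_rates = 1 - results.groupby("group")["correct"].mean()
print(error_rates)
```

The point is not the few lines of pandas; it is the habit of asking the question before the system ships.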

Adaptability Over Dependency

Clinging to specific tools is like trying to navigate rough seas with only one oar. You are limited, stuck reacting to the waves instead of steering through them. A mindset-first approach teaches learners to evaluate the whole vessel: the sails, the rudder, and even the currents themselves. It gives them the ability to adapt, pivot, and find new ways forward when tools change, as they always will. Rather than seeing change as a setback, they use it to explore fresh strategies, rethink their approach, and keep moving ahead.

Consider how learners engage with Large Language Models. Those who rely on memorized prompt tricks risk falling behind as newer versions emerge or entirely different systems take their place. These systems are increasingly complex and will continue to evolve in their reasoning structures. In contrast, learners trained in process — those who know how to evaluate outputs, question data sources, and iterate — are not shaken by change. They embrace it, using each evolution as an opportunity to test, refine, and rethink how they approach challenges.
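To ground that distinction, here is a minimal sketch of what a process-first habit might look like in code, kept deliberately tool-agnostic: generate is a placeholder for whatever model or interface a learner happens to have, and the evaluation criteria are simple stand-ins a class would replace with its own. The loop is the lesson, not the prompt.

```python
# A minimal, tool-agnostic sketch of generate -> evaluate -> refine.
# `generate` is a placeholder for whatever model or interface is in use;
# swap in a real call without changing the process around it.

def generate(prompt: str) -> str:
    """Stand-in for a call to any text-generation tool."""
    return f"(model output for: {prompt[:60]})"

def evaluate(output: str) -> list[str]:
    """Return the concerns a learner still has about this output."""
    concerns = []
    if "source" not in output.lower():
        concerns.append("No sources named. Where does this claim come from?")
    if len(output.split()) < 50:
        concerns.append("Too thin. Ask for the reasoning, not just an answer.")
    return concerns

def iterate(prompt: str, rounds: int = 3) -> str:
    """Fold open concerns back into the next attempt instead of memorizing tricks."""
    output = generate(prompt)
    for _ in range(rounds):
        concerns = evaluate(output)
        if not concerns:
            break
        prompt += "\nAddress the following: " + "; ".join(concerns)
        output = generate(prompt)
    return output

print(iterate("Explain how this model was trained and on what data."))
```

When the underlying model changes, only generate changes; the questions in evaluate, and the habit of looping back through them, carry over.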

Curiosity as a Driver

AI education thrives on curiosity. Learners need the freedom to tinker, test, fail, and try again. In many cases, it has even leveled the roles of instructor and student, transforming learning into a process of co-investigation. Instead of teaching learners to color inside the lines, we should hand them the tools and ask, “What will you create? What can we create together? This is what I learned in this process, how about you?” The focus shifts from achieving perfect results to exploring possibilities, uncovering patterns, and learning through iteration.

Curiosity is more than play. It is the engine for asking hard questions: Why did the model behave this way? What shaped this output? How can it be pushed further? When learners adopt a mindset of questioning everything, they move from passive engagement to active innovation. Recently in my life, someone scoffed at the idea of teaching curiosity and creative thinking. They dismissed play as a frivolous distraction, irrelevant to “serious” learning. Their reaction wasn’t surprising — it reflected an educational system that too often crushes curiosity instead of cultivating it.

But AI demands curiosity. Without it, we risk producing learners who can follow instructions but fail to question the systems they use. Curiosity isn’t a luxury. It’s a necessity.

A Case for Process Over Product

Process builds expertise that endures, focusing on critical skills that integrate knowledge and drive iterative creation. Reflection, iteration, and adaptation form the foundation of a process-driven approach. Learners refine their understanding with each cycle, integrating what they discover into new solutions and creative possibilities.

Generative AI thrives on this iterative cycle, where creating, testing, questioning, and refining for understanding are essential. Learning AI should mirror this approach, emphasizing the adaptability to troubleshoot unexpected outcomes, evaluate results critically, and apply knowledge in innovative ways. By engaging deeply in the process, learners build habits that last: they ask better questions, uncover patterns, and connect ideas across contexts. These habits not only prepare them for changing tools but also give them the confidence to innovate, adapt, and lead in any environment.

Inverting Bloom’s Taxonomy for AI Education

And now, I am fully prepared for the educational theorists to come for me with pitchforks.

Created with Adobe Firefly.

Traditional Bloom’s Taxonomy is a climb — learners start at the bottom, remembering and understanding concepts, before advancing to applying, analyzing, evaluating, and eventually creating. This made sense in a world where knowledge was considered more static or even a smidge more linear, but AI disrupts this model.

The gateway to learning in the age of AI is creation.

Starting with creation encourages learners to experiment, build, and test real-world ideas, sparking curiosity and uncovering the questions that drive deeper understanding. By engaging in creation, learners explore how tools work through hands-on experimentation and real-world application.

Learners begin by evaluating their outcomes, asking: What needs to change to improve this result? They then move to analyzing the problem: Why did this approach lead to this outcome? Next, they apply their insights to adjust and refine their work: What can I do differently next time? Through this process, learners build understanding by connecting their discoveries to broader concepts and systems. Finally, they remember key lessons and integrate them into future iterations.

Create is the entry point, not the pinnacle. It drives learners to evaluate the outcomes of their work, analyze what worked (and what did not), and apply what they’ve learned in new ways. This isn’t about skipping foundational knowledge — it’s about embedding that knowledge in meaningful action.

As a flexible model meant to be adapted to different contexts, it might go something like this:

A visual of the Inverted Bloom’s Taxonomy for AI Education. Six stages descend in color-coded blocks: Create (orange) focuses on building and innovating; Evaluate (yellow) involves appraising outcomes and reflecting; Analyze (green) identifies patterns and connections; Apply (light green) adapts knowledge to solve problems; Understand (blue) connects concepts to broader contexts; and Remember (purple) consolidates knowledge for future use. Foundational knowledge is embedded throughout.

Create

AI demands action. Learners thrive when they jump in, get messy, and push systems to their limits. Whether it is crafting prompts, refining outputs, or building entirely new models, starting with creation invites curiosity and chaos right away. It’s not about perfect outcomes; it’s about testing boundaries, adapting to surprises, and figuring out what works — and what does not.

Description: Creation begins with building, whether crafting a prompt, generating an image, training a model, or developing an AI-driven tool. This stage allows learners to explore AI’s potential, engage in hands-on experimentation, and test new ideas in real-world contexts.

Human Skills: Imagination, initiative, and the ability to synthesize diverse ideas into new concepts.

Reflective Questions:

  • What did I aim to build, and in what ways did the tool support or limit my efforts?
  • How could this creation evolve or be applied to a different challenge?
  • What unexpected insights did I gain through the process of creating with AI?

Evaluate

Once something is created, the next step is to evaluate the outcomes throughout the process. This stage involves reflecting critically on results, identifying strengths and weaknesses, and determining what worked and what needs improvement. Evaluation is where learners begin to ask deeper questions about the effectiveness of their efforts and the role of AI in shaping those outcomes.

Description: Evaluation focuses on examining the outcomes of creation. Learners assess what succeeded, what failed, and why, building the foundation for refinement and deeper understanding. This stage also includes appraising the ethical consequences of decisions made during the process and considering how alternative approaches might lead to different outcomes.

Human Skills: Critical thinking, discernment, and the ability to assess outcomes with curiosity and objectivity.

Reflective Questions:

  • What aspects of the outcome met my expectations, and what did not?
  • Where were the strengths and weaknesses in the result?
  • How does this output align with or deviate from the original goal? What was the process of getting there?
  • What ethical considerations emerged, and how do they influence my evaluation of this process?

Analyze

With outcomes evaluated, the next step is to analyze the underlying systems and processes that shaped the results. Analysis involves critically examining patterns, comparing data, and relating findings to real-world problems and decisions. Learners explore not only what happened, but also why, drawing inferences and predicting future outcomes based on trends and relationships.

Description: Analysis focuses on interpreting results and uncovering the factors that influenced them. Learners compare and contrast data, identify patterns and trends, and connect insights to authentic challenges and decision-making contexts.

Human Skills: Critical reasoning, interpretation, inference, and the ability to relate insights to practical problems and meaningful decisions.

Reflective Questions:

  • What trends or patterns are emerging in the data, and what do they reveal?
  • How do these findings compare to other outputs or contexts?
  • What in the process changed the outcome?
  • What predictions can I make based on these insights, and how might they inform future decisions?

Apply

After analyzing the results, the next step is to apply insights in meaningful contexts. This involves integrating methods into other systems, implementing ideas, and executing solutions to refine processes or solve challenges. Application bridges theory and practice, allowing learners to test their understanding while exploring creative approaches to real-world problems.

Description: Application focuses on turning knowledge into action. Learners use models, methods, and processes to integrate ideas, implement solutions, and address challenges. Creativity is key as learners adapt their understanding to produce effective outcomes.

Human Skills: Execution, problem-solving, creative thinking, and the ability to adapt processes within complex systems and contexts.

Reflective Questions:

  • How does what I learned now apply to my context and other existing systems?
  • How can I use these processes to operate, implement, execute, or experiment in the real world?
  • What adjustments or adaptations will be necessary during implementation?
  • What outcomes emerged from applying these insights, and what opportunities followed?

Remember

The final step is to consolidate knowledge and ensure it is ready for future use. Remembering involves recalling key information, defining or redefining terms, and reflecting on how the process can be transferred to new contexts. This stage reinforces understanding, especially of process, and prepares learners to use their knowledge effectively in diverse situations.

Description: Remembering focuses on solidifying factual knowledge and understanding the steps of the process. Learners recall essential details, construct frameworks for future use, and reflect on how these lessons apply to different challenges.

Human Skills: Memory, synthesis, and the ability to generalize processes for broader application.

Reflective Questions:

  • What key information or insights should I retain for future use?
  • How did my thinking about this tool and this process change over time? What do I need to remember?
  • How can I describe the process to ensure it transfers to new challenges?
  • What terms, steps, or methods should I preserve to guide future efforts?

Foundational Knowledge… or Scaffolding?

Illustration of a pixelated building surrounded by scaffolding
Created with Adobe Firefly.

This is all not to say we should skip foundational knowledge. Perhaps instead of a foundation, it is an ongoing scaffold of knowledge that supports the whole. Foundational knowledge is woven into the process, ensuring it becomes meaningful and lasting. Learners engage with the principles that underpin AI, including algorithms, data literacy, machine learning basics, and programming. These technical concepts are paired with ethical awareness and contextual understanding to provide a comprehensive foundation.

This knowledge is not skipped but contextualized, emerging naturally as learners engage with real challenges. For example, understanding algorithms and models becomes essential when a project requires tuning a machine learning system to improve performance. Data literacy grows as learners grapple with preparing datasets, recognizing quality issues, and addressing biases that impact outcomes. Ethical considerations like fairness and transparency take center stage when learners evaluate the societal implications of their outputs, revealing the critical importance of these principles.
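In practice, that data-literacy moment often looks like a handful of unglamorous checks before any tuning starts. Here is a rough sketch in Python with pandas, assuming a hypothetical training_data.csv with a label column; the file and column names are placeholders, not a prescribed workflow.

```python
# A rough sketch of early data-literacy checks, assuming a hypothetical
# training_data.csv with a 'label' column; names are placeholders.
import pandas as pd

df = pd.read_csv("training_data.csv")

# Share of missing values per column: quality issues surface quickly once you look.
print(df.isna().mean().sort_values(ascending=False))

# Class balance: is one outcome so dominant that the model can ignore the rest?
print(df["label"].value_counts(normalize=True))

# Duplicate rows quietly inflate apparent performance.
print(df.duplicated().sum(), "duplicate rows")
```

None of this is advanced, but skew that goes unnoticed here is skew the model will faithfully learn, which is where fairness and transparency stop being abstract.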

Learners revisit and deepen foundational knowledge at each stage of the learning process. By starting with creation and moving through evaluation, analysis, and application, they encounter these concepts repeatedly in practical, evolving contexts. This approach embeds learning in action, linking technical knowledge with problem-solving and critical thinking. Through this process, learners connect historical and interdisciplinary perspectives with systems thinking, ensuring they see AI’s role within larger networks of people, technologies, and global challenges. By grounding foundational knowledge in authentic inquiry, curiosity, adaptivity, and strategy, learners not only understand core concepts but are prepared to apply them across diverse fields, driving innovation and ethical engagement.

The Stakes

AI is reshaping work, education, and society in ways that are uneven and deeply consequential. How we teach and learn about AI will determine whether it becomes a tool for innovation and equity or reinforces existing disparities. Teaching AI as a static set of tools risks producing a generation that follows instructions without question, embedding current flaws into future systems.

Instead, we must flip the traditional model and start with creation. Learners must begin by building solutions that test boundaries and explore possibilities. Through evaluation and analysis, they can question assumptions, uncover biases, and refine their approaches. Applying these insights allows them to tackle authentic challenges and develop systems that address the needs of diverse communities. Understanding grows as they connect their learning to ethical considerations and broader implications. Finally, through reflection, they consolidate lessons and prepare to engage with AI in meaningful, responsible ways.

This approach equips learners to think critically and act with purpose. By prioritizing process over product and mindset over toolset, we prepare them not only to navigate a changing world but to lead with curiosity, creativity, and responsibility for shaping AI’s role in a just and inclusive future.

Some Last Questions to Consider

  • If AI is already transforming how we think, why are we still trying to fit it into rigid, one-size-fits-all frameworks? What will it take to adapt this approach to truly diverse contexts?
  • What keeps educators from embracing a mindset-first, process-driven approach to AI learning? Are traditional methods holding us back, or is it the fear of leaving behind what feels familiar?
  • Are we asking the right reflective questions, or are we settling for surface-level assessments? How can deeper, more disruptive inquiry drive clarity and purpose for both educators and learners?
  • Equity and principled innovation are not optional. How do we ensure they are non-negotiable priorities in every flipped or unflipped framework, even when it requires difficult conversations or challenges to existing norms?

Written by Megan Workmon Larsen

Rebellious educational researcher, storyteller, and artist with an operatic flair and human-centered approach. Teaching AI now, because why not?
