Am I Cheating?

I’ve been dismayed by the perspective of many educators who believe that students will use AI to cheat, and that this belief alone justifies disregarding, blocking, or avoiding the application of AI to teaching and learning. Of course, that’s not the only reason given; all you have to do is look at this article to see the many reasons educators believe that AI is destructive and represents the end of education as we know it.

As a designer, a significant component of my job involves processing the data and information I collect during my Discovery events with clients. If you are unfamiliar with Discovery, it's the first part of the design process, in which various stakeholders are engaged through surveys, interviews, workshops, observations, and other ethnographic activities. The process usually produces a large volume of data that must be analyzed to identify patterns and trends, which then inform the design project.

Undoubtedly, this evaluation process demands a significant amount of time and effort. It's not uncommon for the task to span many days and countless hours of screen time, so I’m constantly exploring ways to improve data collection, organization, and processing.

With AI tools now widely available, it was logical to consider how they might support my Discovery analysis. This has led me to explore how AI can make the analysis process more efficient and effective. I've discovered that AI's analytical capabilities can significantly save me time while enhancing my understanding of the data, which supports my ability to make informed design decisions. 

Here is an example: During an engagement, 110 teachers were sent into their school to document learning spaces and what they were curious about. Over 300 images and descriptions were uploaded to Padlet in an hour. The descriptions were downloaded as a spreadsheet and input into AI (ChatGPT). The AI was asked to identify, describe, and give examples of 10 patterns or trends in the data, which it did in about 20 seconds. The results gave our team a place to start, and further analysis confirmed that the response was accurate and useful.

Is that cheating?

Here is another: We asked 20 leaders to write their beliefs about the power and opportunity of collaborative learning experiences on large Post-it notes. AI transcribed their responses (saving typing time) and was then asked to develop a list of 10 collaborative norms from the data, which it did in about 25 seconds. The response was terrific and needed only slight tweaking to add some context.

Is that cheating?

I’ve taken my process and added AI’s computational capabilities to it. My discovery process is still focused on human-centered inquiry, but I’ve modified the data analysis component to take advantage of AI. In every case, I must still do my due diligence and evaluate the AI response against my own interpretation.

Am I cheating?  

Or am I just being smart and evolving my process with additional tools that add new capacities to my work and improve it, given the new reality of AI?  Should I ignore these advancements and stay with the process I’ve always used?  Should I remain steadfast in using a legacy process, hoping that the use of AI fades?  Should I focus on the current status quo because it's what I’ve always done rather than creating a more effective process and product for my clients?

It’s easy to apply these questions to student writing. For context, I’ve had the great fortune of working with educators who were absolute masters at teaching writing, so I have firsthand knowledge of what teachers can do with kids when it comes to expressing their thoughts through writing. That work follows a process built on inquiry and questioning, reflection, writing and rewriting, and consulting with the teacher, all in the service of a final composition.

Like my data analysis, can AI fit into the writing process? Are there places, as in my process, where the capacities of AI can support and extend how and what kids write? Or is it better to rely on what writing has always been and ignore a technology that will undoubtedly have a formative impact on their lives? Does it make sense to sit with kids and see what they think? Does it make sense for professional educators to understand AI and say: this could help kids, and here is how we can begin to develop a new writing process that takes advantage of AI in a smart way, one that honors the tradition and values of composition while adding new value? For an example, see page 4 of this document, which illustrates how AI can support and complement the writing process.

Should educators dismiss AI because they think kids will cheat when they write? That’s a pretty easy path. A more challenging path (and a more useful and human path) means working with students, colleagues, and parents to find AI’s place in education so kids understand AI and become informed about its value and challenges.

Assuming that students will use AI to cheat suggests that they are inherently dishonest and untrustworthy. And while trust is earned, fostering a positive classroom culture begins with believing in students' integrity and potential rather than starting from distrust.

Leading with cheating isn’t a viable strategy or approach. It’s just not. Believe in kids, work with them, explore the technology, put it to use, get better at it, and help them understand a technology that will undoubtedly impact them for the rest of their lives.