Artificial intelligence comes to class


School of Education faculty members offer educators valuable perspectives on contentious advancements in artificial intelligence (AI)

By Laurel White

As recent advancements in artificial intelligence have spurred headlines and stoked heated conversations around the world, faculty from the School of Education have offered expert insight on what these developments could mean — for better or worse — for educators and their students.

For David Williamson Shaffer, the public debut of ChatGPT, an AI tool that can generate written content including essays and poetry, could signify exciting new frontiers for what and how students learn. Shaffer is the Sears Bascom Professor of Learning Analytics and a Vilas Distinguished Achievement Professor of Learning Sciences in the Department of Educational Psychology. He is also a data philosopher at the Wisconsin Center for Education Research.


In an op-ed published in Newsweek this spring, Shaffer argued ChatGPT could have a positive impact on our education system — but only if educators approach the new technology a certain way.

“We should stop and take a breath — the robots can’t do that, not yet anyway — and realize artificial intelligence like ChatGPT could change education for the better,” he wrote.

Shaffer acknowledges concern in the education community about ChatGPT’s ability to write passable essays and pass high-level exams, including the U.S. Medical Licensing Examination and law and business school tests. But he argues that reducing the conversation about the tool to fears of plagiarism would not serve students well.

“When I started teaching, my peers said students needed to learn to do arithmetic quickly because they ‘won’t always have a calculator in their pocket’ or they might need to ‘make change for a customer if the cash register isn’t working.’ Well, guess what? We all have calculators on our cell phones, most of us don’t use cash anymore, and if the register goes offline, they close the store,” Shaffer wrote. “It doesn’t make sense to teach kids to do things that a calculator, or a computer, or AI can do for them.”

Instead, he argues that educators should consider how the tool can be used to orient their teaching and assessments toward critical thinking and 21st-century problem solving.

“We need to teach students how to use AI to answer the complex questions that are far beyond what AI can do on its own,” he wrote. “AI can’t solve the climate crisis, or move us toward social justice, or fix our broken political discourse, or keep social media from invading our privacy and fanning the flames of intolerance. It’s people — people using AI — who can do those things. But only if we help them learn how.”

He pointed to the work of AI for Good, a group that uses technology to predict missile attacks on Ukrainian homes, and of teen entrepreneur Christine Zhao, who uses ChatGPT in an app she created to help people with alexithymia, a condition that limits a person’s ability to identify and describe emotions.

Shaffer also lent his expertise on AI in the classroom to a story published in the Milwaukee Journal Sentinel and reports aired on WKOW-TV in Madison and WAOW-TV in Wausau.


Meanwhile, Mitchell Nathan offered a crucial perspective on how artificial intelligence could hinder efforts to effectively gauge student learning. Nathan is also a Vilas Distinguished Achievement Professor of Learning Sciences in the Department of Educational Psychology.

In a perspective article published in the March 2023 issue of Frontiers in Artificial Intelligence, Nathan says educators should build a keen awareness of the limits of some AI programs in interpreting students’ body language and other physical interactions. He notes that educators are increasingly reliant on AI programs to evaluate data about students, but that those programs aren’t built to understand some very important, and very human, signals.

Nathan notes that the growth and increased appreciation of embodied learning and multimodal learning analytics (MMLA) — data analysis tools that track how students talk, move, use tools, and interact with others — are making it easier for AI systems to monitor and evaluate students based on their nonverbal behaviors, not just what they type or say.

“Managing demands of complexity and speed is leading to growing reliance by education systems on disembodied AI (dAI) programs, which, ironically, are inherently incapable of interpreting students’ embodied interactions,” he wrote. “This is fueling a potential crisis of complexity.”

Nathan says educators should consider analyzing student data using methods that combine AI and human decision making. For example, so-called “detector-driven interviewing” methods use AI and non-invasive techniques to continually monitor human behavior for cognitive and affective patterns. He argues it is essential that humans evaluate the behaviors detected by these AI monitors.

Nathan urges educators to consider this more nuanced approach when using AI — and soon.

“This needs to change before educational practices become too dependent on dAI systems without proper considerations of ways to address these limitations,” he wrote. “The time is ripe to invest in alternatives such as augmented intelligence systems that cultivate the omnipresence and computational power of dAIs with the embodied meaning making of human interpreters and decision makers (as illustrated by approaches such as detector-driven interviewing) as a means to achieve an appropriate balance between complexity, interpretability, and accountability for allocating education resources to our children.”
