
How AI will Influence Learning: An Interview with Dr Thomas Dietterich

by Edem Gold, August 17th, 2023

Too Long; Didn't Read

Dr. Thomas Dietterich, an expert in machine learning, discusses AI's potential impact on education. He distinguishes between external effects, like AI tools for writing and coding, and internal effects, focusing on AI-designed educational improvements. He emphasizes the need for tools that make students more effective and encourages teachers to help students build arguments and narratives. Dr. Dietterich acknowledges the potential of LLM-based tools but remains cautious, suggesting that they might be advanced auto-completion mechanisms. He hopes AI could diagnose student misconceptions and engage in clarifying dialogues. Ethically, he prioritizes privacy, heterogeneity, and inclusivity while implementing AI. Dr. Dietterich believes AI won't replace critical thinking but can enhance it by freeing time for constructing solid arguments. Successful integration requires thoughtful user experience design and addressing biases. Ultimately, he stresses that technology should focus on student success, not just on technological advancements.

“Writing is primarily about deciding what to say, not how to say it.” -Thomas Dietterich


Education is the foundation of modern society. It is the vehicle by which past generations pass on the lessons they have learned, and the system that ensures our society continues to survive and, eventually, thrive.


In a previous interview, I spoke to Professor Lance Cummings about the impact AI could have on the relationship between teachers and students. In this interview, I speak to the eminent Dr. Thomas Dietterich, Emeritus Professor at Oregon State University, about the potential impact AI could have on the state of education.


Enjoy!



Edem: Hey Thomas, it’s really lovely to have you, and I’m really looking forward to hearing your thoughts. To begin, could you tell us a bit about yourself, your professional qualifications and such?


Thomas: I have been working in machine learning since 1977, when I started graduate school working with Ryszard Michalski at the University of Illinois. Dr. Michalski, along with Tom Mitchell and Jaime Carbonell, was the driving force behind the rebirth of the machine learning field. Arthur Samuel coined the term “machine learning” in his papers on an early reinforcement learning approach to playing checkers, but AI researchers in the 1960s and 1970s focused on formal reasoning and knowledge representation rather than learning. My MS thesis developed techniques for what we would now call inductive logic programming (or statistical relational learning).


I moved to Stanford for my PhD, and there I worked on program induction from sample input-output pairs within the framework of logic programming.


I didn’t fully embrace statistical machine learning until around 1990 when I started comparing decision tree methods and neural networks. Early contributions included co-discovering the Multiple Instance Problem (in the context of drug design problems) and inventing error-correcting output coding. Throughout the 1990s, I worked on ensemble methods in machine learning, and my most-cited paper is a tutorial on ensemble methods.
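
For readers who haven’t met the term, here is a minimal sketch of the ensemble idea: train several diverse base learners and let them vote, which typically beats any single learner. The dataset, models, and hyperparameters below are invented for illustration, not taken from Dr. Dietterich’s work.

```python
# Minimal ensemble sketch: a bagged committee of decision trees vs. one tree.
# All choices here are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A single decision tree: flexible but high-variance.
tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# Fifty trees trained on bootstrap samples that vote on each prediction.
ensemble = BaggingClassifier(
    DecisionTreeClassifier(), n_estimators=50, random_state=0
).fit(X_train, y_train)

print("single tree:", tree.score(X_test, y_test))
print("ensemble:   ", ensemble.score(X_test, y_test))
```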


I also became heavily involved in reinforcement learning, and two of my most theoretical papers were my JAIR paper on the MAXQ formalism for hierarchical reinforcement learning and a paper, currently available only on arXiv, on theory and algorithms for identifying exogenous state variables in Markov decision problems. I have also been very interested in applying RL techniques to problems in biological conservation (mostly in collaboration with Iadine Chades and her postdocs at CSIRO in Brisbane, Australia).


In the 2000s, I pursued several lines of research including (a) applications in environmental sensing and species distribution modeling, (b) intelligent desktop assistants for knowledge workers, and (c) robust machine learning. Within this last category, I’ve worked on the problem of novelty detection and its applications in computer security and in detecting novel classes of objects in computer vision.


Edem: Incredible! How do you envision AI influencing personalized learning in the future, and what are the potential dangers or pitfalls of AI-powered learning management systems (LMS)?


Thomas: As you can see from the above, I have not worked on applications of AI in education, so my answers will be very general. I am going to lump these two questions together and then divide them into two aspects. First, let’s discuss how generative AI tools for writing and coding will affect education. Teachers have little or no control over those tools, so this is an external “disturbance”. Second, we can also think about AI tools specifically designed to improve the educational process. This is the “internal” opportunity.


Let’s consider the external aspect first. As a teacher, I have two goals when giving a writing or programming assignment. The first is to build skills in writing (e.g., formulating good topic sentences and paragraphs, making proper word choices, and so on) and programming (e.g., learning how to work with control structures and data structures, test and debug code, and so on). The second goal is to teach students how to formulate a narrative and express a logical argument (in writing) or how to formulate requirements and design algorithms (in programming).


The emerging LLM-based writing and coding tools address the first of these but not the second. To use these tools, students will need to become good critics of topic sentences and word choices, but they won’t have as much need to be able to generate them. Teachers will need to help their students be better critics and editors. LLM tools don’t tell us what to write or what to code. So I expect teachers will be able to expand the amount of time they work on formulating arguments and narratives (for writing) and formulating requirements and abstractions (for programming).

In general, the right question to ask is:


“How can we design tools such that the student working with those tools can become more effective than without those tools?” If the tools are interfering, they should be changed and improved.


Edem: Are there any specific technical advances (algorithms or models) you’re excited about and hopeful will have a significant effect on the future of Education?


Thomas: Everyone is excited about LLM-based tools. However, I have yet to be convinced that they can be much more than fancy auto-completion mechanisms. One promising direction is to see whether we can use the LLM-internal representations as a foundation for training systems that diagnose misconceptions and help students overcome them. Anyone who has taught introductory programming can list dozens of misconceptions that students have when they are learning the basics of variables, if statements, for loops, and variable scoping. It should be possible to build a system that can look at a student’s solution and identify their misconceptions. If these systems could also engage the student in a clarifying dialogue and follow up with a tutorial dialogue, that would be wonderful.
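
As a toy illustration of what such a diagnostic might look for, here is a hypothetical, rule-based sketch. The hand-written AST rules below merely stand in for the learned, LLM-based diagnosis Dr. Dietterich envisions; every rule, name, and message in it is invented for this example.

```python
# Hypothetical sketch: flag two classic beginner misconceptions in Python.
# Hand-written rules stand in for the learned diagnosis described above.
import ast

def diagnose(source: str) -> list[str]:
    """Scan a student's solution for common beginner misconceptions."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        # Misconception 1: writing `if flag == True:`, suggesting the student
        # thinks booleans always need an explicit comparison.
        if isinstance(node, ast.Compare) and any(
            isinstance(c, ast.Constant) and c.value is True
            for c in node.comparators
        ):
            findings.append(
                f"line {node.lineno}: comparing against True; "
                "a bare `if flag:` is enough"
            )
        # Misconception 2: assigning to the loop variable inside a for loop,
        # expecting it to change which iterations run (it does not in Python).
        if isinstance(node, ast.For) and isinstance(node.target, ast.Name):
            for child in ast.walk(node):
                if isinstance(child, ast.Assign) and any(
                    isinstance(t, ast.Name) and t.id == node.target.id
                    for t in child.targets
                ):
                    findings.append(
                        f"line {child.lineno}: assigning to loop variable "
                        f"'{node.target.id}' does not skip iterations"
                    )
    return findings

student_code = """
done = False
for i in range(10):
    if done == True:
        i = i + 2
"""
for message in diagnose(student_code):
    print(message)
```

A real system would, of course, need to go beyond pattern matching and engage the student in the clarifying dialogue described above; this sketch only shows the detection step.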


Edem: What ethical considerations should be taken into account when implementing AI in the educational system?


Thomas: For knowledge- and skill-centered topics, such as introductory programming and data structures, the main considerations are privacy and heterogeneity. Detailed educational data should be private and should not become part of a student’s record for examination by potential employers. By “heterogeneity”, I mean that different students require different pedagogical approaches to get unstuck and remove a misconception. I first learned this when I was tutoring undergraduate vector analysis.


I tend to be a visual learner, so I love pictures and graphs. But one student I was tutoring was entirely an algebraic learner; pictures and graphs of coordinate systems, for example, were worthless to them. Instead, I had to find another strategy: I ended up having the student physically act out moving along coordinate axes using the floor and walls of the classroom. The biggest ethical risk is that the tools we develop will work great for the typical students who fall in the central 60% of the distribution, but students who have had different kinds of experiences or bring different mental models to the subject may not do well with these tools. We need to help every student be successful.

One advantage of building AI tools at scale is that we have a better chance of discovering these subpopulations of students who require different instructional strategies.


I think Duolingo is an example of a company that has been able to optimize its instructional approaches by analyzing learner performance at scale. Of course, language learning is very skill-focused, so it is probably easier to optimize than, say, teaching about different political theories.


Edem: What measures can be taken to mitigate the risks of students becoming too reliant on AI-based systems and losing their ability to think critically?


Thomas: I don’t think this is going to be a problem. To the extent that AI-based tools can make the mechanics of writing and coding easier, they will leave more time for constructing sound arguments and thinking critically about them. Like most academics, I develop and test my arguments for or against particular hypotheses or conceptual frameworks through writing. This is the most important thing we can teach our students.


Edem: In your professional opinion as an educator and engineer, what technical and personnel challenges need to be addressed in order for AI systems to be successfully integrated into the educational system?


Thomas: We need to think carefully about the entire user experience. What conceptual model will students need to have about the capabilities and limitations of these tools? How can the tools and the interaction be designed to make the student most effective?


This will require vast amounts of experimentation and user studies, and developing interactions that work well for the incredible diversity of students will be a huge challenge. In my view, most problems in HCI (and explainable AI, for that matter) are essentially problems of education. Hence, the tools of HCI (mental models, interaction design, user studies) will provide the solutions.


Edem: Historically, academia has been resistant to change. How do you foresee the overall educational system embracing and adapting to the influence of AI? Are there any specific challenges or barriers that need to be overcome?


Thomas: I think most of the resistance to change has been correct. Many bad ideas have been floated over the years, and educators rightly reject them as gimmicks and fads. Innovations that have positive effects have been widely adopted, such as active learning exercises, small group problem solving, flipped classrooms, and so on. If we can build AI tools that help students learn faster and be more effective, I think they will be rapidly embraced by teachers and students alike. But if we just think that giving students access to ChatGPT is going to have a positive impact, we will be very disappointed.


Edem: AI systems have been known to exhibit biases, which can disproportionately affect underrepresented students, such as racial minorities. How can we address the inherent biases in AI systems to ensure equitable and inclusive educational experiences for all students?


Thomas: One of the fundamental weaknesses of statistical learning is that it relies on statistics. Hence, it handles the common cases well, for which there is plenty of training data. Anyone who falls outside the common cases is likely to encounter more errors, more bugs, and a worse experience. Our evaluation metrics (such as accuracy) don’t detect this, because they ignore the rare cases.
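
A quick numerical sketch of this point (the group sizes and error rates below are invented for illustration): an overall accuracy above 90% can coexist with a much worse experience for a rare subgroup, and the aggregate metric alone never reveals it.

```python
# Sketch: aggregate accuracy masks a rare subgroup's poor experience.
# Group proportions and per-group accuracies are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# 95% of examples belong to the "common" group, 5% to the "rare" group.
group = rng.choice(["common", "rare"], size=n, p=[0.95, 0.05])

# Suppose the model is right 95% of the time on the common group
# but only 60% of the time on the rare group.
correct = np.where(group == "common",
                   rng.random(n) < 0.95,
                   rng.random(n) < 0.60)

print(f"overall accuracy: {correct.mean():.3f}")  # ~0.93, looks fine
for g in ("common", "rare"):
    mask = group == g
    print(f"{g:>6} accuracy: {correct[mask].mean():.3f}")
```

Reporting accuracy per subgroup, as in the last loop, is the simplest way to surface the problem the aggregate number hides.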


The research community has been working hard on addressing the “rare subgroups” and “long tail distribution” problems, but I think we need to look beyond statistical solutions. We should study system failures to uncover the rare cases, then analyze them, and implement solutions. It may be that these solutions cannot be created using machine learning but instead must be hand-coded and tested.


Edem: To end, with the implementation of AI systems at universities, there are concerns about the associated costs. How can we ensure that the integration of AI systems into academia doesn’t widen the gap between the Haves and the Have-Nots?


Thomas: We need to create incentives to ensure that everyone reaches a high level of performance. President Bush’s slogan, “No Child Left Behind”, is a good starting point. We should focus our resources on the Have-Nots because the Haves will always find a way to take care of themselves. This requires strengthening our educational institutions and creating the proper incentives. One way to do this would be to first develop these tools in settings where students have traditionally struggled. However, we need to guard against the possibility that AI-based education will actually be inferior to non-AI-based education; in that case, it will be the Have-Nots who are stuck with the bad AI-based approaches. In short, the focus should be on students and their success, not on technology.


Edem: Thank you very much for your time, Dr. Dietterich. Do you have any closing words?


Thomas: When it comes to teaching writing (and communication, more generally), an important goal is to help students find their own voice and discover the things that they are passionate about communicating. Writing is primarily about deciding what to say, not how to say it.


Also published here.

Lead image by Andrea de Santis on Unsplash