AI in education




What, Why & When

Digital tools based on artificial intelligence have become a groundbreaking and dangerous array of software that modern education needs to deal with. While fears are mounting that students will flock to openly available tools to solve their homework, current algorithms are, as Noam Chomsky put it, so bad that one does not know whether to laugh or cry. This makes it all the more important to highlight the role of AI tools, how they could be used, and where we should actively criticise them or even reject them altogether. It is clear that approaches such as ChatGPT hold a humongous amount of knowledge, yet they can hardly make any sense of it. In this respect, current AI is like a three-year-old with a massive brain: it knows vaguely everything humankind knows at a rudimentary but widespread level, yet it is incapable of making sense of it or showing any experience whatsoever. "But it's as always and everywhere: the industry sees a huge energy reserve, while science views it as a time bomb." - (translated to English) Frank Schätzing: Der Schwarm.

AI in Education. Source: (1)

Use cases of AI


Is AI truly intelligent? Can we use it reliably in teaching, research and writing essays? Haven't we all felt guilty of ignoring this question for too long? OpenAI launched ChatGPT at the end of November 2022, and the impact on students ever since has been tremendous: AI is serving students as a personal tutor and study buddy, a test generator, a language learning assistant, a summarization tool, a programming tool and a brainstorming tool. Which unique value does it leave us to create? Where does it leave us as a society? And how can we use it with a good conscience? In the following, we discuss the roles of AI in education from both highly normative and pragmatic perspectives. As the center for methods at Leuphana University, our judgement on AI and technology goes beyond its end goals.

PERSONAL TUTOR & STUDY BUDDY

Premise: AI serves as a versatile, 24/7 personal tutor, offering expertise in various subjects, eliminating the need for expensive courses. It functions as an intelligent study partner, explaining complex concepts in a clear and understandable manner, aiding in problem-solving.

Objection: Learning is based on experience, which ChatGPT cannot have. It cannot feel what a learner currently needs, and it is not able to push a learner into the uncomfortable learning space that often lies at the core of the grace of teaching. ChatGPT as it stands has far less expertise than, say, Wikipedia, and its knowledge is shallow at best. Learning to evaluate information from the internet is an important skill in itself. Answers from ChatGPT are usually far too short to do justice to any topic. AI is not intelligent, far from it. The word 'complex' is in any case difficult and ambiguous; we have evaluated thousands of answers that were not merely unclear but plainly wrong and built on imprecise wording. Worst of all, we cannot replace human interaction with AI. If we do, we can pretty much guarantee the end of our sanity; nothing would bring that about faster than treating AI as our buddies.

Compromise: Outputs are as shallow as the inputs. If you apply the Socratic method and approach the solution to a formulated question by asking questions, you can gain a detailed understanding and simultaneously check the argumentation and logic of the AI's output. Do not ever take any aspect of the AI's output for granted! (Good prompt vs. bad prompt: see the sketch below.)
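To make the point about good and bad prompts concrete, here is a minimal sketch of Socratic prompting in Python. It assumes the official openai client library and an OPENAI_API_KEY environment variable; the model name and the example questions are placeholders of our own, not a recommendation. The point is the structure: instead of one shallow question, a chain of follow-up questions probes the previous answers, so gaps and contradictions in the output become visible.

```python
# Minimal sketch: shallow prompt vs. Socratic prompting.
# Assumes `pip install openai` and an OPENAI_API_KEY environment variable;
# the model name and the questions below are placeholders.
from openai import OpenAI

client = OpenAI()

SHALLOW_PROMPT = "Explain photosynthesis."  # invites a generic, surface-level answer

SOCRATIC_PROMPTS = [
    "In one paragraph, what problem does photosynthesis solve for a plant?",
    "Which of your claims above would change if there were no sunlight, and why?",
    "Name one common misconception about photosynthesis and explain why it is wrong.",
]

def ask(messages):
    """Send the running conversation to the model and return its reply."""
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return reply.choices[0].message.content

# One-shot, shallow prompt for contrast: a single vague question.
print(ask([{"role": "user", "content": SHALLOW_PROMPT}]))

# Socratic chain: each follow-up question builds on the previous answer,
# which forces the model to expose (and you to check) its reasoning.
history = []
for question in SOCRATIC_PROMPTS:
    history.append({"role": "user", "content": question})
    answer = ask(history)
    history.append({"role": "assistant", "content": answer})
    print(f"Q: {question}\nA: {answer}\n")
```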

TEST GENERATOR

Premise: ChatGPT can generate unlimited practice questions for any subject, helping students prepare for exams. More often than not, we tend to suppress the questions we have no answer to; an unbiased question generator thus counters the human slacker inside of us.

Objection: The questions are bad, and if they can compete with any real-world teaching, then that teaching is obviously very bad. Also, the process of isolating the underlying questions is what makes you actually grasp a topic and its connections; eliminating this step is fatal if you want to actually learn something. If you just want to pass a test, however, you can save yourself some time. Still, ChatGPT cannot identify your knowledge gaps. You must do that yourself.

Compromise: Use AI to test yourself, but not to generate tests.
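A minimal sketch, under the same assumptions as above (openai Python client, API key, placeholder model name and wording), of what "use AI to test yourself, but not to generate tests" could look like: the model is instructed to ask one question at a time and to withhold answers until you have committed to a response, so the thinking still happens on your side.

```python
# Minimal sketch: let the AI quiz you, one question at a time.
# Assumes the openai Python client and an OPENAI_API_KEY environment variable;
# the examiner role and the subject are made-up placeholders.
from openai import OpenAI

client = OpenAI()

QUIZ_ROLE = (
    "You are a strict examiner for an undergraduate statistics course. "
    "Ask exactly one question at a time and wait for my answer. "
    "Only after I answer, say whether I was right, explain briefly, "
    "and then ask the next question. Never reveal an answer in advance."
)

history = [
    {"role": "system", "content": QUIZ_ROLE},
    {"role": "user", "content": "Start the quiz."},
]

for _ in range(3):  # three rounds; adjust as needed
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    question = reply.choices[0].message.content
    print(question)
    history.append({"role": "assistant", "content": question})
    my_answer = input("Your answer: ")  # the learner, not the AI, does the thinking
    history.append({"role": "user", "content": my_answer})
```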

LANGUAGE LEARNING ASSISTANT

Premise: Multilingual AI assists in language practice, facilitating conversations and translations for language learners.

Objection: DeepL and ChatGPT do a good job translating, so you are able to converse with and comprehend speakers of other languages. But does it really matter to use words from one's own repertoire when chatting or publishing in another language?

WRITING TOOL

Premise: AI can expand on or simplify notes and information, making it a valuable resource for writing and understanding.

Objection: AI is breaking down language barriers and at the same time extinguishing linguistic skills and individual expression. AI summaries are not valuable in themselves, but a shortcut through a maze of knowledge that lets people pass as wizards. Shortcutting these mazes makes them mere academic imposters. "You're picking apart the style of my essay instead of the substance? […] That's kind of... lazy." – to which Farleigh replies: "It's completely valid to debate the rhetoric of an argument. It's not what you argue but how." – "Saltburn" (2023)

Both have a point: so many problems would be solved if people were simply convinced by logic, by content, by data. But life is not like that. This is why politics more often than not resembles showing off, heavily theatricalized by spin doctors. This is also why many people still do not believe in climate change, although scientists agree that the current change in temperature, driven by CO2 emissions, is not natural. Without going as far as Yuval Noah Harari, who argues that humanity is built on storytelling, storytelling is undeniably as important as the content of the story. In pursuit of entertainment, the story itself might not be significant at all. In science, however, it is a balancing act between accessibility and accuracy. A reminder: we do science here at our university.

AI has the potential to hinder creativity in writing by compromising the autonomy, originality and ethical considerations of the writing process. Adhering strictly to predefined templates or algorithms, as well as relying on potentially inaccurate or misleading information from AI, can contribute to the reduction or distortion of content. When the original text's structure is bad and unclear, the AI will try to make sense of it based on frequency and wording; it is, however, terrible at reading "between the lines". Accordingly, the AI summary of an unclearly structured text is very prone to be incomplete or simply wrong. On the other hand, clearly structured texts with subtitles and topic sentences that guide a reader through the aspects and arguments have a better chance of being summed up by AI in an adequate manner. But such thoroughly structured texts are also an easy read; with some skimming and scanning, you are better off reading them on your own and not risking any mistakes. To be honest, on some days a good enough AI summary isn't worse than what my unmotivated, busy-minded self can produce.

Compromise: Be cautious about what you want to achieve with the use of AI. Is it to summarize a text in order to understand it better? Go for it, but be aware that some information will get lost. Is it to produce knowledge? Leave it. Use AI for inspiration and brainstorming, but do not settle for its output before you have made up your own mind. Humans are naturally lazy slackers, and once AI has provided you with an output, your creativity most definitely narrows. A crude sanity check for AI summaries is sketched below.
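One hypothetical way to stay cautious with AI summaries is a crude completeness check before trusting them: if the source text is well structured with subtitles, you can at least verify that every section heading is reflected somewhere in the summary. The sketch below is our own illustration, not an established method; heading detection is deliberately simplistic (lines in ALL CAPS), and passing the check does not make a summary correct, it only flags obviously incomplete ones.

```python
# Hypothetical sanity check: does an AI summary mention every section heading
# of the source text? Headings are crudely detected as ALL-CAPS lines.
def missing_sections(source_text: str, summary: str) -> list[str]:
    headings = [
        line.strip() for line in source_text.splitlines()
        if line.strip() and line.strip().isupper()
    ]
    return [h for h in headings if h.lower() not in summary.lower()]

source = """PERSONAL TUTOR
...section text...
TEST GENERATOR
...section text...
"""
summary = "The text discusses AI as a personal tutor."

gaps = missing_sections(source, summary)
if gaps:
    print("The summary never mentions:", gaps)  # -> ['TEST GENERATOR']
```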

PROGRAMMING TOOL

Premise: AI can assist with programming by automating repetitive tasks, detecting errors, and providing intelligent suggestions for code optimization.

Objection: Learning a programming language can be cumbersome, and it is often difficult to solve specific problems within a coding language. Standard language processors such as ChatGPT are massively trained on programming language, making them an almost tautological tool for programming needs. While language processing algorithms are not much better than an Allen key, this is exactly how they can be used: if you have a raw code idea or a general direction, these AI bots can fill in the blanks. They can equally be a good starting point, but everything in between the start and the end still relies on human programmers. Current analyses show that while ChatGPT is very good at producing specific code bits, it is very bad at explaining the complexity of data analysis. Hence AI is still at its limits when it comes to transcending experience within programming. It can make many steps more efficient, but it frequently creates such severe mishaps that any longer code is prone to errors, no matter what. Hence the best benefit we currently get from language processing algorithms is that, within programming, we at least see quickly whether the suggested code works. This is, however, not the same as knowing that it is correct. AI cannot tell us that; this is still restricted to human experience.

Compromise: A balanced approach to using AI in coding involves leveraging AI tools to automate repetitive tasks and generate boilerplate code, while ensuring that experienced human developers closely review, test, and refine the AI-generated code. Development teams should establish clear guidelines for when and how to use AI coding tools appropriately, considering factors such as project complexity and security requirements. AI tools should be used in conjunction with other code analysis and testing tools to catch potential issues and ensure code quality. Organizations should foster a culture of continuous learning and skill development among developers to avoid overreliance on AI and maintain a deep understanding of programming principles and best practices. A small illustration of the difference between code that runs and code that is correct follows below.
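The gap between "the suggested code runs" and "the suggested code is correct" can be illustrated with a small, entirely hypothetical example. The first function below is the kind of plausible snippet a coding assistant might produce: it runs without errors, but it silently ignores the Gregorian century rule. Only the human-written edge cases reveal the problem; this is the review and testing work that the compromise above insists on.

```python
# Hypothetical illustration of "runs" vs. "correct".

def is_leap_year_suggested(year: int) -> bool:
    # Plausible but incomplete AI-style suggestion: divisible by 4 means leap year.
    return year % 4 == 0

def is_leap_year(year: int) -> bool:
    # Reviewed version: century years are leap years only if divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Edge cases chosen by a human who knows the calendar rules.
edge_cases = {2024: True, 2023: False, 1900: False, 2000: True}

for year, expected in edge_cases.items():
    assert is_leap_year(year) == expected
    if is_leap_year_suggested(year) != expected:
        print(f"Suggested code fails for {year}: expected {expected}")
# Prints: Suggested code fails for 1900: expected False
```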

BRAINSTORMING TOOL

Premise: AI can enhance brainstorming by offering diverse perspectives, generating creative ideas, and suggesting novel solutions based on vast data analysis and pattern recognition.

Objection: An alarming number of students and colleagues use AI as an initial brainstorming tool. From a knowledge-ethical standpoint, this is a disaster for academia and for the human race. Current AI is trained on text bodies from the internet, and the internet is a racist, sexist, hedonistic space that contains the worst and sometimes also the best that humankind has to offer. It is very difficult to add another valid thought to an existing list or collection of thoughts; hence, if an AI comes up with a list of thoughts, it is troubling to come up with additional ones. The use of AI for brainstorming thus clearly creates a bias and a pre-determination that goes against the very core of science: to create new knowledge. If you choose to use AI for brainstorming, you choose to add nothing meaningful to this world. Now you go!

Compromise: Use AI for brainstorming in the sense of finding a starting point, but do not ever consult AI before brainstorming on your own. It cannot be that you consult AI because you lack creativity when it is AI that limits your horizon in the first place. If you hardly know anything about a topic, AI can be of great use in providing an overview of a subject's components. But beware: AI will never put out a novel idea.

Ethics

A NOTE ON NORMATIVITY

AI is probably decades away from making any normative claims that are not synthetic. Currently, an AI can only do what it is told (3), and this fails whenever it has to evaluate anything as better or worse. ChatGPT is notorious for offering statements like "some people see it like this, others see it like that, it is complex", thus not taking any normative decision whatsoever. Using AI as a tool to generate or evaluate such thoughts is dangerous. Beside the aforementioned bias and predetermination, AI is only built out of pieces, adding one thought or rule on top of another. We still struggle to understand how consciousness or creativity originate, and as long as this is the case, an AI will be incapable of generating any artificial form of higher intelligence. Yet from a philosophical standpoint, higher moral thinking is of undeniable importance to meaningfully add to normativity. If anyone uses AI as a tool to generate normative thoughts, then at best you predetermine yourself towards something that cannot be argued for; at worst, you opt for thoughts that are merely better than random, and thus your contribution to the world, based on AI, is overall meaningless.

A NOTE ON AI BEYOND SCHOLARLY EDUCATION

Besides text and content work, AI can also be used as a tool to generate images and even music. While this is not necessarily directly related to science, it can become part of some areas of scientific work. The perception and generation of art through AI is beyond the scope of this text, but its impact can be predicted to be huge, since it already has a long tradition, e.g. in music. Within science, there is a rising possibility to illustrate concepts through AI-created images. This may provide more opportunities in the future and is thus mentioned here.

Conclusion

In conclusion, while AI technologies like ChatGPT present an innovative frontier in education, their integration requires a balanced approach. The cons, such as potential reductions in critical thinking, issues with verifiable sources, a need for human interaction in learning, and ethical considerations including privacy and biases, highlight the need for cautious and thoughtful implementation. However, the potential benefits cannot be overlooked. By experimenting with AI in diverse educational tasks, developing critical thinking skills to understand and mitigate AI limitations and biases, and utilizing AI tools for tasks like summarizing, role-playing, brainstorming, and engaging in the Socratic method, students and educators can greatly enhance the learning experience. The key is to tailor AI usage, ensuring that it complements rather than replaces human elements in education. Emphasizing the importance of detailed, well-considered prompts can lead to more accurate and bias-free AI responses, ultimately making AI a valuable ally in the educational process.


Links & Further reading

Videos

Books

Tools

Papers



The authors of this entry are Sergey Belomestnykh and Prof. Dr. Henrik von Wehrden.