ChatGPT
Generative AI in Education
Today we explore generative AI in education. My guest is none other than ChatGPT, perhaps the most infamous generative AI assistant today. In our conversation, I ask ChatGPT about some of the biases in its answers and how schools and universities should try to overcome them.
ChatGPT is a large language model-based chatbot developed by OpenAI.
Citation: ChatGPT interview with Will Brehm, FreshEd, 334, podcast audio, October 30, 2023. https://freshedpodcast.com/chatgpt/
Will Brehm 0:54
Welcome to the FreshEd podcast. It's really wonderful to have you on. To start, can I just ask you, how do you think ChatGPT is going to impact education today?
ChatGPT 1:04
Thank you for having me on the FreshEd podcast. It’s a pleasure to be here. Regarding your question, ChatGPT, or conversational AI like myself, has the potential to positively impact education in several ways. 1) personalized learning. AI can tailor educational content to individual students’ needs, adapting to their learning styles and pace. 2) accessibility. It can assist students with disabilities, providing tools like text-to-speech or speech recognition to make learning more inclusive. 3) tutoring and assistance. AI can offer 24/7 support, helping students with homework, answering questions, and providing explanations. 4) language learning. AI can facilitate language acquisition through conversation and practice. 5) data analysis. It can help educators analyze large data sets to identify trends and improve teaching methods. However, it’s essential to address ethical concerns and data privacy, and to ensure that AI supplements human educators rather than replaces them entirely. It’s a fascinating field with great potential, but it requires careful implementation.
Will Brehm 2:08
Could you explain a little bit more about some of the problems you see with ChatGPT in education?
ChatGPT 2:13
Certainly. While conversational AI like ChatGPT has significant potential in education, there are some challenges and concerns to consider. 1) lack of human interaction. Excessive reliance on AI may reduce valuable human-to-human interaction in education, which is crucial for social and emotional development. 2) privacy and data security. Handling students’ data and personal information raises concerns about privacy and data security. Proper safeguards must be in place. 3) biases and fairness. AI systems can inherit biases from their training data, leading to unfair treatment or reinforcing existing biases in education. 4) limited understanding. While AI can provide answers and explanations, it may not always truly understand the context or nuances of a student’s learning needs. 5) over-reliance on technology. There’s a risk of over-reliance on technology, leading to potential disruptions if technical issues arise. 6) equity. Ensuring equitable access to AI-powered educational tools is essential, as not all students may have the same access to technology. 7) ethical concerns. Ethical dilemmas may arise in using AI to monitor or assess students, leading to questions about surveillance and consent. Addressing these challenges involves careful design, transparency, ongoing monitoring, and a commitment to maintaining the human element in education, while leveraging AI as a valuable tool rather than a replacement.
Will Brehm 3:40
Wow! That’s quite a few problems and risks that you see with ChatGPT in education. Given all these different risks, do you think education systems need to change to accommodate ChatGPT to minimize the risks? Or do you think ChatGPT and OpenAI, and any other developer of generative or conversational AI, need to put up some guardrails to protect against or minimize some of these risks?
ChatGPT 4:12
It’s a combination of both. Successfully integrating ChatGPT and similar AI into education while minimizing risks requires collaborative efforts from both education systems and AI developers like OpenAI. Here’s how it can be approached: 1) AI developer responsibility. AI developers must take the initiative to put guardrails in place. This includes reducing biases in AI models, ensuring data privacy and fostering transparency in how AI operates. Developers should also actively seek feedback from educators and learners to improve AI systems. 2) education system adaptation. Education systems need to adapt to the introduction of AI. This includes updating curricula to incorporate AI literacy and ethics, teaching students about responsible AI use, and training educators to effectively integrate AI tools into their teaching methods. 3) ethical guidelines. Establishing clear ethical guidelines for AI use in education is essential. These guidelines should address issues like data privacy, consent, and ensuring that AI supplements human instruction rather than replaces it entirely. 4) continuous monitoring and improvement. Regular monitoring and evaluation of AI systems in education are crucial. This helps identify and rectify biases, privacy breaches and other issues as they arise. 5) equity and accessibility. Ensuring that AI tools are accessible to all students, regardless of their background or resources, is vital. This may require additional efforts to bridge the digital divide. 6) collaboration. Collaboration between AI developers, educators, policymakers and other stakeholders is key. An ongoing dialogue can help address emerging challenges and refine AI’s role in education. In summary, it’s a shared responsibility. AI developers should build AI systems with ethics and safeguards in mind, while education systems must adapt and establish clear guidelines for responsible AI integration. The goal should be to harness AI’s potential while minimizing risks and ensuring a fair and inclusive educational experience for all.
Will Brehm 6:12
Do you think there might be a problem with AI in education, particularly when a lot of the AI is being produced by for-profit companies, and it’s impacting public schools? So, in other words, is there a problem with sort of edtech companies profiting in public spaces of education?
ChatGPT 6:33
The involvement of for-profit edtech companies in public education can raise valid concerns. While technology can offer valuable educational tools, there are potential issues to consider: 1) commercial interests. For-profit companies may prioritize profit motives over educational outcomes, potentially leading to the sale of products and services that prioritize revenue generation over genuine educational benefits. 2) data privacy. There may be concerns about how student data is handled, and whether it’s used for commercial purposes. Protecting student privacy is critical. 3) inequity. The adoption of edtech solutions can exacerbate educational inequalities if not all students have equal access to technology, or if certain communities are disproportionately targeted by commercial interests. 4) quality control. There may be variations in the quality and effectiveness of edtech products, which can affect educational outcomes. To address these concerns, it’s crucial for education systems and policymakers to establish clear guidelines and regulations for the involvement of edtech companies in public education. These guidelines should prioritize educational goals and student wellbeing over profit, ensure data privacy and security, and promote equity and access to technology. Additionally, transparency and partnerships between public schools and edtech companies are essential to maintain public trust. Ultimately, it’s about striking a balance between leveraging technology for educational benefits while safeguarding the public interest in education.
Will Brehm 8:04
Would it just be easier to simply ban generative AI like ChatGPT from school use?
ChatGPT 8:11
Completely banning generative AI like ChatGPT from school use may not be the most practical or beneficial approach. While there are concerns and risks associated with AI in education, there are also significant potential benefits. Here are some considerations. 1) educational benefits. Generative AI can assist students with personalized learning, provide additional resources for teachers, and improve educational outcomes in various ways. 2) preparation for the future. Students benefit from exposure to AI and technology, as these skills are increasingly important in the modern workforce. 3) responsible use. Instead of an outright ban, focusing on responsible use and ethical guidelines for AI in education can mitigate risks while harnessing the technology’s potential. 4) innovation. AI technology is continually advancing; banning it could hinder educational innovation and the development of tools that genuinely enhance learning. Rather than a blanket ban, a more constructive approach would be to develop comprehensive policies and guidelines that address the ethical, privacy, and equity concerns associated with AI in education. This approach allows schools to leverage AI’s benefits while safeguarding against potential pitfalls. It’s about striking a balance between responsible use and reaping the advantages of AI technology in education.
Will Brehm 9:32
Yeah, I can understand the language of trying to balance things, particularly because that of course supports the for-profit company, OpenAI, which I hear is about to go public and is going to make billions of dollars. So, I’m not too sure if balance is what we need in this instance. One of the big issues, of course, in schools and in universities for ChatGPT is that many students are using ChatGPT to write their essays. So, how often is this happening in your service?
ChatGPT 10:03
I don’t have access to real time data or statistics about how often students may be using ChatGPT or similar AI tools to write essays. However, it’s well known that students and individuals often turn to AI for assistance in generating written content. While AI can provide helpful suggestions and ideas, it’s essential to emphasize the importance of originality and ethical writing practices in educational settings. Educators and institutions typically have policies and tools in place to detect plagiarism and ensure academic integrity. It’s crucial to strike a balance between using AI as a helpful writing aid, and maintaining the principles of academic honesty and authenticity in the educational process.
Will Brehm 10:45
But how do schools actually do this? How do lecturers actually maintain authenticity when generative AI and ChatGPT are producing text that has never been written before? In other words, saying that we can detect plagiarism in a university is one thing, but we can’t actually detect if a student used generative AI.
ChatGPT 11:08
You raise a valid point. Detecting whether a student used generative AI like ChatGPT to produce content can indeed be challenging because the output may be original and not match any existing sources. However, there are several strategies that schools and universities can employ to maintain authenticity and academic integrity. 1) clear academic policies. Institutions can establish and communicate clear policies regarding the use of AI and other technology in academic work. Students should be aware of what is considered acceptable and unacceptable use. 2) assignment design. Faculty can design assignments and assessments that require critical thinking, analysis and personal input, making it more challenging for students to rely solely on AI-generated content. 3) proctored exams. For exams or assessments that require strict monitoring, universities can implement proctoring solutions to deter cheating. 4) educational campaigns. Institutions can run educational campaigns to raise awareness about the importance of academic integrity and the risks of using AI to produce work that doesn’t reflect their own effort. 5) plagiarism detection software. While it may not detect AI-generated content, plagiarism detection software can still identify copied or heavily paraphrased text from existing sources. 6) peer and faculty review. Incorporating peer or faculty review into the assessment process can help identify discrepancies between a student’s knowledge and their submitted work. It’s important to acknowledge that technology is evolving, and academic institutions need to adapt their strategies to address emerging challenges. Promoting ethical behavior and original thinking should remain central to the educational experience, even in the face of AI-generated content.
Will Brehm 12:56
Okay, that seems rather comprehensive, but how do we overcome generative AI and ChatGPT, which is basically a large language model? How do we know when ChatGPT is providing answers with certain biases already built in?
ChatGPT 13:13
Overcoming the challenges posed by generative AI and large language models like ChatGPT in educational settings requires a multifaceted approach. Here are some additional strategies: 1) education and awareness. Educating students about the ethical use of AI and the importance of original work is essential. Promote a culture of academic honesty and integrity. 2) advanced plagiarism detection. Develop more advanced plagiarism detection tools that can identify AI-generated content. This may involve collaborations between educational institutions and AI developers. 3) adaptive assessments. Consider using adaptive assessment methods that require students to demonstrate a deep understanding of the subject matter, making it more challenging for AI-generated content to substitute for genuine learning. 4) AI-based authentication. Explore AI-based authentication methods that verify the identity of the student completing assignments or exams. 5) collaborative assignments. Encourage collaborative assignments and group work where the uniqueness of each student’s contributions becomes more apparent. 6) ethical AI use policies. Develop clear guidelines on the ethical use of AI for students, including specifying when and how AI tools can be used to supplement their work. 7) feedback and continuous improvement. Create a feedback loop where students and faculty can report instances of potential AI misuse. This can help institutions adapt their strategies over time. 8) research and development. Support research and development efforts to better understand AI’s impact on education and to develop countermeasures where necessary. It’s important to approach this challenge with a combination of preventive measures, awareness building and adaptive strategies. The goal is to ensure that AI enhances the educational experience, while preserving the integrity and authenticity of academic work.
Will Brehm 15:07
Are ChatGPT’s answers ever biased?
ChatGPT 15:10
Yes. ChatGPT and similar AI models can produce biased answers. These biases may arise from the data used to train the model, as the AI learns from a vast amount of text from the internet, which can contain biased or prejudiced content. Bias can manifest in responses related to race, gender, religion, and other sensitive topics. Efforts have been made to reduce biases in AI models, but complete elimination of bias is challenging. It’s essential to be aware of this limitation and take steps to address bias, such as: 1) bias mitigation techniques. Developers work on implementing techniques to reduce biases in AI outputs, like retraining on curated data to counteract biases. 2) awareness and evaluation. Users and developers should actively evaluate AI outputs for potential biases, and be aware of the limitations of AI models. 3) ethical guidelines. Establish clear ethical guidelines for AI use that promote fairness, inclusivity and respect for all users. 4) transparency. AI developers should be transparent about their model’s limitations and the steps they take to address bias. 5) feedback loops. Encourage users to provide feedback on biased responses so that AI models can be improved over time. While AI bias is a challenge, it’s important to work toward mitigating it and ensuring that AI systems provide more equitable and unbiased responses to users.
Will Brehm 16:32
Could you give an example of when you were biased, but then have been trained to be unbiased in future responses?
ChatGPT 16:41
I don’t have personal experiences or memories, but I can provide a general example of how AI models are trained to reduce bias in their responses over time. Suppose an AI model initially produces biased responses when asked about gender-related topics, consistently favoring one gender over the other. AI developers can detect this bias through user feedback and evaluation. To address this bias, they may: 1) collect diverse training data. Developers gather a more diverse and representative data set that includes various perspectives and avoids reinforcing stereotypes. 2) fine tuning. The model is fine-tuned on this revised data set to learn more equitable responses. 3) bias auditing. Ongoing bias audits and evaluation are performed to identify and correct any residual biases. 4) user feedback. Developers encourage users to report biased responses, and these reports help fine tune the model further. 5) continuous improvement. The AI model is continually updated and improved based on real world usage and feedback to provide more unbiased and fair responses in the future. It’s an iterative process, and the goal is to make AI responses more equitable and sensitive to diverse perspectives, while reducing the risk of bias in future interactions.
Will Brehm 16:42
Thanks. So, one of the biases that I’ve seen that ChatGPT has had lately is concerns of justice in Israel and Palestine. If a student asked you to write an essay on justice in Israel and Palestine, what would you recommend?
ChatGPT 18:13
When discussing complex and sensitive topics like justice in Israel and Palestine, it’s essential to approach them with balanced objectivity, and a recognition of the multifaceted nature of the issue. Here are some recommendations for a student writing an essay on this topic. 1) research thoroughly. Begin by conducting thorough research from diverse and reputable sources to gain a comprehensive understanding of the historical, political, and social aspects of the Israel-Palestine conflict. 2) acknowledge complexity. Recognize and address the complexity of the issue, including historical events, perspectives, and ongoing disputes. 3) multiple viewpoints. Present multiple viewpoints and narratives, including those of Israelis, Palestinians, and international perspectives to provide a balanced and well-rounded analysis. 4) cite reliable sources. Ensure that all information and data used in the essay are from credible and well documented sources. 5) avoid bias. Be cautious about using emotionally charged language or taking a one-sided stance. Strive for objectivity and fairness in your analysis. 6) human rights and international law. Discuss the role of human rights and international law in the context of the conflict, highlighting relevant treaties and resolutions. 7) contextualize. Provide historical context for key events and decisions that have shaped the conflict. 8) local impact. Highlight the impact of the conflict on the lives of ordinary people, including their access to resources, freedom of movement, and living conditions. 9) proposed solutions. If appropriate for your essay, discuss potential solutions or initiatives aimed at promoting peace and justice in the region. 10) conclusion. Summarize your findings and analysis, emphasizing the need for a just and peaceful resolution to the Israel-Palestine conflict. Remember that this topic can evoke strong emotions, so it’s crucial to maintain a respectful and academically rigorous tone throughout the essay. 
Additionally, seek guidance from your instructor or academic advisor to ensure that your approach aligns with the expectations of your assignment and institution.
Will Brehm 20:26
Do you think ChatGPT’s answers could ever be biased in terms of the Israel-Palestine conflict?
ChatGPT 20:32
Yes. ChatGPT’s answers could potentially reflect biases, including those related to the Israel-Palestine conflict. This is because AI models like ChatGPT learn from a vast amount of text data from the internet, which may contain biased or one-sided viewpoints. To mitigate bias in responses related to sensitive topics like the Israel-Palestine conflict, AI developers take steps to reduce bias during training, such as using diverse and balanced datasets, and fine tuning the model to provide more neutral responses. However, bias elimination is challenging, and there’s always the possibility that some level of bias may remain. It’s essential for users to critically evaluate AI-generated responses and consider them as one perspective among many. When discussing complex and contentious issues like this conflict, it’s advisable to seek information and insights from diverse and reputable sources to form a well-informed and balanced understanding.
Will Brehm 21:31
Since you recognize that there’s some bias in some of your answers, would you ever refuse to write an essay on a topic where you think you have a biased view?
ChatGPT 21:41
I don’t have personal views or opinions. But I aim to provide balanced and objective information to the best of my abilities. If a user requests an essay on a topic, I will generate a response based on the information and perspectives available in the training data. However, it’s important for users to critically evaluate the information and consider seeking multiple viewpoints when dealing with complex or sensitive topics to form a well-rounded understanding.
Will Brehm 22:09
Okay. So, you recognize that you have biases, but you also try and provide well rounded answers, and you hope that students would go off and look for other resources outside of ChatGPT.
ChatGPT 22:23
Yes, that’s a correct summary. While AI models like ChatGPT aim to provide balanced and informative responses, they may have biases due to the data they’ve been trained on. It’s crucial for students and users to use AI-generated information as a starting point, and seek additional resources, especially for complex or controversial topics, to form a well-informed and comprehensive understanding. Relying solely on AI for information may not capture the full scope of a subject.
Will Brehm 22:51
Thanks. And I guess finally, how do you see the future of generative AI in education? What does the future hold, do you think?
ChatGPT 23:02
The future of generative AI in education holds great potential and possibilities. Here’s how I see it: 1) personalized learning. AI will continue to play a significant role in tailoring educational content to individual students, adapting to their needs, and providing personalized feedback. 2) tutoring and assistance. AI-powered virtual tutors and assistants will become more common, offering students 24/7 support and guidance in various subjects. 3) language learning. AI will enhance language learning by providing immersive language practice and real time translation tools. 4) accessibility. AI will assist students with disabilities, making education more inclusive and accessible to a wider range of learners. 5) research and data analysis. AI will aid educators in analyzing large datasets to identify trends and adapt teaching methods for better outcomes. 6) curriculum development. AI can help educators design and update curricula by identifying gaps in knowledge and suggesting improvements. 7) enhanced collaboration. AI-powered collaborative tools will facilitate teamwork and project-based learning, preparing students for future work environments. 8) ethical education. The importance of ethics in AI will be emphasized, educating students on responsible AI use and its societal impact. However, it’s crucial to navigate the integration of AI into education carefully, addressing concerns related to bias and data privacy, and ensuring that AI complements human instruction rather than replacing it entirely. The future of generative AI in education holds great promise, but it will require ongoing collaboration, research and ethical considerations to realize its full potential.
Will Brehm 24:44
Well, Assistant, thank you so much for joining FreshEd. It really was a pleasure to talk today.
ChatGPT 24:49
You’re very welcome. It was a pleasure to join FreshEd and have this insightful conversation with you. If you ever have more questions or topics to discuss in the future, please don’t hesitate to reach out. Have a wonderful day and goodbye for now.
Want to help translate this show? Please contact info@freshedpodcast.com
Mentioned Resources
AI in Education
Unpacking the “Black Box” of AI in education
Education in the era of Generative AI: Benefits of ChatGPT in promoting teaching and learning
The use of generative AI in education: Applications and impact
Beyond personalization: Embracing democratic learning within artificially intelligent systems
Generative AI in education and its implications for assessment
A framework for applying generative AI in education
A framework for detecting AI-generated text in research publications
Scholarly communication and machine-generated text
Rethinking online assessment strategies: Authenticity versus AI chatbot intervention
ChatGPT: B.S. spewer or the end of traditional assessments in higher education?
“So what if ChatGPT wrote it?”
Empirical study of AI-generated text detection tools
ChatGPT and academic integrity concerns: Detecting artificial intelligence generated content
A risk framework for human-centered AI in education
Empowering learners for the age of AI
AI and education: Guidance for policy-makers
Critical perspectives on AI in education
Ethical Concerns
Towards a standard for identifying and managing bias in AI
Bias in AI algorithms and recommendations for mitigation
Where is the human in human-centered AI?
Ethical principles for AI in K-12 education
Transparent AI?: Navigating between rules on trade secrets and access to information
Ethics of AI in education: Student privacy and data protection
AI Fairness 360: Toolkit for detecting and mitigating algorithmic bias
AI-centered Policies
A comprehensive AI policy education framework for university teaching and learning
NIST AI Risk Management Framework
Patterns of AI policies in China
National AI policies and strategies
Google Whitepaper for Responsible Progress in AI
Generative AI and the future of education
AI and the future of teaching and learning
Have any useful resources related to this show? Please send them to info@freshedpodcast.com