Navigating the Artificial Intelligence Revolution in Schools

Published on December 23, 2023


For many in the education world, artificial intelligence is a demon unleashed, one that will allow students to cheat with impunity and potentially replace the jobs of educators. For others, it's a panacea for education's myriad challenges. "It's 'the whole world is coming to an end because of AI,' which isn't true, and on the other side, 'AI is going to fix all our problems,' which is also not true," says Richard Culatta, chief executive officer of the International Society for Technology in Education (ISTE).

In fact, AI, like all technology, is neither inherently good nor bad; what matters is how it's deployed. And the research for this report suggests that the most effective way to use artificial intelligence in schools is as a tool that supports rather than supplants educators. "It's best thought of as complementing human judgment, not replacing it," says John Bailey, a former director of educational technology in the U.S. Department of Education. "It's not quite a tutor. It's definitely a little short of personalized learning. But it's like having a very smart assistant that is always going to be available to you to help you with questions and manage your work."

While many educators may want to ignore this newest technological advance, they can't. As Mark Heifner, a seventh-grade science teacher at Angola Middle School in Angola, Indiana, puts it: "I'm most definitely afraid of AI. But like with all inventions throughout history, whether we like it or not, we have to learn how to use it."

In some schools, AI is already providing personalized feedback that helps students learn or better understand a problem or concept, helping teachers develop lesson plans, providing feedback for educators on their teaching, developing Individualized Education Plans for students, and helping parents better understand and engage in the process.

Social media, the internet and cell phones all offer cautionary tales about embracing technology without thinking carefully about how to introduce it. The challenge for educators is to use artificial intelligence in ways that tap its unprecedented power to strengthen both teaching and learning while mitigating its liabilities. This report is designed to help them do that.

Artificial intelligence has been around since the 1950s. But prior to 2017, it largely used past data to predict future behavior. Think Netflix suggesting movies you might like based on what you've watched in the past. Or self-driving cars. Or playing chess.

AI, however, wasn't particularly good at human language or creativity until 2017, when a paper called "Attention Is All You Need" was published by eight authors, some of whom at the time worked at what was then known as Google Brain, an artificial intelligence research team.

The new findings led to what is known as generative AI, which uses so-called large language models, or LLMs, to comprehend and generate new content. And in late 2022, the company OpenAI grabbed global attention when it released ChatGPT, which stands for Chat Generative Pre-Trained Transformer. The emergence of ChatGPT was a significant technological leap forward, with its ability to rapidly analyze text, summarize documents, respond to queries in an eerily human style, and translate anything into just about any language. With other models, AI can also create digital artwork, video and audio, such as musical compositions.

One stunning example of how fast the technology is improving is ChatGPT's scores on tests such as the SAT, AP exams, and even GREs and LSATs. As Ethan Mollick, an associate professor at the Wharton School of the University of Pennsylvania, and his wife, Lilach Mollick, director of pedagogy at Wharton Interactive, explain in a series of five videos for educators, AI in the past flunked those tests, but ChatGPT did fairly well. It scored in the 40th percentile on the LSAT and above the 60th percentile on the GRE Verbal section. That in itself was astonishing. But a few months later, when the new GPT-4 model was released, LSAT scores improved to the 80th percentile and GRE Verbal to the 99th percentile.

"We're in the middle of a very big transformation, probably one of the largest in recent history," Mollick says. "It will affect everything we do as teachers, as students, as workers."

Experts in AI want people to know one thing: AI is not magic. It is not supernatural. It can be controlled.

Early on, some school districts rushed to ban ChatGPT but quickly learned that makes little sense. New York City schools prohibited access to the chatbot in January 2023 and reversed that decision by May. There was a realization that banning it would simply allow those with more technological options outside of school, typically wealthier students, to use it, while poorer students would be left behind.

Also, as David C. Banks, chancellor of New York City public schools, said in a statement walking back the ban: "The knee-jerk fear and risk overlooked the potential of generative AI to support students and teachers, as well as the reality that our students are participating in and will work in a world where understanding generative AI is crucial."

Among the many questions facing superintendents, principals, chief technology officers, and educators about the new world of AI: How do we choose which companies or programs to invest in? Should AI be a required skill for students? What can schools do about students cheating with AI? What are the privacy risks?

Mollick and other experts caution educators against hurrying to purchase AI programs. "There's a million companies trying to sell a million different things to educators, administrators," Mollick says. He warns against contracting with third-party vendors at this point, partly because AI is changing so fast that what looks good now may not look so good in a few months. Also, schools and districts need to learn what AI can do to address their particular needs before buying anything.

The first step is understanding what ChatGPT and similar chatbots, such as Google's Bard, Amazon's Q and Anthropic's Claude, do. Companies that develop them crawl across the web gathering masses of information, says Michael Littman, a professor of computer science at Brown University. The process is essentially an automated version of browsing the internet.

But there are several serious obstacles, the most important being that the data these highly sophisticated chatbots provide aren't always correct. (The term now used is that the chatbot is "hallucinating" when it spits out facts that it seems to be making up.) In addition, educators will have to ask chatbots the right questions, or prompts, in order to get the information they want.

To help on that front, companies and nonprofits are developing more focused chatbots trained on curated information that is far more targeted to, say, helping teachers create lesson plans, or offering evidence-based professional development and personalized tutoring for students.

Companies and nonprofits, from well-established organizations to start-ups, are jumping into the AI market. "It's a multibillion-dollar business, and everyone is trying to inundate my inbox with services or solutions I need to try," says Eric Griffiths, technology director for the Madison-Plains Local School District in London, Ohio.

One of the best known and most widely publicized of these educational chatbots comes from the nonprofit Khan Academy. It's called Khanmigo, and Khan Academy describes it as an AI-powered student tutor and teaching assistant. It can help teachers develop lesson plans, co-create rubrics, and write a class newsletter, among other things.

With Khanmigo, which costs school districts $35 annually per student, the goal is not simply to give students an answer, but to guide them through the learning process by asking questions and providing personalized feedback. It can also help a student through an essay-revision process.

Another education-focused chatbot is Stretch AI, a teacher-coach bot developed by ISTE and the Association for Supervision and Curriculum Development (ASCD), which have merged into one organization. The chatbot is trained only on information vetted by ISTE and ASCD.

Educators are currently testing and training Stretch AI, and Culatta says the hope is to make it available in the next few months. He sees specialized tools, where the content is carefully vetted and answers are cited for verification, as the real future of AI, especially in learning.

And teachers are going to be able to create customized bots of their own. OpenAI in November introduced a ChatGPT feature that allows users to easily create customized apps by programming prompts into bots. For example, Bailey says, a teacher could design an app to function as a tutor. The teacher could prompt the bot to tailor lessons to students' interests and then quiz students. If a student answers incorrectly, the tutor would explain the answer to the student.
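To make the idea concrete, here is a minimal sketch of what "programming prompts into a bot" can look like under the hood, using the OpenAI Python library. The model name, system prompt, and quiz behavior here are illustrative assumptions for a hypothetical classroom tutor, not a description of any particular product mentioned in this report.

```python
# A minimal sketch of a prompt-programmed tutor bot (illustrative only).
# Assumes the OpenAI Python library (pip install openai) and an
# OPENAI_API_KEY environment variable; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

# The "programming" is largely this system prompt: it tells the model to
# tailor lessons to a student's interests, quiz them, and explain mistakes.
SYSTEM_PROMPT = (
    "You are a patient tutor for seventh-grade science. "
    "Tailor examples to the student's stated interests. "
    "Quiz the student one question at a time. If an answer is wrong, "
    "explain the correct answer before moving on."
)

history = [{"role": "system", "content": SYSTEM_PROMPT}]

def tutor_reply(student_message: str) -> str:
    """Send the student's message plus prior context to the model."""
    history.append({"role": "user", "content": student_message})
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder; use whatever model is available
        messages=history,
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(tutor_reply("I like basketball. Quiz me on Newton's laws."))
```

In practice, tools like ChatGPT's bot builder hide this code entirely; the teacher only writes the instructions in the system prompt.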

Teachers could also dictate the sources of information their chatbots draw from. They could design lesson plans by tapping into the U.S. Department of Education's What Works Clearinghouse, a repository of research on effective teaching methods, for example. With the clearinghouse's vast research resources stored in their apps, teachers could be confident that the best available instructional research is built into every lesson plan.

While all this can seem intimidating, some teachers have been pleasantly surprised once they tried ChatGPT. Kris Hagel, executive director for digital learning in Washington State's Peninsula School District, tells of a veteran teacher who was teaching Romeo and Juliet after a 20-year hiatus.

"She sat down with ChatGPT and asked it to help write rubrics, lesson plans, assessments and projects around Romeo and Juliet. And what would have taken the teacher 20 to 30 hours took just over an hour," Hagel says. But, and this is key, she knew when the chatbot was going off base because she had background knowledge about the play.

Tabea Peske, a first-grade teacher at Francis Scott Elementary School in Spokane, WA, says she's very glad she decided to try the new technology when her school offered a pilot program using AI to improve teaching. "I was a little standoffish to begin with, but I thought I'd still give it a try to see if it could make me a better teacher." Using the AI's coaching, an iPad and a small camera that followed her around the room, she videoed 20 minutes of a math lesson.

Her two goals were to ensure that her teaching had appropriate tone and pacing, and she was pleasantly surprised with the feedback she received. The program didn't suggest specific ways to improve but provided thought-provoking questions, Peske says. For example, she was asked to rate herself, and in some of the lower-rated areas the AI program suggested taking steps to improve.

"It walked me through creating the steps," she said. For example, it said, "You notice you're going too fast. What's one specific thing you can do to ensure that you slow down?" One of Peske's solutions was as simple as posting a sticky note to remind her to watch her pace.

"I'm not very technologically savvy, but the program seemed to do a really good job responding to me with very personalized questions, not just generic responses," Peske says. The whole process took about an hour, plus watching the video.

An important first step schools and school districts could take is to give educators the opportunity to investigate AI tools easily and without fear of making mistakes. "Give teachers time for some structured exploration," Culatta says. "It could be: Pick an assignment that you've done, and explore how some AI tools could help make that assignment more meaningful."

Most people feel intimidated plunging into something they're unfamiliar with, so providing teachers with strategies for using AI can be helpful. Culatta gave an example of talking to a skeptical third-grade teacher who didn't see how AI could be useful. He asked the teacher what she was working on. She replied that it was a unit on designing cities of the future, with environmental issues wrapped in. Culatta suggested using an AI program such as Stable Diffusion that could create images based on the children's prompts of, say, a skyscraper made of grass. "No search image exists for that, because it's something that they are creating out of their minds," he says. "That was a really powerful way that this teacher started to use AI to make these ideas come to life."
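For readers curious what sits behind a tool like Stable Diffusion, here is a minimal sketch of text-to-image generation using Hugging Face's diffusers library. The model ID and prompt are illustrative assumptions; classroom-facing tools typically wrap this kind of code in a simple web interface.

```python
# A minimal sketch of text-to-image generation with Stable Diffusion,
# via Hugging Face's diffusers library (pip install diffusers torch).
# The model ID and prompt are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # a CPU fallback is possible but much slower

# A student's imagined "city of the future" prompt.
image = pipe("a skyscraper made of grass in a futuristic city").images[0]
image.save("grass_skyscraper.png")
```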

Understanding the benefits of such exploration is one reason Learn21, a nonprofit that works with schools on educational technology, created an "AI playground" at its 2023 annual conference, allowing participants to wander through different stations trying out a dozen apps already loaded on computers, says Stacy Hawthorne, the organization's chief academic officer.

Griffiths, the Ohio school technology director, attended the conference and says it gave him some ideas he could suggest to teachers. "The one that I thought was kind of neat was having an AI historical figure being able to tell stories to kids," he says. "That could be used at any grade level, right, with just tweaking how the AI responds and what the input is."

Currently, Griffiths says, teachers in his district don't seem that interested in AI; they can schedule him for any kind of technology tutorial, and no one has asked about AI. "But the more exposure I have to these ideas and examples, the more it will allow me to create a rapport with teachers," he says. "Selling it to them is my job as well. Once they trust me, they will try it in a classroom."

And there's an argument that teachers must learn AI because students need to learn it as a skill for education and employment. At a minimum, Hawthorne says, AI needs to be included in all core subjects so students understand its applicability and implications, and build AI literacy, across a variety of fields.

Two of the biggest concerns school districts have about AI are how to prevent students from using it to cheat and how to protect student privacy.

There are no easy answers regarding cheating, and teachers are right to be worried. But, Bailey says, "it's a challenge, frankly, that we faced before. We faced it with calculators, we faced it with the internet, we faced it with smartphones when they came out. If you go back to 2009, schools were banning smartphones. This is just that on steroids."

Some argue that in, say, an English class, a teacher gets to know a student's writing style and can detect if AI has been used. But Hagel says that's simply not true: the genius of ChatGPT and the like is that if students feed it several essays they wrote, AI can quickly imitate their style.

Few think that technology will be an answer to technology, that software will emerge that prevents cheating with AI. Instead, the advent of AI may signal a significant shift in teaching and learning, where the process of getting to the answer is just as important as the right answer. Because, as Hawthorne says, "If AI can do the work, then are we really assessing the right thing [in focusing on answers]?"

For example, Hagel says, last year some teachers in his school district changed their way of thinking about cheating. They said, "We know you're going to use this. Let's stop playing the cat-and-mouse game of me trying to catch you trying to sneak around and do it. Go ahead and use it. If you're going to use ChatGPT, I want you to send me the shared link, so that I can read the whole conversation that you had, so I can understand your thought processes."

Some educators talk about doing much more in-class work, where students can't use AI. Because, as Mollick bluntly puts it: "All homework is doable with AI. And it's undetectable. Homework as we knew it is over; there's no way to get it back. That ship has sailed."

Protecting student privacy and data is another major worry. Using AI in the ways that could be most effective, personalized tutoring, for example, requires access to detailed data, according to a report from the U.S. Department of Education's Office of Educational Technology. The data goes beyond conventional student records (roster and gradebook information) to detailed information about what students do as they learn with technology and what teachers do as they use technology to teach.

The report, "Artificial Intelligence and the Future of Teaching and Learning," notes that AI could help a teacher customize curriculum resources, and that the data involved could include student demographic details, outside interests, relationships or experiences.

And this raises concerns about the data that AI systems are collecting and storing on children, particularly those under 13 years old. Several federal and state laws regulate the collection and release of student data, with varying degrees of success and enforcement.

Amelia Vance, president of the Public Interest Privacy Center, a nonprofit that works on child and student data privacy issues, points out that there are widespread privacy concerns with most technology used in schools and with federal privacy laws, so the focus shouldn't just be on AI.

"There isn't so much a specific privacy issue related to AI. This is just a new form of technology; nobody knows how it is being adopted, where the data is going," she says. "This adds up to a situation where you end up having a serious lack of trust and a number of concerns from parents, students themselves, and others about what it is we're bringing into our classrooms."

One recommendation from Hawthorne: school districts that haven't already done so should bring on board a chief technology officer. "This is probably one of the few things in schools that I think needs to be centralized," she says. "Ed-tech companies are great at marketing toward teachers because they want teachers to pick up their product and drop it into their class. The average teacher is not equipped, and they shouldn't have to be, to make these privacy-policy decisions."

States could play a major role in setting parameters and providing guidance on this issue to school districts, but as of November, only two states, California and Oregon, have issued guidance on AI use, including privacy policies, while another 11 are in the process of developing it.

Yet protecting student privacy can come into conflict with using AI to offer personalized learning, because AI needs data on students. It also needs more diverse data sources to address the biases in the material used to train AI models.

"One way we can have less-biased models is if they include data from a more diverse set of people," says Kristen Dicerbo, chief learning officer at Khan Academy. But that can be problematic when talking about student privacy, she notes. "That's when it becomes tricky."

Some hope that AI will help bridge the equity gap in K-12 education; others fear it will widen it.

Many AI tools, such as ChatGPT, are free (the more powerful GPT-4, however, is $20 a month, and there's a waitlist to buy it), and therefore, many say, AI will be much more accessible to lower-income and rural students than other technology.

"Any district, including the lowest-income district, has access to expertise and intelligence, and capabilities that have previously only been in the realm of really rich districts," Bailey says.

But more affluent schools, Hawthorne says, will still have the advantages of more resources, including time and flexibility, which will help teachers and their students understand and learn how to use AI more deeply.

As Claire Zau, vice president of GSV Ventures, has noted: "If you saw the first type of digital divide as this disconnect between the haves and have-nots with access to technology, there's going to be a new digital divide where kids get left behind because they're not taught, or even exposed to, AI."

At a minimum, equity means having policies and practices in place to make sure generative AI tools are available to all students and staff. But it goes beyond that; if AI is trained on biased data, its programs will propagate those biases.

In one infamous example, Amazon tested AI to vet applicants for technical posts by training it to observe patterns in resumes submitted over a 10-year period. But since most of the resumes were from men, the AI trained itself to prefer male applicants over female applicants and to penalize resumes that included the word "women's," as in "women's soccer team captain."

It will be impossible to escape biases in AI data, just as it's impossible to escape biases in humans. A K-12 generative AI readiness checklist issued by the Council of the Great City Schools and CoSN, the Consortium for School Networking, urges districts to ensure that their policies address the potential for AI to spread existing biases. And it recommends that they require vendors to include algorithmic-discrimination protections in their products.

Schools should be wary of chasing shiny objects rather than looking for evidence that bias has been appropriately addressed, says Jeremy Roschelle, executive director of learning sciences research at Digital Promise, a nonprofit that promotes digital equity in education. Education leaders can look to frameworks already in place to guide districts, such as the readiness checklist.

Roschelle's own organization is working on product certifications. Companies developing AI tools can submit evidence, which expert assessors evaluate to determine whether the research basis is valid; if so, the product is awarded a certification. But Roschelle, like Mollick and others, believes it's too early for schools to be buying AI products. To do sensible product adoption, he says, "you're going to need local leaders, educators, parents, school administrators who have much more AI literacy than they have today."

Policy Analyst Bella DiMarco contributed to this report.
