AI isn’t near, it’s here

Instead of fearing what is ahead, educators should embrace the benefits of artificial intelligence
AI technology has been around for a lot longer than most people suspect. It’s time to bring this tool into the classroom to get students involved in the next frontier of education.
Photo licensed with permission by Adobe Stock

If you want to throw out AI, consider first that you would be throwing out your Amazon Alexa, autofill, voice transcription apps, Siri, Netflix’s recommendation algorithm and Google Maps. I could list more. 

Artificial intelligence — despite what many people have come to believe — is not synonymous with ChatGPT. And it’s not pure evil either.

Generative AI like ChatGPT is just one form of a long-developing trend. One of the first uses of AI came when computer scientist Arthur Samuel created a program that could play checkers independently. And that was in 1956.

AI has been around for a while. But more importantly, it’s not going away.

AI is expected to see an annual growth rate of 37.3% from 2023 to 2030. But that’s not an inspiring fact for most. According to Forbes, 77% of people still fear that AI will cause job loss in the next year.


That’s not to say these fears don’t have merit; however, we were also afraid of the internet before it became an everyday necessity. 

And now, only about 40 years after the internet was first created, we are so reliant on it as a tool for work and school that most students would not be able to make it through their first-period classes without it. 

With all class materials anchored on BLEND and most activities involving online tools like discussion boards, Google Doc worksheets or GoGuardian-protected tests, it’s apparent that the frequency of internet usage has shifted drastically since its initial launch.

It’s hard to imagine AI won’t follow a similar pattern as the science and technology behind it progresses. 

So how do we cure the fears of artificially intelligent robots taking over all job industries or teaching students only to cheat? Exposure.

Everyone should be learning to use AI in their various fields of employment or study to develop their knowledge of the technology as a tool, not an enemy. 


This starts at the high school level. 

Students should be using AI to increase productivity in school so that they can focus more on tasks that will contribute to their lives beyond the classroom. 

We as a society have already moved toward tools that make our lives easier by taking over time-consuming tasks. Think about a Roomba that vacuums your house without you ever needing to get out of your chair. Or microwaves that speed up the cooking process. Or even comical ’80s technology like The Clapper, which saved people the time of getting up to turn off the light.

Generative AI can be used to save time in the same way. And I’m not talking about using ChatGPT to do your work for you.

There are generative steps in projects and schoolwork that aren’t necessarily part of the final product.


What I mean is that tasks like brainstorming research topics, trimming essays to a word count or even coming up with a list of rhyming words for a poetry assignment are all instances where ChatGPT could free up a student’s time, allowing them to focus more on the work that involves higher-level thinking.

Of course, this is not a black-and-white issue. OpenAI’s ChatGPT is roughly a year old and is already involved in more than a dozen lawsuits.

Take the Walters v. OpenAI, LLC, case. The ChatGPT software is reported to have had what developers describe as a “hallucination,” which, simply put, is when the software gets its facts very wrong. In this case, the plaintiff, Georgia DJ Mark Walters, claimed that the program created a false criminal record for him.

But that’s an extreme example, and one unlikely to occur in the simpler ways students would use the platform.

Courts are trying to figure this out right now. McCallum students and staff should be at the forefront of experimenting with ethical uses of this new technology in the school environment — isn’t that what education is all about?
