Is AI taking over our jobs (and classrooms)?

“AI is here, and it’s coming for your job.”

“AI is replacing artists.”

“ChatGPT: The end of programming as we know it”

“The college essay is dead.”


Sensational headlines such as these are all over the news and the Internet these days. Since the public release of OpenAI’s AI-based text-to-image tool DALL-E 2 in 2022, followed by the launch of its conversational agent ChatGPT in late November of the same year, there has been a growing discussion about how Artificial Intelligence (AI) is changing the creative, professional, and educational world. While DALL-E and other text-to-image tools started to appear in 2021 and 2022, and AI conversational agents have been around for a while, it was OpenAI and its two products, DALL-E 2 and ChatGPT, that turned generative AI into a mainstream topic. ChatGPT reached 100 million users in less than two months, a record for any new app or social media platform at the time. One can argue that this user base includes many who were merely curious and only “tried it,” but regardless, the public (and investor) attention is undeniable. It is probably a concern for Google and Meta, which have been working on AI for a long time and now feel left behind, trying to catch up. But they are not the only ones who are concerned.


Academia is particularly shaken by the tsunami of possibilities and uncertainties caused by ChatGPT. Suddenly, it seems that students have a tool that can write everything they will ever need to deliver, from an essay to a computer program or even a research paper. This not only creates problems for assessing students’ work, but it also raises questions about the motivation to study, especially when employers start to think they can use AI instead of hiring graduates to do those jobs. The ongoing academic discussions on AI-based text and image generators generally revolve around the following topics:

  1. What is the impact of these tools on the job market and, as a result, on educational objectives? Once we know what AI can do now and is likely to be able to do in the future, we need to re-assess what students need to learn. Using AI tools can potentially be among the skills that students need to learn, just as using search engines is now. On the other hand, some skills may become obsolete, as telephone operators and many factory workers experienced in the past.
  2. Assuming there are skills that students still need to learn, to what extent should they be allowed to use AI when performing tasks that require those skills, and how do we assess the AI-supported work? For example, we may believe that writing, by itself or as a means of critical thinking, is an essential skill that university-educated professionals need to have. Should we allow students to use AI tools such as ChatGPT to write essays? Or, if computer programming remains a required skill, can students use AI to help them with their assignments?
  3. Even if we don’t want to allow AI tools in some cases, is there a realistic chance of stopping students from using them? If something cannot be stopped, then we may be better off coming up with alternatives than wasting time trying to prevent it.
  4. On the flip side of the above questions is the idea of positive uses of AI, both for students and educators. If AI is here to stay and is getting better, what can we do with it? YouTube, Google search, and Wikipedia are now used by everyone in academia and offer many advantages. It is not unreasonable to expect similar benefits from generative AI (and other forms of AI) in education.
  5. What are the ethical, legal, socio-cultural, financial, and other concerns with the use of generative AI and other big-data-based tools? It is not hard to imagine a world where AI takes over and manages everything based on goals other than the well-being of humans. But we don’t need to resort to science fiction to see the dangers of collecting and analyzing massive amounts of data and relying on black-box algorithms to make critical decisions.
  6. Last but not least, there remains the big question of where we go from here. How powerful can these AI tools get, and what expectations of them are reasonable? What can they really do? There is always hype around big technology trends, but most of the time, the reality is different. Technologies may not be as reliable and magical as we imagine them, and they may not result in all the breakthroughs we hoped for or were afraid of. Alternatively, they may surprise us with rapid growth and leave us unprepared. Just think of the many computer hardware and software problems caused by a lack of foresight on how things might evolve, from 2-digit year storage to 1-byte ASCII codes and 4-byte IP addresses. There is a huge price to pay for not being prepared.


These topics are going to be discussed in detail, and probably for a long time. There won’t be easy answers, and many different, complementary, and contradictory practices will be tried. We can hope that we act in a timely manner and that a healthy, steady state will be reached. But for this to happen, there are points we should know and not forget. Here are some of them:

  • Before we can work with or against ChatGPT and other AI tools, we need to understand where they come from and how they work. The generative AI tools currently enjoying so much attention are not the only branch of AI. They are associated with the notion of big data and are based on Artificial Neural Networks (ANN) and the Deep Learning (DL) technique. An ANN “learns” by changing the weights of its neural connections after analyzing training data. As a result, the network adapts to perform a particular function through what is commonly called the “connectionist” approach to AI. In DL, the ANN uses a large number of layers of neurons connected to each other. Like other ANN and DL-based systems, tools such as DALL-E and ChatGPT process large amounts of data (images or text) to train, i.e., to adjust their neural connections. For example, ChatGPT uses GPT-3, a Large Language Model (LLM) that creates probability distributions for sequences of words by learning from its large database. GPT-3 has 175 billion parameters (connection weights) and, based on them, creates new text from a given one (the prompt). No matter how complicated this process is, what happens is essentially a guess at the next word, much like the auto-complete feature now common in email clients (a toy sketch of this idea appears after this list). GPT-3 does not build high-level knowledge or an overall concept, and (at least currently) has no clear mechanism for dealing with the correctness of what it generates. The goal of the model is to continue the stream of text with what seems likely. The same prompt may produce different outputs at different times, because several continuations have similar probabilities. In that sense, one may argue that ChatGPT is not really intelligent at all. There are other approaches to AI, such as symbol manipulation, rule-based systems, and genetic algorithms, that do aim at creating new high-level knowledge.

  • As a result of their basic structure and function, generative AI tools make mistakes more frequently than we may think. By mistake, here, I mean a result that is incorrect by the user’s criteria, not by the system’s own criteria. We should remember that DL models do not have much of the metadata and context that users have. For example, they do not necessarily distinguish between true and false or run a fact check on the text they are processing (although rules can potentially be added to do so). They do not even know the meaning of what they generate. Since their goal is to satisfy statistical criteria, they can produce results that users find clearly wrong. Of course, there is no doubt that later versions of GPT and other AI tools will become more powerful by incorporating different methods and more diverse training data, but here are some examples based on GPT-3:
    • AI image generators have no knowledge of the actual entities they are depicting. While the visual output can be impressive and intriguing, it can be geometrically or anatomically incorrect.
    • GPT-3 and similar LLM-based tools do not perform design, planning, or calculation to find a result. They guess based on samples. If you ask for a mathematical operation that is not likely to be found in their training data, they can get it wrong, while a simple, dumb calculator can do it right. If we ask ChatGPT “what is 5489 times 682?” or something like that, a good percentage of the time the result will be wrong. Long numbers and their combinations are unlikely to appear in the training data, and no actual calculation is performed, only text prediction (see the second sketch after this list).
    • If we ask ChatGPT to describe or explain a non-existing topic, it is likely to give made-up answers, even with made-up references. I tried this multiple times using new phrases that I had recently proposed in my research papers (such as “holistic multimedia interaction and design”) or completely made-up ones (such as “spiritual multimodal interaction (SMI) theory”). ChatGPT was unlikely to have seen these phrases, but it made up definitions, and when I asked who had invented them, it falsely named unrelated people and cited non-existing references. It did not check for correctness; it only created text that sounded right (i.e., similar to its training data).
    • When I asked ChatGPT (and its base, GPT) to write simple computer programs, it was successful. But when I asked it to use specific libraries or handle more complex tasks, it frequently created inefficient code, made mistakes, or even stopped responding partway through, before reaching its maximum allowed output length.

  • DL-based systems, in general, have limitations and face criticism:
    • Big data is not necessarily good data. For example, research has shown that while velocity (how fast we collect timely data) and variety (the diversity of the data) improve innovation efficiency in businesses, the volume of data does not have a significant impact. Poor data quality has been shown to lead to low data-utilization efficiency and to cause decision-making mistakes. While some quality issues can be fixed through various “clean-up” processes before data analysis, many researchers believe that “big Data has been especially troubling because of the ideological implications: the belief that if 'bigger is better', and if we can analyze large data sets, then the type of knowledge produced will be truer, more objective and accurate.”
    • Symbolic AI has been around as long as, if not longer than, connectionist AI. It is based on the notion of defining meaningful symbols and building complex structures on top of them to establish knowledge (a minimal rule-based sketch appears after this list). Despite the attention that DL is receiving these days, many experts believe that connectionist approaches to AI are fundamentally limited in their ability to offer general intelligence. As such, adding more data and network layers will not keep increasing performance, which will plateau at some point due to the structural limitations of the method. Many claims have been made about AI and other technologies during past periods of hype. In 2016, Geoffrey Hinton, a leading DL researcher, said, “If you work as a radiologist, you’re like the coyote that’s already over the edge of the cliff but hasn’t looked down.” He claimed that DL would replace humans for reading images from MRIs and CT scans within five years. Many thought the same about car drivers. None of these predictions has come true, and some may never come true until more hybrid approaches that bring symbolic and connectionist AI together are implemented.

  • When it comes to AI tools, education is essential. Not many people, including students and educators, know how these tools work and what their limitations are. Regardless of what we decide to do with AI tools, there will be those who use them, both in academia and in industry. This means that faculty members should know what they are dealing with, so they can decide whether and how to use these tools and how to manage students’ use of AI. Students, on the other hand, should be aware of what these tools do, how reliable they are, and what they are good for. In fact, they should learn how to use them, as they are likely to end up using them at work.

  • All the above points lead to the fact that generative AI tools can be useful. They will affect how some jobs are done and are likely to make some jobs obsolete. But it is not going to be as drastic as some people picture it. Many other technologies caused similar scares. Printed books were considered a danger to society and traditional storytelling. Photography was supposed to kill visual artists. Wikipedia, YouTube, and search engines were also called the end of education as we know it. They all ended up being useful tools. Some jobs changed, of course, but drawing, painting, and writing survived and got even stronger with better tools. It is highly unlikely that, in the near future, we will have AI programmers and writers who completely replace humans. Those tasks, in real-world cases, are still too complicated and sensitive to be left entirely to AI, especially DL-based AI. These tools can, however, help professionals with prototyping, research, idea generation, and preliminary design, so students need to learn how to use them for these purposes. AI tools can also perform the simpler, more fundamental tasks that are commonly given to first- and second-year students. We may decide to ban these tools in such classes, but we are better off letting go of pre-AI assignments and trying to embed AI into our fundamental tasks, or using a combination of assignments and assessment methods so that students can use AI for some and have to rely fully on their own skills for others. As we move forward, and just as with other technologies, our community will come up with innovative ideas for how to use AI tools in education and research, and how to assess students properly.
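
To make the “guess the next word” idea from the first point above more concrete, here is a minimal sketch in Python. It is in no way how GPT-3 is implemented; the tiny table of made-up probabilities simply stands in for the billions of learned connection weights, and sampling from it shows why the same prompt can produce different outputs on different runs.

```python
import random

# A toy "language model": for a few two-word contexts, it stores a made-up
# probability distribution over possible next words. A real LLM such as GPT-3
# learns billions of connection weights to produce such distributions for any
# context; this table only illustrates the idea.
NEXT_WORD_PROBS = {
    ("the", "cat"): {"sat": 0.5, "slept": 0.3, "meowed": 0.2},
    ("cat", "sat"): {"on": 0.8, "quietly": 0.2},
    ("sat", "on"): {"the": 0.9, "a": 0.1},
    ("on", "the"): {"mat": 0.6, "sofa": 0.4},
}

def generate(prompt_words, length=4):
    """Repeatedly guess the next word by sampling from the distribution for
    the last two words so far: no facts, no reasoning, no notion of truth,
    only "what is likely to come next"."""
    words = list(prompt_words)
    for _ in range(length):
        context = tuple(words[-2:])
        dist = NEXT_WORD_PROBS.get(context)
        if dist is None:
            break  # the toy model has never seen this context
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

# Because the next word is sampled, the same prompt can yield different
# continuations on different runs, just as described above.
print(generate(["the", "cat"]))
print(generate(["the", "cat"]))
```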
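
The second sketch illustrates the arithmetic point. A conventional program performs an actual, deterministic calculation and is therefore exact every time; a pure text predictor has no calculation step at all and can only produce digits that look plausible.

```python
# A simple, "dumb" calculator computes the answer deterministically;
# a text predictor only guesses a likely-looking string of digits.
# (5489 and 682 are the example numbers used in the text above.)
a, b = 5489, 682
print(f"{a} times {b} is {a * b}")  # 5489 times 682 is 3743498
```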
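
Finally, for the point about symbolic AI: the following minimal sketch (with assumed toy facts and rules, not taken from any specific system) shows the basic idea of representing knowledge as explicit symbols and deriving new facts by applying rules, so that every conclusion can be traced and checked, in contrast to the statistical guessing illustrated above.

```python
# Toy symbolic knowledge base: facts are (subject, relation, object) triples,
# and each rule derives a new triple from any fact matching its condition.
facts = {("Socrates", "is_a", "human")}
rules = [
    # if ?x is_a human, then ?x is mortal
    (("?x", "is_a", "human"), ("?x", "is", "mortal")),
]

def forward_chain(facts, rules):
    """Apply the rules repeatedly until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for condition, conclusion in rules:
            for subject, relation, obj in list(derived):
                # ?x matches any subject; relation and object must match exactly
                if (relation, obj) == condition[1:]:
                    new_fact = (subject,) + conclusion[1:]
                    if new_fact not in derived:
                        derived.add(new_fact)
                        changed = True
    return derived

print(forward_chain(facts, rules))
# {('Socrates', 'is_a', 'human'), ('Socrates', 'is', 'mortal')}
```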

