Think.
The pressure of your first blog post for a new year can be overwhelming. Where does one start? What can one say about AI given the rapid advances we experienced in 2023? Everyone is making predictions - what are mine?
In the face of this pressure, I decided to write about what someone else was thinking about AI instead. As everyone was making their 2024 AI predictions, I stumbled on a post which seemed to make a lot of sense given what we have seen over the last 12 months.
Martin Signoux, a Public Policy Manager at Meta, posted his eight predictions for AI in 2024, which were endorsed by Yann LeCun, VP and Chief AI Scientist at Meta.
Here are his predictions verbatim (click here for the original post):
AI Smart Glasses become a thing
ChatGPT won’t be to AI assistant what Google is to search
So long LLMs, hello LMMs (Large Multimodal Models)
No significant breakthroughs, but improvements on all fronts
Small is beautiful (the reference here is to SMLs - Small Language Models)
An open [source] model beats GPT-4, yet the open v closed debate progressively fades
Benchmarking remains a conundrum
Existential risks won’t be much discussed compared to existing risks
It goes without saying that these are only predictions, but let’s assume they are correct. What would their implications be for education, and what could we ‘try’ in 2024 to ‘transform’ the educational experience of our students for the better? I think that predictions 1, 6 and 7 are the least relevant (I know that AI glasses sound really cool, but I am very doubtful these will have a general impact on education in the immediate future - what do you think?), so let’s focus on the others.
Prediction 2: ChatGPT won’t be to AI assistant what Google is to search
There is no denying that at present OpenAI’s GPT-4 is the best all-purpose LLM (although it’s actually an LMM - see below) on the market, making it an incredibly powerful AI assistant for various tasks. However, there are many other players on the market. For example, both Perplexity and Claude are becoming increasingly powerful, with the latter boasting the largest context window of any LLM. People also seem to be drawn to different LLMs for different purposes. Midjourney (subscription-based), for example, has been a staple for image-generating enthusiasts since the release of V6, despite DALL-E being free with Copilot. Unlike Google, and despite OpenAI’s fanfare and status in the market, consumers seem to enjoy having a choice of which model to use rather than being limited to one.
Prediction 3: So long LLMs, hello LMMs
There is no doubt that multimodal models are the goal of the large companies. Every few weeks there is an announcement of increased multimodal capacity, with Perplexity being the most recent. The implication of these models is that students will be able to produce not only text but almost anything with a prompt of just a few words. And access to these tools is increasing for users of the free versions of the models. Copilot has now launched a free Android application which can generate images with DALL-E, use computer vision and interact with the user through voice.
The opportunities, though, are endless. With LMMs, what is possible for both teachers and students is limitless, particularly as these models improve.
For a comprehensive overview of LMMs, read this brilliant piece by Chip Huyen.
Prediction 4: No significant breakthroughs, but improvements on all fronts
Could this mean that we can all catch our collective breath? I doubt it. But perhaps this would give us the opportunity to bed in systems and policies without fear of them becoming redundant overnight. There is a better general understanding of what these companies are trying to achieve, and for those of us who have been tracking developments closely, I am doubtful that any new development in 2024 will majorly surprise us. If you consider the above prediction, as teachers we are now aware of the existence of these models and, over the course of the year, we can begin to understand their potential. The fact that they will improve significantly over the next 12 months is not quite the same as ChatGPT suddenly landing at everyone’s fingertips in November 2022.
Prediction 5: Small is beautiful
According to many AI professionals, small models will proliferate in 2024, most with at least the capability of GPT-3.5. There is an expectation that these models will arrive in the various app stores. More important, though, is that some of these models will run on mobile devices without the need for an internet connection. The implications of this will be profound for education. As Ethan Mollick posted this week, one could hypothetically store a Small Language Model (Mistral 7B) and all of the text on Wikipedia on a 32 GB Apple Watch.
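The arithmetic behind that claim is worth seeing. As a rough sketch (the 7-billion parameter count is Mistral 7B’s; the 4-bit quantisation level is my assumption, as it is a common choice for running models on small devices):

```python
# Back-of-envelope: can a quantised 7B model fit on a 32 GB device?
# Approximate weight size (bytes) = parameter count x bits per parameter / 8
def model_size_gb(params: float, bits_per_param: int) -> float:
    """Approximate on-disk size of model weights in gigabytes (10^9 bytes)."""
    return params * bits_per_param / 8 / 1e9

full_precision = model_size_gb(7e9, 16)  # 16-bit weights: ~14 GB
quantised = model_size_gb(7e9, 4)        # 4-bit quantised: ~3.5 GB

print(f"7B model at 16-bit: {full_precision:.1f} GB")
print(f"7B model at 4-bit:  {quantised:.1f} GB")
```

At 4-bit quantisation a 7B model occupies roughly 3.5 GB, which, together with a compressed text dump of English Wikipedia (on the order of 20 GB), fits comfortably within 32 GB of storage.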
Once a few students become aware of how to harness this technology, the knowledge will spread on social media and adoption will become widespread.
How will schools ensure they have a handle on this?
Prediction 8: Existential risks won’t be much discussed compared to existing risks
Much of the ethical conversation about AI in 2023 focused on existential issues. These issues are important and I believe they still need to be thought about deeply over the next few years; even if the probability of existential threats is low, they need to be taken seriously.
However, these ethical discussions have certainly been a major limiting factor in the adoption of AI in schools. With a pivot towards existing risks, particularly those that surface through widespread use of the technology in schools, teachers will be forced to grapple with the technology and design ways to either harness its power or prevent its use.
Try.
In the face of these predictions, what can educators ‘try’ to be prepared for the changes that are afoot?
Prediction 2:
This prediction highlights the importance of developing skills across a few LLMs, gaining a deeper understanding of their unique abilities and nuances. I recently said in a blog post that I think schools should choose a model and drive its usage so as not to overwhelm staff members. I maintain that this is still the best approach in the short term; however, the medium- and longer-term goal needs to be ensuring that a skillset is developed across several models.
Other Options:
Have older students compare the output of different models and assess the strengths and weaknesses
Generate outputs from different models and have students critically analyse these.
Establish a school-wide process for corroborating information across various models - involve the students in this process.
Prediction 3:
In her article, Chip Huyen mentions that she is very excited about the power of LMMs to help visually impaired people navigate the internet and the real world. In the classroom, I think some of the most powerful use cases of LMMs would be for SEND students. AI-generated videos, images or speech could provide scaffolds for various tasks. I have written extensively about text-to-speech resources here, and I have also written about using chatbots for social stories for those students who may struggle with social or emotional regulation.
Other Options:
Discuss ethical issues linked to LMMs, such as copyright, misinformation and disinformation. This is particularly relevant with the recent case of The New York Times suing OpenAI, and also examples of Midjourney’s output looking very similar to original works - see below.
Analyse how bias arises in AI generated images. For further ideas on this point see edition #1 of this blog.
Prediction 4:
Productivity will increase hugely for those who have been using these tools for a while now. With a strong foundation in even one LLM, and capability increasing throughout the year, people will genuinely gain time back. With increased capacity and ability in all models, the aim may be to automate low value tasks and augment high value tasks with the support of AI.
Within the context of relative stability what else should we focus on?
Other options:
Pilot various AI tools and publish case studies about their effectiveness in the classroom
Have a program of incorporating student voice in decision making. Meet with students regularly to discuss the ways in which they are using the technology in their learning. Open channels of communication are vital.
Think of ways to document successes and failures - publish this for all staff.
Prediction 5:
Assuming an abundance of small language models, I think there will be a large amount of hype and, quite frankly, various models that are not very good. In education, we are likely to be approached by many companies touting the next, best model. Discernment and patience are what is needed.
I feel that using models from more trusted companies that integrate with the school’s ecosystem should be the goal. This will be difficult to manage, particularly with many students using their own devices. However, now that Microsoft’s Copilot has launched, there is a great opportunity to leverage powerful AI technology in a safer way. It certainly won’t be perfect, but given the number of teachers who had to rapidly develop their tech skills during Covid, many using MS Teams and the Office 365 suite, I think this offers a good opportunity for widespread adoption within a school.
Other options:
Begin experimenting with Copilot Studio, which enables the creation of AI agents trained on data of your choosing. They can even link to SharePoint documentation.
Train yourself using Microsoft’s various training programs - they are short, engaging and pretty useful.
Be aware of smaller language models being used by students. Again, open dialogue is what is needed.
Prediction 8:
There is just so much we can begin to try, both in and out of the classroom, when it comes to AI ethics. In a few of the examples above, there are already various ethical issues that we need to consider and ideas you could try in a school.
As ethical issues begin to surface with wider adoption of the technology, schools must be agile and flexible enough to deal with them in the moment. Adopting an ethical risk framework when an issue surfaces could be very useful. An example I have come across is from the National Institute of Standards and Technology in the US, whose AI Risk Management Framework has a simple model: govern, map, measure, manage. I think adopting a similar, simple model for addressing and managing ethical issues would be very effective in a school.
Other options:
Debate AI ethics topics like transparency, justice, control
Analyse real cases of AI harm due to biases and lack of oversight
Relate discussions to student concerns like privacy and automation
Consider both societal benefits and risks of expanding AI capabilities
Transform.
Thinking and writing about someone else’s predictions has given me the confidence to make a prediction of my own for 2024:
AI will transform many aspects of schooling in 2024 for those on the ‘right’ side of the digital divide.
While AI holds tremendous potential to transform education in the years ahead, realising this potential will require overcoming existing challenges and implementing these technologies responsibly.
If the predictions outlined earlier come true, 2024 could see more widespread classroom adoption of AI tools like chatbots, LMMs, and small on-device models. However, uneven access to technology and internet connectivity will likely persist, preventing equal access.
For schools that do adopt new AI systems, simply integrating the latest tools will not magically transform them. Staff will need extensive training to employ these technologies effectively and ethically. AI literacy is needed to teach students how to evaluate AI-generated information critically. Strong oversight is essential to ensure privacy and data security, and to prevent over-reliance on automation.
Ultimately, AI on its own is not a silver bullet. While holding much promise, it can only amplify — not replace — the role of excellent teachers.
I wish you a wonderful 2024!