Based on new UNESCO guidance, here are key things we can do and discuss at home to help students use, rather than over-rely on, generative artificial intelligence tools.
UNESCO's report ‘Guidance for generative AI in education and research’ is a resource for governments and education sectors to facilitate the responsible use of AI and generative AI technologies within education systems, and consequently to inform the everyday activities of teachers and students.
There are some great takeaways not only for teachers but for anyone involved in the education of children and young people, and the report serves as a useful digital literacy refresher for all involved.
Issues surrounding the use of AI for education
The report outlines risks and issues associated with using generative AI, and AI in general, that serve as a good reference point for explaining why we need to exercise caution when using generative AI. See page 14 onwards to explore further.
Support development of intellectual skills
I’ve been thinking about this issue as it’s important to everyone and posted about it last week (What is lost when we delegate writing to AI).
As more people use generative AI to support their writing or other creative activities, they might unintentionally compromise the development of their intellectual and creative skills and efficacy, which I believe are central to wellbeing.
Things we can do
The suggested principles to support human agency are worthwhile reference points for considering how schools implement generative AI, as well as actions we can take when educating children at home (refer to pages 24–25):
Inform learners about the types of data that GenAI may collect from them, how these data are used, and the impact it may have on their education and wider lives.
Protect learners’ intrinsic motivation to grow and learn as individuals. Reinforce human autonomy over their own approaches to research, teaching, and learning in the context of using increasingly sophisticated GenAI systems.
Prevent the use of GenAI where it would deprive learners of opportunities to develop cognitive abilities and social skills through observations of the real world, empirical practices such as experiments, discussions with other humans, and independent logical reasoning.
Ensure sufficient social interaction and appropriate exposure to creative output produced by humans and prevent learners becoming addicted to or dependent on GenAI.
Encourage learners to critique and question the methodologies behind the AI systems, the accuracy of the output content, and the norms or pedagogies that they may impose.
Utilising GenAI with scrutiny
In most cases it will take several tries to obtain a satisfactory output from a generative AI application. Learners may not have the expertise or patience for the iterative process needed (page 12):
“While GenAI might help teachers and researchers generate useful text and other outputs to support their work, it is not necessarily a straightforward process. It can take multiple iterations of a prompt before the desired output is achieved. A worry is that young learners, because they are by definition less expert than teachers, might unknowingly and without critical engagement accept GenAI output that is superficial, inaccurate or even harmful.”
Things we can do
Help children critique the responses provided by GenAI (page 27):
Understand the role of GenAI as a fast but frequently unreliable source of information. While some plugins and tools are designed to support access to validated and up-to-date information, there is little robust evidence as yet that these are effective.
Recognize that GenAI typically only repeats established or standard opinions, thus undermining plural and minority opinions and plural expressions of ideas.
Provide opportunities to learn from varied sources of information [TG: books!], trial-and-error, empirical experiments, and observations of the real world.
As a researcher, I believe the key to utilising generative AI (including improving prompts) is to do so with scrutiny. This is something I’ve discussed several times with my children; to treat it like a web search, including the process of adding keywords and context to get better answers.
No single source should ever be regarded as a definitive source of knowledge.
Evaluating sources with the ‘W’ questions is a great basis for doing this:
Who created the text or image?
What is their point of view?
When was it created?
Where was the information it is based on sourced from?
Why did someone create this?
Considering the rise of deepfakes and AI-generated content, we also need to add How to this list. Techniques are being developed to watermark AI-generated images and text; however, additional diligence will be needed, as bad actors will no doubt forge or evade these watermarks. Checking for signs that a source is authentic if it claims to be ‘real’, and verifying its validity with trusted sources, should be practised.
Minimum age of 13
The report recommends that countries should carefully consider – and publicly deliberate – the appropriate age threshold for independent conversations with GenAI platforms. It recommends that this minimum threshold should be thirteen years of age.
Things we can do
We all know that each child’s character is different and so an ‘appropriate’ age for one, may not be for another. Naturally, it’s your decision when and how these tools should apply.
I have been using ChatGPT with my children to supplement homework activities, find entertainment options and encourage them to chat in the second language they are learning.
It’s fun to use image and text generation applications with kids and see what they prompt when given free rein. That’s a great setting for discussions about proper use and the implications of using generative AI rather than doing the work yourself. For me, it’s always top of mind that unless a service is explicitly built for education, the target for generative AI is enabling business efficiency and increasing worker productivity, rather than enriching learning and development.
Full report: