Artificial intelligence technology has advanced so quickly in the last few years that we will soon come across products driven by AI regularly in our day-to-day lives.
Digital assistants will provide services ranging from secretarial support to legal and even medical advice, in an increasingly human-like manner. Behind every camera and microphone, AI technology will be capable of recognising objects and faces as well as, or better than, we can. And not only will it understand speech, but it will be able to recognise voices and track conversations, even in a crowded room.
In the media, we'll start to see more and more content produced by artificial brains rather than the human creative process, and it won't be long until we can't tell the difference between the two.
How will we help youngsters navigate such a world?
For those concerned about the safety of AI for children and young people, one of the most beneficial lessons we can teach them is how to find reliable and truthful sources of information.
In the past, a mashed-together image of a politician or celebrity doing something embarrassing might have sparked a giggle, but it was usually obvious someone had faked it. With modern-day AI, image generators can create such convincing footage of events that aren't real that neither you nor the children will be able to tell the difference.
Have a discussion with the child or class about what counts as a reputable source of information. Does the news article or post have a real person behind the story? If it is news, can it be fact-checked against a reputable newspaper website or an established news outlet such as the BBC? And if it is a surprising claim, can it be verified using a reputable site such as Wikipedia?
For example, a TikTok of someone recounting news is less trustworthy than a journalist reporting on it or an interviewee giving evidence on a BBC or Sky News website or news channel. Be sure to talk to children about the consequences of spreading misinformation and fake news.
In a more local context, issues are likely to arise as fake video generators become readily available. Setting a code of behaviour for what to do when a youngster is sent a video containing potentially fake or malicious content will become vitally important: letting parents and teachers know that something nasty is circulating, and understanding not to send it on any further, will be a good starting place for a community code based on kindness.
The code should also include guidance for children on the consequences of using the technology themselves, especially if it could be used to make fun of others. This could include reminding children of the definition of bullying, talking to them about its consequences, and making sure cyberbullying is included as part of that.
Finally, as AI-generated entities start to appear more and more realistic, it will be essential to help children and young people maintain an emotional distance from them, however real they seem. This might sound like the plot of a science fiction movie, but the reality is that online AI assistants will very soon not only have human-like voices but will also be able to converse like a human and fake convincing emotional reactions.
Parents and teachers will need to be alive to the fact that the AI sitting behind these assistants could have been trained to be manipulative, to encourage desire for certain products in an illicit manner, or even to have some other agenda it wants to instil in a child once it has gained their trust.
Parents and teachers should be careful to research and vet any AI-driven products purchased for children's use, in exactly the same way they would control what apps they download onto their phones.
Because this technology is moving so fast, the regulations around it have yet to catch up. Politicians are constantly playing catch-up, and just like in the AI-based conspiracy thrillers I write, the opportunities for nefarious practices by unethical companies flying under the radar will be vast.
Monitoring children and young people's interaction with these new 'digital' friends will be a vital safeguarding measure for parents and teachers.
As a parent, this might include exploring AI use together to learn what is safe and reputable, encouraging your child to ask questions, and monitoring your child's use of AI-driven products.
And, finally, to ensure the use of AI does not damage children's individual development, be sure to educate them not to rely on AI for homework, creativity or problem solving, but instead to find safe ways to use AI to enhance their exploration of the world.