AI and ChatGPT are Amazing.
But are they a security risk?
A long time ago, before the before times, I earned my degree in computer programming with a major in Artificial Intelligence. AI has been a topic of previous TRINUS newsletters and is the current technology buzzword, specifically ChatGPT, which uses an AI language model to interface with users in a natural way. In other words, it appears to actually understand what a user’s asking for, for better or worse.
What do I mean by that? Well, certainly you’ve heard the many stories about how it’s been used to write a variety of documents, blog posts, and virtually any other kind of communication. But it’s not without its downsides. In a moment that can only be described as a public-relations seizure, Vanderbilt University used ChatGPT to write a public response to the mass shooting at Michigan State University. To make the faux pas even worse, the message ended with a statement declaring it had been written by an AI. Considering the nature of the message, not a good look.
It’s worth considering how things like ChatGPT and other AI tools could be used in your day-to-day work. Any organization considering creating or using an AI-based tool may want to review the rules set down in Canada’s Artificial Intelligence and Data Act (yup, we actually have one of those). If nothing else, it’s worth having a rule on the use of ChatGPT in various roles in your organization.
As I was writing this newsletter, I was trying to find a way to summarize things and come up with an ending. Then I got a brilliant idea and figured a fun thing to do might be to show you the sort of things ChatGPT is capable of.
ME: Write a paragraph about the possible security and legal risks for using ChatGPT for public communication.
ChatGPT: As an AI language model, ChatGPT is designed to generate human-like responses based on the input it receives. However, using ChatGPT for public communication can pose certain security and legal risks. Firstly, as a machine learning model, ChatGPT can only generate responses based on the data it has been trained on, which means that it may not always provide accurate or appropriate responses. This could lead to misunderstandings or even unintentional harm. Additionally, using ChatGPT for public communication could raise issues related to privacy and data protection. Users may share sensitive or personal information with ChatGPT without realizing the potential risks of doing so. Finally, there may be legal implications associated with using ChatGPT for public communication. For example, there may be regulations around the use of AI models for certain purposes, and users may inadvertently violate these regulations by using ChatGPT in certain ways.
Not bad, ChatGPT. Not bad.
In light of AI-powered text models now writing blog posts for the people who normally write them, for this newsletter’s quote I chose one of Shakespeare’s most famous: “To be, or not to be, that is the question.”
If you’d like help integrating an AI-powered tool into your organization, or would just like to discuss the technical and legal implications of it, contact one of TRINUS’s cybersecurity experts and we’ll be happy to talk you through it.
Be kind, courtesy of your friendly neighbourhood cyber-man.