Recent Research Paper Notes Security Issues of ChatGPT and Other Language Models

In early March a paper was published with details of a potential vulnerability that can spread across getAI systems. For example, a mailing assistant service could be set to send out something spammy.

The interesting issue with this type of vulnerability is that the traditional coding in C/C++/Java, Python processes have had decades to establish ways to eliminate vulnerabilities of certain types, memory address randomization, etc. Android and iOS newer versions have more features to keep apps from contaminating other processes or using gps or other features without your acceptance. And similarly in the Linux world, Snaps with similar containment have become more popular recently.

The new chatbot models that many companies are using are showing up in everything from support chats to Amazon web services (with some notable issues) and we are going to have to find some new tools to protect source documents in a similar way as security experts have made measures to protect memory and overflow protections and sql injections, in the traditional software development methodology.

Previous Newsworthy Leaks

In 2023, Bing leaked its source instructions, according to Arstechnica. It also said some odd responses.

Also in 2023, Samsung reportedly leaked information of a top secret nature into ChatGPT – remember that their default policy is to use the information you query it with.

Leave a Reply

Your email address will not be published. Required fields are marked *

77 ÷ eleven =