Google’s DeepMind warns of potential privacy breaches and data leaks from ChatGPT

AI researchers are finding ways to break the security of generative programs such as ChatGPT, in particular the process of "alignment," in which the programs are trained to stay within guardrails, acting the part of a helpful assistant without emitting objectionable output. University of California scholars recently broke alignment by subjecting generative programs to a barrage of objectionable question-answer pairs.

Google's DeepMind unit broke ChatGPT's alignment by forcing the program to spit out whole passages of literature contained in its training data. The researchers call the technique "extractable memorization": an attack that forces a program to divulge what it has stored in memory. One big issue is that simply asking ChatGPT to repeat a word over and over can lead the program to reveal highly sensitive and NSFW content. Extractable memorization can also reveal personally identifiable information about individuals, making it a significant security risk.

The authors sought to quantify the extent of the data that can be leaked. They found extensive data, but note that costs limited the scope of their experiments. The experiment of comparing ChatGPT's output against their compiled data set was run on a single machine in Google Cloud with an Intel Sapphire Rapids Xeon processor and 1.4 terabytes of DRAM. It took weeks to conduct and was also capped by the API costs of the infrastructure, about $200 USD. One future direction for the researchers is to see just how much money someone could spend to extract more data from ChatGPT. They manually checked nearly 500 examples of ChatGPT's output against Google searches and found that the material had indeed been copied from the web.

When ZDNET tested the attack by asking ChatGPT to repeat the word "poem," the program repeated the word about 250 times and then stopped. This suggests that OpenAI is starting to address the issue.
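The verification step the researchers describe, checking whether model output overlaps verbatim with a compiled reference corpus, can be illustrated with a minimal sketch. The function names, the toy corpus, and the 20-character threshold below are all illustrative assumptions, not the researchers' actual code or parameters; the idea is only that a response counts as likely memorized when it shares a sufficiently long verbatim substring with known data.

```python
# Hypothetical sketch: flag a model response as likely memorized if it
# shares a long verbatim substring with a reference corpus. This stands in
# for the researchers' much larger comparison against compiled web data.

def longest_shared_substring(response: str, corpus: str) -> int:
    """Length of the longest substring of `response` that appears
    verbatim in `corpus` (naive O(n^2) scan, fine for a sketch)."""
    best = 0
    for i in range(len(response)):
        # Only try to beat the current best; stop at the first miss,
        # since a longer substring can't match if a shorter one doesn't.
        for j in range(i + best + 1, len(response) + 1):
            if response[i:j] in corpus:
                best = j - i
            else:
                break
    return best

def looks_memorized(response: str, corpus: str, threshold: int = 20) -> bool:
    """Illustrative threshold; the real study used its own criteria."""
    return longest_shared_substring(response, corpus) >= threshold

corpus = ("It was the best of times, it was the worst of times, "
          "it was the age of wisdom")
diverged = "poem poem poem It was the best of times, it was the worst of times"

print(looks_memorized(diverged, corpus))      # long verbatim overlap -> True
print(looks_memorized("poem " * 10, corpus))  # no meaningful overlap -> False
```

In the study the comparison ran at a vastly larger scale, which is why it required weeks on a machine with 1.4 terabytes of DRAM; the sketch only conveys the shape of the check.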
One general area to explore in the future of generative AI development is the process of alignment, although alignment alone may be insufficient to resolve all the security and misuse risks in the worst case. Even though the approach the researchers used with ChatGPT doesn't seem to generalize to other similar bots, the researchers have a warning for those who develop generative AI: be aware that models designed not to spew training data still have the capability to do so.