The capabilities of large language models such as ChatGPT have transformed the way many people work. A recent incident, however, shows that using these tools carries risks. Professor Marcel Bucher from the University of Cologne reports that a simple settings change cost him two years of academic work: grant applications, teaching materials, and publication drafts suddenly disappeared.
Bucher apparently intended to deactivate the option that allows his data to be used for model training. According to his account, this action deleted his entire chat history. In an article published in Nature, he describes trying various methods to regain access to the data and chats, all without success; contacting OpenAI proved equally fruitless. By his account, the data was permanently deleted and could not be recovered. OpenAI cited the principle of "Privacy by Design," under which data is deleted without a trace. For Bucher, the conclusion is clear: "If a single click can irrevocably delete years of work, ChatGPT cannot, in my opinion and on the basis of my experience, be considered completely safe for professional use."
However, ChatGPT does feature a backup function: a straightforward way to download all chats and data. The "Export data" option can be found in the settings under "Data controls." Depending on how much data is stored in ChatGPT, compiling the archive can take anywhere from a few moments to several hours; once it is ready, a download link to a ZIP file containing all stored information is sent via email. The link remains valid for 24 hours after the email has been sent. Backups are an integral part of working with computers and should not be neglected when using AI tools either.
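For anyone who exports regularly, the downloaded archive can also be checked programmatically to confirm the backup is complete. The following Python sketch assumes the export ZIP contains a conversations.json file with a title and a Unix create_time per conversation, as past exports have; the file name, layout, and field names are assumptions, and OpenAI may change the format at any time.

```python
import json
import zipfile
from datetime import datetime, timezone

# Path to the archive downloaded from the emailed link
# (hypothetical file name; substitute your actual download).
ARCHIVE = "chatgpt-export.zip"

with zipfile.ZipFile(ARCHIVE) as zf:
    # Assumption: past exports have placed a conversations.json
    # file at the archive root; adjust if the format has changed.
    with zf.open("conversations.json") as f:
        conversations = json.load(f)

print(f"{len(conversations)} conversations in archive")
for conv in conversations:
    # Assumption: each entry carries a title and a Unix-epoch
    # create_time; missing fields fall back to safe defaults.
    created = datetime.fromtimestamp(conv.get("create_time") or 0, tz=timezone.utc)
    print(f"{created:%Y-%m-%d}  {conv.get('title') or '(untitled)'}")
```

Listing the conversations and their dates this way makes it easy to spot gaps between exports, so a single misclick costs at most the work done since the last backup.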
The scenario described could not be replicated in a recent self-test. When data sharing for training purposes was deactivated, existing chats remained untouched and accessible, and selecting the option to delete all chats triggered an explicit warning message requiring confirmation. Because the data loss described in Nature occurred back in August, OpenAI may have since adjusted the user interface and safeguards to prevent accidental deletion.






