I have always assumed that everything I type into an LLM is completely insecure and effectively public. I would never, in a million years, put anything sensitive into an LLM, and every account and password I use is unique.
I figure whatever I type will be used as training data for the next iteration, and I assume anything that goes in is retrievable to some degree.
If you use the API, they say they delete the data after 30 days and keep it during those 30 days only for security purposes (no training). It wouldn't be the first time a company lied about data use, though.
If you are using their app, though, everything goes into the pot.
Any information you provide to an AI that you don't control will be used against you, whether intentionally, by design, or unintentionally through incidents like breaches.