No one cares about security because there is no consequence for getting it wrong. Look at all the major breaches ever, and look specifically at the stock prices of those companies. They took small, short-term hits at most.
Worst case, the CISO gets fired, and then they all play musical chairs and end up in new roles.
Heck, even Lastpass, ostensibly a security company, doesn't seem particularly affected by their breach.
My point is that, especially with ChatGPT, which can reasonably 10x your productivity, most people will be willing to take the risk.
I don't even agree with the premise that a tool which vastly increases your productivity is a security risk in the grand scheme of things, since, you know, you can just reallocate the time you were spending on writing boilerplate towards securing your systems.
Endpoint security software is a security risk too.
Yikes! Aside from the fact that nobody will take the "spare" time saved and spend it on internal security, once you have a lot of your sensitive data out in a third-party system, you have lost control.
> nobody will take the "spare" time saved and spend it on internal security
Well, I have, so speak for yourself.
> once you have a lot of your sensitive data outside in a third party system you have lost control.
Every time you search “how do I do this with this software stack” in Google you are leaking data which is nominally sensitive to a third-party system. Every time a technical staff member goes to stackoverflow without obfuscating their IP address they are leaking sensitive data about what software stacks a company uses. Let’s not even get into people posting their resumes on LinkedIn, or cloud services in general.
The goal of security is not to stop all data leakage, it’s to stop the leakage of certain high-value data, and LLMs can aid in this if you use them intelligently: avoid leaking any high-value data to them and feed them only low-value data. Attackers are not going to have any qualms about using LLMs both to come up with attacks and as part of attacks. Many, many people are in situations where using LLMs to advance in security maturity as quickly as possible is more than worth the risk incurred. Don’t win the battle, win the war.
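To make "only feed them low-value data" concrete, here is a minimal sketch of the kind of guardrail I mean: a plain Python scrubber that strips obvious secrets from a prompt before it leaves your network. The patterns and the internal hostname are made up for illustration; you'd tune them to whatever actually counts as high-value in your environment.

    import re

    # Rough, illustrative patterns for things you never want to paste into a third-party LLM.
    SECRET_PATTERNS = [
        # Private key blocks
        re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----.*?-----END [A-Z ]*PRIVATE KEY-----", re.DOTALL),
        # AWS access key IDs
        re.compile(r"AKIA[0-9A-Z]{16}"),
        # Generic key=value credentials
        re.compile(r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*\S+"),
        # Hypothetical internal hostnames
        re.compile(r"\b[\w.-]+\.internal\.example\.com\b"),
    ]

    def redact(prompt: str) -> str:
        """Replace anything matching a secret pattern before the prompt is sent anywhere."""
        for pattern in SECRET_PATTERNS:
            prompt = pattern.sub("[REDACTED]", prompt)
        return prompt

    raw = "Why does this config break? api_key=sk-live-123 on db01.internal.example.com"
    print(redact(raw))
    # -> Why does this config break? [REDACTED] on [REDACTED]

The question you're asking ("why does this config break?") is low-value; the credentials and internal hostnames are the high-value part, and they never need to leave the building for the LLM to be useful.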
Ok, fair point. You are the one person who saved time and used it for security. But most people won't do that, as I'm fairly sure you will agree.
There is obviously a very big gap between searching for information and providing your internal code or documents to a third party. One reveals only your search terms; the other gives an attacker your actual proprietary information.
LLMs are ground breaking and have enormous potential. I am not saying that they should not be used. Only that there are huge security issues when employees post confidential information to third parties.