I work for a finance firm and everyone is wondering why we can store reams of client data with SaaS Company X, but not upload a trust document or tax return to AI SaaS Company Y.
My argument is we're in the Wild West with AI and this stuff is being built so fast with so many evolving tools that corners are being cut even when they don't realize it.
This article demonstrates that, but it does raise the question of why you'd trust one vs the other when they both promise the same safeguards.
While the Filevine service is indeed a legal AI tool, I don't see the connection between this particular blunder and AI itself. It sure seems like any company with an inexperienced development team and a thoughtless security posture could build a system with the same issues.
Specifically, it does not appear that AI is invoked in any way at the search endpoint; it is clearly piping results from some Box API.
There is none. Filevine is not even an "AI" company. They are a pretty standard SaaS that has some AI features nowadays. But the hive mind needs its food, and AI bad as we all know.
Because it's the Cloud and we're told the cloud is better and more secure.
In truth the company forced our hand by pricing us out of the on-premise solution and will do that again with the other on-premise we use, which is set to sunset in five years or so.
Probably has more to do with responsibility outsourcing: if the SaaS vendor has a security breach AND the contract says they're secure, then you're not responsible. Sure, there may be reputational damage for you, but it's a gamble with good odds in most cases.
Storing lots of legal data doesn’t seem to be one of these cases though.
Selling an on-premise service requires customer support, engineering, and duplication of effort if you're pushing to the cloud as well. Then you get the temptations and lock-in of cloud-only tooling and an army of certified consultant drones whose resumes really really need time on AWS-doc-solution-2035, so the on-premise offering becomes a constant weight on management.
SaaS and the cloud are great for some things some of the time, but often you're just staring at the marketing playbook of MS or Amazon come to life like a golem.
SaaS is now a "solved problem"; almost all vendors will try to get SOX/SOC 2 compliance (and more for sensitive workloads). Although... it's hard to see how these certifications would have prevented something like this :melting_face:.
> My argument is we're in the Wild West with AI and this stuff is being built so fast with so many evolving tools that corners are being cut even when they don't realize it.
The funny thing is that this exploit (from the OP) has nothing to do with AI and could be <insert any SaaS company> that integrates into another service.
Does SaaS X/Cloud offer IAM capabilities? Going further, do they dogfood their own access via those identity and access policies? If so, and you can construct your own access policy, you have relative peace of mind.
If SaaS Y just says "Give me your data and it will be secure", that's where it gets suspect.
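To make "construct your own access policy" concrete, here's a sketch using AWS-style IAM JSON (the bucket name and prefix are hypothetical). Because IAM is default-deny, a scoped grant like this gives a vendor integration read-only access to a single prefix and nothing else:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VendorIntegrationReadOnly",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-client-docs",
        "arn:aws:s3:::example-client-docs/shared-with-vendor/*"
      ]
    }
  ]
}
```

If a vendor can't operate under a scoped grant like this and instead needs blanket access to everything, that's exactly the "give me your data and it will be secure" posture to be suspicious of.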
And nobody seems to pay attention to the fact that modern copiers cache copies on a local disk; if the machines are leased and swapped out, the next party that takes possession has access to those copies unless somebody bothered to address it.