
A former employer bristled at the cost of Slack[1]. Why should we pay for Slack when we already own Teams? Everyone will just use Teams.

About a week later I was on 4 unofficial Slack workspaces.

They also mandated Teams for meetings, which was both terrible and particularly hostile to external invitees, so everyone just used free versions of Zoom instead.

[1] Less than 0.07% of my base salary



Yeah, the one killer thing Slack has is a free tier that is perfectly usable. Organizations can try to force everyone onto Teams, but in practice they can't stop people from just spinning up a free-tier Slack workspace for their team. At the end of the day it's just not worth fighting once everyone is determined to use Slack.


> Organizations can try and force everyone to use Teams but in practice they can't stop people from just spinning up a free-tier slack account for their team.

Unless you work for a large corporation or really any business that takes security seriously. Many of these businesses will quickly fire employees for conducting company business on non-approved applications or sites. Major security issues there.


Those companies are bad. There are ways to train users and secure material on Slack. Picking bad tools and firing users for trying to work around IT rules punishes innovation and results in worse employees.

I used to work for a company with 200k employees that banned any use of google apps. In 2009. Even working on a different company’s doc was banned. They threatened firing. It was ridiculous.

One day a partner was presenting from google drive. One of the IT execs said “you can’t use google drive” during the presentation. The partner asked what he should use and the IT guy said something about opening a ticket with AV and emailing the presentation.

The partner kept going and the IT guy said “no seriously, you’ll be fired.” The partner laughed, kept going, and said “I’ll risk it.”


Have you ever seen a news article talking about a data breach? They're posted here pretty regularly, and most big news outlets discuss the more important ones when they happen.

I only ask because your comment sounds like you don't think security is a concern for businesses. It is, so much so that worldwide, companies who have been breached spend close to $4 million cleaning up after the incident [1]. In the US it's actually more like $8 million to clean up after a breach. And if you read the report, data leaks (like customer info sitting in an unsecured Google account) account for half of all breaches.

[1] https://www.upguard.com/blog/cost-of-data-breach

I'd like you to understand how Google Docs poses a risk to businesses. Imagine you want to share a customer list with someone else in your company, but you don't want to use Word and OneDrive like IT has approved. So you put it in Google Docs. Now sitting in your personal and unsecured Google account is a list of your company's customers. Maybe some pricing info, maybe the email address of a contact at the company, that kind of stuff. Now your personal Google account is compromised. It happens all the time, but this time the hacker finds your company's info. Maybe they sell it to one of your competitors and your company loses business. Maybe they use it to spear-phish your customers. Maybe your company's customers get breached because you put their information in an unsecured location.

Now let's say this goes on long enough that other people are using Google Docs too. If prepend can get away with it, so can Anne from accounting. And Ben from HR. And now the SSNs and birthdates and home addresses of all of your company's employees are in the hands of some lucky hacker who guessed that Ben's Gmail password was "Benjamin123".

Does that warrant firing? Doxxing everyone at your company, putting your customers out of business, and putting your own company out of business, just because you didn't like the IT approved solutions? I'm sure there are easier ways to destroy your company and put 200k people out of a job, but I can't think of any off the top of my head.


Security is a big concern of mine. Real security, not fake stuff where an employee can initiate a breach by posting something to slack.

If I can screw up and post company financials, SSNs, or other sensitive material to Slack, then that is a big security risk. But Slack isn’t my company’s problem; the problem is a lack of training, data loss prevention, and access controls.

If company IT-approved solutions don’t meet business needs, then I think that’s a big security risk. To prevent people from posting inappropriate material, we need effective tools.

Should the partner not have given his presentation?

The solution, I think, is to identify docs with SSNs wherever they may be on network and major cloud vendors and redact them, or remove the files when they are uploaded.

In the partner’s case there was no sensitive data in the presentation. IT should know that and help users. Banning cloud docs in 2009 with no alternative creates more risk: rather than adapting to Google Drive and training users how to use it safely, you end up with a lot of shadow IT.

There is risk in these tools; my point isn’t that we must allow everything. But we have to support common use cases, and banning functionality that users need, and that competitors use, is actually riskier than supporting enough that users can do their jobs.


>not fake stuff where an employee can initiate a breach by posting something to slack

This alone tells me that your first sentence is not true.

Do you work in IT security? Have you used DLP tools? Have you seen the process to certify technologies for use and secure them when they are being used? Have you seen the cost of those tools, and how much time/manpower it takes to run them?

If Google Docs is not approved for company use, how does the security team identify SSNs in Google Docs? They don't. So the security team approves Google Docs and buys a product to monitor Google Docs. But now people want Dropbox, which means more cost. And Box, which means more cost. And OneDrive which means more cost. And Bobby only uploads his stuff to S3 buckets, which means more cost. All of these services cost money and all of the tools required to monitor them cost money too.

And while the security team is spending tens of millions of dollars per year (probably a low number actually) to monitor all these approved cloud storage services, Maria uploads a thousand W2 tax forms to her personal Gmail account and brings down the company anyway. Or if Gmail is blocked, she puts it on a USB drive and loses it when her car is broken into. Or if the security team locks down USB storage, she prints the documents and accidentally leaves the folder on the bus. Or if there's a DLP tool watching the printers... she shares it in a personal Slack channel so she can work on it at home.

Security is hard, and the mindset of users who say "you can't stop me" makes it almost impossible. The security team needs to be right 100% of the time, but an attacker only needs to be right once. The risky part isn't banning functionality, it's employees who refuse to follow the rules. And in any job, if you refuse to follow the rules, you get fired.


> Or if there's a DLP tool watching the printers... she shares it in a personal Slack channel so she can work on it at home.

And here we are. An idiotically simple use case that is not covered by IT.

If, instead of locking down the infrastructure, the IT people in your story focused on providing a comfortable way to work on a document from home, none of that would happen.

It's a repeat of the story with passwords. The security guys establish rules that your password must be some crazy string and rotated every month, and then they're surprised when those passwords end up written on sticky notes beneath the keyboard.

Try to be human-first and address use cases, and nobody will need a third-party tool to get on with their work.


>If Google Docs is not approved for company use, how does the security team identify SSNs in Google Docs?

Part of my argument is that Google Docs is popular and widely used by users, so IT should support it.

Then there’s training on how to use unsupported stuff (i.e. don’t email SSNs, don’t upload SSNs, etc.).

Then there’s DLP: the source file was a PowerPoint on the partner’s laptop. I don’t know what products existed back then, but today I have implemented DLP that flags a file for review immediately if it contains a Social Security number, presents visual cues to the user about its sensitivity, and blocks it from lots of different transfer methods. This helps users who don’t know a file is sensitive (most of the potential breaches I’ve encountered), though users can still get around it (screenshot, phone, etc.) if they are really determined.

My point is mostly that rules should be better, not more rigid. The best rules fit into a mental model and are easy to follow. “Just say no” style rules work about as well for security as they did for drugs and smoking.

Usability is really important, I think, in security.


> Those companies are bad. There’s ways to train users and secure material on slack. Picking bad tools and firing users for trying to work around IT rules punishes innovation and results in worse employees.

If they just capriciously fire someone for deciding to use Slack in their functional team away from everyone else, sure.

But if one's org has a security-mandated policy to use specific communication programs and services, which one presumably agrees to and signs a compliance document for as a condition of continued employment, and that person violates it anyway... I'm hard-pressed to call the company "bad" when it takes disciplinary or corrective action against that individual.

Such cavalierness (generally speaking) is how you get ants... I mean data breaches et al.


Slack isn’t some random company: they have enterprise practices, and there are third-party companies that do data management on Slack.

An enterprise can adapt to use tools and apply security to the tools used and needed.

Also there is really basic “don’t post sensitive data to the wrong places” training. I think there’s a difference between banning posting sensitive data and stopping teams from planning a meeting agenda.

A company that can’t stop an employee from posting sensitive data to Slack also won’t be able to stop them posting it in all sorts of other bad places.

I expect sensitive data to be protected in an org with something more effective than firing people for posting it to Slack.


So much of security entails making policies that people want to help enforce, rather than working around those policies to do their jobs.

If any substantial fraction of your workforce sees your security policies as an obstacle and IT as an adversary, your policies have already failed and it's just a matter of time before there's a problem.

If your policies make sense to everyone, you educate people on why they make sense, and they're sensible enough that people's first reaction to a policy breach is to genuinely understand how it might cause a security incident and to advocate better solutions person-to-person, then you're far less likely to have a security incident.

(That doesn't mean every person needs to be happy with every policy all the time; it means that people need to not systematically feel that IT is primarily an obstacle to their job.)

Companies where most people think IT is actively awesome (not just "not in the way" but actively good) are 1) rare, and 2) likely to be substantially more secure.


Unfortunately so much of security entails working around the technology you have, rather than implementing the best policies that make sense to everyone. At most companies, IT security is a cost center, so executives will only spend enough money to just pass the yearly audit and then stop. Which means a security operations center (SOC) that should be staffed with ten people gets by with just two. And making an upgrade to your SIEM's license got cut this year, which means you need to store fewer logs which means those two people have less data to work from. And the company is still using McAfee EPO because more modern endpoint solutions cost too much so malware is running rampant across the network.

In another reply I talked about my experience working as an infosec analyst at a company where we had to implement a policy against streaming media because our security monitoring tools could not handle the constant stream of data. That policy wasn't written because streaming media is inherently dangerous, it was written because the technology the IT security team had was not capable of monitoring the network when a bunch of people were streaming music on the corporate network.

Ultimately IT security is a racket of overpriced and outdated tools which forces CISOs to make decisions like that. If anyone is looking for an industry to disrupt, look at infosec. A startup could easily double the value of the software and still be able to cut the cost in half and they would just absolutely destroy the big vendors. And/or get a billion dollar exit when Cisco or Amazon buys you out.


Yeah. My security team would really have my hide if I tried that. The other day I had a visitor from security come in and scrub all my addons off of my Firefox installation because they hadn't been vetted through IT. They most certainly wouldn't allow shadow slack rooms.


A place I worked as an information security analyst about 8 or 9 years ago had a policy against various streaming technologies on the corporate network because it often overwhelmed or blinded the security monitoring technology we used back then. Spotify had just launched in the US right around that time and I had to spend a couple of days visiting various desks and asking the employees to stop streaming on the corporate network.

We were just completely and utterly blind as soon as two or three people started streaming.


> The other day I had a visitor from security come in and scrub all my addons off of my Firefox installation because they hadn't been vetted through IT.

They can't do that remotely?


I’m compressing time a little for my story. This was last year before the lockdowns. He probably could have done it remotely. But we never missed an opportunity to talk about nerd stuff. He was definitely a talker.


I'm not sure how much of a security issue there is. What's the threat model exactly? Terminated employees still having access because you don't have SSO? Sure I guess but if there is security sensitive data being posted in any chat app (approved or not) then you've got major problems.


I logged into an old Google account I hadn’t used since 2009 and realized I still had access to some documents from a company I had worked at back then - and the documents had been updated two years ago.

If I send a sensitive document in the approved chat app that uses SSO, then once I quit, I don’t have access to it anymore.


Right, that's a legitimate security concern but the point I was trying to make is that the problem is not so much people using unapproved chat apps but people putting sensitive information in chat apps at all (approved or not). These applications aren't generally designed to have the sorts of access controls required to manage sensitive information. If I post something sensitive in a public slack channel at my work (where we use Slack) then it is available to anyone in the company. I guess it is marginally worse to have it available to everyone in the company plus any former employee who wasn't removed from the slack team, but only marginally.


Exactly this. I’ve taken pictures of whiteboard drawings using my phone for years. But that is completely against my current company’s policy because pictures get synced to iCloud.

It makes perfect sense why something I could do at small no name companies would be banned at BigCorp.


Try that at my company and if you get caught, you would get fired so fast it would make your head spin.

Most large corporations are very concerned about which communication platforms you use for official business.



