> Google is running the program to reduce the risk of cyberattacks, according to internal materials. “Googlers are frequent targets of attacks,” one internal description viewed by CNBC stated.
Fair. The higher the profile you have, the worse and the more numerous the attacks.
I found after creating a LinkedIn account, attacks went up on my account 12x. Again, caveat emptor... but this makes a great deal of sense in who to target. When you willingly put "I am Gee Golly Whillikers, Sr Systems Engineer of Really Important Systems that 10m people use", well... you're self-selecting who to hit.
> The company has also in recent months been striving harder to contain leaks.
Wait WHAT? That's what we cybersec folks call an "Insider Threat". "Leaks" are done by insider employees wishing to harm the org they work for, up to leaking classified data, or secret corporate data.
This is a WHOLE DIFFERENT REALM of attacks, with its own appropriate defenses. And this is how you get SCIFs: https://en.wikipedia.org/wiki/Sensitive_compartmented_inform...
> "Leaks" are done by insider employees wishing to harm the org they work for
This isn’t always the main motivation, and in my experience this isn’t even usually the main motivation.
(But if I’m wrong, please tell me. Don’t downvote me for my opinion/life experience. I have definitely seen the whole “f the company” scenario a number of times. But usually it’s “f a particular person” or it’s a “this violates my principles” kind of thing. Companies are legal entities but not that much psychologically, in my opinion.)
It’s an “oops, I’m drunk or forgetful and said or did more than I should have” lol.
I forgot to add that one.
> Companies are legal entities but not that much psychologically.
This is my own statement that I can’t edit. I understand that many people think otherwise depending on the context. I do also. In this particular context of “insider threat” (i.e. employee leaks), I still believe the main cause isn’t to “harm a company”.
> I found after creating a LinkedIn account, attacks went up on my account 12x.
I'm curious to know more about what you're doing to quantify this as an individual - I think I have a fairly sophisticated personal security posture and have implemented various business infosec measures professionally, but I'm not sure I have a great way of measuring how many attacks are targeted at me day to day, beyond a sort of vague sense of "this month I'm noticing more {frequent|targeted|sophisticated} attention".
I had a suspicion that phishers were using LinkedIn to build an employee directory to attack at work (using names and guessing our email address pattern), so I created a honeypot to test that: a company email along with a LinkedIn profile for a fake employee. Sure enough the honeypot mailbox started to get targeted by the same spear phishing emails our employees get pretty soon after.
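If you want to reproduce that test, the directory-building step the phishers do is trivial. A minimal sketch in Python (the name, address patterns, and domain here are all made up for illustration):

    # Derive candidate mailboxes from public names, given a guessed
    # address pattern. Everything here is hypothetical/illustrative.
    PATTERNS = [
        "{first}.{last}@{domain}",
        "{f}{last}@{domain}",
        "{first}@{domain}",
    ]

    def candidates(full_name, domain):
        parts = full_name.lower().split()
        first, last = parts[0], parts[-1]
        for p in PATTERNS:
            yield p.format(first=first, last=last, f=first[0], domain=domain)

    for addr in candidates("Jane Doe", "example.com"):
        print(addr)  # jane.doe@example.com, jdoe@example.com, jane@example.com

Seed a honeypot name into LinkedIn, watch which candidate mailbox starts getting hit, and you learn both that they're scraping and which pattern they guessed.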
Most of the attempts that I had were flagged as spam so I didn’t notice.
The one that almost caught me was a simple email allegedly from the CEO asking if I had a minute. I was working at a small enough company that it wouldn’t be out of the question for the CEO to email me and I replied, sure.
The next email was, fortunately, a clumsy attempt at fraud, asking me to buy some Xbox gift cards for a client, which made me look a little closer and notice that the email address hidden by the email client behind the CEO’s name was at a Russian ISP, so I dumped them into spam and posted a warning on Slack. Something less clumsily targeted might have caught someone (e.g., to do a test booking on our site, or open some tech window). That said, LinkedIn is useful enough to me that I’m willing to leave that window open, although it would be nice if company email addresses were less easily guessed.
The solution to such problems isn't just technical - if you see a mail from someone that sounds out of character, like in your case, the best move is to contact them directly, or via a fresh mail composed to their known address, to confirm what was sent.
Other examples are receiving an attachment from a known person with just a one-line 'check this out', or sometimes nothing at all.
Social engineering attacks always trump technical hacking and tend to be more successful.
So, the simplest and most common email scam for anyone with a work email. It's not sophisticated or targeted; if anything you do matters to anyone, you will get an email like that.
It's really on you for replying without checking the address - it's basically a "horny singles in your area" pitch, except this one could maybe give you a promotion.
During a phishing test at an old company, I was able to:
1. Buy a cheap VPS ($5)
2. Buy a domain name using Unicode homoglyphs similar to the company's name
3. Get SSL certificates from Let's Encrypt for the attack domain
4. Set up Postfix to use this domain with SPF/DKIM/DMARC
I then sent emails as the "CEO" and got past every spam and anti-impersonation filter. Outlook showed my emails as 100% legit in all ways. The ONLY way to tell was to go to email->properties->headers and KNOW to check the IP addresses of the servers.
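To be clear about why the filters passed everything: the attack domain's own SPF/DKIM/DMARC records were perfectly valid. A rough defender-side sketch of what those checks actually prove (Python with dnspython, assumed installed; example.com is a placeholder):

    # Look up a domain's SPF and DMARC TXT records. A lookalike domain
    # can publish perfectly valid records, so "passes SPF/DKIM/DMARC"
    # only proves the mail came from the domain it claims to come from -
    # not that the domain is the one you trust.
    import dns.resolver

    def txt_records(name):
        try:
            return [r.to_text() for r in dns.resolver.resolve(name, "TXT")]
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return []

    domain = "example.com"  # placeholder
    print("SPF:  ", [t for t in txt_records(domain) if "v=spf1" in t])
    print("DMARC:", [t for t in txt_records("_dmarc." + domain) if "v=DMARC1" in t])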
Homoglyphs in the Outlook desktop app are not shown as punycode. However, the users on OWA saw the punycode expansion of the domain and rightly reported the emails.
I thought that had been fixed by ordering registrars not to allow mixed-script domains (e.g. a domain must be entirely Cyrillic plus numbers, or not Cyrillic at all), though Wikipedia says this isn't yet the case for .com. I'm not sure if that's up to date, though.
Pretending I work for asdf.com, I could register аsdf.com (Cyrillic а) and maybe αsdf.com (Greek α) as a precaution.
But "xn--sdf-5cd.com" and "xn--sdf-nxc.com" both show as unavailable for registration.
Something like asclf.com (all ASCII) could be mistaken in some circumstances.
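You can verify those punycode mappings yourself with Python's standard library (a quick sketch; the stdlib "idna" codec implements the older IDNA 2003 rules, which is fine for this check):

    # U+0430 is Cyrillic a, U+03B1 is Greek alpha; both render like ASCII "a".
    for domain in ["\u0430sdf.com", "\u03b1sdf.com"]:
        print(domain, "->", domain.encode("idna").decode("ascii"))
    # аsdf.com -> xn--sdf-5cd.com
    # αsdf.com -> xn--sdf-nxc.com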
I've always wondered what on earth possesses companies to make these idiotic decisions. Take some feature that obviously works well and make a very trivial change that makes it much worse? Who sits down and decides to do this? How do they ignore the chorus of actual smart people telling them "don't do this, the domain name is important!" What possible financial ROI could they have gotten out of making such a small but terrible change? I don't understand how the checks and balances at a big, mature company like Microsoft would let something like this pass.
True, but even the domain is a really obvious signal for most of these attacks, considering most companies use their own domain for email and it's fairly difficult to fake an email coming from that domain. So something coming from <ceo><random string of digits>@gmail.com should raise alarm bells for both automated systems and users (assuming the users can actually see it, which they now can in Outlook).
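A crude automated version of that alarm bell takes a few lines (a sketch, stdlib only; the expected domain is a placeholder):

    # Flag mail whose display name is set but whose actual address is
    # off-domain - the classic 'CEO Name <random@gmail.com>' pattern.
    from email.utils import parseaddr

    EXPECTED_DOMAIN = "example.com"  # placeholder

    def suspicious(from_header):
        display, addr = parseaddr(from_header)
        domain = addr.rsplit("@", 1)[-1].lower()
        return bool(display) and domain != EXPECTED_DOMAIN

    print(suspicious("CEO Name <ceo12345@gmail.com>"))  # True
    print(suspicious("CEO Name <ceo@example.com>"))     # False

A real filter also has to handle lookalike domains (see the homoglyph thread above), which is exactly where this naive check stops helping.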
Yes of course - this is why I mentioned that authenticating the domain is the main win.
Now - setting up SPF, DMARC, etc. is technically easy, but not so much organizationally. It is a real pain in the ass to track down what outsourced communication services (marketing, payroll, travel, ...) various parts of the organization have used in the past.
The advantage that Outlook had (and has) is that it resolves user information from Active Directory. Receiving an email at Microsoft from "Bill Gates <helloworld@kjshkjsdhfkj.com>" yields a very unusual visual (the wrong firstname/lastname sequence, no picture, ...).
If I were someone of note who was concerned about increasing likelihood of attacks targeting me, I wouldn’t answer that question, doubly so if I was security conscious.
Leaks also happen accidentally, e.g. by employees copy-pasting sensitive data or code into ChatGPT or using Copilot. Although that may not result in a breach, it certainly opens up legal and IP concerns for Google.
I’m not sure those examples should be referred to as “accidents”. If you deliberately give the information to an external company, you leaked it deliberately, even if you thought they wouldn’t do anything bad with it.
Definitions might vary by organization. The training I have received defined both "insider threat" and "leak" to include accidental disclosures by insiders as well.
Incidentally, accidental disclosures are the main thing SCIFs are designed to prevent. Deliberately taking classified data out of a SCIF is not particularly difficult (although it is harder to plead accident if you are caught, which acts as a disincentive).
It varies, and the terminology in the news actually seems to favor 'leaked' as a way to describe any information a company doesn't want disclosed. You'll often see headlines like "Video Game Source Code Leaked Online" when it's a system breach, not a 'leaker'.
Recent news headlines from a cursory search:
Massive Leak Of ChatGPT Credentials (June 2023)
Far Cry's Source Code Has Leaked All Over The Internet (July 2023)
VirusTotal leaked data of 5,600 registered users (6 hours ago)
Pikmin 4 ROM leaks online (14 hours ago)
Wow, I was expecting to find a few examples of different kinds from the past year, not... several from today.
The language used is not formalized. There is no "normal". The language is chosen (or arbitrarily used) to serve the goals of the company. This may include trying to align with a minimization campaign or an overblown narrative to provide cover.
Totally agreed. Also, now I am going to start using that phrase mashed up with an Archer reference when discussing security controls - "Do you want SCIFs? Because that's how you get SCIFs." LOL.
This is not a strange hardening technique at all. You'd obviously weigh the friction vs. added security, but at Google scale you'd probably be engineering on Google sources using Google tools on Google systems anyway, it's not like you're sitting around installing a random binary from reddit on your google server and calling it a day.
Perhaps a follow-up program ends up with a Qubes-like configuration, or just separate systems.
> This is not a strange hardening technique at all.
On the one hand, I agree. In a similar vein, SWIFT recommended (last time I read the SWIFT standards) that machines you do SWIFT tasks on be segregated from your daily driver, and Microsoft recommends locked-down, no-access machines for a lot of AD admin tasks. If you work in "I can tank a bank" roles, this will sound familiar.
On the other hand, this will also lead to cargo-cult "but Google programmers don't need Internet access, why do you?"
On the gripping hand, perhaps a bunch of Google people being restricted this way will lead to a resurgence in development and documentation practices that don't assume you can download crap from the Internet all the time. That would be nice.
TBH, I doubt the benefits extend beyond Google: "no internet" on $BIGCORP's intranet is not the same as no internet at home.
There are internal sites for searching, documentation, code, forums, newsletters, questions and answers, chat, calendar, video conferencing, browsing memes, looking at cat pictures, and probably more.
> On the other hand, this will also lead to cargo-cult "but Google programmers don't need Internet access, why do you?"
To which the obvious reply is, Google already has the whole Internet cached and indexed. If they need to Google something they are only accessing the intranet.
> On the other hand, this will also lead to cargo-cult "but Google programmers don't need Internet access, why do you?"
Just a random thought: this coincides with the rollout of Bard. Perhaps the idea is that Bard (or its internal equivalent) is a suitable replacement for sites like StackOverflow?
Google has had an internal Stack Overflow for many years already. And so many Google engineers work with Google-specific technology that Stack Overflow is far less useful for them. You are not going to find out how to use Borg on Stack Overflow.
I remember, during my internship at Microsoft, one of the "perks" everyone loved was being given a beefy machine that we were admins on and an absurdly fast internet connection to go with it. While I do get the security argument, this also seems like yet another signal that Google is tightening their infamous perks.
Google also gives (gave?) employees root access on the work machines (https://news.ycombinator.com/item?id=32307102) and an incredibly fast internet connection. As a non-employee, you only need to walk into the lobby, sit on a couch meant for visitors, connect to the unsecured GoogleGuest Wi-Fi network and enjoy that absurd speed. I live very close to Google offices and I occasionally bring my laptop there since my home internet has terrible upload speeds.
I don't mean to say it's about cost savings; I meant that they're choosing the most expedient approach to this problem at the expense of employee happiness.
I'm pretty sure employees will not be cut off from the external web altogether. They can use it at home, they can use it on their phone while at work, they're not living like digital hermits. So how happiness plays into this decision is not clear to me.
They're probably allowed to use separate devices (i.e. sans corporate credentials) to access the outside web when they need to look up specs, SO questions, etc. This is just about air gapping critical systems.
I really have no idea why this is even a news item worth reporting, and jumping at the opportunity to criticize Google for this is a bit of a knee jerk HN reaction imo.
Stupid question: I thought Google was well known for its zero-trust security model; is this completely unrelated? I read the article, but it doesn't mention BeyondCorp at all. And if it applies at all, what threats are not well covered by zero-trust security that would require this? I think I misunderstood a concept (or two lol), since I thought an employee's device was basically "untrusted" by default in Google's case, and there wasn't really a lot to gain by simply targeting an employee's device?
BeyondCorp is not used for access to non-Google sites (and even if it were, it could be a WAF but not much more).
As for how BeyondCorp protects you if an employee workstation does get owned: zero trust means that simply being within a network perimeter does not grant access... but employee workstations are trusted to do whatever that employee is authorized to do. So BeyondCorp massively reduces what an attacker can do and adds lots of hurdles (e.g. requiring pressing a security key, pervasive monitoring), but it doesn't render a hacked employee workstation harmless.
Ahhh that makes a lot of sense!! Thank you! I guess it is inevitable that an attacker could have access to something, no matter how constrained and limited it is, as long as they can get an employee to authenticate even if it is for a single action.
It still sounds miles better than the traditional IT hell that is trying to constrain the devices as much as possible instead of securing the network and implementing auth internally.
I have a feeling* that this is another case where Google's internal politics and promo culture rears its ugly head. I don't think the team behind this effort talks to the team responsible for BeyondCorp. Or maybe they do, but the person who came up with this idea wants a promotion so much that they decide to ignore BeyondCorp with some half-heartedly written doc.
*: Just a feeling. I have no proof. Just speculating. Take it with a huge grain of salt.
Security is an onion. It has layers. You can be motivated to prevent people from running "curl | sh" because a phishing email told them to even if internal systems run their own authentication and authorization.
Every software-role employee at Google has two computers: a laptop and a cloudtop. I have no idea what this pilot is about, but I suspect it affects the instance you ssh into and not the laptop, which is perfectly acceptable. The VM does not require "internet" access for the work done.
A huge number of people now do most of their dev work on cloud machines. I'm one of them. I don't even have a workstation anymore. Just ssh into the cloud box to build and run code. Eliminating general access to the internet on my cloud machine wouldn't affect me at all.
I have never in my career seen a good implementation of cloud development. At every company I've ever worked for, "cloud development" is nothing but a Linux VM that gets provisioned for you in AWS, a file watcher that syncs your local files to the VM, and some extra CLI tools to run builds and tests on the VM. And every time I've used this, the overhead of verifying the syncing is up to date, that the environment is consistent between my laptop and the VM, and all this other mess... every time I end up just abandoning "cloud dev" and doing things on my laptop. God forbid you change a file in your cloud VM and forget to sync in the reverse direction. Not only is local development more reliable, but it's also faster (no remote network hop in the critical path of building things).
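For reference, the entire "file watcher" genre is about a page of code. A minimal sketch (Python with the watchdog library, assumed installed; paths and hostname are made up), with the reverse-sync foot-gun baked right into the design:

    # One-way push sync: local edits clobber the VM on every change.
    # Edit a file on the VM side and forget to sync back, and it's gone.
    import subprocess
    from watchdog.observers import Observer
    from watchdog.events import FileSystemEventHandler

    LOCAL, REMOTE = "/home/me/project/", "devvm:/home/me/project/"

    class SyncUp(FileSystemEventHandler):
        def on_any_event(self, event):
            subprocess.run(["rsync", "-az", "--delete", LOCAL, REMOTE])

    observer = Observer()
    observer.schedule(SyncUp(), LOCAL, recursive=True)
    observer.start()
    observer.join()  # sync forever; --delete is what eats your VM-side edits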
I don’t know about Facebook, but Google’s dev infra, its level of integration and convenience, is seriously the best in the world (ok, of what I’ve seen).
All the problems you’re talking about with keeping things in sync etc. are just nonexistent. You have a root directory /google/ and magic happens there. Look up articles about srcfs, objfs, piper, forge.
I honestly looked for something open source so I could set this up at least for code editing, but everything I saw was just shit. When I compile at Google, even using a local build, everything just works. At home I tried code on an NFS network share + cachefilesd + all the suggested performance flags, with the share on a home NAS (DiskStation) over 1 Gbps LAN - constant issues, like permissions. A simple test of cloning the Abseil library to that share and building takes 50%-80% more time than building locally.
The same test with the QMK firmware source is just unbearable. Even just doing a git clone there is awful, due to the many small files, I guess. Tried CIFS - also bad.
You can use GitHub Codespaces (and competitors, including ones from Google and open-source ones), where you access VS Code in-browser and the entirety of the FS is on the VM - nothing local. They're quite widely available today.
They work great in the (strong internet availability) environments in which I've used them.
Also, as an ex-googler, I can attest that google's implementation was enough that I never missed local development.
The way it works at Google is that there is no syncing necessary. I edit code in a browser IDE. I build and run code on a cloud machine that is automatically synchronized with the state of a virtual workspace I'm editing with the browser IDE. No file containing code or test input is ever stored on my laptop's filesystem.
This works great and is basically identical to working on my laptop directly except that my terminal is running over ssh rather than locally.
Why would you need files locally when most IDEs support a session over SSH that doesn't even involve a remote desktop tool? Just edit the files over there.
Not an sshfs mount. The IDE has a process running on the other machine and communicates over an ssh tunnel. Your text-change actions are sent, I presume, and only the ~visible portions of text are transmitted to your local machine. Uh, like a mainframe terminal :)
No, it's not at all an sshfs mount. It's your IDE sending commands over to another instance of itself, and it's perfectly responsive as if operating locally. VSCode and IntelliJ support this very well.
Reading and manipulating files on my local disk is near-instantaneous. Not sure how you could ever replicate that level of latency as long as there's a remote network hop required for every read and write.
Idk what srcfs you’re using but it’s not the one that I’m familiar with. There are literally patches in IntelliJ/Android Studio to SIGSTOP the IDE so it doesn’t get all confused when your credentials expire.
idk how it's implemented, but VS Code Server works just fine for me.
There's something to be said for building and running tests in the same environment your service targets; that's usually not going to be macOS or your favorite Linux distro.
How do you access your remote development machine? Is it a graphic connection / can you use an IDE?
I love the idea of a powerful remote development machine but I’ve yet to find a tolerable RDP/VNC-type connection, and that leaves either something browser-based (VS Code maybe?) or something like JetBrains’ remote development tools which aren’t quite there yet.
I use a regular terminal over SSH plus VS Code's remote machine feature. It pretty much feels like I am using VS Code with the code on my local machine, except it is on the cloudtop. That covers the vast majority of use cases I care about, so I have zero need for graphical access or RDP. But in the ultra-rare case I need it (at most about once per year), I can just RDP using Chrome Remote Desktop, and it works pretty well.
Back in my day with TS clearance working in certain airframes there was no internet access. Also no personal electronics allowed inside the secure area except for a portable cd player, and a single audio disc (had to be pressed).
Yeah, this sounds a lot like Google wants to create SIPRnet and SCIFs, restricting external access like that. Probably wise, considering how tasty a target Google is.
I was never allowed in, but at a previous employer we had a SCIF on site that had no internet in that room, and nothing was allowed out (except by a very painful process involving a briefcase, handcuffs, multiple people, etc).
Needless to say, none of the computers in the SCIF had internet. Keeping them updated was surprisingly simple - we could bring updates in, we just had an ever-growing stack of USB drives and HDDs that never left lol.
I worked in a place that had separate networks. The one with internet didn't have our projects on it. It was kind of a pain to not have internet to look stuff up, but it was Ada or C++ and we had books...
In BigCo land there are multiple attack vectors to the top execs.
I was literally in a convo today where our CFO and his EA are now virtually unreachable due to this type of stuff. Our IT team gave everyone burner phones above a certain grade level.
So process/VM isolation is not even worth trying anymore? Ideally your web browser can access the Internet but only the Download directory on your filesystem, and you can spawn VMs that can access particular memory ranges / disk sector ranges / devices. This is very useful for reading online manuals or for developing new Linux apps / device drivers. I am not sure what the proposed alternative is or how it's going to be more secure in practice. If side channels are really hopeless, have the browser transparently run on another physical machine and tunnel the display to your workstation.
I don't think it's possible for any platform that allows arbitrary code execution to be completely safe. The exploits of today are far more sophisticated than they needed to be 20 years ago, but they still exist and are harder to find and fix.
I had to work in an environment like that. It sucked. Then there were the corporate intranets that blocked certain sites to prevent employees from surfing the internet too much. Those were annoying as well. Did they really add any security? Tough to say, because the places I mentioned also didn't tend to hire and retain the best and the brightest. Most of those types of people wanted more autonomy.
Google's move to restrict internet access for some employees shows the company takes security seriously. It's a proactive step to safeguard sensitive information, but striking a balance between security and productivity will be crucial.
> The company will disable internet access on the select desktops, with the exception of internal web-based tools and Google-owned websites like Google Drive and Gmail
What they are doing is called a "firewall", we have been doing that for a few decades now.
A company like Google provides critical infrastructure for the internet. They're full of very smart people, but they have had some very sloppy goofs over the years. I'm glad that they're trying to be more dependable.
> They're full of very smart people, but they have had some very sloppy goofs over the years.
That's because smart people tend to overestimate their abilities and think they are invincible. In fact the comment posted just below this one is jumping to conclusions that this probably only applies to the inferior-intellect mechanical and electrical engineers.
I don't know if they're really overestimating their abilities. When I see things go wrong, most of the time people knew it was a possibility, but it got deprioritized and forgotten about. Process helps avoid that, and guardrails protect you when an unavoidable error does happen.
I know it’d be incredibly painful initially, but I’m surprised more firms don’t adopt a policy where they only accept traffic to an approved accept list of IPs/domains instead of permitting traffic unless explicitly denied. With an interface that allows employees to request allowances, this seems like a great added layer of security.
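The policy core of such a setup is tiny; the hard part is the request/approval workflow around it. A sketch of the default-deny check itself (Python; the approved domains are placeholders):

    # Default-deny egress: permit exact matches and subdomains of an
    # approved list, refuse everything else. Real deployments need an
    # allowance-request workflow and logging on top of this.
    ALLOWED = {"pypi.org", "files.pythonhosted.org", "github.com"}

    def egress_allowed(host):
        parts = host.lower().rstrip(".").split(".")
        return any(".".join(parts[i:]) in ALLOWED for i in range(len(parts) - 1))

    print(egress_allowed("files.pythonhosted.org"))     # True
    print(egress_allowed("pypi.org.attacker.example"))  # False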
Having worked a job reviewing the coding habits and behaviors of software engineers for a year and a half, I'm gonna just go ahead and say you're wrong, it's probably especially for the software engineers.
At every tech company I've ever worked at, normal devs have had administrator access on their own Mac or Linux workstations; it's usually only the sales/product folks who have locked-down Windows machines.
And most SRE folks have sudo access on production VMs too
fwiw i think the article is talking about root on their lap/desktop machines, not production.
and regarding production, pure root access was revoked for everyone YEARS ago and replaced w/user and admin role accounts. admin was severely restricted, and could do most (but not all) things that root could do. this was for a server only, not accessing anything in borg/omega.
also, if a rando package was installed on a prod server there are safeguards in place that would detect a change and wipe it immediately. in my time that was called the 'assimilator'.
i'm sure that a very, very select few have actual root/sudo.
(disclaimer: i worked there 03-11, the role accounts were rolled out in 08 or 09 IIRC. things could be different now, and if so probably even more restrictive)
It wasn't quite immediately, it would take a few hours to detect+revert. And that was only the root fs, there were other places to hide things if you really wanted. But then there were other detection systems too. (Probably fairly different now, I left in '11 too)
In most orgs you'll see Windows, and the sysadmins and devs will have LOCAL\administrator but not LOCAL\SYSTEM. That's usually because developing software (debuggers) or using sysadmin tools is an admin-only thing.
As for me, I do prefer Linux on the desktop proper, with appropriate sudo access for root. But again, I also want SELinux on as enforcing, and fapolicyd enabled with a good setup. If it's a laptop, I definitely want Clevis and Tang for enforcing attested and encrypted drives. If my shit is stolen, I don't want to be the vector where everything is stolen.
I've only been at one such place. At Google the desktops mostly run Linux, and you pretty much only get another option if you're actually working on stuff that needs it.
That’s definitely selection bias. For big companies, in my experience, if you need a Unix-like development environment you’re going to be on a Mac. Small companies and startups are different of course.
Every non-technical company I have known, except one, has allowed its devs root access on their own machines. I have never needed someone else's password to install software or run sudo on a work machine.
In the real world, endpoint security is very much a thing, and that means workstations so locked down, you can't even change the screensaver, let alone install unauthorized software.
If you work in health, for all intents and purposes you must be HITRUST compliant, and that basically mandates all sorts of lockdowns and network restrictions. ANYTHING that touches PHI must be airgapped.
> In the real world, endpoint security is very much a thing, and that means workstations so locked down, you can't even change the screensaver, let alone install unauthorized software.
I've been in the software industry since 1989 and I've never worked at a single company that didn't let developers have root/admin on their own PCs. "The real world" is quite a varied place.
It's really not that rare. For sure there are companies (big and small) which have a quite paranoid lockdown environment, but there are just as many which understand that local admin access is quite important for developer productivity and if you have the appropriate network architecture it's no less secure.
> You could also just do all of your development with fake PHI, but I've learned not to tell health people what to do.
Yeah, software companies like Google also lock down private user data like crazy. But you can have root access (or at least you could when I worked there), because for 99% of development you couldn't touch actual user data anyway.
It really makes the most sense to grant employees the least amount of access possible for them to do their jobs. Anything else is courting unnecessary risk.