Disabling all of Defender is complex, but disabling automatic sample submission is easy. It's an option in the Security settings app, and you're even allowed to disable it during first-time setup (or were, last I installed Windows 10).
It nags you once, but you can ask it to stop.
Besides, uploading unseen executable code and scanning photos are far from the same tech. They're very different things.
Every binary is executable by default; images are only a subcategory. As malware goes, any file is a potential threat.
Also, I think disabling automatic sample submission does not disable hash upload, only full-file uploads.
Pop open developer tools - Gmail's JavaScript is heavily obfuscated, not just minified. (I think it's a custom, self-modifying VM that's written in JavaScript, and it fetches pieces of itself over the network, like ReCAPTCHA).
This "DRM" plays at least some role in making the optimizers in V8 work a lot harder to get anything reasonable out of the spaghetti.
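If the custom-VM claim is accurate, the pattern looks roughly like this (a toy illustration of my own, not Gmail's actual code): the "real" program lives in a data array, and V8 only ever sees a generic dispatch loop, which it can't specialize the way it could straight-line JavaScript.

```javascript
// Toy bytecode interpreter: illustrates why a VM-in-JS frustrates V8.
// The engine optimizes only the dispatch loop; the actual program logic
// is opaque data (the bytecode array), so type feedback and inlining
// can't see through it.
const PUSH = 0, ADD = 1, MUL = 2;

function run(bytecode) {
  const stack = [];
  let pc = 0;
  while (pc < bytecode.length) {
    switch (bytecode[pc++]) {
      case PUSH: stack.push(bytecode[pc++]); break;
      case ADD:  stack.push(stack.pop() + stack.pop()); break;
      case MUL:  stack.push(stack.pop() * stack.pop()); break;
    }
  }
  return stack.pop();
}

// (2 + 3) * 4
console.log(run([PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL])); // 20
```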
Why Google needs DRM for a web email app is beyond me.
They're too embarrassed by all the shit code they've written that makes the app slow, so they obfuscate it to try to hide how shit it is, and in turn it becomes even slower ;)
The reason we use such tactics is to raise the barrier to reverse engineering, because our teams value their work. Some people claim that security through obscurity is bad. I challenge this view: I claim that every security defense, RSA included, is a form of obscurity.
It's only a matter of time until RSA breaks, the same way obfuscation does.
Gmail is not your build-it-over-a-weekend kind of app. It's highly sophisticated and delivers huge value.
There are a lot of people who hate obfuscation. Some are communists and others are attackers.
My wife (she works in the fraud detection department) found an interesting attacker who masqueraded as a security researcher and student of X University, but was in fact criminal scum. He had reverse-engineered the anti-fraud scripts of many websites and published them on GitHub for everyone to see. His main goal was to attract malicious buyers and sell them scripts that bypass this protection. It was one heck of a marketing scheme.
First, encryption is not "obscurity" in the same way you think DRM is.
Second, several other email providers don't think they need to rely on some performance-killing DRM to "protect" their web app (oh no, what of all the value!).
Outlook minifies some of its files but doesn't use any obfuscation; apps like ProtonMail[0] and Tutanota[1] are even open source.
(I'm actually starting to migrate off of Gmail to Protonmail myself.)
Encryption is "obscurity". For example, Quantum computers will break RSA.
> Quantum computers will break RSA
It will just take X amount of time, and so does breaking any protection like DRM.
The goal of any security method is increasing attack time.
TLS got attacked, SSL got attacked. History repeats itself. Period.
> Oh, and there's no need to call people "communists", "attackers", or "criminal scum". Be civil.
Why? I have a right to use these terms. What should I use instead?
Would you call Osama Bin Laden "His Highness Bin Laden"?
The words exist for a reason. I use them in the appropriate context.
People don't understand Russian soul. I'm very direct and speak my mind!
>> Second, several other email providers don't think they need to rely on some performance-killing DRM to "protect" their web app (oh no, what of all the value!).
>> Outlook has a part of their files minified, but doesn't use any obfuscation; apps like ProtonMail[0] and Tutanota[1] are even open source.
So? What's your point?
You have Linux, which is open source, and you have Windows (a lot of its parts, including licensing, are obfuscated).
The performance hit is minimal. ProtonMail & Tutanota are way slower than Gmail and lack the cutting-edge features we offer.
Gmail vs Outlook is like Ferrari vs Toyota.
Gmail has great UX; even my grandmother can use it.
The point is that nobody relevant is going to get stopped by this DRM. That's because nobody relevant is likely to even try copying it in the first place, and if an economically relevant party were so unwise, I expect Google's legal resources are sufficient to discourage plain copying, even if a court case is never won. They might learn some tricks, sure, but the chances of Gmail's client-side bits doing anything so novel that it's also competitively important are slim to none. (And if there really is some kind of secret sauce that needs protecting, relying on DRM seems quite... optimistic.) Finally, we're only talking front-end here, not backend; and surely the backend is at least as important a part of the value proposition.
While there may be a case for DRM in some places, gmail is almost certainly not it.
How exactly is post-login-app obfuscation supposed to be relevant to fraudsters who game AdWords, reCAPTCHA, etc.?
Obviously people and corporations can choose to obfuscate; that's their prerogative. It doesn't mean it's effective or wise in every instance, though, does it? Gmail is entirely free to waste effort and make its app slower and less (easily) maintainable, no question there.
So your claim is that they can't automate the UI (well) via conventional browser automation tools, and can't access whatever endpoints gmail the client-side-app uses without being detected, but could if the code wasn't obfuscated?
I'll bite once again: from personal experience, I knew Gmail was slower than ProtonMail, but I tested it anyway. I loaded both Gmail and ProtonMail with the browser's profiler running.
Gmail spent 6x the time ProtonMail did in the garbage collector, and 2x the time ProtonMail spent in the JIT compiler.
6x is minimal to me considering how complex Gmail is. It's not that slow. I can get up and running quickly, and it's okay for anyone who can be patient for a few seconds.
You always have the option of loading "Basic HTML", and you can get a ProtonMail- or Toyota-like experience there ;)
I don't know what your agenda really is. Attacking DRM is bad.
You have issues like spammers abusing the Gmail interface to send emails from Google IPs, and there, DRM rocks.
I think those are particularly choice words coming from Basecamp, who have been particularly active in calling out Apple's treatment of iOS and the App Store, which is in at least some sense political advocacy. Life is necessarily political; and they should be more aware of it than most others.
I'm not endorsing Apple's behavior with the App Store, but seriously, this is very out-of-character and I don't like the tone or the content of the messaging here.
They also say, "We're in the business of making software, and a few tangential things that touch that edge."
Advocacy on the policies of one of the world's foremost software distribution gatekeepers seems well within how they define "their business". I don't think it takes much squinting to see how they can view this sort of advocacy as in their lane, while the popular political discussions of the day are not.
> Note that we will continue to engage in politics that directly relate to our business or products. This means topics like antitrust, privacy, employee surveillance. If you're in doubt as to whether something falls within those lines or not, please, again, reach out for guidance.
I think some (not all) of the changes outlined in the post make some degree of sense, but the overall tone of the blog post just comes off as incredibly condescending.
Oh, there's a lot more "fun" stuff you can do in kernel mode. One comedic example is setting the CPU Vcore offset to +2.2V for fun/revenge. I don't know if it will destroy CPUs permanently, but it would be an interesting experiment.
More importantly though, once you're in the kernel, it's much easier to hide your presence from all manner of Windows sysadmin tools.
There's always going to be software to defeat those tools! I've done my fair share of experimentation with source-to-source transformations; you can do things like substitute for/while loops, change conditions around, inline/outline various constants and variable declarations...
The sky's the limit when you think about it really.
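For a flavor of what those transformations look like, here's a hand-worked before/after pair (my own toy example, not output from any real tool): the loop form changes and a constant gets outlined, but the observable behavior is identical.

```javascript
// Before: the straightforward version.
function sumOriginal(n) {
  let total = 0;
  for (let i = 0; i < n; i++) total += i;
  return total;
}

// After two behavior-preserving transforms:
// the for-loop is rewritten as a while-loop, and the literal 0
// is outlined into a named constant.
const LOOP_START = 0;
function sumTransformed(n) {
  let total = LOOP_START;
  let i = LOOP_START;
  while (i < n) {
    total += i;
    i++;
  }
  return total;
}

console.log(sumOriginal(10), sumTransformed(10)); // 45 45
```

A real source-to-source tool applies rewrites like these mechanically across a whole program, which is exactly why signature-based detection of "known good" code keeps losing the arms race.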
I think conversations about cheating are missing the forest for the trees - or the learning for the degree.
I maintain that cheating is almost always a pedagogical problem first, and a trust problem second.
Cheating becomes a convenient solution when you're dealing with a course with inadequate teaching, a difficult learning curve, a nonsensical curriculum, or a lack of motivation for students to do their best work themselves. Fixing cheating doesn't involve surveillance; it involves removing the incentive structure for cheating in the first place. That may mean rethinking grading, course material, or assignments, but it is certainly not impossible.
We act surprised when students "cheat" in CS exams that are expected to be done with only pen and paper, when nearly any real workplace will give you a text editor or IDE of your choice. So give them an IDE! Give them the API documentation! Don't create an incentive to test the waters of broken assignment rules.
Another relevant area of work is ungrading, or self-graded courses in general - when you remove the friction that grades cause in the feedback loop of learning; learning becomes an organic process for everyone involved. There's a lot of interesting pedagogical research, and just "cheating is rampant" doesn't scratch the surface of "but why is it?"
In addition, cheating is a game. Every second you spend drumming up cheating in front of your students is another second they think about trying to get away with cheating you. If you tell students they're not to be trusted, they will not give you any reasons to trust them; in many cases it's as simple as that.
A combination of good pedagogical design, and building a relationship of mutual trust with your students, is certainly more fruitful than creating an academic police state (of which Proctorio is only one part). There will always be people slipping through the cracks, but there are other safeguards in the world to catch them too.
Another important thing is that conversations about cheating always assume a very specific framing of higher education - that they exist primarily as a gatekeeper or arbiter of who-knows-what; the university also has the purpose of providing an environment for learning. And in many cases, cheating is just a result of a failure to provide that environment.
In addition, if the primary beneficiary of university degrees are the employers (or the people who care about the who-knows-what stamp), then why do students foot the bill for tuition? If you choose to accept this framing of universities primarily as arbiters, isn't access to a degree just a head tax to enter the skilled labor market?
There are two problems with this statement: first, the assumption that students don't care about privacy; second, the lack of discussion about consent.
I'm a student who takes special care about the software I install on my laptop. I use a Linux distro, run primarily open-source software, and sandbox every single proprietary app (limited access to files, no admin at all, no screen recording, disabled webcam, ...). I've also looked into several of these exam spyware tools (you really are forcing students to install spyware), and they're built with often hilariously poor security practices.
Which is to say nothing of the regularly stolen source code; if you held the exam-spyware vendors to the same standards you hold students to, you would write up almost every single one of them to the Academic Integrity office. Another example of hypocrisy in academia, from the perspective of a disgruntled student.
I deliberately do not install any video games with invasive anti-cheating functionality (and I regularly critique them, like I do for exam spyware); that is a false equivalence anyway, since they don't deal in the same breadth of personally identifiable information (like a permanently saved panorama of my bedroom).
Don't assume all students are the same.
Second, the consent dynamics are wildly different. For a game, it's "you trade this in for fun/relaxation", and there are always other games that don't spy on you. I play those. With universities, many pulled a fast one and introduced the spyware after students' tuition was already paid, then said "use it or drop the course". You can't switch universities because one university didn't consider the ethics of spyware; you can switch games much more easily.
> During the test, the student is only working on the test, which is not private or secret.
You fail to consider the circumstances in which the test takes place. Students take the test in their personal spaces, and earlier in the thread, you mentioned essentially inspecting a student's living space (...angle of camera, light, checking environment, etc...) "Checking environment" is really just a cold, "process" word for inspecting a student's living space.
A student's room can often have private or secret things about them. Before you ask, not every student has the privilege of a separate, clean, blank room to take tests in. A personal space is inevitably going to have personal, private things. I've brought this up before; I personally know friends who were outed to professors as trans because their personal space had things like needles, and then you even have naive professors assuming "drugs" when it's really just medication.
It could be anything else besides that, in fact - calendars with things scribbled on them; family photos; posters for political organizations; if you look in someone's bedroom, you're inevitably going to find out things about them that they would rather you not know.
Would you take your students on a tour of your bedroom while you're teaching an online class?
EDIT: In addition, there are non-traditional students, high-risk students, and interruptions in general; there's not _only_ a test going on. A family member of mine was interrupted in the middle of an exam because someone from the government knocked on the door to take our temperatures and make sure we didn't have COVID. There's always more going on, too.
"Security researcher" here: Proctorio's "zero-knowledge encryption" claims were in name only, pretty much.
TL;DR Canvas and Moodle use incrementing integers for both user ID and quiz ID. Proctorio's "zero-knowledge encryption" has a shared key derived from the two IDs; they store the user ID, so that's effectively a single PIN. With their older settings, you can brute force a quiz ID in a couple hours at most.
They increased the time cost for the brute force to now take days/weeks, but that's still peanuts and the attack scales really well, because most exams take place at the same time (students start/end at similar times), so once you crack the quiz ID for one record, that's tens-hundreds of records; and since IDs are just increasing numbers, once you find the lower bound, working your way upwards is much easier.
They also added an option for universities to use PGP keys - but that involves training faculty, or manual setup.
Genshin Impact's anti-cheat is not completely secure: you can use it to read/write user-mode memory and read kernel-mode memory with kernel privileges: https://github.com/ScHaTTeNLiLiE/libmhyprot
Mirror repo after the original author took the repo down, but still exploitable AFAIK.