Someone once asked how you would cope with lawful interception by corrupt law enforcement agencies, so allow me to explain TideCloak in that context: imagine a corrupt agency requests the identity information of a specific TideCloak user. They file a subpoena with the TideCloak operator – but the root identity of the user doesn't reside there; it's with Tide's Fabric. So the agency has to file subpoenas in every jurisdiction across which this particular user's ID is split. Let's say this user's authority is spread across 17 countries in 9 jurisdictions – you'd need the cooperation of all those jurisdictions to get access to a single user identity. Still possible for an Interpol-coordinated task force running a valid law enforcement operation, but very hard to pull off under the radar if the agency is corrupt.
In the end, it boils down to trusting only what you can verify. Unlike any comparable system today, every cog in TideCloak's architecture is entirely verifiable to its administrators, operators and end-users – so there's no requirement to blindly trust any single component. There is no "us" to trust due to the decentralized nature of the fabric. I can explain how decentralizing each aspect works in TideCloak, but unfortunately, it’ll take more than half a page...
If you are serious about network-side blocking, do TLS interception (lmao), but that is a lot of maintenance, adds attack surface of its own, and the average OpenWrt device isn't beefy enough for such things.
Firefox and uBlock Origin against ads.
There is cooler stuff for OpenWrt.
Mesh nets between friends, to share internal services.
Just tinkering with and learning about network stuff.
Adding IPv6 via a tunnel if the ISP doesn't support it. Having Wi-Fi whose auto-channel selection doesn't suck.
Dual Antennas -> Sword fighting.
Only really fun with multiple devices, dabbling with mesh mode or maxing out throughput.
With 802.11ax on dual bands (40 MHz plus 160 MHz channels), throughput is faster than some cables – a 2x2 ax link at 160 MHz tops out around 2.4 Gbit/s PHY rate, more than gigabit Ethernet.
I've seen devices with IPMI that, by design, had unauthenticated admin login to the IPMI from the host side, and that access was not removable.
They could also flash the IPMI firmware from the host.
So if a server with such an IPMI is infected, you can't trust reimaging it via IPMI, because the IPMI itself can be hijacked as well.
I would consider that mostly a feature. The situation where that is useful (you somehow lost the credentials for the BMC, but have root access to the host) is in my experience significantly more common (I see that multiple times a year) than an attacker implanting stuff into the BMC firmware (never seen that).
Obviously if you rent out whole physical machines and automate the provisioning by IPMI, then the last thing you want is the customer having admin access to the BMC.
Dell iDRAC has an interesting feature that allows you to make all of the BMC configuration read-only, which can only be disabled by factory-resetting the iDRAC by means of a physical (and, IIRC, not exactly documented) switch on the BMC board. (Well, it is still _i_DRAC as in "integrated", but on current higher-end PowerEdges the iDRAC is a separate OCP-like card – though the system does not work without it.)
Lennart is technically doing good work.
While his tools are less complicated than the current hilariously convoluted standard boot process, they are still too complicated to use well.
He also misses the point with the attack scenarios.
If you LUKS-encrypt your data and choose a good passphrase, the brunt of the work against theft is already done.
Protecting against bad passwords is futile in the long run. (Will elaborate if requested.)
The case where someone images your drive for offline brute-forcing, or manipulates your boot binaries, is rare.
The true benefit of a signed boot chain is to have security patches work retroactively – "compromise recovery".
Automated attacks and malware from the internet side are way more common.
Imagine one of your daemons is compromised. As long as it does not escalate privileges, it can only gain persistence via corruptible data files or config accessible to itself.
Now a patch comes along that closes the hole through which the daemon gets reinfected.
The malware will not start on daemon restart.
With signed booting you can extend that property to the kernel and to root.
Signed booting with rollback protection brings you back into a known-good state.
As long as the malware is not part of that signed system, it won't run at launch.
But who signs my stuff, especially my own scripts and automation?
Me of course, if I had good tooling.
If that became normal, malware would just steal the key.
A TPM or other keybearer device lets you conditionally unlock a signing key.
So to sign, you can boot your system into a runlevel / target / ... that does not run auxiliary scripts from writable locations.
If that state is measured by the TPM, you can sign.
With good enough tooling this is workable.
If implemented well, this even helps maintenance of the system.
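To make the "measured state unlocks the signing key" idea concrete, here is a minimal Python sketch of how PCR-style measurement behaves – the component names are made up for illustration, and a real setup would seal the key with tpm2-tools (or similar) against the actual PCR values rather than simulate them:

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style PCR extend: new = SHA-256(old || digest(measured component))."""
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

# Simulate measuring the boot chain of the stripped-down "signing" target.
pcr = bytes(32)  # PCRs start out all zeros
for component in (b"firmware", b"bootloader", b"kernel+initrd", b"signing-target"):
    pcr = extend(pcr, component)

# The signing key would be sealed against exactly this PCR value; any extra
# or modified component yields a different digest and the key stays locked.
EXPECTED_PCR = pcr
print("unlock signing key:", pcr == EXPECTED_PCR)
```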
In the state of things now, it's a horribly convoluted mess that doesn't give extra security but adds 10 more points at which you can break your boot.
Plus, UEFI itself is again a complexity monster, full of holes on very many machines.
The whole x86 pre-boot stack, AMD or Intel, is a horrible complexity monster.
> But who signs my stuff, especially my own scripts and automation? Me of course, if I had good tooling.
There's already a mechanism, provided for DKMS - you enrol a 'Machine Owner Key' which only root can access, and any time you update your kernel (requiring you to recompile a kernel module), it gets signed with the MOK. Which of course means any malware that gains root access can sign itself too.
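Roughly what that flow looks like, just to illustrate the mechanism – the sign-file path and the placeholder module name below are examples and details differ per distro:

```python
# Rough sketch of the MOK flow (run as root; mokutil enrollment only
# completes through MokManager on the next reboot).
import os
import subprocess

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Create a Machine Owner Key (private key + DER certificate).
run("openssl", "req", "-new", "-x509", "-newkey", "rsa:2048",
    "-keyout", "MOK.priv", "-outform", "DER", "-out", "MOK.der",
    "-nodes", "-days", "36500", "-subj", "/CN=Local MOK/")

# 2. Queue the certificate for enrollment (MokManager asks for the
#    password you set here at the next boot).
run("mokutil", "--import", "MOK.der")

# 3. Sign a kernel module with the MOK -- this is the step DKMS-style hooks
#    automate, and anything running as root can do the same to its own code.
kernel = os.uname().release
sign_file = f"/usr/src/linux-headers-{kernel}/scripts/sign-file"  # path differs on Fedora/RHEL
run(sign_file, "sha256", "MOK.priv", "MOK.der", "some_module.ko")  # placeholder module
```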
An alternative is that any time you update your kernel and reboot, things like the NVIDIA drivers would get disabled until you perform some special ceremony. Not that great for usability; we want people to install updates in a timely manner, after all, so we don't want to make it too inconvenient.
Another alternative is to only load code blessed by a Microsoft-approved Linux distro - the Ubuntu Core approach. But this requires abandoning the open source ethos.
If the firmware is not buggy, and
if Secure Boot is set up to use keys which never signed any bootloader that lets you modify the system pre-login:
then it kind of matters.
If you have a gaming board: firmware integrity checks are mostly easily bypassable.
If you use a distro's default bootloader scheme: you can compromise the OS pre-login.
The CA that signed the shim, the Microsoft 3rd-party CA, signed all kinds of crap that lets you run whatever you want from there.
The whole shim thing is not about security but about having stuff boot smoothly without screwing with BIOS settings.
If you want it to give you integrity, then you need to roll your own keys and make sure the firmware has no bugs that let one bypass signature checks.
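For reference, one way to do the roll-your-own-keys part, sketched here with the sbctl tool – the firmware has to be put into setup mode first, and the signed paths are examples that depend on your bootloader layout, so treat this as a sketch rather than a full guide:

```python
# Sketch: enroll your own Secure Boot keys and sign the boot chain with sbctl.
import subprocess

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run("sbctl", "create-keys")      # generate your own PK/KEK/db keys
run("sbctl", "enroll-keys")      # enroll only your keys; add --microsoft to keep
                                 # Microsoft's certs (needed for many option ROMs
                                 # and for Windows, but that readmits the 3rd-party CA)
for target in ("/boot/vmlinuz-linux",
               "/boot/EFI/systemd/systemd-bootx64.efi"):
    run("sbctl", "sign", "-s", target)  # sign and remember the file for future updates
run("sbctl", "verify")           # check that everything in the ESP is signed
```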
All this is orthogonal to self sovereign systems.
On Intel that's gone. You can't have a secure and sovereign firmware setup without extreme extra effort.
The whole Secure Boot roll-your-own-keys approach is the next best thing as harm reduction.
If we had some way of making the system actually static and separate from user data, and a way to boot that prevents any persistent executable code from the user-data part from running, we would have a clean state. From there we could bind that state to unlocking signing keys that sign the next version of the booted stuff.
Then we could have nice security properties minus whatever is bad in the Intel TCB.
Compromise to root would actually be nicely reversible.
Immutable distros exist. But they are not there yet in terms of conditional readonly-ness.
Many Surface-brand devices have had a toggle in the BIOS for something like "security level" for years.
It is a switch between the Windows signing CA only and the Windows signing CA plus the 3rd-party CA.
The latter allows shim distros to boot. You can still set your own keys on them.
Yeah, basically both of these super-duper-encrypted, privacy-oriented services want your phone number.
Sorry, but that's not privacy. I don't care what they do to encrypt your messages; they are still tied to me, which makes the super-duper encryption pointless.
I agree with you that Telegram does not even have E2EE and that's bad.
But in this thread, GP was just talking about metadata. The goalpost here is metadata. GP particularly mentioned that Signal "fixed" the phone number issue and I just want to note that currently Signal isn't any better than Telegram in this aspect.
Telegram FOSS clients exist only because of unpaid volunteers who take Telegram's messy mix of open and closed parts, rip the closed parts out, and replace them.
The Telegram organisation is notoriously late in releasing the source code for their current release. When they do, it's a giant squashed commit without a proper changelog.
These releases must then first be wrangled by volunteers before they build cleanly.
The Telegram Org itself gives no support to volunteers at all.
You can't register with FOSS builds, only with the official binaries.
Nowadays a lot of features are premium-only. You can only get Premium with the official binaries; that part is closed.
Telegram has fully reproducible builds and is not that complicated to build, no issues there. They even have a guide on how to build and verify. [0] No need to wrangle or modify; it generally builds as-is (at least in my experience).
Granted, yes, the version commits are squashed like you said. [1] However, I haven't seen a source release lag behind the store releases – any sources on that?
A couple of months ago I actually verified a build of Telegram on my friend's phone, as he thought something might be off, and didn't have any issues there (the build matched).
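For anyone curious what such a check looks like in principle: the sketch below is not Telegram's official procedure (their guide covers the exact supported steps) – it's just the rough idea of comparing your own build against the APK pulled off the phone, ignoring only the store build's signature files, with example file names:

```python
# Toy sketch: compare a locally built APK against the installed one
# (e.g. pulled via `adb pull`), ignoring signature metadata under META-INF/.
import hashlib
import zipfile

def content_hashes(apk_path: str) -> dict:
    """Map every APK entry except signing metadata to its SHA-256 digest."""
    hashes = {}
    with zipfile.ZipFile(apk_path) as apk:
        for name in apk.namelist():
            if name.startswith("META-INF/"):
                continue  # the store signature necessarily differs from yours
            hashes[name] = hashlib.sha256(apk.read(name)).hexdigest()
    return hashes

local = content_hashes("app-release-local.apk")        # built from source yourself
store = content_hashes("app-release-from-phone.apk")   # pulled from the device
if local == store:
    print("builds match")
else:
    changed = {k for k in local.keys() | store.keys() if local.get(k) != store.get(k)}
    print("mismatch in:", sorted(changed))
```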
This doesn't affect users who download these from distro repos or F-Droid, because every single update they get is built from the source code. There is never a lag, even for one second, because without the source code there are no builds.
Pretty much all the packages on Linux repos come from package maintainers taking upstream source code, removing parts they don't like and then building that. This is a normal part of packaging and building open-source apps.
Yes, and that's why users sometimes spend months on old builds.
Also, which distro packages Telegram?
Fedora doesn't.
Debian does, but at times it was so old that the client crashed on receiving server messages because it wasn't fully compatible. It actually crashed, as in segfaulted.