> * Better tooling to determine why it's falling back to DERP instead of direct for remote clients. DERP relays should be an absolute last resort to provide connectivity for Business-plan-level customers (very slow), and the way TS works just assumes any connectivity is fine.

Tailscale touts all the performance benefits of the WireGuard protocol, but in practice, between the userland WireGuard that seems to be used all the time on every platform (even Linux) and the over-reliance on DERP, it has none of the performance benefits of the real thing.


I thought they vastly improved user-space wireguard performance?

https://tailscale.com/blog/more-throughput

Not sure if the kernel implementation pulled ahead again; I don't really follow these things.

Also, I'm not defending Tailscale; I respect them, but I agree they are a one-size-fits-some solution.


They also seem to be needlessly doing DERP over TCP in some cases where UDP would actually work.


> Byte mode uses (you guessed it!) 8 bits per single character.

8 bits is enough to represent the entire ASCII char table; there must be some other limitation going on. QR code control chars, maybe?

The linked "byte mode" table only has 45 individual chars. This could be represented with 6 bits with room to spare.


> 8 bits is enough to represent the entire ascii char table, there must be some other limitation going on. QR code control chars maybe?

The specified capacity of "25 characters" for QR code size is 25 characters in alphanumeric mode, not in byte mode.

> The linked "byte mode" table only has 45 individual chars. This could be represented with 6 bits with room to spare..

Even better than that - it's 5.5 bits per character! Each pair of characters is represented as a single 11-bit code unit. (This works because 45 x 45 = 2025, which is just barely under 2^11 = 2048.)
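
To make the pairing concrete, here's a rough Python sketch of the idea (the 45-character table is the standard alphanumeric set; the function name is just for illustration):

    # QR alphanumeric mode: each pair of characters becomes one 11-bit value.
    ALPHANUMERIC = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ $%*+-./:"  # 45 characters

    def encode_alphanumeric(text):
        values = [ALPHANUMERIC.index(c) for c in text]
        bits = []
        for i in range(0, len(values) - 1, 2):
            bits.append(format(values[i] * 45 + values[i + 1], "011b"))  # pair -> 11 bits
        if len(values) % 2 == 1:
            bits.append(format(values[-1], "06b"))  # a lone trailing character gets 6 bits
        return "".join(bits)

    print(encode_alphanumeric("HELLO"))  # 2 pairs * 11 bits + 1 leftover * 6 bits = 28 bits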

There's apparently some support in the QR standard for mixed-encoding codes, but few encoders seem to use that.


Apparently you can specify the text encoding in a thing called “ECI”, but support varies and most readers just guess the encoding by the bytes. I imagine these days most are UTF8 https://stackoverflow.com/questions/9699657/is-utf-8-the-enc...


> The linked "byte mode" table only has 45 individual chars.

No, that link is for alphanumeric mode, which uses 5.5 bits per character (45 * 45 = 2025 <= 2048, so it fits in 11 bits).


It does not really matter: even though the models get updated, the new data was produced with the help of the older models, so it is feeding on itself.

Just imagine how hard it would be to push a new programming language. No AI models would be able to generate code in that new language, or they would be extremely limited. This would make adoption much more difficult in a world where all developers use AI tooling extensively.

I believe this trend could create new opportunities also: as everyone uses AI tools to generate statistically average quality code, only those not using AI tools will be able to create true innovation.


In some sense I am hopeful that AI will be able to just write everything directly in binary. Everything written ideally, with no abstraction, fluff or bumpers for human brains. Computers don't talk in any high level programming language, they talk in binary. If anything we should probably be focusing LLMs on getting good at that.

I can only imagine that the amount of energy wasted on CPU cycles from layers of bloated programming languages makes stuff like bitcoin mining look like a rounding error.


Not sure that’s always a good thing - see the occasionally erratic behavior of Tesla’s autopilot. It directly speaks the language of the systems it connects with, and also occasionally steers into dividers, killing people - and nobody knows why. We need to be able to verify correctness of what the models generate.


So you can't even debug or check the LLM's output for correctness? Good luck with that.

If anything, the best use for LLMs is to produce code in very strong languages like Lean, where correctness of the code can be established. Just trusting machine code output would be irresponsible.
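
As a toy illustration of what "correctness can be established" means, here's a minimal Lean 4 sketch (the function and theorem are made up for the example, not from any real project):

    -- A trivial function together with a machine-checked statement about it.
    def double (n : Nat) : Nat := n + n

    -- The proof is checked by the compiler; if the claim were false, this would not build.
    theorem double_spec (n : Nat) : double n = n + n := rfl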


I don't quite understand the need for a timestamp. This only reduces entropy? You wouldn't think of using the current date in a password prefix for example.

Aren't you going to track the keys in a database, where you can keep the tenant id and creation time, scope of the key and any other significant metadata anyway?

A static prefix + checksum, maybe a version number so you can future-proof the system sounds like best practice. For example `ASKEY1-(128bit random base32 encoded)-(chksum)`.
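
As a rough Python sketch of that shape (the ASKEY1 prefix and the CRC32 checksum are just illustrative choices, not a standard):

    import base64, secrets, zlib

    PREFIX = "ASKEY1"  # static, versioned prefix so keys are easy to identify and rotate

    def generate_key():
        # 128 random bits, base32-encoded, plus a short checksum for cheap syntactic validation.
        body = base64.b32encode(secrets.token_bytes(16)).decode().rstrip("=")
        checksum = format(zlib.crc32(body.encode()), "08x")
        return "%s-%s-%s" % (PREFIX, body, checksum)

    def looks_valid(key):
        # Reject obviously malformed keys (e.g. in a secret scanner) without a database lookup.
        try:
            prefix, body, checksum = key.split("-")
        except ValueError:
            return False
        return prefix == PREFIX and format(zlib.crc32(body.encode()), "08x") == checksum

    print(generate_key())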


I agree it might not be worth promoting as a main feature at all. But from experience, there are users that will be very vocal about it and request a dark mode.


Investing in or doing it? Not the same.


It's not openwrt, even though they pretend it is in their marketing. It's based on openwrt and might be "compatible" to some degree with other openwrt packages.

When asked for full source code they seem transparent about it:

https://forum.gl-inet.com/t/source-code-for-gl-firmware-and-...

You can't reproduce their images and they don't share the improvements.

Of course not GPL compliant but not a concern in China I believe.


You can check if your GL-iNet product supports native OpenWrt here:

https://www.gl-inet.com/support/firmware-versions/

You might find some sources here:

https://github.com/gl-inet

The issues regarding GPL compliance or lack thereof are worth noting, however. I made a point of asking for native OpenWrt firmware for the products I have from them, only to discover after the fact that due to closed source firmware blobs, it will likely never be available in that format, which was somewhat disappointing.

Given the fairly low/competitive price point of their hardware, I think it’s worth taking the time to make sure that the device suits your needs in that regard, if it’s important to you.


The bigger deal is the lack of an open source BL2/BL31, but that’s me with my tin foil hat.

Sent from my iPhone


Are you referring to open source BL2/BL31 for GL-iNet products, and/or for the OpenWrt One? I'm not sure it's possible with either, as I haven't looked into the One in detail yet; I wasn't aware it had launched until TFA was posted, though I had been aware of it since January or so.

I’d like to run fully open source network stack if possible myself, though I’m not sure if that possible without moving the goalposts and virtualizing something or doing it in software, and even then I’d have to figure out some kind of boot attestation ideally, thought I'm not sure how that's going to pan out. Isn't Intel SGX/AMD SEV/ARM CCA required for that?

Some links I thought were interesting on that topic, as it's adjacent to the discussion:

> A comparison study of intel SGX and AMD memory encryption technology

https://dl.acm.org/doi/10.1145/3214292.3214301

> vSGX: Virtualizing SGX Enclaves on AMD SEV

https://ieeexplore.ieee.org/document/9833694

What do you suggest? How’s your hat fitting, by the way?

Here's the firmware for the OpenWrt One, if that helps you determine whether it does what you want:

https://firmware-selector.openwrt.org/?version=SNAPSHOT&targ...

If you find the answer to your/our questions, please let me/us know!

Sent from my iPhone in Lockdown Mode


> It's not openwrt

This isn't entirely accurate. It absolutely is running a full OpenWRT instance. In addition to that, they have produced their own UI/shell, which is the default that you'll land on, but it's not difficult to get into LuCI.

That said, I'm not stating that it's only running OpenWRT, or that the OpenWRT instance it is running is unmodified, or trustworthy.

Also, I have struggled to get gigabit WireGuard VPN throughput on other devices that support OpenWRT.

I love FOSS, I love self-hosting, I love DIY-friendly and tinkerer-friendly hardware, and I love high levels of user control; I just wish the ecosystem that prioritizes these things had a stronger emphasis on high-end hardware that offers high performance.


> Of course not GPL compliant but not a concern in China I believe.

I don't believe this. There are multiple cases where the GPL was enforced by Chinese courts.

Example: https://www.ifross.org/?q=node/1676


The posted case does not enforce the GPL.


And they're headquartered in Hong Kong.


"The GL.iNet OEM firmware is a fork of OpenWrt and thankfully is compatible with official OpenWrt sysupgrade images, so returning to OEM is done simply by flashing their sysupgrade image without keeping settings and vice versa."

With GL.inet the buyer can install their own OpenWRT images. The OEM OpenWRT fork is a means of installing the buyer's choice of OpenWRT image.

For Cudy, another Chinese OEM, OpenWRT ToH refers to this as "Intermediate Firmware". See, e.g., https://openwrt.org/toh/cudy/tr1200

An OEM OpenWRT fork ("intermediate firmware") is (pre)installed, allowing a buyer to overwrite it with an open source, GPL-compliant OpenWRT image of their choice downloaded from openwrt.org, or one compiled from source code downloaded from openwrt.org.

OpenWRT One is a Banana Pi board. Like GL.inet or Cudy, the Chinese OEM has their own system images.^1 Can a buyer reproduce them?

1. For example, https://docs.banana-pi.org/en/BPI-R4/BananaPi_BPI-R4#_system...


It's not really an issue, since OpenWrt has full support for this device. IIRC the support in OpenWrt 23.05 is pretty good; I have been running 24.10 on a Flint 2 that we use as an AP and I've had zero issues.

I'd argue that even though by default the Flint 2 has a nicer interface for beginners, vanilla OpenWrt is much better. E.g. their old OpenWrt 21.02 build with the proprietary Mediatek SDK does not support baby jumbo frames, which are used by a bunch of providers that still use PPPoE (to get better performance).


Yeah, you'd definitely want baby jumbo frames if you live in the UK. ISPs here almost universally use PPP.


Tailscale will fall back to its DERP servers, which are dumb "cloud" relays, if a direct connection can't be established.


It's my phone. It's my data. It's my choice to install the app. It's my choice to grant the permissions to all files. Because guess what, I'm using the app to sync all my files.

I really can't agree with Google in this particular case.


I couldn't agree more. Given how many frigging hoops I had to jump through to get my Obsidian-over-Syncthing setup to sync with my company iPhone, I nearly gave up.

I grew up when computers didn't babysit me or try to act like the good old GDR, knowing everything better than its citizens.

Nowadays, I feel more and more hindered by computers, not enabled. Computers used to be a production device (I could create things with them).

Phones are not computers - phones are just a "consume like we want you to" device.

The problem is, I want my phone to be a creation device. A device that allows me to create content, text, to do lists, shopping lists, ideas and store them. And(!) sync them using the tools I decide to use. And not force me to use tools I friggin hate, because they just don't get the job done.


I gave up. My phone now is just a communication and utility device, and thus I don’t feel the urge to upgrade until it can’t do those tasks. I went back to computers (and Linux) to be able to just use them as a computer.


Same. I wish there were an alternative (a practical pocket computer), but there really isn't. So I too gave up on fighting my phone, and have also completely stopped doing mobile development. I now treat my phone essentially as an untrusted, prepackaged walled garden with limited utility. :-/


> I wish there were an alternative (a practical pocket computer), but there really isn't.

There really is: https://puri.sm/products/librem-5.

And it's my daily driver.


Appreciate the feedback. For some reason I feel like I dismissed this as an option at one point, but that was long ago. I'll take another look!


I have used Sailfish OS as my daily driver since 2013.

https://en.m.wikipedia.org/wiki/Sailfish_OS


Thanks for the info. I haven't looked at this in a while. I'll take another look!


How did you get iPhone Syncthing + Obsidian working? I was under the impression that it was basically impossible to share Möbius Sync's directory with Obsidian.


There is a new app, Synctrain, which does this.


Got it working myself. I set up a share inside of Mobius Sync that reaches into the Obsidian folder (not the entire thing, just one vault). I think there was a popup saying it was unsupported, but I haven't had any problems yet.


This


The java.io.File API isn't removed from Android, nor inaccessible. You can absolutely still use it. Google Play has chosen not to accept it on their store unless you justify it (to their non-working bots). In this case, the dev chose to just drop the entire app because maintaining it just for F-Droid feels pointless.

There are very few permissions on Android that are system/privileged/preinstalled.


That's why you can install through F-Droid, right?


You don't have root. Google does.

I'm not saying that's a good thing, but it's not exactly a secret when you bought it.


That’s still not your product though. You only bought a licence.

Flash your favorite open firmware, enjoy, and let regular users who cannot do that avoid permission extortion. The world has needs and issues; it is not spinning around your skillset.


And yet you'll blame Android when some app steals a lot of data, just like always happens on this site.


Have you considered that it's a plural "you" that you're choosing to pit yourself against, with different people each weighing different complaints?

Almost by definition, the people who argue strongly for free use of their hardware and software are almost never the same people who argue strongly for safety and security restrictions. You seem to be frustrated by a contradiction or inconsistency that doesn't exist.

It's true that Google can't win the hearts of both sides, but they surely know that -- you don't need to get so personally frustrated on their behalf. It's just a company with a product in a market, and the market is never going to be uniform.


I've tried both in order to archive EOL websites, and I've had better luck with wget; it seems to recognize more links/resources and do a better job, so it was probably not a bad choice.


Conversely, httrack was the only tool that could archive the JS-heavy microsite my realtor made to sell our old house. The command-line interface is horrendous, but it does handle rewriting complex sites better than wget does.

