These two solutions wouldn't work for me. My own phone is covered, since I use a custom ROM, but I like being able to help people install cool stuff that's not necessarily on the Play Store, organically, without planning ahead.
Reproducible builds ensure that building the same source code always yields the same binaries. Nothing like the current date, for instance, sneaks in and produces a different build.
This allows independent parties to check that the provided binaries don't contain anything malicious, for instance. Ultimately, it lets you download binaries instead of rebuilding everything yourself locally, as long as those binaries have been independently reproduced.
The provided binaries may still contain malicious code, but reproducibility guarantees that nothing was inserted between the published source code and the build you are given. So if your binaries contain malicious code, you can be sure that every other user of that software version is affected too.
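As a concrete illustration of that check, here is a minimal Python sketch that compares the digests of a published artifact and an independent rebuild. The file names are made up; in practice projects usually publish the expected hashes (e.g. in signed release metadata) rather than shipping both binaries.

    import hashlib

    def sha256_of(path: str) -> str:
        """Stream the file so large artifacts don't need to fit in memory."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Hypothetical artifacts: one from the official pipeline, one rebuilt independently.
    official = sha256_of("dist/app-1.2.3-official.apk")
    rebuilt = sha256_of("dist/app-1.2.3-rebuilt.apk")

    if official == rebuilt:
        print("Reproduced: the published binary matches an independent rebuild.")
    else:
        print("Mismatch: either the build is not reproducible, or something was inserted.")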
Does anyone practice a dual build pipeline? E.g. one build by your devops team and another by your security team, comparing the binaries' hashes afterwards, to verify everything is reproducible.
I work for one of the several European companies whose open source software has been chosen as a component of openDesk.
openDesk is solid, legit and serious.
Open source is a requirement. As such, the money doesn't go to a startup building proprietary software that gets bought a few years later by a big tech company, with all the investment lost. They audit and check that the licenses are open source and that the dependencies have compatible licenses.
It's publicly funded, by Germany* (for their needs, but it will grow larger than them). Their strategy is to give money to established European open source software companies so they improve their software in areas that matter to them, including integration features (user management, for instance, or sharing files and events with other software, among many things) as well as accessibility. They take all these pieces of software and build a coherent (common theme and look & feel), turn-key, feature-rich suite. This strategic decision has its drawbacks, but it makes it possible to get something working fast with what exists today.
I'm not sure the communication and the business strategy are all figured out / polished yet, but with high-profile institutions adopting it, that will come. Each of the companies involved wants this to succeed too.
I think this is huge. I'm quite enthusiastic. The software might not be perfect, but with the potential momentum this thing has, it could improve fast, and so could each piece of open source software that is part of it along the way.
From what I know of Carson from his writing and presentations, he probably worded it that way on purpose knowing he'd eventually do a new version, and he didn't want to miss an opportunity to troll everyone a bit.
Which, if you take the base64-encoded string, strip off the control characters, and pad it out to a valid base64 string, you get
"eyJhZ2VuZGEiOnsiaWQiOm51bGwsImNlbnRlciI6Wy0xMTUuOTI1LDM2LjAwNl0sImxvY2F0aW9uIjpudWxsLCJ6b29tIjo2LjM1MzMzMzMzMzMzMzMzMzV9LCJhbmltYXRpbmciOmZhbHNlLCJiYXNlIjoic3RhbmRhcmQiLCJhcnRjYyI6ZmFsc2UsImNvdW50eSI6ZmFsc2UsImN3YSI6ZmFsc2UsInJmYyI6ZmFsc2UsInN0YXRlIjpmYWxzZSwibWVudSI6dHJ1ZSwic2hvcnRGdXNlZE9ubHkiOmZhbHNlLCJvcGFjaXR5Ijp7ImFsZXJ0cyI6MC44LCJsb2NhbCI6MC42LCJsb2NhbFN0YXRpb25zIjowLjgsIm5hdGlvbmFsIjowLjZ9fQ==", which decodes into:
I only know this because I've spent a ton of time working with the NWS data - I'm founding a company that's working on bringing live local weather news to every community that needs it - https://www.lwnn.news/
Nesting, mostly (having used that trick a lot, though I usually sign the record if it originates from the server).
I've almost entirely moved to Rust/WASM for browser logic, and I just use the serde crate to produce a compact representation of the record, but I've seen protobufs used as well.
Otherwise you end up with parsing monsters like ?actions[3].replay__timestamp[0]=0.444 vs {"actions": [,,,{"replay":{"timestamp":[0.444, 0.888]}}]}
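For what it's worth, here's a rough Python sketch of the pattern being described: serialize the nested record compactly, base64url it for the URL, and optionally sign it when it originates from the server. The key, record, parameter name, and URL are all made up for illustration, and signature verification is left out for brevity.

    import base64, hashlib, hmac, json

    SECRET = b"server-side-secret"  # hypothetical; only needed for server-issued records

    def pack(record: dict, sign: bool = False) -> str:
        """Compact JSON -> URL-safe base64, optionally with an HMAC tag appended."""
        raw = json.dumps(record, separators=(",", ":")).encode()
        token = base64.urlsafe_b64encode(raw).rstrip(b"=")
        if sign:
            tag = hmac.new(SECRET, token, hashlib.sha256).hexdigest()[:16]
            token += b"." + tag.encode()
        return token.decode()

    def unpack(token: str) -> dict:
        payload = token.split(".", 1)[0]          # signature check omitted in this sketch
        padded = payload + "=" * (-len(payload) % 4)
        return json.loads(base64.urlsafe_b64decode(padded))

    state = {"actions": [None, None, None, {"replay": {"timestamp": [0.444, 0.888]}}]}
    url = "https://example.test/app#s=" + pack(state, sign=True)
    print(url)
    print(unpack(url.split("#s=", 1)[1]))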
Sorry, but this is legitimately a terrible way to encode this data. The number 0.8 is encoded as base64-encoded ASCII decimals; the 1/0 bits similarly. URLs should not be long, for many reasons: sharing them, and preventing them from being cut off.
Links with lots of data in them are really annoying to share. I see the value in storing some state there, but I don’t think there is room for much of it.
What makes them annoying to share? I bet it's more an issue with the UX of whatever app or website you're sharing the link in. Take that stackoverflow link in the comment you're replying to, for example: you can see the domain and most of the path, but HN elides link text after a certain length because it's superfluous.
XSLT is, to my knowledge, the only client-side technology that lets you include chunks of HTML without using JavaScript and without server-side technology.
XSLT lets you build completely static websites without having to use copy-paste or a static site generator to handle the common stuff like menus.
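A toy sketch of what that looks like, using Python and lxml only so the transform can be run offline; in the browser the same stylesheet would be attached to each static .xml page with an <?xml-stylesheet?> processing instruction, so the shared menu gets injected with no JavaScript and no build step. The page content and menu entries are made up.

    from lxml import etree  # pip install lxml

    # A minimal "page" as it would be served as a static XML file.
    page = etree.XML("<page><title>Home</title><body><p>Hello!</p></body></page>")

    # The shared stylesheet: every page gets the same menu wrapped around its content.
    stylesheet = etree.XML("""
    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:template match="/page">
        <html>
          <head><title><xsl:value-of select="title"/></title></head>
          <body>
            <ul class="menu">
              <li><a href="index.xml">Home</a></li>
              <li><a href="about.xml">About</a></li>
            </ul>
            <xsl:copy-of select="body/node()"/>
          </body>
        </html>
      </xsl:template>
    </xsl:stylesheet>
    """)

    transform = etree.XSLT(stylesheet)
    print(str(transform(page)))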
I did that. You can write .rst, then transform it into XML with 'rst2xml' and then generate both HTML and PDF (using XSL-FO). (I myself also did a little literate programming this way: I added a special reStructuredText directive to mark code snippets, then extracted and joined them together into files.)
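If anyone wants to try the first step of that pipeline from code rather than the command line, the library equivalent of the rst2xml tool is roughly this (docutils is the only dependency; the snippet content is just an example):

    from docutils.core import publish_string  # pip install docutils

    rst_source = """
    My page
    =======

    Some *reStructuredText* content.
    """

    # Same output as rst2xml: Docutils XML, ready to be fed to XSLT / XSL-FO.
    xml_bytes = publish_string(source=rst_source, writer_name="xml")
    print(xml_bytes.decode("utf-8"))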
Lies in user agent strings were there for bypassing bugs, poor workarounds, and assumptions that became wrong; they are nothing like what we are talking about.
A server returning HTML for Chrome but not cURL seems like a bug, no?
This is why there are so many libraries for making requests that look like they came from a browser, to work around buggy servers or server operators with wrong assumptions.
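For concreteness, this is the whole trick those libraries automate, sketched here with Python's requests; the URL is a placeholder and the UA string is just a typical browser-style example, not anything official:

    import requests  # pip install requests

    URL = "https://example.com/"  # placeholder for any UA-sniffing site

    BROWSER_UA = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                  "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0 Safari/537.36")

    as_script = requests.get(URL)  # default UA looks like "python-requests/2.x"
    as_browser = requests.get(URL, headers={"User-Agent": BROWSER_UA})

    # A server that branches on the UA may answer these two requests very differently.
    for name, resp in [("script", as_script), ("browser", as_browser)]:
        print(name, resp.status_code, resp.headers.get("Content-Type"))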
> A server returning HTML for Chrome but not cURL seems like a bug, no?
tell me you've never heard of https://wttr.in/ without telling me. :P
It would absolutely be a bug iff this site returned html to curl.
> This is why there are so many libraries to make requests that look like they came from browser, to work around buggy servers or server operators with wrong assumptions.
This is a shallow take; the best counterexample is how Googlebot has no problem identifying itself both in and outside of the user agent. Do note that user agent packing is distinctly different from a fake user agent selected randomly from a list of the most common ones.
The existence of many libraries with the intent to help conceal the truth about a request doesn't feel like proof that's what everyone should be doing. It feels more like proof that most people only want to serve traffic to browsers and real users. And it's the bots and scripts that are the fuckups.
Googlebot has no problem identifying itself because Google knows that you want it to index your site if you want visitors. It doesn't identify itself to give you the option to block it. It identifies itself so you don't.
I care much less about being indexed by Google than you might think.
Googlebot doesn't get blocked from my server primarily because it's a *very* well-behaved bot. It sends a lot of requests, but it's very kind, and has never acted in a way that could overload my server. It respects robots.txt, and identifies itself multiple times.
Googlebot doesn't get blocked because it's a well-behaved bot that eagerly follows the rules. I wouldn't underestimate how far that goes towards the reason it doesn't get blocked, much more than the power it gains from being Google Search.
Yes, the client wanted the server to deliver content it had intended for a different client, regardless of what the service operator wanted, so it lied using its user agent. Exact same thing we are talking about. The difference is that people don't want companies to profit off of their content. That's fair. In this case, they should maybe consider some form of real authentication, or if the bot is abusive, some kind of rate limiting control.
Add "assumptions that became wrong" to "intended" and the perspective radically changes, to the point that omitting this part from my comment changes everything.
I would even add:
> the client wanted the server to deliver content it had intended for a different client
In most cases, the webmaster intended their work to look good, not really to send different content to different clients. That latter part is a technical means, a workaround. The intent of bringing the OK version to the end user was respected… even better with the user agent lies!
> The difference is that people don't want companies to profit off of their content.
Indeed¹, and they also don't want terrible bots bringing down their servers.
1: well, my open source work explicitly allows people to profit off of it - as long as the license is respected (attribution, copyleft, etc)
> Yes, the client wanted the server to deliver content it had intended for a different client, regardless of what the service operator wanted, so it lied using its user agent.
I would actually argue it's not nearly the same type of misconfiguration. The reason scripts that have never been a browser omit their real identity is to evade bot detection. The reason browsers pack their UA with so much legacy data is misconfigured servers: the server owner wants to send data to users and their browsers, but through incompetence they've made a mistake. Browsers adapted by including extra strings in the UA to account for the expectations of incorrectly configured servers. Extra strings being the critical part; Googlebot's UA is an example of this being done correctly.
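To make the distinction concrete, compare a packed-but-honest UA with a purely fake one. The strings below are approximations from memory (Googlebot's real UA carries a "compatible; Googlebot/2.1; +http://www.google.com/bot.html" token alongside the browser-compatibility noise), so treat the exact version numbers as placeholders.

    # Packed: legacy browser tokens kept for compatibility, real identity still declared.
    PACKED_UA = ("Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; "
                 "Googlebot/2.1; +http://www.google.com/bot.html) Chrome/124.0 Safari/537.36")

    # Fake: a random "most common browser" string with the real identity omitted entirely.
    FAKE_UA = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
               "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0 Safari/537.36")

    def declared_bot(ua: str) -> bool:
        # Server operators can still route or rate-limit on the declared identity.
        return "googlebot" in ua.lower()

    print(declared_bot(PACKED_UA))  # True  -> packed, but honest
    print(declared_bot(FAKE_UA))    # False -> indistinguishable from a real browser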