Other people might point to more specific tells, but instead I'll reference https://zanlib.dev/blog/reliable-signals-of-honest-intent/, which says that you can tell mainly because of the subconscious uncanny valley effect, and then you start noticing the tells afterwards.
Here, there's a handful of specific phrases or patterns, but mostly it's just that the writing feels very AI-written (or at least AI-edited). It's all just slightly too perfect, like someone's trying to write the perfect LinkedIn post but is slightly too good at it? It's purely gut feeling, but I don't think that means it's wrong (although equally it doesn't mean it's proven beyond reasonable doubt either, so I'm not going to start any witch hunts about it).
I bought a second-hand Fairphone, and I'm very happy with it, except that my wife, a colleague of mine, and some friends of ours now also have Fairphones, so when one buzzes we all instinctively check our pockets because they all sound the same...
I also bought headphones from the same company, and while they're probably not the best for audio quality, it was great being able to repair them when the headband broke. Generally, I'm a very happy Fairphone customer.
> when one buzzes we all instinctively check our pockets because they all sound the same
Isn't that the same for every brand? I have a friend who worked in cybersecurity in a certain phone company and was getting very stressed whenever my phone, which happened to be from the same brand, was ringing :D
I guess one can change the default sound; isn't that the case with Fairphones?
I have a Samsung Moto, and it has the very default ringtone. Not really a tone, since it says "Hello, Moto", which is embarrassing, but I haven't made the effort to switch it. At any rate, while I'll be confused if someone right next to me gets a call on their Moto, in my experience they don't have to be very far away before I instinctively realize that the sound is too far away to be my phone, although it irritates me nonetheless.
And I've been seated eating with people who had the same phones and I realized no, it must be their phone (although I feel a strong urge to check), because my ears are able to determine direction of a sound.
I'm also old and keep getting told I'm going deaf, so my question is: are people really not able to tell it's not their phone, or are they just not thinking it through before checking?
Samsung Moto? Two different companies with very different phones. I'm surprised that such a mutant exists. Reads to me like "Car (with square wheels)".
Moto is the only big brand I ever consider for a phone, while Samsung has never been so much as a consideration. Moto has had a bit of freedom (though that's changing) - enough to tweak it into resembling a pure Android experience. Samsung is incorrigibly infested - and if they ever start giving phones to prisoners, they'll be Samsungs.
Just in case you wondered, and even if you didn't,
I admire ignorance of smartphones and consider it a virtue. I obtained my first in 2018 after years of resistance. But driving a semi and not being the best with maps and logistics, I finally capitulated.
And back then, although CyanogenMod was gone, they weren't too bad. 2019 changed a lot, with autonomous, respawning, immutable "services", and things have regressed severely since. Hence my visiting this post for Fairphone.
So take pride in your purity. It only gets worse the more you know.
It's less the sound, and more the buzz when it's on vibration. I've never found a way of changing that, unfortunately. It's probably true for other brands, but I've never really had a phone that other people have also used, whereas now I'm in a (very small) bubble that seems to be happily converging on Fairphones...
I think this is connected to the overlap and offset that are used later to account for complex or symmetrical letter shapes. If the author had just split the grid, those effects would have been harder to achieve.
The thing is that you can still have high-level abstractions without them needing to be as slow as React. React does a slow thing by default (rerendering every child component whenever state changes, so every component in the UI if top-level state is changing), and then requires careful optimisation to correct for that decision.
But you can also just... update the right DOM element directly, whenever a state changes that would cause it to be updated. You don't need to create mountains of VDOM only to throw it away, nor do you need to rerender entire components.
This is how SolidJS, Svelte, and more recently Vue work. They use signals and effects to track which state is used in which parts of the application, and update only the necessary parts of the DOM. The result is significantly more performant, especially for deeply nested component trees, because you're just doing way less work in total. But the kicker is that these frameworks aren't any less high-level or easy-to-use. SolidJS looks basically the same as React, just with some of the intermediate computations wrapped in functions. Vue is one of the most popular frameworks around. And yet all three perform at a similar level to if you'd built the application using optimal vanilla JavaScript.
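To make the signals idea concrete, here's a minimal sketch of how a signal/effect system can track reads and then update only the dependents. This is illustrative only, not the actual SolidJS/Svelte/Vue implementation (the names `createSignal` and `createEffect` echo SolidJS's API, but the internals here are heavily simplified):

```typescript
// Minimal signal/effect sketch. Illustrative, not any framework's real internals.
type Effect = () => void;
let activeEffect: Effect | null = null;

function createSignal<T>(value: T): [() => T, (v: T) => void] {
  const subscribers = new Set<Effect>();
  const read = () => {
    // Whoever reads the signal inside an effect gets subscribed to it.
    if (activeEffect) subscribers.add(activeEffect);
    return value;
  };
  const write = (v: T) => {
    value = v;
    // Rerun only the effects that actually depend on this signal.
    subscribers.forEach((fn) => fn());
  };
  return [read, write];
}

function createEffect(fn: Effect) {
  activeEffect = fn;
  fn(); // the first run registers this effect's dependencies
  activeEffect = null;
}

// Usage: only this one effect reruns when `count` changes.
// No component rerender, no VDOM diff.
const [count, setCount] = createSignal(0);
createEffect(() => {
  console.log(`count is ${count()}`);
});
setCount(1); // logs "count is 1"
```

In a real framework the effect would write to a specific DOM node (e.g. a text node's `data`), which is why changing one piece of state touches exactly one part of the page.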
Note that it's not clear that any of the JustHTML ports were actually ports per se, as in the end they all ended up with very different implementations. Instead, it might just be that an LLM generated roughly the same library several different times.
I had a course on natural language processing with Prolog, and the first third of the exam was just evaluating Prolog expressions and figuring out what syntax errors had been made. This of course took so long that everyone spent at least two thirds of the time on that one portion...
It was a weird course, though. Because we spent so long learning Prolog, the second half of the course was really rushed - lots of learning about English grammar and syntax trees, and how you could model them in different ways, and then the last lecture was just "oh, by the way, here are all the ways this doesn't work and is a complete dead end - your exam is on the 14th".
IIRC there was a part two to the course, but I think it clashed with something I was more interested in so I never took it. It was cool to learn Prolog, but I wish it had been a whole course on just Prolog and actual present-day use-cases, as opposed to this weird half-and-half course about an approach to NLP that even back then wasn't being pursued with much enthusiasm.
According to [1], the most important factor for the power consumption of code is how long the code takes to run. Code that spreads over multiple cores is generally more power efficient than code that runs sequentially, because the power consumption of multiple cores grows less than linearly (that is, it requires less than twice as much power to run two cores as it does one core).
Therefore, if parallelising code reduces the runtime of that code, it is almost always more energy efficient to do so. Obviously, if this is important in a particular context (e.g. embedded devices), it's probably worth measuring it in that context, but I suspect it's true more often than not.
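As a toy back-of-the-envelope illustration (all numbers are made-up assumptions, not measurements from [1]): if a second core adds less than 100% power but roughly halves the runtime, the energy (power × time) comes out lower:

```typescript
// Toy energy model: energy = power × time. All numbers are illustrative
// assumptions, not measurements.
const singleCorePower = 1.0;  // normalised power of one busy core
const twoCorePower = 1.7;     // assumption: the second core adds only 70%

const sequentialTime = 10;    // seconds on one core
const parallelTime = 5.5;     // seconds on two cores (imperfect speedup)

const sequentialEnergy = singleCorePower * sequentialTime; // 10 units
const parallelEnergy = twoCorePower * parallelTime;        // 9.35 units

console.log(parallelEnergy < sequentialEnergy); // true: parallel wins here
```

The crossover point depends on how sub-linear the power scaling is versus how sub-linear the speedup is, which is exactly why measuring in context matters.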
>Therefore if parallelising code reduces the runtime of that code, it is almost always more energy efficient to do so
Only if it leads to better utilisation. But in the scenario that the parent comment suggests, it does not lead to better utilisation as all cores are constantly busy processing requests.
Throughput, as well as CPU time across cores, remains largely the same regardless of whether or not you parallelise individual programs/requests.
That's true, which is why I added the caveat that this is only true if parallelising reduces the overall runtime - if you can get in more requests per second through parallelisation. And the flip side of that is that if you're able to perfectly utilise all cores then you're already running everything in parallel.
That said, I suspect it's a rare case where you really do have perfect core utilisation.
It is a lot more complex than moment, but only because there's a lot of inherent complexity to dates and times that moment just doesn't deal with. So you need to be explicit about whether you're dealing with dates, times, or datetime objects, whether or not the object you're working with has a timezone, etc. Where moment is generally designed to have a convenient API, Temporal is designed to have a correct API.
In theory yes, but in practice I don't think you could build something like Servo very easily like that. Servo is a browser, but it's also purposefully designed to be a browser-developer's toolkit. It is very modular, and lots of pieces (like the aforementioned CSS selector library) are broken out into separate packages that anyone can then use in other projects. And Servo isn't alone in this.
However, when you install Servo, you just install a single artefact. You don't need to juggle different versions of these different packages to make sure they're all compatible with each other, because the Servo team have already done that and compiled the result as a single static binary.
This creates a lot of flexibility. If the Servo maintainers think they need to make a breaking change somewhere, they can just do that without breaking things for other people. They depend internally on the newer version, but other projects can still continue using the older version, and end-users and distros don't need to worry about how best to package the two incompatible versions and how to make sure that the right ones are installed, because it's all statically built.
And it's like this all the way down. The regex crate is a fairly standard package in the ecosystem for working with regexes, and most people will just depend on it directly if they need that functionality. But again, it's not just a regex library, but a toolkit made up of the parts needed to build a regex library, and if you only need some of those parts (maybe fast substring matching, or a regex parser without the implementation), then those are available. They're all maintained by the same person, but split up in a way that makes the package very flexible for others to take exactly what they need.
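For example, the regex ecosystem's sub-crates (`regex-syntax`, `regex-automata`, `memchr`, `aho-corasick`) can each be depended on directly; something like this in a `Cargo.toml` (version numbers here are illustrative, check crates.io for current ones):

```toml
[dependencies]
# The full regex engine:
regex = "1"

# Or just the parts you need:
memchr = "2"          # fast substring searching
aho-corasick = "1"    # multi-pattern searching
regex-syntax = "0.8"  # the regex parser/AST, with no matching engine
```

Servo's `selectors` crate works the same way: it's published as its own package even though it was built for Servo's internal use.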
In theory, all this is possible with traditional distro packages, but in practice, you almost never actually see this level of modularity because of all the complexity it brings. With Rust, an application can easily lock its dependencies, and only upgrade on its own time when needed (or when security updates are needed). But with the traditional model, the developers of an application can't really rely on the exact versions of dependencies being installed - instead, they need to trust that the distro maintainers have put together compatible versions of everything, and that the result works. And when something goes wrong, the developers also need to figure out which versions exactly were involved, and whether the problem exists only with a certain combination of dependencies, or is a general application problem.
All this means that it's unlikely that Servo would exist in its current form if it were packaged and distributed under the traditional package manager system, because that would create so much more work for everyone involved.
But isn't the problem with X- headers that if they ever get standardised, they necessarily create this smuggling issue? Whereas if you start with an unprefixed header and standardise it under the same name, you avoid this issue.
You could also solve the problem by standardising the header with the X- prefix, but this is more confusing to users and violates the idea that X- always means "not standardised", at which point the prefix is useless anyway.