
Nice project! I’ll be highlighting it in the next edition of the https://Rust-Trends.com newsletter.

I used to work in this industry. One thing that might be interesting for people is that the metals do not actually withstand the temperatures directly. Instead, cooling vanes are needed throughout various parts of the engine. This is why shutting a gas turbine (aka jet engine) down from full power will destroy it. It is necessary to take the engine down to a lower power setting first and then continue to spin the engine (called motoring the engine) for quite a while even after it is turned off.

Another interesting thing is some engines cannot withstand certain RPM ranges as the compressor and power turbine can get into a catastrophic resonance. A good example is the T700 (used in the Blackhawk).


It "can" be done. Photons leaving the Earth a thousand years ago, bouncing off a mirror on some distant planet, would allow us to see two thousand years into the past.

Related and perhaps useful: I’ve seen this in multiple cloud offerings already, where the CPU scaling governor is set to some eco-friendly value, to the benefit of the cloud provider and with zero benefit to you, and with much reduced peak CPU performance.

To check, run `cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor` (there is one file per core). It should be `performance`.

If it’s not, set it on all cores with `echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor`. If your workload is CPU-hungry, this will help. It will revert on reboot, so you can make it stick with some cron/systemd or whichever.

Of course if you are the one paying for power or it’s your own hardware, make your own judgement for the scaling governor. But if it’s a rented bare metal server, you do want `performance`.
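One minimal sketch of making it stick is a oneshot systemd unit; the unit name and paths here are illustrative, and the glob needs a shell because systemd does not expand wildcards in ExecStart:

```shell
# Hypothetical unit to re-apply the performance governor at boot.
sudo tee /etc/systemd/system/cpu-performance.service <<'EOF' >/dev/null
[Unit]
Description=Set CPU scaling governor to performance

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor'

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl enable --now cpu-performance.service
```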


'agl wrote a blog post about it. There were two big problems, one in principle and one practical.

The practical: you can't reliably run DNSSEC everywhere Chrome runs. Networks get really fucky with any even slightly unusual DNS messages.

The principle: because you can't realistically ever declare a "flag day" and deprecate the X.509 WebPKI, you have to support both systems, so DANE doesn't collapse your trust anchors down to a smaller set; it actually adds to the number of things you have to trust.


Unfortunately, recyclability and lack of toxicity are contradictory; they cannot be satisfied simultaneously for this kind of material.

As long as the 2-dimensional polymeric sheets do not decompose, they will not be toxic, as they cannot enter a living cell (in the form of fine dust they could cause the same problems as any mineral dust, e.g. respiratory damage through purely mechanical action).

However if they do not decompose, they can be recycled only by burning.

If they can be decomposed into monomers by heat, light or chemicals, then the monomer can be recycled. However, in that case some spontaneous decomposition will also occur in old objects made of the 2-dimensional polymer, and the released monomer molecules would cause toxicity problems.

So only one of these two features can be chosen and optimized.


Ah yes, and "alias sudo='sudo rm -rf / &'".

If you wish, you can download some of the top 1M records from their S3 bucket:

curl http://s3.amazonaws.com/alexa-static/top-1m.csv.zip --output ~/Downloads/alexa.zip

Today it contains the top 630779 records.


If you are using ZFS, I strongly recommend using LZ4 or ZSTD compression with PostgreSQL. Performance is still awesome. On average I get a 2x compression ratio with LZ4 and 4x with ZSTD.

With this, you are compressing everything, not just columns. And ZFS has dynamic block sizes, which work really well together with compression. For example, an 8 kB PostgreSQL page may be stored as a 1 kB compressed block on disk.
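As a sketch, assuming a dataset named tank/pgdata (the name and settings are illustrative, not a tuned recommendation):

```shell
# Enable compression on the PostgreSQL dataset.
zfs set compression=lz4 tank/pgdata      # or compression=zstd for higher ratios
# Match the record size to PostgreSQL's 8 kB page size.
zfs set recordsize=8k tank/pgdata
# Check the achieved ratio after some data has been written.
zfs get compressratio tank/pgdata
```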


AKA Zen Mode in VSCode (F10)

Yeah, I have a custom rule in ublock origin to remove it. It's literally the only custom rule I have, but it happened to me so damn often that it ended up being worth the time to identify the element so I could permanently block it.

In case anyone ever needs it:

    google.com##div[id^="eob_"]

Commercial flight trackers encourage enthusiasts to feed them data, then take money from operators to hide that data. If you want everyone to have access to the data, consider feeding a network that doesn't censor or block anything, like ADS-B Exchange (https://www.adsbexchange.com/).

A project like Dictator Alert (https://dictatoralert.org/) uses ADS-B Exchange because the authoritarian regimes they're tracking can just pay a commercial site to hide their aircraft—they don't like being tracked.

My Advisory Circular bots (https://skycircl.es/bots/), which tweet in real time whenever they detect police, FBI, military, news or fire aircraft circling, and my "What's Overhead?" Siri shortcut (https://twitter.com/lemonodor/status/1238149529469202433) use ADS-B Exchange because a lot of the most interesting aircraft are the ones that are blocked on commercial trackers.


Not the author, but yes, this is because the thing between "let" and "=" is a pattern, the exact same thing used in match, and in other places. For example, function arguments!

  fn takes_person(p: Person) { /* ... */ }

  fn takes_person(Person { name, city }: Person) { /* ... */ }

But I wouldn't inherently say that's the reason; it's also because structs are nominally rather than structurally typed.

Reddit is a hundred times bigger. It's not just that we aren't in their league...our league is not in their league. So the comparison is a little embarrassing.

It's hard to count active users because you have to define them in order to count them, and we make a point of not tracking people that much. We can count accounts and unique IPs, and that's about it. But it's basically about 5M readers a month, give or take, as far as we can tell. It grows linearly, with large swings. If you step back 10 feet from the graphs and squint, it's basically a straight line for the last 10 years. We like it that way; we wouldn't want to go full Haskell and avoid success at all costs, but we don't want hockey-stick growth either. HN is not a startup!

It runs on one server. Actually the app server (written in Arc) runs on one core. But we have some caching in front of that for logged-out users.


AWS engineer here, I was lead for Route 53.

We generally use 60-second TTLs, and TTLs as low as 10 seconds are very common. There's a lot of myth out there about upstream DNS resolvers not honoring low TTLs, but we find that they are honored very reliably. We actually see faster convergence times with DNS failover than with BGP/IP Anycast. That's probably because DNS TTLs decrement concurrently on every resolver holding the record, while BGP advertisements have to propagate serially, network by network.

The way DNS failover works is that the health checks are integrated directly with the Route 53 name servers. In fact, every name server checks the latest health status every single time it gets a query. Those statuses are basically a bitset, being updated /all/ of the time. The system doesn't "care" or "know" how many health statuses change each time; it's not delta-based. That's made it very, very reliable over the years. We use it ourselves for everything.

Of course the downside of low TTLs is more queries, and we charge by the query unless you ALIAS to an ELB, S3, or CloudFront (then the cost of the queries is on us).
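You can watch the TTL countdown yourself with dig against a caching resolver (the domain here is just an example):

```shell
# The second column of the answer line is the remaining TTL on your resolver.
dig +noall +answer example.com A
sleep 5
# On a cached answer, the TTL is now roughly 5 seconds lower; once it hits
# zero, the resolver re-queries the authoritative name servers.
dig +noall +answer example.com A
```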


When they say "running a datacenter" they almost certainly mean "buying servers to put into rented colocation space".

Just about anyone who has significant network connectivity has a footprint in an Equinix datacenter. In the Bay Area you want to be in Equinix SV1 or SV5, at 11 and 9 Great Oaks, San Jose.

If you're there, you can order a cross connect to basically any telco you can imagine, and any other large company. You can also get on the Equinix exchange and connect to many more.

But, Equinix charges you a huge premium for this, typically 2 - 3x other providers for space and power. Also they charge about $300 per month per cross connect.

So your network backbone tends to have a POP here, and maybe you put some CDN nodes here, but you don't build out significant compute. It's too expensive.

On the cheaper but still fairly high-quality end you have companies like CoreSite, and I'm pretty sure AWS has an entire building leased out at the CoreSite Santa Clara campus for portions of us-west-1. (Pretty sure, because people are always cagey about this kind of thing.)

I also know that Oracle Cloud has been well known for taking lots of retail and wholesale datacenter space from the likes of CoreSite and Digital Realty Trust, because it was faster to get to market. This is compared to purpose-built datacenters, which is what the larger players typically use.

In the case of AWS, I know they generally do a leaseback, where they contract with another company who owns the building shell, and then AWS brings in all their own equipment.

But all these players are also going to have some footprint in various retail datacenters like Equinix and CoreSite for the connectivity, and some extra capacity.

Zoom is probably doing a mix of various colocation providers, and just getting the best deal / quality for the given local market they want to have a PoP in. Seems like they are also making Oracle Cloud part of that story.


Great question!

Joining the committee requires you to be a member of your country's national body group (in the US, that's INCITS) and attend at least some percentage of the official committee meetings, and that's about it. So membership is not difficult, but it can be expensive. Many committee members are sponsored by their employers for this reason, but there's no requirement that you represent a company.

I joined the committees because I have a personal desire to reduce the amount of time it takes developers to find the bugs in their code, and one great way to reduce that is to design features that make it harder to write the bugs in the first place, or that turn unbounded undefined behavior into something more manageable. Others join because they have specific features they want to see adopted or want to lend their domain expertise in some area to the committee.


I don't know how big Cloudflare's management layer is, but I'm assuming it's relatively small (maybe dozens of racks at most), hosted in some sort of datacenter (Equinix, Digital Realty, Coresite, etc) that provides the remote hands.

Maybe some piece of core network equipment.

A small colo may have a single pair of routers, switches, or firewalls at its edge. If one had failed for some reason, and the remote hands removed the wrong one, it is possible you could knock the entire colo offline.

There's a bunch of other possible components: Storage platforms, power, maybe something like an HSM storing secrets, or even just a key database server.

Their failover to their backup facility may be impaired by the fact that well, their management plane is down. They probably rely on their own services. Avoiding chicken-and-egg issues can require careful ahead-of-time planning.


It kind of ruined content too.

Why it ruined content? You are not the only one that is searching for the answer to that question. Keep reading to know why SEO ruined content.

Many people think that SEO ruined content, in this post, we are going to explain why SEO ruined content. When you finish reading this post, you will know why SEO ruined content.

In the last years we have observed a growth in the quantity of content created, unfortunately, as we are going to explain in a moment, it has been ruined by SEO.

Is SEO really the reason content was ruined?

Some people argue that SEO is not really the reason content was ruined, we will review all the reasons why SEO could be really ruining content.

Please, click "next" to know why SEO could be ruining content.


I really do not like the default UI in ip. You have to remember a lot of weird stuff to get useful information. The default listing is compressed, hard to read, and noisy. Then I saw someone who had these aliases:

    alias ipa 'ip -br -color a'
    alias ipl 'ip -br -color link'
Night and day! I try not to use the aliases because I'm often on servers without them, but the combination of the brief and color options, with the command to list addresses or links at the end, is one all sysadmins should memorize. It returns output that isn't terrible.
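For reference, the aliases above are in csh-style syntax; the bash/zsh equivalents would be:

```shell
# Brief, colored listings of addresses and links.
alias ipa='ip -br -color addr'
alias ipl='ip -br -color link'
```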

I had terrible tearing with my default Ubuntu 19.04 install last week, too. I'm using an Intel integrated GPU. The solution was to create the file /etc/X11/xorg.conf.d/20-intel.conf (and the directory, as it didn't already exist but does get checked by X11 if present) with the following content:

  Section "Device"
    Identifier "Intel Graphics"
    Driver "intel"

    Option "TearFree" "true"
  EndSection
No more screen tearing, in my case!

Left a comment on the IndieHackers page. Keeping a copy here for those who aren't reading the comments section. I have noticed this a lot in various websites I have helped with ad campaigns. Their biggest problem is their landing page. Just like this article uses a lot of jargon to explain simple concepts, their landing page does the same. For those of you wanting to know more about landing page optimization, just watch Isaac Rudansky's excellent videos on Udemy. One of the most important rules is the 5-second test. Show your landing page to your colleagues/friends/family, depending on your target audience. If they can't understand what your business proposition is in 5 seconds, you have failed landing page optimization. As simple as that.

The comment I posted on the IndieHackers page:

------------------

The landing page is too complex. Like what does "Full stack adaptive delivery" even mean? I am sure 90% of your paid visitors are just bouncing because that landing page tagline is alien to them. Dumb it down. Make it simple.

Surprisingly, the description in the Indiehackers page makes so much more sense than the one you put up: "File-system-as-a-service that does uploads, storage, and media processing for Web and mobile apps, so you can ship products faster and scale them painlessly"

If you told me that the first time, I would have understood your value proposition. Don't get too fancy with your taglines. People don't have time to figure out what you are saying. People don't like fancy terminology except for what is popular. There is too much jargon already. Don't complicate it further.

Instead of "Full stack adaptive delivery", just try "File-system-as-a-service". Instead of "Serve ultimate UX with better images on any website. One script to rule them all.", just have "Ship products faster with better images on any website". That's it. You will get 50%+ higher conversion rates with just this one change.


https://www.google.com/about/honestresults/

It's a little confusing to read now, so for context: at the time Google published this, it only put ads in the sidebar to the right of search results. This post was written to criticize the practice of putting ads atop search results, which competitors sometimes formatted almost indistinguishably from organic search results.

