
GMC Geiger counters have a USB port, which lets you upload real-time data: https://www.gmcmap.com/

Yes, it's very good fun just exploring the embeddings! It's all wrapped by the geotessera Python library, so with uv and GDAL installed, just try this for your favourite region to get a false-colour map of the 128-dimensional embeddings:

  # for cambridge
  # https://github.com/ucam-eo/geotessera/blob/main/example/CB.geojson
  curl -OL https://raw.githubusercontent.com/ucam-eo/geotessera/refs/heads/main/example/CB.geojson
  # download the embeddings as geotiffs
  uvx geotessera download --region-file CB.geojson -o cb2
  # do a false colour PCA down to 3 dimensions from 128
  uvx geotessera visualize cb2 cb2.tif
  # project onto webmercator and visualise using leafletjs over openstreetmap
  uvx geotessera webmap cb2.tif --output cb2-map --serve
Because the embeddings are precomputed, the library just has to download the tiles from our server. More at: https://anil.recoil.org/notes/geotessera-python

Downstream classifiers are really fast to train (seconds for small regions). You can try out a notebook in VSCode to mess around with it graphically using https://github.com/ucam-eo/tessera-interactive-map
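
To give a sense of how small that training step is, here is a minimal sketch of a downstream classifier in Python; the random arrays stand in for real embedding pixels and labels (actual loading goes through geotessera, not shown here):

  import numpy as np
  from sklearn.linear_model import LogisticRegression

  # Pretend we flattened a tile of embeddings to (n_pixels, 128) with labels.
  X = np.random.rand(10_000, 128)            # 128-d embedding per pixel
  y = np.random.randint(0, 2, size=10_000)   # e.g. water / not-water labels

  clf = LogisticRegression(max_iter=1000).fit(X, y)  # fits in seconds
  print(f"train accuracy: {clf.score(X, y):.2f}")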

The berries were a bit sour, summer is sadly over here!


If you need to bypass censorship, you'll need a tool specifically designed for anti-censorship, rather than one repurposed for the job.

Since China has the most advanced network censorship, the Chinese have also invented the most advanced anti-censorship tools.

The first generation is shadowsocks. It basically encrypts the traffic from the beginning without any handshakes, so DPI cannot find out its nature. This is very simple and fast and should suffice in most places.
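
As a toy illustration of the "encrypted from the first byte" idea, here is a drastically simplified Python sketch; this is not real shadowsocks framing or a working client, and in practice the key is derived from the shared password rather than generated:

  import os
  from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

  key = ChaCha20Poly1305.generate_key()  # stand-in for password-derived key

  def seal(plaintext: bytes) -> bytes:
      # No plaintext handshake: everything on the wire is random-looking bytes.
      nonce = os.urandom(12)
      return nonce + ChaCha20Poly1305(key).encrypt(nonce, plaintext, None)

  def unseal(blob: bytes) -> bytes:
      return ChaCha20Poly1305(key).decrypt(blob[:12], blob[12:], None)

  wire = seal(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")
  assert unseal(wire).startswith(b"GET")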

The second generation is the Trojan protocol. The lack of a handshake in shadowsocks is itself a distinguishing feature that may alert the censor, and the censor can decide to block shadowsocks traffic based on suspicion alone. Trojan instead tries to blend into the vast amount of HTTPS traffic on the Internet by pretending to be a normal web server protected by HTTPS.

After Trojan, a plethora of protocols based on TLS camouflage has been invented.

1. Add padding to avoid the TLS-in-TLS traffic characteristics in the original Trojan protocol. Protocols: XTLS-VLESS-VISION.

2. Use QUIC instead of TCP+TLS for better performance (very visible if your latency to your tunnel server is high). Protocols: Hysteria2 and TUIC.

3. Multiplex multiple proxy sessions in one TCP connection. Protocols: h2mux, smux, yamux.

4. Steal other websites' certificates. Protocols: ShadowTLS, ShadowQUIC, XTLS-REALITY.

Oh, and there is masking UDP traffic as ICMP or TCP traffic to bypass ISPs' QoS throttling if you are proxying traffic over QUIC. Example: phantun.


I have a homelab which is a Zimaboard, a dumb Netgear switch, and six mini-PCs (5560U/16GB/500GB).

The zimaboard runs pfsense & an nginx reverse proxy, then all six of the mini-pcs run proxmox. 4 mini-pcs run k8s clusters (talos) and the other two run home services and selected one-offs (home-assistant, plex, bookstack, build-tools, gitea, origin servers for a subset of projects).

It was a lot easier to set up than I had expected, but it was still a massive PITA. I got what I wanted out of it work-wise, and it's a nice little novelty.

I've been thinking about ditching most of it for a while; I like the idea in the article about breaking it up - move one under the TV, one into the office, one under the stairs, and the remaining 3 + zimaboard I'm tempted to sell. I'd keep running proxmox on them, but I wouldn't link them up. The key thing that needs to happen for this to make sense is using something like cloudflare to route domains.

The part I never sorted properly was storage. It has 3TB of storage, but getting that storage into k8s for proper dynamic allocation without giving random nodes CPU perf issues was a too-long-for-one-session task, which meant it never got finished. I was tempted to add a NAS, but most NASes are horrid.


You can get 90% of the benefit of these longevity clinics for 10% of the cost.

First, advanced blood testing can cost <$200. Get ApoB and Lp(a) for heart health. hs-CRP for inflammation, A1c for diabetes, eGFR for kidney health, etc. https://www.empirical.health/product/comprehensive-health-pa...

Then, determine your nutrition goals based on your blood test results. For example, if your ApoB / LDL cholesterol is high, focus on getting more fiber and less saturated fat. If blood pressure is high, focus on potassium and sodium.

Exercise & sleep - use an Apple Watch or similar to track VO2Max and sleep stages.

MRI - I'd probably skip the MRI for cancer screening. While I think this will be the future, the evidence base is just not strong enough today to know what to do with the results. You can do a FIT-based colon cancer screen at home for <$10 (colon cancer is affecting people at younger and younger ages). Mammography and cervical cancer testing in a regular doctor appointment.

CAC scan (assessing calcium buildup in the arteries) - do if ApoB is high. You can book these for $200.


This was solved a hundred years ago.

It's the same problem factories have: they produce a lot of parts, and it's very expensive to put a full operator or more on a machine to do 100% part inspection. And the machines aren't perfect, so we can't just trust that they work.

So starting in the 1920s, Walter Shewhart and W. Edwards Deming came up with Statistical Process Control. We accept the quality of the product based on the variance we see in samples and how they measure against upper and lower control limits.

Based on that, we can estimate a "good parts rate" (which later got used in ideas like Six Sigma to describe the probability of bad parts being passed).

The software industry was built on determinism, but now software engineers will need to learn the statistical methods created by engineers who have forever lived in the stochastic world of making physical products.
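
A minimal sketch of that idea in Python, with made-up measurements; textbook X-bar charts derive their limits from subgroup ranges, but 3-sigma limits on the sample means show the mechanism:

  import statistics

  samples = [
      [9.98, 10.02, 10.01], [10.00, 9.97, 10.03], [10.05, 9.99, 10.00],
      [9.96, 10.01, 10.02], [10.00, 10.00, 9.98],
  ]
  xbars = [statistics.mean(s) for s in samples]
  center = statistics.mean(xbars)
  sigma = statistics.stdev(xbars)

  ucl, lcl = center + 3 * sigma, center - 3 * sigma  # upper/lower control limits

  for i, x in enumerate(xbars):
      status = "ok" if lcl <= x <= ucl else "OUT OF CONTROL"
      print(f"sample {i}: mean={x:.3f} {status}")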


Ensuring the yuv420p pixel format is the really important addition of this post. If you don't do that, then depending on the user input you will end up with an mp4 that won't play.

AAC is more standard for mp4 video than Opus, although I agree Opus is the superior codec. Whether the benefits of Opus outweigh the downsides of it being non-standard in your use case is not mine to decide, but if you are looking to produce something similar to most other web video out there, I'd go with AAC. The saved bandwidth is probably minuscule in comparison to what you could save on the video side.
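
Putting both points together, a hedged sketch of the kind of invocation that produces a widely playable mp4 (filenames are placeholders; driven from Python only to keep one language across these snippets):

  import subprocess

  subprocess.run([
      "ffmpeg", "-i", "input.mov",
      "-c:v", "libx264", "-pix_fmt", "yuv420p",  # 4:2:0 so common players cope
      "-c:a", "aac", "-b:a", "128k",             # AAC is the mp4 default
      "-movflags", "+faststart",                 # moov atom up front for the web
      "output.mp4",
  ], check=True)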


You can use IPinfo's IP map (https://ipinfo.io/tools/map) or IP summary tool (https://ipinfo.io/tools/summarize-ips).

Both of these services support sending IP addresses via an API endpoint and can handle up to 500k IP addresses. You can also share the report via URL.


The root causes are usually among:

Security teams are often staffed by people who have no operational experience, and do not understand the consequences of what they are recommending or even mandating. Often those staff are blindly following hardening guides or asking for every configuration switch to be flipped to "most secure" setting without having a good understanding of the threat model for the workload and without taking into account the tradeoffs between utility and operational cost. The level of advice can be on par with ChatGPT or worse, but it is taken more seriously due to the advice-giver's job title.

Security teams often have no "skin in the game". There are no real disincentives to stop them from asking for crazy or very expensive things and imposing high costs on other teams. In fact they are incentivised to do that very thing, because the only thing that covers your butt more than recommending everything possible is recommending everything possible PLUS some things that can't be done with the time & budget available, leaving them able to say "we see you had $security_problem, well, we recommended $impossible_thing but you didn't do it" (e.g. say, $500k of DLP solution [with its own operational risks!] for a workload that only makes $1M a year). To be fair I've seen some good & practical security teams, but once you get a bad actor / games player at management level that behaviour can become very sticky.

I have seen variants of this nearly everywhere I've worked; it seems very hard to get incentives aligned between the do-ers and the secure-ers.

The most practical workaround I've seen is to make sure there is a reasonable balance of political power between the various parties.

"Reasonable" can be hard to establish but is context dependent. You would expect "Security" to have more power in an F500 because the brand value, financial and legal exposure are high and individual dev teams aren't necessarily across or exposed to the full consequences of the damage they can cause.

In a startup you would expect "Security" to have much less power because the consequences of not shipping / misallocating effort are almost immediately existential.


Sadly I can't try this because I'm on Windows or Linux.

I was testing apps like this, if anyone is interested:

Best / Easy to use:

- https://lmstudio.ai

- https://msty.app

- https://jan.ai

More complex / Unpolished UI:

- https://gpt4all.io

- https://pinokio.computer

- https://www.nvidia.com/en-us/ai-on-rtx/chat-with-rtx-generat...

- https://github.com/LostRuins/koboldcpp

Misc:

- https://faraday.dev (AI Characters)

No UI / Command line (not for me):

- https://ollama.com

- https://privategpt.dev

- https://serge.chat

- https://github.com/Mozilla-Ocho/llamafile

Pending to check:

- https://recurse.chat

Feel free to recommend more!


I solved this by using "Push-Button Switches": https://katalog.gira.de/en/datenblatt.html?ean=4010337126034...

They look like a push-button and are therefore always "aligned", even though the electrical contact alternates between open and closed.


Why not use a combination of open-source and OpenAI models? GPT-3.5 is already beaten by Mixtral and Mistral-Medium. The first you can host for free, and the second has a darn cheap API while getting really close to GPT-4 performance.

Watching the way they handled this recall so poorly got me to do whatever it took to reduce my snoring and mild sleep apnea down to almost nothing (mouth/throat/tongue exercises did the most, followed by head positioning). The stories of people who relied on their CPAP caught up in it were heartbreaking and nerve-wracking.

Edit: I used a combination of the Snore Gym app and Vik Veer's videos on YouTube. After a couple of months my snoring became barely audible, but it doesn't always work for everyone. I was tilting my head down at night, which blocked off my throat somewhat, so I lifted my head up higher with a bigger pillow.


SentinelHub EO Browser is, as the name suggests, a browser-based explorer of Sentinel imagery.

You can even download subsets of the data for free (requires login though). It's one of the simplest ways to get started exploring Sentinel imagery IMHO

https://apps.sentinel-hub.com/eo-browser/


I used ChatGPT to decode proprietary binary files of some industrial machinery. It was amazing how it can decipher shit and find patterns. It first looked for ASCII characters, then byte sequences acting as delimiters, then it started looking at which bytes could be the length, which 4-byte runs could be floating-point coordinates, and which endianness was more logical for coordinates, etc. Crazy stuff.

I'm not a ham operator, though I debated it a few years ago; I just don't even know where to start, and have zero equipment. I wonder if a better email would be one that matches your call sign. I assume you have to announce it when communicating with new people? Or am I making a bad assumption? Even so, that would maybe make it much easier for someone to figure out how to reach you via email.

OpenWRT is great for taking bargain routers on eBay (which can be had for as little as £5 and yet have fast CPUs and support modern WiFi radios) and making them into proper homelab gateways with VPN, VLANs, DNS proxies etc. I would often do this for friends who wanted VPN access to their home network but didn't want to fork out for a router that would offer this out of the box. I would find it amusing when an ISP's stock firmware would turn out to just be an OpenWRT skin.

I noticed that virtually all LED drivers' resistors are two vastly different values in parallel, such that if you snip off the correct one, you are left with slightly lower light output that consumes half the power and lasts many times longer.

It's like a secret cheat code for people in the know. It's almost a conspiracy.

Just cut off the higher-value resistor with a pair of snips.


Not sure I'd like to send anything into the port being tested just to tell if it is open. I'd rather stick with telnet or whatever is available.

Also worth mentioning that bash also has network capabilities, you can check port like this:

  # bash's built-in /dev/tcp pseudo-device: the redirect succeeds only if the port accepts
  : < /dev/tcp/google.com/80 && echo OK || echo ERROR

First, I am a big fan of your articles, even from before I joined IPinfo, where we provide an IP geolocation data service.

Our geolocation methodology expands on the one you described. We utilize some of the publicly available datasets that you are using; however, the core geolocation data comes from our ping-based operation.

We ping an IP address from multiple servers across the world and identify its location through a process called multilateration. Pinging an IP address from one server gives us one dimension of location information: based on the round-trip time, the IP address could be anywhere within a certain radius on the globe. Then, as we ping that IP from our other servers, the location estimate becomes more precise. After enough pings, we have very precise IP location information that almost reaches zip-code-level precision with a high degree of accuracy. Currently, we have more than 600 probe servers across the world, and the network is expanding.
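
A toy sketch of the idea in Python (definitely not our production code; the probe coordinates and RTTs are made up, and real measurements need far more careful modelling than a simple light-in-fiber bound):

  import math

  C_FIBER_KM_PER_MS = 200  # ~2/3 the speed of light, in km per millisecond

  def haversine_km(lat1, lon1, lat2, lon2):
      # Great-circle distance between two points, in km.
      p1, p2 = math.radians(lat1), math.radians(lat2)
      dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
      a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
      return 2 * 6371 * math.asin(math.sqrt(a))

  def feasible(lat, lon, probes):
      # Inside every probe's RTT disk? Each probe is (lat, lon, rtt_ms).
      return all(
          haversine_km(lat, lon, p[0], p[1]) <= (p[2] / 2) * C_FIBER_KM_PER_MS
          for p in probes
      )

  # Three made-up probes; the feasible region shrinks as probes are added.
  probes = [(40.71, -74.00, 12.0), (51.51, -0.13, 85.0), (35.68, 139.69, 180.0)]
  cells = [(lat, lon) for lat in range(-90, 91, 2) for lon in range(-180, 181, 2)
           if feasible(lat, lon, probes)]
  print(len(cells), "grid cells remain feasible")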

The publicly available information you are referring to is sometimes unreliable for IP location data because:

- They are often stale and not frequently updated.

- They are not precise enough to be generally useful.

- They provide location context at a large IP-range level, or even at organization-level scale.

And last but not least, there is no verification process with these public datasets. With IPv4 trading and VPN services becoming more and more popular, we have seen evidence that in some instances inaccurate information is being injected into these datasets. We are happy and grateful to anyone who submits IP location corrections to us, but we do verify these correction submissions for that reason.

From my experience with our probe network, I can definitely say that it is far easier and cheaper to buy a server in New York than in any country in the middle of Africa. Location of an IP address greatly influences the value it can provide.

We have a free IP to Country ASN database that you can use in your project if you like.

https://ipinfo.io/developers/ip-to-country-asn-database
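
If it helps, a minimal sketch of reading it from Python, assuming you grab the MMDB flavour of the download; the file name and the exact returned fields here are illustrative:

  import maxminddb

  # Open the downloaded database and look up a single address.
  with maxminddb.open_database("country_asn.mmdb") as reader:
      record = reader.get("8.8.8.8")  # a dict of country/ASN fields, or None
      print(record)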


Or buy a HackRF One and make a wideband antenna like a bicone, dual planar disk, tapered slot, etc., and use QSpectrumAnalyzer with the hackrf_sweep backend to frequency-hop at 8 GHz/s (@ 20 MS/s) for spectrum monitoring.

edit: Now that someone provided an archive.org mirror I see the site is actually about truly global radio monitoring as a service. That's pretty cool. There's plenty of public/open SDRs online and some that do hyperbolic multi-lateration of signals but nothing so integrated or comprehensive. The USA's Unified Data Library has a lot of this kind of thing too. Unfortunately for regular US citizen accounts you can't use the RF monitoring endpoints in the API or web interface.


For people who use Home Assistant, I've gone the DIY route with ESPHome [1] and a SenseAir CO2 sensor. You can buy those sensors for ~26 USD on AliExpress. Together with a 5 USD ESP32 devkit, it's mostly "solder"-n-play.

Of course, I only have one of them, so I can't say if they are accurate, but I just need to know if the CO2 level is normal (~400 ppm) or high (1000+ ppm) so I can open a window. I have tested it by just blowing on it: the CO2 value jumps up, and putting it near a window brings it right back to ~400.

I haven't had any strange readings with it; the ESPHome developers really made an excellent product that is stable and "just works". You can even calibrate the sensor by putting it outside (but I haven't really bothered with that).

ESPHome also has support for a lot of other sensors that you can combine on a single ESP32 module.

[1] https://esphome.io/components/sensor/senseair.html


I built my own lay man's digital signage solution.

I wanted a display in my living room showing the temperature of all rooms in my apartment, so I used an Android picture frame. It's connected via WiFi and offers FTP access.

A Docker service on my local in-house server grabs a random background image from a folder. Depending on whether it's day or night, the picture shows a satellite image of earth's day or night side.

It then connects to my Home Assistant instance and pulls all the necessary values. An SVG template is filled with these values and merged with the background image. The service then uploads the result to the picture frame, which refreshes the image after a few minutes.

The whole thing uses templates and config files, so it's easy to extend.
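
A rough sketch of the fill-and-upload step in Python; the entity ID, hostnames, credentials, and file names are placeholders, though the Home Assistant call is its standard /api/states REST endpoint:

  import ftplib
  from string import Template

  import requests

  # Pull a sensor value from Home Assistant's REST API.
  state = requests.get(
      "http://homeassistant.local:8123/api/states/sensor.living_room_temperature",
      headers={"Authorization": "Bearer YOUR_LONG_LIVED_TOKEN"},
  ).json()["state"]

  # Fill an SVG template whose placeholders look like $living_room.
  svg = Template(open("overlay.svg.tmpl").read()).substitute(living_room=state)
  open("overlay.svg", "w").write(svg)

  # ...rasterize and composite onto the background (e.g. with cairosvg or
  # ImageMagick), then push the finished image to the frame over FTP:
  with ftplib.FTP("frame.local", "user", "password") as ftp:
      with open("frame.png", "rb") as f:
          ftp.storbinary("STOR frame.png", f)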

Unfortunately, the picture frame has since broken down, and I haven't had the chance to buy another one yet.


On a related note, for anyone interested in this who wants better performance today:

I messed with a combination of Whisper and ChatGPT. I took a Whisper transcript and asked ChatGPT to fix mistranscriptions using the context of the transcript and likely phonetic confusions. I asked it to replace transcribed words that don't make sense with "[unintelligible]", which improved the output even more.

Transcription error rate was almost nonexistent, even on the smallest whisper model.
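
A minimal sketch of that pipeline; the model names, file name, and prompt wording are my own stand-ins, not the exact setup described above:

  import whisper                      # pip install openai-whisper
  from openai import OpenAI           # pip install openai

  model = whisper.load_model("tiny")  # even the smallest model held up
  transcript = model.transcribe("meeting.wav")["text"]

  prompt = (
      "Fix likely mistranscriptions in this transcript using its own context "
      "and plausible phonetic confusions. Replace words that still make no "
      "sense with [unintelligible].\n\n" + transcript
  )
  client = OpenAI()
  fixed = client.chat.completions.create(
      model="gpt-4o-mini",            # assumption: any capable chat model works
      messages=[{"role": "user", "content": prompt}],
  ).choices[0].message.content
  print(fixed)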


There are a lot of "electronic price tags" which are basically the same form factor as your trading cards, except they're mass-produced (with nice plastic cases) and usually include a 3-color eink display (black-white-red or black-white-yellow) plus a wireless transmitter (usually a proprietary protocol, but sometimes plain Bluetooth and/or NFC) for OTA updates and a 10-year battery (sometimes a replaceable CR2032). Also, if you can grab them at $6 a piece, I imagine they're being produced for a lot less than that (random AliExpress link: https://www.aliexpress.us/item/3256803094207083.html).

If you're thinking of mass production, it might be worth reaching out to one of those manufacturers; you can buy in bulk if nothing else (but I'm sure they'd be open to customizing it a bit - maybe some branding on the plastic molding and whatnot).


I sometimes use my little service https://pushurl.43z.one/ if I need to notify my desktop browser or phone with a notification triggered by some code.

I just go to the website in my browser and click generate. Then in my script I do `curl $pushurl?title=jobdone`. When it gets triggered, a notification pops up on my phone that says "jobdone".

It uses the native browser Web Push API, so I don't need to install anything extra.
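
The same call from Python, for scripts that already live there (the URL is a placeholder for whatever the generate button gives you):

  import requests

  PUSH_URL = "https://pushurl.43z.one/your-generated-id"  # placeholder
  requests.get(PUSH_URL, params={"title": "jobdone"})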


If I had to guess, they are running the WRF model [1][2]. The AI part is post-processing the model output. With a fair amount of reading the manual, anybody can run their own WRF. WRF scales from running on a laptop to supercomputers with thousands of cores.

[1] https://www.mmm.ucar.edu/models/wrf [2] https://github.com/wrf-model/WRF


This resource covers every level from pop sci to pro https://www.stevenabbott.co.uk/practical-adhesion/

Just personally, I value companies who "dogfood" their products a lot more than similar companies that don't.

An example is 3D printers. Prusa makes fantastic 3D printers, and about half of the structural parts for their printers are printed on their own machines. So they have 600 of their own 3D printers running 24/7 printing their own parts. That means they are forced to address long-term durability and reliability issues even just to ship their own product.

Because it absolutely is true that at that scale you're usually better off just injection-molding the parts, but they'd lose the high-quality signaling that, yeah, their 3D printers are good enough for the company to rely on them to print their product. They'd also lose out on insight into things they could do to improve their own product from a usability standpoint.


> My favorite way to create a network between all my services hosted in different AWS accounts is to share a VPC from a network account into all my service accounts and use security groups to authorize service-to-service communication. There’s no per-byte tax, zonal architectures are easy to reason about, and security groups work just like you expect.

That's gold advice. I wish AWS RAM supported more services (like AWS EKS).
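
For reference, a hedged boto3 sketch of what sharing subnets from the network account via AWS RAM can look like; the ARN and account IDs are placeholders:

  import boto3

  ram = boto3.client("ram")
  ram.create_resource_share(
      name="shared-vpc-subnets",
      resourceArns=[
          # placeholder subnet ARN in the network account
          "arn:aws:ec2:us-east-1:111111111111:subnet/subnet-0abc123",
      ],
      principals=["222222222222"],    # placeholder service-account ID
      allowExternalPrincipals=False,  # keep the share within the organization
  )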

A small complaint: working with AWS SSO is a bit tedious. My current solution is to share my ~/.aws/config with everyone so we all have the same profile names and scripts work for everyone.

