
Got a link?


Not sure if I can find a particular episode right now that has that type of call, but if you look up Phone Losers of America or The Snow Plow Show (both are the same guy), you can find lots of Craigslist calls.

Here's a similar idea where he pretends to be calling from AT&T to let them know they have no incoming calls:

https://www.youtube.com/watch?v=s1sxF6vN3Ho


That’s pretty funny. The UK had something like it for a while called Fonejacker.


Sadly it doesn’t seem to be able to fix the several SD cards killed by the Raspberry Pis I have.


Pro tip: Use a read-only rootfs and use overlayfs to write any changes to RAM instead of the SD card. Then you just gotta put your state on a USB drive, the network, etc.
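For reference, a rough sketch of the fstab side of this (the PARTUUID and device names are placeholders; on Raspberry Pi OS the overlay part itself can also be toggled via raspi-config's Overlay File System option):

    # Hedged sketch: root stays read-only, mutable state lives on a USB drive,
    # volatile paths go to RAM. Adjust devices and sizes to your setup.
    PARTUUID=xxxxxxxx-02  /      ext4   defaults,noatime,ro         0  1
    /dev/sda1             /data  ext4   defaults,noatime            0  2
    tmpfs                 /tmp   tmpfs  defaults,noatime,size=128m  0  0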


You can also mount /var/log as a tmpfs. That worked on the Debian-based OS but not so well with Fedora (on an eMMC volume).
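On systemd distros (Debian and Fedora both), a related knob is telling journald to keep its logs in RAM only; a minimal sketch of /etc/systemd/journald.conf:

    # Keep the journal in RAM only (lost on reboot); cap its size.
    [Journal]
    Storage=volatile
    RuntimeMaxUse=64M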


Likely worn-out sectors. Sadly such products take the easy route and don't use a read-only file system, even though that would help the majority of users. Maybe there should be a simple script called "finalize" that turns it read-only.


Didn't there use to be a physical read lock on SD cards?

I know "modern" cards have no space, but if it's important to you, you can still use the full size adapters which should (all?) have the physical lock. Super glue that in place...


There's a flag you can set electronically, to make the card read only, but if the OS isn't set up to support running from a read-only filesystem, setting the card to read only will make the OS crash. If the OS is set up for a read-only filesystem, it won't try to write to it, regardless of the flag.

Setting up Linux to run from a read only filesystem only takes a handful of commands, but having a tool to automate it would be nice.
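For instance, a rough way to test-drive it before committing (a sketch, not a complete recipe):

    # Flip the running root to read-only and see what breaks...
    sudo mount -o remount,ro /
    # ...then flip it back while experimenting; make it permanent by adding
    # "ro" to the root entry in /etc/fstab.
    sudo mount -o remount,rw /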


Fun fact: SD write lock is just a switch wired to host controller, it doesn't actually prevent writing to the card.


The latest Pi OS writes all logs to RAM, which might change your experience.


SD cards do not make good SSDs. For Plan 9 I only put the kernel on the SD card and pull root from another source e.g. a file server via tcp/tls.


The thing I don’t like about systemd is the inexplicable need to have multiple files for a service. Why can’t they all be declared in a single unit file?


What do you mean? They can be in a single service file.
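For what it's worth, a hedged example of a single self-contained unit (the app name and paths are made up); timers and sockets genuinely need their own files, but a plain service plus its dependencies fits in one:

    # /etc/systemd/system/myapp.service -- hypothetical example
    [Unit]
    Description=My application
    After=network-online.target
    Wants=network-online.target

    [Service]
    ExecStart=/usr/local/bin/myapp --serve
    Environment=PORT=8080
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target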


All the examples I see have a network unit file, a timer (cron-like) unit file, etc., all for one application. It would be nice to colocate them.

Then there is composition of multiple applications too.

With docker compose I have a single file for running all my services.

With systemd it has to be N files for each service too.


I see you haven’t stumbled across the Minecraft community much, because this weirdness is just every day for them.

Take for example, the infamous 2B2T Minecraft server.

Exploits and game-breaking mechanics arising from virtually impossible-to-discover bugs, plus the lack of any rule against hacking and cheating, have led to things people didn’t think were even possible in Minecraft over the server’s ~15-year history.


It was rejected because it needed JS to even work.


This is simply not true, both in how it worked and why it was rejected.

HTML Imports didn't need JS to work: they used link tags and could transitively import HTML files without any JS.

You only needed JS because there wasn’t anything you could do with the imported HTML yet, since declarative custom elements still don’t exist.

HTML Imports were rejected because they created a parallel module graph to the JS module graph.

I think they could have been made to share the same graph, but that's basically what HTML modules are anyway.


It’s why I’ve started making CI simply a script that I can run locally or on GitHub Actions etc.

Then the CI just becomes a bit of yaml that runs my script.
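Roughly like this - a sketch, with the script names standing in for whatever the project actually needs:

    #!/usr/bin/env bash
    # ci.sh -- the same entry point runs locally and on the runner, so the
    # GitHub Actions yaml reduces to a checkout step plus "run: ./ci.sh".
    set -euo pipefail
    ./scripts/lint.sh
    ./scripts/build.sh
    ./scripts/test.sh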


How does that script handle pushing to ghcr, or pulling an artifact from a previous stage for testing?

In my experience these are the bits that fail all the time, and are the most important parts of CI once you go beyond it taking 20/30 seconds to build.

A clean build in an ephemeral VM of my project would take about 6 hours on a 16 core machine with 64GB RAM.


Sheesh. I've got a multimillion-line modern C++ project that consists of a large number of dylibs and a few hundred delivered apps. A completely cache-free build takes only a few minutes. Incremental and clean (cached) builds are seconds, or hundreds of milliseconds.

It sounds like you've got hundreds of millions of lines of code! (Maybe a billion!?) How do you manage that?


It’s a few million lines of C++ combined with content pipelines. Shader compilation is expensive and the tooling is horrible.

Our cached builds on CI are 20 minutes from submit to running on Steam, which is OK. We also build with MSVC, so none of the normal ccache stuff works for us, which is super frustrating.


Fuck. I write shader compilers.


Eh, you write them I (ab)use them.


I have 15 million lines of C++, and builds are several hours. We split into multi-repo (for other reasons) and that helps because compiling is memory-bandwidth limited; on the CI system we can split the different repos across different CI nodes.


To be honest I haven’t really thought about it, and it’s definitely something it can’t do; you’d probably need to call their APIs or something.

I am fortunate in that the only thing I want to reuse is package manager caches.


That’s fair, but surely you must see that’s a very simple build.

The complicated part comes when you have job A that builds and job B that deploys - they run on two different machine specs so you’re not paying for a 16-core machine to sit idle for 5 minutes waiting on helm apply - and they need somewhere secure to shuffle that artifact around. Their access to that service is likely different to your local access to that service, so you run your build locally and it’s fine, but then the build machine doesn’t have write access to the new path you’ve just tested and it fails.

90% of the time these are where I see CI failures


You must be very lucky to be in a position where you know what needs to be done before the run begins. Not everyone is in that position.

At my place, we have ~400 wall hours of testing, and my run begins by figuring out what tests should be running and what can be skipped. This depends on many factors, and the calculation of the plan already involves talking to many external systems. Once we have figured out a plan for the tests, we can understand the plan for the build. Only then we can build, and test afterwards. I haven't been able to express all of that in "a bit of yaml" so far.


Are you not worried about parallelisation in your case? Or have you solved that in another way (one big beefy build machine maybe?)


Honestly not really… sure it might not be as fast but the ability to know I can debug it and build it exactly the same way locally is worth the performance hit. It probably helps I don’t write C++, so builds are not a multi day event!


Ever since Windows gained Terminal and OpenSSH, my usage of Putty has almost entirely ceased except for serial for embedded systems work.

Then I realised Putty ships with a CLI version which I now use in Terminal for accessing serial.
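Something along these lines, with the COM port and serial settings being placeholders for whatever the device needs:

    plink -serial COM3 -sercfg 115200,8,n,1,N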


I haven't used Putty since I stopped using Windows for anything serious (in the early 00s.) It was my favorite quick and dirty SSH and serial client before then though!


I have to say, I liked SecureCRT a lot, too.

PuTTY was just easier to get ahold of on a new install.

I think that's why it won out for me. That and its simplicity.


I always used mingw and similar projects. IMO, putty was always annoying (but very useful) software. The "ecosystem" seems better now though.


Indeed, that and “git bash” were always the weird outliers. I’m glad there’s now native options.


mingw predates git on windows (and in general), but yeah, indeed.

interesting to scan the log on that: https://github.com/git-for-windows/build-extra/blob/main/Rel...


I remember my journey trying to disambiguate Git Bash, Git for Windows, MinGW and MSYS2. To this day, I'm still not sure I have the full story right.


just install msys2.

i cannot imagine windows without it. even wsl garbage pales in comparison


One nice thing about PuTTY is that it lets you specify the password on the command line. Can't stand software that doesn't let you make your own decisions.
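For reference, a hedged example (the host and password are placeholders; it does end up visible in the process list):

    putty -ssh user@example.com -pw hunter2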


I don't trust Windows with my SSH keys. For about two years now, I have been actively preparing my final migration to Linux. There's some Windows software left that I need to replace before this move is possible, but I am close.


Just pull the trigger. A surprisingly large amount of software just works on wine.

I'm a c# dev with near 20 years experience, and I finally got the shits with advertising in the start menu. Arch Linux, because I figured why not do it properly?

I game a fair bit, and find most things on steam just work.


Wine can be a bit of a headache if you are on a distro that's a couple of years old, as it can be harder to install newer Wine versions.

But I found that the Bottles project pretty much solves this, by installing everything in some kind of sandboxed environment:

https://usebottles.com/

https://github.com/bottlesdevs/Bottles

Has worked wonderfully for the few cases where plain Wine failed.


bottles is garbage. i mean wine is extremely dangerous too... but bottles lies and that makes it more dangerous.

they don't have a sandbox. only if you install the flatpak AND DISABLE SOME CONVENIENCES do you actually get something i'd call a safe sandbox.

but their site lies and makes you feel safe while being extremely vulnerable installing cracked games (which is what everyone used bottles for).


Too bad it's only flatpak, I'd try it out if it had an AppImage.


> I'm a c# dev with near 20 years experience

Which IDE do you use? JetBrains Rider?


Not the person you asked, but I'm in a similar boat (15 years, polyglot but a lot of C#).

I mostly use VS Code to be honest. I use VSCode for other languages and for a long time it was the only graphical editor to have good remote development (over SSH) support.

Rider has that feature now though and is pretty nice too. I typically jump over to it when I need to profile something as it integrates with dotTrace. If you're coming from full-fat Visual Studio you'll probably prefer Rider.


Rider; however, that's on a Windows work machine. We are a good way toward getting a Linux/Mac dev env going; maybe 30% is netstandard2.0, 10% is net9, the remainder net472 (including an old-school non-SDK web app on IIS). Maybe a million LOC in its 14-year lifespan.

My personal dev is shifting to Rust.


I agree with you and just wanted to add that, for what it's worth, one can optionally limit where SSH keys are usable by adding network restrictions on the public key / server side, e.g.:

    grep AuthorizedKeysFile /etc/ssh/sshd_config
    AuthorizedKeysFile /etc/ssh/keys/%u

    cat /etc/ssh/keys/bender
    from="[192.redacted]/24,[redacted]/20" ssh-ed25519 AAAAC[snip...] comment
or wherever your system is configured to look for authorized public keys, typically ~/.ssh/authorized_keys (I use a different location). Even being really broad, like adding a /16 or /8 for a home ISP, is still better than allowing the entire internet. This can also be useful where machine-to-machine SSH keys are utilized: one can limit access to that network so that, should keys leak, the potential blast radius is reduced. For example, the keys for an Ansible account can be restricted to the primary/secondary Ansible server IP addresses, or at the very least the CIDR block(s) of the network(s) they reside in. Broad restrictions are not perfect, but perfect is the enemy of good (or good enough).

An example use case: let's say a contractor from Microsoft tries one of your keys. Your restriction limits the key's validity to 24.0.0.0/8 and they are coming from 207.0.0.0/8. They will be refused authentication, and you now have log entries that can be shared with their fraud department, the world, whomever. Obviously the tighter the restrictions the better, at the risk of requiring a static IPv4 or IPv6 address if too tight. One can always have lighter restrictions on a fall-back account that requires additional hoops to sudo / doas / su.


Is such paranoia warranted? Millions of corporate laptops run Windows 11 just fine. I know M$ is evil and spying on you, but not to such a degree.


Having a Windows 11 corporate laptop with a domain/Entra login, I actually trust it more than a home Windows 11 with a Microsoft account. Because if I lock myself out, I have a contact (corporate support) that is actually interested in helping me recover everything. With a Microsoft account it's a mess.

I had so many problems with Microsoft accounts that I lost count of how many I have, and most are broken in some way because of different issues and different service integrations over time. The Skype account is now useless. I never recovered my paid Minecraft account after one event. With a machine with a local account, I now have to be very careful about what I click related to MS accounts, because while trying to solve various issues with Teams I managed to get the local account linked with that MS account. I spent hours trying to recover a different account after I randomly filled in one nagging question about birth date - who wants to give their real birth date to Microsoft - and then I got locked out because I said I was underage :).

So yes, one of the big issues is the push to have a linked OS account where you have to rely on MS support to solve your issues, otherwise you basically get locked out of your machine and other things you paid for.

Also, domain policies offer more control over the corporate PCs (this is how some of the MS spying is shut off on corporate PCs; it's debatable if the corporate spying added by other domain policies is an improvement).


I have to agree; I've also suffered account problems. I was locked out of an email address I used for 20 years. It refuses to take my password, which is still valid. I've changed phone numbers since then, so I can't use that, and the security questions were nonsense I set as a teenager. Originally my account never had a phone number; they insisted I add it when they integrated my Skype account, perhaps. So I didn't expect access to that phone number to be a strong ongoing requirement.


I recently, by playing around with the LAN's default PAC file and a dummy HTTP server, discovered that on a machine that says in System Settings that Proxy Auto-Discovery is turned off, the PAC file is still fetched and used by a too-large number of Microsoft/Google background auto-update services, from Windows Update to Office.

* https://mastodonapp.uk/@JdeBP/114693762493884550

I had been lucky: decades ago I did my own experimentation with setting up a default PAC file on the LAN, left it in just-send-everything-directly mode, and kept it as I upgraded things on the LAN all these years. Otherwise I would have been vulnerable to a third party in the search path for years, on a machine that clearly and unequivocally, including per direct inspection of the setting in the registry, has this switched off.

* https://jdebp.uk/FGA/web-browser-auto-proxy-configuration.ht...
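For anyone unfamiliar: the just-send-everything-directly mode amounts to a one-function PAC file, roughly:

    // wpad.dat / proxy.pac -- never proxy anything
    function FindProxyForURL(url, host) {
        return "DIRECT";
    }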


> Is such paranoia warranted? Millions of corporate laptops run Windows 11 just fine.

Yes. With Windows Recall data mining surveillance screenshots taken every 5-7 seconds, completely disregarding if this may compromise your security, safety or privacy, we move from "you're the product" to "you're a pet in a zoo, and we want to learn from your behavior."

> I know M$ is evil and spying on you, but not to such degree.*

I mean, they could be recording every second.

I'm pretty sure that's a bandwidth issue.

Not because they really feel like giving you 3-4 second pockets of security, safety and privacy.


I can't wait for the AI overlords to take over. Maybe then we can finally be free from people spreading misinformation and FUD.


>Windows Recall data mining surveillance screenshots

Some of you people are just too far gone to turn off a setting.


Turned on, you mean. It's off by default. But people on HN, Reddit and Twitter are too stupid.


> It's off by default.

For now. This is Microsoft we’re talking about. Needing a Microsoft account to log in to Windows used to be optional.


I’m reminded of a checkbox titled “Don’t ask me next time” when logging into Microsoft Online that I am given the option to check every single time I log in.

My lack of trust in Microsoft (or Google) to keep my interest in mind is rooted in experience.

The problem is: once your organisation is so corrupt that they think of this shit, turning off bad ideas becomes a game of whack-a-mole.

Just say no to this kind of behaviour.


We don't trust them to not turn it back on later...


I don't trust Microsoft not to push an update that exposes all my stuff. Their updates over the last few years have been an absolute shitshow in so many regards.


Can you tell us which software? (Even if it’s very niche) I’m really curious where the gaps are.


I know Altium doesn’t work, which is very important if you need to provide someone else files in Altium format. If you just want to work on designs there’s always Kicad, which is increasingly very good! But it can’t save in Altium format, and I’m not sure I’d trust it for manufacturing.

The other thing I’m missing is my 3D Gerber viewer called ZofZPCB. I’ve not gotten either it or Altium to even start.


The biggest migration challenge isn't finding one-to-one replacements for software, but rebuilding tested workflows and processes.

For years, I've had a seamless document management process on Windows for all my receipts and bills:

    1. My ScanSnap scans, auto-crops, and OCRs documents into a designated folder.
    2. A small open-source tool, DropIt [1], monitors that folder.
    3. Based on about 100 custom rules that parse the OCR'd text (for tax IDs, phone numbers, etc.), DropIt automatically renames and moves the PDFs into the correct subfolders.
    4. Nextcloud then syncs the organized files, and I can discard the paper originals.
This "fire-and-forget" system has been incredibly reliable.

When I explored replicating this on Linux, I found the building blocks exist. For instance, ocrmypdf seems to be a powerful OCR tool, and SANE drivers combined with gscan2pdf can handle the scanning. [2] I also found several tools for automated file renaming and organization.[3] However, the Fujitsu ScanSnap Home software provides an all-in-one experience for the initial capture.[4] More importantly, I'd have to manually translate all my pattern-matching rules from DropIt to a new system, likely a collection of shell scripts. I still feel that this is too fragile. I would need to program all exceptions myself: file renaming issues, special characters, length of document names, issues with OCR and alerting, should anything go wrong. The system needs to be fail-safe because once I throw the original away, there is no going back.
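To make that concrete, here is roughly what one DropIt-style rule would look like as a shell step - purely a sketch, with placeholder paths and a placeholder ID pattern, and none of the error handling I'd actually need:

    #!/usr/bin/env bash
    set -euo pipefail
    inbox=~/Scans/inbox
    for pdf in "$inbox"/*.pdf; do
        ocrmypdf --skip-text "$pdf" "${pdf%.pdf}.ocr.pdf"   # add a text layer if missing
        text=$(pdftotext "${pdf%.pdf}.ocr.pdf" -)           # needs poppler-utils
        if grep -q 'DE[0-9]\{9\}' <<<"$text"; then          # e.g. a German VAT ID
            mv "${pdf%.pdf}.ocr.pdf" ~/Documents/taxes/"$(date +%F)_$(basename "$pdf")"
        fi
    done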

Then another challenge is finding the time to replace this reliable system with the shortest "downtime" possible. I need this daily, so I've already decided I need a migration phase where both systems run in parallel. Perhaps this better explains my slowness to migrate to Linux.

The fact that there isn't a well-known, integrated tool for this on Linux seems suspicious. It makes me wonder if I'm approaching the problem from the wrong direction. Is there a more "Linux-native" philosophy for this kind of workflow automation that I'm missing?

And yes, I'm aware of Paperless-ngx. It's a fantastic project, but I'm committed to my current folder structure and prefer to avoid a solution that centralizes my documents in a database, away from my Nextcloud setup and my filesystem-first-philosophy for document management. I don't trust that paperless-ngx will be available in 40+ years from now, but I need my document management to last that long.

[1]: http://www.dropitproject.com/

[2]: https://github.com/ocrmypdf/OCRmyPDF

[3]: https://github.com/ptmrio/autorename-pdf

[4]: https://forum.manjaro.org/t/fujitsu-scansnap-home-software-f...


On one Windows box I once put my password in for a private Github site. Never had to do that again, it just 'remembered' it... Not what I would expect or want.


Why replace it? Wine works fine.


If Windows were to steal your SSH keys (lol), would you really think using a third-party program would protect you? The evil code could just read the key you configured in PuTTY.


> Terminal

Have they fixed font rendering yet? cmd.exe looks better on my laptop



Windows is basically spyware at this point. The only way to win is to not play.


They probably meant conhost.exe (it gets you the regular console on Windows 11).


Fun fact: if you have Windows Terminal installed, it takes over `conhost.exe` and brings telemetry spying even to old apps which never had it before :D

https://github.com/microsoft/terminal/commit/906edf7002b8ccf...


Are you referring to the pixel-level font smoothing they use by default (as opposed to CMD's subpixel-level font smoothing)?

You need to define the "antialiasingMode" key in the settings JSON for the default profile to hold the value "cleartype", rather than "grayscale" (which is the default value). I don't believe this is exposed in the GUI settings page.
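If I remember the shape of settings.json correctly, it's something like this (a snippet, not the whole file):

    {
        "profiles": {
            "defaults": {
                "antialiasingMode": "cleartype"
            }
        }
    }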

Note that this only affects the actual terminal emulation area. The rest of the application will still be pixel-level font smoothed (so e.g. the tab titlebars, the settings, etc.).


I’ve never noticed any issues on any computer with it…


The first time I ever saw it, the text already looked better than cmd.exe via conhost.


https://imgur.com/a/qA1fr71

Something wrong with my eyes? Doesn't cmd.exe look smoother in this screenshot?


I agree. In those screenshots cmd looks better. Not sure what's up.


It's the lack of subpixel anti-aliasing (aka ClearType). For some reason it's being erased from a lot of modern software. It's why Windows >= 8 UWP apps and GNOME look so blurry.


It's just another example of why developers shouldn't be given the fanciest tech available (in this case: high DPI screens) so that they have to experience how actual users see their software.


My Terminal looks great! https://imgur.com/a/js6Yzxf


Looks like you’ve gone for something like the classic text mode 80x25?


More Perfect DOS with a CRT filter, to remind me of my handmedown MS DOS/Win95 PC from elementary school.


I find the Terminal more readable because the white seems brighter in your screenshots


The color is configurable for both.


Terminal looks far better.


cmd.exe looks worse to me. Particularly because of the lack of padding on the left.


cmd looks pixelated.


I've only used it through RDP on Wayland and it's been fine visually. Downloading it can be a challenge if you don't know where to look (Github, not Microsoft's App Store...)


Fascinating how each “standard” or intrinsic that gets added actually totally fails to give you the real numbers promised.


You can’t be serious…


Uh, yeah? The Prusa Mini has his face as the boot screen. Cannot find any refs online sadly (I sold the printer). Edit 1: speaking from a Prusa perspective, I guess.

Edit 2: wait, did I misunderstand 'you can't be serious'? Not sure, but what I tried to address was a cult of personality. That's not something I feel fits in this whole context.


Frankly quite insulting to call any Windows app legacy


sorry it came off that way! could you elaborate on that thought?


windows itself is legacy.

