tfsh's comments | Hacker News

This looks really cool, I'd like to give it a go. The idea of taking a screenshot of the terminal and then parsing it to determine true colour support is definitely novel, though perhaps too novel, because I can't get it to work. Are there any debug flags I can enable?

So far it was able to take the screenshot correctly (https://ibin.co/8kaRr8TIanv2.png); however, parsing it fails with the nondescript "Palette parsing failed." error.

Edit: enabled tracing and got this: https://paste.ee/p/ZyNxG9FK


> The idea of taking a screenshot of the terminal and then parsing it to determine true colour support is definitely novel,

A better way to do this is to send `OSC 10 ; ? ST` (query the foreground color), `OSC 11 ; ? ST` (query the background color), and `OSC 4 ; {n} ; ? ST`, where {n} is the index of the palette color to query.

See: https://invisible-island.net/xterm/ctlseqs/ctlseqs.html#h4-O...
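
For illustration, a minimal sketch of that query in Python (POSIX only; the escape sequences and reply format follow the ctlseqs document, but the 200 ms timeout and the parsing are my own choices):

    import os, re, select, sys, termios, tty

    def query_osc_color(ps):
        """Send OSC {ps} ; ? ST and read back a reply like 'rgb:ffff/0000/0000'."""
        fd = sys.stdin.fileno()
        old = termios.tcgetattr(fd)
        try:
            tty.setraw(fd)  # raw mode so the reply isn't echoed or line-buffered
            sys.stdout.write(f"\x1b]{ps};?\x1b\\")  # ESC ] ... ESC \ is OSC ... ST
            sys.stdout.flush()
            reply = b""
            while select.select([fd], [], [], 0.2)[0]:  # 200 ms timeout per read
                reply += os.read(fd, 1024)
                if reply.endswith(b"\x07") or reply.endswith(b"\x1b\\"):
                    break  # terminals terminate the reply with BEL or ST
        finally:
            termios.tcsetattr(fd, termios.TCSADRAIN, old)
        match = re.search(rb"rgb:([0-9a-fA-F/]+)", reply)
        return match.group(1).decode() if match else None

    print("foreground:", query_osc_color("10"))   # OSC 10 = foreground
    print("background:", query_osc_color("11"))   # OSC 11 = background
    print("palette 1: ", query_osc_color("4;1"))  # OSC 4;n = nth palette colour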


OMG really!? That link is blocked for me for some reason. If that OSC code is widely supported it's going to make things sooooo much easier.


It's very widely supported in my experience. This is how asciinema captures the terminal palette.


It's supported by any xterm-compatible terminal emulator. But as with most things in this domain, expect plenty of edge cases where it should work but doesn't.


Thanks for trying it out. It looks like either your terminal or your screenshotter isn't faithfully rendering the pure red marker column (it's needed for calibration in the parser). The red should be #ff0000, but the screenshot is using #ea3323. I've made a GitHub issue to keep track of this: https://github.com/tattoy-org/tattoy/issues/98 If you can add more details there it'd be really useful; I'm sure there'll be more people hitting this.
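
To illustrate the failure mode, here's a hypothetical tolerance-based check; this is not what tattoy's parser actually does, and the threshold of 70 is an arbitrary illustrative value:

    def is_marker_red(r, g, b, tolerance=70):
        """Accept any pixel within `tolerance` of pure red in RGB space."""
        distance = ((r - 255) ** 2 + g ** 2 + b ** 2) ** 0.5
        return distance <= tolerance

    print(is_marker_red(0xFF, 0x00, 0x00))  # exact #ff0000: True
    print(is_marker_red(0xEA, 0x33, 0x23))  # the screenshot's #ea3323, ~65 away: True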


Maybe that's night mode (or whatever your DE calls it)?


Interesting, I'd never thought of that.


It's not. You might be joking, but that comment still isn't helpful.

My understanding is that this is part of Google's internal PSD (Public Status Dashboard) offering, which uses SCS (Static Content Service) behind GFE (Google Front End), all hosted on Borg, which also runs other large-scale apps such as Search, Drive, YouTube, etc.


Wellp. Incident report: "We posted our first incident report to Cloud Service Health about ~1h after the start of the crashes, due to the Cloud Service Health infrastructure being down due to this outage."


How could it not be helpful given that it gave you reason to provide more details that you wouldn't have otherwise shared? You may not have thought this through. There is nothing more helpful. Unless you think your own comment isn't helpful, but then...


Because "It's good to lie because it makes people correct me" is a joke about IRC, not a viable stable game-theoretic optimal position.


Cunningham's Law emerged in the newsgroups era, well predating the existence of IRC.

Of course, I recognize that you purposefully pulled the Cunningham's Law trigger so that you, too, would gain additional knowledge that nobody would have told you otherwise, as one logically would. And that you played it off as some kind of derision towards doing that, all while doing it yourself, made it especially funny. Well done!


I have 0 idea what Cunningham's Law is, so we can both agree that "recognizing purpose" was "mind-reading", in this case. I didn't really bother reading the rest after the first sentence because I saw something in my peripheral vision about me joking and you congratulating me.

It is what it says on the tin: choosing to lie doesn't mean you want the truth communicated.

I apologize that it comes across as aggro, it's just that I'm not quite as giggly about this as you are. I think I can safely assume you're old enough to recognize some deleterious effects of lying.


> I have 0 idea what Cunningham's Law is

You had no idea what it was. Now you know, thanks to the lie you told.

> choosing to lie doesn't mean you want the truth communicated.

But you're going to get it either way, so if you do lie, expect it. If you don't want it – don't lie, I guess. It is inconceivable that someone wouldn't want to learn about the truth, though. Sadly, despite your efforts in enacting Cunningham again, I don't have more information to give you here.

> I apologize that it comes across as aggro

It doesn't. Attaching human attributes to software would be plain weird.

> I think I can safely assume you're old enough to recognize some deleterious effects of lying

Time and place. While it can be disastrous in the right context, context is significant. It makes no difference in a place of entertainment, as is the case here. Entertainment has always been rooted in tales that aren't true. No matter how old you are, even young children understand that.


A JJ forge is the natural next step, and I'm excited, but posting such a link right now is premature. Are there any design docs, blog posts, etc., on how this'll work?


No real details available yet. But from the context of who these folks are, we can infer that things like stacked PRs, deeply integrated developer tooling, and similar features are likely to be priorities.


Perhaps you missed the associated documentation? This is a classification tool with a fixed label set: it "uses an EfficientNet architecture and was trained using ImageNet to recognize 1,000 classes, such as trees, animals, food, vehicles".

The full list [1] doesn't seem to include a human. You can tweak the score threshold to reduce false positives.

1: https://storage.googleapis.com/mediapipe-tasks/image_classif...
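
For reference, raising the threshold looks something like this with the MediaPipe Tasks Python API; the model file name, image path, and threshold value here are placeholders:

    import mediapipe as mp
    from mediapipe.tasks import python
    from mediapipe.tasks.python import vision

    options = vision.ImageClassifierOptions(
        base_options=python.BaseOptions(model_asset_path="efficientnet_lite0.tflite"),
        max_results=5,
        score_threshold=0.3,  # raise this to drop low-confidence matches
    )
    classifier = vision.ImageClassifier.create_from_options(options)

    # Classify a still image and print each surviving label with its score.
    image = mp.Image.create_from_file("photo.jpg")
    result = classifier.classify(image)
    for category in result.classifications[0].categories:
        print(category.category_name, round(category.score, 3))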


You're right about the missing human class; that would explain it, but I still find it surprising that such a "common item" as a human isn't there.

Did you also try items from the list?

When there is a match (and that's not frequent), it still seems very low confidence to me (like noise or luck).

It seems to be a repackaging of https://blog.tensorflow.org/2020/03/higher-accuracy-on-visio...

So it's an old release from 5 years ago (a very long time in the AI world), and AFAIK it has been superseded by YOLO-NAS and other models. MediaPipe feels like a really old tool, except for some specific subtasks like face tracking.

And as a side note, the OKR system at Google is a very serious thing; there are a lot of people internally gaming the system, and that could explain why this is a "new" launch instead of a rather disappointing rebrand of the 2020 version.

I'd rather recommend building on more modern tools, such as: https://huggingface.co/spaces/HuggingFaceTB/SmolVLM-256M-Ins... (runs on iPhone with < 1GB of memory)


> And as a side note, the OKR system at Google is a very serious thing; there are a lot of people internally gaming the system.

So you came here to offer a knee-jerk assessment of an AI runtime and blamed the failure on OKRs. Then somebody points out that your use-case isn't covered by the model, and you're looping back around to the OKR topic again. To assess an AI inference tool.

Why would you even bother hitting reply on this post if you don't want to talk about the actual topic being discussed? "Agile bad" is not a constructive or novel comment.


I see this as addressing a symptom rather than the cause.

A direct result of technical staff leaving is the loss of siloed knowledge. Instead of trying to address it at the final juncture, my opinion is that this type of knowledge should be proactively documented and shared with the rest of the team as the domain expert learns and works on it. That way, they're around to mentor other members, which reduces the bus factor, and when that person leaves, the rest of the team naturally continues their work rather than spending time on ramp-up tasks.


I don't think you want to converse in good faith, but on the off chance this is a genuine question: yes, GCP lost money for a number of years, but since Q1 2023 it has been profitable. It takes money to bootstrap anything, obviously; this is the case for the vast majority of companies and their offerings, especially one which requires vast amounts of compute resources, SREs, legal, etc.


So just to clarify, the entire cloud org was funded by advertisers for most of its existence.


No, it was funded by Google.

Advertisers paid Google for totally unrelated services. Google invested that money in a number of ways. One of them was to build this very profitable non-advertising business. The advertisers didn't fund that business any more than they funded US treasuries, or the dozens of startups that Google has invested in as a VC.


What’s the problem? Google is trying to diversify their revenue streams. I don’t understand the relevance. Apple TV+ is paid for by iPhones. Ok? And?


> Ok? And?

This is a thread about using your money for better things than paying an ad company. The comment that started this argument you want to have pointed out that it's self-sustaining. But I pointed out that wasn't always true. Tfsh backed my claim.

So maybe today your money isn't being spent with the ad org, but it was that way for a very long time, so we can grant the OP some grace, as it's a rather recent change.

There is even still an argument to be made that while you may not be giving money to the ad org, you are still giving money to Google, thereby helping them deflect the damage they cause the world in their other orgs.


No, even if you were Google Cloud paying customer #1, your money was going to Cloud. It wasn't supporting anything to do with ads.

The ads were providing income to Google which allowed Google to bootstrap Cloud until it was profitable on its own, not vice-versa.

When you buy (or bought) Cloud services, that doesn't affect Google's ad revenue or advertising behavior at all, not for the better and not for the worse. They're basically unrelated orgs within the corporation. Using Cloud isn't promoting ads or whatever you seem to think, not now and not previously.


But it’s not about killing Google’s ad revenue, it’s about hurting Google as a whole. It’s a complete monster, regardless how many heads the hydra has.


OK, well at least you're being honest now.

You could have saved us all a lot of time by simply stating upfront that you hate Google as a whole, rather than discussing the technicalities of which parts have to do with advertising or not.


Sorry that a bit of your life is now lost to your optional participation in this conversation. Let's agree on one thing then: Google is an evil ad business.


Massive +1 to this. I'd recommend using Calibre to manage your library and any Kobo reader as the companion. I manage a 2000+ book library like this and have never encountered issues with syncing, corrupt file systems, etc., which I did regularly with Kindle.


I've managed the Kindle library for my wife and me on two devices for close to 10 years and I've never dealt with syncing or file systems, full stop. I hit "buy" on Amazon and about 15 seconds later the book is on the Kindle and ready to read. There _is_ no management to do.


To clarify, this is about managing my own library rather than buying books off Amazon. I tend to buy books locally and pirate the ebook copy, which I sync from my computer. I don't feel inclined to line Amazon's pockets any more than I already have...


I understood, sorry. The comment you replied to, which you +1'ed, said:

> I’m not sure why anyone buys Kindles when there are so many better options available.

My point is that the Kindle _is_ an excellent experience and that's why people buy them.


Kobos don't require a subscription. I switched from a Kindle to a Kobo Clara Colour recently and it's honestly one of the best tech choices I've ever made. Kobos are hackable by default, so you can literally plug them into Chrome and flash new software onto them via WebUSB (or just via the file system). The real kicker for me, though, is the Calibre support: I have a massive collection of maybe 2000 books, and it syncs perfectly with my Kobo, supporting filtering, collections, etc. Attempting this with even 10 books on my Kindle would routinely break down: books not appearing even after waiting hours for it to index, corrupt file systems, and so on. The entire device is designed to push you towards the Amazon store, including the scammy "pay £20 more to remove adverts" and disguising the actual price as "reduced".

The fee GP pointed to is a monthly subscription similar to Amazon's Kindle book club offering.


But will this company be British or European? I'd love to think so, but somehow I doubt it. There just isn't the money in UK tech; the highest-paid tech jobs (outside big tech) are at elite hedge funds, but they get by with minimal headcount.


Exact same story here: I spent 4-5k on my build with a 13700K, which has blue-screened hundreds of times in video games (R6, Hogwarts Legacy, Cyberpunk) over the last year, to the point that I don't even play competitive tournaments now.

I tried all sorts of things: switching from Windows 11 to 10, buying new memory (twice!!), countless days of debugging, updating my BIOS, etc.

I'm relieved to have finally found the cause, but my goodwill for Intel is burnt.

Do you know what the general fix is? Is it just a BIOS update?


The only fix is to replace it. There is something wrong with the processor itself that is unfixable by mortal hands.

