SAM3 seems to trace the images less precisely: it'll discard where kids draw outside the lines a bit, which is okay, but it also seems to struggle around sharp corners and includes a bit of the white page that I'd like cut out.
Of course, SAM3 is significantly more powerful in that it does much more than simply cut out images. It seems able to identify what these kids' drawings represent. That's very impressive; AI models are typically trained on photos and adult illustrations, and they struggle with children's drawings. So I could perhaps still use this for identifying content, giving kids more freedom to draw what they like, and then, unprompted, attach appropriate behavior to their drawings in-game.
I know this may not be what you're looking for, but most such models generate multi-scale image features through an image encoder, and those can be fine-tuned quite easily for a particular task, like polygon prediction in your use case. I understand the main benefit of promptable models is to reduce or remove this kind of work in the first place, but fine-tuning could be worthwhile, and much more accurate, if you have a specific high-load task!
Curious about background removal with BiRefNet. Would you consider it the best model currently available? What other options exist that are popular but not as good?
I'm far from an expert in this area. I've also tried Bria RMBG 1.4, Bria RMBG 2.0, older BiRefNet versions, and I think one other whose name I forget. The fact that I'm removing backgrounds that are predominantly white (a sheet of paper) in the first place probably changes things significantly, so it's hard to extrapolate my results to general background removal.
BiRefNet 2 seems to do a much better job of correctly removing background regions enclosed within the content's outline. Think hands on hips: a region that's fully enclosed but that you want removed. It's not just that, though; some other models will remove this, but they'll be overly aggressive and also remove white areas where kids haven't coloured in perfectly, or the intentionally blank whites of eyes, for example.
I'm putting these images in a game world once they're cut out, so if things are too transparent, they look very odd.
I've recently become the maintainer of https://github.com/godotjs/GodotJS (TypeScript bindings + JS runtime for Godot). GodotJS supports numerous runtimes, but V8 is the most well supported. Unfortunately, I have been noticing V8's GC a bit more than I would like recently.
Don't get me wrong, I'm aware V8 wasn't designed with games in mind. QuickJS (which GodotJS also supports) is probably the safer bet. Or, you know, not JavaScript at all. However, I'm building tooling specifically for kids to make games, and TypeScript is leagues ahead in terms of usability.
Before I make the swap to QuickJS out of necessity, I was hoping to try my hand at tuning V8's GC for my use case. I wasn't expecting this to be easy, but the article doesn't exactly instill me with confidence:
> Simply tuning the system appears to involve a dose of science, a dose of flailing around and trying things, and a whole cauldron of witchcraft. There appears to be one person whose full-time job it is to implement and monitor metrics on V8 memory performance and implement appropriate tweaks. Good grief!
If anyone reading this has experience tuning V8's GC to minimize stop-the-world pause duration (at the cost of overall memory use, runtime performance, etc.), I'd greatly appreciate any advice that can be offered.
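For anyone else going down this path: V8 exposes most of its GC heuristics as runtime flags, which an embedder can set via `v8::V8::SetFlagsFromString(...)` before creating the first isolate. A few real flags I've been looking at, as a starting point rather than a recipe (semantics shift between V8 versions, so verify names against your checkout's `src/flags/flag-definitions.h`):

```
--trace-gc                  # log each GC with pause times; measure before tweaking
--max-semi-space-size=16    # MB; a smaller young generation means more frequent but shorter scavenges
--max-old-space-size=512    # MB; caps old-generation growth before full GCs kick in
--incremental-marking       # on by default; spreads major-GC marking into small steps
--concurrent-marking        # on by default; moves marking work off the main thread
```

None of this eliminates pauses; it trades throughput and memory headroom for shorter stops, which is usually the right trade for a game loop.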
Not really. I've written a bunch of code to try to maintain the limited support for it that already exists in GodotJS, but I've never really used it. The main reason I haven't is that I'm dependent on Web Worker(-like) APIs in GodotJS, and they're currently missing for JavaScriptCore. But since I actually wrote some of those APIs, that's not really an excuse; I can port them easily enough.
So, yeah, I should really give it a shot. Thanks for the reminder.
It most certainly was. You have someone outside your organization who accessed the data, and you know about it. Here's what you just wrote about the person who accessed this endpoint:
> - The author was ultimately banned from the community not for their opinions on this matter, but because of a long streak of unrelated conduct issues that culminated in a spree of saying horribly abusive things to multiple other members of the community.
> - They have been pursuing a grudge against the organization ever since. They are not a reliable narrator, this post is a fantasy version of events that casts them as a martyred hero.
Someone who has been acting maliciously against your organization accessed that data. And you think it's fine? They're a teenager. An angry teenager, who is acting out. You honestly believe you can trust they didn't distribute this data or tell anyone else about the problem before you found out about it?
When I was a teenager, someone in my year level gained access to a lot of personal data about a bunch of people in our year level. This was a smart individual who at least somewhat understood the gravity of the situation. But they were also a kid, of course they distributed some of the data — bragging rights and what not.
What about the section titled "the surveillance infrastructure (orpheus engine)" where the teenager claims children's data was intentionally being sent out to third parties, specifically to profile kids? What's that all about?
Look, no-one read this article and thought, "Wow, this is a well-written article by a super mature, well-adjusted individual. I'm taking this as gospel." The article is clearly written by an angry teenager. I feel far more invested in this now that I've seen your responses. The way you're handling this, and yourself, is just downright absurd. Stop.
I never said anything was fine. I said it was a serious vuln, and we took it seriously.
We patched the vulnerability, quickly. We addressed it with the engineer and made clear that this is no joke. We have extensive refactoring happening within our infrastructure to move to a model where this information is handled as much as possible through secure, audited, centralized systems. Is there something else we should be doing?
The crux of the question here was about whether GDPR obligates us to email all 5,000 people signed up for this program about this vulnerability. The two lawyers we have consulted on this have both said no. One of them specifically specializes in privacy compliance. It's not a complicated legal question, the answer is just no.
Look. This isn't on the front page of HN anymore. So I'm mostly writing this to you. You've work to do on your communication. This style of communication probably works just fine with teenagers, but it's not going to hold up to scrutiny with adults.
> The crux of the question here was about whether GDPR obligates us to email all 5,000 people signed up for this program about this vulnerability.
You are just not going to be able to control the narrative like this. Telling someone else what "the crux of the issue" is won't let you shift the goalposts. The article described a pattern of issues, and in my previous comment I specifically raised one. No determined individual is going to just leave that thread dangling for you.
> Is there something else we should be doing?
Yes. Obviously. That's the point.
> The crux of the question here was about whether GDPR obligates us to email all 5,000 people signed up for this program about this vulnerability. The two lawyers we have consulted on this have both said no. One of them specifically specializes in privacy compliance.
It's not a great look for the leader of a children's organization to so blatantly flaunt their lack of a moral compass. You're currently interacting with the public, not the legal system. Sure, whether or not you're legally required to inform your kids is relevant. However, the law is quite literally the bare minimum of what you're obligated to do.
No-one's reading this thinking, "Oh great, they've done the bare minimum legally required of them." They're thinking, "Wait. Companies notify people of breaches all the time. You apologise, and explain what you're doing to rectify the situation. What have they got to hide? Are they worried they'll get an influx of outrage because this lack of care was something people in the community were already concerned about?" With the context given from the odd parent in this thread, it certainly comes across as the latter.
> It's not a complicated legal question, the answer is just no.
This detracts so much credibility from your communication. There is no lawyer on Earth that will describe this as "not a complicated legal question". No adult that's ever had any communication with a lawyer is going to believe this for a second. Lawyers are notorious for their non-committal attitude toward providing legal advice. Nothing is black and white — it's all grey. So this comes across as:
a. You've never interacted with a lawyer in your life. Or,
b. You're telling porkies, or at the very least, are way too flippant with hyperbole.
> You've work to do on your communication. This style of communication probably works just fine with teenagers, but it's not going to hold up to scrutiny with adults.
> …
> It's not a great look for the leader of a children's organization to so blatantly flaunt their lack of a moral compass.
I'm not the leader of anything, that would be Zach Latta. He's a much better diplomat than I am, but I am doing my honest best to speak plainly and matter-of-factly to you about a complex situation that frankly requires a lot more context to properly understand than I think is possible to acquire from the information you have.
I'm also not trying to absolve our organization of all sins. We mess up all the time. We are working on many fronts to learn from these experiences and make imperfect systems a little better every day. We make mistakes, we apologize, we do our best to make amends, then we move on to the next mistake. It is the nature of doing new, hard things with real stakes.
> You're currently interacting with the public, not the legal system. Sure, whether or not you're legally required to inform your kids is relevant. However, the law is quite literally the bare minimum of what you're obligated to do.
>
> No-one's reading this thinking, "Oh great, they've done the bare minimum legally required of them." They're thinking, "Wait. Companies notify people of breaches all the time.
This is addressed in the top comment I left. Notifying 5k people about a patched vuln is not "more than the minimum"; it's legitimately bad practice. That is not my opinion; it is industry-standard practice! Absent any reason to believe there has been a data breach, and absent any sort of actionable information, we are not going to send an email to thousands of people.
I call the GDPR question the crux because, of the thousands of Slack messages sent on this topic, probably 80% were about that question. That was the impasse. Staff considered the issue and concluded that, from a moral, legal, and industry-standard-practice perspective, notifying every user was not the correct decision. Nothing was hidden; that team logged and discussed the vulnerability publicly within the community from the start. They fixed, disclosed, discussed, learned, and moved on.
> This detracts so much credibility from your communication. There is no lawyer on Earth that will describe this as "not a complicated legal question". No adult that's ever had any communication with a lawyer is going to believe this for a second. Lawyers are notorious for their non-committal attitude toward providing legal advice. Nothing is black and white — it's all grey. So this comes across as:
>
> a. You've never interacted with a lawyer in your life. Or, b. You're telling porkies, or at the very least, are way too flippant with hyperbole.
I am married to a law professor, with whom I lived through three years at Yale Law and three years of PhD/fellowship; I have about as much exposure to the law as you can get without it actually being your job. I assure you, uncomplicated legal questions exist.
Glad to see you here, actually interacting. I've made this clear on the Slack, and truthfully I'm disappointed that it took so long, and external involvement from a parent on HN, to get a response.
Another example: there was a relatively civil debate about a new hackathon y'all are putting on, funded by.... AMD and the US government's fund to "teach AI literacy" or whatever the fuck that means. Because of this, _you region-locked an entire Hack Club event_. This is the kind of stunt Nintendo would pull, not one you'd expect from an organization that prides itself on "everyone is welcome".
When confronted, y'all decided to..... shut down any internal discussion and avoid the thread at all costs, directly going against your other claims of "radical transparency" and "openness to feedback".
What long game are you playing here? The game of "make Hack Club suck for 5 years, and lose our motives, morals, and the trust of our community, for an extra few bucks on the 6th"?
It's complicated to handle the law. It's why lawyers cost, per your quote, $500 an hour.
But it's not complicated to listen to people and genuinely try to turn back from the wrong turn you took somewhere during Juice.
About the vuln: Ella is exaggerating and has minimal basis, if any. She did some pentesting, the vuln got patched, problem solved. Does HQ need to be more responsible here? Yes. Should critical infrastructure be written by AI? Absolutely not! But does Ella have the basis to start claiming legal superiority here? Also no.
But, now that you absolutely insist you need to keep my passport indefinitely in order to ship me a sticker that says "summer of making" on it, I expect you to be a little more responsible in:
- Who you give access to
- How you give said access
- How long you give it for
- How strict you are about conduct while a person is in possession of said access.
TL;DR: Ella's point sucks. Hack Club's data handling also sucks. Hack Club PR? Might be worse.
I'm not going to pretend this is an easy read, so I wouldn't blame you if you stopped early. However, there's a section titled "the surveillance infrastructure (orpheus engine)" which claims that children's private information is being distributed to third parties without consent.
If they're ignoring GDPR because they're in the US, you can potentially flag these as COPPA violations. COPPA is serious stuff: courts can fine over $50k for each violation, and each individual impacted can be considered a separate violation. COPPA applies to under-13s. I'm not sure if there are age restrictions in place to join Hack Club, but if there isn't even a privacy policy, I doubt age restrictions are properly enforced.
An 11 year old child's account was promised to be deactivated, went through the dramatic "welp, you had a nice run" text wall, and then absolutely nothing happened.
Melbourne is one of the largest cities in the world by urban sprawl; it is very spread out. I live 85 km from the Melbourne CBD and am officially still considered part of Melbourne. Well, there isn't an "official" boundary per se: my council is considered one of Melbourne's councils, but mine is also the last suburb in this direction. (Yes, Melbourne's Covid lockdowns applied to us.)
My wife and I ended up delivering my youngest daughter at home (because they'd sent us away from the hospital 30 minutes earlier). I think the ambulance must have taken around 10-15 minutes to arrive. Granted, I don't have a great memory of it: lots of adrenaline, a bit of a blur. They arrived in time to cut the cord. Fortunately, the phone dispatcher stayed on the line and gave me instructions the entire time.
Just to clarify, I think the dispatch time was reasonable, I'm not at all upset with the ambulance service. The hospital — different story.
P.S. My daughter is 2 now. 100% fine, fortunately.
Admittedly, I won't have time to go through the code. However, from a quick look at the thesis, there's a section on multi-threading.
Whilst it's still very possible this was a simple mistake, an alternative explanation is that each strip is allocated its own cache line. On modern x86_64 systems, a cache line is 64 bytes. If the renderer is attempting to mitigate false sharing, it may be allocating each strip in its own cache line instead of contiguously in memory.
I just posted recently in the thread, so I would have expected to see my post somewhere near the front page on this site. Granted, HN already has a new/hot algorithm, so perhaps you didn't want to reinvent the wheel and instead focused on search.
* GodotJS — https://github.com/godotjs/GodotJS — TypeScript for Godot
* Consulting for companies using GodotJS (and Unity).