
Arguably high profile thefts increase interest in art and therefore more people enjoy art.

Also artworks can still be enjoyed post-theft through replicas etc.

And if the artwork is returned, as in this case, it's just a big win all round. Creating a new performance artwork in the process.


Wow. OP is being very generous with their time.

My approach would be along the lines of "you prove to me exactly what font I supposedly used, and then we'll look into it".


Seems like there is a commercial agreement between the two parties, but it somehow doesn't capture everything they need. They're relying on some kind of unspoken agreement, but now they don't trust each other. They should make a new agreement.

Dunno if you saw the previous post about the situation, but it seems that attempts to do so over several months have not borne fruit, and have only led to a further breakdown of trust.

Yep. I'm sure glad I'm not a current or aspiring Pebble owner.

> I wanted to highlight some of my favourite watchfaces on the Pebble Appstore.

It was for a commercial purpose. Not a personal one.


But to be clear, the agreement does allow him API access to view apps and display metadata. Presumably, to build App Store experiences on top of the data. Which could easily include something like stack ranking your favorite apps as a review system, or displaying favorites.

Saying this is scraping is so pedantic, and given that Eric’s company is paying for access to the API, they should kick rocks.


You're assuming zero savings to begin with, which is a weird assumption.

His salary per Mastodon's annual reports for 2021-2023 was 28,800, then 36,000, then 60,000 euros annually (reports for 2024 and 2025 are not yet released), so unless he had side gigs or deals, I wouldn't expect him to have a ton of savings at the moment. Glad he is getting a decent payout with his exit, though unfortunately a windfall like this in a single year offers less take-home than if he were paid the same amount over several years.

I really hope he's able to find success and better work-life balance in his future endeavours


Curious whether you actually think this, or was it sarcasm?

It was sarcasm, but git itself is a decentralized VCS. Technically speaking, every git checkout is a full repo in itself. GitHub doesn't stop me from having the entire repo history up to my last pull, and I can still push either to the company backup server or directly to a coworker.

However, since we use github.com for more than just git hosting, it is a SPOF in most cases, and we treat it as a snow day.
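A minimal sketch of that point, using only throwaway local paths (the repo names here are hypothetical): any clone is a complete repo, so you can stand up a "backup" bare repo anywhere and push to it without a central server.

```shell
set -e
# Create a working repo with one commit, then a local bare repo
# standing in for a company backup server or a coworker's machine.
tmp=$(mktemp -d)
git init -q "$tmp/work" && cd "$tmp/work"
git -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "initial"
git init -q --bare "$tmp/backup.git"

# Add it as a second remote and push the current branch to it.
git remote add backup "$tmp/backup.git"
git push -q backup HEAD

# The backup remote now holds the full history.
git ls-remote backup
```

The same `git push <url> <branch>` works over ssh to any host with git installed, which is what makes a GitHub outage a workflow problem rather than a data-loss problem.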


Yep, agreed - Issues being down would be a bit of a killer.

More useful words are "negligible" and "problematic".

> More useful words are "negligible" and "problematic".

Yes, thank you! Worth emulating.

By comparison:

> A characteristic of these systems spanning so many orders of magnitude is that it is very frequently the case that one of the things your system will be doing is in fact head-and-shoulders completely above everything else your system should be doing, and if you have a good sense of your rough orders of magnitudes from experience, it should be generally obvious to you where you need to focus at least a bit of thought about optimization, and where you can neglect it until it becomes an actual problem.


>These general data models start to become useful and interesting at around a trillion edges

That is a wild claim. Perhaps for some very specific definition of "useful and interesting"? This dataset is already interesting (hard to say whether it's useful) at a much tinier scale.


It was a widely observed heuristic going back to the days when the Semantic Web was trendy. The underlying reason is also obvious once stated.

Almost every non-trivial graph data model about the world is, if not directly then by proxy, a graph of human relationships in the population. Population-scale human relationship graphs commonly pencil out at roughly 1T edges, a function of the population size, and the human entity is typically the one with the highest cardinality. Even if the purpose isn't a human relationship graph, they all tend to have one tacitly embedded, with the scale that implies.
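The "roughly 1T edges" figure pencils out from population size alone. A back-of-envelope check, with the average-relationships number chosen purely for illustration:

```python
# Illustrative arithmetic: ~8 billion people, each with on the order of
# ~125 meaningful relationships (an assumed figure), gives ~1T edges.
population = 8e9
avg_relationships = 125
edges = population * avg_relationships
print(f"{edges:.0e}")  # → 1e+12
```

Any plausible choice of the per-person figure (say, 50 to 500) lands within an order of magnitude of a trillion, which is why the heuristic is insensitive to the exact number.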

If you restrict the set of human entities, you either end up with big holes in the graph or it is a graph that is not generally interesting (like one limited to company employees).

The OP was talking about generalizing this to a graph of people, places, events, and organizations, which always has this property.

It is similar to the phenomenon that a vast number of seemingly unrelated statistics are almost perfectly correlated with GDP.


This is not a "general purpose data model", though. A better example would be Wikidata which at about 100M nodes and 1B edges (so orders of magnitude less than that 1T claim) is already enabling plenty of useful queries about all sorts of publicly-available data and entities.

How do you measure it?

>Many software developers will argue that asking a candidate to reverse a binary tree is pointless

Is "reversing a binary tree" actually a thing, or is this a cute kind of "rocket surgery" phrase intentionally mixing reversing a linked list and searching a binary tree?


I think it's a reference to the Google interview problem that the author of Homebrew (IIRC) failed. He was quite upset about it, since he had proved his worth through his famous open-source contributions but got rejected in a LeetCode-style interview.

It’s probably a mistake on the author’s end, but the problem comes across anyway.

I can only imagine that "reversing" a binary tree would mean changing the "<" comparison in nodes to ">", which would be a useless exercise.
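For what it's worth, the famous interview question is usually phrased as *inverting* (mirroring) a binary tree, which amounts to recursively swapping each node's children. A minimal sketch (the `TreeNode` class here is illustrative, not from any library):

```python
# Invert (mirror) a binary tree by swapping left/right children recursively.
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def invert(node):
    if node is None:
        return None
    node.left, node.right = invert(node.right), invert(node.left)
    return node

def inorder(node):
    """In-order traversal; sorted ascending for a BST, descending once inverted."""
    return inorder(node.left) + [node.val] + inorder(node.right) if node else []

tree = TreeNode(2, TreeNode(1), TreeNode(3))
print(inorder(invert(tree)))  # → [3, 2, 1]
```

Which has exactly the effect the parent comment describes: an in-order walk now comes out in descending order, so it is equivalent to flipping the comparison direction.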

