It does fall back to "core values" though - kinda like with math & axioms. The "why" chain of questions will inevitably lead to something like "because there's inherent value in human life", and this is the point where it breaks down, because there's no logical reason to say that. You can probably postulate the contrary and end up with a completely different set of morals that may still be internally coherent but would be very alien to you. Just like how you can say "in a plane, through a given point not on a given line, there is no line parallel to the given line" and end up with a weird, non-Euclidean but coherent geometry.
How can you know? One could argue that the entire phenomenon of cognitive dissonance is "people (internally) recognize the contradiction and then perform it".
> Don’t bother predicting which future we'll get. Build capabilities that thrive in either scenario.
I feel this is a bit like the "don't be poor" advice (I'm being a little mean here, maybe, but not too much). Sure, focus on improving understanding & judgement - nobody really disagrees that good judgement is a valuable skill. But how do you improve it? That's a lot trickier to answer, and it's the part where most people struggle. We all intuitively understand that good judgement is valuable; that doesn't make it any easier to actually make good judgements.
Make lots of predictions and write down your thought process (seriously, write them down!). Once the result is in, analyze whether you were right. Were you right for the right reasons? Were you wrong, but with a mostly right thought process?
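If you want a number to track over time, one simple way to score a prediction journal is the Brier score. A minimal sketch (the journal format and function name are my own invention - just record each prediction as a stated probability plus the eventual outcome):

```python
# Minimal prediction-journal scorer (illustrative; names are made up).
# Each entry: (stated probability that the event happens, whether it did).
def brier_score(predictions):
    """Mean squared error between stated probability and outcome.
    0.0 is perfect; 0.25 is what always guessing 50% gets you."""
    return sum((p - (1.0 if happened else 0.0)) ** 2
               for p, happened in predictions) / len(predictions)

journal = [
    (0.9, True),   # "90% sure X ships this year" - it did
    (0.7, False),  # "70% sure Y gets acquired" - it didn't
    (0.2, False),  # "20% sure Z pivots" - it didn't
]
print(round(brier_score(journal), 3))  # -> 0.18
```

It doesn't capture "right for the wrong reasons" - that part still needs the written-down thought process - but it keeps you honest about calibration.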
I tried this! Made a list of long-term (10y I think) predictions. Posted it on social media so that I can come back to it later, and also so that it's public & keeps me honest. And by social media, I mean "Google Circles" - tells you everything you need to know about my long term predictions, I guess...
The role of the entrepreneur is predicting future states of the market and deploying present capital accordingly. Beck is advocating a game-theory optimal strategy.
Judgment is a skill improved through reps. Sturgeon’s law (ninety percent of everything is crap) combined with vibe code spewage will create lots of volume quickly. What this does not accelerate is the process of learning from how bad choices ripple through the system lifecycle.
It's just experience, i.e. a collection of personal reference points built from seeing how said judgements have played out over time in reality. This is what can't be replaced.
I think the current state of AI is absolutely abysmal - borderline harmful - for junior, inexperienced devs, who will get led down rabbit holes they cannot recognize. But for someone who really knows what they are doing, it has been transformative.
> They are simple statistical predictors, now universal answering machines.
I see this a lot. I kinda doubt the "simple" part, but even beyond that, is there any evidence that a statistical predictor can't be a universal answering machine? I think there's plenty of evidence that our thinking is at least partially statistical prediction (e.g. when you see a black sheep you don't think "at least one side of this sheep is black", you fully expect it to be black on both sides).
I'm not saying that LLMs _are_ universal answering machines. I'm wondering why people question that they are/they can become one, based on the argument that "fundamentally they are statistical predictors". So they are. So what?
I feel like complaining about things being "undemocratic" is like complaining about a software system's architecture that "it's not microservices". Not everything needs to be - and some things would be actively harmed by making them "democratic". I wouldn't want drug approval to be a democratic process.
Raising capital can be done "democratically" if the founders want to. They can use direct listing. IPO is an option, not an obligation.
Not everything, sure. But this one, more than most, needs to be democratic. If you don't see the wealth inequality today, in which the 1% own 50% of the world's wealth, and you don't see where this is inevitably going to lead, then I don't know what to say.
I'm pretty critical of how late capitalism is shaping up (I pretty routinely get called a leftist radical here on Hacker News which is increasingly Thiel-Aligned Psycho News).
With that said, lots of options exist for a company like Figma doing a public listing: when you're the belle of the ball you can list how you want. Google did a pretty unconventional Dutch Auction thing IIRC.
In this instance the Figma folks decided they wanted an IPO pop and had the underwriters set it up that way. They were paying some premium (to institutional investors) to get one of many intangibles that are attached to that (like a bunch of press about how hot the stock is).
In a world where it was a no-brainer that this was going to be another mediocre Adobe product line rent seeking from here to the horizon, I'm pretty OK with how this turned out.
> I guess the solution is to just join executive leadership ha.
It's really... not? It probably depends on the person, too. But at some level you have a lot of power to accidentally influence things in a bad way if you're not careful, and at the same time absolutely minimal power to actually get stuff done (you always need to rely on others for the "doing" part, oftentimes several levels deep, with a lot of potential for miscommunication).
Those opaque decisions? You _have to_ take decisions, because not taking a decision is very often worse than taking a bad one. And you don't have the information - you can't have the information; you need to work at a high level of abstraction because it's impossible to know all the details. Unless the relevant details are communicated to you just in time (spoiler: they won't be), you won't know them. If you actually care about how well you do your job and what your impact on others is, it's not a walk in the park, at all.
I wasn't suggesting that you shouldn't or cannot take the decisions. We all understand corporate life etc., and I think that kind of compartmentalizing is just part of the game. Frankly there is no decision to take - you are just a messenger, so you roll with the punches. While we all learn to put up a straight face and explain (nay, relay/read out) why sacrificing X000 people is necessary because the path to ASI needs new blood etc. - if you are not moved by it internally (with your inner self raising that single eyebrow in... curiosity), then I applaud you for being made of much sterner stuff than me :)
I was not being clear - I was just trying to say that taking opaque $hit decisions with lots of inverted umbrellas around you is somewhat inevitable, and not all fun either. Sure, it's lucrative, so that helps - but otherwise it's not really a solution to the "mixed bag" problem.
There are a handful of (somewhat exotic) languages that support multiple dispatch - pretty much all the ones you listed. None of the mainstream ones (C++, Java, C#, etc.) do.
(also Common Lisp is hardly a poster child of OOP, at best you can say it's multi-paradigm like Scala)
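To make the distinction concrete: in CLOS or Julia, the method is selected on the runtime types of *all* arguments, whereas mainstream OOP dispatches on the receiver alone. A rough sketch of what that looks like, hand-rolled in Python (the dispatch table and `defmethod` decorator are made-up names, just to illustrate the mechanism):

```python
# Hand-rolled multiple dispatch (illustrative sketch - CLOS and Julia
# do this natively; mainstream OOP dispatches only on the receiver).

_methods = {}

def defmethod(*types):
    """Register an implementation keyed on the types of ALL arguments."""
    def register(fn):
        _methods[types] = fn
        return fn
    return register

def collide(a, b):
    # Look up the method using the runtime types of both arguments.
    return _methods[(type(a), type(b))](a, b)

class Asteroid: pass
class Ship: pass

@defmethod(Asteroid, Asteroid)
def _(a, b): return "asteroids shatter"

@defmethod(Ship, Asteroid)
def _(a, b): return "ship takes damage"

print(collide(Ship(), Asteroid()))  # picks the (Ship, Asteroid) method
```

In a single-dispatch language, `a.collide(b)` can only specialize on `a`; specializing on `b` as well requires exactly this kind of table (or the visitor pattern).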
> Since when do OOP languages have to be single paradigm?
What I really meant to say with that is that it's Lisp at its core - i.e. if one wants to place it squarely in one single paradigm, imo that paradigm should be "functional".
I was just surprised to see it listed as an example of an OOP language, because it's hardly the most representative one.
What, you need something truly "internet-scale" to make sure your thousands of clients can hit, sequentially, that one faulty API? Would you really be more concerned about Redis failure rates than about said API's failure rates?
If you get into that situation then it's probably because that API is critical and irreplaceable (otherwise you wouldn't be tolerating its problems), so you really don't want to get stuck and be unable to query it. And if you can tolerate a SPOF then there's no reason to bring Redis/Postgres into the picture, you might as well just have a single server doing it.
Plus it's just good practice that I'd want to be following anyway. Once you get in the habit of doing it it doesn't really cost much to design the dataflow right up-front, and it can save you from getting trapped down the line when it's much harder to fix things. Especially for an interview-type situation, why not design it right?
Does a truly distributed solution have no additional cost at all?
To be honest, for me, in an interview-type situation, if you insist that Redis is the problem in that scenario - you would have failed the interview (the interview is never one-way, interviewers can fail it too).
> Does a truly distributed solution have no additional cost at all?
If you literally just drop in etcd or Zookeeper rather than Redis and then develop in the same way then I'd say there's no additional cost to doing that. (I mean sure if you dig hard enough you can always find a way in which solution A is worse than solution B - e.g. most things have worse latency than Redis - but in this scenario the latency of the external API is going to make that irrelevant). Of course if you're just running those in single-node mode and developing against them without thinking about the distributed issues then you've still got plenty of ways to shoot yourself in the foot, but it's a small step in the right direction.
Developing more fully distributed from day 1 requires discipline that takes time to learn, but I'm not convinced that it's actually slower - I'd compare it to e.g. using a strongly typed language, where initially you spend a lot of time bouncing off the guardrails, but over time you adapt yourself and can be productive very rapidly on new projects.
> To be honest, for me, in an interview-type situation, if you insist that Redis is the problem in that scenario - you would have failed the interview (the interview is never one-way, interviewers can fail it too).
Interesting - to me Redis in a system design is very often a case of over-architecting. It's easy to use and programmers enjoy working with it, but very often it isn't letting you do anything you couldn't do without it, and while it can speed things up, I see a lot of cases where the thing it speeds up is something that was already fast enough.
TBH I didn't communicate that clearly - my point was not "Redis in particular", but "whatever you already have at hand, for this usecase". Could also be Postgres or another SQL server.
1 etcd pod doesn't give you "no SPOF", you need 3, and then you need them on multiple VMs (or physical machines if you're not on the cloud/not in k8s), and then the cluster needs to be multi-AZ, and if you're really serious about the "no spof" that may mean geo-redundancy too... come on, just the deployment costs alone are significant.
> my point was not "Redis in particular", but "whatever you already have at hand, for this usecase". Could also be Postgres or another SQL server.
But if you're in the habit of using HA-capable systems then whatever you have to hand will be HA-capable, and so there won't really be any additional cost to using that.
And again, I think there's a real antipattern where people take a single-server application and then claim they've made it fault tolerant by making it run on multiple hosts, while it's still relying on a single DB server. In my experience that doesn't actually improve reliability (at least not if you've got a good deployment process for your single-server application), and it complicates your architecture to no real benefit. (Indeed, frankly, I think a lot of developers reach for an external database because they have no other idea how to store data from their application, when using embedded sqlite/hbase or - shudder! - the local filesystem would let them use a much simpler architecture and not really reduce the actual reliability of the system.)
> 1 etcd pod doesn't give you "no SPOF"
No, but it gives you a clear path to removing your SPOF when the need arises. Which is much harder if you've built your system on Redis.
All the systems mentioned in this discussion are HA-capable (even Redis, for some use cases, is perfectly HA-capable; typically for a distributed lock it isn't appropriate, but then again, for the scenario under discussion you don't need a perfectly safe distributed lock, so it would work just fine).
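For context, the usual single-node Redis lock is `SET key token NX PX ttl`, released with a check-and-delete. A sketch of those semantics with an in-memory stand-in (deliberately not real client code - with redis-py you'd call something like `r.set(key, token, nx=True, px=ttl_ms)`, and do the release atomically in a Lua script):

```python
import time, uuid

# In-memory stand-in for Redis's `SET key value NX PX ttl` lock pattern.
# Sketch of the semantics only; a real deployment talks to Redis.
store = {}  # key -> (owner_token, expiry_timestamp)

def acquire(key, ttl_ms):
    now = time.monotonic()
    holder = store.get(key)
    if holder and holder[1] > now:       # lock held and not yet expired
        return None
    token = str(uuid.uuid4())            # random token identifies the owner
    store[key] = (token, now + ttl_ms / 1000.0)
    return token

def release(key, token):
    # Only the owner may release - this guards against deleting someone
    # else's lock after our own TTL already expired.
    holder = store.get(key)
    if holder and holder[0] == token:
        del store[key]
        return True
    return False

t = acquire("api-poller", ttl_ms=5000)
assert t is not None                     # first caller gets the lock
assert acquire("api-poller", 5000) is None  # second caller is refused
assert release("api-poller", t)
```

The known weakness (and why it's "not appropriate" for a strict lock) is that a process paused past its TTL can still believe it holds the lock - which is exactly the failure mode that's tolerable when all you need is "mostly one poller at a time".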
The more interesting question is not whether a system is HA-capable, it's whether the system is appropriate for the job that's required of it (given said system's weaknesses & strengths, plus the specific job's needs). And my argument was that both Redis and Postgres were fine for the job that was described. In an interview situation I want to see that my interviewer is capable of thinking through particular situations and having a good, honest debate about the strengths and weaknesses of a proposed solution _for a proposed problem_ - not just pushing their preferred solution as dogma. In many business scenarios it's fine & correct to architect systems as "HA by default", but in interview situations we're debating hypotheticals, and I am going to judge you based on the hypothetical at hand, not based on your day-to-day job, because I don't know what your day-to-day job is (and it's not what's being discussed).
> All the systems mentioned in this discussion are HA-capable (even Redis, for some usecases, is perfectly HA-capable
It really isn't, outside of some stretched definition. Nor is Postgres without third-party extensions (that come with significant issues in my experience).
> The more interesting question is not whether a system is HA-capable, it's whether the system is appropriate for the job that's required of it (given said system weaknesses & strengths, plus the specific job needs).
I used to believe this kind of thing, but I've come around to the opposite; actually rather than carefully considering the strengths and weaknesses of any given system in the context of a given job, it's a lot more efficient to have some simple heuristics that are easy to evaluate for which systems are good or bad, and avoid even considering bad systems. Of course occasionally you do need to dive into a full evaluation and pick your poison, but if a task doesn't have very specific requirements you avoid a lot of headache by just dismissing most of the possibilities out of hand.
> And my argument was that both Redis and Postgres were fine, for the job that was described.
But they're not contributing anything to the job that's described! Adding an extra moving part to the system that doesn't actually achieve anything is a much worse error than choosing the wrong system IMO.
As a cache, it is. All you need for a cache (if you're using it correctly, as a cache) is for the replica to be up, which it can be. Azure even gives you out-of-the-box multi-AZ replicated Redis with 99.99% promised uptime (and based on previous experience, I'd say they deliver on this promise).
> Adding an extra moving part
I specifically mentioned I considered those as good solutions for the problem at hand only if you already have them / don't need to add them - that's their strength (lots of systems already use Redis or a SQL database, e.g. Postgres - but really, anything would work just fine for the task at hand).
TBH, I feel like the biggest help Cursor gives me is with understanding large-ish legacy codebases. It's an excellent (& active) "rubber duck". So I'm not sure the argument holds - LLMs don't just write code.
Yeah, I've run several SOTA tools on our gnarly legacy codebase (because I desperately need help), and the results are very disappointing. I think you can only evaluate how well a tool understands a codebase if you already know it well enough to not need it. This makes me hesitant to use it in any situation where I do need it.