
No one else is adding the context of where things were at the time in tech...

> The instagram purchase felt insane at the time ($1b to share photos) but facebook was able to convert it into a moneymaking juggernaut in time for the flattened growth of their flagship application.

Facebook's API was incredibly open and accessible at the time, and Instagram was overtaking users' news feeds. Zuckerberg wasn't happy that an external entity was growing so fast, and onboarding users so easily, that it was driving more content to news feeds than Facebook's built-in tools. Buying Instagram was a defensive move, especially since the API has become quite closed off since then.

Your other points are largely valid, though. Another comment called the WhatsApp purchase "inspired", but I feel that also lacks context. Facebook bought Onavo(?), a mobile VPN service used predominantly by younger smartphone users, and by analyzing its traffic logs realized how much traffic WhatsApp generated. Given the growth they were monitoring, they likely anticipated that WhatsApp could usurp them if it added social features. Once again, a defensive purchase.


I don't think we can really call the Instagram purchase purely defensive. They didn't buy it and then slowly kill it. They bought it and, with sustained large investment, turned it into a product of comparable size to their flagship.


It's a little unfair to blame startups; they largely just set up shop where the capital is. Most VCs required startups to be headquartered nearby for easier management and communication. The SV tech scene had such exceptionalism that it quite literally viewed any startup not in SV as an inevitable failure. Even YC mandated that startups be in SV.


Not sure why you're being downvoted. This is a requirement for many of the VCs in the valley. You have to be both US-based and "nearby".


I didn't realize Perplexity was willing to opaquely filter its search results. It makes sense as a shrewd business move, but now it makes me doubt whether any version of the platform exists that doesn't filter, to some extent, in the same way.


I'm sorry, but I feel like I have to amend your scenarios to reflect the accuracy of LLMs:

> Quick [inconsequential] fact checks, quick [inconsequential] complicated searches, quick [inconsequential] calculations and comparisons. Quick [inconsequential] research on an obscure thing.

The reason that amendment is vital is that LLMs are, in fact, not factual. As such, you cannot base consequential decisions on their potential misstatements.


These are simply implementation failures. You should be using them to gather information and references that are verifiable. There are even hallucination detectors that do some of this for you automatically.

If you are treating LLMs like all-knowing crystal balls, you are using them wrong.


Svelte seems to do this just fine. It's much simpler to work with, doesn't introduce too much proprietary code, and is both lightweight and incredibly fast.


> elected by the people of gaza with support from the population.

Since we're cherry-picking, let me continue the sequence for you...

...propped up by Israel to be elected by the people of Gaza. [0]

[0] https://www.timesofisrael.com/for-years-netanyahu-propped-up...


Israel tried to bribe Hamas into not attacking them, yes. Was that a bad plan? Should they not have tried to make life good enough for Gazans to not resort to terrorism?


A literal reading is that sowing division among Hamas and the Palestinian Authority would prevent a unified effort for Palestinian statehood. A more cynical reading would be to prop up the crazies to point at them and say, "look at what they're doing, they shouldn't be allowed to self-govern."

The heritage of both the Israeli and the Palestinian people is steeped in literally thousands of years of history in that land. While one group, the Jews, was often displaced from it, that does not diminish the ties the Palestinian people have to that land, a land they cultivated and farmed for, again, thousands of years, well before the formal borders of any nation today were recognized.

We can nitpick on specific words and phrases, but the broader point still stands. These are human beings who are used for political aims, whose suffering is a means to an end, and it's all happening at the hands of people who do not respect human life. Whether you want to play partisan and choose the label of "Hamas" or "Israel" is up to you, but both are guilty nonetheless. It just so happens that one of those two parties has more resources at their disposal than the other to inflict profound misery and annihilation.


The PA, to this day, pays the families of terrorists who kill Israelis. They're not the good guy. Yes, Israel decided back then that Hamas can't possibly be worse so they gave them a chance. Keep in mind at that point in history, the PA had done far more harm to Israel than Hamas.


One thing I'll add that isn't touched on here is about context windows. While not "infinite", humans have a very large context window for problems they're specialized in solving. Models can often overcome their context window limitations by having larger and more diverse training sets, but that still isn't really a solution to context windows.

Yes, I get that the context window increases over time and that for many purposes it's already sufficient, but the current paradigm forces you to compress your personal context into a prompt to produce a meaningful result. In a language as malleable as English, this doesn't feel like engineering so much as incantation and guessing. We're losing so, so much by skipping determinism.


Humans don't have this fixed split into "context" and "weights", at least not over non-trivial time spans.

For better or worse, everything we see and do ends up modifying our "weights", which is something current LLMs just architecturally can't do since the weights are read-only.


This is why I actually argue that LLMs don't use natural language. Natural language isn't just what's spoken by speakers right now. It's a living thing. Every day in conversation with fellow humans your very own natural language model changes. You'll hear some things for the first time, you'll hear others less, you'll say things that get your point across effectively first time, and you'll say some things that require a second or even third try. All of this is feedback to your model.

All I hear from LLM people is "you're just not using it right" or "it's all in the prompt" etc. That's not natural language. That's no different from programming any computer system.

I've found LLMs to be quite useful for language-ish tasks like "rename this service across my whole Kubernetes cluster". But for specific things like "sort this API endpoint alphabetically", I find the time spent learning to construct an appropriate prompt is about the same as if I'd just learnt to program, which I already have. And then there's the energy used by the LLM to do its thing, which is enormously wasteful.


> All I hear from LLM people is "you're just not using it right" or "it's all in the prompt" etc. That's not natural language. That's no different from programming any computer system.

This right here hits the nail on the head. When you use (a) language to ask a computer to return you a response, there's a word for that: "programming". You're programming the computer to return data. This is just programming at a higher level, but we've always been raising the level at which we program; this is a continuation of that. These systems are not magical, nor will they ever be.


I agree, I'm mostly trying to illustrate how difficult it is to fit our working model of the world into the LLM paradigm. A lot of comments here keep comparing the accuracy of LLMs with humans and I feel that glosses over so much of how different the two are.


Honestly we have no idea what the human split is between "context" and "weights" aside from a superficial understanding that there are long term and short term memories. The long term memory/experience seems a lot closer to context than it is to dynamic weights. We don't suddenly forget how to do a math problem when we pick up an instrument (ie our "weights" don't seem to update as easily and quickly as context does for an LLM).


> humans have a very large context window for problems they're specialized in solving

Do they? I certainly don't. I don't know if it's my memory deficiency, but I frequently hit my "context window" when solving problems of sufficient complexity.

Can you provide some examples of problems where humans have such large context windows?


> Do they? I certainly don't. I don't know if it's my memory deficiency, but I frequently hit my "context window" when solving problems of sufficient complexity.

Human context windows are not linear. They have "holes" in them, which are quickly filled by extrapolation that is frequently correct.

It's why you can give a human an entire novel, say "Christine" by Stephen King, then ask them questions about some other novel until their "context window" is filled, then switch to questions about "Christine" and they'll "remember" that they read the book (even if they get some of the details wrong).

> Can you provide some examples of problems where humans have such large context windows?

See above.

The reason is because humans don't just have a "context window", they have a working memory that is also their primary source of information.

IOW, if we change LLMs so that each query modifies the weights (i.e. each query is also another training data-point), then you wouldn't need a context window.

With humans, each new problem effectively retrains the weights to incorporate the new information. With current LLMs the architecture does not allow this.


It's a very large context window, but heavily compressed. I don't know every line of <your PL of choice>'s standard library, but I do know a lot of it: many different excerpts from the documentation, relevant experiences where I used this over that, and edge cases/bugs one might fall into. Add to that all the domain knowledge for the given project, explicit knowledge of how clients will use the product, and even things like how a colleague might react to this approach versus another.

And all of this can be novelly combined and reasoned with to come up with new stuff to put into the "context window", which can be dynamically extended at any point (e.g. you recall something similar during a train of thought and "bring it into context").

And all this was only the current task-specific window, which lives inside the sum total of your human experience window.


If you're 50 years old, your personality is a product of 50-ish years. Another way to say this is that humans have a very large context window (that can span multiple decades) for solving the problem of presenting a "face" to the world (socializing, which is something humans in general are specifically good at).


Which really says a lot about how hard it is to leave platforms. The network effect is hard to overcome.


I just think that apps / social networks / whatever are usually not replaced by a copy of the same thing.

Google+ didn't replace Facebook, Signal didn't replace WhatsApp, Bluesky won't replace Twitter.


There's no technical reason that one couldn't move from platform to platform and link identities - the restrictions around IP and platform lock-in only benefit the platform owner, ensuring that competition will be stifled rather than the platform made useful for its users.

The sad part is that ad networks know more about our connections across platforms than we're allowed to.


There's also no technical reason people have to stay, because tech isn't the problem here. The value in these platforms isn't in the range of features they provide, but in the engagement between individuals, the community, and the value of the information it generates.


How do you move platforms when you have >10k followers on Twitter?


If you can be found on any other platform, your followers will be able to follow you there. There's no inherent benefit to being on one particular platform.


Things you can say when you have 10 followers on twitter


Which reinforces the concept of a digital fiefdom; the owners of said platforms have this immense power only because they were the first to implement their ideas during the internet boom.

And now we're stuck with Zuckerberg, Musk, and Bezos. Out of all people, they're the last ones I would choose to have unelected power. Okay, maybe the actual last one would be Joe Rogan.


Sir, I've seen whom you _elected_, let's be humble here about preferred choices


I'm wildly offended that you called me an american.


Now I’m curious to know whom you’ve elected in your home country


I personally have never elected anyone ;)


Not voting is voting for the majority candidate though :)


I said I did not elect, not that I did not vote.


I got 14/20 on my first try just by knowing how the color mixing works. A few simple rules:

- Higher values mean brighter colors

- The closer the individual colors are to each other, the closer to "gray" it looks

- R + G = Yellow, R + B = Fuchsia, G + B = Teal


The typical sets of primary/secondary colors are RGB and CMY. Which set is considered "primary" depends on if you're doing additive (light) or subtractive (ink/pigment) mixing.

In additive color mixing, Red (#F00), Green (#0F0) and Blue (#00F) are the primary colors, and Cyan, Magenta and Yellow are the secondary colors.

Cyan: Green+Blue (#0FF)

Magenta: Red+Blue (#F0F)

Yellow: Red+Green (#FF0)
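As a sketch, the additive mixes above can be checked programmatically by taking the per-channel maximum of two hex colors (a hypothetical helper for illustration, not anything from the game itself):

```python
def mix(a: str, b: str) -> str:
    """Additively mix two '#RRGGBB' hex colors by taking the per-channel max."""
    av = [int(a[i:i + 2], 16) for i in (1, 3, 5)]  # R, G, B of first color
    bv = [int(b[i:i + 2], 16) for i in (1, 3, 5)]  # R, G, B of second color
    return "#" + "".join(f"{max(x, y):02X}" for x, y in zip(av, bv))

RED, GREEN, BLUE = "#FF0000", "#00FF00", "#0000FF"

assert mix(GREEN, BLUE) == "#00FFFF"  # Cyan
assert mix(RED, BLUE) == "#FF00FF"    # Magenta
assert mix(RED, GREEN) == "#FFFF00"   # Yellow
```

This also matches the rules of thumb above: equal channels pull toward gray, and higher channel values read as brighter.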


You also need to know the combinations, for when you get, e.g., 0FF. Though I was once asked to find 0FF and the choices also included 1FF. Obviously that one was pure luck.


I think the intent, as stated, matches what you're saying. It's hard to ignore the apparent absence of any critical thinking, though.

He wants recognition for quickly building simple tools (e.g. visual org chart) without the responsibility of what the tool was used for: to fire half a million people. Where are the efficiency gains in this? It's very telling that the interview that he got fired for included his praise that the government was actually more efficient than he expected.

Given all that, I can't take the writing as being all that sincere.

