Their search homepage was supposed to be minimal. I was at a tech talk given by Google sometime around 2012, and they said that their ad service is not under any circumstances allowed to slow down the page load: if the ads don't return before the page is ready, the page is rendered without ads.
Chrome had so many great UX choices originally, such as tabs all staying the same size while you were closing them so that you could close multiple easily, only resizing after a second or two (that stopped working around a year ago). Hell, there are even rumours that Chrome is called Chrome because it was a polished UX.
Their original products were so smooth compared to what was there before. Search compared to AltaVista, Gmail compared to Hotmail, both compared to Yahoo!. I really don't know where your perspective comes from. GCP?
If I remember correctly, chrome:// used to have special meaning in Firefox (and probably well before that), and was used to tweak UI settings. I always assumed this was where Google took the name from.
Chrome is a now-somewhat-archaic term for GUI (or specifically the actual elements of the GUI, not the concept), and Netscape/Mozilla did use the term a lot. Google claims that their browser is called Chrome because of an association with fast cars (presumably Google was keen to market it to extremely old people, chrome not having been a particularly big thing in cars for a very long time).
> Google claims that their browser is called Chrome because of an association with fast cars
FWIW, before Google Chrome, Firefox was originally Firebird (changed for name collision reasons), and Mozilla had broken off the rest of the Netscape-ish "communications suite" into Thunderbird, both arguably named after cars.
Besides the use of chrome by Netscape/Mozilla that you mention, roughly around that time I heard it used by HCI people to refer to flashy GUI design done for cosmetics rather than function, and specifically to changes in a particular Mac OS version.
I wonder whether Netscape/Mozilla then jokingly used it as a term for the GUI toolkit "trim" around the browser page, given that this was a transition to the important stuff being on the Web page rather than on your computer. And/or whether Google did.
> FWIW, before Google Chrome, Firefox was originally Firebird (changed for name collision reasons), and Mozilla had broken off the rest of the Netscape-ish "communications suite" into Thunderbird, both arguably named after cars.
Mozilla named the web program Phoenix, for rebirth. A company objected to the name. Mozilla renamed it Firebird, because a phoenix is a fire bird. They then named the mail program Thunderbird for its similarity to Firebird.
Between Netscape Navigator and Firefox, their web browser was called simply "Mozilla". It supported GUI themes in XML with images which were officially called "Chrome".
Mozilla also hosted user-contributed themes on a web site called "Chrome Zone".
The browser was considered slow and bloated however, and when Firefox came, its lack of theme support was perceived as part of it having been de-bloated.
I might vaguely recall Mozilla being in an easter egg or alternate throbber in a Netscape browser, and my impression was that it had been an internal codename at Netscape which was then adopted for the open source project.
This comment and a few others here make me feel old and sad for the people too young to remember that time. Yes, Google was an enormous breath of fresh air when it came out. 1000% better UI and features than the competition. Search was incredible. Gmail was a revelation. The whole company culture was night and day compared to the stodgy old tech companies like IBM. Just mind blowingly awesome. And then maps?? How did they even do that? The tech world felt entirely fresh and new and hopeful.
They basically revolutionized the web with the V8 JavaScript engine in Chrome. Before them, JavaScript performance was so bad you had to have a really light touch with it.
Kamala Harris bragged about enforcing this law against parents in California. That’s the only way that I know that it’s an actual law that gets enforced because I had never heard of laws like this before, and I grew up in US public schools in the South.
> generally means doing busywork and/or thinking of ways to cheat/manipulate their customers and the market for maximum gain whole delivering minimum value
When I read comments like this I can't help but wonder where people like you work. It's completely unrelatable to me. I work with really good people, all the way to the top, who try to make money by increasing value for our customers.
Apple, Google, Walmart, Amazon, Home Depot, Anthropic, Toyota, and a hundred other companies all offer me incredible value for so cheap. Why are people so cynical about a world that offers them unimaginable riches everywhere they look?
Sure there are bad companies. And if you work at one of those, go get a new job.
The parent comments are talking about things in the large, negative societal trends, while you are talking about your anecdotal experience, and perhaps survivorship bias from striking it so lucky with your employer. The world offers unimaginable riches, but at what cost, really? Who benefits most? Where does it lead? Big picture.
>Apple, Google, Walmart, Amazon, Home Depot, Anthropic, Toyota, and a hundred other companies all offer me incredible value for so cheap.
Try to find something with Google these days. Try to use an Apple product past its planned obsolescence. They crowd out innovation with their monopolistic rent seeking.
This take might have sounded convincing a year ago. Now it just sounds foolish. The best coders on the planet are using AI. Why wouldn’t they use this?
"Use AI" sounds like "programmers use a machine that can be programmed to automatically carry out sequences of arithmetic or logical operations" - without the details or context how they use it, what tools and for what tasks - doesn't mean anything and also sounds foolish, sorry.
Which is perfectly adequate for 95% of applications. Hell, it’d probably improve most applications if they adopted some proven shade of gray.
Why do people feel that each and every tool they use needs its own unique look and feel? And why are people willing to pay more for that? In some cases, sure. For my smart sprinkler app.. I don’t give a damn if it looks like 1000 other apps.
I’d actually prefer if all of them looked and worked the same, which is especially useful if you have elderly family members you need to teach how to use an app for XYZ. All government websites (especially functional ones that citizens use to do something), for instance, should be exactly the same.
Nah. We’re right on the money with this one. AI is a nice tool to have available, but you AI nuts are the ones being voluntarily and gladly fed the whole “you’re a bazillion times more productive with our AI!!!!” marketing spiel.
It’s a nice tool, nothing more, nothing less. Anything else is marketing nonsense.
Software already exists that has been written by Claude. They absolutely are selling the means to write software, and the means to securing the insecure software. At least for the time being. In the future Mythos will probably just make it possible to prompt good software from the start.
Maybe because there’s no critical and widely used software written by LLMs so far? Which says a lot about how LLMs are failing to even approach the level of capability you would expect from all the hype. The goal has always been, even before LLMs, to find something smarter than our smartest humans. So far the success at that is really minuscule. Humans are still the benchmark, all things considered. Now they’re saying LLMs are going to be better than our best vulnerability researchers in a few months (literally what an Anthropic researcher said at a conference). Ok, that might happen. But the funny part is that the LLMs will definitely be the ones writing most of these vulnerabilities. So, to hedge against LLMs you must use LLMs. And that is gonna cost you more.
So today, most of the vulnerabilities being found by these tools are in code written by humans. Your hypothesis is that down the road, most of the vulnerabilities will be in code written by LLMs.
What seems more probable is that the same advances that let LLMs find vulnerabilities will end up baked into developer tooling. So you'll be writing code and using an LLM that knows how to write secure code.
No need to be petty. They have a point. We did this with the words racist and fascist. Overinclusion diluted the terms and gave cover for the actual baddies to come in. I'm not sure debating who is and isn't a sociopath is as useful as, say, debating the degree to which Sam is a liar (versus a visionary).
While I agree that the word has been misused by some bad actors in the "Woke 1.0 era", it's worth pointing out that this isn't what most people complaining about the word being "diluted" are referring to, as these are mostly people flat-out upset by any suggestion that they themselves might hold racist beliefs.
That said, anyone using "racist" as a noun isn't worth your time, nor is anyone who's genuinely upset about people calling concepts, systems or ideologies "racist".
Specifically, the "Woke 1.0 era" culture war arose from two conflicting meanings of the word "racist" largely aligning with two different segments of the population: 1) "racist" as a bad word you call people who are extremely bigoted against people along racial lines and 2) "racist" as a descriptor for systems and ideologies downstream from racialization (i.e. labelling people as racialized - e.g. Black - or non-racialized - i.e. "white") as a mechanism of asserting a power structure. "Wokists" would often conflate the two by applying the word as broadly as the latter definition necessitates while still attempting to use it with the emotional weight and personal judgement of the former definition.
I think a lot of this can be blamed on "pop anti-racism", just as a lot of the earlier "boys are icky" nonsense can be blamed on pop feminism, because fully adopting the latter definition requires a critique of systems, which is much more dangerous to anyone benefiting from those systems than merely naming and shaming individuals.

Anti-racism (and feminism) ultimately necessitates challenging hierarchical power structures in general and thus necessarily leads to anti-capitalism (which isn't to say all anti-capitalists are anti-racist and feminist - there are plenty of "anti-capitalist" movements that still suffer from racism and sexism, just as there are "anti-racists" who hold sexist views or "feminists" who hold racist views).

But you can't use that to sell DEI seminars to corporations, and corporations can't use that to promote themselves as "woke" - as some companies like Basecamp found out when their internal DEI groups suddenly started taking themselves seriously during the BLM protests, resulting in layoffs and "no politics" policies and a general rightwards shift among corporate America leading up to and into the second Trump presidency (which reinforced this shift, resulting in the current state of most US corporations and their subsidiaries having significantly cut down on their previously omnipresent shallow "virtue signalling").
I don't know how to define the delineation I'm about to propose. But there is a difference between overinclusivity trashing a morally-loaded, potentially even technical, term, and slang evolving.
I would be curious to hear you expand on that. Walk me through it, maybe a small paragraph, to explain what overinclusion happened with the word fascist, what baddies you're vaguely referring to, and connect those dots?
Racism and fascism have been used correctly; it's just that people do not like to have their beliefs associated with negative things, and thus, rather than perform self-reflection, they decide the problem exists elsewhere. I am sure you can come up with outliers that support what you are saying, but across the vast majority of applications of both words, they are used correctly relative to their definitions.
Have we been using the same Google?