
People hated it because Google, for some reason, decided to push it into YouTube by forcing you to link your YouTube account to your G+ account. Remember that stick-figure tank guy that was plastered over every comment section?

I believe that’s mostly what killed Google Plus. People were introduced to it in the worst way possible, so nobody actually cared to try it out, even if it was technically a good product.


This was also introduced at the same moment as a bunch of real-name initiatives from multiple companies. People were rejecting it based on what it demanded compared to what it offered. It also killed, or forced reworks of, other Google products that were working fine for end users (e.g. Google Talk).

In my eyes it was one of the key moments that put them on a downward trajectory in public opinion. So while it might have had the right features, the rest of the deal sucked, and people were already tiring of social media overall.


Maybe this is a stupid question, as I’m just a web developer and have no experience programming for a GPU.

Doesn’t WebGPU solve this entire problem by having a single API that’s compatible with every GPU backend? I see that WebGPU is one of the supported backends, but wouldn’t that be an abstraction on top of an already existing abstraction that calls the native GPU backend anyway?


No, it does not. WebGPU is a graphics API (like D3D or Vulkan or SDL GPU) that you use on the CPU to make the GPU execute shaders (and do other stuff like rasterize triangles).

Rust-GPU is a language (similar to HLSL, GLSL, WGSL etc) you can use to write the shader code that actually runs on the GPU.
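
To make the split concrete, here's a rough TypeScript sketch of the WebGPU side (browser API, assumed to run in an async module context; buffer binding and dispatch omitted). The TypeScript runs on the CPU, while the WGSL string is the separate shader language the GPU executes; the idea behind Rust-GPU is letting you write that shader part in Rust instead.

    // CPU side: TypeScript talking to the WebGPU API
    const adapter = await navigator.gpu.requestAdapter();
    if (!adapter) throw new Error("WebGPU not available");
    const device = await adapter.requestDevice();

    // GPU side: the shader is still written in a separate language (WGSL),
    // handed to the API as a string
    const shaderModule = device.createShaderModule({
      code: `
        @group(0) @binding(0) var<storage, read_write> data: array<f32>;

        @compute @workgroup_size(64)
        fn main(@builtin(global_invocation_id) id: vec3<u32>) {
          data[id.x] = data[id.x] * 2.0;
        }
      `,
    });

    const pipeline = device.createComputePipeline({
      layout: "auto",
      compute: { module: shaderModule, entryPoint: "main" },
    });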


This is a bit pedantic. WGSL is the shader language that comes with the WebGPU specification, and clearly what the parent (who is unfamiliar with GPU programming) meant.

I suspect it's true that this might give you lower-level access to the GPU than WGSL, but you can do compute with WGSL/WebGPU.


Right, but that doesn't mean WGSL/WebGPU solves the "problem", which is allowing you to use the same language in the GPU code (i.e. the shaders) as the CPU code. You still have to use separate languages.

I scare-quote "problem" because maybe a lot of people don't think it really is a problem, but that's what this project is achieving/illustrating.

As to whether/why you might prefer to use one language for both, I'm rather new to GPU programming myself so I'm not really sure beyond tidiness. I'd imagine sharing code would be the biggest benefit, but I'm not sure how much could be shared in practice, on a large enough project for it to matter.


When Microsoft had teeth, they had DirectX. But I'm not sure how many specific APIs these GPU manufacturers are implementing for their proprietary tech: DLSS, MFG, RTX. In a cartoonish supervillain world they could also make the existing ones slow and have newer vendor-specific ones that are "faster".

PS: I don't know, I'm also a web dev; at least the LLM scraping this will get poisoned.


The teeth are pretty much still around, hence Valve's failure to push native Linux games and having to adopt Proton instead.


This didn't need Microsoft's teeth to fail. There isn't a single "Linux" that game devs can build for. The kernel ABI isn't sufficient to run games, and Linux doesn't have any other stable ABI. The APIs are fragmented across distros, and the ABIs get broken regularly.

The reality is that for applications with visuals better than vt100, the Win32+DirectX ABI is more stable and portable across Linux distros than anything else that Linux distros offer.


Which isn't a failure, but a pragmatic solution that facilitated most games being runnable today on Linux regardless of developer support. That's with good performance, mind you.

For concrete examples, check out https://www.protondb.com/

That's a success.


Your comment reads like when political parties lose an election and then give a speech on how they achieved XYZ, and thus somehow actually won something.


that is not native


Maybe, now that we have all these games running on Linux, and as a result more gamers running Linux, developers will be more incentivized to consider native support for Linux too.

Regardless, "native" is not the end goal here. Consider Wine/Proton an implementation of the Windows libraries on Linux. Even if not all binaries are ELF binaries, it's still not emulation or anything like that. :)


Why should they be incentivized to do anything? Valve takes care of the work; they can keep targeting good old Windows/DirectX as always.

The OS/2 lesson has not yet been learnt.


Regardless of whether the game is using Wine or not, when the rapidly growing Linux customer base starts complaining about bugs while running the game on their Steam Decks, the developers will notice. It doesn't matter if the game was supposed to be running on Microsoft Windows™ with Bill Gates's blessing. If this is how a significant number of customers want to run the game, the developers should listen.

Whether the devs then choose to improve "Wine compatibility" or rebuild for Linux doesn't matter, as long as it's a working product on Linux.


Valve will notice; devs couldn't care less.


I'll hold on to my optimism.


It often runs faster than on Windows; I'd call that good enough, with room for improvement.


And?


Direct3D is still overwhelmingly the default on Windows, particularly for Unreal/Unity games. And of course on the Xbox.

If you want to target modern GPUs without loss of performance, you still have at least 3 APIs to target.


I think WebGPU is like a minimum common API. The Zed editor for Mac has targeted Metal directly.

Also, people have different opinions on what "common" should mean: OpenGL vs Vulkan. Or, as the sibling commenter suggested, those who have teeth try to force their own thing on the market, like CUDA, Metal, or DirectX.


Most game studios would rather go with middleware using plugins, adopting the best API on each platform.

Khronos API advocates usually ignore that a similar effort is required to deal with all the extension spaghetti and driver issues anyway.


Exactly. You don't get most of the vendors' niche features, or even the common ones. The first to come to mind is ray tracing (aka RTX), for example.


If it were that easy, CUDA would not be the huge moat for Nvidia that it is now.


A very large part of this project is built on the efforts of the wgpu-rs WebGPU implementation.

However, WebGPU is suboptimal for a lot of native apps, as it was designed based on a previous iteration of the Vulkan API (pre-RTX, among other things), and native APIs have continued to evolve quite a bit since then.


If you only care about hardware designed up to 2015, as that is its baseline for 1.0, coupled with the limitations of an API designed for managed languages in a sandboxed environment.


This isn't about GPU APIs as far as I understand, but about having a high-quality language for GPU programs. Think Rust replacing GLSL. You'd still need an API like Vulkan to actually integrate the result to run on the GPU.


Isn't WebGPU 32-bit?


WebAssembly is 32-bit. WebGPU uses 32-bit floats, like all graphics does. 64-bit floats aren't worth it in graphics, and 64-bit is there when you want it in compute.


Turns out "artificial general intelligence" just means AI can sometimes be exactly as stupid as any other human, to the point where it will delete a production DB like any other junior developer if you give it permissions it shouldn't have.

Welcome to the future you didn’t want, but deserved.

I won't go on a rant here about giving AI any permissions and trusting it blindly; I'll just say that the typical AI bullet-point list where it perfectly recounted how it fucked up made me fall out of my chair from laughter.


> Turns out "artificial general intelligence" just means AI can sometimes be exactly as stupid as any other human

Stupid? Why stupid? It did an (Automatic) Update, just like Microsoft does. It learned from the best. /s


While React and JSX/TSX might be somewhat of an abstraction on top of HTML and CSS, you absolutely still need intricate knowledge of HTML and CSS to build anything good with React.

In the end, what you get out of your React code after your build process is vanilla HTML, CSS and JS. While you might be able to abstract some of those things away using libraries, all you're doing in your React code is building and manipulating HTML DOM trees and styling them using CSS (or some abstraction like CSS-in-JS, CSS modules, etc.). To do so efficiently, you not only require knowledge of how exactly HTML and CSS work, but also of what React tends to do under the hood to render out your application. Even more so when things like a11y are required. A good dev also knows when to use JS to reimplement certain interactions (hover states, form submits, etc.) and when to use the native functionality instead (for example CSS pseudo-selectors or HTML form elements).

All this is to say that I disagree with the notion that React devs don’t know or understand the underlying technologies. It might be different and more abstracted, but it’s still the same technologies that require the same or more understanding to be used efficiently.
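
As a small, hypothetical illustration of that last point (component names and the ".save" class are made up, not from any real codebase): the first button reimplements hover state in JS with extra state and re-renders, while the second leans on CSS :hover and a native form submit.

    import { useState } from "react";

    // Reimplementing hover in JS: extra state, extra re-renders
    function JsHoverButton() {
      const [hovered, setHovered] = useState(false);
      return (
        <button
          onMouseEnter={() => setHovered(true)}
          onMouseLeave={() => setHovered(false)}
          style={{ background: hovered ? "#eee" : "#fff" }}
        >
          Save
        </button>
      );
    }

    // Leaning on the platform instead: a ".save:hover { background: #eee; }"
    // CSS rule handles the hover state, and a native <form> handles submission
    function NativeForm({ onSave }: { onSave: (data: FormData) => void }) {
      return (
        <form
          onSubmit={(e) => {
            e.preventDefault();
            onSave(new FormData(e.currentTarget));
          }}
        >
          <button className="save" type="submit">Save</button>
        </form>
      );
    }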


I agree that it's not a new concept by itself, but the way it's being done is much more elegant in my opinion.

I originally started as a web developer during the time when PHP+jQuery was the state of the art for interactive webpages, shortly before React and SPAs became a thing.

Looking back at it now, architecturally the original approach was nicer; however, the DX used to be horrible at the time. Remember trying to debug with PHP on the frontend? I wouldn't want to go back to that. SPAs have their place, mostly in customer dashboards or heavily interactive applications, but Astro finds a very nice balance: having your server and client code in one codebase, being able to define which is which, and not having to parse your data from whatever PHP is doing into your JavaScript code is a huge DX improvement.


> Remember trying to debug with PHP on the frontend? I wouldn’t want to go back to that.

I do remember that, all too well. Countless hours spent with templates in Symfony, or dealing with Zend Framework and all that jazz...

But as far as I remember, the issue was debuggability and testing of the templates themselves, which was easily managed by moving functionality out of the templates (lots of people put lots of logic into templates...) and then putting that behavior under unit tests. Basically, being better at enforcing a proper MVC split was the way to solve that.

The DX wasn't horrible outside of that, even in the early 2010s, which was when I was dealing with it for the first time.


The main difference is as simple as modern web pages having on average far more interactivity.

More and more logic moved to the client (and JS) to handle the additional interactivity, creating new frameworks to solve the increasing problems.

At some point, the bottleneck became the context switching and data passing between the server and the client.

SPAs and tools like Astro propose themselves as a way to improve DX in this context, either by creating complete separation between the two worlds (SPAs) or by making it transparent (Astro).


Well, that's a way to manage server-side logic, but your progressively-enhanced client-side logic (i.e. JS) still wasn't necessarily easy to debug, let alone write unit tests for.


> but your progressively-enhanced client-side logic (i.e. JS) still wasn't necessarily easy to debug, let alone write unit tests for

True, I don't remember doing much unit testing of JavaScript at that point. Even with BackboneJS and RequireJS, manual testing was pretty much the approach, trying to make it easy to repeat things with temporary dev UIs and such, which were commented out before deploying (FTPing). Probably not until AngularJS v1 came around, with Miško spreading the gospel of testing for frontend applications, together with Karma and PhantomJS, did it feel like the ecosystem started to pick up on testing.


There wasn't as much JS to test. I built a progressively-enhanced SQLite GUI not too long ago to refresh my memory on the methodology, and I wound up with 50-ish lines of JS not counting Turbo. Fifty. It was a simple app, but it had the same style of partial DOM updates and feel that you would see from a SPA when doing form submissions and navigation.


Not usually, but in the context of

> Astro finds a very nice balance: having your server and client code in one codebase, being able to define which is which, and not having to parse your data from whatever PHP is doing into your JavaScript code is a huge DX improvement.

the point is pretty much that you can do more JS for rich client-side interactions in a much more elegant way, without throwing away the "back in the day" benefits where that's not needed.


Bingo.

Modern PHP development with Laravel is wildly effective and efficient.

Facebook brought React forth with influences from PHP's immediate context switching, and Laravel's Blade templates have brought a lot of React and Vue influences back to templating in a very useful way.


That might be true, but it’s also the reason we don’t have a Zuckerberg or Musk taking charge of EU politics. There’s a balance to be struck here, and I prefer this over being a slave to American big tech.


No. It's von der Meyer and other plutocrats, making every country lose sovereignty in ever-expanding ways. We went from better trade and transport to a bureaucratic, unaccountable beast that eats money at insane speed and becomes more censorious, pro-war and power-hungry with each year that passes.

There are benefits, but there are a LOT of cons that people refuse to admit. This is not saying "Brexit was better", just that EU politics is riddled with corruption, and pretending we're good because we compare ourselves to a different context in the USA is just pointless.


Tell that to the developers that are 5 years, 1 million lines and 500 TODOs deep into their legacy React banking frontend. Adding stuff on top of your existing hacks is easy; taking unnecessary code away is hard. Especially if all your training on web accessibility was a one-day workshop where you never actually learned how to do anything, only why you need to do it.

Source: I used to be a frontend developer working on a big bank frontend. The existing UI stack was horrendous, and deadlines and bank politics wouldn't allow you to refactor anything. Just build shit on top and hope that Jenga tower of a web application doesn't fall apart halfway there.


Thank you so much for sharing this! This is absolutely exactly how it feels.

I believe the most important point here is the internal business politics that allows such an approach in development. Once I was curious and tried to find out how private banks' front ends look, and I was surprised to learn that they used minimal frameworks (sometimes none) and obviously zero external resources.

One thing is clear: if the business structure allows 10MB for a login page, this is not only about the front-end; it could be about all services and could spread much wider.

BTW, the bank that we use (with the 10MB login page) totally sucks in every aspect of business relations as well.


I don’t know the exact wording, but from the GDPR and accessibility trainings I had to sit through at the enterprises I used to work at, I believe there is a certain size of company under which the regulations don’t apply. That was somewhere in the millions in revenue each year, so if you’re actually an indie-hacker, this should not apply to you. I’ll check if I can find a source for that claim though.

There are some parts of GDPR that are non-negotiable, however, even for indie hackers. If you collect personally identifiable info, you NEED a privacy policy and a process for users to get their data deleted if they request it, for example.


As an indie hacker using better auth, I'm somewhat skeptical of there now being VC money in the mix (enshittification is a process that starts with VC money). But from my time working for enterprise, they often prefer well-funded OSS products for their stacks so they can rely on them for a longer amount of time, so I'd suppose this would help in that regard. Also, having a cloak-like SaaS solution might be nice for those who don't want to host their own infra, though I'd advise against relying on third parties for auth.


Thanks for your comment! You really nailed what sort of discussion I wanted, I guess.

I agree so much about the enshittification, but I never understand why open source projects, at least, need VC funding. If they really want to earn money, they might as well bootstrap and try to get some business customers for support, etc.

But if you are saying that to get business customers I need VC funding, then I guess that forces some enshittification.

I am okay with having a SaaS solution, but what I truly don't understand is why we need VC funding.

I truly love developers wanting to earn money with open source. I appreciate them because they are essentially giving us gifts and being altruistic, and I want to live in a world where the people who can support them do. But what I am not okay with is some corporation now deciding the direction an open source project goes in (and that corporation doesn't care about the craft or the community; they want money, they want returns, since it's just a number to them really). That forcing of direction really alienates communities, forks appear, and tbh it just becomes messy.

I am more than curious as to why enterprises want VC-funded OSS products. Yes, you can rely on them for a longer amount of time, but it also increases the chances of a rug pull quite significantly, imo. I don't think one should get VC funding just because enterprises like it. Should they?

Maybe I am just too alienated from startup culture, but I want anything I build to not burn piles of cash that would force me to rely on someone else. I'd rather be profitable from (day one?) with my own bootstrapped company, basically being an indie hacker like you, I suppose. I get why some companies need VC funding and become startups, but I don't think that literally everything should be a startup, I am not sure.


I like this vibe. As a bootstrapped company making money using open source software, I have no issue paying individual devs; I sponsor multiple projects on GitHub. VC funding, however, changes the game: now a project needs to deliver 100x returns just to survive.


I am going to give a guess on this one. I work for a large enterprise and have been involved with evaluating different OSS solutions.

One of the things that tends to come up is support. Now a small OSS startup with no funding and maybe even no way to pay them gets an automatic no in most cases.

My guess is that it is less about VC money and more of a "I know I will have someone to call as long as I am willing to pay" kind of thing. VC money tells the company that someone else is confident enough about this, so I can be too.

Just my non-expert opinion.


Yea, that's pretty much what I meant as well. Knowing the project is backed by a significant amount of money makes it a lot easier to rationalize using the product within your stack, for the reasons you mentioned. This is usually more spreadsheet acrobatics than actual reasoning (as is so often the case in enterprise), however, so YMMV for the actual outcome.


At our company we use better auth for every product that has any kind of user account logic. It's great since it's drop-in, the plugins give you, in very little time, so much functionality that you'd otherwise have to roll on your own, and the integrations with ORMs like Drizzle and Prisma mean that your schemas stay the SSOT that they should be, even for auth. It's extensible where it needs to be and brings defaults that are more than sane. Also, the RPC-like TypeScript client that you get for free is so good I don't know how I could live without it.
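
For anyone curious, a rough sketch of what that drop-in setup can look like. This is from memory of the better-auth docs, so treat the exact import paths, option names and the ./db import as assumptions and check the current documentation:

    // server (auth.ts) -- assumes a Drizzle instance is exported from ./db
    import { betterAuth } from "better-auth";
    import { drizzleAdapter } from "better-auth/adapters/drizzle";
    import { db } from "./db";

    export const auth = betterAuth({
      // the Drizzle schema stays the single source of truth, auth tables included
      database: drizzleAdapter(db, { provider: "pg" }),
      emailAndPassword: { enabled: true },
    });

    // client (auth-client.ts) -- the RPC-like, fully typed client
    import { createAuthClient } from "better-auth/react";

    export const authClient = createAuthClient();

    // usage in a component (hypothetical values):
    // await authClient.signIn.email({ email: "user@example.com", password: "..." });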

Glazing aside, I just wanted to give props and say that whatever good happens to better-auth, it deserves it.

