Gitea [1] is honestly awesome and lightweight. I've been running my own for years, and since they added Actions a while back (with GitHub compatibility) it does everything I need it to. It doesn't have all the AI stuff in it (but for some that's a positive :P)
I'm stuck on the latest gitea (1.22) that still supports migration to forgejo, and I'm unsure where to go next. So I've been following both projects (somewhat lazily), and it seems to me that gitea has the edge on feature development.
Forgejo promised interesting features like federation but has yet to deliver any of them; meanwhile, the features they've actually been shipping are cosmetic changes like being able to set pronouns in your profile (and then another 10 commits to improve that...)
If you judge by very superficial metrics like commit counts, forgejo's count is heavily inflated by merges (which gitea's development process doesn't use, preferring rebase) and frequent dependency upgrades. When you remove those, the remaining commits represent maybe half of gitea's development activity.
So I expect to observe both for another year before deciding where to upgrade. They're too similar at the moment.
FWIW, one of gitea's larger users, Blender, continues to use and sponsor gitea and has no plans to switch AFAIK.
That's an interesting perspective, and I can't strongly disprove it, but it doesn't match my impression. I cloned both repos (Gitea's from GitHub; Forgejo's from Codeberg, which runs on Forgejo) and ran roughly this command:
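    # a reconstruction; the exact flags may have differed
    git shortlog -sn    # one line per author, with that author's commit count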
to get an overview of things. That showed 153 people (including a small handful of bots) contributing to Gitea, and 232 people (and a couple bots) contributing to Forgejo. There are some dupes in each list, showing separate accounts for "John Doe" and "johndoe", that kind of thing, but the numbers look small and similar to me so I think they can be safely ignored.
And it looks to me like Forgejo is using a similar process of combining lots of smaller PR commits into a single merge commit. The vast majority of its commits since last June or so seem to be 1-commit-per-PR. Adding `--since="2024-07-1"` to the above command reduces the number of unique contributors to 136 for Gitea and 217 for Forgejo. It also shows 1228 commits for Gitea and 3039 for Forgejo, and I do think that's a legitimately apples-to-apples comparison.
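If I instead filter the log, roughly like this (again a reconstruction; the exact pattern may have differed):

    git log --oneline | grep -cE '\(#[0-9]+\)'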
to match lines that mention a PR (like "Simplify review UI (#31062)" or "Remove `title` from email heads (#3810)"), I'm seeing 1256 PR-like Gitea commits and 2181 Forgejo commits.
I'm not an expert in methodology here, but from my initial poking around, it would seem to me that Forgejo has a lot more activity and variety of contributors than Gitea does.
I've successfully migrated from Gitea 1.23 by just rolling back the database migrations manually in SQL to the point Forgejo supports again. Of course, I had backups.
The development energy has not really moved; Gitea is moving much faster. Forgejo is stuck two versions behind, and with their license change they're struggling to keep up.
I’ve almost completed the move of my business from GitHub’s corporate offering to self-hosted Forgejo.
Almost went with Gitea, but the ownership structure is murky, feature development seems to have plateaued, and they haven’t even figured out how to host their own code. It’s still all on GitHub.
I’ve been impressed by Forgejo. It’s so much faster than GitHub at performing operations, I can actually back up my entire corpus of data in a format that’s restorable/usable, and there aren’t useless (AI) upsells cluttering my UX.
For listeners at home wondering why you'd want that at all:
I want a centralized Git repo where I can sync config files from my various machines. I have a VPS so I just create a .git directory and start using SSH to push/pull against it. Everything works!
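The whole setup is just a bare repository on the server; a sketch, with made-up host and paths:

    # on the VPS: create a bare repo to push/pull against
    ssh me@my-vps 'git init --bare ~/repos/dotfiles.git'

    # on each machine: add it as a remote
    git remote add vps me@my-vps:repos/dotfiles.git
    git push -u vps main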
But then, my buddy wants to see some of my config files. Hmm. I can create an SSH user for him and then set the permissions on that .git to give him read-only access. Fine. That works.
Until he improves some of them. Hey, can I give him a read-write repo he can push a branch to? Um, sure, give me a bit to think this through...
And one of his coworkers thinks this is fascinating and wants to look, too. Do I create an SSH account for this person I don't know well at all?
At this point, I've done more work than just installing something like Forgejo and letting my friend and his FOAF create accounts on it. There's a nice UI for configuring their permissions. They don't have SSH access directly into my server. It's all the convenience of something like GitHub, except entirely under my control and I don't have to pay for private repos.
I think the beauty of this is that it's big and clunky. An SD card takes quite coordinated motor skills to insert, can be quite flimsy and has to be inserted with an exact fit.
I still will never understand the need for native "Apps". To this day, I have never seen an "App" that couldn't simply have been a website/webapp. Most of them would likely be improved by being a webapp.
The only benefit I can see of "Apps" is that the developer gets access to private information they really don't need.
Yeah, they get to be on the "App Store". But the "App Store" is a totally unnecessary concept introduced by Apple/Google so they could skim a huge percentage of sales.
Web browsers have good (not perfect) sandboxing, cost no fees to "submit", and are accessible to everyone on every phone.
The reality is, most webapps for mobile just suck. The UX is nowhere near that of a native application. I don't want any text to be selectable. I don't want pull to refresh on every page. I don't want the left-swipe to take me to the previous page.
You can probably find workarounds for all these issues. The new Silk library (https://silkhq.co/) is the first case I've seen that gets very close to a native experience. But even the fact that this is a paid library goes to show how non-trivial this is.
>I don't want any text to be selectable. I don't want pull to refresh on every page. I don't want the left-swipe to take me to the previous page.
Strange. This inability to select any text has always felt like one of the most hostile things developers could ever do. It feels like pure vandalism.
Another thing that causes massive productivity degradation is not being able to keep multiple pages open so you can come back to some state. I cannot imagine how anyone could possibly use these apps for any serious work.
The UX of almost all native mobile apps is absolute crap. But it's not their nativeness that makes them crap. I'm not complaining about the idea of operating systems offering non-portable but high performance UI primitives that make use of OS facilities.
Many native desktop apps don't have these UX issues (at least not all of them at the same time). It's the mobile UX patterns, conventions and native UI frameworks that are causing this catastrophic state of affairs.
Inability to select text is a pain in the ass when you're midway through learning the language and only want to translate certain parts. In native apps it's understood (app makers don't really give a shit about me), but when it's on websites it's like a slap in the face :)
Yeah, the app model of one page open at a time ever is such bad UX. Huge regression from the web.
Funnily enough you get around it on an app like Reddit by opening pages in the web browser.
Every time I try to select a single word in a WhatsApp message I’m surprised for a second. It’s so strange that most apps that have text as their fundamental content don’t allow you to do this.
> Strange. This inability to select any text has always felt like one of the most hostile things developers could ever do. It feels like pure vandalism.
Use Circle to Search? Native capability that works on every single app, and is close to perfect (with the exception of handling text at the very bottom/top of your screen that's covered by your navbar/Google logo).
On modern mobile and desktop operating systems, you can always copy that portion of the screen to the clipboard and it will recognize the text so you can paste it anywhere.
Even if you could (which you can't, at least on my, modern, phone), it would be a workaround, not a solution.
A solution would be allowing free selection like in the browser or, better yet, ditching "native" apps for web apps, as the person above suggested. As a bonus, this "exodus" will force browser makers to iron out any UX issues very quickly.
To be fair, browser apps do have their advantages:
- text is selectable
- content is zoomable
- you can have an ad/nuisance blocker
- page source is open
While native apps have their own advantages:
- much smoother experience esp. navigation, scrolling, animations, etc.
- better overall performance (JavaScript will always lose to the native binary)
- access to hardware opens new possibilities; audio, video accelerators etc.; there's a ton of things you can't do in the browser with audio for example
- widgets, some of them are nice and useful too
- for publishers: an app icon on the home screen is a reminder, a "hook" of sorts; this is the main reason they push apps over web versions
All the features you mentioned can also be achieved by a well-developed PWA. Of course, minus the widgets or some deeper system integration (like controlling phone calls, etc.)
There are cases like media apps, camera apps, videogames, terminal emulators, clipboard managers etc. that won't become Web apps any time soon.
Either because they need to operate closer to the OS, or for performance expectation reasons.
But I've just had a quick scroll through the apps on my phone, and I can confidently say that 90% of them are basically HTTP clients that interact with an HTTP server.
And even those that do more could probably be wrapped into a WebAssembly artifact with comparable performance in the near future.
The reason why they are not PWAs, and why engineers are often expected to do triple the work (iOS, Android, Web) rather than release more products as PWAs, keeps eluding me.
Sure, you have to tell folks how the "Install/Add to home screen" process works from a mobile browser, but is that really so much more friction than the App Store paradigm that it justifies the abuse of native apps that either reinvent the wheel multiple times, or are just unglorified web browsers running an Electron app just to show you the discounts at the supermarket near your house?
Heh, I was actually building one. Haven't considered the battery... Are the web audio APIs bad, or are you forced to use the CPU? I guess with webgpu it may be easier?
I think on iOS you need access on the CoreAudio level if you want to be efficient, ie fill audio buffers on a high priority thread with some lower level static language.
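The closest the web gets today is an AudioWorklet, which at least moves audio rendering off the main thread. A minimal sketch ('processor.js' and 'my-processor' are placeholders; the module must register an AudioWorkletProcessor under that name):

    async function startAudio() {
      const ctx = new AudioContext();
      await ctx.audioWorklet.addModule('processor.js');
      const node = new AudioWorkletNode(ctx, 'my-processor');
      node.connect(ctx.destination);
    }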
These are more like byproducts of the fact that web apps are built on a stack not suited for modern UI apps. It's literally a text typesetting engine pretending to be a rendering engine for high-performance UI.
So, it can also be framed as:
- everything is selectable, even what shouldn't be: buttons, drawers, video players, etc.
- content is zoomable, which most of the time just breaks UX in hilarious ways. Developers have to do extra work to either disable zoom or make hacks/workarounds.
"Everything is selectable" and "everything is zoomable" makes total sense if it's a blog post. If it's a UI for the modern app, it does not.
Disabling zoom is so hostile, why not disable screen readers and put bollards on handicapped ramps while you are at it. It’s literally a middle finger to older people and people with vision issues. If you disable zoom I will not be using your website.
This is a factual view. No matter how many layers of abstraction you put on top, the foundation is always there. Luckily we have better and better support for wasm in browsers, so it's only a matter of time before this outdated stack is replaced with solutions designed from the ground up for the task.
As a user I usually want all of those features to work. I regularly get ticked off at apps, because I cannot copy paste like in the browser or the app just closes (and loses all state) because I tried to use the back button. I also encountered apps that just reset, because I dared switch to another app for a second because I wanted to copy paste something into it...
I have literally never needed to select text in a UX element.
In the past, occasionally there would be an error message in a message box dialog that I wanted to copy and paste. And then I discovered that despite it not looking selectable, it actually was.
I don't want to accidentally select the text of my menu bar, or of a text box label, or a dialog tab title.
Lots of limitations just so you don't accidentally do something; maybe there is a way to prevent those accidents and still help the people who need these features.
No, not providing concrete examples is a weakness.
You're awfully arrogant in making a judgement about my empathy... if you want to make this personal.
Or maybe you can justify why people need to be able to select menu labels in the first place? That's not standard on any OS I've ever used, so it's up to the person who wants to change things to justify why.
Maybe be less judgmental of people here on HN, and contribute something factual instead? I at least gave a factual account of my personal experience, which is a data point. Describing one's experience isn't egoism.
A simple and concrete example: go to Japan, find yourself needing to use a Japanese-only app, and be extremely frustrated that you can't even select text to translate it.
At least in recent versions of Android there is that OCR (?) powered functionality to select text when you're in switch-app view.
Circle to Search can translate everything on your screen without you needing to go through the whole "copy text, open Translate, paste, switch back to app" workflow. You just hold the home button, then press the translate button.
Mmh, the examples you've listed are actually super easy to do if you're using a framework such as Angular with its plugins for PWA and touch controls.
And prolly Tailwind for CSS/disabling selection if you really want to, but I'd call that an anti-feature in almost all cases.
I've had enough browser apps try that on my phone. Usually they start to lag out and become unbearably slow due to the framework bloat, compared to native apps that have no such issues.
You have to wonder about the motivations of the company making the browser that makes it impossible to disable some of these things, and therefore makes real apps so much superior (like swipe to go back on safari - I have never ever swiped back intentionally in over 100000 swipe backs).
“I have never wanted to type the letter ‘e’ in any of the 100,000 times I hit the ‘e’ key on the keyboard; it’s always felt suspicious to me why keyboards even have an ‘e’ key which can’t be disabled” said the perfectly normal hacker news commenter.
Touching something on the left side, like a link, and letting my finger touch the glass a tiny bit too long while pulling it back. Unwanted swiping happens to me all the time, in all directions. May the developers be forced to use a touch screen for everything, forever!
It doesn't sound like anything that a PWA (paired with a sync mechanism like WebSockets) can't solve. And with WebAssembly the convergence is even more compelling.
To go along with this UX argument: it’s always been my perception that native apps often lean towards a stateful design while web apps try for stateless. Maybe that’s too abstract (read - incorrect), but was always just where my intuition landed.
This is a bizarre take. Are you also suggesting there’s no reason to have a native app on a laptop? Because it’s essentially the same question. There are many things which a native app can do that a browser just cannot do well, or at all. I don’t know what your needs are, but for example if you’re doing heavy video or audio editing, accessing heavy amounts of RAM or utilizing GPU compute or doing other things on the bare hardware, doing that all from a browser is definitely not there yet.
On desktop you do productive work, so your apps need native capabilities. On mobile, apps are primarily for consumption: displaying, browsing... no complex interactions.
Lots of people use iPads for content creation. I think your worldview on this topic is a bit narrow. There have also been multiple feature length movies shot on an iPhone, at least two of them by Oscar winning directors! Those weren’t done on a mobile browser.
> Lots of people use iPads for content creation. I think your worldview on this topic is a bit narrow
Can we stick to "by and large"? Every year many youtubers make that video of trying to use ipad/samsung dex as the productive computer for a day. Last I checked they always end the same way.
iPads are designed primarily for consumption, not creation. I'm sure lots of people manage to create something on an iPad anyway, but that doesn't mean it's a good tool for the job. Filming a movie on an iPhone is just using the camera. I'd be very surprised if anybody making a full-length movie with their iPhone edited that film on their phone or an iPad.
> Filming a movie on an iphone is just using the camera.
Not really. And this is why native apps are necessary. You can't use the built-in camera on an iphone successfully in this way, and I don't know any director who has. They use specialized third-party apps which give them the appropriate control.
> I still, will never understand the need for native "Apps". To this day, I have never seen an "App" that couldn't simply have been a website/webapp.
In cases where a native app and web app are both available on iOS, there’s often a huge difference in battery usage and sluggishness. Also, as a sibling poster mentioned, I like having fully “offline” apps as well, for example for maps and notes.
I’m not saying that I like how Apple and Google have done this in practice, but I don’t think going webapp-only is the future. For the same reason I won’t replace my real computer with a Chromebook for the foreseeable future.
When the iPhone came out, you had full offline access on PC to Gmail and google docs using Google Gears.
Google Gears got deprecated because something something move to standard HTML and browser features, and now we don’t really have any offline web apps.
The ability to have non sluggish, offline web apps has existed for decades now, but the interest from providers has been declining and the understanding that this is possible is also declining on the consumer side.
I’m still bitter about Apple backing off their stance against using web tech in apps. Most apps that are really bad, are really bad because they’re just wrapping websites.
> Where most of the modern applications are either web wrappers or Electron apps.
Only if you're stuck on a deprecated platform like Linux. If you are on Mac, native applications (real applications) are much more powerful and usable than any web wrapper on Linux.
I've noticed Linux users have developed a habit of proposing their broken way of using a computer through the browser to other platforms as well. But on other platforms we are already spoiled with quality software.
Native applications are way better on Linux, too. But only where they exist. There are plenty of "apps" whose developers have taken shortcuts, getting "Linux support" by using Electron. These apps perform noticeably worse and are generally disliked by their users.
I was lamenting the lack of native UI in Blender last night.
I’ve been using Nova for the last few years.
Increasingly native non-Xcode development tools seem to be few and far between. I have BBEdit and Nova, but a lot of people have switched to VS Code it seems.
Have you tried building PWAs for large user bases?
Here are some of the frustrations I had with PWAs.
There are massive differences between browsers and Android/iOS when it comes to storage, access to local files, and size limitations. Proper backup/sync of large files using IndexedDB, Cache API, or localStorage is not as straightforward as native storage.
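You can at least ask the browser what it will give you; a sketch of the relevant calls:

    async function checkStorage() {
      // how much this origin may store varies wildly across browsers and OSes
      const { usage, quota } = await navigator.storage.estimate();
      console.log(`using ${usage} of ${quota} bytes`);
      // ask not to be evicted under storage pressure; browsers are free to refuse
      const persisted = await navigator.storage.persist();
      console.log(`persistent storage granted: ${persisted}`);
    }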
Service workers aren’t designed for complex or long-running computations; they’re more like lightweight assistants, and you’ll have a HUGE pain trying to accommodate all the different browser/OS limitations if you need predictable background sync/backup. This seems to be getting better going forward thanks to frameworks like Ionic/Capacitor or Workbox.js, though.
PWAs are tethered to the web’s security model, which means they’re generally restricted to HTTP and HTTPS for communication. This limits direct access to protocols like SMTP (email) and FTP (file transfer). You’re stuck with web-friendly options like WebSockets or WebRTC, or you’ll need a server to act as a middleman. Building a torrent client would be really annoying due to the limited protocol access. The WebTorrent JavaScript framework, which can run in the browser, does not fully support traditional TCP/UDP torrent protocols directly but instead relies on WebRTC data channels. Therefore, your app will only connect to peers supporting WebRTC, which significantly reduces available torrents and peer counts. Also, there often is an added level of restriction to background processes on mobile.
There are also limits on access to device APIs:
- NFC (partial Web NFC support in Android Chrome)
- Bluetooth (Web Bluetooth limited to Chrome on Android, absent in iOS; see the sketch after this list)
- Native contacts, SMS inbox, telephony, or system-wide calendars.
- Some system-level sensors (barometer, precise accelerometer data).
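For example, here is roughly what the Web Bluetooth path looks like where it's supported at all (a sketch using the standard battery service):

    async function readBattery() {
      // Chromium-only, absent on iOS; must be triggered by a user gesture
      const device = await navigator.bluetooth.requestDevice({
        filters: [{ services: ['battery_service'] }],
      });
      const server = await device.gatt.connect();
      const service = await server.getPrimaryService('battery_service');
      const level = await service.getCharacteristic('battery_level');
      const value = await level.readValue();
      console.log(`battery: ${value.getUint8(0)}%`);
    }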
Also: Web apps often perform slower on heavy graphics or computation than native apps due to lack of direct GPU access. I have not tested this myself, but I know this has gotten better.
Onwards:
- PWAs can't directly register as the default handler for specific file types or URL schemes across the OS.
- PWAs cannot reliably run background tasks (like precise location tracking, audio playback, VoIP callbacks, or continuous data monitoring) when inactive.
- WebAuthn supports biometrics, but native biometric APIs (like Face ID/Touch ID) offer deeper integration for specific app functionality. This is a HUGE need for our firm, as we rely on it for easy authentication for our app, and customers love it over other authentication methods.
- PWAs can't easily embed widgets into the OS home screen or system-level UI components like control center integration.
YES, PWAs are much more capable than some people think and could, in many instances, work just as well as a native app. (I use GeForce Now on iOS with not many problems.)
And this is not even touching on how much easier it is to use Android/iOS SDKs to put together an application, and user expectations (which might be WRONG when they think PWAs are lesser or more insecure, but these attitudes are still reality).
All that said, I prefer PWA over native myself due to publication freedom, but I get annoyed when you talk down to people, and you seem to be the one that doesn't understand that there are actual limitations.
The post mentioned offline usage for maps and notes. Neither are significantly limited by service workers' capabilities. Platform differences are annoying indeed, especially due to the deliberate sabotage by Apple.
Sure there are limitations to PWAs, but quite a vast majority of apps don't need the missing features.
I find native Android and especially iOS SDKs vastly more difficult and cumbersome to develop for. Doubly so of course if you have to develop for both. Maybe if you're already used to the Android/iOS development mess it is easier short term than to learn something new.
I get your point partially. All these apps that companies put out in order to collect and manage shopping tokens or to contact their customer service would have been much better as a website.
However, I still do like to have apps on my devices that just work offline, without distributing my data across services I do not control. And I also do not want to depend on an internet connection, wherever I am.
I like my offline Osmand/Organic Maps app to show me the trails when I am somewhere in the woods or mountains. I like my apps that, instead of using some third-party server, connect directly to my other local devices to share data.
IMO all apps (where possible) should be developed offline-first, and only require internet when necessary; those apps that cannot work without internet should be web apps, as they do not need to be on my devices.
It’s totally possible to distribute a webapp that works offline and stores all your data offline too.
Platform owners introduce a bunch of restrictions that create reliability and usability concerns, but the standards already exist to enable a website operator to create a webapp that, after the initial ‘install’, runs entirely offline on the user’s device, and has no need to communicate with the website.
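As a sketch of how little that takes (cache name and file list are placeholders), a service worker that pre-caches the app shell and serves it cache-first:

    // sw.js
    const CACHE = 'app-shell-v1';
    self.addEventListener('install', (event) => {
      event.waitUntil(
        caches.open(CACHE).then((cache) => cache.addAll(['/', '/app.js', '/style.css']))
      );
    });
    self.addEventListener('fetch', (event) => {
      event.respondWith(
        caches.match(event.request).then((hit) => hit || fetch(event.request))
      );
    });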
I’m sorry, I really just can’t understand or relate to this at all. Mobile web still feels like such a terrible experience, and apps generally don’t. When’s the last time you tried booking a flight on mobile web? And how do you deal with all of the real estate the browser steals? Having to log in every time, when the app can just cache my authentication and FaceID me?
Seriously, booking hotels and flights is so much better on the web. You get multiple windows for easy flight and price comparisons, within and between providers.
I don’t understand people who use apps for this. It is such a pain.
No, I’m saying that the booking.com app, or the Skyscanner app or any of their competitors don’t support multiple tabs.
Their websites do (although even on new phones you're at a greater risk of a tab being purged and needing a reload, but you can still multi-tab on the mobile website).
Ah the difference here is that I can't use multiple tabs on my phone as they are too small. So tabs are only relevant to me on desktops and even then I will often use new windows.
I almost always book via apps. I can compare flights by looking at Kayak (app), then actually book it in the carrier app. I think the workflow just has to adapt to the tools you’re using, and trying to follow the same methods you’d use on desktop just don’t work. I don’t think either particular method is objectively worse than the other for every use case.
Not who you replied to, but I simply don't rely on my phone for anything where I would prefer more screen real estate, such as doing comparisons when buying flight tickets. I have never bought flight tickets on my phone, only on my computer. I prefer the bigger screen and keyboard for most things, actually.
> You are currently using a webapp that doesn't do this. It's called Hacker News, and it never asks me to login every time on my phone.
Every time I visit Hacker News on my iPad I'm logged out. Apple has decided that if you don't visit a website often enough it will expire all your cookies for the site.
In practice that means I can log in to HN while I'm at the cafe one weekend and be logged out by the time I visit the next weekend.
Passkeys do definitely make the mobile web experience better, but unfortunately they’re still not widely supported. I’m not saying mobile web apps can’t be good, but a native app allows for a lot of UX optimization.
There are also an increasing number of services which are ONLY available as apps now, including, but not limited to, many financial apps such as Revolut.
A big issue with this trend is that unlike the web, the whole Android ecosystem is a walled garden which is strictly controlled by Google. In principle you can run your own custom Android ROM, but in practice this will lock you out from any app which uses Play Integrity API to enforce Google's totalitarian regime which dictates what software YOU are allowed to run on "your" hardware.
You go to the NHS webpage and it works in the same way.
Login is better on the iOS app, as you can use Touch ID/Face ID instead of a user ID and password. Also, the webpage keeps asking about cookies, as it can't seem to remember the choice.
Unfortunately that seems to depend on who did the test or your GP.
There seem to be sites for your GP (mine does this via a .nhs.uk domain; it used to be via https://account.patientaccess.com/, which still shows appointments and still allows requests for repeat prescriptions, but no longer allows booking) or a hospital portal for results.
It's funny to read the negative replies to your comment on the shortcomings of web apps.
The browsers are controlled and manipulated by the likes of Apple and Google. These companies have a significant influence on the direction of browser features and limitations, often shaping them to suit their business interests. For example, Apple’s Safari and Google’s Chrome have been criticized for implementing features that reinforce their own ecosystems, such as limiting web push notifications or restricting certain web API functionalities to encourage users toward their native apps. This ultimately means that even in the browser world, the same forces that drive the app store monopolies can still control and restrict what’s possible, even if the web is inherently more open. So while web apps offer more flexibility than native apps in theory, the reality is that Apple and Google’s control over the browsers still limits the true potential of a completely open web.
> The browsers are controlled and manipulated by the likes of Apple and Google.
Who do you think controls Android and iOS native APIs?
Web standards at least have public forums and specs, with multiple parties involved. And all the major browser engines are open source and apps built for them are relatively cross-compatible.
During the earthquake in Bangkok on Friday, Grab (a local, superior version of Uber) helped me order a taxi and get my kids home. Needless to say, the cell network collapsed for most of the day; everyone wanted to know what was happening and whether their family and friends were safe. They definitely have a network layer very optimized for poor connections. I bet they can switch to UDP or something. I'm glad that it wasn't a web app.
99% likely they're using a REST API, which is... HTTP.
Even if it's gRPC or something more exotic, it'll be over TLS (you best hope it is).
You can have a webapp cached locally on your device. PWAs allow developers to create an SPA you can open from your homescreen, and to do that API interaction the same way as a native app.
I hope you and your family are well, and it's great that tech helped. But please don't think that because this tech worked in this instance, it can't be made safer and more secure.
It’s clearly for data collection. Take the yelp web app for example. It used to be much nicer than the native one. Then, they intentionally defeatured it until it was useless.
Also, this situation benefits the google-apple duopoly, since it means superior products (remember Windows Phone 8?) or privacy focused devices (FirefoxOS) have no chance of getting a foothold in the marketplace.
The objections I see in sibling comments are nonsense. Modern web supports high frame rates, developer control over the UI, etc, etc.
While many native apps could be web apps, you’re ignoring some very large reasons for native apps:
1. Better UX and responsiveness for users, including better offline use.
2. Using native hardware APIs. How are you going to do things that require on device video compression, or realtime graphics that are more advanced than GL ES, etc
3. Battery life and performance. A native app can use less power than a web view for doing its work, and it can also make use of better async/concurrency/threading than a web view allows for.
> The only benefits I can see of "Apps", are the developer get's access to private information they really don't need.
That's exactly the point. More developer control, less user control. Can't change cookie settings in an app, can't (easily) block ads, can't use developer tools to remove annoying UI elements, can't disable phone home mechanics, can't prevent the developer from profiling you.
GP used hyperbole but was not all wrong. The issue is that most native apps could very well have been web apps. I appreciate that on iOS adding a web app to homescreen is possible, albeit obscure and not many use that feature. I hate that Firefox never really supported PWA for some unfathomable reason.
Exactly. But GP deliberately said all, not most or many.
GPs comment is something that people in politics would called sensational. Extreme rhetoric is great for upvotes because it stirs emotions but it’s not rational.
I think it’s completely justifiable, since it illustrates the core of the idea. Also, HN users, unlike voters, can see through the framing. If anything, it’s a great way to spark a debate.
I think the name "browser" is basically what puts people on the wrong track of interpretation. Browsers have long been fully fledged VM sandboxes, which incidentally happen to also embed HTML and PDF interpreters natively.
The commenter said most apps. The use case you mentioned requires computing resources; you could do the whole thing in the browser too, but it is not an efficient way.
But in the case of delivery apps or finance apps, you don't need much compute, as they can work exclusively with APIs.
There is nothing inherently evil about an app, or inherently good about a website; it's only because historically we have allowed crappy app permission structures and allowed apps to ask for things they don't need.
Apps are faster, are more predictable (no auto-reloading or rendering issues) and generally perform better IMO.
On the other hand, in reality, you're correct. I think the NYTimes app will collect more data from me than the NYTimes website.
For me, there are a lot of applications that I want to be able to load regardless of whether I have a connection to the Internet or not: calendar, notes, mail etc. They can sync/send/whatever whenever I am next online.
Ah yeah. While this is mostly implemented terribly, a web app can absolutely do this for you using service workers. So you can install a webapp to your homescreen and use it without an internet connection at all.
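The moving parts are small (file names here are placeholders): the page registers a service worker, and a linked manifest is what makes the browser offer installation to the home screen:

    // page-side: register the service worker that will serve the app offline
    if ('serviceWorker' in navigator) {
      navigator.serviceWorker.register('/sw.js');
    }
    // the HTML also needs <link rel="manifest" href="/manifest.webmanifest">
    // before the browser will offer "Add to Home Screen"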
Emulate a network layer to serve a pre-packaged bundle. Neat "platform", but as a developer no thanks.
While apps are spying etc., making them is usually a no-brainer compared to churning and leaky web stacks. And probably not a single time has a webapp loaded for me when I tried it outside, standing in the wind, trying to figure something out. It was always an app that started and helped, and didn't ever scroll horizontally while doing so.
You seem to miss the fact that most web app experiences are inferior to those of native apps.
The disadvantage of native is the barrier to install. Once that's done, the experience for the user is simply superior: a true native experience, fast and predictable. As a developer, it's easier to build those types of apps as well.
People who haven't used iOS might not understand this though as they've never seen "how things should be".
Becoming the middle man is the default model that supports scale. No one has come up with anything else to support a world where avg disposable income is close to 0
I worked for a company that used Sencha back in the day and wrote the first React integration over their form/datagrid components in 2013. React ate their lunch
Very narrow take; it's so far-fetched I would consider this a bad-faith comment.
How could you possibly consider intensive games to be "simply" web apps? How about network apps like VPNs or Wi-Fi analyzers? Have you really not come across such apps, or are we meant to think every app is a TODO application?
Both web and native has been driven by the same corporate forces, the argument here should be technical only - what can you do on native that you can't on the web. Mixing this technical matter with corporate policies muddies the waters.
Maps and navigation apps? Desktop integration and sync apps?
That said most of the time you are right.
I am fairly convinced that some apps are just wrappers around web apps. The Virgin Money (UK bank brand) app used to ask for cookie permissions on launch and felt very much like their website used to (until the website was removed and they went app-only).
For one, you couldn't access those webapps without a browser, so that's the need for one app. It would also be a bit annoying if you had to load a webpage when trying to dial a number
Or am I not understanding what you mean when you use the quoted name "Apps"?
Many things need to be an app, but so, so many do not.
Many apps are apps just because they can collect your data and create walled gardens. It is harder to create extensions for existing apps; for web pages it is easier.
Access to Bluetooth devices is a good reason to have an app. I definitely do not want a Bluetooth API in my browser (although Chrome does have something in that direction, I think it's a bad idea)
Any kind of offline cryptography. Imagine Apple Pay being a web app. So all sorts of digital signatures, documents, checks, payment codes and vouchers, tickets, etc.
IMO this is in the range of "why do we use machines for transport if we all have legs". Technically true, but applications do more than only UI.
I've heard this argument for the past 30 years (we won’t be using apps, everything will be remote console/terminal/webpage/web). Chromebooks were meant for web-first access, and yet native apps are still alive and kicking.
Push notifications. Apps have them on by default, websites have them off by default. 100% of Temu's valuation is because they pester users all the time with nudges to buy stuff, which works.
Normies don't turn off notifications. Over the last few years all my relatives have picked up smart watches, (thanks to cell carriers upselling them hard during phone replacements) and in any given conversation at family events they'll be glancing at their wrist every 100 seconds.
Registering for push notifications ought to be a protocol much simpler and lightweight, compared to this spinning up a virtual machine and running a downloaded binary for each channel of notification you wish to receive.
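For comparison, the web side of that registration is already just a few calls once a service worker is installed (a sketch; the '/api/subscribe' endpoint is made up):

    async function enablePush(vapidPublicKey) {
      const reg = await navigator.serviceWorker.ready;
      const sub = await reg.pushManager.subscribe({
        userVisibleOnly: true,
        applicationServerKey: vapidPublicKey, // your server's VAPID public key
      });
      // hand the subscription to the server that will send the pushes
      await fetch('/api/subscribe', { method: 'POST', body: JSON.stringify(sub) });
    }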
This makes me wonder if Google would let rot creep into the web experience (or possibly already has) to encourage people to use apps, and also to encourage developers to build on their platform.
To me a mobile app is usually just a shorter web app that you can’t zoom on
Edit: and I’ll venture a guess that since mobile apps can’t use things like ad blockers, companies probably prefer them. More control over what you look at.
Push notification is the big one. Yes, there is web push, but that's hardly scratching the surface of feature completeness. And incentives to change that aren't really there.
Yeah, good luck writing a screen reader, a demanding mobile game, a (local) music player, or a warehouse parts lookup app, supporting fully offline use and barcode reading functionality.
In 2025? Sure, you can do some (but not all) of that in a browser? In 2010, when those systems were becoming popular? Absolutely not a chance.
People forget that Apple initially tried this exact approach. On the first iPhone, that's how you were supposed to do apps. People wanted native so much that they were willing to go the extra mile, jailbreak their device, document the undocumented iPhone SDK and write their own toolchain. The user demand for native was clearly so overwhelming that Apple finally relented and gave in.
Even a few years later, Facebook tried hard to have a single, cross-platform HTML5 website instead of bothering with apps. Even then, browsers just weren't there yet, and they probably had the best engineers and resources on that project one could have had for any money.
Even the most basic app, a notepad, I often prefer native. When I go between Google Keep or Notion and Apple Notes, I can tell the difference. If the text is long enough, the web apps just cannot load the content.
Just to confirm:
I dumped all of my notes from my insanely large Apple Notes (about 16000 lines of text) and pasted them into Google Keep, Notion, and Google Docs. With the exception of Google Docs, the rest of them flat out froze and I had to kill my browser. Stop trying to tell us that the browser is the answer to everything when most web apps can't do the job of Notepad.exe or vi.
So, one out of three webapps that you tested could handle this much text. It suggests that the problem for the other two is their implementation, rather than any limitation of the browser.
Of the two that failed, did you also try the app versions to see if they failed too? I really doubt the Notion app could handle 16000 lines of text.
Honestly, I wonder the same. App stores take a big percentage cut; I believe Apple's is 30%? Surely that number is big enough to justify spending the resources on a mobile-first site?
And it’s also pretty clear that humans typically don’t crank this type of thing out in 15min. The current narrative around AI is a lot more about augmenting our work and making parts of it faster. It can create buggy code in almost no time, which leaves plenty of time for bug fixing, iterations, and optimization.
Humans can also create buggy code quickly, but it takes a while! And you still have to do bug fixing and optimization after you think you’re finished.
> This is why my kid isn't going to watch YouTube. If and when we decide to show her any children's show, it'll be from a manually curated set of videos downloaded and streamed from a NAS. In my opinion, it's irresponsible to expose children to modern advertising.
I think we need to be careful with this approach. It's also irresponsible to never expose them to modern advertising. Unfortunately, modern advertising does exist, and if they are exposed to it first as a teenager, or worse as an adult, they are likely to be scammed or convinced to buy things they don't need.
Maybe a better way is, instead of banning anything, to supervise and explain to our children what they're seeing.
The problem is that advertising is insidiously effective. Most people don't think ads work on them, yet the results of ad campaigns demonstrate that's plainly false. And this is even more true for children who have extremely poor reasoning and logic abilities, and are going to be being targeted by ads specifically designed to exploit their instinctive impulses. This is unlikely to be a battle that parents can win.
I think this is vaguely akin to exposing your child to gambling and explaining the impulses/desires it creates so they don't get addicted to it. But that's just not how it works.
> Maybe a better way is, instead of banning anything, to supervise and explain to our children what they're seeing.
Now that I have 3 of them, and the oldest is almost 6, I guess I need to update the article on how this panned out.
Long story short: replacing Spotify and YouTube for music with yt-dlp + Audacity (to cut out ads, padding, channel jingles and other nonsense, and to normalize volume) worked to an extent, and I built up a small library for my eldest daughter, which I then reused with the younger ones. However, once the oldest one went to kindergarten, she immediately contracted the Paw Patrol fever, and we couldn't keep our kids oblivious to children's programming anymore.
I mean, you can try to fight Paw Patrol, but you can't win in the end, and the more you resist, the harder it gets for your kid to make friends. Same with other major franchises. It's the same kind of problem as teenagers and smartphones, just a couple of years earlier.
Anyway, from that point on, curating a local library stripped of advertising became too much of a hassle. Instead, we embraced Netflix and YouTube - I just keep the ad-blocker running to reduce ad exposure, but since advertising is everywhere and can't be avoided, we focus on teaching our kids how to process it.
As evidenced by TFA, I do have some strong views on advertising in general, but I'm not dumping them on my kids. Letting them see advertising as just a fact of life seems to make it less impactful. Turns out, most ads are 100% boring to my kids (after we get past the "why is the man doing ${something they never saw someone do before}" questions); my oldest one actually figured out on her own where the "skip ad" button is on YouTube and how to operate the mouse to click it and resume the song that was playing. And those that aren't boring, we actually learned to enjoy.
I can go on and on, but I'll leave it for an eventual blog article.
Just to confirm, Ollama's naming is very confusing on this. Only the `deepseek-r1:671b` model on Ollama is actually deepseek-r1. The other smaller quants are a distilled version based on llama.
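Concretely, at the time of this thread the default tag resolved to a small distill, so these two commands fetch very different models:

    ollama run deepseek-r1        # default tag: a distilled model, not R1 proper
    ollama run deepseek-r1:671b   # the actual DeepSeek-R1 (hundreds of GB of weights)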
This, according to the Ollama team, seems to be on purpose, to avoid people accidentally downloading the proper version. Verbatim quote from Ollama:
> Probably better for them to misunderstand and run 7b than run 671b. [...] if you don't like how things are done on Ollama, you can run your own object registry, like HF does.
It’s definitely on purpose. But if the purpose was to help users make good choices, they could actually give information and explain what is what, instead of hiding it.
I think if you find Ollama useful, use it regardless of others say. I did give it a try, but found it lands in a weird place of "Meant for developers, marketed to non-developers", where llama.cpp sits on one extreme, and apps like LM Studio sits on the other extreme, Ollama landing somewhere in the middle.
I think the main point that turned me off was how they have their custom way of storing weights/metadata on disk, which makes it too complicated to share models between applications, I much prefer to be able to use the same weights across all applications I use, as some of them end up being like 50GB.
I ended up using llama.cpp directly (since I am a developer) for prototyping and recommending LM Studio for people who want to run local models but aren't developers.
But again, if you find Ollama useful, I don't think there is any reasons for dropping it immediately.
Yeah, I made the same argument but they seem convinced it's better to just provide their own naming instead of separating the two. Maybe marketing gets a bit easier when people believe them to be the same?
Ollama has their own way of releasing their models: when you download r1 you get the 7b, because not everyone is able to run the 671b. If it's misleading, it's more likely due to the user not reading.
I'm not super convinced by their argument to blame users for not reading, but after all it is their project so.
> It is very interesting how salty many in the LLM community are over Deep Seek
You think Ollama is purposefully using misleading naming because they're mad about DeepSeek? What benefit would there be for Ollama to be misleading in this way?
The quote would imply some crankiness. But yeah, it could just be general nerd crankiness too, of course. Maybe I shouldn't imply or speculate too much about the reason in this specific case.
It's also not helping the confusion that the distills themselves were made and released by DeepSeek.
If you want the actual "lighter version" of the model the usual way, i.e. third-party quants, there's a bunch of "dynamic quants" of the bona fide (non-distilled) R1 here: https://unsloth.ai/blog/deepseekr1-dynamic. The smallest of them is just able to barely run on a beefy desktop, at less than 1 token per second.
I used to love Sublime Text and used it as my daily driver. Even bought a license.
But just downloaded it again, and while it definitely felt snappy, after 15 minutes I still couldn't find a way to get TypeScript type checking working, or even any type of JavaScript/TypeScript autocomplete.
If you want IDE-like behavior you'll need to install the TypeScript LSP: https://packagecontrol.io/packages/LSP-typescript. We do have our own limited language-agnostic indexing and auto-complete if you open your projects folder; it relies on existing code to provide suggestions.
Oh, that's frustrating. I think it'd be great to have some more concrete guides for modern Sublime. Maybe I'll write one about getting TypeScript and everything working starting from scratch
Why are these only 1.4 GHz, when a Raspberry Pi gets to 2.4 GHz? Is it a limitation of the cost of scale that prevents building faster chips? Or does the architecture not really support faster chips?
I really hope that RISC-V can take over as a modern architecture, adding some competition to Intel/AMD and Arm. But they'll need to be able to offer faster chips, or at a minimum more than 4 cores.
Also, does anyone know the rate of progress? I believe 10 years ago these were at 0.5 GHz?
Seems to be limited mostly by the power and size constraints, while it's also fabricated in an older node. The target market seems to be simple embedded devices not needing SIMD instructions, which is less constrained by the software availability. RISC-V is still a very new architecture.
This same point is made in threads discussing how wayland protocol is 16 years "old". I think it's different if the system starts out as a research project rather than a commercial project, because the time until a usable implementation is much greater. For example, I would say that riscv is "newer" than loongISA/loongarch despite being slightly older in a literal sense.
If you look at an arch like x86 or ARM it was designed right before chips were released, and then extended over time. The same goes for the X protocol, it simply extended previous versions.
If you are designing something from the ground up to avoid the inherent problems of an existing system, it is reasonable to take time and research design problems to make sure you don't recreate the same issues (which would defeat the point of the redesign). It doesn't compete on the same time-frame as an extension of an existing system.
I think it’s a bit problematic to say ARM is 30-ish years old. The company is 34 years old but 64 bit Arm (AArch64) which is really very different to its predecessors was announced in 2011 so arguably only 14 years old.
Yes, in a sense. But that is not an apt comparison at all, really.
ISA design is a complex endeavor. AArch64 benefited from multiple factors directly attributable to Arm's prior art, domain knowledge, and market positioning.
There is a huge distinction between a globally recognized and dominant ISA engineering firm coming out with a “new” ISA that their engineers had been prepping for for years, and the effort required to create a novel ISA and ecosystem from scratch.
One is just another day at the office, while the other is a very riscy endeavor that requires amassing the talent, creating incentives, creating an engineering culture, and trying to create a niche in a market that was arguably fully populated by other options. And then, you still have to create an entire family of ISAs to match various specification levels.
Well, that is definitely true. I guess that the quibble is with the assumption of equivalence….
(which you didn’t explicitly state, so I apologize for thinking it that way)
….of the launch of AArch64 with the launch of the RISC-V ISA.
They were both theoretically clean slate designs, but one was made by a bunch of academics and the other by a company with decades of ISA design expertise. I’d expect the latter to be much, much more mature, all other things being equal.
At any rate, what RISC-V really did so far was make 8-bit MCUs irrelevant. I used to use a lot of 8-bit parts even with the M0 around, but now with chips like the CH32V003 and family, it’s just ridiculous to even contemplate.
I mean, you can hook up an 8-pin MCU and a couple of resistors to a VGA monitor and a keyboard and have a computer in a heat-shrink tube that walks circles around my ancient Apple II for $0.60.
And if you want WiFi, BLE, and some other wireless stuff, with 3x the speed and 100x the memory, the RISC-V ESP32 chips come in at about $1, and they can do pretty strong edge AI. It’s all just silly at this point, and it’s RISC-V that caused that sea change.
No worries and thanks for sharing your experiences with RISC-V MCUs - really interesting.
FWIW I agree that AArch64's launch was not the same as the RISC-V launch. There has clearly been a lot more work required to boot the RISC-V ecosystem and they have made amazing progress. Arm had the advantage of incumbency and a lot more resources.
At the same time, I think they have oversold progress on application-level processors, and I don't think this does RISC-V any favours at all. Arm has a lot of experience, and the general tone from some of the RISC-V commentary is that they got it 'wrong' with AArch64, which, to be charitable, is unproven.
“As of June 2019, version 2.2 of the user-space ISA[46] and version 1.11 of the privileged ISA[3] are frozen, permitting software and hardware development to proceed. The user-space ISA, now renamed the Unprivileged ISA, was updated, ratified and frozen as version 20191213”
So, it’s more like 5 years old, compared to ≈40 for 32-bit x86, ≈20 for 64-bit x86.
But even that isn't the true starting gun for anything but basic MCUs. For high performance, it's RVA22. The relevant specs were only ratified in December 2021.
It takes 3 years from IP to chips, and thus we are seeing the first RVA22 chips now.
> This SoC has a 1.4 GHz, quad core P550 cluster with 4 MB of shared cache. The EIC7700X is manufactured on TSMC’s 12nm FFC process
> Next up is TSMC’s 12 nm FFC manufacturing technology, which is an optimized version of the company’s CLN16FFC that is set to use 6T libraries (as opposed to 7.5T and 9T libraries) providing a 20% area reduction. Despite noticeably higher transistor density, the CLN12FFC is expected to also offer a 10% frequency improvement at the same power and complexity or a 25% power reduction at the same clock rate and complexity.
They optimised for density and power, not frequency. A lot of the benefit they're claiming comes just from this.
> Why are these only 1.4ghz frequency, when raspberry pi gets to 2.4ghz?
Milk-V Megrez is shipping the same SoC running at 1.8 GHz.
Intel's Horse Creek chip with the same cores ran at 2.4 GHz, but Intel is in trouble and cancelled non-core activities. A working board was shown at Hot Chips 23.
Clock speed depends on the SoC integrator at least as much as the core designer, and the process node it is made on. And the packaging thermal envelope.
> Or does the architecture not really support faster chips?
Of course not. From a technical point of view it's essentially identical to Arm64, and with the same financial investment and comparable engineers it will run at the same speed.
The P550 is a very early RISC-V design, announced in June 2021, just a few months after the RVA22 spec it implements was published. Three to four years to go from core to SBC is normal in the industry, including for Arm's A53, A72, A76.
SiFive has designed and shipped two or three major generations of more advanced cores since then, in the P670, P870, and P870-D.
Although that was at announcement in June 2022. SiFive does continuous improvement and quarterly updates, which is how the U74 in the JH7110 ended up with Zba and Zbb, which the FU740 (and I think the JH7100) doesn't have, but missed out on the L2 prefetcher that the next quarterly update got.
But Intel's Horse Creek did (and also at a higher clock; they said 2+ GHz, but I think they demoed it at 2.2 GHz at Hot Chips and were shooting for 2.4 in production):
Does Intel/AMD/ARM really need more competition? Do you think they’re stagnant?
As I and others have said before, successful consolidation around RISC-V is ultimately a gift to China. Maybe you’re for that; as an American, I am not.
It's a gift to everyone if an actually open architecture is usable.
But it's especially a gift to sanctioned regimes, because they can more easily use this architecture for home grown chips that become desirable when mainstream chips are embargoed or under threat of embargo.
Yes, there's still fabrication, but China and Russia have some fabrication going, just not at the latest nodes. Starting from an open standard makes it a lot easier than if they have to clone an architecture/chip or make a whole ecosystem of architecture and software.
Also, do Chinese companies have licenses for all the ARM cores they produce? I assumed they don’t. They traditionally don’t care about IP, so it’s a wash anyway.
Because the tooling is make or break. When LLVM, Linux, rust, debuggers, Android etc etc support it you have a real chance. Having an ISA that one company doesn’t own means you can develop chips that plug in, although all the extensions of RISC-V make that a little harder.
I think that the development of RISC-V will eke out greater market share for Chinese manufacturers, which will have a negative effect on the global order.
I am also skeptical that it will lead to more open designs, but perhaps it could increase competition enough in the chip design space that more open chip designers can make a space for themselves, especially if the business of chip fabrication is isolated from design.
China tried to create a homegrown CPU 15-20 years ago with their MIPS variant but that died out. I think this time they are much wiser and will pull it off. In 5-8 years we may have China CPUs dominating the Asian market at least.
RISC-V leading to more open designs is wishful thinking, plus probably a large dose of PR. MIPS has been open for years, and how many open-source MIPS designs have we seen so far?
Presumably 100% of supply at this point is going into chinese government/military projects where not using a western design with possible backdoors is worth a price premium.
1. https://about.gitea.com/