Hacker News | Tehnix's comments

>MitID doesn't work on rooted android phones, or those running a custom rom.

I find these arguments quite strange. A big part of MitID and similar services is to protect you against fraud. The most vulnerable in society (e.g. old people) aren't running these kinds of devices, and I'd rather we optimize for the general population and the people most at risk, rather than people running some weird setup that is almost identical to setups a scammer would run.

What privacy aspects are you lacking here? For all the services that MitID connects you to, there are government required responsibilities for these companies to track all of this information anyways and be able to provide it to the government if needed. That goes for banking, public services, telecom, etc. And this is in no way unique to Denmark, it's how most countries operate. Denmark has just acknowledged this and decided to make it easier.

Did you expect your UK bank to not be required to know who you are and be able to track and keep records of literally all financial interactions you have with them and their services? I'm a bit confused on what society you are comparing against.


I see a few people here complaining about the idea of a central digital identity service.

As a Dane who has lived in other countries, MitID is insanely superior to anything I've ever tried. It simplifies so many touchpoints with the government, and is honestly such a good upgrade going from nothing -> physical NemID card with codes -> digital MitID (literally "My ID").

The only real disruption I'd say is if you happen to be buying something online that triggers the 3DS prompt (an additional security layer to prevent cards from being stolen or abused in scams). In Denmark the 3DS prompt, for VISA at least, uses MitID to verify you are the owner of the card, so that obviously won't work when MitID is down.

I'll say, it has otherwise been surprisingly stable, and disruptions usually aren't a big impact (I literally wouldn't have known about this one unless I saw this HackerNews post).

As for a centralized identity system: I personally see this as an acceptable contract for living in a society. Most countries have SSNs anyways; your taxes and many other things are tied to this. Centralizing this identity allows the government to streamline so many things to give a better service to their citizens. For example, all official communication goes to your "DigitalPost" email inbox, you verify your identity with "MitID", and every person or company has a registered "NemKonto" tied to them for any salary or government payouts.

Maybe people get tripped up on the concept that your government should actually care about the service it delivers. That's probably already the point where we diverge when talking about whether these things are a good idea or not.


> I see a few people here complaining about the idea of a central digital identity service.

Digital identity service is fine for gov services. It’s not OK as a hard requirement for anything else such as banking.

Digital ID in my country has been down for about 7 days and counting. The iOS app no longer opens after the recent update. I cannot pay tax without the digital ID app working, but I can do banking and everything else.


> It’s not OK as a hard requirement for anything else such as banking.

What’s the alternative that you think is okay for that then?

Certain businesses have regulatory requirements to know and verify your identity (banking, telco).

A UK poster gave an example of how they need to mail the bank a copy of their passport and other private information.

I’d certainly much prefer simply using a digital login solution as an alternative to that. They can verify I am who I say I am, without needing my passport which I would consider a much bigger privacy invasion to hand out.


I have an electronic certificate for signing and verification on my physical national identity chip card. You use it either physically or online, but only when identity confirmation is required.

> It’s not OK as a hard requirement for anything else such as banking.

It is in fact not a hard requirement. It just happens that when you have a relatively cheap and efficient digital identity, which is by definition trusted by the government, banks will use that to reduce risk. It's not that they can't verify your identity any other way, this is just the obvious and easy one.


> The only real disruption I'd say is if you happen to be buying something online that triggers the 3DS prompt (an additional security layer to prevent cards getting stolen/scam). In Denmark the 3DS prompt for VISA at least uses MitID to verify you are the owner of the card, so that'll obviously not work when MitID is down.

If you use Lunar, the 3DS prompt uses the Lunar app and not MitID.


Bunch of negative sentiment in here, but I think this is pretty huge. There are quite a lot of applications where low latency is a bigger requirement than having the latest, most capable model out there. Anywhere you'd wanna turn something qualitative into something quantitative, without making it painfully obvious to a user that you're running an LLM to do this transformation.

As an example, we've been experimenting with letting users search free-form text, and using LLMs to turn that into a structured search fitting our setup. The latency on the response from any existing model simply kills this; it's too high to be used for something where users are at most used to the delay of a network request + very little.

There are plenty of other use cases like this.
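To make the "free-form text into structured search" idea concrete, here's a hedged sketch of the receiving end: the model is prompted to emit JSON matching a small schema, and the application defensively parses it, degrading to a plain keyword search if the output is malformed. The `StructuredSearch` shape and `parseSearchResponse` name are made up for illustration, not from the original comment.

```typescript
// Hypothetical schema the LLM is prompted to produce as JSON.
interface StructuredSearch {
  query: string;       // free-text remainder of the search
  category?: string;   // e.g. "shoes"
  maxPrice?: number;   // e.g. 50
}

// Defensively turn the model's raw text into a StructuredSearch,
// falling back to a plain-text query if the JSON is malformed.
function parseSearchResponse(raw: string, original: string): StructuredSearch {
  try {
    const data = JSON.parse(raw);
    return {
      query: typeof data.query === "string" ? data.query : original,
      category: typeof data.category === "string" ? data.category : undefined,
      maxPrice: typeof data.maxPrice === "number" ? data.maxPrice : undefined,
    };
  } catch {
    // Model returned non-JSON: degrade to a plain keyword search.
    return { query: original };
  }
}
```

The fallback path matters precisely because of the latency point above: if you also have to retry on malformed output, the round-trip cost compounds.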


OP here.

Really couldn't find many resources on how to actually use subsecond in your own Rust applications for a better development experience, so I thought I'd share the step-by-step I just did to get our own project up and running with it.

I'm sure there are some optimizations that could be done to hot-reload less of the code, but I think this is a pretty good starting point for people who are just looking to "reload my server on change, without killing it during the reload".

Let me know if you have any questions or things you'd like me to try out!


Nothing has opened my eyes more to how much mainstream media is distorting reality to fit their narrative, than the genocide Israel is committing in Gaza.

You can sit with literal video of an incident, and then see media headlines tell a completely different story than what actually happened.

Social media in our generation has been a weird amplifier of both misinformation as well as truth from the ground that contradicts misinformation in the media.

My selection of topics I trust media to report on has greatly narrowed down to ones that are completely apolitical, which is sad (they’ve always been biased, but at least I felt you could tell that they were biased and read through it).


With investments of these huge amounts (similar to Anthropic's recent investment), do they actually get a full 1.7B€ deposited into their bank account? Or does it work in some other way?


It works whatever way is agreed upon between them and the investors. For such large amounts it’s unlikely to be pure cash (there’s likely some amount of services somewhere in there), and they won’t be calling for all that cash at once.

The cash that is guaranteed is sent as soon as the investee needs it (they do what is called a capital call). Early stage startups and investments just do one capital call for the full amount, but larger amounts are often committed for periods of time; this also helps the investors schedule their own cash flow: for example if I have 500m this year and 500m next year, I can invest 1b in you, given the right schedule.


Anthropic has much more funding than that. The most recent round was at $13B and the one before was at $3.5B. Now imagine that GPT received $40B in one round!


GPT is not a company


Neither is OpenAI, but here we are.


OpenAI, Inc. is a company, and it owns other companies including OpenAI Holdings, LLC and OpenAI Global, LLC. https://en.wikipedia.org/wiki/OpenAI


>In May 2025, the nonprofit renounced plans to cede control of OpenAI after outside pressure.


Regardless of non-profit shenanigans, OpenAI is an entity. GPT is a type of LLM, which is not specific to OpenAI, other companies use this as well.


The nonprofit is OpenAI, Inc., a company: https://opencorporates.com/companies/us_de/5902936. Look at how many times the word "company" is used in the Wikipedia article.


my bad, but you know what I meant


I'm also wondering this. It also doesn't seem to be a coincidence that ASML is an integral part of the semiconductor value chain.


> Ah, the penny drops. The idea that you can’t run a traditional server and must rely on serverless vendor if you’re “serious”

That's not at all how you should read this. They later on give an example of exactly what kinds of problems you'll run into once you start needing to horizontally scale your Next.js servers (e.g. as pods in k8s, which is not serverless):

> The issue of stale data is trickier than it seems. For example, as each node has its own cache, if you use revalidatePath in your server action or route handler code, that code would run on just one of your nodes that happens to process that action/route, and only purge the cache for that node.

Seeing as a Node.js server running Next.js to serve SSR or ISR (otherwise you'd just serve static files, which I personally prefer) is not known for great performance, you will quickly run into the need to scale up your application once you hit any meaningful amount of traffic.

You can then try to keep scaling vertically to avoid the horizontal pains, but even that has limits, seeing as Node.js is single-threaded and will run into issues with the templating work of stringing together HTML simply taking too long (that is, compute will always block; only I/O can be yielded).

The common solution for this in Python, Ruby, and JS/Node.js is to run more instances of your program. Could be on the same machine still, but voila! you are now in horizontal scaling land, and will run into the cache issues mentioned above.
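The "compute will always block, only I/O can be yielded" point can be demonstrated in a few lines. This sketch (illustrative, not from the article) uses a synchronous busy loop to stand in for CPU-bound template rendering, and shows that a timer due after 10ms cannot fire until the loop releases the event loop:

```typescript
// Synchronous work (like stringing together HTML) blocks Node's single
// event loop, delaying every other callback and request on the process.
function busyWork(ms: number): void {
  const end = Date.now() + ms;
  while (Date.now() < end) {
    // simulate CPU-bound template rendering; nothing can be yielded here
  }
}

const start = Date.now();
setTimeout(() => {
  // Due after 10ms, but only fires once the event loop is free (~100ms+).
  console.log(`timer delayed by ~${Date.now() - start}ms`);
}, 10);

busyWork(100); // blocks the event loop for ~100ms
```

This is exactly why the usual remedy is more processes rather than more threads, which is where the horizontal-scaling cache problems begin.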

There was not really anything in the article that should have led you to believe that this was a "serverless only" issue, so I think the bashing against Netlify here is quite unwarranted.


> (e.g. as pods in k8s, which is not serverless):

> There was not really anything in the article that should have led you to believe that this was a "serverless only" issue, so I think the bashing against Netlify here is quite unwarranted.

It's not, because you can use an external cache like Redis[1]. You can scale to hundreds of instances with an external Redis cache and you'll be fine. The problem is that you can't operate at Netlify's scale with a simple implementation like that. Netlify can't afford to run a Redis instance for every Next.js application without significantly cutting into their margins (not just from compute cost; running and managing millions of Redis instances at scale won't work).

Clearly Vercel has their own in-house cache service that they have priced into their model. Netlify could run a Redis instance per application, though more realistically it needs its own implementation of a multi-tenant caching service that is secure, scalable, cost-effective, and fits their operational model. They are not willing to invest in that.

[1] https://github.com/vercel/next.js/tree/canary/examples/cache...
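For context on what the linked Redis example wires up: recent Next.js versions let you point the `cacheHandler` option in `next.config.js` at a module exporting a class with `get`, `set`, and `revalidateTag` methods. A minimal sketch of that shape, with an in-memory Map standing in for the shared Redis store (names and details simplified; this is not the vercel/next.js example verbatim):

```typescript
// Sketch of the cache-handler shape Next.js expects (get/set/revalidateTag).
// The static Map stands in for a shared Redis instance: because every node
// would talk to the same store, revalidateTag purges for all nodes at once.
type CacheEntry = { value: unknown; tags: string[] };

class SharedCacheHandler {
  private static store = new Map<string, CacheEntry>();

  async get(key: string): Promise<unknown> {
    return SharedCacheHandler.store.get(key)?.value;
  }

  async set(key: string, value: unknown, ctx: { tags?: string[] } = {}): Promise<void> {
    SharedCacheHandler.store.set(key, { value, tags: ctx.tags ?? [] });
  }

  async revalidateTag(tag: string): Promise<void> {
    // Drop every entry carrying this tag, regardless of which node set it.
    SharedCacheHandler.store.forEach((entry, key) => {
      if (entry.tags.includes(tag)) SharedCacheHandler.store.delete(key);
    });
  }
}
```

Swapping the Map for Redis calls is what makes the cache consistent across horizontally scaled instances; that per-application Redis is the operational cost being discussed above.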


Interesting, and definitely something platforms must take into consideration.

Now, back to the post: implementing a custom cache is not something Netlify is strongly complaining about. They are mostly asking for some documentation with rather stable APIs. Other frameworks seem to provide that.


> Netlify could run a redis instance per application, though more realistically it needs its own implementation of a multi-tenant caching service that is secure, can scale, cost effective, and fits their operational model. They are not willing to invest in that.

But they have done that, as they say in the post.

Disclosure: used to work at Netlify, now work at Astro


Hmm, beyond a bug they had in bun between versions 1.0.8 and 1.1.20[0], bun has otherwise worked perfectly fine for me.

You have to make a few adjustments, which you can see here: https://github.com/codetalkio/bun-issue-cdk-repro?tab=readme...

- Change app/cdk.json to use bun instead of ts-node

- Remove package-lock.json + existing node_modules and run bun install

- You can now use bun run cdk as normal

[0]: https://github.com/codetalkio/bun-issue-cdk-repro
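Concretely, the first adjustment means changing the `app` entry in `app/cdk.json` so CDK invokes bun instead of ts-node. Something along these lines, where the `bin/app.ts` entrypoint path is an assumption about the project layout (check the repro repo for the exact value):

```json
{
  "app": "bun run bin/app.ts"
}
```

After that, removing `package-lock.json` and `node_modules` and running `bun install` gives you a clean bun-managed dependency tree, and `bun run cdk` works as normal.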


If you're not a Medium member, I've included a link in the start of the post where you can read it for free :)


In case you're not a Medium member, there's a link to read it for free right at the beginning of the post :)

