
There are similar statements from other people in the computing field around that time

https://www.sciencebase.com/science-blog/predictive-text-dar...

Sir Charles Darwin (grandson of the naturalist) who was head of the UK’s computer research centre, the NPL (National Physical Laboratory) said in 1946:

“it is very possible that … one machine would suffice to solve all the problems that are demanded of it from the whole country”

Douglas Hartree, Mathematician and early UK computer pioneer said in 1950: "We have a computer here in Cambridge, one in Manchester and one at the [NPL]. I suppose there ought to be one in Scotland, but that's about all."

This 1969 talk by Lord Bowden about the 1950s explains the thinking behind that statement:

https://www.chilton-computing.org.uk/acl/literature/reports/...

I went to see Professor Douglas Hartree, who had built the first differential analysers in England and had more experience in using these very specialised computers than anyone else. He told me that, in his opinion, all the calculations that would ever be needed in this country could be done on the three digital computers which were then being built - one in Cambridge, one in Teddington and one in Manchester. No one else, he said, would ever need machines of their own, or would be able to afford to buy them. He added that machines were exceedingly difficult to use, and could not be trusted to anyone who was not a professional mathematician, and he advised Ferranti to get out of the business and abandon the idea of selling any more. It is amazing how completely wrong a great man can be.


It's easy to laugh at quotes like this, but they really had a different understanding of the word computer than we do. These machines were in fact huge and required specialists to use and didn't really do much except calculate. It's fairer to say that they were misjudging what computers would become.

I'm not sure I can say that I would have been able to look at an early 1950s computer and imagine a 1960s DEC or IBM machine under every bank, or whatever, much less desktop publishing or Defender of the Crown...


Oh yes definitely. But it's fascinating because it's looking back at the dawn of a completely new thing in the world, and how hard it was for people to see where it was going to go.

Interestingly if you look back at Alan Turing's writing around that time, he seemed to have a much better grasp of what a _big_deal_ this was all going to be. It would be amazing to go and fetch Turing with a time machine and bring him to our time. Show him an iPhone, his face on the UK £50 note, and Wikipedia's list of https://en.wikipedia.org/wiki/List_of_openly_LGBT_heads_of_s...


> It's easy to laugh at quotes like this, but they really had a different understanding of the word computer than we do. These machines were in fact huge and required specialists to use and didn't really do much except calculate. It's fairer to say that they were misjudging what computers would become.

Or were they predicting what is yet to come.

"These machines were in fact huge and required specialists to use"

Sounds like Azure to me.


I clicked on this because of the crazy title but it's actually a really insightful article, e.g. "Conversely, there are people with commitment issues; they want to experiment non-stop and thus have no faith in robustness." ... like there's this belief that bugs will just happen anyway so why worry about them. But the author's point is that a little bit of extra thought and work can make a lot of difference to quality.


> the author's point is that a little bit of extra thought and work can make a lot of difference to quality

Care to bring home the thesis on how that’s actually really insightful?


There are two examples that come to mind:

I’ve caught multiple production bugs in code review by just carefully reasoning about the code and changing some variable names to match my mental model. I didn’t quite understand how bad it was, but I could tell it wasn’t right, so I left a comment saying “this seems wrong”. What happened after? It was merged without being addressed, and it triggered a P1 regression in production two weeks later. Like the author said, it takes time and energy to truly understand a program and think through all the modes of execution. BUT I think it’s a worthwhile exercise. It just doesn’t really get rewarded. In my experience, this happened at least twice a year over my last 10 years of working in software.

The other example is blindly accepting AI generated code, which I think is an extension of copying template / boilerplate code. You take a working example and massage it to do what you want. It includes so many things that either aren’t needed or don’t make sense, but it works at first glance so it’s submitted and merged. For example, build configs, JSON or YAML configs, docker container build files, helm charts, terraforms, gradle builds. It takes a lot to learn these things so we often just accept it when it works. It’s exhausting to learn it all but if you do, you’ll be much better at catching weird issues with them.

I think the problem is we trick ourselves into thinking we should spend more time coding than anything else, but it’s everything else that causes us more problems and we should build the muscles to handle everything else so that we waste less time on those mistakes. We’re often already really good at shipping features but terrible at finding and handling bugs or configs or builds or infrastructure or optimizing for performance.


"It just doesn’t really get rewarded."

This is the entirety of the problem. Also why open source programs are so often "surprisingly" high quality.

Bad reward functions in companies don't just fail to reward people who do good work. They *actively punish* them, because stack ranking is a zero-sum game. And as much as people joke about stack ranking and lambast the dinosaurs who used to do it on purpose, it's still how it all actually works. It's just distributed stack ranking - each manager and manager of managers has their own local stack rank that bubbles up to who gets fired and who gets promoted.

So people who throw shit at the wall and make products that sell but have a shitty user experience get promoted, and people who plod along and make things that work, or fix things that are broken (but not so much that they don't sell), filter to the bottom of the list and get cut, or leave when they don't get promoted or get raises.

Sure, there's some golden mix in between throwing shit at the wall and fixing key UX-ruining bugs. But these people still get outcompeted by people who purely ship n scoot.


The author makes the insightful observation that they write non-buggy code by being careful, in contrast to the vast majority of developers who write code full of bugs. Being careful is left to the reader, but it should be easy. /s


The author makes the insightful observation that once you start paying attention deliberately, after some time you won’t have to be deliberately careful, because you will be careful by default.

There are devs who don’t pay attention and devs who pay too much attention to the context of the change they are implementing. I think the author also outlined which things one might pay attention to in order to be considered careful.


I think there's always the argument that you don't know what you don't know. How much thought do you put into writing code without bugs? A bug could be caused by the business logic, the language internals, or the runtime environment's internals and variation. I think what people often ignore is that writing a piece of software is an iterative process: you build, deploy and learn from the operation, and you fix and repeat.

If you keep thinking of all the possible issues that could happen, that becomes a black hole and you don't deliver anything.


> writing a piece of software is an iterative process

Often, yes. Absolutely.

> you build, deploy and learn from the operation, and you fix and repeat.

But no, not at all in this way. This is generally not necessary and definitely not a mindset to internalize.

Commercial software products are often iterative, especially for consumer or enterprise, because they need to guess about the market that will keep them profitable and sustainable. And this speculative character has a way of bleeding down through the whole stack, but not because "bugs happen!" -- just because requirements will likely change.

As an engineer, you should always be writing code with an absolutely minimal defect rate and well-understood capabilities. From there, if you're working on a product subject to iteration (most now, but not all), you can strive to write adaptable code that can accommodate those iterations.


> As an engineer, you should always be writing code with an absolutely minimal defect rate and well-understood capabilities.

I think the problem with the purists is that this is just a moral claim - it's not based on how businesses + marketplaces actually work. The lower you attempt to crank the defect rate (emphasis on the word "attempt"), the slower you will iterate. If you iterate too slow, you will be out-competed. End of discussion. This is as true in open-source as it is in enterprise SaaS. And in any case, you're just begging the question: how do we determine the "absolutely minimal" rate in advance?

> you can strive to write adaptable code that can accommodate those iterations.

This is a damaging myth that has wasted countless hours that could have otherwise been spent on fixing real, CURRENT problems - there is no such thing as writing "adaptable" code that can magically support future requirements BEFORE those requirements are known. If you were that good at predicting the future you would be a trader, not an engineer.


I mostly agree with you.

In the first few iterations of writing the code, you often don't have a complete picture of the capabilities; capabilities change on the fly, dictated by changes in requirements. There is no baseline for what the minimal defect rate is. Over a period of time and iterations you build that understanding and improve the code and the process.

I'm not saying that you shouldn't think before you write code, but often overthinking leads to unnecessary over-engineering and unwanted complexity.


> I think what people often ignore is that writing a piece of software is an iterative process: you build, deploy and learn from the operation, and you fix and repeat.

I presume you didn't use any Microsoft operating system (or program). /s


ASML is so interesting. It's like sci-fi that one firm in the Netherlands knows how to make the most complicated machine ever made, and no one else can do it.

And it is arguably the most complicated machine ever made. 50,000 times a second, the EUV lithography machine hits a 25 micron drop of molten tin that is moving at 70 meters per second with two co-ordinated lasers, the first hit to change the shape of the drop of tin in exactly the right way, the second hit to vaporise it, creating Extreme Ultraviolet Light at the right wavelength to etch chip designs onto silicon at "5nm process" sizes. Some labs can cobble together something similar as a proof of concept, but not well enough to make it feasible for mass production of chips.

Video about the light source - https://www.youtube.com/watch?v=5Ge2RcvDlgw

No one else in the world is able to make these machines. If you buy one it costs $150m and gets shipped to you in forty containers on specially adapted planes. Very few firms have the resources/know how to even run the machines - which is what makes TSMC so important.


EUV technology was developed in partnership with the US Department of Energy which is why the US can implement export controls (it was an explicit condition of original deal with the DoE). A significant part of the “secret sauce” is manufactured in San Diego.

It’s not really “one firm in the Netherlands”, it’s a global collaboration that goes back to the 1990s. Intel was involved from the beginning, they just dropped the ball.


> it’s a global collaboration

Yep, even on the optics side (IIRC Zeiss, Nikon and Samsung are big players in the optical side...)


I saw a working euv laser at TRW in the early 2000's. https://www.laserfocusworld.com/lasers-sources/article/16550...


EUV LLC is the joint venture between ASML, Intel, etc funded by the Department of Energy. That’s the legal structure that they used to bring the entire deal together. TRW was a partner.


The rest of the machine is no less incredible - the EUV light is reflected off a series of mirrors (the largest weighing, IIRC, somewhere in the region of 300kg, and the largest that Zeiss have ever made) onto a mask of the target pattern, which is rapidly shuttling back and forth, onto another series of mirrors, and finally focused onto the wafer stage, which is also moving precisely in step with the mask, to expose the photoresist on a single die which is on the order of 10mm square.

And each wafer goes through multiple cycles of this, so not only does the machine rapidly create features at the nanometre scale with incredible accuracy, but again and again in exactly the same place after removing and returning the wafer.

Oh, and the wafer stage is pretty fun too; it uses an electrostatic chuck that induces a charge in the wafer and therefore holds it firmly in place without needing any kind of suction that could distort the wafer.

Promotional fly through video from ASML here: https://youtu.be/h_zgURwr6nA

Asianometry video about the stage: https://youtu.be/1fOA85xtYxs

Another short video about the light source: https://youtu.be/9VDJMivfhGU


They were a big customer of a former company I worked for. I had a session at our executive briefing center, and everyone, myself included, was just floored and geeking out over their tech.


While EUV lithography machines are surely a contender, the most complicated machine ever made is likely the Large Hadron Collider (LHC). A 27 km tunnel in which protons (with cross section about 10^-28 square meters) collide head on. Hard to imagine this amount of alignment is possible.


True, perhaps if we call the EUV lithography machine the most complicated fabrication machine that actually makes something, whereas the LHC is a scientific experiment.


One differentiator is that it's factory-produced (not just a one-off)


It's not complex, just gigantic, with repeated sections. Definitely not downplaying it, but EUV is definitely damn complex!


It is nothing compared to the EUV machines, not as complicated either, and with a utility value much less than even regular DUV machines.


It appears that ASML alone knows the correct rituals to keep the Machine Spirit cooperative.


> Very few firms have the resources/know how to even run the machines - which is what makes TSMC so important.

I dispute that this is simply what sets TSMC apart. The process of designing transistors and chip "IP" (the term used), such as 2.5D stacking technologies and the like (which is where coordination with EDA companies like Synopsys, Cadence, etc. comes in), is just one thing. Then preparing the photomask (I'm sure there are partners, but still), and operating the whole clean room facility around the input and output of wafers and other materials into these ASML/other fabrication machines, is another thing.


ASML seems to strike deep at personal identities: it's the only European player in a space dominated by American and Asian companies.

Every time anyone mentions ASML, I see comments of adulation. It's a very human tendency to hero-worship, especially fanboys who only recognize the name but don't understand the history or the ecosystem.

It's useful to remember that ASML didn't outcompete rivals through brilliant innovation in a heated market race. Instead it won mostly by being the last one standing. All the other players dropped out. Nikon and Canon made strategic decisions not to pursue EUV because it was too risky and expensive. ASML, a small Philips spinoff, couldn't do it alone either. It took two decades and billions in investment backed by Intel and TSMC to keep ASML going before the first breakthrough -- it was a lot of persistence and incrementalism. ASML was essentially a side bet/strategic hedge by Intel, TSMC and Samsung.

(all this is covered on Asianometry)

People associate ASML with the Netherlands, but it would not have been possible without the massive contributions of the Americans, the Taiwanese, the Germans, and the Japanese.

It's like Grigori Perelman and the Poincaré conjecture -- he didn't accept the Fields Medal for it because he felt that he just happened to be the last person to put the pieces together (he was building on work by Thurston, Hamilton, etc.).

We see ASML as this amazing Dutch company, but we forget all the other players were critical to making this singular company possible (only because it happened to be the only one standing, not because no one else is capable -- this is the great misconception).


It seems for some reason that it’s only important to keep these things in mind and mention that it contains American tech, is an international effort, etc, when discussing some successful European company. But when discussing American companies there is never any talk about such things. Why can’t ASML be seen as an amazing Dutch company even if they also use tech from other countries?


The American companies that get talked about most here are mainly software companies. The "supply chain" for software is far less interesting and easily hidden away. Who cares about a litany of Indian outsourcing companies (apart from those who were outsourced, I suppose)?


Their supply chains consist of Indian and Chinese universities.


Because it's not the case.

ASML is truly an international integration company. This type of integration requires deep expertise and tacit knowledge. But they don't manufacture most of their critical components.

The way most narratives make it sound is that ASML is out of this world. No, it is not. It is unique because it is the only one standing in the market, after a two-decade period of investment.

It has a compelling moat, but that moat is not entirely technical (though it is substantially technical -- there is a lot of tacit knowledge on integration at that level of precision). It's the supplier-network lock-in, and the sunk-cost lock-in from chipmakers.


The system is Dutch. Any component of anything is always trivial.

If the Dutch want to design a new EUV light source, they can probably find someone to do it. If they want to find someone to grind mirrors to match those from Zeiss, maybe they can find someone who worked on the ESO or similar, and similarly with everything else. Think of it like how the F-35 is a Lockheed Martin project, even though many companies make components.

The integrator is the maker, not the component manufacturers.

I don't think the US could have made something like ASML. Baumol effect due to the successful firms, different attitudes, etc. There's after all a reason why the others dropped out.


My impression is that one important strategic factor is that ASML was not built using venture capital - in which the US is dominant - but relied mostly on government subsidies. Once ASML was a strong player, they got a lot of investment from American customers (Intel et al.), for example in EUV development. This is a funding approach that doesn't work well in most markets but seems to fit well with the deep speculative research typical of these kinds of systems. As some other users have noted, the US was the initial leader in EUV research (in the very early phases), but the technology was deemed too speculative to attract significant market funding.


Yes. Germans especially can make these kinds of long-term plans.

I saw one thing that made it obvious to me that they were the only people who could possibly develop a certain technology which hasn't been commercialized. Basically, they were doing research on details that they could determine from their analysis were necessary, but which nobody else had even thought about yet. It took me half an hour of a guy who had done his PhD on it explaining until I understood why they needed it. I think his work was paid for by industry though.


ASML is not German, and I'm not sure I understand the rest of your comment.


You’re partly right, but I think you’re downplaying ASML’s achievements, especially when it comes to EUV. The ‘internationalization’ you mentioned was definitely one of the key reasons for their success. They were smart to position themselves as a global system integrator early on, stepping away from the vertical integration that their competitors mostly stuck with. (And let’s not forget, ASML was already a big player before EUV even became a thing.)

That said, this isn’t something unique to ASML. It’s just how global companies operate, especially in hardware. Take Tesla as an example—these companies are inherently multinational. They rely on acquired patents, parts and expertise from around the world, and hire international talent to stay competitive.

As someone else pointed out, it’s interesting how often this comes up in ASML discussions here, especially from American users. Maybe it’s because admitting reliance on another country—particularly those “arrogant Europeans” Americans often feel both superior and insecure about—kind of messes with the U.S. dominance narrative.


> It's useful to remember that ASML didn't outcompete rivals through brilliant innovation in a heated market race. Instead it won mostly by being the last one standing.

They were the last one standing because of their brilliant innovation. It was orders of magnitude harder to get EUV from the lab to an actual working machine than anybody thought in the beginning. There is a reason others gave up.


Fair enough, I can see it was a team effort. But it's still interesting that all that work has focused on one place without leaking, and it now has global strategic importance.

And being the last one standing must mean something - good at negotiating with funders, tenacious, perceived as a good long term bet etc.


For a long time the Netherlands were somewhat of a tax haven for companies relying heavily on R&D.


Cymer was the best purchase they ever made. But it also means they're totally beholden to US export controls. That light source is made in San Diego after all.


Part of the reason no one else can make these machines is that the US has tight control over who's allowed to even develop that tech.


There were 9 manufacturers with access to the initial technology, but only one managed to productionize it iiuc.


Seems to me that the bike-lane-onto-pavement transition is designed to be deliberately awkward so that cyclists have to SLOW DOWN before they join the pavement and share the space with pedestrians


I was unlucky enough to start exploring using Sonos (as opposed to old Squeezebox) just as this new app thing hit.

One weird thing about Sonos is that it seems to be "a home speaker system for homes that don't have any children in them". Like there doesn't seem to be a way to allow kids to play music on the things without giving them access to _everything_


Weird how SQL Server and its Azure variants get no mention. It dominates in certain sectors. DBEngines ranks it third most popular overall https://db-engines.com/en/ranking


Are people choosing SQL Server independently of the Microsoft ecosystem? My understanding is that you typically use it because you’re forced to choose an MS product.


SQL Server is a terrific product. And I detest most things Microsoft.


Is it, though? I worked with it tangentially and found it deficient compared to Postgres. Why pay for a product that's worse than the best free product? In the old days there was a question of who to pay for support, which was easier to answer for proprietary DBs, but with cloud services that answer is "you already pay your cloud provider".


I wish Microsoft paid more attention to T-SQL though. It’s an atrociously primitive language in some ways. There is no “record” or “struct” type of any kind, table-valued functions are not composable, an error in one line may throw an exception or just continue execution to the next line depending on whether TRY..CATCH exists at some higher level… to name just a few grievances of many I accumulated over the years.

It can work well performance-wise and security-wise, but programming it can be quite a pain, and I feel that’s unnecessarily so, considering what resources Microsoft has at their disposal.
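To make that last grievance concrete, here is a minimal T-SQL sketch (illustrative only, assuming the default XACT_ABORT OFF setting; exact error numbers and messages may vary by version): a statement-level error outside TRY...CATCH aborts only the failing statement and the batch carries on, while the same error inside TRY...CATCH jumps straight to the CATCH block.

    -- Outside TRY...CATCH: a divide-by-zero is a statement-level error,
    -- so the failing statement is skipped and execution continues.
    SELECT 1 / 0 AS boom;   -- raises an error (Msg 8134)
    PRINT 'still running';  -- executes anyway

    -- Inside TRY...CATCH: the same error transfers control to CATCH,
    -- so the statement after the error never runs.
    BEGIN TRY
        SELECT 1 / 0 AS boom;
        PRINT 'never reached';
    END TRY
    BEGIN CATCH
        PRINT 'caught: ' + ERROR_MESSAGE();
    END CATCH;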


Their query optimizer is incredible. Unfortunately that lets people get away with truly horrifying queries or views nested a dozen layers deep until it falls over.


Except when you need to scale.


this of course is false… it scales fine if you know what you are doing.


It is of course true... it is well known that SQL Server scales to department level, but Oracle scales to company level. This is true inside Microsoft and Oracle as well. Inside Microsoft, they have a bug database per division but Oracle has a single database for the entire company. Ask people who work at those companies.

See also the scalability sections in these articles:

https://airbyte.com/data-engineering-resources/oracle-vs-sql...

https://futuramo.com/blog/oracle-vs-sql-server-head-to-head-...


if it is good for SO it should be good for most :)

https://stackoverflow.blog/2008/09/21/what-was-stack-overflo...


You don't believe that web visitors are directly querying SQL Server, right? I can believe they are storing their employee database in SQL Server... they have hundreds of employees.


do some research and then come back here… coming with shit like “you don’t believe they are querying sql server directly” is childish and unprofessional.


This is true for most databases, though.

The real question is how much of it is out of the box or simple, easy-to-access configuration, rather than magic incantations that you either need expensive courses to know or years of battle-hardened experience to learn.


Agreed with the other person. It's a great database. I wouldn't choose it for a startup over Postgres, but it is extremely capable.


I would use it if it supported backup/restore over unix pipes / ssh.


If it supports backup to a file, you can have it write to a named pipe and from there to wherever.

I used this hack for backing up Oracle 30 years ago.

Something like 'mknod backup.dmp p; oradump .... file=backup.dmp; dd if=backup.dmp | ssh othermachine receiver-process'


Not necessarily. That won't work if the backup uses APIs that a pipe doesn't support, like seek or reading back from the file.


Sure; does any backup actually do that? I guess it's possible.

Backups (at least db backups) used to be made with the assumption that the backup device is tape.


I'm an SRE who deals with some .NET stuff that uses MSSQL but is converting to MySQL, so I feel somewhat qualified to talk about MSSQL. TL;DR: Nothing interesting going on.

There is nothing to talk about here. It's a boring database engine that powers boring business applications. It's pretty efficient and can scale vertically pretty well. With the state of modern hardware, that vertical limit is high enough that most people won't encounter it.

It's also going the way of Windows Server, which is to say, it's being sold but not a ton of work is being done on it. Companies that are still invested in it likely either don't care about cost or find the cost of switching too high to greenlight the switch.

Anyone who does care about cost, like my current company, has switched to OSS solutions like Postgres/MySQL/$RandomNoSQLOSSOption. My company switched away when it turned into a SaaS business and those MSSQL server costs ate into the bottom line.

This has been happening throughout the ecosystem. ProGet, which is THE solution for .NET artifacts, is switching to Postgres: https://blog.inedo.com/inedo/so-long-sql-server-thanks-for-a...

Also, I saw this article from Brent Ozar, who I see as an MSSQL expert, which basically said that if you have the option, just go with Postgres: https://www.brentozar.com/archive/2023/11/the-real-problem-w...

It's also worth noting that Microsoft even bought the Postgres scaling solution Citus, so they've read the writing on the wall: https://blogs.microsoft.com/blog/2019/01/24/microsoft-acquir...


I'll probably come across as a shill here, but there is a lot going on with SQL Server, all included in your license (Standard Edition has limitations on scaling).

Some of these things are merely passable, some are great, but it's all included. The key takeaway is that SQL Server is a full data platform, not just an RDBMS.

- RDBMS: very solid, competitive in features
- In-memory OLTP (really a marketing name for a whole raft of algorithmic and architectural optimization): can support throughput that is an order of magnitude higher
- OLAP: columnstore index in the RDBMS; can support a pure DW-style workload or OLAP on transactional data for near-real-time analytics
- OLAP: SSAS: two different best-in-class OLAP engines, Multidimensional and Tabular, for high-concurrency low-latency reporting/analytics query workloads
- SSIS: passable only, but a tightly integrated ETL tool; admittedly in maintenance mode
- SSRS: dependable paginated / pixel-perfect reporting tool; similar to other offerings in this space
- Native replication / HA / DR (one of the only things actually gated behind Enterprise)
- Data virtualization: PolyBase

If you're just looking for a standard RDBMS, then there's little to justify the price tag for SQL Server. If you want to get value for money, you take advantage of the tight integration of many features.

There is value for having these things just work out of the box. How much value is up to you and your use cases.


Yes, it has a ton going on, but most of the companies I've found using it are using it primarily as an RDBMS, and thus MySQL/Postgres could replace it. The other stuff it does could be replaced by tools more geared towards a specific function, most of the time at much lower cost.

Licensing isn't cheap. For anyone wondering, before discount, it's $876/yr per core for Standard and $3,288/yr per core for Enterprise. Also note that Standard is limited to 24 cores and 128GB of RAM; if you want to unlock more than that, you must move to Enterprise.


My point was just that there’s a lot going on there, and value for those who want more than an RDBMS. I have no disagreement that the RDBMS on its own is not worth paying for for most, and especially not for any techish organizations.

I’d also note that most orgs and use cases probably don’t need more than 24 cores and 128GB RAM.

I think for an organization that wants a near-trivial out of the box experience with RDBMS, reporting, and analytics, Standard Edition is not a bad deal. Especially for the many organizations that are already using Microsoft as their identity provider and productivity suite.


There's still an Express edition that's free to use but limits the database to 10 GB, for what it's worth


Microsoft is a decent deal if you go 100% in on Microsoft, but it's important to budget for additional support because actually using Microsoft products has quite the learning curve.


> It's a boring database engine that powers boring business applications

I'm taking that as a positive thing... it's boring and does its job with little fanfare. That's pretty much what I want out of an RDBMS. So long as it is "fast enough" with enough features for the applications that use it, that seems like a good place for an RDBMS to be.

One could still argue about Windows and licensing fees, but from a technical point of view, for business customers, boring isn't necessarily a bad thing.


There are other boring databases that also reliably fill that job, and they also cost far less.

It can also be a bit of a pain outside the C# ecosystem, whereas every language ever has nice Postgres drivers that don't require us to download and set up ODBC. It runs on Linux as of a few years ago, but I also wouldn't be surprised if many people didn't realise that.


I’ve run into MSSQL on Linux. Most DBAs know but their entire ecosystem is Windows Server so what’s another Windows Server is their thinking.


It's the Linux-isation of the db space. Once Linux was good enough for enterprise work, it massively reduced demand for Solaris/HP-UX/AIX/Windows NT.

Same thing is happening now to Postgres vs enterprisey DBs.


> It's a boring database engine that powers boring business applications.

FWIW, it also powered the most popular (in terms of player base) MMORPG before WoW took over.

And I wouldn't be surprised to find it in aviation, railways, powerplants, grid control, etc...


Before WoW, it was either Lineage or EverQuest.

I guess it was Lineage, as Koreans used mainly Microsoft software?


Looking at some subscription charts I was able to quickly find, I now see I made a mistake. I was thinking of Lineage 2 using MSSQL, while it was Lineage (1) that was the major one. I do not know anything about its backend, and it would be hard to assume, considering how much older it is.

1. https://ics.uci.edu/~wscacchi/GameIndustry/MMOGChart-July200...


I was a big proponent of MSSQL. It is still a good product but I see Microsoft constantly fumbling with new OLAP tools. It is a shame but it seems Microsoft is abandoning MSSQL.


If it's any consolation, half of the new cloud OLAP tools are basically still MSSQL


I have used a lot of RDBMS vs NoSQL solutions and I love SQL Server. I have used and written services consuming/reporting/processing thousands of transactions per second and billions of euro per year.

The profiling abilities of SQL Server Management Studio (SSMS) and its query execution insights, the overall performance and scalability, T-SQL support, in-memory OLTP, and temporal tables - I just love SQL Server.

I'm not sure if it's just that I learned SQL Server better in college than MySQL, Mongo or Postgres but it's just been an amazing UX dev experience throughout the years.

Granted, there's some sticky things in SQL Server, like backups/restores aren't as simple as I'd like, things like distributed transactions aren't straightforward, and obviously the additional licensing cost is a burden particularly for newer/smaller projects, but the juice is worth the squeeze IMHO.


Lots of people deliberately avoid Microsoft technologies and their whole ecosystem. There's of course interesting stuff happening there, but not enough for those outside the ecosystem to care.

It's more a cultural thing than anything else. HN for example largely leans away from MS. It's quite interesting how little overlap there is between the two worlds sometimes.

Speaking as one of those people, it's just not my thing, so it's not on my radar at all. There's enough stuff happening outside MS to keep me busy forever.


The fawning over Larry Ellison is also weird.


The joke is that his greed/unwillingness to squeeze margins has made the entire database company ecosystem possible.


Recently bought an iPad mini 6th gen and I notice that although it seems to have a USB-C charge port, if you use a regular old USB-C to USB-A cable and wall-wart it only charges to 75%. You have to use the Apple-supplied USB-C (at both ends) cable to charge to 100%. Not sure what is going on there exactly but it seems like malicious compliance.


Or, as this hasn’t been widely reported, something else is going on…

Try different chargers, there’s a lot of defective hardware out there. Also it’s at 80%, but there’s a setting on iPhones and possibly iPads etc that avoids charging to 100% to preserve long term battery life if you’re going to leave the device plugged in long term.


I don’t know about iPads, but my iPhone shows a message when the delayed charge thing is active. I think it’s even one of those always on notifications you can’t swipe away.


Delayed charge (waiting to charge fully or charging the last ~20% slowly to just-in-time for your alarm) is a different setting, though I don't recall the name for the "Only charge to 80%" one


Can you tell an iPhone to only charge to 80%? I only have the “optimized charging” option, which is the delayed one.


Yes.

> To change your charging option with iPhone 15 models and later, go to Settings > Battery > Charging and choose an option. You can choose a charge limit between 80 percent and 100 percent in 5 percent increments.

https://support.apple.com/en-us/108055


OK, that explains it, mine is an older model. Wonder why this setting doesn't apply to them...


Probably a combination of the devices being old enough and the battery not being large enough that a lot of people would not find 80% of the already-degraded capacity of their device reasonable, and not wanting to have to explain to customers why their friend’s phone allows it and theirs doesn’t if they both have the same model.


"battery saver"


Yeah, USB-C is a bit of a nightmare when it comes to knowing what a given cable can actually do.


Well, all cables can charge, at least. It is not a USB-C problem but an Apple and/or charger manufacturer one.

My bet would be something about the voltage the charger provides.


It's not the setting. It charges fine to 100% with the same wall-wart and a tiny USB-A to USB-C adapter and then the Apple cable. But it's not happy with my regular USB-A to USB-C cable (that works fine with everything else). Or any of the other cables in my house. A message pops up about a non-compatible cable. I suspect the iPad has been designed to be deliberately fussy. I'm in Europe, if that makes any difference.


I charge my iPad Mini with a variety of chargers, all the way to 100%. None of my cables are from Apple, only some of my (USB-C) chargers are not from Apple.


Are you sure it is because of the cable? By default Apple devices only charge to 80% when you plug them in and then do the final 20% later around when they anticipate you are going to unplug it.


It's not that. It shows a message about a 'non compatible cable'. And when I use the Apple cable it quite happily charges to 100% with no quibbles. And the non-Apple cable is a good quality one that works fine with everything else. I suspect the iPad has been designed to be deliberately fussy.


Our household has a number of iPads and never had an issue with any non-Apple usb-c cable I’ve used to charge them with, mostly Anker branded but one or two AmazonBasics or Cable Matters brand. I’ve never seen an incompatible cable warning, my suspicion is it’s a cable that doesn’t have the right signaling to go above 5V, so it’s stuck charging at 5V and the iPad prefers to charge at 9v or 12v.


Right, maybe, but why make the iPad so fussy? Why can't it chill at 5V? I am suspicious of the design decisions made here


It’s likely that your wall-wart doesn’t provide enough watts to fully charge your iPad mini, and/or that there’s some reason the USB-A side of that cable isn’t adequate for what the iPad mini needs.

If you want to test, consider trying with a non-Apple wall-wart for which the rated wattage is equal to or greater than the one which Apple provides with your iPad mini and which uses a USB-C connection rather than a USB-A one. If it comes with a USB-C to USB-C cable, use that, otherwise get one that supports USB-C PD and enough watts to match the iPad mini’s needs.


That can't be the explanation. Batteries use fewer watts as they get close to full.


That's not fully true, and even if it's partially true in some cases (this depends on the chemistry of the battery): volts and watts aren't the same thing. You can be fully capable of supplying 5v@2.4A and not capable of supplying 12v@1A which are the same number of watts.

Battery tech is a horrible black hole that is not very fun to dig into, chargers are a little bit more transparent: with markings for various voltages and amperages printed on the device.

iPad batteries output 3.7v if I'm not mistaken, but I'm unsure what they charge with.


  > iPad batteries output 3.7v if I'm not mistaken, but I'm unsure what they charge with.
For those not familiar with the tech, the term "3.7v battery" means that it is about 4.2 volts when full. Black hole indeed.


A 3.7V nominal li-ion battery would peak at about 4.5V while charging. A bit high, but a well designed circuit should be able to do that off 5V. Besides, 75% is far short of where the voltage starts to spike.


>volts and watts aren't the same thing. You can be fully capable of supplying 5v@2.4A and not capable of supplying 12v@1A which are the same number of watts.

For the layman, the equation is Volts x Amperes = Watts.

Where, if we use the common water analogy: voltage is electric potential ("water pressure"), amperage is electric current ("water flow rate"), and wattage is electrical power (the rate at which energy, the "water", is delivered).

2V x 6A, 4V x 3A, 1V x 12A, 12V x 1A and similar are all 12W but they are obviously very different in nature.


I would expect much bigger issues and failure to charge at all if there's not a reasonable voltage on the USB line.


More … peak voltage or something like that?


Sounds great. Very good charger for battery life.


Agree. I bought a "Chargie" just to get this feature, and it doesn't work with my wireless chargers worth a darn. I would pay at least $40 x 5 units for chargers that reliably stop at 75% with no software required.


I have some Chargie units, but found them finicky enough with the bluetooth connection that I've abandoned them for devices' built-in 80% charge limit, even if the exact charging pattern isn't quite what I'd like.


What? Isn't that a function of the device? The only alternative would be to start discharging at 75%, and I don't want my batteries to constantly cycle while plugged in. I leave them plugged in so they'll run off of wall power.


> The only alternative would be to start discharging at 75%

Not necessarily, Chargie lets you configure minimum charge, minimum charge for a cycle, time to charge, etc.

In practice, what it looks like for my devices:

1. I plug in my phone when I go to bed.
2. The phone charges to 40% (if it's not already >40%) and stops charging.
3. At 5am or so, the phone is still at ~38%; it then charges to 80% and stops.
4. I get up and my phone is still at ~78% charge.

For devices with more software capabilities than phones (e.g. macOS) you can use software (e.g. Al Dente) that will cap the charge level and run off wall power. In practice this means that if I plug my laptop in at 90% charge, it will take weeks to drop to 80% since it's running off wall power, and unless I'm doing particularly high power-draw things the drop to 80% comes down to the battery's self-discharge rate.


I can't believe I'm seeing this on HN. This is really a fuckup of the industry if they're even confusing technical people.

A lot of phones only charge to 80% to save battery life. You can change this setting. Spread the word.

I wonder how much they pay in tech support because of this one thing.


> This is really a fuckup of the industry if they're even confusing technical people.

Honestly, we're not that great.


When I use the Apple cable it charges to 100% with no quibbles


You're sure it's not the "optimized battery charging" feature?

https://support.apple.com/en-us/108055


I have a number of quality, 3rd party USB PD rated cables which work without issue on iPhone 16, iPad Mini 6g, MacBook Pro. Both with and without 1st party chargers. Admittedly the options for consumers in the USB-C space are a confusing mess, but I’ve never had problems with stuff from brands like Ugreen or Anker where USB-PD support is specifically advertised.


I’ve got a 6th gen iPad Mini, it charges to 100% using an Anker charger and no-name USB A-C cable


I don’t believe it’s the cable as much as the charging brick that is causing that. I have that issue with a MacBook Pro, using the Apple provided cable plugged into a usb-c port on my power strip. If I use the power brick, it charges fine.


So why call it a horrible finish?


Because as a chess fan, and just as a human being, my heart goes out to Ding Liren, who seems like a genuinely likeable and nice human being and who has been open about the tremendous struggle he has had with mental health etc since winning the world championship. To pull himself out of a hole that deep and play really great chess for 13 and 9/10ths games and then lose it with a blunder at the last second is awful.

And I say that as 100% someone who wanted Gukesh to win from the beginning, which is a result I think is great for chess and I think is “objectively correct” in the sense that he has played better chess and has been (apart from Magnus Carlsen and his compatriot Arjun Erigaisi who is also a complete monster) the story of the chess world for the last year.


Because the ending was pretty meh. All this excitement, and then Ding just flubs up an endgame that most super GMs should be able to draw against Stockfish.

The best finales are often when two players at their best duke it out, and one comes out on top. This was simply not Ding's best.


They opened the API for it and I'm sending requests but the response always comes back 300ms before I send the request, is there a way of handling that with try{} predestined{} blocks? Or do I need to use the Bootstrap Paradox library?


Have you tried using the Schrödinger Exception Handler? It catches errors both before and after they occur simultaneously, until you observe the stack trace.


I swear I can't tell which of these comments are sarcastic/parody and which are actual answers.

A sort of quantum commenting conundrum, I guess.


They are both, just until the moment you try to read them


The comments in this subthread are among the best I've read on this website.


I read them as sarcastic. Please reply here with your output.


Since you read them as sarcastic, I also read them as sarcastic. Quantum entanglement at work.


What happens when you don't send the request after receiving the response? Please try and report back.


No matter what we've tried so far, the request ends up being sent.

The first time I was just watching, determined not to press the button, but when I received the response, I was startled into pressing it.

The second time, I just stepped back from my keyboard, and my cat came flying out of the back room and walked on the keyboard, triggering the request.

The third time, I was holding my cat, and a train rumbled by outside, rattling my desk and apparently triggering the switch to send the request.

The fourth time, I checked the tracks, was holding my cat, and stepped back from my keyboard. Next thing I heard was a POP from my ceiling, and the request was triggered. There was a small hole burned through my keyboard when I examined it. Best I can figure, what was left of a meteorite managed to hit at exactly the right time.

I'm not going to try for a fifth time.


You unlock the "You've met a terrible fate." achievement [1]

[1] https://outerwilds.fandom.com/wiki/Achievements


I love myself a good Zelda reference


Please report back and try.*


Looks like we don't have a choice.


Finally, INTERCAL’s COME FROM statement has a practical use.


>They opened the API for it and I'm sending requests but the response always comes back 300ms before I send the request

For a brief moment I thought this was some quantum-magical side effect you were describing and not some API error.


Isn't that.... the joke?


Write the catch clause before the try block


Try using inverse promises. You get back the result you wanted, but if you don't then send the request the response is useless.

It's a bit like Jeopardy, really.


Did you try staring at your IP packets while sending the requests?


You are getting that response 300ms beforehand because your request is denied.

If you auth with the bearer token "And There Are No Friends At Dusk." then the API will call you and tell you which request you wanted to send.


Pretty sure you just need to use await-async (as opposed to async-await)


The answer is yes and no, simultaneously


Help! Every time I receive the response, an equal number of bits elsewhere in memory are reported as corrupt by my ECC RAM.


Update: I tried installing the current Bootstrap Paradox library but it says I have to uninstall next year's version first.


> I'm sending requests but the response always comes back 300ms before I send the request

Ah. Newbie mistake. You need to turn OFF your computer and disconnect from the network BEFORE sending the request. Without this step you will always receive a response before the request is issued.


I'm trying to write a new version of Snake game in Microsoft Q# but it keeps eating its own tail.


What does Gemini say?


It responds with 4500 characters: https://hst.sh/olahososos.md


I think you are supposed to use a "past" object to get your results before calling the API.


Try setting up a beam-splitter router and report back with the interference pattern. If you don't see a wave pattern it might be because someone is spying on you.


When I was 10 I had a BBC Micro and one of my friends had a ZX Spectrum, and I really remember going to each other's houses and playing games. It was amazing hearing the Ghostbusters theme on the Spectrum https://www.youtube.com/watch?v=CMIphX8Ipak ... the game itself was pretty confusing but fun to try and figure out


Is it just me, or did they do the first karaoke here? In any case, the first at-home karaoke for sure.

