Hacker News | ElectricalUnion's comments

Starting with SQL Server 2017, native Linux support exists. Probably because of Azure.

Ironically, SQL Server, AFAIK, in order to run on Linux uses what basically amounts to a Microsoft reimplementation of Wine. Which always makes me wonder if they'll ever get rid of Windows altogether someday in favour of Linux + a Win32 shim. I think there are still somewhat strong incentives nowadays to keep NT around, but I wouldn't be that surprised if this happened sometime down the line.

It's a Windows container. It runs the NT kernel and a minimal set of other things. The closest equivalent would be the Nano Server container.

AFAIK it's more like a reimplementation of NT APIs in userspace - aka basically Wine with extra steps, or Linux UM. There was a slide deck going around about Project Drawbridge, here: https://threedots.ovh/slides/Drawbridge.pdf

I also find this ORM fascination really strange. Besides the generic "ORMs are the Vietnam of CS" feeling, I feel that the average database-to-ORM/REST mapping ends up with at least one of:

a) you somehow actually have a "system of record", so modelling something in a very CRUD way makes sense; but, on the other hand, who the hell ends up building so many systems of record in the first place to need those kinds of tools and frameworks?

b) you pretend your system is a system of record when it isn't, and modelling everything as CRUD makes it a uniform ball of mud. What is so important about being able to uniformly "CRUD" a bunch of objects? The three most important parts of an API (making it easy to use right, hard to use wrong, and easy to figure out the intent behind how your system is meant to be used) are lost that way.

c) you leak your API into your database, or your database into your API, compromising both, so both suck.
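A toy sketch of the "easy to use right, hard to use wrong" point, using a hypothetical Order model (all names here are made up for illustration):

```python
from dataclasses import dataclass, field

# Hypothetical order model, purely for illustration.
@dataclass
class Order:
    status: str = "new"
    events: list = field(default_factory=list)

# CRUD-style: any caller can write any field to any value; the intent
# (and the invariants) live in the caller's head, not in the API.
def update(order, **fields):
    for name, value in fields.items():
        setattr(order, name, value)

# Intent-revealing: the operation names the business event and can
# enforce its own preconditions.
def ship(order):
    if order.status != "paid":
        raise ValueError("can only ship a paid order")
    order.status = "shipped"
    order.events.append("shipped")

crud_order = Order()
update(crud_order, status="shipped")   # skips 'paid' entirely: legal, but wrong

safe_order = Order()
try:
    ship(safe_order)                   # same mistake, but now it's an error
except ValueError as e:
    print(e)
```

The uniform-CRUD version happily records an impossible state transition; the intent-revealing version rejects it and documents how the system is meant to be used.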


"The Vietnam of Computer Science" was written 20 years ago (2006, even), and it didn't kill off ORMs then. We've had 20 years of ORM improvement since. We long ago accepted Vietnam (the country) as what it is and what it will be for the foreseeable future. We should do the same with ORMs.

I, for one, don't want to write in a low-level assembly language, and shouldn't have to in 2026. Yet SQL still feels like one.

I've written a lot of one-off products using an ORM, and I don't regret any of the time saved by doing so. When and if I make $5-50M a year on a shipped product, okay, maybe I'll think about optimizing. And then I'll hire an expert while I gallivant around Europe.


SQL is a pretty high-level, declarative language. It's unnecessarily wordy, though, and not very composable.

The problem with ORMs is that they usually give you the wrong abstraction. They map poorly onto how a relational database works and what it is capable of. The cost is usually poor performance, and occasionally outright bugs. So it's really easy to get started; when it starts costing you, you optimize the few most critical paths and just pay more and more for the DB. The industry seems to consider that an okay deal.
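A minimal sketch of that abstraction mismatch is the classic N+1 query pattern; the tables here are made up, and only Python's stdlib sqlite3 is used:

```python
import sqlite3

# Toy schema: authors and their posts (made-up tables for illustration).
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE post (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO author VALUES (1, 'ada'), (2, 'grace');
    INSERT INTO post VALUES (1, 1, 'a'), (2, 1, 'b'), (3, 2, 'c');
""")

# What a naive ORM often does behind your back: one query for the
# parent rows, then one query per parent for the children (N+1 total).
def n_plus_one():
    queries = 0
    authors = db.execute("SELECT id, name FROM author").fetchall()
    queries += 1
    result = {}
    for aid, name in authors:
        posts = db.execute(
            "SELECT title FROM post WHERE author_id = ?", (aid,)
        ).fetchall()
        queries += 1
        result[name] = [t for (t,) in posts]
    return result, queries

# What the database is actually good at: one join, one round trip.
def single_join():
    result = {}
    rows = db.execute("""
        SELECT author.name, post.title
        FROM author JOIN post ON post.author_id = author.id
    """).fetchall()
    for name, title in rows:
        result.setdefault(name, []).append(title)
    return result, 1

assert n_plus_one()[0] == single_join()[0]   # same answer either way
print(n_plus_one()[1], single_join()[1])     # 3 queries vs 1
```

Both versions return the same data, but the object-at-a-time style issues one query per parent row; with thousands of rows that becomes thousands of round trips, which is exactly the "pay more and more for the DB" failure mode.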


> interface designed for humans — the DOM.

Citation needed.

> The web already went through this evolution once: we went from screen-scraping HTML to structured APIs. Now we're regressing back to scraping because agents need to interact with sites that only have human interfaces.

To me, sites that "only have human interfaces" are more likely than not that way entirely on purpose, attempting to maximize human retention/engagement, and are more likely to require strict anti-bot measures like Proof-of-Work to be usable at all.


The funny thing is that these days you can fit 64 TB of DDR5 in a single physical system (an IBM Power server), so almost all non-data-lake-class data is "small data".


And a single machine can hold petabytes of disk for medium scale. There aren't many datasets exceeding that outside fundamental physics.


> There aren't many datasets exceeding that outside fundamental physics.

Just about every physical world telemetry or sensing data source of any note will generate petabytes of analytical data model in hours to days. On the high end, there are single categories of data source that aggregate to more like an exabyte per day of high-value data.

It is a completely different standard of scale than web data. In many industrial domains, the average small-to-medium-sized company I come across retains tens of petabytes of data, and it has been this way for many years. The prohibitive cost is the only thing keeping them from scaling even more.

The major issue is that the large-scale analytics infrastructure developed for web data is hopelessly inadequate.


You could generate PB of data from a random number generator.

My question would be, why does a company need PBs of sensor data? What justifies retaining so much? Surely you aren’t using it beyond the immediate present.


There's nothing wrong with that. Small data is relative, and my clients often find it useful to rent or get access to beefy machines to process it with "small" techniques rather than use clusters...


Claude Code still runs things on your local machine. So if you have some pretty expensive transpilation, or dependency trees that need musl recompilation, or are doing something in Rust, you still need a reasonable amount of local firepower. More so if you're running multiple instances of it.


They don't even need that many mines or bombs to start with; the presence of wreckage on shipping lanes that aren't more than 75 m deep would already put all shipping at risk.


Common business-oriented language (COBOL) is a high-level, English-like, compiled programming language.

COBOL's promise was that it was human-like text, so we wouldn't need programmers anymore.

The problem is that the average person doesn't know what their actual problems are in sufficient detail to get a working solution. When you get down to breaking that problem down... you become a programmer.

The main lesson of COBOL is that it isn't the computer interface/language that necessitates a programmer.


Agreed, the programmer is not going away. However, I expect the role is going to change dramatically, and the SDLC is going to have to adapt. The programmer used to be the non-deterministic function creating the deterministic code. Along with that came multiple levels of testing, from unit to acceptance, to come into close alignment with what the end-user actually intended as their project goals.

Now the programmer is using the probabilistic AI to generate definitive tests so that it can then non-deterministically create deterministic code to pass those tests, all to meet the indefinite project goals defined by the end-user. Or will there be another change in role, where the project manager is the one using the AI to write the tests, since they have a closer relationship to the customer, and the programmer is the one responsible for wrangling the code to validate against those tests?


> The problem is that the average person doesn't know what their actual problems are in sufficient detail to get a working solution. When you get down to breaking that problem down... you become a programmer.

Agreed. I've spent the last few years building an EMR at an actual agency, and the idea that users know what they want and can articulate it to a degree that won't require ANY technical decisions is, in my experience, pure fantasy.


Right now with agents this is definitely going to continue to be the case. That said, at the end of the day engineers work with stakeholders to come up with a solution. I see no reason why an agent couldn't perform this role in the future. I say this as someone who is excited but at the same time terrified of this future and what it means to our field.

I don't think we'll get there by scaling current techniques (Dario disagrees, and he's far more qualified, albeit biased). I feel that current models are missing the critical-thinking skills you need to fully take on this role.


> I see no reason why an agent couldn't perform this role in the future.

Yea, we'll see. I didn't think they'd come this far, but they have. Though, the cracks I still see seem to be more or less just how LLMs work.

It's really hard to accurately assess this given how much I have at stake.

> and he's far more qualified albeit biased

Yea, I think biased is an understatement. And he's working on a very specific product. How much can any one person really understand the entire industry or the scope of all its work? He's worked at Google and OpenAI. Not exactly examples of your standard line-of-business software building.


> I don't think we'll get there by scaling current techniques (Dario disagrees, and he's far more qualified albeit biased).

If Opus 4.6 had 100M context, 100x higher throughput, 100x lower latency, and 100x cheaper $/token, we'd be much closer. We'd still need to supervise it, but it could do a whole lot more just by virtue of more I/O.

Of course, whether scaling everything by 100x is possible given current techniques is arguable in itself.


There’s nothing any human can do that an AI can’t be expected to perform as well or better in the future.

Maybe the Oldest Profession will be the last to go.


Related: https://www.commitstrip.com/en/2016/08/25/a-very-comprehensi...?

At my job, we use a lot of AI to literally move fast and break things when working on internal tools. The idea is that the surface area is low, rollbacks are fast, and the upside is a lot better than the downside (our end users get a better experience to help them do their job better).

But our bottleneck is still requirements for the project. We routinely run out of stuff to do and have to ask for new stuff or work on a different project.

But you're absolutely right. Most people (programmers, managers, etc.) don't know exactly what problems need to be solved, or at least struggle to communicate them adequately enough to be implemented well. They say they want X, but they haven't thought about its repercussions, or that it requires Y first. AI might be able to help there, but it will give a totally bogus answer if it doesn't have any context about the domain, which is almost never documented in code.

These are still very much technical roles, but maybe we are becoming more "technical domain experts."


I predict the main democratization change is going to be how easily people can make plumbing that doesn't require (or at least doesn't obviously require) such specificity or mental modeling of the business domain.

For example, "Generate me some repeatable code to ask system X for data about Y, pull out value Z, and submit it to system W."
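That kind of glue can be sketched as below; `system_x_fetch` and `system_w_submit` are hypothetical stand-ins for real HTTP calls (in practice they would be requests to actual endpoints):

```python
import json

# Stand-in for "ask system X for data about Y" (hypothetical system;
# a real version would make an HTTP request here).
def system_x_fetch(record_id):
    return json.dumps({"id": record_id, "payload": {"z": 42}})

# Stand-in for "submit it to system W" (also hypothetical).
SUBMITTED = []
def system_w_submit(value):
    SUBMITTED.append(value)

# The generated glue itself: happy path only.
def sync_z(record_id):
    data = json.loads(system_x_fetch(record_id))
    z = data["payload"]["z"]   # assumes "z" is always present and well-formed
    system_w_submit(z)         # no retry, no idempotency, no audit log
    return z

sync_z("r-1")
```

The point is that this kind of thing works fine on day one; the questions in the reply below it (what if Z is missing, what if the value is out of range) are exactly what the happy-path version never answers.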


What happens when value Z is not >= X? What happens when value Z doesn't exist, but values J and K do? What should be done when...

I hear what you're saying, but I think it's going to be entertaining watching people go "I guess this is why we paid Bob all of that money all those years".


Hence the "not obviously require" bit: some portion of those "simply gluing things together" jobs will not actually be simple. It'll work for a time until errors come to a head, then suddenly they'll need a professional to rip out the LLM asbestos and rework it properly.

That said, we should not underestimate the ability of companies to limp along with something broken and buggy, especially when they're being told there's no budget to fix it. (True even before LLMs.)


LLM-generated code is replacing the hacked-together spreadsheet running many businesses.


This seems needlessly nitpicky. Of course there will be edge cases, there always are in everything, so pointing out that edge cases may exist isn't helpful.

But it stands to reason that it would be a huge shift if a system accessible to non-technical users could mostly handle those edge cases, even when "handle" means failing silently without taking the entire thing down, or simply raising them for human intervention via a Slack message, email, a dashboard, or something.

And Bob's still going to get paid a lot of money; he'll just be doing stuff that's more useful than figuring out how negative numbers should be parsed in the ETL pipeline.
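A sketch of that "raise it for human intervention" pattern, using the comment's own negative-numbers-in-the-ETL-pipeline example (the alert channel is just a list standing in for a hypothetical Slack webhook or dashboard feed):

```python
ALERTS = []  # stand-in for a Slack webhook / dashboard feed (hypothetical)

def notify_humans(message):
    ALERTS.append(message)

def parse_amount(raw):
    """Parse an ETL amount field, escalating the cases nobody specified."""
    try:
        value = float(raw)
    except (TypeError, ValueError):
        notify_humans(f"unparseable amount: {raw!r}")
        return None
    if value < 0:
        # Nobody has decided what negatives mean yet; don't guess silently.
        notify_humans(f"negative amount needs review: {value}")
        return None
    return value

print(parse_amount("12.5"))   # normal case parses fine
print(parse_amount("-3"))     # escalated to a human, returns None
print(parse_amount(None))     # escalated to a human, returns None
```

The design choice is that unspecified cases neither crash the pipeline nor get silently coerced; they get parked and surfaced, which is the "mostly handle" behavior described above.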


Edge cases are pretty much the reason you need professional developers, even before LLMs started writing code.


> when value Z is not >= X?

Is your AI not even doing try/catch statements? What century are you in?


Did you just arrogantly suggest that my LLM should use exceptions for control flow? Funny stuff!


How do you model the business domain without modeling the business domain?


For some sorts of "confusables", you don't even need Unicode. Depending on the cursed combination of font, kerning, rendering, and display, `m` and `rn` are also very hard to distinguish.
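A rough sketch of a skeleton-style check for exactly that case; the pair list here is made up for illustration (real confusable detection uses the Unicode UTS #39 confusables data, which covers far more than ASCII):

```python
# Illustrative ASCII lookalike pairs, not a complete confusables table.
ASCII_CONFUSABLES = [("rn", "m"), ("vv", "w"), ("cl", "d"), ("0", "O"), ("1", "l")]

def skeleton(s):
    """Collapse confusable sequences so lookalike strings compare equal."""
    for seq, canon in ASCII_CONFUSABLES:
        s = s.replace(seq, canon)
    return s

# The strings differ byte-for-byte but can render near-identically.
print("rnicrosoft.com" == "microsoft.com")                      # False
print(skeleton("rnicrosoft.com") == skeleton("microsoft.com"))  # True
```

Comparing skeletons instead of raw strings is the usual way to flag lookalike domains or usernames before they reach a human's eyes.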



Doesn't "m.2 storage, but DRAM" (hopefully meaning NVMe/PCIe, not SATA, speeds) already exist as Compute Express Link (CXL), just not in this specific m.2 form factor? If only RAM weren't silly expensive right now, one could add 31 GB/s of additional bandwidth per NVMe connector.


Ideally you want to run all those trusted (read: security-critical; if compromised, the entire system is no longer trustworthy) processes on separate, audited machines, but instead busy people end up running them all together because they happen to be packaged together (like FreeIPA or Active Directory), and that makes it even harder to secure them correctly.


There's a very good reason to package these things together on the same machine: you can rely on local machine authentication to bootstrap the network authentication service. If the Kerberos secret store and the LDAP principal store are on different machines and you need both to authenticate network access, how do you authenticate the Kerberos service to the LDAP service?

