What makes XSLT inherently unsuitable for an interactive application in your mind? All it does is transform one XML document into another; there's no earthly reason why you can't ornament that XML output in a way that supports interactive JS-driven features, or use XSLT to build fragments of dynamically created pages that get compiled into the final rendered artifact elsewhere.
I'm aware I'm in a minority, but I find it sad that XSLT stalled and is mostly dead in the market. The amount of effort put into replicating most of the XML+XPath+XSLT ecosystem we had as open standards 25 years ago using ever-changing libraries with their own host of incompatible limitations, rather than improving what we already had, has been a colossal waste of talent.
Was SOAP a bad system that misunderstood HTTP while being vastly overarchitected for most of its use cases? Yes. Could overuse of XML schemas render your documents unreadable and overcomplex to work with? Of course. Were early XML libraries well designed around the reality of existing programming languages? No. But also was JSON's early implementation of 'you can just eval() it into memory' ever good engineering? No, and by the time you've written a JSON parser that beats that, you could just as easily have produced an equally improved XML system while retaining the much greater functionality it already had.
RIP a good tech killed by committees overembellishing it and engineers failing to recognise what they already had over the high of building something else.
There are still virtually zero good XML parsers but plenty of good JSON parsers so I do not buy your assertion. Writing a good JSON parser can be done by most good engineers, but I have yet to use a good XML parser.
This is based on my personal experience of having to parse XML in Ruby, Perl, Python, Java and Kotlin. It is a pain every time and I have run into parser bugs at least twice in my career, while I have never experienced a bug in a JSON parser. Implementing a JSON parser correctly is way simpler. And they are also generally more user friendly.
Take a look at C# / dotnet. The XML parser that's been around since the early 2000s is awesome, but the JSON libraries are just okay. The official JSON library leaves so much to be desired that the older, 3rd party library is often better.
Oooh, then it makes sense why there isn't a good set of layers:
XmlReader -> (XmlDocument or XmlSerializer) generally hits all use cases for serialization well. XmlReader is super-low-level streaming, when you need it. XmlDocument is great when you need to reason with Xml as the data structure, and XmlSerializer quickly translates between Xml and data structures as object serialization. There's a few default options that are wrong; but overall the API is well thought out.
In Newtonsoft I couldn't find a low level JsonReader; then in System.Text.Json I couldn't find an equivalent of mutable JObject. Both are great libraries, but they aren't comprehensive like the System.Xml stack.
JSON parsing is pretty much guaranteed to be a nightmare if you try and use the numeric types. Or if you repeat keys. Neither of which are uncommon things to do.
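A quick illustration of both failure modes with Python's stock json module (values chosen just to show the effect):

    import json

    # Big integers: Python keeps them exact, but many parsers follow ECMAScript and
    # funnel every number through a 64-bit float, silently losing precision.
    doc = '{"id": 9007199254740993}'
    print(json.loads(doc))                    # {'id': 9007199254740993}
    print(json.loads(doc, parse_int=float))   # {'id': 9007199254740992.0}  <- off by one

    # Repeated keys: the grammar allows them, but the spec doesn't say what they mean.
    # Most parsers silently keep the last value; nothing obliges them to.
    dup = '{"a": 1, "a": 2}'
    print(json.loads(dup))                                   # {'a': 2}
    print(json.loads(dup, object_pairs_hook=lambda p: p))    # [('a', 1), ('a', 2)]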
My favorite is when people start reimplementing schema ideas in json. Or, worse, namespaces. Good luck with that.
> by the time you've written a JSON parser that beats that, you could just as easily have produced an equally improved XML system while retaining the much greater functionality it already had.
Furthermore, JSON has so many dangerously-incompatible implementations that the errata for JSON implementations fills multiple books, such as advice to "always" treat numbers as strings, popular datetime "extensions" that know nothing of timezones and so on.
Yes, but that's also everything you need to know in order to understand XML, and my experience implementing APIs is that every XML implementation is obviously-correct, because anyone making a serious XML implementation has demonstrated the attention-span to read a book, while every JSON implementation is going to have some fucking weird thing I'm going to have to experiment with, because the author thought they could "get the gist" from reading two pages on a blog.
I think you are misreading the phrase "based on". The author, I believe, intends it to mean something like "descends from", "has its origins in", or "is similar to" and not that the ECMAScript 262 spec needs to be understood as a prerequisite for implementing a JSON parser. Indeed, IIRC the JSON spec defined there differs in a handful of respects from how JavaScript would parse the same object, although these might since have been cleaned up elsewhere.
JSON as a standalone language requires only the information written on that page.
Well yes, if you're writing a JSON parser in a language based on ECMAScript-262, then you will need to understand ECMAScript-262 as well as the specification for the language you're working with. The same would also apply if you were writing an XML parser in a language based on ECMAScript-262.
If you write a JSON parser in Python, say, then you will need to understand how Python works instead.
In other words, I think you are confusing "json, the specified format" and "the JSON.parse function as specified by ECMAScript-262". These are two different things.
> The same would also apply if you were writing an XML parser in a language based on ECMAScript-262.
Thankfully XML specifies what a number is and anything that gets this wrong is not implementing XML. Very simple. No wonder I have less problems with people who implement XML.
> In other words, I think you are confusing "json, the specified format" and "the JSON.parse function as specified by ECMAScript-262". These are two different things.
I'm glad you noticed that after it was pointed out to you.
The implications of JSON.parse() not being an implementation of JSON are serious though: If none of the browser vendors can get two pages right, what hope does anyone else have?
I do prefer to think of them as the same thing, and JSON as more complicated than two pages, because this is a real thing I have to contend with: the number of developers who do not seem to understand that JSON is much, much more complicated than they think.
XML does not specify what a number is, I think you might be misinformed there. Some XML-related standards define representations for numbers on top of what the basic XML spec defines, but that's true of JSON as well (e.g. JSON Schema).
If we go with the XML Schema definition of a number (say an integer), then even then we are at the mercy of different implementations. An integer according to the specification can be of arbitrary size, and implementations need to decide themselves which integers they support and how. The specification is a bit stricter than JSON's here and at least specifies a minimum precision that must be supported, and that implementations should clearly document the maximum precisions that they support, but this puts us back in the same place we were before, where to understand how to parse XML, I need to understand both the XML spec (and any additional specs I'm using to validate my XML), plus the specific implementation in the parser.
(And again, to clarify, this is the XML Schema specification we're talking about here — if I were to just use an XML-compliant parser with no extensions to handle XSD structures, then the interpretation of a particular block of text into "number" would be entirely implementation-specific.)
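To make that concrete, here is roughly what a plain, non-validating parse hands you in Python's xml.etree.ElementTree; the numeric interpretation is entirely up to the caller:

    import xml.etree.ElementTree as ET

    # A bare XML parser hands you character data; turning it into a number is
    # entirely the caller's (or the schema layer's) problem.
    elem = ET.fromstring("<count>123456789012345678901234567890</count>")
    print(repr(elem.text))   # '123456789012345678901234567890' -- just a string
    print(int(elem.text))    # Python happens to give arbitrary precision here;
                             # another host language might overflow or round instead.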
I completely agree with you that there are plenty of complicated edge cases when parsing both JSON and XML. That's a statement so true, it's hardly worth discussion! But those edge cases typically crop up — for both formats — in the places where the specification hits the road and gets implemented. And there, implementations can vary plenty. You need to understand the library you're using, the language, and the specification if you want to get things right. And that is true whether you're using JSON, XML, or something else entirely.
> my experience implementing API is that every XML implementation is obviously-correct
This is not my experience. Just this week I encountered one that doesn’t decode entity/character references in attribute values <https://news.ycombinator.com/item?id=45826247>, which seems a pretty fundamental error to me.
As for doctypes and especially entities defined in doctypes, they’re not at all reliable across implementations. Exclude doctypes and processing instructions altogether and I’d be more willing to go along with what you said, but “obviously-correct” is still too far.
Past what is strictly the XML parsing layer to the interpretation of documents, things get worse in a way that they can’t with JSON due to its more limited model: when people use event-driven parsing, or even occasionally when they traverse trees, they very frequently fail to understand reasonable documents, due to things like assuming a single text node, ignoring the possibilities of CDATA or comments.
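A small Python sketch of the "single text node" trap, for what it's worth:

    import xml.etree.ElementTree as ET

    # "assume a single text node" breaks as soon as mixed content appears:
    p = ET.fromstring("<p>Hello <b>brave</b> new world</p>")
    print(repr(p.text))            # 'Hello '      -- only the text BEFORE the first child
    print(repr(p.find("b").tail))  # ' new world'  -- the rest hangs off the child as .tail
    print(ET.tostring(p, method="text", encoding="unicode"))  # 'Hello brave new world'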
Exactly. In my experience, XML has thousands of ways to trip yourself while JSON is pretty simple. I always choose JSON APIs over XML if given the choice.
Try not to confuse APIs that you are implementing for work to make money, with random "show HN AI slop" somebody made because they are looking for a job.
FFS, have your parser fail on inputs it can not handle.
Anyway, the book defining XML doesn't tell you how your parser will handle values you can't represent on your platform either. And it also won't tell you how your parser will read timestamps. Both are completely out of scope there.
The only common issue in JSON that entire book covers is comments.
The SOAP specification does tell you how to write timestamps. It's not a single book, and doesn't cover things like platform limitations, or arrays. If you want to compare, OpenAPI's spec fills a booklet:
Aside from the other commenter's point about this being a misleading comparison, you didn't need to reinvent the whole XML ecosystem from scratch, it was already there and functional. One of the big claims I've seen for JSON though is that it has array support, which XML doesn't. And which is correct as far as it goes, but also it would have been far from impossible to code up a serializer/deserializer that let you treat a collection of identically typed XML nodes as an array. Heck, for all I know it exists, it's not conceptually difficult.
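As a rough sketch of what I mean (Python, with the usual caveat that a real serializer would want a schema or a naming convention to drive it):

    import xml.etree.ElementTree as ET

    def element_to_value(elem):
        if len(elem) == 0:                      # leaf: just return its text
            return (elem.text or "").strip()
        out = {}
        for child in elem:
            value = element_to_value(child)
            if child.tag in out:                # repeated sibling tag: promote to a list
                if not isinstance(out[child.tag], list):
                    out[child.tag] = [out[child.tag]]
                out[child.tag].append(value)
            else:
                out[child.tag] = value
        return out

    doc = ET.fromstring("<order><item>ham</item><item>eggs</item><qty>2</qty></order>")
    print(element_to_value(doc))                # {'item': ['ham', 'eggs'], 'qty': '2'}

The obvious wart is that a single <item> doesn't come back as a one-element list, which is exactly where a schema or a convention has to step in.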
You need to distinguish between the following cases: `{}`, `{"a": []}`, `{"a": [1]}`, `{"a": [1, 2]}`, `{"a": 1}`. It is impossible to express these in XML in a universal way.
XSD lets you explicitly specify whether you are dealing with one or more elements; no need to encode that information in the data itself. It also gives you access to concrete number types, so you don't have to rely on the implementation to actually support values like 1 and 2.
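For illustration (element and type names made up), something like:

    <xs:complexType name="Payload">
      <xs:sequence>
        <!-- zero, one, or many <a> elements, each declared as a 32-bit integer -->
        <xs:element name="a" type="xs:int" minOccurs="0" maxOccurs="unbounded"/>
      </xs:sequence>
    </xs:complexType>

A consumer that has the schema knows to surface <a> as a (possibly empty) list no matter how many occurrences appear in a given instance document.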
Not every XML document has an associated XSD. You need to transfer the XSD. You need to write a code generator for that XSD or otherwise use it. A lot of work which is unnecessary when you can just write `JSON.parse(string)`.
XML is not a data serialisation tool, it is a language tool. It creates notations and should be used to create phrase-like structures. So if a user needs these distinctions, he makes a notation that expresses them.
Basically the difference is that underlying data structures are different.
JSON supports arrays of arbitrary items and dictionaries with string keys and arbitrary values. It aligns well with commonly used data structures.
An XML node supports a dictionary with string keys and string values (attributes), one dedicated string attribute (the name), and an array of child nodes. This is a very unusual structure and requires dedicated effort to map to programming language objects and structures. There were even so-called "OXM" frameworks (Object-XML Mappers), similar to ORMs.
Of course in the end it is possible to build a mapping between array, dictionary and DOM. But JSON is a much more natural fit.
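Sketched as (very simplified) Python types, the mismatch looks something like this:

    from dataclasses import dataclass, field
    from typing import Union

    # JSON values map directly onto everyday structures:
    Json = Union[None, bool, int, float, str, list, dict]   # dict keys are strings

    # An XML element is a different shape entirely (text content and the ordering
    # of mixed content are left out here, which only makes the mapping harder):
    @dataclass
    class XmlElement:
        name: str                                                    # the tag
        attributes: dict[str, str] = field(default_factory=dict)     # strings only
        children: list["XmlElement"] = field(default_factory=list)   # ordered child nodes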
XML is immediately usable if you need to mark up text. You can literally just write or edit it and invent tags as needed. As long as they are consistent and mark what needs to be marked any set of tags will do; you can always change them later.
XML is meant to write phrase-like structures. Structures like this:
int myFunc(int a, void *b);
This is a phrase. It is not data, not an array or a dictionary, although technically something like that will be used in the implementation. Here it is written in a C-like notation. The idea of XML was to introduce a uniform substrate for notations. The example above could be like:
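(One possible encoding; element names invented for illustration.)

    <!-- element and attribute names are made up; any consistent notation would do -->
    <function name="myFunc" returns="int">
      <param name="a" type="int"/>
      <param name="b" type="pointer" target="void"/>
    </function>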
This is, of course, less convenient to write than a specific notation. But you don't need a parser and can have tools to process any notation. (And technically a parser can produce its results in XML, it is a very natural form, basically an AST.) Parsers are usually part of a tool and do not work on their own, so first there is a parser for C, then an indexer for C, then a syntax highlighter for C and so on: each does some parsing for its own purpose, thus doing the same job several times. With XML, the processing scenario is not limited to any one purpose: the above example can be used for documentation, indexing, code generation, etc.
XML is a very good fit for niche notations written by few professionals: interface specifications, keyboard layouts, complex drawings, and so on. And it is being used there right now, because there is no other tool like it, aside from a full-fledged language with a parser. E.g. there is an XML notation that describes numerous bibliography styles. How many people need to describe bibliography styles? Right. With XML they start getting usable descriptions right away and can fine-tune them as they go. And these descriptions will be immediately usable by generic XML tools that actually produce these bibliographies in different styles.
Processing XML is like parsing a language, except that the parser is generic. Assuming you have no text content it goes in two steps: first you get an element header (name and attributes), then the child elements. By the time you get these children they are no longer XML elements, but objects created by your code from these elements. Having all that you create another object and return it so that it will be processed by the code that handles the parent element. The process is two-step so that before parsing you could alter the parsing rules based on the element header. This is all very natural as long as you remember it is a language, not a data dump. Text complicates this only a little: on the second step you get objects interspersed with text, that's all.
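A minimal sketch of that two-step flow, using iterparse from Python's standard library; the handler table and the tag names are invented for the example:

    import io
    import xml.etree.ElementTree as ET

    def process(xml_text, handlers):
        # Step 1: at "start" we only have the element header (name + attributes),
        # which is enough to decide how to handle it. Step 2: at "end" the children
        # have already been turned into our own objects, so we fold them into a
        # result for the parent to consume.
        stack = [("(root)", [])]              # (tag, results built from its children so far)
        source = io.BytesIO(xml_text.encode("utf-8"))
        for event, elem in ET.iterparse(source, events=("start", "end")):
            if event == "start":
                stack.append((elem.tag, []))
            else:                             # "end"
                tag, children = stack.pop()
                result = handlers[tag](elem.attrib, children)   # our object, not an Element
                stack[-1][1].append(result)
                elem.clear()                  # discard the raw XML as we go
        return stack[0][1][0]

    handlers = {
        "param":    lambda attrs, kids: (attrs["type"], attrs["name"]),
        "function": lambda attrs, kids: {"name": attrs["name"], "params": kids},
    }
    doc = '<function name="myFunc"><param type="int" name="a"/><param type="void*" name="b"/></function>'
    print(process(doc, handlers))
    # {'name': 'myFunc', 'params': [('int', 'a'), ('void*', 'b')]}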
People cannot author data dumps. E.g. the relational model is a very good fit for internal data representation, much better than JSON. But there is no way a human could author a set of interrelated tables aside from tiny toy examples. (The same thing happens with state machines.) Yet a human can produce tons of phrase-like descriptions of anything without breaking a sweat. XML is such an authoring tool.
But the part of XML that is equivalent to JSON is basically five special symbols: angle brackets, quotes and ampersand. Syntax-wise this is less than JSON (and it even has two kinds of quotes). All the rest are extras: grammar, inclusion of external files (with name and position based addressing), things like element IDs and references, or a way to formally indicate that contents of an element are written in some other notation (e. g. "markdown").
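In Python terms that core amounts to little more than:

    from xml.sax.saxutils import escape, quoteattr

    # Escape the special characters in text content, quote attribute values --
    # that is the whole JSON-equivalent "syntax" layer.
    print(escape("a < b && c > d"))        # a &lt; b &amp;&amp; c &gt; d
    print(quoteattr("Tom & Jerry <3"))     # "Tom &amp; Jerry &lt;3"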
Having used XSLT, I remember hating it with the passion of a thousand suns. Maybe we could have improved what we had, but anything I wanted to do was better done somehow else.
I'm glad to have all sorts of specialists on our team, like DBAs, security engineers, and QA. But we had XSLT specialists, and I thought it was just a waste of effort.
You can do some cool stuff, like serving an RSS file that is also styled/rendered in the browser. A great loss for the 2010 idea of the semantic web. One corporation is unhappy because it does not cover their use cases.
> RIP a good tech killed by committees overembellishing it and engineers failing to recognise what they already had over the high of building something else.
Hope I can quote this about the Transformer architecture one day.
BMW tried that with the i3, it wasn't particularly popular. An engine, fuel system and a generator are all relatively complex additions compared to just putting the same cost and vehicle space into more batteries, and the public charging networks are definitely up to the task by now (having been EV-only for almost 5 years now).
I've ridden in one and this was basically a case of "you get the worst of both worlds". The engine was small and running at a constant RPM, leading to an annoying noise in the cabin. The range extension was not particularly huge. Worse, when we did eventually run out of battery on one trip, the range extender was unable to recharge the battery after refueling the car, forcing us to scramble to find an available charging station with the car on the tow truck...
I've been driving one for a while. On the trips I've driven, even on the highway, the engine was able to keep up with the energy draw. I think it's around 60HP. If I'm on a long trip, I'll start the engine once I reach 75% of my battery capacity. That gives a significant buffer for the engine. It's a serial hybrid.
Yes, the Volt was a better balanced version of that. It had a longer EV range and the range extender was larger. It was much less likely to need the range extender and when it did, the larger (1.4l) engine was no more noisy than any gas engine.
>> the range extender was unable to recharge the battery
Could you expand on this? What was the actual problem? For example, did the range extender start and run? Did it put any energy into the battery at all?
We ran out of battery and fuel (I was in the backseat so I don't remember the exact sequence, I just remember the car basically losing power without much warning on the autobahn). The car was towed to a gas station where we proceeded to refill the tank, but the range extender would not start and the car computer was adamant we had to recharge. Which led to a quest to find a public charger in the middle of the night; after 2 failed attempts, we gave up and asked to be towed to our destination instead since it was close enough.
The original BMW i3 had/has a lockout that prevented the range extender from charging the battery until it reached 6% in order to meet California's ZEV BEVx regulation.
I can't help thinking more startups need greybeards around. (Of which, realistically, I'm now one.)
Largest table 100 million rows and they were paying 6 figures for database services annually? I have one now that sits happily enough on an 8yo laptop. I've worked on systems that had similar-scale tables chugging along on MSSQL 2000 boxes that were very average even 20 years ago. There just isn't a need for cloud-scale systems and cloud-scale bills for that data volume.
The problems they're describing should never have got that far without an experienced hand pointing out they didn't make sense, and if they'd hired that greybeard they'd have spotted it long before.
And yet it happens. A doctor in my family told the story of a patient they were treating in hospital who medically needed to lose weight, and who they found unable to get any reduction until they dropped below _200_ calories a day.
Metabolism is _significantly_ more complex than CI/CO, from experience.
- a food addicted person is lying about their consumption
200 kcal a day.. yeah sure. A human body needs more than that just to breathe and pump blood. Even comatose a skinny person needs 5 times as much.
Overweight people have a significantly higher metabolic base rate. Just breathing can easily be 1000kcal a day if your lungs have to move 30 kilos of upper body fat 10 times a minute. They also have more muscle mass compared to the average person their size, even when not physically active, which increases MBR as well.
Weight loss and gain is a solved problem, but self control and human behavior is not.
Perpetuating myths of impossible weight loss is not beneficial for our society and moves us further away from solving the underlying issues.
I can believe this as a human who fasts. I just don't eat every other day. I've fasted for multiple days. You would be amazed at how much the scale doesn't move. I can lose zero weight after 36 hours of nothing but water entering me. The body is less CICO and more a system trying to maintain homeostasis as much as possible and pulling every lever it can.
Yes, eventually, eating every other day, I did lose weight, but we're talking a steady glide of 1-2 lbs a week, nothing as severe as people would expect a severely overweight person who only ate half the week to lose.
If you treat humans as biological machines, the signaling system (hormones, the endocrine system) is very important. When the signaling system is messed up, your CI/CO model with self-control doesn't work at all. The signals need to be fixed.
It's insane how arrogant people can be about this. You have not accounted for all of the variables and that is blatantly obvious. The phenomenon is well known and there are multi-variable equations for it, many different models. One popular model is that NEAT (non-exercise activity thermogenesis) can decrease below the caloric deficit, meaning you still gain weight by becoming subconsciously lazier despite everything feeling equally difficult subjectively. There are several other more advanced models adding other variables, some depending on insulin sensitivity, for example. Anyway, no physics is broken, laymen are just naive to the complexity and adaptability of biology.
200kcal/day is less than 10W. Since all energy the body uses is released as heat this puts the maximum sustainable heat radiation with such a diet at 10W.
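Back of the envelope:

    kcal_per_day = 200
    watts = kcal_per_day * 4184 / (24 * 3600)   # 1 kcal = 4184 J; 86,400 s in a day
    print(round(watts, 1))                      # ~9.7 W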
That's about the level of a typical household LED, which at most feels slightly warm to the touch.
At that energy level you could not sustain weight while maintaining body temperature let alone having a healthy metabolism. That's just a plain fact.
You will lose weight long before you reach 200kcal/day.
That's a good argument against what I said for this particular case. I didn't realize 200kcal was so little. Imagine my comment was written in reply to a higher number though because that criticism is definitely valid for a large number of posts people make in general. ;)
But it is basic physics, which is in fact being violated. It's really very normal to require exceptional proof for this. Like literally any scientific study, not just an anecdote claiming magic.
It is amazing the hoops people will jump through and the lies they will tell themselves and others rather than facing the obvious truth that they are consuming too many calories.
You can't cheat thermodynamics, so something does not add up. Most likely the calorie estimates, if they were self-reported.
To illustrate a single 37.5g snickers is slightly below 200kcal. I probably get that number of calories just from the milk in my coffee in an average day.
For how long? Irregularities can persist for a small amount of time, no doubt; but for how long does one maintain weight and life on 300 calories a day?
I don't believe that people can gain weight while eating almost nothing, but I believe that their internal distribution of energy may be out of whack.
Proponents of the naive thermodynamics model tend to assume that only "excessive" energy is stored as bodily fat, once all the other tissues have had their fair share.
That is not really true in insulin resistant people, whose storage may be excessive and leave the rest of the body unsatisfied and hungry, which drives them to eat more than a healthy person would.
To us, this looks like deliberate overeating; to them, it is a result of constant hunger caused by the fact that some part of the energy consumed is being immediately locked away in fat tissue by dysregulated metabolic processes.
Notably, it isn't easy to "correct" this situation by just eating less, because that will leave those people feeling really starved. Insulin sensitivity must be restored first, then the fat stores will give up their excess willingly and that person won't suffer.
HP did the same when migrating away from the PA-RISC architecture.
IIRC as part of their testing they set it up so it would ingest, recompile and output code for the same architecture, and found that it could produce speed improvements due to context awareness.
Nikon aren't doing much better on letting third-party lenses into the ecosystem, sadly - speaking as someone with three Nikon DSLRs and Sigma glass I'm very happy with.
That's incredibly short-sighted, not least because the number of fires per vehicle is orders of magnitude higher for combustion vehicles. I'd be astonished if that made actuarial sense if actually investigated.
I've seen and used underground car chargers in multiple European cities. They're definitely viable.
Yes, except ICE fires are a lot easier to put out (possibly partly because of long experience developing tools & techniques vs for electric fires). Electric cars were also banned at most racetracks for years (not sure if that is still true).
I'm a software engineer not a firefighter, but my understanding is that it's significantly an experience thing. Lithium battery fires clearly aren't trivial to deal with and do burn hotter than petroleum fires, but they're also very much less frequent and firefighters are learning how to handle them.
I'm not sure what the specific issue would be with race tracks. The average road course is very open and a very controlled environment; if you had to deal with a car fire, it's probably among the best places to do so, thanks to lessons learned in the blood of previous generations of drivers and track workers.
Yup, I think experience and new tools & techniques will have battery fires as controllable as gasoline fires relatively soon.
I'm pretty sure the racetrack issue was related to the garages. The pretty much standard rule I've seen at all racetracks I've been to is an absolute "NO FUELING IN THE GARAGE", as in if they even catch you with an open fuel can in the garage you are immediately banned from the premises. They're obviously trying to minimize the risk of one of their buildings burning down, especially since once a fire started in one team's bay, it would be likely to spread to adjacent bays. I expect they just consider electric fires to start in broader circumstances.
Also, I think the rules might be relaxing some, as I just checked and at least the SCCA is now working on rules for electric cars on road tracks, which presumably means some tracks are more open to it.
It seems that most tracks in the US allow EVs now. A handful of EVs regularly race in 24 hours of lemons which operates all over the US. They don't have charging infrastructure though, so there have been some...creative solutions.
Agreed, but they're not that complex to retrofit to a parking space. I can foresee a future where each space has a port and an account card reader - they'd make the buildings more valuable, the supplier to that space has a basically guaranteed income stream, and the government has an easy emissions reduction. Wins all round, so why wouldn't it happen?
The operator could demand an exclusivity contract from the landlord, provide faulty equipment, then charge high fees to repair it, eventually leading to many stalls non-functional until the contract is invalidated in court or the two parties settle, the chargers are ripped out, maybe with purposeful permanent damage to the wiring to make it unusable, and another mildly more honest provider comes in and does something slightly better but not by much.
Or, an enterprising landchad could realize they can charge 10% more kWh than people actually pull (blaming efficiency losses), along with a healthy margin for "maintenance".
Sure, it's a market with massive abuse potential, but we have a world full of them and we regulate to control the abuses. The underlying service is clearly of societal benefit and would clearly be profitable to all parties, so it's worth doing and working out the regulation to make it viable.