Apple's native SwiftUI framework and Google's Jetpack Compose framework stumbled upon basically the same layout system, "Constraints down, sizes up":
1. Proposal: The parent component proposes a size/constraints to the child
2. Measurement: The child component picks its own size based on those constraints
3. Placement: The parent component then positions the child in its coordinate space
It's all done in a single pass. It scales well, and it fits naturally with React-style reactive UI frameworks. (SwiftUI and Jetpack Compose are both heavily inspired by React.)
That is pretty much how CSS works! At the most basic level, flow layout is about widths down, heights up. But this basic model doesn't let you do a lot of things some people want to do, like distributing a container's left-over space equally among its children (imagine a table). So CSS added more stuff, like Flexbox, which fundamentally works the same way, though it adds a second pass.
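For illustration, here is a minimal sketch of the constraints-down, sizes-up idea in plain JavaScript; the names (`Row`, `FixedBox`, `measure`) are made up, and this is not how SwiftUI, Compose, or CSS engines are actually implemented:

```js
// Hypothetical sketch of "constraints down, sizes up"; not any real framework's API.
// A leaf node just reports its intrinsic size, clamped to the proposed constraints.
const FixedBox = (w, h) => ({
  measure: (constraints) => ({
    width: Math.min(w, constraints.maxWidth),
    height: Math.min(h, constraints.maxHeight),
  }),
});

// A horizontal container: proposes constraints down, reads sizes up, then places.
const Row = (children) => ({
  measure: (constraints) => {
    let x = 0, height = 0;
    const placements = [];
    for (const child of children) {
      // 1. Proposal: offer the child the space still left in the row.
      const size = child.measure({
        maxWidth: constraints.maxWidth - x,
        maxHeight: constraints.maxHeight,
      });
      // 2. Measurement: the child picked its own size within those constraints.
      // 3. Placement: the parent decides where the child goes in its own coordinates.
      placements.push({ x, y: 0, size });
      x += size.width;
      height = Math.max(height, size.height);
    }
    return { width: x, height, placements };
  },
});

const layout = Row([FixedBox(100, 40), FixedBox(300, 20)])
  .measure({ maxWidth: 320, maxHeight: 480 });
console.log(layout.placements); // the 300px box gets clamped to the remaining 220px
```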
> With JSON, you often send ambiguous or non-guaranteed data. You may encounter a missing field, an incorrect type, a typo in a key, or simply an undocumented structure. With Protobuf, that’s impossible. Everything starts with a .proto file that defines the structure of messages precisely.
> Never add a required field, instead add `// required` to document the API contract. Required fields are considered harmful by so many they were removed from proto3 completely.
Protobuf clients need to be written defensively, just like JSON API clients.
The blog seems to contain other similar misunderstandings: for example the parallel article against using SVG images doesn't consider scaling the images freely a benefit of vector formats.
https://aloisdeniel.com/blog/i-changed-my-mind-about-vector-... seems fairly clearly to be talking about icons of known sizes, in which case that advantage disappears. (I still think the article is misguided: the benefit of runtime-determined scaling should have been mentioned, there are no benchmarks supporting its performance theses, and I'd be surprised if the difference were anything but negligible. Vector graphics pipelines are getting increasingly good, the best ones do not work in the way described, and they can in fact be more efficient than raster images, at least for simple icons like those shown.)
> seems fairly clearly to be talking about icons of known sizes, in which case that advantage disappears.
That's the point: obliviousness to different concerns and their importance.
Among mature people, the main reason to use SVG is scaling vector graphics (in different contexts, including resolution-elastic final rendering, automatically exporting bitmap images from easy to maintain vector sources, altering the images programmatically like in many icon collections); worrying about file sizes and rendering speed is a luxury for situations that allow switching to bitmap images without serious cost or friction.
Are there display pipelines that cache the generated-for-my-device-resolution svgs instead of doing all the slower parsing etc from scratch every time, achieving benefits of both worlds? And you can still have runtime-defined scaling by "just" rebuilding the cache?
> The drawback to using vector images is that it can take longer to render a vector image than a bitmap; you basically need to turn the vector image into a bitmap at the size you want to display on the screen.
Indeed, would be nice if one of these blogs explained the caching solution to tackle the drawback.
Another issue, I think, especially at smaller sizes, is the pixel snapping might be imperfect and require "hints" like in fonts? Wonder if these icons suffer from these/address it
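As a rough sketch of what such a cache could look like in the browser (assuming the SVG declares an intrinsic size; the function and cache-key scheme are invented here):

```js
// Hypothetical sketch: rasterise an SVG icon once at device-pixel size,
// keep the bitmap around, and reuse it for later draws.
const iconCache = new Map(); // "url@px" -> ImageBitmap

async function iconBitmap(url, cssSize) {
  const px = Math.round(cssSize * devicePixelRatio);
  const key = `${url}@${px}`;
  if (!iconCache.has(key)) {
    const img = new Image();
    img.src = url;
    await img.decode();                                    // parse the SVG once
    const canvas = new OffscreenCanvas(px, px);
    canvas.getContext("2d").drawImage(img, 0, 0, px, px);  // rasterise at device resolution
    iconCache.set(key, canvas.transferToImageBitmap());
  }
  return iconCache.get(key);                               // later calls skip parsing and rasterising
}
```

Since the cache key includes the device-pixel size, asking for a different size just rasterises once more, which is the "rebuild the cache" idea; pixel hinting at small sizes is a separate problem this doesn't address.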
Increasingly I think you’ll find that the efficient format for simple icons like this actually isn’t raster, due to (simplifying aggressively) hardware acceleration. We definitely haven’t reached that stage in wide deployment yet, but multiple C++ and Rust projects exist where I strongly suspect it’s already the case, at least on some hardware.
The best place for such a cache is a GPU texture, and in a shader that does simple texture mapping instead of rasterizing shapes it would cost more memory reads in exchange for less calculations.
Most web frameworks do both at the same time, to the point where having to write code that enforces a type contract after deserializing is a dealbreaker for me. I want to be able to define my DTOs in one place, once, and have that both deserialize and enforce types/format. Anything else is code smell.
I'm in the same boat. I mostly write Rust and Python. Using serde_json and Pydantic, you get deserialization and validation at the same time. It allows you to de-serialize really "tight" types.
Most of my APIs are internal APIs that accept breaking changes easily. My experience with protobufs is that it was created to solve problems in large systems with many teams and APIs, where backwards compatibility is important. There are certainly systems where you can't "just" push through a breaking API change, and in those cases protobufs make sense.
> My experience with protobufs is that it was created to solve problems in large systems with many teams and APIs
Also significant distribution such that it’s impossible to ensure every system is updated in lockstep (at least not without significant downtime), and high tail latencies e.g. a message could be stashed into a queue or database and processed hours or days later.
I feel like that's fine since both things go hand in hand anyway.
And if choosing the JSON format comes with a rather high rate of contract breaches, it might just be easier to switch the format instead of fixing the contract.
The post is about changing the serialization format so that enforcing contracts becomes easier; and I am defending the post, so I don't understand what you're hinting at here.
Isn't the core issue just language and implementation differences of clients vs servers here?
I went all in with Go's marshalling concept, and am using my Gooey framework on the client side nowadays. If you can get past Go's language limitations, it's pretty nice to use and _very_ typesafe. Just make sure to json:"-" the private fields so they can't be injected.
Skew is an inherent problem of networked systems no matter what the encoding is. But, once the decoding is done, assuming there were no decoding errors in either case, at least with protobuf you have a statically typed object.
You could also just validate the JSON payload, but most people don't bother. And then they just pass the JSON blob around to all sorts of functions, adding, modifying, and removing fields until nobody knows for sure what's in it anymore.
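A minimal sketch of what that validation can look like at the boundary (the field names here are invented):

```js
// Hypothetical sketch: validate once at the edge, then pass around a known shape
// instead of the raw JSON blob.
function parseUser(json) {
  const data = JSON.parse(json);
  if (typeof data.id !== "number") throw new TypeError("user.id must be a number");
  if (typeof data.name !== "string") throw new TypeError("user.name must be a string");
  // Return only the validated fields, so later code can't grow hidden dependencies
  // on whatever else happened to be in the payload.
  return { id: data.id, name: data.name };
}

const user = parseUser('{"id": 7, "name": "Ada", "debug": true}');
console.log(user); // { id: 7, name: "Ada" }
```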
> You could also just validate the JSON payload, but most people don't bother.
I don't think I have ever worked somewhere that didn't require people to validate inputs.
The only scenario could be prototypes that made it to production, and even when it's thrown over the wall I'll make it clear that it is unsupported until it meets minimum requirements. Who does it is less important than it happening.
The convention at every company I've worked at was to use DTOs. So yes, JSON payloads are in fact validated, usually with proper type validation as well (though unfortunately that part is technically optional, since we work in PHP).
Usually it's not super strict, as in it won't fail if a new field suddenly appears (but will if one that's specified disappears), but that's a configuration thing we explicitly decided to set this way.
I think the OP meant something far simpler (and perhaps less interesting), which is that you simply cannot encounter key errors due to missing fields, since all fields are always initialized with a default value when deserializing. That's distinct from what a "required" field is in protobuf.
> Protobuf clients need to be written defensively, just like JSON API clients.
Oof. I'd rather just version the endpoints and have required fields. Defensive code is error-prone, verbose, harder to reason about, and still not guaranteed. It really feels like an anti-pattern.
It is not surprising that a libc headed by the Chief GNUisance would produce an ecosystem where everything works better when apps are compiled from source for each distro.
Whoa, I had no idea about that. Tauri is way less fully baked than I realized.
The bug goes on to explain that Tauri apps can't have Windows "package identity", which means that there's a bunch of Windows APIs you simply can't use in Tauri, including the notifications API.
Without package identity, IMO, Tauri isn't ready for primetime on Windows.
Huh, I thought I'd seen notifications and app URI handlers in a Tauri app. Maybe they were using a custom WiX builder and not the Tauri template, though.
Speaking of URI handlers, any idea what this bit in Microsoft's docs is supposed to mean?
> Starting with the Windows 10 Creators update and in all Windows 11 versions, supported links clicked in Microsoft Edge Legacy will launch the corresponding app. Supported links clicked in supported browsers (for example, Microsoft Edge Chromium, Firefox, Internet Explorer, etc.), will keep you in the browsing experience.
I can think of two ways to interpret it, neither of which seems good:
1) It doesn't work at all in any modern browser, and "supported" is the term Microsoft has chosen to describe this state of affairs?
2) Microsoft is sneakily installing a Firefox extension to subvert URL handling and embed UWP apps inside Firefox ("the browsing experience")?
"Yet you don’t need SEEDS analysis to know that the British economy itself is at an advanced stage of disintegration."
"Put another way, very little of the world’s supposedly enormous wealth actually exists in any meaningful sense."
Citation overwhelmingly needed. His claim that the economy is disintegrating is supported by the argument that: "you know, just look around." But what we're looking around and seeing is wealth accumulating at the very top.
The mistake he's making is thinking that if most people aren't doing well, then nobody is doing well, that the 1% aren't even really rich because their wealth is all a fiction.
In fact, wealthy people are really, actually wealthy. They are unimaginably wealthy. It is literally beyond the author's imagination how wealthy they are, leading him to the truly absurd conclusion that they're not really even wealthy at all.
"Nobody could be that wealthy, could they?!" Yes, my dude. Yes, they actually can be that wealthy. Indeed, they actually are.
That's entirely right. Products have to transition from fast-moving exploratory products to boring infrastructure. We have different goals and expectations for an ecommerce web app vs. a database, or a database vs. the software controlling an insulin pump.
Having said that, at this point, Cloudflare's core DDoS-protection proxy should now be built more like an insulin pump than like a web app. This thing needs to never go down worldwide, much more than it needs to ship a new feature fast.
You are simplifying the control software of an insulin pump to a degree that does not match reality. I'm saying that because I actually reviewed the code of one, and the amount of safety consciousness on display there was off the charts compared to what you usually encounter in typical web development. You also underestimate the dynamic nature of the environment these pumps operate in, as well as the amount of contingency planning that they embody: failure modes of each and every part in the pump were taken into consideration, and there are more such parts than you are most likely aware of. This includes material defects, defects resulting from abuse, wear & tear, parts being simply out of spec, and so on.
To see this as the typical firmware that ships with say a calculator or a watch is to diminish the accomplishment considerably.
I had a former coworker who moved from the medical device industry to similar-to-Cloudflare web software. While he had some appreciation for the validation and intense QA they did (they didn't use formal methods, just heavy QA and deep specs), it became very clear to him that those approaches don't work when speed of release is a concern (his development cycles were annual, not weekly or daily). And they absolutely don't work in contexts where dealing with user abuse or reacting quickly is necessary. The contexts are just totally different.
It is perfectly possible to engineer for faster cycles without losing control over what your code can and cannot do. It is harder, for sure. But I do not think it is a matter of this absolutely not working; that's black-and-white thinking, and it never is black and white, it is always some shade of gray.
For instance: validating a configuration before loading it is fairly standard practice, as are smoke tests and gradual roll-outs. Configuration fuck-ups are fairly common so you engineer with that in mind.
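As a rough sketch of that validate-before-load idea (the invariants and names below are invented, not anything Cloudflare actually does):

```js
// Hypothetical sketch: never swap in a config that fails basic invariants.
function validateConfig(candidate) {
  const errors = [];
  if (!Array.isArray(candidate.rules)) errors.push("rules must be an array");
  else if (candidate.rules.length > 200) errors.push("too many rules");
  return errors;
}

let activeConfig = { rules: [] }; // last known-good config

function applyNewConfig(candidate) {
  const errors = validateConfig(candidate);
  if (errors.length > 0) {
    console.warn("rejecting config update:", errors);
    return activeConfig;          // keep serving with the last good config
  }
  activeConfig = candidate;       // only swap once the candidate passed validation
  return activeConfig;
}

applyNewConfig({ rules: new Array(500).fill({}) }); // rejected: too many rules
```

Gradual roll-out is the same idea one level up: apply the new config to a small slice of traffic first and watch for errors before going wide.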
Thanks for spelling that out. It's so often tempting to be reductionist about things, but there is often a tremendous amount of thankless engineering inside products that we are privileged to consider as being somewhat boring. It takes a lot of work to make something so dynamic and life-critical and make it reliable enough to be considered simple, when it's anything but.
If human beings had a small chance of transforming into, say, quadrupeds, or suddenly growing tentacles, extra hearts, organs, etc., in any given year… then wouldn’t designing a safe insulin pump literally be impossible?
The point still stands. The human body still isn't going to change. That's why an insulin pump can afford all kinds of rigorous engineering, while web-facing infrastructure, on the other hand, needs to be able to quickly adapt to changes.
> That's why an insulin pump can afford all kinds of rigorous engineering, while web-facing infrastructure, on the other hand, needs to be able to quickly adapt to changes.
The only reason we have a web in the first place is because of rigorous engineering. The whole thing was meant to be decentralized, if you're going to purposefully centralize a critical feature you are not going to get away with 'oh we need to quickly adapt to changes so let's abandon rigor'.
That's just irresponsible. In that case we'd be better off without CF. And I don't see CF arguing this, in fact I'm pretty sure that CF would be more than happy to expend the extra cycles so maybe stop attempting to make them look bad?
All the more reason to be careful about relying on humans to avoid making mistakes when changing it rather than moving quickly and letting things fail in production.
an insulin pump is a good metaphor; insulin as a hormone has a lot of interactions and the pump itself, if not wanting to unalive its user, should (most do not) account for external variables, such as: exercise, heart rate, sickness, etc. these variables are left for the user to deal with, and in this case, is a subpar experience in managing a condition.
In part 1 of this article, the author wrote, "XSLT is an essential companion to RSS, as it allows the feed itself to be perused in the browser"
Actually, you can make an RSS feed user-browsable by using JavaScript instead. You can even run XSLT in JavaScript, which is what Google's polyfill does.
I've written thousands of lines of XSLT. JavaScript is better than XSLT in every way, which is why JavaScript has thrived and XSLT has dwindled.
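For what it's worth, a minimal sketch of the in-browser approach, using the built-in XSLTProcessor (the feed and stylesheet URLs are placeholders, and error handling is omitted):

```js
// Fetch an RSS feed and an XSL stylesheet, transform in JavaScript, show the result.
async function renderFeed(feedUrl, xslUrl) {
  const parse = (text) => new DOMParser().parseFromString(text, "application/xml");
  const [feed, xsl] = await Promise.all([
    fetch(feedUrl).then((r) => r.text()).then(parse),
    fetch(xslUrl).then((r) => r.text()).then(parse),
  ]);
  const processor = new XSLTProcessor();
  processor.importStylesheet(xsl);
  document.body.replaceChildren(processor.transformToFragment(feed, document));
}

renderFeed("/feed.xml", "/feed.xsl");
```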
Obviously not in every way. XSLT is declarative and builds pretty naturally off of HTML for someone who doesn't know any programming languages. It gives a very low-effort but fairly high power (especially considering its neglect) on-ramp to templated web pages with no build steps or special server software (e.g. PHP, Ruby) that you need to maintain. It's an extremely natural fit if you want to add new custom HTML elements. You link a template just like you link a CSS file to reuse styles. Obvious.
The equivalent Javascript functionality's documentation[0] starts going on about classes and callbacks and shadow DOM, which is by contrast not at all approachable for someone who just wants to make a web page. Obviously Javascript is necessary if you want to make a web application, but those are incredibly rare, and it's expected that you'll need a programmer if you need to make an application.
Part of the death of the open web is that the companies that control the web's direction don't care about empowering individuals to do simple things in a simple way without their involvement. Since there's no simple, open way to make your own page that people can subscribe to (RSS support having been removed from browsers instead of expanded upon for e.g. a live home page), everyone needs to be on e.g. Facebook.
It's the same with how they make it a pain to just copy your music onto your phone or backup your photos off of it, but instead you can pay them monthly for streaming and cloud storage.
> XSLT is declarative and builds pretty naturally off of HTML for someone who doesn't know any programming languages.
Have you ever met a single non-programmer who successfully picked up XSLT of their own volition and used it productively?
I'd be willing to bet good money that the Venn diagram of users that fit the intersection of "authoring content for the web", "care about separating content from HTML", "comfortable with HTML", "not comfortable with JavaScript", and "able to ramp up on XSLT" is pretty small.
At some point, we have to just decide "sorry, this use case is too marginal for every browser to maintain this complexity forever".
> Have you ever met a single non-programmer who successfully picked up XSLT of their own volition and used it productively?
Hi! I'm a non-programmer who picked up XSLT of my own volition and spent the last five-ish years using it to write a website. I even put up all the code on github: https://github.com/zmodemorg/wyrm.org
I spent a few weeks converting the site to use a static site generator, and there were a lot of things I could do in XSLT that I can't really do in the generator, which sucks. I'd revert the entire website in a heartbeat if I knew that XSLT support would actually stick around (in fact, that's one of the reasons I started with XSLT in the first place: I didn't think that support would go away any time soon, but here we are).
For what it's worth, you can still run an XSL processor as a static generator. You of course lose some power like using document() to include information for a logged in user, but if it's a static site then that's fine.
I eventually started using server-side XSL processing (https://nginx.org/en/docs/http/ngx_http_xslt_module.html) because I wanted my site to be viewable in text-based browsers, too, but it uses the same XSLT library that the browsers use and I don't know how long it's going to be around.
> Have you ever met a single non-programmer who successfully picked up XSLT of their own volition and used it productively?
Admittedly this was 20ish years ago, but I used to teach the business analysts XSLT so they could create/edit/format their own reports.
At the time Crystal Reports had become crazy expensive so I developed a system that would send the data to the browser as XML and then an XSLT to format the report. It provided basic interactivity and could be edited by people other than me. Also, if I remember, at the time it only worked in IE because it was the only browser with the transform function.
Funnily enough, XSLT is one of those things that I don't know very well but LLMs do. I find that I can ask Gemini to blurt out an XSLT implementation of my requirements given a snippet of example doc, and I have used this to good effect in some web scrapers/robots.
I was such a non-programmer as a child, yes. At the time that XSLT was new, if you read a book on HTML and making web pages from the library, it would tell you about things like separating content from styles and layout, yes. Things that blew my mind were that you could install Apache on your own computer and your desktop could be a website, or (as I learned many years later) that you could make a server application (or these days now Javascript code) that calls a function based on a requested path instead of paths being 1:1 with files. By contrast, like I said XSLT was just a natural extension of HTML for something that everyone who's written a couple web pages wants to do.
The fact that the web's new owners have decided that making web pages is too marginal a use-case for the Web Platform is my point.
Yes that's why XSLT is such a natural fit when you learn about HTML+CSS. It's the same idea, but applied to HTML templates, which is something you immediately want when you hand-write HTML (e.g. navbars, headers, and footers that you can include on every page).
Your problem here is that you're hand-writing HTML including all the templates. This wasn't a good way to do it 30 years ago and it's not a good way to do it now.
See all these "static site generators" everyone's into these days? We used those in the mid-90s. They were called "Makefiles".
Yeah because I was 11 and didn't know what a Makefile was. That's my point. I wanted to make web pages, and didn't know any programming. HTML is designed to be hand-written. You just write text, and when you want it to look different, you wrap it in a thing. When doing this, you'll quickly want to re-use snippets/invent your own tags. XSLT gives a solution to this without saying "okay let's back up and go learn how to use a command line now, and probably use an entirely different document format" (SSGs) or "okay let's back up and learn about functions, variables, classes, and callbacks, and maybe a compiler" (Javascript). It just says "when you want to make your own tags, extract them into a 'template' tag, then include your templates just like you include a CSS file for styles".
I've seen non-programmers learn SQL, and SQL is far more inconsistent, complex, non-orthogonal, fragmented, footgunny, and user hostile than most programming languages.
I'm not sure what I mean by this, WRT XSLT vs Javascript.
XSL is a Turing-complete functional programming language, not a declarative language. When you xsl:apply-template, you're calling a function.
Functional programming languages can often feel declarative. When XSL is doing trivial, functional transformations, when you keep your hands off of xsl:for-each, XSL feels declarative, and doesn't feel that bad.
The problem is: no clean API is perfectly shaped for UI, so you always wind up having to do arbitrary, non-trivial transformations with tricky uses of for-each to make the output HTML satisfy user requirements.
XSL's "escape hatch" is to allow arbitrary Turing-complete transformations, with <xsl:variable>, <xsl:for-each>, and <xsl:if>. This makes easy transformations easy and hard transformations possible.
XSL's escape hatch is always needed, but it's absolutely terrible, especially compared to JS, especially compared to modern frameworks. This is why JS remained popular, but XSL dwindled.
> It gives a low-effort but fairly high power (especially considering its neglect) on-ramp to templated web pages with no build steps or special server software (e.g. PHP, Ruby) that you need to maintain. It's an extremely natural fit if you want to add new custom HTML elements.
JavaScript is a much better low-effort high-power on-ramp to templated web pages with no build steps or server software. JavaScript is the natural fit for adding custom HTML elements (web components).
Seriously, XSLT is worse than JavaScript in every way, even at the stuff that XSLT is best at. Performance/bloat? Worse. Security? MUCH worse. Learnability / language design? Unimaginably worse.
EDIT: You edited your post, but the Custom Element API is for interactive client-side components. If you just want to transform some HTML on the page into other HTML as the page loads, you can use querySelectorAll, the jQuery way.
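For example, a small sketch of that load-time approach (the `<nav-links>` placeholder element and its link list are made up here):

```js
// Minimal sketch of the "transform markup at page load" approach with querySelectorAll.
document.addEventListener("DOMContentLoaded", () => {
  const links = [["Home", "/"], ["Posts", "/posts/"], ["About", "/about.html"]];
  for (const placeholder of document.querySelectorAll("nav-links")) {
    const nav = document.createElement("nav");
    for (const [label, href] of links) {
      const a = document.createElement("a");
      a.href = href;
      a.textContent = label;
      nav.append(a);
    }
    placeholder.replaceWith(nav); // swap the placeholder for real HTML
  }
});
```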
This is patently false. It is much better for security if you use one of the many memory-safe implementations of it. This is like saying “SSL is insecure because I use an implementation with bugs”. No, the technology is fine. It's your buggy implementation that's the problem.
XSLT used as a pre-processor is obviously also a fundamentally better model for security because... it's used as a preprocessor. It cannot spy on you and exfiltrate information after page load because it's not running anymore (so you can't do voyeuristic stuff like capture user mouse movements or watch where they scroll on the page). It also doesn't really have the massive surface Javascript does for extracting information from the user's computer. It wasn't designed for that; it was designed to transform documents.
This is one of the simplest ways to do things. With JavaScript you what? Call methods?
CreateElement("def").setAttribute("def", "jkl")
There are a ton of "template engines" (all strictly worse than XSLT); why do people keep writing them? Why did people invent JSX, with all its complicated machinery, if plain JavaScript is better?
Because you do not want to create web pages, but to render some information in the form of web pages. And as you write that information you make distinctions unique to (a) this information and (b) your approach to it. And one of the best ways to do this is to come up with a custom set of XML tags. You write about chess? Fine: invent tags to describe games, positions and moves. Or maybe a tutorial on Esperanto? Fine: invent a notation to highlight the lexical structure and the grammar. You can be as detailed as you want, and at the same time you can ignore anything you do not care about.
And then you want to merely render this semantically rich document into HTML. This is where XSLT comes in.
> Why would adding a fragile and little-used technology like XSLT help?
A few years ago I bought a bunch of Skylanders for practically nothing when the toys to life fad faded away. To keep track of everything I made a quick and dirty XSLT script that sorted and organized the list of figures and formatted each one based on their 'element'. That would have been murderous to do in plain HTML and CSS: https://wyrm.org/inventory/skylanders.xml
It would have been murderous with just CSS, but it would have been trivial to do with JS, much easier than the hundreds of lines of XSL you wrote. https://wyrm.org/inventory/skylanders.xsl
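As a rough sketch of what that could look like, assuming the figures were available as plain JavaScript objects (the data and colours below are invented):

```js
// Hypothetical sketch: build one table per game from a list of figure objects.
const figures = [
  { name: "Spyro",      game: "Spyro's Adventure", element: "magic" },
  { name: "Gill Grunt", game: "Spyro's Adventure", element: "water" },
];
const colors = { magic: "#a06", water: "#08f" }; // one colour per element

function tableFor(game) {
  const rows = figures
    .filter((f) => f.game === game)                 // figures that match the game
    .sort((a, b) => a.name.localeCompare(b.name))   // alphabetize
    .map((f) => `<tr style="background:${colors[f.element]}"><td>${f.name}</td></tr>`)
    .join("");
  return `<table>${rows}</table>`;
}

document.body.insertAdjacentHTML("beforeend", tableFor("Spyro's Adventure"));
```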
Maybe! How much Javascript would I have to learn before I could come up with a 'trivial' solution?
> the hundreds of lines of XSL you wrote.
Those hundreds of lines are the same copy/pasted if statement with 5 different conditions. For each game, I create a table by: alphabetizing the XML > going through the list searching for figures that match the game > each time I find one go through the color list to find the color to use for the table row. There are 10 color choices per game, which means that I repeated a 10-choice if statement 5 times.
There's nothing difficult here, it's just verbose.
Aside: you could factor that out by making an <xsl:template match="figure"> that does the choose on element, then your repeated code can just expand that template. The immense power in XSLT comes from XPath making it easy to match on things like "all figures that contain a <name series='1'/>". Put that template in a stylesheet (say, refactor.xsl), then run `xsltproc refactor.xsl skylanders.xml > skylanders-refactored.xml`.
As I've said elsewhere, I like XSL for its beginner-approachability, so not doing a bunch of factoring is fine, but I also like it for its power: such factoring into simple templates is possible once you wrap your head around the idea (as with CSS). Using for-each or choose should be a sign you're doing it wrong. Ideally if you did your data model well, you just do simple template expansions everywhere.
> Using for-each or choose should be a sign you're doing it wrong.
I wouldn't say that I did it wrong, I just didn't do it efficiently. And I knew that at the time.
I appreciate the work, but I've said it elsewhere: I'm not a programmer. This was something I spent a couple of afternoons on five years ago and never looked at again after getting the results I wanted.
Sorry, I communicated poorly there, and was kind of replying more to the other commenter. I actually meant to reinforce your point that you didn't need some complex hundreds-of-lines-of-code template (and that what you had wasn't complex), and if you wanted to put in that effort, you could've condensed it here too to just 2 small templates.
The thing about doing it wrong was meant as a reply to the comment upthread about for-each etc. being necessary. For something like you have, they're absolutely not. It's fine if that was the easiest way for you to do it though. My whole point was that I've always seen XSLT as much more of an approachable, enabling technology than modern JS approaches.
I'm a web components guy myself, but that's not the equivalent JavaScript functionality at all, as XSLT doesn't even have components.
XSLT is a functional transform language. The equivalent JavaScript would be something like registry of pure functions of Node -> Node and associated selectors and a TreeWalker that walks the XML document, invokes matching functions, and emits the result into a new document.
Or you could consume the XML as data into a set of React functions.
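A rough sketch of the registry-plus-TreeWalker idea (the rules and sample document are made up):

```js
// Hypothetical sketch of a registry of selector -> render functions applied
// with a TreeWalker, emitting the results into a new fragment.
function transform(xmlDoc, rules) {
  const out = document.createDocumentFragment();
  const walker = xmlDoc.createTreeWalker(xmlDoc.documentElement, NodeFilter.SHOW_ELEMENT);
  for (let node = walker.currentNode; node; node = walker.nextNode()) {
    const rule = rules.find((r) => node.matches(r.selector)); // first matching rule wins
    if (rule) out.append(rule.render(node));
  }
  return out;
}

const doc = new DOMParser().parseFromString(
  "<figures><figure element='magic'>Spyro</figure></figures>", "application/xml");
document.body.append(transform(doc, [
  { selector: "figure",
    render: (n) => Object.assign(document.createElement("li"),
      { textContent: `${n.textContent} (${n.getAttribute("element")})` }) },
]));
```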
JavaScript is ever evolving and it means you need to stick to one of the two browsers (WebKit or Firefox) and keep upgrading. XSLT hasn't changed in years. It's an actual standard instead of an evolving one.
I know that other independent browsers that I used to use back in the day just gave up because the pace of divergence pushed by the major implementations meant that it wasn't feasible to keep up independently.
You are confusing backwards and forwards compatibility. Those sites may have added features that your iPad does not support, which is why it broke; if they had not added those, it might still work.
However, JS is not 100% backwards compatible either. It is largely backwards compatible in most cases, but there are rare cases of bug fixes or deprecated APIs being removed that break old code; and even then it's not really JS itself, it's more the web/engine standards.
JS is backwards compatible: new engines support code using old features.
JS is not forward compatible: old engines don't support code using new features.
Regarding your iPad woes, the problem is not the engine but websites breaking compat with it.
The distinction matters, as it means that once a website is published it will keep working. Usually the only way to break an existing website is to publish a new version of it. The XSLT situation is noteworthy because it's an exception to this rule.
But it is quite interesting and especially learning about the security problems of the document() function (described @ 19:40-25:38) made me feel more convinced that removing XSLT is a good decision.
How's the support for the first option? If that reliably works, I'm gonna stop being angry, as it solved all my needs (and would allow polyfilling xslt anyway)
On second thought, that wouldn't allow me to modify the DOM before it's being parsed, I'd have to wipe the DOM and polyfill the entire page load, right?
When you thought to yourself, "I think therefore I am," in what language did you think it? In English? The English language is an artifact of a community of English speakers. You can't have a language with grammatical rules without a community of speakers to make that language.
Almost nobody in the English-speaking community has direct access to the internals of your mind. The community learns things through consensus, e.g. via the scientific method. We know things in English via a community of English-speaking scientists, journalists, historians, etc. Wittgenstein calls these "forms of life": the ordinary day-to-day work we do to figure out what's true and false, likely and unlikely.
As you're probably aware, the scientific method has long struggled to find a "mind" in the brain doing the thinking; all we can find are just atoms, molecules, neurons, doing things, having behaviors. We can't find "thoughts" in the atoms. As far as our ordinary day-to-day scientific method is concerned, we can't find a "mind."
But "cogito ergo sum" isn't part of the scientific method. We don't believe "cogito ergo sum" because reproducible experiments have shown it to be true. "Cogito ergo sum" proposes a way of knowing disconnected from the messy structures of life we use in English.
So, perhaps you'd say, "oh, good point, I suppose I didn't think 'cogito ergo sum' in English or Latin or whatever, I thought it in a private language known only to me. From this vantage point, I only have direct knowledge of my own existence and my own perceptions in the present moment (since the past is uncertain), but at least I can have 100% certainty of my own existence in that language."
The problem is, you really can't have a private language, not a language with words (terms) and grammatical rules and logical inferences.
Suppose you assigned a term S to a particular sensation you're having right now. What are the rules of S? What is S and what is not S? Are there any rules for how to use S? How would you know? How would you enforce those rules over time? In a private language, there's no difference between using the term S "correctly" or "incorrectly." There are no rules in a private language; there can't be. Even mathematical proofs are impossible when every term in the proof means anything you want.
Descartes didn't originally write "cogito ergo sum" in Latin. He originally published it in French, "je pense, donc je suis." But in Europe, where Descartes was writing, Latin was the universal language, the one known to all sorts of people across the continent. For Descartes, Latin was the language of empire, the language every civilized person knew because their ancestors were forced to learn it at the point of a sword, the language of absolutes.
Wittgenstein has a famous line, "Whereof one cannot speak, thereof one must be silent." So must we be silent about "cogito ergo sum." "cogito ergo sum" isn't valid in Latin; "je pense, donc je suis" isn't valid in French. It could only be valid in an unspeakable private language, a language with no grammatical rules, no logic, where true and false are indistinguishable. "Cogito ergo sum" could only be valid in an unusable language where everything is meaningless.
That's a lot of words to claim that language has to exist before thought can, which gets disproved in an instant when your audience points to the large amount of fauna on earth that has no language and yet displays thought.
That's not what I'm arguing. The argument is that "cogito ergo sum" is invalid, which is part of an argument against the existence of a "mind" above and beyond what the brain does in a living body. The atoms are all there is.
I don't think I have a "mind" above and beyond my body, and I don't think you do, either. Animals can remember stuff, solve puzzles, and express pain, just like you or I do. We do all that with our brains, not with our "minds."
The problem with making universal assertions as opposed to existential assertions is that a single counterexample is all that is necessary to prove the assertion is incorrect or wrong.
> That's not what I'm arguing.
Okay; your argument is difficult to digest because, unlike most philosophy arguments, you neither lead nor end with the actual thesis; you present a book-length text as support for a thesis that is never stated.
> The argument is that "cogito ergo sum" is invalid, which is part of an argument against the existence of a "mind" above and beyond what the brain does in a living body. The atoms are all there is.
What's your thesis, then? "Cogito ergo sum is invalid" is hardly a thesis. Maybe you are asserting that there is no "mind" above and beyond the living brain, which will be a universal claim not an existential one.
If that is indeed your claim, then it's not a testable/falsifiable one anyway; you are going to require instead a sequence of premises that are each accepted by the audience you wish to sway, with intermediate conclusions that are likewise accepted by the audience, before you present your final conclusion based exclusively on the premises list.
A narrative is not a good way to present a philosophical argument, especially when it is a counter argument to an argument that was presented (even if only verbally at the time) in the standard logical format I described.
A better way to convince someone that a formally presented piece of logic (as cogito ergo sum was) is invalid (or unsound) is to attack its premises. It is not normal to ignore the premises of the original argument and present premises of your own.
(PS. It's been a long time since I was in a formal logic philosophy class and maybe things have changed, but they haven't (I hope!) changed so much that logic is completely thrown out the window in favour of narrative)
Language, and especially its mechanics like grammar, are entirely a distraction w.r.t. "cogito ergo sum". The underlying argument it points to is language-independent.
Correct. Here is the stub of a reply I can't be assed to finish right now:
Words and language refer to sensations (P.I. §244: "How do words refer to sensations?").
Sensations can exist independently of language to refer to them (P.I. §256: "—But suppose I didn’t have any natural expression for the sensation, but only had the sensation?").
Thus it can be possible for one to experience the cogito, the mere act of awareness, independently of language. The point of the cogito is its self-evidence, prior to language even entering the picture as a sign standing for or referring to the self-evident sensation of conscious awareness.
I note that you keep saying "cogito" without the "ergo."
"I think therefore I am" is invalid, and what's wrong with it is the "therefore," the idea that you knew one thing, and you drew a "logical" conclusion from it, in a "prior to language" environment where words have no meaning, where "true" and "false" are indistinguishable, and logic is impossible.
Logic requires words. "Logical" means "verbal," from the Greek logos (λόγος). You can't have a logical argument (you can't draw a conclusion) from the instantaneous standpoint of someone "experiencing" cogito, where words mean whatever you want, or nothing at all.
The experience you're having is not a logical argument. As a sentence, "cogito ergo sum" is invalidated as soon as you write it down in a shared language.
I'm sure it feels right to you! But you can't actually say anything true about it in English, or Latin, or any other shared language.
For, on the one hand, there is the real world, and on the other, a whole system of symbols about that world which we have in our minds. These are very very useful symbols; all civilization depends on them; but like all good things they have their disadvantages, and the principle disadvantage of symbols is that we confuse them with reality, just as we confuse money with actual wealth; and our names about ourselves, our ideas of ourselves, our images of ourselves, *with* ourselves.

Now of course, reality, from a philosopher's point of view, is a dangerous word. A philosopher will ask me, what do I mean by reality? Am I talking about the physical world of nature, or am I talking about a spiritual world, or what?

And to that I have a very simple answer. When we talk about the material world, that is actually a philosophical concept - so in the same way, if I say that reality is spiritual, that's also a philosophical concept - and reality itself is not a concept.

Reality is - [...]

... and we won't give it a name.
The last refuge of the Cartesian is always, "My argument is correct in an ineffable way that I couldn't possibly write down."
"Cogito ergo sum" presents itself as a self-evident deduction, the one guaranteed universally agreeable truth, but, when you investigate it a little… oh, well, it's really more of a vibe than an argument, and isn't "logical argument" really a monkey-mind distraction from the indescribable lightness of existence?
If you define "logic" as requiring words, then it's only a model of casuality, which is real irrespective of life entirely.
You're demanding that language perfectly convey an abstract argument, which is obviously unreasonable, and saying that since it can't do that we can't discuss tricky subjects at all, which if you take this line of reasoning seriously is all of them. So how about you "remain silent".
They're discontinuing the Facebook Like button on third-party sites. That's pretty wild! The Like button used to be Facebook's major initiative back in 2010. https://en.wikipedia.org/wiki/Facebook_like_button
Media sites in particular used to try to drive people to click the Like button, causing their articles to appear prominently on Facebook. And since it was an <iframe> running on every site, Facebook would automatically know what articles you viewed, data that they could use to target ads to you.
It didn't even have to be someone with an FB profile; they surely had shadow profiles of a lot of Internet users (when Zuck appeared before Congress he dodged a question about this).
Just like Spotify's or Netflix's recommendation engine, that's a gold mine for ads.
These used to be on websites, but websites themselves have for the most part stopped using them. I don't find a "Click to like on Facebook" button on every blog post much any more these days.
I did use to like reading HPE spec sheets for equipment, and there was always a "Click to like on Facebook" button at the bottom. I'm not going to broadcast to my friends that I like eight different RAID card models, thanks.
My guess? Implicit behaviours are always more valuable than explicit behaviours, so when implicit engagement became measurable and became the metric, Facebook didn't need people to deliberately engage with content to determine its value.
Also:
* In that time we moved from desktop to mobile; clicking that tiny button was difficult on mobile and they couldn't replace it with a comparable mechanic
* Changes to third-party cookie and XHR rules in browsers made the data less reliable
Blame Apple and Mozilla, too, then. They all agreed to remove it.
They all agreed because XSLT is extremely unpopular and worse than JS in every way. Performance/bloat? Worse. Security? MUCH worse. Language design? Unimaginably worse.
EDIT: I wrote thousands of lines of XSLT circa 2005. I'm grateful that I'll never do that again.
This is only repeated by people who have never used it.
XSLT is still a great way of easily transforming xml-like documents. It's orders of magnitude more concise than transforming using Javascript or other general programming languages. And people are actively re-inventing XSLT for JSON (see `jq`).
I used to use XSLT a lot, though it was a while ago.
You can use Javascript to get the same effect and, indeed, write your transforms in much the same style as XSLT. Javascript has xpath (still). You have a choice of template language but JSX is common and convenient. A function for applying XSLT-style matching rules for an XSLT push style of transform is only a few lines of code.
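For illustration, a rough sketch of such a matching-rule helper using document.evaluate, the built-in XPath API (the rule shape and sample feed are invented):

```js
// Hypothetical sketch: apply { match: xpath, apply: fn } rules to an XML document.
function applyTemplates(xmlDoc, rules) {
  for (const { match, apply } of rules) {
    const hits = xmlDoc.evaluate(match, xmlDoc, null,
      XPathResult.ORDERED_NODE_SNAPSHOT_TYPE, null);
    for (let i = 0; i < hits.snapshotLength; i++) apply(hits.snapshotItem(i));
  }
}

const feed = new DOMParser().parseFromString(
  "<rss><channel><item><title>Hello</title></item></channel></rss>", "application/xml");
applyTemplates(feed, [
  { match: "//item/title", apply: (n) => console.log("*", n.textContent) },
]);
```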
Do you have a particular example where you think Javascript might be more verbose than XSLT?
Who is transforming XML documents on the web? Most people produce HTML to begin with, so XSLT is a solution seeking a problem. If you really insist, you could just use XSLT via server side rendering.
I actually do have to work with raw XML and XSLTs every once in a while for a java-based CMS and holy hell, it's nasty.
Java in general... Maven: trying to implement things that are extremely simple in Gradle (e.g. only executing a specific task as part of the pipeline when certain conditions are met) is an utter headache in the pom.xml, because XML is not a programming language!
It is an unfortunate fact about our industry that all build tools suck. Tell me what your favorite build tool is and I can point at hundreds of HN threads ripping it to shreds. Maybe it's NPM? Cue the screams...
I agree though, "XML is not a programming language" and attempts to use it that way have produced poor results. You should have seen the `ant` era! But this is broader than XML - look at pretty much every popular CI system for "YAML is not a programming language".
That doesn't mean that XML isn't useful. Just not as a programming language.
But, that's what XSL is! XSL is a Turing-complete programming language in XML for processing XML documents. Being in XML is a big part of what makes XSL so awful to write.
XSL may be Turing-complete but it's not a programming language and wasn't intended to be one. It's a declarative way to transform XML. When used as such I never found it awful to write... it's certainly much easier than doing the equivalent in general purpose programming languages.
Maybe by analogy: There are type systems that are Turing complete. People sometimes abuse them to humorous effect to write whole programs (famously, C++ templates). That doesn't mean that type systems are bad.
XSL is a functional programming language, not a declarative language. When you xsl:apply-template, you're calling a function.
Functional programming languages can often feel declarative. When XSL is doing trivial, functional transformations, when you keep your hands off of xsl:for-each, XSL feels declarative, and doesn't feel that bad.
The problem is: no clean API is perfectly shaped for UI, so you always wind up having to do arbitrary, non-trivial transformations with tricky uses of for-each to make the output HTML satisfy user requirements.
XSL's "escape hatch" is to allow arbitrary Turing-complete transformations. This was always intended to exist, to make easy transformations easy and hard transformations possible.
You basically never need to write Turing-complete code in a type system, but in any meaningful XSL project you will absolutely need to write Turing-complete XSL.
XSL's escape hatch is always needed, but it's absolutely terrible, especially compared to JS, especially compared to modern frameworks. This is why JS remained popular, but XSL dwindled.
> It is an unfortunate fact about our industry that all build tools suck. Tell me what your favorite build tool is and I can point at hundreds of HN threads ripping it to shreds. Maybe it's NPM? Cue the screams...
npm isn't even a build tool, it's a package manager and at that it's actually gotten quite decent - the fact that the JS ecosystem at large doesn't give a fuck about respecting semantic versioning or keeps reinventing the wheel or that NodeJS / JavaScript itself lacks a decent standard library aren't faults of npm ;)
Maven and Gradle in contrast are one-stop-shops, both build orchestrators and dependency managers. As for ant, oh hell yes I'm aware of that. The most horrid build system I encountered in my decade worth of tenure as "the guy who can figure out pretty much any nuclear submarine project (aka, only surfaces every few years after everyone working on it departed)" involved Gradle, which then orchestrated Maven and Ant, oh and the project was built on a Jenkins that was half DSL, half clicked together in the web UI, and the runner that executed the builds was a manually set up, "organically grown" server. That one was a holy damn mess to understand, unwind, clean up and migrate to Gitlab.
> look at pretty much every popular CI system for "YAML is not a programming language".
Oh yes... I only had the misfortune of having to code for Github Actions once in my life time, it's utter fucking madness compared to GitLab.
Comparing a single-purpose declarative language that is not even really Turing-complete with all the ugly hacks needed to make DOM/JS reasonably secure does not make any sense.
What exactly can you abuse in XSLT (without non-standard extensions) in order to do anything security-relevant? (DoS by infinite recursion or memory exhaustion does not count; you can do the same in JS...)
> Although XSLT in web browsers has been a known attack surface for some time, there are still plenty of bugs to be found in it, when viewing it through the lens of modern vulnerability discovery techniques. In this presentation, we will talk about how we found multiple vulnerabilities in XSLT implementations across all major web browsers. We will showcase vulnerabilities that remained undiscovered for 20+ years, difficult to fix bug classes with many variants as well as instances of less well-known bug classes that break memory safety in unexpected ways. We will show a working exploit against at least one web browser using these bugs.
They are about libxslt, but Mason Freed doesn’t want you to know that. They could contribute to a Rust project which has already implemented XSLT 1.0, thus matching the browsers. But that would be good software engineering, and logical.
> XSLT is extremely unpopular and worse than JS in every way
This isn't a quorum of folks torpedoing a proposed standard. This is a decades-old stable spec and an established part of the Web platform, and welching on their end of the deal will break things, contra "Don't break the Web".
Oh, that's the operative part? Accept my apologies. What I meant to say is, "I can see that you're deeply, deeply concerned about being able to continue beating your wife. I think you should reconsider your position on this matter."
No question mark, see? So I should be good now.
> Am I missing something here?
Probably not. People who engage in the sort of underhandedness linked to above generally don't do it without knowing that they're doing it. They're not missing anything. It's deliberate.
So, too, I would guess, is the case with you—particularly since your current reply is now employing another familiar underhanded rhetorical move. Familiar because I already called it out within the same comment section:
> The problem is that there is content that works today that will break after the Chrome team follows through on their announced plans of shirking on their responsibility to not break the Web. That's what the problem is. Any "solution" that involves people having to go around un-breaking things that the web browser broke is not a solution to the problem that the Chrome team's actions call for people to go around un-breaking things that the web browser broke.
I seem to have personally offended you, and for that I am sorry.
This seems personal to you so I'll bow out of further discourse on the subject as it is not particularly personal to me. The websites I maintain use a framework to build RSS output, and the framework will be modified to do server-side translation or polyfill as needed to provide a proper HTML display experience for end-users who want that.
They did not agree to remove it. This is a spun lie from the public posts I can see. They agreed to explore removing it but preferred to keep it for good reasons.
Only Google is pushing forward and twisting that message.
> They did not agree to remove it. This is a spun lie from the public posts I can see. They agreed to explore removing it but preferred to keep it for good reasons.
Mozilla:
> Our position is that it would be good for the long-term health of the web platform and good for user security to remove XSLT, and we support Chromium's effort to find out if it would be web compatible to remove support.
> WebKit is cautiously supportive. We'd probably wait for one implementation to fully remove support, though if there's a known list of origins that participate in a reverse origin trial we could perhaps participate sooner.
So you’re choosing to help them spin the lie by cherry picking comments.
The Mozilla comment itself ends with:
> If it turns out that it's not possible to remove support, then we think browsers should make an effort to improve the fundamental security properties of XSLT even at the cost of performance.
> If it turns out not to be possible to remove the feature, we’d like to replace our current implementation. The main requirements would be compatibility with existing web content, addressing memory safety security issues, and not regressing performance on non-XSLT content. We’ve seen some interest in sandboxing libxslt, and if something with that shape satisfied our normal production requirements we would ship it.
But the only way it’s possible to remove the feature is if you ignore everyone asking you to please not remove it.
Therefore, by totally ignoring pushback, you can twist the Mozilla reps’ words to mean the only option is to remove it.
Similarly with the Webkit comment:
> WebKit is cautiously supportive.
Both these orgs requested investigation, not removal. Both expressed some concern and caution. Google did not; they only ever pushed forward with removing it, even going so far as to ignore the follow-up request to implement XSLT 3.0.
No, it’s not blatantly untrue. It’s unblatantly misleading.
Furthermore I’d say for those specific comments, “go ahead and remove it”, the inverse is blatantly untrue.
If somebody says “our position is A but if that’s not possible we should do B”, it means they prefer A. It doesn’t mean they prefer B, and telling people that they prefer B when you know otherwise is dishonest.
The comment isn’t “our position is A” the comment is “our position is A if B,C,D aren’t possible for compatibility reasons”. Aka “if we must remove it then fine, else we would like to improve it”.
Google then side stepped all the compatibility concerns and notions to improve it and arguments against removing it so they could only address A.
Go ahead and argue for all the word twisting these bad actors have done. You won’t change my mind; this is an obvious attack on the open web by people subservient to the ad tech industry. Prove to me it’s not, when all the browsers depend on the platform for ad money.
They have installed people like Mason Freed into these positions, who are incapable of reason, to carry this objective forward.
> 1. Proposal: The parent component proposes a size/constraints to the child
> 2. Measurement: The child component picks its own size based on those constraints
> 3. Placement: The parent component then positions the child in its coordinate space
>
> It's all done in a single pass. It scales well, and it fits naturally with React-style reactive UI frameworks. (SwiftUI and Jetpack Compose are both heavily inspired by React.)
This is what CSS needs.