tmpfs's comments

I disagree, I think most of these opinions I see about CSS are from people that haven't taken the time to learn it and particularly to understand the cascade.

Many years ago I did a very deep dive into the CSS specs as I was researching for a new implementation and it struck me as well designed for its purpose of separating style from the semantics of markup.


IMO the cascade is the exact problem with CSS. It worked fine when all it was dealing with was super simple documents with a few rules here and there, but as soon as we started making applications, the cascade caused endless headaches, and most modern methods of handling CSS, like modules or libraries like Tailwind, exist specifically to avoid cascade/specificity issues. Conceptually CSS is not really all that complicated, but in practice, when you have dozens or even hundreds of CSS files for an entire app, it becomes a herculean nightmare to deal with and to wrap your head around.


As someone who works with many dozens of CSS files each day, I can say the problem isn't CSS. It's devs who are too lazy to document their work.

If everyone is working from the same spec/reference, it's fine, and you get consistent, reliable results.

When devs have to stumble around in the dark and end up reinventing the wheel every few months, that's when things go badly.


> devs who are too lazy to document their work

That's as bad a complaint as the one about cascading.

Your rules should be close to the object that uses them. It's really bad that CSS only supports global rules, and that is not a fault of the developers writing those rules.


When the !important comes out it's all over


It has been my experience that those who complained the loudest about CSS expected to know it without actually studying or learning it. They already know a "real" language, so why should they have to study a "toy" styling language?

When it comes down to it, making a great looking and maintainable page is just as much work and planning as building a good backend codebase. Neither one just happens.


This problem is compounded by CSS kind of overlaying HTML in a way.

Lots of "real" devs treat HTML with similar "I don't need to really learn this toy markup" kind of attitude. The worst CSS issues I've ever had to deal with were often caused by horrible markup that was impossible to consistently style.

Basic stuff like how to make a good `<form>`. Putting `<label>` elements next to your `<input>` elements, or making sure the `for` and `id` attributes are set. Hell, even using `<label>` instead of some `<span>` they threw a bunch of random framework classes on.


I think developers nowadays being coerced into being a jack of all trades, master of none contributes to CSS being the least studied and practiced by full stack devs. Such devs would also be much more inclined to put their time into, for example, learning and practicing databases rather than improving their skills regarding UIs, and for webdev that means skimping on CSS while just doing the bare minimum, mediocre thing to get it working.


It is certainly true for me that I didn't take the time to learn the intricacies of CSS behavior, and I still think the cascading properties are the worst feature, namely concepts like specificity. It is hard to keep a mental model of the styling rules, and at some point it just degrades to trial and error.

Nobody has come up with a better alternative though (apart from the many dialects that transpile to CSS again).


In 2025 when starting from scratch all you really need is flexbox, css grid, and the box model and that will get you like 99% of the way on most layouts.

Specificity is really just:

* ids take priority
* then the highest number of specified classes, with ties broken by whichever rule comes later in the stylesheet
* then the highest number of type selectors

The only problem is that most paths in frontend development say “slap a class on an element” and call it a day, but you do need to be intentional about it and only specify what you need.
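That priority order amounts to comparing (ids, classes, types) tuples left to right. A toy sketch of the idea (a hypothetical helper, not how any engine actually parses selectors; it only handles simplified selectors and ignores attribute selectors, pseudo-classes, inline styles, and !important):

```python
import re

def specificity(selector):
    """Return an (ids, classes, types) tuple for a simplified selector.

    Toy model only: real CSS specificity also counts attribute selectors
    and pseudo-classes, and inline styles / !important sit outside it.
    """
    ids = len(re.findall(r"#[\w-]+", selector))
    classes = len(re.findall(r"\.[\w-]+", selector))
    # Bare element names: a letter at the start or after whitespace/'>'.
    types = len(re.findall(r"(?:^|[\s>])([A-Za-z][\w-]*)", selector))
    return (ids, classes, types)

# Tuples compare left to right, so one id beats any number of classes:
print(specificity("#nav .item"))     # (1, 1, 0)
print(specificity(".menu .item a"))  # (0, 2, 1)
print(specificity("#nav .item") > specificity(".menu .item a"))  # True
```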


What about CSS layers[0] doesn't solve your specificity issues?

[0]: https://developer.mozilla.org/en-US/docs/Web/CSS/@layer


Video games became a lot better when developers stopped saying, "You just haven't taken the time to learn it and don't understand it", and started saying, "That's my fault for designing it that way. Let me redesign it so that it's accessible to everyone in my target audience". I wish programming languages would be humble enough to do the same (although many are!).


The problem here is that it wouldn't matter if you made a better language for styling web pages, because you have no vector to get that language included in browsers; and even if you did it would take years for it to get enough market penetration for you to rely on it as something you can deliver an app in. So, you'd end up with some horrendous compatibility layer that compiles everything down to nasty CSS anyway.

Same problem with Javascript. Here we are compiling Typescript down into it... and running JS on the server with Node in order to avoid the classic impedance mismatch we all had back in the day writing backends in PHP etc.

Feels like the real solution would be to allow browsers to become more flexible with language implementations instead so we can make more progress. WASM might be the ticket there?


All that complexity is a trap. It creates a need for people that have spent the unreasonable time to grasp a poor design, and then they have the secrets that others simply do not have the time to waste learning yet another poorly designed reindeer game. That complexity eats time, energy and is pointless. CSS is a shit show of poor design, poor documentation, and secrets.


What secrets are you talking about?


All the edge cases.


Do you have any concrete examples?


How's about the very existence of different layout engines, which can all be in-use at once on the same page?

So, what is your proposal?


Pointing out another's failures does not place me on the hook to provide a replacement.

Throw away the front-end of the Web and start over with a system actually designed to facilitate graphical applications instead of delivering styled documents.

HTTP can stay, and HTML/CSS can stay just like PDFs for delivering a document, but when it comes to UI components, we should be able to have things as fast and performant as e.g. RedLang / Processing / Enlightenment DR17 / etc without every developer having to shovel megabytes of shim-ware down to the client.


That's a good solution in a perfect world. But that's magical thinking. What's a real solution?


We have a plethora of native frameworks for building UIs. Some of them are quite well designed. And yet the Web front-end has won against all the UI frameworks designed specifically to build UIs.


That’s partly due to reach, though. The combination of the browser coupled with Electron eating away at those native UI layers means it unfortunately makes little sense to bother with alternatives, even if the alternatives are better.

I.e., the GP is trying to argue one thing and you’re kind of going down a different tangent.


Hey, I went to take a look at Obscura and I like the ideas but I can't find the source code.

You are making some bold claims but without the source I can't verify those claims.

Any plans to open-source it?


We should link it in more places, apologies!

Here it is: https://github.com/Sovereign-Engineering/obscuravpn-client


This is nonsense. By your logic me and the majority of people using Signal are criminals.

As the other commenter mentioned please provide proof for these hyperbolic claims.


>By your logic me and the majority of people using Signal are criminals

False. "The majority of X are Y" does not imply that any particular X is Y.

I don't have data for Signal; I use it extensively. Setting aside that the American legal system makes everyone a criminal several times a day so that the laws can be selectively enforced against anyone who becomes a target, I have no data on whether the majority of Signal users are criminals. But given that criminals have a significantly higher interest in secure communications than the general population, it wouldn't shock me if evidence came out that it was the case.


How long before UK VPN users are criminals?

How long before US states that are enforcing online ID laws will be doing the same?


maybe we need a law against selective enforcement of laws. together with the comprehensive statistics collection agency that would be required to enforce it.


I have used this as well as many of the other lower-level db drivers (which don't check your SQL at compile time) and I can say I much prefer the latter.

My issues with SQLx when I first tried it were that it was really awkward (nigh impossible) to abstract away the underlying DB backend. I expect those issues are fixed now, but for some simple apps it's nice to be able to start with SQLite and then swap in Postgres.

Then I wanted to dockerize an SQLx app at one point, and it all became a hassle: you need Postgres running at compile time, and trying to integrate that with docker compose was a real chore.

Now I don't use SQLx at all. I recommend other libraries like sqlite[1] or postgres[2] instead.

SQLx is a nice idea but too cumbersome in my experience.

[1]: https://docs.rs/sqlite/latest/sqlite/

[2]: https://docs.rs/postgres/latest/postgres/


I have no experience with abstracting away the backend, but Dockerizing is actually pretty easy now - there's an offline mode[1] where you can have sqlx generate some files which let it work when there's no DB running.

[1]: https://docs.rs/sqlx/latest/sqlx/macro.query.html#offline-mo...


It's definitely not perfect, but I think both of those issues are better now, if not fully solved.

For needing a DB at compile time, there's an option to have it produce artefacts on demand that replace the DB, although you'll need to connect to a DB again each time your queries change. Even that is all optional though, if you want it to compile time check your queries.


I know it's annoying (and apparently there is a solution for generating the required files before the build), but in these kinds of situations Go and Rust are great for doing a static build on the system and then copying into a scratch image.

Versus Python and Node often needing to properly link with the system they'll actually be running in.


Why would you want to abstract away the underlying database? Wouldn't it be better to already use the target DB to catch potential issues earlier? Also to avoid creating another layer of indirection, potentially complicating the codebase and reducing performance?


Primarily for libraries and deployment environments that aren't fully in your control which is still pretty common once you get to B2B interactions, SaaS is not something you can easily sell to certain environments. Depending on the assurance you need, you might even need to mock out the database entirely to test certain classes of database errors being recoverable or fail in a consistent state.

Even in SaaS systems, once you get large enough with a large enough test suite you'll be wanting to tier those tests starting with a lowest common denominator (sqlite) that doesn't incur network latency before getting into the serious integration tests.


> Wouldn't it be better to already use the target DB to catch potential issues earlier?

The target DB can change as a project goes from something mildly fun to tinker with to something you think might actually be useful.

Also I personally find that SQLite is just nice to work with. No containers or extra programs, it just does what you ask it to, when you ask it to


Thanks, interesting experience - so much depends on getting developer ergonomics right. There is something to be said for checking the SQL at compile-time, though - esp. if trying to ORM to a typesafe language.


How long ago did you try SQLx? Not necessarily promoting SQLx, but `query_as`, which lets one make queries without the live-database macro, has been around for 5 years [1].

For lower-level libraries there is also the more widely downloaded SQLite library, rusqlite [2], whose author also maintains libsqlite3-sys, which is what the sqlite library wraps.

The most pleasant ORM experience, when you want one, IMO is the SeaQl ecosystem [3] (which also has a nice migrations library), since it uses derive macros. Even with an ORM I don't try to make databases swappable via the ORM so I can support database-specific enhancements.

The most Rust-like in an idealist sense is Diesel, but its well-defined path is to use a live database to generate Rust code that uses macros to define the schema types, which are then used for type/member checking of the row structs. If the auto-detect does not work, then you have to use its patch_file system, which can't be maintained automatically through Cargo alone [4] (I wrote a Makefile scheme for myself). You will most likely have to use the patch_file if you want to use chrono::DateTime<chrono::Utc> for timestamps with time zones, e.g., Timestamp -> Timestamptz for postgres. And if you do anything advanced like multiple schemas, you may be out of luck [5]. It may also not be the best library for you if you want large denormalized tables [6], because of compile times, and because a database that is not normalized [7] is considered an anti-pattern by the project.

If you are just starting out with Rust, I'd recommend checking out SeaQl. And then if you can benchmark that you need faster performance, swap out for one of the lower level libraries for the affected methods/services.

[1] https://github.com/launchbadge/sqlx/commit/47f3d77e599043bc2...

[2] https://crates.io/crates/rusqlite

[3] https://www.sea-ql.org/SeaORM/

[4] https://github.com/diesel-rs/diesel/issues/2078

[5] https://github.com/diesel-rs/diesel/issues/1728

[6] https://github.com/diesel-rs/diesel/discussions/4160

[7] https://en.wikipedia.org/wiki/Database_normalization


Interesting, as I was researching this recently and was certainly not impressed with the quality of the Readability implementations in various languages. Although Readability.js was clearly the best, it being Javascript didn't suit my project.

In the end I found the python trifatura library to extract the best quality content with accurate meta data.

You might want to compare your implementation to trifatura to see if there is room for improvement.


> ...it being Javascript didn't suit my project.

If you're using Go, I maintain Go ports of Readability[0] and Trafilatura[1]. They're actively maintained, and for Trafilatura, the extraction performance is comparable to the Python version.

[0]: https://github.com/go-shiori/go-readability

[1]: https://github.com/markusmobius/go-trafilatura


We've been active users of go-trafilatura and love it


this is what i came here to see, thanks!


reference to the library: https://trafilatura.readthedocs.io/en/latest/

for the curious: Trafilatura means "extrusion" in Italian.

> This method creates a porous surface that distinguishes pasta trafilata for its extraordinary way of holding the sauce.

search maccheroni trafilati vs maccheroni lisci :)

(btw I think you meant trafilatura not trifatura)


Been using it since day one but development has stalled quite a bit since 2.0.0.


It's a bit old, but I benchmarked a number of the web extraction tools years ago (https://github.com/Nootka-io/wee-benchmarking-tool); resiliparse-plain was my clear winner at the time.


This is very interesting. Are there any examples of interacting with LLMs? If the queries are compiled and loaded into the database ahead of time, the pattern of asking an LLM to generate a query from a natural-language request seems difficult, because current LLMs aren't going to know your query language yet, and compiling each query for each prompt would add unnecessary overhead.


This is definitely a problem we want to fix quickly. We're currently planning an MCP tool that can traverse the graph and decide for itself at each step where to go next, as opposed to having to generate actual written text queries.

I mentioned in another comment that you can provide a grammar with constrained decoding to force the LLM to generate tokens that comply with the grammar. This ensures that only valid syntactic constructs are produced.


As somebody with a wooden house and an urge to learn carpentry and spend less time programming, I think this is brilliant. Combining minimal design with a hacker and DIY ethos is a great idea. Kudos, bookmarked; hope I can find the time to tinker with the designs.


I recommend making time to build at least one piece of furniture. I did not use hyperwood principles, but I built my own computer desk and workbench to my specifications and I cannot imagine ever buying a premade work surface again. It is rewarding, helps you think about a project from both a production and use-case perspective, and unlike my programming/tech troubleshooting efforts, the results are very tangible and something I can touch and see every day, lending to a lasting sense of accomplishment.


Good article, but a minor nitpick: port zero is not strictly an invalid port, as it's often used to allow the OS to pick an available port at random.
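The bind-to-zero behaviour is easy to demonstrate; a minimal Python sketch:

```python
import socket

# Port 0 tells the OS "pick any available ephemeral port for me".
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("127.0.0.1", 0))
chosen = sock.getsockname()[1]  # the real port the OS assigned
print(chosen)                   # some nonzero port
sock.close()
```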


The problem is that the requirements can be vastly different. A collaborative editor is very different to say syncing encrypted blobs. Perhaps there is a one size fits all but I doubt it.

I've been working on sync for the latter use case for a while and CRDTs would definitely be overkill.


Really hope this project succeeds. As somebody heavily invested in a Flutter app with a Rust backend, Dioxus could be great for us, so we will continue to follow it closely.

Flutter's hot reloading is awesome, but some days I feel like working with Flutter is death by a thousand cuts (so many unresolved bugs), so to have a cross-platform framework that doesn't require Flutter and Dart would be great.

Wish the Dioxus team a lot of success!


Same! I inherited a Flutter app from a previous team, and while it's OK it feels kind of like developing on a Galapagos island — by which I mean we'll never use Dart in any other context, and Flutter's web story doesn't really work for most kinds of apps.

I think Dioxus actually fits into a fairly unique set of slots. There's Tauri, with which it shares a lot of stuff, but Tauri's web story is mostly "build it yourself". There's Leptos, which arguably has a better web app story, but lacks most of the rest.

It is also heartening to see how these projects really do share a lot of the building blocks and don't seem to be overly competitive.


What kinds of bugs are you talking about? I’ve used it for years and it’s been nothing but incredibly stable.

