
I have an issue with

    The general idea of HTMX is that your HTML will be rendered by the backend — à la Server Side Rendering.
To me this phrase makes no sense: what's the thought process behind this meaning of "render"? The only place HTML is "rendered" is in a browser (or in a user agent, if you prefer).


> what's the thought process behind this meaning of "render"?

It's another use of "render", this time relative to the server: non-HTML data (database tables, JSON, etc.) gets converted, i.e. rendered, into HTML:

https://www.google.com/search?q=SSR+server+side+rendering

Many different perspectives on "rendering":

- SSR server-side rendering : server converting data to HTML

- CSR client-side rendering : e.g. client browser fetching and converting JSON/XML into dynamic HTML

- browser-engine rendering : converting HTML to operating system windowing GUI (i.e. "painting")


"render" as in being templated with server-side data/logic.

See also Server Side Rendering (SSR) which uses the term rendering in the same way.


Django uses a render() function to convert a template to HTML and return it as an HTTP response.
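For illustration, a minimal sketch of such a view; the template name and the context values are made up:

    # Minimal Django view sketch; "greeting.html" and the context are placeholders.
    from django.shortcuts import render

    def greeting(request):
        # The template is rendered into HTML on the server,
        # and the finished HTML is returned as the HTTP response.
        return render(request, "greeting.html", {"name": "HN reader"})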

One of the many dictionary definitions of the word also appears to be to "give an interpretation or rendition of" something.


Imagine you have a markdown file.

You could "render" it to html with pandoc, then serve the html from disk by a static web server.

This would be "build time" html - it's html before it's read by the server.

Then you could setup a cgi script, php-script or an application server that converted markdown to html - then sent it to the client. This would be server-side rendering.

Finally, you could send some html+javascript to the client that fetch the raw markdown, then generates html on the client. That would be client side rendering.
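As a rough sketch of the server-side variant, assuming the third-party flask and markdown packages and an illustrative notes.md file:

    # Server-side rendering sketch: convert Markdown to HTML on each request.
    # Assumes `pip install flask markdown`; notes.md is a placeholder file.
    from pathlib import Path

    import markdown
    from flask import Flask

    app = Flask(__name__)

    @app.route("/notes")
    def notes():
        raw = Path("notes.md").read_text()
        # The conversion happens on the server, so the browser
        # only ever receives finished HTML.
        return markdown.markdown(raw)

    if __name__ == "__main__":
        app.run()

The build-time variant would run the same conversion once, ahead of time, and write the result to disk; the client-side variant would ship the raw markdown plus a javascript converter to the browser instead.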


Having seen some of those cases, I'd say it's rather because Bob in Finance doesn't want to be bothered with MFA and has raised such a stink with the CFO that IT has been ordered to disable MFA for him.


Can you share your solution for filtering incoming spam? I've had to abandon Stalwart because its spam filter is so ineffective and inconsistent.


Mind you, I have been hosting this for only about a week now - 100+ GB in total for all inboxes. Also, I removed automatic daily purging, so all spam and deleted items stay, just to be safe.

I haven't looked into spam more closely yet. At first glance, the most publicly shared email address gets around 2 spam messages per hour.

Here is a report prepared by an LLM which looked through the headers of the last 20 emails found in spam. All of them were categorized correctly; however, there were a few emails in the past few days which went to spam when they shouldn't have, but I think this is fixable.

- Critical Authentication Failures: A large number of the messages failed basic email authentication. We see many instances of SPF_FAIL and VIOLATED_DIRECT_SPF, meaning the sending IP address was not authorized to send emails for that domain. This is a major red flag for spoofing.

- Poor Sender IP Reputation: Many senders were listed on well-known Real-time Blackhole Lists (RBLs). Rules like RBL_SPAMCOP, RBL_MAILSPIKE_VERYBAD, and RBL_VIRUSFREE_BOTNET indicate the sending IPs are known sources of spam or are part of botnets.

- Suspicious Content and Links: The spam filter identified content patterns statistically similar to known spam (BAYES_SPAM) and found links to malicious websites (ABUSE_SURBL, PHISHING).

- Fundamental Technical Misconfigurations: Many sending servers had no Reverse DNS (RDNS_NONE), a common trait of compromised machines used for spam.

There have been a few messages which went to spam without meeting any of these criteria, but they were actually cold marketing emails, so that's good too. In addition to this, Stalwart emits an info log for each possible spam message ingested. Not sure if this can get any better.
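For what it's worth, a rough sketch of how such a tally could be done over a Junk folder; the Maildir path and the "X-Spam-Result" header name are placeholders, use whatever your server actually writes:

    # Tally spam-filter rule hits across a Junk Maildir (path is a placeholder).
    import mailbox
    from collections import Counter

    hits = Counter()
    for msg in mailbox.Maildir("/var/mail/example/.Junk"):
        result = msg.get("X-Spam-Result", "")  # hypothetical header name
        for token in result.replace(",", " ").split():
            name = token.split("(")[0]         # drop per-rule scores, e.g. SPF_FAIL(5.0)
            if name.isupper():                 # rule names like RBL_SPAMCOP
                hits[name] += 1

    for rule, count in hits.most_common(10):
        print(f"{count:4d}  {rule}")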


That looks more like SOC2 than ISO-27001 though.


It's the same with ISO27001. A bad actor can always weasel their way through.


GoSSL certificates (using the ACME protocol) will also no longer be issued: https://community.buypass.com/t/y4y130p


Is having one key per zone worth paying money for? It's on the list of features I'd like to implement for PTRDNS because it makes sense for my own use case, but I don't know if there's enough interest to make it jump to the top of this list.


The AMD drivers are open source, but they definitely are not good. Have a look at the Fedora discussion forums (for example https://discussion.fedoraproject.org/t/fedora-does-not-boot-... ) to see what happens about every month.

I have no NVIDIA hardware, but I understand that the drivers are even worse than AMD's.

Intel seems to be, at the moment, the least bad compromise between performance and stability.


Although you get to set your own standards, "a bug was discovered after upgrading software" isn't very illuminating vis-à-vis quality. That does happen from time to time in most software.

In my experience, an AMD card on Linux is a great experience unless you want to do something AI-related, in which case there will be random kernel panics (which, in all fairness, may one day go away - then I'll go back to AMD cards, because their software support on Linux was otherwise much better than Nvidia's). There might be some kernel upgrades that should be skipped, but using an older kernel is no problem.


Distribute your domains across 2, or better 3, registrars, so that if one does something stupid with your domains, at least the others keep working.


What technique are you using for redirecting traffic to region B when region A is offline? And what happens if I have 2 nodes in a region and one goes offline?


For high-availability deployments, we leverage Fly.io's global Anycast network and DNS-based health checks. When a machine in region A goes offline, Fly's Anycast routing automatically directs traffic to healthy machines in other regions without manual intervention.

For intra-region redundancy, we deploy 2 nodes per region in HA mode. If one node fails, traffic is seamlessly routed to the other node in the same region through Fly.io's internal load balancing. This provides N+1 redundancy within each region, ensuring service continuity even during single-node failures.


I recommend adding more details like this to the website. Knowing it's Fly.io under the hood gives me significantly more confidence in your service.


Updated the site, we'll add more about it shortly.


How much of a difference would automated health checks + programmatic DNS updates make vs anycast?


Depends on the setup and what your goals are. Anycast typically takes the shortest route based on topology. This is particularly nice when you use something like caddy (because of the huge plugin system, you can do lots of stuff directly on the edge) to build your own CDN by caching at the edge or go all in and use caddy-lua to build apps at the edge. Gluing together dns systems (health checks, proximity + edge nodes) can be similar but the benefits of being "edge" largely go away as soon as you add the extra hop to a different region server.
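For comparison, a rough sketch of the health-check-plus-DNS approach; the provider API endpoint, token and IPs are all hypothetical, every provider has its own API:

    # Point a record at the first healthy region; the DNS TTL (60s here) bounds
    # failover time, whereas anycast reroutes as soon as routing converges.
    import json
    import urllib.request

    REGIONS = {"eu": "203.0.113.10", "us": "203.0.113.20"}  # placeholder IPs

    def healthy(ip: str) -> bool:
        try:
            with urllib.request.urlopen(f"http://{ip}/healthz", timeout=3) as r:
                return r.status == 200
        except OSError:
            return False

    def update_record(ip: str) -> None:
        # Hypothetical DNS provider endpoint; substitute your provider's real API.
        req = urllib.request.Request(
            "https://dns.example/api/zones/example.com/records/www",
            data=json.dumps({"type": "A", "content": ip, "ttl": 60}).encode(),
            headers={"Authorization": "Bearer <token>",
                     "Content-Type": "application/json"},
            method="PUT",
        )
        urllib.request.urlopen(req)

    for region, ip in REGIONS.items():
        if healthy(ip):
            update_record(ip)
            break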


That's an excellent example of what I like to call "good IT hygiene". I too would like to know what kind of tools you have to perform the functional and integration tests, and to execute the various rollouts.


Without going too deeply into details, we use common non-cloud-native platforms such as Jenkins to configure and schedule the tests. Unit tests are often baked into Makefiles while functional / integration tests are usually written as shell scripts, python scripts, or (depending on what needs to happen) even Ansible playbooks. This allows us to avoid cloud vendor lock-in, while using the cloud to host this infra and the deployment envs themselves.

Edit: we use Makefiles, not because we are writing code in C (we are not) but because our tech culture is very familiar with using 'make' to orchestrate polyglot language builds and deployments.
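For illustration only, the kind of functional-test script that might sit behind a Makefile target; the SERVICE_URL default and the endpoints are placeholders:

    # Minimal functional-test sketch, e.g. run as `make functional-test`.
    import os
    import sys
    import urllib.request

    BASE = os.environ.get("SERVICE_URL", "http://localhost:8080")

    def check(path: str, expect: int = 200) -> bool:
        try:
            with urllib.request.urlopen(BASE + path, timeout=5) as resp:
                ok = resp.status == expect
        except OSError:
            ok = False
        print(("PASS" if ok else "FAIL"), path)
        return ok

    if __name__ == "__main__":
        results = [check("/healthz"), check("/api/version")]
        sys.exit(0 if all(results) else 1)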

