
I'm in the position where I have to run a WAF to pass security certifications. The only open source WAFs are ModSecurity and its beta successor, Coraza. These things are dumb: they just use OWASP's Core Rule Set, which is a big pile of unreadable garbage.
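For context, both engines consume the same SecLang rule format, and the CRS is thousands of rules in that style. A minimal, hypothetical rule (made up for illustration, not taken from the CRS) looks like:

    # Both ModSecurity and Coraza parse this SecLang format.
    SecRuleEngine On
    SecRule ARGS "@rx (?i)union[\s]+select" \
        "id:1001,phase:2,deny,status:403,msg:'SQL injection attempt'"

Now scale that up to thousands of chained, regex-heavy rules and you get the readability problem.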


It looks like what the Grafana stack does, but linking specialized tools instead of building one big tool (e.g. linking traces [0]).

The only gap then is that there is no link between logs and metrics, but since they created Alloy [1], I guess they could make logs and metrics labels match, so we could select/see both at once?
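Something like this, assuming the collector attaches identical labels to both signals (the metric and label names here are made up):

    PromQL (metrics): rate(http_requests_total{job="api", env="prod"}[5m])
    LogQL (logs):     {job="api", env="prod"} |= "error"

With matching label sets, a dashboard could pivot from a metric spike straight to the logs of the same job.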

Oh ok here's a blog post from 2020 saying exactly this: https://grafana.com/blog/2020/03/31/how-to-successfully-corr...

[0]: https://grafana.com/docs/grafana/latest/datasources/tempo/tr... [1]: https://grafana.com/docs/alloy/latest/


Yes, that's the LGTM (Loki, Grafana, Tempo, and Mimir) stack.

First, the main issue with this stack is maintenance: managing multiple storage clusters increases complexity and resource consumption. Consolidating resources can improve utilization.

Second, differences in APIs (such as query languages) and data models across these systems increase adoption costs for monitoring applications. While Grafana manages these differences, custom applications do not.


Ansys SimAI | DevOps (more developer than SRE) | REMOTE (France, office is in Paris) | Full time

We're building products that predict the results of numerical simulations and act on them. Since joining the Ansys portfolio we are experiencing more demand than ever, and we need help to transform a good product and infra into something truly great.

At SimAI we lightly use AWS and heavily depend on Kubernetes, both being 100% configured through Infrastructure as Code (a bit of terraform and a lot of pulumi). You will mostly work with Typescript (pulumi), Python (scripts and application code), Bash (scripts).

As a DevOps engineer at SimAI you will work with me on exciting subjects like moving compute closer to the customer; you will also work on improving our security posture and help secure more certifications for our platform. As part of my job at SimAI I personally maintain <https://github.com/Trow-Registry/trow>.

Don't hesitate to apply, motivation matters as much as experience to me! Apply here: https://careers.ansys.com/job/Montigny-le-Bretonneux-Senior-...


Remote only in France?


The offer was quite good honestly: people who left got a $30k buyout.


At least, or six months' salary, which for most people leaving is higher (and even substantially higher) than $30k, I'd guess.


Either substantially higher or they don’t get paid enough.


> The long double type varies dramatically across platforms:
>
> [...]
>
> What sets numpy_quaddtype apart is its dual-backend approach:
>
> Long Double: This backend uses the native long double type, which can offer up to 80-bit precision on some systems, allowing backwards compatibility with np.longdouble.

Why introduce a new alias that might be 128 bits but also 80? IMO the world should focus on well-defined types (f8, f16, f32, f64, f80, f128), then maybe add aliases.
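The ambiguity is easy to demonstrate; the values in the comments below are typical rather than guaranteed, since the whole point is that the output depends on the platform:

    import numpy as np

    # x86-64 Linux/glibc: 18 decimal digits (80-bit extended, padded to 16 bytes)
    # Windows/MSVC: 15 decimal digits (long double is just double)
    print(np.finfo(np.longdouble).precision)
    print(np.dtype(np.longdouble).itemsize)  # storage size, not precision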


Maybe it depends on the application?

If you have written a linear system solver, you might prefer to express yourself in terms of single/double precision. The user is responsible for knowing if their matrix can be represented in single precision (whatever that means, it is their matrix and their hardware after all), how well conditioned it is, all that stuff. You might rather care that you are working in single precision, and that there exists a double precision (with, it is assumed, hardware support, because GPUs can’t hurt us if we pretend they don’t exist) to do iterative refinement in if you need to.
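For what it's worth, here is a toy numpy sketch of that single/double refinement idea (a real solver would LU-factor the single-precision matrix once and reuse the factors instead of calling solve() in the loop):

    import numpy as np

    def solve_with_refinement(A, b, iters=3):
        A32 = A.astype(np.float32)
        # initial solve entirely in single precision
        x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
        for _ in range(iters):
            r = b - A @ x                                    # residual in double
            dx = np.linalg.solve(A32, r.astype(np.float32))  # correction in single
            x += dx.astype(np.float64)
        return x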


Really seems like propagating the current platform inconsistencies into the future. Stick with 128 bits always, performance be damned. Slow code is much preferable to subtly broken code because you switched the host OS.


Especially if you need 128-bit float precision. It's very well known and understood that quad float is much slower on most platforms, and extremely slow on some. If you're using quad float, it's because you absolutely need all 128 bits, so why even reduce it to 80 bits for "performance"? It seems like an irrelevant design choice. The programmer can choose between f128 and f80 if performance is intractable on the target platform.


In the current ecosystem, even for a single server I would use K8s via something like minikube. You get for free: operators, observability, a standard API (so you can use Helm and such), ...
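The whole setup is a couple of commands (the Helm release and chart names below are placeholders):

    minikube start --driver=docker     # single-node cluster on the local container runtime
    kubectl get nodes                  # standard Kubernetes API from here on
    helm install my-release my-chart   # any chart works, since the API is standard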


That 8th entry pouring loads of WD-40 on trees is just crazy, that stuff is petroleum distillate!


Don't worry, this is a stump that's been dead for years, isolated to a very small garden bed in my yard


I'd still wager that is why it didn't win any prizes, though. They don't want to show a video of spraying WD-40 in nature.

I enjoyed the post, I appreciate a good methodical process.


"dead" stumps provide an essential home to insects, plants, lichen, fungi and other stuff. Rotting wood is essential for a healthy ecosystem. These days it is not seen as unwanted, dead or waste material from a tree but part of a life cycle. It is a living habitat.

So the judges may have seen something different rather than protecting, preserving and prioritising your family and the joyful and creative structures for children's play. At the very least it would give ambiguity in today's more ecologically minded world.


Yeah, the anti-eco optics were my first thought too; also the voiceover is a little more obviously AI. "Dump chemicals into the forest, brought to you by WD-40" isn't a super appealing message (regardless of what was actually happening / intended).


Well, there is the water table too. But that said, weed killer is poured all over everywhere.


You chose Papyrus too; I think that was the most damning.


Fairies literally only use papyrus


>> pouring loads of WD-40 on trees

Not really the same thing as spraying a bit of WD-40 on a dead stump.


For a single-node cluster, just use minikube (or RKE2 if you have masochistic tendencies). And if everything runs on a single small VPS, why even use Postgres and not SQLite?


k3s (512 MB RAM suggested) is smaller than minikube (2 GB RAM suggested).
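And getting a k3s node up really is this short (per the official install docs):

    curl -sfL https://get.k3s.io | sh -   # official single-binary install
    sudo k3s kubectl get nodes            # kubectl comes bundled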


> In the end, we chose the potential dangers of reimplementing command line parsing over the potential issues of including clap

Have you considered using argh? It seems to have the upsides without the downsides.
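For illustration, a minimal argh-based parser; the flags are hypothetical, loosely modeled on sudo's -u/-v:

    use argh::FromArgs;

    /// Hypothetical tool demonstrating argh's derive-based parsing.
    #[derive(FromArgs)]
    struct Args {
        /// run the command as the specified user
        #[argh(option, short = 'u')]
        user: Option<String>,

        /// validate the user's cached credentials
        #[argh(switch, short = 'v')]
        validate: bool,
    }

    fn main() {
        let args: Args = argh::from_env();
        println!("user={:?} validate={}", args.user, args.validate);
    }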


Don't think it's worth it. Looking at sudo's man page at https://linux.die.net/man/8/sudo, it looks like sudo only uses single-letter flags, some of which take arguments. Argh implements long options, built-in parsing, subcommands, and lots of other nice-to-have features that nevertheless add a lot of code. It's normal in traditional UNIX C programs to parse sudo-style flags in a handful of lines without any external dependencies.
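In Rust the hand-rolled approach stays just as short; a sketch with made-up flags (not sudo-rs's actual code):

    fn main() {
        let mut user: Option<String> = None;
        let mut validate = false;
        let mut args = std::env::args().skip(1);
        while let Some(arg) = args.next() {
            match arg.as_str() {
                "-u" => user = args.next(), // flag that takes an argument
                "-v" => validate = true,    // simple switch
                other => {
                    eprintln!("unknown option: {other}");
                    std::process::exit(1);
                }
            }
        }
        println!("user={user:?} validate={validate}");
    }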


I consider single-letter-only flags to be a mistake. There should almost always be a verbose double-dash option.

I get it, most of the tooling that uses single letters is totally ossified for backwards compatibility reasons. However, the sudo-rs team is already breaking backwards compat. Now is the time to make a minor usability improvement.


I consider double dashes to be a mistake. Hell, after a few drinks and quiet thought I consider single dashes to be a mistake. Perhaps the dd arg=val form is actually the ideal argv method after all. What if getopt was all a huge mistake? And then I sober up and realize they are just dashes, useless but harmless, not a thing worth worrying about.

And then you have the absolutely inane double-dash --arg=value format. Way to carry a bad idea to its logical conclusion, guys. Somebody drank their getopt kool-aid that morning. Just get rid of the stupid dashes if you are going to do that.


That's a bit dated. Both regular sudo (1.9.13p3) and sudo-rs (0.2.2) on my machine (Debian) support double dash style options.


I've used argh a fair bit. It has some weird ideas and restrictions and generally isn't nearly as good as clap. I would definitely recommend clap (unless you have extreme security concerns like this).


Why not use `getopt()`, which already exists in libc?

(Or even `getopt_long()` if you're Linux/glibc-only? The author mentions not supporting Windows, but it's unclear whether non-Linux Unices, e.g. *BSD, are intended target platforms.)

https://manpages.debian.org/bookworm/manpages-dev/getopt.3.e...
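For reference, the classic libc pattern really is just a loop (the flags here are hypothetical):

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(int argc, char *argv[]) {
        int opt;
        /* "u:v" means: -u takes an argument, -v is a plain switch */
        while ((opt = getopt(argc, argv, "u:v")) != -1) {
            switch (opt) {
            case 'u': printf("user=%s\n", optarg); break;
            case 'v': printf("validate\n"); break;
            default:  exit(EXIT_FAILURE); /* getopt already printed an error */
            }
        }
        return 0;
    }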


If you're trying to implement as much in Rust as possible, keeping an important part of the codebase in C code feels like the wrong decision, in my opinion.


Will this allow running Linux VMs on any Android device? Via something like nestbox (https://www.patreon.com/posts/74333551)?


This is already possible if your phone ships with the KVM kernel module, like some Pixel devices do, but reading the article suggests that KVM will become standard on all Android devices to enable this.

edit: according to this [1], yes, the pKVM support that's standard in Android exposes KVM functionality so that you can run VMs on Android.

[1] https://www.xda-developers.com/android-13-dp1-google-pixel-6...


A full Linux environment (with external monitor support) sounds awesome. I hope that enabled KVM becomes standard.

Would graphics acceleration work properly?


Depends on the kind of acceleration you want. VirGL is available on Linux host/Linux guest setups with recent kernels; not sure if QXL/SPICE will be available or can be added to the userland. Can't imagine a hardware passthrough situation making sense on a phone/tablet, either.


You can already do that with Termux, XSDL and some scripts. No virtualization needed.


Which Pixel devices? Is it something new or not-so-new?


I couldn't tell you, sorry. Just going off what I've read in some articles like the one I posted.


AVF supports this, people have used it to boot Linux and Windows. See for example https://twitter.com/kdrag0n/status/1493089098944237568


Straight from the horse's mouth:

> pKVM is built on top of the industry standard Kernel-based Virtual Machine (KVM) in Linux. It means all existing operating systems and workloads that rely on KVM-based virtual machines can work seamlessly on Android devices with pKVM.


It sounds like it will become common eventually. I just wish that there were a more supported pathway to running full VMs like that. These devices are powerful enough to do it pretty well now.


Are the VMs hardware accelerated?


Yes.

