cb22's comments

> I'd like to see the same stats for tigerbeetle

Actual SIGSEGVs are pretty rare, even during development. There was an interesting one that affected our fuzzing infra a little while ago: https://ziggit.dev/t/stack-probe-puzzle/10291

Almost all of the time we hit asserts, panics, or other things that trigger core dumps intentionally!


I just did some quick testing on my XPS 7390 2-in-1 (i7 1065G7) with Wayland:

  * Idle @ 50% brightness, WiFi on: 3.5W
  * mpv + hardware acceleration: 15W
  * Firefox + hardware acceleration: 19W
  * Firefox + software decoding: 29W
The test video was https://www.youtube.com/watch?v=LXb3EKWsInQ - which is 4K 60 FPS and was scaled to 1920x1080 for display.
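In case anyone wants to reproduce these numbers, below is a rough sketch of how I'd sample power on Linux. It assumes a sysfs battery that exposes power_now in microwatts (the BAT0 path is a guess; some batteries only report current_now/voltage_now, in which case you'd multiply those instead):

  import time

  # Hypothetical battery path; adjust BAT0 to match your machine.
  POWER_NOW = "/sys/class/power_supply/BAT0/power_now"  # microwatts on most kernels

  def average_power(seconds=60, interval=1.0):
      """Average battery draw in watts, sampled once per interval."""
      readings = []
      for _ in range(int(seconds / interval)):
          with open(POWER_NOW) as f:
              readings.append(int(f.read()) / 1_000_000)  # uW -> W
          time.sleep(interval)
      return sum(readings) / len(readings)

  print(f"average draw: {average_power():.1f} W")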


Huh, that's the video I use to test hardware acceleration too.

Good taste.


The difference is much more pronounced at lower resolutions. 4K60 is not handled efficiently by any hardware.


That's the opposite of my experience. Depending on the decoder, lower resolutions may be handled more efficiently by the CPU than by the hardware decoder (which is a decent chunk of silicon and takes extra power to use). But once you hit a certain resolution, the hardware decoder is pretty much always way more efficient than a software CPU decoder. Some well-implemented hardware decoders even handle low resolutions more efficiently.

My experience is largely focused on battery-powered mobile devices (Android/iOS), though.


To give a concrete example, with VAAPI on Haswell integrated graphics (peak CPU usage):

  mpv hardware H.264 1080p60: 3% CPU usage
  mpv software H.264 1080p60: 20% CPU usage

  mpv hardware H.264 4K60: 96% CPU usage
  mpv software H.264 4K60: 80% CPU usage
There comes a point of diminishing (or in this case, negative) returns. A bit more subjectively, the H.264 software rendering at 4K60 also felt smoother.

I'm using H.264 as it is hardware accelerated on this platform.
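For anyone who wants to repeat the comparison, something along these lines is how I'd script it. Treat it as a sketch: it assumes mpv and psutil are available, that --hwdec=vaapi / --hwdec=no select the hardware and software paths, and the clip path is a placeholder:

  import subprocess
  import psutil

  VIDEO = "test-4k60.mkv"  # placeholder clip

  def peak_cpu(hwdec):
      # --hwdec=vaapi uses the hardware decoder, --hwdec=no forces software decoding;
      # --length=30 plays only the first 30 seconds.
      proc = subprocess.Popen(["mpv", f"--hwdec={hwdec}", "--length=30", VIDEO])
      peak = 0.0
      while proc.poll() is None:
          peak = max(peak, psutil.cpu_percent(interval=1.0))
      return peak

  for mode in ("vaapi", "no"):
      print(mode, peak_cpu(mode))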


I don't think all Haswell chips support 4K H.264. That's one downside of hardware decoders: they have limits on what they support. I suspect the decoder is actually falling back to a software implementation to handle the bitstream since the hardware can't.

For example: https://www.avsforum.com/threads/no-hardware-4k-h264-decode-...
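If you want to see what the hardware advertises, vainfo lists the supported profile/entrypoint pairs (it won't show resolution limits, so it only partially answers the question). A quick filter, assuming vainfo is installed:

  import subprocess

  # vainfo prints lines like "VAProfileH264High : VAEntrypointVLD" for supported codecs.
  out = subprocess.run(["vainfo"], capture_output=True, text=True).stdout
  h264 = [line.strip() for line in out.splitlines() if "H264" in line]
  print("\n".join(h264) or "no H.264 entrypoints reported")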


"Offloaded decoding takes more CPU than non-offloaded decoding" doesn't make any sense - clearly something else is going awry here.


I have a desktop product that, among other things, decodes 4K60p using MMF. The decoding path either decodes straight into a DirectX texture if hardware-based video decoding is supported, or does it in software otherwise.

The software-based path causes significant CPU load, while the hardware path keeps the CPU at about 0%. Since hardware video decoding is supported on nearly any modern laptop or desktop, I would say 4K60 is handled very efficiently.


> We've seen at least one person slightly tweak TigerBeetle to run on FreeBSD already through the darwin code paths.

I'm part of that sample set! Was quite surprised how easy it was to get it up and running on FreeBSD. Benchmarking on tmpfs on both, it even had a ~10% lead over Linux.

(Of course, that's not exactly the intended use case, so don't pay too much attention to that number!)


I too was initially put off by those requirements. Now we run our Nomad server on a single t3.medium instance that sits at a 0.1 fifteen-minute load average and uses ~500MB of RAM.

This manages about 100 client nodes. No need for a cluster since we don't need high availability on our control plane, and there's no actual state stored there that isn't created from our CI pipeline.
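For what it's worth, keeping an eye on that single server is easy over the HTTP API. A quick sketch, assuming the default port 4646 and that ACLs allow anonymous reads:

  import json
  import urllib.request

  NOMAD = "http://localhost:4646"  # adjust to your server's address

  def get(path):
      with urllib.request.urlopen(NOMAD + path) as resp:
          return json.load(resp)

  nodes = get("/v1/nodes")             # registered client nodes
  members = get("/v1/agent/members")   # gossip members (the servers)
  print(f"{len(nodes)} client nodes, {len(members['Members'])} server member(s)")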


Glad you went for it despite our poor documentation cb22! That sounds like a great setup. I think the only defensible way to describe our "Requirements" page is that we wanted to make the safest suggestion for the widest range of users. Obviously it's wildly inaccurate for a wide range of use cases, and we should fix that.

Nomad servers could start with 300MHz, 100MB of memory, and eMMC storage and run a Raspberry Pi cluster just fine. Our most important resource guidance is all the way over in our Monitoring docs!

> Nomad servers' memory, CPU, disk, and network usage all scales linearly with cluster size and scheduling throughput.

https://developer.hashicorp.com/nomad/docs/operations/monito...

Any cluster can start with 300MHz and 100MB as long as you monitor usage and scale appropriately.
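To illustrate that kind of monitoring, the telemetry endpoint can be scraped directly. A sketch, not official guidance: it assumes the default address and that `telemetry { prometheus_metrics = true }` is set in the agent config:

  import urllib.request

  # Nomad's telemetry endpoint in Prometheus text format.
  URL = "http://localhost:4646/v1/metrics?format=prometheus"

  with urllib.request.urlopen(URL) as resp:
      metrics = resp.read().decode()

  # Print the Go runtime metrics (memory use etc.) as a quick health check.
  for line in metrics.splitlines():
      if not line.startswith("#") and "runtime" in line:
          print(line)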

We're going to try to update our Requirements docs to add this nuance and guidance on how to calculate requirements based on projected load. We recently spent some time improving our heartbeat docs, and I think the approach we took there will serve us well for system requirements: https://developer.hashicorp.com/nomad/docs/configuration/ser...


Thank you for the follow-up! This makes it much more likely that I’ll remember to give it a shot when the next applicable project comes up… because I’ll probably have forgotten these comments and gone to check the docs again :-)


See, this is the sort of information they should have posted on the requirements page, not some arbitrarily sized high-water mark needing gigs of disk and memory. I'll have to give Nomad (and Consul) another go next time, and test it on physical hardware to see.

Thanks for the info, and for doing what HashiCorp seemed too busy to do themselves.


Can't speak for the online side of things, but it's possible to use most Garmin watches fully offline.

I'm not talking about a reduced-functionality style of offline mode either: the watch computes all metrics locally, you can load OSM maps for navigation, copy MP3 files to it, and copy GPX traces off of it.


Android (and BlackBerry before it) has had something similar to this for a while.

On a BlackBerry, it would shut down the radio and give you a "Battery too low for radio use" message, but you could trick it into turning the radios back on by dialing 911 and then hanging up really quickly.

Various Android manufacturers do their own version of this: Sony had Ultra STAMINA mode, and Samsung has an option that applies a whole bunch of power optimizations as well as limiting apps to those explicitly specified.

For me, the critical purpose of my phone is to communicate with others in real time, which happens through things like WhatsApp, Slack, or email (maybe the odd call). If I can't use those, it might as well be dead!


> I’d rather just put a normal Linux distro there, however, I’m not feeling like waiting 8 seconds every time I turn in my car.

I've been working on a Pi Zero W based carputer (really, it's headless and just interfaces with my phone over Bluetooth / the existing radio head unit and steering wheel menu and controls).

Boot time is probably around the 8-second mark currently (fully booted with buildroot / systemd; it's powered from 12V switched by the key), and it already feels fast enough. I normally start my car, put on my seat belt, and look around. By the time I've finished doing that, it has connected to my phone.

Of course, it would be possible to do even smarter things. Hook up a low-power micro that can interface with the CAN bus and a MOSFET, and you can instead power it from the battery, turning it on when the car is unlocked and off after a timeout once the engine goes off.
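As a very rough sketch of that idea (here with python-can on Linux rather than a bare micro, a made-up unlock arbitration ID, and a stubbed function where the MOSFET gate would be driven, since all of that depends on the car and wiring; a real version would also watch an engine-state frame):

  import time
  import can  # python-can, talking to a socketcan interface

  UNLOCK_ID = 0x2F0      # hypothetical arbitration ID for the "car unlocked" frame
  OFF_TIMEOUT = 15 * 60  # seconds to stay powered after the last unlock/activity

  def set_power(on):
      # Placeholder: drive whatever GPIO switches the MOSFET gate here.
      print("power", "on" if on else "off")

  bus = can.Bus(interface="socketcan", channel="can0")
  last_activity = 0.0
  powered = False

  while True:
      msg = bus.recv(timeout=1.0)
      if msg is not None and msg.arbitration_id == UNLOCK_ID:
          last_activity = time.monotonic()
          if not powered:
              set_power(True)
              powered = True
      if powered and time.monotonic() - last_activity > OFF_TIMEOUT:
          set_power(False)
          powered = False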


To give some numbers to the comment I made previously in this thread, with an XPS 7390 2-in-1:

Single-core turbo: maintains 3.9GHz indefinitely; temperature hovers around ~75°C with the fans around 5000 RPM.

All-core turbo: keeps 3.4GHz and a ~45W TDP for around 10 seconds before dropping to 2.7GHz and a ~25W TDP indefinitely. Temps still around ~75°C, with the fans around 8000 RPM.

That's without any undervolting, but with the processor set for 25W TDP (like the Windows "Ultra Performance" mode). Adding in some undervolting gets the all core turbo up to 2.9GHz.
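On Linux, the sustained limit can be raised through the intel_rapl powercap interface (needs root, and the paths / constraint indices can vary between machines, so treat this as a sketch rather than exactly what I did):

  # Raise the package PL1 (long-term) power limit to 25W; run as root.
  RAPL = "/sys/class/powercap/intel-rapl:0"  # package 0 on most systems

  def set_long_term_limit(watts):
      # constraint_0 is usually the long-term (PL1) constraint; value is in microwatts.
      with open(f"{RAPL}/constraint_0_power_limit_uw", "w") as f:
          f.write(str(int(watts * 1_000_000)))

  set_long_term_limit(25)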


This is Ice Lake?


Ice Lake, yeah. It's the i7 1065G7 [1]

[1] https://ark.intel.com/content/www/us/en/ark/products/196597/...


Can you tell me what you did to set it to 25W? I swear I've tried this a few times going through ThrottleStop tutorials with different USB-C chargers (from the stock 67W up to 97W), and either that option was greyed out or I never got it to actually do 25W.


> I absolutely hate it. It throttles constantly even after undervolting it. I had to do a bunch of black magic to get it to sleep properly (which is evidently happening to every Dell) and eventually gave up on that and just set it to hibernate any time the lids closed (it's 32gb so that adds about 30 seconds to the start up time). I've spent more time tweaking this thing, reading forums and reddit about how to make it perform DECENTLY than I did building my last hackintosh and I don't enjoy that experience ever. When you get past all these issues it's still Windows 10 which I just find to be the most annoying OS I've ever used.

Really? I love my XPS 7390 2-in-1; it's the best laptop I've used and owned by far, and that includes MacBooks. Combined with a WD19TB dock (and a useful trick of flipping it around so it's an inverted L), it makes a great work-from-home setup. Its thermal profile is relatively aggressive by default, but it should stay at 15W (and ~65°C temps) indefinitely. Up the power limit to 25W (like the Windows / Dell "Ultra Performance" mode does) and it appears it can maintain that too. I mean, Crysis can even run on the thing [1].

That said, I'd probably lose my mind if I had to run Windows on it. A suggestion for getting good old S3 sleep to work: enable the hack for reenabling S3 sleep in the Windows registry, then disable "Early Signs of Life / Dell Logo" in the BIOS [2]. I'm not sure if this will work on Windows, but it works flawlessly on Arch.

[1] https://www.youtube.com/watch?v=3MXS_KJf_M0 [2] https://wiki.archlinux.org/index.php/Dell_XPS_13_2-in-1_(739...
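To check whether the kernel will actually use S3, /sys/power/mem_sleep lists the available suspend modes with the active one in brackets; a quick check (standard path on recent kernels, but as always, just a sketch):

  # "s2idle [deep]" means S3 ("deep") is available and currently selected.
  with open("/sys/power/mem_sleep") as f:
      print("mem_sleep:", f.read().strip())

  # Selecting deep sleep for the current boot needs root:
  #   echo deep > /sys/power/mem_sleep
  # or add mem_sleep_default=deep to the kernel command line to make it stick.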


It seems like we're just waiting on BIOS updates for Ice Lake to have a proper thermal profile. I'm not sure. What happens is the laptop's thermal profile is way under the Intel recommended profile. I believe it starts to throttle around 80°C instead of the 100°C or so Intel says Ice Lake should be able to run at.

It's been a while since I played with this, but I don't believe I was able to bump the power limit to 25W on Ice Lake. Maybe I'm wrong.

Maybe I'll do a fresh install and give it a try again. I've heard a few comments where people's 7390 2-in-1s were running great, but on /r/dell I've seen a lot more with complaints like mine.

I did get some improvement by dropping the thermal/power plan from High Performance down to Quiet. That seems to keep the thermals down so it triggers throttling less.

My problem is that it throttles constantly, and it has other issues: if you move it (lift it up) while it's under load, it will immediately throttle, and I trigger that a lot with it sitting on my lap.

I bought it hoping I could run a bunch of VMs (thus the 32GB) and do light development work on it, but I can barely even draw in Figma without having a rough experience, so I completely gave up on developing on it.

This is a really good article: https://getpocket.com/redirect?url=https%3A%2F%2Fwww.playerz...


> SCTP could have been amazing

It still is; it's part of the WebRTC spec, and when you use a WebRTC data channel, you're using SCTP over DTLS over UDP! (or TCP, possibly with a TURN relay, which may end up tunneling the whole thing over TLS over TCP :))

There are a lot of acronyms in WebRTC, thankfully there's https://webrtcglossary.com
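You can see the layering just by creating a data channel and looking at the generated SDP. Here's a small sketch with aiortc (a Python WebRTC implementation); the application m-line it produces advertises SCTP over DTLS:

  import asyncio
  from aiortc import RTCPeerConnection

  async def main():
      pc = RTCPeerConnection()
      pc.createDataChannel("demo")  # the channel rides on SCTP over DTLS
      await pc.setLocalDescription(await pc.createOffer())
      # The offer contains a section along the lines of
      # "m=application ... UDP/DTLS/SCTP webrtc-datachannel".
      for line in pc.localDescription.sdp.splitlines():
          if line.startswith("m=application"):
              print(line)
      await pc.close()

  asyncio.run(main())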


> ...when you use a WebRTC data channel, you're using SCTP over DTLS over UDP!

The https://orchid.com VPN does tunnel its traffic over WebRTC.

Ref: https://news.ycombinator.com/item?id=21952887


Wow, I knew it was using SCTP but I didn't know that was inside the DTLS channel. That seems like a lot of overhead.

