
https://www.doc.ic.ac.uk/~eedwards/compsys/float/

Basically, you rewrite one of the numbers so that the exponents match, then sum the mantissas. There are a few extra steps, though: https://imgur.com/fgOH4sS
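
Here is a rough sketch of that "match the exponents, then sum the mantissas" idea in Python (my own toy, ignoring signs, rounding modes and special values; real hardware keeps guard/round/sticky bits during the alignment shift):

  def fp_add(m1, e1, m2, e2):
      # Toy add on (mantissa, exponent) pairs where value = m * 2**e,
      # assuming a 24-bit significand as in binary32.
      if e1 < e2:                 # make the first operand the one with the larger exponent
          m1, e1, m2, e2 = m2, e2, m1, e1
      m2 >>= (e1 - e2)            # step 1: align exponents (low bits fall off here)
      m, e = m1 + m2, e1          # step 2: sum the mantissas
      while m >= (1 << 24):       # step 3: renormalize if the sum overflowed
          m >>= 1
          e += 1
      return m, e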

A "floating point addition toy" would be way cool!


The thing that made floating point easy to understand for me was the concept of a dyadic rational [1]. Those are just the rational numbers whose denominator is a power of two (e.g. 1/2, 3/16, 61/32). Fundamentally, binary floating point represents dyadic rationals (not counting special stuff like NaN, infinity, etc.). To represent such a number in floating point form, just write it as a/2^b. Then the mantissa is a, and the exponent is -b.

This intuitively makes it obvious why, for example, 1/3 can't be represented. I find the rational form like a/b easier to understand than the "scientific notation" form like a×2^b.
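
Python makes this view easy to poke at, since float.as_integer_ratio() returns exactly that a/2^b form (a quick illustration of my own):

  print((0.1875).as_integer_ratio())  # (3, 16): 3/16 is dyadic, so it is stored exactly
  print((0.1).as_integer_ratio())     # (3602879701896397, 36028797018963968)
  # 0.1 is not dyadic, so what is actually stored is the nearest dyadic rational a/2**55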

[1]: https://en.wikipedia.org/wiki/Dyadic_rational


WebAssembly has a stack that lives separately from the linear memory, but C++ compiled to WebAssembly generally manages its own parallel "shadow stack" in linear memory that it keeps some of its stack variables inside. (I believe you can't have pointers into the WebAssembly stack, so anything that might need to be pointed to can't live in it.)

Because the shadow stack is created and managed by the WebAssembly binary itself, it would be its responsibility to add protections like stack protectors or ASLR on it within the linear memory if it wants them. A WebAssembly JIT isn't ever going to touch the linear memory in a way the binary doesn't specify.

>But function addresses can still be randomly offset, no?

No, functions have fixed indexes in a WebAssembly binary. You can't dynamically reassign the indexes at runtime.

You do have the benefit that if your program tries to jump to a function index chosen by an attacker, the attacker can only jump to a function with a compatible type signature. The attacker can't do anything too clever like jumping partway into an arbitrary function, jumping into attacker-written code in the mutable linear memory, or queuing up a series of return addresses to pull off return-oriented programming.


Remember: if VCs believed in what they were doing they would not take a 2% annual management fee and 20% of the upside.

They’d take 40% of the upside and live on ramen noodles.

VCs make money by raising money from LPs.

They spend this money on investments which don’t look too bad if they fail, because nearly all of them fail. Looking good while losing all of your investors’ money on companies which go broke is the key VC skill.

Once in a while you get a huge hit. That’s a lottery win; there is no formula for finding that hit. Broad bets help, but that’s about it. The “VC thesis” is a fundraising tool, a pitch instrument; it makes no measurable difference to success. It’s a shtick.

Sympathy, however, for the VC: car-dealership-sized transactions paired with the diligence burdens of real finance. It’s a terrible job.

Once you understand that VC is one of the worst jobs in finance and that they don’t believe most of their own story (it’s fundraising flimflam for their LPs), it’s a lot easier to negotiate.

1) we are a sound bet not to get you in trouble if we fail (good schools and track records)

2) we will work hard on things which your LPs and their lawyers understand, leaving evidence of a good effort on failure

3) we know how the game works and will play by the unwritten rules: keep up appearances

The kind of lunatics who actually stand to make money with a higher probability than average - the “Think Different” category - usually violate all of these rules.

1) they have no track record

2) they work on esoteric nonsense

3) they look weird in public

And they’re structurally uninvestable.

Once you get this it’s all a lot easier: the job of a VC is not to invest in winners, that’s a bonus.

The job of a VC is to look respectable while losing other people’s money at the roulette wheel, and taking a margin for doing so.

I hope that helps.


Flushing subnormals to zero produces speed gains only on certain CPU models, while on others it has almost no effect.

For example Zen CPUs have negligible penalties for handling denormals, but many Intel models have a penalty between 100 and 200 clock cycles for an operation with denormals.

Even on the CPU models with slow denormal processing, a speedup between 100x and 1000x exists only for the operation with denormals itself, and only when that operation belongs to a stream of operations running at the maximum CPU SIMD speed, i.e. when, during the hundred-odd lost clock cycles, the CPU could otherwise have done 4 or 8 operations per clock cycle.

No complete computation can have a significant percentage of operations on denormals, unless it is written in an extremely bad way.

So for a complete computation, even on the models with bad denormal handling, a speedup of more than a few times would be abnormal.

The only controversy that has ever existed about denormals is that handling them at full speed increases the cost of the FPU, so lazy or greedy companies, i.e. mainly Intel, have preferred to add the flush-to-zero option for gamers, instead of designing the FPU in the right way.

When the correctness of the results is not important, like in many graphic or machine-learning applications, using flush-to-zero is OK, otherwise it is not.
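
For anyone who has not met them: subnormals are the values that fill the gap between zero and the smallest normal number, giving gradual underflow. A quick look from Python, which does not flush them to zero:

  import sys
  tiny = sys.float_info.min             # smallest normal double, ~2.2e-308
  print(tiny / 2)                       # 1.1125369292536007e-308, a subnormal
  print(tiny / 2 > 0)                   # True: gradual underflow, not a sudden jump to zero
  print(tiny * sys.float_info.epsilon)  # 5e-324, the smallest positive subnormal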


There is a proposal for a number system called "unum" or "posit". Posits give more precision for small numbers (small meaning smaller in magnitude than about 10^70, for 64-bit numbers), less precision for huge numbers, and an overall larger range than the floating point system.

https://en.wikipedia.org/wiki/Unum_(number_format)

http://www.johngustafson.net/pdfs/BeatingFloatingPoint.pdf

(They are definitely not any easier to understand than the floating point system, though.)


> considering the problem is to fit the reals into 64/32/16 bits and have fast math

Floating-point numbers (and IEEE-754 in particular) are a good solution to this problem, but is it the right problem?

I think the "minimum of surprises" part isn't true. Many programmers develop incorrect mental models when starting to program, and get no feedback to correct them until much later (when they get surprised).

It is true that for the problem you mentioned, IEEE 754 is a good tradeoff (though Gustafson has some interesting ideas with “unums”: https://web.stanford.edu/class/ee380/Abstracts/170201-slides... / http://johngustafson.net/unums.html / https://en.wikipedia.org/w/index.php?title=Unum_(number_form... ). But many programmers do not realize how they are approximating, and the "fixed number of bits" may not be a strict requirement in many cases. (For example, languages that have arbitrary precision integers by default don't seem to suffer for it overall, relative to those that have 32-bit or 64-bit integers.)

Even without moving away from the IEEE-754 standard, there are ways languages could be designed to minimize surprises. A couple of crazy ideas: Imagine if typing the literal 0.1 into a program gave an error or warning saying it cannot be represented exactly and has been approximated to 0.100000000000000005551, and one had to type "~0.1" or "nearest(0.1)" or add something at the top of the program to suppress such errors/warnings. At a very slight cost, one gives more feedback to the user to either fix their mental model or switch to a more appropriate type for their application. Similarly if the default print/to-string on a float showed ranges (e.g. printing the single-precision float corresponding to 0.1, namely 0.100000001490116119385, would show "between 0.09999999776482582 and 0.10000000521540642" or whatever) and one had to do an extra step or add something to the top of the program to get the shortest approximation ("0.1").
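
For what it's worth, you can already get most of that feedback from a Python session today, using only the standard library:

  from decimal import Decimal
  print(Decimal(0.1))      # 0.1000000000000000055511151231257827021181583404541015625
  print((0.1).hex())       # 0x1.999999999999ap-4, the dyadic rational actually stored
  print(0.1 + 0.2 == 0.3)  # False: both sides are approximations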


This succinct style of reporting reminds me of how surgeons are taught to report on a patient (to colleagues):

  1. name/age/gender
  2. current problem (+ potentially relevant preexisting condition)
  3. relevant clinical/laboratory findings
  4. suspected cause
  5. recommended actions
  6. If an operation is planned: general health assessment (ASA)/allergies
  7. prognosis/miscellaneous notes
As a rule of thumb, non-surgical medical professionals have a similar framework, but will report in more depth and with less focus on a single logical path.

> But a different question is, why is no company trying to do this differently?

I once worked at a company - in a different domain - that made a conscious decision to make this kind of hire. It worked incredibly well, and I never understood why more companies didn't do it.

The context in my case was the Australian offices of a management consulting firm (BCG). The Melbourne and Sydney offices hired what were called "editors", brought on at the same grade as the consultants. Not editing as in correcting grammar, but helping the consultants improve the logic of the arguments in their slide decks: so they were logically consistent, easy to understand, and actually addressed the clients' issues. I was a junior consultant back then, and when preparing for major presentations we were constantly asked by our managers, "have you seen Yvonne?" [the Melbourne editor].


I have worked on automotive infotainment systems, set-top boxes, and UIs for certain kitchen appliances at various jobs before.

Some companies were taking contracts for big name companies, while some produced their own dashboards/hardware. The tasks start becoming repetitive after a while though.

Tech stack in almost all those places has been:

* Embedded Linux (Buildroot/Yocto) / rarely QNX.

* C++11 + Qt/QML/QtQuick for UI ( case study: https://www.slideshare.net/BurkhardStubert/qt-dd2014-casestu... ). We did prototype an application using Qt Webkit, but the performance was meh and we moved back to QML. This was in 2013 iirc. Things might have gotten better these days. It has almost always been 1 main UI application - so we didn't even need a window manager. We booted straight to the main application in 2-3 seconds and then used Qt's eglfs to render to full screen directly.

* Developers were free to implement other services (notifications daemon, recovery/OTA system, logging, etc.) needed by the main application in tech of their choice. We mostly ended up using C++ because that's what most of us were experienced in, and it also meant less bloat/messy integration. Sometimes shell scripts.

Hardware has always been some NXP iMX6 variant.

None of those devices had any internet connectivity, and software updates meant using a USB stick, etc. So we never got to try out something like balena for OTA updates.


The best way to do pub/sub on the web with a standard protocol is MQTT (https://mqtt.org). It supports WebSockets, it scales, it supports authentication, and it can handle unreliable networks.

We use it exclusively for the soon to be released 1.0 version of Zotonic. See: https://test.zotonic.com (running on a 4.99 euro Hetzner vps).

We developed an independent supporting JavaScript library called Cotonic (https://cotonic.org) to handle pub/sub via MQTT. This library can also connect to other compliant MQTT brokers. Because MQTT is a fairly simple protocol, it is fairly easy to integrate into existing frameworks. Here is an example chat application (https://cotonic.org/examples/chat) which uses the open Eclipse broker.
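
As an illustration (not Cotonic itself), a subscriber/publisher over MQTT-over-WebSockets takes only a few lines with the paho-mqtt Python client; the broker host and port below are placeholders for whatever broker you run:

  import paho.mqtt.client as mqtt     # pip install paho-mqtt (1.x API assumed)

  def on_message(client, userdata, msg):
      print(msg.topic, msg.payload.decode())

  client = mqtt.Client(transport="websockets")
  client.on_message = on_message
  client.connect("broker.example.org", 8080)   # placeholder broker/port
  client.subscribe("demo/chat/#")
  client.publish("demo/chat/hello", "hi there")
  client.loop_forever()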


This is absolutely my experience of Adobe software too. Terribly engineered with priorities weighted for the board rather than users.

I’ve been a designer for 20+ years and sans-Adobe for five; I absolutely recommend that young designers avoid the Adobe workflow. You don’t need it.

Photoshop = Pixelmator (Krita is also very good)

Illustrator/InDesign = Affinity Designer

Highly precise vectors I draw in Glyphs/Fontlab.


Here's an interview with Aaron McLeran, who has done a lot of work with CSound, and collaborated with Brian Eno on the procedural music in Will Wright's "Spore" computer game at Maxis, using Pure Data (PD).

Immersive Audio Podcast Episode 7 Aaron McLeran

https://podcasts.apple.com/ie/podcast/immersive-audio-podcas...

https://soundcloud.com/user-713907742/immersive-audio-podcas...

>In today’s episode Oliver was joined via Skype by Aaron McLeran, Lead Audio Programmer at Epic Games. Aaron’s first taste of audio programming was writing computer music in CSound while in graduate school at University of Notre Dame (when he was supposed to be doing astrophysics research). Realising his true calling, he left physics to study procedural and interactive computer music, audio synthesis, and audio analysis with Dr. Curtis Roads at the University of Santa Barbara. His first game audio experience was writing procedural music on Spore where he got to collaborate with Brian Eno and Maxis’ audio director Kent Jolly on writing much of the game’s truly procedural music. His next game audio gig was a sound designer on Dead Space 2 where he wrote much of the games interactive audio systems in Lua along with accomplished audio director Don Veca. He made the leap from technical sound designer to audio programmer at Sledgehammer Games where he worked on Call of Duty: Modern Warfare 3 and Call of Duty: Advanced Warfare. His next audio programming gig was at ArenaNet where he got to wrangle with the unpredictability and scale of game audio in the context of an MMO and developed some pretty cool tech around for player-created music and musical interaction. He’s currently working on a new multi-platform audio mixer backend for UE4 and developing new tech and approaches to game audio for VR.

>Aaron speaks to Oliver about all things Game Audio and Procedural Audio and his unusual entry into the industry.

GDC Vault: Procedural Music in SPORE, with Kent Jolly, Aaron McLeran

https://www.gdcvault.com/play/323/Procedural-Music-in

MAKE YOUR OWN KIND OF MUSIC IN 'SPORE' WITH HELP FROM BRIAN ENO (LISTEN TO THIS)

http://www.mtv.com/news/2456432/make-your-own-kind-of-music-...

THE BEAT GOES ON: DYNAMIC MUSIC IN SPORE: Audio engineers Kent Jolly and Aaron McLeran unveil Spore's procedural music generation.

https://www.moredarkthanshark.org/eno_int_gspy-feb08.html

Will Wright and Brian Eno - Generative Systems

https://www.youtube.com/watch?v=UqzVSvqXJYg

Pure Data

https://en.wikipedia.org/wiki/Pure_Data#Projects_using_Pure_...

>Projects using Pure Data

>Pure Data has been used as the basis of a number of projects, as a prototyping language and a sound engine. The table interface called the Reactable and the abandoned iPhone app RjDj both embed Pd as a sound engine.

>Pd has been used for prototyping audio for video games by a number of audio designers. For example, EAPd is the internal version of Pd that is used at Electronic Arts (EA). It has also been embedded into EA Spore.

>Pd has also been used for networked performance, in the Networked Resources for Collaborative Improvisation (NRCI) Library.


All three are good, well maintained projects.

Tree-sitter is more general: it aims to provide support for many languages and so its bash support is lacking a little. But it's very fast. It's designed for IDEs (hence why bash-lsp is using it). It caches its results and only re-parses on newly changed code blocks.

sh by mvdan and shellcheck are both built specifically for the shell. They both have really robust grammars. shellcheck is great for error reporting. sh is great for building ASTs (it's more of a true parser).

As always, it depends on your use case. If you need a descriptive summary of errors / warnings in a shell script and performance isn't crucial, use shellcheck. If you need a really robust parser and AST output, use sh. If performance is crucial and/or you eventually want to support different languages, use tree-sitter.
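
For example, parsing a shell snippet with tree-sitter's Python bindings looks roughly like this (this assumes the older py-tree-sitter API, pre-0.22, and a checked-out tree-sitter-bash grammar; newer releases expect the grammar to come from a separate package instead):

  from tree_sitter import Language, Parser   # pip install "tree_sitter<0.22"

  Language.build_library("build/langs.so", ["vendor/tree-sitter-bash"])
  BASH = Language("build/langs.so", "bash")

  parser = Parser()
  parser.set_language(BASH)
  tree = parser.parse(b'for f in *.txt; do echo "$f"; done')
  print(tree.root_node.sexp())               # S-expression view of the syntax tree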

Please feel free to jump in if I am wrong here!


All cloud native networking options, including Cilium: https://landscape.cncf.io/category=cloud-native-network&form...

All service proxies, including HAProxy, NGINX, and Traefik: https://landscape.cncf.io/category=service-proxy&format=card...


I already made it

Twiktwok.github.io


This also has a very nice and simple explanation of the whole thing:

https://www.youtube.com/watch?v=996OiexHze0

It's about 1h long, but it's really worth it.


In case you're not into VR but are curious about it from time to time: DoF means degrees of freedom, and you can easily imagine the tech leap from 3 to 6 (rotation of the head around the X, Y & Z axes, plus surge [back and forth], sway [sideways] and heave [vertical] movements of your body).

I don't know of a guide, but there are lots of projects built on miekg's DNS package: https://github.com/miekg/dns/

This Github repository tracks a whole bunch of these drop-in CSS files: https://github.com/dohliam/dropin-minimal-css

And you can use this demo site to switch between them all on the fly: https://dohliam.github.io/dropin-minimal-css/


I find these “shorter work weeks are just as effective” articles to be nonsense, at least for knowledge workers with some tactical discretion. I can imagine productivity at an assembly line job having a peak such that overworking grinds someone down to the point that they become a liability, but people that claim working nine hours in a day instead of eight gives no (or negative) additional benefit are either being disingenuous or just have terrible work habits. Even in menial jobs, it is sort of insulting – “Hey you, working three jobs to feed your family! Half of the time you are working is actually of negative value so you don’t deserve to be paid for it!”

If you only have seven good hours a day in you, does that mean the rest of the day that you spend with your family, reading, exercising at the gym, or whatever other virtuous activity you would be spending your time on, are all done poorly? No, it just means that focusing on a single thing for an extended period of time is challenging.

Whatever the grand strategy for success is, it gets broken down into lots of smaller tasks. When you hit a wall on one task, you could say “that’s it, I’m done for the day” and head home, or you could switch over to something else that has a different rhythm and get more accomplished. Even when you are clearly not at your peak, there is always plenty to do that doesn’t require your best, and it would actually be a waste to spend your best time on it. You can also “go to the gym” for your work by studying, exploring, and experimenting, spending more hours in service to the goal.

I think most people excited by these articles are confusing not being aligned with their job’s goals with questions of effectiveness. If you don’t want to work, and don’t really care about your work, fewer hours for the same pay sounds great! If you personally care about what you are doing, you don’t stop at 40 hours a week because you think it is optimal for the work, but rather because you are balancing it against something else that you find equally important. Which is fine.

Given two equally talented people, the one that pursues a goal obsessively, for well over 40 hours a week, is going to achieve more. They might be less happy and healthy, but I’m not even sure about that. Obsession can be rather fulfilling, although probably not across an entire lifetime.

This particular article does touch on a goal that isn’t usually explicitly stated: it would make the world “less unequal” if everyone was prevented from working longer hours. Yes, it would, but I am deeply appalled at the thought of trading away individual freedom of action and additional value in the world for that goal.


I am CEO of a Swedish VC-backed startup, and I am currently off 2 days a week for parental leave for 6 months. Yes, I am a man. And when we hire somebody in their 30s (man or woman), we can expect that for 6-12 months they will be gone on parental leave at some stage. (We have 4 people now out of 16 who will be on parental leave during 2020 - 3 of them men.)

You can always use https://ngrok.com or other similar "tunneling" solutions. They work like a charm:

https://ngrok.com

https://github.com/cloudflare/cloudflared

https://github.com/inlets/inlets


https://www.lingscars.com/ - yes, this is a real car rental business. Best viewed on desktop - the mobile version is not nearly as... potent.

(reposting my comment from the original)

There’s some potentially misleading information here. Background: I’ve spent the last 20+ years writing low-latency realtime audio applications, technically cross-platform but focused on Linux.

If you care about low latency on any general purpose OS, you need to use a realtime scheduling policy. The default scheduling on these OS’s is intended to maximise some combination of bandwidth and/or fairness. Low latency requires ditching both of those in favor of limiting the maximum scheduling delay of a thread that is otherwise ready to run.

Measuring how long synchronization primitives take without SCHED_FIFO is illustrative, but only of why, if you care about scheduling latency, you need SCHED_FIFO. There are several alternative schedulers for Linux – none of them remove the need for SCHED_FIFO if latency is important.

It is absolutely not the case that using SCHED_FIFO automatically starves non-SCHED_FIFO threads. Scheduling policy is set per-thread, and SCHED_FIFO will only cause issues if the threads that use it really do “burn the CPU” (e.g. by using spinlocks). If you combine SCHED_FIFO with spinlocks you need to be absolutely certain that the locks have low contention and/or are held for extremely short periods (preferably just a few instructions). If you use mutexes (which ultimately devolve to futexes at the kernel level), the kernel will take care of you a little better, unless your SCHED_FIFO thread doesn’t block – if it doesn’t do that, that’s entirely on you. Blocking means making some sort of system call that will cause the scheduler to put the thread to sleep – could be a wait on a futex, waiting for data, or an explicit sleep.
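
Requesting SCHED_FIFO doesn't need anything exotic, by the way. A minimal Linux-only sketch from Python (this assumes the process is allowed to use realtime priorities, e.g. via an rtprio limit or CAP_SYS_NICE; otherwise the call raises a permission error):

  import os

  os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(70))  # 0 = the calling thread
  print(os.sched_getscheduler(0) == os.SCHED_FIFO)
  # ...and make sure this thread blocks (futex/poll/sleep) rather than spinning,
  # otherwise it really can starve everything else on that core.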

In particular, this: “This was known for a while simply because audio can stutter on Linux when all cores are busy (which doesn’t happen on Windows)” is NOT true. Linux actually has better audio performance than Windows or macOS, but only if the app developer understands a few basic principles. One of them is using SCHED_FIFO appropriately.

Pro-audio/music creation scheduling is MUCH more demanding than video, and a bit more demanding than games. Linux handles this stuff just fine – it just gives you enough rope to shoot yourself in the foot if you don’t fully understand what you’re doing.


His quote (my favourite) may help understand this value: “If you're going to try, go all the way. Otherwise, don't even start. This could mean losing girlfriends, wives, relatives and maybe even your mind. It could mean not eating for three or four days. It could mean freezing on a park bench. It could mean jail. It could mean derision. It could mean mockery--isolation. Isolation is the gift. All the others are a test of your endurance, of how much you really want to do it. And, you'll do it, despite rejection and the worst odds. And it will be better than anything else you can imagine. If you're going to try, go all the way. There is no other feeling like that. You will be alone with the gods, and the nights will flame with fire. You will ride life straight to perfect laughter. It's the only good fight there is.”

It won't. NL has been preparing quite hard for a rise in sea level and/or worse storms. The port of IJmuiden is being overhauled, and the shoreline defenses have all been raised, in some places only about 3 ft, in others much more, depending on the likelihood of flooding. A bigger problem is that during an extended period of high water the rivers won't be able to drain into the North Sea; studies are being made on how to deal with that, and the 'room for the river' ("ruimte voor de rivier") plan is one of many ways in which change can be made. Other options are to use the IJsselmeer as buffer storage and to vastly increase pumping capacity.

https://www.ruimtevoorderivier.nl/

https://www.gemalen.nl/gemaal_detail.asp?gem_id=264

(Photo 4 is particularly interesting: that's one of three such pumps at this station, an old and now relatively small one, and there are many more like it.)

The alarmist tone of the article is a bit strange; if there is a place where I feel safe with respect to water it is here. There is an extensive network of canals, pumps, monitoring and reserve capacity on just about everything to deal with water and flooding. Compared to how other countries fare (annual news from France, Germany, Spain and elsewhere shows extensive damage), we do pretty well here.

Of course on large time scales there is a real risk, and it will cost a small fortune to deal with all that, but with dikes as a well understood mechanism to keep the sea out, and one of the wealthiest nations on a per-area basis, it would highly surprise me if NL were the worst hit.

Consider another angle: if the problem had not been dealt with successfully in the past this country would not even exist today.


Nobody pays for or gets product placement on HN. You have to make something readers find interesting. I assume that's the case with the current submission, since it hasn't been affected by moderators or by any of the below.

We do sometimes place stories that we or a small number of story reviewers think the community might like. That program is described at https://news.ycombinator.com/item?id=11662380 and the links back from there. Such stories get lobbed by software randomly onto the bottom half of the front page, whence they fall off in a few minutes unless people upvote them. The purpose is to make HN more interesting (because https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...). Many good submissions fall through the cracks and this is a way to give some a second crack.

Anyone who knows of a story that deserves a second chance like this should email us at hn@ycombinator.com and we'll consider throwing it in that pool. It's ok to ask this for one of your own stories, but it's better if you just ran across something and think it's cool.

By the way, you can see a partial list of these at https://news.ycombinator.com/invited. Those are the ones that were too old to be lobbed directly when we saw them, so we emailed the submitter and invited them to repost it instead. It's on my list to publish a more comprehensive set of lobbed stories.

There are two other types of placed submission that HN does as a way to give back to YC for funding this place: job ads (https://news.ycombinator.com/jobs) and Launch HNs for startups (https://news.ycombinator.com/launches).


A list of all official .well-known files and paths:

https://www.iana.org/assignments/well-known-uris/well-known-...

Wikipedia knows some less official ones too:

https://en.wikipedia.org/wiki/List_of_/.well-known/_services...


Oh man! If only there was a way to take UDP packets and tunnel them over TCP! Wait a second!

http://manpages.ubuntu.com/manpages/xenial/man1/udptunnel.1....

Set up WireGuard on your server as though everything were normal. Then, on the server, run this command (as a service):

udptunnel -s 443 127.0.0.1/51820

Then on your client run:

udptunnel -c [SERVER-ADDR]/443 127.0.0.1 51818

In the client's WireGuard config, where you would normally specify the server's address/port, instead specify 127.0.0.1 and port 51818. Finished!

Don't forget to open the firewall on the server's port 443!

Setting up udptunnel as a systemd service that auto-starts/restarts only involves writing two short files! WireGuard uses a standard service file as well, so you can simply require the udptunnel service as a prerequisite!

Personally, I find this style of combining simple components much more satisfying (and secure!) than the gargantuan complexity of OpenVPN/IPSec! Wireguard's simplicity means it is easy to have a mental model around how it functions and how it can be composed!

