Hacker News: PMunch's comments

Just did a bit of a deep dive into dithering myself, for my project of creating an epaper laptop: https://peterme.net/building-an-epaper-laptop-dithering.html

It compares error diffusion algorithms as well as Bayer, blue noise, and some more novel approaches. Just in case anyone wants to read a lot more about dithering!
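For anyone who wants to play along, here's a minimal sketch of the ordered (Bayer) dithering family compared in the post. The recursive matrix construction is the standard one; the function names and the synthetic gradient test image are just illustrative:

```python
import numpy as np

def bayer_matrix(n):
    """Recursively build a 2^n x 2^n Bayer threshold matrix."""
    m = np.array([[0, 2], [3, 1]])
    for _ in range(n - 1):
        m = np.block([[4 * m,     4 * m + 2],
                      [4 * m + 3, 4 * m + 1]])
    return m / m.size  # normalise thresholds to [0, 1)

def ordered_dither(img, n=3):
    """Dither a greyscale image (values in [0, 1]) to black and white."""
    m = bayer_matrix(n)
    h, w = img.shape
    # Tile the threshold matrix over the image, then compare per pixel
    tiled = np.tile(m, (h // m.shape[0] + 1, w // m.shape[1] + 1))[:h, :w]
    return (img > tiled).astype(np.uint8)

# A horizontal grey ramp as a stand-in test image
gradient = np.linspace(0, 1, 256).reshape(1, -1).repeat(64, axis=0)
out = ordered_dither(gradient)
```

Unlike error diffusion, every pixel here is independent of its neighbours, which is why ordered dithering parallelises trivially (relevant for driving an epaper display from constrained hardware).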


After implementing a number of dithering approaches, including blue noise and the three-line approach used in modern games, I’ve found that quasi-random sequences give the best results. Have you tried them out?

https://extremelearning.com.au/unreasonable-effectiveness-of...
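For reference, a small sketch of how an R2-based dither mask can be derived from the linked writeup; the plastic-number constant and the pixel mapping follow that post, while the helper names and the flat-grey test image are mine:

```python
# The plastic number: the unique real root of x^3 = x + 1,
# found by iterating its fixed-point form x = (1 + x)^(1/3)
g = 1.0
for _ in range(40):
    g = (1.0 + g) ** (1.0 / 3.0)

A1, A2 = 1.0 / g, 1.0 / (g * g)

def r2_threshold(x, y):
    """R2 dither threshold in [0, 1) for pixel (x, y)."""
    return (A1 * x + A2 * y) % 1.0

def dither(img):
    """Binarise a greyscale image (nested lists, values in [0, 1])."""
    return [[1 if img[y][x] > r2_threshold(x, y) else 0
             for x in range(len(img[0]))]
            for y in range(len(img))]

flat_grey = [[0.5] * 32 for _ in range(32)]
out = dither(flat_grey)
```

Because the mask is a pure function of (x, y), it needs no stored texture at all, which is one practical difference from a precomputed blue noise tile.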


What is the advantage over blue noise? I've had very good results with a 64x64 blue noise texture and it's pretty fast on a modern GPU. Are quasirandom sequences faster or better quality?

(There's no TAA in my use case, so there's no advantage for interleaved gradient noise there.)

EDIT: Actually, I remember trying R2 sequences for dither. I didn't think it looked much better than interleaved gradient noise, but my bigger problem was figuring out how to add a temporal component. I tried generalizing it to 3 dimensions, but the result wasn't great. I also tried shifting it around, but I thought animated interleaved gradient noise still looked better. This was my shadertoy: https://www.shadertoy.com/view/33cXzM


Ooh, I haven't actually! I'll need to implement and test this for sure. Looking at the results though it does remind me of a dither (https://pippin.gimp.org/a_dither/), which I guess makes sense since they are created in a broadly similar way.


Just had a look at this, and here is the result for the test image: https://uploads.peterme.net/test-image_qr.png

Looks pretty good! It looks a bit like a dither, but with fewer artifacts. Definitely a "sharper" look than blue noise, but in places like the transitions between the text boxes you can definitely see a few more artifacts (it almost looks like the boxes have staggered edges).

Thanks for bringing this to my attention!


Nice writeup. I've been looking at this for a print-on-demand project and found that physical ink bleed changes the constraints quite a bit compared to e-paper. In my experience error diffusion often gets muddy due to dot gain, whereas ordered dithering seems to handle the physical expansion of the ink better.


> In my experience error diffusion often gets muddy due to dot gain

Absolutely - there's a reason why traditional litho printing uses a clustered dot screen (dots at a constant pitch with varying size).

I've spent some time tinkering with FPGAs and have been intrigued by the parallels between two-dimensional halftoning of graphics and the various approaches to producing audio from a 1-bit IO pin. Pulse width modulation (largely analogous to the traditional printer's dot screen) seems to cope better with imperfections in filters and asymmetries in output drivers than pulse density modulation (analogous to error diffusion dithers).
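To make the analogy concrete, here's a toy sketch of the two 1-bit encodings being compared; the function names and parameters are mine, not from any real DAC driver, and the PDM side is a plain first-order sigma-delta, which is exactly the running-error idea behind error diffusion dithering:

```python
import math

def pwm_encode(samples, period=64):
    """Pulse-width modulation: each sample becomes one fixed-length
    frame whose high time is proportional to the amplitude (0..1)."""
    bits = []
    for s in samples:
        high = round(s * period)
        bits += [1] * high + [0] * (period - high)
    return bits

def pdm_encode(samples):
    """Pulse-density modulation via first-order sigma-delta: the
    accumulated quantisation error decides each output bit, just
    like error diffusion carries error to the next pixel."""
    bits, err = [], 0.0
    for s in samples:
        if s + err >= 0.5:
            bits.append(1)
            err += s - 1.0
        else:
            bits.append(0)
            err += s
    return bits

# A slow sine, offset into the 0..1 range a 1-bit output expects
tone = [0.5 + 0.5 * math.sin(2 * math.pi * n / 256) for n in range(256)]
```

In PWM the transitions sit at a fixed pitch (like the clustered dot screen), so a sluggish or asymmetric output driver distorts every frame the same way; in PDM the transition density carries the signal, so the same imperfection shows up as signal-dependent error.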


Traditional litho actually uses either lines in curved crosshatch patterns or irregular stippling. Might be doable using an altered error-diffusion approach that rewards tracing a clearly defined line as opposed to placing individual dots or blots.


Thanks! I would imagine printing on paper is a completely different ball game. I actually considered scanning the actual epaper display to show each of the dithering techniques in its intended environment, as it does change the look quite a bit. From the little I know about typography and things like ink traps I can definitely see how certain algorithms could change quite significantly. The original post here has a pattern which looks similar to old newspapers; maybe that's worth looking into?


I had a project with those 7-colour e-paper displays and used dithering, and it looked amazing. Crazy how much you could fake with just 7 colours and dithering.


Definitely. I've been trying out a lot of dithering algorithms, and while they differ a lot with only black and white, as soon as you start adding more shades of grey they all look pretty much exactly like the input image. I'd imagine good dithering with colours would look amazing.


Same! I've already signed up for the Time 2 and am super stoked for it. Then I saw the announcement for the Round 2 and was about to switch over, until I noticed it didn't have a heart rate sensor. I know it's sleek and elegant, but that slight bulk would be worth it in my opinion. And who knows, with the extra thickness they might've been able to squeeze in more battery to reach the 30 days of battery life cited for the Time 2.

Also unfortunate that it's missing the RGB backlight of the Time 2. I can think of a few good use cases for it, but if it's only on the Time 2 that means fewer apps would use it.


I've been wanting a browser plugin like this for ages. Basically, tell it which sites to limit, and once a page has loaded it won't re-load for a certain amount of time, or until the next day (not necessarily 24 hours). That way there's no reason to keep checking the news; it won't have changed.


Indeed, I have a wireless doorbell where the outer button is a piezo button and the inside bell part plugs into a socket. But the button is quite thick, presumably because it needs a bit of travel to harvest enough energy. Granted, that's for a device that sends 433 MHz radio waves strong enough to penetrate multiple walls. For something like this, where the distance is only about 25 cm, you might be able to make a button small enough.


Nothing, and in fact this works. To move to an example which actually compiles:

    import math
    
    echo fcNormal
    echo FloatClass.fcNormal
    echo math.fcNormal
    echo math.FloatClass.fcNormal
All of these ways of identifying the `fcNormal` enum value work, with varying levels of specificity.

If instead you do `from math import nil` only the latter two work.


There is a direct connection, you just don't have to bother with typing it. Same as with type inference: the types are still there, you just don't have to specify them. If two imported names collide, the compiler requires you to specify which one you meant. And with language inspection tools (like LSP or other editor integrations) you can easily figure out where something comes from if you need to. Most of the time, though, I find it fairly obvious when programming in Nim where something comes from; in your example it's trivial to see that the error code comes from the errorcodes module.

Oh, and as someone else pointed out you can also just `from std/errorcodes import nil` and then you _have_ to specify where things come from.


When I was learning Nim and learned how imports work, that things stringify with a $ function that comes along with their types (since everything is splat imported), and that $ is massively overloaded, I went "oh, that all makes sense and works together". The LSP can help figure it out. It still feels like it's in bad taste.

It's similar to how Ruby (which also has "unstructured" imports) and Python are similar in a lot of ways yet make many opposite choices. I think a lot of Ruby's choices are "wrong" even though they fit together within the language.


Do note that unlike Python’s “from a import *; from b import *”, where you have no idea where a name came from later in the code (and e.g. changes to a and b, such as new versions, will change where a name comes from), Nim requires a name to be unambiguous, so if “b” added a function that previously only “a” had, you’ll get a compile-time error.
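A toy illustration of the Python failure mode (the module names a and b are made up, and the dict stands in for the importing module's namespace):

```python
import types

# Two toy modules that both export `greet`
a = types.ModuleType("a")
a.greet = lambda: "from a"
b = types.ModuleType("b")
b.greet = lambda: "from b"  # imagine a new version of b adding this

# `from a import *` followed by `from b import *` boils down to:
ns = {}
ns.update({k: v for k, v in vars(a).items() if not k.startswith("_")})
ns.update({k: v for k, v in vars(b).items() if not k.startswith("_")})

# b silently wins: existing call sites change meaning with no error,
# whereas Nim would reject the now-ambiguous name at compile time.
print(ns["greet"]())  # prints "from b"
```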


> The community "leaders" / moderation team is also full of abrasive individuals with fragile egos.

I certainly hope this isn't the case any longer. As one of the moderators I feel the current group is very patient and welcoming. At least that's what we're trying for; no one is perfect, so I'm certain you can find counterexamples. But as a whole I think we're doing pretty well. If you have any specific complaints we would love to hear them. They can be left anonymously in our community feedback form, or you can find me anywhere in the community for a chat.


It makes sense you feel that way, as you're one of the moderators. I feel quite differently. Thanks for the offer, but there's a reason why Nim hemorrhages users as fast as it gains them, and a big reason for that, IMO, is the toxic community which definitely includes the moderation team.


It's cheap to hide behind a pseudonym here and complain as you do. Given the time scales you mention and the way you are complaining, I have a theory about which nick you used in the Nim community, though.

If I'm correct, I find your complaints, especially those about our moderators, particularly unfair. Arguably the only drama with moderators was in the context of Dom, and you know that.

I really don't see where any of the current moderators can be described as "toxic", but you'll just say "you're one of them" anyway, so why do I even bother...


FWIW, I agree that Araq is an abrasive character and probably not a great community leader for an open source project.

But I disagree with your take on the moderation team. I don't know if you have specific names to call out, but PMunch, miran and the rest of the team have been nothing but welcoming, in my experience.


Of course, I'm heavily biased, but also very interested in mediating any such issues. I obviously can't, or wouldn't want to, force you to report anything. But it would be very appreciated if you, or anyone else reading this with similar experiences, could report it here: https://docs.google.com/forms/d/1ZWa2GONAM825IxFt8ZOdfn_XeJy...


Isn't this better, though? The alternative being that every email or phone call is billed as an hour, or that they batch it up based on gut feel. A phone call could easily eat 6 minutes of focus time (note that I say focus time: even if the call is only 30 seconds, you have to mentally switch tasks back and forth).


I’m not sure that 6 minutes is a useful denomination of focus time. Maybe legal work is too different from IT and it makes sense there, but when I need to focus on something, 15 minutes is the smallest amount I would allocate for a task.


Speech for the disabled would indeed be great, as long as the disability doesn't also affect the system which you use to "silent speak".

As for the privacy thing, I absolutely hate talking out loud to my devices. Even the idea of speaking my ideas into a recorder in my own office, where nobody can hear me, feels very strange to me. But I love thinking through ideas and writing scripts for speeches or presentations in my mind, or planning out some code or an overall project. A device like this would allow me to do the internal monologue thing, then "silent speak" the results into it to take notes, which sounds great. And the form factor doesn't look that dissimilar to a set of bone-conduction headphones, which would be perfect for privacy-aware feedback while still letting you take in your surroundings.

With this tech demo, though, the transmission rate seems veeery slow: he sits still in his chair staring into the room, and a short sentence is all that appears. Not exactly the speed of thought...

And of course there is the cable running off to who knows what kind of computational resources.

The AI parts of this are less exciting to me, but as an input device I'm really on-board with the idea.


As other people have pointed out, "audio range" is generally 20 Hz-20 kHz. Your phone (and other audio equipment) is therefore built to transmit those frequencies. A speaker creates sound by passing electricity through a wire, creating a magnetic field that pushes against a permanent magnet. Either the magnet or the wire is attached to a membrane that then gets pushed out. Do this between 20 and 20,000 times a second and you make sound. However, when charged particles (like the electrons in a wire) accelerate, they create radio waves, so the coil in the speaker will also emit a small amount of radio waves at the same frequency as the sound it is producing. This is called parasitic EMF, and in this case it turns out that this small amount of radio signal is enough to interact with the radio in the wheels.

