Hacker News

From what I gather this is done to preserve the sort order. All calls within the same millisecond get the same timestamp component, so that component can't be used to order the ULIDs. Instead, the "random" part is incremented, and the resulting ULIDs can still be sorted by the order of the function calls. That wouldn't be possible if the random part were truly random. I'm not sure this is a good idea, but that's what I understood from the spec.
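A minimal sketch of that behavior (my reading of the spec, not the reference implementation; the function name and module-level state are made up, and the base32 string encoding is omitted):

```python
import os
import time

# Sketch of the monotonic rule as described above: a ULID is a 48-bit
# millisecond timestamp followed by an 80-bit random part. If two IDs
# fall in the same millisecond, reuse the previous random value + 1
# instead of drawing fresh randomness, so IDs still sort by call order.

_last_ms = -1
_last_rand = 0

def monotonic_ulid_128() -> int:
    """Return a 128-bit integer: (timestamp_ms << 80) | random_part."""
    global _last_ms, _last_rand
    ms = int(time.time() * 1000) & ((1 << 48) - 1)
    if ms == _last_ms:
        _last_rand += 1  # same millisecond: increment, don't re-randomize
    else:
        _last_ms = ms
        _last_rand = int.from_bytes(os.urandom(10), "big")  # 80 fresh bits
    return (ms << 80) | (_last_rand & ((1 << 80) - 1))
```

Sorting the resulting integers reproduces the order the calls were made in, even when many calls share one millisecond.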



It’s not clear why they don’t just use a finer-grained timestamp component (e.g. nanoseconds), increment that number when two events fall within the same clock interval, and keep the random component random.


The reference implementation is written in JavaScript, and I don't think that provides a reliable way to get timestamps this fine-grained.


The underlying clock doesn’t need to be fine-grained; only the target representation needs enough room for the additional increments inserted by the code that generates the IDs. Effectively, the timestamp part needs enough precision to support the number of IDs per second you want to be able to generate sustainably.
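The scheme being proposed could look like this (a hypothetical alternative, not what the ULID spec actually does): a nanosecond-width timestamp field driven by a millisecond clock, bumped by 1 for every call that lands in the same clock tick.

```python
import time

# Hypothetical alternative to the spec's behavior: keep the random part
# fully random and instead widen the timestamp field. The field counts
# nanoseconds, but the underlying clock only ticks in milliseconds;
# calls that land in the same tick get timestamp + 1, + 2, ... so call
# order is still preserved in the timestamp part alone.

_last_ts_ns = -1

def next_timestamp_ns() -> int:
    global _last_ts_ns
    now_ns = int(time.time() * 1000) * 1_000_000  # ms clock widened to ns
    if now_ns <= _last_ts_ns:
        _last_ts_ns += 1  # same (or stale) tick: bump the fine-grained bits
    else:
        _last_ts_ns = now_ns
    return _last_ts_ns
```

With a millisecond clock this leaves 1,000,000 increment slots per tick, i.e. a sustained rate of up to a billion IDs per second before the fine bits would run ahead of the clock.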


Also, the spec notes there is an entropy trade-off in using more bits for the timestamp. More timestamp bits means fewer random bits, because ULIDs constrain themselves to a combined total of 128 bits (to match GUID/UUID width).



