I am aware; I'm just saying that per the spec it's supposed to be random bit data. I'm familiar with the spec since I maintain a UUID library that has v6, v7, and a custom v8 implemented.
It can carry extra monotonicity data instead, per Section 6.2, but ideally it's random. Again, I'm not saying you can't do what you're doing; I just know from the conversations while the draft was gathering feedback that your kind of change was intended to be done as UUIDv8.
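For reference, this is roughly what a plain, spec-following v7 generator looks like (a minimal sketch of my own, not taken from any particular library), with everything after the timestamp, version, and variant bits filled from a CSPRNG:

```python
import secrets
import time
import uuid

def uuidv7() -> uuid.UUID:
    """Minimal UUIDv7: 48-bit Unix ms timestamp + version + variant + 74 random bits."""
    unix_ts_ms = time.time_ns() // 1_000_000      # millisecond Unix timestamp
    rand_a = secrets.randbits(12)                 # 12 random bits after the version nibble
    rand_b = secrets.randbits(62)                 # 62 random bits after the variant bits

    value = (unix_ts_ms & ((1 << 48) - 1)) << 80  # unix_ts_ms (48 bits)
    value |= 0x7 << 76                            # version = 7
    value |= rand_a << 64                         # rand_a
    value |= 0b10 << 62                           # variant = RFC 4122/9562
    value |= rand_b                               # rand_b
    return uuid.UUID(int=value)
```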
Well, which is it? These are incompatible requirements.
If I give you a standard UUIDv7 sample, it is impossible for you to interpret the last 62 bits. You cannot determine how they were generated. If I give you two samples with the same timestamp, you cannot say which was generated first. These bits are de facto uninterpretable, unlike e.g. the 48 MSB, which have clearly defined semantics.
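To make that concrete, here is a toy illustration (hand-rolled, not from any library): two v7 values built with the same millisecond timestamp and spec-following random tails sort in an order that is a coin flip relative to generation order.

```python
import secrets
import uuid

# Two UUIDv7s sharing the same 48-bit millisecond timestamp, with random tails.
# Nothing in either value records which one was generated first.
ts = 0x018F2D6E4A7B  # arbitrary example unix_ts_ms, shared by both

def with_random_tail(unix_ts_ms: int) -> uuid.UUID:
    value = unix_ts_ms << 80
    value |= 0x7 << 76                   # version 7
    value |= secrets.randbits(12) << 64  # rand_a
    value |= 0b10 << 62                  # variant
    value |= secrets.randbits(62)        # rand_b
    return uuid.UUID(int=value)

first, second = with_random_tail(ts), with_random_tail(ts)
# Sorting compares the random bits, so the order is unrelated to generation order.
print(sorted([first, second]) == [first, second])  # True or False, a coin flip
```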
For list item #3, it says "Random data for each new UUIDv7 generated for any remaining space." without the word "optional", and the bit layout diagram labels the field `rand_b`.
But when you read the description for `rand_b` it says: "The final 62 bits of pseudo-random data to provide uniqueness as per Section 6.8 and/or an optional counter to guarantee additional monotonicity as per Section 6.2."
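That "and/or" is exactly where the ambiguity lives. Both of the following are, as far as I can tell, conforming readings of that sentence (a rough sketch; the counter variant only loosely follows the Section 6.2 fixed-length counter idea), and a consumer cannot tell which one produced a given value:

```python
import secrets

# Reading 1: rand_b is 62 bits of pure pseudo-random data.
def rand_b_random() -> int:
    return secrets.randbits(62)

# Reading 2: rand_b holds a counter, seeded randomly once and incremented per
# UUID (a loose sketch of one Section 6.2 counter method; a real implementation
# would reseed whenever the timestamp changes).
_counter = secrets.randbits(42)  # random seed, with headroom left to increment

def rand_b_counter() -> int:
    global _counter
    _counter += 1
    return _counter & ((1 << 62) - 1)

# Both return a 62-bit integer that drops into the same rand_b slot;
# nothing in the resulting UUID says which function was used.
```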
If you can guarantee that your custom UUIDv7 is globally unique at 10,000 values per second or more, I don't see why you can't do what you're doing and treat your custom data as random outside of your implementation.
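For scale, here is my own back-of-the-envelope birthday estimate (not from the spec): at 10,000 IDs per second, roughly ten values share each millisecond timestamp, so the collision chance depends only on how many bits of the tail remain random (the 46-bit case anticipates the 16 repurposed bits discussed below).

```python
from math import comb

ids_per_ms = 10  # ~10,000 per second spread over 1 ms timestamp ticks

for random_bits in (62, 46):  # full rand_b vs. rand_b with 16 bits repurposed
    # Birthday bound: P(collision) <= C(n, 2) / 2**bits for n values in one tick
    p = comb(ids_per_ms, 2) / 2**random_bits
    print(f"{random_bits} random bits: ~{p:.1e} collision chance per millisecond")
```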
I think part of this is my mistake, because I assumed you replaced most of the random data with information, but reading it again, I see that you replaced just the last 16 bits. Also, since most people used random data for UUIDv1's 48-bit `node` field, your variation is no worse than UUIDv1 (or v6) while also being compatible with v7.
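In other words, if I understand correctly, the shape of it is something like the sketch below (the `tag16` name is my own invention, not your actual implementation): generate a normal v7 and overwrite the low 16 bits of `rand_b` with the payload.

```python
import secrets
import time
import uuid

def uuidv7_with_tag(tag16: int) -> uuid.UUID:
    """Standard UUIDv7 layout, but the lowest 16 bits of rand_b carry `tag16`.

    `tag16` is a made-up example payload; a consumer that doesn't know about it
    just sees 16 bits it can't distinguish from random data.
    """
    assert 0 <= tag16 < (1 << 16)
    unix_ts_ms = (time.time_ns() // 1_000_000) & ((1 << 48) - 1)
    value = unix_ts_ms << 80
    value |= 0x7 << 76                   # version 7
    value |= secrets.randbits(12) << 64  # rand_a stays random
    value |= 0b10 << 62                  # variant
    value |= secrets.randbits(46) << 16  # upper 46 bits of rand_b stay random
    value |= tag16                       # low 16 bits carry the payload
    return uuid.UUID(int=value)

u = uuidv7_with_tag(0x002A)
print(u.version)       # 7 -- still parses as an ordinary UUIDv7
print(u.int & 0xFFFF)  # 42 -- the embedded payload, if you know to look for it
```

And with 46 random bits still left per value, the birthday math above barely moves at those rates.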
I think I just got too caught up in the bit layout calling it `random` and misread your information. Sorry for the misunderstanding, and thanks for discussing it.
Well, UUIDv7 can be consumed as a UUIDv4 in the same way; it's just 16 bytes. The point of the standard is to define how the particular bytes are chosen.
The latest standard for v7 does not meaningfully describe how to interpret the last segment.
It says the bits could be pseudorandom and non-monotonic, or monotonic and non-random. These are completely disjoint cases! "X or not X" is tautological. And there is no way to determine which (e.g., there could be a flag that indicates the mode, but there is not).
To be clear, the standard should be amended to resolve this ambiguity. Say the last bits MAY be monotonic or MAY be pseudorandom. Or add a flag that indicates which.
As there is currently no standard way to interpret these bits, I feel perfectly justified in using a few of the least significant ones to encode additional information.
I think the purpose of the standard is so that different software implementations work the same way: once you've picked a standard, you can use it everywhere and know that keys are assigned the same way regardless of which software stack is generating a particular key. It's not so that systems can "interpret" it. Obviously they are your bytes to use however you want if you are rolling your own generator.
The goals are smallness, uniqueness, monotonicity, resistance to enumeration attacks, etc., not randomness for randomness' sake.
My UUIDv7+ can be consumed as a standard UUIDv7. It is not intended to be v8. A program can treat the last 16 bits as random noise if it wants.
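Concretely, a consumer that only knows standard UUIDv7 can still do everything it normally would, e.g. pull out the timestamp and ignore the tail (a sketch, assuming the value arrives as an ordinary UUID string; the example value is made up):

```python
import datetime
import uuid

def v7_timestamp(u: uuid.UUID) -> datetime.datetime:
    """Extract the 48-bit Unix millisecond timestamp from any UUIDv7."""
    unix_ts_ms = u.int >> 80
    return datetime.datetime.fromtimestamp(unix_ts_ms / 1000, tz=datetime.timezone.utc)

# Works the same whether the low 16 bits are random or carry extra information.
u = uuid.UUID("018f2d6e-4a7b-7cde-8123-456789ab002a")  # made-up example value
print(v7_timestamp(u))
```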