
I use JSON Schema; it's honestly great the way it is [0]. So I guess the guarantee I want is "don't change anything"? Ha.

[0] my biggest gripe is it's not well defined what to do with multipleOf when the number isn't an exact integer.



Your [0] might be a change that can be made in a backwards-compatible manner, as long as current implementations don't diverge on what they do when the number is not an integer (whether they error out or do something else).


I think you would have to build in an "integerMultipleOf" filter and slowly deprecate "multipleOf".

To be more specific, since I wasn't in the GP: it's not clear what you should do when, for example, you want to check whether 0.3f is a "multiple of" 0.1f (due to the whole 0.30000000000000004 thing).
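To illustrate, here is a minimal JavaScript sketch; the `isMultipleOf` helper is hypothetical, just a naive way a validator might implement the check:

```javascript
// Naive "divide and check for an integer" multipleOf check
// (a hypothetical sketch, not any real validator's implementation).
function isMultipleOf(value, divisor) {
  return Number.isInteger(value / divisor);
}

// Mathematically 0.3 is a multiple of 0.1, but not in IEEE 754 doubles:
console.log(0.3 / 0.1);              // 2.9999999999999996
console.log(isMultipleOf(0.3, 0.1)); // false
```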


That sounds like a limitation of the implementation. Decimal types exist.

If the implementation wants to account for some floating-point error, I think that's fine too. If they want to codify it, though, maybe add an epsilon parameter so the user can specify how close they want it to be.
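A tolerance-based check might look something like this (a sketch only; the helper name and the default `epsilon` are made up, not part of any spec):

```javascript
// Hypothetical epsilon-tolerant multipleOf: accept the value if the
// quotient is within `epsilon` of the nearest integer.
function isMultipleOfApprox(value, divisor, epsilon = 1e-9) {
  const quotient = value / divisor;
  return Math.abs(quotient - Math.round(quotient)) <= epsilon;
}

console.log(isMultipleOfApprox(0.3, 0.1));  // true despite the rounding error
console.log(isMultipleOfApprox(0.35, 0.1)); // false, quotient is near 3.5
```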


Decimal types don't exist in JSON. If you need decimals, you should definitely encode them as strings. As JSON Schema exists to document JSON, it should be agnostic to that. You can, if you wish, provide format information in the "format" field, which is not prescriptive.
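For instance, a string-encoded decimal might be documented like this (the "decimal" format value is an ad-hoc convention here, not something the spec defines):

```json
{
  "type": "string",
  "format": "decimal",
  "pattern": "^-?[0-9]+(\\.[0-9]+)?$"
}
```

Validators that don't recognise the format annotation will still enforce the string shape via the pattern.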


Neither binary nor decimal floating-point types exist in the JSON spec. The JSON spec does not give an explicit internal representation for numbers; implementations are free to use decimal types internally if they wish to do so. The spec merely specifies what a "number" is at the grammar level.

Having said that, most implementations don't use decimal floating point to represent these.


That is correct, but JSON Schema does, and the multipleOf filter introduces an implicit behavioural difference between floats and integers that can't simply be elided away or excused by the lack of a distinction in the underlying type system.

The point is that JSON Schema could take a stand and say, for example, "this filter will always fail when the operand is not the number representation of an exact integer".


It is a scandal that JSON does not support the BigInt type.


It does. In fact, not only does JSON support big ints, but it supports arbitrary-precision base-10 decimal numbers.

Whether your JSON parser will preserve the precision correctly is another story. For JavaScript/ECMAScript, you'll need to use a library.
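The loss is easy to see in plain JavaScript once an integer exceeds 2^53, the largest range a `Number` (a 64-bit double) can cover exactly:

```javascript
// 9007199254740993 is 2^53 + 1; a double cannot represent it,
// so JSON.parse silently rounds it to the nearest representable value.
const parsed = JSON.parse("9007199254740993");
console.log(parsed);                  // 9007199254740992
console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991
```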


It does. Just string encode it.


   1 == "1"; // true
   1n == "1"; // true
   1n == "1n"; // false


You have to describe what your encoding is in the "format" field of your JSON Schema and implement the corresponding semantics yourself.
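A sketch of that round trip, assuming you control both ends (the replacer/reviver convention below is ad hoc, not a standard):

```javascript
// Serialize: JSON.stringify throws on BigInt, so convert it to a string first.
const payload = { id: 9007199254740993n };
const json = JSON.stringify(payload, (key, value) =>
  typeof value === "bigint" ? value.toString() : value
);
// json is '{"id":"9007199254740993"}'

// Deserialize: re-wrap the fields you know are string-encoded big ints.
const decoded = JSON.parse(json, (key, value) =>
  key === "id" ? BigInt(value) : value
);
// decoded.id is 9007199254740993n, with no precision lost
```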


This is wrong. All numbers in JSON are arbitrary-precision base-10 decimal numbers.


No, they aren't. The JSON spec leaves the representation up to the implementation, and de facto JSON numbers are parsed as IEEE 754 doubles on just about every platform I can think of.


I think what you're getting hung up on is that JSON does not use IEEE floating point math. If your implementation or environment stores parsed numbers as an IEEE floating point, that's a limitation the implementation has to disclose or work around in some fashion, but it's not a limitation of JSON or JSON Schema as such.


The behavior is perfectly well defined... there are no special cases for integers vs. rational numbers; they're all treated the same.

For example, given multipleOf: 0.3 and an input of 0.9, 0.9/0.3 is 3, which is an integer, so it would be accepted. A value of 0.8 would be rejected.


0.3 is not the same thing as 3/10 in IEEE 754, nor is 0.1 the same as 1/10.
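Printing more digits shows what those doubles actually hold, and why 0.9/0.3 happens to land exactly on 3 while 0.3/0.1 does not (an accident of where the rounding falls, not something to rely on):

```javascript
// The doubles nearest to 0.3 and 0.1 are not the exact decimal values:
console.log((0.3).toFixed(20)); // 0.29999999999999998890
console.log((0.1).toFixed(20)); // 0.10000000000000000555

console.log(0.9 / 0.3 === 3);   // true: the rounded quotient lands on 3
console.log(0.3 / 0.1 === 3);   // false: quotient is 2.9999999999999996
```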



