RFC 7493: The I-JSON Message Format
(datatracker.ietf.org)
from lysdexic@programming.dev to programming@programming.dev on 15 Sep 2024 10:21
https://programming.dev/post/19421705
#programming
Ok.
Tbh this is a programming community. While yes, a quick summary would not have gone amiss, I don’t fault OP for not including it. RFCs are often pretty dry but this one is reasonably straightforward as a subset of JSON to reduce some ambiguity.
Imo the only ones that should feel bad about it are those upvoting it.
Edit: Lemmy is mostly low-effort stuff rather than interesting discussions. So while this post provides nothing of value, that is something the voting system is supposed to handle.
Yeah, the quality on Lemmy is nowhere near what Reddit was back in its heyday 10+ years ago, mostly due to the quality of the users: users who think content like this is worthy of posting and upvoting.
Dude, go drink a coffee, and then reflect on what a negative little bitch you’re being.
The quality on Lemmy is somewhat worse than Reddit 10 years ago, entirely because the user base is a fraction of the size and is more equivalent to when Reddit was first growing 15-20 years ago. Even then it was only a success because they bootstrapped it using fake posts and comments.
Lemmy is doing great, what it needs to grow is a positive and welcoming community, and then for Reddit to do something stupid again to trigger an exodus.
Go ahead and contribute things that you find interesting instead of wasting your time whining about what others might like.
So far, all you’re contributing is whiny shitposting. You can find plenty of that on Reddit too.
You’re clearly one of the reasons the quality is so low. Wasting everyone’s time using lemmy as your personal link aggregator. It’s obnoxious af.
Please be mindful of fellow community members and our TOS when making posts/comments on our communities.
As stated in our TOS:
Please be respectful and mindful of others when making posts and comments towards others.
Consider this your first warning.
Thank you.
Just skimmed, but it seems like a decent idea. Not that I’ve knowingly run into JSON parsing issues much.
It’s from 2015, so it’s probably what you are doing anyway.
No, you are probably not using this at all. The problem with JSON is that these details are all handled in an implementation-defined way, and most implementations just fail/round silently.
Just give it a try and send down the wire a JSON with, say, a huge integer, and see if that triggers a parsing error. For starters, in .NET both Newtonsoft and System.Text.Json set a limit of 64 bits.
…microsoft.com/…/system.text.json.jsonserializero…
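To see the silent-rounding variant from the JavaScript side, try the experiment above in any standard ECMAScript runtime (a quick TypeScript sketch; the value is just 2^53 + 1, nothing .NET-specific):

```typescript
// An integer one past what a double's 53-bit mantissa can represent exactly:
const wire = '{"id": 9007199254740993}'; // 2**53 + 1

// No parse error is raised; the value is silently rounded to the nearest double.
const parsed = JSON.parse(wire);
console.log(parsed.id); // 9007199254740992

// Worse, the corruption is invisible to a naive equality check, because the
// literal below rounds to the same double before the comparison even runs:
console.log(parsed.id === 9007199254740993); // true
```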
Why restrict to 54-bit signed integers? Is there some common language I’m not thinking of that has this as its limit?
Edit: Found it myself, it’s the range where you can store an integer in a double precision float without error. I suppose that makes sense for maximum compatibility, but feels gross if we’re already identifying value types. I don’t come from a web-dev/js background, though, so maybe it makes more sense there.
Because `number` is a double, and IEEE 754 specifies the mantissa of double-precision numbers as 53 bits + sign. Meaning, it’s the highest integer precision that a double-precision value can express.
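You can watch that boundary in any JavaScript/TypeScript runtime (a minimal sketch, assuming standard ECMAScript semantics):

```typescript
// 2**53 is the first integer a double can no longer tell apart from its neighbour:
console.log(2 ** 53 === 2 ** 53 + 1); // true

// Which is exactly why the language exposes this constant:
console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991, i.e. 2**53 - 1
```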
It’s not about compatibility. It’s because JSON only has a `number` type, which covers both floating point and integers, and `number` is implemented as a double-precision value. If you have to express integers with a double-precision type, you will start to lose precision beyond 53 bits, which goes completely against the notion of an integer.

I don’t think you realize just how much code is written in JavaScript these days.
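It’s also why 64-bit ids from other ecosystems usually get shipped as strings. A sketch of the serializing direction (the id value here is made up; the error message is V8’s wording):

```typescript
// A 64-bit id that no double can hold exactly:
const id = 9223372036854775807n; // BigInt, near the i64 maximum

// JSON.stringify refuses BigInt outright rather than silently rounding:
try {
  JSON.stringify({ id });
} catch (e) {
  console.log((e as TypeError).message); // "Do not know how to serialize a BigInt"
}

// The common workaround: send it as a string and parse on the other end.
console.log(JSON.stringify({ id: id.toString() })); // {"id":"9223372036854775807"}
```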
A summary:
An older RFC (2015; not sure why OP posted it now) that puts some extra constraints on JSON to make it more predictable. Most of it seems pretty reasonable.

It recommends:

- encoding messages as UTF-8;
- keeping numbers within what an IEEE 754 double can represent exactly (so integers within ±(2^53 - 1));
- making object member names unique, i.e. no duplicate keys;
- not relying on the order of object members;
- making the top-level value an object or array;
- following a “must-ignore” policy for unrecognized members;
- expressing dates/times as RFC 3339 strings and binary data as base64url.
Honestly, the only part of this I dislike is the order of keys not mattering. I get that a bunch of languages use dictionary objects that don’t preserve order, but backend languages have a lot more headroom to adapt and create objects that can, versus making a JavaScript thread loop over an object an extra time to reorder it every time it receives data.
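Worth noting, though: even JavaScript won’t always preserve the order you sent. Per ECMAScript enumeration rules, integer-like keys come out in ascending numeric order ahead of everything else (a small sketch, assuming a standard engine):

```typescript
// The wire order is b, 10, a, 2 -- but enumeration reorders the numeric keys:
const parsed = JSON.parse('{"b": 1, "10": 2, "a": 3, "2": 4}');
console.log(Object.keys(parsed));    // ["2", "10", "b", "a"]
console.log(JSON.stringify(parsed)); // {"2":4,"10":2,"b":1,"a":3}
```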
Personally, I prefer duplicate keys to be eaten by the parser but I can see how it’d be beneficial to prevent them.
I’m honestly unsure if they intend the ‘must-ignore’ policy to mean to eat duplicate keys without erroring, or just to eat keys that are unexpected based on some contract or schema…
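FWIW, plain ECMAScript JSON.parse already eats duplicates without erroring; the last occurrence silently wins (a quick sketch in a standard runtime):

```typescript
// No error, no warning -- the first value is simply discarded:
const parsed = JSON.parse('{"a": 1, "a": 2}');
console.log(parsed); // { a: 2 }
```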