The Absolute Minimum Every Software Developer Must Know About Unicode in 2023 (Still No Excuses!) (tonsky.me)
from snaggen@programming.dev to programming@programming.dev on 02 Oct 2023 21:47
https://programming.dev/post/3830313

#programming

threaded - newest

simple@lemm.ee on 02 Oct 2023 22:10 next collapse

Now this is UX. Wonderful stuff.

<img alt="Screenshot of the page showing me 20 mouse cursors moving across the page" src="https://i.vgy.me/LQLNTN.png">

[deleted] on 02 Oct 2023 22:25 next collapse
.
flamingos@feddit.uk on 02 Oct 2023 23:34 next collapse

Thank god for reader view because this makes me feel physically sick to look at.

neo@lemmy.comfysnug.space on 03 Oct 2023 07:24 next collapse

Same.

hazelnoot@beehaw.org on 04 Oct 2023 14:01 collapse

Right?? I normally love it when websites have a fun twist, but this one really needs an off button. The other cursors keep covering the text and it becomes genuinely uncomfortable to read. Fortunately, you can easily block the WS endpoint with any ad blocker.

xoggy@programming.dev on 03 Oct 2023 00:22 next collapse

And the site’s dark mode is fantastic…

snooggums@kbin.social on 03 Oct 2023 00:25 next collapse

Best dark mode ever!

kambusha@feddit.ch on 03 Oct 2023 06:10 collapse

Lol, who turned the lights out?

AeroLemming@lemm.ee on 03 Oct 2023 04:09 next collapse

I didn’t realize this was sarcastic and was getting ready to post about how broken it looks for me.

Virkkunen@kbin.social on 03 Oct 2023 06:08 next collapse

This one really got a laugh out of me

JackbyDev@programming.dev on 03 Oct 2023 18:28 collapse
interolivary@beehaw.org on 03 Oct 2023 03:50 next collapse

The horror

redcalcium@lemmy.institute on 03 Oct 2023 17:34 next collapse

I love it. People should be having more fun with their own personal sites.

Blackmist@feddit.uk on 04 Oct 2023 11:22 collapse

Is that other readers’ mouse pointers?

abhibeckert@lemmy.world on 03 Oct 2023 02:20 next collapse

I love the comparison of string length of the same UTF-8 string in four programming languages (only the last one is correct, by the way):

Python 3:

len("🤦🏼‍♂️")

5

JavaScript / Java / C#:

"🤦🏼‍♂️".length

7

Rust:

println!("{}", "🤦🏼‍♂️".len());

17

Swift:

print("🤦🏼‍♂️".count)

1

Walnut356@programming.dev on 03 Oct 2023 02:51 collapse

That depends on your definition of correct lmao. Rust’s len() explicitly counts the raw UTF-8 bytes contained in the string. There are many times where that value is more useful than the grapheme count.
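
All three counts can be reproduced from a single language; here’s a quick Python sketch (the escape sequence below spells out the same facepalm emoji as its five codepoints):

```python
# 🤦🏼‍♂️ is five codepoints: facepalm + skin tone + ZWJ + male sign + variation selector
s = "\U0001F926\U0001F3FC\u200D\u2642\uFE0F"

print(len(s))                           # 5  - Python counts codepoints
print(len(s.encode("utf-8")))           # 17 - the bytes Rust's len() counts
print(len(s.encode("utf-16-le")) // 2)  # 7  - the UTF-16 code units JS/Java/C# count
```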

Knusper@feddit.de on 03 Oct 2023 06:03 next collapse

Yeah, and as much as I understand the article saying there should be an easily accessible method for grapheme count, it’s also kind of mad to put something like this into a stdlib.

Its behaviour will break with each new Unicode standard. And you’d have to upgrade the whole stdlib to keep up-to-date with the newest Unicode standards.

ono@lemmy.ca on 03 Oct 2023 07:42 next collapse

It might make more sense to expose a standard library API for unicode data provided by (and updated with) the operating system. Something like the time zone database.

Treeniks@lemmy.ml on 03 Oct 2023 08:21 collapse

The way UTF-8 works is fixed though, isn’t it? A new Unicode standard should not change that, so as long as the string is UTF-8 encoded, you can determine the character count without needing to have the latest Unicode standard.

Plus in Rust, you can instead use .chars().count(), as Rust’s char type is a Unicode scalar value, so that gives you the codepoint count.

turns out one should read the article before commenting

Knusper@feddit.de on 03 Oct 2023 10:24 next collapse

No offense, but did you read the article?

You should at least read the section “Wouldn’t UTF-32 be easier for everything?” and the following two sections for the context here.

So, everything you’ve said is correct, but it’s irrelevant for the grapheme count.
And you should pretty much never need to know the number of codepoints.

Treeniks@lemmy.ml on 04 Oct 2023 04:11 collapse

yup, my bad. Frankly I thought grapheme meant something else, rather stupid of me. I think I understand the issue now and agree with you.

Knusper@feddit.de on 04 Oct 2023 04:17 collapse

No worries, I almost commented here without reading the article, too, and did not really know what graphemes are beforehand either. 🫠

unique_hemp@discuss.tchncs.de on 03 Oct 2023 11:31 collapse

Nope, the article says that what is and is not a grapheme cluster changes between unicode versions each year :)

Black616Angel@feddit.de on 03 Oct 2023 10:20 collapse

And Rust also has the "🤦".chars().count() which returns 1.

I would rather argue that rust should not have a simple len function for strings, but since str is only a byte slice it works that way.

Also also the len function clearly states:

This length is in bytes, not chars or graphemes. In other words, it might not be what a human considers the length of the string.

Knusper@feddit.de on 03 Oct 2023 10:29 next collapse

That Rust function returns the number of codepoints, not the number of graphemes, which is rarely useful. You need to use a facepalm emoji with skin color modifiers to see the difference.

The way to get a proper grapheme count in Rust is e.g. via this library: crates.io/crates/unicode-segmentation

Djehngo@lemmy.world on 03 Oct 2023 11:05 collapse

Makes sense, the code-points split is stable; meaning it’s fine to put in the standard library, the grapheme split changes every year so the volatility is probably better off in a crate.

Knusper@feddit.de on 03 Oct 2023 11:34 collapse

Yeah, although having now seen two commenters with relatively high confidence claiming that counting codepoints ought to be enough…

…and me almost having been the third such commenter, had I not decided to read the article first…

…I’m starting to feel more and more like the stdlib should force you through all kinds of hoops to get anything resembling a size of a string, so that you gladly search for a library.

Like, I’ve worked with decoding strings quite a bit in the past, I felt like I had an above average understanding of Unicode as a result. And I was still only vaguely aware of graphemes.

Turun@feddit.de on 04 Oct 2023 07:14 collapse

For what it’s worth, the documentation is very clear on what these methods return. It explicitly redirects you to crates.io for splitting into grapheme clusters. It would be much better to have it in std, but I understand the argument that std should only contain stable stuff.

As a systems programming language the .len() method should return the byte count IMO.

Knusper@feddit.de on 04 Oct 2023 08:34 collapse

The problem is when you think you know stuff, but you don’t. I knew that counting bytes doesn’t work, but thought the number of codepoints was what I want. And then knowing that Rust uses UTF-8 internally, it’s logical that .chars().count() gives the number of codepoints. No need to read documentation, if you’re so smart. 🙃

It does give you the correct length in quite a lot of cases, too. Even the byte length looks correct for ASCII characters.

So, yeah, this would require a lot more consideration whether it’s worth it, but I’m mostly thinking there’d be no .len() on the String type itself, and instead to get the byte count, you’d have to do .as_bytes().len().

lemmyvore@feddit.nl on 03 Oct 2023 11:20 collapse

None of these languages should have generic len() or size() for strings, come to think of it. It should always be something explicit like bytes() or chars() or graphemes(). But they’re there for legacy reasons.

mindbleach@sh.itjust.works on 03 Oct 2023 03:06 next collapse

I’m still sour about text having color. Yeah I know little icons peppered forums. That’s why people liked reddit! It got rid of that shit! Now it’s part of the universal standard? Not just the ability to draw a turd on someone’s monitor, but to have it be colored-in brown? The hell with that. You wanna have animated GIFs next? Let someone put their username in marquee? Or like right-alignment, make rainbow signatures a free gimmick that text engines have to live with.

Meanwhile the alphabet of upside-down or small-caps letters are still incomplete.

eeleech@lemm.ee on 03 Oct 2023 07:56 next collapse

I agree that having some glyphs in color can be bad, for example when you are typesetting a formula in TeX that contains emoji, the color looks just unprofessional. As a solution, let me introduce you to the Noto Emoji font: fonts.google.com/noto/specimen/Noto+Emoji

gerryflap@feddit.nl on 03 Oct 2023 13:51 collapse

As a developer, I feel absolute pain for the people who had to convert these. There are quite a few edge cases and sensitive topics to dodge here, and doing something wrong might piss people off. They must’ve had some lengthy meetings about a few emoji.

JackbyDev@programming.dev on 03 Oct 2023 18:26 next collapse

𝖄𝖔𝖚 𝖘𝖔𝖗𝖙 𝖔𝖋 𝖊𝖓𝖉 𝖚𝖕 𝖍𝖆𝖛𝖎𝖓𝖌 𝖋𝖔𝖓𝖙𝖘 𝖊𝖒𝖇𝖊𝖉𝖉𝖊𝖉 𝖎𝖓 𝖙𝖍𝖊 𝖊𝖓𝖈𝖔𝖉𝖎𝖓𝖌. 𝕴 𝖚𝖓𝖉𝖊𝖗𝖘𝖙𝖆𝖓𝖉 𝖜𝖍𝖞 𝖎𝖙 𝖍𝖆𝖕𝖕𝖊𝖓𝖘, 𝕴’𝖒 𝖓𝖔𝖙 𝖘𝖆𝖞𝖎𝖓𝖌 𝖎𝖙 𝖘𝖍𝖔𝖚𝖑𝖉𝖓’𝖙, 𝖇𝖚𝖙 𝖎𝖙’𝖘 𝖘𝖙𝖎𝖑𝖑 𝖆 𝖜𝖊𝖎𝖗𝖉 𝖘𝖎𝖉𝖊 𝖊𝖋𝖋𝖊𝖈𝖙.

As normal letters in case your screen cannot render it.

You sort of end up having fonts embedded in the encoding. I understand why it happens, I’m not saying it shouldn’t, but it’s still a weird side effect.

mindbleach@sh.itjust.works on 03 Oct 2023 18:34 collapse

And it risks sites like this entering an arms race for attention-grabbing bullshit, where every post tries not to look like plain text. This didn’t really happen to reddit because the old guard (hello) were curmudgeons. Happened to Craigslist and eBay, though, where the attention-whore behavior is directly monetized.

Blackmist@feddit.uk on 04 Oct 2023 11:23 collapse

˙ƃuıʎouuɐ ʎʃʃɐǝɹ s,ʇı 'ʇɥƃıɹ ʍouʞ I

Knusper@feddit.de on 03 Oct 2023 06:16 next collapse

They believed 65,536 characters would be enough for all human languages.

Gotta love these kind of misjudgements. Obviously, they were pushing against pretty hard size restrictions back then, but at the same time, they did have the explicit goal of fitting in all languages and if you just look at the Asian languages, it should be pretty clear that it’s not a lot at all…

amio@kbin.social on 03 Oct 2023 08:28 next collapse

Holy Jesus, what a color scheme.

Nighed@sffa.community on 03 Oct 2023 09:52 collapse

I prefer it to black on white. Inferior to dark mode though.

zquestz@lemm.ee on 03 Oct 2023 10:05 next collapse

Was actually a great read. I didn’t realize there were so many ways to encode the same character. TIL.
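
The “many ways to encode the same character” point can be seen directly with Python’s stdlib — a quick sketch using unicodedata normalization:

```python
import unicodedata

a = "\u00e9"   # é as a single precomposed codepoint
b = "e\u0301"  # e followed by a combining acute accent

print(a, b)    # both render as é
print(a == b)  # False - different codepoint sequences
print(unicodedata.normalize("NFC", b) == a)  # True - NFC composes them identically
```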

atheken@programming.dev on 03 Oct 2023 17:03 next collapse

Unicode is thoroughly underrated.

UTF-8, doubly so. One of the amazing/clever things they did was to build off of ASCII as a subset by taking advantage of the extra bit to stay backwards compatible, which is a lesson we should all learn when evolving systems with users (your chances of success are much better if you extend than to rewrite).

On the other hand, having dealt with UTF-7 (a very “special” email encoding), it takes a certain kind of nerd to really appreciate the nuances of encodings.
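
That backwards-compatibility trick shows up directly in the bytes — a quick Python sketch: ASCII characters keep their old single-byte values with the high bit clear, while every byte of a multi-byte sequence has the high bit set, so the two can never be confused:

```python
print("abc".encode("utf-8"))  # b'abc' - byte-for-byte identical to ASCII
print("é".encode("utf-8"))    # b'\xc3\xa9' - every byte has the high bit set

assert all(b < 0x80 for b in "abc".encode("utf-8"))  # pure ASCII stays below 0x80
assert all(b >= 0x80 for b in "é".encode("utf-8"))   # non-ASCII bytes never look like ASCII
```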

Jummit@lemmy.one on 03 Oct 2023 17:31 next collapse

I’ve recently come to appreciate the “refactor the code while you write it” and “keep possible future changes in mind” ideas more and more. I think it really increases the probability that the system can live on instead of becoming obsolete.

Pantoffel@feddit.de on 04 Oct 2023 09:21 collapse

Yes, but once code becomes spaghetti enough that a “refactor while you write it” is too time-intensive and error-prone, it’s already too late.

JackbyDev@programming.dev on 03 Oct 2023 18:22 collapse

Unrelated, but what do you think (if anything) might end up being used by the last remaining reserved bit in IP packet header flags?

en.wikipedia.org/wiki/Evil_bit

en.wikipedia.org/…/Internet_Protocol_version_4#He…

Obscerno@lemm.ee on 03 Oct 2023 18:01 next collapse

Man, Unicode is one of those things that is both brilliant and absolutely absurd. There is so much complexity to language and making one system to rule them all ends up involving so many compromises. Unicode has metadata for each character and algorithms dealing with normalization and capitalization and sorting. With human language being as varied as it is, these algorithms can have really wacky results. Another good article on it is eev.ee/blog/2015/09/12/dark-corners-of-unicode/

And if you want to RENDER text, oh boy. Look at this: faultlore.com/blah/text-hates-you/

emptyother@programming.dev on 04 Oct 2023 09:59 collapse

Oh no, we’ve been hacked! There’s Chinese characters in the event log! Or was it just Unicode?

The entire video is worth watching: the history of “plain text” from the beginning of computing.

lyda@programming.dev on 03 Oct 2023 20:20 next collapse

The mouse pointer background is kind of a dick move. Good article, but the background is annoying for tired old eyes - which I assume are a target demographic for that article.

lyda@programming.dev on 03 Oct 2023 20:33 next collapse

js console: document.querySelector('.pointers').hidden=true

hazelnoot@beehaw.org on 04 Oct 2023 13:48 collapse

Thank you for this! You can also get rid of it with a custom ad-blocker rule. I added these to uBlock Origin, and it totally kills the pointer thing.

wss://tonsky.me
http://tonsky.me/pointers/
https://tonsky.me/pointers/
DeprecatedCompatV2@programming.dev on 04 Oct 2023 05:27 next collapse

Wow this is awful on mobile lol

heftig@beehaw.org on 04 Oct 2023 10:37 collapse

You’re actually seeing mouse pointers of other people having the page open. It connects to a websocket endpoint including the page URL and your platform (OS) and sends your current mouse position every second.

lyda@programming.dev on 04 Oct 2023 10:56 collapse

Just because you can do something…

[deleted] on 03 Oct 2023 20:46 next collapse
.
robinm@programming.dev on 03 Oct 2023 21:55 next collapse

I do understand why old Unicode versions re-used “i” and “I” for the Turkish lowercase dotted i and the Turkish uppercase dotless I, but I don’t understand why more recent versions haven’t introduced two new characters that look exactly the same but don’t require locale-dependent knowledge to do something as basic as “to lowercase”.
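
For illustration, here’s how the default (locale-independent) Unicode case mapping behaves in Python — which is exactly the mapping that’s wrong for Turkish:

```python
# Unicode's default case mapping, which most stdlibs apply regardless of locale
print("I".lower())  # "i" - wrong for Turkish, which expects dotless ı (U+0131)

dotted_capital = "\u0130"  # İ, the Turkish capital dotted I
print(dotted_capital.lower())       # lowercases to "i" + combining dot above (U+0307)
print(len(dotted_capital.lower()))  # 2 - the dot survives as a separate codepoint
```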

chinpokomon@lemmy.ml on 04 Oct 2023 09:15 collapse

Probably for the same reason Spanish used to consider ch, ll, and rr as a single character.

zlatko@programming.dev on 04 Oct 2023 07:01 next collapse

The article sure mentions 💩 a lot.

LaggyKar@programming.dev on 04 Oct 2023 07:26 next collapse

If you go to the page without the trailing slash, the images don’t load

lucas@startrek.website on 04 Oct 2023 08:50 next collapse

currency symbols other than the $ (kind of tells you who invented computers, doesn’t it?)

Who wants to tell the author that not everything was invented in the US? (And computers certainly weren’t)

SnowdenHeroOfOurTime@unilem.org on 04 Oct 2023 13:20 next collapse

Where were computers invented in your mind? You could define computer multiple ways but some of the early things we called computers were indeed invented in the US, at MIT in at least one case.

lucas@startrek.website on 04 Oct 2023 14:16 next collapse

Well, it’s not really clear-cut, which is part of my point, but probably the 2 most significant people I could think of would be Babbage and Turing, both of whom were English. Definitely could make arguments about what is or isn’t considered a ‘computer’, to the point where it’s fuzzy, but regardless of how you look at it, ‘computers were invented in America’ is rather a stretch.

SnowdenHeroOfOurTime@unilem.org on 04 Oct 2023 14:33 collapse

‘computers were invented in America’ is rather a stretch.

Which is why no one said that. I read most of the article and I’m still not sure what you were annoyed about. I didn’t see anything US-centric, or even anglocentric really.

lucas@startrek.website on 04 Oct 2023 15:17 collapse

To say I’m annoyed would be very much overstating it, just a (very minor) eye-roll at one small line in a generally very good article. Just the bit quoted:

currency symbols other than the $ (kind of tells you who invented computers, doesn’t it?)

So they could also be attributing it to some other country that uses $ for their currency, which is a few, but it seems most likely to be suggesting USD.

Deebster@lemmyrs.org on 05 Oct 2023 02:46 collapse

I think the author's intended implication is absolutely that it's a dollar because the USA invented the computer. The two problems I have are that:

  1. He's talking about the American Standard Code for Information Interchange, not computers at that point
  2. Brits or Germans invented the computer (although I can't deny that most of today's commercial computers trace back to the US)

It's just a lazy bit of thinking in an otherwise excellent and internationally-minded article and so it stuck out to me too.

Deebster@lemmyrs.org on 05 Oct 2023 02:37 collapse

The stupid thing is, all the author had to do was write "kind of tells you who invented ASCII" and he'd have been 100% right in his logic and history.

TehPers@beehaw.org on 04 Oct 2023 09:37 next collapse

The only modern language that gets it right is Swift:

print("🤦🏼‍♂️".count)
// => 1

Minor, but I’m not sure this is as unambiguous as the article claims. It’s true that for someone “that isn’t burdened with computer internals” that this is the most obvious “length” of the string, but programmers are by definition burdened with computer internals. That’s not to say the length shouldn’t be 1 though, it’s more that the “length” field/property has a terrible name, and asking for the length of a string is a very ambiguous question to begin with.

Instead, I think a better solution is to be clear what length you’re actually referring to. For example, with Rust, the .len() method documents itself as the number of bytes in the string and warns that it may not be what you’re interested in. Similarly, .chars() clarifies that it iterates over Unicode Scalar Values, and not grapheme clusters (and that grapheme clusters are unfortunately not handled by the standard library).

For most high level applications, I think you generally do want to work with grapheme clusters, and what Swift does makes sense (assuming you can also iterate over the individual bytes somehow for low level operations). As long as it is clearly documented what your “length” refers to, and assuming the other lengths can be calculated, I think any reasonably useful length is valid.

The article they link in that section does cover a lot of the nuances between them, and is a great read for more discussion around what the length should be.

Edit: I should also add that Korean, for example, adds some additional complexity to it. For example, what’s the string length of 각? Is it 1, because it visually consumes a single “space”? Or is it 3 because it’s 3 letters (ㄱ, ㅏ, ㄱ)? Swift says the length is 1.
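
Python’s stdlib can show the 1-vs-3 split for 각 via normalization — a quick sketch with unicodedata:

```python
import unicodedata

s = "\uac01"  # 각, a single precomposed Hangul syllable
jamo = unicodedata.normalize("NFD", s)  # decompose into its jamo

print(len(s))     # 1 codepoint
print(len(jamo))  # 3 codepoints: the jamo ㄱ, ㅏ, ㄱ
print(unicodedata.normalize("NFC", jamo) == s)  # True - they round-trip
```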

neutron@thelemmy.club on 04 Oct 2023 12:37 collapse

If we’re being really pedantic, the last part in Korean is counted with different units:

  • 각 as precomposed character: 1자 (unit ja for CJK characters)
  • 각 (ㄱㅏㄱ) as decomposable components: 3자모 (unit jamo for Hangul components)

So we could have separate implementations of length() where we count such cases with different criteria… But I wouldn’t expect non-speakers of Korean to know all of this.

Plus, what about Chinese characters? Are we supposed to count 人 as one but 仁 as one (character) or two (radicals)? It gets only more complicated.

Espi@lemmy.world on 04 Oct 2023 14:01 next collapse

I’m personally waiting for utf-64 and for unicode to go back to fixed encoding and forgetting about merging code points into complex characters. Just keep a zeptillion code points for absolutely everything.

phoenixz@lemmy.ca on 04 Oct 2023 17:01 next collapse

Just give me plain UTF-32 with ~4 billion code points, that really should be enough for any symbol we can come up with. Give everything its own code point, no bullshit with combined glyphs that make text processing a nightmare. I need to be able to do a strlen either on byte length or amount of characters without the CPU spending minutes counting each individual character.

I think Unicode started as a great idea and then kind of blundered into aimless “everybody kinda does what everyone wants” territory. Unicode is for humans, sure, but we shouldn’t forget that computers actually have to do the work

onlinepersona@programming.dev on 04 Oct 2023 17:14 collapse

Because strings are such a huge problem nowadays, every single software developer needs to know the internals of them. I can’t even stress it enough, strings are such a burden nowadays that if you don’t know how to encode and decode one, you’re beyond fucked. It’ll make programming so difficult - no even worse, nigh impossible! Only those who know about unicode will be able to write any meaningful code.