
178 points | dgl | 1 comment
strogonoff No.44364169
The issue with emoji, at least in their current depictions, is that they are guaranteed to sit higher in the visual hierarchy (one of the few things of undying relevance we were taught in university) than any surrounding text. They stand out thanks to their different nature and their high visual complexity (intricate, detailed features).

Good visual hierarchy means you end up looking first at what is important. Good visual hierarchy sets correct context.

Bad visual hierarchy adds mental overhead. Bad visual hierarchy means that every time you look, even when you don’t consciously realize it, you end up scanning past the hierarchy offenders, discarding them, getting to the important part, and then re-acknowledging the offenders when you reach them in their proper context. This can happen multiple times: first for the screen as a whole, then when you focus on a smaller part, and so on. As we encounter common visual hierarchy offenders more and more often, we train ourselves to discard them more quickly, but it is never completely free of cost.

There are strategic uses for symbols in line with visual hierarchy principles. For example, using emoji as an icon in an already busy GUI is something I do as well.

However, none of those apply in the terminal’s visual language of text and colours. Unlike a more or less static artifact fully under the designer’s control (like a magazine or a GUI), a terminal screen is fluid: things shift around and combine in different ways, so it is almost impossible for a software author to correctly predict what importance any given element has to me.

To those CLI tool authors who like to signify errors with bright emoji: have you considered that my screen can be big, and that after I have run your program N times troubleshooting something there can be N bright red exclamation marks on my screen, N−1 of which are nowhere near the message of interest? Have you considered that your output can coexist in a multiplexer with output from another program that I am more interested in? Should other programs compete for attention with even brighter emoji? And so on.

As to joyful touches, which are of course appreciated, those can be added with the old-style text-based emoticons.
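A minimal sketch (my own illustration, not from the thread) of what staying within the terminal’s visual language might look like: a hypothetical `format_error` helper that keeps the meaning in plain text, applies colour only when writing to a real terminal with `NO_COLOR` unset, and leaves the joyful touch to an old-style emoticon:

```python
import os
import sys

# ANSI SGR codes: the terminal's native vocabulary of text and colours.
RED = "\033[31m"
RESET = "\033[0m"

def use_color(stream=sys.stderr) -> bool:
    """Colour only when the stream is a real terminal and NO_COLOR is unset."""
    return stream.isatty() and "NO_COLOR" not in os.environ

def format_error(message: str, stream=sys.stderr) -> str:
    """An emoji-free error line: the plain 'error:' prefix carries the meaning;
    colour adds emphasis only where the terminal supports it."""
    prefix = "error:"
    if use_color(stream):
        prefix = f"{RED}{prefix}{RESET}"
    return f"{prefix} {message}"

print(format_error("config file not found :("), file=sys.stderr)
```

When the output is piped or `NO_COLOR` is set, the message degrades to plain text, so it never competes for attention in a multiplexer pane it does not own.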

replies(7): >>44364628 #>>44364715 #>>44365364 #>>44365386 #>>44366298 #>>44366575 #>>44366806 #
jerf No.44366298
This post nicely doubles as an explanation of why current-gen LLMs' tendency to spray emoji everywhere is annoying. Replacing bullet points with rocket ships and pointing-hand symbols, and scattering emoji everywhere rather than highlighting the things the emoji are meant to decorate, actually tends to bury the nominally highlighted content under the massively over-powerful colorful symbols. The initial visual impression of these texts is "a splattering of colorful symbols, with some incidental boring text around them," and it takes time and cognitive effort to filter out the blaring color, which carries no useful content, and get to the real content.
replies(2): >>44366449 #>>44367658 #
whateveracct No.44367658
That just happens because those LLMs were trained on Medium blog posts, haha.
replies(1): >>44369179 #
jerf No.44369179
I really should have included the word "default" in there somewhere. It's effectively impossible to make any blanket statement about "what LLMs do" because they're one prompt away from doing almost literally anything else.

However, it's a style that currently has a lot of popularity.

Indeed, asking for answers in the style of Alice in Wonderland is one of my favorite things to do with things like programming questions. The extra frisson of something so non-whimsical being expressed so whimsically via such a complicated technology goes all the way around the "cringe/cool" circle at least twice; you can decide for yourself where it lands in the end.

I did finally hear about students getting wise to the LLM-style issue. I just saw a YouTube video in which a student said he would 1. have the LLM write his essay, 2. rephrase the first two paragraphs in his own style, and 3. tell the LLM to rewrite the essay from step 1 in the style exemplified by his rewrite. AI detection tools, which are really "default AI detection tools," call it 0% AI. Stick a fork in them; they're done at that point. I don't think any "AI detection tool" is likely to defeat that unless LLMs suddenly freeze in advancement for, oh, at least 3 years or so, which seems unlikely.