
179 points joelkesler | 5 comments | | HN request time: 0.257s | source
1. xnx ◴[] No.46258826[source]
I felt rage-baited when he crossed out Jakob Nielsen and promoted Ed Zitron (https://youtu.be/1fZTOjd_bOQt=1852). Bad AI is not good UI, but objections that AI is "not ethically trained" and "burning the planet" aren't great reasons.
replies(3): >>46258908 #>>46264461 #>>46267188 #
2. GaryBluto ◴[] No.46258908[source]
https://www.youtube.com/watch?v=1fZTOjd_bOQ&t=1852s You're missing the ampersand.

It's really strange how he spins off on this mini-rant about AI ethics towards the end. I clicked on a video about UI design.

replies(1): >>46258975 #
3. xnx ◴[] No.46258975[source]
Same. AI is absolutely the future of human computer interaction (exactly the article from Jakob Nielsen that he crossed out). Even the father of WIMP, Douglas Engelbart, thought it was flawed: ""Here's the language they're proposing: You point to something and grunt". AI finally gives us the chance to instruct computers as humans.
4. array_key_first ◴[] No.46264461[source]
From an economic standpoint those are both very good reasons, because:

1. Burning the planet on your own servers is expensive; offloading it to a client-side LLM is not.

2. Ethical risk is business risk: you won't be SOC compliant, your legal department will be mad, your users will be mad, etc.

The current status quo of a few giant LLMs on supercomputers operated by OpenAI and Google is basically destined to fail, in my eyes. At least from a business standpoint. Consumer stuff might be different.

5. ◴[] No.46267188[source]