Why not publish some actual benchmarks that prove your claim in even a few special cases?
Even on sites that have a "Like / Don't like" button, my understanding is that clicking "Don't like" is still a form of "engagement" that the recommendation algorithms are going to reward.
Give me a button that says "this article was a scam", and have the publisher give the advertising money back. Or better yet, give the advertising money to charity / public services / whatever.
Take a cut of the money being transferred, and charge publishers for the ability to display a "clickbait free" green mark if they implement the scheme.
Track the kinds of articles that generate the most angry "this was clickbait" comments. Sell the data back.
There might be a business model there.
And, two, because the claimed energy cost savings aren't even the experimental question. The energy cost differences between various operations on modern hardware have been established in other research; the experimental question here was whether the mathematical technique that enables using the lower-energy operations performs competitively on output quality with existing implementations when substituted in for LLM inference.
What could work is social media giving people an easy button to block links to specific websites from appearing in their feed, or something along those lines. It’s a nice user feature, and having every clickbait article be a chance someone will choose to never see your website again could actually rein in some of the nonsense.
"The first release of bitnet.cpp is to support inference on CPUs. bitnet.cpp achieves speedups of 1.37x to 5.07x on ARM CPUs, with larger models experiencing greater performance gains. Additionally, it reduces energy consumption by 55.4% to 70.0%, further boosting overall efficiency. On x86 CPUs, speedups range from 2.37x to 6.17x with energy reductions between 71.9% to 82.2%. Furthermore, bitnet.cpp can run a 100B BitNet b1.58 model on a single CPU, achieving speeds comparable to human reading (5-7 tokens per second), significantly enhancing the potential for running LLMs on local devices. More details will be provided soon."
Nvidia will be very unhappy.
Agreed, of course.
In a reasonable world, that could be considered part of the basic, law-mandated requirements. Deciding what is or isn't clickbait would be blurry and subject to interpretation, just like libel or defamation - good thing we're only a few hundred years away from someone reinventing a device to handle that, called "independent judges".
In the meantime, I suppose you would have to attach some "unreasonable" incentive to it, like "brands like to have green logos on their sites to brag"?
> What could work is social media giving people an easy button to block links to specific websites from appearing in their feed, or something along those lines.
I completely agree. They've had the technology to implement such a feature since forever, and they've decided against it since forever.
However, I wonder if that's something a browser extension could handle? A merge of AdBlock and "saved you a click" that displays the "boring" content of the link when you hover over a clickbaity link?
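For the domain-blocking half of that idea, the core logic is simple enough to sketch. Below is a minimal, hypothetical helper a content script might use to decide whether a link points to a blocked site; the function name and blocklist format are assumptions, not any existing extension's API.

```javascript
// Hypothetical blocklist check for a browser extension's content script.
// Matches a link's hostname against user-blocked domains, including subdomains.
function shouldBlockLink(href, blocklist) {
  let host;
  try {
    host = new URL(href).hostname;
  } catch {
    return false; // relative or malformed URL; leave it alone
  }
  // Block exact matches and subdomains (e.g. "news.example.com" for "example.com").
  return blocklist.some(
    (domain) => host === domain || host.endsWith("." + domain)
  );
}

// In a content script, matching anchors could then be hidden, e.g.:
// document.querySelectorAll("a[href]").forEach((a) => {
//   if (shouldBlockLink(a.href, blocklist)) a.style.display = "none";
// });

const blocklist = ["clickbait.example", "outrage.example"];
console.log(shouldBlockLink("https://www.clickbait.example/10-tricks", blocklist)); // true
console.log(shouldBlockLink("https://reputable.example/article", blocklist)); // false
```

The "saved you a click" half would be harder, since it needs the summarized content from somewhere, but hiding or annotating the links themselves is well within what a content script can do today.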