352 points by ferriswil | 7 comments
djoldman ◴[] No.41889903[source]
https://arxiv.org/abs/2410.00907

ABSTRACT

Large neural networks spend most computation on floating point tensor multiplications. In this work, we find that a floating point multiplier can be approximated by one integer adder with high precision. We propose the linear-complexity multiplication (L-Mul) algorithm that approximates floating point number multiplication with integer addition operations. Compared to 8-bit floating point multiplication, the proposed method achieves higher precision while consuming significantly less bit-level computation. Since multiplying floating point numbers requires substantially more energy than integer addition, applying the L-Mul operation in tensor processing hardware can potentially cut the energy cost of elementwise floating point tensor multiplications by 95% and of dot products by 80%. We calculated the theoretical error expectation of L-Mul, and evaluated the algorithm on a wide range of textual, visual, and symbolic tasks, including natural language understanding, structural reasoning, mathematics, and commonsense question answering. Our numerical analysis experiments agree with the theoretical error estimation, which indicates that L-Mul with a 4-bit mantissa achieves precision comparable to float8 e4m3 multiplication, and L-Mul with a 3-bit mantissa outperforms float8 e5m2. Evaluation results on popular benchmarks show that directly applying L-Mul to the attention mechanism is almost lossless. We further show that replacing all floating point multiplications with 3-bit-mantissa L-Mul in a transformer model achieves precision equivalent to using float8 e4m3 as the accumulation precision in both fine-tuning and inference.
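
For intuition, here is a rough Rust sketch of the core trick (an illustration only, not the paper's exact L-Mul, which adds a small correction term and targets low-bit tensor formats): adding the raw bit patterns of two positive floats adds their exponents and their mantissa fractions, so a single integer addition approximates a multiply, and the dropped mantissa-times-mantissa cross term is the approximation error the paper analyzes.

    // Assumes positive, normal f32 inputs; sign handling, subnormals, and
    // exponent under/overflow are ignored in this sketch.
    const F32_BIAS_BITS: u32 = 127 << 23; // exponent bias, shifted into the exponent field

    fn approx_mul(x: f32, y: f32) -> f32 {
        // Adding bit patterns adds exponents and mantissa fractions; the
        // missing mantissa cross term is the approximation error.
        f32::from_bits(x.to_bits() + y.to_bits() - F32_BIAS_BITS)
    }

    fn main() {
        for (a, b) in [(1.5f32, 2.25), (3.1, 0.7), (12.0, 0.125)] {
            let (approx, exact) = (approx_mul(a, b), a * b);
            println!("{a} * {b}: approx {approx:.4}, exact {exact:.4}, rel err {:+.2}%",
                     (approx - exact) / exact * 100.0);
        }
    }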

replies(3): >>41890324 #>>41892025 #>>41901112 #
onlyrealcuzzo ◴[] No.41890324[source]
Does this mean you can train efficiently without GPUs?

Presumably there will be a lot of interest.

replies(2): >>41890353 #>>41901656 #
crazygringo ◴[] No.41890353[source]
No. But it does potentially mean that either current or future-tweaked GPUs could run a lot more efficiently -- meaning much faster or with much less energy consumption.

You still need the GPU parallelism though.

replies(2): >>41890621 #>>41893598 #
fuzzfactor ◴[] No.41890621[source]
I had a feeling it had to be something like massive waste due to a misguided feature of the algorithms that shouldn't have been there in the first place.

Once the "math is done" quite likely it would have paid off better than most investments for the top people to have spent a few short years working with grossly underpowered hardware until they could come up with amazing results there before scaling up. Rather than grossly overpowered hardware before there was even deep understanding of the underlying processes.

When you think about it, what we have seen from the latest ultra-high-powered "thinking" machines is truly impressive. But if you are trying to fool somebody into believing it's a real person, it's still not "quite" there.

Maybe a good benchmark would be to take a regular PC and, without any reliance on AI, pull out all the stops and put all the effort into the fakery itself. No holds barred, any trick you can think of. See what the electronics are capable of that way. There are some smart engineers; this would only take a few years, and it looks like it would have been a lot more affordable.

Then, if an AI alternative on the same hardware is not as convincing, something has got to be wrong.

It's good to find out this type of thing before you go overboard.

Regardless of speed or power, I never could have gotten an 8-bit computer to match the output of a 32-bit floating-point algorithm by using floating-point myself. Integers all the way and place the decimal where it's supposed to be when you're done.
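
Something like this minimal Q8.8 fixed-point sketch in Rust (purely illustrative): values are stored as round(x * 256), all the arithmetic is integer, and floats only appear at the conversion boundary.

    const FRAC_BITS: u32 = 8; // Q8.8: 8 fractional bits

    fn to_fixed(x: f64) -> i32 {
        (x * (1 << FRAC_BITS) as f64).round() as i32
    }

    fn fixed_mul(a: i32, b: i32) -> i32 {
        // Widen before multiplying, then drop the extra fractional bits.
        ((a as i64 * b as i64) >> FRAC_BITS) as i32
    }

    fn to_float(a: i32) -> f64 {
        a as f64 / (1 << FRAC_BITS) as f64
    }

    fn main() {
        let (a, b) = (to_fixed(3.14), to_fixed(2.5));
        println!("{}", to_float(fixed_mul(a, b))); // ~7.85, no floating-point multiply in the middle
    }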

Once it's really figured out, how do you think it would feel being the one paying the electric bills up until now?

replies(5): >>41890824 #>>41891053 #>>41892039 #>>41892366 #>>41895079 #
jimmaswell ◴[] No.41890824[source]
Faster progress was absolutely worth it. Spending years agonizing over theory to save a bit of electricity would have been a massive disservice to the world.
replies(2): >>41890834 #>>41891732 #
rossjudson ◴[] No.41891732[source]
You're sort of presuming that LLMs are going to be a massive service to the world there, aren't you? I think the jury is still out on that one.
replies(1): >>41891845 #
jimmaswell ◴[] No.41891845[source]
They already have been. Even just in programming, even just Copilot has been a life changing productivity booster.
replies(5): >>41891944 #>>41892396 #>>41892532 #>>41894732 #>>41900955 #
recursive ◴[] No.41892532[source]
I've been using copilot for several months. If I could figure out a way to measure its impact on my productivity, I'd probably see a single digit percentage boost in "productivity". This is not life-changing for me. And for some tasks, it's actually worse than nothing. As in, I spend time feeding it a task, and it just completely fails to do anything useful.
replies(3): >>41892918 #>>41895485 #>>41903092 #
jimmaswell ◴[] No.41892918{7}[source]
I've been using it for over a year, I think. I don't often feed it tasks via comments so much as go about things the same as usual and let it autocomplete. The time and cognitive load saved add up massively. I've had to go without it for a bit while my workplace sorts out its license for the corporate version (the personal version has an issue with the proxy), and it's been agonizing going without it again. I almost forgot how much it sucks having to jump to google every other minute, and it was easy to take for granted how much context Copilot was letting me not have to hold in my head. It really lets me work on the problem instead of being mired in immaterial details. It feels like I'm at least 2x slower overall without it.
replies(2): >>41893474 #>>41893539 #
atq2119 ◴[] No.41893539{8}[source]
> I almost forgot how much it sucks having to jump to google every other minute

Even allowing for some hyperbole, your programming experience is extremely different from mine. Looking anything up outside the IDE, let alone via Google, is by far the exception for me rather than the rule.

I've long suspected that this kind of difference explains a lot of the difference in how Copilot is perceived.

replies(1): >>41894099 #
namaria ◴[] No.41894099{9}[source]
Claiming LLMs are a massive boost for coding productivity is becoming a red flag that the claimant has a tenuous grasp on the skills necessary. Yeah if you have to look up everything all the time and you can't tell the AI slop isn't very good, you can put out code quite fast.
replies(5): >>41894349 #>>41895172 #>>41898052 #>>41900076 #>>41903110 #
soulofmischief ◴[] No.41894349{10}[source]
Comments like this are a great example of the Dunning-Kruger effect. Your comment is actually an indication that you don't have the mastery required to get useful, productive output from a high quality LLM.

Maybe you don't push your boundaries as an engineer and thus rarely need to know new things or at least learn new API surfaces. Maybe you don't know how to effectively prompt an LLM. Maybe you lack the mastery to analyze and refine the results. Maybe you just like doing things the slow way. I too remember a time as an early programmer when I eschewed even IntelliSense and basic autocomplete...

I'd recommend learning a bit more and practicing some humility and curiosity before condemning an entire class of engineers just because you don't understand their workflow. Just because you've had subpar experiences with a new tool doesn't mean it's not a useful tool in another engineer's toolkit.

replies(1): >>41894397 #
namaria ◴[] No.41894397{11}[source]
Funny you should make claims about my skills when you have exactly zero data about my abilities or performance.

Evaluating my skills based on how I evaluated someone else's skills when they tell me about their abilities with and without a crutch, and throwing around big academic-sounding expressions with 'effect' in them, might be intimidating to some, but to me it just sounds transparently pretentious and way off the mark, since, like I said, you have zero data about my abilities or output.

> I'd recommend learning a bit more and practicing some humility and curiosity before condemning an entire class of engineers

You're clearly coming from an emotional place because you feel slighted. There is no 'class of engineers' in my evaluation. I recommend reading comments more closely, thinking about their content, and not getting offended when someone points out signs of lacking skills, because you might just be advertising your own limitations.

replies(1): >>41894604 #
soulofmischief ◴[] No.41894604{12}[source]
> Funny you should make claims about my skills when you have exactly zero data about my abilities or performance.

Didn't you just do that to an entire class of engineers:

> Claiming LLMs are a massive boost for coding productivity is becoming a red flag that the claimant has a tenuous grasp on the skills necessary

Anyway,

> Evaluating my skills based on how I evaluated someone else's skills when they tell me about their abilities with and without a crutch

Your argument rests on the assumption that LLMs are a "crutch", and you're going to have to prove that before the rest of your argument holds any water.

It sucks getting generalized, doesn't it? Feels ostracizing? That's the exact experience someone who productively and effectively uses LLMs will have upon encountering your premature judgement.

> You're clearly coming from an emotional place because you feel slighted.

You start off your post upset that I'm "making claims" about your skills (I used the word "maybe" intentionally, multiple times), and then turn around and make a pretty intense claim about me. I'm not "clearly" coming from an emotional place, you did not "trigger" me, I took a moment to educate you about being overly judgemental before fully understanding something, and pointed out the inherent hypocrisy.

> you might just be advertising your own limitations

But apparently my approach was ineffective, and you are still perceiving a world where people who approach their work differently than you are inferior. Your toxic attitude is unproductive, and while you're busy imagining yourself as some masterful engineer, people are out there getting massive productivity boosts with careful application of cutting-edge generative technologies. LLMs have been nothing short of transcendental to a curious but skilled mind.

replies(1): >>41894901 #
Ygg2 ◴[] No.41894901{13}[source]
> Didn't you just do that to an entire class of engineers

Not really. He said "if you claim LLM's are next thing since sliced butter I am doubting your abilities". Which is fair. It's not really a class as much as a group.

I've never been wowed over by LLMs. At best they are boilerplate enhancers. At worst they write plausible-looking bullshit that compiles but breaks everything. Give it something truly novel and/or fringe and it will fold like a deck of cards.

Even the latest research has called LLMs' benefits into question: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4945566

That said, they are better at generating commit messages and docs than I am.

replies(1): >>41894947 #
soulofmischief ◴[] No.41894947{14}[source]
> Not really. He said "if you claim LLM's are next thing since sliced butter I am doubting your abilities". Which is fair.

No, OP said:

> Claiming LLMs are a massive boost for coding productivity is becoming a red flag that the claimant has a tenuous grasp on the skills necessary

Quotation marks are usually reserved for direct quotes, not paraphrases or straw men.

> I've never been wowed over by LLMs.

Cool. I have, and many others have. I'm unsure why your experience justifies invalidating the experiences of others or supporting prejudice against people who have made good use of them.

> Give it something truly novel and/or fringe and it will fold like a deck of cards.

Few thoughts are truly novel, most are derivative or synergistic. Cutting-edge LLMs, when paired with a capable human, are absolutely capable of productive work. I have long, highly technical and cross-cutting discussions with GPT-4o which I simply could not have with any human that I know. Humans like that exist, but I don't know them, and so I'm making do with a very good approximation.

Your and OP's lack of imagination at the capabilities of LLMs is more telling than you realize to those intimate with them, which makes this all quite ironic, given that it started with OP claiming that people who say LLMs massively boost productivity are giving tells that they're not skilled enough.

replies(1): >>41896844 #
Ygg2 ◴[] No.41896844[source]
> Quotation marks are usually reserved for direct quotes,

Not on HN. The custom is to use > paragraph quotes like you did. However, I will keep that in mind.

> Cool. I have, and many others have. I'm unsure why your experience justifies invalidating the experiences of others

If we're both grading a single student (LLM) in the same field (programming), and you find it great and I find it disappointing, it means one of us is scoring it wrong.

I gave papers that demonstrate its failings; where is your counter-proof?

> Your and OP's lack of imagination at the capabilities of LLMs

It's not a lack of imagination. It's the terribleness of the results. It can't consistently write good doc comments. It does not understand the code nor its purpose, but roughly guesses the shape. Which is fine for writing something that's not as formal as code.

It can't read and understand specifications, or even generate something as simple as a useful API for them. The novel part doesn't have to be that novel, just something outside its learned corpus.

Like a YAML parser in Rust. Or maybe Zig, or something beyond its gobbled-up training corpus.

> Few thoughts are truly novel, most are derivative or synergistic.

Sure, but you still need a mind to derive/synergize the noise of the everyday environment into something novel.

It can't even do that; it just remixes data into plausible-looking forms. A stochastic parrot. Great for a DnD campaign. Shit for code.

replies(1): >>41897229 #
soulofmischief ◴[] No.41897229[source]
> Not on HN. The custom is to use > paragraph quotes like you did. However, I will keep that in mind.

Hacker News is not some strange place where the normal rules of discourse don't apply. I assume you are familiar with the function of quotation marks.

> If we're both grading a single student (LLM) in the same field (programming), and you find it great and I find it disappointing, it means one of us is scoring it wrong.

No, it means we have different criteria and general capability for evaluating the LLM. There are plenty of standard criteria which LLMs are pitted against, and we have seen continued improvement since their inception.

> It can't consistently write good doc comments. It does not understand the code nor its purpose, but roughly guesses the shape.

Writing good documentation is certainly a challenging task. Experience has led me to understand where current LLMs typically do and don't succeed with writing tests and documentation. Generally, the more organized and straightforward the code, the better. The smaller each module is, the higher the likelihood of a good first pass. And then you can fix deficiencies in a second, manual pass. If done right, it's generally faster than not making use of LLMs for typical workflows. Accuracy also goes down for more niche subject material. All tools have limitations, and understanding them is crucial to using them effectively.

> It can't read and understand specifications, or even generate something as simple as a useful API for them.

Actually, I do this all the time and it works great. Keep practicing!

In general, the stochastic parrot argument is oft-repeated but fails to recognize the general capabilities of machine learning. We're not talking about basic Markov chains, here. There are literally academic benchmarks against which transformers have blown away all initial expectations, and they continue to incrementally improve. Getting caught up criticizing the crudeness of a new, revolutionary tool is definitely my idea of unimaginative.

replies(1): >>41898249 #
Ygg2 ◴[] No.41898249{3}[source]
> Hacker News is not some strange place where the normal rules of discourse don't apply. I assume you are familiar with the function of quotation marks.

Language is all about context. I wasn't trying to be deceitful. And on HN I've never seen anyone using quotation marks to quote people.

> Writing good documentation is certainly a challenging task.

Doctests aren't the same as writing documentation. Doctests are the simplest form of documentation: given a function named so-and-so, write the API doc plus an example. It could not even write an example that passed a syntax check.

> Actually, I do this all the time and it works great. Keep practicing!

Then you haven't given it interesting/complex enough problems.

Also this isn't about practice. It's about its capabilities.

> In general, the stochastic parrot argument is oft-repeated but fails to recognize the general capabilities of machine learning.

I asked it to write a YAML parser given the yaml.org spec, and it wrote the following type:

   enum Yaml {
      Scalar(String),
      List(Vec<Box<Yaml>>),
      Map(HashMap<String, Box<Yaml>>),
   }
This is the stochastic parrot in action. Why? Because it tried to pass off a JSON-like structure as YAML.
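
A YAML-faithful node type would have to carry quite a bit more than that JSON shape. Roughly something like this sketch (illustrative only, my own, not what it generated): non-string scalars, map keys that are themselves nodes, and an alias variant for &anchor/*alias references; tags and merge keys would add still more.

    use std::collections::BTreeMap;

    #[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]
    enum Node {
        Null,
        Bool(bool),
        Int(i64),
        Str(String),
        Seq(Vec<Node>),
        // YAML map keys are full nodes, not just strings; floats are omitted
        // here only so the derived Ord keeps keys orderable.
        Map(BTreeMap<Node, Node>),
        // A *alias back-reference to an &anchor-ed node; a real parser resolves these.
        Alias(String),
    }

    fn main() {
        // A map with a sequence key, which the JSON data model cannot represent.
        let mut m = BTreeMap::new();
        m.insert(Node::Seq(vec![Node::Int(1), Node::Int(2)]), Node::Str("pair".into()));
        println!("{:?}", Node::Map(m));
    }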

Whatever LLMs are, they aren't intelligent. Or they have the attention span of a fruit fly and can't figure out basic differences.

replies(2): >>41898825 #>>41899658 #
williamcotton ◴[] No.41898825{4}[source]
That’s not a good prompt, my friend!
soulofmischief ◴[] No.41899658{4}[source]
> Language is all about context. I wasn't trying to be deceitful. And on HN I've never seen anyone using quotation marks to quote people.

It's still unclear how this apparent lack of knowledge of basic writing mechanics would justify your use of quotation marks to attempt a straw man argument wherein you deliberately attempted to convince me that OP said something completely different.

> Doctests aren't the same as writing documentation. Doctests are the simplest form of documentation: given a function named so-and-so, write the API doc plus an example. It could not even write an example that passed a syntax check.

That truly sounds like a skill issue. This no-true-Scotsman angle is silly. I said documentation and tests, I don't know how you got "doctests" out of that. I said "documentation", and "tests". I didn't say "the simplest form of documentation", that is another straw man on your behalf.

> Then you haven't given it interesting/complex enough problems.

Wow, the arrogance. There is absolutely nothing to justify this assumption. It's exceedingly likely that you yourself aren't capable of interacting meaningfully with LLMs for one reason or another, not that I haven't considered interesting or complex problems. I bring some extraordinarily difficult cross-domain problems to these tools and end up satisfied with the results far more often than not.

My argument is literally that cutting-edge LLMs excel with complex problems, and they do in many cases in the right hands. It's unfortunate if you can't find these problems "interesting" enough, but that hasn't stopped me from getting good enough results to justify using an LLM during research and development.

> Also this isn't about practice. It's about its capabilities.

Unfortunately, this discourse has made it clear that you do need considerable practice, because you seem to get bad results, and you're more interested in defending those bad results even if it means insulting others, instead of just considering that you might not quite be skilled enough.

> This is the stochastic parrot in action. Why? Because it tried to pass off a JSON-like structure as YAML.

That proves its stochasticity, but it doesn't prove it is a "stochastic parrot". As long as you lack the capability to realistically assess these models, it's no wonder that you've had such bad experiences. You didn't even bother clarifying which LLM you used, nor did you mention any parameters of your experiment or even if you attempted multiple trials with different LLMs or prompts. You failed to follow the scientific method and so it's no surprise that you got subpar results.

> Whatever LLMs are, they aren't intelligent.

You have demonstrated throughout this discussion that you aren't capable of assessing machine intelligence. If you learned how to be more open-minded and took the time to learn more about these new technologies, instead of complaining about contemporary shortcomings and bashing those who do benefit from the technologies, it would likely open many doors for you.

replies(1): >>41902066 #
Ygg2 ◴[] No.41902066{5}[source]
> That truly sounds like a skill issue. This no-true-Scotsman angle is silly. I said documentation and tests, I don't know how you got "doctests" out of that. I said "documentation", and "tests". I didn't say "the simplest form of documentation", that is another straw man on your behalf.

What are you on about? A doctest is the simplest form of documentation and test; i.e., you don't have to write an in-depth test, you just need to understand what the function does. I expect even juniors can write a doctest that passes the compiler check. Not a good one, not a passing one, just a COMPILING one. Its rate of writing a passing one was even worse.
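
To be concrete about what I mean, a doctest in Rust is just an example inside a doc comment that cargo test compiles and runs. Roughly this (an illustrative sketch; my_crate is a placeholder name):

    /// Returns the larger of two integers.
    ///
    /// # Examples
    ///
    /// ```
    /// use my_crate::max_i32;
    /// assert_eq!(max_i32(2, 5), 5);
    /// ```
    pub fn max_i32(a: i32, b: i32) -> i32 {
        if a > b { a } else { b }
    }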

> Wow, the arrogance. There is absolutely nothing to justify this assumption.

OK, then, prove it: what hard problems, exactly, did you give it?

I gave my examples; I noticed it fails at complex tasks, like a YAML parser in a language it doesn't know well.

I noticed that when confronted with anything harder than writing pure boilerplate, it fails; e.g., it would fail 10% of the time.

> Unfortunately, this discourse has made it clear that you do need considerable practice

You can practice with a stochastic parrot all you want; it won't make it an Einstein. Programming is all about converting requirements to math, and LLMs aren't good at that. Do I need to link stuff like failing at basic calculation and counting the 'r's in the word 'strawberries'?

The best you can do is halve the error rate, but that follows a power law: you need to double the energy to halve the error rate. So unless you intend to boil the surface of the Earth to get it to be decent at programming, I don't think it's going to change anytime soon.

> You have demonstrated throughout this discussion that you aren't capable of assessing machine intelligence.

Pure ad hominem. You've demonstrated nothing outside your ""Trust me bro, it's not a bubble"" and ""You're wrong"". I'm using double double quotes so you don't assume I'm quoting you.

> You didn't even bother clarifying which LLM you used.

For the YAML parser, I used ChatGPT (GPT-4o) at my friend's place. For the rest of the tasks I used the JetBrains AI Assistant, which is a mix of GPT-4, GPT-4o, and GPT-3.