
170 points PaulHoule | 1 comment
Scene_Cast2 ◴[] No.45118686[source]
The paper is hard to read. There is no concrete worked-through example, the prose is over the top, and the equations don't really help. I can't make head or tail of this paper.
replies(3): >>45118775 #>>45119154 #>>45120083 #
lumost ◴[] No.45118775[source]
This appears to be a position paper written by authors outside of their core field. "The wall" is presented only through an analogy to derivatives on the discrete values computers operate on.
replies(2): >>45119119 #>>45119709 #
jibal ◴[] No.45119709[source]
If you look at their other papers, you will see that this is very much within their core field.
replies(3): >>45119914 #>>45120336 #>>45124453 #
lumost ◴[] No.45119914[source]
Their other papers are on simulation and applied chemistry. Where does their expertise in Machine Learning or Large Language Models derive from?

While it's not a requirement to have published in a field before publishing in it, having a coauthor from the target field, or peer review at a venue in that field as an entry point, certainly raises credibility.

From my limited claim to expertise in either Machine Learning or Large Language Models, the paper does not appear to demonstrate what it claims. The authors address the fields of Machine Learning and LLM development as one would address a young student, which does not help make their point.

replies(1): >>45132135 #
stonogo ◴[] No.45132135[source]
If you can't look at that publication list and see their expertise in machine learning, then it may be that they know more about your field than you know about theirs. Nothing wrong with that! Computational chemists use different terminology than computer scientists, but there is significant overlap between the fields.