
Context Engineering for Agents

(rlancemartin.github.io)
114 points by 0x79de | 1 comment
ares623 (No.44461351)
Another article handwaving away or underselling the effects of hallucination. I can't help but draw parallels to layer-2 attempts from crypto.
replies(1): >>44462031 #
FiniteIntegral (No.44462031)
Apple released a paper showing the diminishing returns of "deep learning" specifically when it comes to math. For example, it has a hard time solving the Tower of Hanoi problem past 6-7 discs, and that's not even giving it the restriction of optimal solutions. The agents they tested would hallucinate steps and couldn't follow simple instructions.
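For reference, the optimal Tower of Hanoi solution is a short recursion whose move count grows as 2^n − 1, so even 7 discs demands 127 exact steps in sequence, which is why step-by-step hallucination compounds quickly. A minimal sketch (function and peg names are illustrative, not from the paper):

```python
def hanoi(n, src="A", dst="C", aux="B", moves=None):
    """Optimal Tower of Hanoi: move n discs from src to dst via aux."""
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, src, aux, dst, moves)  # clear the n-1 smaller discs out of the way
        moves.append((src, dst))            # move the largest remaining disc
        hanoi(n - 1, aux, dst, src, moves)  # re-stack the smaller discs on top
    return moves

# Optimal solution length is 2**n - 1: one wrong move anywhere breaks it.
assert len(hanoi(7)) == 2**7 - 1  # 127 moves for 7 discs
```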

On top of that, rebranding "prompt engineering" as "context engineering" and pretending it's anything different is ignorant at best and destructively dumb at worst.

replies(7): >>44462128 #>>44462410 #>>44462950 #>>44464219 #>>44464240 #>>44464924 #>>44465232 #
skeeter2020 (No.44465232)
We used to call both of these "being good with the Google". Equating it to engineering is both hilarious and insulting.
replies(1): >>44466397 #
triyambakam (No.44466397)
It is a stretch, but not semantically wrong. Strictly, engineering is the practical application of science; studying how a model behaves under different inputs is arguably a science, and applying that study is therefore engineering.