
83 points by wavelander | 1 comment
NeutralCrane ◴[] No.41214178[source]
The more I’ve looked at DSPy, the less impressed I am. The design of the project is very confusing, with nonsensical, convoluted abstractions. And for all the discussion surrounding it, I’ve yet to see someone actually using it for anything other than a toy example. I’m not sure I’ve even seen someone demonstrate that it can do what it claims to in terms of prompt optimization.

It reminds me very much of Langchain in that it feels like a rushed, unnecessary set of abstractions that adds more friction than actual benefit, and ultimately boils down to an attempt to stake a claim as a major framework in the still very early days of LLMs, rather than to solve an actual problem.
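For readers who haven't touched DSPy, here is a minimal sketch of the abstractions being debated: a declarative signature, a module wrapping a prompting strategy, and an optimizer that "compiles" few-shot prompts against a metric. The model name, toy metric, and one-example trainset are illustrative assumptions, not anything from this thread, and the exact API has shifted across DSPy versions:

```python
import dspy
from dspy.teleprompt import BootstrapFewShot

# Configure a language model (constructor names vary by DSPy version;
# dspy.LM with a provider-prefixed model id is the newer form).
lm = dspy.LM("openai/gpt-4o-mini")
dspy.settings.configure(lm=lm)

# A Signature declares typed inputs/outputs instead of a raw prompt string.
class AnswerQuestion(dspy.Signature):
    """Answer the question concisely."""
    question = dspy.InputField()
    answer = dspy.OutputField()

# A Module wraps a prompting strategy (here, chain-of-thought) around it.
qa = dspy.ChainOfThought(AnswerQuestion)

# The "prompt optimization" claim: an optimizer searches for few-shot
# demonstrations that maximize a metric over a trainset, then bakes the
# winning demos into the program's prompt.
def exact_match(example, prediction, trace=None):  # toy metric, illustrative only
    return example.answer.lower() == prediction.answer.lower()

trainset = [dspy.Example(question="What is 2+2?", answer="4").with_inputs("question")]
optimizer = BootstrapFewShot(metric=exact_match)
compiled_qa = optimizer.compile(qa, trainset=trainset)

print(compiled_qa(question="What is the capital of France?").answer)
```

Whether that indirection beats writing the prompt by hand is exactly what's contested here.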

replies(5): >>41214243 #>>41216015 #>>41216615 #>>41217298 #>>41218357 #
curious_cat_163 ◴[] No.41216015[source]
The abstractions could be cleaner. I think some of the convolution is due to the evolution the project has undergone, and the core contributors have not yet fully committed to being "out with the old".

I think there might be practical benefits to it. The XMC example illustrates this for me:

https://github.com/KarelDO/xmc.dspy
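For context, that repo implements an Infer-Retrieve-Rank pipeline for extreme multi-label classification. A rough, simplified sketch of the pattern in DSPy terms (the class name, field names, and retriever hook below are mine, not the repo's; the real implementation differs in detail):

```python
import dspy

class InferRetrieveRank(dspy.Module):
    """Simplified sketch of the Infer-Retrieve-Rank pattern from xmc.dspy."""

    def __init__(self, retriever):
        super().__init__()
        # LM step 1: infer plausible label-like terms from the input text.
        self.infer = dspy.ChainOfThought("text -> query")
        # Non-LM step: map the inferred terms onto the actual label space
        # (e.g. an embedding or BM25 lookup -- assumed, supplied by the caller).
        self.retriever = retriever
        # LM step 2: rerank the retrieved candidates against the input.
        self.rank = dspy.ChainOfThought("text, candidates -> labels")

    def forward(self, text):
        query = self.infer(text=text).query
        candidates = self.retriever(query)
        return self.rank(text=text, candidates=", ".join(candidates))
```

The practical pitch is that each LM step gets its prompt tuned by an optimizer rather than by hand, which matters when the label space has thousands of classes.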

replies(1): >>41221123 #
1. isaacfung ◴[] No.41221123[source]
This repo has some less trivial examples. https://github.com/ganarajpr/awesome-dspy

You can try STORM (also from Stanford) and see the prompts it generates automatically; it tries to expand on your topic and simulate a collaboration among several domain experts: https://github.com/stanford-oval/storm

An example article I asked it to generate: https://storm.genie.stanford.edu/article/how-the-number-of-o...