
340 points agomez314 | 4 comments
madsbuch No.35245982
It seems like many people focus on the reasoning capabilities of the GPT models.

To me, the real value is in the industrial-scale pattern recognition capabilities. I can describe something I only vaguely know, or ask it to expand on a concept for further research.

Within the last few hours I have used it to kick-start my research on AT1 bonds and why Credit Suisse let them default, and it helped me recall that it was the GenServer pattern I was looking for in Elixir when you have a facade that calls out to an independent process.
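
For illustration, a minimal sketch of that pattern: a facade module whose public functions delegate to a separate GenServer process. The Counter module and its functions here are just a hypothetical example, not something from the original discussion.

```elixir
defmodule Counter do
  use GenServer

  # Facade: the public API looks like ordinary function calls, but each one
  # sends a message to a separate, independently running process.
  def start_link(initial \\ 0) do
    GenServer.start_link(__MODULE__, initial, name: __MODULE__)
  end

  def increment, do: GenServer.call(__MODULE__, :increment)
  def value, do: GenServer.call(__MODULE__, :value)

  # Server callbacks: these run inside the GenServer process itself.
  @impl true
  def init(initial), do: {:ok, initial}

  @impl true
  def handle_call(:increment, _from, count), do: {:reply, count + 1, count + 1}
  def handle_call(:value, _from, count), do: {:reply, count, count}
end

# Usage:
#   {:ok, _pid} = Counter.start_link()
#   Counter.increment()  #=> 1
#   Counter.value()      #=> 1
```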

replies(3): >>35246019 #>>35246275 #>>35247434 #
1. soared No.35246275
How do you know that the research you’ve conducted is accurate, rather than just precise?
replies(3): >>35246361 #>>35247705 #>>35248714 #
2. YetAnotherNick No.35246361
For most things, verification is far easier than finding the answer. The same is the case with Stack Overflow, where I'd say at least half the answers don't address my query, but once I have a potential solution it is easy to check the documentation for the key function call, etc., or simply run it if it is small and doesn't seem dangerous.
3. madsbuch No.35247705
I don't.

And I don't care. As I wrote in the initial comment:

> ... kick-start my research ...

I use it in conjunction with search engines.

4. pixl97 No.35248714
How do you know, when you go to Google, that your research is accurate?