
1901 points l2silver | 4 comments

Maybe you've created your own AR program for wearables that shows the definition of a word when you highlight it IRL, or you've built a personal calendar app for your family to display on a monitor in the kitchen. Whatever it is, I'd love to hear it.
akhayam ◴[] No.35730472[source]
About 8 years back, I was leading an engineering team that was the escalation path for customer support. We were sitting on a large corpus of support tickets but had no insight into it. Then word2vec came out and blew my mind, so I built a language model trained on the support ticket data: I treated the system logs attached to tickets as an NLP problem and used it to predict what was going to fail next, and for which customer.

Never made it to prod, but it was a great tool for me to see where I wanted to budget my team's time.

This was way before all the LLMs and generative models, but it was such a fun project.
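
For anyone who wants to poke at something similar, here is a rough sketch of that kind of pipeline (gensim's Word2Vec plus a toy classifier; the ticket format, tokens, and labels below are made up for illustration, not the original system):

    # Sketch: embed tokenized log lines with Word2Vec, average per ticket,
    # then fit a simple classifier to predict the next failing component.
    import numpy as np
    from gensim.models import Word2Vec
    from sklearn.linear_model import LogisticRegression

    # Hypothetical corpus: each ticket is (tokenized log lines, component that failed next).
    tickets = [
        (["disk", "io", "timeout", "retry"], "storage"),
        (["auth", "token", "expired", "401"], "identity"),
        (["disk", "smart", "error", "sector"], "storage"),
    ]

    w2v = Word2Vec([tokens for tokens, _ in tickets],
                   vector_size=64, window=5, min_count=1, epochs=50)

    def ticket_vector(tokens):
        # Average the embeddings of the tokens the model has seen.
        vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
        return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)

    X = np.stack([ticket_vector(tokens) for tokens, _ in tickets])
    y = [label for _, label in tickets]
    clf = LogisticRegression(max_iter=1000).fit(X, y)

    print(clf.predict([ticket_vector(["disk", "timeout"])]))  # likely "storage"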

replies(3): >>35739376 #>>35739939 #>>35756307 #
onesphere ◴[] No.35739939[source]
We have a corpus, or database, of programs that follow logic but have no simulation behind them, so it represents knowledge for solving a problem, yet all we have control over is the parameters (inputs). In this case the input is functional, logical content (a program) describing how the details in the corpus get resolved. The model solves its integrated, corporate logic, and our output is an interpretation of that individual program.

Now our task is to swap out this entire database for something like it, but not exactly the same. The output becomes the input to this new matrix. The individual program persists, but everything is the next generation. With a little book-keeping, the programs do our will...

replies(1): >>35741733 #
akhayam ◴[] No.35741733[source]
Don't think I quite follow. Is the new program (operating on the output of the earlier program) supposed to reason about why you are seeing the result that you are seeing? Or is it doing more post-processing to make the earlier output directly consumable by your corporate systems?
replies(1): >>35743143 #
1. onesphere ◴[] No.35743143[source]
The new program’s purpose could be to do more post-processing to make the interpretation of that earlier program directly consumable (inter-generationally), or it could simply start producing more problems to solve.
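
One possible reading of that, as a toy sketch in plain Python (every name and value here is invented for illustration): generation N interprets a program, and generation N+1 either post-processes that output for downstream consumption or spawns new problems from it.

    # Toy sketch of the "inter-generational" hand-off described above;
    # all functions and data shapes are hypothetical.
    def interpret(program, params):
        # Generation N: run a program from the corpus against some inputs.
        return {"name": program.__name__, "result": program(params)}

    def post_process(interpretation):
        # Generation N+1, option 1: make the earlier output directly consumable.
        return f"{interpretation['name']} -> {interpretation['result']}"

    def spawn_problems(interpretation):
        # Generation N+1, option 2: turn the output into new problems to solve.
        return [interpretation["result"] + delta for delta in (-1, 0, 1)]

    def double(x):
        return 2 * x

    gen1 = interpret(double, 21)
    print(post_process(gen1))    # "double -> 42"
    print(spawn_problems(gen1))  # [41, 42, 43]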
replies(1): >>35744881 #
2. akhayam ◴[] No.35744881[source]
Gotcha! That makes sense. I would recommend looking at LangChain though, as it does a good job of modeling multi-stage learning / inference environments.
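
For reference, the multi-stage idea in LangChain looks roughly like this (a sketch against the 0.0.x-era API that was current at the time; the prompts, model choice, and sample input are placeholders):

    # Sketch: a two-stage chain where stage 2 post-processes stage 1's output.
    from langchain.llms import OpenAI
    from langchain.prompts import PromptTemplate
    from langchain.chains import LLMChain, SimpleSequentialChain

    llm = OpenAI(temperature=0)  # assumes OPENAI_API_KEY is set in the environment

    # Stage 1: interpret the raw input.
    interpret_chain = LLMChain(llm=llm, prompt=PromptTemplate(
        input_variables=["raw"],
        template="Summarize what this log excerpt says is failing:\n{raw}",
    ))

    # Stage 2: post-process stage 1's output into something directly consumable.
    format_chain = LLMChain(llm=llm, prompt=PromptTemplate(
        input_variables=["summary"],
        template="Turn this summary into a one-line ticket title:\n{summary}",
    ))

    pipeline = SimpleSequentialChain(chains=[interpret_chain, format_chain])
    print(pipeline.run("disk io timeout on node-7, retries exhausted"))

SimpleSequentialChain just pipes each chain's single output into the next chain's single input, which is more or less the "output becomes the input to the next generation" idea from upthread.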
replies(1): >>35751965 #
3. onesphere ◴[] No.35751965[source]
Integration API: https://thetaplane.com/ai/langchain/api

Inspired by: https://github.com/daveebbelaar/langchain-experiments/blob/m...

replies(1): >>35759223 #
4. akhayam ◴[] No.35759223{3}[source]
Wow... that was quick. Would love to see what results you get.