
AI 2027

(ai-2027.com)
949 points by Tenoke | 1 comments
827a No.43574608
Readers should, charitably, interpret this as "the sequence of events which need to happen in order for OpenAI to justify the inflow of capital necessary to survive".

Your daily vibe coding challenge: Get GPT-4o to output functional code which uses Google Vertex AI to generate a text embedding. If they can solve that one by July, then maybe we're on track for "curing all disease and aging, brain uploading, and colonizing the solar system" by 2030.

replies(3): >>43576981 #>>43581462 #>>43582672 #
Philpax No.43581462
Haven't tested this (cbf setting up Google Cloud), but the output looks consistent with the docs it cites: https://chatgpt.com/share/67efd449-ce34-8003-bd37-9ec688a11b...

You may consider using search to be cheating, but we do it, so why shouldn't LLMs?

replies(1): >>43586611 #
827a No.43586611
I should have specified "nodejs", as that has been my most recent difficulty. The challenge with that prompt, specifically, is that Google has at least four Node.js libraries that all seem at least reasonably capable of accessing text embedding models on Vertex AI (@google-ai/generativelanguage, @google-cloud/vertexai, @google-cloud/aiplatform, and @google/genai), and they've published breaking changes to all of them multiple times. So, in my experience, GPT will not only confuse methods from one of these libraries with another, but will also sometimes hallucinate answers applicable only to older versions of a library, without understanding which version it's giving code for. Once it has struggled enough, it'll sometimes just give up and tell you to use axios, but the endpoints it recommends calling with axios are all their protobuf APIs, so I'm not even sure that would work.
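
For reference, a minimal sketch of what a correct answer might look like with @google-cloud/aiplatform (one of the four libraries above). This assumes a recent v3+ release of that SDK, the text-embedding-005 publisher model, and Application Default Credentials; it has not been tested against live GCP, and the `buildModelEndpoint`/`embedText` names are just illustrative:

```javascript
// Pure helper: build the full Vertex AI publisher-model resource name.
function buildModelEndpoint(project, location, model) {
  return `projects/${project}/locations/${location}/publishers/google/models/${model}`;
}

async function embedText(project, texts, location = 'us-central1') {
  // Lazy require so the helper above stays usable without the SDK installed.
  const aiplatform = require('@google-cloud/aiplatform');
  const {PredictionServiceClient} = aiplatform.v1;
  const {helpers} = aiplatform; // converts plain JS objects to protobuf Values

  const client = new PredictionServiceClient({
    apiEndpoint: `${location}-aiplatform.googleapis.com`,
  });

  const instances = texts.map((content) =>
    helpers.toValue({content, task_type: 'RETRIEVAL_DOCUMENT'})
  );
  const [response] = await client.predict({
    endpoint: buildModelEndpoint(project, location, 'text-embedding-005'),
    instances,
  });

  // Each prediction carries a protobuf-encoded `embeddings.values` float array;
  // unwrap it back into plain JS numbers.
  return response.predictions.map((p) =>
    p.structValue.fields.embeddings.structValue.fields.values.listValue.values
      .map((v) => v.numberValue)
  );
}
```

The protobuf Value wrapping/unwrapping is exactly the kind of detail the models trip over, since the other three libraries take plain JSON instead.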

Search is totally reasonable, but in this case even Google's own documentation for these libraries is exceedingly bad. Nearly all the examples it gives are for accessing the language models, not the text embedding models, so GPT will also sometimes generate code that is perfectly correct for calling one of the generative language models but will swap, e.g., the "model: gemini-2.0" parameter for "model: text-embedding-005", which also does not work.
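
As for the "just use axios" fallback: the predict endpoint does have a plain REST surface, so skipping the SDKs entirely is workable. A hedged sketch, assuming Node 18+ (global fetch) and an OAuth access token, e.g. from `gcloud auth print-access-token`; untested against live GCP, and `predictUrl`/`embedViaRest` are illustrative names:

```javascript
// Build the REST predict URL for a Vertex AI publisher model.
function predictUrl(project, location, model) {
  return `https://${location}-aiplatform.googleapis.com/v1/` +
    `projects/${project}/locations/${location}` +
    `/publishers/google/models/${model}:predict`;
}

async function embedViaRest(project, accessToken, texts, location = 'us-central1') {
  const res = await fetch(predictUrl(project, location, 'text-embedding-005'), {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${accessToken}`,
      'Content-Type': 'application/json',
    },
    // REST takes plain JSON instances, no protobuf wrapping needed.
    body: JSON.stringify({instances: texts.map((content) => ({content}))}),
  });
  if (!res.ok) throw new Error(`Vertex AI predict failed: ${res.status}`);
  const {predictions} = await res.json();
  return predictions.map((p) => p.embeddings.values); // one float array per input
}
```

Note this is the JSON REST API, not the protobuf/gRPC surface the model kept steering axios toward.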