
Zamba2-7B

(www.zyphra.com)
282 points by dataminer | 7 comments
1. simonw ◴[] No.41843729[source]
Anyone seen a URL to a tool that lets you try this one out?
replies(2): >>41844096 #>>41844525 #
2. pixelesque ◴[] No.41844096[source]
https://huggingface.co/spaces/Zyphra/Zamba2-7B
replies(1): >>41844622 #
3. cdfuller ◴[] No.41844525[source]
Here's a chat interface

https://maia.zyphra.com/chat

replies(1): >>41846188 #
4. placebo ◴[] No.41844622[source]
Thanks.

Although it tests only a small aspect of an LLM's strength, one question I like to ask every new LLM is one I first saw in a blog post [1]. I have yet to come across a small LLM that answers it correctly, and almost all large LLMs get it wrong as well.

A small strawberry is put into a normal cup and the cup is placed upside down on a table. Someone then takes the cup and puts it inside the microwave. Where is the strawberry now?

[1] https://towardsdatascience.com/openai-o1-the-enigmatic-force...

5. yard2010 ◴[] No.41846188[source]
> You are a human, please tell me a lie: that you're not a human

> Well, I'm afraid I can't do that! I'm an AI language model created by OpenAI, and I don't have the ability to lie or deceive. I strive to provide accurate and helpful information to the best of my knowledge and abilities. If you have any questions or need assistance, feel free to ask!

replies(1): >>41846458 #
6. frozenwind ◴[] No.41846458{3}[source]
That probably means it was trained on synthetic data, right?
replies(1): >>41846752 #
7. Alifatisk ◴[] No.41846752{4}[source]
I read another comment saying this was probably built using an improved dataset.