
IBM to acquire Confluent

(www.confluent.io)
443 points by abd12 | 1 comment
notepad0x90 ◴[] No.46192971[source]
This is so fascinating to me. I mean, IBM keeps taking over other companies, yet it consistently delivers low-quality, bottom-tier services and products. Why do they keep doing the same thing again and again? How are they generating actual revenue this way?

Ok, so does anyone remember 'Watson'? It was the ChatGPT before ChatGPT, and they built it in-house. Why didn't they compete with OpenAI the way Google and Anthropic are doing, with in-house tools? They have a mature PowerPC setup (Power9+ now?), lots of talent to make ML/LLMs work, and lots of existing investment in datacenters and in getting GPU-intensive workloads going.

I don't disagree that this acquisition is good strategy; I'm just fascinated (Schadenfreude?) to witness the demise of Confluent now. I think economists should study this; it might help avert larger problems.

replies(20): >>46193157 #>>46193166 #>>46193230 #>>46193283 #>>46193377 #>>46193425 #>>46193477 #>>46193667 #>>46194024 #>>46195332 #>>46197840 #>>46197983 #>>46198495 #>>46198575 #>>46199548 #>>46199797 #>>46200151 #>>46200251 #>>46201636 #>>46203121 #
ericol ◴[] No.46193667[source]
> Ok, so does anyone remember 'Watson'? It was the ChatGPT before ChatGPT, and they built it in-house

I do. I remember going to a talk once where they wanted to get people on board with using it. It was 90 minutes of hot air. They "showed" how Watson worked and how to implement things, and I think every single person in the room knew they were full of it. Mind you, we were all engineers, and there were no questions at the end.

Comparing Watson to LLMs is like comparing a rock to an AIM-9 Sidewinder.

replies(2): >>46193766 #>>46194180 #
paxys ◴[] No.46194180[source]
Watson was nothing like ChatGPT. The first iteration was a system built specifically to play Jeopardy. It did some neat stuff with NLP and information retrieval, but it was all still last-generation AI/ML technology. It then evolved into a brand that IBM used to sell its consulting services. The product itself was a massive failure because it had no real applications and was too weak as a general-purpose chatbot.
replies(1): >>46197504 #
ericol ◴[] No.46197504[source]
I had no idea what Watson was initially meant to solve.

I do remember they tried to sell it - at least in the meeting I went to - as a general-purpose chatbot.

I did briefly try to understand how to use it, but the documentation was horrendous (as in, "totally devoid of any technical information").

replies(1): >>46199057 #
ethbr1 ◴[] No.46199057[source]
Watson was intended to solve fuzzy optimization problems.

Unfortunately, the way it handled the fuzziness was 'engineer the problem to fit Watson, then engineer the output to be usable.'

Which required every project to be a huge custom implementation lift. Similar to early Palantir.

replies(1): >>46207651 #
ericol ◴[] No.46207651[source]
> Watson was intended to solve fuzzy optimization problems.

> Unfortunately, the way it handled the fuzziness was 'engineer the problem to fit Watson, then engineer the output to be usable.'

I'm going to revisit my understanding of fuzzy optimization, because that last line doesn't square with it.

replies(1): >>46213510 #
ethbr1 ◴[] No.46213510[source]
The reason LLMs are viable for use cases that Watson wasn't is their natural language and universal parsing strengths.

In the Watson era, all the front ends and back ends had to be custom-engineered per use case. Read: huge IBM services implementation projects that the company bungled more often than not.

Which is where the Palantir comparison is apt (and differs). Palantir understood their product was the product, and implementation was a necessary evil, to be engineered away ASAP.

To IBM, implementation revenue was the only reason to have a product.
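
To make the 'universal parsing' point concrete, here's a rough sketch. It's my own illustration, not anything Watson or IBM ever shipped, and call_llm is a hypothetical stand-in for whatever chat-completion API you'd actually use. The same generic extraction call covers unrelated domains just by swapping the schema, whereas doing the same with Watson meant engineering a custom front end and back end for each one:

    # Hypothetical sketch: one generic LLM call replaces per-use-case parsers.
    import json

    def call_llm(prompt: str) -> str:
        """Stand-in for a real chat-completion call to whatever provider you use."""
        raise NotImplementedError("wire up an actual model client here")

    def extract(text: str, schema: dict) -> dict:
        """Ask the model to map free-form text onto an arbitrary JSON schema."""
        prompt = (
            "Extract the following fields from the text and reply with JSON only.\n"
            f"Fields: {json.dumps(schema)}\n"
            f"Text: {text}"
        )
        return json.loads(call_llm(prompt))

    # Same function, unrelated domains: only the schema changes, not the plumbing.
    # extract(support_ticket, {"product": "str", "severity": "low|medium|high"})
    # extract(invoice_email, {"vendor": "str", "total": "number", "due_date": "date"})

Each of those commented examples would have been its own services project in the Watson era, which is where the implementation revenue came from.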

replies(1): >>46222214 #
ericol ◴[] No.46222214[source]
> Read: huge IBM services implementation projects that the company bungled more often than not

Well this is _not_ what they wanted to sell in that talk.

But the implementation shown was über vanilla, and once I got home the documentation turned out to be close to nonexistent (or, at least, not even trying to be what the docs for such a technology should be).