
600 points antirez | 3 comments
quantumHazer ◴[] No.44625120[source]
I'm going a little off-topic here, but I disagree with the OP's use of the term "PhD-level knowledge", although I have a huge amount of respect for antirez (besides the fact that we were born on the same island).

This phrasing can be misleading and points to a broader misunderstanding about the nature of doctoral studies, one that has been shaped by the marketing and hype discourse surrounding AI labs.

The assertion that there is a defined "PhD-level knowledge" is pretty useless. The primary purpose of a PhD is not simply to acquire a vast amount of pre-existing knowledge, but rather to learn how to conduct research.

replies(6): >>44625135 #>>44626038 #>>44626244 #>>44626345 #>>44632846 #>>44633598 #
ghm2180 ◴[] No.44626244[source]
> but rather to learn how to conduct research

Further, I always assumed PhD-level knowledge meant coming up with the right questions. I would say it is at best a "Lazy Knowledge-Rich Worker": it won't explore hypotheses if you don't *ask it* to. A PhD would ask those questions of *themselves*. Let me give you a simple example:

The other day, Claude Code (Max Pro subscription) commented out a bunch of test assertions in a related but separate test suite it was coding. It did not care to explore why it was commenting them out, which turned out to be a serious bug stemming from a faulty assumption in the original plan. I had to use the ultra-think/think-hard trick to get it to explore why the tests were failing, amend the plan, and fix the bug.

The bug was that the ORM object had null values: it was not refreshed after the commit, and had been fetched earlier by another DB session that had since been closed.
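For readers unfamiliar with this failure mode, here is a minimal sketch of the same stale-snapshot pattern. This is my reconstruction of the class of bug, not the commenter's actual code, and it uses stdlib sqlite3 with two connections standing in for two ORM sessions; the table and names are invented for illustration:

```python
import sqlite3

# Two "sessions" sharing one in-memory database.
uri = "file:demo?mode=memory&cache=shared"
conn1 = sqlite3.connect(uri, uri=True)
conn2 = sqlite3.connect(uri, uri=True)

conn1.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn1.execute("INSERT INTO users (id, name) VALUES (1, NULL)")  # value not yet written
conn1.commit()

# Session 1 fetches the row before the real value lands, caches it, and closes.
row = conn1.execute("SELECT id, name FROM users WHERE id = 1").fetchone()
cached = {"id": row[0], "name": row[1]}
conn1.close()  # the session that produced `cached` is now gone

# Session 2 commits the actual value afterwards.
conn2.execute("UPDATE users SET name = 'alice' WHERE id = 1")
conn2.commit()

print(cached["name"])  # None: the stale snapshot, i.e. the bug
fresh = conn2.execute("SELECT name FROM users WHERE id = 1").fetchone()[0]
print(fresh)           # alice: re-fetching (or refreshing) in a live session fixes it
conn2.close()
```

In a real ORM the equivalent fix is to refresh the instance, or re-fetch it, inside a session that is still open.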

replies(1): >>44631306 #
1. vl ◴[] No.44631306[source]
It's "ultrathink", one word, not "ultra-think". (See below.)

I use Claude Code with Opus, and had the same experience: I was pushing it hard to implement a complex test, and it gave me an empty test function with the test plan inside a comment (lol).

I do want to try Gemini 2.5 Pro, but I don't know of a tool that would make the experience comparable to Claude Code. Would it make sense to use it with Cursor? Do they limit context?

  ~/.nvm/versions/node/v22.16.0/lib/node_modules/@anthropic-ai/claude-code $ npx prettier cli.js | ack ultrathink -C 20
  var jw1 = { HIGHEST: 31999, MIDDLE: 1e4, BASIC: 4000, NONE: 0 },
  Yk6 = {
    english: {
      HIGHEST: [
        { pattern: "think harder", needsWordBoundary: !0 },
        { pattern: "think intensely", needsWordBoundary: !0 },
        { pattern: "think longer", needsWordBoundary: !0 },
        { pattern: "think really hard", needsWordBoundary: !0 },
        { pattern: "think super hard", needsWordBoundary: !0 },
        { pattern: "think very hard", needsWordBoundary: !0 },
        { pattern: "ultrathink", needsWordBoundary: !0 },
      ],
      MIDDLE: [
        { pattern: "think about it", needsWordBoundary: !0 },
        { pattern: "think a lot", needsWordBoundary: !0 },
        { pattern: "think deeply", needsWordBoundary: !0 },
        { pattern: "think hard", needsWordBoundary: !0 },
        { pattern: "think more", needsWordBoundary: !0 },
        { pattern: "megathink", needsWordBoundary: !0 },
      ],
      BASIC: [{ pattern: "think", needsWordBoundary: !0 }],
      NONE: [],
    },
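To make the table above concrete, here is a small Python sketch of how such a trigger table can map phrases to thinking-token budgets. This is my reconstruction, not Anthropic's actual implementation; the function name and the trimmed pattern lists are assumptions, and only the budgets and a few patterns are taken from the dump above:

```python
import re

# Trimmed-down version of the pattern table from the cli.js dump above.
# (pattern, needs_word_boundary) pairs, checked from most to least specific.
PATTERNS = {
    "HIGHEST": [("ultrathink", True), ("think harder", True)],
    "MIDDLE": [("megathink", True), ("think hard", True)],
    "BASIC": [("think", True)],
}
BUDGETS = {"HIGHEST": 31999, "MIDDLE": 10_000, "BASIC": 4000, "NONE": 0}

def thinking_budget(prompt: str) -> int:
    """Return the thinking-token budget triggered by `prompt` (hypothetical helper)."""
    for level in ("HIGHEST", "MIDDLE", "BASIC"):
        for pattern, needs_boundary in PATTERNS[level]:
            regex = re.escape(pattern)
            if needs_boundary:
                regex = rf"\b{regex}\b"
            if re.search(regex, prompt, re.IGNORECASE):
                return BUDGETS[level]
    return BUDGETS["NONE"]

print(thinking_budget("please ultrathink about this"))  # 31999
print(thinking_budget("ultra-think hard"))              # 10000: only "think hard" matches
print(thinking_budget("just think"))                    # 4000
```

Note how "ultra-think" fails the HIGHEST tier (the hyphen breaks the literal "ultrathink" match) and falls through to a lower budget, which is exactly why the one-word spelling matters.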
replies(2): >>44631521 #>>44632754 #
2. andrew_k ◴[] No.44631521[source]
Google has gemini-cli, which is pretty close to Claude Code in terms of experience (https://github.com/google-gemini/gemini-cli) and has a generous free tier. Claude Code is still superior in my experience; Gemini CLI can go off-course pretty quickly if you accept auto-edits. But it is handy for code reviews and planning, thanks to its large context window.
3. elyase ◴[] No.44632754[source]
https://github.com/sst/opencode