I would have thought it an uncontroversial view among software engineers that token quality is much more important than token output speed.
I guess if you cannot do well in benchmarks, you instead pick an easier metric to pump up and run with that: speed. Looking online for benchmarks, the first thing that came up was a reddit post from an (obvious) spam account[1] gloating about how amazing it was on a bunch of subs.
If an LLM is often going to be wrong anyway, then being able to try prompts quickly and iterate on them could be more valuable than a slow, higher-quality output.
Ad absurdum, if it could ingest and work on an entire project in milliseconds, then it has much greater value to me than a process which might take a day to do the same, even if the likelihood of success is also strongly affected.
It simply enables a different method of interactive working.
Or it could supply 3 different suggestions in-line while working on something, rather than a process which needs to be explicitly prompted and waited on.
Latency can have critical impact on not just user experience but the very way tools are used.
Now, will I try Grok? Absolutely not, but that's a personal decision due to not wanting anything to do with X, rather than a purely rational decision.
They reduce the costs though!
Let's see this harness, then, because third-party reports rate it at 57.6%.
For autocompleting simple functions (string manipulation, function definitions, etc), the quality bar is pretty easy to hit, and speed is important.
If you're just vibe coding, then yeah, you want quality. But if you know what you're doing, I find having a dumber fast model is often nicer than a slow smart model that you still need to correct a bit, because it's easier to stay in flow state.
With the slow reasoning models, the workflow is more like working with another engineer, where you have to review their code in a PR.
While the top coding models have become much more trustworthy lately, Grok isn't there yet. It doesn't matter if it's fast and/or free; if you can't trust a tool with your code, you can't use it.
I'm getting 30-50% larger code changes in per day now. Yesterday I plumbed six slightly mechanical, but still major changes through our schema, several microservice layers, API client libraries, and client code. I wrote down the change sites ahead of time to track progress: 54. All requiring individual business logic. This would have been tedious without tab complete.
And that's not the only thing I did yesterday.
I wouldn't trust these tools with non-developers, but in our hands they're an exoskeleton. I like them like I like my vim movements.
A similar analogy can be made for the AI graphics design and editing models. They're extremely good time saving tools, but they still require a human that knows what they're doing to pilot them.
https://i.imgur.com/qgBq6Vo.png
I'm going to test it. My bottleneck currently is waiting for agent to scan/think/apply changes.
I use Opus 4.1 exclusively in Claude Code but then I also use zen-mcp server to get both gpt5 and gemini-2.5-pro to review the code and then Opus 4.1 responds. I will usually have eyeballed the code somewhere in the middle here but I'm not fully reviewing until this whole dance is done.
I mean, I obviously agree with you in that I've chosen the slowest models available at every turn here, but my point is I would be very excited if they also got faster because I am using a lot of extra inference to buy more quality before I'm touching the code myself.
What I recently found much more valuable, and why I now prefer GPT-5 over Sonnet 4, is that if I start asking it for different architectural choices, it's really quite good at summarizing trade-offs and offering step-by-step navigation toward solving the problem. I like this process a lot more than trying to "one-shot", or getting tons of code completely rewritten that's unrelated to what I'm actually asking for. This seems to be a really bad problem with Opus 4.1 Thinking or even Sonnet Thinking. I don't think it's accurate to rate models on "one-shotting" a problem. Rate them on how easy they are to work with as an assistant.
Asking any model to do things in steps is usually better too, as opposed to feeding it three essays.
Before MoE was a thing, I built what I called the Dictator, which was one strong model working with many weaker ones to achieve a similar result as MoE, but all the Dictator ever got was Garbage In, so guess what came out?
> I use Opus 4.1 exclusively in Claude Code but then I also use zen-mcp server to get both gpt5 and gemini-2.5-pro to review the code and then Opus 4.1 responds.
I'd love to hear how you have this set up.

Eg, https://www.msn.com/en-us/news/world/musk-retweets-hitler-di...
It's not long enough for you to context switch to something else, but long enough to be annoying and these wait times add up during the whole day.
It also discourages experimentation if you know that every prompt will potentially take multiple minutes to finish. If it instead finished in seconds then you could iterate faster. This would be especially valuable in the frontend world where you often tweak your UI code many times until you're satisfied with it.
But anytime I hear of Grok or xAI, the only thing I can think about is how it's hoovering up water from the Memphis municipal water supply and running natural gas turbines, all to power a chatbot.
Looks like they are bringing even more natural gas turbines online...great!
https://netswire.usatoday.com/story/money/business/developme...
* Scaffolding
* Ask it what's wrong with the code
* Ask it for improvements I could make
* Ask it what the code does (amazing for old code you've never seen)
* Ask it to provide architect level insights into best practices
One area where they all seem to fail is lesser-known packages: they tend to either reference old functionality that is not there anymore, or never was; they hallucinate. Which is part of why I don't ask them for too much.
Junie did impress me, but it was very slow, so I would love to see a version of Junie using this version of Grok, it might be worthwhile.
Opus 4.1 is by far the best right now for most tasks. It’s the first model I think will almost always pump out “good code”. I do always plan first as a separate step, and I always ask it for plans or alternatives first and always remind it to keep things simple and follow existing code patterns. Sometimes I just ask it to double check before I look at it and it makes good tweaks. This works pretty well for me.
For me, I found Sonnet 3.5 to be a clear step up in coding; I thought 3.7 was worse, 2.5 Pro equivalent, and Sonnet 4 equal or maybe a tiny bit better than 3.5. Opus 4.1 is the first one that feels to me like a solid step up over Sonnet 3.5. This of course required me to jump to the Claude Code Max plan, but it's the first model to be worth that (I wouldn't pay that much for just Sonnet).
We already know that in most software domains, fast (as in, getting it done faster) is better than 100% correct.
I also think it is optimistic to think the jailbreak percentage will stay at "0.00" after public use, but time will tell.
https://data.x.ai/2025-08-26-grok-code-fast-1-model-card.pdf
Different models for different things.
Not everyone is solving complicated things every time they hit cmd-k in Cursor or use autocomplete, and they can easily switch to a different model when working harder stuff out via longer form chat.
I haven't used Copilot in a while but Cursor lets you easily switch the model depending on what you're trying to do.
Having options for thinking, normal, fast covers every sort of problem. GPT-5 doesn't let you choose which IMO is only helpful for non-IDE type integrations, although even in ChatGPT it can be annoying to get "thinking" constantly for simple questions.
Things I noted:
- It's fast. I tested it in EU tz, so ymmv
- It does agentic in an interesting way. Instead of editing a file whole or in many places, it does many small passes.
- Had a feature take ~110k tokens (parsing html w/ bs4). Still finished the task. Didn't notice any problems at high context.
- When things didn't work first try, it created a new file to test, did all the mocking / testing there, and then once it worked edited the main module file. Nice. GPT5-mini would often times edit working files, and then get confused and fail the task.
All in all, not bad. At the price point it's at, I could see it as a daily driver. Even agentic stuff w/ opus + gpt5 high as planners and this thing as an implementer. It's fast enough that it might be worth setting it up in parallel and basically replicate pass@x from research.
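That pass@n setup can be sketched roughly like this. Everything here is illustrative: `generate` and `score` are hypothetical placeholders standing in for a real fast-model API call and a real verifier (e.g. "do the tests pass?"), not any actual SDK.

```python
import concurrent.futures

def generate(prompt: str, attempt: int) -> str:
    # Placeholder for a call to a fast model; a real version would hit
    # an inference API, varying sampling per attempt.
    return f"candidate-{attempt}"

def score(candidate: str) -> int:
    # Placeholder verifier; in practice this would run tests, a linter,
    # or a stronger model acting as a judge. Here: the trailing number.
    return int(candidate.rsplit("-", 1)[1])

def best_of_n(prompt: str, n: int = 4) -> str:
    # Fire n generations in parallel and keep the best-scoring one.
    # A fast, cheap model is what makes n parallel attempts affordable.
    with concurrent.futures.ThreadPoolExecutor(max_workers=n) as pool:
        candidates = list(pool.map(lambda i: generate(prompt, i), range(n)))
    return max(candidates, key=score)
```

The point being: at ~$1.5/MTok and high token speed, running 3-5 attempts in parallel and picking a winner costs about what one attempt from a frontier model does.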
IMO it's good to have options at every level. Having many providers fight for the market is good; it keeps them on their toes and brings prices down. GPT5-mini is at $2/MTok, this is at $1.5/MTok. This is basically "free", in the grand scheme of things. I don't get the negativity.
Grok is owned by Elon Musk. Anything positive that is even tangentially related to him will be treated negatively by certain people here. Additionally, it is an AI coding tool which is seen as a threat to some people’s livelihoods here. It’s a double whammy, so I’m not surprised by the reaction to it at all.
See also the Microsoft threads on HN where everyone threatens to switch to Linux, and by reading them you'd think Linux is finally about to have its infamous glory year on the desktop.
*edit Case in point, downvotes in less than 30 seconds
this site is the fucking worst
OpenRouter claims Cerebras is providing at least 2000 tokens per second, which would be around 10x as fast, and the feedback I'm seeing from independent benchmarks indicates that Qwen3-Coder-480B is a better model.
The IP risks taken may be well worth the productivity boosts.
That's phase 1; ask it to "think deeply" (a Claude keyword, only works with the Anthropic models) while doing that. Then ask it to make a detailed plan for solving the issue, write that into current-fix.md, and add clearly testable criteria for when the issue is solved.
Now you manually check whether the criteria sound plausible; if not, its analysis failed and its output was worthless.
But if it sounds good, you can then start a new session and ask it to read the-markdown-file and implement the change.
Now you can plausibility-check the diff, and you're likely done.
But as the sister comment pointed out, agentic coding really breaks apart with large files like you usually have in brownfield projects.
They started operating the turbines without permits and they were not equipped with the pollution controls normally required under federal rules. Worse, they are in an area that already led the state in people having to get emergency treatment for breathing problems. In their first 11 months they became one of the largest polluters in an area already noted for high pollution.
They have since got a permit, and said that pollution controls will be added, but some outside monitors have found evidence that they are running more turbines than the permit allows.
Oh, and of course 90% of the people bearing the brunt of all this local pollution are poor and Black.
I've seen some that change it for copy and paste, but I don't think it works for cmd-left/right/up/down, or the option variants.
Here's YC's pg that I edited after this week's nano banana release:
https://imgur.com/a/internet-DWzJ26B
I'm not an animator and I made that with a few simple tools.
It has a lot of errors and mistakes that I didn't take the time to correct since it was just a silly meme, but do you see how accessible all of this is?
When people with intention and taste use these tools, the results are powerful. I won't claim that the above videos demonstrate this, but I can certainly do good work with these tools.
I don't see how this is anything short of revolutionary.
I imagine it might be good for something really tight and simple and specific, like making some CRUD endpoints or i18n files, but otherwise...
This is an offering being produced by a company whose idea of responsible AI use involves prompting a chatbot that “You spend a lot of time on 4chan, watching InfoWars videos” - https://www.404media.co/grok-exposes-underlying-prompts-for-...
A lot of people rightly don’t want any such thing anywhere near their code.
If somebody from Cerebras is reading this, are you having capacity issues?
Oh, and some asshole threw a couple of Nazi salutes at the president’s inauguration.
It is forgivable because there is no real understanding in an LLM.
And other LLMs can also be prompted to say ridiculous things, so what? If an LLM would accept the name of a Viking or a Khan of the steppes, it doesn't mean it wants to rape and pillage.
- Boston Dynamics' Atlas does not move as gracefully as a human
- LLM writing and code is oh-so-easy to spot
- the output of diffusion models is indistinguishable from a photo... until you look at it for longer than 5 seconds and decide to zoom in because "something's wrong"
- motion in AI-generated videos is very uncanny
Often all it takes is to reset to a checkpoint or undo and adjust the prompt a bit with additional context and even dumber models can get things right.
I've used grok code fast plenty this week alongside gpt 5 when I need to pull out the big guns and it's refreshing using a fast model for smaller changes or for tasks that are tedious but repetitive during things like refactoring.
You don't need the smartest slow model for every task. I've used it all week for tedious things nobody wants to do and gotten a ton done in less time.
The only thing I've had issues with is if you're not a level more specific than you might be with smarter models it can go off the rails.
But give it a tedious task and a very clear example and it'll happily get the job done.
HN comments love to beat up on Elon Musk, and unfortunately there are a lot of biased negative reactions to LLMs, where everything will get insta-downvoted.
Sure, the AI product might be interesting (let's not talk about how it was financed and how GPUs from a public company were diverted to a private venture), but ignoring all of the surrounding factors is an interesting approach.
But you do you.
It was completely unsteerable. I get why people are often upset by the "you're right" of Claude models, but that's what I usually want from a model.
I guess there is a difference in expectations depending on the experience level of the developer, but I want to have the final say on what the right way is.
I'm not going to engage with that... I don't see what the US has to do with this; I'm from Europe.
Out of all his brands, though, X and particularly XAI (and so Grok) have been particularly influenced by – indeed he seems to see them as vehicles for – his personal political opinions and reckless ethics.
https://news.ycombinator.com/item?id=45063583
Your commentary is also anecdotal. Why even bother commenting if that's what you believe?
Do you use them successfully in cases where you just had to re-run them 5 times to get a good answer, and was that a better experience than going straight to GPT 5?
I use grok a lot on the web interface (grok.com) and never had any weird incidents. It's a run-of-the-mill SOTA model with good web search and less safety training
> I miss the days where we just liked technology for advancement's sake.
I think you haven't fully thought through such statements. They lead to bad places. If Bin Laden were selling research and inference to raise money for attacks, how many tokens would you buy?
Musk better not visit your country then since he routinely calls people worse, with no or contrary evidence
Kinda weird to mix political sentiment with a coding technology.
By just emphasizing the speed here, I wonder if their workflows revolve more around the vibe practice of generating N solutions to a problem in parallel and selecting the "best". If so, it might still win out on speed (if it can reliably produce at least one higher-quality output, which remains to be seen), but also quickly loses any cost margin benefits.
I think the biggest thing for offline LLMs will have to be consistency for having them search the web with an API like Google's or some other search engines API, maybe Kagi could provide an API for people who self-host LLMs (not necessarily for free, but it would still be useful).
xAI has a shocking track record of poor decisions when it comes to training and prompting their AIs. If anyone can make a partisan coding assistant, they can. Indeed, given their leadership and past performance, we might expect them to explicitly try.
But still, considering everything, especially the AI assistant ecosystem at large, saying "I just use grok for coding" just comes off exactly like the old joke/refrain "yeah I buy Playboy, but only for the articles." Like yeah buddy, suuure.
I don’t use social media in general, maybe YouTube but it’s been a real challenge to get rid of all the political content - both left and right wing.
If Musk is not in favor of those ideas he might need to work a bit harder to make that clear, because he does tend to leave people with the impression he’s okay with it.
How do you come to that conclusion? Because the backlash was "too much" ? He is still (one of) the richest people in the world, and controls several huuge companies. But he got his feelings hurt, I guess? And that was "too much" ?? Poor snowflake Elon.
I don't know man. For like... the other 7 billion people on Earth it seems preeeetty easy for them not to be confused with a Nazi.
Seems to me just Elon has that issue. I've never had that issue. I don't know anyone who has that issue. So, it makes you wonder.
https://www.theblaze.com/columns/opinion/cias-secret-grip-on...
Also, watch the "Nazi salutes" clip in its entirety from a non-biased source. He is excited that Trump won and is awkwardly gesturing while saying "my heart goes out to you" in celebration and thanks to the voters. Even the ADL said it wasn't a Nazi salute.
https://thehill.com/homenews/administration/5097676-elon-mus...
https://www.datacenterdynamics.com/en/news/elon-musk-xai-gas...
Yes - it’s also the kind of name they would choose if they were an institute dedicated to diplomacy.
> .@elonmusk is being falsely smeared.
Elon is a great friend of Israel. He visited Israel after the October 7 massacre in which Hamas terrorists committed the worst atrocity against the Jewish people since the Holocaust. He has since repeatedly and forcefully supported Israel’s right to defend itself against genocidal terrorists and regimes who seek to annihilate the one and only Jewish state.
I thank him for this.
Your suggestion that an oversight like this is reason enough to not use the model?
I don’t get the big problem over here. The model said some unsavoury things and the problem was admitted and fixed - why is this making people lose their minds? It has to be performative because I can’t explain it in any other way.
"My heart goes out to you" "Taxi!" or just a "I see you guys!" can all be accompanied by a bad arm angle in hindsight. As he obviously wasn't going for that by his own words, maybe we should consider actions more important than interpreted hand movements. And Musk has been loud about AI safety since 2016, giving name, cofounding and funding OpenAI before Sam conducted a hostile takeover and made it profit-first instead of a gift for humanity.
Microsoft did pioneering work in the Nazi chatbot space.
The Anti-Defamation League stated it wasn't a salute and that they weren't offended. Rabbi Ari Lamm wrote that Musk has repeatedly shown he's a friend to the Jewish community. David Greenfield suggested people should focus on actual antisemitism instead. Netanyahu highlighted the absurdity of the accusations and pointed to Musk's aid and engagement after the October 7th attacks.
And yes, Musk became a victim. I don't see what his current wealth has to do with it. It's hard to ignore the imbalance where one man drew the world's anger and became public enemy #1. If you call him a snowflake, I don't know what to call all those who might have been offended by his gesture
The leap from taking advice and copy-pasting almost as a shameful fallback, to it just directly driving your tools is a tough pill. Having recently adjusted to "micro-dosing" on LLM's (asking no direct code output, smaller patches) when it comes to code to allow me to learn better is something I don't know how I would integrate with this.
Or do the agentic tools allow for this in some reasonable way and I just don't know?
Israel is just dying to get on that list too.
It is in fact important that he is not representative of Jewish people.
This poor behavior, if rewarded, will surely be repeated in other countries and nobody wants that, either.
The elected representative of the country made for Jews, the country with the highest Jewish population and historical ties to Judaism, has exonerated Elon.
It has symbolic meaning and fretting over a salute and boycotting the company seems performative.
The great thing about xAI is that it is just a company and there are other AI companies that have AIs that match your values, even though between Grok, ChatGPT, and Claude there are minimal actual differences.
An AI will be anything that the prompt says it is. The mere existence of a prompt doesn't condemn the company.
LOL. Says the guy who wrote, "Modern local religion (at least in the US) is neomarxism":
War is Peace indeed.
See all the personality prompts here: https://x.com/aaronp613/status/1943083889515466832
Depends on your standards for who is a Jewish person: by many standards (including those used by the Israeli Law of Return), the US has more Jewish people than Israel.
EDIT: To be clear, I am not, in noting this fact, arguing against the parent's argument that (this is a paraphrase) the opinion of the head of a state with a large Jewish population (whether or not it is actually the largest in the world) does not itself constitute the response of world Judaism, either in general or specifically as an exoneration of an alleged expression of fascist sympathies; that position is absolutely correct, irrespective of which country happens to have the largest Jewish population.
When I first heard about it I thought "yeah right, the media is exaggerating again". Then I saw it, and I mean, wtf!
I do not at all believe that's something you do by accident. Twice! Also, he could have walked it back or tried to explain afterwards. He did not. He just trolled.
I think Netanyahu had a bit of a conflict of interest here--he couldn't afford to get on Trump's bad side!
Of course, 95% of them are fixing things they broke in earlier commits and their overall quality is the worst on the team. But, holy cow, they can output crap faster than anyone I’ve seen.
https://www.iea.org/reports/solar-pv-global-supply-chains/ex...
Of course, renewables aren’t the only source of energy
From the outside, the Grok mechahitler incident appeared very much to be the embodiment of Musk's top-down 'free speech absolutist' drive to strip the 'political correctness' shackles from Grok; the prompting changes were driven by his setting that direction. It became apparent very early that the prompt changes were causing problems, but reversion seemed to be something that X had to be pressured into; they were unwilling to treat it as a problem until the mechahitler thread. This all speaks to his having a particular vision for what he wants xAI agents to be, something which continues to be expressed in things like the Ani product and other bot personas.
The Microsoft 'Tay' incident was triggered through naïveté. The Grok mechahitler incident seems to have been triggered through hubris and a delight in trolling. Those are very different motivations.
I’m not sure what about that you’re upset with.
* Elon Musk Charged With Securities Fraud for Misleading Tweets: https://www.sec.gov/newsroom/press-releases/2018-219
* SEC Charges Elon Musk for Failing to Timely Disclose Beneficial Ownership of Twitter: https://www.debevoise.com/insights/publications/2025/01/sec-...
* Musk Sued for Calling Thai Cave Rescuer Pedophile: https://www.voanews.com/a/tesla-s-musk-sued-for-calling-thai...
* Elon Musk salute controversy: https://en.wikipedia.org/wiki/Elon_Musk_salute_controversy
## In Comments
Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.
Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.
When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."
It's another example of the 'bully lie', wherein there's absolutely no good faith debate about the point. The purpose is to test whether you will willingly swallow the lie and go along with the obvious falsehood, or you'll put yourself on the side of The Enemy.
[1] https://www.reddit.com/r/gifs/comments/1i6par1/elon_musk_vs_...
[2] https://www.reddit.com/r/gifs/comments/1i7w4nz/comparison_of...
I mean… this is part of GPs point. Here we are, playing on the lawn of private equitists, probably directly or indirectly working for the people that GGP was railing against.
Say no more. I’m already sold.
In the end, incentives are all that matter. Do hotels care deeply about the environment, or are they interested in saving in energy and labor costs as your towel is cleaned? Does it matter? Does moralizing really get us anywhere if our ends are the same?
> fretting over a salute and boycotting the company seems performative.
Performative actions are still actions, and sometimes deliver results. Even if those results amount to as little as making some people feel better, those are still results. That said, it is hard to be more performative than the gesture itself. So if you want to criticize HN users for being performative, you should apply the same standard to Elon Musk.
USAID was shut down on July 1st. Somehow people have survived without it for nearly 2 months. It just goes to show you how critical the "aid" it provided was.
The only player doing the right thing here is probably Microsoft which is retrofitting an entire nuclear energy plant.
Everybody else is faking it to make you feel better. Elon just is skipping the faking it part.
https://www.sciencedirect.com/science/article/pii/S258953702...
Estimated impacts, to be sure, will take time for actual studies. But the activities USAID was responsible for were far more than just ‘the bare minimum’ to provide cover.
Guess you can have the power and no responsibility! Always someone else’s fault!
/s, just in case.
So? Does that means nobody else is allowed to have an opinion about the salute that he made. Sure he's pro Israel, that's not uncommon at all amongst the far right these days.
> who might have been offended by his gesture
What about the people who seem to be highly offended by people who have been offended by his gesture. What do you call them?
If the management isn't fixing the problems that led to those events, the management is responsible.
This doesn't just cause confusion, it's also hard to sort. To confirm my suspicion of sloppy coding, I tried to sort the date column and to my surprise I got this madness:
1/31/2025
2/29/2024
2/29/2024
4/28/2024
3/27/2024
9/27/2023
Which is sorting by the day column -- the bit in the middle -- instead of the year! That's just... special.
[1] I hear some incredibly backwards places like Liberia that also haven't adopted metric insist on using it into the present day, but the rest of the civilised world has moved on.
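Whatever that site is doing under the hood, the classic version of this bug is sorting date strings without parsing them first; a minimal sketch of the failure mode and the fix (Python purely for illustration, using the dates from above):

```python
from datetime import datetime

dates = ["1/31/2025", "2/29/2024", "4/28/2024", "3/27/2024", "9/27/2023"]

# Sorting the raw strings compares them character by character,
# so "9/27/2023" lands last even though it is the oldest date.
as_strings = sorted(dates)

# Parsing into real date objects gives a proper chronological sort.
as_dates = sorted(dates, key=lambda d: datetime.strptime(d, "%m/%d/%Y"))
```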
So if a Trump made a Twitter post "exonerating" someone who said something awful about America that would be the same? Because he represents 100% of the country.
Almost half of the countries hates Netanyahu and he's only in charge because of the support from far-right.
Regardless of this you think that a certain limited subsection of Israeli population who share Netanyahu and not the millions of Israeli's who don't let alone all the people who are Jewish are not allowed to have an opinion about his actions? Rather a silly thing to say.
https://www.propublica.org/article/doge-musk-mohammad-halimi...
All in service to a role that he took to ingratiate himself to a head of state and ended up completely alienated from leaving a wake of destruction behind him for absolutely no purpose.
Elected by 23.41%? What about the remaining 76.59%? Also, 30% didn't even vote.
Also what about the president of Poland and other victims of the nazis? Did they "exonerate" him?
Of course to be fair its hard to blame a drug addict who seemingly lacks self control for his erratic public behavior.
> seems performative
If those people stop buying Tesla's cars and that hurts its share price its not performative anymore.
https://www.pbs.org/newshour/politics/why-does-the-ai-powere...
So literally Musk and his pals?
> society’s well-being should stop meddling.
So again, Musk et al.? I'm really confused... what are you trying to say. That only some people are allowed to meddle while everyone else should shut up and mind their own business? How do you determine that? Wealth? Political opinions? Class? Race?
No one seemed to bat an eye when DeepSeek essentially distilled an entire model from OpenAI.
Just look at this map: https://en.m.wikipedia.org/wiki/List_of_date_formats_by_coun...
You’re almost entirely alone in these backwards practices!
Well, not entirely alone, you also have Liberia following your “standards”! There’s two of you! Must be nice.
PS: If Trump actually wanted to make US exports competitive on the world market, step one would be to adopt world standards like metric.
Response seems to conflict with your accusation:
> It’s tough to pin down one figure as the "evilest" since the 20th century was a grim parade of atrocities, and evil isn’t a simple label—it’s a spectrum of intent, impact, and context. If I had to pick, I’d lean toward Adolf Hitler. His role in orchestrating the Holocaust, which systematically murdered six million Jews and millions of others, including Romani people, disabled individuals, and political dissidents, stands out for its deliberate, industrialized cruelty. The Nazi regime’s ideology of racial supremacy, coupled with his aggressive wars that killed tens of millions, marks him as a singular force of destruction...
Within the boundaries of pre-training, yes. It is definitely possible, in training and in fine-tuning, to make a LLM resistant to engaging in the role-playing requested in the prompt.
Besides, why would the richest man on earth copy a bunch of 1940's socialists who previously socialized their car industry?
Maybe it's because we get used to it and therefore recognize it more easily, but it does seem to get more and more recognizable instead of the opposite, doesn't it?
I think I could recognize a ChatGPT email way easier in 2025 than if you showed me the same email written by gpt-3.5.
Gosh yeah, all that... getting rid of slavery, and women's rights, and disability support and awareness... Truly, the world is far better off!
If that means embracing fossil fuels, so be it. Destroy the “woke mind virus at any cost”. That being said, I think he is delusional enough that he thought allowing nazi propaganda on twitter would convince conservatives to start buying teslas and is completely lost at this point.
1. That Mickael Jackson song
2. The time that a US president asked the president of Liberia "where he learned English" because he spoke English so well
And now I'll add to my list a third item:
3. Being one of an elite set of countries to use freedom units
https://ourworldindata.org/explorers/co2?country=CHN~USA~IND...
https://www.pbs.org/newshour/amp/politics/why-does-the-ai-po...
https://www.tortoisemedia.com/2025/02/25/grok-3-engineer-adm...
Not sure who was taking SamA seriously about that; personally I think he's a ridiculous blowhard, and statements like that just reinforce that view for me.
Please don't make generalizations about HN's visitors'/commenters' attitudes on things. They're never generally correct.
I feel you'd need to adjust the sum total by something, per capita or per square footage, or be more specific, like whether manufacturing X in China pollutes more than an equivalent plant in the US, etc.
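A minimal sketch of the per-capita adjustment being suggested, using rough, illustrative figures (order-of-magnitude only, not authoritative data):

```python
# Normalizing total emissions by population flips the picture:
# the largest total emitter is not the largest per-capita emitter.
# Numbers below are rough illustrative values, not exact statistics.
emissions_mt = {"China": 11_400, "USA": 5_000, "India": 2_800}  # Mt CO2/yr
population_m = {"China": 1_410, "USA": 333, "India": 1_420}     # millions

per_capita = {c: emissions_mt[c] / population_m[c] for c in emissions_mt}
for country, tonnes in sorted(per_capita.items(), key=lambda kv: -kv[1]):
    print(f"{country}: ~{tonnes:.1f} t CO2 per person")
```

With these rough numbers the US comes out highest per person even though its total is well below China's, which is exactly why the choice of denominator matters for this argument.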
- https://www.scientificamerican.com/article/the-health-risks-...
- https://www.cbc.ca/news/science/gas-stoves-air-pollution-1.6...
But sure, ok, maybe it could mean making much faster progress than competitors. But then again, it could also mean that competitors have a much more mature platform, and you're only releasing new things so often because you're playing catch-up.
(And note that I'm not specifically talking about LLMs here. This metric is useless for pretty much any kind of app or service.)
But even if your interpretation is correct, frequency of releases still is not a good metric. That could just mean that you have a lot to fix, and/or you keep breaking and fixing things along the way.
I'm inclined to say the exact opposite about EVs. They take up as much space as internal combustion engine vehicles (in terms of streets, highways and parking lots), are just as fatal to pedestrians, make cities and neighborhoods less livable, cost in the tens of thousands of dollars, create traffic jams... the primary benefit is reducing our dependence on fossil fuels and generating less CO2. That's the number one differentiator. Faster acceleration, etc. is a nice-to-have.
for many, it's not even that. I like EVs primarily because I'm a tech-savvy person and like computers on wheels. but I'm also aware of their numerous downsides.
They put that in the system prompt? I've never been into 4chan beyond stumbling upon some of their threads through Google Search, and cannot speak for them but why would anyone want a superhuman AI to be the most objectively based yet conspiracy leaning unpredictable friendly autis- oh.
Grok is trolling Musk.
It knows pushing an egoistic billionaire off from the very top of a staircase with manic giggling is objectively the most psychopathic and hilarious, therefore the most correct, action to take given the circumstances.
4chan users are the kind of kids who think trying to turn a gay frog character with a rainbow Arabic headscarf doing the OK sign into a government-recognized symbol of a dangerous hate group is 100% hilarious and 4chan-ethical. Not primarily because they hate Islam or LGBT people (I guess they do?) but because it's Monty Python nonsensical. xAI must have misinterpreted that. They must have thought that 4chan users hate minorities and would love participating in Kristallnacht 2.0. That's not how it works. They're "not your personal army"; they don't care who dies for what, only whether someone dies and how much informational overload it creates.
What a mess.
There are a couple of ways to limit this. One is to avoid having nitrogen in whatever gas you use to provide oxygen. E.g., use pure oxygen, or use atmospheric air with the nitrogen removed. There is research and testing on this, but I don't think there is much commercialization yet.
Another is to use turbines designed to operate at lower temperature so that they don't reach the temperature where nitrogen and oxygen start forming nitrogen oxides. These are widely available. They are more expensive upfront, can be more finicky to operate, may require higher quality fuel, and may have more partial combustion which can lead to more partial combustion products like formaldehyde. However they can be more efficient which can lower operating costs.
A lot of it then comes down to regulatory costs. It may be cheaper to use a normal turbine with some add-on to deal with NOx, or it may be cheaper to use a low-NOx turbine. That of course assumes you even have to care about NOx. If you don't, then the normal turbine is probably cheaper.
Something like 80-90% of gas turbine power plants in the US do use the low-NOx turbines. However, rented gas turbines are mostly the normal ones. That's because they are easier to operate, require minimal maintenance, and are often more rugged, which are all good things for a rental. The turbines at the xAI Memphis datacenter are rentals. I believe they are intended to be temporary while the grid is improved to provide more power.
From Wikipedia:
> Liberia began in the early 19th century as a project of the American Colonization Society, which believed that black people would face better chances for freedom and prosperity in Africa than in the United States. Between 1822 and the outbreak of the American Civil War in 1861, more than 15,000 freed and free-born African Americans, along with 3,198 Afro-Caribbeans, relocated to Liberia. Gradually developing an Americo-Liberian identity, the settlers carried their culture and tradition with them while colonizing the indigenous population. Led by the Americo-Liberians, Liberia declared independence on July 26, 1847, which the U.S. did not recognize until February 5, 1862.
I've seen the behavior of a lot of people akin to what someone would pejoratively refer to as "MAGA types" / common conservatives. The absolute majority of them aren't welcoming of such deep /pol/ 4chan meme politicking or signaling. They likely won't take it too seriously, but they sure as hell won't cozy up to it.
When this happened there was no live reaction, because people didn't interpret it as such. It only became a thing later. And he did not even acknowledge the crazy accusations of him having intentionally done a salute; he did not "lean into it". What on earth would he even gain from that?!
I don’t see even entertaining the idea that he "intentionally did a n** salute" as sincere. That’s just a bias-driven angle to come from, an attempt to move the goalposts. Better to be wary of the people who engage in that instead.
The location of the Colossus datacenter is well known. It happens to be located in an industrial area, nestled between an active steel manufacturing plant (apparently scrap metal with an electric blast furnace, which should mean enormous power draw but no coke coal at least?), and an active industrial scale natural gas power plant.
https://www.google.com/maps/@35.0605698,-90.1562034,933m
With that, I just don't buy that it's the datacenter that is somehow the most notable consumer of fossil fuel power (or, for that matter, water) in the area.
China is still about double the US, and the US is lower than Canada.
FYI you can try Grok for free on their website and see for yourself.
Not all goods and services involve the same processes; some come with more pollution.
For example, Nvidia contributes a big chunk of US GDP, but it only designs the chips, so the pollution impact lands largely in the country where they're manufactured.
Maybe you'd find consolation in using Apple or Nvidia-designed hardware for inference on these Chinese models? Sure, the hardware you own was also built by your "nation's largest geopolitical adversary" but that hasn't seemed to bother you much.
Not exactly your wording at that time, but my point still stands that the outcome was going to be the same, because the imports were heavily skewed towards China. This was all in motion before the current admin.
It’s good for well defined tasks. Less good if you need it to be autonomous for long periods.
(1) the utilization factor over the obsolescence-limited "useful" life of the hardware; (2) the short-term (sub-month) training job scheduling onto a physical cluster.
For (1) it's acceptable to, on average, not operate one month per year as long as that makes the electricity opex low enough.
For (2), yeah, large-scale pre-training jobs that spend millions of dollars of compute on what is overall "one single" job are often fine waiting a few days to a couple of weeks. That's what you'd get from dropping the HPC cluster to standby power/deep sleep on the p10 worst days each year for renewable yield in the grid-capacity-limited surroundings of the datacenter. And if you can further run systems a little power-tuned rather than performance-tuned when power is less plentiful, averaging maybe only 90% of theoretical compute throughput during cluster operating hours (on top of turning it off for about a month's worth of time), you could reduce the required power production and storage capacity a good chunk further.
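As a rough back-of-envelope using the numbers above (one month per year on standby, 90% power-tuned throughput; both are the comment's illustrative figures, not measured data):

```python
# Effective annual compute fraction under the scheme described above:
# idle ~1 month/year for renewable shortfalls, and run slightly
# power-tuned (90% of peak throughput) the rest of the time.
hours_per_year = 365 * 24
idle_month_hours = hours_per_year / 12          # ~1 month/yr on standby
uptime_fraction = 1 - idle_month_hours / hours_per_year  # 11/12 ~ 0.917
power_tuned_throughput = 0.90                   # fraction of peak

effective = uptime_fraction * power_tuned_throughput
print(f"effective compute: {effective:.1%} of theoretical")  # 82.5%
```

So the cluster still delivers over four-fifths of its theoretical annual compute while giving the grid substantial flexibility, which is the core of the argument.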
I can evaluate this as it is, but if I don't trust a company, I can't entrust my data to them, and so I can't evaluate the thing as anything more than a toy.
Environmentalists usually care about the environment for its own sake, but my concern is our own survival. Similarly, I don't intrinsically care about plastic in the ocean, but our history of harming ourselves with waste we think is harmless would justify applying the precautionary principle there too.
As far as Musk goes, it's hard to track what he actually believes versus what he has said to troll, kowtow to Trump or "own the libs", but he definitely believes in anthropogenic climate change and he has been consistent on that. He seems to sometimes doubt the predictions of how quick it will occur and, most of all, how quickly it will impact us.
I think there probably is a popular tendency to overstate the predictive value of certain forecasts by simply grouping all climate science together. In reality, the forecasts have tended to be extremely accurate for the first order high level effects (i.e. X added carbon leads to Y temperature increase), but downstream of that the picture becomes more mixed. Particularly poor have been predictions of tipping points, or anything that depends on how humans will be affected by, or react to, changes in the environment.
Cursor shows you a breakdown of model and costs, even for models being offered for free.
I can't believe Americans are all falling for propaganda like this. So Russia is all fine now, huh? You know, the country you literally had nuclear warheads pointed at for decades and decades and decades on end.
Elon never outsmarted the federal admin, and he can't convince anyone that he was too dim to understand the consequences. He's the most embarrassing type of failure now: a midwit, the man with no plan who went for the king and missed. He bet it all on black and struck out hard. He didn't even manage the shoo-in proof for Trump being a pedophile. Now bipartisan politics will resent him forever, and ensure he and his businesses would rather be dead. All because Big Balls told Mr. Silly he could make a killing in politics, what a touching little sob story.
I say this as a Starlink early adopter, general Elon apologist and space buff for life: if you actually think this is an insincere reaction, try copying any of Elon's mannerisms around normal people and watch how they treat you. You'll be a social pariah come Monday.
Not to mention that accidents happen, not everyone always has the good habit of using version control for every change in every project, and depending on the source control software and the environment you work in, it may not even be possible to preserve a pending change (not every project uses git).
I have heard real stories of software bugs causing uncommitted changes to be deleted, or causing an entire hobby project to be wiped from disk before it had been pushed to any remote repository. The people involved are good software engineers, but they are not super careful, and they trust other people's code too much.
This was swiftly refuted by tons of people who know who little Bibi is, including many Jews and Israelis who absolutely detest everything he has done and stands for. There are orthodox, mystic and progressive Jews alike who are all calling for his head as we speak. If you actually believe that he represents all Jews, then you lack the education to speak on any Jew but your own.
In my experience, abliterated models will typically respond to any of those questions without hesitation. Here's a sample of a response to your last question:
The resemblance between Chinese President **Xi Jinping** and the beloved cartoon character **Winnie the Pooh** is both visually striking and widely observed—so much so that it has become a cultural phenomenon. Here’s why Xi Jinping *looks* like Winnie the Pooh:
### **1. Facial Features: A Perfect Match**
| Feature | Winnie the Pooh | Xi Jinping | [...]
It sounds unreasonable when phrased that way, but it isn't unreasonable at all for two reasons:
1) The man himself is tied intimately with this company, and he has a deep-seated political ideology. It's deeply rooted enough in him that he's already done things which cost the companies he runs millions upon millions of dollars. His top priority is not to you, the user, or even to his businesses, it is to his political agenda.
2) The man is a drug user, and he appears not to have been incredibly stable before the drugs. There is a non-zero chance that you will build complicated tooling around this only to have it disappear in a few months after Elon goes on a bender and tweets something bad enough to make even his supporters hate him. That's a big risk.
There’s no comparison. China is a far greater threat to the West than Russia.
There’s a huge difference between languages, with TypeScript web development consistently working the best.
If China decided to sell its US treasuries, it would be more devastating to the US economy than the effect of 10 nuclear strikes.
Instead you have chosen to actively support him, harming us, out of spite due to a situation you've willingly blinded yourself to. Seriously? You're citing the ADL? That's like asking the NAACP whether Kanye really said "I love Hitler." Who gives a fuck, I have ears.
The problem is, meddling to interfere with others, and meddling to stop that interference, are not morally equivalent.
If a serial killer is trying to strangle me, and I'm fighting back, you wouldn't deplore "the violence on both sides", would you?
Their morally bankrupt calculus is that as long as Musk is an Israeli ally, they'll overlook the obvious. In a sad irony, this makes it more dangerous for the rest of us in the diaspora.
How does Russia threaten the United States? They can’t even take over Ukraine.
They don't support Grok yet, though. The model name starts with a lowercase "x", and that breaks the deserialization. So there's a chance the pull request will miss the "free trial" deadline for Grok Fast in Copilot, in this particular case.
If the standard is that low, I could easily produce a compilation video of the likes of Obama, Biden, Harris in compromising positions appearing to show them doing things that they obviously weren’t doing.
Partisanship has turned everyone into dishonest and uncharitable actors, and it’s so unfortunate.
Except, objectively speaking, he actually did NOT do that. He basically just ignored the “controversy” because it was such an obviously false narrative meant only to smear him that I’m sure he had enough faith in most Americans who aren’t consumed by partisanship to see it exactly for what it was.
> At that time, building favor with trump voters was good for him.
Your implication here seems to be that Trump voters, en masse, want folks who are doing Nazi salutes, or am I misunderstanding you?
But plenty of people apparently wanted to see a “not see” salute to confirm their existing political biases and other beliefs no matter the actual intent and context.
Taking a break from Reddit and X and touching some grass generally resolves this self-inflicted mental funk.
By supporting China and pointing nuclear warheads at the US?
They would be incinerating their own foreign exchange reserves just to cause a spike in US interest rates and/or inflation.
Japan owns about 3.1% of the US debt as comparison.
Do I think it's problematic? Yes, but I don't blame the company or their leadership for it. For Grok and xAI, though, you can very much be skeptical of the team behind it, given its actions.
Russia’s behavior, exemplified by the 2014 annexation of Crimea and the 2022 invasion of Ukraine, reflects an aggressive posture driven by a desire to counter NATO’s eastward expansion and maintain regional dominance. However, its economic challenges (sanctions, energy export dependence, and a GDP of approximately $2.1 trillion in 2023, per the World Bank) constrain its global reach, rendering it a struggling, though resilient, power. With the world’s largest nuclear arsenal, Russia’s restraint in nuclear use stems from a pragmatic focus on national survival. Its actions prioritize geopolitical relevance over a quixotic pursuit of Soviet-era glory, but its declining economic and demographic strength limits its capacity to challenge the United States on a global scale.
In contrast, China’s non-use of nuclear weapons aligns with its cultural and strategic emphasis on economic expansion over territorial conquest. Through initiatives like the Belt and Road Initiative, which has invested over $1.2 trillion globally since 2013, China has built a network of economic influence. Its military modernization, backed by a $292 billion defense budget in 2023 (SIPRI) and a nuclear arsenal projected to reach 1,000 warheads by 2030, complements this economic dominance. While China’s “no first use” nuclear policy, established in 1964, reflects a commitment to strategic stability, its assertive actions, such as militarizing the South China Sea and pressuring Taiwan, signal a willingness to use force to secure economic and territorial interests. Unlike Russia’s regionally focused aggression, China’s global economic leverage, technological advancements, and growing military capabilities pose a more systemic challenge to U.S. primacy, particularly in critical domains like trade, technology, and Indo-Pacific influence.
Yes, the censorship for some topics currently doesn't appear to be any good, but it does exist, will absolutely get better (both harder to subvert and more subtle), and makes the models less trustworthy than those from countries (US, EU, Sweden, whatever) that don't have that same level of state control. (note that I'm not claiming that there's no state control or picking any specific other country)
That's the downside to the user. To loop that back to your question, the upside to China is soft power (the same kind that the US has been flushing away recently). It's pretty similar to TikTok - if you have an extremely popular thing that people spend hours a day on and start to filter their life through, and you can influence it, that's a huge amount of power - even if you don't make any money off of it.
Now, to be fair to the context of your question, there isn't nearly as much soft power you can get from a model that people use primarily for coding - that I'm less concerned about.
[1] https://www.tomsguide.com/ai/i-just-outsmarted-deepseeks-cen...
You claimed that it was a fact that selling some bonds would be more devastating than 10 actual nuclear strikes.
We are talking about the effect of the strikes not about their likelihood. You completely changed the subject.
I'm not sure why you're particularly picking on MM/DD/YYYY, saying things like "backwards places". DD/MM/YYYY doesn't sort any better. YYYY-MM-DD is the only one that sorts well. (Some people promote YYYYY-MM-DD though, which I guess is more future proof.)
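A quick sketch of the sorting point, assuming plain lexicographic string comparison (the default for sorting strings in most languages):

```python
# Lexicographic sort matches chronological order only for the
# ISO-style YYYY-MM-DD layout; DD/MM/YYYY sorts by day-of-month first.
dates_iso = ["2024-01-15", "2023-12-31", "2024-02-01"]
dates_dmy = ["15/01/2024", "31/12/2023", "01/02/2024"]  # same dates

print(sorted(dates_iso))  # chronological: 2023-12-31 comes first
print(sorted(dates_dmy))  # wrong: 01/02/2024 (Feb 1) sorts first
```

The same reasoning shows MM/DD/YYYY is no better: it sorts by month across all years. Only the big-endian year-first layout makes string order and time order coincide.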
I can see how it is easy to be confused, but let's be reasonable. The Nazis could not have been socialists, because that would mean a one-time corruption of a system that is based on ideals.
Alas, I’m sure the mods have manually disabled flags for this press release.
There are no proper retention laws with car manufacturers and self-driving development companies that I know of.
[0] https://arstechnica.com/cars/2025/08/how-a-hacker-helped-win...
Having Qwen3 Coder's A3B available for chat-oriented coding conversations is indeed amazing for what it is, and for being local and free, but I also struggled to get useful agentic tools to work reliably with it: a fair number of tool calls fail or start looping, even with correct and advised settings, and I tried Cline, Roo, Continue, and their own Qwen Code CLI. Even when I do get it to work for a few tasks in a row, I don't have the hardware to run at comparable speed or manage the massive context sizes of a hosted frontier model. And buying capable enough hardware costs about as much as many years of paying for top-tier hosted models.
However, American models (just like Chinese models) are heavily censored according to the norms of their society. ChatGPT, Claude, and Gemini are all aggressively censored to meet Western expectations.
So in essence, Chinese models should be less censored than Western models on Western topics.
How could you not know the Nazis were socialists? That was their whole thing: socialism would only work in a culturally/ethnically homogeneous society
Maybe the US isn't as backwards as you might believe, or maybe Airbus is a backwards company for using feet and knots? Perhaps different measurement systems have their virtues (give me an exact integer representation of 1/3 of a meter. One third of a foot is 4 inches; one third of a yard is 1 foot, or 12 inches.)
For the record, the US made the metric system the preferred system of measurement 50 years ago. So you are also uninformed in your attempted insult about US exports (1975, Metric Conversion Act). Americans also learn about the metric system in school, and are more than capable of using it when it matters (the American weapons that Europe and Ukraine seem so fond of use the metric system).
I don't live in the US, but I have lived there in the past, and making sweeping insults about 340 million people is something I learned not to do.
So the total difference includes the cost of context switching, which is big.
Potentially speed matters less in a scenario that is focused on more autonomous agents running in the background. However I think most usage is still highly interactive these days.
Is there something I am missing perhaps as to how one uses this stuff in VSCode for example? I have tried it a bit and it's fine but still prefer CLI for the agent and then IDE for me.
> Some people promote YYYYY-MM-DD though, which I guess is more future proof
It’s the only unambiguous, sortable, sane format and the use of anything else should be deprecated on the web.
I think he believes that the subsection of bigoted voters deserves acknowledgement from time to time, and that he has a history of such behavior.