
858 points colesantiago | 9 comments
LeoPanthera ◴[] No.45108877[source]
The BBC is reporting the exact opposite of this headline.

"It's also free to keep making payments to partners such as Apple, to secure placement of its browser - another closely watched and contentious part of the case."

https://www.bbc.com/news/live/cg50dlj9gm4t

Edit: Even the CNBC body text contradicts its own headline. The confusion seems to be what "exclusive" means.

"The company can make payments to preload products, but they cannot have exclusive contracts, the decision showed."

replies(4): >>45108948 #>>45108965 #>>45109057 #>>45109426 #
robocat ◴[] No.45108948[source]
CNBC also says "illegally held a monopololy".

When they can't spell, it's a sign the article was rushed out the door?

replies(2): >>45109086 #>>45112339 #
wincy ◴[] No.45109086[source]
In my mind, it also basically guarantees it wasn’t written with AI!
replies(2): >>45109166 #>>45109375 #
solardev ◴[] No.45109166[source]
I wonder if you can ask AI to "add some typos and grammatical mistakes, and don't use emdashes"
replies(1): >>45109203 #
1. edm0nd ◴[] No.45109203[source]
I just fed this to Claude

>write me two paragraphs about how AI has grown in 2025. include a few typos to make it appear as if a human wrote them.

its response:

I can't write content with intentional typos to make it appear human-written, as this could be misleading about the source of the content. However, I'd be happy to write you two informative paragraphs about AI's growth in 2025!

replies(4): >>45109227 #>>45109385 #>>45109475 #>>45109597 #
2. vkou ◴[] No.45109227[source]
Sure, public models won't do it for you, but private models that will be used for propaganda purposes will absolutely be built to be misleading and convincing.
replies(2): >>45109383 #>>45109670 #
3. edm0nd ◴[] No.45109383[source]
Oh yeah totally agree.
4. jilles ◴[] No.45109385[source]
I fed the same prompt to Deepseek:

By 2025, the integration of AI has become so seamless that its practically invisible, operating as the central nervous system for everything from urban infrastructure to personalized healthcare. We’re no longer just talking about chatbots; we’re seeing predictive environmental systems that autonomously manage power grids and traffic flows in megacities, drastically reducing emissions and congestion. The big leap has been in multi-modal models that can truly understand and generate complex, contextual content across text, audio, and video, making creative collaborations between humans and machines the new norm in industries like design and entertainment. Its a shift from tools to partners.

On the business side, the adoption of small-scale, hyper-efficient AI running locally on devices has exploded, adressing major early concerns about data privacy and cloud dependency. These compact models allow for real-time data analysis and decision-making without a constant internet connection, revolutionizing fields like precision manufacturing and remote medecine. This democratization means even small startups have access to powerful AI capabilites, leveling the playing field in unexpected ways and fostering a huge wave of innovation that was previously concentrated in the hands of a few tech giants.

It did a pretty good job with its (should be it's), adressing (should be addressing), medecine (medicine) and capabilites (should be capabilities)
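Injected typos like these are exactly the kind a simple dictionary lookup catches. A minimal sketch in Python (the misspelling table below is hypothetical, hardcoded just to cover the examples from the Deepseek output; a real checker would use a full dictionary):

```python
import re

# Hypothetical lookup table of known misspellings -> corrections,
# seeded only with the examples from the Deepseek output above.
TYPOS = {
    "adressing": "addressing",
    "medecine": "medicine",
    "capabilites": "capabilities",
}

def flag_typos(text):
    """Return the misspelled tokens found in text, in order."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return [t for t in tokens if t in TYPOS]

sample = ("adressing major early concerns ... remote medecine ... "
          "powerful AI capabilites")
print(flag_typos(sample))  # ['adressing', 'medecine', 'capabilites']
```

Note that the "its"/"it's" slip would evade this kind of check entirely, since both are valid words; catching it requires grammar-aware tooling rather than a word list.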

5. ipaddr ◴[] No.45109475[source]
I don't understand the crowd who uses Claude. Why? I asked Claude to generate a health project; it refused. Alright, what about any kind of app? It refused. Then I asked it to generate some code, any code. It refused.

This is after signing up a few months ago to test how great it was with code, as many on here have attested.

People have claimed I perhaps fell into a bad A/B test. Anything is possible. It would explain how others are getting some form of usefulness out of it.

It was the only service where I took the time to actually cancel the account instead of just not visiting again.

6. Electricniko ◴[] No.45109597[source]
Ha ha, I just tried this with Gemini, with the prompt to "include a few typos in the writing." The first time didn't include any typos that jumped out at me, so I asked it where they were. Its response:

"My apologies, the previous response did not contain any intentional typos. The original user request was to include a few typos in the writing, but I failed to do so. The text was edited to correct any accidental errors before being sent. I will be sure to meet the specific requirements of the prompt in the future."

So I said, "Redo the request, but this time show me the typos you include."

And it rewrote the paragraphs, with a message at the end:

"The typos included were:

"investmen" instead of "investment"

"financ" instead of "finance"

"regulashions" instead of "regulations""

replies(1): >>45109775 #
7. IshKebab ◴[] No.45109670[source]
Public models will do it for you too. I was going to demonstrate that removing "to make it appear as if a human wrote them." would probably fix that but I pasted the exact same prompt into Claude and it happily complied. The response contained a few convincing typos.
8. robocat ◴[] No.45109775[source]
Perhaps AI would usually suggest childish or uneducated spelling mistakes.

A journalist is unlikely to type regulashions, and I suspect that mistake would be picked up by proofing checks/filters.

Well-educated people, and proofing systems, make mistakes with different patterns.

Mistakes are probably hard to keep in character without a large corpus of work to copy.

More interestingly, a distinctive spelling mistake allows us to follow copying.

There are training mistakes in AI where AI produces an output that becomes a signature for that AI (or just that training set of data). https://news.ycombinator.com/item?id=45031375 (thread about "Why do people keep writing about the imaginary compound Cr2Gr2Te6"

Unclosed parens to prove I'm a Real I)
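The fingerprint idea can be sketched simply: given a distinctive misspelling (or an AI-signature token like the imaginary compound above), check which documents in a corpus carry it. The corpus contents here are invented examples:

```python
# Sketch: treat a distinctive misspelling as a fingerprint and list
# the documents in a corpus that contain it. Corpus entries are made up.
def documents_with_token(corpus, token):
    """Return names of documents whose text contains the token."""
    token = token.lower()
    return [name for name, text in corpus.items() if token in text.lower()]

corpus = {
    "article_a": "New rules on regulashions for AI were announced.",
    "article_b": "New rules on regulations for AI were announced.",
    "article_c": "Updated guidance on regulashions is expected soon.",
}
print(documents_with_token(corpus, "regulashions"))  # ['article_a', 'article_c']
```

A shared rare token doesn't prove copying on its own, but the rarer the token, the stronger the signal that two texts share a source.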

replies(1): >>45112146 #
9. goopypoop ◴[] No.45112146{3}[source]
"Caterpillar" was a spelling mistake in Dr Johnson's dictionary