491 points todsacerdoti | 12 comments
1. ants_everywhere ◴[] No.44383018[source]
This is signed off primarily by RedHat, and they tend to be pretty serious/corporate.

I suspect their concern is not so much whether users own the copyright to AI output, but rather the risk that AI will spit out code from its training set that belongs to another project.

Most hypervisors are closed source and some are developed by litigious companies.

replies(2): >>44383228 #>>44383236 #
2. duskwuff ◴[] No.44383228[source]
I'd also worry that a language model is much more likely to introduce subtle logical errors, potentially ones which violate the hypervisor's security boundaries - and a user relying heavily on that model to write code for them will be much less prepared to detect those errors.
replies(1): >>44383735 #
3. blibble ◴[] No.44383236[source]
> but rather the risk that AI will spit out code from its training set that belongs to another project.

this is everything that it spits out

replies(2): >>44383692 #>>44389149 #
4. ants_everywhere ◴[] No.44383692[source]
This is an uninformed take
replies(2): >>44383761 #>>44384595 #
5. ants_everywhere ◴[] No.44383735[source]
Generally speaking, AI will make it easier to write more secure code. Tooling and automation help a lot with security, and AI makes it easier to write good tooling.

I would wager good money that in a few years the most security-focused companies will be relying heavily on AI somewhere in their software supply chain.

So I don't think this policy is about security posture. No doubt human experts are reviewing the security-relevant patches anyway.

replies(4): >>44384267 #>>44385694 #>>44387290 #>>44388049 #
6. Groxx ◴[] No.44383761{3}[source]
It is a legally untested take
7. tho23i4234324 ◴[] No.44384267{3}[source]
I'd doubt this very much: LLMs hallucinate API calls and commit all sorts of subtle errors that you need to catch (especially if you're working on proprietary problems they're not trained on).

It's a good replacement for Google, but probably nothing close to what it's being hyped up to be by the capital allocators.

8. otabdeveloper4 ◴[] No.44384595{3}[source]
No, this is an uninformed take.
9. OtherShrezzing ◴[] No.44385694{3}[source]
While LLMs are really good at generating content, one of their key weaknesses is their (relative) inability to detect _missing_ content.

I'd argue that the most impactful software security bugs of the last couple of decades (Heartbleed, etc.) have been errors of omission, rather than errors of inclusion.

This means LLMs are:

1) producing lots more code to be audited

2) poor at auditing that code for the most impactful class of bugs

That feels like a dangerous combination.
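
To make the omission point concrete, here is a minimal sketch (in Python, not the actual OpenSSL C code) of the shape of the Heartbleed bug: the vulnerability is the single check that isn't there, so there is no incriminating line for a reviewer, or a model, to flag.

```python
import struct

def handle_heartbeat(message: bytes) -> bytes:
    """Echo a heartbeat payload back to the peer (Heartbleed-shaped sketch)."""
    # The peer's request starts with the length it *claims* the payload has.
    (claimed_len,) = struct.unpack(">H", message[:2])
    payload = message[2:]

    # Error of omission: in the vulnerable C code this bounds check was simply
    # absent, so a peer claiming a length larger than what it actually sent got
    # adjacent memory echoed back. Nothing in the rest of the function looks
    # wrong; the bug is the line that is missing.
    if claimed_len > len(payload):
        raise ValueError("claimed payload length exceeds actual payload")

    return payload[:claimed_len]
```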

10. guappa ◴[] No.44387290{3}[source]
> Generally speaking AI will make it easier to write more secure code

In my personal experience, not at all.

11. latexr ◴[] No.44388049{3}[source]
> Generally speaking AI will make it easier to write more secure code.

https://www.backslash.security/press-releases/backslash-secu...

12. golergka ◴[] No.44389149[source]
When a model trained on trillions of lines of code knows that, inside a `try` block, the tokens `logger` and `.` have a high probability of being followed by the `error` token but an almost zero probability of being followed by the `find` token, which project does that knowledge belong to?
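
A hypothetical snippet to illustrate the point (the function and its details are made up; only the logging pattern matters):

```python
import logging

logger = logging.getLogger(__name__)

def load_config(path: str) -> dict[str, str]:
    """Made-up helper; the except block below is the part in question."""
    try:
        with open(path) as f:
            return dict(line.strip().split("=", 1) for line in f if "=" in line)
    except OSError as exc:
        # After `logger.` in an except block, `error` (or `exception`) is by far
        # the most probable next token across millions of codebases; `find` is
        # essentially never seen here. That regularity is a statistical property
        # of the whole corpus, not a line copied from any single project.
        logger.error("failed to load config from %s: %s", path, exc)
        return {}
```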