ants_everywhere:
This is signed off primarily by RedHat, and they tend to be pretty serious/corporate.

I suspect their concern is not so much whether users own the copyright to AI output, but rather the risk that AI will spit out code from its training set that belongs to another project.

Most hypervisors are closed source and some are developed by litigious companies.

duskwuff:
I'd also worry that a language model is much more likely to introduce subtle logical errors, potentially ones which violate the hypervisor's security boundaries - and a user relying heavily on that model to write code for them will be much less prepared to detect those errors.
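
For illustration only (not from the thread, and the names struct vm_region and vm_region_write are invented), here is a minimal C sketch of the kind of subtle, boundary-violating logic error being described: a bounds check that looks correct but can be bypassed by a guest-controlled length.

    /* Hypothetical sketch of a guest-facing handler; names are made up. */
    #include <stdint.h>
    #include <string.h>

    struct vm_region {
        uint8_t *base;   /* host-side buffer backing a guest-visible region */
        size_t   size;
    };

    /* BUG: `off + len` can wrap around, so a huge guest-supplied `len`
     * passes the check and the memcpy writes outside base[0..size),
     * crossing the guest/host security boundary. */
    int vm_region_write(struct vm_region *r, size_t off, size_t len,
                        const uint8_t *guest_data)
    {
        if (off + len > r->size)        /* looks right, but can overflow */
            return -1;
        memcpy(r->base + off, guest_data, len);
        return 0;
    }

    /* Correct form: no arithmetic that can wrap. */
    int vm_region_write_safe(struct vm_region *r, size_t off, size_t len,
                             const uint8_t *guest_data)
    {
        if (off > r->size || len > r->size - off)
            return -1;
        memcpy(r->base + off, guest_data, len);
        return 0;
    }
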
ants_everywhere:
Generally speaking, AI will make it easier to write more secure code. Tooling and automation help a lot with security, and AI makes it easier to write good tooling (one small example of such tooling is sketched below).

I would wager good money that in a few years the most security-focused companies will be relying heavily on AI somewhere in their software supply chain.

So I don't think this policy is about security posture. No doubt human experts are reviewing the security-relevant patches anyway.
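
As one hedged, concrete instance of the "tooling and automation" referred to above (my illustration, not something proposed in the thread): a fuzz harness is the sort of small, automatable security tool in question. The target function parse_guest_packet is a made-up stand-in.

    /* Minimal libFuzzer harness (illustrative). Build with:
     *   clang -g -fsanitize=fuzzer,address harness.c target.c */
    #include <stdint.h>
    #include <stddef.h>

    int parse_guest_packet(const uint8_t *buf, size_t len);  /* code under test */

    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
    {
        /* Any crash or sanitizer report here is a bug surfaced automatically. */
        parse_guest_packet(data, size);
        return 0;
    }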

OtherShrezzing:
While LLMs are really good at generating content, one of their key weaknesses is their (relative) inability to detect _missing_ content.

I'd argue that the most impactful software security bugs of the last couple of decades (Heartbleed, for example; sketched below) have been errors of omission rather than errors of inclusion.

This means LLMs are:

1) producing lots more code to be audited

2) poor at auditing that code for the most impactful class of bugs

That feels like a dangerous combination.
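
A rough sketch of the errors-of-omission point above (heavily simplified, not the actual OpenSSL code): Heartbleed came down to a single missing length check, which is exactly the kind of absent line that is easy to overlook when auditing a large volume of generated code.

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* Echo back `claimed_len` bytes of an attacker-supplied heartbeat payload. */
    uint8_t *heartbeat_reply(const uint8_t *payload, size_t payload_len,
                             size_t claimed_len)
    {
        /* MISSING: if (claimed_len > payload_len) return NULL;
         * Without that one line, the memcpy reads past `payload` and leaks
         * adjacent process memory (keys, session data) back to the peer. */
        uint8_t *reply = malloc(claimed_len);
        if (reply == NULL)
            return NULL;
        memcpy(reply, payload, claimed_len);   /* over-read when claimed_len lies */
        return reply;
    }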