
493 points by todsacerdoti | 1 comment
ants_everywhere ◴[] No.44383018[source]
This is signed off primarily by RedHat, and they tend to be pretty serious/corporate.

I suspect their concern is not so much whether users own the copyright to AI output, but rather the risk that AI will spit out code from its training set that belongs to another project.

Most hypervisors are closed source and some are developed by litigious companies.

replies(2): >>44383228 #>>44383236 #
duskwuff ◴[] No.44383228[source]
I'd also worry that a language model is much more likely to introduce subtle logical errors, potentially ones which violate the hypervisor's security boundaries - and a user relying heavily on that model to write code for them will be much less prepared to detect those errors.
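To make that concrete, here is a minimal, hypothetical C sketch (not taken from any real hypervisor) of the kind of subtle error I mean: an off-by-one in a bounds check on a guest-controlled index that lets the guest read host memory just past a device's register table, i.e. across the isolation boundary. It looks plausible at a glance and is easy to wave through review if you're leaning on the model that wrote it.

    #include <stdint.h>
    #include <stddef.h>

    #define NUM_REGS 16
    static uint64_t regs[NUM_REGS];   /* host-side emulated device registers */

    /* BUG: '<=' admits idx == NUM_REGS, one element past the end of regs[],
     * so a guest-chosen index can read adjacent host memory. */
    uint64_t mmio_read(size_t idx)
    {
        if (idx <= NUM_REGS)          /* should be: idx < NUM_REGS */
            return regs[idx];
        return 0;
    }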
replies(1): >>44383735 #
ants_everywhere ◴[] No.44383735[source]
Generally speaking AI will make it easier to write more secure code. Tooling and automation help a lot with security and AI makes it easier to write good tooling.

I would wager good money that in a few years the most security-focused companies will be relying heavily on AI somewhere in their software supply chain.

So I don't think this policy is about security posture. No doubt human experts are reviewing the security-relevant patches anyway.

replies(4): >>44384267 #>>44385694 #>>44387290 #>>44388049 #
guappa ◴[] No.44387290[source]
> Generally speaking AI will make it easier to write more secure code

In my personal experience, not at all.