Nobody knows how to build with AI yet

(worksonmymachine.substack.com)
526 points | Stwerner | source
d00mB0t ◴[] No.44617032[source]
"I'd wander into my office, check what Claude had built, test it real quick. If it worked, great! Commit and push."

Man, I'm going to make so much money as a Cybersecurity Consultant!

replies(3): >>44621541 #>>44624922 #>>44625666 #
sureglymop ◴[] No.44624922[source]
I generally think the same as you, since I work in the same field. A long time ago I thought to myself: if AI adoption grows exponentially, there is a chance the number of security vulnerabilities it introduces grows at the same rate.

However, what we are maybe not considering enough is that widespread AI adoption will almost certainly shift the standards for cybersecurity as well. If everyone uses AI, gets used to its quirks and mistakes, and is forgiving when someone else uses it (since they use it too), the bar for robust and secure systems could drop to match. At that point your services as a cybersecurity consultant are no longer in as much demand: any company that would need them can simply point to all the other companies that also care less and do nothing about the security issues introduced by the AI everyone uses. Legal and regulatory bodies would also have to adjust, since standards cannot be enforced if no one can adhere to them.

replies(1): >>44627312 #
stephenlf ◴[] No.44627312[source]
I don’t follow. Cybersecurity has always been about reducing the risk of costly cyber attacks. That hasn’t changed. It’s not like suddenly companies will stop caring that their software has been locked down by ransomware, or that their database leaked and now they have to pay a nine-figure fine. It’s not standards for standards’ sake (though it can feel that way). It’s loss prevention.
replies(1): >>44627707 #
sureglymop ◴[] No.44627707[source]
Sure, in the case of ransomware, phishing the CEO, etc., you are right. But for most other cases that AI would affect, companies wouldn't care unless regulatory consequences forced them to.