AI 2027

(ai-2027.com)
949 points by Tenoke | 1 comment
beklein No.43572674
An older, related article from one of the authors, titled "What 2026 looks like", which has held up very well over time. Written in mid-2021 (pre-ChatGPT):

https://www.alignmentforum.org/posts/6Xgy6CAf2jqHhynHL/what-...

//edit: removed the referral tags from the URL

dkdcwashere No.43572850
> The alignment community now starts another research agenda, to interrogate AIs about AI-safety-related topics. For example, they literally ask the models “so, are you aligned? If we made bigger versions of you, would they kill us? Why or why not?” (In Diplomacy, you can actually collect data on the analogue of this question, i.e. “will you betray me?” Alas, the models often lie about that. But it’s Diplomacy, they are literally trained to lie, so no one cares.)

…yeah?