> The rushed deadline made proper safety testing impossible. GPT-4o was a multimodal model capable of processing text, images, and audio, and it required extensive testing to identify safety gaps and vulnerabilities. To meet the new launch date, OpenAI compressed months of planned safety evaluation into just one week, according to reports.
> When safety personnel demanded additional time for “red teaming”—testing designed to uncover ways that the system could be misused or cause harm—Altman personally overruled them.
> The rushed GPT-4o launch triggered an immediate exodus of OpenAI’s top safety researchers. Dr. Ilya Sutskever, the company’s co-founder and chief scientist, resigned the day after GPT-4o launched.