> With AGI, Knowledge workers will be worth less until they are worthless.
The article you've linked fundamentally relies on the assumption that "the tasks can be done better/faster/cheaper by AIs". (Plus, of course, the assumption that AGI will be achieved at all, but without that one the whole discussion would be moot for lack of a subject, so I'm fine granting it.)
Nothing about AGI (as in "a machine that can produce intelligent thoughts on a given matter") implies that human and non-human knowledge workers would have any obvious leverage over each other. Just as my coworkers' existence doesn't hurt mine, a non-human intelligence poses no inherent threat. Not by definition, anyway.
Non-intelligent industrial robotics is well researched and widely available, yet we still have plenty of sweatshops because they turn out to be cheaper than robot factories. Not fun, not great, I'm not fond of it, but I'm taking it as a fact, because that's how things currently are. So I really wouldn't dare to assume, unquestioned, that "cheaper" would hold.
And then "better" isn't obvious either. Intelligence is intelligence, it can think, it can make guesses, it can make logical conclusions, and it can make mistakes too - but we've yet to see even the tiniest hints of "higher levels" of it, something that would make humans out of the league of thinking machines if we're ranking on some "quality" of thinking.
That leaves "faster" - and even that requires assuming we set aside any transhumanist ideas. But surely "faster" alone doesn't cut it?