
755 points | MedadNewman | 1 comment
lxe ◴[] No.42891381[source]
You can also intercept the XHR response. Generation will still stop, but the UI won't update, revealing the thoughts that led to the content filter:

    // Drop any line mentioning content_filter; pass non-string
    // values (e.g. responseType 'json') through untouched.
    const filter = t => typeof t === 'string'
      ? t.split('\n').filter(l => !l.includes('content_filter')).join('\n')
      : t;

    ['response', 'responseText'].forEach(prop => {
      const orig = Object.getOwnPropertyDescriptor(XMLHttpRequest.prototype, prop);
      Object.defineProperty(XMLHttpRequest.prototype, prop, {
        // Filter every read before the UI sees it.
        get: function () { return filter(orig.get.call(this)); },
        configurable: true
      });
    });
Paste the above in the browser console ^
replies(2): >>42891427 #>>42891516 #
tills13 ◴[] No.42891516[source]
insane that this is client-side.
replies(8): >>42891775 #>>42891802 #>>42892213 #>>42892242 #>>42892457 #>>42896609 #>>42896617 #>>42896757 #
Gigachad ◴[] No.42896617[source]
It’s because they want to show the output live rather than nothing for a minute. But that means once the censor system detects something, you have to send out a request to delete the previously displayed content.

This doesn’t matter much, because the censoring itself isn’t that important to them; they just want to avoid news articles about their system generating something bad.
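A minimal sketch of the pattern described above (the event names `token` and `content_filter` and the `makeRenderer` helper are hypothetical, for illustration only): tokens are rendered live as they stream in, and a later moderation event retroactively replaces everything already shown.

    // Hypothetical client-side stream handler: render tokens as they
    // arrive, then wipe the displayed text if a moderation event lands.
    function makeRenderer() {
      let shown = '';
      return {
        handle(event) {
          if (event.type === 'token') {
            shown += event.text;           // show output live
          } else if (event.type === 'content_filter') {
            shown = event.replacement;     // delete previously displayed content
          }
          return shown;
        }
      };
    }

    const r = makeRenderer();
    r.handle({ type: 'token', text: 'The model began to answer' });
    // moderation fires only after content was already on screen:
    const final = r.handle({
      type: 'content_filter',
      replacement: 'Sorry, I cannot help with that.'
    });

Because the replacement happens purely in the client, intercepting the event (as in the snippet above) leaves the original tokens visible.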

replies(3): >>42896943 #>>42897228 #>>42897366 #
andai ◴[] No.42897366[source]
Gemini does this too. There was a clip of what it does when you ask it for examples of Google's unethical behavior... the kids call this "watching it get lobotomized in real time."
replies(2): >>42897623 #>>42901969 #
freehorse ◴[] No.42897623[source]
Have seen ChatGPT doing the same too, probably all of them do.