There were at least two other techniques back then.
The first was to render into a separate buffer (possibly in normal RAM, not video RAM), then, once the frame was done, copy the whole buffer to the screen at once, so every visible pixel gets changed only once.
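Roughly what that copy approach looked like in C, as a minimal sketch. The 320x200, 8-bit framebuffer at 0xA0000 (the VGA mode 13h layout) is just an assumed setup for illustration, and the helper names are mine, not from any particular codebase:

    #include <stdint.h>
    #include <string.h>

    /* Assumed setup: a 320x200, 8-bit linear framebuffer mapped at 0xA0000
       (the VGA mode 13h layout). Purely illustrative; on a modern OS you
       cannot poke that address directly. */
    #define SCREEN_W 320
    #define SCREEN_H 200
    #define VRAM ((volatile uint8_t *)0xA0000)

    static uint8_t back_buffer[SCREEN_W * SCREEN_H]; /* lives in normal RAM */

    static void put_pixel(int x, int y, uint8_t color)
    {
        back_buffer[y * SCREEN_W + x] = color;  /* cheap ordinary RAM write */
    }

    static void present_frame(void)
    {
        /* One bulk copy per frame: each video-RAM byte is written exactly
           once, no matter how often the back buffer was overdrawn. */
        memcpy((void *)VRAM, back_buffer, sizeof back_buffer);
    }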
The second was to render into another buffer that had to live in video RAM as well, then change the graphics hardware's registers so it scanned out that buffer to the monitor instead.
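A sketch of the register-flipping variant, assuming VGA-style CRTC start-address registers (indices 0x0C/0x0D reached through ports 0x3D4/0x3D5); port_write8() is a hypothetical port-I/O helper standing in for whatever the compiler or OS provides, and the page offsets are made-up values:

    #include <stdint.h>

    /* port_write8() is a hypothetical helper (compiler intrinsic or inline
       asm in practice); register indices follow the VGA CRTC convention. */
    extern void port_write8(uint16_t port, uint8_t value);

    #define CRTC_INDEX 0x3D4
    #define CRTC_DATA  0x3D5

    static uint16_t visible_page = 0x0000;  /* page currently scanned out */
    static uint16_t hidden_page  = 0x4000;  /* page being rendered into   */

    static void flip_buffers(void)
    {
        /* Repoint the scan-out hardware at the freshly rendered page ... */
        port_write8(CRTC_INDEX, 0x0C);                      /* start address high */
        port_write8(CRTC_DATA,  (uint8_t)(hidden_page >> 8));
        port_write8(CRTC_INDEX, 0x0D);                      /* start address low  */
        port_write8(CRTC_DATA,  (uint8_t)(hidden_page & 0xFF));

        /* ... then swap roles: next frame renders into the old visible page. */
        uint16_t tmp = visible_page;
        visible_page = hidden_page;
        hidden_page  = tmp;
    }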
They had different tradeoffs. Copying the whole buffer when done was expensive; changing an address register was cheap. But the details of that register were hardware-dependent, and there was no real graphics driver framework in place. Also, "flipping buffers" (as changing the address register was called) meant rendering the off-screen frame directly into video RAM, which was (IIRC) slower to access than normal RAM (basically a NUMA architecture), so depending on how often a pixel got overdrawn, rendering in normal RAM could be faster overall even with the final copy taken into account.
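To put very rough, made-up numbers on that overdraw argument: suppose a normal-RAM write costs 1 unit, a video-RAM write costs 3, and each pixel gets written 2.5 times per frame on average.

    render into video RAM, then flip:   2.5 * 3         = 7.5 units per pixel
    render into normal RAM, then copy:  2.5 * 1 + 1 * 3 = 5.5 units per pixel

With little overdraw (say 1.1 writes per pixel) the comparison goes the other way, which is the "it depends" in the paragraph above.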