
1071 points | kentonv | 2 comments

I wasn't quite sure if this qualified as "Show HN" given you can't really download it and try it out. However, dang said[0]:

> If it's hardware or something that's not so easy to try out over the internet, find a different way to show how it actually works—a video, for example, or a detailed post with photos.

Hopefully I did that?

Additionally, I've put code and a detailed guide for the netboot computer management setup on GitHub:

https://github.com/kentonv/lanparty

Anyway, if this shouldn't have been Show HN, I apologize!

[0] https://news.ycombinator.com/item?id=22336638

RulerOf No.42159254
> I've never heard of anyone else having done anything like this. This surprises me! But, surely, if someone else did it, someone would have told me about it? If you know of another, please let me know!

I never had the tenacity to consider my build "finished," and definitely didn't have your budget, but I built a 5-player room[1] for DotA 2 back in 2013.

I got really lucky with hardware selection, but still ended up fighting various bugs over the years... diagnosing a broken video card was an exercise in frustration because the virtualization layer made BSODs impossible to see.

I went with local disk-per-VM because latency matters more than throughput, and I'd been doing iSCSI boot for such a long time that I was intimately familiar with the downsides.

I love your setup (thanks for taking the time to share this BTW) and would love to know if you ever get the local CoW working.
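(On the local CoW idea: mechanically, on a Linux initiator you could layer a local write store over the read-only iSCSI origin with a dm-snapshot target. An untested sketch -- device names are made up, and getting the equivalent under a Windows iSCSI boot is the hard part:

    # origin size in 512-byte sectors; /dev/sdb = the iSCSI disk
    SECTORS=$(blockdev --getsz /dev/sdb)

    # reads fall through to iSCSI, writes land on a local NVMe partition
    # ("P" = persistent CoW store, 8 sectors = 4 KiB chunks)
    dmsetup create gamedisk \
      --table "0 $SECTORS snapshot /dev/sdb /dev/nvme0n1p2 P 8"

    # the merged view then shows up as /dev/mapper/gamedisk

)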

One other tech comment: I can also confirm that those onboard 10G NICs are indeed trash, and would humbly suggest an Intel-based eBay special. You could still load iPXE (I assume you're using it) from the onboard NIC and keep it for WoL, but shift the netboot over to the add-in card via a script, and probably get better stability and performance.
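The chainload script could look roughly like this (untested sketch; the interface indexes and iSCSI target name are made up, not from your repo):

    #!ipxe
    # Loaded via PXE on the onboard NIC; hand the boot off to the add-in card.
    ifclose net0                 # drop the flaky onboard port
    ifopen net1                  # the Intel add-in NIC (index may differ)
    dhcp net1 || goto onboard
    # Boot this seat's Windows disk from the iSCSI target.
    sanboot iscsi:10.0.0.1::::iqn.2024-01.lan.example:seat1 || goto onboard
    :onboard
    # Fall back to the onboard NIC if the add-in card fails.
    ifopen net0
    dhcp net0
    sanboot iscsi:10.0.0.1::::iqn.2024-01.lan.example:seat1

The onboard NIC stays configured for PXE and WoL in the BIOS; iPXE just stops using it once the better card is up.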

[1]: https://imgur.com/a/4x4-four-desktops-one-system-kWyH4

replies(1): >>42159408 #
kentonv No.42159408
Hah, you really did the VM thing? A lot of people have suggested that to me but I didn't think it'd actually work. Pretty cool!

Yeah, I'm pretty sure my onboard 10G Marvell AQtion ethernet is the source of most of my stability woes. About half the time one of these machines boots up, Windows bluescreens within the first couple of minutes, and I think it has something to do with the iSCSI service crashing. I never had trouble in the old house where the machines had 1G networking -- but load times were painful.

Luckily if the machines don't crash in the first couple minutes, then they settle down and work fine...

Yeah I could get higher-quality 10G cards and put them in all the machines but they seem expensive...

replies(9): >>42159687 #>>42160035 #>>42160606 #>>42160613 #>>42161388 #>>42162551 #>>42162914 #>>42163053 #>>42163249 #
1. ThatPlayer No.42162914
I built a multi-seat gaming VM back in the day too, and I don't think I'd want to do it again. Assigning hotplugged USB devices was a pain: I mostly wanted unique USB devices per seat so I could easily figure out which device was which. Though nowadays I'd probably use a Raspberry Pi thin client running Moonlight to do it cheaply.
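For anyone trying this with QEMU/KVM: the usual way is to match devices by USB vendor:product ID, which is exactly why identical keyboards and mice are ambiguous and unique models per seat help. A sketch (the IDs are placeholders from lsusb, not my old build):

    # lsusb -> "Bus 003 Device 007: ID 046d:c52b Logitech ..."
    # attach at VM start by vendor:product ID
    qemu-system-x86_64 ... \
      -device qemu-xhci,id=xhci \
      -device usb-host,bus=xhci.0,vendorid=0x046d,productid=0xc52b

    # or hotplug later from the QEMU monitor
    (qemu) device_add usb-host,vendorid=0x046d,productid=0xc52b,id=seat1-kbd

If you do have identical devices, you can pin by physical port with hostbus=/hostport= instead, but then you have to remember which port is which.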

I think another issue is the limited number of PCIe lanes now that HEDT is dead. I picked up a 5930K for my build at the time for its 40 PCIe lanes, but consumer CPUs now basically max out at 20-24 lanes.

Also, with the best gaming CPUs nowadays being AMD's X3D series because of their extra L3 cache, I wonder about the performance hit from two VMs fighting over that cache. Maybe the rumored 9950X3D will have 3D cache on both CCDs, and you'd be able to pin each VM to its own cores and cache. The 7950X3D had 3D cache on only half of its cores, so games generally performed better pinned to just those cores.
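On a 7950X3D host, that pinning would look something like this (untested; assumes the V-cache CCD is cores 0-7 with SMT siblings 16-23 -- verify with lscpu -e -- and the VM process names are made up):

    # seat 1 on the V-cache CCD (cores 0-7 plus SMT siblings)
    taskset -acp 0-7,16-23 "$(pgrep -o -f 'qemu.*seat1')"
    # seat 2 on the frequency CCD
    taskset -acp 8-15,24-31 "$(pgrep -o -f 'qemu.*seat2')"

The -a flag moves every thread of the process, so the vCPU threads land on the right cores too.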

So with only 2-3 VMs per PC, and still needing a GPU for each VM (the most expensive part anyway), I'd pay a bit more to do it without VMs. The only way I'd be interested in multi-seat VM gaming again is if I could use GPU virtualization: splitting a single GPU across many VMs. But as you say in the article, that's usually been limited to enterprise hardware. And even then it'd mainly be interesting for the flexibility of running one high-end GPU when I'm not having a party.

replies(1): >>42163212 #
2. amluto No.42163212
If you’re on an Intel chip that supports “Resource Director,” you can assign most of your cache to a VM. I have no idea whether AMD can do this. I’ve also never done it, and I don’t know how well KVM supports it.
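If I understand the kernel docs right, it's driven through the resctrl filesystem, and since it operates on host task IDs, KVM support is just a matter of moving the vCPU threads into a group. An untested sketch (the group name, way masks, and process match are made up; assumes a 12-way L3 with CAT support, i.e. "cat_l3" in /proc/cpuinfo):

    # needs CONFIG_X86_CPU_RESCTRL; check /proc/cpuinfo for the "cat_l3" flag
    mount -t resctrl resctrl /sys/fs/resctrl

    # new group for the VM: 8 of 12 L3 ways on cache domain 0
    # (masks count cache ways and must be contiguous)
    mkdir /sys/fs/resctrl/vm-seat1
    echo "L3:0=ff0" > /sys/fs/resctrl/vm-seat1/schemata

    # shrink the default group so everything else stays off those ways
    echo "L3:0=00f" > /sys/fs/resctrl/schemata

    # move every thread of the VM's QEMU process into the group
    for tid in /proc/"$(pgrep -o -f 'qemu.*seat1')"/task/*; do
      echo "${tid##*/}" > /sys/fs/resctrl/vm-seat1/tasks
    done

AMD apparently exposes its equivalent L3 QoS controls on newer Zen chips through the same filesystem, though I haven't tried either.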