What should I do now? What stuff should I run?
I have a hello world Flask app running, but obviously that's not enough to use its full potential.
I'm thinking of running KVM and selling a few VDS to friends or companies.
Also thought of running thousands of Selenium browser tests, but I do this maybe once a year, which is not enough to fully utilize the server 24/7.
Help! I might have gone overboard with server capacity. I will never have to pay for AWS again; I can literally run every project, API, and database I want and still have space left over.
With hardware like that I would research a couple of things:
* Maximum number of servers, HTTP/WS, I could run simultaneously. I am working on a server application to do this right now. Each server should take 1, 2, or 4 ports. You can run WS over HTTP (the WebSocket handshake is just an HTTP Upgrade), which allows both protocols over a single port; a minimal single-port sketch follows this list. If you want to allow both TLS and insecure connections it would still be 2 ports, or 4 ports if you are isolating HTTP and WS from each other.
* Maximum number of simultaneous sockets. I would test for the maximum number of open sockets connected to a single server instance, and for the average number of open sockets per server when running the maximum number of servers from the prior bullet point (a crude ceiling test is sketched after this list).
* Once you have confidence in both prior points I would then research the maximum amount of cross-talk. If your multiple servers can talk to each other then they almost achieve SMP. They could talk to each other via sockets, just as they talk to everything else, but IPC would be even faster; see the TCP-vs-Unix-socket comparison after this list.
* Once you have all that, you have sufficient infrastructure in place to investigate more precise performance concerns. For example, I have found that in my own implementation I could transmit messages via WebSockets almost 11x faster than I could receive them, though it could be that my implementation is poorly executed. I also found that, under the most ideal conditions, my (likely poor) WebSocket implementation was still at least 8x faster than HTTP, growing to over 80x after accounting for high message frequency and socket concurrency.
* Once you have your performance bottlenecks identified, you can then research bottlenecks on data transfer from a large data source under high-frequency access. With that identified, you can train AI on it for concurrency simulations that self-learn.
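A minimal sketch of the single-port idea from the first bullet. I'm assuming Python and aiohttp here (neither is required; any stack that exposes the HTTP Upgrade handshake works the same way):

```python
# One port serving both plain HTTP and WebSocket traffic.
# aiohttp is an assumption, not something the parent specified.
from aiohttp import web, WSMsgType

async def http_handler(request: web.Request) -> web.Response:
    # Ordinary request/response over HTTP.
    return web.Response(text="hello over HTTP")

async def ws_handler(request: web.Request) -> web.WebSocketResponse:
    ws = web.WebSocketResponse()
    await ws.prepare(request)            # performs the HTTP Upgrade
    async for msg in ws:
        if msg.type == WSMsgType.TEXT:
            await ws.send_str(msg.data)  # simple echo
    return ws

app = web.Application()
app.add_routes([
    web.get("/", http_handler),   # plain HTTP route
    web.get("/ws", ws_handler),   # upgraded to WebSocket
])

web.run_app(app, port=8080)  # one port, both protocols
```

For the TLS variant you would add a second listener with an `ssl` context, which is where the 2- and 4-port counts come from.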
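For the socket-ceiling test, a crude client sketch; the host/port are assumed to point at a test server like the one above. Raise `ulimit -n` on both ends first, or you will mostly be measuring the file-descriptor limit rather than the server:

```python
# Keep opening connections to one server until the OS refuses.
import asyncio

HOST, PORT = "127.0.0.1", 8080  # assumed: the test server sketched above

async def main() -> None:
    writers = []
    try:
        while True:
            _, writer = await asyncio.open_connection(HOST, PORT)
            writers.append(writer)
            if len(writers) % 1000 == 0:
                print(f"{len(writers)} sockets open")
    except OSError as exc:
        # EMFILE, ENFILE, or connection refusal ends the run.
        print(f"stopped at {len(writers)} sockets: {exc}")
    finally:
        for w in writers:
            w.close()

asyncio.run(main())
```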
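And for the cross-talk point, a toy round-trip comparison of TCP loopback against a Unix domain socket (Unix-only). The numbers it prints are only illustrative; real results depend heavily on message size and serialization:

```python
import asyncio, os, time

async def echo(reader, writer):
    # Echo whatever arrives until the peer closes the connection.
    while data := await reader.read(1024):
        writer.write(data)
        await writer.drain()

async def bench(open_conn, label, n=10_000):
    reader, writer = await open_conn()
    start = time.perf_counter()
    for _ in range(n):
        writer.write(b"ping")
        await writer.drain()
        await reader.read(1024)
    elapsed = time.perf_counter() - start
    print(f"{label}: {n / elapsed:,.0f} round-trips/s")
    writer.close()

async def main():
    if os.path.exists("/tmp/echo.sock"):
        os.unlink("/tmp/echo.sock")  # clear leftover socket file
    tcp = await asyncio.start_server(echo, "127.0.0.1", 9000)
    uds = await asyncio.start_unix_server(echo, "/tmp/echo.sock")
    await bench(lambda: asyncio.open_connection("127.0.0.1", 9000), "TCP loopback")
    await bench(lambda: asyncio.open_unix_connection("/tmp/echo.sock"), "Unix socket")
    tcp.close()
    uds.close()

asyncio.run(main())
```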
In the end you can sell your research. Consider that data centers require their own power plants to operate. While the hardware in those data centers is likely assembled in an efficient manner, the software servers running on that hardware are often not efficient, and that costs millions of dollars a month in electricity alone.
You could do this with the small Llama model, where the fitness function is basically the ability to generate correct code and self-detect errors, adjusting the weights based on the optimization algorithm.
I have a similar piece of hardware but with 256 GB instead of a terabyte of RAM, and that is what I do. It has been incredibly convenient to be able to spin up VMs as needed. I started creating different VMs for purposes I would have normally just used the same host for, and have really enjoyed it.
I also run about a dozen personal services on there, such as Audiobookshelf, ArchiveBox, Jellyfin, Navidrome, and more. Surprisingly, the ArchiveBox instance uses quite a bit of memory and CPU, so the box does get a fair amount of exercise. I have not looked very closely, but I believe ArchiveBox is using that memory and compute mainly for running headless Chrome. I have a browser extension installed that archives nearly every page I visit automatically, so especially during busy browsing times, I keep that thing running pretty hot.
In the past I set up a self-hosted OpenShift instance on it, spread across six VMs. I actually loved that, and the only reason I'm no longer running it is that I broke it by messing around with risky and dangerous things that no sane person should ever do, and then did not want to dedicate the time to rebuild it. Someday I will recreate it.
Whatever you decide to do with it, this is a really awesome problem to have!
I run Proxmox on it, with servers for Pi-hole and networking, AI, and Docker with Portainer and Jellyfin.
I've just about run out of things I want to do with it, and I barely use 10% of its capacity.
There are a variety of other LLM inference implementations that can run on CPU as well.
[0] - https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#su...
[1] - https://docs.vllm.ai/en/v0.6.1/getting_started/cpu-installat...
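For a concrete starting point on CPU inference, here is a minimal sketch using the llama-cpp-python bindings to llama.cpp [0]. The bindings are my suggestion rather than something from the links above, and the model path is a placeholder:

```python
# Minimal CPU inference via llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="models/your-model-q4_k_m.gguf",  # placeholder: any GGUF model
    n_ctx=4096,       # context window
    n_threads=32,     # roughly match your physical core count
)

out = llm("Q: What should I run on a 1TB-RAM server? A:", max_tokens=64)
print(out["choices"][0]["text"])
```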
What model can I run on 1 TB, and how many tokens per second?
For instance, Nvidia's Nemotron (Llama 3.1) quantized: at what speed? I'll get a GPU too, but I'm not sure how much VRAM I need for the best value for the buck.
With 1TB of RAM you can run nearly anything available (405B essentially being the largest ATM). Llama 405B in FP8 precision fits in 8x H100, which is 640 GB of VRAM. Quantization is a very deep and involved well (far too much for an HN comment).
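To make the sizing concrete, a back-of-envelope sketch for the weights alone (KV cache, activations, and runtime overhead add more on top):

```python
# Rough memory estimate for dense LLM weights only.
def weight_gb(params_b: float, bits_per_param: float) -> float:
    """Gigabytes needed to hold the weights alone."""
    return params_b * 1e9 * bits_per_param / 8 / 1e9

for name, params in [("70B", 70), ("405B", 405)]:
    for label, bits in [("FP16", 16), ("FP8", 8), ("Q4", 4)]:
        print(f"{name} @ {label}: ~{weight_gb(params, bits):.0f} GB")

# 405B @ FP8 comes out to ~405 GB of weights, which is why it fits
# in 8x H100 (640 GB) with headroom left for KV cache.
```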
I'm aware it "works", but I don't bother with CPU, GGUF, or even llama.cpp, so I can't really speak to it. They're just not even remotely usable for my applications.
> tokens per second
Sloooowwww. With 405B it could very well be seconds per token, but this is where a lot of system factors come in. You can find benchmarks out there, but you'll see stuff like a very high-spec AMD EPYC bare-metal system with very fast DDR4/5, tons of memory channels, etc. doing low single-digit tokens per second with 70B.
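The intuition behind those numbers: CPU decoding is memory-bandwidth bound, because every generated token has to stream the full set of weights out of RAM. A rough ceiling, with illustrative (assumed) bandwidth figures; real systems land well below the theoretical peak:

```python
# Crude upper bound on CPU decode speed for dense models:
#   tokens/s <= memory_bandwidth / bytes_of_weights
# The bandwidth below is an assumed theoretical peak for a
# many-channel DDR5 EPYC; sustained bandwidth is lower in practice.
bandwidth_gbs = 460.0
weights_gb = {"70B @ FP16": 140, "70B @ Q4": 35, "405B @ FP8": 405}

for name, gb in weights_gb.items():
    print(f"{name}: <= {bandwidth_gbs / gb:.1f} tokens/s (theoretical)")
```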
> ill get a GPU too but not sure how much VRAM I need for the best value for your buck
Most of my experience is with top-end GPUs, so I can't really speak to this. You may want to pop in at https://www.reddit.com/r/LocalLLaMA/ where there is much more expertise for this range of hardware (CPU and/or more VRAM-limited GPU configs).