
176 points | zorlack | 2 comments

I've frequently found myself using [nvitop](https://github.com/XuehaiPan/nvitop) to diagnose GPU/CPU contention issues.

The two best things about it are:

- It's easy to install if I can access pip in the container

- It makes a compelling screenshot (which helps me communicate with coworkers).

With those two lessons in mind: Here is Sping!

Purpose: Help observe and diagnose latency issues at layer 4+ (TCP/HTTP/HTTPS)
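
To make "layer 4+" concrete: a probe like this has to time each stage of a connection separately. Here's a minimal Python sketch of that technique (illustrative only, not Sping's actual code; example.com is just a placeholder target):

```python
# Minimal sketch (not Sping's actual code): timing each stage of an
# HTTPS request separately, which is roughly what a layer-4+ latency
# probe has to do to show where the time is going.
import socket
import ssl
import time

host, port = "example.com", 443  # placeholder target

t0 = time.perf_counter()
ip = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)[0][4][0]
t_dns = time.perf_counter()

sock = socket.create_connection((ip, port), timeout=5)
t_tcp = time.perf_counter()

tls = ssl.create_default_context().wrap_socket(sock, server_hostname=host)
t_tls = time.perf_counter()

tls.sendall(f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n".encode())
tls.recv(1)  # wait for the first byte of the response
t_first = time.perf_counter()
tls.close()

ms = lambda a, b: (b - a) * 1000
print(f"dns {ms(t0, t_dns):.1f}ms  tcp {ms(t_dns, t_tcp):.1f}ms  "
      f"tls {ms(t_tcp, t_tls):.1f}ms  ttfb {ms(t_tls, t_first):.1f}ms")
```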

Two good things about it:

- It's easy to install if you have pip. (Available at [service-ping-sping](https://pypi.org/project/service-ping-sping/) on PyPI.)

- It makes a compelling screenshot.

Not sure if this is the kind of thing that anyone else would be interested in. But I've enjoyed making it and intend to keep using it.

1. gnyman No.45011077
Looks nice.

I would add a link to the GitLab repo to the page as well; clicking the LICENCE brings me to the source code, but other than that there didn't seem to be a link.

Out of curiosity, did you use LLMs to code this? My gut feeling tells me that at minimum the readme was written by one, or maybe it's normal to use emojis everywhere :-) I don't mean to judge it as good or bad; I'm just curious.

I think one thing that LLMs and coding agents enable is creating these customised solutions that solve a specific problem in a specific way. Some might consider it wasteful. I bet many think your effort would have been better spent contributing to one of the existing tools instead of making yet another one, but I find it fascinating that we can finally tell our computers what we need and they will do it.

If you hand-wrote everything, then apologies for the unrelated rant :-)

replies(1): >>45012785
2. zorlack No.45012785
Yes, I used LLMs to develop this. I think the README has more emojis than any mortal could summon. Hehe

I used ChatGPT to design the solution that I wanted and Claude Sonnet to do most of the coding.

I'm trying to figure out what works for me in the brave new world of AI-enabled development, so that I can make recommendations to my team.

A few things that really helped me here were:

- Having the GitLab CLI (glab) installed and configured was very helpful because it let me do things like lint the CI file and pull build output into the LLM's context (see the sketch after this list).

- Having the zereight/gitlab-mcp server installed was useful as well. Even though I can create Issues and MRs using the CLI, the LLM frequently made escaping mistakes when writing long comment bodies. The MCP tool was great for this.

- Almost all of my process started with me describing a bug or feature, then asking the LLM to investigate it and create an Issue. From there I tried as much as possible to keep the scope of my work small and exclusively tied to an issue branch.
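
To make the glab and escaping points above concrete, here's a minimal Python sketch of the general technique (not my exact setup; it assumes glab is installed and authenticated via `glab auth login` and is run inside a checkout with a GitLab remote; the demo issue at the end is hypothetical):

```python
# Minimal sketch: driving the GitLab CLI from Python so CI feedback
# comes back as plain text an LLM can read in its context. Assumes
# glab is installed and authenticated (`glab auth login`).
import subprocess

def glab(*args: str) -> str:
    """Run a glab subcommand and return its combined output as text."""
    proc = subprocess.run(["glab", *args], capture_output=True, text=True)
    return proc.stdout + proc.stderr

print(glab("ci", "lint"))    # validate .gitlab-ci.yml before pushing
print(glab("ci", "status"))  # summarize the current branch's pipeline

# Escaping: passing a long markdown body as a single argv element never
# goes through shell quoting, which is where LLM-generated command
# strings tend to break. (An MCP tool call passes it as structured
# data, which avoids the problem entirely.)
body = 'Long body with "quotes", backticks, and\nmultiple lines - fine as one argv element.'
print(glab("issue", "create", "--title", "Demo issue", "--description", body))
```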

I'm a reasonably good programmer - I've been at it for 30 years. I think there's no question that LLMs expand my "radius of capability." Just like everyone else, I'm trying to figure out the best way to safely maximize this new world of tools.