
238 points jamesbvaughan | 2 comments
jesse__ ◴[] No.44440194[source]
It's concerning to me that the LSP idea is .. a thing. Casey Muratori observed years ago that it's just a way worse way of doing libraries. Like, you're introducing HTTP where there could just be a function call into a DLL/SO. What's the benefit there? Just make vim/emacs/$editor speak some native protocol and be done with it. Then your GUI is just welded directly into the running editor process.. right??

There's no security risk there that wasn't present before as far as I can tell because you were already planning on running the LSP on your local machine..

replies(3): >>44440201 #>>44440245 #>>44440574 #
dodomodo ◴[] No.44440574[source]
there are many practical benefits:

1. naturally async

2. each server and the editor itself can be written in its own language and runtime easily

3. servers can just crash or be killed because of oom errors and your editor won't be affected

4. in a lot of languages it is easier to write a server than to call/export a C ABI

5. the editor can run in a browser and connect to a remote server

6. you can have a remote central server

all of those things are done in practice (a rough sketch of the basic model is below)
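
For concreteness, here's a minimal sketch of that model in Python, assuming some stdio-based server command (`pylsp` is only an example). The server runs in its own process, requests are framed with LSP's Content-Length header, and responses are consumed on a background thread, which is roughly where points 1-3 come from:

    import json
    import subprocess
    import threading
    import time

    # Spawn the language server as its own process. "pylsp" is just an example;
    # any stdio-based LSP server works the same way.
    server = subprocess.Popen(["pylsp"], stdin=subprocess.PIPE, stdout=subprocess.PIPE)

    def send(msg):
        # LSP frames each JSON-RPC message with a Content-Length header.
        body = json.dumps(msg).encode("utf-8")
        server.stdin.write(b"Content-Length: %d\r\n\r\n" % len(body) + body)
        server.stdin.flush()

    def read_loop():
        # Responses arrive asynchronously; the editor's UI thread never blocks,
        # and if the server dies we just see EOF here instead of crashing.
        while True:
            header = server.stdout.readline()
            if not header:
                break  # server exited; the editor could restart it
            length = int(header.split(b":")[1])
            server.stdout.readline()  # consume the blank line after the header
            print(json.loads(server.stdout.read(length)))

    threading.Thread(target=read_loop, daemon=True).start()
    send({"jsonrpc": "2.0", "id": 1, "method": "initialize",
          "params": {"processId": None, "rootUri": None, "capabilities": {}}})
    time.sleep(2)  # toy example: give the response a moment to arrive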

replies(1): >>44448622 #
jesse__ ◴[] No.44448622[source]
IMO, many of these are marginal or niche gains for the huge cost of introducing a network call in the middle of your program.

1. There's nothing stopping you from shoving the library code onto a thread (rough sketch after this list). For something like this request-response style usage pattern, that sounds extremely straightforward, especially since the previous paradigm was probably an async server anyway. The calling code (from the editor) obviously already had to support async calls, so no real change there.

2. If $LANGUAGE can be used to write a server, it should be able to build a DLL. I realize this is not practically true, but I also don't support the notion that we should be writing even light systems stuff like this in JS or Python.

3. lol. You're telling me you're worried about a process eating 32GB of RAM parsing text files ..? If some core part of my editing workflow is crashing or running out of memory, it's going to be a disruptive enough event that my editor might as well just crash. The program I'm working on uses a lot of memory; my editor better not.

4. I guess ..? Barely seems like that's worth mentioning because .. counterpoint, debugging networked, multi-process programs is massively harder than debugging a single process.

5. Why would I want (other than extremely niche applications, shadertoy comes to mind) an editor to run in a browser? If I have a browser, I can run an editor that connects to a remote machine. Furthermore, the `library-over-http` approach of LSPs doesn't really buy you anything in this scenario that using a single process wouldn't.. you can just send all the symbol information to the browser.. it's just not that big.

6. Wut?
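
For contrast, a rough sketch of the in-process approach being argued for in points 1 and 2, again in Python via ctypes. The library name `libanalyzer.so` and the `find_definition` function are hypothetical, purely for illustration:

    import ctypes
    import threading

    # Hypothetical analyzer built as a shared library; the name and signature
    # are made up for illustration.
    lib = ctypes.CDLL("./libanalyzer.so")
    lib.find_definition.argtypes = [ctypes.c_char_p, ctypes.c_int]
    lib.find_definition.restype = ctypes.c_char_p

    def goto_definition(path, offset, on_done):
        # Same request/response shape as an LSP round trip, but it's just a
        # function call on a worker thread -- no serialization, no second process.
        def work():
            result = lib.find_definition(path.encode("utf-8"), offset)
            on_done(result.decode("utf-8") if result else None)
        threading.Thread(target=work, daemon=True).start()

    goto_definition("src/main.c", 1234, print)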

replies(1): >>44453643 #
dodomodo ◴[] No.44453643[source]
1. True, but in practice it's not always the case; for example, older versions of ReSharper sometimes slowed down typing speed. It's much harder to fuck it up when there's a network request in the middle.

2. What about languages like Java and Go?

3. a. The experience of an LSP server crashing is much better than the editor crashing; the editor usually restarts it automatically. I've had LSP servers crash without me noticing at all.

b. Memory problems in both LSP servers and traditional IDE analysis are extremely common in my experience. It seems to me that the problem is that there are a lot of pathological cases where the analysis enters a loop and keeps allocating memory.

4. When mixing runtimes I actually find it easier to have multiple processes, because I can attach a specialized debugger to each process, but this is definitely an important point.

5. Good counterargument, I retract my point.

6. What I meant is that for very large codebases it can be beneficial to run a central LSP server that many people can connect to, because most of the index is shared between all of them and the parsing + indexing itself is very costly. I've heard Google does something like that, but I don't have more information. (A rough sketch of what this looks like from the editor's side is below.)
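
A minimal sketch of that setup, assuming a shared server reachable over TCP; the host, port, and paths are made up. The JSON-RPC framing is the same as in the stdio case, only the transport changes:

    import json
    import socket

    # Hypothetical central language server shared by many editors.
    sock = socket.create_connection(("lsp.internal.example.com", 9257))

    def send(msg):
        body = json.dumps(msg).encode("utf-8")
        sock.sendall(b"Content-Length: %d\r\n\r\n" % len(body) + body)

    send({"jsonrpc": "2.0", "id": 1, "method": "initialize",
          "params": {"processId": None, "rootUri": "file:///big/monorepo",
                     "capabilities": {}}})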

replies(1): >>44458453 #
jesse__ ◴[] No.44458453[source]
1. Choosing a shitty architecture to absorb the downside of low quality tools seems like a bad decision, but I guess that's just where the industry's at right now.

2. AFAIK Go can both compile and load DLLs now. A language that doesn't have native compilation facilities (Java, JS, Python, etc.) would have to have extra tooling that loads the runtime and program from a DLL. A quick Google search tells me Java has this tooling already. I'm sure other languages would too.

3 a,b. See point 1.

4. Yeah, that's fair, although I'd probably still rather have a single process.

6. Also a good point. Might be a useful paradigm for large codebases. Although at that point the usage pattern is so different that the calculus for choosing the architecture is different, and I stand by my choice. Google can afford to build fancy shit for themselves.

replies(1): >>44459208 #
dodomodo ◴[] No.44459208[source]
First of all, I want to say that I've had a lot of fun talking to you! Honestly, I would also prefer a DLL-based approach, and if you really want to see the limits of the architecture of LSP I recommend reading the discussion about adding syntax highlighting to the protocol, but I see why the people behind LSP did what they did.

One final point!

I think there is a difference between absorbing low-quality tools and absorbing low-quality code. I think it makes a lot of sense for a plugin system to be designed for low-quality plugins. I have two examples from my job. The first is a microservices-based parsing infrastructure, where each team is responsible for parsing its own formats, which are oftentimes layered on top of formats parsed by different teams. I believe this takes about 10x-100x more resources than it should, but it has the benefit that a rogue parser can't take everything down with it. The second example is the internal scripting capability of a different system, where a lot of work was done to make sure a script can't do anything stupid.

In both of those cases the system is designed for low-quality code, because it is more cost-effective to make sure code can be low quality without affecting the overall system than to make sure low-quality code doesn't exist.