There's no security risk there that wasn't present before, as far as I can tell, because you were already planning on running the LSP on your local machine.
1. naturally async
2. each server and the editor itself can be written in its own language and runtime easily
3. servers can just crash or be killed by OOM errors and your editor won't be affected
4. in a lot of languages it is easier to write a server than to call/export a C ABI
5. the editor can run in a browser and connect to a remote server
6. you can have a remote central server
All of those things are done in practice.
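For concreteness, here's roughly what that looks like on the wire: LSP is JSON-RPC 2.0 framed with a `Content-Length` header over the server's stdin/stdout. A minimal Go sketch of the editor side, with `gopls` as the example server binary (a real client would run a framed read loop and match responses by id):

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
	"os/exec"
	"time"
)

func main() {
	// Spawn the language server as a child process. The editor only
	// sees a byte pipe, so the server can be written in any language.
	srv := exec.Command("gopls")
	stdin, err := srv.StdinPipe()
	if err != nil {
		log.Fatal(err)
	}
	srv.Stdout = os.Stdout
	if err := srv.Start(); err != nil {
		log.Fatal(err)
	}

	// A JSON-RPC request. The "id" field is what makes the protocol
	// naturally async: responses are matched by id, so the editor can
	// keep many requests in flight.
	req, _ := json.Marshal(map[string]any{
		"jsonrpc": "2.0",
		"id":      1,
		"method":  "initialize",
		"params": map[string]any{
			"processId":    os.Getpid(),
			"rootUri":      nil,
			"capabilities": map[string]any{},
		},
	})
	// LSP framing: a Content-Length header, a blank line, then the body.
	fmt.Fprintf(stdin, "Content-Length: %d\r\n\r\n%s", len(req), req)

	// Crude: give the server time to print its response before exiting;
	// a real client would keep reading framed messages instead.
	time.Sleep(2 * time.Second)
}
```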
1. There's nothing stopping you from shoving the library code onto a thread (sketched after this list). For a request-response style usage pattern like this, that sounds extremely straightforward, especially since the previous paradigm was probably an async server anyway. The calling code (from the editor) obviously already had to support async calls, so no real change there.
2. If $LANGUAGE can be used to write a server, it should be able to build a DLL. I realize this is not practically true, but I also don't support the notion that we should be writing even light systems stuff like this in JS or python.
3. lol. You're telling me you're worried about a process eating 32GB of RAM parsing text files..? If some core part of my editing workflow is crashing or running out of memory, it's going to be a disruptive enough event that my editor might as well just crash. The program I'm working on uses a lot of memory; my editor had better not.
4. I guess..? That barely seems worth mentioning because.. counterpoint: debugging networked, multi-process programs is massively harder than debugging a single process.
5. Why would I want an editor to run in a browser (other than extremely niche applications; shadertoy comes to mind)? If I have a browser, I can run an editor that connects to a remote machine. Furthermore, the `library-over-http` approach of LSPs doesn't really buy you anything in this scenario that using a single process wouldn't: you can just send all the symbol information to the browser; it's just not that big.
6. Wut?
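To make point 1 concrete, a minimal sketch of the same request/response pattern with the analysis running in-process on its own thread (a goroutine here, since the other examples in this thread are Go). `analyze` is a hypothetical stand-in for the actual language-analysis library:

```go
package main

import "fmt"

// A request to the in-process "server": a file to analyze and a
// channel to deliver the reply on, mirroring JSON-RPC's id matching.
type request struct {
	file  string
	reply chan string
}

// analyze is a hypothetical stand-in for the language-analysis library.
func analyze(file string) string { return "hover info for " + file }

func main() {
	reqs := make(chan request)

	// The "server" is just a thread; the editor still talks to it
	// asynchronously, but with no process boundary or serialization.
	go func() {
		for r := range reqs {
			r.reply <- analyze(r.file)
		}
	}()

	r := request{file: "main.go", reply: make(chan string, 1)}
	reqs <- r
	fmt.Println(<-r.reply)
}
```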
2. What about languages like Java and Go?
3. a. The experience of an LSP server crashing is much better than the editor crashing: the editor usually restarts it automatically (see the supervisor sketch after this list). I've had LSP servers crash without me noticing at all.
b. Memory problems in both LSP servers and traditional IDE analysis are extremely common in my experience. It seems to me that the problem is that there are a lot of pathological cases where the analysis enters a loop and keeps allocating memory.
4. When mixing runtimes I actually find it easier to have multiple processes, because I can attach a specialized debugger to each process, but this is definitely an important point.
5. Good counterargument, I retract my point.
6. What I meant is that for very large code bases it can be beneficial to run a central LSP server that many people can connect to, because most of the index is shared between all of them and the parsing+indexing itself is very costly. I heard Google was doing something like that, but I don't have more information.
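The restart behavior in 3a is worth seeing because it's so cheap to implement on the editor side. A minimal supervisor sketch, again with `gopls` standing in for any server binary (real editors add backoff limits and re-initialize the session):

```go
package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	for {
		cmd := exec.Command("gopls")
		if err := cmd.Start(); err != nil {
			log.Fatal(err)
		}
		// Wait returns whenever the server dies: a crash, a panic,
		// or the kernel's OOM killer. The editor itself is unaffected.
		err := cmd.Wait()
		log.Printf("language server exited (%v), restarting", err)
		time.Sleep(time.Second) // crude backoff
	}
}
```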
2. AFAIK Go can both compile and load DLLs now (minimal sketch after this list). A language that doesn't have native compilation facilities (Java, JS, Python, etc.) would have to have extra tooling that loads the runtime and program from a DLL. A quick Google search tells me Java has this tooling already; I'm sure other languages do too.
3 a,b. See point 1.
4. Yeah, that's fair, although I'd probably still rather have a single process.
6. Also a good point. Might be a useful paradigm for large codebases. Although at that point the usage pattern is so different that the calculus for choosing the architecture is different, and I stand by my choice. Google can afford to build fancy shit for themselves.
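For reference on point 2, the Go side of this is real: `go build -buildmode=c-shared` emits a shared library plus a generated C header that any host editor can load. A minimal sketch, with a hypothetical `Hover` export standing in for a real analysis API:

```go
// Build with: go build -buildmode=c-shared -o liblsp.so
package main

import "C"

//export Hover
func Hover(file *C.char) *C.char {
	// Hypothetical entry point callable through the C ABI. C.CString
	// allocates with malloc, so the host is responsible for freeing it.
	return C.CString("hover info for " + C.GoString(file))
}

// main is required for c-shared builds but is never called by the host.
func main() {}
```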
One final point!
I think there is a difference between absorbing low quality tools and absorbing low quality code. I think it makes a lot of sense for a plugin system to be designed for low quality plugins. I have two examples from my job. The first is a microservices-based parsing infrastructure, where each team is responsible for parsing its own formats, which are oftentimes layered upon formats parsed by different teams. I believe this takes about 10x-100x more resources than it should, but it has the benefit that a rogue parser can't take everything down with it. The second example is the internal scripting capability of a different system, where a lot of work was done to make sure a script can't do anything stupid.
In both of those cases the system is designed for low quality code, because it is more cost-effective to make sure code can be low quality without affecting the overall system than to make sure low quality code doesn't exist.