> Write in the browser with your text editor.
As a software developer, I always get frustrated when I am doing some graphical work and struggle to neatly parametrize whatever I am drawing (wooden cabinets and furniture, room layouts, installation plans...) or to switch between coding where that makes the most sense and a GUI where it doesn't.
The best I've gotten was FreeCAD with Python bindings (I've got a couple of small libraries to build out components for me), but while you can use your own editor, the experience is not very seamless.
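For the curious, the scripting side looks roughly like this - a minimal sketch against FreeCAD's Part workbench Python API (not my actual helper libraries, and the dimensions are made up):

    import FreeCAD as App  # run inside FreeCAD's Python console or with freecadcmd

    # Parametric cabinet side panel: change the numbers, recompute, done.
    doc = App.newDocument("cabinet")
    panel = doc.addObject("Part::Box", "SidePanel")
    panel.Length = 18    # mm, sheet thickness
    panel.Width = 400    # mm, depth
    panel.Height = 720   # mm, height
    doc.recompute()

The code side is fine; it's round-tripping between that and the GUI that never feels seamless.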
And then I start imagining tools like the one here, but obviously doing it just right for me (balancing the level of coding or GUI work).
More broadly, I was genuinely shocked to realize, when I was playing with it, that there is no cross-CAD file format that captures even simple design concepts like “this hole is aligned to the center of this plate” or even “this is a 2mm fillet”. STEP (the file format) mostly just captures final geometry.
I think CAD people just … redesign the part again if they need to move from say Fusion 360 to FreeCAD or whatever. How do they live like that?!
Doing so for languages like C++ was a sea of boilerplate that you couldn't touch, which is why I never moved away from Pascal. Similar fragility was evident in wxPython and its builder.
I'm glad to see that LLMs can provide a match for less well suited Language/GUI pairs. We all deserve to get that kind of productivity.
Screenshots and GIFs for the explanation!
I built a small project where you can live-code Love2D. The running program updates in real time (no saving needed), and you can see all values update in real time, via LSP.
https://github.com/jasonjmcghee/livelove
I also added the same kind of interactivity as yours, like a number slider and color picker that replace text inline (though via a VS Code extension: https://gist.github.com/jasonjmcghee/17a404bbf15918fda29cf69...)
Here's another experiment where I made it so you could "drag and drop" to choose a position for something, by manipulating the editor and replacing a computed position with a static one on keypress.
https://clj.social/@jason/113550406525463981
There's so much cool stuff you can do here.
The PDF ones are especially fun!
The whole playground is built for bidi sync!
We've applied for funding for the next edition of STEP AP242 so that I can work more closely with the user group to improve this area.
https://lists.openscad.org/empathy/thread/GAX4QYYRUC3CEH572I...
The devil is in the details though, and I worry about the UI becoming cluttered and unmanageable.
A lot of folks had fun watching Minecraft built using a live code session, if I recall.
Why do cool ideas take so much time to be embraced by the mainstream?
Rhetorical question; naturally they weren't VC-friendly, with exponential growth capitalising user acquisition. /s
That is why STEP containing the final BREP manifold solid is the standard interchange that it is: it is a final representation of the solved output that IS portable, and anything else is... difficult.
But well, the project is very cool and I love the idea of using LSP for something more!
I was a bit concerned that HN's algorithm might down-weight posts with non-ASCII characters in titles to discourage people from trying to attract attention with them, but it seems like it's fine?
Of course it’s great for vendor lock-in.
But given how many companies need to work with diverse suppliers, there must be a whole bunch of re-creating models happening. There is no chance that everybody is using the same CAD tool.
There's no security risk there that wasn't present before, as far as I can tell, because you were already planning on running the LSP on your local machine.
I think I’ve heard that VS Code has benefited hugely from having a client-server architecture from the start, since it began as a browser-based editor. Things like editing code directly on servers via SSH or in containers are easy for VS Code because it's client-server all the way down.
VS Code and LSP are both Microsoft products; maybe Microsoft has been pushing the client-server thing?
1. naturally async
2. each server and the editor itself can be written in its own language and runtime easily
3. servers can just crash or be killed because of oom errors and your editor won't be affected
4. in a lot of languages it is easier to write a server than to call/export a C ABI
5. the editor can run in a browser and connect to a remote server
6. you can have a remote central server
All of those things are done in practice; a rough sketch of what the editor side of that split looks like is below.
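To make the split concrete, here is a minimal, hypothetical client sketch: it spawns a language server as a child process (the `lua-language-server` binary name is just a placeholder, any server on PATH would do) and exchanges Content-Length-framed JSON-RPC over stdio, which is how LSP actually frames messages.

    import json
    import subprocess

    # The "editor" and the "server" are separate processes talking over pipes,
    # so each side can be written in any language, and a server crash or OOM
    # kill cannot take the editor down with it.
    server = subprocess.Popen(
        ["lua-language-server"],   # placeholder: any LSP server on PATH
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
    )

    def send(message: dict) -> None:
        body = json.dumps(message).encode("utf-8")
        header = f"Content-Length: {len(body)}\r\n\r\n".encode("ascii")
        server.stdin.write(header + body)
        server.stdin.flush()

    def receive() -> dict:
        # Read headers until the blank line, then read exactly the JSON body.
        length = 0
        while (line := server.stdout.readline().strip()):
            if line.lower().startswith(b"content-length:"):
                length = int(line.split(b":")[1].decode())
        return json.loads(server.stdout.read(length))

    send({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {"processId": None, "rootUri": None, "capabilities": {}},
    })
    print(receive())  # the server's advertised capabilities

Swap the pipes for a socket or HTTP and points 5 and 6 fall out of the same design.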
This looks like it could be a great way to get my feet wet - I don’t do well with math and physics programming, but I used to make things in Flash back in the day that were similar to the particles demo, and being able to quickly change things and see the updates makes it a lot easier for me to grok. Thanks for sharing!
Every time the user makes a change in the code, the editor sends the document and we re-render the preview.
I use the textDocument/documentHighlight request to know when elements are being selected from the code so I can highlight them in the preview.
When selecting an element in the preview UI, my LSP server sends a window/showDocument request to position the cursor at the right location. And if the user changes a property or makes a change in the file, we issue a workspace/applyEdit command with the changes.
Btw, the code is there: https://github.com/slint-ui/slint/tree/master/tools/lsp
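For anyone unfamiliar with those requests, this is roughly what the three messages look like on the wire (a simplified sketch: the URI, positions, and edit text are made up, and the real params carry more fields):

    # Editor -> server: cursor moved in the code; which element is under it?
    document_highlight = {
        "jsonrpc": "2.0", "id": 7,
        "method": "textDocument/documentHighlight",
        "params": {
            "textDocument": {"uri": "file:///demo/app.slint"},
            "position": {"line": 12, "character": 4},
        },
    }

    # Server -> editor: an element was clicked in the preview; move the cursor there.
    show_document = {
        "jsonrpc": "2.0", "id": 8,
        "method": "window/showDocument",
        "params": {
            "uri": "file:///demo/app.slint",
            "takeFocus": True,
            "selection": {"start": {"line": 12, "character": 4},
                          "end": {"line": 12, "character": 10}},
        },
    }

    # Server -> editor: a property was edited in the preview; patch the source file.
    apply_edit = {
        "jsonrpc": "2.0", "id": 9,
        "method": "workspace/applyEdit",
        "params": {"edit": {"changes": {"file:///demo/app.slint": [{
            "range": {"start": {"line": 12, "character": 4},
                      "end": {"line": 12, "character": 10}},
            "newText": "width: 42px;",
        }]}}},
    }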
Highly recommend adding code definitions https://github.com/LuaCATS/love2d
And getting the Lua LSP.
It reminds me a lot of processing / p5js. So easy to get something fun up and running quickly.
There is a reason that systems like Dassault's CATIA are both ubiquitous in aerospace and automotive and closed source: it is literally the culmination of probably 3000+ person-years of programming, stemming from the 1970s. The same goes for many others... making them interoperable would mean making the entire chain of systems open source, and there isn't a reason for them to do so.
For small projects though, building DSLs and graphical tools on top of OpenCASCADE (which is GNU LGPL version 2.1 licensed) is about the closest you could get right now - particularly, in my opinion, building something like a pseudo-GUI+textual tool with the Python package Build123D.
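As a taste of what that looks like - a minimal sketch using Build123D's builder API (dimensions arbitrary) - the design intent mentioned upthread, a hole centered on the plate and a 2mm fillet, lives right in the source rather than in the exported geometry:

    from build123d import *  # pip install build123d (wraps OpenCASCADE)

    # A parametric plate: the intent (centered hole, 2 mm fillet) is explicit in code.
    thickness = 5
    with BuildPart() as plate:
        Box(60, 40, thickness)
        Hole(radius=3)                                     # through-hole at the plate's center
        fillet(plate.edges().filter_by(Axis.Z), radius=2)  # round the four vertical corners

    export_step(plate.part, "plate.step")  # exporter API varies a bit by version; the STEP file still only carries the final B-rep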
1. There's nothing stopping you from shoving the library code onto a thread. For something like this request-response style usage pattern, that sounds extremely straightforward, especially since the previous paradigm was probably an async server anyways. The calling code (from the editor) obviously already had to support async calls, so no real change there.
2. If $LANGUAGE can be used to write a server, it should be able to build a DLL. I realize this is not practically true, but I also don't support the notion that we should be writing even light systems stuff like this in JS or python.
3. lol. You're telling me you're worried about a process eating 32GB of ram parsing text files ..? If some core part of my editing workflow is crashing or running out of memory, it's going to be a disruptive enough event that my editor might as well just crash. The program I'm working on uses a lot of memory, my editor better not.
4. I guess ..? Barely seems like that's worth mentioning because .. counterpoint, debugging networked, multi-process programs is massively harder than debugging a singular process.
5. Why would I want (other than extremely niche applications, shadertoy comes to mind) an editor to run in a browser? If I have a browser, I can run an editor that connects to a remote machine. Furthermore, the `library-over-http` approach of LSPs doesn't really buy you anything in this scenario that using a single process wouldn't.. you can just send all the symbol information to the browser.. it's just not that big.
6. Wut?
I don't think it does. I think it's a bad architectural decision that web bros thought sounded cute.
> Things like editing code directly on servers via ssh or in containers
I mean, vim and emacs have supported editing over ssh for like .. longer than I've been alive probably.
> I think I’ve heard that VS Code has benefited hugely from [client-server architecture]
IMO VSCode is a giant steaming pile; I'm not sure what the huge benefits could have been. It's intolerably slow, uses an insane amount of system resources, and the debugger barely works most of the time.
2. What about languages like Java and Go?
3. a. The experience of an LSP server crashing is much better than the editor crashing; the editor usually restarts it automatically. I've had LSP servers crash without me noticing at all.
b. Memory problems in both LSP servers and traditional IDE analysis are extremely common in my experience. It seems to me that the problem is that there are a lot of pathological cases where the analysis enters a loop and keeps allocating memory.
4. When mixing runtimes I actually find it easier to have multiple processes because I can attach a specialized debugger to each process but this is definitely an important point.
5. good counter argument, I retract my point
6. What I meant is that for very large code bases it can be beneficial to run a central LSP server that many people connect to, because most of the index is shared between all of them and the parsing+indexing itself is very costly. I heard Google was doing something like that, but I don't have more information.
2. AFAIK Go can both compile and load DLLs now. A language that doesn't have native compilation facilities (Java, JS, python, etc) would have to have extra tooling that loads the runtime and program from a DLL. A quick google search tells me Java has this tooling already. I'm sure other languages would too.
3 a,b. See point 1.
4. Yeah, that's fair, although I'd probably still rather have a single process.
6. Also a good point. Might be a useful paradigm for large codebases. Although at that point the usage pattern is so different that the calculus for choosing the architecture is different, and I stand by my choice. Google can afford to build fancy shit for themselves.
One final point!
I think there is a difference between absorbing low quality tools and absorbing low quality code. I think it makes a lot of sense for a plugin system to be designed for low quality plugins. I have two examples from my job: the first is a microservices-based parsing infrastructure, where each team is responsible for parsing its own formats, which are oftentimes layered upon formats parsed by different teams. I believe this takes about 10x-100x more resources than it should, but it has the benefit that a rogue parser can't take everything down with it. The second example is the internal scripting capability of a different system, where a lot of work was done to make sure a script can't do anything stupid.
In both of those cases the system is designed for low quality code, because it is more cost effective to make sure code can be low quality without affecting the overall system than to make sure low quality code doesn't exist.