As for cryptography, trusting that the WASM build of your preferred library hasn't introduced any problems demonstrates a level of risk tolerance that far exceeds what most people working in cryptography would accept. Besides, browsers have quite good cryptographic APIs built in. :)
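For what it's worth, here's a minimal sketch of what I mean by the built-in APIs, using the standard WebCrypto SubtleCrypto interface (the function name and the SHA-256 choice are just illustrative):

```typescript
// Hash a string with the browser's built-in WebCrypto API (SubtleCrypto).
// No third-party library, no WASM build to audit.
async function sha256Hex(message: string): Promise<string> {
  const data = new TextEncoder().encode(message);             // string -> bytes
  const digest = await crypto.subtle.digest("SHA-256", data); // returns an ArrayBuffer
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

// Usage (any modern browser, secure context):
// sha256Hex("hello").then(console.log);
```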
The browser often runs on an immensely powerful computer. It's a waste to use that power as nothing more than a dumb terminal. As a matter of fact, my laptop is 6 years old by now, and considerably faster than the VPS on which our backend runs.
I let the browser do things such as data summarizing/charting and image convolution (in JavaScript!). I'm also considering harnessing it for video pre-processing.
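For the convolution case, the gist (a rough sketch of the approach, not my exact code) is just to pull the pixels out of a canvas and apply the kernel directly in JS:

```typescript
// Apply a 3x3 convolution kernel to a canvas in plain JS/TS.
// `kernel` is row-major, e.g. a box blur: new Array(9).fill(1 / 9).
function convolve3x3(ctx: CanvasRenderingContext2D, kernel: number[]): void {
  const { width, height } = ctx.canvas;
  const src = ctx.getImageData(0, 0, width, height);
  const dst = ctx.createImageData(width, height);

  for (let y = 1; y < height - 1; y++) {
    for (let x = 1; x < width - 1; x++) {
      for (let c = 0; c < 3; c++) {                 // R, G, B channels
        let sum = 0;
        for (let ky = -1; ky <= 1; ky++) {
          for (let kx = -1; kx <= 1; kx++) {
            const idx = ((y + ky) * width + (x + kx)) * 4 + c;
            sum += src.data[idx] * kernel[(ky + 1) * 3 + (kx + 1)];
          }
        }
        dst.data[(y * width + x) * 4 + c] = sum;    // clamped to 0..255 automatically
      }
      dst.data[(y * width + x) * 4 + 3] = 255;      // opaque alpha
    }
  }
  ctx.putImageData(dst, 0, 0);
}
```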
... I mean... elaborate?
Every time I've heard somebody say this, it's come from someone stuck in the '90s/'00s, with the notion that a browser showing GIFs is the ceiling and that real work can only happen on the server.
Idk how common this is now, but a few years ago (~2017) people would show projects like Figma that drew a few hundred things on screen, and people would be amazed. Which is crazy, because things like WebGL, WASM, WebRTC, and WebAudio are insanely powerful APIs that give pretty low-level access. A somewhat related idea is people who keep clamoring for DOM access in WASM because, again, people have this idea that web = webpage/DOM, but that's a segway into a whole other thing.
Used a similar technique with TinyGo WASM builds (without Vite, of course) on a toy project, where the WASM-based functionality acted as a fallback if the API wasn't available or the user was offline - found it an interesting pattern.
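The shape of that fallback was roughly the following (a sketch, not the real code: the "/api/process" endpoint, the wasm path, and the `process_fallback` name are made-up placeholders, and the `Go` class comes from the `wasm_exec.js` glue script TinyGo ships with):

```typescript
// Call the server API if we can; fall back to a local TinyGo WASM build otherwise.
declare const Go: any; // provided by TinyGo's wasm_exec.js glue script

async function processData(input: string): Promise<string> {
  if (navigator.onLine) {
    try {
      const res = await fetch("/api/process", { method: "POST", body: input });
      if (res.ok) return await res.text();
    } catch {
      // Network error: fall through to the WASM path.
    }
  }

  // Offline or API unavailable: run the same logic compiled to WASM with TinyGo.
  const go = new Go();
  const { instance } = await WebAssembly.instantiateStreaming(
    fetch("/process.wasm"),
    go.importObject
  );
  go.run(instance); // start the Go runtime
  // Assumes the Go side registered a global `process_fallback` function
  // (e.g. via syscall/js) rather than returning values through raw exports.
  return (globalThis as any).process_fallback(input);
}
```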
I've seen teams do this in the wild more than once.
also "segway" is a scooter, "segue" is a narrative transition
If I needed more, I would probably not use Go anyway, but a sharper tool instead.
Additionally, JIT optimisations mean that even for very computationally heavy tasks, JavaScript is surprisingly performant, unless they're one-offs or have a significant amount of computational variance.
So unless you need to compute something for several seconds as a one-off, there will typically be very little (if any) gain from trying to squeeze out a bit of additional performance this way.
However, this is all off the top of my head and from my own experimentation several years back. Someone please correct me if I'm wrong.
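A quick way to see the warm-up effect I'm describing (a hypothetical micro-benchmark; absolute numbers will vary wildly by engine and hardware):

```typescript
// Sum a large array repeatedly and watch the per-run time drop as the JIT
// tiers the hot loop up from the interpreter to optimized machine code.
const data = new Float64Array(1_000_000).map(() => Math.random());

function sum(xs: Float64Array): number {
  let total = 0;
  for (let i = 0; i < xs.length; i++) total += xs[i];
  return total;
}

for (let run = 0; run < 10; run++) {
  const start = performance.now();
  sum(data);
  console.log(`run ${run}: ${(performance.now() - start).toFixed(2)} ms`);
}
// The first run or two are typically the slowest; a one-off computation
// never gets past that un-optimized stage, which is where WASM can still win.
```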
This is a really cool project, I must admit. I'm also among those asking for something similar for Julia, since it has one of the strongest focuses on scientific computing. I'd love to see an equivalent of this for Julia; it would be really cool.
Coming back to my main point: what if the scientific computing project is too complicated and relies on features that aren't available in TinyGo? From what I remember, TinyGo and Go aren't 1:1 compatible.
How much impact could that have, though? I'm basically asking about the state of TinyGo, and whether it could handle the scientific computing as accurately as you describe. Still a great project nonetheless. Kudos.
Modern tools often make this tradeoff, like Astro, and none of these tools' authors claim you need to use the tool.
Yes, the pattern can be abused, but dogmatic rules against mixing languages may also entail downsides.
To be clear, I'm fine with importing .go from JS; it's the "go in file.js" thing I don't like.
And why exactly? Your original comment made sense but it was irrelevant to the OP. This one just doesn’t make sense but I could be missing something.
Did you want me to expand my thoughts on "backend-oriented language for frontend-oriented work", or does that address your query?
This is obvious but it needs to be said: backend languages are designed for backend work, and frontend languages for frontend work. Where this becomes a real pain point is where the design goals of the language run counter to the job at hand, and probably the chief one is around modelling business rules.
It is the job of the backend to constrain itself to the business rules and nothing else, and backend languages aid this by allowing one to model well-defined types, put in defensive guards, and deal with bad behaviour in a number of ways (e.g. there is often a distinction between runtime and non-runtime errors).
It is the job of the frontend (or at least, what my ideal frontend would be) to have good UX and to delegate to the backend for business rules. Indeed in my ideal, the backend would dictate the HTML content to the greatest degree possible. The coding that is needed on the frontend is for DOM manipulation to add some dynamic feel to the UI, and avoid full page reloads for small adjustments. A dynamically typed scripting language (e.g. Javascript) is good for this, because it is quick to hack and tweak an experimental view, review it with users, adjust, and repeat (which is at least how I go about getting the UX good enough).
Using a typed backend language on the frontend would get in the way of me just hacking the view, which is the appropriate mode of programming for a dumb client (dumb client being my ideal).
Also, and this is where it ties in with my original comment, I do think using a backend language on the frontend invites putting business rules in the UI code. I think that because I've been on projects where it has happened, and I understand the instinct: why pivot away from my frontend coding and go figure out what I need to modify on the backend, when it is seemingly just as easy to model the feature on the frontend? In fact, why not put all the logic in the frontend and let the backend be a dumb CRUD REST API / a GraphQL layer above the DB?
Conversely, if it is not easy to do much beyond DOM manipulation on the frontend (because the language and setup don't make it easy), and I am forced to modify the business rules in the backend, then fantastic.
A REST API needs to be descriptive enough, and have a wide enough contract with the client, that the response can modify the behaviour of the client so as to deal with any number of situations going on with the server. This works great if the response is HTML and the client is a browser, as the HTML dictates where and how to interact with the server (e.g. a link is a GET request to XYZ, followed by a page load). For JSON REST to meet that bar one needs JSON+HATEOAS, and having worked on a project that tried that, let me tell you that there is HATE aplenty to be found in trying to make it work.
So if we abandon the strict notion of what REST is, then what does JSON REST mean? In my experience, it's been a lot of arguing over which paths and methods and resources to use, which at best is a waste of time (because no one is going to see the choice; it's just whatever your JS lib is going to call and your backend is going to return), and at worst it puts bad constraints on how the backend is modelled, by forcing one to do it in terms of Resources for one's REST API to work effectively.
In my opinion, it's much better to use an RPC API which simply describes API "functions". These APIs can work over any number of actual DB resources (and sometimes none) and, importantly, leave you the time and the freedom to model your backend in terms of business rules rather than "RESTful" norms.
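To make the contrast concrete, here's roughly what I mean (a sketch with made-up endpoint and field names, not a prescription): the endpoint names the business operation, and the body carries its arguments.

```typescript
// RPC style: no debate about which resource/verb combination "correctly"
// models "approve an invoice and notify the customer".
async function approveInvoice(invoiceId: string, notifyCustomer: boolean) {
  const res = await fetch("/rpc/ApproveInvoice", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ invoiceId, notifyCustomer }),
  });
  if (!res.ok) throw new Error(`ApproveInvoice failed: ${res.status}`);
  return res.json();
}

// The resource-oriented alternative would be something like
// PATCH /invoices/123 { "status": "approved" }, plus a second call (or an
// implicit side effect) for the notification; the RPC form maps onto the
// business rule one-to-one instead.
```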