
175 points nateb2022 | 7 comments
glutamate ◴[] No.41521214[source]
Love the idea, but I am having a hard time finding out what the code looks like. Where can I see the code for spawn, receive and send?

> ... a command-line utility designed to simplify the process of generating boilerplate code for your project based on the Ergo Framework

Why is there any boilerplate code at all? Why isn't hello world just a five-line programme that spawns and sends hello world somewhere 5 times?

replies(3): >>41521800 #>>41522012 #>>41523043 #
whalesalad ◴[] No.41521800[source]
I was looking for the same thing. A project like this really needs an `examples/` directory with a few projects to sink your teeth into.

I've been thinking for years that if a project existed like this for Python it would take over the world. Golang is close, I guess.

replies(2): >>41521998 #>>41522357 #
nvarsj ◴[] No.41521998[source]
It's right there. https://github.com/ergo-services/examples

It looks like a close copy of Erlang APIs, albeit with the usual golang language limitations and corresponding boilerplate and some additional stuff.

Most interesting to me is that it has integration with actual Erlang processes. That could fill a nice gap, as Erlang is lacking in some areas like media processing - so you could use this to handle those kinds of CPU-bound / native tasks.

  func (a *actorA) HandleMessage(from gen.PID, message any) error {
    switch message.(type) {
      case doCallLocal:
        local := gen.Atom("b")
        a.Log().Info("making request to local process %s", local)
        if result, err := a.Call(local, MyRequest{MyString: "abc"}); err == nil {
          a.Log().Info("received result from local process %s: %#v", local, result)
        } else {
          a.Log().Error("call local process failed: %s", err)
        }
        a.SendAfter(a.PID(), doCallRemote{}, time.Second)
        return nil
      // ... remaining cases elided ...
    }
    return nil
  }
replies(4): >>41522124 #>>41522166 #>>41522289 #>>41524383 #
1. fidotron ◴[] No.41522289[source]
Honestly for Erlang integration just use NIFs or an actual network connection.

That Go code is a mess, and it demonstrates just what a huge conceptual gap there really is between the two. Erlang relies on many tricks to end up greater than the sum of its parts, like how receiving messages is actually pattern matching over a mailbox, and using a tail-recursive pattern to return the next handling state. You could conceivably do that in golang syntax, but it would be horrible and absolutely not play nicely with the golang runtime.
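
To make the gap concrete, here's a rough sketch (hypothetical names, not Ergo's API) of the closest idiom Go offers: a state-function loop over a channel. It can express "return the next handling state", but the mailbox is a plain FIFO channel, so there is no selective receive by pattern matching - which is exactly the part that doesn't translate.

  // A minimal sketch (hypothetical names, not Ergo's API): an actor as a
  // state-function loop. Each state handles one message and returns the next
  // state, roughly mirroring the tail-recursive handler, but the mailbox is a
  // plain FIFO channel, so there is no selective receive by pattern matching.
  package main

  import "fmt"

  type message any

  // stateFn handles one message and returns the state for the next one.
  type stateFn func(msg message) stateFn

  func run(mailbox <-chan message, initial stateFn) {
    state := initial
    for msg := range mailbox {
      if state = state(msg); state == nil {
        return // terminal state
      }
    }
  }

  func waiting(msg message) stateFn {
    switch m := msg.(type) {
    case string:
      fmt.Println("got:", m)
      return waiting // stay in the same state
    case int:
      fmt.Println("changing state after:", m)
      return done
    default:
      // Erlang would leave an unmatched message in the mailbox;
      // here we have no choice but to consume and drop it.
      return waiting
    }
  }

  func done(msg message) stateFn {
    fmt.Println("ignoring:", msg)
    return nil
  }

  func main() {
    mailbox := make(chan message, 2)
    mailbox <- "hello"
    mailbox <- 42
    close(mailbox)
    run(mailbox, waiting)
  }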

replies(2): >>41522803 #>>41522971 #
2. jerf ◴[] No.41522803[source]
The ideal situation for this sort of code is to basically treat it as marshalling code, which is often ugly by its nature, and have the "payload" processing be significantly larger than this, so it gets lost as just a bit of "cost of doing business" but is not the bulk of the code base.

Writing safe NIFs has a certain intrinsic amount of complication. Farming off some intensive work to what is actually a Go node (or any other kind of node, this isn't specific to Go) is somewhat safer, and while there is the caveat of getting the data into your non-BEAM process up front, once the data is there you're quite free.

Then again, I think the better answer is just to make some sort of normal server call rather than trying to wrap the service code into the BEAM cluster. There aren't actually a lot of compelling reasons to be "in" the cluster like that. If anything it's the wrong direction: you want to reduce your dependency on the BEAM cluster as your message bus.
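
As a rough sketch of what that looks like (hypothetical endpoint and types, nothing framework-specific): the heavy work sits behind an ordinary HTTP endpoint on the Go side, and the BEAM side calls it with any HTTP client instead of speaking the distribution protocol.

  // Minimal sketch (hypothetical endpoint and types): the Go side exposes the
  // heavy work as a plain HTTP endpoint; the BEAM side calls it with an
  // ordinary HTTP client instead of joining the cluster as a node.
  package main

  import (
    "encoding/json"
    "log"
    "net/http"
  )

  type request struct {
    Input string `json:"input"`
  }

  type response struct {
    Output string `json:"output"`
  }

  func handle(w http.ResponseWriter, r *http.Request) {
    var req request
    if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
      http.Error(w, err.Error(), http.StatusBadRequest)
      return
    }
    // ... the CPU-bound / native work happens here ...
    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(response{Output: "processed: " + req.Input})
  }

  func main() {
    http.HandleFunc("/process", handle)
    log.Fatal(http.ListenAndServe(":8080", nil))
  }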

(Non-BEAM nodes have no need to copy the tail-recursion idiom for processing the next state. That's a detail of BEAM, not an absolute requirement. Pattern matching out of the mailbox is a requirement... a degenerate network service that is pure request/response might be able to coincidentally ignore it, but it would be necessary in general.)

replies(1): >>41529126 #
3. amdsn ◴[] No.41522971[source]
NIFs have the downside of potentially bringing down the VM, don't they? It's definitely true that the glue code can be a pain and may involve warping the foreign code into having a piece that plays along nicely with what Erlang expects. I messed around with making Erlang code and Python code communicate using erl_interface, and the code to handle messages pretty much devolved into "have a running middleman process that invokes erl_interface utilities in Python via a cffi wrapper, then finally calls your actual Python code." Some library may exist or could be written to help with that, but it's a lot when you just wanna invoke some function elsewhere. I also have not tried using port drivers; the experience may be a bit different there.
replies(3): >>41524674 #>>41526978 #>>41529112 #
4. toast0 ◴[] No.41524674[source]
Yeah, NIFs are dynamically linked into the running VM, and generally speaking, if you load a binary library, you can do whatever, including crashing the VM.

BEAM has 4 ways to closely integrate with native code: NIFs, linked-in ports, OS process ports (fork/exec and communicate over a pipe), and foreign nodes (C nodes). You can also integrate through normal networking or pipes. Everything has pluses and minuses.
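
For a concrete feel of the OS process port option, here's a rough Go sketch of the external side, assuming the Erlang side opens the port with [{packet, 2}, binary]: every message arrives on stdin framed with a 2-byte big-endian length, and replies on stdout use the same framing. If this process crashes, the port closes and the VM keeps running.

  // Rough sketch of the external side of an OS process port, assuming the
  // Erlang side does open_port({spawn_executable, Path}, [{packet, 2}, binary]):
  // stdin delivers messages framed with a 2-byte big-endian length, and replies
  // on stdout use the same framing. A crash here only kills this process.
  package main

  import (
    "encoding/binary"
    "io"
    "os"
  )

  func readPacket(r io.Reader) ([]byte, error) {
    var n uint16
    if err := binary.Read(r, binary.BigEndian, &n); err != nil {
      return nil, err
    }
    buf := make([]byte, n)
    _, err := io.ReadFull(r, buf)
    return buf, err
  }

  func writePacket(w io.Writer, data []byte) error {
    if err := binary.Write(w, binary.BigEndian, uint16(len(data))); err != nil {
      return err
    }
    _, err := w.Write(data)
    return err
  }

  func main() {
    for {
      msg, err := readPacket(os.Stdin)
      if err != nil {
        return // port closed on the Erlang side; just exit
      }
      // ... do the native work; this sketch simply echoes the payload back ...
      if err := writePacket(os.Stdout, msg); err != nil {
        return
      }
    }
  }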

5. hosh ◴[] No.41526978[source]
NIFs do have that downside. Rust NIFs mitigate some of those risks, but that doesn't work as well with other languages.

Port drivers have their own tradeoffs, but you can retain the fault isolation.

6. pdimitar ◴[] No.41529112[source]
Yeah, a NIF can bring down the entire OS process, but I've used quite a few Rust NIFs with Elixir and never once had a crash. With Rust you can make sure nothing ever goes down, minus stuff that's completely out of your control, of course (like a driver crash).
7. pdimitar ◴[] No.41529126[source]
In my 8.5 years of Elixir practice I found it much easier to just use a Rust NIF or, in extreme cases, publish to an external job queue. Had success with one of Golang's popular ones (River); you schedule stuff for its workers to do their thing and they publish results to Kafka. Was slightly involved, but IMO much easier than trying to coax Golang / Java / C++ / Rust nodes into joining a BEAM cluster. Though I am also negatively biased against horizontal scaling (distribution / clusters), so there's also that.
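
Roughly what the Go worker side of that looks like with River (a sketch only; the job name and args here are made up, and the Postgres pool / client / Start wiring from River's docs is omitted):

  // Rough sketch of a River worker in Go (job name and args are made up;
  // the Postgres pool, river.NewClient and Start wiring from River's docs
  // is omitted). The Elixir side enqueues a job, the worker does the heavy
  // lifting and publishes the result, e.g. to Kafka.
  package main

  import (
    "context"

    "github.com/riverqueue/river"
  )

  // TranscodeArgs describes one job; the field names are hypothetical.
  type TranscodeArgs struct {
    Path string `json:"path"`
  }

  // Kind is the job name the enqueuing side references.
  func (TranscodeArgs) Kind() string { return "transcode" }

  type TranscodeWorker struct {
    river.WorkerDefaults[TranscodeArgs]
  }

  func (w *TranscodeWorker) Work(ctx context.Context, job *river.Job[TranscodeArgs]) error {
    // ... CPU-bound / native work on job.Args.Path, then publish the result
    // for the Elixir side to consume ...
    return nil
  }

  func main() {
    workers := river.NewWorkers()
    river.AddWorker(workers, &TranscodeWorker{})
    // river.NewClient(...) with these workers, plus client.Start(ctx),
    // would follow here, per River's documentation.
  }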