Most active commenters
  • (10)
  • jtolds(7)
  • hacknat(7)
  • jerf(6)
  • msbarnett(5)
  • david-given(5)
  • sagichmal(5)
  • pcwalton(4)
  • catnaroek(4)
  • pmarreck(4)

Go channels are bad

(www.jtolds.com)
298 points by jtolds | 164 comments
1. Jabbles ◴[] No.11210740[source]
Effective Go has always said:

Do not communicate by sharing memory; instead, share memory by communicating.

This approach can be taken too far. Reference counts may be best done by putting a mutex around an integer variable, for instance.

https://golang.org/doc/effective_go.html#sharing
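For illustration, the mutex-around-an-integer version is tiny (a sketch; names are made up):

    import "sync"

    // refCount guards a plain integer with a mutex, as Effective Go suggests.
    type refCount struct {
        mu sync.Mutex
        n  int
    }

    func (r *refCount) incr() {
        r.mu.Lock()
        r.n++
        r.mu.Unlock()
    }

    // decr reports whether the last reference was just released.
    func (r *refCount) decr() bool {
        r.mu.Lock()
        defer r.mu.Unlock()
        r.n--
        return r.n == 0
    }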

replies(3): >>11210862 #>>11210978 #>>11210990 #
2. advanderveer ◴[] No.11210780[source]
I see channels as an architectural option when it comes to structuring the communication between components of my software. Mutexes are another option that is more effective in situations where multiple threads may access the interface of a single structure. E.g. I use channels to distribute os.Signals throughout my software and a mutex for making a "context" structure thread safe. Right tool for the right job.
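The signal-distribution part, for example, looks roughly like this (a sketch; the fan-out details are assumed):

    import (
        "os"
        "os/signal"
    )

    func main() {
        sigs := make(chan os.Signal, 1)
        signal.Notify(sigs, os.Interrupt) // os/signal delivers signals onto the channel

        go func() {
            <-sigs
            // tell the interested components to shut down
        }()

        // ... rest of the program ...
    }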
replies(1): >>11211049 #
3. woodcut ◴[] No.11210785[source]
I find it hard to read something when the language used is so patronising.
replies(3): >>11210853 #>>11210860 #>>11210965 #
4. travjones ◴[] No.11210794[source]
This was a well-written and entertaining post. It represents the kind of self-reflection every programming community should encourage. Too often, devs are zealously supportive of their language of choice without considering thoughtful critiques that could make their chosen language even better, and/or present an alternate way of looking at things that makes one better at programming in general.
5. nevir ◴[] No.11210853[source]
Hmm, didn't get that vibe at all. Tone came off as a little exasperated to me, but mostly all about giving enough background to back up the claims
6. feathj ◴[] No.11210860[source]
The language is one thing. I am so tired of trying to read articles riddled with gifs.
replies(4): >>11210937 #>>11210944 #>>11210984 #>>11211067 #
7. poizan42 ◴[] No.11210862[source]
Reference counts are best done using interlocked increment/decrement primitives.
replies(2): >>11210871 #>>11211088 #
8. catnaroek ◴[] No.11210871{3}[source]
s/interlocked/atomic/
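In Go that's the sync/atomic package, e.g. (a minimal sketch):

    import "sync/atomic"

    var refs int64

    func retain() { atomic.AddInt64(&refs, 1) }

    func release() {
        if atomic.AddInt64(&refs, -1) == 0 {
            // last reference gone; free the resource
        }
    }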
9. lazyjones ◴[] No.11210886[source]
Saving the highscore in a goroutine becomes more interesting if that action can block or simply take a while, both more realistic occurrences than such minimalistic examples.
10. ◴[] No.11210897[source]
11. dilap ◴[] No.11210911[source]
Funny timing for me -- last Friday I rewrote some code from channels to traditional sync primitives (to the code's improvement), and I was musing in my head that while everyone always says "don't communicate by sharing, share by communicating, yada yada," it doesn't seem to work out that way in practice.

I think the article is well-written, and clearly comes from a place of deep experience and understanding. Good stuff.

replies(1): >>11211900 #
12. arethuza ◴[] No.11210937{3}[source]
I agree - very distracting. However, at least it made me find the "Disable Image Animations" option in my browser!

Edit: If you have web developer toolbar installed in Firefox then it is:

Images > Disable Images > Disable Image Animations

13. plorkyeran ◴[] No.11210944{3}[source]
Luckily the article doesn't use any images for content, so disabling images for the page (or deleting them all via the inspector) was an option, and made the article far more readable.

I wish I had the magic power of being able to read text with an animated gif next to it without getting distracted every other word that some people apparently have.

14. drdaeman ◴[] No.11210958[source]
Offtopic: that animated image is literally nauseating. Consider removing it, or making it animate just once and then halt. It was meant to be "fun" or whatever but, seriously, I wasn't able to read the text when it looped over and over in the corner of the eye.
replies(3): >>11211009 #>>11211062 #>>11211110 #
15. coldtea ◴[] No.11210965[source]
The tone of the article is quite casual and clear, and the contents are extremely accurate. If that's patronizing, don't ever read Rob Pike (or Linus for that matter)...
16. api ◴[] No.11210978[source]
I thought Go had gc. Why would you ever need reference counts?
replies(2): >>11211015 #>>11211294 #
17. amelius ◴[] No.11210979[source]
This article should have been titled: "Go channels considered harmful" :)
replies(1): >>11211038 #
18. coldtea ◴[] No.11210984{3}[source]
I'm so tired of reading negative comments about entirely subjective (others might appreciate the gifs) and totally skippable (you can simply ignore them) elements of a good post.
replies(2): >>11211103 #>>11211259 #
19. mike_hearn ◴[] No.11210990[source]
I'm not sure making an absolute statement ("do not...") followed by "... actually do, sometimes" is helpful. How is this different to any other language that gives you a toolbox of synchronisation primitives?
20. hacknat ◴[] No.11211002[source]
I think I've just come to accept that synchronization is the pain point in any language. It's callbacks, promises, and the single event loop in nodejs. It's channels in golang.

No one can come up with a single abstraction for synchronization without it failing in some regard. I code in Go quite a bit and I just try to avoid synchronization like the plague. Are there gripes I have with the language? Sure, CS theory states that a thread-safe hash table can perform just about as well as a non-thread-safe one, so why don't we have one in Go? However...

Coming up with a valid case where a language's synchronization primitive fails and then flaming it as an anti-pattern (for the clicks and the attention, I presume) is trolling and stupid.

replies(3): >>11211077 #>>11211292 #>>11211863 #
21. tantalor ◴[] No.11211009[source]
http://superuser.com/questions/23655/how-to-stop-animated-gi...
replies(1): >>11211254 #
22. voidlogic ◴[] No.11211015{3}[source]
Non-memory resources that exist within your application.

Example: Perhaps once no instances of an object are in use, you want to remove that object from persistent storage such as a DB.

replies(1): >>11211115 #
23. kbenson ◴[] No.11211038[source]
Thankfully it wasn't, and we were spared the ensuing discussion about how "..considered harmful" is good/bad/overused/misunderstood/causes cancer.
replies(1): >>11211106 #
24. jemfinch ◴[] No.11211049[source]
Even when that's the case, it's rare that fixed-size buffered or unbuffered channels are really the best option for communication between different components of your software. A simple mutex-guarded queue is easier to begin with and easier to evolve when requirements change. You can prioritize queued work trivially and transparently; you can add batch processing, monitoring, and resolve other production issues without any undue refactoring: it can all be encapsulated behind your mutex-guarded queue.

It's really quite a pity that Go's channel syntax treats channels as unique snowflakes, rather than just being sugar for calls into an interface that would allow the underlying channel implementation to differ based on software needs.
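Such a mutex-guarded queue is only a few lines to get started (a sketch, with made-up names):

    import "sync"

    type Job interface{} // placeholder for whatever you enqueue

    type Queue struct {
        mu    sync.Mutex
        items []Job
    }

    func (q *Queue) Push(j Job) {
        q.mu.Lock()
        q.items = append(q.items, j)
        q.mu.Unlock()
    }

    // Pop reports false when the queue is empty. Prioritization, batching or
    // monitoring can be added here later without touching any caller.
    func (q *Queue) Pop() (Job, bool) {
        q.mu.Lock()
        defer q.mu.Unlock()
        if len(q.items) == 0 {
            return nil, false
        }
        j := q.items[0]
        q.items = q.items[1:]
        return j, true
    }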

replies(1): >>11211635 #
25. hacknat ◴[] No.11211054[source]
Should have been titled, "Interesting Cases Where Go Channels Fail"...
26. jtolds ◴[] No.11211062[source]
Alright, alright, I froze a bunch of the animated gifs. There were too many, I get it.
replies(2): >>11211183 #>>11211243 #
27. jtolds ◴[] No.11211067{3}[source]
Okay, I froze most of the gifs.
28. david-given ◴[] No.11211075[source]
I've always really, really liked Ada's rendezvous-based concurrency.

There's more to it than I can really describe here, but in effect it allows you to treat a thread as an object with methods; calling a method on the object sends a message to the thread. The thread's main code can, at any point, block and wait for a message, or combination of messages.

The handling code looks like this:

    ...some code...
    accept DoSomething(value: in out integer) do
      ...some code here...
    end
    ...some more code...
That causes the thread to block and wait for the message. When the message is sent, the caller blocks, the receiver runs the handler, then the caller resumes.

The beauty here is that inside the message handler, you know that the caller is blocked... which means it's safe to pass parameters by pointer[*]. Everywhere the parameter's in scope, the parameter is safe to use. The type system won't let the thread store the pointer anywhere without copying the contents first, so you get zero-copy messaging and it's failsafe.

You can also do really cool stuff with timeouts, guards, automatic thread termination, etc. Here's a simple multithreaded queue (syntax and logic not guaranteed, it's been a while):

    loop
      select
        when usage < capacity =>
          accept Push(value: in integer) do
            data(usage) := value;
            usage := usage + 1;
          end;
      or
        when usage > 0 =>
          accept Pop(value: out integer) do
            usage := usage - 1;
            value := data(usage);
          end;
      or
        terminate;
      end select;
    end loop;
Multithreaded! Blocks the client automatically if they pop while the queue's empty or push while it's full! Automatically terminates the thread when the last connection goes away and the thread leaves scope! Thread safe! Readable!

I'd love to be able to do this in a more mainstream language.

[*] This is a simplification. Ada's pointers are not like other language's pointers.

replies(3): >>11211589 #>>11211639 #>>11211817 #
29. yetihehe ◴[] No.11211077[source]
> No one can come up with a single abstraction for synchronization without it failing in some regard.

Erlang did. Or at least it's as close as possible.

replies(2): >>11211145 #>>11211260 #
30. chrisseaton ◴[] No.11211088{3}[source]
I wonder if there are any compilers which can replace

    mutex.lock { x++ }
with a 'lock xaddl x 1' instruction.
replies(4): >>11211325 #>>11211581 #>>11211703 #>>11212909 #
31. Animats ◴[] No.11211091[source]
Good points.

The author points out that channel teardown is hard. He's right. Figuring out how to shut down your Go program cleanly can be difficult, especially since calling "close" on a closed channel causes a panic. You have to send an EOF on each channel so the receiver knows to stop. When you have a pair of channels going in opposite directions between two goroutines, and either end can potentially initiate shutdown, it gets messy.
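For the simple one-direction case the convention is that only the sender closes, and the receiver treats the close as EOF (a sketch):

    // Sender side: the owner of the channel is the only one that closes it.
    func produce(out chan<- int) {
        for i := 0; i < 10; i++ {
            out <- i
        }
        close(out) // the EOF signal
    }

    // Receiver side: range exits once the channel is closed and drained.
    func consume(in <-chan int) {
        for v := range in {
            _ = v // handle v
        }
    }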

At least in the original implementation, "select" for more than one option was really slow and complex. The single-input case was handled efficiently with generated code, but for N > 1, a very general library mechanism with several heap allocations for each message was used. This means having both a wait for data and a timeout in a select puts you through the slow path. Not good. Someone did an analysis of existing programs and found that N=1 was most common, N=2 was reasonably common, and N>2 was rare. N=2 needs special case support.

QNX interprocess messaging has a similar architecture. But they don't have the panic on close problem, and timeout is handled efficiently. So you can generally shut things down by closing something. As each process is notified about the close, it closes any channels with which it is involved, even if some other process has already closed them. The closes thus cascade and everything shuts down cleanly. Processes that time out at a message receive check to see if the rest of the system is still running, and shut down if it's not.

Go's "share by communicating" would be more useful if Go had Rust's borrow checker, so you could share data without races. Yes, Go has a run-time race detector, but that's only useful if races are common enough that they occur during testing.

replies(3): >>11211222 #>>11211330 #>>11211375 #
32. woodcut ◴[] No.11211103{4}[source]
Criticising how information is communicated is wholly valid.
replies(1): >>11211867 #
33. falcolas ◴[] No.11211106{3}[source]
Instead we get the debate of whether "and you should feel bad" is appropriate in a title.

I don't believe it is, but ultimately click bait is click bait.

replies(1): >>11211175 #
34. PaulHoule ◴[] No.11211108[source]
Pixie dust that makes concurrency problems go away is an antipattern.
35. michaelwww ◴[] No.11211110[source]
I felt the same way so I made a bookmarklet to blank out images. I see it's not needed now, but I'm set for future pages that do this.

javascript:(function (){var x = document.getElementsByTagName("img");for (i = 0; i < x.length; i++){x[i].setAttribute("src","");}}());

36. sagichmal ◴[] No.11211112[source]
This is a frustrating and overly-exasperated post which reaches conclusions that have always been part of the Go canon. APIs should be designed synchronously, and the callers should orchestrate concurrency if they choose -- yes! Channels are useful in some circumstances, but if you just want to synchronize access to shared memory (like the author's example) then you should just use a mutex -- yes! These are well understood truths.

Novices to the language have a tendency to overuse channels. Here is Andrew Gerrand addressing precisely this point two years ago at GopherCon: https://www.youtube.com/watch?v=u-kkf76TDHE&t=815

Neither the referenced performance characteristics via Tyler Treat, nor the FUD surrounding channel-based program architecture, invalidate channels generally. One does have to think carefully about ownership hierarchies: only one goroutine gets to close the channel. And if it's in a hot loop, a channel will always perform worse than a mutex: channels use mutexes internally. But plenty of problems are solved very elegantly with channel-based CSP-style message passing.

It's unfortunate that articles like this are written and gain traction. The explicit instruction to [new] Go programmers is that they should avoid channels, even that they are badly implemented, and both of those things are false.

replies(7): >>11211239 #>>11211262 #>>11211272 #>>11211656 #>>11211660 #>>11214091 #>>11228064 #
37. api ◴[] No.11211115{4}[source]
Does Go not have finalizers? These have been mostly solved problems since the Smalltalk era. I haven't learned Go yet, and from what I read I'd be better off with Rust or something that would stretch my brain more, like Haskell. When I read about it I get the sense that we are reinventing stuff from the 90s. But hey, it's hip.
replies(3): >>11211141 #>>11211151 #>>11211331 #
38. gravypod ◴[] No.11211129[source]
In non-critical things (not important to execution speed), is it still acceptable to use Go channels? I'm always wary of using a mutex because then I have to spend a much larger amount of time checking to see if it will lock.
39. catnaroek ◴[] No.11211141{5}[source]
Sometimes you need to guarantee that resources are cleaned up in a timely fashion. Finalizers don't help here.
40. hacknat ◴[] No.11211145{3}[source]
I'm not saying Erlang isn't great, but if you need to pass a large data structure around between Erlang processes, then copying it on every message send starts to cost a lot and you need to share memory. You can do it in Erlang, but I'd hardly call it great, and you're avoiding the sync primitive that Erlang offers.
replies(2): >>11211256 #>>11211361 #
41. hacknat ◴[] No.11211151{5}[source]
Piggybacking off @catnaroek: Go does have finalizers too, though.
42. GhotiFish ◴[] No.11211155[source]
a nice feature of this post would be the ability to click on images to hide them.

Normally I have ublock to remove distracting elements.

43. kbenson ◴[] No.11211175{4}[source]
Eh, I don't mind click bait titles as long as the article delivers, and the title isn't too egregious in its manipulation. In this case, I think it's pretty well understood by most that the title is poking fun, since taking it truthfully is fairly ridiculous.
44. ◴[] No.11211183{3}[source]
replies(1): >>11211226 #
45. ◴[] No.11211195[source]
46. helper ◴[] No.11211222[source]
The panic when calling close on a closed channel is a bit annoying. Recently I've been using x/net/context to signal goroutines instead of closing a channel. The cancel function it hands back can safely be called multiple times.
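Roughly like this (a sketch built on the x/net/context API):

    import "golang.org/x/net/context"

    func main() {
        ctx, cancel := context.WithCancel(context.Background())

        go func() {
            for {
                select {
                case <-ctx.Done(): // closed once cancel() is called
                    return
                default:
                    // do a unit of work
                }
            }
        }()

        // ...
        cancel() // unlike close(), calling this twice is harmless
    }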
47. ◴[] No.11211226{4}[source]
48. mordocai ◴[] No.11211239[source]
As a non-go programmer, I'm pretty sure the author made some very good objective arguments that channels are in fact badly implemented.
replies(2): >>11211298 #>>11211519 #
49. gkya ◴[] No.11211254{3}[source]
https://en.wikipedia.org/wiki/List_of_web_browsers
50. catnaroek ◴[] No.11211256{4}[source]
How about Rust's “share by transferring ownership”?

(0) In the general case, whatever object you give to a third party, you don't own anymore. And the type checker enforces this.

(1) Unless the object's type supports shallow copying, in which case, you get to keep a usable copy after the move.

(2) If the object's type doesn't support shallow copying, but supports deep cloning, you can also keep a copy [well, clone], but only if you explicitly request it.

This ensures that communication is always safe, and never more expensive than it needs to be.

---

Sorry, I can't post a proper reply because I'm “submitting too fast”, so I'll reply here...

The solution consists of multiple steps:

(0) Wrap the resource in a RWLock [read-write lock: http://doc.rust-lang.org/std/sync/struct.RwLock.html], which can be either locked by multiple readers or by a single writer.

(1) The RWLock itself can't be cloned, so wrap it in an Arc [atomically reference-counted pointer: http://doc.rust-lang.org/std/sync/struct.Arc.html], which can be cloned.

(2) Clone and send to as many parties as you wish.

---

I still can't post a proper reply, so...

Rust's ownership and borrowing system is precisely what makes RWLock and Arc work correctly.

replies(1): >>11211307 #
51. muraiki ◴[] No.11211259{4}[source]
The gifs were actually causing Firefox to periodically freeze for me. For some reason it worked in reader mode, even though the gifs were still shown. This makes no sense to me, but in the end whatever was going on with the gifs initially caused the article to not only be unreadable but to negatively affect my entire browser. As such, I think it's reasonable to point this out in this case.
52. jerf ◴[] No.11211260{3}[source]
I've been bitten by the fact that Erlang lacks a channel-like primitive. You've got half-a-dozen "pool" abstractions on github because it's actually sorta hard to run a pool on pure asynchronous messages when there is absolutely no way to send a message out to "somebody", the way Go channels can have multiple listeners. I know that would only work on a local node but there's already a couple of functions that have already penetrated that abstraction anyhow.

You also have to deal with mailboxes filling up, still have problems with single processes becoming bottlenecks, and the whole system is pervasively dynamically typed which is fine until it isn't.

It is pretty good, but it's not the best possible. (Neither is Go. I still like Erlang's default of async messages better in a lot of ways. I wish there was a way to get synchronous messages to multiple possible listeners somehow in Erlang, but I still think async is the better default.)

replies(1): >>11211606 #
53. bad_user ◴[] No.11211262[source]
> APIs should be designed synchronously, and the callers should orchestrate concurrency if they choose

Wait, why would you say that?

In general, if "orchestrating concurrency" involves guarding access to shared mutable state, then you can't orchestrate it at the callers site. It would be a massive encapsulation leak, because synchronization is not composable, requires special knowledge, plus you don't necessarily know how to synchronize unless you have knowledge about internals. Furthermore, because it is after the fact, your only choice of handling it is by means of mutexes, which has really terrible performance characteristics. Even if you could do ordering by other means, you end up paying the price of LOCK XCHG or whatever mutexes translate to, not to mention that you'll have problems if you want (soft) real-time behavior, because now you can end up with both dead-locks and live-locks.

And this brings us to another problem. If you end up doing such synchronization in Go, then Go's M:N multi-threading ends up doing more harm than good, because if you need such synchronization, you also need to fine tune your thread-pools and at this point 1:1 would be better. On top of 1:1 platforms you can build M:N solutions, but it doesn't work well in the other direction.

> Novices to the language have a tendency to overuse channels

Novices to software development have a tendency to overuse mutexes as well.

replies(4): >>11211326 #>>11211353 #>>11211663 #>>11222537 #
54. hamburglar ◴[] No.11211264[source]
Non Go programmer here. Can someone explain the initial goroutine leak that is being addressed? I don't see the issue.
replies(4): >>11211441 #>>11211510 #>>11211514 #>>11211630 #
55. pklausler ◴[] No.11211265[source]
Channels are great, but I prefer lazy lists.
56. mordocai ◴[] No.11211272[source]
In addition, the author points out existing go libraries that people use that use channels when they shouldn't, so apparently the go language community needs more people pointing out that this is a bad idea.

(I decided to make a new comment rather than edit my existing comment)

57. zzzcpan ◴[] No.11211292[source]
> I think I've just come to accept that synchronization is the pain point in any language.

No, it's not. Everything is easier with event loops, because everything is always synchronized. And since it is, there is no need for concurrent hash tables, locks, channels, you name it. There are also no more shutdown and cancellation problems; you get them for free, more easily than anything. The only thing left is a __consistent__ API with callbacks. But as long as you go with higher order functions you are not going to have any problems.

replies(1): >>11211318 #
58. chrsm ◴[] No.11211294{3}[source]
Example: Recently built a small service that responds to requests and walks various files looking for data.

The service can be asked to unload (close) the file, but it's hard to say whether it's in-use at the time without some kind of reference count to current requests using the file.

replies(1): >>11211343 #
59. ◴[] No.11211293{4}[source]
60. sagichmal ◴[] No.11211298{3}[source]
Read Tyler's original article for a less FUDdy take on it. Channels are always slower than mutexes, which is obvious when you understand their implementation. They are definitely not badly implemented as a general rule.
replies(1): >>11212165 #
61. hacknat ◴[] No.11211307{5}[source]
What if you want multiple readers at once, and a writer thrown in once in a while?

Edit:

Okay, my point was that the sync primitives of most languages alone can't save you and you're using RWLock in your example, so clearly ownership by itself doesn't solve everything, right? That's the point I'm trying to make.

Edit2:

Hmm, I'll have to check that out. I don't know that I would call Rust's ownership model super easy to reason about, but it is nice that the compiler prevents you from doing so much stupid $#^&.

replies(2): >>11211466 #>>11213916 #
62. hacknat ◴[] No.11211318{3}[source]
What if you need to do a compute intensive task on a large data structure? You know you might need to take advantage of more than one core and sharing memory between the threads will be difficult. Assuming you're talking about nodeJS, nodeJS serializes and deserializes objects in and out of C++ land in order to do compute intensive tasks. Hardly a catch all!

Are event loops good at some things? Of course! Are they good at everything? Are you high?

replies(1): >>11211620 #
63. nly ◴[] No.11211325{4}[source]
It's conceivable, if you made mutexes a compiler/language intrinsic, but as long as you're calling pthread_mutex_lock, the compiler has to assume that the pthread library, which is linked dynamically, is interchangeable and can do anything it likes to memory. That includes mutating x.
replies(1): >>11213078 #
64. sagichmal ◴[] No.11211326{3}[source]
> In general, if "orchestrating concurrency" involves guarding access to shared mutable state, then you can't orchestrate it at the callers site.

Shared mutable state is generally behind an API boundary. I'm talking about the exported method set of that API. That is,

    func (f *Foo) Update(i int) (int, error)             // yes
    func (f *Foo) Update(i int) (<-chan int, error)      // no
    func (f *Foo) Update(i int, result chan<- int) error // no
65. jerf ◴[] No.11211330[source]
"When you have a pair of channels going in opposite directions between two goroutines, and either end can potentially initiate shutdown, it gets messy."

It does get messy to do it correctly, but I've found in the end it comes out less messy to have a channel communicating back to the sender that can be closed if you want the recipient to be able to close the channel. I haven't needed it very often, but it happens. It still ends up simpler than hacking around the problem by trying to "close" the channel from the wrong end and the resulting panic handling.

For concreteness, at least from what I've experienced, the "messiness" is that if you close one of these channels, you may have to "drain" the other channel lest you let the other side block. If the other side is only using the channel in a "select" block with other options you may not need to but if it ever does a "bare" send you need to wait for the other end to send its close. This can be particularly complicated if for some reason the "draining" process has to do something other than drop the messages on the floor.
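Concretely, the back-channel version looks something like this (a sketch; names made up):

    // The receiver owns `done` and closes it to ask the sender to stop;
    // the sender still performs the actual close of the data channel.
    func sender(out chan<- int, done <-chan struct{}) {
        for i := 0; ; i++ {
            select {
            case out <- i:
            case <-done:
                close(out)
                return
            }
        }
    }

    func receiver(in <-chan int, done chan<- struct{}) {
        for v := range in {
            if v > 100 { // receiver decides to initiate shutdown
                close(done)
                break
            }
            // handle v
        }
        // drain so the sender is never stuck on a bare send
        for range in {
        }
    }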

66. pcwalton ◴[] No.11211331{5}[source]
Go does have runtime.SetFinalizer: https://golang.org/pkg/runtime/#SetFinalizer

But beware of finalizer ordering issues.

67. nly ◴[] No.11211343{4}[source]
That's kinda what dup() is for. Reference counting happens in the kernel.
68. jerf ◴[] No.11211353{3}[source]
"In general, if "orchestrating concurrency" involves guarding access to shared mutable state, then you can't orchestrate it at the callers site."

What this generally means in Go is that it is an antipattern for your library to provide something like "a method that makes an HTTP request in a goroutine". In Go, you should simply provide code that "makes an HTTP request", and it's up to the user to decide whether they want to run that in a goroutine.

The rest of what you're talking about is a completely different issue.

Channels are smelly in an API. IIRC in the entire standard library there's less than 10 functions/methods that return a channel. But the use case does occasionally arise.

69. felixgallo ◴[] No.11211361{4}[source]
Erlang lifts sufficiently large binaries into refs, which isn't perfect but pragmatically helps a lot with that problem.
70. r_sreeram ◴[] No.11211375[source]
> N=1 was most common

Why would somebody use "select" for this at all? I.e., if you were going to write:

  select {
    case send/receive statement:
      statement
      ...
  }
Why not just write:

  send/receive statement
  statement
  ...
What am I missing?
replies(2): >>11211433 #>>11213162 #
71. b169118 ◴[] No.11211393[source]
How can I play the gifs again?
72. spenczar5 ◴[] No.11211433{3}[source]
I believe the GP was referring to select-with-one-case-and-a-default, like

  select {
    case <- ch:
    default:
  }
73. aardvark179 ◴[] No.11211441[source]
I'm not a go programmer, but think of it like this.

You start a game, and that starts a goroutine that goes round in a loop getting scores from a channel. You have players which also have references to the channel and who put scores onto it.

When all the players have left the only thing that has access to the channel is the game's goroutine. It's not consuming CPU itself because it's simply waiting for something to be put on its channel, but it does still have its stack and other resources, and it now has no way to exit.

You can get this sort of resource leak in lots of ways in concurrent systems, and they all essentially boil down to the same thing: a thread or goroutine, or whatever, is waiting on a resource that nothing else has a reference to anymore, and there is no other way to end it.

74. pcwalton ◴[] No.11211466{6}[source]
> Okay, my point was that the sync primitives of most languages alone can't save you and you're using RWLock in your example, so clearly ownership by itself doesn't solve everything, right?

The thing is that Rust ensures that you take the locks properly. It's an compile-time error to forget to take the lock or to forget to release the lock†. You can't access the guarded data without doing that.

† For lock release, it's technically possible to hold onto a lock forever by intentionally creating cycles and leaking, but you really have to go out of your way to do so and it never happens in practice.

75. ◴[] No.11211481[source]
76. r_sreeram ◴[] No.11211510[source]
> Can someone explain the initial goroutine leak that is being addressed?

The "for score := range g.scores {" loop runs forever, since nothing ever closes the g.scores channel. I.e., the "range" only terminates when the channel is explicitly closed. Even if there are no current senders on the channel, and even if nobody else holds a reference to the channel (and thus nobody else could potentially create a new sender and start sending on that channel), Go doesn't realize it (garbage collection doesn't help here). The "range" waits forever.

Thus, all goroutines that run this code (via calls to NewGame(), via "go g.run()") will run forever, and leak, as long as something else in the program is running. When the rest of the program is done, Go will correctly detect that all these leaked goroutines are blocked and thus it's a deadlock, leading Go to panic ("no goroutines can proceed") and terminate.
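A stripped-down version of that pattern (the article's actual code is approximated here):

    type Game struct {
        scores chan int
    }

    func NewGame() *Game {
        g := &Game{scores: make(chan int)}
        go g.run() // leaks: nothing ever closes g.scores
        return g
    }

    func (g *Game) run() {
        for score := range g.scores { // waits forever once the senders are gone
            _ = score // update the high score, etc.
        }
    }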

77. ◴[] No.11211514[source]
78. voidlogic ◴[] No.11211519{3}[source]
Hammers make poor screw drivers. Mutexes, atomic integer operations, and channels (buffered and unbuffered) all have their place. You will think any of these is "badly implemented" if you choose the wrong tool for the job.
79. pjmlp ◴[] No.11211581{4}[source]
Java and .NET will do it if you make use of InterlockedIncrement APIs.
replies(1): >>11211672 #
80. IanCal ◴[] No.11211589[source]
If I've understood this correctly, this sounds very much like Erlang's concurrency. Elixir too, I guess, by extension.

Old, tatty code but:

https://github.com/IanCal/semq/blob/master/src/messagequeue....

This made message queues that would pause if there was a pop on an empty queue (for long-polling), supported removing everything, and, if a new 'client' connected while another was waiting for an item, sent an error message to the original client. I'm sure there's a neater way of doing it, but this sat and ran for quite a while for me and didn't take long to write :)

Generally, the loops are achieved by making an infinitely recursive function call, and you can therefore switch between major behaviours by having multiple functions.

For a quick syntax thing, sending a message is "address ! message", and I think the "accept" in your code is equivalent to a 'receive' in mine.

You won't have the same type safety, but the general pattern of just blocking and waiting safely is there. It's a fun language, and people seem to be pretty happy with Elixir these days too (built on top of it).

There's some better examples here:

http://learnyousomeerlang.com/the-hitchhikers-guide-to-concu...

http://learnyousomeerlang.com/more-on-multiprocessing

replies(1): >>11213369 #
81. divan ◴[] No.11211592[source]
I didn't get the point of the example with Game and Player. The code behaves exactly how it's told to. If you need some logic to handle the condition where all players have disconnected - you should implement it, no matter how. Maybe you want to wait for some time for new players and tear down only after this timeout. Or maybe you want to reuse this game object, moving it to some kind of pool (like sync.Pool). Or perhaps you really want to wait forever for returning players. It's not a 'mutex vs channels' example in any way.

It's not 'fix the goroutine leak'; it's 'write the logic you want'. It's that simple.

Next, channels are slow, really? A send-receive operation on an unbuffered channel typically takes around 300ns. Nanoseconds. 300 nanoseconds in exchange for a nice and safe way to express concurrent things - I wouldn't even call it a tradeoff. It's not slow at all in the vast majority of cases. Of course, if you write software that does care about nanoseconds and channels become your bottleneck - congratulations, you're doing great, and you probably have to switch to C++, Rust or even Assembler.

But, please, don't mislead people by telling them that channels are slow. They could be slow for your exact case, but that's not the same thing.

I don't really get the tone and arguments of the article. Some of the points are totally valid, but they easily fall into the 'hey folks, be careful about this small thing you may misunderstand at the beginning' category. Pity.

replies(3): >>11211665 #>>11211666 #>>11211933 #
82. yetihehe ◴[] No.11211606{4}[source]
> You've got half-a-dozen "pool" abstractions on github because it's actually sorta hard to run a pool on pure asynchronous messages when there is absolutely no way to send a message out to "somebody"

You can store receivers in an ETS table and implement any type of selection algorithm you want, or have some process which selects workers. There is no default method, because one default method is not good for everyone and people will complain that it's not good for them. Implementing pools is easy in Erlang; I've done tailored implementations for several projects.

> You also have to deal with mailboxes filling up

Yeah, unless you implement a back-pressure mechanism like waiting for confirmation of receipt. In ALL systems you have to deal with filling queues.

> I wish there was a way to get synchronous messages to multiple possible listeners somehow in Erlang

You can implement a receiver which waits for messages and exits when all are received or after a timeout; it's trivial in Erlang but I haven't needed it yet. Here is a simple example:

    receive_multi(Acc,0) ->
        Acc;
    receive_multi(Acc,Num) ->
        receive {special,Data} ->
            receive_multi([Data|Acc],Num-1)
        after 5000 ->
            Acc
        end.
replies(2): >>11211833 #>>11212236 #
83. zzzcpan ◴[] No.11211620{4}[source]
Well, no, I'm not talking about nodejs. Just in general, about event loops in programming languages.

> What if you need to do a compute intensive task on a large data structure?

That's a very specialized thing, not something general, that everyone needs. But either way there is no problem abstracting it away with higher order functions in event loops.

However, everyone will most definitely need networking and doing networking by sharing memory between threads is very very hard. Event loops are much easier for that.

replies(2): >>11212220 #>>11212288 #
84. divan ◴[] No.11211630[source]
It's not actually a leak. It's a program explicitly doing 'run a goroutine and don't care about it anymore'. If the program logic wants this - it's ok. If the author wants it to finish on some condition, but didn't write the condition code (like in this article) - it's a leak, but it's purely the author's mistake.
replies(1): >>11211684 #
85. richard_todd ◴[] No.11211635{3}[source]
That's an excellent example (in a long list) of things that would be possible with generics, or even parameterized packages. They could have provided an interface Channel[T] with syntax sugar if desirable. But as it is, everything in Go that can handle multiple types has snowflake status.
86. majewsky ◴[] No.11211639[source]
If I read the snippet correctly, that's a stack, not a queue.
replies(1): >>11212245 #
87. elcct ◴[] No.11211641[source]
There is probably a large number of developers who think "OMG my Go code doesn't have any channels and goroutines. Am I doing this right?" If you try to force a solution that isn't quite right for the given problem, then well, have fun. The case presented by the author I would naturally program with mutexes, as methinks using channels/goroutines is overkill for this task.
88. _0w8t ◴[] No.11211642[source]
The article presents very similar arguments to those that I read in a book from 1982 or so. It discussed channels in Ada and pointed out that, without super smart compilers that would turn channels into mutex operations, the code using channels would be slower and more complex due to the need to create extra threads.

Based on that, I can predict that in 2050 I will also read an article discussing channels in yet another language and advocating using mutexes instead...

89. admiun ◴[] No.11211656[source]
"APIs should be designed synchronously, and the callers should orchestrate concurrency if they choose"

Just to add to this, I found the blog post he mentions[1] towards the bottom, which supports this conclusion, a really good read.

[1] http://journal.stuffwithstuff.com/2015/02/01/what-color-is-y...

90. soroso ◴[] No.11211659[source]
From http://go-proverbs.github.io/:

Channels orchestrate; mutexes serialize.

91. jtolds ◴[] No.11211660[source]
This post was written primarily as a response to http://www.informit.com/articles/article.aspx?p=2359758, which, when it came out last June, frustrated me to no end. It then sat in my drafts folder for months until I found myself patiently attempting to bring another experienced programmer, new to Go, up to speed on best practices.

If it truly is the accepted best practice for novices to avoid channels, then that PR campaign has been tried and found lacking. EDIT: whoops, read parent wrong.

replies(1): >>11211698 #
92. pmarreck ◴[] No.11211663{3}[source]
Reading this... only makes me gladder that I'm pursuing work in the Erlang/Elixir space, where messaging "just works" and concurrency "just works" and immutability "just works" (and new processes take a microsecond to spin up) and tearing anything down is basically a nonissue as failure is embraced and logged at every turn and cascading teardowns simply happen automatically depending on how the processes are linked

and all this turns out to be a really amazing system of paradigms when designing apps to work in the real world

replies(1): >>11212034 #
93. dllthomas ◴[] No.11211665[source]
I've done work where 300 nanoseconds is a noticeable chunk of my time budget...

(not, of course, in go...)

94. nemothekid ◴[] No.11211666[source]
> Of course, if you write software that does care about nanoseconds and channels become your bottleneck - congratulations, you're doing great, and you probably have to switch to C++, Rust or even Assembler.

That's ridiculous. I could switch my entire language... or I could just use a lock?

First off, looking at Tyler's post, he measured unbuffered channels taking 2200ns vs 400ns for the lock solution - a 5x speedup. That is a large gain, especially when dealing with a program that may have high lock contention. Switching from Go to C++ or Rust may not even gain you that much in terms of throughput - they are both compiled code, and moving to either language will mainly just alleviate magic STW pauses - acquiring a lock likely won't be any faster.

Second, on the point of Game and Player, the logic to handle the condition where players disconnect is still simpler to implement with locks - it's 2 lines, and there is no need to bring in sync.Pool or introduce arbitrary timeouts.

Channels are slower than locks. In more complex applications, channels are easier to reason about than locks, but those tend to be cases where you care more about message passing than state synchronization.

replies(1): >>11211692 #
95. chrisseaton ◴[] No.11211672{5}[source]
But I think InterlockedIncrement is just 'lock xaddl x 1', so using InterlockedIncrement would be to do it manually.

I'm asking if any compiler can take a statement which uses a high level, general purpose lock and increments a variable inside it using conventional language expressions, and convert it to use 'lock xaddl x 1' (perhaps via InterlockedIncrement or whatever other intrinsics you have) instead.

I only know Java well, not .NET, but I'm pretty sure no Java compiler does it.

replies(1): >>11213205 #
96. jtolds ◴[] No.11211684{3}[source]
There is a deliberate leak in the example program. The author was attempting to illustrate that fixing the leak using just channels would be a challenge.
replies(1): >>11211763 #
97. pcwalton ◴[] No.11211692{3}[source]
> Switching from Go to C++ or Rust may not even gain you that much in terms of throughput - they are both compiled code, and moving to either language will mainly just alleviate magic STW pauses

That is not the only performance-related difference between those language implementations. It's not even the most significant one.

For instance, there is a large difference between a compiler with LLVM's optimizations and one without an SSA backend at all.

replies(1): >>11212719 #
98. d_theorist ◴[] No.11211698{3}[source]
I think you have the sense of the last sentence of the parent comment backwards.
replies(1): >>11211717 #
99. mike_hearn ◴[] No.11211703{4}[source]
It's not quite the same thing but recent JVMs can translate synchronised blocks into Intel TSX transactions, which means multiple threads can run inside the lock at once, with rollback and retry if interference is detected at the hardware (cache line) level. So yeah .... almost. But it's fancy and cutting edge stuff.
replies(1): >>11211715 #
100. ◴[] No.11211715{5}[source]
101. jtolds ◴[] No.11211717{4}[source]
Oh, so I did.
102. tptacek ◴[] No.11211719[source]
You can express unbounded buffered channels in Go straightforwardly with the stacked channel idiom.
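For example, one way to put an unbounded buffer between an input and an output channel (a sketch; whether this is exactly the "stacked channel" formulation is a guess):

    // unbounded forwards values from in to out, buffering as many as needed.
    // It closes out once in is closed and the buffer has drained.
    func unbounded(in <-chan int, out chan<- int) {
        var buf []int
        for in != nil || len(buf) > 0 {
            var send chan<- int
            var next int
            if len(buf) > 0 {
                send = out // only enable the send case when there is data
                next = buf[0]
            }
            select {
            case v, ok := <-in:
                if !ok {
                    in = nil // stop selecting on the closed input
                    continue
                }
                buf = append(buf, v)
            case send <- next:
                buf = buf[1:]
            }
        }
        close(out)
    }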
103. richard_todd ◴[] No.11211748[source]
I enjoyed the article and nodded along as I read it. But after, I felt like it was overstating its case a little. It puts up a toy implementation that kinda works, and then explains that to make it act correctly in the real world you have to add uglier code. I can't really see blaming the language constructs for that... show me a language where that doesn't happen!

I do appreciate that the article tries to deflate some of the hype about channels that you see when first investigating Go (I know I bought into it at first). After a little experience, I settled into a pattern of using channels for large-scale and orderly pipelines, and more typical mutexes and conditions for everything else. They have strengths and weaknesses, like all tools.

104. courtf ◴[] No.11211763{4}[source]
Synchronizing access to a memory address isn't really the use-case for channels. I think that's fairly well understood by the Go programmers I work with. This example demonstrates why, but it prefaces the discussion by implying this is the standard practice, which I think is misleading.
replies(1): >>11213854 #
105. msbarnett ◴[] No.11211817[source]
The older I get, the clearer it is that Ada was the answer to the last 2 decades worth of problems (fast, able to go low-level when you need to, very type safe, easy concurrency primitives) and we all just ignored it because it wasn't fashionable.
replies(3): >>11212044 #>>11212403 #>>11212438 #
106. jerf ◴[] No.11211833{5}[source]
"You can store receivers in ets table and implement any type of selection algorithm you want or have some process which selects workers."

Your process that selects workers has no mechanism for telling which are already busy.

It is easy to implement a pool in Erlang where you may accidentally select a busy worker when there's a free one available. Unfortunately, due to the nature of the network and the way computations work at scale, that's actually worse than it sounds; if one of the pool members gets tied up, legitimately or otherwise, in a long request, it will keep getting requests that it ignores until done, unnecessarily upping the latency of those other requests, possibly past the tolerance of the rest of the system.

"You can implement receiver which waits for messages and exits when all are received or after timeout, it's trivial in erlang but I haven't needed it yet."

That's the opposite of the direction I was talking about. You can't turn that around trivially. You can fling N messages out to N listeners, you can fling a message out to what always boils down to a random selection of N listeners (any attempt to be more clever requires coordination which requires creating a one-process bottleneck), but there is no way to say "Here's a message, let the first one of these N processes that gets to it take it".

You wouldn't have so many pool implementations if they weren't trying to get around this problem. It would actually be relatively easy to solve in the runtime but you can't bodge it in at the Erlang level; you simply lack the necessary primitives.

replies(2): >>11212031 #>>11213639 #
107. nostrademons ◴[] No.11211863[source]
Because concurrency is hard. You can't reason about concurrent programs the way you can about sequential ones, and no abstraction is going to completely fix that.

After having worked with it a fair bit, however, I'm beginning to really like Promises + async/await (as in ES7, Python 3.4, and C#). It manages to keep most of the concurrency explicit while still letting you use language mechanisms like semicolons, local variables, and try/catch for sequencing. If you make sure your promises are pure, you can also avoid the race conditions & composability problems of shared state + mutexes. (Although that requirement is easier said than done...it'll be interesting to see what Rust's single-writer multiple-reader ownership system brings to the mix.)

108. coldtea ◴[] No.11211867{5}[source]
Criticising how information is communicated yes.

But saying "I hate articles riddled with gifs" is far from Marshall McLuhan and Edward Tufte.

Especially since it's not some shallow Buzzfeed post, but a detailed technical explanation of a programming-related issue that the author took time and effort to write -- which makes complaining about its presentation petty.

The author obviously wanted to lighten it up and add some fun elements. And he provided his opinion and expertise for free. These kind of comments can mainly serve to discourage him from writing more, not get him to "improve" his communication.

replies(1): >>11212476 #
109. gort ◴[] No.11211900[source]
I had a similar experience in the opposite direction. Two weeks ago I moved some code from a mutex-based design (including a map of string to mutex, which itself needed to be behind a mutex) to channels, and I love it, though the result seemed about 10% slower.

I guess the message is: everything has its place; don't make a straight-jacket for yourself.

replies(1): >>11212662 #
110. f2f ◴[] No.11211918[source]
the Clive[1] system uses a fork of the Go language which allows readers to close channels (I think it's the most significant difference between the languages, if not the only one):

http://syssoftware.blogspot.com/2015/06/lsub-go-changes.html

--

1: http://lsub.org/ls/clive.html

111. stcredzero ◴[] No.11211933[source]
> Of course, if you write software that does care about nanoseconds and channels become your bottleneck - congratulations, you're doing great, and you probably have to switch to C++, Rust or even Assembler.

Why not profile to identify which channels are a bottleneck and just replace them with a Disruptor?

https://github.com/smartystreets/go-disruptor

112. yetihehe ◴[] No.11212031{6}[source]
Then it's even easier, pool selector just hands out free workers and deletes them from queue. When worker is free, it just sends a message "I'm free" and it gets added to "free" pool. Yes, it will be "one master process is a choke point" but it's only a problem when your tasks are so short that sending messages is slower than doing the work. But then probably sending messages is the wrong way to do those tasks. There are so many pool implementations because there are many possible solutions depending on what exact problem you have.
replies(1): >>11212177 #
113. svanderbleek ◴[] No.11212034{4}[source]
It "just works" in Go too, minus immutability, but congrats on your technology decision. You don't get type checking but c'est la vie.
replies(3): >>11212486 #>>11213900 #>>11214370 #
114. bliti ◴[] No.11212044{3}[source]
You and me both. I day dream about working on Ada. end of random quibble
115. _ph_ ◴[] No.11212053[source]
I am not a Go veteran, but I can see where this article is not helpful. Yes, channels are not a solve-everything; that is why the Go library also contains mutexes etc. The game-serving example could have been fixed by adding a channel to signal that the game is finished. The game runner function should listen on the "scores" and the "done" channel with a select. Or not use a channel at all. Channels are great when you just want a completely safe method of communicating between goroutines, as long as the communication reasonably fits the "streaming" behavior of the channel model.
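Something along these lines (the names are guesses at the article's code):

    type Game struct {
        scores chan int
        done   chan struct{} // closed when the game is finished
    }

    func (g *Game) run() {
        for {
            select {
            case score := <-g.scores:
                _ = score // record the score
            case <-g.done:
                return // all players gone: exit instead of leaking
            }
        }
    }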
116. zeeboo ◴[] No.11212165{4}[source]
The API definitely is badly implemented and makes them hard to use. That's the point of the post. There are design decisions around channels (sends panicking, close panicking, nil channels blocking) that make them hard to understand, follow, and compose into concurrent solutions.
replies(1): >>11212804 #
117. jerf ◴[] No.11212177{7}[source]
"Yes, it will be "one master process is a choke point" but it's only a problem when your tasks are so short that sending messages is slower than doing the work."

You're simply reiterating my point now, while still sounding like you think you're disagreeing. Yes, if you drop some of the requirements, the problem gets a lot easier. Unfortunately these are not such bizarre requirements, and Erlang tends to be positioned in exactly the spaces where they are most likely to come up.

"But then probably sending messages is the wrong way to do those tasks."

That translates to "Erlang is the wrong solution if that's your problem". Since my entire point all along here has been that Erlang is not the magic silver bullet, that's not a big problem for me.

118. jerf ◴[] No.11212220{5}[source]
Either your event handlers are going to be called in a nondeterministic order, or they won't.

If they are going to be called in a nondeterministic order, you still have access control issues and can get yourself into all sorts of concurrency-style problems.

If they aren't going to be called in a nondeterministic order, perhaps because you just have a single cascade of events (open socket, write this, get that, close socket), then in a language like Go you just write the "synchronous"-looking code, and you don't have to write the code as if it's evented. You have only marginally more sharing problems than the event loop.

Raw usage of event loops are a false path. They solve very few problems and introduce far more.

replies(1): >>11212641 #
119. querulous ◴[] No.11212236{5}[source]
Message sending has backpressure built in. As a mailbox's size increases, it gets more and more expensive (in reductions, the currency Erlang uses for scheduling processes) for a process to send a message to it.
120. david-given ◴[] No.11212245{3}[source]
Ah, you spotted my deliberate mistake!

Er, yes. It's a stack. Oops. Ta.

121. hacknat ◴[] No.11212288{5}[source]
> That's a very specialized thing, not something general

While polling for I/O may be common, the next most common problem in computers is solving computationally complex tasks. Why is Intel making all these cores? I guess no one actually needs them, they just think they do.

122. david-given ◴[] No.11212403{3}[source]
I would love to have a modernised Ada. With case sensitivity. And garbage collection (a lot of the language semantics are obviously intended to be based around having a garbage collector. I'm very surprised that it never seemed to get one). And a less wacky OO system (invisible syntax, ugh).

But those are quibbles, and at its heart it's still an excellent, excellent language. And there are Ada compilers in Debian, it's still being maintained, it compiles really quickly into excellent code, it interoperates beautifully with C...

replies(2): >>11212489 #>>11212538 #
123. com2kid ◴[] No.11212438{3}[source]
Agreed on this. Ada has many solutions that I wish I had access to in C and C++.

In regards to efficiency, Ada as a language can be optimized to a greater extent than C/C++. It avoids the aliasing problem altogether: ALIASED is a keyword in Ada that must be explicitly used; by default the compiler prevents aliasing! Everything else in the language is very unambiguous, a lot of checks are done at compile time, and if needed for performance, run-time checks can be turned off on a selective basis.

Combined with the optional but enabled-by-default since-you-are-going-to-write-them-anyway bounds checking on parameters, and a type/subtype system that lets me ACTUALLY DEFINE the ranges of every parameter going in and out of my function calls, well, whenever I look at a bug fix, I do a mental check of "would this even be possible to do wrong in Ada?" and for about 30% of bugs, I'd say no.

Ada's main disadvantage from an embedded point of view is the hoops it makes people go through to do bit manipulation. It is understandable why, bit manipulation breaks the entire type system in every possible way, but a lot of embedded development requires it. At some point it'd be nice if the language had a keyword that just said "this variable can be abused, let it live dangerously."

It also has proper tagged types and built in syntax for circular arrays. Two bits of code I am sick and tired of writing again and again in C, and then having to explain to people what a tagged type is.

Ada's main flaw is that it doesn't look like C.

replies(1): >>11212584 #
124. woodcut ◴[] No.11212476{6}[source]
alright, point taken.
125. raould42 ◴[] No.11212486{5}[source]
Dialyzer
126. com2kid ◴[] No.11212489{4}[source]
> And a less wacky OO system (invisible syntax, ugh).

Didn't Ada 2005 fix the OO system to give it the CLASS syntax everyone is used to?

Ada's usual syntax and declaring class inheritance are isomorphic with each other (the transformation a compiler does is the same), but non-JS programmers are used to class inheritance syntax.

I've always wondered if JS programmers would actually pick up on Ada's object system faster, just because they wouldn't mind the lack of an explicit inherits quite so much.

As for GC, I thought it was optional in Ada, just never implemented. For most of Ada's target audience though, heap allocators are already verboten, so GC isn't needed. :)

I'd really like some of Rust's ownership semantics along with Ada's already well-developed feature set. Pointer ownership is still a gnarly problem; I don't recall what, if anything, Ada does to help out with it.

127. msbarnett ◴[] No.11212538{4}[source]
I don't mind the lack of GC. Storage pools are reminiscent of Objective-C's autorelease pools, which I've always thought were a very nice way of handling a group of objects' lifetimes.

> And a less wacky OO system (invisible syntax, ugh).

Not sure what you mean by this one?

> And there are Ada compilers in Debian, it's still being maintained, it compiles really quickly into excellent code, it interoperates beautifully with C...

I came across http://www.getadanow.com the other day. Really easy way to get Ada going on OS X, too.

replies(1): >>11213130 #
128. msbarnett ◴[] No.11212584{4}[source]
> Ada's main flaw is that it doesn't look like C.

Yeah, I used to think it looked bizarre, as someone who grew up with C and C++.

Having spent the last few years doing a lot of Ruby, though, I find looking at it now with fresh eyes it looks quite natural. Aesthetics really seems to boil down to simple familiarity.

129. zzzcpan ◴[] No.11212641{6}[source]
> Either your event handlers are going to be called in a nondeterministic order, or they won't.

The order is not going to be completely deterministic, but your whole program operates on explicitly deterministic units of computation that never implicitly execute in parallel (event handlers). This eliminates all of those issues with concurrent memory access.

Writing "synchronous" looking code cannot be a substitute, since it makes these units of computation implicit. After which it's no longer possible to distinguish which function call is going to yield, therefore dealing with concurrent memory access is going to be needed, just like in any multithreaded program.

So, no, event loops are superior to multithreaded model in almost every way.
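
For concreteness, a minimal sketch of that shape in Go terms (names are illustrative, and the channel is just standing in for any event queue): one loop drains the queue, so exactly one handler runs at any moment and handlers never race with each other.

  package main
  
  import "fmt"
  
  // Event is one unit of work; handlers run one at a time, to completion.
  type Event struct {
      Name string
  }
  
  func main() {
      events := make(chan Event, 8)
  
      // Producer: in a real program these would come from I/O readiness,
      // timers, and so on.
      go func() {
          for _, n := range []string{"connect", "read", "close"} {
              events <- Event{Name: n}
          }
          close(events)
      }()
  
      // The loop itself: only one handler executes at any moment, so
      // handlers never need locks for state they share with each other.
      for ev := range events {
          fmt.Println("handling", ev.Name)
      }
  }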

130. dilap ◴[] No.11212662{3}[source]
No doubt it's quite easy to go down ratholes using mutexes and whatnot. It's a very low-level way of synchronizing.

The OP's critiques of the specific design of channels as implemented by Go seem on-point to me.

131. ngrilly ◴[] No.11212719{4}[source]
The new Go SSA backend was merged into tip a few days ago:

https://groups.google.com/d/topic/golang-dev/49VaiLCDbeQ/dis...

132. sagichmal ◴[] No.11212804{5}[source]
I'm sorry, but I don't agree with any of your assertions. The constraints on channels are there not as an accident of a bad implementation, but as deliberate decisions to enforce a certain set of design contracts. Panics on invalid channel operations enforce those contracts. That nil channels block is actually an incredibly handy feature: see e.g. https://github.com/streadway/handy/blob/b8cb168/breaker/brea...

Without exception, hitting one of these corner cases exposes an error in design, from Go's perspective on CSP. You can disagree with that perspective on a subjective basis ("hard to understand") -- but you can't lift that opinion to objective fact, and you certainly can't claim these artifacts of design as evidence of incompetence or neglect.
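
For instance, here's a minimal sketch (not taken from the linked code) of what nil-blocking buys you: assigning nil to a channel variable inside a select permanently disables that case, which is the usual way to stop listening to a source once it's exhausted.

  package main
  
  import "fmt"
  
  func main() {
      a := make(chan int)
      b := make(chan int)
      go func() { a <- 1; close(a) }()
      go func() { b <- 2; close(b) }()
  
      // A nil channel blocks forever, so setting a drained channel to nil
      // effectively removes its case from the select.
      for a != nil || b != nil {
          select {
          case v, ok := <-a:
              if !ok {
                  a = nil // a is closed and drained; stop selecting on it
                  continue
              }
              fmt.Println("from a:", v)
          case v, ok := <-b:
              if !ok {
                  b = nil
                  continue
              }
              fmt.Println("from b:", v)
          }
      }
  }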

replies(2): >>11213487 #>>11213762 #
133. jupp0r ◴[] No.11212909{4}[source]
There is std::atomic in recent C++ flavors
replies(1): >>11213355 #
134. pcwalton ◴[] No.11213078{5}[source]
That hasn't inhibited optimizations for a long time. Disassemble a call to printf("Hello world") in optimized clang output and look at what it turns into.
replies(2): >>11213389 #>>11214716 #
135. david-given ◴[] No.11213130{5}[source]
Okay, so I can't duplicate the exact OO syntax issues I was having before. But, from memory, I was finding that by putting the wrong kind of statement between the type definition and the method declaration, I could thoroughly upset the compiler --- there was invisible syntax connecting the two together, and if I put the wrong thing in between, then things stopped working.

But as I can't duplicate it, it's entirely possible I was just hallucinating.

In general I find the OO syntax desperately confusing. It feels like it's been shoehorned in on top of the existing record and procedure syntax, and it's never clear exactly what anything does. E.g. you need to suffix the object type with 'class in methods in order to make them non-dispatching, but you need to suffix the object type with 'class in variable types if you want them to dynamically dispatch? That's not a happy choice.

(Case in point: I've just spent 20 minutes trying to refresh my memory by making this code snippet work. And failing. What am I doing wrong? http://ideone.com/6iPdYF)

Incidentally, re getadanow.com: that's really nice! And it's not pointing at the Adacore compilers, either; beware of these, as their standard library is GPL, not LGPL, which means you can't distribute binaries built with them. (The standard GNAT version is fine.)

replies(1): >>11213728 #
136. jusssi ◴[] No.11213162{3}[source]
Channel send/recv is a blocking operation; select (with a default case) is the commonly used workaround to make it non-blocking.
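
Concretely, a minimal sketch: a select with a default case attempts the operation and falls through immediately if it would block.

  package main
  
  import "fmt"
  
  func main() {
      ch := make(chan int, 1)
  
      // Non-blocking send: takes the default branch if the buffer is full.
      select {
      case ch <- 42:
          fmt.Println("sent")
      default:
          fmt.Println("send would block, skipped")
      }
  
      // Non-blocking receive: takes the default branch if nothing is ready.
      select {
      case v := <-ch:
          fmt.Println("received", v)
      default:
          fmt.Println("nothing to receive")
      }
  }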
137. pjmlp ◴[] No.11213205{6}[source]
Ah, I missed the point.
138. chrisseaton ◴[] No.11213355{5}[source]
Right. My question is whether we can automatically translate locks that only guard a single variable into something like std::atomic.
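
Done by hand, the rewrite looks roughly like this in Go (a sketch only; whether a compiler could prove it safe and do it automatically is exactly the open question):

  package main
  
  import (
      "fmt"
      "sync"
      "sync/atomic"
  )
  
  // Lock-based version: a mutex guarding a single integer.
  type lockedCounter struct {
      mu sync.Mutex
      n  int64
  }
  
  func (c *lockedCounter) Inc() {
      c.mu.Lock()
      c.n++
      c.mu.Unlock()
  }
  
  // Atomic version: the same single-variable critical section expressed
  // as one atomic read-modify-write, with no lock object at all.
  type atomicCounter struct {
      n int64
  }
  
  func (c *atomicCounter) Inc() {
      atomic.AddInt64(&c.n, 1)
  }
  
  func main() {
      var a lockedCounter
      var b atomicCounter
      a.Inc()
      b.Inc()
      fmt.Println(a.n, atomic.LoadInt64(&b.n))
  }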
139. Matthias247 ◴[] No.11213369{3}[source]
It's a little bit different. In Ada it's a real rendezvous: either the "client" or the "server" task is running. In Erlang the mailbox is asynchronous, which means the server can't make any assumptions about what state the client is in while it processes the message and sends a reply, and the client can't assume that the server starts working on the message right after it's put in the mailbox.
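
In Go terms the distinction is loosely the one between an unbuffered and a buffered channel (an analogy only, not an exact model of Ada rendezvous or Erlang mailboxes):

  package main
  
  import "fmt"
  
  func main() {
      // Rendezvous-style: an unbuffered channel. The goroutine's send
      // cannot complete until someone is actually receiving.
      rendezvous := make(chan string)
      go func() { rendezvous <- "hello" }()
      fmt.Println("rendezvous got:", <-rendezvous)
  
      // Mailbox-style: a buffered channel. The sender drops the message
      // off and moves on; it learns nothing about when it gets read.
      mailbox := make(chan string, 16)
      mailbox <- "hello"
      fmt.Println("mailbox got:", <-mailbox)
  }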
140. chrisseaton ◴[] No.11213389{6}[source]
Yes, if the library is covered by a standard just like the language is, then the compiler can make assumptions. Also, threads are a language feature in C and C++ now.
141. mordocai ◴[] No.11213487{6}[source]
While I agree all of this is subjective, I would argue that something being composed of "deliberate decisions to enforce a certain set of design contracts" doesn't mean those decisions or the design contracts are good. Nor does it automagically make a good implementation.

In addition, making bad design decisions that you think are good is actually one of the best types of evidence for incompetence (though not neglect, in this case).

I don't personally have enough data to have a strong opinion on where Go's channels fall here, but I don't think any of your arguments here have any bearing on the idea that Go's channel implementation is bad.

replies(1): >>11213567 #
142. AnimalMuppet ◴[] No.11213567{7}[source]
But using something in the way it was not intended to be used, and then complaining that it works badly, is evidence of incompetence on the part of the user, not the designer.

> I don't personally have enough data to have a strong opinion on where Go channels falls here, but I don't think any of your arguments here have any bearing on the idea that Go's channel implementation is bad.

If sagichmal is correct, zeeboo is trying to use channels in a way that they were explicitly not designed to be used. That makes zeeboo's criticism very likely to be invalid. (It is the one who uses them as they were designed to be used who knows what the actual problems with the design are.)

replies(2): >>11213634 #>>11213771 #
143. mordocai ◴[] No.11213634{8}[source]
That's where the purely subjective argument comes in I suppose.

The argument could be made that Go's channels SHOULD be able to handle zeeboo's use case, and the fact that they weren't designed to handle it makes them bad.

replies(1): >>11213875 #
144. lgas ◴[] No.11213639{6}[source]
Is there any reason you couldn't just have the workers request work from the pool process when they are ready for work instead of trying to push it to them?
145. msbarnett ◴[] No.11213728{6}[source]
> But as I can't duplicate it it's entirely possible I was just hallucinating.

There's a thing where if you declare a type A, and then a derived type B, methods on A have to be declared before type B gets declared, because B's declaration "freezes" A. I think it's mostly a single-pass optimization that might have made sense 20 years ago but is meaningless in an era of gigabytes of RAM.

> (Case in point: I've just spent 20 minutes trying to refresh my memory by making this code snippet work. And failing. What am I doing wrong? http://ideone.com/6iPdYF)

The specific error message is: you declared a package, which is basically a header in C parlance. You declare signatures in it, not method bodies. Method bodies go in package bodies. You were conflating the package and the package body.

And then from line 19 onwards you were using the package name where you wanted to be using a class name. I cleaned it up a bit and made it work: https://gist.github.com/mbarnett/9c6701fe74524a6df522

replies(1): >>11215827 #
146. zeeboo ◴[] No.11213762{6}[source]
It's great that panics happen when you violate those contracts. That is a deliberate design decision and I agree with it. However, the contracts they enforce cause real problems, as evidenced by the article. Small additions might make those contracts more general and make channels more applicable. In my opinion, you should be able to attempt to send on a channel that could be closed, in the same way that you are allowed to check whether an interface contains a specific concrete type without panicking. In my experience, this would allow a number of useful patterns that are very hard to express right now.

Nil channels blocking is definitely a deliberate design decision and has valid use cases; I use them frequently when I have a channel-based design. It also isn't what most people first expect, since using anything else that is nil has the opposite behavior: it panics. The article, which I assume you read, makes only this point.

I never attempted to lift statements that are obviously opinion based (anything that has a judgement of something good or bad) as objective fact.

Here's a proposal I worked on with a coworker to make channels better that might give you more of an idea of why I'm suggesting that the current design has flaws: https://github.com/golang/go/issues/14601

Given how much weight channels carry in the language specification and memory model, it would be nice if they were more generally applicable and easier to use in more concurrency situations.
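
To make the gap concrete: today the only way to "try" a send on a channel that might be closed is a recover-based hack along these lines (a sketch, not something I'd recommend shipping):

  package main
  
  import "fmt"
  
  // trySend reports whether the value was delivered; it swallows the
  // panic Go raises for a send on a closed channel.
  func trySend(ch chan int, v int) (sent bool) {
      defer func() {
          if recover() != nil {
              sent = false
          }
      }()
      ch <- v
      return true
  }
  
  func main() {
      ch := make(chan int, 1)
      fmt.Println(trySend(ch, 1)) // true
      close(ch)
      fmt.Println(trySend(ch, 2)) // false: the send panicked
  }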

147. zeeboo ◴[] No.11213771{8}[source]
My criticism is that the design limits the places where they are valid. I'm not trying to use a hammer where a screwdriver is required, I'm saying that if the hammer was designed differently, we'd be able to use it in more situations appropriately.

It's as if someone created a gun that fired backwards and I said "hey, it might be better if the gun fired forwards. we'd be able to use it in more situations." and people responded with "you shouldn't use a gun that fires backwards if you want to fire forwards." I totally agree, but it's missing the point.

148. jtolds ◴[] No.11213854{5}[source]
I perhaps communicated poorly, but the point of that section was to try and explain that the CSP model (only using channels) was untenable in Go (even though it doesn't necessarily have to be in general), and that you'd almost certainly end up not just using channels in a real program, which it seems you agree with.
149. AnimalMuppet ◴[] No.11213875{9}[source]
Only if Go doesn't have a good way of handling that use case (even if it is something completely different from channels). I don't know enough to know whether it does.
150. pmarreck ◴[] No.11213900{5}[source]
Restricting input based on type hierarchies can reduce a certain class of bugs, yes, but careful use of guards, as well as typespecs and unit test coverage (which you should have anyway), can accomplish much of what type restrictions can.
replies(1): >>11214838 #
151. azth ◴[] No.11213916{6}[source]
> Hmm, I'll have to check that out. I don't know that I would call Rust's ownership model super easy to reason about, but it is nice that the compiler prevents you from doing so much stupid $#^&.

It's much better to get compile-time errors than to deal with very hard-to-reproduce data races.

replies(1): >>11213942 #
152. kazinator ◴[] No.11213942{7}[source]
Only, as usual, in situations when all else is equal.

By the way, on a related note, data races themselves are easier to reproduce than the visible negative consequences of those races on the execution of that program. That's the basis of tools like the "Helgrind" tool in Valgrind. That is to say, we can determine that some data is being accessed without a consistently held lock even when that access is working fine by dumb luck. We don't need an accident to prove that racing was going on, in other words. :)
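
Go's race detector works on the same principle; run this minimal sketch with "go run -race" and it reports the unsynchronized access even on runs where the final count happens to come out right:

  package main
  
  import (
      "fmt"
      "sync"
  )
  
  func main() {
      var wg sync.WaitGroup
      counter := 0
  
      // Two goroutines increment the counter with no synchronization.
      // The total often looks fine, but the access pattern is still racy,
      // and the detector flags it regardless of the visible outcome.
      for i := 0; i < 2; i++ {
          wg.Add(1)
          go func() {
              defer wg.Done()
              for j := 0; j < 1000; j++ {
                  counter++
              }
          }()
      }
      wg.Wait()
      fmt.Println(counter)
  }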

replies(1): >>11214691 #
153. ◴[] No.11214091[source]
154. im_down_w_otp ◴[] No.11214370{5}[source]
Save the following code in "someone_was_wrong_on_the_internet.erl" and then run "dialyzer --src someone_was_wrong_on_the_internet.erl"

  -module(someone_was_wrong_on_the_internet).
  -export([init/0, fizzbuzzer/1]).
  
  -spec init() -> list(pos_integer() | binary()).
  init() ->
      List = [-1, 0, 0.1, 1, 2, 3, 5, 15, 2, 3, 5, 15, 1],
      [fizzbuzzer(Result) || Result <- List].
  
  -spec fizzbuzzer(pos_integer()) -> pos_integer() | binary().
  fizzbuzzer(Number) when Number rem 15 =:= 0 ->
      <<"FizzBuzz">>;
  fizzbuzzer(Number) when Number rem 5 =:= 0 ->
      <<"Buzz">>;
  fizzbuzzer(Number) when Number rem 3 =:= 0 ->
      <<"Fizz">>;
  fizzbuzzer(Number) ->
      Number.

Dialyzer will fail the type check until you remove [-1, 0, 0.1] from the list. Not with a particularly helpful error, but it does fail it nonetheless.

The code itself is a valid program that runs, but it produces incorrect output, because 0 rem 15 =:= 0, so you get <<"FizzBuzz">> where you'd expect to get a 0 in the list. By running Dialyzer in my build chain I can catch, at compile time, that my implementation doesn't match my constraints, in a way that I otherwise would have only found at runtime.

Though while creating this little pointless example, one thing I'm not super clear on is why, if I change

  fizzbuzzer(Number) ->
      Number.

to

  fizzbuzzer(Number) ->
      -Number.

Dialyzer fails to notice that the return will be a neg_integer() and won't satisfy the return spec, despite the fact that I've told it the input must be a pos_integer(). Unless I enable the -Wspecdiffs flag, in which case it does notice the problem.
155. catnaroek ◴[] No.11214691{8}[source]
> By the way, on a related note, data races themselves are easier to reproduce than the visible negative consequences of those races on the execution of that program.

Perhaps, but a data race by itself isn't sufficiently loud to catch my attention (no idea about yours), unless it consistently has visible consequences during debugging - preferably not too long after the data race itself takes place.

> That is to say, we can [emphasis mine] determine that some data is being accessed without a consistently held lock even when that access is working fine by dumb luck.

By “we”, do you mean human beings or computers? And, by “can”, do you mean “in theory” or “in practice”? Also, “only when we're lucky” or “reliably”?

> We don't need an accident to prove that racing was going on, in other words.

What I want to prove is the opposite - that there are no races going on.

156. ◴[] No.11214716{6}[source]
157. pmarreck ◴[] No.11214838{6}[source]
Was something I said factually wrong? User im_down_w_otp put up an example of what I'm talking about (minus the unit testing) so what gives?
replies(1): >>11216739 #
158. david-given ◴[] No.11215827{7}[source]
> There's a thing where if you declare a type A, and then a derived type B, methods on A have to be declared before type B gets declared...

Yes, that sounds very familiar.

> The specific error message is: you declared a package, which is basically a header in C parlance.

Oh, FFS. That snippet is not, in fact, pointing at the piece of code I was actually asking about --- ideone changed it when I wasn't looking. The one you saw is unfinished and broken.

This one is the one I was meaning: http://ideone.com/skZRIb

The .Foo selector isn't found; changing it to Foo(object) reports that apparently Foo isn't a dispatching method on MyClass1... which makes no sense, because this is the same code as you had. My suspicion is that there's something magic about declaring classes in packages?

replies(1): >>11218311 #
159. sagichmal ◴[] No.11216739{7}[source]
I suspect dismissing static typing whole cloth with "unit tests and certain guards can give you most of the benefits" comes off badly to some people.
replies(1): >>11225202 #
160. msbarnett ◴[] No.11218311{8}[source]
> which makes no sense, because this is the same code as you had. My suspicion is that there's something magic about declaring classes in packages?

Yeah.

Dispatching methods on a type consist of the type's "primitive operations". The Ada 95 Rationale spells it out: "Just as in Ada 83, derived types inherit the operations which "belong" to the parent type - these are called primitive operations in Ada 95. User-written subprograms are classed as primitive operations if they are declared in the same package specification as the type and have the type as parameter or result."

It seems like a wart that you're not in an "anonymous" package in situations like your example, but I also guess it probably doesn't come up much in "real" programs.

161. pron ◴[] No.11222537{3}[source]
> you end up paying the price of LOCK XCHG or whatever mutexes translate to

But channels use locks internally. The choice of channels vs. mutexes is one of design, not implementation. Also, mutexes are blocking; LOCK XCHG isn't. Sure, mutexes also use LOCK XCHG (but so do channels, and nearly all concurrent data structures), but they also block (as do channels).

> your only choice of handling it is by means of mutexes, which has really terrible performance characteristics

That's just not true. There is a way to translate any locking algorithm into a non-blocking one (in fact, a wait-free one, which is the "strongest" non-blocking guarantee), yet only a handful of wait-free algorithms are used in practice. Why? Because it's hard to make them more efficient than locks in the general case.

> not to mention that you'll have problems if you want (soft) real-time behavior, because now you can end up with both dead-locks and live-locks.

Again, channels are blocking data structures.

> If you end up doing such synchronization in Go, then Go's M:N multi-threading ends up doing more harm than good, because if you need such synchronization, you also need to fine tune your thread-pools and at this point 1:1 would be better.

I don't know where you're getting that. AFAIK, Go's mutexes don't block the underlying thread; only the goroutine.

The question of which concurrency mechanism should be used is a difficult one (and in general, more than one is necessary; even Erlang has shared, synchronized mutable state with its ETS tables), but you are very misinformed about how concurrency constructs are built and about their performance behavior.

162. pmarreck ◴[] No.11225202{8}[source]
I don't see how "can accomplish much of what type restrictions can" is the equivalent of "dismissing static typing whole cloth"

I choose wording carefully for a reason

163. nfirvine ◴[] No.11228064[source]
"Novices to the language have a tendency to overuse channels." "The explicit instruction to [new] Go programmers is that they should avoid channels... [is] false." -- Isn't this a contradiction?