
Go channels are bad

(www.jtolds.com)
298 points by jtolds | 24 comments
1. Jabbles ◴[] No.11210740[source]
Effective Go has always said:

Do not communicate by sharing memory; instead, share memory by communicating.

This approach can be taken too far. Reference counts may be best done by putting a mutex around an integer variable, for instance.

https://golang.org/doc/effective_go.html#sharing

replies(3): >>11210862 #>>11210978 #>>11210990 #
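
A minimal sketch of the exception described there, with hypothetical names: the count is just an integer behind a sync.Mutex rather than something routed through a channel.

    package refcount

    import "sync"

    // refCount guards a plain integer with a mutex, the exception
    // Effective Go allows to "share memory by communicating".
    type refCount struct {
        mu sync.Mutex
        n  int
    }

    func (c *refCount) incr() {
        c.mu.Lock()
        c.n++
        c.mu.Unlock()
    }

    // decr returns the new count so the caller can release the
    // underlying resource once it reaches zero.
    func (c *refCount) decr() int {
        c.mu.Lock()
        c.n--
        n := c.n
        c.mu.Unlock()
        return n
    }
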
2. poizan42 ◴[] No.11210862[source]
Reference counts are best done using interlocked increment/decrement primitives.
replies(2): >>11210871 #>>11211088 #
3. catnaroek ◴[] No.11210871[source]
s/interlocked/atomic/
4. api ◴[] No.11210978[source]
I thought Go had GC. Why would you ever need reference counts?
replies(2): >>11211015 #>>11211294 #
5. mike_hearn ◴[] No.11210990[source]
I'm not sure making an absolute statement ("do not...") followed by "... actually do, sometimes" is helpful. How is this different to any other language that gives you a toolbox of synchronisation primitives?
6. voidlogic ◴[] No.11211015[source]
Non-memory resources that exist within your application.

Example: Perhaps once no instances of an object are in use, you want to remove that object from persistent storage such as a DB.

replies(1): >>11211115 #
7. chrisseaton ◴[] No.11211088[source]
I wonder if there are any compilers which can replace

    mutex.lock { x++ }
with a 'lock xaddl x 1' instruction.
replies(4): >>11211325 #>>11211581 #>>11211703 #>>11212909 #
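
Rendered in Go with hypothetical names, the question is whether a compiler could turn the first function below into the second automatically; today the atomic form is written by hand via sync/atomic.

    package counter

    import (
        "sync"
        "sync/atomic"
    )

    var (
        mu sync.Mutex
        x  int64
    )

    // What the programmer writes: a general-purpose lock around an increment.
    func incrLocked() {
        mu.Lock()
        x++
        mu.Unlock()
    }

    // What the hoped-for optimisation would emit: a single atomic add,
    // which on x86-64 compiles to a LOCK XADD instruction. The rewrite
    // is only sound if the mutex protects nothing but x.
    func incrAtomic() {
        atomic.AddInt64(&x, 1)
    }
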
8. api ◴[] No.11211115{3}[source]
Does Go not have finalizers? These have been mostly solved problems since the Smalltalk era. I haven't learned Go yet, and from what I read I'd be better off with Rust or something that would stretch my brain more, like Haskell. When I read about it I get the sense that we are reinventing stuff from the 90s. But hey, it's hip.
replies(3): >>11211141 #>>11211151 #>>11211331 #
9. catnaroek ◴[] No.11211141{4}[source]
Sometimes you need to guarantee that resources are cleaned up in a timely fashion. Finalizers don't help here.
10. hacknat ◴[] No.11211151{4}[source]
Piggybacking off @catnaroek, Go does have finalizers too, though.
11. chrsm ◴[] No.11211294[source]
Example: Recently built a small service that responds to requests and walks various files looking for data.

The service can be asked to unload (close) the file, but it's hard to say whether it's in-use at the time without some kind of reference count to current requests using the file.

replies(1): >>11211343 #
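
A minimal sketch of that pattern, with hypothetical names: requests acquire the file, and an unload only closes it once the last in-flight request has released it.

    package trackedfile

    import (
        "os"
        "sync"
    )

    // trackedFile wraps an *os.File with a reference count so an unload
    // request closes it only when no request is still using it.
    type trackedFile struct {
        mu       sync.Mutex
        f        *os.File
        refs     int
        unloaded bool
    }

    // acquire returns the file for the duration of a request, or nil if
    // the file has already been asked to unload.
    func (t *trackedFile) acquire() *os.File {
        t.mu.Lock()
        defer t.mu.Unlock()
        if t.unloaded {
            return nil
        }
        t.refs++
        return t.f
    }

    // release drops one reference and closes the file if an unload is
    // pending and this was the last user.
    func (t *trackedFile) release() {
        t.mu.Lock()
        defer t.mu.Unlock()
        t.refs--
        if t.refs == 0 && t.unloaded {
            t.f.Close()
        }
    }

    // unload marks the file for closing; it closes immediately if idle.
    func (t *trackedFile) unload() {
        t.mu.Lock()
        defer t.mu.Unlock()
        if t.unloaded {
            return
        }
        t.unloaded = true
        if t.refs == 0 {
            t.f.Close()
        }
    }
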
12. nly ◴[] No.11211325{3}[source]
It's conceivable, if you made mutexes a compiler/language intrinsic, but as long as you're calling pthread_mutex_lock, the compiler has to assume that the pthread library, which is linked dynamically, is interchangeable and can do anything it likes to memory. That includes mutating x.
replies(1): >>11213078 #
13. pcwalton ◴[] No.11211331{4}[source]
Go does have runtime.SetFinalizer: https://golang.org/pkg/runtime/#SetFinalizer

But beware of finalizer ordering issues.
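
A minimal sketch of its use, with a hypothetical type; the finalizer only runs after the GC notices the object is unreachable, so there is no timing guarantee.

    package main

    import (
        "fmt"
        "runtime"
        "time"
    )

    type handle struct{ name string }

    func main() {
        h := &handle{name: "scratch.db"}
        // Ask the runtime to call this function once h becomes unreachable.
        runtime.SetFinalizer(h, func(h *handle) {
            fmt.Println("finalizing", h.name)
        })
        h = nil

        // Finalizers run at the GC's convenience, not deterministically,
        // which is why they can't replace timely or ordered cleanup.
        runtime.GC()
        time.Sleep(100 * time.Millisecond)
    }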

14. nly ◴[] No.11211343{3}[source]
That's kinda what dup() is for. Reference counting happens in the kernel.
15. pjmlp ◴[] No.11211581{3}[source]
Java and .NET will do it if you make use of InterlockedIncrement APIs.
replies(1): >>11211672 #
16. chrisseaton ◴[] No.11211672{4}[source]
But I think InterlockedIncrement is just 'lock xaddl x 1', so using InterlockedIncrement would be doing it manually.

I'm asking if any compiler can take a statement which uses a high level, general purpose lock and increments a variable inside it using conventional language expressions, and convert it to use 'lock xaddl x 1' (perhaps via InterlockedIncrement or whatever other intrinsics you have) instead.

I only know Java well, not .NET, but I'm pretty sure no Java compiler does it.

replies(1): >>11213205 #
17. mike_hearn ◴[] No.11211703{3}[source]
It's not quite the same thing but recent JVMs can translate synchronised blocks into Intel TSX transactions, which means multiple threads can run inside the lock at once, with rollback and retry if interference is detected at the hardware (cache line) level. So yeah .... almost. But it's fancy and cutting edge stuff.
replies(1): >>11211715 #
18. ◴[] No.11211715{4}[source]
19. jupp0r ◴[] No.11212909{3}[source]
There is std::atomic in recent C++ flavors.
replies(1): >>11213355 #
20. pcwalton ◴[] No.11213078{4}[source]
That hasn't inhibited optimizations for a long time. Disassemble a call to printf("Hello world") in optimized clang output and look at what it turns into.
replies(2): >>11213389 #>>11214716 #
21. pjmlp ◴[] No.11213205{5}[source]
Ah, I missed the point.
22. chrisseaton ◴[] No.11213355{4}[source]
Right. My question is whether we can translate locks which only reference a single variable, to use something like std::atomic, automatically.
23. chrisseaton ◴[] No.11213389{5}[source]
Yes, if the library is covered by a standard just like the language is, then the compiler can make assumptions. Also, threads are a language feature in C and C++ now.
24. ◴[] No.11214716{5}[source]