
354 points | misonic | 1 comment
samsartor | No.42468798
GNNs have been a bit of a disappointment to me. I've tried to apply them to my research a couple of times, but it has never worked out.

For a long time GNNs were pitched as a generalization of CNNs. But CNNs are more powerful because the "adjacency weights" (so to speak) are more meaningful: they learn relative positional relationships. GNNs usually resort to pooling, as described here. And you can output an image with a CNN. Good luck getting a GNN to output a graph. The topology still has to be decided up front, sometimes even at training time. And the nail in the coffin is performance. It is incredible how slow GNNs are compared to CNNs.
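
To make the contrast concrete, here is a minimal NumPy sketch (the function names conv2d_single_channel and gnn_layer are illustrative, not from any library): the conv kernel assigns a distinct weight to each relative offset, while a typical message-passing layer mean-pools neighbors through one shared weight matrix, so relative position is invisible to it.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_single_channel(image, kernel):
    # Each relative offset (dy, dx) in the kernel has its own learned weight,
    # so "the pixel to my left" and "the pixel above-right" are treated differently.
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

def gnn_layer(node_feats, adjacency, W_self, W_neigh):
    # Mean-pool neighbors: every neighbor contributes through the same W_neigh,
    # so any notion of *where* a neighbor sits relative to the node is lost.
    deg = adjacency.sum(axis=1, keepdims=True).clip(min=1)
    pooled = (adjacency @ node_feats) / deg
    return np.maximum(node_feats @ W_self + pooled @ W_neigh, 0.0)  # ReLU

image = rng.normal(size=(8, 8))
kernel = rng.normal(size=(3, 3))                   # 9 distinct positional weights
print(conv2d_single_channel(image, kernel).shape)  # (6, 6)

n, d = 5, 4
feats = rng.normal(size=(n, d))
adj = (rng.random((n, n)) < 0.4).astype(float)
np.fill_diagonal(adj, 0)
W1, W2 = rng.normal(size=(d, d)), rng.normal(size=(d, d))
print(gnn_layer(feats, adj, W1, W2).shape)         # (5, 4)
```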

These days I feel like attention has kinda eclipsed GNNs for a lot of those reasons. You can make GNNs that use attention instead of pooling, but there isn't much point. The graph is usually only traversed in order to create the mask matrix (i.e., attend between nth-order neighbors) and otherwise you are using a regular old transformer, as sketched below. Often you don't even need the graph adjacencies because some kind of distance metric is already available.
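
A rough NumPy sketch of that pattern (the names k_hop_mask and masked_attention, and all sizes, are mine, purely for illustration): the adjacency matrix is used once to build a "within k hops" mask, and everything after that is ordinary masked scaled dot-product attention.

```python
import numpy as np

def k_hop_mask(adjacency, k):
    # Reachability within k hops (self included): after the loop, acc[i, j] > 0
    # exactly when j is at most k edges away from i.
    n = adjacency.shape[0]
    step = adjacency.astype(float)
    acc = np.eye(n)
    for _ in range(k):
        acc = acc + acc @ step
    return acc > 0

def masked_attention(X, Wq, Wk, Wv, mask):
    # Plain single-head scaled dot-product attention; the only role the graph
    # plays is deciding which score entries get blocked.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    scores = np.where(mask, scores, -1e9)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
n, d = 6, 8
adj = rng.random((n, n)) < 0.3
adj = adj | adj.T                                  # undirected graph
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = masked_attention(X, Wq, Wk, Wv, k_hop_mask(adj, k=2))
print(out.shape)                                   # (6, 8)
```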

I'm sure GNNs are extremely useful to someone somewhere, but in my experience they've been a hammer looking for a nail.

replies(5): >>42468874, >>42468882, >>42469313, >>42469395, >>42472618
stephantul | No.42468874
Same! I’ve seen many proposals to use a GNN for problems where we were using a “flat” model, e.g., taking HTML structure into account when predicting labels for pages. Even when it seemingly made a lot of sense to use them, it didn’t work.