
354 points by misonic | 1 comment
cherryteastain ◴[] No.42469706[source]
There are a lot of papers using GNNs for physics simulations (e.g. computational fluid dynamics) because the unstructured meshes used to discretize the problem domain for such applications map very neatly to a graph structure.

In practice, every such mesh/graph is used once to solve a particular problem, so it makes little sense to train a GNN for a specific graph. However, that's exactly what most papers did, because no one has found a way to make a GNN that generalizes well to a different mesh/graph and different simulation parameters. I wonder if there's a breakthrough waiting just around the corner to make such generalization possible.
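
For readers unfamiliar with why meshes map so neatly onto graphs: a minimal sketch, assuming a triangular mesh given as node coordinates plus triangles, of turning the mesh into edges and running one message-passing step. All names (mesh_to_edges, message_passing_step, the weight matrices) are illustrative, not from any specific paper.

  import numpy as np

  def mesh_to_edges(triangles):
      """Turn each triangle (i, j, k) into undirected graph edges."""
      edges = set()
      for i, j, k in triangles:
          edges |= {(i, j), (j, i), (j, k), (k, j), (i, k), (k, i)}
      return np.array(sorted(edges))            # shape (num_edges, 2)

  def message_passing_step(node_feats, edges, w_self, w_neigh):
      """One GNN layer: average neighbour features, mix with own state."""
      agg = np.zeros_like(node_feats)
      deg = np.zeros((node_feats.shape[0], 1))
      for src, dst in edges:
          agg[dst] += node_feats[src]
          deg[dst] += 1.0
      agg /= np.maximum(deg, 1.0)               # mean over neighbours
      return np.tanh(node_feats @ w_self + agg @ w_neigh)

  # Toy usage: two triangles sharing an edge, 4 nodes with 3 features each.
  tris = [(0, 1, 2), (1, 2, 3)]
  x = np.random.randn(4, 3)
  w1, w2 = np.random.randn(3, 3), np.random.randn(3, 3)
  x = message_passing_step(x, mesh_to_edges(tris), w1, w2)

The catch the comment describes is that w_self/w_neigh end up trained against one particular mesh and one set of simulation parameters, so the model rarely transfers to the next problem.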

replies(2): >>42470241 #>>42470602 #
magicalhippo ◴[] No.42470241[source]
Naive question:

Words in sentences kinda form graphs, referencing other words or being leaves that are referenced, both within sentences and between sentences.

Given the success of the attention mechanism in modern LLMs, how well would they do if you trained an LLM to process an actual graph?

I guess you'd need some alternate tokenizer for optimal performance.

replies(4): >>42470363 #>>42470817 #>>42472814 #>>42474565 #
1. whimsicalism ◴[] No.42472814[source]
yep you're now pretty much at state-of-the-art
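
For concreteness, a rough sketch of what "an LLM processing an actual graph" can mean: treat each node as a token and mask the self-attention scores with the adjacency matrix, so a node only attends to itself and its neighbours. This is just the basic graph-transformer idea; real systems add edge features and positional encodings, and the function names here are illustrative.

  import numpy as np

  def masked_self_attention(node_feats, adj, wq, wk, wv):
      """Single-head attention over graph nodes, restricted by adjacency."""
      q, k, v = node_feats @ wq, node_feats @ wk, node_feats @ wv
      scores = q @ k.T / np.sqrt(k.shape[1])
      mask = adj + np.eye(adj.shape[0])           # allow self plus neighbours
      scores = np.where(mask > 0, scores, -1e9)   # block non-edges
      weights = np.exp(scores - scores.max(axis=1, keepdims=True))
      weights /= weights.sum(axis=1, keepdims=True)
      return weights @ v

  # Toy usage: 4-node path graph, 8-dim node features.
  adj = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
  x = np.random.randn(4, 8)
  wq, wk, wv = (np.random.randn(8, 8) for _ in range(3))
  out = masked_self_attention(x, adj, wq, wk, wv)

Dropping the mask (or replacing it with learned structural encodings) recovers full attention over all nodes, which is roughly where the "tokenize the graph and let attention sort it out" line of work sits.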