A graph explorer of the Epstein emails

(epstein-doc-explorer-1.onrender.com)
322 points by cratermoon | 11 comments
pickpuck ◴[] No.45958412[source]
What if we extended this idea beyond one dataset to all discrete news events and entities: people, organizations, places?

Just like here, you could get a timeline of key events, a graph of connected entities, and links to original documents.

Newsrooms might already do this internally, idk.

This code might work as a foundation. I love that it's RDF.
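Roughly the shape I'm imagining, as a minimal rdflib sketch (the example.org namespace and every entity/property name here are made up for illustration, not the linked project's actual schema):

```python
# Sketch: a news-event graph in RDF using rdflib.
# All URIs/names below are hypothetical placeholders.
from rdflib import Graph, Literal, Namespace, URIRef, RDF, RDFS, XSD

EX = Namespace("http://example.org/news/")
g = Graph()
g.bind("ex", EX)

# One event node linking people, an organization, and a place, plus a source document.
g.add((EX.event123, RDF.type, EX.Event))
g.add((EX.event123, RDFS.label, Literal("Example meeting")))
g.add((EX.event123, EX.date, Literal("2015-06-01", datatype=XSD.date)))
g.add((EX.event123, EX.involves, EX.person_alice))
g.add((EX.event123, EX.involves, EX.org_acme))
g.add((EX.event123, EX.location, EX.place_nyc))
g.add((EX.event123, EX.sourceDocument, URIRef("https://example.org/docs/123.pdf")))

# Timeline of key events for one entity, via SPARQL over the same graph.
q = """
SELECT ?event ?date WHERE {
  ?event a ex:Event ;
         ex:involves ex:person_alice ;
         ex:date ?date .
} ORDER BY ?date
"""
for row in g.query(q, initNs={"ex": EX}):
    print(row.event, row.date)
```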

replies(10): >>45958506 #>>45958629 #>>45959158 #>>45959273 #>>45959323 #>>45959385 #>>45960015 #>>45960134 #>>45960357 #>>45963779 #
1. jandrewrogers ◴[] No.45959323[source]
This has been attempted many times, and the attempts all fail the same way.

These general data models start to become useful and interesting at around a trillion edges, give or take an order of magnitude. A mature graph model would be at least a few orders of magnitude larger, even if you aggressively curated what went into it. This is a simple consequence of the cardinality of the different kinds of entities that are included in most useful models.

No system described in open source can get anywhere close to even the base case of a trillion edges. They will suffer serious scaling and performance issues long before they get to that point. It is a famously non-trivial computer science problem and much of the serious R&D was not done in public historically.

This is why you only see toy or narrowly focused graph data models instead of a giant graph of All The Things. It would be cool to have something like this but that entails some hardcore deep tech R&D.

replies(5): >>45959382 #>>45960002 #>>45960262 #>>45960362 #>>45961019 #
2. babelfish ◴[] No.45959382[source]
I don't have any experience with graph modeling, but it seems like Neo4j should be able to support 1 trillion edges, based on this (admittedly marketing) post of theirs? https://neo4j.com/press-releases/neo4j-scales-trillion-plus-...
replies(1): >>45959854 #
3. jandrewrogers ◴[] No.45959854[source]
The graph database market has a deserved reputation for carefully crafting scaling claims that are so narrowly qualified as to be inapplicable to anything real. If you aren't deep into the tech you'll likely miss it in the press releases. It is an industry-wide problem, I'm not trying to single out Neo4j here.

Using this press release as an example, if you pay attention to the details you'll notice that this graph has an anomalously low degree. That is, the graph is very weakly connected, lots of nodes and barely any edges. Typical graph data models have much higher connectivity than this. For example, the classic Graph500 benchmark uses an average degree of 16 to measure scale-out performance.

So why did they nerf the graph connectivity? One of the most fundamental challenges in scaling graphs is optimally cutting them into shards. Unlike most data models, no matter how you cut up the graph some edges will always span multiple shards, which becomes a nasty consistency problem in scale-out systems. Scaling this becomes exponentially harder the more highly connected the graph. So basically, they defined away the problem that makes graphs difficult to scale. They used a graph so weakly connected that they could kinda sorta make it work on a thousand(!) machines even though it is not representative of most real-world graph data models.
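To make the sharding point concrete, here's a toy simulation with random hash partitioning (synthetic graph, illustrative numbers only):

```python
# Toy illustration: with random (hash) partitioning into k shards, an edge
# stays local only if both endpoints land on the same shard, so roughly
# (1 - 1/k) of edges cross shards. The cut *fraction* is about the same
# regardless of degree, but the number of cross-shard edges per node that
# must be kept consistent grows with degree.
import random

def cross_shard_fraction(num_nodes, avg_degree, num_shards, seed=0):
    rng = random.Random(seed)
    shard = [rng.randrange(num_shards) for _ in range(num_nodes)]
    num_edges = num_nodes * avg_degree // 2
    cut = 0
    for _ in range(num_edges):
        u = rng.randrange(num_nodes)   # random synthetic edge endpoints
        v = rng.randrange(num_nodes)
        if shard[u] != shard[v]:
            cut += 1
    return cut / num_edges

for degree in (2, 16):  # weakly connected vs. Graph500-style connectivity
    frac = cross_shard_fraction(100_000, degree, num_shards=1000)
    print(f"avg degree {degree}: ~{frac:.1%} of edges cross shards, "
          f"~{degree * frac:.1f} cross-shard edges per node")
```

The painful part isn't the cut fraction, it's the per-node volume of cross-shard edges you have to keep consistent, which scales directly with how connected the graph is.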

replies(1): >>45972450 #
4. michelpp ◴[] No.45960002[source]
There are open source projects moving toward this scale; the GraphBLAS, for example, uses an algebraic formulation over compressed sparse matrix representations of graphs that is designed to be portable across many architectures, including CUDA. It would be nice if companies like Nvidia could get more behind our efforts, as our main bottleneck is access to development hardware.
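If the algebraic formulation sounds abstract: one BFS level is just a sparse matrix-vector product over a suitable semiring. A rough sketch with scipy standing in for the GraphBLAS (plain arithmetic instead of semirings, purely illustrative):

```python
# BFS expressed as repeated sparse matrix-vector products, the idea the
# GraphBLAS formalizes with semirings and masks. scipy is a stand-in here.
import numpy as np
from scipy.sparse import csr_matrix

# Tiny directed graph: 0->1, 0->2, 1->3, 2->3, 3->4
rows = np.array([0, 0, 1, 2, 3])
cols = np.array([1, 2, 3, 3, 4])
A = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(5, 5))

frontier = np.zeros(5)
frontier[0] = 1.0                 # start BFS at node 0
visited = frontier.astype(bool)
level = 0
while frontier.any():
    print(f"level {level}: {np.flatnonzero(frontier).tolist()}")
    # One BFS step: A^T x reaches the out-neighbors of the frontier;
    # masking off visited nodes gives the next frontier.
    reached = (A.T @ frontier) > 0
    frontier = np.where(reached & ~visited, 1.0, 0.0)
    visited |= reached
    level += 1
```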

To plug my project: I've wrapped the SuiteSparse GraphBLAS library in a Postgres extension [1] that fluidly blends algebraic graph theory with the relational model. The main flow is to use SQL to structure complex queries for starting points, then use the GraphBLAS to flow through the graph to the endpoints, and finally join back to tables for the relevant metadata. On cheap Hetzner hardware (64-core AMD EPYC) we've achieved BFS at 7 billion edges per second over the largest graphs in the SuiteSparse collection (~10B edges). With our CUDA support we hope to push that kind of performance to graphs with trillions of edges.

[1] https://github.com/OneSparse/OneSparse

5. stevage ◴[] No.45960262[source]
>These general data models start to become useful and interesting at around a trillion edges

That is a wild claim. Perhaps for some very specific definition of "useful and interesting"? This dataset is already interesting (hard to say whether it's useful) at a much tinier scale.

replies(2): >>45960301 #>>45960504 #
6. zozbot234 ◴[] No.45960301[source]
This is not a "general purpose data model", though. A better example would be Wikidata, which at about 100M nodes and 1B edges (orders of magnitude less than that 1T claim) already enables plenty of useful queries about all sorts of publicly available data and entities.
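For example, a sketch of querying the public SPARQL endpoint from Python; P31 ("instance of"), Q5 ("human"), P106 ("occupation"), and Q82955 ("politician") are real Wikidata identifiers, but the query itself is only meant as an illustration:

```python
# Sketch: ask Wikidata's public SPARQL endpoint for a few people by occupation.
import requests

ENDPOINT = "https://query.wikidata.org/sparql"
query = """
SELECT ?person ?personLabel WHERE {
  ?person wdt:P31 wd:Q5 .          # instance of: human
  ?person wdt:P106 wd:Q82955 .     # occupation: politician
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 10
"""
resp = requests.get(
    ENDPOINT,
    params={"query": query, "format": "json"},
    headers={"User-Agent": "graph-example/0.1"},  # WDQS asks for a UA string
    timeout=30,
)
for row in resp.json()["results"]["bindings"]:
    print(row["person"]["value"], row["personLabel"]["value"])
```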
7. theteapot ◴[] No.45960362[source]
> It would be cool to have something like this ...

Aren't LLMs something like this?

replies(1): >>45960459 #
8. djtango ◴[] No.45960459[source]
An LLM probabilistically produces tokens from its model, which is why it can hallucinate; an actual graph model would not have that issue.
9. jandrewrogers ◴[] No.45960504[source]
It was a widely observed heuristic going back to the days when the Semantic Web was trendy. The underlying reason is also obvious once stated.

Almost every non-trivial graph data model about the world is, directly or by proxy, a graph of human relationships in the population. Population-scale human relationship graphs commonly pencil out at roughly 1T edges, a function of the population size. People are also typically the highest-cardinality entity type. Even if the purpose isn't a human relationship graph, these models all tend to have one tacitly embedded, with the scale that implies.
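Back-of-envelope, with the per-person relationship count as an assumed order-of-magnitude figure:

```python
# Rough arithmetic for why population-scale relationship graphs land near 1T edges.
# The per-person relationship count is an assumption, not a measured figure.
population = 8.1e9                 # approximate world population
relationships_per_person = 250     # family/colleagues/acquaintances, order-of-magnitude guess
edges = population * relationships_per_person / 2   # each edge is shared by two people
print(f"{edges:.1e} edges")        # ~1.0e12, i.e. on the order of a trillion
```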

If you restrict the set of human entities, you either end up with big holes in the graph or it is a graph that is not generally interesting (like one limited to company employees).

The OP was talking about generalizing this to a graph of people, places, events, and organizations, which always has this property.

It is similar to the phenomenon that a vast number of seemingly unrelated statistics are almost perfectly correlated with GDP.

10. mmooss ◴[] No.45961019[source]
> It is a famously non-trivial computer science problem and much of the serious R&D was not done in public historically.

Could you point us to any public research on this issue? Or the history of the proprietary research? Just the names might help - maybe there are news articles, it's a section in someone's book, etc.

11. babelfish ◴[] No.45972450{3}[source]
Thanks for taking the time to respond! Inspired me to go read the Facebook TAO paper.