To be more explicit, I feel like the framing isn’t great for humanity. It turns inquiry into “I proclaim X, so let me find a source” instead of “I wonder about X, let me look for authoritative information.”
The site spits out one or two links (in my case always just one), and then it’s on the user to click through and parse the paper. That feels incomplete. A Perplexity-style interface would be more useful, where an LLM reads the abstract (at minimum) and explains the nuance.
Take one example on the homepage: “Meat reduces cancer risk.” The site points to a study. I read it. The study absolutely does not prove that meat reduces cancer risk. What it actually shows is a small, observational, and potentially confounded association between higher animal protein intake and slightly lower cancer mortality. The authors explicitly state it isn’t conclusive.
But in conversation, if someone says “meat reduces cancer risk,” I disagree, and they whip out this site—it suddenly looks like “science proves it.” Except it doesn’t. And let’s be honest: neither of us is going to sit down and read an eight-page study in the middle of a chat. I only did because (a) I find it interesting, and (b) I’m stuck on a plane with Viasat wifi speeds blocking real work.
Without that kind of diligence from readers, the tool risks turning weak claims into “settled fact.” Further, as science and the scientific method come under continued assault in the free world, tools that oversimplify or misrepresent evidence risk deepening mistrust and reinforcing the narrative that science is just a weapon to win arguments rather than a process for seeking truth.