14 points ddp26 | 5 comments

Hey HN, I’m Dan. I’ve been working on forecasting for the last six years at Google, then Metaculus, and now at FutureSearch.

For a long time, I thought prediction markets, “superforecasting”, and AI forecasting techniques had nothing to say about the stock market. Stock prices already reflect the collective wisdom of investors. The stock market is basically a prediction market already.

Recently, though, AI forecasting has gotten competitive with human forecasters. And we’ve found a way of modeling long-term company outcomes that is amenable to our forecasting approach.

Iteration by iteration, my skepticism has softened into light distrust, and then into actually getting value out of our company forecasts. Like most AI companies, we drive progress by measuring improvements against our LLM-agent benchmarks, Deep Research Bench and Bench to the Future.

But at the end of the day, I think the best way to evaluate it is to follow Charlie Munger's advice: try hard to disprove what Stockfisher says. Now, even after an hour of carefully reading through a stock's analysis, I generally can't find any major flaws, and I hear the same from other serious investors.

And since it's software, not human analysis, we can apply it at scale. We have long-term forecasts for revenue, margins, and payout ratios for every company in the S&P 500.

We do a Buffett-style intrinsic valuation that doesn't take the stock price into account. This means you can sort by the difference between what the stock market says and what Stockfisher says. Some very interesting neglected companies rise to the top, and we're starting to process the mid-caps and small-caps too.
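To make the idea concrete, here is a minimal sketch of that kind of valuation: discount forecast shareholder payouts (revenue × margin × payout ratio) to a present value, then rank companies by the gap between market cap and intrinsic value. All numbers, names, and the discount/growth rates here are made up for illustration; this is not Stockfisher's actual model.

```python
# Hypothetical Buffett-style intrinsic valuation from forecast fundamentals.
# Inputs and rates are illustrative only, NOT Stockfisher's actual model.

def intrinsic_value(revenues, margins, payout_ratios,
                    discount_rate=0.08, terminal_growth=0.02):
    """Discount forecast shareholder payouts to a present value.

    Each list holds one entry per forecast year.
    """
    value = 0.0
    payout = 0.0
    for year, (rev, margin, ratio) in enumerate(
            zip(revenues, margins, payout_ratios), start=1):
        payout = rev * margin * ratio                  # cash returned to shareholders
        value += payout / (1 + discount_rate) ** year  # discount back to today
    # Gordon-growth terminal value on the final year's payout
    terminal = payout * (1 + terminal_growth) / (discount_rate - terminal_growth)
    value += terminal / (1 + discount_rate) ** len(revenues)
    return value

# Rank by how far intrinsic value exceeds market cap (figures made up)
companies = {
    "ACME": {"market_cap": 25.0,
             "iv": intrinsic_value([10, 11, 12],
                                   [0.25, 0.26, 0.27],
                                   [0.6, 0.6, 0.6])},
}
ranked = sorted(companies.items(),
                key=lambda kv: kv[1]["iv"] - kv[1]["market_cap"],
                reverse=True)
for name, c in ranked:
    print(f"{name}: intrinsic {c['iv']:.1f} vs market {c['market_cap']:.1f}")
```

Note that the stock price never enters the valuation itself; it only appears at the end, when sorting by the gap between the two numbers.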

Stockfisher is live at https://platform.stockfisher.app, no sign in required, to see what we have on the top stocks. (I was happy to see Alphabet comes out well. Meta looks good too, but Amazon and Apple do not.) I would love for you to judge for yourself if the quality of analysis is up to your standards.

1. mckennameyer ◴[] No.45917161[source]
Super interesting direction. I've been pretty skeptical of “AI for stock picking” for the same reasons you mention. Curious how you handle the challenge of companies pivoting into new business areas that don't have historical precedent? For example, Apple's shift into services or Amazon's AWS dominance weren't really predictable from their earlier financials.
replies(1): >>45917523 #
2. mikegreenspan ◴[] No.45917280[source]
This seems like a really interesting use case for LLMs. I'm curious how you've validated the outputs, and what gives you confidence that the forecasts are good and not affected by hallucinations or incomplete information?
replies(1): >>45917487 #
3. ddp26 ◴[] No.45917487[source]
Constant iteration, mostly!

The most interesting aspect of this is backtesting. Quant models get run on past data to see if their predictions work.

When you use LLM agents, though, you run into their memorized knowledge of the world. And then there's the fact that they do their research on the open internet. That makes backtesting hard, but not impossible.

We wrote about how we do our pastcasting validation here: https://stockfisher.app/backtesting-forecasts-that-use-llms
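The open-internet half of the problem can at least be sketched simply: when pastcasting a historical question, restrict the agent's retrieved documents to ones published before the forecast date. The function and data below are hypothetical, for illustration only, and say nothing about the memorization half, which is the harder part.

```python
# Illustrative pastcast harness: hide any document the agent could not
# have seen on the forecast date. Hypothetical names, not FutureSearch's
# actual pipeline, and this does not address the model's memorized
# knowledge of events after the cutoff.
from datetime import date

def pastcast_corpus(documents, as_of):
    """Keep only documents published strictly before `as_of`."""
    return [d for d in documents if d["published"] < as_of]

docs = [
    {"title": "Q2 earnings call transcript", "published": date(2022, 7, 28)},
    {"title": "Analyst note on 2023 guidance", "published": date(2023, 1, 15)},
]
visible = pastcast_corpus(docs, as_of=date(2022, 12, 31))
print([d["title"] for d in visible])  # only the 2022 document survives
```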

4. ddp26 ◴[] No.45917523[source]
One interesting finding in the Stockfisher data is that a lot of these business pivots are actually telegraphed by managers years in advance, in their 10-K and 10-Q filings.

Yes, managers are not good forecasters. But they do get certain things right. And if you figure out which types of manager promises tend to play out, and assess each management team individually for its reliability, you can reason about these business model changes decently well.