Hey HN, this is David from Aluna (YC S24). We work with diagnostic labs to build datasets and evals for oncology tasks.

I wanted to share a simple RL environment I built that gives frontier LLMs a set of tools to zoom and pan across a digitized pathology slide and find the regions relevant to making a diagnosis. Here are some videos of an LLM performing diagnosis on a few slides:

(https://www.youtube.com/watch?v=k7ixTWswT5c): traces of an LLM choosing different regions to view before making a diagnosis on a case of small-cell carcinoma of the lung

(https://youtube.com/watch?v=0cMbqLnKkGU): traces of an LLM choosing different regions to view before making a diagnosis on a case of benign fibroadenoma of the breast

Why I built this:

Pathology slides are the backbone of modern cancer diagnosis. Tissue from a biopsy is sliced, stained, and mounted on glass for a pathologist to examine for abnormalities.

Today, many of these slides are digitized into whole-slide images (WSIs) in TIF or SVS format and are several gigabytes in size.

While there exist several pathology-focused AI models, I was curious to test whether frontier LLMs can perform well on pathology tasks. The main challenge is that WSIs are too large to fit into an LLM’s context window. The standard workaround, splitting them into thousands of smaller tiles, is inefficient for large frontier LLMs.
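
To put rough numbers on the tiling approach (the slide dimensions here are just a typical example, not a specific case from our set):

```python
# Back-of-envelope: a WSI scanned at 40x can be on the order of 100,000 x 80,000 pixels.
# Splitting it into standard 512x512 tiles yields tens of thousands of images per slide,
# far too many to send to a frontier LLM for every case.
width, height, tile = 100_000, 80_000, 512
print((width // tile) * (height // tile))  # ~30,000 tiles
```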

Inspired by how pathologists zoom and pan under a microscope, I built a set of tools that let LLMs control magnification and coordinates, viewing small regions at a time and deciding where to look next.
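
For a sense of what those tools look like, here is a minimal sketch of a single "view region" tool, assuming OpenSlide for reading the WSI; the function name, signature, and default view size are illustrative rather than the exact tools from the environment:

```python
import base64
import io

import openslide  # pip install openslide-python


def view_region(slide_path: str, x: int, y: int, level: int, size: int = 1024) -> str:
    """Hypothetical LLM tool: return one region of a whole-slide image as a base64 PNG.

    x, y  -- top-left corner in level-0 (full-resolution) coordinates
    level -- pyramid level to read from (0 = highest magnification)
    size  -- width/height in pixels of the returned view at that level
    """
    slide = openslide.OpenSlide(slide_path)
    region = slide.read_region((x, y), level, (size, size)).convert("RGB")
    buf = io.BytesIO()
    region.save(buf, format="PNG")
    return base64.b64encode(buf.getvalue()).decode()
```

The model calls a tool like this repeatedly, adjusting the level and coordinates based on what it sees in each view, until it commits to a diagnosis.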

This led to some interesting behaviors, and with some prompt engineering it actually seemed to yield pretty good results:

- GPT-5: explored up to ~30 regions before deciding (concurred with an expert pathologist on 4 out of 6 cancer subtyping tasks and 3 out of 5 IHC scoring tasks)

- Claude 4.5: typically used 10–15 views but reached similar accuracy to GPT-5 (concurred with the pathologist on 3 out of 6 cancer subtyping tasks and 4 out of 5 IHC scoring tasks)

- Smaller models (GPT-4o, Claude 3.5 Haiku): examined ~8 frames and were less accurate overall (1 out of 6 cancer subtyping tasks and 1 out of 5 IHC scoring tasks)

Obviously this was a small sample set, so we are working on a larger benchmark suite with more cases and task types, but I thought it was cool that this worked at all, so I wanted to share it with HN!

Utkarsh_Mood:
Do you think fine-tuning these LLMs would bring about results comparable to models trained specifically for this?
dchu17:
I think so. It feels like there is more to be squeezed out of just better prompts, but I was going to play around with fine-tuning Qwen3.
Utkarsh_Mood:
Fair enough. I wonder if fine-tuning over different modalities like IMC, H&E, etc. would help it generalize better across all of them.
dchu17:
Yeah, I think one of the interesting things to see is how well it generalizes across tasks. The existence of pathology foundation models suggests there is certainly a degree of generalizability (at least across tissues), but I am not sure yet about generalizability across different modalities (there are some cool biomarker-prediction models, though).