Insights


Mar 10, 2026

AI First Drafts of Radiology Reports: From 205 Dictated Words to 57

New Lantern CEO Shiva Suri breaks down how AI first drafts of radiology reports work, reducing a 205-word dictation to 57 words using OCR, viewer signals, and generative AI in a unified platform.


AUTHOR

Jordan LeClaire

This post is adapted from New Lantern CEO Shiva Suri's tech talk at SIIM 2025. Watch the full presentation here.

Radiology feels like death by a thousand cuts. Not the interpretation. Radiologists are extraordinary at finding disease in images. That's what you trained for, and it's probably the part of the job you actually enjoy. The death by a thousand cuts is everything that happens after you find the disease: making the measurements, dictating the report field by field, tabbing between sections, wrestling with speech-to-text on dates and short phrases, paging for critical results, and repeating the whole cycle forty times a day.

More than half of a radiologist's time goes toward dictating reports. And the tools built to help with that process haven't fundamentally changed in over a decade.

We think AI can fix this. Not by replacing the radiologist, but by drafting the report before you start dictating.

Why a Widget Won't Cut It

The first product I ever built for radiology was a widget: a legacy C# application where you could hit a hotkey and it would auto-generate the impression section of a report. Radiologists liked it. They started asking for more: "Can you automate the comparison section? What if I just free-dictate the findings? Can you pull in the tech sheet data?"

The answer was no. A widget sitting on top of a disconnected stack can only do so much. The integrations required to make it work across different reporters, different PACS systems, and different EHRs were more complex than the AI itself. I tried it, and I'm glad to say I failed, because it proved a point: the solution to automating non-radiology work isn't another bolt-on tool. It's one AI-native platform where the worklist, viewer, and reporter all share the same data layer.

Think of it like a jigsaw puzzle. Nobody hangs a jigsaw puzzle on the wall. You buy one complete painting.

Three Inputs That Power an AI First Draft

With a unified platform, there are three categories of signal that can feed an AI-generated first draft of the radiology report.

1. Optical character recognition for tech sheets.

Radiologists should never have to dictate anything that a technologist already wrote down on paper. This is most obvious with ultrasound worksheets (thyroid, carotid, renal artery, venous duplex), but it extends to DEXA, calcium scoring, mammography, and any exam with a structured tech sheet.

The AI pipeline works in three steps. First, parse the handwriting and figure out what each character means. Second, post-process with a large language model to understand the clinical context ("this is a thyroid ultrasound, these are nodule measurements, this is the composition field"). Third, inject the parsed values into the correct sections of whatever report template that specific radiologist prefers. Bob and Bill might use completely different templates. The AI needs to handle both.
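The three steps above can be sketched as a simple pipeline. Everything here is illustrative: the function names, the field labels, and the templates are assumptions standing in for the real OCR and LLM stages, not New Lantern's API.

```python
# Hypothetical sketch of the three-step tech-sheet pipeline.

def parse_tech_sheet(sheet_text: str) -> dict:
    """Step 1 stand-in: handwriting already OCR'd; split 'key: value' lines."""
    values = {}
    for line in sheet_text.strip().splitlines():
        key, _, value = line.partition(":")
        values[key.strip().lower()] = value.strip()
    return values

def contextualize(raw: dict) -> dict:
    """Step 2 stand-in: an LLM would map messy tech-sheet labels to clinical fields."""
    label_map = {"rt lobe nodule": "right_nodule_size", "comp": "composition"}
    return {label_map.get(k, k): v for k, v in raw.items()}

def inject(fields: dict, template: str) -> str:
    """Step 3: fill whichever report template this radiologist prefers."""
    return template.format(**fields)

# Bob's template; Bill's could differ completely, and the same fields fill both.
bob_template = "FINDINGS: Right lobe nodule measuring {right_nodule_size}, {composition}."
sheet = "RT lobe nodule: 1.2 x 0.8 cm\nComp: solid"
print(inject(contextualize(parse_tech_sheet(sheet)), bob_template))
# FINDINGS: Right lobe nodule measuring 1.2 x 0.8 cm, solid.
```

The point of the separation: only step 3 touches the radiologist's template, so supporting a new template means adding a format string, not retraining anything upstream.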

And the critical final piece: inference. For a thyroid ultrasound, you need to calculate a TI-RADS score. For other exams it might be a Fleischner classification or a BI-RADS category. Primitive DICOM SR integrations can plug a measurement into a template field, but they can't do the reasoning step. AI can.
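To see why the reasoning step matters, here is a simplified sketch of ACR TI-RADS point assignment, the kind of classification a DICOM SR field-mapping cannot do. The point values follow the published ACR scheme, but this is a teaching sketch, not clinical software; verify any real implementation against the ACR TI-RADS white paper.

```python
# Simplified ACR TI-RADS point assignment (illustrative only).
COMPOSITION = {"cystic": 0, "spongiform": 0, "mixed": 1, "solid": 2}
ECHOGENICITY = {"anechoic": 0, "hyperechoic": 1, "isoechoic": 1,
                "hypoechoic": 2, "very hypoechoic": 3}
SHAPE = {"wider-than-tall": 0, "taller-than-wide": 3}
MARGIN = {"smooth": 0, "ill-defined": 0, "lobulated": 2,
          "irregular": 2, "extrathyroidal extension": 3}
FOCI = {"none": 0, "comet-tail": 0, "macrocalcifications": 1,
        "peripheral rim": 2, "punctate": 3}

def ti_rads(composition, echogenicity, shape, margin, foci):
    points = (COMPOSITION[composition] + ECHOGENICITY[echogenicity]
              + SHAPE[shape] + MARGIN[margin]
              + sum(FOCI[f] for f in foci))  # echogenic foci points are additive
    if points == 0:
        level = "TR1"
    elif points <= 2:
        level = "TR2"
    elif points == 3:
        level = "TR3"
    elif points <= 6:
        level = "TR4"
    else:
        level = "TR5"
    return points, level

# solid (2) + hypoechoic (2) + punctate foci (3) = 7 points -> TR5
print(ti_rads("solid", "hypoechoic", "wider-than-tall", "smooth", ["punctate"]))
```

Hand-coding tables like this for every classification system (TI-RADS, Fleischner, BI-RADS) is exactly the brittle rules problem; the argument here is that a model can carry out this reasoning from the parsed values directly.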

The goal: any exam with a tech sheet should go from several minutes of manual parsing to several seconds of verification.

2. Viewer interaction signals.

Imagine a future where the software just understood what you were doing as you worked, so you never had to dictate a single word. We're not there yet, but a unified platform gets us closer.

Two viewer interaction signals are especially valuable. The first is measurements. If you measure a lesion in the lungs, the AI should understand the anatomical location and place both the current measurement and the prior measurement into the correct section of the report automatically.

The second is comparisons. Radiologists everywhere hate dictating dates. It's tedious to look through priors and dictate the relevant ones. And speech-to-text is notoriously bad at dates, even with state-of-the-art models. The better solution: when a radiologist hangs a relevant prior, the platform captures that as a signal and auto-populates the comparison section with the correct date and study description. No dictation required.
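Both viewer signals reduce to the same pattern: a UI event fires, and a report section updates without dictation. A minimal sketch, assuming hypothetical callback names and section keys (no real PACS exposes exactly this API):

```python
# Hypothetical viewer-event handlers that write report sections directly.
from datetime import date

report_sections = {"comparison": "None.", "lungs": ""}

def on_measurement(region: str, current_cm: float, prior_cm: float) -> None:
    """Signal 1: a measurement lands in the matching findings section."""
    report_sections[region] = (f"Lesion measures {current_cm} cm, "
                               f"previously {prior_cm} cm.")

def on_prior_hung(study_description: str, study_date: date) -> None:
    """Signal 2: hanging a prior auto-populates the comparison section."""
    formatted = study_date.strftime("%B %d, %Y")  # no date ever spoken aloud
    report_sections["comparison"] = f"{study_description}, {formatted}."

on_measurement("lungs", 1.4, 1.1)
on_prior_hung("CT chest without contrast", date(2025, 11, 3))
print(report_sections["comparison"])
# CT chest without contrast, November 03, 2025.
```

The design choice worth noting: the date is formatted programmatically from study metadata, so speech-to-text never touches it, which is the whole point of capturing the hang event as a signal.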

3. Free dictation of positive findings.

Even with OCR and viewer signals, most reports still need some spoken input from the radiologist. But the key insight is that the input can be dramatically simpler than what radiologists dictate today.

Instead of dictating a fully formatted report with punctuation, paragraph breaks, negatives, and section headers, the radiologist just free-dictates their positive findings. No periods. No new lines. No "new paragraph." Just say what you see and let AI handle the rest.

From there, the AI places the findings in the correct report sections, expands short phrases into properly formatted sentences, generates the impression based on ACR criteria (LI-RADS, BI-RADS, TI-RADS, Fleischner, and so on), and produces a report that sounds like you. That last part matters. A generic off-the-shelf model won't match your dictation style. The model needs to be fine-tuned on your historical reports and then continue learning through in-context signals as your style evolves over time.
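One way to picture this step is the prompt a generative model might receive. The prompt wording, the section list, and the `call_llm` stub below are all assumptions for illustration; a production system would call a model fine-tuned on the radiologist's own historical reports rather than a stub.

```python
# Illustrative framing of free-dictated positive findings for a generative model.
POSITIVE_FINDINGS = ("segmental pe right lower lobe no right heart strain "
                     "small right pleural effusion")

PROMPT = f"""You are drafting a CTA chest report in this radiologist's style.
Place each positive finding in the correct section, expand shorthand into
properly formatted sentences, state pertinent negatives from the normal
template, and generate an impression using applicable ACR criteria.

Positive findings (free dictation, no punctuation):
{POSITIVE_FINDINGS}
"""

def call_llm(prompt: str) -> str:
    """Stub standing in for a fine-tuned model call."""
    return "FINDINGS: ...\nIMPRESSION: ..."

draft = call_llm(PROMPT)
```

Note what the radiologist never says: section headers, punctuation commands, or negatives. All of that is reconstructed from the template and the model's training on their prior reports.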

End-to-End: CTA Chest for Pulmonary Embolism

Here's what this looks like in practice with a real (de-identified) CTA chest report.

Without AI assistance: 205 words dictated. 32 distinct fields to tab through. 8 metadata values entered manually. 19 punctuation commands and formatting instructions.

With AI assistance: 57 words dictated, primarily free-form positive findings in a single input field. 1 distinct field (no tabbing). Zero metadata values (handled by OCR plus HL7/EHR integration). Zero punctuation commands.

Same report. Same clinical content. A fraction of the manual effort.

Less Rules, More AI

For 20 years, radiology software has tried to solve workflow problems with rules: templates, macros, structured fields, hanging protocol configurations. It hasn't worked. The rule-based approach doesn't scale, is brittle when the data is messy, and puts the burden of configuration on the radiologist.

We think the future is less rules and more AI. Not AI that replaces the radiologist. AI that eliminates the parts of the job that have nothing to do with clinical thinking.

Radiologists who harness AI to generate their report first drafts will significantly outpace those who don't. Not because the AI is smarter than the doctor. Because the doctor finally gets to spend their time on the work that actually requires a doctor.

Watch the full SIIM 2025 talk here. If you'd like to see how this works in practice, reach out to our team.
