Longitudinal MRI Analysis

AI-Powered MS Monitoring for Clinical Teams

Automatically detect new and enlarging lesions, track disease progression over time, and generate structured clinical reports — with built-in quality control that blocks unreliable results.

<2min
Processing
3D
Volumetric
CE-MDR
Pathway

Validated on public benchmark datasets

MSSEG-2 · ISBI Longitudinal MS · Insight MS Longitudinal · CE-MDR Pathway
01 / Technology

Clinical-Grade Intelligence

A complete longitudinal MS analysis platform — from DICOM input to structured clinical report — built for safety, transparency, and regulatory readiness from day one.

Hybrid Dual-Model Segmentation

Union ensemble of nnU-Net V1 and V2 models for robust white-matter hyperintensity (WMH) detection. Dual-model consensus reduces false negatives while maintaining specificity across scanner types.

Per-Lesion Longitudinal Tracking

Deterministic lesion matching between baseline and follow-up with persistent IDs. Every lesion is classified as stable, enlarged, shrunk, new, or resolved — with split/merge detection.

Fail-Closed Quality Control

11 automated QC checks with three-tier gating (PASS/WARN/FAIL). If any check fails, the clinical report is blocked entirely — no unreliable results ever reach the clinician.

Brain Volume Quantification

Lesion-excluded atrophy estimation via Jacobian determinant maps. Symmetric diffeomorphic registration (ANTs SyN) ensures unbiased volumetric comparison between timepoints.
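The core of lesion-excluded volumetry can be stated in a few lines. The sketch below is an illustrative simplification, not the shipped implementation: it assumes the Jacobian determinant map of the baseline-to-follow-up deformation has already been computed (e.g. by ANTs SyN) and resampled onto the brain mask grid.

```python
import numpy as np

def percent_volume_change(jacobian, brain_mask, lesion_mask):
    """Estimate percent brain-volume change from a Jacobian determinant
    map of the baseline-to-follow-up deformation. Lesion voxels are
    excluded so lesion evolution does not bias the atrophy estimate.
    det J > 1 means local expansion; det J < 1 means local contraction."""
    roi = brain_mask & ~lesion_mask
    ratio = jacobian[roi].mean()  # ~ follow-up volume / baseline volume in ROI
    return 100.0 * (ratio - 1.0)
```

A uniform Jacobian of 0.98 over the lesion-free brain, for instance, corresponds to roughly 2% volume loss between timepoints.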

Clinical-Grade Reporting

Structured PDF reports with traffic-light QC indicator, MAGNIMS-aligned lesion change tables, colour-coded overlays, and volume trend cards — designed for a read time of under one minute.

DICOM SEG & PACS Integration

Native DICOM Segmentation objects with colour-coded labels (new/stable/resolving). Separate clinical and internal export channels keep internal-only outputs out of the clinical PACS.

Technical Architecture

A 10-stage automated pipeline — from DICOM ingestion to structured clinical report — with full audit trail and fail-closed safety.

Hybrid nnU-Net Ensemble

Our segmentation engine combines nnU-Net V1 and V2 in a union ensemble strategy. The self-configuring nnU-Net architecture automatically adapts preprocessing, network topology, and training to dataset characteristics. By fusing both model generations, we capture lesions that either model alone might miss — achieving higher sensitivity without sacrificing specificity. The pipeline processes full 3D FLAIR and T1-weighted volumes natively, preserving spatial context across slices for accurate boundary delineation of periventricular and cortical lesions.
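The union strategy itself is simple to state: a voxel counts as lesion if either model marks it. A minimal sketch (mask shapes and dtypes are our assumptions, not the product's internals):

```python
import numpy as np

def union_ensemble(mask_v1: np.ndarray, mask_v2: np.ndarray) -> np.ndarray:
    """Fuse two binary lesion masks by logical OR: a voxel is lesion if
    either model detects it, trading a small specificity cost for higher
    sensitivity to lesions one model misses."""
    if mask_v1.shape != mask_v2.shape:
        raise ValueError("masks must be on the same voxel grid")
    return np.logical_or(mask_v1 > 0, mask_v2 > 0).astype(np.uint8)

# toy volumes: each model finds one voxel the other misses
a = np.zeros((2, 2, 2), dtype=np.uint8); a[0, 0, 0] = 1
b = np.zeros((2, 2, 2), dtype=np.uint8); b[1, 1, 1] = 1
fused = union_ensemble(a, b)  # contains both voxels
```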

Deterministic Lesion Tracking

Individual lesions receive persistent IDs across timepoints using a weighted match score combining IoU overlap and centroid distance with Gaussian decay. Each lesion is classified into explicit dynamics: stable, enlarged, shrunk, new, resolved — with automated split/merge event detection. Match confidence is tiered (HIGH/MEDIUM/LOW) based on overlap quality and centroid proximity, enabling clinicians to assess tracking reliability per lesion.
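A sketch of such a weighted score and tiering, with illustrative sigma, weight, and cutoff values (the shipped configuration is not reproduced here):

```python
import math

def match_score(iou, centroid_dist_mm, sigma_mm=5.0, w_iou=0.7):
    """Weighted match score between a baseline and a follow-up lesion:
    IoU overlap blended with centroid proximity under Gaussian decay.
    sigma_mm and w_iou are illustrative, not the product's values."""
    proximity = math.exp(-(centroid_dist_mm ** 2) / (2.0 * sigma_mm ** 2))
    return w_iou * iou + (1.0 - w_iou) * proximity

def confidence_tier(iou, centroid_dist_mm):
    """Tier match confidence by overlap quality and centroid proximity
    (cutoffs are illustrative)."""
    if iou >= 0.5 and centroid_dist_mm <= 3.0:
        return "HIGH"
    if iou >= 0.2 or centroid_dist_mm <= 6.0:
        return "MEDIUM"
    return "LOW"
```

A perfect overlap at zero distance scores 1.0; the score decays smoothly as the follow-up lesion drifts from its baseline position.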

Fail-Closed QC Gating

11 automated quality checks span input geometry, brain mask coverage, registration quality (NCC, Jacobian statistics), segmentation plausibility, and longitudinal consistency. Unlike advisory-only QC models, our fail-closed paradigm blocks clinical report generation entirely when any check fails — ensuring no unreliable quantification reaches clinical review. The three-tier traffic light (GREEN/YELLOW/RED) provides instant confidence assessment.
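The fail-closed aggregation logic can be sketched as follows; check names and the exception-based blocking are illustrative, not the actual implementation:

```python
from enum import Enum

class QCStatus(Enum):
    PASS = "GREEN"
    WARN = "YELLOW"
    FAIL = "RED"

def gate(check_results):
    """Aggregate per-check results into one traffic-light status:
    any FAIL wins, then any WARN, else PASS."""
    statuses = set(check_results.values())
    if QCStatus.FAIL in statuses:
        return QCStatus.FAIL
    if QCStatus.WARN in statuses:
        return QCStatus.WARN
    return QCStatus.PASS

def generate_report(check_results):
    """Fail-closed: a RED gate blocks the clinical report outright
    instead of attaching caveats to unreliable numbers."""
    overall = gate(check_results)
    if overall is QCStatus.FAIL:
        raise RuntimeError("QC gate RED: report blocked, manual review required")
    return {"qc_status": overall.value}  # report payload elided
```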

Shadow Model Validation

An independent LST-AI segmentation runs in parallel as an internal shadow model. A colour-coded comparison map highlights regions of agreement and disagreement between models, supporting continuous internal quality assurance without affecting clinical outputs.

Pydantic-Validated Output
All metrics flow through a single validated schema (metrics_all.json) — the sole source of truth for reporting, export, and audit
Full Audit Trail
Every run is traceable: UUID, git commit, QC gate linkage, per-step structured JSON logs — designed for MDR regulatory audit
MAGNIMS-Aligned Reporting
Clinical reports use standardised terminology for lesion dynamics, aligned with MAGNIMS consensus guidelines
Per-Lesion Confidence
Each lesion receives an explainable confidence score based on overlap, centroid shift, volume, and topology — not a black box
IEC 62304 Documentation
Full compliance document set: SRS, SAD, SDP, risk management, SOUP register, V&V plan, cybersecurity, and IFU
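A single validated schema of the kind described above might look roughly like this; field names and vocabularies are illustrative, not the actual metrics_all.json contract:

```python
from typing import List, Literal
from pydantic import BaseModel, ValidationError

class LesionMetrics(BaseModel):
    lesion_id: int
    dynamics: Literal["stable", "enlarged", "shrunk", "new", "resolved"]
    volume_mm3: float
    confidence: Literal["HIGH", "MEDIUM", "LOW"]

class MetricsAll(BaseModel):
    run_id: str
    git_commit: str
    qc_status: Literal["GREEN", "YELLOW", "RED"]
    lesions: List[LesionMetrics]

# out-of-vocabulary values are rejected at the schema boundary
try:
    MetricsAll(run_id="example", git_commit="abc1234",
               qc_status="PURPLE", lesions=[])
except ValidationError:
    pass  # an invalid status never reaches reporting or export
```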
01c / Why Lävi

What Sets Us Apart

Existing MS MRI platforms like icobrain (icometrix) and Quantib ND are established — but leave gaps we are purpose-built to fill.

01

Fail-Closed Safety, Not Advisory Warnings

Most commercial platforms flag quality issues as warnings but still produce results. Our pipeline blocks clinical output entirely when QC fails. This fail-closed paradigm means unreliable quantification never reaches a clinician — a fundamental difference in patient safety philosophy aligned with MDR requirements for Class IIa medical devices.

02

Per-Lesion Explainability

Competitors typically report aggregate lesion counts and total volumes. Lävi tracks every individual lesion with a persistent ID, assigns confidence scores with explicit contributing factors, and classifies temporal stability. Clinicians see not just "5 new lesions" but which specific lesions are new, how confident the system is in each one, and why.

03

Built-In Shadow Model Validation

An independent second segmentation engine (LST-AI) runs in parallel, producing continuous internal comparison between models. This dual-model architecture enables ongoing self-validation that goes beyond standard single-model commercial products — catching potential model drift or failure modes in production.

04

Regulatory-First Architecture

While competitors often retrofit compliance documentation onto existing products, Lävi was designed with CE-MDR, IEC 62304, ISO 14971, and EU AI Act requirements embedded from the first sprint. Full compliance documentation — SRS, SAD, risk management, SOUP register — is maintained alongside the codebase, not as afterthought appendices.

05

Founded by a Neuroradiologist Who Codes

Our founder is both a practicing radiologist with neuroimaging research publications and the developer of the core pipeline. This eliminates the typical disconnect between clinical need and technical implementation. Every design decision — from report layout to QC thresholds — is informed by real-world diagnostic experience at the MRI reading station.

06

Transparent, Auditable Pipeline

Every processing run generates a complete audit trail: UUID, git commit hash, per-stage timing, structured QC logs, and provenance metadata. The entire output contract is Pydantic-validated against a strict schema. No silent failures, no untraceable results — designed for regulatory audit from day one.

02 / How It Works

From MRI Scan to Clinical Report

Three steps. Fully automated. Under two minutes.

MRI Scans In

Baseline and follow-up DICOM sequences uploaded from any 1.5T or 3T scanner

AI Analysis

Lesion detection, volumetrics, longitudinal tracking, and 11-point quality control

Clinical Report

Structured PDF with lesion changes, volume trends, and traffic-light QC status

Fail-Closed Quality Control

Most radiology AI systems produce results regardless of input quality. If the registration fails or the segmentation is unreliable, you may still get numbers — just unreliable ones.

Lävi takes the opposite approach.

Our system runs 11 automated quality checks on every case — covering image geometry, brain extraction, registration accuracy, segmentation plausibility, and longitudinal consistency. If any critical check fails, the clinical report is blocked entirely. No partial results. No misleading numbers. No silent failures.

This fail-closed paradigm means clinicians can trust that any report they receive has passed a rigorous quality gate. Cases that don't meet the threshold are flagged for manual review instead of being quietly reported with caveats buried in footnotes.

GREEN — All checks passed. Safe for clinical use.
YELLOW — Warnings present. Use with noted caveats.
RED — Critical failure. Report blocked. Manual review required.

Rigorous Validation at Every Stage

We follow a multi-phase validation approach to ensure our technology performs reliably before it reaches clinical practice.

Phase 1

External Benchmarking

Performance evaluation on publicly available, expert-annotated MS datasets including MSSEG-2, ISBI Longitudinal MS Challenge, and Insight MS Longitudinal. These benchmarks test generalisability across different scanner types, field strengths, and acquisition protocols from multiple international sites.

Ongoing
Phase 2

Clinical Validation Study

Prospective validation study with a clinical partner in Estonia, evaluating diagnostic performance on real-world clinical data by comparing AI-generated segmentations against expert neuroradiologist annotations. Currently in the contract-negotiation phase.

In preparation
Phase 3

Multi-Site Validation

Broader validation across multiple clinical sites to demonstrate consistent performance across different patient populations, scanner configurations, and clinical workflows — a key requirement for CE marking under the Medical Device Regulation.

Planned
04 / Regulatory Roadmap

Path to Market

A clear regulatory strategy aligned with the EU Medical Device Regulation (CE-MDR) framework.

2025

Foundation

External validation on public benchmarks. Development of quality management system (QMS). Clinical partnership establishment.

2026

Clinical Validation

Prospective clinical validation study. Ethics committee approval and data collection. Performance evaluation against expert annotations.

2027

CE-MDR Process

Initiation of CE marking process under MDR. Technical documentation, risk management, and conformity assessment with Notified Body.

2028

Market Entry

CE mark approval and commercial launch in the European market. Integration partnerships with hospital PACS systems.

Who We Are

A focused team at the intersection of AI, medical imaging, and clinical software development.

AM

Andreas Müürsepp

Founder & CEO

Radiologist at Tartu University Hospital and founder of ACME Diagnostics. Research background in neuroimaging at the University of Tartu Department of Radiology, with published work on brain MRI analysis including cortical thickness mapping and grey matter morphometry. Combines hands-on clinical radiology experience with software development skills — a rare intersection that drives the design of Lävi's AI platform from both the clinical and technical side.

LinkedIn
AH

Alvar Haug

Infrastructure & Security

Responsible for cloud infrastructure, deployment pipelines, and security architecture. Ensuring the platform meets the stringent data protection and availability requirements of clinical healthcare environments.

LinkedIn

We are actively building our team. If you're passionate about clinical AI, medical imaging, or regulatory affairs — get in touch.

06 / About

Lävi Clinical Suite

Lävi Clinical Suite OÜ develops AI-powered medical imaging software to advance the diagnosis and monitoring of neurological conditions. Our core technology focuses on MS lesion detection and brain volume quantification, designed to support clinical decision-making with quantitative, reproducible insights.

We are committed to rigorous clinical validation and regulatory compliance, building software that clinicians can trust in real-world diagnostic workflows. Based in Tartu, Estonia — a growing hub for health technology and life sciences in the Nordics and Baltics.

Headquarters
Tartu, Estonia
Entity
Lävi Clinical Suite OÜ
Focus
AI-Powered Neuroimaging Diagnostics
Email
andreas@lavi-clinical.com
Phone
+372 501 2118

Interested in a Partnership
or a Demo?

Whether you're a clinical partner, investor, or fellow researcher — we'd love to hear from you.

Request a Demo
Follow on LinkedIn