Automatically detect new and enlarging lesions, track disease progression over time, and generate structured clinical reports — with built-in quality control that blocks unreliable results.
A complete longitudinal MS analysis platform — from DICOM input to structured clinical report — built for safety, transparency, and regulatory readiness from day one.
Union ensemble of nnU-Net V1 and V2 models for robust WMH detection. Dual-model consensus reduces false negatives while maintaining specificity across scanner types.
Deterministic lesion matching between baseline and follow-up with persistent IDs. Every lesion is classified as stable, enlarged, shrunk, new, or resolved — with split/merge detection.
Eleven automated QC checks with three-tier gating (PASS/WARN/FAIL). If any check fails, the clinical report is blocked entirely — no unreliable results ever reach the clinician.
Lesion-excluded atrophy estimation via Jacobian determinant maps. Symmetric diffeomorphic registration (ANTs SyN) ensures unbiased volumetric comparison between timepoints.
Structured PDF reports with traffic-light QC indicator, MAGNIMS-aligned lesion change tables, colour-coded overlays, and volume trend cards — designed to be read in under one minute.
Native DICOM Segmentation objects with colour-coded labels (new/stable/resolving). Separate clinical and internal export channels prevent data leakage to PACS.
A 10-stage automated pipeline — from DICOM ingestion to structured clinical report — with full audit trail and fail-closed safety.
Our segmentation engine combines nnU-Net V1 and V2 in a union ensemble strategy. The self-configuring nnU-Net architecture automatically adapts preprocessing, network topology, and training to dataset characteristics. By fusing both model generations, we capture lesions that either model alone might miss — achieving higher sensitivity without sacrificing specificity. The pipeline processes full 3D FLAIR and T1-weighted volumes natively, preserving spatial context across slices for accurate boundary delineation of periventricular and cortical lesions.
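In practical terms, the union fusion step amounts to a voxel-wise logical OR of the two model outputs. A minimal sketch, assuming each model produces a binary lesion mask on the same voxel grid (function and variable names are illustrative, not our production API):

```python
import numpy as np

def union_ensemble(mask_v1: np.ndarray, mask_v2: np.ndarray) -> np.ndarray:
    """Fuse two binary lesion masks by voxel-wise logical OR.

    A voxel is labelled lesion if either nnU-Net generation detects
    it, trading a small specificity cost for higher sensitivity.
    """
    if mask_v1.shape != mask_v2.shape:
        raise ValueError("masks must share the same voxel grid")
    return np.logical_or(mask_v1 > 0, mask_v2 > 0).astype(np.uint8)

# Toy 3D example: each model misses a lesion voxel the other finds.
a = np.zeros((2, 2, 2), dtype=np.uint8); a[0, 0, 0] = 1
b = np.zeros((2, 2, 2), dtype=np.uint8); b[1, 1, 1] = 1
fused = union_ensemble(a, b)  # both voxels survive fusion
```

The OR rule is deliberately conservative in the safety-critical direction: a lesion missed by one model generation is still carried forward for review.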
Individual lesions receive persistent IDs across timepoints using a weighted match score combining IoU overlap and centroid distance with Gaussian decay. Each lesion is classified into explicit dynamics: stable, enlarged, shrunk, new, resolved — with automated split/merge event detection. Match confidence is tiered (HIGH/MEDIUM/LOW) based on overlap quality and centroid proximity, enabling clinicians to assess tracking reliability per lesion.
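The weighted match score described above can be sketched as follows; the weights, decay sigma, and tier thresholds shown here are illustrative placeholders, not our calibrated production values:

```python
import numpy as np

def match_score(mask_a, mask_b, centroid_a, centroid_b,
                w_iou=0.6, w_dist=0.4, sigma_mm=5.0):
    """Weighted lesion match score: IoU overlap plus a Gaussian
    decay on centroid distance (weights/sigma are illustrative)."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    iou = inter / union if union else 0.0
    d = np.linalg.norm(np.asarray(centroid_a) - np.asarray(centroid_b))
    return w_iou * iou + w_dist * np.exp(-d**2 / (2 * sigma_mm**2))

def confidence_tier(score, high=0.7, medium=0.4):
    """Tier the match confidence (thresholds are illustrative)."""
    return "HIGH" if score >= high else "MEDIUM" if score >= medium else "LOW"

# Perfectly overlapping lesion with identical centroid -> score 1.0.
m = np.ones((2, 2), dtype=bool)
s = match_score(m, m, (0, 0, 0), (0, 0, 0))
```

Because both terms are bounded in [0, 1], the combined score is directly comparable across lesions of very different sizes.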
Eleven automated quality checks span input geometry, brain mask coverage, registration quality (NCC, Jacobian statistics), segmentation plausibility, and longitudinal consistency. Unlike advisory-only QC approaches, our fail-closed paradigm blocks clinical report generation entirely when any check fails — ensuring no unreliable quantification reaches clinical review. The three-tier traffic light (GREEN/YELLOW/RED) provides an instant confidence assessment.
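The fail-closed gating logic reduces to a simple rule — one FAIL blocks release, any WARN downgrades the traffic light. A minimal sketch (check names are hypothetical examples, not the actual check set):

```python
from enum import Enum

class QCStatus(Enum):
    PASS = "GREEN"
    WARN = "YELLOW"
    FAIL = "RED"

def gate_report(check_results: dict) -> tuple:
    """Fail-closed gate: a single FAIL blocks the clinical report;
    any WARN downgrades the traffic light; otherwise GREEN.
    Returns (overall_status, report_released)."""
    statuses = list(check_results.values())
    if any(s is QCStatus.FAIL for s in statuses):
        return QCStatus.FAIL, False   # report blocked entirely
    if any(s is QCStatus.WARN for s in statuses):
        return QCStatus.WARN, True    # released, flagged yellow
    return QCStatus.PASS, True

status, released = gate_report({
    "geometry": QCStatus.PASS,
    "registration_ncc": QCStatus.WARN,
    "brain_mask_coverage": QCStatus.FAIL,
})
```

Note the asymmetry by design: there is no configuration in which a FAIL can be overridden into a released report.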
An independent LST-AI segmentation runs in parallel as an internal shadow model. A colour-coded comparison map highlights regions of agreement and disagreement between models, supporting continuous internal quality assurance without affecting clinical outputs.
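The Jacobian-based atrophy estimation in the feature set rests on a standard construction: the determinant of the Jacobian of the deformation x → x + u(x) gives voxel-wise volume change. A minimal finite-difference sketch, assuming a displacement field in voxel units (the production pipeline uses ANTs SyN, not this toy implementation):

```python
import numpy as np

def jacobian_determinant(disp: np.ndarray) -> np.ndarray:
    """Voxel-wise Jacobian determinant of the deformation
    x -> x + u(x), given displacement field `disp` of shape
    (3, X, Y, Z).  det J > 1 marks local expansion,
    det J < 1 local contraction (atrophy)."""
    # du_i/dx_j via central finite differences along each axis.
    grads = [np.gradient(disp[i], axis=(0, 1, 2)) for i in range(3)]
    J = np.empty(disp.shape[1:] + (3, 3))
    for i in range(3):
        for j in range(3):
            # J = I + du/dx
            J[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)
    return np.linalg.det(J)

# Identity deformation (zero displacement): det J == 1 everywhere.
detj = jacobian_determinant(np.zeros((3, 8, 8, 8)))
```

Excluding lesion voxels before aggregating this map is what prevents lesion growth or shrinkage from masquerading as whole-brain volume change.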
Existing MS MRI platforms like icobrain (icometrix) and Quantib ND are established — but leave gaps we are purpose-built to fill.
Most commercial platforms flag quality issues as warnings but still produce results. Our pipeline blocks clinical output entirely when QC fails. This fail-closed paradigm means unreliable quantification never reaches a clinician — a fundamental difference in patient safety philosophy aligned with MDR requirements for Class IIa medical devices.
Competitors typically report aggregate lesion counts and total volumes. Lävi tracks every individual lesion with a persistent ID, assigns confidence scores with explicit contributing factors, and classifies temporal stability. Clinicians see not just "5 new lesions" but which specific lesions are new, how confident the system is in each one, and why.
An independent second segmentation engine (LST-AI) runs in parallel, producing continuous internal comparison between models. This dual-model architecture enables ongoing self-validation that goes beyond standard single-model commercial products — catching potential model drift or failure modes in production.
While competitors often retrofit compliance documentation onto existing products, Lävi was designed with CE-MDR, IEC 62304, ISO 14971, and EU AI Act requirements embedded from the first sprint. Full compliance documentation — SRS, SAD, risk management, SOUP register — is maintained alongside the codebase, not as afterthought appendices.
Our founder is both a practicing radiologist with neuroimaging research publications and the developer of the core pipeline. This eliminates the typical disconnect between clinical need and technical implementation. Every design decision — from report layout to QC thresholds — is informed by real-world diagnostic experience at the MRI reading station.
Every processing run generates a complete audit trail: UUID, git commit hash, per-stage timing, structured QC logs, and provenance metadata. The entire output contract is Pydantic-validated against a strict schema. No silent failures, no untraceable results — designed for regulatory audit from day one.
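Conceptually, the provenance record attached to every run can look like the following sketch — a stdlib dataclass standing in for the actual Pydantic schema, with field names that are illustrative rather than our real output contract:

```python
import uuid
import time
from dataclasses import dataclass, field, asdict

@dataclass(frozen=True)
class RunProvenance:
    """Illustrative audit-trail record for one processing run
    (field names are hypothetical, not the production schema)."""
    run_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    git_commit: str = "0000000"   # injected at build time in practice
    started_at: float = field(default_factory=time.time)
    stage_timings_s: dict = field(default_factory=dict)
    qc_log: list = field(default_factory=list)

prov = RunProvenance(git_commit="abc1234")
prov.stage_timings_s["registration"] = 41.7
prov.qc_log.append({"check": "geometry", "status": "PASS"})
record = asdict(prov)  # serialisable payload for the audit trail
```

In the real pipeline this role is played by a strict Pydantic model, so any output that drifts from the schema is rejected at validation time rather than silently passed downstream.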
Three steps. Fully automated. Under two minutes.
Baseline and follow-up DICOM sequences uploaded from any 1.5T or 3T scanner
Lesion detection, volumetrics, longitudinal tracking, and 11-point quality control
Structured PDF with lesion changes, volume trends, and traffic-light QC status
Most radiology AI systems produce results regardless of input quality. If the registration fails or the segmentation is unreliable, you may still get numbers — just unreliable ones.
Lävi takes the opposite approach.
Our system runs 11 automated quality checks on every case — covering image geometry, brain extraction, registration accuracy, segmentation plausibility, and longitudinal consistency. If any critical check fails, the clinical report is blocked entirely. No partial results. No misleading numbers. No silent failures.
This fail-closed paradigm means clinicians can trust that any report they receive has passed a rigorous quality gate. Cases that don't meet the threshold are flagged for manual review instead of being quietly reported with caveats buried in footnotes.
We follow a multi-phase validation approach to ensure our technology performs reliably before it reaches clinical practice.
Performance evaluation on publicly available, expert-annotated MS datasets including MSSEG-2, ISBI Longitudinal MS Challenge, and Insight MS Longitudinal. These benchmarks test generalizability across different scanner types, field strengths, and acquisition protocols from multiple international sites.
Prospective validation study with a clinical partner in Estonia. The study will evaluate diagnostic performance on real-world clinical data, comparing AI-generated segmentations against expert neuroradiologist annotations. Currently in the contract negotiation phase with the clinical validation partner.
Broader validation across multiple clinical sites to demonstrate consistent performance across different patient populations, scanner configurations, and clinical workflows — a key requirement for CE marking under the Medical Device Regulation.
A clear regulatory strategy aligned with the EU Medical Device Regulation (CE-MDR) framework.
External validation on public benchmarks. Development of quality management system (QMS). Clinical partnership establishment.
Prospective clinical validation study. Ethics committee approval and data collection. Performance evaluation against expert annotations.
Initiation of CE marking process under MDR. Technical documentation, risk management, and conformity assessment with Notified Body.
CE mark approval and commercial launch in the European market. Integration partnerships with hospital PACS systems.
A focused team at the intersection of AI, medical imaging, and clinical software development.
Radiologist at Tartu University Hospital and founder of ACME Diagnostics. Research background in neuroimaging at the University of Tartu Department of Radiology, with published work on brain MRI analysis including cortical thickness mapping and grey matter morphometry. Combines hands-on clinical radiology experience with software development skills — a rare intersection that drives the design of Lävi's AI platform from both the clinical and technical side.
Responsible for cloud infrastructure, deployment pipelines, and security architecture. Ensuring the platform meets the stringent data protection and availability requirements of clinical healthcare environments.
We are actively building our team. If you're passionate about clinical AI, medical imaging, or regulatory affairs — get in touch.
Lävi Clinical Suite OÜ develops AI-powered medical imaging software to advance the diagnosis and monitoring of neurological conditions. Our core technology focuses on MS lesion detection and brain volume quantification, designed to support clinical decision-making with quantitative, reproducible insights.
We are committed to rigorous clinical validation and regulatory compliance, building software that clinicians can trust in real-world diagnostic workflows. Based in Tartu, Estonia — a growing hub for health technology and life sciences in the Nordics and Baltics.
Whether you're a clinical partner, investor, or fellow researcher — we'd love to hear from you.