Let’s have a closer look at the specifications and implementation of the three main components of Saké.
Image Database Server
Ability to store and retrieve a variety of medical images (X-rays, MRIs, CT scans, etc.) and corresponding metadata
DICOM (Digital Imaging and Communications in Medicine): the international standard for medical images and related information; it gives the system flexibility in the image types it can store
Images accessible to various users across the globe
Images stored on Google Cloud, metadata (ordering of stack, patient information, etc.) pre-generated by Python scripts
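The metadata pre-generation step might look like the following sketch, which orders a stack of slices by their DICOM InstanceNumber and bundles patient information into one JSON record per study. The field names (InstanceNumber, PatientID, SOPInstanceUID) come from the DICOM standard; the surrounding script structure is an assumption for illustration, not Saké's actual code.

```python
import json

def build_stack_metadata(slices):
    """Order a list of parsed DICOM headers into a stack and
    bundle patient information for the database server.

    `slices` is a list of dicts holding already-extracted DICOM
    fields; in the real pipeline these would come from a DICOM
    parser rather than being built by hand.
    """
    ordered = sorted(slices, key=lambda s: s["InstanceNumber"])
    return {
        "patient_id": ordered[0]["PatientID"],
        "num_slices": len(ordered),
        "slice_order": [s["SOPInstanceUID"] for s in ordered],
    }

# Example: three slices arriving out of order.
slices = [
    {"InstanceNumber": 2, "PatientID": "P001", "SOPInstanceUID": "uid-b"},
    {"InstanceNumber": 1, "PatientID": "P001", "SOPInstanceUID": "uid-a"},
    {"InstanceNumber": 3, "PatientID": "P001", "SOPInstanceUID": "uid-c"},
]
metadata = build_stack_metadata(slices)
print(json.dumps(metadata))
```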
Focus on lung CT scans due to 1) partner interest, 2) problem severity (200k+ new cases in the US each year), and 3) problem tractability (compared to other medical diagnosis problems, this one appears more approachable)
Accessible to radiologists across the globe
Standardized, easy-to-use annotation framework
Automated segmentation and propagation to adjacent slices
Precise fine-tuning of the segments
Originally, we presumed we would need to implement the entire front-end from scratch, until we realized a considerable amount of software was readily available
Investigate platforms: Stanford’s ePAD, OsiriX, Dana-Farber’s imaging platform
Decide on: OHIF Viewer (Open Health Imaging Foundation) - the leading open-source, web-based medical imaging annotation tool
Communicates with REST backend on the Smart Server via AJAX requests
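The exact request format is not specified here; as an illustration, a segmentation request from the viewer to the Smart Server might carry a JSON payload like the following (the endpoint and every field name are hypothetical, not part of the actual API):

```python
import json

# Hypothetical request body the OHIF-based viewer could POST to the
# Smart Server's REST backend, identifying the image and the user's
# seed click for the flood-fill segmentation.
request_body = {
    "series_uid": "1.2.840.example.123",  # which image stack
    "slice_index": 42,                    # which slice the user clicked
    "seed": {"x": 256, "y": 198},         # seed point for flood fill
    "threshold": 120,                     # intensity threshold
}

payload = json.dumps(request_body)
print(payload)
```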
Backend of segmentation process for the viewer
Machine learning pipeline that can be easily upgraded
ML assists doctors in detecting ROIs
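An “easily upgraded” pipeline usually means the segmentation logic sits behind a stable interface, so a better model can be dropped in without touching the rest of the server. A minimal sketch of that pattern (all class and method names are assumptions for illustration):

```python
class Segmenter:
    """Interface every segmentation backend implements."""
    def segment(self, image, seed, threshold):
        raise NotImplementedError

class FloodFillSegmenter(Segmenter):
    """Classical region growing: stand-in for the flood-fill backend."""
    def segment(self, image, seed, threshold):
        return {"method": "flood_fill", "seed": seed}

class LearnedSegmenter(Segmenter):
    """Future ML model: swapped in without changing server code."""
    def segment(self, image, seed, threshold):
        return {"method": "ml_model", "seed": seed}

# The server holds a reference to *some* Segmenter; upgrading the
# pipeline is just rebinding this one name.
active_segmenter: Segmenter = FloodFillSegmenter()
result = active_segmenter.segment(image=None, seed=(10, 20), threshold=100)
print(result["method"])
```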
Recursive flood-fill algorithm that takes a seed point and expands the annotation’s boundary until a given intensity threshold is reached
The 2D ROI is propagated to adjacent slices along the Z-axis
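The two steps above can be sketched as follows: a flood fill grows a 2D region from the seed point over pixels whose intensity stays within a threshold of the seed’s, and the resulting mask then seeds the fill on the neighbouring slice. This is an illustrative reimplementation on plain Python lists, not Saké’s actual code; the text describes the fill as recursive, but an explicit queue is used here to avoid recursion-depth limits.

```python
from collections import deque

def flood_fill(image, seed, threshold):
    """Grow a region from `seed` over 4-connected pixels whose
    intensity differs from the seed intensity by at most `threshold`.
    `image` is a 2D list of intensities; returns a set of (row, col)."""
    rows, cols = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    region, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in region
                    and abs(image[nr][nc] - seed_val) <= threshold):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

def propagate(prev_slice, next_slice, prev_region, threshold):
    """Carry a 2D region to the adjacent slice: pixels of the old
    region whose intensity is still similar on the new slice act as
    seeds for a fresh flood fill there."""
    region = set()
    for (r, c) in prev_region:
        if (r, c) in region:
            continue
        if abs(next_slice[r][c] - prev_slice[r][c]) <= threshold:
            region |= flood_fill(next_slice, (r, c), threshold)
    return region

# Toy example: a bright 2x2 blob that persists on the next slice.
slice0 = [[0, 0, 0, 0],
          [0, 9, 9, 0],
          [0, 9, 9, 0],
          [0, 0, 0, 0]]
slice1 = [[0, 0, 0, 0],
          [0, 8, 9, 0],
          [0, 9, 8, 0],
          [0, 0, 0, 0]]
roi = flood_fill(slice0, (1, 1), threshold=1)
roi_next = propagate(slice0, slice1, roi, threshold=1)
```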
Inspiration: William Gray Roncal, VESICLE: Volumetric Evaluation of Synaptic Interfaces using Computer vision at Large Scale