
Publications (31)

Jianning Li, Antonio Pepe, C. Gsaxner, Gijs Luijten, Yuan Jin, Narmada Ambigapathy, Enrico Nasca, Naida Solak et al.

Prior to the deep learning era, shape was commonly used to describe objects. Nowadays, state-of-the-art (SOTA) algorithms in medical imaging are predominantly diverging from computer vision, where voxel grids, meshes, point clouds, and implicit surface models are used. This is seen from numerous shape-related publications in premier vision conferences as well as the growing popularity of ShapeNet (about 51,300 models) and Princeton ModelNet (127,915 models). For the medical domain, we present a large collection of anatomical shapes (e.g., bones, organs, vessels) and 3D models of surgical instruments, called MedShapeNet, created to facilitate the translation of data-driven vision algorithms to medical applications and to adapt SOTA vision algorithms to medical problems. As a unique feature, we directly model the majority of shapes on the imaging data of real patients. As of today, MedShapeNet includes 23 datasets with more than 100,000 shapes that are paired with annotations (ground truth). Our data is freely accessible via a web interface and a Python application programming interface (API) and can be used for discriminative, reconstructive, and variational benchmarks as well as various applications in virtual, augmented, or mixed reality, and 3D printing. As examples, we present use cases in the fields of classification of brain tumors, facial and skull reconstructions, multi-class anatomy completion, education, and 3D printing. In the future, we will extend the data and improve the interfaces. The project pages are: https://medshapenet.ikim.nrw/ and https://github.com/Jianningli/medshapenet-feedback
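Collections like this are typically consumed as meshes or point clouds. As a minimal illustration of working with such data — hypothetical helper functions in plain Python, not the MedShapeNet API — the sketch below computes two elementary shape descriptors for a point cloud:

```python
# Minimal shape-descriptor sketch for a 3D point cloud (illustrative only;
# this is NOT the MedShapeNet API -- names and values here are made up).

def centroid(points):
    """Arithmetic mean of a list of (x, y, z) points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def bounding_box(points):
    """Axis-aligned bounding box as (min_corner, max_corner)."""
    mins = tuple(min(p[i] for p in points) for i in range(3))
    maxs = tuple(max(p[i] for p in points) for i in range(3))
    return mins, maxs

# Toy "bone" point cloud.
cloud = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (2.0, 1.0, 0.0), (0.0, 1.0, 4.0)]
print(centroid(cloud))        # (1.0, 0.5, 1.0)
print(bounding_box(cloud))    # ((0.0, 0.0, 0.0), (2.0, 1.0, 4.0))
```

Descriptors like these are the simplest inputs to the discriminative benchmarks the abstract mentions; real pipelines would use richer representations (meshes, signed distance fields).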

K. Ntatsis, Niels Dekker, Viktor van der Valk, Tom Birdsong, Dženan Zukić, S. Klein, M. Staring, Matthew Mccormick

Image registration plays a vital role in understanding changes that occur in 2D and 3D scientific imaging datasets. Registration involves finding a spatial transformation that aligns one image to another by optimizing relevant image similarity metrics. In this paper, we introduce itk-elastix, a user-friendly Python wrapping of the mature elastix registration toolbox. The open-source tool supports rigid, affine, and B-spline deformable registration, making it versatile for various imaging datasets. By utilizing the modular design of itk-elastix, users can efficiently configure and compare different registration methods, and embed these in image analysis workflows.
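The core idea — searching over spatial transforms to optimize a similarity metric — can be sketched in a few lines. This is a conceptual 1D illustration in plain Python, not the itk-elastix API:

```python
# Conceptual 1D "registration": brute-force search for the circular shift
# that minimizes sum-of-squared-differences between two signals.
# (Illustrates the optimization loop only; not the itk-elastix API.)

def ssd(a, b):
    """Sum of squared differences between two equal-length signals."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def register_shift(fixed, moving, max_shift=5):
    """Return the circular shift of `moving` that best matches `fixed`."""
    best_shift, best_cost = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        shifted = moving[-s:] + moving[:-s] if s else moving
        cost = ssd(fixed, shifted)
        if cost < best_cost:
            best_shift, best_cost = s, cost
    return best_shift

fixed = [0, 0, 1, 3, 1, 0, 0, 0]
moving = [0, 0, 0, 0, 1, 3, 1, 0]   # same peak, displaced by two samples
print(register_shift(fixed, moving))  # -2: aligns after a left rotation of 2
```

Real registration replaces the brute-force loop with gradient-based optimizers, the shift with rigid/affine/B-spline transforms, and SSD with metrics such as mutual information — which is exactly the machinery elastix provides behind its parameter maps.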

Tomasz J. Czernuszewicz, Adam M. Aji, Christopher J. Moore, S. Montgomery, Brian Velasco, Gabriela Torres, Keerthi S. Anand, Kennita A. Johnson et al.

Shear wave elastography (SWE) is an ultrasound‐based stiffness quantification technology that is used for noninvasive liver fibrosis assessment. However, despite widescale clinical adoption, SWE is largely unused by preclinical researchers and drug developers for studies of liver disease progression in small animal models due to significant experimental, technical, and reproducibility challenges. Therefore, the aim of this work was to develop a tool designed specifically for assessing liver stiffness and echogenicity in small animals to better enable longitudinal preclinical studies. A high‐frequency linear array transducer (12‐24 MHz) was integrated into a robotic small animal ultrasound system (Vega; SonoVol, Inc., Durham, NC) to perform liver stiffness and echogenicity measurements in three dimensions. The instrument was validated with tissue‐mimicking phantoms and a mouse model of nonalcoholic steatohepatitis. Female C57BL/6J mice (n = 40) were placed on choline‐deficient, L‐amino acid‐defined, high‐fat diet and imaged longitudinally for 15 weeks. A subset was sacrificed after each imaging timepoint (n = 5) for histological validation, and analyses of receiver operating characteristic (ROC) curves were performed. Results demonstrated that robotic measurements of echogenicity and stiffness were most strongly correlated with macrovesicular steatosis (R2 = 0.891) and fibrosis (R2 = 0.839), respectively. For diagnostic classification of fibrosis (Ishak score), areas under the ROC curves (AUROCs) were 0.969 for ≥Ishak1, 0.984 for ≥Ishak2, 0.980 for ≥Ishak3, and 0.969 for ≥Ishak4. For classification of macrovesicular steatosis (S‐score), AUROCs were 1.00 for ≥S2 and 0.997 for ≥S3. Average scanning and analysis time was <5 minutes/liver. Conclusion: Robotic SWE in small animals is feasible and sensitive to small changes in liver disease state, facilitating in vivo staging of rodent liver disease with minimal sonographic expertise.
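The AUROC figures above summarize how well a continuous measurement separates two diagnostic classes. A minimal sketch of computing AUROC from scores and binary labels, using its rank-statistic (Mann-Whitney) interpretation — toy numbers, not the study's data:

```python
# Minimal AUROC via its probabilistic interpretation: the chance that a
# randomly chosen positive case scores higher than a randomly chosen
# negative one (ties count half). Toy data, not the study's measurements.

def auroc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Hypothetical stiffness scores and fibrosis labels (1 = fibrotic).
scores = [2.1, 2.4, 3.0, 3.5, 4.2, 4.8]
labels = [0, 0, 1, 0, 1, 1]
print(auroc(scores, labels))   # 8/9, approximately 0.889
```

An AUROC of 1.0, as reported for ≥S2 steatosis, means every positive case scored above every negative one.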

Dženan Zukić, Anne Haley, C. Lisle, James Klo, K. Pohl, Hans J. Johnson, Aashish Chaudhary

We present an open-source web tool for quality control of distributed imaging studies. To minimize the amount of human time and attention spent reviewing the images, we created a neural network to provide an automatic assessment. This steers reviewers’ attention to potentially problematic cases, reducing the likelihood of missing image quality issues. We test our approach using 5-fold cross validation on a set of 5217 magnetic resonance images.
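A 5-fold protocol like the one used here can be sketched as plain-Python index splitting (illustrative only, not the authors' code):

```python
# 5-fold cross-validation index splitting: every sample appears in the
# held-out fold exactly once. Sketch only; not the paper's implementation.

def k_fold_indices(n_samples, k=5):
    """Yield (train_indices, test_indices) for each of k folds."""
    folds = [list(range(i, n_samples, k)) for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

splits = list(k_fold_indices(10, k=5))
print(len(splits))       # 5 folds
print(splits[0][1])      # held-out indices of fold 0: [0, 5]
```

Training five models this way, each evaluated on its held-out fifth, gives an assessment of the quality-control network on all 5217 images without ever testing on training data.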

A. Butskova, Rain Juhl, Dženan Zukić, Aashish Chaudhary, K. Pohl, Qingyu Zhao

Jared Vicory, David Allemang, Dženan Zukić, Jack Prothero, M. McCormick, B. Paniagua

Shape analysis is an important and powerful tool in a wide variety of medical applications. Many shape analysis techniques require shape representations which are in correspondence. Unfortunately, popular techniques for generating shape representations do not handle objects with complex geometry or topology well, and those that do are not typically readily available for non-expert users. We describe a method for generating correspondences across a population of objects using a given template. We also describe its implementation and distribution via SlicerSALT, an open-source platform for making powerful shape analysis techniques more widely available and usable. Finally, we show results of this implementation on mouse femur data.

Ruoqiao Zhang, Dženan Zukić, Darrin W. Byrd, A. Enquobahrie, A. Alessio, K. Cleary, F. Banovac, Paul Kinahan

Tomasz J. Czernuszewicz, V. Papadopoulou, J. Rojas, Rajalekha M Rajamahendiran, J. Perdomo, James Butler, Max Harlacher, Graeme O'Connell et al.

Background: Preclinical ultrasound (US) and contrast-enhanced ultrasound (CEUS) imaging have long been used in oncology to noninvasively measure tumor volume and vascularity. While the value of preclinical US has been repeatedly demonstrated, these modalities are not without several key limitations that make them unattractive to cancer researchers, including: high user-variability, low throughput, and limited imaging field-of-view (FOV). Herein, we present a novel robotic preclinical US/CEUS system that addresses these limitations and demonstrate its use in evaluating tumors in 3D in a rodent model. Methods: The imaging system was designed to allow seamless whole-body 3D imaging, which requires rodents to be imaged without physical contact between the US transducer and the animal. To achieve this, a custom dual-element transducer was mounted on a robotic carriage, submerged in a hydrocarbon fluid, and the reservoir sealed with an acoustically transmissive top platform. Eight NOD/scid/gamma (NSG) female mice were injected subcutaneously in the flank with 8×10⁹ 786-O human clear-cell renal cell carcinoma (ccRCC) cells. Weekly imaging commenced after tumors reached a size of 150 mm³ and continued until tumors reached a maximum size of 1 cm³ (∼4-5 weeks). An additional six nude athymic female mice were injected subcutaneously in the flank with 7 × 10⁵ SVR angiosarcoma cells to perform an inter-operator variability study. Imaging consisted of 3D B-mode (conventional ultrasound) of the whole abdomen. Results: Wide-field US images reconstructed from 3D volumetric data showed superior FOV over conventional US. Several anatomical landmarks could be identified within each image surrounding the tumor, including the liver, small intestines, bladder, and inguinal lymph nodes. Tumor boundaries were clearly delineated in both B-mode and BVD images, with BVD images showing heterogeneous microvessel density at later timepoints suggesting tumor necrosis.
Excellent agreement was measured for both inter-reader and inter-operator experiments, with alpha coefficients of 0.914 (95% CI: 0.824-0.948) and 0.959 (0.911-0.981), respectively. Conclusion: We have demonstrated a novel preclinical US imaging system that can accurately and consistently evaluate tumors in rodent models. The system leverages cost-effective robotic technology, and a new scanning paradigm that allows for easy and reproducible data acquisition to enable wide-field, 3D, multi-parametric ultrasound imaging. Note: This abstract was not presented at the meeting. Citation Format: Tomasz Czernuszewicz, Virginie Papadopoulou, Juan D. Rojas, Rajalekha Rajamahendiran, Jonathan Perdomo, James Butler, Max Harlacher, Graeme O'Connell, Dzenan Zukic, Paul A. Dayton, Stephen Aylward, Ryan C. Gessner. A preclinical ultrasound platform for widefield 3D imaging of rodent tumors [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2019; 2019 Mar 29-Apr 3; Atlanta, GA. Philadelphia (PA): AACR; Cancer Res 2019;79(13 Suppl):Abstract nr 1955.

Dženan Zukić, Darrin W. Byrd, Paul Kinahan, A. Enquobahrie

Multicenter clinical trials that use positron emission tomography (PET) imaging frequently rely on stable bias in imaging biomarkers to assess drug effectiveness. Many well-documented factors cause variability in PET intensity values. Two of the largest scanner-dependent errors are scanner calibration and reconstructed image resolution variations. For clinical trials, an increase in measurement error significantly increases the number of patient scans needed. We aim to provide a robust quality assurance system using portable PET/computed tomography “pocket” phantoms and automated image analysis algorithms with the goal of reducing PET measurement variability. A set of the “pocket” phantoms was scanned with patients, affixed to the underside of a patient bed. Our software analyzed the obtained images and estimated the image parameters. The analysis consisted of two steps: automated phantom detection, and estimation of PET image resolution and global bias. Performance of the algorithm was tested under variations in image bias, resolution, noise, and errors in the expected sphere size. A web-based application was implemented to deploy the image analysis pipeline in a cloud-based infrastructure to support multicenter data acquisition, under a Software-as-a-Service (SaaS) model. The automated detection algorithm localized the phantom reliably. Simulation results showed stable behavior when image properties and input parameters were varied. The PET “pocket” phantom has the potential to reduce and/or check for standardized uptake value measurement errors.
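Of the two estimated quantities, global bias is the simpler to illustrate: the ratio of measured to known activity in a phantom region. A toy sketch with hypothetical numbers, not the deployed pipeline:

```python
# Toy global-bias estimate for a PET phantom region: mean measured uptake
# divided by the known true activity concentration. Illustrative only;
# the actual pipeline also estimates resolution and detects the phantom.

def global_bias(measured_voxels, true_activity):
    """Return multiplicative bias; 1.0 means perfectly calibrated."""
    mean_measured = sum(measured_voxels) / len(measured_voxels)
    return mean_measured / true_activity

# Hypothetical uptake values (kBq/mL) inside a phantom sphere whose true
# activity concentration is 10.0 kBq/mL.
voxels = [10.4, 10.6, 10.2, 10.8]
print(global_bias(voxels, 10.0))   # approximately 1.05: reads ~5% high
```

A bias drifting away from 1.0 across sites or timepoints is exactly the kind of calibration error that inflates standardized uptake value variability in a multicenter trial.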

Tomasz J. Czernuszewicz, V. Papadopoulou, J. Rojas, Rajalekha M Rajamahendiran, J. Perdomo, James Butler, Max Harlacher, Graeme O'Connell et al.

Noninvasive in vivo imaging technologies enable researchers and clinicians to detect the presence of disease and longitudinally study its progression. By revealing anatomical, functional, or molecular changes, imaging tools can provide a near real-time assessment of important biological events. At the preclinical research level, imaging plays an important role by allowing disease mechanisms and potential therapies to be evaluated noninvasively. Because functional and molecular changes often precede gross anatomical changes, there has been a significant amount of research exploring the ability of different imaging modalities to track these aspects of various diseases. Herein, we present a novel robotic preclinical contrast-enhanced ultrasound system and demonstrate its use in evaluating tumors in a rodent model. By leveraging recent advances in ultrasound, this system favorably compares with other modalities, as it can perform anatomical, functional, and molecular imaging and is cost-effective, portable, and high throughput, without using ionizing radiation. Furthermore, this system circumvents many of the limitations of conventional preclinical ultrasound systems, including a limited field-of-view, low throughput, and large user variability.

Paul Yushkevich, Artem Pashchinskiy, I. Oguz, S. Mohan, J. Schmitt, J. Stein, Dženan Zukić, Jared Vicory et al.

Dženan Zukić, Jared Vicory, Matthew Mccormick, L. Wisse, G. Gerig, Paul Yushkevich, S. Aylward

This document describes a new class, itk::MorphologicalContourInterpolator, which implements a method proposed by Albu et al. in 2008. Interpolation is done by first determining correspondence between shapes on adjacent segmented slices by detecting overlaps, then aligning the corresponding shapes, generating a transition sequence of one-pixel dilations, and taking the median as the result. Recursion is employed if the original segmented slices are separated by more than one empty slice. This class is n-dimensional and supports inputs of 3 or more dimensions. 'Slices' are (n-1)-dimensional, and can be both automatically detected and manually set. The class is efficient in both memory use and execution time. It requires little memory in addition to the allocation of input and output images. The implementation is multi-threaded, and processing one of the test inputs takes around 1-2 seconds on a quad-core processor. The class is tested to operate on both itk::Image and itk::RLEImage. Since all the processing is done on extracted slices, usage of itk::RLEImage for input and/or output affects performance to a limited degree. This class is implemented to ease manual segmentation in ITK-SNAP (www.itksnap.org). The class, along with test data and automated regression tests, is packaged as an ITK remote module: https://github.com/KitwareMedical/ITKMorphologicalContourInterpolation
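The one-pixel dilation primitive at the heart of the transition sequence can be sketched on a sparse set-of-pixels representation (simplified, 4-connected; not the ITK implementation):

```python
# One-pixel (4-connected) binary dilation on a sparse set-of-pixels
# representation. Simplified sketch of the primitive the interpolator
# applies repeatedly; not the ITK implementation itself.

def dilate(pixels):
    """Grow a set of (row, col) pixels by one pixel in 4-connectivity."""
    grown = set(pixels)
    for r, c in pixels:
        grown.update({(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)})
    return grown

shape = {(0, 0)}
print(sorted(dilate(shape)))   # a 5-pixel cross centered on the origin
```

As the abstract describes, applying such dilations repeatedly to corresponding, aligned shapes produces the transition sequence from which the median member is taken as the interpolated slice.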
