Dženan Zukić

Jianning Li, Antonio Pepe, C. Gsaxner, Gijs Luijten, Yuan Jin, Narmada Ambigapathy, Enrico Nasca, Naida Solak, G. Melito et al.

Prior to the deep learning era, shape was commonly used to describe objects. Nowadays, state-of-the-art (SOTA) algorithms in medical imaging predominantly diverge from computer vision, where voxel grids, meshes, point clouds, and implicit surface models are used. This is evident from numerous shape-related publications in premier vision conferences as well as the growing popularity of ShapeNet (about 51,300 models) and Princeton ModelNet (127,915 models). For the medical domain, we present a large collection of anatomical shapes (e.g., bones, organs, vessels) and 3D models of surgical instruments, called MedShapeNet, created to facilitate the translation of data-driven vision algorithms to medical applications and to adapt SOTA vision algorithms to medical problems. As a unique feature, we model the majority of shapes directly on the imaging data of real patients. As of today, MedShapeNet includes 23 datasets with more than 100,000 shapes that are paired with annotations (ground truth). Our data is freely accessible via a web interface and a Python application programming interface (API) and can be used for discriminative, reconstructive, and variational benchmarks as well as various applications in virtual, augmented, or mixed reality, and 3D printing. As examples, we present use cases in the fields of classification of brain tumors, facial and skull reconstructions, multi-class anatomy completion, education, and 3D printing. In the future, we will extend the data and improve the interfaces. The project pages are: https://medshapenet.ikim.nrw/ and https://github.com/Jianningli/medshapenet-feedback
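
Once downloaded, the shapes are ordinary surface meshes and can be inspected with standard geometry tooling. The following is a minimal sketch using the trimesh library on a locally saved file; the file name is a placeholder, and the snippet does not show the actual MedShapeNet Python API, whose interface is not described in this abstract.

```python
# Minimal sketch: inspecting a shape downloaded from MedShapeNet.
# "skull_0001.stl" is a placeholder file name; the actual MedShapeNet
# Python API is not shown here and its interface may differ.
import trimesh

mesh = trimesh.load("skull_0001.stl")          # triangle mesh (vertices + faces)
print(mesh.vertices.shape, mesh.faces.shape)   # (N, 3) vertices, (M, 3) faces
print("watertight:", mesh.is_watertight)       # closed surfaces allow volume computation
if mesh.is_watertight:
    print("volume:", mesh.volume)              # in the mesh's native units
points = mesh.sample(2048)                     # uniform surface samples -> point cloud
```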

K. Ntatsis, Niels Dekker, Viktor van der Valk, Tom Birdsong, Dženan Zukić, S. Klein, M. Staring, Matthew Mccormick

Image registration plays a vital role in understanding changes that occur in 2D and 3D scientific imaging datasets. Registration involves finding a spatial transformation that aligns one image to another by optimizing relevant image similarity metrics. In this paper, we introduce itk-elastix, a user-friendly Python wrapping of the mature elastix registration toolbox. The open-source tool supports rigid, affine, and B-spline deformable registration, making it versatile for various imaging datasets. By utilizing the modular design of itk-elastix, users can efficiently configure and compare different registration methods, and embed these in image analysis workflows.
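
As a rough illustration of the workflow described above, the sketch below follows the documented itk-elastix usage pattern: default parameter maps are collected in a ParameterObject and passed to a single registration call. The input file names are placeholders.

```python
# Minimal sketch of pairwise registration with itk-elastix
# (PyPI package "itk-elastix"); file names are placeholders.
import itk

fixed = itk.imread("fixed.nii.gz", itk.F)
moving = itk.imread("moving.nii.gz", itk.F)

# Configure a rigid stage followed by a B-spline deformable stage.
params = itk.ParameterObject.New()
params.AddParameterMap(params.GetDefaultParameterMap("rigid"))
params.AddParameterMap(params.GetDefaultParameterMap("bspline"))

registered, transform_params = itk.elastix_registration_method(
    fixed, moving, parameter_object=params, log_to_console=False)

itk.imwrite(registered, "moving_registered.nii.gz")
```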

Tomasz J. Czernuszewicz, Adam M. Aji, Christopher J. Moore, S. Montgomery, Brian Velasco, Gabriela Torres, Keerthi S. Anand, Kennita A. Johnson, A. Deal et al.

Shear wave elastography (SWE) is an ultrasound-based stiffness quantification technology that is used for noninvasive liver fibrosis assessment. However, despite wide-scale clinical adoption, SWE is largely unused by preclinical researchers and drug developers for studies of liver disease progression in small animal models due to significant experimental, technical, and reproducibility challenges. Therefore, the aim of this work was to develop a tool designed specifically for assessing liver stiffness and echogenicity in small animals to better enable longitudinal preclinical studies. A high-frequency linear array transducer (12-24 MHz) was integrated into a robotic small animal ultrasound system (Vega; SonoVol, Inc., Durham, NC) to perform liver stiffness and echogenicity measurements in three dimensions. The instrument was validated with tissue-mimicking phantoms and a mouse model of nonalcoholic steatohepatitis. Female C57BL/6J mice (n = 40) were placed on a choline-deficient, L-amino acid-defined, high-fat diet and imaged longitudinally for 15 weeks. A subset was sacrificed after each imaging timepoint (n = 5) for histological validation, and analyses of receiver operating characteristic (ROC) curves were performed. Results demonstrated that robotic measurements of echogenicity and stiffness were most strongly correlated with macrovesicular steatosis (R² = 0.891) and fibrosis (R² = 0.839), respectively. For diagnostic classification of fibrosis (Ishak score), areas under the ROC curve (AUROC) were 0.969 for ≥Ishak1, 0.984 for ≥Ishak2, 0.980 for ≥Ishak3, and 0.969 for ≥Ishak4. For classification of macrovesicular steatosis (S-score), AUROCs were 1.00 for ≥S2 and 0.997 for ≥S3. Average scanning and analysis time was <5 minutes/liver. Conclusion: Robotic SWE in small animals is feasible and sensitive to small changes in liver disease state, facilitating in vivo staging of rodent liver disease with minimal sonographic expertise.
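
The diagnostic-classification results above reduce to computing an AUROC for each binary staging threshold. A minimal sketch of that calculation with scikit-learn is shown below; the stiffness values and Ishak scores are illustrative placeholders, not the study's data.

```python
# Minimal sketch: per-threshold AUROC from stiffness readouts vs. histology scores.
# The arrays below are illustrative placeholders, not the study's data.
import numpy as np
from sklearn.metrics import roc_auc_score

stiffness_kpa = np.array([2.1, 2.8, 3.5, 4.2, 5.0, 6.3, 7.1, 8.4])  # SWE measurements
ishak_score   = np.array([0,   1,   1,   2,   3,   3,   4,   5  ])  # histology grades

for threshold in (1, 2, 3, 4):
    labels = (ishak_score >= threshold).astype(int)   # e.g. ">= Ishak 2" vs. below
    auroc = roc_auc_score(labels, stiffness_kpa)
    print(f">= Ishak {threshold}: AUROC = {auroc:.3f}")
```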

Dženan Zukić, Anne Haley, C. Lisle, James Klo, K. Pohl, Hans J. Johnson, Aashish Chaudhary

We present an open-source web tool for quality control of distributed imaging studies. To minimize the amount of human time and attention spent reviewing the images, we created a neural network to provide an automatic assessment. This steers reviewers’ attention to potentially problematic cases, reducing the likelihood of missing image quality issues. We test our approach using 5-fold cross validation on a set of 5217 magnetic resonance images.
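
For readers unfamiliar with the evaluation protocol, the sketch below illustrates 5-fold cross-validation with scikit-learn; the random features and the logistic-regression stand-in are placeholders for the paper's neural network and MR images.

```python
# Minimal sketch of 5-fold cross-validation for a pass/fail image QC model.
# The features, labels, and classifier are illustrative placeholders;
# the paper evaluates a neural network on 5217 MR images.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(5217, 16))          # placeholder per-image feature vectors
y = rng.integers(0, 2, size=5217)        # placeholder pass/fail labels

scores = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores.append(accuracy_score(y[test_idx], model.predict(X[test_idx])))
print("mean accuracy over 5 folds:", np.mean(scores))
```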

A. Butskova, Rain Juhl, Dženan Zukić, Aashish Chaudhary, K. Pohl, Qingyu Zhao

Jared Vicory, David Allemang, Dženan Zukić, Jack Prothero, M. McCormick, B. Paniagua

Shape analysis is an important and powerful tool in a wide variety of medical applications. Many shape analysis techniques require shape representations that are in correspondence. Unfortunately, popular techniques for generating shape representations do not handle objects with complex geometry or topology well, and those that do are typically not readily available to non-expert users. We describe a method for generating correspondences across a population of objects using a given template. We also describe its implementation and distribution via SlicerSALT, an open-source platform for making powerful shape analysis techniques more widely available and usable. Finally, we show results of this implementation on mouse femur data.
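
To make the role of correspondence concrete, the sketch below compares two point sets that share the same landmark ordering using SciPy's Procrustes alignment. This is only an illustration of what correspondence enables, not the template-based method or its SlicerSALT implementation, and the data are invented.

```python
# Minimal sketch: comparing two shapes that are already in point-to-point
# correspondence (same number and ordering of landmarks). Illustrative data;
# this is not the SlicerSALT template-based correspondence algorithm.
import numpy as np
from scipy.spatial import procrustes

rng = np.random.default_rng(1)
shape_a = rng.normal(size=(100, 3))                          # 100 corresponding 3D points
shape_b = shape_a + rng.normal(scale=0.05, size=(100, 3))    # slightly deformed copy

# Procrustes removes translation, scale, and rotation before comparing.
aligned_a, aligned_b, disparity = procrustes(shape_a, shape_b)
print("Procrustes disparity (sum of squared differences):", disparity)
```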

Ruoqiao Zhang, Dženan Zukić, Darrin W. Byrd, A. Enquobahrie, A. Alessio, K. Cleary, F. Banovac, Paul Kinahan

Tomasz J. Czernuszewicz, V. Papadopoulou, J. Rojas, Rajalekha M Rajamahendiran, J. Perdomo, James Butler, Max Harlacher, Graeme O'Connell, Dženan Zukić et al.

Background: Preclinical ultrasound (US) and contrast-enhanced ultrasound (CEUS) imaging have long been used in oncology to noninvasively measure tumor volume and vascularity. While the value of preclinical US has been repeatedly demonstrated, these modalities are not without several key limitations that make them unattractive to cancer researchers, including high user variability, low throughput, and limited imaging field-of-view (FOV). Herein, we present a novel robotic preclinical US/CEUS system that addresses these limitations and demonstrate its use in evaluating tumors in 3D in a rodent model. Methods: The imaging system was designed to allow seamless whole-body 3D imaging, which requires rodents to be imaged without physical contact between the US transducer and the animal. To achieve this, a custom dual-element transducer was mounted on a robotic carriage, submerged in a hydrocarbon fluid, and the reservoir sealed with an acoustically transmissive top platform. Eight NOD/scid/gamma (NSG) female mice were injected subcutaneously in the flank with 8 × 10⁹ 786-O human clear-cell renal cell carcinoma (ccRCC) cells. Weekly imaging commenced after tumors reached a size of 150 mm³ and continued until tumors reached a maximum size of 1 cm³ (∼4-5 weeks). An additional six nude athymic female mice were injected subcutaneously in the flank with 7 × 10⁵ SVR angiosarcoma cells to perform an inter-operator variability study. Imaging consisted of 3D B-mode (conventional ultrasound) of the whole abdomen (…). Results: Wide-field US images reconstructed from 3D volumetric data showed superior FOV over conventional US. Several anatomical landmarks could be identified within each image surrounding the tumor, including the liver, small intestines, bladder, and inguinal lymph nodes. Tumor boundaries were clearly delineated in both B-mode and BVD images, with BVD images showing heterogeneous microvessel density at later timepoints suggesting tumor necrosis. Excellent agreement was measured for both inter-reader and inter-operator experiments, with alpha coefficients of 0.914 (95% CI: 0.824-0.948) and 0.959 (0.911-0.981), respectively. Conclusion: We have demonstrated a novel preclinical US imaging system that can accurately and consistently evaluate tumors in rodent models. The system leverages cost-effective robotic technology and a new scanning paradigm that allows for easy and reproducible data acquisition to enable wide-field, 3D, multi-parametric ultrasound imaging. Note: This abstract was not presented at the meeting. Citation Format: Tomasz Czernuszewicz, Virginie Papadopoulou, Juan D. Rojas, Rajalekha Rajamahendiran, Jonathan Perdomo, James Butler, Max Harlacher, Graeme O'Connell, Dzenan Zukic, Paul A. Dayton, Stephen Aylward, Ryan C. Gessner. A preclinical ultrasound platform for widefield 3D imaging of rodent tumors [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2019; 2019 Mar 29-Apr 3; Atlanta, GA. Philadelphia (PA): AACR; Cancer Res 2019;79(13 Suppl):Abstract nr 1955.
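
As a small illustration of the volumetric measurements such a system produces, the sketch below computes tumor volume from a 3D binary segmentation mask; the mask and voxel spacing are invented placeholders, unrelated to the study's data or software.

```python
# Minimal sketch: tumor volume from a 3D binary segmentation mask.
# The mask and voxel spacing are illustrative placeholders, not study data.
import numpy as np

mask = np.zeros((200, 200, 300), dtype=bool)    # placeholder 3D segmentation volume
mask[80:120, 90:130, 140:200] = True            # fake segmented tumor region

voxel_spacing_mm = (0.05, 0.05, 0.05)           # e.g. 50-micrometer isotropic voxels
voxel_volume_mm3 = float(np.prod(voxel_spacing_mm))
tumor_volume_mm3 = mask.sum() * voxel_volume_mm3
print(f"tumor volume: {tumor_volume_mm3:.1f} mm^3")
```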

...
...
...
