CaReRa: Content Based Case Retrieval in Radiology

Based on the fact that clinical experience plays a key role in the performance of medical professionals, it is conjectured that a Clinical Experience Sharing (CES) platform, i.e. a searchable collective knowledge-base of clinical experience accessible to a large community of medical professionals, would be of great practical value in clinical practice as well as in medical education. Such a CES would be composed of a multi-modal medical case database, would incorporate a Content Based Case Retrieval (CBCR) engine and would be specialized for different domains. Project CaReRa aims at developing such a CES for the domain of liver cases. During the course of the project, multi-modal case data will be collected, anonymized and stored in a structured database; CBCR technologies will be developed; and experiments assessing the platform's impact on clinical workflow and medical education will be designed and conducted.

Two typical use cases are:

  1. Given a difficult case to be diagnosed, the medical expert may retrieve similar past cases and review their data (image and non-image), their diagnoses and their follow-up information for comparative decision making. This is a way of sharing medical experience within a community, which would improve individual performance.
  2. A medical student or a resident doctor may retrieve seemingly similar yet actually different cases and thus study the differences between them, which is critical in diagnostic decision making, where subtle differences may be of great importance.

Download the infosheet...

The ONLIRA and LICO ontologies are available in OWL format under "Downloads" below. The file extension should be changed to "zip" after download.

Related Publications

CaReRa-WEB (CRR-Web)
Try our CRR-Web (if you experience connection problems, please send an e-mail to acarbu@boun.edu.tr) to upload, browse, search and review cases. Log in to the "Demo" version using "demo / demo" as the account name and password. IMPORTANT: We suggest you use Firefox. On your first attempt, your browser may warn you that "Your connection is not secure"; in that case, please proceed via the "Advanced" options and "Add Exception".
Once you log in with the demo account, there are two options: Browser and Data Upload. Browser allows you to browse (view) submitted patients; currently, you can perform conventional keyword search. The search engine being developed in the CaReRa project, which uses a case as a query, will be accessible from this page. Data Upload allows you to upload a new patient to CaReRaWeb. The "Demo" user has limited access and authorization, for demonstration of the system.
 

Data Upload: On the right-hand side, a list of required fields is displayed. Whenever a required field is filled with data, a green check mark is shown (otherwise a red cross mark). Once the radiologist has provided information for all required fields, the patient can be submitted; otherwise, the patient information is saved and can be completed later.

Patient Tab: In this tab, patient information (demographics, regular drugs, past and chronic diseases, surgeries and genetic diseases) is provided. The patient's past and chronic diseases are listed. Note that diseases are entered using ICD-10 codes; other diseases are listed as free text.

Study Tab: In this tab, patient complaints, physical examination results, prediagnosis, currently used non-regular drugs, diagnosis and laboratory results are provided. Physical examination results are listed; in the example shown, the radiologist observes that the patient has "Jaundice" and "Ascites".

Series Tab: In this tab, imaging observations of the liver are described. All imaging observation fields and values are automatically retrieved from ONLIRA. In the example shown, the radiologist reports that the liver position in the body is normal and the liver contour is regular.
 
Pathology Tab: In this tab, imaging observations of lesions are described. A DICOM viewer is integrated into CaReRaWeb: the DICOM image is visualized, and the radiologist marks a rectangular area where the lesion is observed. Then, imaging observations of the marked lesion are specified. All imaging observation fields and values are automatically retrieved from ONLIRA. In the example shown, the marked rectangular area contains multiple lesions located in Segment 8 of the right lobe of the liver; the largest lesion in this area measures 21x20 mm and is not adjacent to the gallbladder.
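
For readers who want to reproduce the Pathology tab workflow outside CaReRaWeb, the sketch below shows one way to load a CT slice and crop a marked rectangular lesion area. The file name, the ROI coordinates and the use of pydicom are illustrative assumptions, not part of the CaReRaWeb implementation.

import numpy as np
import pydicom

# Load one slice of the series and convert raw values to Hounsfield units.
ds = pydicom.dcmread("liver_ct_slice.dcm")            # hypothetical file name
hu = ds.pixel_array * float(ds.RescaleSlope) + float(ds.RescaleIntercept)

# Rectangle marked in the viewer (row/column pixel indices, illustrative values).
r0, r1, c0, c1 = 210, 260, 180, 230
lesion_roi = hu[r0:r1, c0:c1]

# Approximate physical size of the marked area from the PixelSpacing tag.
row_mm, col_mm = (float(v) for v in ds.PixelSpacing)
print("ROI size: %.1f x %.1f mm" % ((r1 - r0) * row_mm, (c1 - c0) * col_mm))
print("Mean density in ROI: %.1f HU" % lesion_roi.mean())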
Browser: The radiologist can search all submitted patients by providing keywords. Another option is to list all submitted patients.

Links

Related Projects:

 

Workshops:

  • The ImageCLEF 2014 liver CT annotation task was organized by the CaReRa project
Core Collaborators

Computer Aided Medical Image Annotation
Neda B. Marvasti, et al.
A radiologist-in-the-loop semi-automatic CMIA system is proposed. It is based on a Bayesian tree-structured model linked to RadLex. Experiments with liver lesions in computed tomography (CT) images show that, on average, 7.50 (out of 29) manual annotations are sufficient for 95% accuracy in liver lesion annotation. The proposed system guides the radiologist to input the most critical information in each iteration and uses a network model to update the full annotation online. The results also suggest that domain-aware models perform better than domain-blind models learned from data.
 
Figure: The domain-aware network model constructed manually by exploiting the domain knowledge
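
The published system is the Bayesian tree-structured model described above; purely as a hypothetical illustration of the radiologist-in-the-loop idea, the sketch below selects the next annotation field to ask for as the one whose current predicted distribution is most uncertain. The field names and probabilities are made up, and this heuristic stands in for (and is not) the paper's network model.

import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Current marginal beliefs over a few hypothetical annotation fields.
marginals = {
    "lesion_margin":  [0.50, 0.50],        # highly uncertain
    "lesion_density": [0.90, 0.05, 0.05],  # nearly decided
    "liver_contour":  [0.70, 0.30],
}

# Ask the radiologist about the most uncertain field first.
next_field = max(marginals, key=lambda f: entropy(marginals[f]))
print("Ask the radiologist about:", next_field)   # -> lesion_margin
# After the answer, the network model would update the remaining marginals
# before the next question is selected.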
Semantic Annotations vs Low-level Image Features
Neda B. Marvasti, et al.

Low-level image features (CoG) have been widely used in content based image retrieval systems, whereas it is accepted that high-level semantic descriptors have a higher potential both in terms of retrieval performance and interpretability. The latter is especially important in medical applications, where MDs need to understand the output of computer systems and reason about it. In this work, we compared low-level image features (CoG) and high-level semantic features (UsE) in radiological image retrieval, specifically on liver lesion CT images. The study was presented at the 1st ACM MM Workshop on Multimedia Indexing and Information Retrieval for Healthcare (ACM MM'13).

Figure: NDCG (Normalized Discounted Cumulative Gain) vs Number of Retrieved Cases/Images, using a linear combination of UsE and CoG (alpha=0 --> UsE only)
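
A minimal sketch of the evaluation set-up behind the figure: similarity scores from UsE and CoG are combined linearly (alpha = 0 reduces to UsE only, as in the caption) and the resulting ranking is scored with NDCG. The similarity values and relevance grades below are placeholders, not data from the study.

import numpy as np

def combined_score(sim_use, sim_cog, alpha):
    """alpha in [0, 1] weights the low-level (CoG) similarity; alpha = 0 is UsE only."""
    return (1.0 - alpha) * np.asarray(sim_use) + alpha * np.asarray(sim_cog)

def ndcg_at_k(relevances, k):
    """Normalized Discounted Cumulative Gain of one ranked list of relevance grades."""
    rel = np.asarray(relevances, dtype=float)[:k]
    discounts = 1.0 / np.log2(np.arange(2, rel.size + 2))
    dcg = float((rel * discounts).sum())
    ideal = np.sort(np.asarray(relevances, dtype=float))[::-1][:k]
    idcg = float((ideal * discounts[:ideal.size]).sum())
    return dcg / idcg if idcg > 0 else 0.0

# Toy example: rank five database cases against a query and score the ranking.
sim_use = np.array([0.9, 0.4, 0.7, 0.2, 0.6])     # placeholder UsE similarities
sim_cog = np.array([0.3, 0.8, 0.5, 0.1, 0.9])     # placeholder CoG similarities
order = np.argsort(-combined_score(sim_use, sim_cog, alpha=0.3))
relevance = np.array([3, 0, 2, 0, 1])             # hypothetical relevance grades
print("NDCG@5 =", round(ndcg_at_k(relevance[order], 5), 3))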

LIVERworks: Liver CRR Query Platform
Neda B. Marvasti, et al.

LIVERworks is a desktop application for building a CRR query from a given case. Its main functionalities include CT preprocessing (liver, vessel and lesion segmentation), image feature (CoG) extraction, semantic feature prediction/annotation (UsE) and querying the CRR-Db via CRR-Web. The current in-house application targets medical professionals and research groups.

ONLIRA: Ontology of Liver for Radiology
Nadin Kokciyan, et al.

Radiologists inspect CT scans and record their observations for purposes of communication and further use. A description language with clear semantics is essential for consistent interpretation by medical professionals as well as for automated tasks. RadLex is a large lexicon that extends SNOMED CT and DICOM towards this purpose. While the vocabulary is extensive, RadLex has not yet specified some of the semantic relations. ONLIRA (Ontology of the Liver for Radiology) focuses on a semantic specification of imaging observations in CT scans of the liver. ONLIRA extends RadLex with semantic relationships that describe and relate the concepts. Thus, automated processing tasks, such as identifying similar patients, are supported.
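
Assuming the OWL file from the "Downloads" section has been saved locally (the file name below is a guess), a few lines with the owlready2 package are enough to list ONLIRA's concepts and the semantic relations it adds on top of the RadLex vocabulary; any OWL-capable library would do equally well.

from owlready2 import get_ontology

# Load the ontology from a local copy (file name assumed).
onto = get_ontology("file://./ONLIRA.owl").load()

print("Concepts:")
for cls in onto.classes():
    print(" ", cls.name)

print("Semantic relations (object properties):")
for prop in onto.object_properties():
    print(" ", prop.name)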

3D CT Liver Segmentation
Serkan Çimen
An in-house semi-automatic liver segmentation method has been developed through several improvements over existing algorithms following similar approaches. First, an initial conservative segmentation is obtained by adaptive thresholding, using a Gaussian mixture model (GMM) of the voxel distribution in the user-delineated VOI. Then a smooth, non-singular vector field flowing outwards is obtained by solving the Poisson equation; 1D profiles sampled along this field are graded according to their probability of being a true liver boundary normal. Finally, the minCut-maxFlow graph-cut algorithm is applied without using any regional terms.
 
Figure: Edge maps based on 1D profile classification and the resulting segmentation masks. Left: SDF; Right: Poisson equation.
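
The sketch below illustrates only the first step described above, a conservative initial segmentation from a GMM fitted to the voxel values inside the user-delineated VOI. The volume is synthetic, the VOI bounds and the +/- 2 sigma band are illustrative choices, and the Poisson-equation and graph-cut stages are not reproduced.

import numpy as np
from sklearn.mixture import GaussianMixture

ct = np.random.normal(60, 40, size=(64, 128, 128))    # stand-in CT volume (HU)
voi = ct[16:48, 32:96, 32:96]                          # user-delineated VOI

# Fit a 3-component GMM to the VOI intensities (roughly: parenchyma, vessels, rest).
gmm = GaussianMixture(n_components=3, random_state=0).fit(voi.reshape(-1, 1))

# Treat the heaviest component as liver parenchyma and threshold conservatively
# around its mean (here: +/- 2 standard deviations).
k = int(np.argmax(gmm.weights_))
mu = float(gmm.means_[k, 0])
sigma = float(np.sqrt(gmm.covariances_[k, 0, 0]))
initial_mask = np.abs(ct - mu) < 2.0 * sigma
print("Initial conservative liver voxels:", int(initial_mask.sum()))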
3D Vessel & Lesion Segmentation
Neda B. Marvasti

The well-known Frangi maps are applied for vessel segmentation, with automatic global threshold selection for these Hessian-based vesselness maps. Specifically, the threshold is selected by tracking a significant change in the histogram of the segmented voxels' CT values as the threshold is varied: the method is applied to contrast-enhanced liver CT, hence the vessels are expected to be bright, and too low a vesselness threshold would result in an increasing number of segmented voxels with relatively low CT values. The change in histograms is tracked by means of the Chi-squared histogram difference, and a global vesselness threshold is selected. Lesions are then segmented in non-normal-tissue and non-vessel regions by means of graph cuts, where the boundary terms (the n-links) in the graph are set to be sensitive to an estimated difference between the probabilities of being background (normal tissue).

Figure: Liver, vessels and lesion segmentation masks.
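
As a rough illustration of the automatic threshold selection described above (not the project's exact procedure), the sketch below computes a Frangi vesselness map with scikit-image, sweeps the vesselness threshold from strict to lax, and keeps the threshold just before the largest Chi-squared jump in the histogram of the selected voxels' CT values. The synthetic volume, the bin grid and the sweep range are placeholders.

import numpy as np
from skimage.filters import frangi

ct = np.random.normal(80, 30, size=(32, 64, 64))                # stand-in contrast CT
vesselness = frangi(ct, sigmas=(1, 2, 3), black_ridges=False)   # bright vessels

def histogram(values, bins):
    h, _ = np.histogram(values, bins=bins, density=True)
    return h + 1e-12                                             # avoid zero bins

def chi2(h1, h2):
    return float(0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2)))

bins = np.linspace(ct.min(), ct.max(), 33)
thresholds = np.linspace(0.9 * vesselness.max(), 0.1 * vesselness.max(), 20)

prev, jumps = None, []
for t in thresholds:                                 # sweep from strict to lax
    h = histogram(ct[vesselness > t], bins)
    jumps.append(0.0 if prev is None else chi2(prev, h))
    prev = h

# Keep the threshold just before the largest histogram change.
best = thresholds[max(int(np.argmax(jumps)) - 1, 0)]
print("Selected global vesselness threshold:", best)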