
Combining Text and Visual Features for Multimodal Classification and Retrieval of Biomedical Images

Brown Bag Lecture by Dr. Mahmudur Rahman | 8/9/2011 11AM-12PM | 7th Floor Conference Room, Bldg 38A

Abstract: The search for relevant and actionable information is key to achieving clinical and research goals in biomedicine. In this talk I present a multimodal retrieval system that searches images in full-text biomedical articles using classification and relevance feedback techniques. The retrieval system is developed in Java with the help of several image processing and machine learning libraries and integrates both visual and text features. Text keywords from the associated metadata provide context and are indexed using the vector space model of information retrieval. To represent images at different levels of abstraction, global color, texture, and edge-related features are extracted in addition to local concept- and keypoint-based features. The image and text feature vectors are combined and used to train an SVM classifier at a global level for image modality detection (e.g., CT, MR, X-ray) and image filtering. These features can also be used to expand a multimodal query with related keywords and/or concepts computed from the top retrieved relevant images, based on correlation analysis and user feedback. The retrieval system thus supports cross-modal query expansion and propagates user-perceived semantics between modalities. An evaluation on the imageCLEFmed'10 dataset of 77,000 images and its topics demonstrates improved precision for the multimodal framework compared with using a single modality alone or with using no classification or feedback information.
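
To make the feature-fusion idea in the abstract concrete, below is a minimal, self-contained Java sketch of one common approach: L2-normalizing a text feature vector (e.g., TF-IDF weights from the vector space model) and a visual feature vector (e.g., color/texture/edge descriptors), concatenating them with per-modality weights, and ranking by cosine similarity. The class and method names, modality weights, and toy vector dimensions are illustrative assumptions for this sketch, not details of the actual system described in the talk.

import java.util.Arrays;

public class MultimodalFusionSketch {

    // L2-normalize a feature vector (returns a copy; zero vectors pass through unchanged).
    static double[] l2Normalize(double[] v) {
        double norm = 0.0;
        for (double x : v) norm += x * x;
        norm = Math.sqrt(norm);
        double[] out = Arrays.copyOf(v, v.length);
        if (norm > 0) {
            for (int i = 0; i < out.length; i++) out[i] /= norm;
        }
        return out;
    }

    // Concatenate normalized text and visual vectors, weighting each modality.
    static double[] fuse(double[] textVec, double[] visualVec, double wText, double wVisual) {
        double[] t = l2Normalize(textVec);
        double[] v = l2Normalize(visualVec);
        double[] fused = new double[t.length + v.length];
        for (int i = 0; i < t.length; i++) fused[i] = wText * t[i];
        for (int i = 0; i < v.length; i++) fused[t.length + i] = wVisual * v[i];
        return fused;
    }

    // Cosine similarity between two fused vectors.
    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return (na == 0 || nb == 0) ? 0 : dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    public static void main(String[] args) {
        // Toy query and document: 4-D text (TF-IDF) vectors and 3-D visual vectors (hypothetical values).
        double[] queryText   = {0.0, 1.2, 0.4, 0.0};
        double[] queryVisual = {0.3, 0.7, 0.1};
        double[] docText     = {0.0, 0.9, 0.6, 0.2};
        double[] docVisual   = {0.2, 0.8, 0.0};

        double[] q = fuse(queryText, queryVisual, 0.6, 0.4);
        double[] d = fuse(docText, docVisual, 0.6, 0.4);
        System.out.printf("Multimodal similarity: %.4f%n", cosine(q, d));
    }
}

The same fused vectors could be fed to an SVM for modality classification or re-weighted after relevance feedback; those steps are omitted here to keep the sketch short.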

Bio: Dr. Mahmudur Rahman joined the Communications Engineering Branch of the Lister Hill National Center for Biomedical Communications at the National Library of Medicine (NLM) in November 2008. He received his PhD in Computer Science in May 2008 from Concordia University, Montreal, Canada, with an emphasis on medical informatics and image retrieval. He is currently working with Dr. Sameer Antani and Dr. Dina Demner-Fushman on multiple imaging projects, including Multimodal Retrieval in Context and Concept Feature Space, Machine Learning and Interactive Algorithms for Content-Based Image Retrieval, and the Chest X-ray Screening/TB Image Analysis project. Over the course of his doctoral and postdoctoral research, he has authored a book, published articles in several journals, including IEEE Transactions on Information Technology in Biomedicine, Computerized Medical Imaging and Graphics, and the Journal of Visual Communication and Image Representation, and presented his work at numerous conferences and workshops, including CBMS, ISBI, CIVR, ACM-MIR, SPIE, and CLEF.
