Research group RationAI
The RationAI research group, founded by researchers from the Faculty of Informatics and the Institute of Computer Science at Masaryk University, concentrates on developing cutting-edge AI methods in biomedicine. We aim to create an environment that maximally supports cooperation between domain experts and computer science specialists, with a focus on explaining the behavior of AI methods (explainable AI, XAI). We consider traceable development, training, and validation of AI methods, supported by automated provenance generation and robust visualization systems, an indispensable part of this effort. Furthermore, for specific domains, we develop a trusted environment for validating AI methods using evaluation metrics designed in close cooperation with domain experts.
We actively collaborate with groups from the Masaryk Memorial Cancer Institute (pathology), the MagicWare company (obesitology), the Biobanking and BioMolecular resources Research Infrastructure - European Research Infrastructure Consortium (BBMRI-ERIC; large biomedical datasets), and the Medical University of Graz (pathology, data security). We welcome students of all levels to participate in exciting interdisciplinary research in artificial intelligence.
What do we focus on?
Research directions concerning inter-domain communication:
Development of self-explanatory AI methods.
Development of XAI methods that provide insight into generic AI methods and thus allow domain experts to “communicate” with AI systems.
Development of federated XAI methods.
Advanced visualization methods, including the development of user interfaces usable by specialists in medicine and the natural sciences (MDs, pathologists, biochemists).
Use of explainability to develop novel (X)AI methods by iteratively incorporating feedback from domain experts on explainability results.
Research concerning validation (IVDR):
Domain-specific methods for validating AI systems, including the generation of validation evidence for regulatory purposes such as the In Vitro Diagnostic Regulation (IVDR). These methods will be validated in a trusted environment, including the generation of trusted data and AI method provenance, so that they can be deployed, e.g., for real-world validation in compliance with the IVDR.
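To make the idea of automated provenance generation concrete, below is a minimal sketch of a training-run provenance record, loosely inspired by the W3C PROV entity/activity model. The `record_training_run` helper and all field names are hypothetical illustrations, not part of any RationAI tooling.

```python
# Minimal sketch of an automated provenance record for one training run,
# loosely following the W3C PROV idea of "used" inputs and "generated" outputs.
# The helper and all field names are hypothetical illustrations.
import hashlib
import json
from datetime import datetime, timezone

def record_training_run(dataset_path: str, config: dict, model_bytes: bytes) -> dict:
    """Capture which data and configuration produced which model weights."""
    with open(dataset_path, "rb") as f:
        dataset_hash = hashlib.sha256(f.read()).hexdigest()
    return {
        "activity": "model-training",
        "started_at": datetime.now(timezone.utc).isoformat(),
        "used": {
            "dataset": dataset_path,
            "dataset_sha256": dataset_hash,
            "hyperparameters": config,
        },
        "generated": {"model_sha256": hashlib.sha256(model_bytes).hexdigest()},
    }

# Toy usage with a throwaway file standing in for a dataset.
import tempfile
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"toy dataset")
prov = record_training_run(tmp.name, {"lr": 0.1, "epochs": 5}, b"toy weights")
print(json.dumps(prov, indent=2))
```

A record like this, stored alongside the trained model, lets a validator verify after the fact exactly which data and configuration produced a given set of weights.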
Research concerning data sharing infrastructures and implementation of FAIRness support:
Research into privacy risks related to large-scale multimodal health data sharing for AI research purposes. Concretely, our goal is to develop federated machine learning methods, which allow models to be trained without moving sensitive data out of the institutions that hold it.
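As an illustration of the general idea behind federated learning, here is a minimal sketch of federated averaging (FedAvg) with a toy logistic-regression model. The model, client setup, and data are invented for the example and do not represent any RationAI method; real deployments add secure aggregation, differential privacy, and provenance tracking.

```python
# Minimal sketch of federated averaging (FedAvg): each client trains locally
# on its private data, and only model weights are shared and averaged.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training step: plain logistic-regression SGD."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
        grad = X.T @ (preds - y) / len(y)      # mean gradient
        w -= lr * grad
    return w

def fedavg_round(global_weights, clients):
    """Average locally trained weights, weighted by each client's data size.
    Raw patient data never leaves the client; only weights are shared."""
    sizes = np.array([len(y) for _, y in clients])
    local_weights = [local_update(global_weights, X, y) for X, y in clients]
    return np.average(local_weights, axis=0, weights=sizes)

# Toy usage: three "hospitals", each holding private synthetic data.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 4)), rng.integers(0, 2, 50).astype(float))
           for _ in range(3)]
w = np.zeros(4)
for _ in range(10):
    w = fedavg_round(w, clients)
```

The design point is that the aggregation server only ever sees weight vectors, never the underlying records, which is what makes the approach attractive for privacy-sensitive health data.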
Join our team!
If you are interested in research on artificial intelligence, neural networks, or the processing of biomedical data, contact us.