ReaLearn: Reasoning and Learning
for Trustworthy AI
NGUYEN Laboratory
Lecturer: RACHARAK Teeradaj
AI, Computational Logic, Machine Learning
Knowledge Representation and Reasoning, Explanation, Deep Learning, Hybrid AI, Trustworthy AI
We welcome students who want to build AI systems that humans can trust! Preferred skills include Discrete Mathematics, Linear Algebra, Probability and Statistics, Algorithmic Thinking, and English. Programming experience is also a plus.
Students will be trained to broaden their knowledge of AI, covering both (1) Knowledge Representation and Reasoning (KRR) and (2) Machine Learning (ML), and to apply that knowledge to develop trustworthy AI systems. They will learn to work with diverse inputs, including both structured data (e.g., tabular data and knowledge bases) and unstructured data (e.g., text, images, and videos). They will develop AI systems based on their interests, spanning natural language processing, computer vision, and/or logical reasoning, and will address explainability, fairness, privacy, and transparency issues in those systems. By joining this lab, students can expect to gain broad AI knowledge while acquiring the specific skills they desire.
[Career destinations and positions] Software Industry, AI Industry
Our research laboratory aims to address research questions lying between pure (logic-based) reasoning and pure machine learning for Trustworthy AI, especially Explainable AI (XAI).
Our study covers the two mainstreams of AI: (1) Knowledge Representation and Reasoning (KRR) and (2) Machine Learning (ML). Our end goal is to build AI systems that humans can trust! Indeed, we aim to develop innovative research in Trustworthy AI that connects the great successes of past AI research to the next decade of AI. Our research themes are described below.
Theme 1: KRR and Explanation Formalism
Figure 1 Explanation of Logic-based Argumentation Systems
Since the 1990s, explanation in KRR has been studied to support users for various purposes, such as explaining decisions or debugging knowledge bases. However, most explanations are still difficult for humans to understand and are available only for specific reasoning tasks. Hence, we research human-friendly explanation formalisms and implement novel XAI systems, e.g., the virtual knowledge graph.
Theme 2: Integration of KRR and ML for Trustworthy AI
How can we enhance ML development with explainability, fairness, privacy, and transparency, especially for systems based on deep learning?
Three promising directions of hybrid AI are currently under study.
2.1. Symbolic → Neural: The crux here is an efficient design that improves representation learning using knowledge bases. For instance, one can use a knowledge graph to improve the quality of vector representations (word embeddings).
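As a toy illustration of this direction (not the lab's actual method, and loosely in the spirit of retrofitting word vectors to a semantic lexicon), the sketch below pulls each word's embedding toward its knowledge-graph neighbours while keeping it close to its original vector. All words, vectors, and parameters here are invented for the example.

```python
import numpy as np

def retrofit(embeddings, graph, alpha=1.0, beta=1.0, iters=10):
    """Nudge each word's vector toward its knowledge-graph neighbours,
    while a penalty (alpha) keeps it near the original embedding."""
    new = {w: v.copy() for w, v in embeddings.items()}
    for _ in range(iters):
        for word, neighbours in graph.items():
            nbrs = [n for n in neighbours if n in new]
            if not nbrs:
                continue
            # Weighted average of the original vector and neighbour vectors
            num = alpha * embeddings[word] + beta * sum(new[n] for n in nbrs)
            new[word] = num / (alpha + beta * len(nbrs))
    return new

# Toy data: the graph says "car" and "automobile" are related,
# even though their initial vectors are orthogonal.
emb = {"car": np.array([1.0, 0.0]),
       "automobile": np.array([0.0, 1.0]),
       "banana": np.array([0.0, -1.0])}
graph = {"car": ["automobile"], "automobile": ["car"]}
out = retrofit(emb, graph)
```

After a few iterations, the vectors for "car" and "automobile" move toward each other, so their cosine similarity rises, while "banana" (absent from the graph) is untouched.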
2.2. Neural → Symbolic: In contrast to the above, this direction enhances the reasoning of KR formalisms with ML, for instance, using pre-trained embeddings to enhance logical reasoning.
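A minimal sketch of this direction (an invented example, not the lab's system): a symbolic fact base is consulted first, and when lookup fails, the reasoner falls back on embedding similarity to infer a fact by analogy. The vectors and threshold below are toy values standing in for pre-trained embeddings.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Tiny symbolic knowledge base plus toy "pre-trained" entity vectors.
facts = {("car", "has_wheels")}
vectors = {"car": np.array([0.9, 0.1]),
           "automobile": np.array([0.85, 0.15]),
           "banana": np.array([0.0, 1.0])}

def holds(entity, prop, threshold=0.95):
    """Classic fact lookup first; otherwise infer by embedding similarity
    to some entity for which the fact is symbolically known."""
    if (entity, prop) in facts:
        return True
    return any(cosine(vectors[entity], vectors[e]) >= threshold
               for (e, p) in facts if p == prop)
```

Here `holds("automobile", "has_wheels")` succeeds by analogy to "car", whereas "banana" is too dissimilar to trigger the inference.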
2.3. Neural ↔ Symbolic: Is it possible to develop a new learning paradigm in which the strengths of KRR and ML compensate for each other's weaknesses? How about using ML for representation learning and KRR for explainable prediction, for example?
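One way to picture such a division of labour is the hypothetical pipeline below: a learned component supplies feature scores, and symbolic rules make the final decision together with a human-readable explanation. The keyword scorer merely stands in for a neural encoder; all names and rules are invented for illustration.

```python
# Hypothetical hybrid pipeline: a "learned" scorer provides features,
# and symbolic rules yield the prediction plus the rule that justified it.

def learned_features(text):
    # Stand-in for a neural encoder (assumption): simple keyword counts.
    t = text.lower()
    return {"positive_score": sum(w in t for w in ("good", "great")),
            "negative_score": sum(w in t for w in ("bad", "awful"))}

RULES = [
    ("IF positive_score > negative_score THEN label = positive",
     lambda f: f["positive_score"] > f["negative_score"], "positive"),
    ("IF negative_score > positive_score THEN label = negative",
     lambda f: f["negative_score"] > f["positive_score"], "negative"),
]

def classify(text):
    feats = learned_features(text)
    for rule_text, condition, label in RULES:
        if condition(feats):
            return label, rule_text   # prediction plus its explanation
    return "neutral", "no rule fired"

label, why = classify("a great movie")
```

The returned rule text serves as a transparent explanation of why the label was assigned, which a purely neural classifier would not provide by itself.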
Figure 2 Integration of Pure Reasoning with Pure Learning
- Teeradaj Racharak, Interpretable Decision Tree Ensemble Learning with Abstract Argumentation for Binary Classification, ICONIP 2022
- Teeradaj Racharak, Doing Analogical Reasoning in Dynamic Assumption-based Argumentation Frameworks, ICTAI 2022
- Wei Kun Kong, Teeradaj Racharak, Yiming Cao, Cheng Peng, Minh Le Nguyen, KGWE: A Knowledge-guided Word Embedding Fine-tuning Model, ICTAI 2021
Equipment: CPU/GPU Cluster Machines
To enhance students’ abilities, I conduct:
1. Intensive discussion in one-to-one meetings,
2. Research progress reports and group discussion in lab meetings,
3. Guidance on improving scientific paper writing and presentation skills,
4. Reading groups to enhance research thinking.
In addition, I will support students in broadening their networks with leading experts in their respective fields.