Press Release

Click Away the Bias: New System to Make AI Training Easier and More Accurate

Scientists devise a simple one-click system that can remove the biases commonly found in AI training datasets

Deep neural networks (DNNs) that power AI are typically trained on large datasets, but this can create problems because many datasets contain co-occurrence biases, which make the AI less capable of accurately identifying features. To solve this problem, researchers from Japan and China have created a novel human-in-the-loop system that uses single-click attention annotation. The system makes DNNs more accurate while using less training data and being more time and cost efficient.

In the past few years, "AI" has become a major buzzword in technology. The prospect of a computer being able to perform tasks that only a human could do is a captivating thought indeed! AI can be built using many different methods, but one of the most popular ones right now involves the use of deep neural networks (DNNs). These structures try to mimic the neural connections and functions of the brain and are generally trained on a dataset before they are deployed in the real world. By training them on a dataset beforehand, DNNs can be 'taught' to identify features in an image. For example, a DNN may be taught to identify an image containing a boat by being trained on a dataset of images with boats.

However, the training dataset can cause problems if it is not properly designed. For instance, with respect to the previous example, since images of boats are generally taken when the boat is in water, the DNN may recognize only the water, instead of the boat, and still say that the image contains a boat. This is called a co-occurrence bias, and it is a very common problem encountered while training DNNs. To solve this problem, a team of researchers, including Yi He, a researcher from Japan Advanced Institute of Science and Technology (JAIST), Senior Lecturer Haoran Xie from JAIST, Associate Professor Xi Yang of Jilin University, Project Lecturer Chia-Ming Chang of the University of Tokyo, and Professor Takeo Igarashi of the University of Tokyo, has developed a new human-in-the-loop system. A paper detailing this system has been published in the proceedings of the 28th Annual Conference on Intelligent User Interfaces (ACM IUI 2023). According to Prof. Xie, "There are some existing methods to solve the co-occurrence bias by either reorganizing the dataset or telling the system to focus on specific areas of the image. But reorganizing the dataset can be very difficult, while current methods for marking regions of interest (ROIs) require extensive, pixel-by-pixel annotations by humans hired to do so, which incurs a high cost. Thus, we created a much simpler attention method which helps humans point out ROIs in the image using a simple one-click method. This drastically reduces the time and cost of DNN training, and thus, deployment."
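To give a rough idea of how click-based attention guidance can work in general, the following is a minimal sketch in Python, not the authors' exact implementation: the tensor names, shapes, and loss form are assumptions, and the masks are imagined to be grown around the user's clicks. It shows one plausible way to add a penalty that pushes a model's attention map toward left-clicked regions and away from right-clicked ones.

```python
# Minimal sketch (assumed names and shapes), illustrating the general idea of
# steering a network's attention map with click-derived masks; the method in
# the paper may differ.
import torch

def attention_guidance_loss(attention_map: torch.Tensor,
                            positive_mask: torch.Tensor,
                            negative_mask: torch.Tensor) -> torch.Tensor:
    """attention_map: (B, H, W) map produced by the model, values in [0, 1].
    positive_mask / negative_mask: (B, H, W) binary masks derived from the
    user's left clicks (keep attention) and right clicks (suppress attention)."""
    # Encourage high attention where the user left-clicked...
    pos_loss = ((1.0 - attention_map) * positive_mask).sum() / positive_mask.sum().clamp(min=1)
    # ...and low attention where the user right-clicked.
    neg_loss = (attention_map * negative_mask).sum() / negative_mask.sum().clamp(min=1)
    return pos_loss + neg_loss

# Such a term would typically be added to the usual classification loss with
# some weighting factor, e.g.:
# total_loss = cross_entropy_loss + 0.5 * attention_guidance_loss(att, pos, neg)
```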

The team realized that previous approaches for attention guidance were inefficient because they were not designed to be interactive. They therefore proposed a new interactive method for annotating images with a single click: users simply left-click on the parts of the image that are to be identified and, if need be, right-click on the parts that should be ignored. In the case of the boat images, users would left-click on the boat and right-click on the water around it. This helps the DNN identify the boat better and reduces the effects of the co-occurrence bias inherent to training datasets. To reduce the number of images that need to be annotated, the team also devised a new active learning strategy based on a Gaussian mixture model (GMM), sketched below.
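As an illustration of how a GMM can drive active learning in general, here is a short Python sketch using scikit-learn. It is only one plausible selection rule (picking the images whose cluster assignment is most ambiguous), under assumed function and variable names, and is not the specific strategy described in the paper.

```python
# Illustrative sketch only: one plausible way to use a Gaussian mixture model
# to choose which images to annotate next, not the authors' exact algorithm.
import numpy as np
from sklearn.mixture import GaussianMixture

def select_images_for_annotation(features: np.ndarray, n_components: int = 5,
                                 budget: int = 100) -> np.ndarray:
    """Pick the `budget` most ambiguous images to show to the annotator.

    `features` is an (n_images, n_dims) array of image embeddings (e.g.,
    pooled DNN features); ambiguity is measured by how weakly an image
    commits to its most likely mixture component.
    """
    gmm = GaussianMixture(n_components=n_components, random_state=0)
    gmm.fit(features)

    # Posterior probability of each image under each mixture component.
    posteriors = gmm.predict_proba(features)     # shape: (n_images, n_components)

    # Images whose most likely component is least dominant are the most
    # ambiguous, so they are assumed to be the most informative to annotate.
    ambiguity = 1.0 - posteriors.max(axis=1)
    return np.argsort(ambiguity)[::-1][:budget]  # indices of images to annotate
```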

This new system was tested against existing ones, both numerically and through user surveys. The numerical analyses showed that the new active learning method was more accurate than existing ones, while the user surveys showed that the click-based system reduced the time required to annotate ROIs by 27%, and that 81% of the participants preferred it over other systems.

"Our work can drastically improve the transferability and interpretability of neural networks by increasing their accuracy for real-world applications. When systems make correct and clear decisions, it increases the confidence users have in AI and makes it easier to deploy these systems in the real world. Thus," Prof. Xie concludes, "our work focuses on increasing the trustworthiness of DNN deployments, which can have a major impact on the application and development of AI technologies in society."
The team believes their work could have a strong influence on the tech industry and enable more applications of AI technologies in the near future. In today's rapidly developing world, this is an important contribution!

pr20230404-1.png

Image Title: A Novel Click-based AI Training System
Image Caption: By designing a single-click attention-directing user interface and a specially designed active learning strategy, this system can train DNNs more accurately and efficiently.
Image Credit: Haoran Xie from JAIST
License Type: Original content

Reference

Title of original paper: Efficient Human-in-the-loop System for Guiding DNNs Attention
Authors: Yi He, Xi Yang, Chia-Ming Chang, Haoran Xie, Takeo Igarashi
Journal: Proceedings of the 28th Annual Conference on Intelligent User Interfaces (ACM IUI 2023)
DOI: https://doi.org/10.1145/3581641.3584074
Project Video: https://youtu.be/2MD-z6vXKJ4
Project Page: https://yang-group.github.io/#/ProjectPageIUI2023
Source Codes: https://github.com/ultratykis/Guiding-DNNs-Attention

Funding information

  1. JST Core Research for Evolutionary Science and Technology (CREST) Grant Number JPMJCR17A1 (Led by Prof. Takeo Igarashi)
  2. JAIST Research Grant for the Establishment of an Advanced Research Base (Led by Prof. Haoran Xie)
  3. JAIST Research Grant for Research Center for Cohabitative-AI×Design (Led by Prof. Kazunori Miyata)

Media contact
Senior Lecturer Haoran Xie
Creative Society Design Research Area
Japan Advanced Institute of Science and Technology
1-1 Asahidai, Nomi, Ishikawa, 923-1292, Japan
xie@jaist.ac.jp

Professor Takeo Igarashi
Department of Creative Informatics
Graduate School of Information Science and Technology, The University of Tokyo,
7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
takeo@acm.org

April 4, 2023
