Kasorn GALAJIT (third-year doctoral student, Unoki Laboratory, Human Life Design Area) received the Excellent Student Presentation Award at the 5th EMM Technical Meeting of FY2020.



Audio Information Hiding in Sub-signals by deploying Singular Spectrum Analysis and Psychoacoustic Model

Kasorn Galajit (JAIST), Jessada Karnjana (NECTEC), Masashi Unoki (JAIST)

This paper presents an improved scheme for hiding information in audio signals by using singular spectrum analysis (SSA) and a psychoacoustic model. The host signal is decomposed into many sub-signals by SSA, and the sub-signals are grouped, as suggested by the psychoacoustic model, into audible and less-audible groups. The watermark bits are embedded into the less-audible sub-signals by modifying the singular spectra of the matrices that represent those sub-signals according to an embedding rule. In this work, a new method for mapping between frequencies and singular-value indices, based on a recommendation by the psychoacoustic model, is proposed to improve inaudibility. In addition, a frame-selection method that selects the frames to be embedded is included in the improved scheme to increase extraction precision. The scheme's performance is evaluated in terms of the sound quality of the watermarked signal, compared with the host signal, and in terms of robustness. The experimental results show that the proposed scheme significantly improves both the sound quality of the watermarked signal and the robustness: by approximately 34.07 % and 6.09 %, respectively, compared with the previously proposed method.
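As a rough illustration of the two building blocks the abstract describes, the sketch below decomposes a signal frame into rank-1 sub-signals via SSA (trajectory matrix, SVD, diagonal averaging) and shows a quantization-style rule acting on a single singular value. The window length `L`, the quantization step `delta`, and the QIM-style rule itself are illustrative assumptions on my part; the paper's actual embedding rule, psychoacoustic grouping, and frequency-to-index mapping are not spelled out in the abstract.

```python
import numpy as np

def diagonal_average(X):
    """Standard SSA reconstruction: average the anti-diagonals of a
    matrix to obtain a 1-D signal."""
    L, K = X.shape
    y = np.zeros(L + K - 1)
    cnt = np.zeros(L + K - 1)
    for i in range(L):
        y[i:i + K] += X[i]      # entry X[i, j] contributes to sample i + j
        cnt[i:i + K] += 1
    return y / cnt

def ssa_decompose(x, L):
    """Decompose a frame into rank-1 sub-signals: build the L x K
    trajectory matrix, take its SVD, and reconstruct one sub-signal per
    singular triple by diagonal averaging."""
    K = len(x) - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    subs = [diagonal_average(s[k] * np.outer(U[:, k], Vt[k]))
            for k in range(len(s))]
    return subs, s

def qim_embed(sv, bit, delta=0.05):
    """Illustrative embedding rule on one singular value: move it to the
    centre of an even quantization cell for bit 0, an odd cell for bit 1."""
    q = int(np.floor(sv / delta))
    if q % 2 != bit:
        q += 1
    return (q + 0.5) * delta

def qim_extract(sv, delta=0.05):
    """Read the bit back from the parity of the quantization cell."""
    return int(np.floor(sv / delta)) % 2
```

Because diagonal averaging is linear and the SVD terms sum back to the trajectory matrix, the sub-signals returned by `ssa_decompose` sum to the original frame; a scheme like the abstract's would then modify only the singular values assigned to the less-audible group before reconstructing.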

I am greatly honored to receive this award from the Enriched Multimedia (EMM) technical committee. Because the conference was held online, I lost the opportunity to discuss and exchange ideas about my research with other researchers face-to-face. Even so, presenting my work and holding good discussions through online meetings was a rewarding challenge. I am deeply grateful to Professor Masashi Unoki and Dr. Jessada Karnjana for their constant guidance throughout my research. I would also like to thank all the members of the Acoustic Information Science laboratory for their cooperation.