Committee |
Date Time |
Place |
Paper Title / Authors |
Abstract |
Paper # |
EA, ASJ-H, ASJ-MA, ASJ-SP |
2022-07-07 16:50 |
Hokkaido |
|
Extension of a signal safeguarding-based acoustic system measurement method
-- Reducing requirement for signal-dependent and random response measurement -- Hideki Kawahara (Wakayama Univ.), Kohei Yatabe (Tokyo Univ. A.T.) EA2022-21 |
We propose an extension of a signal safeguarding-based acoustic measurement method that enables using any sounds, such a... |
EA2022-21 pp.42-46 |
SP, IPSJ-MUS, IPSJ-SLP |
2022-06-18 15:00 |
Online |
Online |
[Poster Presentation]
Subjective intensity of musical beats: a psychophysical quantification Sotaro Kondoh (UTokyo), Kazuo Okanoya (Teikyo Univ./UTokyo), Ryosuke O. Tachibana (UTokyo) SP2022-20 |
Meter is the core of the musical structure. We perceive meter as one strong beat and several weak beats. The intensity o... |
SP2022-20 pp.88-89 |
HCS, HIP, HI-SIGCOASTER |
2022-05-16 10:20 |
Okinawa |
Okinawa Industry Support Center (Primary: On-site, Secondary: Online) |
Reconsidering the Groove of Music Satoshi Kawase (Kobe Gakuin Univ.) HCS2022-22 HIP2022-22 |
The aim of this study is to show what groove represents during music listening. The results of a large-scale online surv... |
HCS2022-22 HIP2022-22 pp.108-111 |
HCS |
2022-03-11 14:00 |
Online |
Online |
Sound Signal Generation for the Visually Impaired to Recognize the Scenery Using Music
-- Depth Information with LIDAR and the Expression by Pulse Width Modulation -- Naoki Aoyama, Shigeyoshi Nakajima, Tetsuo Tsujioka, Ikuo Oka, Hitoshi Watanabe (Osaka City Univ.) HCS2021-66 |
In this paper, we propose a system that converts depth information with LIDAR into acoustic signals by PWM, and grasps s... |
HCS2021-66 pp.31-36 |
EMM |
2022-03-07 15:00 |
Online |
(Primary: Online, Secondary: On-site) |
[Poster Presentation]
Music pattern for brain wave activity using Generative Adversarial Network Takumi Wada, Midori Nagai, Hyunho Kang (NITTC) EMM2021-99 |
(To be available after the conference date) |
EMM2021-99 pp.40-45 |
SIS |
2022-03-03 14:45 |
Online |
Online |
Few-Shot Music Artist Classification Tianshuai Yu, Yoshimasa Tsuruoka (Tokyo Univ.) SIS2021-36 |
Music artist classification is known as a task in the field of Music Information Retrieval (MIR). Recently, due to the im... |
SIS2021-36 pp.32-37 |
MBE, NC (Joint) |
2022-03-03 15:30 |
Online |
Online |
Investigations of music players and non-music players on the relationship between working memory capacity and musical instrument playing skills Yutaro Otaka, Takamasa Shimada (Tokyo Denki Univ.) MBE2021-98 |
For us, working memory is an important function that is indispensable in our daily lives. Previous research has shown th... |
MBE2021-98 pp.47-50 |
EA, SIP, SP, IPSJ-SLP |
2022-03-02 15:35 |
Okinawa |
(Primary: On-site, Secondary: Online) |
[Poster Presentation]
Suppression of alpha power in the prediction of familiar melodies Shuma Ito, Toshihisa Tanaka (TUAT) EA2021-91 SIP2021-118 SP2021-76 |
There is a relationship between melody recognition and language processing in the human brain. For language memory, it... |
EA2021-91 SIP2021-118 SP2021-76 pp.171-176 |
EA, SIP, SP, IPSJ-SLP |
2022-03-02 15:35 |
Okinawa |
(Primary: On-site, Secondary: Online) |
[Poster Presentation]
Selective attention to music inducing neural entrainment variation and alpha-band spatial modulation Kana Mizokuchi, Toshihisa Tanaka (TUAT), Takashi G. Sato, Yoshifumi Shiraki (NTT) EA2021-95 SIP2021-122 SP2021-80 |
A human can pay selective attention to music or speech from various sounds. It has been reported that in a speech doma... |
EA2021-95 SIP2021-122 SP2021-80 pp.195-200 |
AI |
2022-02-28 14:00 |
Miyazaki |
Youth Hostel Sunflower MIYAZAKI (Primary: On-site, Secondary: Online) |
AI2021-19 |
(To be available after the conference date) |
AI2021-19 pp.41-46 |
MVE, IPSJ-CVIM, VRSJ-SIG-MR |
2022-01-27 15:20 |
Online |
Online |
Appropriateness of Facial Expressions of Chord Empathic Agents on User's Sense of Music Hibiki Takemura, Mako Ishida, Tomoko Yonezawa (Kansai Univ.) MVE2021-32 |
In this study, we examined how humans feel music and an agent with facial expressions corresponding to the agent’s expre... |
MVE2021-32 pp.13-18 |
CCS |
2021-11-18 13:00 |
Osaka |
Osaka Univ. (Primary: On-site, Secondary: Online) |
Making Simple Music from Elementary Cellular Automata Wataru Kojima, Toshimichi Saito (HU) CCS2021-18 |
Elementary cellular automata (ECAs) are simple digital dynamical systems in which time, space, and state are all discret... |
CCS2021-18 pp.7-10 |
PRMU, IPSJ-CVIM, IPSJ-NL |
2021-05-20 14:55 |
Online |
Online |
Improvement of Live Music Video Presence Using Zooming Ai Oishi, Eiji Kamioka (SIT) PRMU2021-2 |
Although there are various types of live music video contents, it is difficult for the viewers to get a sense of presenc... |
PRMU2021-2 pp.7-12 |
EA, US, SP, SIP, IPSJ-SLP |
2021-03-04 14:30 |
Online |
Online |
Estimation of Attentional Direction using EEG during Simultaneous Presentation of Music from Two Sources Kana Mizokuchi, Toshihisa Tanaka (TUAT), Takashi G. Sato, Yoshifumi Shiraki (NTT) EA2020-83 SIP2020-114 SP2020-48 |
People can pay selective attention to music or speech from various sounds. It has been reported that when multiple beat... |
EA2020-83 SIP2020-114 SP2020-48 pp.140-145 |
CAS, ICTSSL |
2021-01-28 09:35 |
Online |
Online |
On a music generation using LSTM neural network with Julia Takaaki Kobayashi, Kazuya Ozawa, Kaito Isogai, Hideaki Okazaki (Shonan Inst Tech) CAS2020-38 ICTSSL2020-23 |
We discuss how to build a long short-term memory (LSTM) neural network for music generation using Flux. |
CAS2020-38 ICTSSL2020-23 pp.4-6 |
HCGSYMPO (2nd) |
2020-12-15 - 2020-12-17 |
Online |
Online |
Music generation that stops at a specified time Kaku Oeda, Heitoh Zen (Chiba Univ.) |
In this study, we aim to automatically generate jingles. Jingles are short songs that are inserted into scene changes in... |
|
EA |
2020-12-14 09:40 |
Online |
Online |
Sound source separation method for use in live concerts Ryotaro Yamada, Kota Takahashi (UEC) EA2020-47 |
In live concerts, musical instrument sounds are mixed into the vocal microphone, which makes it difficult to mix properly. To ... |
EA2020-47 pp.7-12 |
HCS |
2020-11-01 14:30 |
Online |
Online |
Influences of types of lessons on children's music ensembles Satoshi Kawase (YMF/Kobe Gakuin Univ.), Masahiro Okano (JSPS/Ritsumeikan Univ.), Yoshitaka Kumasaka (YMF), Chika Nagisa (YMF/Tokyo Coll. of Mus.) HCS2020-48 |
The aim of this study was to investigate associations between types of musical training for children and coordination in... |
HCS2020-48 pp.32-35 |
EA, ASJ-H |
2020-07-21 11:00 |
Online |
Online |
Possibilities of Gamification for Learning How to Use an Interactive Speech Synthesizer "Voice Pad" Daiki Goto (Hokkai Gakuen Univ.), Naofumi Aoki, Keisuke Ai (Hokkaido Univ.), Kunitoshi Motoki (Hokkai Gakuen Univ.) EA2020-11 |
This study has developed an interactive speech synthesizer that can enable users to synthesize speech as playing musical... |
EA2020-11 pp.63-66 |
EA, ASJ-H |
2020-07-21 11:50 |
Online |
Online |
Evaluation of components for new auditory signal in automobile Yoshinori Kamizono, Takao Onoye, Wataru Kobayashi (Osaka Univ.), Daisuke Okamoto EA2020-13 |
The purpose of this study is to contribute to a safe and comfortable driving environment through next-generation auditory ... |
EA2020-13 pp.71-78 |