IEICE Technical Committee Submission System
All Technical Committee Conferences  (Searched in: All Years)

Search Results: Conference Papers
 Conference Papers (Available on Advance Programs)  (Sort by: Date Descending)
Results 1 - 11 of 11
Committee | Date | Time | Place | Paper Title / Authors | Abstract | Paper #
PRMU, IPSJ-CVIM, IPSJ-DCC, IPSJ-CGVI 2023-11-17
09:20
Tottori
(Primary: On-site, Secondary: Online)
Co-speech Gesture Generation with Variational Auto Encoder
Shihichi Ka, Koichi Shinoda (Tokyo Tech) PRMU2023-29
Co-speech gesture generation is the study of generating gestures from speech. In prior works, deterministic methods lear... [more]
pp.74-79
SP, IPSJ-MUS, IPSJ-SLP 2023-06-23
13:50
Tokyo
(Primary: On-site, Secondary: Online)
[Poster Presentation] Generation of colored subtitle images based on emotional information of speech utterances
Fumiya Nakamura (Kobe Univ.), Ryo Aihara (Mitsubishi Electric), Ryoichi Takashima, Tetsuya Takiguchi (Kobe Univ.), Yusuke Itani (Mitsubishi Electric) SP2023-11
Conventional automatic subtitle generation systems based on speech recognition do not take into account paralinguistic i... [more]
pp.54-59
HCS 2021-01-23
14:10
Online
Development of a generative model for face motion during dialogue
Shota Takashiro, Yutaka Nakamura, Yusuke Nishimura, Hiroshi Ishiguro (Osaka Univ.) HCS2020-54
Various devices, such as smart speakers, have been developed to support human daily life. Among them, interactive robots... [more]
pp.12-16
HIP, HCS, HI-SIGCOASTER 2020-05-14
15:50
Online
Music generation and emotion estimation from EEG for inducing affective states
Kana Miyamoto, Hiroki Tanaka, Satoshi Nakamura (NAIST) HCS2020-9 HIP2020-9
We propose a music generation feedback system that compares the user's emotion estimated by EEG with the desired emotion... [more]
pp.41-45
NC, MBE 2019-12-06
11:25
Aichi (Toyohashi Tech)
Mathematical Model for Generating Human Foot-Lifting Movements onto One-Up Stair-Step from Stair Rise
Toshikazu Matsui, Shu Kitabatake (Gunma Univ) MBE2019-49 NC2019-40
This research formulates a mathematical model generating foot-lifting movements from only the step rise without any info... [more]
pp.25-30
AI 2018-12-07
15:55
Fukuoka
Toyoaki Kuwahara, Yuichi Sei, Yasuyuki Tahara, Akihiko Ohsuga (UEC) AI2018-30
Emotion estimation from speech can be performed with higher precision thanks to the development of deep learni... [more]
pp.25-29
MI 2015-03-03
09:36
Okinawa (Hotel Miyahira)
Generation of CT images at arbitrary time phase using chest 4D-CT images
Michiaki Nakagawa, Yasushi Hirano, Shoji Kido (Yamaguchi Univ.) MI2014-91
Recently, development of Computer-Aided Diagnosis (CAD) technology has been remarkable. CAD technologies for respiratory system a... [more]
pp.177-182
HCGSYMPO (2nd) 2012-12-10 - 2012-12-12
Kumamoto (Kumamoto-Shintoshin-plaza)
Evaluation of low-dimensional parametric description of 3D faces representing their dynamic deformation caused by facial expressions -- On the PCA of time series data of feature points obtained by a motion capture system --
Syunsuke Nagata, Syunta Yamamoto, Yoshinori Inaba, Shigeru Akamatsu (Hosei Univ.)
In this research, we made a comparative study on the discrimination capability among several categories of facial expres... [more]
HCS 2010-03-08 - 2010-03-09
Shizuoka
Quality Improvement of Eye-contacted Facial Motion Image for Network Communication Environment
Takuma Funahashi, Takayuki Fujiwara, Hiroyasu Koshimizu (Chukyo Univ.) HCS2009-78
In this research, we pay attention to the disagreement of eye-contact in teleconference caused by the separation between... [more]
pp.37-38
NLC 2008-02-08
11:30
Niigata (Yuzawa Culture Center)
Constructing Suffix Expression Patterns to Analyze Dialogue Act and Emotions
Masato Tokuhisa, Kousuke Maeta, Jin'ichi Murakami, Satoru Ikehara (Tottori Univ.) NLC2007-95
In this paper, we explain how to construct suffix patterns for analyzing dialogue acts and emotions from text-dialogue. ... [more]
pp.45-50
NC 2006-07-14
16:00
Tokyo (Waseda University)
Recognition of human movements by using the ground reaction force and motion capture data
Yuka Ariki (NAIST), Jun Morimoto (ATR)
Motion capture data have been commonly used to generate human-like behaviours for humanoid robots and virtual characte... [more]
NC2006-44
pp.37-41
Copyright and reproduction : All rights are reserved and no part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publisher. Notwithstanding, instructors are permitted to photocopy isolated articles for noncommercial classroom use without fee. (License No.: 10GA0019/12GB0052/13GB0056/17GB0034/18GB0034)




The Institute of Electronics, Information and Communication Engineers (IEICE), Japan