Committee | Date Time | Place | Paper Title / Authors | Abstract | Paper #
HCGSYMPO (2nd) |
|
Mie |
Sinfonia Technology Hibiki Hall Ise |
Relationship between driver's score of cognitive test and gaze time information Kakeru Yamaguchi, Minoru Nakayama (Titech), Chayn Sun, Cecilia Xia (Curtin Univ) |
Driving requires a high level of cognitive capacity, but this capacity tends to decline with age. The data set covered in this stud... [more]
|
WIT, HI-SIGACI |
2018-12-05 16:15 |
Tokyo |
AIST Tokyo Waterfront |
Motion Analysis of Visual Line in Eye-Gaze-Based Character Input Process using an Onscreen Keyboard Kana Kato, Tyohiko Hayashi, Shigehito Tanahashi (Niigata Univ.) WIT2018-43 |
Recently, as an input method of assistive devices for persons with severe physical disabilities, eye-gaze input is getti... [more] |
WIT2018-43 pp.53-58 |
CAS, NLP |
2018-10-18 11:25 |
Miyagi |
Tohoku Univ. |
A Modeling on Eye Movements in Visual Search Yoshinobu Maeda, Ryunosuke Kodera, Masayuki Tsuno (Niigata Univ.), Makoto Ozawa, Taishin Nomura (Osaka Univ.), Akira Tsukada (NIT Toyama College) CAS2018-40 NLP2018-75 |
According to Hick’s law, the time to find a target symbol from many alternatives is relatively short, i.e., it is not pr... [more] |
CAS2018-40 NLP2018-75 pp.19-24 |
MBE, NC (Joint) |
2018-05-19 16:50 |
Toyama |
Univ. of Toyama |
Consideration on Frequency Distribution of Eye Movement in Visual Search Ryunosuke Kodera (Niigata Univ.), Makoto Ozawa, Taishin Nomura (Osaka Univ.), Akira Tsukada (NIT,Toyama College), Yoshinobu Maeda (Niigata Univ.) MBE2018-5 |
You can find a target symbol quickly with your eye movements. Why do eye movements enable this? In the previous ... [more]
MBE2018-5 pp.23-26 |
CQ, MVE, IE, IMQ (Joint) [detail] |
2018-03-08 15:30 |
Okinawa |
Okinawa Industry Support Center |
Eye-Gaze Differences and Saliency Prediction Across Different Age Groups Onkar Krishna, Kiyoharu Aizawa (Univ. of Tokyo) IMQ2017-37 IE2017-129 MVE2017-79
The human visual system has developed the ability to process a scene by selecting the most relevant parts of the scene u... [more] |
IMQ2017-37 IE2017-129 MVE2017-79 pp.67-72 |
HIP |
2017-10-24 10:20 |
Kyoto |
Kyoto Terrsa |
Age-related changes in fixation eye movements Shohei Ohtani (Kindai Univ.), Hiroyuki Sakai (Toyota Central R&D Labs., Inc), Takeshi Kohama (Kindai Univ.) HIP2017-70 |
The purpose of this study is to clarify the influence of aging on fixation eye movements. We measured pupil diameter and... [more] |
HIP2017-70 pp.61-64 |
IMQ |
2017-10-06 13:55 |
Hyogo |
Kobe University
Evaluation of fatigue focused on eye movement and lip movement Miyuki Suganuma, Yuki Kurosawa, Shinya Mochiduki, Yuko Hoshino, Mitsuho Yamada (Tokai Univ.) IMQ2017-14 |
We have analyzed the concentration of drivers through changes in eye movement around the gazing point while driving and s... [more]
IMQ2017-14 pp.5-8 |
HIP |
2016-09-28 13:35 |
Nara |
|
Influence of target patterns on dynamic characteristics of fixation eye movements Shohei Ohtani, Takeshi Kohama, Sho Kikkawa, Hisashi Yoshida (Kindai Univ.) HIP2016-55 |
During a fixation, our eyes make small and involuntary eye movements, and minute ballistic movements are called microsa... [more]
HIP2016-55 pp.61-66 |
HIP |
2016-09-28 15:25 |
Nara |
|
An analysis of spatial sampling properties of peripheral vision based on a retinal neuron network model Kensuke Kubo, Takeshi Kohama, Hisashi Yoshida (Kindai) HIP2016-59 |
In this study, we propose a mathematical model of a retinal network which replicates spatial sampling properties of peri... [more] |
HIP2016-59 pp.79-84 |
PRMU, IPSJ-CVIM, IBISML [detail] |
2016-09-06 09:30 |
Toyama |
|
Evaluation Metrics for Saliency Map Compensating Center-Bias Effect Takao Yamanaka (Sophia Univ.) PRMU2016-76 IBISML2016-31 |
In order to evaluate the saliency maps (a probabilistic map of an image to predict human-eye fixations), a wide variety ... [more] |
PRMU2016-76 IBISML2016-31 pp.173-178 |
MBE, NC (Joint) |
2016-05-21 09:55 |
Toyama |
University of Toyama |
Gaze feature analysis for the salient area during presentation of an image inducing pleasant or unpleasant emotion Keisuke Terai, Hironobu Takano, Kiyomi Nakamura (Toyama Prefectural Univ.) MBE2016-2
We tried to develop an emotion estimation method which combines gaze information with visual saliency. The exp... [more]
MBE2016-2 pp.7-11 |
HIP |
2015-09-29 13:55 |
Kyoto |
Kyoto Terrsa |
A mathematical model of retina considering the characteristics of the distribution of photoreceptor and ganglion cells Kensuke Kubo, Takeshi Kohama, Hisashi Yoshida (Kindai) HIP2015-83 |
In this study, we propose a mathematical model of a retinal network considering distribution characteristics of retinal ... [more] |
HIP2015-83 pp.55-60 |
HIP |
2015-09-29 15:45 |
Kyoto |
Kyoto Terrsa |
The effects of cognitive loads derived from voice or manual responses on microsaccade rate Yuma Nakai, Syohei Ohtani, Yuji Kanoh (Kinki Univ.), Masaya Yamamoto, Shinichi Ueda, Masayuki Kurihara (Tokai Rika.), Takeshi Kohama, Hisashi Yoshida (Kinki Univ.) HIP2015-87 |
Voice recognition technologies are widely applied to general-purpose devices. However, the cognitive loads of voice oper... [more]
HIP2015-87 pp.79-84 |
US |
2015-08-24 13:00 |
Tokyo |
Tokyo Institute of Technology, Ookayama Campus |
Influence estimation of the process for histology on the measurement of acoustic properties using high frequency So Irie, Kenji Yoshida, Tadashi Yamaguchi (Chiba Univ.) US2015-38 |
In bioacoustics microscopy, the effect of tissue preparation, e.g. formalin fixation, on acoustic characteristic of sect... [more] |
US2015-38 pp.1-6 |
IMQ |
2014-12-20 17:15 |
Aichi |
Nagoya University |
Analysis of gaze movement while viewing ultra-high-definition images at short distance Hideaki Takahira, Ayane Miura, Shinya Mochiduki, Yumeko Yamazaki, Mitsuho Yamada (Tokai Univ.) IMQ2014-24 |
The spread of 4K- and 8K-resolution movies is advancing, and people can enjoy movies with a high sense of presence by ... [more]
IMQ2014-24 pp.45-48 |
HIP |
2014-09-26 15:50 |
Nara |
Nara Prefectural New Public Hall |
Quantitative measurements of attentional concentration based on analyses of microsaccades and drift eye movements Takeshi Kohama, Sho Kikkawa, Hisashi Yoshida (Kinki Univ.) HIP2014-59 |
The purpose of this study is to evaluate the effect of visual attention on the dynamics of fixation eye movements object... [more] |
HIP2014-59 pp.87-92 |
NLC |
2014-02-07 10:35 |
Kyoto |
Campus Plaza Kyoto |
The Integrated Narrative Generation System, Inter-textuality, Text Mining Takashi Ogata, Jumpei Ono (Iwate Prefectural Univ.) NLC2013-53
In this paper, we first describe the background and ideas behind the integrated narrative generation system which we h... [more]
NLC2013-53 pp.33-38 |
HCGSYMPO (2nd) |
2012-12-10 - 2012-12-12 |
Kumamoto |
Kumamoto-Shintoshin-plaza |
Comparison of eye movements during facial impression judgment in different personality traits -- Analysis of fixation locations and durations -- Ayumi Maruyama, Ayumi Matsuyama, Natsuko Nakamura, Yoshinori Inaba (Hosei Univ.), Hanae Ishi (Sendai KOSEN), Jiro Gyoba (Tohoku Univ.), Shigeru Akamatsu (Hosei Univ.)
We investigated whether different features in the face image are gazed at while making impression judgments in different... [more] |
|
MBE, NC (Joint) |
2012-03-14 15:00 |
Tokyo |
Tamagawa University |
Real-time emulation of the retinal circuits responding selectively to moving objects during fixational eye movements. Yukihiro Yokoyama, Hirotsugu Okuno, Yuki Hayashida, Tetsuya Yagi (Osaka Univ.) NC2011-132 |
We have developed an emulator to reproduce neural activities in the retinal neural circuit that can detect a moving obje... [more] |
NC2011-132 pp.63-67 |
PRMU, MVE, CQ, IPSJ-CVIM [detail] |
2012-01-20 11:10 |
Osaka |
|
Analysis and synthesis of saccades and involuntary eye movements in fixation during conversations Tomoyori Iwao, Daisuke Mima, Hiroyuki Kubo, Akinobu Maejima, Shigeo Morishima (Waseda Univ.) PRMU2011-170 MVE2011-79 |
To generate human motions naturally, it is important to synthesize realistic human eye movements in Computer Graph... [more]
PRMU2011-170 MVE2011-79 pp.239-244 |