Research Achievements (Updated 20 April 2015)

Journal Papers

  1. 小松孝徳・鈴木健太郎・植田一博・開一夫・岡夏樹 (2003). パラ言語情報を利用した相互適応的な意味獲得プロセスの実験的分析, 『認知科学』, vol.10 (1), 121-138.
  2. 小松孝徳・鈴木健太郎・植田一博・開一夫・岡夏樹 (2003). 発話理解学習を利用した適応的インターフェイス―人間同士のコミュニケーション成立過程からの知見, 『システム制御情報学会論文誌』, vol.16 (6), 260-269.
  3. 小松孝徳・長崎康子 (2005). ビープ音からコンピュータの態度が推定できるのか?−韻律情報の変動が情報発信者の態度推定に与える影響, 『ヒューマンインタフェース学会論文誌』, vol.7 (1), 19-26.
  4. 宇都宮淳・小松孝徳・植田一博・岡夏樹 (2005). 段階的相互適応を考慮した意味獲得モデルの構築, 『日本知能情報ファジィ学会誌』, vol.17 (3), 298-313.
  5. Komatsu, T., Utsunomiya, A., Suzuki, K., Ueda, K., Hiraki, K., and Oka, N. (2005). Experiments toward a mutual adaptive speech interface that adopts the cognitive features humans use for communication and induces and exploits users' adaptation, International Journal of Human-Computer Interaction, vol.18 (3), 243-268.
  6. 小松孝徳 (2006). 視覚的なSubtle Expressionsからのコンピュータの態度推定, 『ヒューマンインタフェース学会論文誌』, vol.8 (1), 167-175.
  7. 小松孝徳・鈴木昭二・鈴木恵二・松原仁・小野哲雄・坂本大介・佐藤崇正・内本友洋・岡田孟・北野勇・棟方渚・佐藤智則・高橋和之・本間正人・長田純一・畑雅之・乾英雄 (2006). 非ロボット技術者のための直感的ロボットオーサリングシステムの提案, 『日本バーチャルリアリティ学会論文誌』, vol.11 (2), 213-224.
  8. 大塚庄一郎・小松孝徳・米田隆志 (2006). 親和性のあるロボットデザイン - Virtual Robotを利用した音声教示によるインタラクションと機能的デザインの評価 -, 『ヒューマンインタフェース学会論文誌』, vol.8 (3), 343-352.
  9. 大本義正・植田一博・大野健彦・小松孝徳 (2006). 複数の非言語情報を利用した嘘の読み取りとその自動化, 『ヒューマンインタフェース学会論文誌』, Vol.8 (4), 555-564.
  10. Komatsu, T., and Morikawa, K. (2007). Entrainment in the rate of utterance in speech dialogs between users and an auto response system, Journal of Universal Computer Science, vol.17 (2), 186-198.
  11. 小松孝徳・山田誠二(2008).エージェントの外見の違いがユーザの態度解釈に与える影響 ―外見の異なるエージェントからの同一人工音の提示実験,『日本知能情報ファジィ学会誌』,Vol.20 (4), 500-512.
  12. Yong, X., Ueda, K., Komatsu, T., Okadome, T., Hattori, T., Sumi, Y. and Nishida, T. (2009). WOZ experiments for understanding mutual adaptation. AI & Society: The Journal of Human-Centred Systems, Vol.23 (3), DOI 10.1007/s00146-007-0134-1.
  13. 小松孝徳・山田誠二(2009).適応ギャップがユーザのエージェントに対する印象変化に与える影響,『人工知能学会論文誌』,vol.24 (2),232-240.
  14. 小松孝徳・秋山広美(2009).ユーザの直感的表現を支援するオノマトペ表現システム,『電子情報通信学会論文誌A』,vol. J92-A (11), 752-763.
  15. Akita J., Komatsu, T., Ito, K., Ono T., and Okamoto, M.(2009). CyARM: Haptic Sensing Device for Spatial Localization on Basis of Exploration by Arms, Advances in Human-Computer Interaction, Vol. 2009, Article ID 901707, 6 pages, doi:10.1155/2009/901707.
  16. 小松孝徳・山田誠二・小林一樹・船越孝太郎・中野幹生(2010).Artificial Subtle Expressions: エージェントの内部状態を直感的に伝達する手法の提案,『人工知能学会論文誌』,vol.25 (6),733-741.
  17. Yong, X., Ohmoto, Y., Okada, S., Ueda, K., Komatsu, T., Okadome, T., Kamei, K., Sumi, Y. and Nishida, T.(2010). Formation conditions of mutual adaptation in human-agent collaborative interaction, Applied Intelligence, vol. 2010, DOI 10.1007/s10489-010-0255-y.
  18. Yong, X., Ohmoto, Y., Okada, S., Ueda, K., Komatsu, T., Okadome, T., Kamei, K., Sumi, Y. and Nishida, T.(2010). Active adaptation in human-agent collaborative interaction, Journal of Intelligent Information Systems, vol. 2010, DOI 10.1007/s10844-010-0135-2.
  19. Komatsu, T., and Yamada, S. (2011). Adaptation gap hypothesis: How differences between users’ expected and perceived agent functions affect their subjective impression, Journal of Systemics, Cybernetics and Informatics, vol.9 (1), 67 - 74.
  20. 船越孝太郎・小林一樹・中野幹生・小松孝徳・山田 誠二(2011).対話の低速化とArtificial Subtle Expressionによる発話衝突の抑制,『人工知能学会論文誌』,vol.26 (2), 353-365.
  21. 前田唯・秋田純一・小松孝徳(2011).擬似的な不規則画素配置による画像のジャギー解消効果の評価,『ヒューマンインタフェース学会論文誌』,Vol.13(2), 167-176.
  22. Komatsu, T., and Yamada, S. (2011). How does the agents' appearance affect users' interpretation of the agents' attitudes - Experimental investigation on expressing the same artificial sounds from agents with different appearances, International Journal of Human-Computer Interaction, vol. 27 (3), 260-279.
  23. Komatsu, T., Kurosawa, R., and Yamada, S. (2012). How does the Difference between Users' Expectations and Perceptions about a Robotic Agent Affect Their Behavior? International Journal of Social Robotics, vol. 4 (2), 109-116.
  24. 戸本裕太郎・中村剛士・加納政芳・小松孝徳(2012).音素特徴に基づくオノマトペの可視化,『日本感性工学会論文誌』,vol. 11 (4), 545-552.
  25. 中村優希・秋田純一・小松孝徳(2012).動画像における擬似的不規則画素配置によるジャギー解消効果の評価,『映像情報メディア学会誌』,vol. 66 (12), 1 - 3.
  26. 小松孝徳・山田誠二・小林一樹・船越孝太郎・中野幹生(2012).確信度表出における人間らしい表現とArtificial Subtle Expressionsとの比較,『人工知能学会論文誌』,vol.27 (5),263-270.
  27. 中村聡史・小松孝徳(2013).スポーツの勝敗にまつわるネタバレ防止手法:情報曖昧化の可能性,『情報処理学会論文誌』,vol.54 (4), 1402-1412.
  28. Ito, J., Kanoh, M., Nakamura, T., and Komatsu, T. (2013). Editing Robot Motion Using Phonemic Feature of Onomatopoeia, Journal of Advanced Computational Intelligence and Intelligent Informatics, vol.17 (2), 227-236.
  29. 寺田和憲・山田誠二・小松孝徳・小林一樹・船越孝太郎・中野幹生・伊藤昭(2013).移動ロボットによるArtificial Subtle Expressionsを用いた確信度表出,『人工知能学会論文誌』,vol.28 (3), 311-319.
  30. 平田佐智子・中村聡史・小松孝徳・秋田喜美(2015).国会会議録コーパスを用いたオノマトペ使用の地域比較,『人工知能学会論文誌』,vol.30 (1), 274-281.
  31. 田中恒彦・岡嶋美代・小松孝徳(2015).診断横断的行動療法でオノマトペがなぜ有用か?,『人工知能学会論文誌』,vol.30 (1), 282-290.
  32. 岩佐和典・小松孝徳(2015).視覚による触質感認知と不快感に対する命名の影響 −触覚オノマトペによる検討−,『人工知能学会論文誌』,vol.30 (1), 265-273.
  33. 伊藤惇貴・加納政芳・中村剛士・小松孝徳(2015).オノマトペの音象徴属性値の調整のための一手法,『人工知能学会論文誌』,vol.30 (1), 364-371.
  34. 坂本大介・小松孝徳・五十嵐健夫(2015).パラ言語情報を用いた携帯端末の操作手法,『ヒューマンインタフェース学会論文誌』,to appear.
  35. 小林一樹・船越孝太郎・小松孝徳・山田誠二・中野幹生(2015).ASEに基づく相槌によるロボットとの対話体験の向上,『人工知能学会論文誌』,vol.30 (4), to appear.

    International Conference Papers

    1. Oka, N., Morikawa, K., Komatsu, T., Suzuki, K., Hiraki, K., Ueda, K., and Omori, T. (2001). Embodiment without a Physical Body, In Proceedings of International Workshop of Developmental Embodied Cognition 2001 (DECO2001), (CD-ROM).
    2. Komatsu, T., Suzuki, K., Hiraki, K., Ueda, K., and Oka, N. (2001). Analysis of Speech/Action-Based Communication: In terms of Mutual Adaptation and Prosodic Effects, In Proceedings of International Conference on Cognitive Science 2001 (ICCS2001), pp.143-147.
    3. Komatsu, T., Suzuki, K., Ueda, K., Hiraki, K., and Oka, N. (2002). What is Important for an Autonomous Interactive Robot? In Proceedings of the Seventh International Conference on the Simulation of Adaptive Behavior (SAB02), pp.399-400.
    4. Komatsu, T., Suzuki, K., Ueda, K., Hiraki, K., and Oka, N. (2002). Mutual Adaptive Meaning Acquisition by Paralanguage Information: Experimental Analysis of Communication Establishing Process, In Proceedings of the 24th Annual Meeting of the Cognitive Science Society (CogSci2002), pp.548-553.
    5. Komatsu, T., Utsunomiya, A., Suzuki, K., Ueda, K., Hiraki, K., and Oka, N. (2003). Toward a Mutual Adaptive Interface by Utilizing a User's Cognitive Features, In Proceedings of IEEE International Symposium on Computational Intelligence in Robotics and Automation (IEEE/CIRA2003), pp.1102-1107.
    6. Komatsu, T., Utsunomiya, A., Suzuki, K., Ueda, K., Hiraki, K., and Oka, N. (2003). Toward a Mutual Adaptive Interface: An interface induces a user's adaptation and utilize this induced adaptation, and vice versa, In Proceedings of the 25th Annual Meeting of the Cognitive Science Society (CogSci2003), pp.687-692.
    7. Komatsu, T., Utsunomiya, A., Suzuki, K., Ueda, K., Hiraki, K., and Oka, N. (2003). Toward a Mutual Adaptive Interface: An interface and a user induces and utilize the partner's adaptation, In Proceedings of the 7th International Conference on Knowledge-Based Information & Engineering Systems (KES2003, also LNAI 2774), pp.1101-1108.
    8. Komatsu, T., Utsunomiya, A., Ueda, K., and Oka, N. (2003). Can you communicate with me? - An experimental design how humans regard artificial agents as communication partners, In Proceedings of the 12th International Workshop on Robot and Human Interactive Communication (RO-MAN 2003), (CD-ROM).
    9. Komatsu, T., Utsunomiya, A., Suzuki, K., Ueda, K., Hiraki, K., and Oka, N. (2003). A meaning acquisition model which recognizes user's speeches like a pet animal is doing, In Proceedings of the 2nd International Conference on Computational Intelligence, Robotics and Automation Systems (CIRAS2003), (CD-ROM).
    10. Utsunomiya, A., Komatsu, T., Suzuki, K., Ueda, K., Hiraki, K., and Oka, N. (2003). Construction of Meaning Acquisition Model Using Prosodic Information: Toward a Smooth Human-Agent Interaction, In Proceedings of the 10th International Conference on Human-Computer Interaction (HCII2003), pp.543-547.
    11. Komatsu, T., and Nagasaki, Y. (2004). Can we estimate the speaker's emotional state from her/his prosodic features? - Effects of F0 contour's slope and duration on perceiving disagreement, hesitation, agreement and attention, In Proceedings of the 18th International Congress on Acoustics (ICA2004), pp.2227-2230.
    12. Nagasaki, Y., and Komatsu, T. (2004). Fundamental Frequency as a cue to estimate speakers' emotional state, In Proceedings of the 18th International Congress on Acoustics (ICA2004), pp.2231-2234.
    13. Nagasaki, Y., and Komatsu, T. (2004). The Superior Effectiveness of the F0 Range for Identifying the Context from Sounds without Phonemes, In Proceedings of the 8th International Conference on Spoken Language Processing (INTERSPEECH2004), (CD-ROM).
    14. Nagasaki, Y., and Komatsu, T. (2004). Can People Perceive Different Emotions from a Non-emotional Voice by Modifying its F0 and Duration?, In Proceedings of the 2nd International Conference on Speech Prosody (SP2004), pp.667-670.
    15. Komatsu, T., Ohtsuka, S., Ueda, K., Komeda, T., and Oka, N. (2004). A method for estimating whether a user is in smooth communication with an interactive agent in human-agent interaction, In Proceedings of the 8th International Conference on Knowledge-Based Intelligent Information & Engineering Systems (KES2004), pp. 371-377.
    16. Utsunomiya, A., Komatsu, T., Suzuki, K., Ueda, K., Hiraki, K., and Oka, N. (2004). A meaning acquisition model which induces and utilize human's adaptation, In Proceedings of the 8th International Conference on Knowledge-Based Information and Engineering Systems (KES2004), pp. 378-384.
    17. Komatsu, T. (2004). How can a life-like agent evoke its emotions for users? - A method for inducing user's natural behaviors to establish "mutual adaptation", In Proceedings of the 8th International Conference on the Simulation of Adaptive Behavior (SAB2004), pp.415-424.
    18. Komatsu, T. (2005). Can we assign attitudes to a computer based on its beep sounds?, In Proceedings of the Affective Interactions: The computer in the affective loop Workshop at Intelligent User Interface 2005 (IUI2005), pp. 35-37.
    19. Ohmoto, Y., Ueda, K., and Komatsu, T. (2005). Sensing of intention that appears as various nonverbal information in face-to-face communication, In Proceedings of AISB 2005 Symposium: Conversational Informatics for Supporting Social Intelligence & Interaction, pp. 52-57.
    20. Komatsu, T., and Morikawa, K. (2005). Entrainment of rate of utterances in speech dialogs between users and an auto response system, In Proceedings of the 9th International Conference on Knowledge-Based Intelligent Information & Engineering Systems (KES2005), pp.868-874.
    21. Akita, J., Ito, K., Komatsu, T., Ono, T., and Okamoto, M. (2005). CyARM: Direct Perception Device by Dynamic Touch, In Proceedings of the 13th International Conference on Perception and Action (ICPA13), pp.87-90.
    22. Komatsu, T. (2005). Can we assign attitudes to a computer based on its beeps?, In Proceedings of the Nineteenth International Joint Conference on Artificial Intelligence (IJCAI-05), pp. 1962-1963.
    23. Komatsu, T. (2005). Subtle expressivity for making humans estimate certain attitudes, In Proceedings of 2005 IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN2005), pp.241-246.
    24. Morikawa, K, and Komatsu, T. (2005). Human entrainment of rate of utterances when communicating with an auto response system, In Proceedings of 2005 IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN2005), pp.247-252.
    25. Okamoto, M., Akita, J., Ito, K., Ono, T., and Komatsu, T. (2005). See it by Hand - CyARM, In Proceedings of the 2nd International Conference on Enactive Interfaces (CD-ROM).
    26. Komatsu, T. (2005). Toward making humans empathize with artificial agents by means of subtle expressions, In Proceedings of the 1st International Conference on Affective Computing and Intelligent Interaction (ACII2005), pp. 458-465.
    27. Komatsu, T., Suzuki, S., Suzuki, K., Ono, T., Matsubara, H., Uchimoto, T., Okada, H., Kitano, I., Sakamoto, D., Sato, T., Honma, M., Sato, T., Osada, J., Hata, M. and Inui, H. (2005). Reconfigurable robot with intuitive authoring system−“Dress-Up Robot”−, In Proceedings of the 36th International symposium on Robotics (ISR2005), (CD-ROM).
    28. Komatsu, T., Ono, T., Akita, J., Ito, K., and Okamoto, M. (2005). See it by Hand - CyARM: Enhancing interaction ability without using visual information, In Proceedings of the 3rd International Conference on Computational Intelligence, Robotics and Autonomous Systems (CIRAS2005), (CD-ROM).
    29. Komatsu, T., Iwaoka, T., and Nambu, M. (2006). The Effect of Prior Interaction Experience with Real/virtual Robot on Participants' Leaving Message Task, In Proceedings of the Ninth International Conference on Control, Automation, Robotics and Vision (ICARCV 2006), (CD-ROM).
    30. Komatsu, T., Iwaoka, T., and Nambu, M. (2006). Leaving a message with the PaPeRo robot: The effect of interaction experience with real or virtual PaPeRo on impression evaluation, In Proceedings of the 5th International Conference on Entertainment Computing (ICEC2006), pp. 27-32.
    31. Komatsu, T. (2006). Audio subtle expressions affecting user's perceptions, In Proceedings of 2006 International Conference on Intelligent User Interface (ACM-IUI2006), pp.306-308.
    32. Yamada, S., and Komatsu, T. (2006). Designing simple and effective expression of robot's primitive minds to a human, In Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS2006), pp. 2614-2619.
    33. Ono, T., Komatsu, T., Akita, J., Ito, K., and Okamoto, M. (2006). CyARM: Interactive Device for Environment Recognition and Joint Haptic Attention using Non-Visual Modality, In Proceedings of the 10th International Conference on Computers Helping People with Special Needs (ICCHP2006), pp. 1251-1258.
    34. Munekata, N., Komatsu, T., and Matsubara, H. (2007). Marching Bear: An Interface System Encouraging User’s Emotional Attachment and Providing an Immersive Experience, In Proceedings of the 6th International Conference on Entertainment Computing (ICEC2007), pp. 340-349.
    35. Yong, X., Ueda, K., Komatsu, T., Okadome, T., Sumi, Y. and Nishida, T. (2007). Can Gesture Establish An Independent Communication Channel?, In Proceedings of the International Conference on Control, Automation and Systems 2007, (CD-ROM).
    36. Komatsu, T., Ohtsuka, S., Ueda, K., and Komeda, T. (2007). Comprehension of Users' Subjective Interaction States during their Interaction, In Proceedings of the 2nd International Conference on Affective Computing and Intelligent Interaction (ACII2007), pp.168-279.
    37. Komatsu, T., and Yamada, S. (2007). Effects of Robotic Agents’ Appearances on Users’ Interpretation of the Agents’ Attitudes: Towards an Expansion of “Uncanny Valley” assumption, In Proceedings of the 2007 IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN2007), pp. 380-385.
    38. Komatsu, T., and Yokoyama, K. (2007). Experiments to Clarify Whether the “Gain and Loss of Esteem” Could be Observed in Human-Robot Interaction, In Proceedings of the 2007 IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN2007), pp. 457-462.
    39. Komatsu, T., and Yamada, S. (2007). How appearance of robotic agents affects how people interpret the agents' attitudes, In Proceedings of the 2007 International Conference on Advances in Computer Entertainment Technology (ACM-ACE2007), pp.123-126.
    40. Komatsu, T., and Yamada, S. (2007). How do robotic agents' appearances affect people's interpretations of the agents' attitudes?, In Extended Abstract of the ACM-CHI2007 (in work-in-progress session),pp.2519-2525.
    41. Komatsu, T., and Yamada, S. (2008). Effect of Agent Appearance on People's Interpretation of Agent's Attitude. In Extended Abstract of the ACM-CHI2008 (in work-in-progress session), pp. 2919-2924.
    42. Komatsu, T., and Yamada, S. (2008). How Does Appearance of Agents Affect How People Interpret the Agents' Attitudes? Experimental Investigation on Expressing the Same Information from Agents Having Different Appearance, In Proceedings of the 2008 IEEE Congress on Evolutionary Computation (IEEE CEC 2008) within 2008 IEEE World Congress on Computational Intelligence (WCCI 2008), pp. 1935-1940.
    43. Komatsu, T., and Yamada, S. (2008). People's Interpretations of Agents' Attitude from Artificial Sounds Expressed by Agents with Different Appearances, In Proceedings of the 30th Annual Meeting of the Cognitive Science Society (CogSci 2008), pp. 2492-2497.
    44. Komatsu, T., and Nambu, M. (2008). Effects of the Agents' Appearance on People's Estimations about the Agents' Abilities: Questionnaire Investigations for Liberal Arts and Informatics Students, In Proceedings of the 2008 IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2008), pp. 142-147.
    45. Komatsu, T., and Abe, Y. (2008). Comparing an On-screen Agent with a Robotic Agent in Non-face-to-face Interactions, In Proceedings of the 8th International Conference on Intelligent Virtual Agents (IVA08), pp. 498-504.
    46. Komatsu, T., Ohmoto, Y., Ueda, K., Okadome, T., Kamei, K., Yong, X., Sumi, H., and Nishida, T. (2008). In Proceedings of the 7th International Workshop on Social Interaction Design (SID08), (CD-ROM).
    47. Yong, X., Ohmoto, Y., Ueda, K., Komatsu, T., Okadome, T., Kamei, K., Okada, S.,Sumi, Y., and Nishida, T.(2008). Two-Layered Communicative Protocol Model in a Cooperative Directional Guidance Task, In Proceedings of the 7th International Workshop on Social Interaction Design (SID08), (CD-ROM).
    48. Mizuno, R., Ito, K., Akita, J., Ono, T., Komatsu, T., and Okamoto, M. (2008). Shape Perception using CyARM - Active Sensing Device, In Proceedings of the 6th International Conference of Cognitive Science (ICCS2008), pp. 182-185.
    49. Munekata, N., Komatsu, T., and Matsubara, H. (2008).An Interface System “Marching Bear” Evoking User's Emotional Attachment, SICE Annual Conference 2008 (SICE2008), pp. 477-480.
    50. Komatsu, T., and Kuki, N. (2009). Can Users React Toward an On-screen Agent as if They are Reacting Toward a Robotic Agent?, In Proceedings of the 4th ACM/IEEE International Conference on Human-Robot Interaction (HRI2009), pp.217-218.
    51. Mizuno, R., Ito, K., Akita, J., Ono, T., Komatsu, T., and Okamoto, M. (2009). User's Motion for Shape Perception using CyARM, In Proceedings of the HCI International 2009, pp. 185-191.
    52. Yong, X., Ohmoto, Y., Ueda, K., Komatsu, T., Okadome, T., Kamei, K., Okada, S., Sumi, Y., and Nishida, T. (2009). A Platform System for Developing a Collaborative Mutually Adaptive Agent, In Proceedings of the 22nd International Conference on Industrial, Engineering & Other Applications of Applied Intelligent Systems (IEA-AIE 2009), pp. 576-585.
    53. Komatsu, T., Kaneko, H., and Komeda, T. (2009). Investigating the Effects of Gain and Loss of Esteem on Human-Robot Interaction, In Proceedings of the 1st International Conference on Social Robotics (ICSR2009), pp.87-94.
    54. Komatsu, T., and Kuki, N. (2009). Investigating the Contributing Factors to Make Users React Toward an On-screen Agent as if They are Reacting Toward a Robotic Agent, In Proceedings of the 18th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN2009), pp. 651-656.
    55. Yong, X., Ohmoto, Y., Ueda, K., Komatsu, T., Okadome, T., Kamei, K., Okada, S., Sumi, Y., and Nishida, T. (2009). Actively Adaptive Agent for Human-Agent Collaborative Task, In Proceedings of the 2009 International Conference on Active Media Technology (AMT'09), pp. 19-30.
    56. Yong, X., Ohmoto, Y., Ueda, K., Komatsu, T., Okadome, T., Kamei, K., Okada, S., Sumi, Y., and Nishida, T. (2009). Establishing Adaptation Loop in Interaction between Human User and Adaptive Agent, In Proceedings of 2009 IEEE International Conference on Intelligent Computing and Intelligent Systems (ICIS 2009), pp. 647-651.
    57. Komatsu, T., and Seki, Y. (2010). Users' reactions toward an on-screen agent appearing on different media, In Proceedings of the 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI2010), pp. 163-164.
    58. Komatsu, T., and Yamada, S. (2010). Effects of Adaptation Gap on Users' Differences in Impressions of Artificial Agents, In Proceedings of the 14th World Multiconference on Systemics, Cybernetics and Informatics (WMSCI 2010), pp. 6-11.
    59. Komatsu, T., Yamada, S., Kobayashi, K., Funakoshi, K., and Nakano, M. (2010). Artificial Subtle Expressions: Intuitive Notification Methodology of Artifacts, In Proceedings of the 28th ACM Conference on Human Factors in Computing Systems (CHI2010), pp. 1941-1944.
    60. Funakoshi, K., Kobayashi, K., Nakano, M., Komatsu, T. and Yamada, S. (2010). Reducing Speech Collisions by Using an Artificial Subtle Expression in a Decelerated Spoken Dialogue: Should communication robots respond quickly?, In Proceedings of the 2nd International Symposium on New Frontiers in Human-Robot Interaction, pp. 34-41.
    61. Tomoto, Y., Nakamura, T., Kanoh, M., and Komatsu, T. (2010). Visualization of Similarity Relationships by Onomatopoeia Thesaurus Map, In Proceedings of the 2010 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE 2010), pp. 3304-3309.
    62. Komatsu, T., Yamada, S., Kobayashi, K., Funakoshi, K., and Nakano, M. (2010). Proposing Artificial Subtle Expressions as an Intuitive Notification Methodology for Artificial Agents' Internal States, In Proceedings of the 32nd Annual Meeting of the Cognitive Science Society (CogSci2010), pp. 447-452.
    63. Funakoshi, K., Kobayashi, K., Nakano, M., Komatsu, T. and Yamada, S. (2010). Non-humanlike Spoken Dialogue: a Design Perspective, In Proceedings of the 11th Annual SIGdial Meeting on Discourse and Dialogue (SIGDIAL2010), pp. 176-184.
    64. Okamoto, M., Komatsu, T., Ito, K., Akita, J. and Ono, T. (2011). FutureBody: Design of Perception Using the Human Body, In Proceedings of the 2nd Augmented Human International Conference (AH 2011), article No. a-35.
    65. Komatsu, T., Kurosawa, R., and Yamada, S. (2011). Difference between Users’ Expectations and Perceptions about a Robotic Agent (Adaptation Gap) Affect Their Behaviors, In Proceedings of the HRI 2011 workshop Expectations in intuitive human-robot interaction, published as PDF.
    66. Komatsu, T., Yamada, S., Kobayashi, K., Funakoshi, K., and Nakano, M. (2011). Effects of Different Types of Artifacts on Interpretations of Artificial Subtle Expressions (ASEs), In Extended Abstract of the ACM-CHI2011 (in work-in-progress session), pp. 1249-1254.
    67. Komatsu, T., Seki, Y., Sasama, Y., Yamaguchi, T., and Yamada, K. (2011). Investigation of Users' Reactions Toward Various Kinds of Artificial Agents: Comparison of a Robotic Agent with an On-screen Agent, In Proceedings of the HCI International 2011, pp. (2) 470-478.
    68. Kobayashi, K., Funakoshi, K., Yamada, S., Nakano, M., Komatsu, T. and Saito, Y. (2011). Blinking Light Patterns as Artificial Subtle Expressions in Human-Robot Speech Interaction, In Proceedings of the 20th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2011), pp. 181-186.
    69. Komatsu, T., Yamada, S., Kobayashi, K., Funakoshi, K., and Nakano, M. (2011). Interpretations of Artificial Subtle Expressions (ASEs) in Terms of Different Types of Artifact: A Comparison of An On-screen Artifact with A Robot, In Proceedings of the 4th International Conference on Affective Computing and Intelligent Interaction (ACII2011), pp. (2) 22-30.
    70. Komatsu, T. (2012). Quantifying Japanese Onomatopoeias: Toward Augmenting Creative Activities with Onomatopoeias, In Proceedings of the 3rd Augmented Human International Conference (AH12), article No. a-15, DOI: 10.1145/2160125.2160140.
    71. Komatsu, T., Kobayashi, K., Yamada, S., Funakoshi, K., and Nakano, M. (2012). Can Users Live with Overconfident or Unconfident Systems?: A Comparison of Artificial Subtle Expressions with Human-like Expression, In Extended Abstract of the ACM-CHI2012 (in work-in-progress session), pp. 1595-1600.
    72. Nakamura, S., and Komatsu, T. (2012). Study of Information Clouding Methods to Prevent Spoilers of Sports Match, In Proceedings of the 11th International Working Conference on Advanced Visual Interfaces (AVI2012), pp.661-664.
    73. Komatsu, T., Kobayashi, K., Yamada, S., Funakoshi, K., and Nakano, M. (2012). How Can We Live with Overconfident or Unconfident Systems?: A Comparison of Artificial Subtle Expressions with Human-like Expression, In Proceedings of the 34th annual meeting of the Cognitive Science Society (CogSci2012), pp. 1816-1821.
    74. Kobayashi, K., Funakoshi, K., Yamada, S., Nakano, M., Komatsu, T. and Saito, Y. (2012). Impressions Made by Blinking Light Used to Create Artificial Subtle Expressions and by Robot Appearance in Human-Robot Speech Interaction, In Proceedings of the 21st IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2012), pp. 215 - 220.
    75. Akita, K., Nakamura, S., Komatsu, T. and Hirata, S. (2012). A Quantitative Approach to Mimetic Diachrony, In Proceedings of the 22nd Japanese/Korean Linguistics Conference, abstract.
    76. Ito, J., Kanoh, M., Arisawa, R., Nakamura, T., and Komatsu, T. (2012). An Operation Plane Using a Neural Network for Intuitive Generation of Robot Motion, In Proceedings of the 6th International Conference on Soft Computing and Intelligent Systems and the 13th International Symposium on Advanced Intelligent Systems (SCIS-ISIS2012), pp.498-501.
    77. Komatsu, T. and Terashima, H. (2012). MOYA-MOYA Drawing: Proposing a drawing tool system which can utilize users' expressed onomatopoeias as a drawing effect, In Proceedings of the 6th International Conference on Soft Computing and Intelligent Systems and the 13th International Symposium on Advanced Intelligent Systems (SCIS-ISIS2012), pp.494-497.
    78. Komatsu, T. and Seki, Y. (2013). Directing Robot Motions with Paralinguistic Information, In Proceedings of the 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI2013), pp. 169-170.
    79. Komatsu T., and Kiyokawa, S. (2013). Quantifying Japanese Onomatopoeias Based on Sound Symbolism, In Proceedings of the 9th International Symposium on Iconicity in Language and Literature (Iconicity), to appear.
    80. Komatsu, T., and Takahashi, H. (2013). How Does Unintentional Eye Contact with a Robot Affect Users’ Emotional Attachment to It? Investigation on the Effects of Eye Contact and Joint Attention on users’ emotional attachment to a robot, In Proceedings of the 15th International Conference on Human-Computer Interaction (HCII2013), pp. 363-372.
    81. Yamada, S., Terada, K., Kobayashi, K., Komatsu, T., Funakoshi, K., and Nakano, M. (2013). Expressing a Robot's Confidence with Motion-based Artificial Subtle Expressions, In Extended Abstract of the ACM-CHI2013 (in work-in-progress session), pp. 1023-1028.
    82. Sakamoto, D., Komatsu, T., and Igarashi, T. (2013). Voice Augmented Manipulation: Using Paralinguistic Information to Manipulate Mobile Devices, In Proceedings of the 15th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI 2013), pp. 69-78.
    83. Hirata-Mogi, S., Nakamura, S., Komatsu, T., and Akita, K. (2013). Comparison of onomatopoeia use among areas using Diet records of Japan, In Proceedings of the 2nd Asian Conference on Information Systems (ACIS2013), pp. 641-645.
    84. Komatsu, T., and Inaoka, S. (2013). Why are specific onomatopoeias evoked from specific images? - Investigating correlations between image features and quantified onomatopoeias, In Proceedings of the 2nd Asian Conference on Information Systems (ACIS2013), pp. 631-634.
    85. Komatsu, T., Kobayashi, K., Yamada, S., Funakoshi, K., and Nakano, M. (2014). Augmenting Expressivity of Artificial Subtle Expressions (ASEs): Preliminary Design Guideline for ASEs, In Proceedings of the 5th Augmented Human International Conference (AH2014), a-40.
    86. Komatsu, T. (2015). Choreographing Robot Behaviors by Means of Japanese Onomatopoeias, In Extended Abstract of the 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI2015), pp. 23-24.
    87. Komatsu, T., and Kuwahara, C. (2015). Extracting Users' Intended Nuances from Their Expressed Movements: In Quadruple Movements, In Proceedings of the 6th Augmented Human International Conference (AH2015), pp. 175-176.
    88. Komatsu, T., Prada, R., Kobayashi, K., Yamada, S., Funakoshi, K., and Nakano, M. (2015). Is Interpretation of Artificial Subtle Expressions Language-Independent? : Comparison among Japanese, German, Portuguese, and Mandarin Chinese, In Extended Abstract of the ACM-CHI2015 (in work-in-progress session), to appear.
    89. Komatsu, T., Prada, R., Kobayashi, K., Yamada, S., Funakoshi, K., and Nakano, M. (2015). Investigating Ways of Interpretations of Artificial Subtle Expressions Among Different Languages: A Case of Comparison Among Japanese, German, Portuguese and Mandarin Chinese, In Proceedings of the 37th annual meeting of the Cognitive Science Society (CogSci2015), to appear.

    Refereed Domestic Conferences

    1. 秋山広美・小松孝徳 (2009).ユーザの直感的表現を支援するオノマトペ意図理解システム,インタラクション2009,インタラクティブ発表 (A14).
    2. 中村聡史・小松孝徳 (2012).スポーツの勝敗にまつわるネタバレ防止手法の検討,インタラクション2012,口頭発表.

    Books

    1. 植田一博・小松 孝徳 (2006).共発達の構成論.鈴木 宏昭(編) 『知性の創発と起源(知の科学シリーズ)』(オーム社)所収(第7章), 179-203,ISBN-13: 978-4274202698.
    2. Ueda, K., and Komatsu, T. (2007). Mutual Adaptation: A New Criterion for Designing and Evaluating Human-Computer Interaction, Engineering Approaches to Conversational Informatics, In T. Nishida (Ed.), John Wiley & Sons, Ltd, 381-402, ISBN-13: 978-0470026991.
    3. Yamada, S., and Komatsu, T. (2007). Designing simple and effective expression of robot's primitive minds to a human, In N. Sarkar (Ed.), Human-robot Interaction, In-Tech Education and Publishing, 481-496, ISBN 978-3-902613-13-4.
    4. 山田誠二・角所考・小野哲雄・竹内勇剛・小松孝徳(2007).HAIの方法論.山田誠二(監著)『人とロボットの<間>をデザインする』(東京電機大学出版)所収(第2章)23-66. 
    5. 小松孝徳 (2007).人間の直感を揺さぶるエージェント.山田誠二(監著)『人とロボットの<間>をデザインする』(東京電機大学出版)所収(第5章),114-144, ISBN-13: 978-4501543808. 
    6. Komatsu, T. (2010). Comparison of an On-screen Agent with a Robotic Agent in an Everyday Interaction Style: How to Make Users React Toward an On-screen Agent as if They are Reacting Toward a Robotic Agent, In D. Chugo (Ed.), Human-Robot Interaction, INTECH, 85-100, ISBN: 978-953-307-051-3.

    Review Articles

    1. 小松孝徳・開一夫・岡夏樹 (2002). 人間とロボットとの円滑なコミュニケーションを目指して, 『人工知能学会誌』, vol.17 (6), 679-686.
    2. 山田誠二・角所考・小松孝徳 (2006). 人間とエージェントの相互適応と適応ギャップ,『人工知能学会誌』,Vol.21(6), 648-653.
    3. 小松孝徳 (2007).人間と人工物のインタラクション−私のブックマーク特集,『人工知能学会誌』,Vol. 22 (4), 565-569.
    4. 小松孝徳,棟方渚(2008).ポジティブ・バイオフィードバックのエンタテイメント利用,『人工知能学会誌』,Vol.23 (3), 342-347.
    5. 小松孝徳 (2008).文献紹介 - Fostering Common Ground in Human-Computer Interaction by Sara Kiesler,『人工知能学会誌』,Vol.23 (4), 581-582.
    6. 小松孝徳 (2008) .書評―感情:人を動かしている適応プログラム,『認知科学』,Vol.15 (4), 716-718.
    7. 小松孝徳(2009).HAIにおける心理学実験と生体情報:ユーザを知るということ,『人工知能学会誌』,Vol. 24 (6), 833-839.
    8. 小松孝徳(2009).書評―ソーシャルブレインズ,『認知科学』,Vol. 16 (4), 530-531.

    Grants-in-Aid for Scientific Research (KAKENHI)

    1. 2005〜2006年:科学研究費・若手研究(B)(研究代表者,課題名「人間の認知的な特性を利用してユーザの自然なふるまいを誘発するインタフェースの提案」)2,000千円
    2. 2005〜2008年:科学研究費・基盤研究(B)(研究分担者,課題名「CyARM 非視覚的モダリティを用いた空間印象認識装置の研究」,研究代表者:岡本 誠(公立はこだて未来大学))22,000千円
    3. 2006〜2007年:科学研究費・萌芽研究(研究分担者,課題名「エージェントの態度表出における外見と表現の関係の実験的解明」,研究代表者:山田 誠二(国立情報学研究所))3,800千円
    4. 2007〜2008年:科学研究費・若手研究(B)(研究代表者,課題名「対話エージェントが現れるメディアの違いがユーザに与える影響の考察」)2,100千円
    5. 2009〜2010年:科学研究費・若手研究(B)(研究代表者,課題名「ユーザの直感的表現を支援するオノマトペ意図理解システムの開発」)3,400千円
    6. 2010〜2012年:科学研究費・基盤研究(C)(研究分担者,研究代表者:岡本誠(公立はこだて未来大学),課題名「知覚デザイン:非視覚モダリティを用いた知覚拡張インターフェースの研究」)3,400千円
    7. 2011〜2012年:科学研究費・若手研究(B)(研究代表者,課題名「人工エージェントの内部状態を直観的かつ正確に伝達する低コストな表現手法の提案」)3,300千円
    8. 2010〜2014年:科学研究費・基盤研究(C)(研究分担者,研究代表者:秋田純一(金沢大学),課題名「擬似的な不規則配置を持つ高画質な映像システムの開発」),4,100千円.
    9. 2013〜2015年:科学研究費・基盤研究(C)(研究代表者,課題名「ユーザの操作意図を漏れなく情報機器に伝達することができる音声入力手法の提案」)3,700千円
    10. 2013〜2015年:科学研究費・挑戦的萌芽研究(研究分担者,研究代表者:中村聡史(明治大学),課題名「情報の曖昧化に関する研究」),3,500千円.
    11. 2015〜2017年:科学研究費・基盤研究(C)(研究分担者,研究代表者:小林稔(明治大学),課題名「大きさの印象を共有可能とする画像インタフェース手法の提案」),3,500千円.