To conduct interactive operations with rich variations depending on the emotional state of the user.
A voice recognition section 2 recognizes the user's voice and extracts phoneme information from it. An interactive control section 3 extracts conceptual information from the words and phrases contained in the recognition result obtained by the section 2. An image inputting section 6 photographs the user's face and outputs face image information, and a physiological information inputting section 7 detects physiological information such as the user's pulse rate. A user feeling information updating section 8 then estimates the user's feeling on the basis of the phoneme, conceptual, face image, and physiological information. Based on the estimated feeling, the section 3 and a sentence generating section 4 generate an output sentence and present it to the user.
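The abstract above describes a pipeline that fuses four information sources (phonemes, word concepts, face image, physiological data) into a feeling estimate that then steers sentence generation. A minimal sketch of that flow is shown below; every class, function, threshold, and scoring rule here is a hypothetical illustration, not taken from the patent itself.

```python
# Hypothetical sketch of the pipeline described in the abstract.
# All names, scores, and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class UserInputs:
    phonemes: list       # phoneme information from the voice recognition section
    concepts: list       # word/phrase concepts from the interactive control section
    face_smile: float    # 0..1 smile score derived from the face image information
    pulse_rate: int      # beats per minute from the physiological information section

def estimate_feeling(inputs: UserInputs) -> str:
    """Combine the four information sources into a coarse feeling label."""
    score = 0.0
    # Conceptual information: positive words raise the estimate.
    if any(word in inputs.concepts for word in ("great", "happy", "good")):
        score += 1.0
    # Face image information: smiling raises the estimate.
    score += inputs.face_smile
    # Physiological information: a very high pulse rate lowers the estimate
    # (treated here, arbitrarily, as a sign of stress).
    if inputs.pulse_rate > 100:
        score -= 0.5
    if score >= 1.0:
        return "positive"
    return "neutral" if score >= 0.0 else "negative"

def generate_sentence(feeling: str) -> str:
    """Sentence generating section: vary the reply with the estimated feeling."""
    replies = {
        "positive": "Glad to hear it! What would you like to do next?",
        "neutral": "I see. Please tell me more.",
        "negative": "I'm sorry to hear that. Can I help with anything?",
    }
    return replies[feeling]

# Example interaction: a smiling user says something recognized as "great".
inputs = UserInputs(phonemes=["g", "r", "ei", "t"], concepts=["great"],
                    face_smile=0.8, pulse_rate=72)
feeling = estimate_feeling(inputs)
print(generate_sentence(feeling))
```

The point of the sketch is only the data flow: per-modality evidence is merged into one estimate, and the estimate selects among differently toned output sentences, which is what gives the interaction its variation.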
AOYANAGI SEIICHI
TANAKA MIYUKI
YOKONO JUN
OE TOSHIO
JPH0981632A | 1997-03-28
JPH0956703A | 1997-03-04
JPH11305985A | 1999-11-05 |