Interactive Visualization and Sonification for Monitoring Complex Processes


Authors: Thomas Hermann, Christian Niehus and Helge Ritter
to appear in Proceedings of the International Conference on Auditory Display, ICAD 2003, July 6-9, 2003, Boston, MA, USA
 

This page provides the sound examples briefly described in section 5 of the paper.
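Each scenario below is documented as a timeline of module events: an elapsed time, a module group (such as "Visual Attention" or "Robot Arm"), a module name, and a short activity description. As a minimal sketch of that structure (the class and field names are illustrative, not part of the actual AVDisplay system), such a timeline can be represented and replayed in order like this:

```python
from dataclasses import dataclass

@dataclass
class ModuleEvent:
    """One row of a scenario timeline: elapsed time plus module activity."""
    seconds: int   # elapsed time since the recording started
    group: str     # e.g. "Visual Attention", "Robot Arm", "Integration"
    module: str    # module name, e.g. "mainpat", "PicsFromEyes"
    message: str   # short description of the activity

# A few events transcribed from the "blue cube" timeline below
events = [
    ModuleEvent(0,  "Visual Attention", "(first module)", "starts the attention loop"),
    ModuleEvent(19, "Robot Arm",        "(first module)", "moves to a predicted position"),
    ModuleEvent(26, "Robot Arm",        "mainpat",        "asks the hand camera for a correction"),
]

# Replay the timeline in temporal order, formatted like the tables on this page
for e in sorted(events, key=lambda e: e.seconds):
    print(f"{e.seconds // 60} min. {e.seconds % 60:02d} sec.  {e.group}/{e.module}: {e.message}")
```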
 

  • "blue cube" scenario
  • "red cube" scenario
  • "bad lighting" scenario
  • "module absent" scenario



    "blue cube" scenario

    File/Track:
    Simple Sonification:
      AVDisplay-BlueCube1Phrase-simple.wav: phrase 00:05:10..00:06:12 (6.36 MB)
      AVDisplay-BlueCube1-simple.mp3: complete scenario (3.00 MB)
    Musical Sonification:
      AVDisplay-BlueCube1Phrase-musical.wav: phrase 00:05:10..00:06:12 (5.46 MB)
      AVDisplay-BlueCube1-musical.mp3: complete scenario (3.07 MB)
    Description:
    Elapsed Time    Current Activity
    0 min. 00 sec. The first module of the "Visual Attention" group starts the attention loop.
    0 min. 01 sec. The first module of the "Integration" group is initialized.
    0 min. 03 sec. A new object (an aggregate of an unknown number of pieces) enters the view of the attention loop and is therefore added to the object memory.
    0 min. 03 sec. The same object leaves the hand model, since an object can't be in the robot hand and on the table at the same time.
    0 min. 19 sec. The first module of the "Robot Arm" group moves the robot arm to a predicted position.
    0 min. 26 sec. The "mainpat" module of the "Robot Arm" group asks the hand camera for a position correction.
    0 min. 51 sec. Because the "Robot Arm" module "state batch" is absent, the user initializes the modules of this group.
    0 min. 51 sec. The "mainpat" module notifies the user that the "Robot Arm" group is ready to move to a predicted position.
    2 min. 17 sec. A user instructs the system to grasp a blue cube by verbal and gesture interaction.
    3 min. 48 sec. A new "Speech Understanding" module establishes a connection to the "Visual Attention" group. Its task is to combine a linguistic phrase with a gesture phrase.
    4 min. 04 sec. A new "Visual Attention" module "LookForHand" is initialized to detect the human hand that performs the gesture interaction.
    4 min. 04 sec. Together with the "LookForHand" module, the "Get3DPoint" module is initialized. Its task is to compute the 3D position of the finger-tip.
    4 min. 11 sec. The task, including the gesture, is now completely known.
    5 min. 10 sec. A new verbal and gesture interaction gives the system a new instruction.
    5 min. 28 sec. The evaluation of the stereo-camera image yields an inconsistent 3D position for the finger-tip.
    5 min. 33 sec. A recomputation of the finger-tip position supplies a valid 3D point.
    5 min. 40 sec. The "Integration" module "M7-ResourceCtrl" transmits the coordinates to the "Robot Arm" group.
    6 min. 05 sec. The robot arm drives to the predicted object position.
    6 min. 11 sec. The "mainpat" module of the "Robot Arm" group asks the hand camera for a position correction.
    Duration: about 6 minutes and 20 seconds


    "red cube" scenario

    File/Track:
    Simple Sonification:
      AVDisplay-RedCube1Phrase-simple.wav: phrase 00:00:51..00:02:03 (5.46 MB)
      AVDisplay-RedCube1-simple.mp3: complete scenario (985 KB)
    Musical Sonification:
      AVDisplay-RedCube1Phrase-musical.wav: phrase 00:00:51..00:02:03 (5.54 MB)
      AVDisplay-RedCube1-musical.mp3: complete scenario (920 KB)
    Description:
    Elapsed Time    Current Activity
    0 min. 00 sec. The first module of the "Visual Attention" group starts the attention loop.
    0 min. 04 sec. A blue cube leaves the view of the attention group and is therefore deleted from the object memory.
    0 min. 04 sec. The same object enters the memory of the hand model.
    0 min. 07 sec. A user instructs the system to grasp a red cube by verbal and gesture interaction.
    0 min. 07 sec. A new "Speech Understanding" module establishes a connection to the "Visual Attention" group.
    0 min. 16 sec. A new "Visual Attention" module "LookForHand" is initialized to detect the human hand that performs the gesture interaction.
    0 min. 19 sec. Together with the "LookForHand" module, the "Get3DPoint" module, which computes the 3D position of the finger-tip, is also initialized.
    0 min. 24 sec. The task, including the gesture, is now completely known (the "Speech Understanding" module "M7-whypmem" reports that a gesture was found).
    0 min. 25 sec. The "Integration" module "M7-ResourceCtrl" transmits the coordinates to the "Robot Arm" group.
    0 min. 39 sec. The last "Visual Attention" module finishes its loop. In this and the following scenarios, the visual attention loop stops when the robot arm begins moving and resumes its observation afterwards.
    0 min. 52 sec. The modules of the "Robot Arm" group are initialized.
    0 min. 53 sec. The "mainpat" module notifies the user that the "Robot Arm" group is ready to move to a predicted position.
    0 min. 57 sec. The robot arm drives to the predicted object position.
    1 min. 02 sec. The "mainpat" module of the "Robot Arm" group asks the hand camera for a position correction.
    1 min. 05 sec. The "Robot Arm" module "ps2serv" performs the derived correction movement.
    1 min. 18 sec. After 7 correction movements, the "Robot Arm" module "mainpat" reports that the best correction has been reached.
    1 min. 19 sec. The "Robot Arm" module "state batch" generates a grasping action.
    1 min. 21 sec. The red cube is raised by the robot.
    1 min. 24 sec. The robot drives to another position. After this movement it will return to its previous position.
    1 min. 27 sec. The robot sets the red cube down at its original position.
    1 min. 29 sec. The robot returns to its home position.
    1 min. 34 sec. The "mainpat" module notifies the user that the "Robot Arm" group is ready to move to a predicted position.
    1 min. 46 sec. The modules of the "Visual Attention" group resume their observation.
    Duration: about 2 minutes
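In this timeline, the "mainpat" module repeatedly asks the hand camera for a correction until the best correction has been reached after 7 movements. A minimal sketch of such an iterative position-correction loop (all function names, the gain, the tolerance, and the step budget are illustrative assumptions, not taken from the actual system):

```python
def correct_position(measure_offset, move_by, max_steps=7, tol=0.5):
    """Ask the hand camera for a position offset and apply a damped
    correction movement until the offset is within tolerance or the
    step budget is exhausted. Returns the number of movements made."""
    for step in range(1, max_steps + 1):
        offset = measure_offset()      # hand-camera position error (e.g. mm)
        if abs(offset) <= tol:         # good enough: stop correcting
            return step - 1
        move_by(-offset * 0.6)         # damped correction movement
    return max_steps

# Toy stand-ins for the hand camera and the robot arm
state = {"pos": 10.0}
moves = correct_position(
    lambda: state["pos"],
    lambda d: state.__setitem__("pos", state["pos"] + d),
)
```

With these toy stand-ins the loop converges in a few damped steps; the real system instead stops once the best achievable correction is reached.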


    "bad lighting" scenario

    File/Track:
    Simple Sonification:
      AVDisplay-BadLighting1-simple.wav: complete scenario (9.18 MB)
      AVDisplay-BadLighting1-simple.mp3: complete scenario (833 KB)
    Musical Sonification:
      AVDisplay-BadLighting1-musical.wav: complete scenario (9.22 MB)
      AVDisplay-BadLighting1-musical.mp3: complete scenario (837 KB)
    Description:
    Elapsed Time    Current Activity
    0 min. 00 sec. The first module of the "Visual Attention" group starts the attention loop. This is a different module than in the other scenarios. The recording starts when the attention loop is already running.
    0 min. 07 sec. The "Visual Attention" module "PicsFromEyes" notifies that the lighting condition is deteriorating.
    0 min. 27 sec. Another "Visual Attention" module reports a bad condition, following the one reported by module "PicsFromEyes".
    0 min. 39 sec. A user instructs the system to grasp a red cube by verbal and gesture interaction.
    0 min. 39 sec. The "Integration" module "M7-ResourceCtrl" starts the search for the communicated gesture.
    1 min. 25 sec. The "Integration" module "M7-ResourceCtrl" stops the search for the communicated gesture (no gesture could be found).
    1 min. 27 sec. The task, including the gesture, can't be carried out (the "Speech Understanding" module "M7-whypmem" reports that no gesture was found).
    Duration: about 1 minute and 45 seconds


    "module absent" scenario

    File/Track:
    Simple Sonification:
      AVDisplay-ModuleAbsent1Phrase-simple.wav: phrase 00:02:15..00:03:25 (6.13 MB)
      AVDisplay-ModuleAbsent1-simple.mp3: complete scenario (1.65 MB)
    Musical Sonification:
      AVDisplay-ModuleAbsent1Phrase-musical.wav: phrase 00:02:15..00:03:25 (6.17 MB)
      AVDisplay-ModuleAbsent1-musical.mp3: complete scenario (1.65 MB)
    Description:
    Elapsed Time    Current Activity
    0 min. 00 sec. The first module ("handcam0") of the "Robot Arm" group is instantiated.
    0 min. 05 sec. After three "info" messages the module "handcam0" transmits an "exit" message.
    0 min. 07 sec. Three unknown NEO/NST modules start their "loop" activities. Unknown modules are always sonified by the simple sonificator.
    0 min. 20 sec. One of the unknown NEO/NST modules reports a state change, followed by an "action" message.
    0 min. 48 sec. After three further state changes of one of the unknown NEO/NST modules, another unknown NEO/NST module performs a similar activity.
    1 min. 00 sec. After the unknown module activities have stopped, the "Visual Attention" module "PicsFromEyes" starts the attention loop.
    1 min. 05 sec. The first "Integration" module "M7-ObjectMem" is initialized.
    1 min. 07 sec. The "Integration" module "M7-Handmodelle" is initialized.
    1 min. 09 sec. The "Integration" module "nbg_VIEW.3D.avdtime" is initialized.
    1 min. 11 sec. The modules of the "Speech Understanding" group and another "Integration" module are initialized.
    1 min. 39 sec. The modules of the "Robot Arm" group except the module "handcam0" are initialized.
    1 min. 40 sec. The "mainpat" module notifies the user that the "Robot Arm" group is ready to move to a predicted position.
    2 min. 19 sec. A user instructs the system to grasp a red cube by verbal and gesture interaction.
    2 min. 28 sec. The "Visual Attention" module "LookForHand" reports the position of the hand used for the gesture.
    2 min. 30 sec. Together with the "LookForHand" module, the "Get3DPoint" module computes the 3D position of the finger-tip. The coordinates are inconsistent, so the module sends an "error" message.
    2 min. 34 sec. Another evaluation of the hand results in new coordinates.
    2 min. 36 sec. The module "Get3DPoint" now derives a correct position of the finger-tip.
    2 min. 40 sec. The task, including the gesture, is now completely known (the "Speech Understanding" module "M7-whypmem" reports that a gesture was found).
    2 min. 41 sec. The "Integration" module "M7-ResourceCtrl" transmits the coordinates to the "Robot Arm" group.
    2 min. 57 sec. The robot arm drives to the predicted object position.
    3 min. 02 sec. The "mainpat" module of the "Robot Arm" group asks the hand camera for a position correction.
    3 min. 04 sec. Because of the absent module "handcam0", the "Robot Arm" module "mainpat" reports that the communicated object can't be found.
    3 min. 04 sec. The "mainpat" module of the "Robot Arm" group asks the hand camera once more for a position correction.
    3 min. 06 sec. Because of the absent module "handcam0", the "Robot Arm" module "mainpat" reports again that the communicated object can't be found.
    3 min. 07 sec. The robot returns to its home position.
    3 min. 12 sec. The "mainpat" module notifies the user that the "Robot Arm" group is ready to move to a predicted position.
    3 min. 16 sec. The modules of the "Visual Attention" group resume their observation.
    Duration: about 3 minutes and 30 seconds
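The timeline above notes that unknown NEO/NST modules are always handled by the simple sonificator, while events from known module groups are rendered by whichever sonification is selected. A hedged sketch of that dispatch rule (the group names come from this page; the function names and return strings are illustrative stand-ins for actual sound rendering):

```python
# Module groups named on this page; events from any other source are "unknown"
KNOWN_GROUPS = {"Visual Attention", "Integration", "Robot Arm", "Speech Understanding"}

def simple_sonification(event):
    """Placeholder for the simple sonificator's rendering of an event."""
    return f"simple: {event}"

def musical_sonification(event):
    """Placeholder for the musical sonificator's rendering of an event."""
    return f"musical: {event}"

def sonify(group, event, mode="musical"):
    """Route an event to a sonificator. Events from unknown modules
    always fall back to the simple sonificator, regardless of mode."""
    if group not in KNOWN_GROUPS:
        return simple_sonification(event)
    if mode == "musical":
        return musical_sonification(event)
    return simple_sonification(event)
```

For example, an unknown NEO/NST module's "loop" message would be routed to the simple sonificator even when the musical sonification is active, matching the behaviour described at 0 min. 07 sec. above.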



    last modified 2003-02-10 by Christian Niehus