The robot video system utilizes several image and video processing modules. Depending on the control system configuration, the robot.video structure can differ. The following modules can be used: UImageTool, UCamera, UKinect (Kinect SDK), UObjectDetector, UMoveDetector, UColorDetector, UFacET.

Main video functions

   // run this function to start all image capture and video processing 
robot.video.Run, 
   // stop video data polling
robot.video.Stop;

Camera device

This part of the robot video structure contains a full copy of the UCamera module. If you are going to use an RGB camera, this module must be enabled in the robot configuration.

    // access image
robot.video.camera.image;
   // 1 if camera gets new image
robot.video.camera.notify; 
   // set max image refresh in fps (def. 100)
robot.video.camera.fps; 
   // rotate image (0 - 0 deg, 1 - 90 deg, 2 - 180 deg, 3 - 270 deg)
robot.video.camera.imgFlip; 
   // returns image width in pixels
robot.video.camera.width; 
   // returns image height in pixels
robot.video.camera.height; 
   // capture a new image; use this function to capture manually
robot.video.camera.getImage(); 
   // get capture property
robot.video.camera.getCaptureProperty(value); 
   // set capture property
robot.video.camera.setCaptureProperty(property_id, value); 
   // property_id:
CV_CAP_PROP_POS_MSEC = 0,
CV_CAP_PROP_POS_FRAMES = 1,
CV_CAP_PROP_POS_AVI_RATIO = 2,
CV_CAP_PROP_FRAME_WIDTH = 3,
CV_CAP_PROP_FRAME_HEIGHT = 4,
CV_CAP_PROP_FPS = 5,
CV_CAP_PROP_FOURCC = 6,
CV_CAP_PROP_FRAME_COUNT = 7,
CV_CAP_PROP_FORMAT = 8,
CV_CAP_PROP_MODE = 9,
CV_CAP_PROP_BRIGHTNESS = 10,
CV_CAP_PROP_CONTRAST = 11,
CV_CAP_PROP_SATURATION = 12,
CV_CAP_PROP_HUE = 13,
CV_CAP_PROP_GAIN = 14,
CV_CAP_PROP_EXPOSURE = 15,
CV_CAP_PROP_CONVERT_RGB = 16,
CV_CAP_PROP_WHITE_BALANCE_BLUE_U = 17,
CV_CAP_PROP_RECTIFICATION = 18,
CV_CAP_PROP_MONOCROME = 19,
CV_CAP_PROP_SHARPNESS = 20,
CV_CAP_PROP_AUTO_EXPOSURE = 21,
CV_CAP_PROP_GAMMA = 22,
CV_CAP_PROP_TEMPERATURE = 23,
CV_CAP_PROP_TRIGGER = 24,
CV_CAP_PROP_TRIGGER_DELAY = 25,
CV_CAP_PROP_WHITE_BALANCE_RED_V = 26.
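
For example, the camera brightness can be read and changed through the property interface (a sketch; the valid value range depends on the camera driver):

   // read the current brightness (CV_CAP_PROP_BRIGHTNESS = 10)
var brightness = robot.video.camera.getCaptureProperty(10);
   // set a new brightness value (example value, driver dependent)
robot.video.camera.setCaptureProperty(10, 0.6);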

Kinect device (raw)

This part of the robot video structure contains a full copy of the UKinect module. See the module documentation for details. If you are going to use human, 3D face, or gesture detection, UKinect must be enabled in the robot configuration. The Kinect part of the robot video structure also requires UImageTool.

    // get image from RGB kinect camera
robot.video.kinect.colorImage;
   // get skeleton image
robot.video.kinect.skeletonImage;
   .
   .
   .
   // see the UKinect documentation for a description of all functions

The following raw functions enable visualization on the Kinect device images (disabled by default to avoid using system resources).

    // enable/disable depth visualization (depthImage)
robot.video.kinect.depthVisualization;
    // enable/disable skeleton visualization (skeletonImage)
robot.video.kinect.skeletonVisualization;
    // enable/disable face visualization (faceImage)
robot.video.kinect.faceVisualization;
    // enable/disable interaction visualization (interImage)
robot.video.kinect.interactionVisualization;
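
For example, to obtain a drawable skeleton preview (a sketch; remember that visualization consumes extra system resources):

   // enable skeleton drawing, then read the visualized image
robot.video.kinect.skeletonVisualization = true;
var img = robot.video.kinect.skeletonImage;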
 

Human detector

This part of the structure is based on the Kinect device and is an extension of the raw data gathered from it. The human detector fuses data from the skeleton and face detectors. You can also use only one of them, for example pause 3D face data processing to save power in a dark environment (the face detector requires good lighting conditions). On the other hand, if your body is not fully within the Kinect's view, the face detector can still detect users. To use face tracking you must enable this option in the robot configuration.

   // enable human detector (true,false)
robot.video.humanDetector.enable;
   // returns true if detect human
robot.video.humanDetector.visible;
   // get detected user ID
robot.video.humanDetector.user;
   // get human 3D position relative to the Kinect device [mm]
robot.video.humanDetector.position;
   // get human position on the image [pixels]
robot.video.humanDetector.positionOnImage;
   // set tracking mode
   // 0 - full body
   // 1 - upper body (seated mode)
robot.video.humanDetector.trackingMode;
   // set user chooser mode
   // 0 - default (new skeleton gives new tracking candidate)
   // 1 - track the closest skeleton
   // 2 - track two closest skeletons
   // 3 - track the one skeleton and keep it
   // 4 - track two skeletons and keep them
   // 5 - track the most active skeleton
   // 6 - track two most active skeletons
robot.video.humanDetector.chooserMode;
   // get human orientation (yaw, pitch) relative to the robot head [deg]
   // use this function to follow user by robot head 
robot.video.humanDetector.orientation;
   // get human detector image 
   // if you want to see this image set 
   // robot.video.kinect.skeletonVisualization = true;
robot.video.humanDetector.image;
   // pause face tracking if you don't need it
robot.video.humanDetector.faceTrackingPause;
   // returns 0 if hand is down
   // 1 if right hand is above neck
   // 2 if left hand is above neck
   // 3 if both hands are above neck
robot.video.humanDetector.isHandAboveNeck;
   // returns 0 if hand is down
   // 1 if right hand is above head
   // 2 if left hand is above head
   // 3 if both hands are above head 
robot.video.humanDetector.isHandUp;
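
A short usage sketch: enable the detector, track the closest user, and react when a hand is raised above the head (the reaction itself is only illustrative):

robot.video.humanDetector.enable = true;
   // track the closest skeleton
robot.video.humanDetector.chooserMode = 1;
   // react whenever any hand goes above the user's head
whenever (robot.video.humanDetector.visible && robot.video.humanDetector.isHandUp > 0)
{
  echo("User raised a hand");
};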

Right and left hand detector

This part of the structure is also an extension of the raw data gathered from the Kinect device. 

   // returns true if right/left hand is visible
robot.video.humanDetector.hand[right|left].visible;
   // get right/left hand 3D position relative to the Kinect device [mm]
robot.video.humanDetector.hand[right|left].position;
   // get right/left hand position on the image [pixels]
robot.video.humanDetector.hand[right|left].positionOnImage;
   // get right/left hand orientation (yaw, pitch) relative to the robot head [deg]
   // use this function to follow user right/left hand by robot head 
robot.video.humanDetector.hand[right|left].orientation;

Functions for right/left hand object color detection

   // get right/left hand object color value in RGB
robot.video.humanDetector.hand[right|left].color.value;
   // set right/left hand color detector window
robot.video.humanDetector.hand[right|left].color.window;
   // set right/left hand color detector blur operation value
robot.video.humanDetector.hand[right|left].color.blur;
   // get right/left hand color detector image (updated every get color.value function call)
robot.video.humanDetector.hand[right|left].color.image;

This is the interaction (gesture recognition) section. If you want to use the interaction features, you must enable the Kinect interaction mode in the robot configuration.

   // returns true if hand is tracked
robot.video.humanDetector.hand[right|left].interaction.tracked;
   // returns true if hand is active
robot.video.humanDetector.hand[right|left].interaction.active;
   // hand is in the interactive zone and is actively being monitored for interaction
robot.video.humanDetector.hand[right|left].interaction.interactive;
   // hand is in a pressed state
robot.video.humanDetector.hand[right|left].interaction.pressed;
   // get the progress toward a press action relative to the UI
   // this value is calculated from the raw position data
robot.video.humanDetector.hand[right|left].interaction.press;
   // get event status
   // 0 - none
   // 1 - grip event
   // 2 - grip release
robot.video.humanDetector.hand[right|left].interaction.event;
   // get the X,Y coordinate of the hand pointer relative to the UI
   // this value is calculated from the raw position data
robot.video.humanDetector.hand[right|left].interaction.x;
robot.video.humanDetector.hand[right|left].interaction.y;
   // get the raw unadjusted horizontal and vertical position of the hand
   // there are no units associated with this value
robot.video.humanDetector.hand[right|left].interaction.rawX;
robot.video.humanDetector.hand[right|left].interaction.rawY;
   // get the unadjusted extension of the hand. Values range from 0.0 to 1.0, 
   // where 0.0 represents the hand being near the shoulder, and 1.0 represents 
   // the hand being fully extended. There are no units associated with this value
robot.video.humanDetector.hand[right|left].interaction.rawZ;
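
For example, grip and grip-release events of the right hand can be handled like this (a sketch, assuming the Kinect interaction mode is enabled in the robot configuration):

   // react to grip (1) and grip-release (2) events of the right hand
at (robot.video.humanDetector.hand[right].interaction.event == 1)
  echo("Right hand: grip");
at (robot.video.humanDetector.hand[right].interaction.event == 2)
  echo("Right hand: grip released");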

Torso detector

This part of the structure is an extension of the raw data gathered from the Kinect device. 

   // returns true if user torso is visible
robot.video.humanDetector.torso.visible;
   // get user torso 3D position relative to the Kinect device [mm]
robot.video.humanDetector.torso.position;
   // get user torso position on the image [pixels]
robot.video.humanDetector.torso.positionOnImage;
   // get user torso orientation (yaw, pitch) relative to the robot head [deg]
   // use this function to follow user torso by robot head 
robot.video.humanDetector.torso.orientation;

Functions for user torso object color detection

   // get user torso object color value in RGB
robot.video.humanDetector.torso.color.value;
   // set user torso color detector window
robot.video.humanDetector.torso.color.window;
   // set user torso color detector blur operation value
robot.video.humanDetector.torso.color.blur;
   // get user torso color detector image (updated every get color.value function call)
robot.video.humanDetector.torso.color.image;

Head detector

This part of the structure is an extension of the raw data gathered from the Kinect device. 

   // returns true if user head is visible (from user skeleton detector)
robot.video.humanDetector.head.visible;
   // returns true if the face is being tracked in 3D
robot.video.humanDetector.head.faceIsTracking;
   // get user head 3D position relative to the Kinect device [mm]
   // if the head was detected by the face detector, it returns [x,y,z,pitch,yaw,roll]
   // if detection is based on the skeleton detector only, it returns an [x,y,z] position vector 
robot.video.humanDetector.head.position;
   // get user head position on the image [pixels]
robot.video.humanDetector.head.positionOnImage;
   // get user head orientation (yaw, pitch) relative to the robot head [deg]
   // use this function to follow user head by robot head 
robot.video.humanDetector.head.orientation;
   // returns true if the user's face is oriented toward the robot head
robot.video.humanDetector.head.oriented;
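
A brief sketch: report when a visible user turns his or her face toward the robot (the reaction is only illustrative):

   // react when the user's face is turned toward the robot head
whenever (robot.video.humanDetector.head.visible && robot.video.humanDetector.head.oriented)
{
  echo("The user is looking at the robot");
};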

Functions for user face features

   // get face image 
robot.video.humanDetector.head.image;
   // set human face image scale (window size)
robot.video.humanDetector.head.scale;
   // get user face points, see UKinect documentation
robot.video.humanDetector.head.facePoints;
   // get user face AUs, see UKinect documentation
robot.video.humanDetector.head.faceAU;
   // get user face SUs, see UKinect documentation
robot.video.humanDetector.head.faceSU;

Object detector

This part of the structure utilizes the UObjectDetector module, which was fully copied to the robot.video structure.

   // enable object detector (true, false)
robot.video.objectDetector.enable;
   // set object detector video source "kinect" or "camera"
robot.video.objectDetector.source;
   // 1 if any objects detected
robot.video.objectDetector.visible;
   // access to image
robot.video.objectDetector.image;
   // image with detected object
robot.video.objectDetector.object;
   // object position
robot.video.objectDetector.x;
   // object position
robot.video.objectDetector.y;
   // detected object height
robot.video.objectDetector.objectHeight;
   // detected object width
robot.video.objectDetector.objectWidth;
   // set object detector mode
   // 1 - detect all visible objects, return values are vectors
   // 0 - detect the biggest object, return values are scalars
robot.video.objectDetector.multi;
   // set image scale for processing
robot.video.objectDetector.scale;
   // image width (determined by scale)
robot.video.objectDetector.width;
   // image height (determined by scale)
robot.video.objectDetector.height;
   // set path and name to cascade
robot.video.objectDetector.cascade;
   // specifies how much the image size is reduced at each image scale (default 1.1)
robot.video.objectDetector.scaleFactor;
   // specifies how many neighbors each candidate rectangle should have to retain it (default 10)
robot.video.objectDetector.minNeighbors;
   // the minimum possible object size. Smaller objects are ignored (default 30)
robot.video.objectDetector.size;
   // processing time
robot.video.objectDetector.time;
   // algorithm performance
robot.video.objectDetector.fps;
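
For example (a sketch; the cascade path below is hypothetical and must point to an existing OpenCV cascade file on the robot):

robot.video.objectDetector.enable = true;
robot.video.objectDetector.source = "camera";
   // hypothetical path to a Haar cascade trained for frontal faces
robot.video.objectDetector.cascade = "data/haarcascade_frontalface_alt.xml";
   // detect only the biggest object (scalar results)
robot.video.objectDetector.multi = 0;
whenever (robot.video.objectDetector.visible == 1)
{
  echo("Object at " + robot.video.objectDetector.x.asString() + ", " + robot.video.objectDetector.y.asString());
};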

Color detector

This part of the structure utilizes the UColorDetector module. Its four instances (colors) were fully copied to the robot.video structure.

   // enable color detector (true,false)
robot.video.color1|2|3|4Detector.enable;
   // set color detector video source "kinect" or "camera"
robot.video.color1|2|3|4Detector.source;
   // 1 if color detected on the image
robot.video.color1|2|3|4Detector.visible;
   // color field position
robot.video.color1|2|3|4Detector.x;
   // color field position
robot.video.color1|2|3|4Detector.y;
   // set image scale for processing
robot.video.color1|2|3|4Detector.scale;
   // access to image
robot.video.color1|2|3|4Detector.image;
   // image width (determined by scale)
robot.video.color1|2|3|4Detector.width;
   // image height (determined by scale)
robot.video.color1|2|3|4Detector.height;
   // set color range (in HSV color space) 
   // H - hue varies from 0 (0 deg - red color) to 255 (360 deg - red again) 
   // S - saturation from 0 to 255 
   // V - value from 0 to 255
robot.video.color1|2|3|4Detector.SetColor(H_min, H_max, S_min, S_max, V_min, V_max);
   // processing time
robot.video.color1|2|3|4Detector.time;
   // algorithm performance
robot.video.color1|2|3|4Detector.fps;
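
For example, the first detector instance can be set up to look for a saturated red area (the HSV ranges are only an illustration and usually need tuning for the actual lighting):

robot.video.color1Detector.enable = true;
robot.video.color1Detector.source = "camera";
   // red hue near 0 deg, high saturation and value (example ranges)
robot.video.color1Detector.SetColor(0, 15, 150, 255, 100, 255);
whenever (robot.video.color1Detector.visible == 1)
{
  echo("Red area detected");
};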

Move detector

This part of the structure utilizes the UMoveDetector module, which was fully copied to the robot.video structure.

   // enable move detector (true,false)
robot.video.moveDetector.enable;
   // set move detector video source "kinect" or "camera"
robot.video.moveDetector.source;
   // 1 if any movement detected
robot.video.moveDetector.visible;
   // object position
robot.video.moveDetector.x;
   // object position
robot.video.moveDetector.y;
   // set image scale for processing
robot.video.moveDetector.scale;
   // access to image
robot.video.moveDetector.image;
   // image width (determined by scale)
robot.video.moveDetector.width;
   // image height (determined by scale)
robot.video.moveDetector.height;
   // time window for analysis (in seconds)
robot.video.moveDetector.duration;
   // number of cyclic frame buffers used for motion detection (depends on FPS)
robot.video.moveDetector.frameBuffer;
   // threshold for the difference between frames (depends on image noise)
robot.video.moveDetector.diffThreshold;
   // region smooth filter parameter
   // ATTENTION! It must be an odd value. Set a smaller value for a small image (higher scale)
robot.video.moveDetector.smooth;
   // processing time
robot.video.moveDetector.time;
   // algorithm performance
robot.video.moveDetector.fps;
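
A minimal sketch that switches the move detector on and reports motion (the parameter values are illustrative):

robot.video.moveDetector.enable = true;
robot.video.moveDetector.source = "camera";
   // analyze roughly the last second of frames; smooth must be an odd value
robot.video.moveDetector.duration = 1;
robot.video.moveDetector.smooth = 5;
whenever (robot.video.moveDetector.visible == 1)
{
  echo("Movement detected");
};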

Face features (FacET) detector

This part of the structure utilizes the UFacetDetector module, which was fully copied to the robot.video structure.

   // enable facet detector (true,false)
robot.video.facetDetector.enable;
   // set facet detector video source "kinect" or "camera"
robot.video.facetDetector.source;
   // access to image variable
robot.video.facetDetector.image;
   // image width (determined by scale)
robot.video.facetDetector.width;
   // image height (determined by scale)
robot.video.facetDetector.height;
   // set image scale for processing
robot.video.facetDetector.scale;
   // number of detected faces
robot.video.facetDetector.faces;
   // list of face X coordinate (pixels)
robot.video.facetDetector.roix[];
   // list of face Y coordinate (pixels)
robot.video.facetDetector.roiy[];
   // list of face declination angle (not verified, for future use)
robot.video.facetDetector.angle[];
   // list of left eyebrow bend angle (top)
robot.video.facetDetector.LEbBnd[];
   // list of left eyebrow declination angle (side)
robot.video.facetDetector.LEbDcl[];
   // list of distance between the left eyelids (rel. eyeball subregion)
robot.video.facetDetector.LEyOpn[];
   // list of distance between left pupil and eyebrow top (rel. eye subregion)
robot.video.facetDetector.LEbHgt[];
   // list of right eyebrow bend angle (top)
robot.video.facetDetector.REbBnd[];
   // list of right eyebrow declination angle (side)
robot.video.facetDetector.REbDcl[];
   // list of distance between the right eyelids (rel. eyeball subregion)
robot.video.facetDetector.REyOpn[];
   // list of distance between right pupil and eyebrow top (rel. eye subregion)
robot.video.facetDetector.REbHgt[];
   // list of aspect ratio of the lips bounding box (percents)
robot.video.facetDetector.LiAspt[];
   // list of Y position of the left corner of the lips (rel. lips bounding box)
robot.video.facetDetector.LLiCnr[];
   // list of Y position of the right corner of the lips (rel. lips bounding box)
robot.video.facetDetector.RLiCnr[];
   // list of number of horizontal wrinkles in the center of the forehead
robot.video.facetDetector.Wrnkls[];
   // list of nostrils baseline width (rel. face width)
robot.video.facetDetector.Nstrls[];
   // list of area of the visible teeth (rel. lips bounding box)
robot.video.facetDetector.TeethA[];
   // processing time
robot.video.facetDetector.time;
   // algorithm performance
robot.video.facetDetector.fps;
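
For example, a sketch that periodically prints the lips aspect ratio of the first detected face (index 0 refers to the first face in the result lists):

robot.video.facetDetector.enable = true;
robot.video.facetDetector.source = "camera";
   // print the lips aspect ratio of the first detected face once per second
every (1s)
{
  if (robot.video.facetDetector.faces > 0)
    echo(robot.video.facetDetector.LiAspt[0]);
},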

Photo camera

This part of the section utilizes UImageTool.

   // take a photo from "camera" or "kinect"
robot.video.photo.Take("source");
   // save photo to the given path
robot.video.photo.Save("path/fileName.jpg");
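
For example (the target path and file name are arbitrary):

   // take a photo from the RGB camera and save it to disk
robot.video.photo.Take("camera");
robot.video.photo.Save("photos/snapshot.jpg");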

Displays

This part of the section uses the UDisplayImage module. Set the vector of window names in the _ImageDisplayWindows variable in the robot configuration.

    // set image to the display window
robot.video.display[0|1|2|3].show(image);
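
For example, the RGB camera stream can be previewed in the first configured window (a sketch, assuming window 0 exists in _ImageDisplayWindows):

   // continuously show the camera image in display window 0
every (100ms)
{
  robot.video.display[0].show(robot.video.camera.image);
},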

 

 
