OpenCV - Vision to Computers O_O
So, shall we say that computers are the new humans?
Could computers and humans interact through gestures? Indeed yes, and in fact, in many ways. So we thought of using simple hand movements to get responses from the computer.
Check out a few interesting tutorials here that we went through.
Object Detection Using OpenCV
So the next question that arises is HOW? How do we detect the object (the hand)?
*Motion detection through background subtraction
Motion can be detected from the difference between consecutive frames, and that difference can be used to get the shape of the object in motion. This has a few disadvantages, the main one being that it cannot detect an object in a static position (a minimal sketch appears right after this list).
*Disparity mapping
Disparity refers to the difference in the location of an object between two corresponding images, as seen by the left and right eye, which arises from the parallax of stereo cameras. This technique is very useful for estimating the depth of an object such as the hand, which will be closest to the camera during gestures. It is highly accurate and may be the next big thing in computer vision, with cameras being replaced by Kinects and other such powerful devices. However, this technique is not possible with the web camera presently built into computers (a sketch of what it would look like with a proper stereo pair appears after this list).
*Haar-Cascade classifiers
This idea, proposed by Viola and Jones, is used to rapidly detect objects such as human faces, eyes, and hands using AdaBoost classifier cascades based on Haar-like features rather than raw pixel values. It was the stepping stone in face detection. A Haar-like feature is defined over a detection window that contains rectangular regions at specific locations. The pixel intensities in each region are summed up, and the differences between these sums are calculated. The detection window is then slid over the input image, the Haar-like features are computed for each subsection, and each value is compared with a learned threshold that separates objects from non-objects.
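To make the background-subtraction idea above concrete, here is a minimal frame-differencing sketch in Python with OpenCV. The threshold value of 25 is an arbitrary choice for illustration, not something from our project; it just separates changed pixels from camera noise.

import cv2

cap = cv2.VideoCapture(0)                  # built-in webcam
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Difference between the current and previous frame: moving regions light up
    diff = cv2.absdiff(gray, prev_gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    cv2.imshow("motion mask", mask)        # white blobs = object in motion
    prev_gray = gray
    if cv2.waitKey(30) & 0xFF == 27:       # press Esc to quit
        break

cap.release()
cv2.destroyAllWindows()

The moment the hand stops moving the mask goes black, which is exactly the static-object limitation mentioned above.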
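And this is roughly what disparity mapping looks like with OpenCV's block-matching stereo module. It is a rough sketch only, since it needs a rectified stereo pair (left.png and right.png are placeholder names), which a single built-in webcam cannot provide.

import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder left image
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)  # placeholder right image

# Block matching: larger disparity values mean the object is closer to the cameras
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)

# Scale the raw disparity to 0-255 so the depth map can be displayed
disp_view = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX, cv2.CV_8U)
cv2.imshow("disparity", disp_view)
cv2.waitKey(0)
cv2.destroyAllWindows()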
Implementation of Haar Cascades
Before continuing: we are talking about just the built-in camera, not a Kinect, a stereo camera, or any other external source.
We worked on the above-mentioned ideas to get to know them well, maybe not completely, but let us tell you it was indeed very helpful. As already mentioned, Haar gives better accuracy, and it is the built-in camera that is being used. So we are building our own Haar cascade for the hand and using it to detect the hand (open palm).
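Once the cascade is trained, using it is the easy part. Here is a minimal detection loop in Python with OpenCV; palm.xml is only a placeholder for the cascade file we are building, so swap in any existing cascade (a face cascade, for instance) if you want to try the loop right away.

import cv2

palm_cascade = cv2.CascadeClassifier("palm.xml")   # placeholder for our own cascade
cap = cv2.VideoCapture(0)                          # built-in webcam only

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Slide the detection window over the image at multiple scales
    palms = palm_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in palms:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("palm detection", frame)
    if cv2.waitKey(30) & 0xFF == 27:               # press Esc to quit
        break

cap.release()
cv2.destroyAllWindows()

The rectangles returned here are what the tracking step will later latch onto.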
By using detection and tracking together, simple gestures can be defined and the corresponding actions can be taken. There are many other techniques available for object detection, such as color-based detection, and a lot more to explore and learn. Tracking can be done using Meanshift, Camshift, and other such techniques, which will be explained in forthcoming posts.
Stay in touch folks, because we will shortly be explaining how to make your own Haar cascade and covering tracking techniques.
Until then, see-ya.