Saturday, 1 February 2014

When it all began

Hi folks,

Human-computer interaction has always been an area where a lot of interesting work has been carried out with the intention of giving users an interface that is more intuitive than the existing ones. When we think of intuitive interaction with machines, the first thought that strikes us is, "why not interact with machines the way we interact with each other?". To enable this, we intend to build an application that makes everyday tasks easier to execute and more human, with less machine-style interaction.


The ideation phase 


Three people, different interests, loads of hypothetical ideas along with a few feasible ones. Picking an idea to implement was a challenge. 
How many of you use pen drives, hard disks, USB cables or even a cluttered mail inbox to transfer files across devices? I think a lot of us do. As a solution to that problem, we decided to develop an application that can recognize human gestures as commands, interpret them, and perform tasks such as sharing documents, images and other media files across devices. As simple as that!

Opening with OpenCV


Human gesture recognition needs image processing. When searching the internet for gestures and image processing, the best results that pop up are OpenCV and MATLAB. Since OpenCV is open source, it meant we could start our project right away. Installation of OpenCV was a smooth process; we followed this link. For a group of people who are very new to image processing, understanding the bulk of the OpenCV APIs can be challenging. By walking through sample code and a few interesting projects we found on the internet, we are slowly getting acquainted with it. This is just the beginning. There is a lot more to be explored, experimented with and learnt :)

Find some interesting related work here: Swp, Flutter, MIT Media Lab's TUI



