Posts tagged: 2012

Farewell mouse and keyboard?

Is it time to say farewell to our mice and keyboards? That is what Leap Motion claims. In recent months we have heard of many promising projects aiming to replace mouse and keyboard input on personal computers – an aspect of HCI that has been discussed and researched for over a decade.

So what is the Leap? It is a USB device that lets us interact with software on laptops and desktops by sensing hand and finger movements with (claimed) high precision. Leap Motion argues it is a breakthrough in HCI, built on a mathematical approach to 3D, touch-free motion sensing and motion-control software.

Leap Motion also claims nearly limitless uses for the device, and their website offers a few examples:

  • Artists can use The Leap to emulate a stylus or easily create 3D images.
  • Anyone can use The Leap to interact with Windows 7/8 or Mac OS X by clicking, grabbing, scrolling and using familiar gestures like pinch to zoom in 3D space.
  • Users can sign a document by pointing a pen at the signature line and signing in space.
  • Engineers can interact more easily with 3D modeling software.
  • Gamers can play more naturally, and many games will be modified with the Leap in mind.
  • Surgeons can control 3D medical data with their hands without taking off their gloves.

The Leap currently works with Windows or Mac OS X, with Linux support to follow. Pre-orders are open at 69.99 USD for a limited number of units. In addition, Leap Motion is willing to distribute free developer kits to qualified developers, in order to promote the device and enable them to “go for it”.

You can read all about it here. You can also check out their blog.

Workshop – I: How would you design an AR marker (to minimize the computational cost of detection and tracking)?

Our 'Workshop' series tries to revisit some basic – or not so basic – concepts of AR/VR. We will attempt to keep it fairly frequent and to answer genuine questions that crop up – mostly asked by students and researchers when they first start working in these fields. We are pretty sure this will turn out to be an exercise for us as well: an opportunity to go back to the drawing board, the papers and the references of each aspect.

So, without any further delay, our first workshop is about… the AR Marker!

Most people involved in AR have come across AR markers one way or another. Nowadays, many SDKs, applications and efforts from AR enthusiasts are based on ARToolkit from HIT Lab, which – if we are not hugely mistaken – has made the concept widely known. In our humble opinion it is one of the most important contributions to AR – if not the… most important – particularly in its currently popular, mostly handheld and often 'gimmicky' incarnation. But how does one create one of those markers, and what effect does their design have on tracking performance?

AR markers need to obey a series of rules in order to be easily detectable even by the most basic of libraries: the border, the background and the marker image are all arranged to make detection and tracking easier.

To begin with, markers must be square. The border is what is used to detect the marker in the first place, and it needs to be of a contrasting colour to the background – hence most markers are black and white. The overall marker is usually twice the size of the inner marker image, with the border thickness about 1/4 of the marker's width.

Fig. 1: An example of an AR marker
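
To make this layout concrete, here is a minimal Python sketch (our own illustration, using numpy and Pillow – nothing here comes from ARToolkit itself) that composes such a marker: a black border roughly 1/4 of the marker's width around an inner pattern, with a white surround for contrast. The 4×4 pattern and all sizes are assumptions made for the example.

```python
# Minimal sketch: compose an ARToolkit-style square marker as an image.
# All sizes (400 px marker, 25% border) are illustrative assumptions.
import numpy as np
from PIL import Image

def make_marker(pattern, size=400, border_frac=0.25):
    """Black border (~1/4 of the marker's width) around a binary pattern."""
    marker = np.zeros((size, size), dtype=np.uint8)   # all-black square = the border
    b = int(size * border_frac)                       # border thickness
    inner = size - 2 * b                              # interior left for the pattern
    # Scale the small binary pattern (1 = white, 0 = black) up to the interior.
    patt = Image.fromarray((pattern * 255).astype(np.uint8))
    marker[b:b + inner, b:b + inner] = np.asarray(
        patt.resize((inner, inner), Image.NEAREST))
    # White surround, so the black border contrasts with the background.
    page = np.full((size + 2 * b, size + 2 * b), 255, dtype=np.uint8)
    page[b:b + size, b:b + size] = marker
    return Image.fromarray(page)

# A deliberately non-rotationally-symmetric 4x4 pattern (hypothetical).
pattern = np.array([[1, 1, 0, 0],
                    [0, 1, 0, 1],
                    [0, 0, 1, 1],
                    [1, 0, 0, 0]], dtype=np.uint8)
make_marker(pattern).save("marker.png")
```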

The marker image itself must not be rotationally symmetric, since the algorithm needs to be able to determine orientation. The complexity of the image affects the distance from which the marker remains trackable. Marker images are usually black on a white background, but they can be coloured as well; ARToolkit has provision for faster tracking when the marker image is in colour. The 'Pro' (4.4) version of ARToolkit uses a 2D barcode in the marker image to improve tracking performance when many markers are used at the same time. Overly fine detail in the marker image, when used to differentiate markers, can cause problems, as algorithms generally sample the marker at a resolution of 16×16 pixels. Last but not least, it is advisable to use the same camera for calibration and tracking.
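
For the curious, here is a rough sketch of that detection pipeline in Python with OpenCV – not ARToolkit's actual code. The thresholds and sizes are illustrative assumptions, and template16 stands in for a hypothetical stored 16×16 binary pattern. The steps: binarise the frame, keep large quadrilateral contours (candidate borders), unwarp the interior to a canonical 16×16 patch, and match it against the stored pattern in all four rotations.

```python
# Rough sketch of the detection pipeline described above, using OpenCV
# (not ARToolkit's implementation). Thresholds and sizes are illustrative
# assumptions; `template16` is a hypothetical stored 16x16 binary pattern.
import cv2
import numpy as np

def detect_marker(frame, template16, match_thresh=0.9):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Binarise: the high-contrast border is what survives thresholding.
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        approx = cv2.approxPolyDP(c, 0.03 * cv2.arcLength(c, True), True)
        if len(approx) != 4 or cv2.contourArea(approx) < 1000:
            continue  # keep only reasonably large quadrilaterals
        corners = approx.reshape(4, 2).astype(np.float32)
        # Unwarp the candidate to a canonical square and sample it at 16x16 --
        # this is why over-fine detail in the marker image is wasted.
        # (Corner ordering is glossed over; real code sorts corners first.)
        dst = np.array([[0, 0], [15, 0], [15, 15], [0, 15]], dtype=np.float32)
        H = cv2.getPerspectiveTransform(corners, dst)
        patch = cv2.warpPerspective(gray, H, (16, 16))
        patch = (patch > patch.mean()).astype(np.uint8)
        # A rotationally asymmetric pattern matches in exactly one of the
        # four rotations, which is how orientation is recovered.
        for rot in range(4):
            if (np.rot90(patch, rot) == template16).mean() > match_thresh:
                return corners, rot
    return None
```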

References:

[1] ARToolkit – http://www.hitl.washington.edu/artoolkit/
[2] ARToolworks – http://www.artoolworks.com/

Samsung’s Smart Window

Back from the Xmas holidays, and right after tackling some things at work – aka ‘the demo’ – I had time to look into recent developments and news. Naturally, CES 2012 was one of the high points of the year so far, and there I stumbled upon the most impressive transparent screen I have seen. Here is a video to enjoy while I catch up on my reading on all the things AR/VR people expect this year!

CES – Samsung’s Smart Window by MobileNations

Now, I can think of various scenarios in which someone could use this. Imagine windows on museum exhibits with superimposed information, or actual windows that display magnificent digital stained glass. Pretty cool stuff!