Cameras that see around corners
A great deal of research over the last couple of years has focused on cameras that can record scenes outside their direct line of sight. Bin Bai and colleagues at Xi'an Jiaotong University in China have done just that, using a single-pixel camera that can see around corners.

The technique is similar to that used with other single-pixel cameras. The trick is to first randomise the light that the pixel detects, record the resulting light intensity, and then repeat this process thousands of times. The randomisation changes the intensity of light each time the pixel records it, but these differences in intensity are not random: they are correlated with the scene in front of the pixel. Producing an image is therefore simply a question of mining this data to find the correlation, and the more data that is collected, the better the image becomes. So by recording the intensity of light many times, it is possible to create a high-resolution picture with a single pixel.
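The correlation step can be sketched numerically. The following toy Python simulation is not the Xi'an Jiaotong group's actual code: the scene, the random pattern statistics and the number of measurements are all illustrative assumptions, chosen only to show how a stream of single-pixel intensity readings can be turned into an image.

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical hidden scene (a 32x32 test pattern standing in for the light
# that actually reaches the single pixel in the experiment).
n = 32
scene = np.zeros((n, n))
scene[8:24, 14:18] = 1.0                 # a bright vertical bar

n_measurements = 20_000
sum_i = 0.0                              # running sum of intensities
sum_p = np.zeros((n, n))                 # running sum of patterns
sum_ip = np.zeros((n, n))                # running sum of intensity * pattern

for _ in range(n_measurements):
    # 1. Randomise the illumination (in practice, with a spatial light modulator).
    pattern = rng.random((n, n))
    # 2. The single pixel records just one number per shot: the total intensity,
    #    i.e. the overlap between the random pattern and the scene.
    intensity = float(np.sum(pattern * scene))
    sum_i += intensity
    sum_p += pattern
    sum_ip += intensity * pattern

# 3. Mine the data for the correlation: the covariance between the recorded
#    intensities and the patterns that produced them is an estimate of the scene.
mean_i = sum_i / n_measurements
mean_p = sum_p / n_measurements
reconstruction = sum_ip / n_measurements - mean_i * mean_p
```

Because each random pattern overlaps the bright parts of the scene by a different amount, the intensity fluctuations carry information about where those bright parts are, and averaging over many shots makes the estimate sharper, just as described above.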
The secret is the creation of single-pixel cameras and an algorithm devised in 2004. A few years ago, electronic stores were full of 1- or 2-megapixel cameras. Then along came cameras with 3-megapixel chips, 10 megapixels, and even 60 megapixels. Unfortunately, these multi-megapixel cameras create enormous computer files, so the first thing most people do, if they plan to send a photo by e-mail or post it on the Web, is to compress it to a more manageable size. Thus a strange dynamic has evolved, in which camera engineers cram more and more data onto a chip while software engineers design cleverer and cleverer ways to get rid of it.

In 2004, mathematicians discovered a way to bring this “arms race” to a halt. Why make 10 million measurements, they asked, when you might need only 10 thousand to adequately describe your image? Wouldn’t it be better if you could just acquire the 10 thousand most relevant pieces of information at the outset? Thanks to Emmanuel Candes of Caltech, Terence Tao of the University of California at Los Angeles, Justin Romberg of Georgia Tech, and David Donoho of Stanford University, a powerful mathematical technique can reduce the data a thousandfold before it is acquired. Their technique, called compressed sensing, has become a new buzzword in engineering, but its mathematical roots are decades old. As a proof of concept, Richard Baraniuk and Kevin Kelly of Rice University even developed a single-pixel camera.

In 2015, a group of scientists led by Genevieve Gariepy developed a state-of-the-art detector which, with some clever data-processing techniques, can turn walls and floors into a “virtual mirror”, giving it the power to locate and track moving objects out of the direct line of sight.

The shiny surface of a mirror works by reflecting scattered light from an object at a well-defined angle towards your eye. Because light scattered from different points on the object is reflected at the same angle, your eye sees a clear image of the object. In contrast, a non-reflective surface scatters light randomly in all directions and creates no clear image. However, as the researchers at Heriot-Watt University and the University of Edinburgh recognised, there is a way to tease out information about the object even from apparently random scattered light. Their method, published in Nature Photonics, relies on laser range-finding technology, which measures the distance to an object based on the time it takes a pulse of light to travel to the object, scatter, and travel back to a detector.
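The idea of acquiring only the most relevant measurements can be made concrete with a toy calculation. The Python sketch below is not the algorithm of Candes, Tao, Romberg, and Donoho; it uses orthogonal matching pursuit, one simple sparse-recovery method, with made-up sizes (a 1,000-sample signal with 10 non-zero entries, recovered from just 100 random measurements) purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# A length-1000 signal that is sparse: only 10 of its entries are non-zero.
n, k, m = 1000, 10, 100                  # signal length, sparsity, measurements
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

# Acquire only 100 random measurements instead of all 1000 samples: y = A @ x.
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true

def omp(A, y, sparsity):
    """Orthogonal matching pursuit: greedily build up a sparse solution."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        # Pick the column of A most correlated with what is still unexplained.
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

x_hat = omp(A, y, k)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

The point is the ratio: a tenth of the measurements suffices here because the signal is sparse, which is precisely the property compressed sensing exploits.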
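The range-finding arithmetic itself is simple: the round-trip time of a light pulse, multiplied by the speed of light and halved, gives the distance. The sketch below also adds a hypothetical second step, trilaterating a hidden object's position from its distances to several known scattering points; this is a simplified stand-in for the Nature Photonics team's actual reconstruction, included only to illustrate how timing data can locate an object.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def round_trip_to_distance(delta_t_seconds):
    """Basic laser range-finding: a pulse travels out and back, so the
    one-way distance is half the round-trip path length."""
    return C * delta_t_seconds / 2.0

def locate_from_distances(points, distances):
    """Hypothetical post-processing step: estimate a hidden object's 2-D
    position from its distances to several known scattering points
    (a linearised trilateration solved by least squares)."""
    points = np.asarray(points, dtype=float)
    d = np.asarray(distances, dtype=float)
    p0, d0 = points[0], d[0]
    # For each i > 0:  2 (p0 - p_i) . x = d_i^2 - d0^2 - |p_i|^2 + |p0|^2
    A = 2.0 * (p0 - points[1:])
    b = d[1:] ** 2 - d0 ** 2 - np.sum(points[1:] ** 2, axis=1) + np.sum(p0 ** 2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# A 10 ns round trip corresponds to roughly 1.5 m.
print(round_trip_to_distance(10e-9))

# Recover a point at (2, 3) from its distances to three reference points.
refs = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]
true = np.array([2.0, 3.0])
dists = [np.linalg.norm(true - np.array(r)) for r in refs]
print(locate_from_distances(refs, dists))
```

In the real experiment the geometry is more involved, since the light scatters off the wall or floor on both the outward and return legs, but the principle of converting arrival times into positions is the same.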