Stanford’s New 4D Camera Gives Robots, VR, and Cars Wider View

  • iReviews
  • July 27, 2017

New technology out of Stanford University will improve robot vision, virtual reality, autonomous cars, delivery drones, and more.

This so-called perfect “eye” is a 4D camera. With a nearly 140-degree field of view, it captures more information in a single image than previous cameras could. The Stanford researchers call their technology the “first-ever single-lens, wide field of view, light field camera.”

A Vision of the Future

In 1996, two Stanford professors, Marc Levoy and Pat Hanrahan, published a paper on light field photography. Light field photography creates a 4D image by capturing a 2D view together with data about the direction and distance of the light hitting the lens. Because that light data is stored with the photo, users can refocus an image after it is taken. A robot, for example, could refocus its vision in the rain or when something partially obscures its camera.
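
The refocusing trick is easiest to see as “shift and sum”: each viewpoint on the lens sees the scene from a slightly different position, and sliding those views against one another before averaging moves the focal plane. The sketch below is a minimal illustration of that idea, not the Stanford team’s code; the (U, V, H, W) array layout and the refocus parameter alpha are assumptions for demonstration.

```python
import numpy as np

def refocus(light_field: np.ndarray, alpha: float) -> np.ndarray:
    """Shift-and-sum refocusing of a 4D light field.

    light_field has shape (U, V, H, W): (U, V) index the viewpoint
    on the lens aperture, (H, W) index the image pixels. alpha
    selects the focal plane; alpha = 0 keeps the original focus.
    """
    U, V, H, W = light_field.shape
    u0, v0 = (U - 1) / 2, (V - 1) / 2  # centre of the viewpoint grid
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Shift each sub-view in proportion to its offset from the
            # centre viewpoint, then accumulate the shifted views.
            dy = int(round(alpha * (u - u0)))
            dx = int(round(alpha * (v - v0)))
            out += np.roll(light_field[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (U * V)
```

Averaging without shifting reproduces the capture’s original focus; larger shifts bring nearer or farther planes into focus, which is why the focus decision can wait until after the photo exists.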

The innovative 4D camera builds on light field photography, using that captured light data to create a 4D image. While the result isn’t exactly what a human needs to process a scene, it is perfect for giving robots a clear picture of a wider view.

Donald Dansereau, a postdoctoral fellow in electrical engineering and one of the camera’s creators, says, “We want to consider what would be the right camera for a robot that drives or delivers packages by air. We’re great at making cameras for humans, but do robots need to see the way humans do? Probably not.”

“A 2D photo is like a peephole because you can’t move your head around to gain more information about depth, translucency or light scattering. Looking through a window, you can move and, as a result, identify features like shape, transparency and shininess.”
– Dansereau

Dansereau worked closely with assistant professor of electrical engineering Gordon Wetzstein and colleagues from the University of California, San Diego. They recently presented their 4D camera at the computer vision conference CVPR 2017.

Many Possibilities of 4D Cameras

While the camera can work like a conventional camera at long distances, it is really designed for close-range imaging. It simplifies much of the vision hardware that robots and machines rely on: robots working in tight spaces, landing drones, and self-driving cars all stand to benefit, and it is well suited to rendering augmented and virtual reality seamlessly. “It’s at the core of our field of computational photography,” says Wetzstein. “It’s a convergence of algorithms and optics that’s facilitating unprecedented imaging systems.”

The large field of view, covering nearly one-third of the circle around the camera, is the result of a specially designed spherical lens. A major hurdle for the researchers was translating that spherical image onto a flat sensor, but the lens expertise from UC San Diego and the theory developed in Wetzstein’s lab came together in an elegant solution.
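
To picture the flattening problem: every point on the spherical image corresponds to a ray direction, and the flat sensor needs a 2D coordinate for each one. The sketch below uses a plain equirectangular projection as a stand-in mapping; it only illustrates the geometry of the problem and is not the team’s actual optical solution.

```python
import numpy as np

def sphere_to_sensor(direction, width, height):
    """Map a unit ray direction (x, y, z) on the sphere to pixel
    coordinates on a flat sensor via an equirectangular projection:
    longitude and latitude are spread linearly across the sensor."""
    x, y, z = direction
    lon = np.arctan2(x, z)                   # left-right angle, -pi..pi
    lat = np.arcsin(np.clip(y, -1.0, 1.0))   # up-down angle, -pi/2..pi/2
    px = (lon / (2 * np.pi) + 0.5) * (width - 1)
    py = (lat / np.pi + 0.5) * (height - 1)
    return px, py
```

In hardware there is no software lookup step, of course: the mapping has to come from the optics themselves, which is what made the spherical-image-to-flat-sensor translation hard.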

Wetzstein says, “It could enable various types of artificially intelligent technology to understand how far away objects are, whether they’re moving and what they’re made of. This system could be helpful in any situation where you have limited space and you want the computer to understand the entire world around it.”

Although the camera is only a proof of concept right now, the team plans to create a more compact prototype soon. A smaller camera will be easier to test on robots and cars.

Dansereau says, “Many research groups are looking at what we can do with light fields, but no one has great cameras. We have off-the-shelf cameras designed for consumer photography. This is the first example I know of a light field camera built specifically for robotics and augmented reality. I’m stoked to put it into people’s hands and to see what they can do with it.”

Sources: Stanford, Engadget