
Electrical engineers synch 98 cameras together, create world’s first gigapixel camera

Device captures an amazing amount of detail, could be on market within five years


Engineers from Duke University and the University of Arizona have successfully synched up 98 tiny cameras to develop a single prototype camera with a resolution five times better than a person with 20/20 vision looking out over a 120-degree horizontal field.

One gigapixel image (top) shows minute details (bottom) of the skyline in Seattle, Washington.

Research for the camera was supported by the Defense Advanced Research Projects Agency (DARPA). David Brady, the Michael J. Fitzpatrick Professor of Electrical Engineering at Duke’s Pratt School of Engineering, led the project team, which included scientists from the University of Arizona, the University of California-San Diego, and Distant Focus Corp.

Specs

The team suggests that while one gigapixel is nice, their new camera — dubbed the AWARE-2 — has the potential to capture a whopping 50 gigapixels of data . . . in a single shot.

For reference, that’s 50,000 megapixels in total: most professional photographers shoot with 40-megapixel cameras, while the standard point-and-shoot cameras the rest of us carry capture 8 to 10 megapixels per shot.

For those of you out there itching to point out that there are four 1.4-gigapixel cameras in use at the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS) at the University of Hawaii’s Institute for Astronomy, please note that each focuses on a view of the sky just three degrees wide.

What’s more, each one has to use a 1.8-meter mirror and large array of light-sensing chips to accomplish this feat.

The AWARE-2 sidesteps these issues by using 98 microcameras, each with a 14-megapixel sensor, grouped around a shared spherical lens.

Altogether, they take in a field of view 120 degrees wide and 50 degrees tall.
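As a rough back-of-the-envelope illustration (the roughly one-gigapixel output size and the 1-arcminute figure for 20/20 vision are our own working assumptions, not numbers from the team), the raw pixel budget and angular sampling implied by those specs can be sketched in a few lines of Python:

```python
# Rough illustration only: assumed numbers, not the team's design calculations.

microcameras = 98                 # microcameras in the AWARE-2
mpix_per_camera = 14e6            # 14-megapixel sensor per microcamera
fov_h_deg, fov_v_deg = 120, 50    # reported composite field of view, degrees

raw_pixels = microcameras * mpix_per_camera
print(f"Raw sensor budget: {raw_pixels / 1e9:.2f} gigapixels")   # ~1.37 GP

# Assume a ~1-gigapixel stitched output spread over the 120 x 50 degree field
# and estimate the angular width of one output pixel, compared with the
# ~1 arcminute acuity commonly quoted for 20/20 vision.
output_pixels = 1e9
aspect = fov_h_deg / fov_v_deg
pixels_tall = (output_pixels / aspect) ** 0.5
pixels_wide = pixels_tall * aspect
arcmin_per_pixel = (fov_h_deg * 60) / pixels_wide
print(f"~{arcmin_per_pixel:.2f} arcmin per pixel vs ~1 arcmin for 20/20 vision")
```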

How it works

Each microcamera runs its own autofocus and exposure algorithm. This allows for every part of the image, whether it’s bright, dark, near, or far, to be visible in the final result. Image processing software is then used to stitch together the 98 sub-images into a single, large image at the rate of three frames per minute.

“Each one of the microcameras captures information from a specific area of the field of view,” Brady explained. “A computer processor essentially stitches all this information into a single, highly detailed image. In many instances, the camera can capture images of things that photographers cannot see themselves but can then detect when the image is viewed later.”
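As a toy sketch of that stitching step (assuming each sub-image's position in the composite is already known, and ignoring the distortion correction, exposure matching, and blending a real pipeline needs), the mosaicking might look something like this:

```python
import numpy as np

def stitch(tiles, offsets, canvas_shape):
    """Average overlapping 2-D tiles onto a single canvas.

    tiles   -- list of 2-D numpy arrays (grayscale sub-images)
    offsets -- (row, col) of each tile's top-left corner in the canvas
    """
    canvas = np.zeros(canvas_shape)
    weight = np.zeros(canvas_shape)
    for tile, (r, c) in zip(tiles, offsets):
        h, w = tile.shape
        canvas[r:r + h, c:c + w] += tile
        weight[r:r + h, c:c + w] += 1.0
    return canvas / np.maximum(weight, 1.0)   # simple averaging in overlaps

# Two 4x4 tiles that overlap by two columns on a 4x6 canvas
tiles = [np.full((4, 4), 100.0), np.full((4, 4), 200.0)]
mosaic = stitch(tiles, offsets=[(0, 0), (0, 2)], canvas_shape=(4, 6))
print(mosaic)   # the overlapping columns average to 150
```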

With all of this technology, the AWARE-2 camera stands about two-and-a-half feet square and about 20 inches deep.

Gigapixel camera.

What’s odd, and certainly worth noting, is that only about three percent of the camera is actually made up of optical elements: the rest is electronics and processors that assemble the highly detailed image.

This is, obviously, the area most in need of improvement. Researchers recognize that they need to miniaturize the electronics and increase the camera’s processing ability to make it more practical.

“The camera is so large now because of the electronic control boards and the need to add components to keep it from overheating,” Brady said. “As more efficient and compact electronics are developed, the age of hand-held gigapixel photography should follow.”

It also needs to be more efficient.

“The development of high-performance and low-cost microcamera optics and components has been the main challenge in our efforts to develop gigapixel cameras,” Brady adds. “While novel multiscale lens designs are essential, the primary barrier to ubiquitous high-pixel imaging turns out to be lower power and more compact integrated circuits, not the optics.”

Brady and the rest of the team expect the AWARE-2 will first find use in automated military surveillance systems, before eventually making its way to market for researchers, media companies, and consumers.

Taking a different approach

The software used with the AWARE-2, which is in charge of combining the input from all of the microcameras, was specially developed by an Arizona team led by Michael Gehm, assistant professor of electrical and computer engineering at the University of Arizona.

“Traditionally, one way of making better optics has been to add more glass elements, which increases complexity,” Gehm said. “This isn’t a problem just for imaging experts. Supercomputers face the same problem, with their ever more complicated processors, but at some point the complexity just saturates, and becomes cost-prohibitive.”

“Our current approach, instead of making increasingly complex optics, is to come up with a massively parallel array of electronic elements,” he continues. “A shared objective lens gathers light and routes it to the microcameras that surround it, just like a network computer hands out pieces to the individual work stations. Each gets a different view and works on their little piece of the problem. We arrange for some overlap, so we don’t miss anything.”
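To make the overlap idea concrete, here is some illustrative tiling arithmetic. The per-camera field of view and overlap fraction below are assumptions of ours, and the real AWARE-2 packs its microcameras around a sphere rather than on a flat grid, so the count does not come out to exactly 98:

```python
import math

fov_h, fov_v = 120.0, 50.0   # composite field of view, degrees
cam_fov = 10.0               # assumed per-microcamera field width, degrees
overlap = 0.2                # assumed fractional overlap between neighbours

step = cam_fov * (1.0 - overlap)       # new coverage added by each camera
n_h = math.ceil(fov_h / step)
n_v = math.ceil(fov_v / step)
print(f"{n_h} x {n_v} = {n_h * n_v} microcameras for full coverage")  # 15 x 7 = 105
```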

The team also plans to write software to allow these images to be viewed: as it stands, the amount of data these cameras generate is too much to store in conventional file formats, post on YouTube, or e-mail to a friend. They will have to develop a program that lets the user decide which data is worth storing and displaying, and create better interfaces for viewing and sharing gigapixel images.
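One common way to make images of this size viewable, and a plausible direction for such software (this is our illustration, not the team's stated design), is a multi-resolution tile pyramid of the kind used by web map and deep-zoom viewers, so a viewer only ever loads the tiles visible at the current zoom level:

```python
import math

def pyramid_levels(width, height, tile=256):
    """Yield (level, width, height, tile_count) from full resolution down to one tile."""
    level = 0
    while True:
        tx = math.ceil(width / tile)
        ty = math.ceil(height / tile)
        yield level, width, height, tx * ty
        if tx == 1 and ty == 1:
            break
        width, height = max(1, width // 2), max(1, height // 2)
        level += 1

# Illustrative dimensions for a roughly one-gigapixel frame
for lvl, w, h, n in pyramid_levels(49_000, 20_400):
    print(f"level {lvl}: {w} x {h} px -> {n} tiles of 256 x 256")
```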

Outlook

The Duke group is already moving forward by building a gigapixel camera with more sophisticated electronics, one capable of capturing ten images per second. They expect it to be finished by the end of the year.

The cameras can currently be made for about $100,000 each, though large-scale manufacturing will likely cut costs down to about $1,000.

The camera was originally described in Nature, where the team’s report is available for purchase. ■

Story reference and images: nature.com and phys.org
