When designing and choosing video enhancement software, there are a variety of factors to keep in mind. One key piece, often overlooked, is the benefit of video enhancement software that’s general enough to be used in a wide variety of configurations. The truth is, it’s not practical, in terms of budget or time, to have to reinvent the wheel for every single use case.
There are so many different combinations of hardware and software that could potentially be used in drones, smartphones, smart glasses, wearable cameras, and other cameras in motion. It pays to find the perfect video enhancement software that can be used today, tomorrow and even further down the line in a variety of specific use cases.
Video enhancement for drones and its future potential
Looking towards the future doesn’t just mean keeping an eye out for what to do next. It also means planning ahead for the software you’re writing now, instead of only making easy gains for current, specific problems.
Writing software is hard in general, and writing a high-performance video enhancement software development kit (SDK) is even harder. Making sure your code doesn’t just solve one problem on one particular set of hardware takes significant effort. Even so, making software general and customizable should always be a priority.
Let’s first take a look at drones, one of the hottest products in technology today. Their future seems bright, both as consumer products and in other expanding commercial applications. Vision processing capabilities that will enable the future of drones include collision avoidance, broader autonomous navigation, terrain analysis, and subject tracking. Collision avoidance is not only relevant for fully autonomous navigation but also for “co-pilot” assistance when the drone is primarily controlled by a human – analogous to today’s driver-assisted systems in cars.
These key features are poised to expand the drone market by making drones more capable and easier to use. The algorithms that will be used, whether they exist today or will be researched in the years to come, are typically applicable to a wide range of problems.
Scalable implementation
But a general problem domain isn’t enough. The implementation – the software itself – must also be sufficiently general in order to unlock the future potential use cases for drones and similar cameras in motion.
Both video stabilization and video enhancement in general face the same challenge in maintaining generality. Deploying this software on large devices, such as surveillance aircraft in the defense industry, is entirely different from deploying it on small devices like smartphones, bodycams, and small drones. Integrating video stabilization on budget hardware is likewise vastly different from doing so on top-quality hardware.
So while algorithms may stay the same (with small tweaks) for a long time, hardware and sensor data tend to look and work very differently across different devices. Generality doesn’t just mean staying open to different problems and use cases; it means decreasing the time required to integrate video enhancement into completely new hardware.
Adapting to different hardware configurations
In software development, hardcoding denotes code written for exactly one specific configuration, a practice that is generally frowned upon. Consider a typical video task: applying an operation to every pixel of every frame. A simple CPU has to modify the pixels one by one, which can take significant time. The easy way out is to hardcode this approach into the software, since it will always work – there is always a CPU. But it’s also a very slow, time-consuming option.
Most CPUs have multiple cores, meaning they can carry out several instructions at once. The big chunk of data can then be divided into smaller subsets, one per core, and processed simultaneously, considerably reducing the time required. Some devices have graphics cards, and some have even more specialized hardware, like FPGAs or DSPs. All of these can be leveraged to improve performance.
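The idea above can be sketched in a few lines. This is a minimal illustration, not production SDK code: the function names are hypothetical, the "pixels" are a flat list of brightness values, and Python threads merely stand in for the native per-core workers a real video pipeline would use.

```python
from concurrent.futures import ThreadPoolExecutor
import os

def brighten_serial(pixels, delta):
    # The "hardcoded" path: one core walks every pixel in turn.
    return [min(255, p + delta) for p in pixels]

def brighten_parallel(pixels, delta, workers=None):
    # Divide the big chunk of data into one subset per core and
    # hand the subsets to workers that run simultaneously.
    workers = workers or os.cpu_count() or 1
    size = max(1, len(pixels) // workers)
    chunks = [pixels[i:i + size] for i in range(0, len(pixels), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(lambda c: brighten_serial(c, delta), chunks)
    # Reassemble the processed subsets in their original order.
    return [p for chunk in results for p in chunk]
```

Both paths produce identical output; the difference is only in how the work is distributed, which is exactly the kind of decision that should not be hardcoded.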
For both smartphones and drones, the cost, performance and power consumption of different subsystems are taken into account when designing a product. Size and weight are especially important for drones. Different technologies deliver different tradeoffs, and video enhancement software needs to be able to adjust to this easily, preferably even automatically.
The best way to process the data also depends on other factors, such as if there’s other software running at the same time also in need of system resources. In short, setting up a render pipeline in a smart way takes more time than a hardcoded solution (in the short run), but in the long run it allows you to create a more efficient and scalable platform. This empowers integrators to balance performance vs. computational cost as they see fit.
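As a rough sketch of what such a pipeline setup might look like, the snippet below picks a processing backend at run time instead of hardcoding one. All names here are hypothetical; a real implementation would probe the platform (for example by enumerating GPU or DSP devices) rather than take flags as arguments.

```python
def probe_backends(has_gpu=False, has_dsp=False):
    """Return the backends available on this device, preferred first."""
    backends = []
    if has_dsp:
        backends.append("dsp")   # most power-efficient when present
    if has_gpu:
        backends.append("gpu")
    backends.append("cpu")       # a CPU is always present: the safe fallback
    return backends

def select_backend(backends, busy=()):
    # Skip backends that other software is already saturating, so the
    # integrator can balance performance against computational cost.
    for backend in backends:
        if backend not in busy:
            return backend
    return "cpu"
```

For example, on a device with a GPU whose graphics unit is busy rendering the camera preview, `select_backend(probe_backends(has_gpu=True), busy={"gpu"})` falls back to the CPU path; the same binary picks the GPU path when it is free.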
More to implementation than meets the eye
A typical implementation project often requires more than just installing a packaged product. There’s normally additional work needed for integration and algorithm fine-tuning. Clients may request everything from customized products to pre-testing or characterization evaluation. The results are distilled down to a few important key variables.
A “semi-automatic” process centered on maintaining generality takes some effort in the short run but certainly pays off when scaling up in the long run. Ultimately, it’s easier to adapt the software to different devices, hardware configurations, and client needs while making it easy to add additional features and serve more clients. And this is just barely scratching the surface of what is possible.
The customizable platform for your current and future needs
Today, video performance is just as important as still photo performance for consumers, especially in mobile phones. That makes hardware and software that provide video stabilization even more important than before.
Even so, if you purchase hardware or software only for video stabilization and later need to integrate something new to enable object tracking or other features, this can be an expensive and time-consuming problem – and one that can be avoided altogether. A general, customizable video enhancement platform that can be integrated into different devices and configurations is the answer.
By future-proofing your product, you’ll have the ability to easily add and upgrade performance and features over time as needed. The potential for new vision processing advancements in drones and similar cameras is enormous – and you’ll be right on track to have the newest technology as long as your software can adapt easily.
About the author
Johan Svensson, chief technology officer (CTO) at IMINT, holds an MSc in Engineering Physics from Umeå University and has experience from GE Healthcare, where he held a number of senior roles in project management and product development. He has also held senior engineering roles in optics and sensor technology during his time at GE. Outside office hours, Johan is a skilled and enthusiastic photographer.