By Warren Miller, contributing writer
If you’ve been worried about the possible ramifications of human beings developing machines that can think for themselves, there’s something new to worry about: machines that can develop artificial intelligences by themselves. Last year, Google researchers revealed AutoML, an artificial intelligence (AI) capable of creating its own artificial intelligences. AutoML, in turn, recently developed an AI better at recognizing images than any created by humans.
NASNet, the brainchild of AutoML, correctly recognized 82.7% of the images it assessed on the ImageNet image-classification data set, a 1.2% improvement over the next-best previously published result. AutoML is short for automated machine learning; it uses a technique called reinforcement learning to teach its “progeny” how to recognize objects (and people) in images. By repeatedly testing NASNet on images, scoring its recognition performance, and feeding those scores back into the design process, AutoML was able to improve upon its own creation.
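To make that feedback loop concrete, here is a minimal, self-contained sketch of reinforcement-learning-style architecture search in the spirit of AutoML. Everything in it is illustrative: the three-option search space, the `evaluate` function (a stand-in for actually training a child network on image data and measuring its validation accuracy), and the REINFORCE-style update are assumptions made for the sketch, not Google’s actual AutoML code.

```python
import numpy as np

# Hypothetical search space: each candidate architecture is one choice
# per design decision. Real systems search far larger spaces.
SEARCH_SPACE = {
    "num_layers":  [2, 4, 6],
    "filter_size": [3, 5, 7],
    "width":       [32, 64, 128],
}

rng = np.random.default_rng(0)

# "Controller": one learnable preference (logit) per option, per decision.
logits = {k: np.zeros(len(v)) for k, v in SEARCH_SPACE.items()}

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sample_architecture():
    """Sample one child architecture; remember which options were picked."""
    picks = {k: rng.choice(len(v), p=softmax(logits[k]))
             for k, v in SEARCH_SPACE.items()}
    arch = {k: SEARCH_SPACE[k][i] for k, i in picks.items()}
    return arch, picks

def evaluate(arch):
    """Placeholder reward. In a real system this step would train the
    child network on images and return its validation accuracy."""
    score = (0.5
             + 0.05 * SEARCH_SPACE["num_layers"].index(arch["num_layers"])
             + 0.03 * SEARCH_SPACE["width"].index(arch["width"]))
    return score + rng.normal(0, 0.02)  # noisy, like a real training run

baseline, lr = 0.0, 0.5
for step in range(200):
    arch, picks = sample_architecture()
    reward = evaluate(arch)
    baseline = 0.9 * baseline + 0.1 * reward   # moving-average baseline
    advantage = reward - baseline
    for k, i in picks.items():                 # REINFORCE update:
        probs = softmax(logits[k])             # raise the probability of
        grad = -probs                          # choices that scored above
        grad[i] += 1.0                         # the running baseline
        logits[k] += lr * advantage * grad

best = {k: SEARCH_SPACE[k][int(np.argmax(logits[k]))] for k in SEARCH_SPACE}
print("controller's preferred architecture:", best)
```

The loop mirrors the article’s description: sample a candidate design, score it, and feed the score back so that better-performing design choices become more likely the next time around.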
Building AI can be a laborious, time-consuming process for AutoML’s human counterparts. Designing an algorithm for a specific task is relatively straightforward (easy for me to say), but those algorithms must assimilate immense amounts of data in order to learn and evolve. By automating the process of feeding that data to AI systems and refining them, the entire field of AI development could grow exponentially. “Today, these are handcrafted by machine-learning scientists and literally only a few thousands of scientists around the world can do this,” Wired quoted Google CEO Sundar Pichai as saying. “We want to enable hundreds of thousands of developers to be able to do it.”
Google’s AI can create better machine-learning code than humans can. Image source: Pixabay.
Image recognition has myriad practical applications. Self-driving cars use it to identify obstacles and possible dangers in their paths, and some countries (Australia, for instance) are experimenting with facial-recognition algorithms in place of passports. If you’re more of a glass-half-empty kind of person, it may have occurred to you that this type of AI could also be used in surveillance software by an Orwellian government. Google is sensitive to these possibilities: DeepMind, a research outfit owned by Google’s parent company, Alphabet, recently created a think tank focused on the moral and ethical questions that AI raises. Additionally, many of Google’s contemporaries (Amazon, Facebook, Apple) are members of the Partnership on AI to Benefit People and Society, an organization seemingly founded for the sole purpose of preventing the events predicted in “Terminator 2: Judgment Day” from ever becoming a reality. Governments around the world are also devising regulations for the proliferation of AI into areas of concern, such as surveillance and weapon design.
This development raises the question of what becomes of all those new jobs that AI was supposed to create for those of us who expected to craft complex algorithms, monitor performance, and implement improvements. Does it mean that these jobs will be handed over to special AI bots instead? But then who will evaluate how well the AI-creation bots are doing? Won’t that be a job for actual human engineers? Well, maybe not. Maybe it will just be AI bots all the way up.