
Our top AI predictions for 2018

AI CEOs, advertisers, price fixers, and more: AI begins to reshape society across all levels as an increasing number of industries realize its potential


By Brian Santo, contributing writer

Only someone who hasn’t been watching the slow, incremental progress in artificial intelligence (AI) since the late ’60s could be surprised that the technology is so rapidly being integrated into so many aspects of technological life today. AI is popping up seemingly everywhere, from responding to voice commands aimed at some gadget on your counter or wrist to driving autonomous vehicles to discerning possible communications network security breaches.

AI is still being refined; there is progress yet to be made in the technology, and there are applications that remain untapped. Here, we offer a handful of what we consider to be sound predictions for AI in 2018, based on recent activity in the field.

AI gets deeper in SDN
AI is already used in cloud management and for identifying possible breaches in network security, and it is already available as a cloud-based tool.

For as long as software-defined networking (SDN) has been around, it remains a developing technology, and it depends on a broad set of supporting technologies that are themselves still maturing.

Transitioning a network to SDN is a delicate process. Managing flexible and almost endlessly configurable networks is already complex, and it is only going to become more so. The management process is already mostly automated; it is inevitable that managing these sophisticated networks for maximum efficiency and effectiveness will increasingly become the domain of AI.
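As a rough illustration of the kind of task an AI takes over in network management, the sketch below (Python, using scikit-learn on entirely synthetic flow statistics; the feature names are our own invention, not any vendor's API) trains an anomaly detector on baseline traffic and flags flows that don't look like it, the sort of signal a controller might surface for investigation.

```python
# Minimal sketch: flagging anomalous network flows for review.
# Telemetry is synthetic; real controllers would supply their own statistics.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic per-flow features: [packets/sec, bytes/packet, new connections/sec]
normal_traffic = rng.normal(loc=[500, 800, 20], scale=[50, 100, 5], size=(1000, 3))
suspect_traffic = rng.normal(loc=[5000, 64, 400], scale=[500, 8, 50], size=(5, 3))

# Learn what "normal" looks like, then score unseen traffic against it
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

flags = detector.predict(suspect_traffic)   # -1 = anomaly, 1 = inlier
for i, flag in enumerate(flags):
    if flag == -1:
        print(f"flow {i}: flagged for review")
```

Real SDN deployments work on far richer telemetry and act on the results automatically, but the principle is the same: learn the baseline, flag the departures.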

AI-based pricing will become more common
AI is already being used to control variable pricing for goods and services. Examples include gas at filling stations in Europe, office products at Staples, and vacation rentals through Vacasa. Using AI to set prices is going to become increasingly common in 2018.

The increased use will practically guarantee at least one AI-based variable pricing incident that will provoke a backlash.

Uber executives thought that surge pricing was a brilliant idea until it started gouging people trying to escape from dangerous circumstances. Drug companies dramatically raise prices on commonly used medicines all the time; society tends to accept double-digit percentage increases but will occasionally draw the line at triple-digit hikes. The point is that there are limits, and the precise location of those limits is not always obvious, even to humans.

AI-based pricing systems are designed to probe where those limits are, down to the level of the individual consumer, by constantly adjusting prices, ideally with each adjustment carefully modulated so that it doesn’t provoke a strong response. “Ideally” is the operative notion here. At least one AI is likely to stray far beyond where common sense would have suggested that it stop.
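To make the probing mechanism concrete, here is a minimal sketch in Python of the sort of loop such a system might run: an epsilon-greedy bandit that tries candidate prices, observes demand, and drifts toward whichever price earns the most revenue. The demand curve and every number in it are invented for illustration; no production pricing system is this simple.

```python
# Minimal sketch of price probing with an epsilon-greedy bandit.
# The demand curve below is a made-up stand-in for real consumer behavior.
import random

random.seed(0)
PRICES = [1.99, 2.49, 2.99, 3.49, 3.99]   # hypothetical candidate price points
EPSILON = 0.1                              # fraction of the time we explore

def simulated_demand(price):
    """Made-up demand curve: higher prices sell fewer units, with noise."""
    expected_units = max(0.0, 120 - 25 * price)
    return max(0, int(random.gauss(expected_units, 5)))

revenue = [0.0] * len(PRICES)
trials = [0] * len(PRICES)

for day in range(1000):
    if day < len(PRICES):                  # try every price at least once
        choice = day
    elif random.random() < EPSILON:        # occasionally probe a random price
        choice = random.randrange(len(PRICES))
    else:                                  # otherwise exploit the best so far
        choice = max(range(len(PRICES)), key=lambda i: revenue[i] / trials[i])
    units = simulated_demand(PRICES[choice])
    revenue[choice] += PRICES[choice] * units
    trials[choice] += 1

best = max(range(len(PRICES)), key=lambda i: revenue[i] / trials[i])
print(f"settled on a price of ${PRICES[best]:.2f}")
```

The point of the sketch is the behavior, not the math: the system keeps poking at the market, and nothing in the loop itself knows where the line between "optimal" and "outrageous" sits.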

Pricing variability aimed at specific individuals is possible but unlikely to be deployed in 2018, with the possible exception of some circumscribed market experiments. There is no way to make it seem fair, at least not in the next year.

AI-based pricing is also likely to result in competitive pricing practices that, had they been engaged in by humans, would amount to anti-competitive collusion. AIs will constantly evaluate multiple factors as they adjust prices, but profit will always be the most important factor, and the only way to maximize profit is for everyone to overcharge by the same amount, keeping prices inflated yet competitive with one another. The trick is to not overcharge by so much that it aggravates consumers en masse. It remains to be seen whether AIs fall into that pitfall.

From the standpoint of the AI industry and its enterprise customers, that’s all okay. Any backlash against AI pricing might be intense but will certainly be brief because any violations will be rooted in qualities that are inherent in markets, not in AI. The industry will assert — correctly — that any misstep was a learning experience for the AI. It will also assert — almost certainly incorrectly — that the infraction won’t be repeated.

More scrutiny of AIs, yet more opacity
More companies will commercialize AIs. As they do, more questions about how they operate are going to arise.

AIs have become so sophisticated that humans can no longer understand how some of them arrive at their results. This year, there was at least one instance of an AI developing its own language. We are already past the point at which humans can be entirely sure they can identify everything that AIs are doing.

The concern is this: if you don’t know how an AI is making decisions, how can anyone be certain that its decisions will have outcomes its managers find appropriate? The question pops up in many applications, notably in the development of autonomous vehicles. An AI driving a motor vehicle should avoid collisions, but sometimes collisions are unavoidable. Given the choice of inevitably hitting either a man or a moose, which is the appropriate choice? Hitting the moose will do far more damage to the vehicle.

This is why efforts are already under way to train AIs to explain their operation to humans; to show how they arrived at a result; to, in effect, justify themselves.
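One simple way to get that kind of justification, sketched below in Python with scikit-learn and synthetic data (the feature names are hypothetical), is to use an inherently interpretable model, such as a small decision tree, whose every prediction can be traced back to explicit, printable rules.

```python
# Minimal sketch: a model whose decisions can be printed as human-readable rules.
# Data and feature names are synthetic, for illustration only.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
feature_names = ["login_failures", "bytes_out_mb", "off_hours_access"]

# Synthetic "normal" (label 0) and "suspicious" (label 1) activity records
X = np.vstack([
    rng.normal([2, 50, 0.1], [1, 10, 0.1], size=(200, 3)),
    rng.normal([20, 500, 0.9], [5, 50, 0.1], size=(200, 3)),
])
y = np.array([0] * 200 + [1] * 200)

model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The model can, in effect, justify itself: its logic prints as readable rules
print(export_text(model, feature_names=feature_names))
```

Interpretable models like this trade away some accuracy for transparency; much of the current research is about getting similar explanations out of the far more opaque models that actually deliver state-of-the-art results.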

Which is all well and good. But that only answers how I deal with my own AI so that I can trust it. Next year, we’re going to have to confront a different question: How do I trust your AI? There are a lot of components to that.

How do I know that you, as the AI’s manager, have enough discernment to create boundaries for the AI that are reasonable? What’s reasonable? How can I be assured that you know how to control your AI — or even bother to try to institute controls? If I buy your AI to run my network, or my medical facility, or my electric utility, or my business, will you have any obligation to show me how your AI works? How will I know that I can trust your AI to operate the way that I want it to?

AIs will spark minimum salary discussion
Last year, both Stephen Hawking and Elon Musk weighed in on whether unregulated progress in AI will be safe. With two of the most prominent people on the planet opining on AI, the subject became so popular that even your mom has an opinion on the singularity now. Contemplating the singularity has already led to some hard questions about the consequences of AI and how we should respond to them.

In 2018, jobs lost to AIs will be relatively few, but enough to count. What happens when AIs become more capable and start taking over more jobs, and more types of jobs, as they inevitably will?

We’ve had technological revolutions before that have led to job displacement. Some people believe that AI will simply instigate another displacement, but there is a growing population of influential thinkers who believe that this time it will be different — this time, the jobs will simply be gone. Prominent venture capitalist Vinod Khosla, for example, believes, “For the first time, we may have more jobs destroyed than created.”

In capitalist systems, economy and society are inextricably linked. If there are fewer jobs, what will we do? 

In 2018, nothing will be done. That said, discussions about guaranteed minimum salary — already ongoing in Silicon Valley and in Washington think tanks — will briefly emerge as a question immediate enough and important enough that even your mom will be aware of it and develop an opinion on the subject.

AI will be used to beat at least one CEO into submission
AI has already begun to be incorporated into business systems. AI is used in trading; for mining databases for potentially useful information; for monitoring production chains and making sure sales channels respond accordingly; for analyzing consumer behavior; for identifying potential system problems and suggesting which to prioritize for correction; and much more.

One of the things that AI supposedly cannot do yet is exercise consistently good judgement. Exercising appropriate judgement is supposed to be a core competency of CEOs, which seems to make CEOs safe from losing their jobs to AIs any time soon.

But sometimes it just doesn’t matter what a CEO does; circumstances make a negative result inevitable. Meanwhile, some CEOs simply exercise consistently miserable judgement. Others feel that they can get away with balancing the interests of all of a company’s constituencies (customers, employees, the company itself, investors) during a financial era when investor return is the single most important priority — and if it isn’t, it is inarguable that some investors believe that it is.

Investor return is rooted in a company’s quarterly financial performance, and there are only so many factors involved in calculating financial performance — few enough that they can be represented in an AI’s algorithms. Any AI can manage a company for investor return; no judgement required. And for companies whose CEOs are already managing for investor return, what difference would it make whether an AI or a human was running things?

The thing is, for an AI to replace a CEO, the CEO would have to sign off on it, and which CEO would do that (as opposed to, say, signing off on replacing customer service representatives with AI-based chatbots)?

So in 2018, we do not expect an AI to replace a CEO. What we do expect, however, is that some investor group somewhere will trot out an analysis performed by an AI that demonstrates that the human CEO of a company that the group has an interest in is underperforming compared to the AI and will demand that the CEO change his or her behavior and start making decisions that conform to the AI’s.

AI-based ad placement
Every new innovation that can be used in some manner for decision-making, whether it’s used for increasing options or making the actual decisions, will immediately be applied in advertising in an effort to make consumers’ ad experience better.

And every new innovation makes advertising more aggravating.

The use of AI in any phase of ad development and placement will increase in 2018. And it won’t help because nothing ever does.

So stupid it’s inevitable in 2018

  • AIs will develop their own cryptocurrency. Forbes will consider putting those AIs on its list of the world’s richest people.
  • Some politician will declare that “AIs are people, my friend.”
  • Next year, AI will do this column.
