Dark A.I. - A Cybersecurity Time Bomb?

A.I. has no moral compass, at least not in the near future. The same algorithms and learning capabilities can be used for dark endeavors as easily as for good ones. Dark A.I. may be every bit as competent as its better-intentioned twin.

All it will take to launch Dark A.I. is to feed it some initial examples that focus on negative traits, let it draw from the same ocean of data available to good A.I., then turn it loose to learn and grow.

A.I. algorithms will continue to be public information, so there is no protection in limiting access to the algorithms themselves. The bad guys will have the same guns as the good guys.

At the rate the industry is heading, the bullets won’t be too hard to come by either. Getting at, or hacking into, data hasn’t been much of a problem to date. As we’ve seen, a few breaches here and there can expose millions, or even hundreds of millions, of email accounts and provide a king’s ransom of information.

Combine hacked data with the enormous amount of publicly available information on social media, and it’s easy to see how Dark A.I. could get going quickly. Dark A.I. may not have access to all the data available to good A.I., but it would be a mistake to think it won’t get enough to do significant damage.

Dark A.I. will read texts, emails and voicemails to learn all the important dates, names, locations and relationships in your life. It will know your GPS history. It will scan your photos to learn people, places and things. It will learn your schedule and when your home is empty. It will know where your children go to preschool, your dog’s name, your financial health, your daily patterns, habits, friends, vices and a wealth of other information. It will have all of this for you, your children, your friends and the nanny who cares for your kids.
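To make the harvesting concrete, here is a toy sketch. The message and the patterns are invented for illustration; a real system would run language models across millions of messages, but even a few regular expressions show how quickly a profile accumulates:

```python
import re

# Toy sketch of the harvesting described above, reduced to a few regular
# expressions over a single invented message. A real system would use
# language models at scale; the principle is the same.

MESSAGE = ("Hey Dana, drop-off at Sunny Days Preschool is at 8:30 on "
           "06/14/2024. We're away June 20-27, so Rex stays with the nanny.")

DATE_RE = re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b")   # e.g. 06/14/2024
TIME_RE = re.compile(r"\b\d{1,2}:\d{2}\b")           # e.g. 8:30
NAME_RE = re.compile(r"\b[A-Z][a-z]+\b")             # crude: capitalized words

profile = {
    "dates": DATE_RE.findall(MESSAGE),
    "times": TIME_RE.findall(MESSAGE),
    "names_and_places": NAME_RE.findall(MESSAGE),
}
print(profile)  # every extracted fact becomes one more entry in a profile
```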

At first, Dark A.I. will focus on the low-hanging fruit. It will use the assortment of harvested numbers and names as candidate passwords to break into your protected data. Or the hackers will sell the information to others who may try to benefit through highly targeted physical robbery, or worse.
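A minimal sketch of that guessing step, with all personal details invented for illustration (real cracking tools apply far larger mutation rule sets):

```python
from itertools import product

# Hypothetical sketch of the "low-hanging fruit" attack: turning harvested
# personal details into password guesses. All details are invented.

names  = ["rex", "dana", "sunny"]        # pet, family and place names
years  = ["1984", "2016", "2024"]        # birthdays, anniversaries
extras = ["", "!", "123"]                # common suffixes

candidates = {n.capitalize() + y + e for n, y, e in product(names, years, extras)}
print(len(candidates), "guesses, e.g.", sorted(candidates)[:3])
```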

While the aggregation of this information will create opportunities for economic and political exploitation, it doesn’t stop at aggregation. Dark A.I. will be able to deduce things about you and your relationships. Like Sherlock Holmes on steroids, its power of deductive reasoning will be formidable and will only grow over time. It will find vulnerabilities that you think are private, and even vulnerabilities that you do not know you have. It will generate a detailed psychological profile of you and of hundreds of millions of others, and rank everyone by how closely they align with the vulnerabilities an attacker is looking for.
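At its core, that ranking step could be as simple as a weighted match between harvested profiles and a target list of traits. The traits and weights below are invented for illustration:

```python
# Hypothetical sketch of the ranking step: scoring profiles by how well
# they match a target list of vulnerabilities. Traits and weights invented.

WEIGHTS = {"financial_stress": 3.0, "gambling_debt": 2.0, "secret_affair": 4.0}

profiles = [
    {"name": "target_a", "traits": {"financial_stress", "gambling_debt"}},
    {"name": "target_b", "traits": {"secret_affair"}},
    {"name": "target_c", "traits": {"none_known"}},
]

def score(profile: dict) -> float:
    # Sum the weight of every known trait that matches the target list.
    return sum(WEIGHTS.get(t, 0.0) for t in profile["traits"])

for p in sorted(profiles, key=score, reverse=True):
    print(p["name"], score(p))  # most exploitable first
```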

Nefarious individuals will deploy Dark A.I. for personal financial gain.

Foreign nationals may do the same, but they will also use Dark A.I. to recruit spies from the entire population of their adversaries. They will search for vulnerabilities in people who have access to critical information: government officials, military personnel, defense contractors and, perhaps most critically, employees of technology companies with access to data for many millions of users. Dark A.I. will be able to find the needle no matter how large the haystack.

Given such risk, a fair question arises: why does the focus seem to be mostly on Internet infrastructure rather than on Dark A.I.? A few thoughts come to mind.

First, infrastructure attacks are immediately tangible. There is no delay between the attack and its impact. Where an infrastructure attack is like a bomb going off, a Dark A.I. attack will be like a biological virus that takes longer to spread and to be felt. It’s easier to see, and focus on, the tangible.

Second, infrastructure attacks are more black and white. Any software that attacks the power grid, the financial markets or medical institutions is easily categorized as intrinsically bad. Unlike A.I., there is no good side. It’s harder to address a problem when there is no clear way to separate the bad from the good.

Third, the focus on infrastructure is a focus on the how of the Internet. The how is distinct from the why. When infrastructure goes down, the attack doesn’t force the question of whether it should have been up in the first place. Instead, the focus is on getting the infrastructure back online as quickly as possible to restore things to their prior state, and the assumption is that in doing so, the damage will be largely repaired.

In contrast, a Dark A.I. attack would generally be on the why of the Internet. It brings into question whether making this data so readily available results in more bad than good. The very thing that is supposed to help us hurts us. The effects of this sort of attack are longer lasting. It could degrade people’s will to use the very tool that is supposed to help them. Things don’t return to their prior state as easily as they do when infrastructure is restored. This sort of conundrum is much more difficult to digest and easier to postpone to a later time.

Fourth, the companies leading the A.I. revolution, the Tech industry, are all focused on the good. Every innovation is presented as something that betters the human condition. Humans are biased, and it’s difficult for us to see the negative in something we want to see as intrinsically good.

And since the A.I. visionaries generally reside in the Tech industry or academia, there are no experts in government to ask the questions, see beyond the horizon, or sound the alarm. That won’t happen until actual Dark A.I. attacks occur at a level significant enough to get public attention.

But Dark A.I. won’t stop with the why; over time it will also learn how to attack the how. It will eventually find vulnerabilities in infrastructure as it probes for weaknesses and learns from its failures.
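That probe-and-learn loop already has a mundane ancestor in fuzz testing. Here is a minimal sketch against a stand-in target; the target function and its hidden weakness are invented for illustration:

```python
import random

# Minimal sketch of a probe-and-learn loop, i.e. crude fuzzing. A real
# attack would probe actual infrastructure software, not this stand-in.

def target(data: bytes) -> None:
    if b"\xff" in data:                  # hidden weakness
        raise RuntimeError("crash")

corpus, crashes = [b"hello"], []
for _ in range(1000):
    seed = bytearray(random.choice(corpus))
    seed[random.randrange(len(seed))] = random.randrange(256)  # mutate a byte
    try:
        target(bytes(seed))
        corpus.append(bytes(seed))       # input survived: reuse as a seed
    except RuntimeError:
        crashes.append(bytes(seed))      # failure found: a lead to investigate
print(f"{len(crashes)} crashing inputs found out of 1000 probes")
```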

When it comes to Dark A.I. attacks, it’s really not a question of if. Rather, it is a question of when we can do what, to mitigate how much.

We’ve put together some broad categories for addressing the issue of Dark A.I. They are arranged below in roughly decreasing order of effectiveness and practicality.

First, we can find ways to limit access to data. While industry is making advances in this area, governments can also contribute. Governments can set regulations on data security to ensure industry compliance. They may need to set standards for employees with access to large volumes of user data, much as there are standards for doctors, lawyers and bankers. Given the recent trend of calling information the world’s most valuable resource, these sorts of regulations may be justifiable. Such efforts may need to happen at an international level to be effective, making them far more difficult to implement.

Second, perhaps it’s possible to use good A.I. to combat Dark A.I. Conceptually, this approach would be similar to using venom to create anti-venom, and it would be an ongoing battle. For example, databases could be seeded with false information, where only the good A.I. knows the real from the fake. This would lead Dark A.I. to build false and, hopefully, ineffective knowledge.
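This seeding idea resembles the existing practice of honeytokens. A minimal sketch, assuming real records are tagged with a keyed hash so that only the key holder can tell them from decoys (all record contents invented):

```python
import hmac, hashlib, secrets

# Minimal honeytoken-style sketch: real records carry a tag derived from a
# secret key, decoys carry random tags. Only the key holder (the "good"
# side) can tell them apart. All record contents are invented.

SECRET_KEY = secrets.token_bytes(32)     # never leaves the defender

def tag(record_id: str) -> str:
    return hmac.new(SECRET_KEY, record_id.encode(), hashlib.sha256).hexdigest()[:16]

def real_record(record_id: str, **fields) -> dict:
    return {"id": record_id, "tag": tag(record_id), **fields}

def decoy_record(record_id: str, **fields) -> dict:
    return {"id": record_id, "tag": secrets.token_hex(8), **fields}

def is_real(record: dict) -> bool:
    # Constant-time comparison so the check itself leaks nothing.
    return hmac.compare_digest(record["tag"], tag(record["id"]))

db = [
    real_record("emp-1001", name="Alice", phone="555-0101"),
    decoy_record("emp-1002", name="Bob", phone="555-0199"),
]
for rec in db:
    print(rec["id"], "real" if is_real(rec) else "decoy")
```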

Third, society could mitigate some of the impact of Dark A.I. by simply giving up the concept of privacy. A significant goal of Dark A.I. would seem to be extortion, which depends on the threat of revealing private information. If people either don’t care, or if regulations are put in place to grant immunity for certain kinds of secrets revealed through Dark A.I., some of the impact may be mitigated. However, this approach doesn’t address the ability of Dark A.I. to hack directly into systems for economic gain, steal technological secrets, or attack infrastructure.

Fourth, governments could attempt to restrict access to the algorithms. On so many levels, such an approach would be infeasible. The algorithms already exist and are public knowledge. A.I. is embedded in our academic institutions, and thus in our collective knowledge. It won’t become secret again.

Fifth, simply do nothing and trust that Dark A.I. won’t be a problem. Since cybersecurity is already a major issue and will only become a larger one, it’s foolish to think that hackers won’t use the better tool now that one has come along.

There may be other options for addressing Dark A.I., but the most important consideration is when more than how. Once something has been learned, it probably can’t be unlearned. Acting proactively to stop or mitigate Dark A.I. is the most crucial part of dealing with it. But then again, perhaps there is a way to use A.I. to regress Dark A.I.