Artificial intelligence (AI) is being discussed quite a bit; in fact, the term may be used too much. After all, it means different things in different situations, and vendors use it particularly loosely to tap into a hot market and, hopefully, sell more product. When it comes to information and cyber security – a realm so vital to a company's reputation – there's a need to see through the hype and ask the questions that really matter.

For starters, to me, the main question is not whether AI will find its way into our daily lives, but what it will mean to us as cybersecurity professionals. What are the security risks when it is adopted by various business lines for different purposes? What are the benefits to our profession? And what are the risks of not considering AI's potential to help us do our jobs better – or of failing to monitor closely what the business will use it for?

Like so many technology disruptions, AI will change part of the business landscape, and it will also shape our own cybersecurity backyard. The logic is implacable: where there is a business opportunity, there are investments to be made, and AI presents potential across many aspects of modern life.

In its simplest essence, AI perceives and processes information from its environment and can use incomplete data to maximize the success rate of completing certain tasks. As you can guess, I have just described countless activities that we as humans perform every day in the medical, financial, industrial, psychological, analytical and military sectors.

At the moment, I don't think we should overly focus on its potential to replace cognitive beings. Instead, we should appreciate that AI can leverage broader data input, discover patterns we can't easily distinguish, and sort and match data much faster than we can. Moreover, it never gets tired and can work 24×7. Ultimately, this will result in potentially better and faster decision-making, where emotions or artistic sense might not be the primary quality by which we measure output.

That said, not all of AI is “rosy,” and when matched with robotics, it can be the stuff of nightmares. AI comes with challenges: while it can autonomously trigger actions based on algorithmic logic, that logic must be flawless. If it isn't, AI will make a lot of “mistakes,” and very fast. The underlying algorithms rely on data, so input quality must be tightly controlled – otherwise, garbage in, garbage out, right? It's therefore imperative that organizations decide what should and shouldn't be automated, and that the approach is validated by humans first. An AI strategy done well can effectively address a skills shortage; done wrong, with a “set and forget” mentality, it will backfire.
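To make that boundary concrete, here is a minimal sketch in Python of a human-validated automation gate. The Alert type, the validation rules and the allow-list of actions are all hypothetical placeholders; the point is simply that input is checked before it drives anything, and that only pre-approved actions ever run without a person in the loop.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    severity: int          # 1 (low) to 5 (critical)
    indicators: list[str]  # e.g. file hashes, IP addresses

# Hypothetical allow-list: the actions humans have pre-approved for automation.
AUTOMATABLE_ACTIONS = {"quarantine_file", "block_ip"}

def validate(alert: Alert) -> bool:
    """Reject malformed input before it triggers anything –
    the garbage-in, garbage-out guard."""
    return (
        bool(alert.source)
        and 1 <= alert.severity <= 5
        and bool(alert.indicators)
        and all(i.strip() for i in alert.indicators)
    )

def handle(alert: Alert, proposed_action: str) -> str:
    """Automate only validated input and pre-approved actions;
    everything else goes to a human."""
    if not validate(alert):
        return "dropped: failed input validation"
    if proposed_action not in AUTOMATABLE_ACTIONS:
        return "escalated: action requires human approval"
    return f"automated: {proposed_action}"
```

The design choice worth noting is that automation here is opt-in per action, not opt-out: anything not on the list is escalated by default, which is the opposite of a “set and forget” mentality.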

Still, keep in mind that AI can also reflect the flaws of its creators, and humans come with their fair share of them. Let's focus on two examples: trust and transparency.

Trust – whether too little or too much of it – can make AI react in ways we did not foresee. Sometimes emotionless decision-making might be best; sometimes not. The more AI we create, the more we will need to deploy a transposable trust model so these systems can interact with one another. After all, in the AI world there is often little to no room for human interference if you want to capture its full benefit.

Transparency is another issue. As a society, we are not ready to entrust to machines many of the decisions we currently make ourselves – and security is a particularly sensitive area. Without transparency and accountability in AI, how can we begin to tackle the notion of responsibility when something goes wrong? We must also consider at what point the use of AI will become mandatory under certain conditions. Will tomorrow's doctor be personally liable for not using an AI and misdiagnosing potentially cancerous cells? And, as happens with humans, what if the physician simply failed to update the AI's codebase?

It's no secret that there is a lack of qualified security personnel today. That said, I feel it is our responsibility to explore ways to use AI as soon as possible in order to remove from our task lists any item that can be automated. As a rule of thumb, I think of the Pareto Principle – AI should do the 80 percent of the job so we can focus on the 20 percent where human interaction and decision-making is a must.

Pareto likely never saw this coming, yet the formula applies to our profession: AI could allow us to free up time and deliver more value with the same salary cost structure.
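As a thought experiment, that 80/20 split might look like the sketch below, again in Python. The confidence function and the 0.9 threshold are assumptions for illustration, not a prescription; the idea is only that automation closes the routine bulk of alerts while anything uncertain lands in an analyst's queue.

```python
from typing import Callable, Iterable

def triage(alerts: Iterable[dict],
           confidence: Callable[[dict], float],
           threshold: float = 0.9) -> tuple[list[dict], list[dict]]:
    """Split alerts Pareto-style: automation closes the high-confidence
    bulk, and the uncertain remainder is queued for human analysts."""
    automated, for_humans = [], []
    for alert in alerts:
        if confidence(alert) >= threshold:
            automated.append(alert)   # the routine ~80 percent
        else:
            for_humans.append(alert)  # the ~20 percent that needs judgment
    return automated, for_humans
```

In practice the threshold would itself be a human decision, reviewed regularly – which is exactly the kind of oversight work that belongs in the 20 percent.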

And believe me, we will need that time, because part of that 20 percent will be spent analyzing the new business risks of using AI in the real world – one fraught with real and increasing security challenges.

Kudelski Security Team