Artificial intelligence, or AI, is increasingly woven into day-to-day life. From asking your cell phone for driving directions to kicking back at the end of the day and letting Netflix suggest a movie, AI is taking on more responsibilities. But amid a surge in digital crime, a new issue is gaining traction: the question of who decides on cybersecurity measures. It comes down to a matter of trust and, to no one’s surprise, AI is at the center of the controversy.
Hunting for threats and searching for anomalies are key components of cybersecurity: if something does not “smell” right, it should be flagged and investigated. But as cyber attacks grow in both sophistication and volume, with the average business or other organization targeted more than 1,000 times a week according to published reports, the question becomes who will hunt down and respond to the cascade of threats.
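As a toy illustration of what “flagging” can mean in practice (the numbers and the three-sigma threshold here are invented, not drawn from any real product), an anomaly check can be as simple as comparing today’s activity against a baseline:

```python
from statistics import mean, stdev

# Hypothetical daily counts of failed logins for one account.
baseline = [3, 5, 4, 2, 6, 3, 4]
today = 42

mu, sigma = mean(baseline), stdev(baseline)
z = (today - mu) / sigma
if z > 3:  # more than three standard deviations above normal
    print(f"ANOMALY: {today} failed logins (z-score {z:.1f}); investigate.")
```

Real threat-hunting tools layer far more context on top, but the core idea is the same: establish what normal looks like, then flag what deviates from it.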
Recent tech advances suggest the answer will involve AI. Microsoft, for example, already uses AI to help evaluate cybersecurity solutions and has gamified the machine learning process: an “attacker” tries to steal confidential information from a network, a “defender” attempts to evict the attacker or mitigate its actions, and the reward is “winning” the game.
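This is not Microsoft’s actual code, but as a rough sketch, with an invented three-node network and made-up scoring, the gamified loop looks something like this:

```python
import random

# Toy network: each node lists the nodes reachable once it is compromised.
network = {"web": {"app"}, "app": {"db"}, "db": set()}

compromised = {"web"}   # the attacker's initial foothold
attacker_score = 0

for step in range(10):
    # Attacker's move: spread from a compromised node to a new one.
    frontier = set().union(*(network[n] for n in compromised)) - compromised
    if frontier:
        compromised.add(random.choice(sorted(frontier)))
        attacker_score += 10          # reward: new confidential access
    # Defender's move: every third step, re-image a node to evict the attacker.
    if step % 3 == 2 and len(compromised) > 1:
        compromised.discard(random.choice(sorted(compromised - {"web"})))
        attacker_score -= 5           # penalty: access was lost

print(f"Attacker's score after the game: {attacker_score}")
```

In the machine learning version, both sides learn from those rewards and penalties over thousands of such games, steadily improving their strategies.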
But what will happen when AI can rate a company’s security environment and suggest how best to configure and maintain it? Entrepreneurs and others should start considering these questions now, because they will be unavoidable soon enough. If we have a tool that can perform an audit, we will have to ask ourselves why we should not take the extra step and let the tool do the entire job.
We can illustrate the point now. Say a business decides to implement multifactor authentication (MFA), which requires a user to provide two or more verification factors, such as first entering a password and then entering a code sent to a mobile or other device, before gaining access to a file or other resource. And say a decision is made to have MFA loaded and active on all devices, so no employee can sign on without satisfying the multiple challenges.
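For readers who want to see the moving parts, here is a minimal sketch of such an MFA gate in Python. The helper names and the six-digit code are hypothetical stand-ins; a real deployment would rely on a vetted identity provider rather than hand-rolled checks:

```python
import hmac
import secrets

def send_code_to_device(user: str) -> str:
    """Generate a one-time code and (hypothetically) push it to the
    user's registered device. Actual delivery is stubbed out here."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    print(f"[stub] sending one-time code to {user}'s device")
    return code

def mfa_login(user: str, password: str, stored_password: str) -> bool:
    # Factor 1: something the user knows.
    if not hmac.compare_digest(password, stored_password):
        return False
    # Factor 2: something the user has, proven by a code sent to their device.
    expected = send_code_to_device(user)
    entered = input("Enter the code sent to your device: ")
    return hmac.compare_digest(entered, expected)
```

Access is granted only when both checks pass; failing either one keeps the resource locked.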
But then an exception comes up where, for some reason, an employee needs access but cannot satisfy the MFA challenges. Or they need a device, pronto, and it has not been configured for MFA. As things currently stand, a human decides whether to allow the exception. What happens when that human grants multiple exceptions, each one “reasonable” on its own, but together they put the company at greater risk of a cyber intrusion? Or what happens if an automated cybersecurity tool audits a network, finds and flags multiple weaknesses, but no one follows up on the recommendations?
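The aggregate-risk problem is easy to see in miniature. This sketch, using a made-up exceptions log and an assumed policy threshold, flags what no single approval decision reveals:

```python
from datetime import date

# Hypothetical log of MFA exceptions, each reasonable on its own.
exceptions = [
    {"user": "alice", "reason": "lost phone", "granted": date(2024, 1, 5)},
    {"user": "bob", "reason": "new laptop not enrolled", "granted": date(2024, 1, 9)},
    {"user": "carol", "reason": "traveling, no signal", "granted": date(2024, 1, 12)},
]

MAX_ACTIVE_EXCEPTIONS = 2  # assumed policy threshold

def audit_exceptions(log: list[dict]) -> None:
    """Flag the aggregate risk that no single approval decision reveals."""
    if len(log) > MAX_ACTIVE_EXCEPTIONS:
        print(f"RISK: {len(log)} active MFA exceptions exceed the "
              f"threshold of {MAX_ACTIVE_EXCEPTIONS}; review or revoke.")
    for exc in log:
        print(f" - {exc['user']}: {exc['reason']} (since {exc['granted']})")

audit_exceptions(exceptions)
```

An AI-driven tool could run a check like this continuously; the open question is who acts on the flag.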
These and other scenarios appear to tilt the argument in favor of letting AI carry the ball all the way on cybersecurity. But some cautionary points should also be raised. For example, what if an AI program is created with certain biases? Those shortcomings, even if unintentional, could affect all of its decisions, leaving a broad swath of companies across industries sharing the same vulnerabilities.
The solution, however, is not necessarily an either/or: a situation like “The Matrix,” where we relinquish all power to AI, or the reverse, where we simply pull the plug. Instead, we need to strike a balance and work toward a better understanding of how AI arrives at its decisions. The demand for faster cybersecurity responses is increasing, and the only way to meet it is to increase AI’s involvement.
But humans must retain the ultimate responsibility for pulling the levers and auditing AI’s decisions, rather than simply accepting them without question. The challenge will then fall to cybersecurity providers to train and promote data scientists and others who can peek behind the curtain of AI and understand its decision-making processes. Just as businesses that leverage AI will likely replace those that do not, cybersecurity providers that understand how AI operates will replace those that do not.
Carl Mazzanti is president of eMazzanti Technologies in Hoboken, providing IT consulting services for businesses ranging from home offices to multinational corporations.