Humans need AI’s help in cybersecurity for several reasons. The skills shortage is one. Offloading the monotonous tasks that lead to burnout is another. A third is performing the kind of deep analysis humans simply can’t do at scale.
Yet many people, including those in IT and cybersecurity, remain skeptical of AI’s benefits in cybersecurity, or are simply wary of AI taking jobs away from humans. The truth is, AI can’t function without human interaction. As a Trustwave blog post pointed out, “Many systems today can use data analytics to detect anomalies in their environment, but they can’t tell you if that anomaly is something good or something bad.”
Humans and AI have to work in partnership to get the most out of the technology.
AI Is Not Out to Replace You
This misconception simply can’t be stressed enough. The robots are not out to replace you; they are there to allow you to do your job better.
“The best analogy for AI and cybersecurity professionals exists between mathematicians and calculators,” said Tim Wade, technical director, CTO Team at Vectra, in an email comment. “Did calculators materially change how math was conducted? Absolutely—for starters, they largely eliminated the need to maintain proficiency with slide rules and log tables. Were (the few) mathematicians associated with creating log tables displaced? Maybe?”
What happened instead is that calculators not only enabled faster calculations but extended the exploration of mathematical concepts to a much wider audience. AI and cybersecurity professionals have a similar relationship, said Wade.
“There is no shortage of tasks for people, and these people will stand on the shoulders of their AI tooling.”
How AI Functions as a Security Tool
AI excels at managing noise, an area where humans struggle. At the same time, AI needs human input to recognize context and decide whether a flagged anomaly is actually a problem.
AI assists with cybersecurity in many ways depending on how it is integrated into a company or organization’s systems, explained Cody Michaels, application security consultant at nVisium.
“Weak AI—that is to say, AI that doesn’t do much in the way of thinking—[is] nothing more than glorified algorithms going through a predefined and static ‘if this, then that’ style of functionality,” said Michaels in an email interview. “Stronger AI will perform actions based on models and variables that can be self-altered depending on the main intent of the system.”
Examples of weak AI include packet monitoring and virus-style scanning that flags known threats. Strong AI, by contrast, both replicates what weak AI can do and builds on it, handling tasks such as alerting humans to suspicious behavior on a network or system from a user with valid credentials.
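As a rough sketch of the distinction, the contrast can be illustrated in a few lines of Python. Everything here is hypothetical: the hash value, the metric, and the thresholds are invented for illustration, not drawn from any real product. The “weak” check is a static if-this-then-that rule; the “stronger” detector updates its own model of normal as data arrives.

```python
# Hypothetical sketch: "weak AI" as a static rule vs. a self-updating baseline.
# All names, values and thresholds are illustrative only.

KNOWN_BAD_HASHES = {"e99a18c428cb38d5f260853678922e03"}  # placeholder, not a real signature

def weak_ai_scan(file_hash: str) -> bool:
    """'If this, then that': flag only exact matches to known threats."""
    return file_hash in KNOWN_BAD_HASHES

class BaselineDetector:
    """'Stronger' behavior: keep a running mean and variance of some metric
    (say, bytes sent per hour) and flag large deviations from it."""

    def __init__(self) -> None:
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x: float) -> None:
        # Welford's online algorithm: the model alters itself with each sample.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def is_anomalous(self, x: float, z_threshold: float = 3.0) -> bool:
        if self.n < 2:
            return False  # not enough history to judge
        std = (self.m2 / (self.n - 1)) ** 0.5
        return std > 0 and abs(x - self.mean) / std > z_threshold
```

The weak scanner misses anything not already on its list; the baseline detector can flag a traffic spike it was never explicitly told about, which is the sense in which its behavior is “self-altered.”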
“While many aren’t even aware of this fact, we all actually do have a style of typing in the same way we, let’s say, have a unique style of walking. So, a strong AI would be able to spot and flag the fact that someone who only has a words-per-minute typing speed of 15 is suddenly typing like they’re on an episode of ‘Mr. Robot,’” said Michaels.
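Michaels’ keystroke example can be sketched the same way. The check below is a stand-in, with an invented tolerance and no real keystroke-dynamics model behind it; it simply captures the idea of comparing a session against a user’s historical baseline.

```python
# Hypothetical sketch of the keystroke idea above: flag a session whose
# typing speed departs sharply from the user's historical baseline.
# The tolerance factor is an invented illustration, not a real product setting.

def flag_typing_anomaly(baseline_wpm: float, session_wpm: float,
                        tolerance: float = 2.0) -> bool:
    """Flag if the session speed is more than `tolerance` times the user's
    baseline, or implausibly far below it."""
    if baseline_wpm <= 0:
        return False  # no baseline yet; nothing to compare against
    ratio = session_wpm / baseline_wpm
    return ratio > tolerance or ratio < 1.0 / tolerance

# A 15-WPM user suddenly typing at 120 WPM looks like someone else at the
# keyboard -- exactly the "Mr. Robot" scenario described above.
```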
The AI/Human Partnership
“Automate the boring stuff,” software developer Al Sweigart once advised. That’s where AI shines: the tasks that must be done but are mind-numbingly tedious for most cybersecurity professionals. (Do you want to read logs all day to find a single anomaly? Of course not. No one does.) So, working with AI to strengthen your security posture is all about time and resource management, according to Michaels.
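To make the log-reading example concrete, here is a minimal sketch of that kind of automation. The log format, masking rule, and rarity threshold are assumptions for illustration; real tooling would be far more sophisticated. The idea is simply to surface the lines whose pattern is rare instead of reading everything.

```python
# Hypothetical sketch of "automating the boring stuff": instead of reading
# logs all day, surface only the lines whose normalized pattern is rare.
import re
from collections import Counter

def rare_log_lines(lines, max_count=2):
    """Mask the variable parts (numbers) of each line, count the resulting
    patterns, and return lines whose pattern occurs at most `max_count` times."""
    def normalize(line):
        return re.sub(r"\d+", "<N>", line)
    counts = Counter(normalize(l) for l in lines)
    return [l for l in lines if counts[normalize(l)] <= max_count]

logs = [
    "login ok user=101", "login ok user=102", "login ok user=103",
    "login ok user=104", "disk failure on /dev/sda1",
]
print(rare_log_lines(logs))  # only the disk-failure line surfaces
```

The four routine logins collapse into one common pattern and drop out; the one-off failure line is what a human gets handed for review.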
“My advice for anyone looking to implement AI in their processes is to start small,” Michaels stated. “For example, think about a handful of tasks that you can document in full beforehand so there will be a low impact if done incorrectly a few times. Once this type of task is being taken care of, you can move on to bigger hurdles because now you’ve got more bandwidth to deal with them.”
Think of AI as the coworker who offers a second look at what you’ve done, to make sure everything is correct and there are no glaring errors that could cause problems.
“Just keep in mind that this is another layer to your security stance,” said Michaels. “AI is a tool like any other, but this particular tool has the promise to grow in effectiveness and return, as you continue to scale and include more data, as well as the time you put into working with it.”
By Security Boulevard