We are living through a historic peak in hype for artificial intelligence. In the space of a single year, generative models have sneaked into our everyday lives and become the biggest headache for teachers correcting homework. This rapid development and widespread usage has spread the idea of artificial intelligence as a digital servant that thinks for us and keeps us entertained in conversation. That popular concept may actually be underestimating the potential of these tools, for good but also for harm. By the way, I used a free online image generator to create the picture below, so it can serve as the post thumbnail in the blog feed.
AI as a (new) threat
AI in threat detection
AI does one single thing, but it does it extremely well: it learns patterns. That is why it works well both for generation (creating data from the patterns it has learned) and for detection (spotting known patterns within larger amounts of data). And if AI is good at recognizing our face in a picture, it can certainly be good at detecting attack patterns in huge amounts of log data.
If you are familiar with security monitoring, you might have an idea of how a SIEM in a fairly big organization ingests enormous volumes of log data every day. If you don't know what a SIEM is, you can understand it as a system that gathers log data from different sources into one place so it can be analyzed for security purposes. A log source can be any device connected to the network that records whatever happens on it (log in/log out operations, network connections, file access/creation/deletion... you name it). In a company with several hundred laptops, a VPN server, a proxy server and several other intranet services, you can imagine the vast amount of data being generated every second. And all that data is (or should be) analyzed in search of possible threats and security events affecting your business.
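To make the idea concrete, here is a minimal sketch of what a SIEM does at ingestion time: events arrive from heterogeneous sources in different shapes and are normalized into one common schema so they can be searched and correlated together. All field names and sample events here are illustrative, not any real SIEM's schema.

```python
# Sketch of SIEM-style log normalization (hypothetical schema and data).

def normalize_vpn_event(raw: dict) -> dict:
    """Map a VPN server's log format onto the common schema."""
    return {
        "timestamp": raw["time"],
        "source": "vpn",
        "user": raw["username"],
        "action": raw["event"],      # e.g. "login", "logout"
        "src_ip": raw["client_ip"],
    }

def normalize_endpoint_event(raw: dict) -> dict:
    """Map a laptop agent's log format onto the same schema."""
    return {
        "timestamp": raw["ts"],
        "source": "endpoint",
        "user": raw["user"],
        "action": raw["operation"],  # e.g. "file_delete"
        "src_ip": raw.get("host_ip"),
    }

# Two events from different sources, now in one shape:
events = [
    normalize_vpn_event({"time": "2024-05-01T03:12:00Z", "username": "alice",
                         "event": "login", "client_ip": "203.0.113.7"}),
    normalize_endpoint_event({"ts": "2024-05-01T03:15:30Z", "user": "alice",
                              "operation": "file_delete", "host_ip": "10.0.0.5"}),
]

# Once normalized, a single query can correlate activity across sources,
# e.g. everything one user did regardless of which device logged it:
alice_activity = [e for e in events if e["user"] == "alice"]
```

That normalization step is what makes cross-source analysis possible at all: without a common schema, a VPN login and a file deletion on a laptop would be two unrelated lines in two unrelated formats.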
Over the last few years, machine learning ("AI") algorithms have come in really handy for helping analysts recognize suspicious patterns in the data. When you have that much information to analyze in search of threats, it saves a lot of precious time to direct the focus to where an attack could actually be. This has powerful applications in, for example, insider threat detection (a company's own employees trying to damage it). Insider detection involves analyzing behavioral patterns among employees that could indicate something bad is happening. These patterns can be simple things, such as an employee connecting at strange hours or abnormal numbers of attached files being sent to external email addresses, which could signal that important data is being leaked. Imagine having to monitor all this data manually. Instead, AI algorithms, often through simple statistics, can quickly identify these sorts of behaviors and highlight them to the analyst for a more in-depth review of the relevant cases. This is just one example of how AI can serve the purpose of efficiently monitoring an organization for cybersecurity threats.
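The "simple statistics" point can be illustrated with a z-score check: compare today's behavior against an employee's own historical baseline and flag large deviations for review. The data below is made up for illustration, and a real system would use more robust baselines, but the core idea is this small.

```python
import statistics

# Hypothetical daily counts of email attachments sent to external
# addresses by one employee; the last value is today (illustrative data).
daily_attachments = [3, 5, 2, 4, 3, 6, 4, 2, 5, 3, 4, 47]

history = daily_attachments[:-1]          # baseline: past days only
today = daily_attachments[-1]

mean = statistics.mean(history)
stdev = statistics.stdev(history)

# How many standard deviations today sits above the employee's own norm
z_score = (today - mean) / stdev

# Flag strong deviations for analyst review instead of raising the
# alarm on every fluctuation
if z_score > 3:
    print(f"ALERT: {today} external attachments today "
          f"(z-score {z_score:.1f}) -- flag for analyst review")
```

Note that the algorithm does not decide anything by itself: it only narrows thousands of daily events down to the handful worth a human analyst's time.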
From the cybersecurity point of view, it is clear that AI is gaining relevance, and there is a lot more to come in the short term. As has happened with every technological advance or revolution, there are and will be unforeseen implications. Not being agile enough in managing those implications will open the door to considerably negative consequences. There will be more to say on these matters.
Thank you for reading. Feel free to share your thoughts in the comments.