Artificial intelligence (AI) has revolutionized the way we live and work, offering limitless possibilities and opportunities. However, as with any technological advancement, it also presents new challenges and insecurities.
As engineers, we need to understand the risks and significance of securing data and infrastructure to prevent exploitation.
I was honored recently with a spot on a panel at the Cyber Summit hosted by Blu Ventures. My co-panelists came from varied backgrounds, which made for a rich discussion. Richard Puckett, CISO of Boeing; Rod Little of KPMG; Tim O’Shea, CTO of DeepSig; and I shared our insights and opinions on AI and what it means for cybersecurity.
Join me for some key takeaways from those conversations as we navigate the complexities of the digital landscape - and discover how we can secure our future with AI.
In today's ever-evolving technological landscape, artificial intelligence (AI) has emerged as a powerful tool that has revolutionized the way we work and live, from chatbots and voice assistants to self-driving cars. However, as with any new technology, AI also presents new challenges in security, making cybersecurity a critical concern.
Here are some thoughts.
Ultimately, I think the advent of AI is overwhelmingly positive.
LLM chatbots are especially helpful to us in terms of cyber operations process flows related to speed and efficiency. Human operators are capable of making highly nuanced decisions based on a lot of factors.
Some of those factors can be derived directly from data, whereas some require a deeper understanding of human relationships and emotions.
In any event, humans do not parse data quickly. LLMs can help us sort massive amounts of data in real time to support human operators in making effective decisions. Think of an LLM as a right hand, or of AI as a tool to expedite our processes.
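As a sketch of the idea, the triage step might look like this. The `llm_score` function below is a hypothetical stand-in for a real LLM call; a production system would swap in an actual model to rate each event's severity.

```python
# Sketch: pre-sorting a flood of security events so a human operator
# reviews the highest-priority items first.
# `llm_score` is a hypothetical stand-in for a real LLM severity rating.

def llm_score(event: str) -> float:
    """Stand-in scorer: a real system would ask an LLM to rate severity 0-1."""
    keywords = {"root": 0.9, "failed login": 0.6, "port scan": 0.5}
    return max((v for k, v in keywords.items() if k in event.lower()), default=0.1)

def triage(events: list[str], top_n: int = 3) -> list[str]:
    """Return the top_n events, most severe first, for human review."""
    return sorted(events, key=llm_score, reverse=True)[:top_n]

alerts = [
    "user bob failed login x5",
    "routine backup completed",
    "port scan detected from 10.0.0.7",
    "root shell spawned on web01",
]
print(triage(alerts, top_n=2))
```

The human operator still makes the final call; the model only decides what reaches their desk first.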
As with all tools, though, there is the risk of inaccurate or even malicious data feeding these LLMs. We need to be able to trust that we are getting accurate information to realize the efficiency gain - which means we also need to be able to trust that the data is safe and secure.
Unfortunately, AI introduces new cybersecurity risks: it offers opportunities for exploitation when the proper precautions and protections aren't in place.
We’ve seen this over and over again. New technology offers limitless possibilities for growth and advancement. But without a focus on cybersecurity and regulation, it also all but guarantees exploitation and misuse.
When AI is involved, it’s not necessarily "break in and steal" but the ability to take action as a bad actor. Think about this in terms of everyday use.
We’ve all seen or heard about pranksters feeding bad information to public chatbots, causing them to be especially impolite or even downright crass. While this may feel like harmless fun to some, the implications become a lot more serious when we consider a bad actor accessing the AI that's driving a car. It's now not harmless fun, but a dangerous liability.
That's not to scare you, but rather to make you aware of the risks. As we come to understand the threat surface better, standards will develop to help all of us protect against these types of attacks.
AI systems are nothing without data. Their algorithms function based only on the data they have available to them.
Knowing this, we can safely assume the attack surface is the data itself. A bad actor doesn't need a deep understanding of machine learning algorithms if they can simply feed bad data into those complex systems. The role of AI and machine learning in cybersecurity, then, becomes all the more pronounced.
That's why basic cybersecurity principles, such as infrastructure security, are especially important. This can't be emphasized enough: default security configurations are misconfigurations.
According to IBM, misconfigurations (or in some cases, default configurations) are some of the most common causes of data breaches.
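To make that concrete, here is a minimal sketch of the kind of check a configuration audit performs. The setting names and default values are illustrative examples, not tied to any particular product.

```python
# Sketch: flag settings that are still at insecure shipped defaults.
# Setting names and defaults below are illustrative, not from a real product.

INSECURE_DEFAULTS = {
    "admin_password": "admin",   # shipped default credential
    "debug_mode": "true",        # verbose errors leak internals
    "allow_anonymous": "true",   # unauthenticated access
}

def audit(config: dict[str, str]) -> list[str]:
    """Return the names of settings still at their insecure default."""
    return [k for k, v in config.items() if INSECURE_DEFAULTS.get(k) == v]

deployed = {
    "admin_password": "admin",
    "debug_mode": "false",
    "allow_anonymous": "true",
}
print(audit(deployed))  # flags admin_password and allow_anonymous
```

Real-world baselines (CIS Benchmarks, DISA STIGs) are essentially large, curated versions of this same idea.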
There will likely be lots of tools created to develop LLMs, and they are unlikely to be secure out of the box. I don’t like saying this, and if I’m proven wrong, I’ll be happy. But I’ve seen this for years, and I don’t see it changing.
We will need to develop best practices, integrate them into a framework, and make sure security configurations are compliant. That work will continue as the cyber community develops more best practices and regulations.
We can all agree that testing always matters, but I think security goes beyond testing and model validation.
Even if we perfectly validate that a model produces accurate and trustworthy results based on datasets that we test with, there are still opportunities for malice within AI. Basic cyber hygiene is going to be critically important in AI.
We have to protect the data feeding the AI systems, and we have to secure the systems that the AI models live on.
Regulators - and the cyber community as a whole - need to make sure that we focus on the cornerstones of cybersecurity: both data protection and the security of the infrastructure where that data lives.
AI will continue to grow in importance, which means that the data that feeds AI models will become even more critical to protect.
Ensuring the confidentiality, integrity, and availability of the data means we are keeping our focus on the most important aspect of AI in these complex systems. That holds true whether it be cars, chatbots, or national security systems.
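Of the three, integrity is the easiest to sketch in a few lines: before a dataset is fed to a model, verify it still matches the hash recorded when it was vetted. This is a minimal illustration, not a full data-provenance system.

```python
# Sketch: detect tampering with training data by comparing it against
# a hash recorded when the dataset was approved.
import hashlib

def fingerprint(data: bytes) -> str:
    """Hash a dataset so later tampering can be detected."""
    return hashlib.sha256(data).hexdigest()

approved = b"label,value\ncat,1\ndog,0\n"
expected = fingerprint(approved)  # recorded when the data was vetted

def safe_to_train(data: bytes) -> bool:
    """Only feed the model data whose hash matches the approved record."""
    return fingerprint(data) == expected

print(safe_to_train(approved))                        # True
print(safe_to_train(b"label,value\ncat,0\ndog,1\n"))  # False: data was altered
```

A flipped label changes the hash entirely, so even a one-byte poisoning attempt fails the check.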
But at the same time, we need to make sure we continue to protect both the infrastructure and access to that infrastructure where critical data lives.
Here's another example to get you thinking.
Botnets have existed for decades and are a prime vehicle for AI-driven exploitation. Larger ones have global reach, making them a threat to both commercial and state entities.
How they will penetrate networks is an interesting question. Most likely, botnets will learn techniques to exploit APIs, putting any public API at risk.
Being public, these APIs will already be under threat, but this will increase the importance of following industry best practices and frameworks for protecting them.
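One widely recommended control for public APIs is rate limiting, which blunts the high request volumes a botnet depends on. Below is a minimal token-bucket sketch; real deployments would enforce this at an API gateway or proxy rather than in application code.

```python
# Sketch: token-bucket rate limiter. Each client gets `capacity` requests,
# refilled at `rate` tokens per second; bursts beyond that are denied.
import time

class TokenBucket:
    def __init__(self, capacity: float, rate: float):
        self.capacity = capacity
        self.rate = rate
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=3, rate=1.0)
results = [bucket.allow() for _ in range(5)]
print(results)  # a rapid burst: the first 3 pass, the rest are denied
```

A bot hammering the endpoint exhausts its bucket immediately, while a legitimate client issuing occasional requests never notices the limit.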
Large enterprises and governments trust security compliance frameworks. These frameworks allow them to validate that they are taking measures to protect themselves and the critical APIs that they own.
As the threat of intelligent botnets rises, the cybersecurity community will have to respond by continuing to create standards that mitigate their risk.
There are always going to be people using technology for good and those using it for bad. We’re not going to be able to stop that (unless you're a superhero, of course - this might be a good job for Bruce Wayne, don't you think?).
But back here in reality, cybersecurity practitioners are going to have to harness AI just as well as, if not better than, the bad actors.
The good news is that AI should make it easier to defend ourselves. But we have to keep improving and getting smarter. We need to stay one step ahead of the "bad guys."
AI may be new today, but we’ve seen similar cycles in the past. I remember when everyone was moving to the cloud, and everyone said, “How are we going to protect ourselves? It’s going to be so scary.”
But what happened was we realized that the cloud would be the new normal. We had to keep doing what we’ve always done, which is to develop ways to harness the technology ourselves and defend against attacks.
It always comes back to the same things: securing the baseline, and making sure configurations are correct.
If your IT assets are not doing the basic, bare minimum, an attacker is going to break into your system. They've been doing that forever without AI. Simple as that. It's just that AI will make them more efficient at it. The opportunity is to secure systems now, so that we avoid repeating the mistakes that plagued the last huge technology leap.
With Sicura, you can do just that. It's the best way to improve security and ensure compliance.
Sign up for a demo or contact our team today to learn more about how Sicura can transform your security and compliance processes - even in the wake of the AI revolution.