As a transformative force, generative artificial intelligence (AI) encompasses machine learning (ML) algorithms capable of generating new content, from images to text, by learning from vast amounts of data.
Gartner 2025
By Lori Perri | 4-minute read | September 5, 2023
Big Picture
Generative AI has sparked extensive interest in artificial intelligence pilots, but organizations often don’t consider the risks until AI models or applications are already in production or use. A comprehensive AI trust, risk and security management (TRiSM) program helps you integrate much-needed governance upfront and proactively ensure that AI systems are compliant, fair and reliable, and that they protect data privacy.
If you have any doubt that AI TRiSM is needed, consider these six drivers of risk, many of which stem from users simply not understanding what is really happening inside AI models.
“Organizations that do not consistently manage AI risks are exponentially more inclined to experience adverse outcomes, such as project failures and breaches. Inaccurate, unethical or unintended AI outcomes, process errors and interference from malicious actors can result in security failures, financial and reputational loss or liability, and social harm. AI misperformance can also lead to suboptimal business decisions.”
3 things to tell your peers
1. AI TRiSM capabilities are needed to ensure the reliability, trustworthiness, security and privacy of AI models.
2. They drive better outcomes related to AI adoption, achieving business goals and ensuring user acceptance.
3. Consider AI TRiSM a set of solutions to more effectively build protections into AI delivery and establish AI governance.
eWEEK SECURITY ANALYSIS: While infosec specialists use AI for benign purposes, threat actors mishandle it to orchestrate real-world attacks. At this point, it is hard to say for sure who is winning.
Written by David Balaban | Published April 29, 2021
The rapid evolution of artificial intelligence algorithms has turned this technology into an element of critical business processes. The caveat is that there is little transparency in how these algorithms are designed and applied in practice, so the same techniques can be put to both legitimate and malicious use.
The current state of the balance between offense and defense via machine learning algorithms has yet to be properly evaluated.
There is also a security principles gap regarding the design, implementation and management of AI solutions. Completely new tools are required to secure AI-based processes and thereby mitigate serious security risks.
The global race to develop advanced AI algorithms is accelerating non-stop. The goal is to create a system in which AI can solve complex problems (e.g., decision-making, visual recognition and speech recognition) and flexibly adapt to circumstances. These will be self-contained machines that can think without human assistance. This is a somewhat distant future of AI, however.
At this point, AI algorithms cover limited areas yet already demonstrate certain advantages over humans, saving analysis time and producing predictions. The four main vectors of AI development are speech and language processing, computer vision, pattern recognition, and reasoning and optimization.
Huge investments are flowing into AI research and development along with machine learning methods. Global AI spending in 2019 amounted to $37.5 billion, and it is predicted to reach a whopping $97.9 billion by 2023. China and the U.S. dominate the worldwide funding of AI development.
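Those spending figures imply a steep compound annual growth rate. A quick back-of-the-envelope check, using only the two figures quoted above and treating 2019–2023 as four years of growth:

```python
# Implied compound annual growth rate (CAGR) from the quoted figures:
# $37.5B of global AI spending in 2019, projected $97.9B in 2023.
spend_2019 = 37.5   # billions of USD
spend_2023 = 97.9   # billions of USD
years = 2023 - 2019

cagr = (spend_2023 / spend_2019) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 27% per year
```

Sustained growth of roughly 27% a year is consistent with the article's point that investment is accelerating rather than plateauing.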
Transportation, manufacturing, finance, commerce, health care, big-data processing, robotics, analytics and many more sectors will be optimized in the next five to 10 years with the ubiquitous adoption of AI technologies and workflows.
With reinforcement learning in its toolkit, AI can play into attackers’ hands by paving the way for all-new and highly effective attack vectors. For instance, the AlphaGo algorithm has given rise to fundamentally new tactics and strategies in the famous Chinese board game Go. If mishandled, such mechanisms can lead to disruptive consequences.
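Reinforcement learning, the technique behind AlphaGo's novel strategies, discovers tactics by trial and error rather than by following hand-written rules. A toy Q-learning sketch on a four-cell corridor (the environment, rewards and parameters here are invented purely for illustration and have nothing to do with any real attack tooling) shows how a policy emerges from reward feedback alone:

```python
GAMMA = 0.9   # discount factor for future reward
ALPHA = 1.0   # deterministic toy environment, so full updates converge exactly
N_STATES = 4  # states 0..3; state 3 is terminal and pays reward 1 on entry
ACTIONS = (-1, +1)  # move left or right along the corridor

# Q-values for every non-terminal state/action pair, initialized to zero.
Q = {s: {a: 0.0 for a in ACTIONS} for s in range(N_STATES - 1)}

def step(state, action):
    """Apply an action; clamp at the left wall, reward reaching state 3."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward

# Repeated exhaustive sweeps stand in for exploration.
for _ in range(20):
    for s in range(N_STATES - 1):
        for a in ACTIONS:
            ns, r = step(s, a)
            future = 0.0 if ns == N_STATES - 1 else max(Q[ns].values())
            Q[s][a] += ALPHA * (r + GAMMA * future - Q[s][a])

# The greedy policy learned from rewards alone: always move right.
policy = {s: max(Q[s], key=Q[s].get) for s in Q}
print(policy)
```

No rule ever told the agent which way to move; the preference for "right" falls out of the value updates, which is the same property that lets RL-based systems surface strategies their designers never encoded.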
Let us list the main advantages of the first generation of offensive tools based on AI:
At the same time, AI can help infosec experts to identify and mitigate risks and threats, predict attack vectors and stay one step ahead of criminals. Furthermore, it is worth keeping in mind that a human being is behind any AI algorithm and its practical application vectors.
Let us try to outline the balance between attacking and defending via AI. The main stages of an AI-based attack are as follows:
Now, let us provide an example of how AI can be leveraged in defense:
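One simple way to make the defensive use case concrete: a detector learns a baseline of normal activity and flags sharp deviations. The z-score sketch below is our own minimal illustration, with synthetic login counts and a hypothetical threshold; production systems would use far richer ML models over many features.

```python
from statistics import mean, stdev

# Hypothetical hourly login counts observed during normal operation.
baseline = [42, 39, 45, 41, 44, 40, 43, 38, 41, 42]

mu = mean(baseline)
sigma = stdev(baseline)

def is_anomalous(count, threshold=3.0):
    """Flag counts more than `threshold` standard deviations from the baseline mean."""
    return abs(count - mu) / sigma > threshold

print(is_anomalous(41))   # ordinary traffic, not flagged
print(is_anomalous(180))  # e.g., a credential-stuffing burst, flagged
```

The same learn-a-baseline-then-flag-outliers pattern underlies many AI-driven detection products; the model is just much more expressive than a single mean and standard deviation.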
The expanding range of attack vectors is only one of the current problems related to AI. Attackers can manipulate AI algorithms to their advantage by modifying the code and abusing it at a completely different level.
AI also plays a significant role in creating deepfakes. Images, audio and video materials fraudulently processed with AI algorithms can wreak informational havoc, making it difficult to distinguish truth from lies.
To summarize, here are the main challenges and systemic risks associated with AI technology, as well as the possible solutions:
The current evolution of security tools: The infosec community needs to focus on AI-based defense tools. We must understand that there will be an incessant battle between the evolution of AI attack models and AI defenses. Enhancing defenses will push attack methods forward, so this cyber-arms race should be kept within the realms of common sense. Coordinated action by all members of the ecosystem will be crucial to eliminating risks.
Operations security (OPSEC): A security breach or AI failure in one part of the ecosystem could potentially affect its other components. Cooperative approaches to operations security will be required to ensure that the ecosystem is resilient to the escalating AI threat. Information sharing among participants will play a crucial role in activities such as detecting threats in AI algorithms.
Building defense capabilities: The evolution of AI can turn some parts of the ecosystem into low-hanging fruit for attackers. Unless cooperative action is taken to build a collective AI defense, the entire system’s stability could be undermined. It is important to encourage the development of defensive technologies at the nation-state level. AI skills, education, and communication will be essential.
Secure algorithms: As industries become increasingly dependent on machine learning technology, it is critical to ensure its integrity and keep AI algorithms unbiased. At this point, approaches to concepts such as ethics, competitiveness, and code-readability of AI algorithms have not yet been fully developed.
Algorithm developers can be held liable for catastrophic errors in decisions made by AI. Consequently, it is necessary to come up with secure AI development principles and standards that are accepted not only in the academic environment and among developers, but also at the highest international level.
These principles should include secure design (tamper-proof and readable code), operational management (traceability and rigid version control) and incident management (developer responsibility for maintaining integrity).
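One concrete way to operationalize the integrity requirement above is to publish a cryptographic digest of each released model artifact and verify it at deployment time. A minimal sketch (the file name and expected digest are hypothetical; this provides tamper evidence, not tamper prevention):

```python
import hashlib

def sha256_digest(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path: str, expected_digest: str) -> bool:
    """Return True only if the artifact matches its published digest."""
    return sha256_digest(path) == expected_digest

# Hypothetical usage: compare a deployed model file against the digest
# recorded at release time, and refuse to load it on a mismatch.
# ok = verify_artifact("model.bin", "<digest published with the release>")
```

Paired with rigid version control, such digests give the traceability the principles call for: every deployed artifact can be tied back to a specific, signed-off release.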
David Balaban is a computer security researcher with over 17 years of experience in malware analysis and antivirus software evaluation. He runs MacSecurity.net and Privacy-PC.com projects that present expert opinions on contemporary information security matters, including social engineering, malware, penetration testing, threat intelligence, online privacy, and white hat hacking. Mr. Balaban has a strong malware troubleshooting background, with a recent focus on ransomware countermeasures.