- January 2, 2024
- Posted by: damian
- Category: Security

The rapid adoption of generative artificial intelligence (AI) is raising significant concerns within the security community, and those concerns call for a balanced perspective on the associated risks. According to a recent ISACA survey, most organizations provide no training on the authorized use of generative AI, underscoring the urgency for security professionals to understand and address these challenges.
Drawing a parallel to the initial apprehension surrounding the healthcare industry's shift to cloud platforms, it is crucial to separate valid concerns from exaggerated ones. Early concerns about cloud computing, such as security, data privacy, and compliance, were often overstated; most have since been addressed effectively, and cloud platforms are now widely adopted. Similarly, the rapid rise of generative AI presents an opportunity for security professionals to pivot: invest in AI training and give teams the tools they need to enhance their capabilities and manage the associated risks.
Generative AI deployment models fall into two broad categories, public (e.g., ChatGPT) and private, distinguished chiefly by accessibility and control. Public generative AI offers broad availability with limited customization, while private generative AI allows solutions tailored to specific organizational needs, with greater control over usage, data handling, and model behavior. Public platforms carry heightened security risk because sensitive information submitted in prompts leaves the organization's control, which has led some organizations to ban or restrict widely recognized generative AI platforms.
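As a minimal sketch of what tighter control over data handling can look like in practice, the snippet below redacts obvious sensitive patterns before a prompt is ever sent to a public model. The patterns and the `redact()` helper are illustrative assumptions, not a production data loss prevention rule set.

```python
import re

# Illustrative patterns only; a real deployment would use a vetted
# DLP rule set, not three regexes. These are assumptions for the sketch.
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> str:
    """Replace any matched sensitive value with a labeled placeholder
    before the prompt leaves the organization's boundary."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarize the contract for jane.doe@example.com, SSN 123-45-6789."
    print(redact(raw))
    # -> Summarize the contract for [REDACTED-EMAIL], SSN [REDACTED-SSN].
```

A filter like this reduces, but does not eliminate, exposure; private deployments remain the stronger control where the data is genuinely sensitive.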
To promote responsible use of generative AI, IT teams, risk functions, and security teams must collaborate effectively: IT teams restrict access to approved generative AI models, the risk function identifies and mitigates the associated risks, and security teams enforce strict privacy and security measures for the data used to train and fine-tune AI models. Security teams should also work with legal and compliance teams to address ethical and legal risks, including intellectual property infringement and the generation of deceptive content.
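To make the IT-side restriction concrete, here is a hedged sketch of an allowlist check an IT team might place in an egress proxy, assuming policy permits outbound AI traffic only to an approved, privately hosted endpoint. The hostnames and the `is_request_allowed()` helper are hypothetical.

```python
from urllib.parse import urlparse

# Hypothetical approved endpoint; in practice this list would come
# from the organization's policy configuration, not a hardcoded set.
APPROVED_AI_HOSTS = {"ai-gateway.internal.example.com"}

def is_request_allowed(url: str) -> bool:
    """Permit outbound generative AI traffic only to approved hosts."""
    host = urlparse(url).hostname or ""
    return host in APPROVED_AI_HOSTS

# Approved private gateway passes; an unapproved public endpoint is blocked.
assert is_request_allowed("https://ai-gateway.internal.example.com/v1/chat")
assert not is_request_allowed("https://api.public-genai.example/v1/completions")
```

In most environments this logic would live in existing proxy or firewall tooling rather than custom code, but the policy it encodes, deny by default and allow by exception, is the same.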
While some concerns around AI are inflated by the hype cycle, there are undeniably significant risks that require careful consideration. Deepfake technology can convincingly emulate a person's voice and speech patterns, enabling impersonation and fraud if not adequately addressed. AI's impact on creative fields, workforce displacement concerns, and evolving threats in areas such as cyber threat hunting add further complexity to the AI risk landscape.
In the midst of intense discussions about AI’s impact on the security community, security professionals must focus on real risks and avoid getting caught up in every headline. By making informed choices, such as minimizing reliance on public generative AI tools, investing in ongoing AI training, updating organizational policies, and addressing bias, fairness, and ethical considerations, enterprises can confidently navigate the complex AI risk landscape.