As artificial intelligence (AI) continues to develop rapidly, the industry faces an ongoing challenge: ensuring that security measures keep pace with the technology. Recent research from PSA Certified highlights these concerns among the global technology decision-makers responsible for implementing AI across sectors. The survey of 1,260 respondents reveals that a significant portion of the industry is apprehensive about the security risks accompanying AI’s rapid growth.
One of the key findings from the survey is that 68% of technology leaders are concerned that the speed at which AI is advancing is outstripping the industry’s ability to safeguard products, devices, and services effectively. This has led to a growing interest in edge computing as a potential solution. Edge computing, which processes data locally on devices rather than relying on centralized cloud systems, is seen as a way to enhance security, efficiency, and privacy. In fact, 85% of respondents believe that security concerns will drive more AI use cases to the edge, where data can be managed more securely and with greater control.
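To make the edge pattern concrete, the following is a minimal sketch, in Python, of the model the respondents are pointing to: raw data is processed on the device, and only a derived, lower-sensitivity result leaves it. The function names and placeholder logic are illustrative assumptions, not drawn from the survey or any specific product.

```python
# Hypothetical sketch of edge-first processing: the raw capture stays on the
# device, and only a derived summary is sent upstream.

def run_local_model(raw_sensor_data: bytes) -> dict:
    """Stand-in for on-device inference; a real device would invoke a
    local runtime (e.g. a quantized model) here."""
    return {"anomaly": len(raw_sensor_data) > 1024}  # placeholder logic

def send_to_cloud(summary: dict) -> None:
    """Stand-in for telemetry upload; note it receives the summary only."""
    print(f"uploading derived result: {summary}")

raw = bytes(2048)                # raw data never leaves the device
result = run_local_model(raw)    # inference happens locally, at the edge
send_to_cloud(result)            # the cloud sees the result, not the data
```

The privacy and control benefits cited by respondents follow directly from this shape: the sensitive input is never transmitted, so there is less to intercept, leak, or misuse in transit or in a central store.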
However, the shift towards edge computing brings its own set of challenges, particularly ensuring that the devices at the edge are themselves secure. David Maidment, Senior Director of Market Strategy at Arm (a co-founder of PSA Certified), emphasized the importance of integrating security with AI, stating that “one doesn’t scale without the other.” He cautioned that while AI offers significant opportunities, it offers the same opportunities to bad actors, who may exploit vulnerabilities in deployed systems.
Despite widespread recognition of the importance of security, there is a noticeable gap between awareness and action. The survey found that only 50% of respondents believe their current security investments are adequate to address the risks posed by AI. Moreover, many organizations are neglecting essential security practices, such as independent certifications and threat modeling, which are critical for identifying and mitigating potential vulnerabilities.
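As one illustration of what threat modeling can mean in practice, here is a hedged sketch of a lightweight STRIDE-style enumeration. STRIDE is a widely used categorization (spoofing, tampering, repudiation, information disclosure, denial of service, elevation of privilege); the asset list and function names below are hypothetical and not taken from the survey.

```python
# Hypothetical sketch: pair each asset of a connected AI device with the six
# STRIDE threat categories to produce a starting checklist for design review.

STRIDE = [
    "Spoofing", "Tampering", "Repudiation",
    "Information disclosure", "Denial of service",
    "Elevation of privilege",
]

assets = ["model weights", "sensor data", "OTA update channel"]  # illustrative

def enumerate_threats(assets: list[str], categories: list[str]) -> list[tuple[str, str]]:
    """Cross every asset with every threat category; each pair is a question
    for the review, not a confirmed vulnerability."""
    return [(asset, category) for asset in assets for category in categories]

for asset, threat in enumerate_threats(assets, STRIDE):
    print(f"[review] {threat} against {asset}")
```

Even a checklist this simple forces the questions that the survey suggests many organizations are skipping, before an independent certifier would ask them.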
Maidment underscored the need for those in the connected device ecosystem to adhere to best practices in security, even as they pursue new AI capabilities. He stressed that the entire value chain must take collective responsibility to ensure that consumer trust in AI-driven services is maintained. This involves a holistic approach to security, embedded throughout the AI lifecycle—from the deployment of devices to the ongoing management of AI models operating at the edge.
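One concrete example of security embedded in the AI lifecycle, sketched here under assumptions rather than taken from the survey, is checking a model artifact’s integrity before an edge device loads it. The file path and digest below are hypothetical; a real deployment would provision the expected digest through a trusted channel, such as a signed update manifest.

```python
# Hypothetical sketch: refuse to load a model whose SHA-256 digest does not
# match a pinned value provisioned through a trusted channel.

import hashlib

def verify_model(path: str, expected_sha256: str) -> bool:
    """Hash the model file in chunks and compare against the pinned digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256

# Usage (path and digest illustrative):
# if not verify_model("model.tflite", PINNED_DIGEST):
#     do not load; fall back to a known-good model or fail safe
```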
Despite these concerns, the survey also revealed a sense of optimism within the industry. A majority of decision-makers (67%) believe their organizations are equipped to handle the security risks that accompany AI’s rapid growth. There is also growing recognition that security investment should come first: 46% of respondents are prioritizing the strengthening of security measures, compared with 39% prioritizing AI readiness.
In conclusion, while the industry is optimistic about the potential of AI, there is a pressing need to address security as the technology evolves. Organizations must take concrete steps to mitigate security risks, particularly as they adopt new AI-enabled use cases. A proactive approach, built on best practices and a security-by-design philosophy, will be essential for building and maintaining consumer trust in AI technologies.