On Thursday, 25 September, the TDL autumn 2025 webinar series continued with an in-depth discussion on the security challenges associated with modern autonomous systems.
From self-driving cars to smart manufacturing and robotics systems, autonomous computing is on the rise. It is the future. But the advantages of autonomous computing come with security challenges. Hacking, sensor manipulation, communication issues, problems with legacy components in the architecture, data quality, AI attacks and many other concerns come into play and need to be addressed. The panellists shared their insights on these security issues as well as their views on the optimal path forward.
Background
Each participant was asked to share their view on the most important security, privacy and trust challenges for autonomous systems, which were identified for this webinar as:
· What are the top three security threats for autonomous systems?
· What is the relative importance of the security of software components versus the security of hardware components (e.g., sensors)?
· How does the “human in the loop” component play out in fully autonomous systems?
· Multifaceted frameworks have been designed that can be used, with adaptations, for fully autonomous systems, e.g., the NIST CPS (cyber-physical systems) framework[1]. Will such frameworks improve the integrated view of security for autonomous systems?
· Fully autonomous systems are especially dependent on the quality of data. What can be done on the data side to improve the security (and, where relevant, privacy) of autonomous systems?
· Physical security is frequently overlooked. Is it of particular importance for some autonomous systems? What is the interplay between security and safety, and between security and privacy, in this field?
· What are the most promising areas of technology and process innovation that can improve security (and trust and privacy) for autonomous systems? Where do we need to invest to achieve the best results?
Speakers
The distinguished panel of experts comprised:
Ed Griffor, Associate Director for Cyber-Physical and Autonomous Systems, NIST
Himanshu Khurana, Technology and Product Strategy Advisor, Former CTO, Powin
Konstantinos Loupos, R&D Director at Inlecom Innovation
The webinar was moderated by TDL Strategic Advisor, Claire Vishik.
Autonomous Systems Security Challenges
The panel opened with a discussion of security challenges in autonomous systems, initially focusing on the attack surfaces and threats at four levels: cyber, transduction, physics and operators. The rapid advancement of autonomous systems driven by AI and machine learning capabilities was emphasised, suggesting the need for a process similar to ISO’s functional safety approach[2] to address potential threats. This triggered an inquiry into the integration of physical elements and vulnerabilities in autonomous systems, particularly how they relate to AI system quality and reasoning capabilities.
Autonomous Vehicle Security Challenges
The discussion continued with the security challenges in autonomous vehicles, focusing in particular on the vulnerabilities of the perception system. It was explained that, while the control stack is robust, the perception pipeline remains a weak point. The conversation touched on the concept of “superintelligence” in relation to autonomous vehicles, though it was noted that its definition remains unclear. Possible solutions lie in several areas, e.g., devising constrained testing environments and using Rasmussen’s performance model[3] to assess the behaviour of autonomous systems.
Autonomous System Security Design Challenges
The increasing autonomy of industrial control systems, smart cities and IoT environments was discussed next, highlighting the need for security considerations in systems that evolve from non-connected to connected. The importance of designing systems for autonomy from the start was emphasised, and the challenge of maintaining security over a system’s long operational life addressed. It was further suggested that existing secure components be leveraged and custom plumbing reduced to improve data integrity. It was agreed that, while redesigning everything is impractical, incorporating better components and systematic approaches is feasible.
Autonomous Vehicle AI Safety Challenges
The discussion moved on to the challenges of simulating all conditions for autonomous vehicles with an emphasis on the importance of maintaining security and safety over their long operational lifetimes. The need for careful consideration of both the training and operational stages of AI systems in autonomous vehicles was highlighted, including potential software vulnerabilities and data poisoning risks. This touched on the role of explainable AI and trustworthy AI technologies in enhancing user understanding of AI decision-making processes, while acknowledging the limitations of human intervention in critical milliseconds.
Autonomous Systems Data Security Enhancements
The group discussed improving data security in autonomous systems, pointing out the need for comprehensive security across the entire system, including sensors and cloud analytics, and suggesting the use of redundant data sources for verification. Experience with two security revolutions was shared, including the concept of zero-knowledge proofs, and the idea of cooperative perception between autonomous agents to enhance decision-making was proposed. The discussion highlighted the importance of data integrity, redundancy and advanced techniques, such as zero-knowledge proofs, in addressing increasingly sophisticated data manipulation attacks.
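The panel did not prescribe an implementation, but the redundancy idea can be illustrated with a minimal sketch: cross-checking readings from redundant sensors and rejecting values that break consensus. The function name, tolerance threshold and example values below are hypothetical, chosen only to show the principle, not a method proposed in the webinar.

```python
# Illustrative sketch only: a majority-consensus check over redundant
# sensor readings; all names and thresholds here are hypothetical.
from statistics import median

def consensus_reading(readings, tolerance):
    """Return the median of readings that agree within `tolerance`,
    or None if no strict majority agrees (possible fault or tampering)."""
    if not readings:
        return None
    m = median(readings)
    agreeing = [r for r in readings if abs(r - m) <= tolerance]
    # Require a strict majority of sensors to agree with the median.
    if len(agreeing) * 2 > len(readings):
        return median(agreeing)
    return None

# Three redundant distance estimates; the third has been manipulated.
print(round(consensus_reading([12.1, 12.3, 47.0], tolerance=0.5), 1))  # 12.2
print(consensus_reading([12.1, 30.0, 47.0], tolerance=0.5))            # None
```

Even this toy check captures the panel’s point: a single manipulated source is outvoted, while disagreement across all sources is surfaced rather than silently averaged away.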
Personalised Security for Autonomous Systems
The conversation continued with the concept of individual security needs and how zero-knowledge proofs can address varying risk appetites. It was explained that security approaches should be personalised rather than one-size-fits-all, drawing parallels to how autonomous systems perceive and process information. The shift from asynchronous to real-time security perceptions in autonomous systems was noted, which raised questions about data-related security improvements. It was suggested that information be embedded in the environment to reduce the processing demands on autonomous systems, similar to how physical infrastructure aids human drivers.
AI Decision-Making and Privacy Challenges
The panel discussed the technical benefits of diversity in systems and the challenges of offloading decision-making to AI, with an emphasis on the importance of auditable data pipelines and privacy-by-design approaches. Questions were raised about the necessity of privacy in autonomous systems, particularly in environments without human intervention, prompting the suggestion of a rights-based framework for data management to ensure future adaptability. The discussion concluded with the need for safety-focused security measures, as security becomes relevant only when there is potential for harm. It was pointed out that privacy principles can be adapted to autonomous systems that do not contain or interact with PII (personally identifiable information) and that it will be important to use these principles to design systems that can collaborate in complex environments.
Enhancing Security in Autonomous Systems
The panel discussed the intersection of physical and cyber security, highlighting how advances in manufacturing and 3D printing could be used to enhance security by embedding information in the physical world. It was suggested that efforts focus on privacy-by-design, including identity management and AI supply chains, and the importance of improving test frameworks for autonomous systems was noted. The discussion concluded with a recognition of the need for a holistic approach to security that considers all aspects of autonomous systems, from data to physical environments.
[1] https://www.nist.gov/publications/framework-cyber-physical-systems-volume-1-overview
[2] https://www.iso.org/standard/68383.html
[3] https://last9.io/blog/understanding-the-rasmussen-model-for-failures/