The Evolution of Computing
The influence of security, privacy and trust on the overall technology direction
The latest in the 2025 series of TDL webinars took place on Thursday, 16 June with a discussion on the evolution of computing, in the context of the influence of security, privacy and trust on the overall technology direction. During the webinar, moderated by TDL strategic advisor, Claire Vishik, a group of distinguished speakers discussed various aspects of this connection and what the technologies we use today - and will use tomorrow - owe to innovation in security and privacy.
Background
There is a lot of research on the various types of trade-offs, from usability to performance, associated with security and privacy technologies. It is commonly understood that solutions in these areas need to minimise these trade-offs to be successful, not least so that a new generation of students can be optimally trained to respond to these challenges and trade-offs.
But how do security, privacy and trust considerations affect the overall development of computing technologies? This topic is not well studied.
The aim of the webinar was to solicit the views of a group of experts on the influence of security, trust and privacy technologies on technology directions in general, exploring the major influences in their own areas of interest and considering technologies that were developed for a specific purpose and then became state of the art. The next objective was to ascertain the major concerns in transposing technology approaches from a specific area into more general domains, and whether incorporating such technologies into a general domain improved security, privacy and trust - and in what way.
The specific trade-offs associated with privacy, security and trust in both hardware and software - including resources, non-volatile memory, performance, integrability and others - could all have an impact on adoption. It is particularly germane today to understand what changes, if any, artificial intelligence brings to the field, for both hardware and software, and which of those changes have been beneficial, neutral or negative.
Looking to the future, the evolution and influence of security/trust technologies on the computing environment is a nuanced and complex topic. The session offered a glimpse into the future deployment of security/trust/privacy technologies, created initially for a specific domain and then enhanced with the latest technologies; and a view on what new challenges would be faced.
The Speakers
Moderated by TDL Strategic Advisor, Claire Vishik, the speakers were:
· Elisa Bertino, Samuel D. Conte Distinguished Professor of Computer Science, Purdue University, and research director at CERIAS (the Center for Education and Research in Information Assurance and Security)
· Domenic Forte, Professor and Steven A. Yatauro Fellow, Electrical and Computer Engineering Department, University of Florida; Associate Director, Florida Institute for National Security (FINS)
· Steven Murdoch, Professor of Security Engineering and Head of the Information Security Research Group, University College London
· Sandip Ray, Warren B. Nelms Endowed Professor, Department of Electrical and Computer Engineering, University of Florida
Security Technologies in Computing Evolution
The discussion opened with a series of personal perspectives on the influence of security, trust and privacy technologies on general computing architectures, drawing from decades of experience in hardware, software and network security. This included the example of trusted computing and trusted platform modules (TPMs), noting how societal and regulatory perceptions shaped their development.
Cryptography and Security Evolution
The panellists highlighted various aspects of cryptography and security technologies particularly pertinent to their own areas of work:
· the evolution of security architectures from simple firewalls to more complex system-level trade-offs between reliability and entropy.
· the incorporation of quantum cryptography, noting that integration challenges depend on application domains.
· physically unclonable functions (PUFs) and their implementation in hardware (see the sketch after this list), the success of end-to-end encryption in transforming trust dynamics and privacy, and hope for the further development of privacy-enhancing technologies.
· the evolution of security validation in IoT and automotive systems, highlighting how security tools and methodologies have shifted from being used primarily by black hat experts to being integrated into architectural design processes. It was emphasised that the biggest impact of security validation is not in the tools themselves but in developing a mindset that identifies potential vulnerabilities and corner cases. The challenge in widespread security adoption lies in the difficulty of quantifying security quality and the need for better awareness that security should be considered a core feature rather than an afterthought.
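For readers unfamiliar with PUFs, the following is a minimal, purely illustrative sketch of PUF-based challenge-response device authentication, written in Python. It is not drawn from the webinar: the SimulatedPUF class, the enrol and authenticate helpers and all other names are hypothetical, and a software HMAC over a random per-device "fingerprint" stands in for the device-unique behaviour that a real PUF derives from manufacturing variation (real PUF responses are also noisy and typically require error correction).

```python
# Purely illustrative: a toy model of PUF-based device authentication.
# A real PUF derives its responses from physical manufacturing variation;
# here a per-device random "fingerprint" plus HMAC stands in for that.
import hashlib
import hmac
import os
import secrets


class SimulatedPUF:
    """Toy stand-in for a hardware PUF: maps challenges to device-unique responses."""

    def __init__(self):
        # In real silicon this uniqueness comes from uncontrollable process
        # variation, not from a stored secret.
        self._fingerprint = os.urandom(32)

    def respond(self, challenge: bytes) -> bytes:
        return hmac.new(self._fingerprint, challenge, hashlib.sha256).digest()


def enrol(puf: SimulatedPUF, n: int = 8) -> dict[bytes, bytes]:
    """Verifier-side enrolment: record challenge-response pairs in a trusted setting."""
    crps = {}
    for _ in range(n):
        challenge = secrets.token_bytes(16)
        crps[challenge] = puf.respond(challenge)
    return crps


def authenticate(puf: SimulatedPUF, crps: dict[bytes, bytes]) -> bool:
    """Use one stored challenge-response pair; each pair is used once to resist replay."""
    challenge, expected = crps.popitem()
    return hmac.compare_digest(puf.respond(challenge), expected)


device = SimulatedPUF()
crps = enrol(device)
print(authenticate(device, crps))          # True: the enrolled device answers correctly
print(authenticate(SimulatedPUF(), crps))  # False: a different device cannot
```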
A discussion followed on the imbalance in funding between breaking systems and building secure systems, the need for specialised verification techniques for different security categories, and the importance of managing security practices effectively.
Digital Privacy and Security Solutions
The group discussed the challenges and potential solutions for achieving privacy and security in digital systems, focusing on the differences between simulations and hardware implementations, and the role of digital twins in improving security and performance. The discussion explored the use of digital twins to bridge the gap between simulation and real-world hardware, while taking into account the importance of both encryption and privacy, distinguishing between secrecy and privacy preferences. The need for user-friendly privacy technologies was highlighted, suggesting that usability and understanding a user's mental model are crucial for widespread adoption. The discussion also touched on the challenges of metadata protection and the trade-offs involved in implementing privacy measures, bearing in mind the varying approaches between the US, Europe and other jurisdictions.
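To make the digital-twin idea concrete, the short Python sketch below shows one possible pattern, not a method presented in the webinar: a reference model of a hypothetical component is run alongside observed telemetry from the real hardware, and divergence beyond a tolerance is flagged for investigation. The TwinModel class, gain value, tolerance and sample data are all invented for illustration.

```python
# Purely illustrative: a tiny "digital twin" that runs a reference model of a
# component alongside observed telemetry and flags divergence that may indicate
# a fault or tampering. All names and thresholds here are hypothetical.
from dataclasses import dataclass


@dataclass
class TwinModel:
    gain: float = 0.9  # expected response of the (hypothetical) component

    def predict(self, command: float) -> float:
        return self.gain * command


def check_telemetry(twin: TwinModel,
                    samples: list[tuple[float, float]],
                    tolerance: float = 0.05) -> list[tuple[int, float, float, float]]:
    """Compare each (command, observed_output) pair against the twin's prediction."""
    alerts = []
    for i, (command, observed) in enumerate(samples):
        expected = twin.predict(command)
        if abs(observed - expected) > tolerance * max(abs(expected), 1.0):
            alerts.append((i, command, observed, expected))
    return alerts


# Hypothetical run: the last sample deviates from the model and is flagged.
twin = TwinModel()
telemetry = [(1.0, 0.91), (2.0, 1.79), (2.0, 2.60)]
print(check_telemetry(twin, telemetry))
```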
AI Security and Explainability Challenges
The group discussed the use of artificial intelligence in security and privacy, observing that, while AI can speed up threat management, it often sacrifices precision. The challenge of AI's opacity in explaining its recommendations was mentioned, with an example cited from aircraft systems where conflicting AI suggestions once led a pilot to make an incorrect decision. Emphasis was placed on the need for domain-specific foundational models that prioritise explainability, with the rider that both the AI models and their training data must come from secure, reputable sources. It was concluded that much of the current AI work in security relies on prompt engineering rather than objective analysis, and that there is a need for greater awareness of AI limitations among users who may be tempted to trust AI outputs without verification.
AI's Evolving Role in Security
While AI has made significant impacts in other fields, its application in security is still developing, particularly in areas like security testing and vulnerability identification. The need to train engineers in cybersecurity through initiatives like cyber-informed engineering was highlighted; such initiatives emphasise an understanding of cyber risks without requiring deep expertise in cybersecurity. The importance of practical, hands-on learning in security was stressed, with the suggestion to develop exploration platforms that provide safe, controlled learning environments. The group agreed that, while new and innovative technologies and processes are important, maintaining strong foundational security knowledge is crucial for long-term success in the field.
Security Technology: Progress and Challenges
The panellists shared their mixed outlooks on the future of security technology, highlighting both progress and challenges. The need to balance security with other product features was emphasised, and potential threats from powerful interests opposed to privacy technologies were mentioned. Optimism was expressed about security becoming less of an afterthought, albeit with a warning about new threats from AI. Overall, based on a lifetime of experience, the panellists remained positive about continued progress despite increasing system complexity.
Considerations for Continued Research
Areas highlighted during the discussion as important for future effort included:
· improving the explainability of AI in security applications;
· exploring the development of secure exploration platforms for hands-on security training, potentially using digital twins and immersive technologies; and
· continuing research on balancing security with other product features and making security more understandable to non-experts.
Conclusions
The panellists recognised that this is a complex area and were, overall, neutral about our ability in the short term to significantly improve security, both within its own discipline and as an integral part of products, processes and systems. The aspirational goals pursued by this group, such as having no need for trade-offs in security or having highly reliable validation of implementations, are difficult to achieve fully. But one of the panellists was optimistic: we have achieved significant progress to date, and we are on track to make further significant improvements in the future. The panel concluded on this positive note.