Future trends in cybersecurity – sharing insights from NDSS
Top security researchers gathered in San Diego for the annual Network and Distributed Systems Security Symposium (NDSS) at the end of February. The event is a forum for information exchange among researchers of network and distributed systems security. Here, I highlight a few topics and trends I found interesting and relevant for the telco industry.
In general, there is continued strong, and even increasing, interest in security, as reflected by the growing number of submissions and the expanded symposium program. In particular, upwards-trending submission topics for NDSS this year (as reported by the program chairs) include software/firmware analysis, trustworthy computing, blockchains and cryptocurrencies, and critical infrastructures. Most of these topics are highly relevant for mobile communication systems and connect directly to work at Ericsson Research.
Dr. Deborah Frincke, Director of Research at the NSA, held a keynote in which she painted a complex picture of the challenges in cyber defense. She encouraged attendees to reflect on a number of items. One was anticipatory thinking: when dealing with the most plausible futures, don't neglect the impactful but less likely ones, those with low probability yet potentially high risk. Others included: an increased focus on human behavior, visualization tools that answer the right questions, deceptive defenses, augmenting intelligence using machines, adversarial ML/AI, and resilient AI. This was good food for thought, and some of these topics, like adversarial ML, later had entire sessions dedicated to them in the symposium program.
Let's take a closer look, then, at a sampling of topics of interest from our point of view:
Trusted execution and hardware factors
The last few years have seen a lot of interest and excitement building around trusted execution mechanisms, like secure enclaves. Here the symposium talks demonstrated both work on how to reap the benefits of such functionality, as well as evidence of how other aspects of modern hardware can impose risks. A paper on a system called "SANCTUARY" proposed a way of building user-space enclave mechanisms using the ARM TrustZone trusted execution environment.
By anchoring the security enclave mechanism in TrustZone while keeping the enclave application code in user space, the aim is to avoid adding applications to the trusted computing base (TCB) inside the TrustZone secure world. Program obfuscation is a technique that has been used to hide secrets in software or to hinder manipulation of it. Trusted execution mechanisms offer attractive new ways to accomplish such protection, and another paper examined the use of SGX for program obfuscation and addressed mitigation of side-channel leakage. The previously much-publicized Meltdown and Spectre attacks, on the other hand, demonstrated insecurity stemming from hardware mechanisms. The paper on "ExSpectre" followed this line of work by demonstrating novel ways to hide malware using speculative execution mechanisms.
Software and firmware analysis
Obviously, avoiding security flaws in software and firmware is a high priority for any system developer, and multiple sessions were devoted to new techniques in this area. Looking beyond just the CPU and the software running on it, the "PeriScope" fuzzing framework is a tool for finding OS kernel vulnerabilities that can be exploited from peripheral devices (rather than across the system call boundary).
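To make the general idea concrete, here is a deliberately tiny caricature of mutation-based fuzzing: repeatedly mutate a seed input and feed it to a target, recording inputs that make it misbehave. The target function and its bug are made up for this sketch; PeriScope itself works at the driver/peripheral boundary with far more sophisticated machinery.

```python
import random

def parse_header(data: bytes) -> int:
    """Hypothetical fuzzing target: trusts a length field it shouldn't."""
    if len(data) < 4:
        raise ValueError("short input")
    length = int.from_bytes(data[:2], "little")
    if length > len(data):            # the "bug": length field exceeds buffer
        raise IndexError("length field exceeds buffer")
    return length

def mutate(seed: bytes) -> bytes:
    """Replace one random byte of the seed with a random value."""
    i = random.randrange(len(seed))
    return seed[:i] + bytes([random.randrange(256)]) + seed[i + 1:]

def fuzz(target, seed: bytes, iters: int = 2000):
    """Feed mutated inputs to the target, collecting crashing inputs."""
    crashes = []
    for _ in range(iters):
        candidate = mutate(seed)
        try:
            target(candidate)
        except Exception as exc:
            crashes.append((candidate, type(exc).__name__))
    return crashes

random.seed(0)  # deterministic run for the sketch
found = fuzz(parse_header, seed=b"\x08\x00payload!")
print(f"{len(found)} crashing inputs found")
```

Real fuzzers add coverage feedback, corpus management, and crash triage on top of this loop; the sketch only shows the mutate-execute-observe core.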
While symbolic execution is used successfully in many cases to find software flaws, it is also known that complex dependencies and conditions can be difficult or even impossible to handle. In the paper on "Neuro-Symbolic Execution", the authors proposed combining symbolic execution with neural networks to address such tough cases.
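For readers less familiar with the baseline technique, the core of classic symbolic execution can be sketched very compactly: fork at every branch, accumulate the path constraints, and then ask a solver whether each path is feasible. The toy below (not from the paper) represents a program as a branch tree over one symbolic integer and brute-forces a small domain in place of an SMT solver.

```python
# Minimal sketch of symbolic execution over a toy program with one
# symbolic integer x. Real engines use SMT solvers; here feasibility
# is checked by brute force over a small input domain.

def branch(cond, then, else_):
    return ("branch", cond, then, else_)

def leaf(label):
    return ("leaf", label)

def paths(node, constraints=()):
    """Enumerate all root-to-leaf paths with their accumulated constraints."""
    if node[0] == "leaf":
        yield list(constraints), node[1]
    else:
        _, cond, then, else_ = node
        yield from paths(then, constraints + (cond,))
        neg = lambda x, c=cond: not c(x)   # negated condition for the else-path
        yield from paths(else_, constraints + (neg,))

def feasible(constraints, domain=range(-100, 101)):
    """Return a satisfying input if one exists in the domain, else None."""
    for x in domain:
        if all(c(x) for c in constraints):
            return x
    return None

# Toy program:  if x > 10: (if x < 20: BUG else: OK-high) else: OK-low
prog = branch(lambda x: x > 10,
              branch(lambda x: x < 20, leaf("BUG"), leaf("OK-high")),
              leaf("OK-low"))

for cs, label in paths(prog):
    print(label, "reachable with x =", feasible(cs))
```

The "tough cases" the paper targets are exactly those where the path constraints are too complex for a solver, which is where the neural-network approximation comes in.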
Considering instead what an adversary might try to learn about a program, the paper on "Profit" considered encrypted network traffic as a side channel and the potential to infer certain aspects of program behavior by analyzing the traffic.
Understanding threats and protecting IoT
Another important aspect is detecting and understanding threats in the wild. Network-based threats in particular, and, looking ahead, IoT, are areas of interest for us. On the detection side, combining network traffic analysis with host software analysis, a paper on DNS analysis to identify malware proposed endpoint-based DNS monitoring that ties each DNS lookup to its originating process, both for better context and to be able to pin down the offending process(es). Addressing instead the understanding of threats in the wild, another paper presented a measurement study of the Hajime IoT botnet, focusing on its network mechanisms, which use a peer-to-peer architecture and exploit the BitTorrent DHT (making the botnet hard to block without disrupting that service). Continuing on the topic of IoT security, a paper proposed "IoTGuard", a system for enforcing policies on IoT network traffic.
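The attribution idea behind endpoint-based DNS monitoring can be sketched as a simple join: match each observed DNS query against the local socket table so the lookup maps back to a process. All field names and data below are illustrative, not taken from the paper.

```python
# Hypothetical sketch: attribute DNS queries to processes by joining the
# query log with a snapshot of the local socket table on source port.

dns_queries = [  # (timestamp, src_port, queried name) -- made-up data
    (1000.0, 51532, "update.example.com"),
    (1000.4, 51533, "evil-c2.example.net"),
]

socket_table = [  # (src_port, pid, process name) -- made-up data
    (51532, 412, "pkg-updater"),
    (51533, 977, "suspicious.bin"),
]

def attribute(queries, sockets):
    """Map each DNS query to the process owning its source port."""
    port_to_proc = {port: (pid, name) for port, pid, name in sockets}
    out = []
    for ts, port, qname in queries:
        pid, name = port_to_proc.get(port, (None, "unknown"))
        out.append((qname, pid, name))
    return out

for qname, pid, name in attribute(dns_queries, socket_table):
    print(f"{qname} <- pid {pid} ({name})")
```

In practice the snapshot and the query must be correlated in time as well (ports are reused), which is part of what makes doing this robustly on a real endpoint non-trivial.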
Adversarial Machine Learning and attacks on speech recognition
Adversarial machine learning means considering the potential threats inherent in applying ML where an adversary has an incentive to attack the system by trying to fool it, poison its models, or extract the model or information about the original input data. The area has seen a proverbial explosion in activity over the last few years, both because of its obvious importance as the use of ML spreads ever wider in society, and possibly also helped by highly evocative examples from the domains of image and speech recognition. Not altogether surprisingly, then, the program also encompassed sessions devoted to adversarial machine learning and attacks on speech recognition systems. One study proposed a mitigation against adversarial-example attacks, based on deriving certain invariants from neural network models; and while at least one of the papers on attacks against speech recognition was also based on adversarial ML, another paper targeted earlier preprocessing stages in the system, triggering speech recognition services, such as voice assistants, with noise-like audio.
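The flavor of an adversarial evasion attack can be shown with a toy example in the style of the fast gradient sign method: nudge each input feature slightly against the gradient of the model's score so a small, bounded perturbation flips the classification. The linear classifier and its numbers below are made up for illustration; the papers at the symposium targeted far larger models for images and audio.

```python
# Toy FGSM-style evasion attack against a fixed linear classifier.
# Weights and inputs are hypothetical; predict "positive" when score > 0.

w = [0.9, -0.5, 0.3]   # classifier weights (made up)
b = -0.1               # bias (made up)

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def fgsm(x, eps):
    """Shift each feature by eps against the score gradient, i.e.
    x_i - eps * sign(w_i), pushing the score toward the other class."""
    return [xi - eps * sign(wi) for xi, wi in zip(x, w)]

x = [1.0, -0.2, 0.5]       # clean input, classified positive
x_adv = fgsm(x, eps=0.7)   # each feature moved by at most 0.7
print(score(x), score(x_adv))
```

For deep models the same idea uses the gradient of the loss with respect to the input instead of fixed weights, and the striking part is how small eps can be while still flipping the decision.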
5G security
There were two papers looking at 5G security. Applying formal analysis methods to the protocols being standardized is a laudable effort that benefits the whole ecosystem. One study to that end focused on the 5G-AKA protocol and, among the reported findings, identified a point in the specifications, related to subscriber identity request/response mapping in the communication between the Authentication Server Function (AUSF) and the Authentication credential Repository and Processing Function (ARPF), where some security properties rely on other specifications and on implementation in a manner that may not be obvious. The second paper described inferring information about subscriber location and subscriber identity by monitoring the paging channel (assuming an adversary located in the same cell) in conjunction with messages sent towards the subscriber. Experimental results for 4G were reported, but the arguments for transference to 5G appear less certain, as the latest 5G specifications no longer use permanent identifiers to determine paging timing.
All in all, there was a lot of interesting work, and many more papers were presented at the event than I've been able to mention here. The exciting atmosphere at the conference is reassuring: it suggests there will be more high-quality research in relevant security areas in the years ahead, and we look forward to new insights and results in areas that will also have an impact on future mobile communication systems.
Read more about security research.
For an overview of security topics at Ericsson, please visit this link.