Grant Call 1 - List of Projects
Project Title:
Enhanced function signature recovery for control-flow integrity enforcement on compiler-optimized executables
Lead Principal Investigator:
Associate Professor Gao Debin
Abstract:
Control-Flow Integrity (CFI) enforcement is a promising technique for producing trustworthy software. This project focuses on function signature recovery, a critical step in CFI enforcement when source code is not available. Current approaches rely on the assumption of matching function signatures at caller and callee sites in an executable; however, various compiler optimizations violate well-known calling conventions and result in mismatched recovered signatures. We 1) investigate the extent to which modern compiler optimizations impact function signature recovery on the x86-64 architecture under Linux and Windows; 2) propose a novel hybrid rule- and heuristic-based inference mechanism to improve recovery accuracy; 3) apply domain knowledge to enhance existing machine-learning techniques to handle caller and callee signature recovery for both direct and indirect calls; and 4) implement CFI through binary rewriting. Our objective is to design and implement an automatic system that produces CFI-enforced program executables.
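The caller/callee matching problem above can be sketched with a toy example. This is a minimal illustration of arity-based target-set construction for CFI (in the spirit of signature-based CFI policies); all names and recovered signatures below are hypothetical stand-ins, not the project's actual tool or data.

```python
# Hypothetical recovered signatures: number of arguments a callee
# consumes, and number of arguments a call site prepares. Real recovery
# works on binaries; these tables are illustrative stand-ins.
callee_arity = {"parse": 2, "log": 1, "exit_handler": 0}
callsite_arity = {"call_1": 2, "call_2": 1}

def allowed_targets(prepared_args, callees):
    """An indirect call preparing N arguments may target any function
    consuming at most N arguments. Exact matching would be too strict:
    optimizations such as dead-argument elimination can make a callee
    consume fewer arguments than the call site prepares."""
    return {f for f, n in callees.items() if n <= prepared_args}

for site, n in sorted(callsite_arity.items()):
    print(site, "->", sorted(allowed_targets(n, callee_arity)))
```

The relaxed `<=` comparison is the key design point: the abstract notes that optimizations break exact caller/callee signature matching, so a sound policy must over-approximate the target set rather than miss legitimate targets.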
Project Title:
SpecTest: Specification-based Compiler Fuzzing
Lead Principal Investigator:
Associate Professor Sun Jun
Abstract:
Compilers are a key technology in software development. They are relevant not only for general-purpose programming languages (like C and Java) but also for many domain-specific languages. Compilers are error-prone, especially concerning less-used language features. Existing compiler testing techniques often rely on weak test oracles, which prevents them from finding deep semantic errors. In this project, we aim to develop a novel specification-based fuzzing method named SpecTest for compilers. SpecTest has three components: an executable specification of the language, a fuzzing engine that generates test cases for programs in the language, and a code mutator that generates new programs for testing the compiler. SpecTest identifies compiler bugs by comparing the abstract execution of the specification with the concrete execution of the compiled program. Furthermore, with the mutator, SpecTest can systematically test less-used language features. SpecTest will be evaluated on the newly developed Solidity compiler and on more mature compilers like GCC and the JVM.
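The oracle idea described above, comparing abstract execution of a specification against concrete execution of the compiled program, can be sketched in a few lines. Both interpreters below are hypothetical stand-ins (with a seeded bug in the "compiled" side to show how a mismatch is flagged); they are not SpecTest's actual components.

```python
def spec_eval(program, inputs):
    """Abstract execution per the executable language specification
    (stand-in: only a 'sum' program is modelled)."""
    return sum(inputs) if program == "sum" else None

def compiled_run(program, inputs):
    """Concrete execution of the compiled program (stand-in, with a
    deliberately seeded bug on a rarely exercised path)."""
    if program == "sum" and len(inputs) > 2:   # seeded compiler bug
        return sum(inputs) - 1
    return sum(inputs)

def fuzz_once(program, inputs):
    """One differential-testing step: disagreement signals a bug."""
    return spec_eval(program, inputs) == compiled_run(program, inputs)

print(fuzz_once("sum", [1, 2]))      # behaviours agree
print(fuzz_once("sum", [1, 2, 3]))   # mismatch exposes the seeded bug
```

A fuzzing engine would drive `fuzz_once` over many generated programs and inputs, and the mutator would bias generation toward less-used language features, per the abstract.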
Project Title:
Evaluating the Trustworthiness of Deep Learning Systems
Lead Principal Investigator:
Professor Dong Jin Song
Abstract:
Deep learning (DL) systems have gained much popularity in real-world applications. It is therefore important to test and improve the trustworthiness of DL systems, especially those that are safety- and security-critical. Research on determining the trustworthiness of DL systems remains at an early stage, due to the inherent uncertainty of the systems' behaviors and outputs. This project aims to develop new techniques for determining the trustworthiness of DL systems, with an emphasis on uncovering and accounting for the underlying uncertainties. The new techniques will address two important problems: the design of oracles for deep neural networks (DNNs) that provide insight into additional properties of the DNNs, and the discrimination between intrinsically imprecise behavior and faulty behavior of systems using DNNs. Building on these techniques, we will then develop methods to support the removal of discovered faults in DL systems and to improve the accuracy and interpretability of the systems.
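One way to read the two problems above is through a metamorphic-relation oracle ("a tiny input perturbation should not change the predicted label") combined with a confidence threshold that separates intrinsic imprecision from suspected faults. The sketch below is a hedged illustration under that assumption; the classifier, threshold, and labels are toy stand-ins, not the project's techniques.

```python
import math

def toy_classifier(x):
    """Stand-in 'DNN': returns (label, confidence) for a scalar input
    using a logistic score. A real oracle would wrap a trained model."""
    score = 1.0 / (1.0 + math.exp(-x))
    label = 1 if score >= 0.5 else 0
    return label, max(score, 1.0 - score)

def metamorphic_oracle(x, eps=1e-3, conf_threshold=0.7):
    """Check the metamorphic relation and classify any violation."""
    (l1, c1), (l2, _) = toy_classifier(x), toy_classifier(x + eps)
    if l1 == l2:
        return "pass"                 # relation holds
    # A label flip near the decision boundary (low confidence) is treated
    # as intrinsic imprecision; a confident flip is a suspected fault.
    return "imprecise" if c1 < conf_threshold else "suspected fault"

print(metamorphic_oracle(3.0))       # far from the boundary: passes
print(metamorphic_oracle(-0.0004))   # near the boundary: flip tolerated
```

The point of the sketch is the three-way verdict: unlike a conventional pass/fail oracle, it explicitly accounts for the uncertainty the abstract highlights.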
Project Title:
A Novel Hybrid Kernel Symbolic Execution Framework for Malware Analysis
Lead Principal Investigator:
Associate Professor Ding Xuhua
Abstract:
Today’s malware analysis tools, especially those targeting kernel attacks, face the barrier of insufficient code path coverage to fully expose malicious behaviours, as that requires systematic exploration of kernel states. Although symbolic execution is the well-established solution for code coverage of benign programs, it does not overcome this barrier because of its susceptibility to attacks from the running target under analysis and its inability to manage complex kernel execution. This project develops novel techniques to automatically and systematically generate code paths for maliciously influenced kernel behaviours. The outcome includes a system infrastructure that allows an analysis tool to transparently and securely control the target in either user or kernel mode, and hybrid symbolic execution algorithms built on that infrastructure to explore kernel execution paths. It not only enhances kernel-oriented security analysis, testing, and certification with widened code coverage, but also strengthens malware analysis with a deepened understanding of underlying kernel activities.
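The core idea of symbolic execution that the project extends can be shown on a toy scale: fork at each branch, track the constraints on the input along each path, and prune infeasible paths. The sketch below tracks constraints as a single integer interval instead of calling an SMT solver, and the branching program is a hypothetical stand-in, not kernel code.

```python
def explore_paths():
    """Symbolically execute, over an integer x:
           if x > 10:
               if x < 20: reach("BUG")
               else:      reach("ok_high")
           else:          reach("ok_low")
       tracking the feasible interval [lo, hi] for x on each path."""
    INT_MIN, INT_MAX = -2**31, 2**31 - 1
    paths = []
    lo, hi = INT_MIN, INT_MAX
    # Fork on (x > 10): then-branch x >= 11, else-branch x <= 10.
    f_lo, f_hi = lo, min(hi, 10)
    if f_lo <= f_hi:
        paths.append(("ok_low", (f_lo, f_hi)))
    t_lo, t_hi = max(lo, 11), hi
    if t_lo <= t_hi:
        # Fork on (x < 20) under the accumulated constraint x >= 11.
        bug = (t_lo, min(t_hi, 19))
        ok = (max(t_lo, 20), t_hi)
        if bug[0] <= bug[1]:
            paths.append(("BUG", bug))
        if ok[0] <= ok[1]:
            paths.append(("ok_high", ok))
    return paths

for label, (lo, hi) in explore_paths():
    print(label, "reachable for x in", [lo, hi])
```

The "hybrid" aspect described in the abstract goes well beyond this: concrete execution of the protected target supplies states that pure symbolic reasoning (here, intervals) cannot manage on real kernel code.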
Project Title:
Improving Trustworthiness of Real-world AI systems through Adversarial Attack and Effective Defense
Lead Principal Investigator:
Associate Professor Bo An
Abstract:
Artificial intelligence research has achieved great success in recent years and has inspired many innovative applications in industry, including recommender systems, self-driving cars, and face recognition. The security of AI systems has therefore become a critical concern in both industry and academia. Unfortunately, the study of the security of AI systems falls far short of what industrial deployment of real-world AI systems requires. The main limitations of existing work include failing to consider practical, implementable adversarial attacks available to real-world attackers, and neglecting the specific need of deployed AI systems to balance safety and performance. Based on our past experience in solving a broad class of practical problems with AI techniques, we will provide a comprehensive toolkit that addresses the above limitations, can be deployed to evaluate the safety of real-world AI systems, and provides an adaptive solution to satisfy practical needs.
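As a minimal illustration of the kind of adversarial attack such safety evaluation must consider, the sketch below applies an FGSM-style perturbation (moving each feature a small step against the true label) to a toy linear classifier. The model, weights, and budget are hypothetical; real attacks operate on deep models under practical constraints, which is precisely the gap the abstract identifies.

```python
def predict(w, x, b=0.0):
    """Linear classifier: returns (sign label, raw score)."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return (1 if score >= 0 else -1), score

def fgsm_perturb(w, x, y, eps=0.5):
    """FGSM-style step for a linear model: for each feature, move by
    eps against the true label y, i.e. x' = x - eps * y * sign(w)."""
    sign = lambda v: 1 if v > 0 else (-1 if v < 0 else 0)
    return [xi - eps * y * sign(wi) for wi, xi in zip(w, x)]

w, x, y = [0.6, -0.4], [1.0, 0.5], 1   # x is correctly classified as +1
print(predict(w, x)[0])                # label +1
x_adv = fgsm_perturb(w, x, y)
print(predict(w, x_adv)[0])            # label flips to -1
```

A practical, implementable attack of the kind the project targets would additionally respect real-world constraints (e.g. which features an attacker can actually modify), which the toy `eps` budget only gestures at.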
Project Title:
Trustworthy Distributed Software with Safety and Liveness Guarantees
Lead Principal Investigator:
Associate Professor Chin Wei-Ngan
Abstract:
Distributed software is increasingly important in the digital world, with its footprint firmly anchored everywhere from intelligent appliances to large-scale cloud systems. The effort developers must invest in going distributed is much greater than for conventional software development, much of which was not designed with sound concurrency in mind. For this reason, current trust in evolving distributed systems is often based on non-rigorous techniques, such as informal specification and non-exhaustive testing. As a result, there is an emerging opportunity to develop a programming methodology that emphasizes the trustworthiness of distributed code. An important objective arising from this opportunity is to bring the scientific principles of rigorous mathematical specification and verification to the core of distributed software design and implementation. In particular, we propose techniques that examine three different properties of trustworthy distributed software: functional correctness, safety, and liveness. To validate our proposal, we plan to use test cases designed for the most popular language to date, namely C, but also for applications written in Go, an emerging programming language for building distributed software.
Project Title:
CertiChain: A Framework for Mechanically Verifying Blockchain Consensus Protocols
Lead Principal Investigator:
Associate Professor Ilya Sergey
Abstract:
The idea of blockchain consensus has enabled many new decentralised applications, including digital currencies, verifiable replicated computations, application-specific arbitration, and transparent voting. Despite the growing number of implemented blockchain-based solutions, the lack of a systematic machine-assisted methodology for specifying and verifying properties of those systems leads to corruption of replicated data, monetary losses, and an overall decline of public trust in the enabling technology.
In this project we develop CertiChain, a framework for computer-aided verification of correctness properties of blockchain-based distributed consensus protocols: their safety, liveness, and probabilistic security guarantees. We build on recent advances in applying mechanised interactive theorem proving to establish machine-checked correctness of safety-critical distributed systems. We employ state-of-the-art techniques from the foundational formal verification of distributed systems to address the challenges posed by implementations of blockchain protocols. Our approach enables modular and scalable reasoning about composite systems, making verification efforts reusable and maintainable.