Resilience

Coming Soon: Machine-Checked Mathematical Proofs in Everyday Software and Hardware Development

Saal Dijkstra
Adam Chlipala
Most working engineers view machine-checked mathematical proofs as an academic curiosity, if they have ever heard of the concept at all. In contrast, activities like testing, debugging, and code review are accepted as essential. They are woven into the lives of nearly all developers. In this talk, I will explain how I see machine-checked proofs enabling new everyday activities for developers of computer software and hardware. These activities have the potential to lower development effort dramatically, at the same time as they increase our assurance that systems behave correctly and securely. I will give a cosmological overview of this field, answering the FAQs that seem to stand in the way of practicality; and I will illustrate the principles with examples from projects that you can clone from GitHub today, covering the computing stack from digital hardware design to cryptographic software and applications.
Today's developers of computer software and hardware are tremendously effective, compared to their predecessors. We have found powerful ways of modularizing and validating our work. This talk is about ammunition for these activities from a perhaps-unexpected source.

Modularity involves breaking a complex system into a hierarchy of simpler pieces, which may be written and understood separately. Structured programming (e.g., using loops and conditionals instead of gotos) helps us read and understand parts of a single function in isolation, and data abstraction lets us encapsulate important functionality in objects, with guarantees that other code can only access the private data by calling public methods. That way, we can convince ourselves that the encapsulated code upholds certain essential properties, regardless of which other code it is linked with. Systematic unit testing also helps enforce contracts for units of modularity. Each of these techniques can be rerun automatically to catch regressions in evolving systems, in a way that accurately points the finger of responsibility at particular modules.

Validation is an important part of development that encompasses testing, debugging, code review, and anything else we do to raise our confidence that the system behaves as intended. Experienced engineers know that validation tends to take up the majority of engineering effort. Often that effort involves mentally taxing activities that would not otherwise come up in coding. One example is thinking about test-case coverage; another is including instrumentation that produces traces to consult during debugging. It is not hard for working developers to imagine great productivity gains from better ways to break systems into pieces or raise our confidence in those pieces.

The claim I will make in this talk is that a key source of such insights has been neglected: machine-checked mathematical proofs. Here the basic functionality is an ASCII language for defining mathematical objects, stating theorems about them, and giving proofs of those theorems. Crucially, an algorithm checks that purported proofs really do establish the theorems. (A minimal sketch of this workflow appears after the list of goals below.) By going about these activities in the style of programming, we inherit the usual supporting tools like IDEs, version control, continuous integration, and automated build processes. But how could so esoteric a task as math proofs call for that kind of tooling, and what does it have to do with building real computer systems? I will explain a shared vision to that end, developed along with many other members of my research community. Let me try to convince you that all of the following goals are attainable in the next 10 years:

- We will have complete computer systems implementing moderately complex network servers for popular protocols, proved to implement those protocols correctly, from the level of digital circuits on up.
- We will remove all deployed code (hardware or software) from the trusted computing base, shifting our trust to much smaller specifications and proof checkers.
- Hobbyists will be able to design new embedded computing platforms by mixing and matching open-source hardware and software components, also mixing and matching the proofs of those components, guaranteeing no bugs at the digital-abstraction level or higher, with no need for debugging.
- New styles of library design will be enabled by the chance to attach a formal behavioral specification to each library.
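To make the proof-assistant workflow concrete, here is a minimal sketch in Lean (the projects mentioned in this talk are built in Coq, but the idea is the same): define a mathematical object, state a theorem about it, and give a proof that the tool checks mechanically, rejecting any step that does not follow.

```lean
-- Define a mathematical object: a recursive function on natural numbers.
def double : Nat → Nat
  | 0     => 0
  | n + 1 => double n + 2

-- State a theorem about it and prove it by induction. The checker
-- verifies every step; an incorrect proof simply does not compile.
theorem double_eq_two_mul (n : Nat) : double n = 2 * n := by
  induction n with
  | zero => rfl
  | succ n ih =>
    simp only [double, ih]  -- unfold one step and apply the hypothesis
    omega                   -- close the remaining linear-arithmetic goal
```

Like ordinary source files, such proof scripts live in version control and rerun in continuous integration, so a regression in either the definitions or the proofs is caught automatically.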
As one instance of such specification-equipped library design, rank-and-file programmers will be able to assemble their own code for cryptographic protocols, writing code that looks like a reference implementation in Python but getting performance comparable to what experts handcraft in assembly today. Yet that benefit would come with no need to trust that library authors have avoided bugs or intentional backdoors, perhaps even including automatic proofs of cryptographic security properties. (A sketch of what such a reference implementation might look like follows the topic list below.)

The main technical topics I will cover to explain my optimism:

- The basic functionality of proof assistants and why we should trust their conclusions
- How to think about system decomposition with specifications and proofs, including why, for most components, we do not need to worry about specification mistakes
- The different modes of applying proof technology to check or generate components
- The engineering techniques behind cost-effective proof authoring for realistic systems
- A hardware case study: Kami, supporting component-based digital hardware authoring with proofs
- A software case study: Fiat Cryptography, supporting correct-by-construction auto-generation of fast code for elliptic-curve cryptography
- Pointers to where to look next, if you would like to learn more about this technology
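To illustrate the kind of starting point meant by "reference implementations in Python," here is a hedged sketch (not taken from Fiat Cryptography itself) of straightforward arithmetic in the prime field GF(2^255 - 19) used by Curve25519. Fiat Cryptography's pipeline begins from comparably simple functional specifications and derives fast, proved-correct low-level code.

```python
# Hypothetical reference implementation of arithmetic in GF(2^255 - 19),
# the prime field underlying Curve25519. Clear and easy to audit, but
# far too slow for production use; the promise of tools like Fiat
# Cryptography is to generate assembly-speed code proved equivalent to
# specifications at roughly this level of simplicity.

P = 2**255 - 19  # the prime modulus

def fadd(x: int, y: int) -> int:
    """Field addition modulo P."""
    return (x + y) % P

def fmul(x: int, y: int) -> int:
    """Field multiplication modulo P."""
    return (x * y) % P

def finv(x: int) -> int:
    """Field inverse via Fermat's little theorem: x^(P-2) mod P."""
    return pow(x, P - 2, P)

# Sanity check: multiplying by an inverse yields 1 for nonzero elements.
assert fmul(12345, finv(12345)) == 1
```

Hand-optimized implementations replace the big-integer arithmetic above with carefully scheduled fixed-width limbs; the point of correct-by-construction generation is that nobody has to audit that optimized code by hand.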

Additional information

Type: lecture
Language: English

More sessions

12/27/17
Resilience
Sebastian Jünemann
Saal Borg
Health, a decisive ingredient of happiness and contentment, has been thoroughly capitalized down to its smallest subdomains. And this process does not stop at humanitarian aid and crisis intervention either. In this talk we walk through several examples and explain how CADUS, with its makerspace, is trying to hack this problem on many levels.
12/27/17
Resilience
Clifford Wolf
Saal Clarke
Formal hardware verification (hardware model checking) can prove that a design has a specified property. Historically, only very simple properties of simple designs could be proved this way, but improvements in model checkers over the last decade now let us prove very complex design properties. riscv-formal is a framework for formally verifying RISC-V processors directly against a formal ISA specification. In this presentation I will discuss how the complex task of verifying a ...
12/27/17
Resilience
Alastair Reid
Saal Borg
Formal verification of software has finally started to become viable: we have examples of formally verified microkernels, realistic compilers, hypervisors, etc. These are huge achievements, and we can expect to see even more impressive results in the future, but the correctness proofs depend on a number of assumptions about the Trusted Computing Base that the software depends on. Two key questions to ask are: Are the specifications of the Trusted Computing Base correct? And do the implementations ...
12/27/17
Resilience
Saal Clarke
We shall explain the renewed interest in mix networks. Like Tor, mix networks protect metadata by using layered encryption and routing packets between a series of independent nodes. Mix networks resist vastly more powerful adversary models than Tor though, including global passive adversaries. In so doing, mix networks add both latency and cover traffic. We shall outline the basic components of a mix network, touch on their roles in resisting active and passive attacks, and discuss how the ...
12/28/17
Resilience
Katharine Jarmul
Saal Adams
In the past decade, machine learning researchers and theorists have created deep learning architectures which seem to learn complex topics with little intervention. Newer research in adversarial learning questions just how much “learning” these networks are doing. Several theories have arisen regarding neural network “blind spots” which can be exploited to fool the network. For example, by changing a series of pixels which are imperceptible to the human eye, you can render an image ...
12/28/17
Resilience
raichoo
Saal Dijkstra
Systems are getting increasingly complex, and it is becoming harder to understand what they are actually doing. Even though they are built by human individuals, they often surprise us with seemingly bizarre behavior. DTrace lights a candle in the darkness that is a running production system, giving us unprecedented insight and helping us to understand what is actually going on. We are going to implement `strace`-like functionality, trace every function call in the kernel, watch the scheduler ...
12/28/17
Resilience
Mike Sperber
Saal Dijkstra
Hacker culture overcomes limitations in computer systems through creativity and tinkering. At the same time, hacker culture has shaped the practice of software development to this day. This is problematic: techniques effective for breaking (into) computer systems are not necessarily suitable for developing resilient and secure systems. It does not have to be this way: we can approach software development as a methodical, systematic activity rather than tinkering, and teach it accordingly. ...