RISC-V

Open ESP

The Heterogeneous Open-Source Platform for Developing RISC-V Systems
ESP is an open-source research platform for RISC-V systems-on-chip that integrate many hardware accelerators. ESP provides a vertically integrated design flow, from software development and hardware integration to full-system prototyping on FPGA. For application developers, ESP offers domain-specific automated solutions to synthesize new accelerators for their software and to map that software onto the heterogeneous SoC architecture. For hardware engineers, ESP offers automated solutions to integrate their accelerator designs into the complete SoC. Participants in this FOSDEM 2020 session will learn how to use ESP from the viewpoints of both application developers and hardware engineers by following a series of short hands-on tutorials embedded in the lecture. Conceived as a heterogeneous integration platform and tested through years of teaching at Columbia University, ESP is intrinsically suited to foster the collaborative engineering of RISC-V based SoCs across the open-source community.
What is ESP? ESP is an open-source research platform to design and program heterogeneous systems-on-chip (SoCs). A heterogeneous SoC combines multiple general-purpose processor cores and many specialized hardware accelerators.

What does ESP offer? ESP provides automated solutions to (a) synthesize new accelerators, (b) integrate them with RISC-V processors and other third-party accelerators into a complete SoC, (c) rapidly prototype the SoC on an FPGA board, and (d) run software applications that take advantage of these accelerators. ESP contributes to the open-source movement by supporting the realization of more scalable SoC architectures that integrate more heterogeneous components, thanks to a more flexible design methodology that accommodates different specification languages and design flows.

What is an example of an SoC prototype realized with ESP? With ESP's automation capabilities, it is easy to realize FPGA-based prototypes of complete SoCs. For example, an SoC may feature the Ariane RISC-V processor core booting Linux, a multi-plane network-on-chip supporting a partitioned memory hierarchy with multiple DRAM controllers, and tens of loosely-coupled accelerators that execute coarse-grained tasks, exchanging large amounts of data with DRAM through direct memory access (DMA). These accelerators can be third-party open-source hardware components that “speak” the AXI protocol (e.g. the NVIDIA NVDLA accelerator for deep learning) or new accelerators synthesized with different design flows from specifications written in different languages, including C with Xilinx Vivado HLS, SystemC with Cadence Stratus HLS, Keras/TensorFlow and PyTorch with hls4ml, Chisel, SystemVerilog, Verilog, and VHDL.

Why is ESP relevant now? Information technology has entered the age of heterogeneous computing. Across a variety of application domains, computing systems rely on highly heterogeneous architectures that combine multiple general-purpose processors with specialized hardware accelerators. The complexity of these systems, however, threatens to widen the gap between the capabilities provided by semiconductor technologies and the productivity of computer engineers. ESP tackles this challenge by raising the level of abstraction in the design process, simplifying the domain-specific programming of heterogeneous architectures, and leveraging the potential of the emerging open-source hardware movement.

What are some key ingredients of the ESP approach? Building on years of research on communication-based system-level design at Columbia University, ESP combines an architecture and a methodology. The flexible tile-based architecture simplifies the integration of heterogeneous components by balancing regularity and specialization. The companion methodology raises the level of abstraction to system-level design, thus promoting closer collaboration between software programmers and hardware engineers. Through the automatic generation of device drivers from pre-designed templates, ESP simplifies the invocation of accelerators from user-level applications running on top of Linux. Through the automatic generation of a multi-plane network-on-chip from a parameterized model, the ESP architecture can scale to accommodate many processors, tens of accelerators, and a distributed memory hierarchy. A set of ESP Platform Services provides pre-validated solutions for accelerator configuration, memory management, sharing of system resources, and dynamic frequency scaling, among others.
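To make the accelerator-synthesis flow concrete, here is a minimal sketch of the kind of specification that a C-based HLS flow such as Xilinx Vivado HLS can consume. The kernel name, problem size, and pragma below are illustrative assumptions for a simple coarse-grained task, not code taken from the ESP release.

```c
/* Minimal sketch of an accelerator kernel written in plain C, in the style
 * that a C-based HLS flow can synthesize. Names and sizes are hypothetical. */

#include <stddef.h>

#define VEC_LEN 1024  /* hypothetical fixed problem size for the accelerator */

/* Coarse-grained task: element-wise multiply-accumulate over two buffers
 * that the accelerator would fetch from DRAM via DMA. */
void vec_mac_kernel(const int in_a[VEC_LEN],
                    const int in_b[VEC_LEN],
                    int out[VEC_LEN])
{
    for (size_t i = 0; i < VEC_LEN; i++) {
#pragma HLS PIPELINE II=1  /* hint to the HLS tool; ignored by a regular C compiler */
        out[i] = in_a[i] * in_b[i] + out[i];
    }
}
```

An HLS tool turns such a loop nest into an RTL block; in a platform like ESP, that block would then be wrapped in a tile and attached to the network-on-chip so it can exchange data with DRAM through DMA, as described above.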
How does ESP differ from other RISC-V based platforms? To date, the majority of open-source hardware (OSH) efforts related to RISC-V have focused on the development of processor cores that implement the RISC-V ISA and of small-scale SoCs that connect these cores with tightly-coupled functional units and coprocessors, typically through bus-based interconnects. Meanwhile, there has been less effort on solutions for large-scale SoCs that combine RISC-V cores with many loosely-coupled components, such as coarse-grained accelerators, interconnected with a network-on-chip (NoC). Compared to other RISC-V related projects, ESP focuses on scalability (with its NoC-based architecture), heterogeneity (with an emphasis on loosely-coupled accelerators), and flexibility (with support for different design flows). Just as the ESP architecture simplifies the integration of heterogeneous components developed by different teams, the ESP methodology embraces the use of heterogeneous design flows for component development.

Who are the authors of ESP? ESP has been developed over the past seven years by the System-Level Design (SLD) group in the Department of Computer Science at Columbia University. The SLD group has published over a dozen scientific papers in peer-reviewed conferences and journals describing the most innovative aspects of ESP. ESP was released as an open-source project on GitHub in the summer of 2019.
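As an illustration of the driver-generation point made above, the sketch below shows how a user-level Linux application might hand a coarse-grained task to a loosely-coupled accelerator through a character-device interface. The device path, ioctl command, and descriptor layout are hypothetical placeholders chosen for the example, not the actual interface generated by ESP.

```c
/* Sketch of a user-level invocation of a loosely-coupled accelerator
 * through an auto-generated Linux device driver. All names below
 * (device node, ioctl number, descriptor fields) are hypothetical. */

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <unistd.h>

/* Hypothetical descriptor passed to the driver: buffer offsets in a
 * shared DMA region and the size of the coarse-grained task. */
struct acc_descriptor {
    unsigned long src_offset;  /* where the input data starts        */
    unsigned long dst_offset;  /* where the results should be written */
    unsigned long len;         /* number of elements to process       */
};

/* Hypothetical ioctl command: "run one invocation to completion". */
#define ACC_IOC_RUN _IOW('a', 0, struct acc_descriptor)

int main(void)
{
    /* Device node exposed by the generated driver (name is an assumption). */
    int fd = open("/dev/vec_mac.0", O_RDWR);
    if (fd < 0) {
        perror("open accelerator device");
        return EXIT_FAILURE;
    }

    struct acc_descriptor desc = {
        .src_offset = 0,
        .dst_offset = 4096,
        .len        = 1024,
    };

    /* Blocking call: the driver configures the accelerator, starts the
     * DMA transfers, and returns when the task completes. */
    if (ioctl(fd, ACC_IOC_RUN, &desc) < 0) {
        perror("accelerator run");
        close(fd);
        return EXIT_FAILURE;
    }

    close(fd);
    return EXIT_SUCCESS;
}
```

In this sketch the blocking ioctl keeps the application free of register-level details; configuring the accelerator, managing memory, and signalling completion are left to the generated driver and the platform services described above.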

Additional information

Type: devroom

More sessions

2/1/20
RISC-V
Greg Chadwick
K.3.401
Ibex implements RISC-V 32-bit I/E MC M-Mode, U-Mode and PMP. It uses an in-order 2-stage pipeline and is best suited to area- and power-sensitive rather than high-performance applications. However, there is scope for meaningful performance gains without major impact on power or area. This talk describes work done at lowRISC to analyse and improve the performance of Ibex. The RTL of an Ibex system is simulated using Verilator to run CoreMark and Embench, and the traces are analysed to identify the major ...
2/1/20
RISC-V
Dan Petrisko
K.3.401
BlackParrot is a Linux-capable, cache-coherent RISC-V multicore, designed for efficiency and ease of use. In this talk, we will provide an architectural overview of BlackParrot, focusing on the design principles and development process as well as the software and hardware ecosystems surrounding the core. We will also discuss the project roadmap and our plans to engage the open-source community. Last, we will demonstrate a multithreaded RISC-V program running on top of Linux on a multicore ...
2/1/20
RISC-V
K.3.401
HammerBlade is an open source RISC-V manycore that has been under development since 2015 and has already been silicon validated with a 511-core chip in 16nm TSMC. It features extensions to the RISC-V ISA that target GPU-competitive performance for parallel programs (i.e. GPGPU) including graphs and ML workloads. In this talk we will overview the components of the HW and software ecosystem in the latest version, and show you how to get up and running as an open source user or contributor in ...
2/1/20
RISC-V
Schuyler Eldridge
K.3.401
The burgeoning RISC-V hardware ecosystem includes a number of microprocessor implementations [1, 3] and SoC generation frameworks [1, 2, 7]. However, while accelerator “sockets” have been defined and used (e.g., Rocket Chip’s custom coprocessor/RoCC), accelerators require additional collateral to be generated like structured metadata descriptions, hardware wrappers, and device drivers. Requiring manual effort to generate this collateral proves both time consuming and error prone and is at ...
2/1/20
RISC-V
Karthik Swaminathan
K.3.401
RISC-V processors have gained acceptance across a wide range of computing domains, from IoT to embedded/mobile class and even server-class processing systems. In processing systems ranging from connected cars and autonomous vehicles to those on board satellites and spacecraft, these processors are targeted to function in safety-critical systems, where Reliability, Availability and Serviceability (RAS) considerations are of paramount importance. Along with potential system ...
2/1/20
RISC-V
K.3.401
RISC-V application, OS, and firmware development has been slowed by the lack of "real hardware" available for developers to work with. With the rise of FPGAs in the cloud and the recent release of the OpenPiton+Ariane manycore platform on Amazon's F1 cloud FPGA platform, we propose using 1-12 core OpenPiton+Ariane processors emulated on F1 to develop RISC-V software and firmware. In this talk, we will give an accelerated tutorial on how to get started with OpenPiton+Ariane, the spec-compliant ...
2/1/20
RISC-V
Ofer Shinaar
K.3.401
We would like to present an overlay technique for RISC-V, developed by WD and open sourced. This firmware feature acts as a software “paging” manager; it is intended to be threaded into the real-time code and into the toolchain. Cacheable Overlay Manager RISC-V (ComRV) is a technique that fits limited-memory embedded devices (such as IoT devices) and does not need any hardware support.