FPGA Architecture for Deep Learning: Survey and Future Directions

Paper: Field-Programmable Gate Array Architecture for Deep Learning: Survey & Future Directions, by Andrew Boutros and 2 other authors (arxiv.org).

Abstract: Deep learning (DL) is becoming the cornerstone of numerous applications both in datacenters and at the edge. Field-programmable gate arrays (FPGAs) are an attractive acceleration platform for these workloads, combining custom-hardware efficiency with reprogrammability, and their diverse high-speed IOs also enable directly interfacing the FPGA to the network and/or a variety of external sensors, making them suitable for both datacenter and edge use cases. As DL has become an ever more important workload, FPGA architectures are evolving to enable higher DL performance. In this article, we survey both academic and industrial FPGA architecture enhancements for DL. First, we give a brief introduction to the basics of FPGA architecture and how its components lead to strengths and weaknesses for DL applications. We then survey DL-specific enhancements to traditional FPGA building blocks such as logic blocks, arithmetic circuitry, and on-chip memories, as well as new in-fabric DL-specialized blocks for accelerating tensor computations. Finally, we discuss hybrid devices that combine processors and coarse-grained accelerator blocks with FPGA-like interconnect and networks-on-chip, and highlight promising future research directions.
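
To make the workload concrete, here is a minimal C sketch (illustrative only, not taken from the paper; the function name dot_product_int8 is hypothetical) of the low-precision multiply-accumulate pattern at the heart of the tensor computations that DSP slices and the newer in-fabric DL-specialized blocks are designed to accelerate:

```c
#include <stdint.h>
#include <stddef.h>

/* Fixed-point dot product over two int8 vectors with an int32 accumulator.
 * Each loop iteration is one multiply-accumulate (MAC); DL accelerators on
 * FPGAs map large arrays of such MACs onto DSP or tensor blocks. */
int32_t dot_product_int8(const int8_t *a, const int8_t *b, size_t n) {
    int32_t acc = 0;
    for (size_t i = 0; i < n; i++) {
        acc += (int32_t)a[i] * (int32_t)b[i];  /* one MAC per element */
    }
    return acc;
}
```

The low-precision operands and wider accumulator reflect the quantized arithmetic common in DL inference, which is one reason the surveyed architectures add narrow-precision multiplier support.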


