Update readme and license
hanchenye committed Oct 15, 2023
1 parent 0a5ef4c commit 99f1786
Showing 2 changed files with 4 additions and 2 deletions.
2 changes: 1 addition & 1 deletion LICENSE
@@ -1,7 +1,7 @@
==============================================================================
The LLVM Project is under the Apache License v2.0 with LLVM Exceptions:
As an incubator project with ambition to become part of the LLVM Project,
-ScaleHLS is under the same license.
+ScaleHLS and HIDA are under the same license.
==============================================================================
Apache License
Version 2.0, January 2004
4 changes: 3 additions & 1 deletion README.md
@@ -4,7 +4,9 @@ ScaleHLS is a High-level Synthesis (HLS) framework on [MLIR](https://mlir.llvm.o

By using the MLIR framework, which can be better tuned to particular algorithms at different representation levels, ScaleHLS is more scalable and customizable towards various applications that come with intrinsic structural or functional hierarchies. ScaleHLS represents HLS designs at multiple levels of abstraction and provides an HLS-dedicated analysis and transform library (in both C++ and Python) to solve optimization problems at the suitable representation levels. Using this library, we've developed a design space exploration engine to generate optimized HLS designs automatically.

-For more details, please see our [ScaleHLS (HPCA'22)](https://doi.org/10.1109/HPCA53966.2022.00060) and [HIDA (ASPLOS'24)](https://doi.org/10.1145/3617232.3624850) paper:
+Working with a set of neural networks modeled in PyTorch, ScaleHLS improves performance by up to 3825.0x compared to baseline designs that are optimized only by Xilinx Vivado HLS, without any manual directive insertion or code rewriting. Furthermore, HIDA (ScaleHLS 2.0) achieves up to 8.54x higher throughput compared to ScaleHLS. Meanwhile, despite being fully automated and able to handle various applications, HIDA achieves 1.29x higher throughput over [DNNBuilder](https://github.com/IBM/AccDNN), the SOTA RTL-based neural network accelerator on FPGAs.
+
+For more details, please see our [ScaleHLS (HPCA'22)](https://arxiv.org/abs/2107.11673) and [HIDA (ASPLOS'24)](https://doi.org/10.1145/3617232.3624850) papers:
```bibtex
@inproceedings{ye2022scalehls,
title={ScaleHLS: A New Scalable High-Level Synthesis Framework on Multi-Level Intermediate Representation},
