Commit 496846d

User API and documentation.
1 parent 00f82a3 commit 496846d

File tree

2 files changed: +21 -5 lines changed


docs/api/configuration.rst

Lines changed: 18 additions & 0 deletions
@@ -101,6 +101,20 @@ Finally, one then uses the configuration to create an hls model:
         backend='Vitis'
     )
 
+To target a oneAPI Board Support Package (BSP)-enabled FPGA for offload acceleration, specify the ``part`` argument to be the path to your BSP and the BSP variant. Then, set ``use_oneapi_bsp=True``.
+
+.. code-block:: python
+
+    hls_model = hls4ml.converters.convert_from_keras_model(
+        model,
+        hls_config=config,
+        output_dir="my_project_dir",
+        io_type="io_parallel",
+        backend="oneAPI",
+        part="/path/to/my/bsp:bsp_variant",
+        use_oneapi_bsp=True,
+    )
+
 See :py:class:`~hls4ml.converters.convert_from_keras_model` for more information on the various options. Similar functions exist for ONNX and PyTorch.
 
 ----
@@ -132,6 +146,9 @@ It looks like this:
 ClockPeriod: 5
 IOType: io_parallel # options: io_parallel/io_stream
 
+# oneAPI offload acceleration flag.
+UseOneAPIBSP: True
+
 HLSConfig:
   Model:
     Precision: fixed<16,6>
@@ -156,6 +173,7 @@ The backend-specific section of the configuration depends on the backend. You ca
 For Vivado backend the options are:
 
 * **Part**\ : the particular FPGA part number that you are considering, here it's a Xilinx Virtex UltraScale+ VU13P FPGA
+* **UseOneAPIBSP**\ : flag to enable offload acceleration with an Altera FPGA via a oneAPI Board Support Package; the BSP path and variant are given through the part setting. This is only needed if you are using the oneAPI backend.
 * **ClockPeriod**\ : the clock period, in ns, at which your algorithm runs
 Then you have some optimization parameters for how your algorithm runs:
 * **IOType**\ : your options are ``io_parallel`` or ``io_stream`` which defines the type of data structure used for inputs, intermediate activations between layers, and outputs. For ``io_parallel``, arrays are used that, in principle, can be fully unrolled and are typically implemented in RAMs. For ``io_stream``, HLS streams are used, which are a more efficient/scalable mechanism to represent data that are produced and consumed in a sequential manner. Typically, HLS streams are implemented with FIFOs instead of RAMs. For more information see `here <https://docs.xilinx.com/r/en-US/ug1399-vitis-hls/pragma-HLS-stream>`__.
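The documentation above packs both the BSP path and the BSP variant into a single ``part`` string separated by a colon. As a minimal sketch of that convention (the helper below is hypothetical and not part of hls4ml), the string can be split at the last colon so that paths containing colons elsewhere would still parse:

```python
# Hypothetical helper, not part of hls4ml: splits the documented
# "<bsp_path>:<bsp_variant>" part string into its two components.
def split_bsp_part(part: str) -> tuple[str, str]:
    """Split a 'path/to/bsp:variant' string into (bsp_path, bsp_variant)."""
    bsp_path, sep, variant = part.rpartition(':')
    if not sep:
        raise ValueError(f"expected '<bsp_path>:<bsp_variant>', got {part!r}")
    return bsp_path, variant

# Mirrors the example value from the documentation diff above.
print(split_bsp_part("/path/to/my/bsp:bsp_variant"))
```

Using ``rpartition`` (rather than ``split``) anchors on the last colon, which is the safer assumption if the BSP path itself ever contains one.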

hls4ml/backends/oneapi/oneapi_backend.py

Lines changed: 3 additions & 5 deletions
@@ -129,7 +129,8 @@ def get_default_flow(self):
     def get_writer_flow(self):
         return self._writer_flow
 
-    def create_initial_config(self, part='Arria10', clock_period=5, io_type='io_parallel', write_tar=False, **_):
+    def create_initial_config(
+        self, part='Arria10', clock_period=5, io_type='io_parallel', write_tar=False, use_oneapi_bsp=False, **_):
         """Create initial configuration of the oneAPI backend.
 
         Args:
@@ -153,10 +154,7 @@ def create_initial_config(self, part='Arria10', clock_period=5, io_type='io_para
             # TODO: add namespace
             'WriteTar': write_tar,
         }
-        # Target oneAPI Board Support Package (BSP).
-        if "use_oneapi_bsp" in _:
-            config['UseOneAPIBSP'] = _["use_oneapi_bsp"]
-
+        config['UseOneAPIBSP'] = use_oneapi_bsp
         return config
 
     def compile(self, model):
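The refactor above promotes ``use_oneapi_bsp`` from an ad-hoc lookup in the ``**_`` catch-all to an explicit keyword with a default, so the key is always present in the returned config. A standalone sketch of the same pattern (outside hls4ml, with a trimmed-down dict; names other than the keyword are illustrative):

```python
# Standalone sketch of the refactored signature: use_oneapi_bsp is an
# explicit keyword with a default instead of being fished out of **_,
# so 'UseOneAPIBSP' is set unconditionally in the returned config.
def create_initial_config(part='Arria10', clock_period=5,
                          io_type='io_parallel', write_tar=False,
                          use_oneapi_bsp=False, **_):
    return {
        'Part': part,
        'ClockPeriod': clock_period,
        'IOType': io_type,
        'WriteTar': write_tar,
        'UseOneAPIBSP': use_oneapi_bsp,
    }

# Default: BSP offload disabled, but the key still exists.
print(create_initial_config()['UseOneAPIBSP'])
# Explicit opt-in, as in the documented Keras-conversion example.
print(create_initial_config(part='/path/to/my/bsp:bsp_variant',
                            use_oneapi_bsp=True)['UseOneAPIBSP'])
```

With the old ``**_``-based version, omitting the argument left ``'UseOneAPIBSP'`` out of the config entirely; the explicit default makes downstream reads of the key safe.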
