Zephyr API Documentation
2.7.0-rc2
A Scalable Open Source RTOS
| Data Structures | |
|---|---|
| struct | gna_config |
| struct | gna_model_header |
| struct | gna_model_info |
| struct | gna_inference_req |
| struct | gna_inference_stats |
| struct | gna_inference_resp |

| Enumerations | |
|---|---|
| enum | gna_result { GNA_RESULT_INFERENCE_COMPLETE, GNA_RESULT_SATURATION_OCCURRED, GNA_RESULT_OUTPUT_BUFFER_FULL_ERROR, GNA_RESULT_PARAM_OUT_OF_RANGE_ERROR, GNA_RESULT_GENERIC_ERROR } |

| Functions | |
|---|---|
| static int | gna_configure (const struct device *dev, struct gna_config *cfg) |
| | Configure the GNA device. |
| static int | gna_register_model (const struct device *dev, struct gna_model_info *model, void **model_handle) |
| | Register a neural network model. |
| static int | gna_deregister_model (const struct device *dev, void *model) |
| | De-register a previously registered neural network model. |
| static int | gna_infer (const struct device *dev, struct gna_inference_req *req, gna_callback callback) |
| | Perform inference on a model with input vectors. |
This file contains the driver APIs for Intel's Gaussian Mixture Model and Neural Network Accelerator (GNA).
| enum gna_result |
#include <include/drivers/gna.h>
Result of an inference operation.
| Enumerator | |
|---|---|
| GNA_RESULT_INFERENCE_COMPLETE | |
| GNA_RESULT_SATURATION_OCCURRED | |
| GNA_RESULT_OUTPUT_BUFFER_FULL_ERROR | |
| GNA_RESULT_PARAM_OUT_OF_RANGE_ERROR | |
| GNA_RESULT_GENERIC_ERROR | |
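The sketch below shows one way an application might translate these result codes into log-friendly strings. The helper gna_result_str() is illustrative only and not part of the driver API; the application-level include path is assumed to be <drivers/gna.h>.

```c
#include <drivers/gna.h>

/* Illustrative helper (not part of the GNA API): map an inference
 * result code to a printable string for logging.
 */
static const char *gna_result_str(enum gna_result res)
{
	switch (res) {
	case GNA_RESULT_INFERENCE_COMPLETE:
		return "inference complete";
	case GNA_RESULT_SATURATION_OCCURRED:
		return "saturation occurred";
	case GNA_RESULT_OUTPUT_BUFFER_FULL_ERROR:
		return "output buffer full";
	case GNA_RESULT_PARAM_OUT_OF_RANGE_ERROR:
		return "parameter out of range";
	case GNA_RESULT_GENERIC_ERROR:
		return "generic error";
	default:
		return "unknown result";
	}
}
```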
| static int gna_configure (const struct device *dev, struct gna_config *cfg) | inline static |
#include <include/drivers/gna.h>
Configure the GNA device.
Configure the GNA device. The GNA device must be configured before registering a model or performing inference.
| Parameters | |
|---|---|
| dev | Pointer to the device structure for the driver instance. |
| cfg | Device configuration information. |

| Return values | |
|---|---|
| 0 | If the configuration is successful. |
| <0 | A negative error code in case of a failure. |
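A minimal sketch of configuring the device at startup. It assumes a device binding name of "GNA" and leaves the fields of struct gna_config unpopulated, since they are device specific; gna_setup() is an illustrative name, not part of the API.

```c
#include <zephyr.h>
#include <device.h>
#include <drivers/gna.h>
#include <sys/printk.h>

void gna_setup(void)
{
	/* "GNA" is an assumed binding name; use the name defined by your
	 * board/SoC configuration.
	 */
	const struct device *gna_dev = device_get_binding("GNA");

	/* Fields of struct gna_config are device specific and left
	 * unpopulated in this sketch.
	 */
	struct gna_config cfg = { 0 };

	if (gna_dev == NULL) {
		printk("GNA device not found\n");
		return;
	}

	/* The device must be configured before registering a model or
	 * performing inference.
	 */
	if (gna_configure(gna_dev, &cfg) != 0) {
		printk("GNA configuration failed\n");
	}
}
```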
| static int gna_deregister_model (const struct device *dev, void *model) | inline static |
#include <include/drivers/gna.h>
De-register a previously registered neural network model.
De-register a previously registered neural network model from the GNA device. De-registration may be done to free up memory for registering another model. Once de-registered, the model can no longer be used to perform inference.
| Parameters | |
|---|---|
| dev | Pointer to the device structure for the driver instance. |
| model | Model handle output by the gna_register_model API. |

| Return values | |
|---|---|
| 0 | If de-registration of the model is successful. |
| <0 | A negative error code in case of a failure. |
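A short sketch of releasing a model to free memory for another one. release_model() is an illustrative wrapper, and model_handle is assumed to come from an earlier gna_register_model() call.

```c
#include <device.h>
#include <drivers/gna.h>
#include <sys/printk.h>

/* Illustrative wrapper: release a model previously registered with
 * gna_register_model(), e.g. to free memory for another model.
 */
static int release_model(const struct device *gna_dev, void *model_handle)
{
	int ret = gna_deregister_model(gna_dev, model_handle);

	if (ret != 0) {
		printk("GNA model de-registration failed (%d)\n", ret);
		return ret;
	}

	/* On success, model_handle must no longer be used for inference. */
	return 0;
}
```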
| static int gna_infer (const struct device *dev, struct gna_inference_req *req, gna_callback callback) | inline static |
#include <include/drivers/gna.h>
Perform inference on a model with input vectors.
Make an inference request on a previously registered model with a set of input data vectors. A callback is provided for notification of inference completion.
| Parameters | |
|---|---|
| dev | Pointer to the device structure for the driver instance. |
| req | Information required to perform inference on a neural network. |
| callback | A callback function to notify inference completion. |

| Return values | |
|---|---|
| 0 | If the request is accepted. |
| <0 | A negative error code in case of a failure. |
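The sketch below submits an asynchronous inference request. It assumes the callback receives a pointer to struct gna_inference_resp whose result field holds an enum gna_result, and that req already references a registered model and its input/output buffers; run_inference() and infer_done() are illustrative names.

```c
#include <device.h>
#include <drivers/gna.h>
#include <sys/printk.h>

/* Assumed callback shape: the driver reports completion with a pointer
 * to struct gna_inference_resp, whose result field is an enum gna_result.
 */
static void infer_done(struct gna_inference_resp *resp)
{
	if (resp->result == GNA_RESULT_INFERENCE_COMPLETE) {
		printk("GNA inference complete\n");
	} else {
		printk("GNA inference failed (%d)\n", resp->result);
	}
}

/* req is assumed to already reference a registered model handle and the
 * input/output buffers the driver expects (see struct gna_inference_req).
 */
static int run_inference(const struct device *gna_dev,
			 struct gna_inference_req *req)
{
	/* A zero return only means the request was accepted; completion is
	 * reported asynchronously through the callback.
	 */
	return gna_infer(gna_dev, req, infer_done);
}
```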
| static int gna_register_model (const struct device *dev, struct gna_model_info *model, void **model_handle) | inline static |
#include <include/drivers/gna.h>
Register a neural network model.
Register a neural network model with the GNA device. A model needs to be registered before it can be used to perform inference.
| Parameters | |
|---|---|
| dev | Pointer to the device structure for the driver instance. |
| model | Information about the neural network model. |
| model_handle | Handle to the registered model if registration succeeds. |

| Return values | |
|---|---|
| 0 | If registration of the model is successful. |
| <0 | A negative error code in case of a failure. |
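A minimal sketch of registering a model before running inference. load_model() is an illustrative wrapper, and model_info is assumed to be populated with the network description per struct gna_model_info.

```c
#include <device.h>
#include <drivers/gna.h>
#include <sys/printk.h>

/* Illustrative wrapper: register a model described by model_info and
 * return its handle through model_handle on success.
 */
static int load_model(const struct device *gna_dev,
		      struct gna_model_info *model_info,
		      void **model_handle)
{
	int ret = gna_register_model(gna_dev, model_info, model_handle);

	if (ret != 0) {
		printk("GNA model registration failed (%d)\n", ret);
		return ret;
	}

	/* *model_handle now identifies the model for gna_infer() requests
	 * and for a later gna_deregister_model() call.
	 */
	return 0;
}
```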