# Workflows compiler
> [!IMPORTANT]
> We require a Roboflow Enterprise License to use this in production. See inference/enterprise/LICENSE.txt for details.
## Overview
We are developing a new feature that allows clients to define an ML workflow in a declarative form
(JSON configuration or WYSIWYG UI) and let `inference` take care of all required computations. That goal is
achieved thanks to the compilation and runtime engine created here.
The `workflows` module contains components capable of:
* parse the workflow specification (see: [schemas of configuration entities](./entities))
* validate the correctness of a workflow specification (see: [validator module](./complier/validator.py))
* construct the computational graph and validate its consistency prior to any computations (see: [graph parser](./complier/graph_parser.py))
* analyse runtime input parameters and link them with graph placeholders (see: [input validator](./complier/runtime_input_validator.py))
* execute the computation workflow (see: [execution engine](./complier/execution_engine.py))
![overview diagram](./assets/workflows_overview.jpg)
## How can `workflows` be used?
### Behind Roboflow hosted API
```python
from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(
    api_url="https://detect.roboflow.com",
    api_key="YOUR_API_KEY",
)
client.infer_from_workflow(
    specification={},  # workflow specification goes here
    images={},  # input images go here
    parameters={},  # input parameters other than images go here
)
```
### Behind `inference` HTTP API
Use `inference_cli` to start server
```bash
inference server start
```
```python
from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(
    api_url="http://127.0.0.1:9001",
    api_key="YOUR_API_KEY",
)
client.infer_from_workflow(
    specification={},  # workflow specification goes here
    images={},  # input images go here
    parameters={},  # input parameters other than images go here
)
```
### Integration with Python code
```python
from inference.enterprise.workflows.complier.core import compile_and_execute

IMAGE = ...

result = compile_and_execute(
    workflow_specification={},
    runtime_parameters={
        "image": IMAGE,
    },
    api_key="YOUR_API_KEY",
)
```
## How to create a workflow specification?
### Workflow specification basics
A workflow specification is defined via a JSON document in the following format:
```json
{
    "specification": {
        "version": "1.0",
        "inputs": [],
        "steps": [],
        "outputs": []
    }
}
```
In general, the specification has three main sections:
* `inputs` - the section where we define all parameters that can be passed by the `inference` user at execution time
* `steps` - the section where we define computation steps, their interconnections, and their connections to `inputs` and `outputs`
* `outputs` - the section where we define all fields that need to be rendered in the final result
### How can we refer between elements of the specification?
To create a graph of computations, we need to define links between steps, which requires a way to refer to
specific elements. By convention, the following references are allowed:
`${type_of_element}.{name_of_element}` and `${type_of_element}.{name_of_element}.{property}`.
Examples:
* `$inputs.image` - reference to an input called `image`
* `$steps.my_step.predictions` - reference to a step called `my_step` and its property `predictions`
Additionally, when defining **outputs**, it is allowed (since `v0.9.14`) to use a wildcard selector
(`${type_of_element}.{name_of_element}.*`) to extract all properties of a given step.
In the code, we usually call references **selectors**.
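To make the selector grammar concrete, here is a small, hypothetical Python sketch of how such references could be parsed. The names `SELECTOR_PATTERN` and `parse_selector` are illustrative only, not part of the `workflows` API:

```python
import re

# Illustrative pattern for the selector grammar described above;
# the real compiler's parsing logic may differ.
SELECTOR_PATTERN = re.compile(
    r"^\$(?P<kind>inputs|steps)\.(?P<name>[\w-]+)(?:\.(?P<prop>[\w*]+))?$"
)


def parse_selector(selector: str) -> dict:
    """Split a selector like '$steps.my_step.predictions' into its parts."""
    match = SELECTOR_PATTERN.match(selector)
    if match is None:
        raise ValueError(f"Invalid selector: {selector}")
    return match.groupdict()


print(parse_selector("$inputs.image"))
# {'kind': 'inputs', 'name': 'image', 'prop': None}
print(parse_selector("$steps.my_step.predictions"))
# {'kind': 'steps', 'name': 'my_step', 'prop': 'predictions'}
```

Note that the optional `prop` group also accepts `*`, so the wildcard selector form parses with the same pattern.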
### How can we define `inputs`?
At the moment, the compiler supports two types of inputs: `InferenceParameter` and `InferenceImage`.
#### `InferenceImage`
This input is reserved to represent an image or a list of images. Definition format:
```json
{"type": "InferenceImage", "name": "my_image"}
```
When creating an `InferenceImage`, you do not point to a specific image - you just create a placeholder that will be linked
with other elements of the graph. This placeholder will be substituted with an actual image when you run the workflow
graph and provide an input parameter called `my_image`, which can be an `np.ndarray` or any other format that `inference`
supports, like:
```json
{
    "type": "url",
    "value": "https://here.is/my/image.jpg"
}
```
#### `InferenceParameter`
Similar to `InferenceImage`, `InferenceParameter` creates a placeholder for a parameter that can be used at runtime
to alter the execution of the workflow graph.
```json
{"type": "InferenceParameter", "name": "confidence_threshold", "default_value": 0.5}
```
`InferenceParameters` may optionally be defined with default values that will be used if no actual parameter
of the given name is present in the user-defined input when executing the workflow graph. The type of a parameter is not
explicitly declared, but it will be checked at runtime, prior to execution, based on the types of parameters that
steps using this parameter can accept.
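As a sketch of the defaulting behaviour described above (assumed semantics, not the compiler's actual code; `fill_default_parameters` is a hypothetical helper):

```python
def fill_default_parameters(inputs_spec: list, runtime_parameters: dict) -> dict:
    """Apply `default_value` for each InferenceParameter the caller omitted.

    Hypothetical illustration of the defaulting semantics described above.
    """
    result = dict(runtime_parameters)
    for input_definition in inputs_spec:
        if input_definition.get("type") != "InferenceParameter":
            continue
        name = input_definition["name"]
        if name not in result and "default_value" in input_definition:
            result[name] = input_definition["default_value"]
    return result


spec = [{"type": "InferenceParameter", "name": "confidence_threshold", "default_value": 0.5}]
print(fill_default_parameters(spec, {}))                             # {'confidence_threshold': 0.5}
print(fill_default_parameters(spec, {"confidence_threshold": 0.7}))  # {'confidence_threshold': 0.7}
```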
### How can we define `steps`?
The compiler supports multiple types of steps (described later), but let's see how to define a simple one,
responsible for making predictions with an object-detection model:
```json
{
    "type": "ObjectDetectionModel",
    "name": "my_object_detection",
    "image": "$inputs.image",
    "model_id": "yolov8n-640"
}
```
You can see that the step must have a type (that's how we link JSON document elements to code definitions)
and a name (unique among all steps). The other required parameters here are `image` and `model_id`.
In the case of `image`, we use a reference to the input - that's how we create a link between a parameter provided
at runtime and a computational step. Step parameters can also be provided as predefined values (like `model_id` in this
case). The majority of parameters can be defined both as references to inputs (or outputs of other steps) and as
predefined values.
### How can we define `outputs`?
The definition of a single output looks like this:
```json
{"type": "JsonField", "name": "predictions", "selector": "$steps.step_1.predictions"}
```
It defines a single output dictionary key (named `predictions`) that will be created. The `selector` field creates a
link between a step output and the result. In this case, the selector points at `step_1` and its property `predictions`.
Additionally, an optional parameter `coordinates_system` can be defined with one of two values (`"own"`, `"parent"`).
This parameter defaults to `parent` and describes the coordinate system of detections that should be used.
This setting only matters for more complicated graphs (where we crop based on predicted detections and
later make further detections on each and every crop).
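To illustrate how such `JsonField` definitions map step results into the final response, here is a hypothetical, simplified sketch (it ignores wildcard selectors and `coordinates_system`; `render_outputs` is not a real function of the module):

```python
def render_outputs(outputs_spec: list, step_results: dict) -> dict:
    """Resolve each JsonField selector against collected step results.

    Hypothetical illustration; the execution engine's real logic is richer.
    """
    rendered = {}
    for output in outputs_spec:
        # '$steps.step_1.predictions' -> ('steps', 'step_1', 'predictions')
        _, step_name, prop = output["selector"].lstrip("$").split(".")
        rendered[output["name"]] = step_results[step_name][prop]
    return rendered


outputs_spec = [
    {"type": "JsonField", "name": "predictions", "selector": "$steps.step_1.predictions"}
]
step_results = {"step_1": {"predictions": [{"class": "car", "confidence": 0.9}]}}
print(render_outputs(outputs_spec, step_results))
# {'predictions': [{'class': 'car', 'confidence': 0.9}]}
```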
### Example
In the following example, we create a pipeline that first runs classification. Based on the result
(the top class), `step_2` decides which object detection model to use (if the model predicts a car, `step_3` will be
executed, otherwise `step_4`).
The result is built from the outputs of both models. Exactly one of the fields `step_3_predictions` and
`step_4_predictions` will always be empty due to conditional execution.
![example pipeline](./assets/example_pipeline.jpg)
```json
{
    "specification": {
        "version": "1.0",
        "inputs": [
            {"type": "InferenceImage", "name": "image"}
        ],
        "steps": [
            {
                "type": "ClassificationModel",
                "name": "step_1",
                "image": "$inputs.image",
                "model_id": "vehicle-classification-eapcd/2",
                "confidence": 0.4
            },
            {
                "type": "Condition",
                "name": "step_2",
                "left": "$steps.step_1.top",
                "operator": "equal",
                "right": "Car",
                "step_if_true": "$steps.step_3",
                "step_if_false": "$steps.step_4"
            },
            {
                "type": "ObjectDetectionModel",
                "name": "step_3",
                "image": "$inputs.image",
                "model_id": "yolov8n-640"
            },
            {
                "type": "ObjectDetectionModel",
                "name": "step_4",
                "image": "$inputs.image",
                "model_id": "yolov8n-1280"
            }
        ],
        "outputs": [
            {"type": "JsonField", "name": "top_class", "selector": "$steps.step_1.top"},
            {"type": "JsonField", "name": "step_3_predictions", "selector": "$steps.step_3.predictions"},
            {"type": "JsonField", "name": "step_4_predictions", "selector": "$steps.step_4.predictions"}
        ]
    }
}
```