This folder contains miscellaneous utilities used by the dataset code. We will describe a couple of the important classes in this file.
## Thread Management
This picture summarizes a few important classes that we will cover in the next few sections.
![Thread management](https://images.gitee.com/uploads/images/2020/0601/220111_9b07c8fa_7342120.jpeg "task_manager.JPG")
## Task
A Task object corresponds to an instance of std::future returned from std::async. In general, a user will not create a Task object directly. Most work goes through TaskManager's TaskGroup interface, which we will cover later in this document. Here are some important members and functions of the Task class.
```cpp
std::function<Status()> fnc_obj_;
```
This is the entry function executed when the thread is spawned. The function takes no input and returns a Status object. The returned Status object is saved in this member:
```cpp
Status rc_;
```
To retrieve the result of executing the entry function, call the following function:
```cpp
Status Task::GetTaskErrorIfAny();
```
Here is rough pseudo code for the lifetime of a Task. Some extra work needed to spawn the thread is omitted for simplicity. As mentioned previously, a user never spawns a thread directly using the Task class without a helper.
```cpp
1 Task tk = Task("A name for this thread", []() -> Status {
2 return Status::OK();
3 });
4 RETURN_IF_NOT_OK(tk.Run());
5 RETURN_IF_NOT_OK(tk.Join());
6 RETURN_IF_NOT_OK(tk.GetTaskErrorIfAny());
```
In the above example, lines 1 to 3 use the Task constructor to prepare the thread we are going to create and define what it will run. We also assign a name to this thread; the name is purely an eye catcher. The second parameter is the actual job for this thread to run.
<br/>Line 4 spawns the thread. In this example, the thread executes the lambda function, which does nothing but return an OK Status object.
<br/>Line 5 waits for the thread to complete.
<br/>Line 6 retrieves the result of running the thread, which should be the OK Status object.
Another purpose of the Task object is to wrap the entry function and capture any exception that is thrown while running the entry function but not caught within it.
```cpp
try {
rc_ = fnc_obj_();
} catch (const std::bad_alloc &e) {
rc_ = Status(StatusCode::kOutOfMemory, __LINE__, __FILE__, e.what());
} catch (const std::exception &e) {
rc_ = Status(StatusCode::kUnexpectedError, __LINE__, __FILE__, e.what());
}
```
Note that
```cpp
Status Task::Run();
```
does not return the Status of running the entry function fnc_obj_. It merely indicates whether the spawn was successful, and it returns immediately.
Another thing to point out is that Task::Run() is not designed to re-run a thread repeatedly, say, after it has returned. The result is undefined if a Task object is re-run.
For the function
```cpp
Status Task::Join(WaitFlag wf = WaitFlag::kBlocking);
```
where
```cpp
enum class WaitFlag : int { kBlocking, kNonBlocking };
```
Like Run(), it does not return the Status of running the entry function fnc_obj_, though it can return some other unexpected error encountered while waiting for the thread to return.
This function blocks (kBlocking) by default until the spawned thread returns.
As mentioned previously, use the function GetTaskErrorIfAny() to fetch the result of running the entry function fnc_obj_.
The non-blocking version (kNonBlocking) of Join allows us to force the thread to return if it times out.
```cpp
while (thrd_.wait_for(std::chrono::seconds(1)) != std::future_status::ready) {
// Do something if the thread is blocked on a conditional variable
}
```
The main use of this form of Join() is after we have interrupted the thread.
A design alternative is to use
```cpp
std::future<Status>
```
to spawn the thread asynchronously, retrieving the result with std::future::get(). But get() can only be called once, so it is more convenient to save the returned result in the rc_ member, where it can be retrieved any number of times. As we shall see later, the value of rc_ is propagated to higher-level classes such as TaskGroup and the master thread.
Currently, this is how the thread is defined in the Task class
```cpp
std::future<void> thrd_;
```
and spawned by this line of code.
```cpp
thrd_ = std::async(std::launch::async, std::ref(*this));
```
Every thread can access its own Task object using the FindMe() function.
```cpp
Task * TaskManager::FindMe();
```
There are other attributes of Task such as interrupt which we will cover later in this document.
## TaskGroup
The first helper for managing Task objects is TaskGroup. Technically speaking, a TaskGroup is a collection of related Tasks. As of this writing, every Task must belong to a TaskGroup. We spawn a thread using the following function:
```cpp
Status TaskGroup::CreateAsyncTask(const std::string &my_name, const std::function<Status()> &f, Task **pTask = nullptr);
```
The created Task object is added to the TaskGroup object. In many cases, users do not need a reference to the newly created Task object, but CreateAsyncTask can return one if requested.
There is no way to add a Task object to a TaskGroup other than by calling TaskGroup::CreateAsyncTask. As a result, no Task object can belong to multiple TaskGroups by design. Every Task object has a back pointer to the TaskGroup it belongs to:
```cpp
TaskGroup *Task::MyTaskGroup();
```
Task objects in the same TaskGroup form a linked list, with newly created Task objects appended to the end of the list.
Globally, we support multiple TaskGroups running concurrently. TaskManager (discussed in the next section) chains all Task objects from all TaskGroups in a single LRU linked list.
### HandShaking
As of this writing, the following handshaking logic is required. Suppose a thread T1 creates another thread, say T2, by calling TaskGroup::CreateAsyncTask. T1 blocks on a WaitPost area until T2 posts back, signalling that T1 can resume.
```cpp
// Entry logic of T2
auto *myTask = TaskManager::FindMe();
myTask->Post();
```
If T2 is going to spawn more threads, say T3 and T4, it is *highly recommended* that T2 wait for T3 and T4 to post before it posts back to T1.
The purpose of the handshake is to provide a way for T2 to synchronize with T1 if necessary.
TaskGroup provides similar functions as Task but at a group level.
```cpp
void TaskGroup::interrupt_all() noexcept;
```
This interrupts all the threads currently running in the TaskGroup. The function returns immediately. We will cover the interrupt mechanism in more detail later in this document.
```cpp
Status TaskGroup::join_all(Task::WaitFlag wf = Task::WaitFlag::kBlocking);
```
This performs Task::Join() on all the threads in the group. It is a blocking call by default.
```cpp
Status TaskGroup::GetTaskErrorIfAny();
```
A TaskGroup does not save the Task::rc_ of every thread in the group; only the first error is saved. For example, if thread T1 reports error rc1 and later T2 reports error rc2, only rc1 is saved in the TaskGroup and rc2 is ignored. TaskGroup::GetTaskErrorIfAny() returns rc1 in this case.
```cpp
int size() const noexcept;
```
This returns the number of Task objects in the TaskGroup.
## TaskManager
TaskManager is a singleton, meaning there is only one instance of this class. It is created by another singleton, the Services object, which we will cover in a later section.
```cpp
TaskManager &TaskManager::GetInstance()
```
provides the method to access the singleton.
TaskManager manages all the TaskGroup objects and all the Task objects ever created.
```cpp
List<Task> lru_;
List<Task> free_lst_;
std::set<TaskGroup *> grp_list_;
```
As mentioned previously, all the Tasks in the same TaskGroup are linked in a list local to that TaskGroup. At the TaskManager level, all Task objects from all the TaskGroups are additionally linked in the lru_ list.