This folder contains miscellaneous utilities used by the dataset code. This document describes a few of the important classes.
## Thread Management
This picture summarizes a few important classes that we will cover in the next few sections.
![Thread management](https://images.gitee.com/uploads/images/2020/0601/220111_9b07c8fa_7342120.jpeg "task_manager.JPG")
## Task
A Task object corresponds to an instance of std::future returned by std::async. In general, a user will not create a Task object directly. Most work goes through TaskManager's TaskGroup interface, which we will cover later in this document. Here are some important members and functions of the Task class.
```cpp
std::function<Status()> fnc_obj_;
```
This is the entry function executed when the thread is spawned. It takes no input and returns a Status object, which is saved in the following member:
```cpp
Status rc_;
```
To retrieve the result of executing the entry function, call:
```cpp
Status Task::GetTaskErrorIfAny();
```
Here is rough pseudocode for the lifetime of a Task. Some extra work needed to spawn the thread is omitted for simplicity. As mentioned previously, a user never spawns a thread directly with the Task class; helpers are always used.
```cpp
1 Task tk = Task("A name for this thread", []() -> Status {
2 return Status::OK();
3 });
4 RETURN_IF_NOT_OK(tk.Run());
5 RETURN_IF_NOT_OK(tk.Join());
6 RETURN_IF_NOT_OK(tk.GetTaskErrorIfAny());
```
In the above example, lines 1 to 3 use the Task constructor to prepare the thread we are going to create and the job it will run. We also assign a name to the thread; the name is purely for identification. The second parameter is the actual work for the thread to run.
<br/>Line 4 spawns the thread. In this example, the thread executes the lambda function, which does nothing but return an OK Status object.
<br/>Line 5 waits for the thread to complete.
<br/>Line 6 retrieves the result of running the thread, which should be the OK Status object.
Another purpose of the Task object is to wrap the entry function and capture any exception that is thrown while running it but not caught inside it:
```cpp
try {
rc_ = fnc_obj_();
} catch (const std::bad_alloc &e) {
rc_ = Status(StatusCode::kOutOfMemory, __LINE__, __FILE__, e.what());
} catch (const std::exception &e) {
rc_ = Status(StatusCode::kUnexpectedError, __LINE__, __FILE__, e.what());
}
```
Note that
```cpp
Status Task::Run();
```
does not return the Status of running the entry function fnc_obj_. It merely indicates whether the spawn was successful, and it returns immediately.
Also note that Task::Run() is not designed to re-run a thread, say after it has returned. Re-running a Task object produces unexpected results.
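The single-shot nature of a Task mirrors the std::future underneath it. A minimal standalone sketch (plain std::async, not the actual Task class) showing why a spawned task cannot simply be re-run:

```cpp
#include <future>
#include <string>

// A std::future obtained from std::async is single-use: once the result is
// fetched, the future becomes invalid and cannot be "re-run".
bool FutureIsSingleUse() {
  std::future<std::string> fut = std::async(std::launch::async, [] {
    return std::string("OK");
  });
  std::string rc = fut.get();         // first (and only) retrieval succeeds
  return rc == "OK" && !fut.valid();  // the future is now invalid
}
```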
For the function
```cpp
Status Task::Join(WaitFlag wf = WaitFlag::kBlocking);
```
where
```cpp
enum class WaitFlag : int { kBlocking, kNonBlocking };
```
Like Run(), this function does not return the Status of running the entry function fnc_obj_; it can, however, return some other unexpected error encountered while waiting for the thread to return.
This function blocks (kBlocking) by default until the spawned thread returns.
As mentioned previously, use the function GetTaskErrorIfAny() to fetch the result of running the entry function fnc_obj_.
The non-blocking form of Join (kNonBlocking) allows us to nudge a thread that has not returned within the timeout:
```cpp
while (thrd_.wait_for(std::chrono::seconds(1)) != std::future_status::ready) {
// Do something if the thread is blocked on a condition variable
}
```
The main use of this form of Join() is after we have interrupted the thread.
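The interrupt-then-join pattern can be sketched with standard C++ primitives. This is an illustration only: an atomic flag stands in for the real interrupt service, and plain std::async stands in for Task.

```cpp
#include <atomic>
#include <chrono>
#include <future>
#include <thread>

// Stand-in for the Task interrupt flag (the real code uses its own
// interrupt service; an atomic<bool> is used here for illustration).
std::atomic<bool> interrupted{false};

// Spawn a worker that loops until interrupted, then interrupt it and wait
// in a polling loop, as the kNonBlocking form of Join does.
int InterruptThenJoin() {
  std::future<int> thrd = std::async(std::launch::async, [] {
    int iterations = 0;
    while (!interrupted.load()) {
      std::this_thread::sleep_for(std::chrono::milliseconds(1));
      ++iterations;
    }
    return iterations;
  });
  interrupted.store(true);  // corresponds to interrupting the thread
  // Poll instead of blocking forever; each wakeup is a chance to nudge
  // a thread stuck on a condition variable.
  while (thrd.wait_for(std::chrono::milliseconds(100)) !=
         std::future_status::ready) {
    // e.g. signal any condition variable the thread may be blocked on
  }
  return thrd.get();
}
```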
A design alternative is to use
```cpp
std::future<Status>
```
to spawn the thread asynchronously and retrieve the result with std::future::get(). But get() can only be called once, so it is more convenient to save the returned result in the rc_ member, which can be read any number of times. As we shall see later, the value of rc_ is propagated to higher-level classes such as TaskGroup and the master thread.
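The caching idea can be sketched in a few lines. CachedResult below is a hypothetical stand-in for Task (using int in place of Status): Join() calls get() exactly once and stores the result, after which GetTaskErrorIfAny() can be called freely.

```cpp
#include <future>

// std::future::get() may be called only once; afterwards the future is
// invalid. Caching the result (as Task does in rc_) allows repeated reads.
class CachedResult {
 public:
  explicit CachedResult(std::future<int> f) : thrd_(std::move(f)) {}
  void Join() { rc_ = thrd_.get(); }             // fetch once, cache in rc_
  int GetTaskErrorIfAny() const { return rc_; }  // unlimited retrievals
 private:
  std::future<int> thrd_;
  int rc_ = 0;
};

int CachedReads() {
  CachedResult task(std::async(std::launch::async, [] { return 42; }));
  task.Join();
  // The cached value can be read any number of times.
  return task.GetTaskErrorIfAny() + task.GetTaskErrorIfAny();
}
```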
Currently, the thread is defined in the Task class as
```cpp
std::future<void> thrd_;
```
and spawned by this line of code.
```cpp
thrd_ = std::async(std::launch::async, std::ref(*this));
```
Every thread can access its own Task object using the FindMe() function.
```cpp
Task * TaskManager::FindMe();
```
There are other attributes of Task, such as the interrupt, which we will cover later in this document.
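One common way to implement a FindMe()-style lookup is a thread-local pointer set on thread entry. This is a hypothetical sketch of the idea (TaskInfo, this_task, and SpawnAndFind are illustrative names, not the real implementation):

```cpp
#include <future>
#include <string>

struct TaskInfo {
  std::string name;
};

// Each spawned thread records a pointer to its own task object in a
// thread_local variable; FindMe() then amounts to reading that variable.
thread_local TaskInfo *this_task = nullptr;

TaskInfo *FindMe() { return this_task; }

std::string SpawnAndFind() {
  TaskInfo info{"worker-1"};
  auto fut = std::async(std::launch::async, [&info] {
    this_task = &info;      // set on thread entry
    return FindMe()->name;  // the thread can now locate its own task
  });
  return fut.get();
}
```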
## TaskGroup
The first helper in managing Task objects is TaskGroup. Technically speaking, a TaskGroup is a collection of related Tasks. As of this writing, every Task must belong to a TaskGroup. We spawn a thread using the following function:
```cpp
Status TaskGroup::CreateAsyncTask(const std::string &my_name, const std::function<Status()> &f, Task **pTask = nullptr);
```
The created Task object is added to the TaskGroup object. In many cases, users do not need a reference to the newly created Task object, but CreateAsyncTask can return one if requested.
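A stripped-down sketch of the CreateAsyncTask idea: each call spawns a thread via std::async and records it in the group. MiniTaskGroup is illustrative only; the real TaskGroup also handles handshaking, interrupts, and Status propagation.

```cpp
#include <cstddef>
#include <functional>
#include <future>
#include <string>
#include <vector>

// Minimal task-group sketch: spawn via std::async, keep every task in the
// group so they can all be joined later.
class MiniTaskGroup {
 public:
  void CreateAsyncTask(const std::string &name, std::function<int()> f) {
    names_.push_back(name);
    tasks_.push_back(std::async(std::launch::async, std::move(f)));
  }
  int JoinAll() {
    int sum = 0;
    for (auto &t : tasks_) sum += t.get();  // wait for every thread
    return sum;
  }
  std::size_t size() const { return tasks_.size(); }

 private:
  std::vector<std::string> names_;
  std::vector<std::future<int>> tasks_;
};

int RunGroup() {
  MiniTaskGroup grp;
  grp.CreateAsyncTask("t1", [] { return 1; });
  grp.CreateAsyncTask("t2", [] { return 2; });
  return grp.JoinAll();
}
```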
The only way to add a Task object to a TaskGroup is by calling TaskGroup::CreateAsyncTask; as a result, no Task object can belong to multiple TaskGroups by design. Every Task object has a back pointer to the TaskGroup it belongs to:
```cpp
TaskGroup *Task::MyTaskGroup();
```
Task objects in the same TaskGroup form a linked list, with each newly created Task object appended to the end.
Globally, we support multiple TaskGroups running concurrently. TaskManager (discussed in the next section) chains all Task objects from all TaskGroups in a single LRU linked list.
### HandShaking
As of this writing, the following handshaking logic is required. Suppose a thread T1 creates another thread, say T2, by calling TaskGroup::CreateAsyncTask. T1 blocks on a WaitPost area until T2 posts back, signalling that T1 can resume.
```cpp
// Entry logic of T2
auto *myTask = TaskManager::FindMe();
myTask->Post();
```
If T2 is going to spawn more threads, say T3 and T4, it is *highly recommended* that T2 wait for T3 and T4 to post before it posts back to T1.
The purpose of the handshake is to provide a way for T2 to synchronize with T1 if necessary.
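The handshake can be sketched with a std::promise standing in for the WaitPost area (the real code uses its own WaitPost primitive; this is only an analogy):

```cpp
#include <future>

// T1 spawns T2, then blocks until T2 posts back that it has initialized.
bool Handshake() {
  std::promise<void> post;  // stand-in for the WaitPost area
  std::future<void> wait = post.get_future();
  auto t2 = std::async(std::launch::async, [&post] {
    // ... T2 initialization; if T2 spawns T3/T4, it should wait for their
    // posts before posting back to T1 ...
    post.set_value();  // analogous to myTask->Post(): let T1 resume
    return true;
  });
  wait.wait();  // T1 blocks here until T2 posts
  return t2.get();
}
```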
TaskGroup provides functions similar to Task's, but at the group level.
```cpp
void TaskGroup::interrupt_all() noexcept;
```
This interrupts all the threads currently running in the TaskGroup. The function returns immediately. We will cover the interrupt mechanism in more detail later in this document.
```cpp
Status TaskGroup::join_all(Task::WaitFlag wf = Task::WaitFlag::kBlocking);
```
This performs Task::Join() on all the threads in the group. It is a blocking call by default.
```cpp
Status TaskGroup::GetTaskErrorIfAny();
```
A TaskGroup does not keep a record of Task::rc_ for every thread in the group; only the first error is saved. For example, if thread T1 reports error rc1 and T2 later reports error rc2, only rc1 is saved in the TaskGroup and rc2 is ignored. TaskGroup::GetTaskErrorIfAny() returns rc1 in this case.
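The first-error-wins policy can be sketched as follows. FirstError is a hypothetical stand-in (a string plays the role of Status): reports are serialized with a mutex and only the first non-empty error is kept.

```cpp
#include <mutex>
#include <string>

// Sketch of the first-error-wins policy: only the first error reported by
// any thread in the group is kept; later errors are ignored.
class FirstError {
 public:
  void Report(const std::string &rc) {
    std::lock_guard<std::mutex> lk(mux_);
    if (rc_.empty()) rc_ = rc;  // keep only the first error
  }
  std::string GetTaskErrorIfAny() const { return rc_; }

 private:
  mutable std::mutex mux_;
  std::string rc_;
};

std::string TwoErrors() {
  FirstError grp;
  grp.Report("rc1");  // first error (from T1) is saved
  grp.Report("rc2");  // second error (from T2) is ignored
  return grp.GetTaskErrorIfAny();
}
```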
```cpp
int size() const noexcept;
```
This returns the number of Tasks in the TaskGroup.
## TaskManager
TaskManager is a singleton; that is, there is only one instance of this class. It is created by another singleton, the Services object, which we will cover in a later section.
```cpp
TaskManager &TaskManager::GetInstance()
```
provides the method to access the singleton.
TaskManager manages all the TaskGroups and all the Task objects ever created.
```cpp
List<Task> lru_;
List<Task> free_lst_;
std::set<TaskGroup *> grp_list_;
```
As mentioned previously, all the Tasks in the same TaskGroup are linked in a list local to that TaskGroup. At the TaskManager level, all Task objects from all the TaskGroups are linked in the lru_ list.