# KubeRay APIServer
The KubeRay APIServer provides gRPC and HTTP APIs to manage KubeRay resources.
**Note**
The KubeRay APIServer is an optional component that provides a simplified
configuration layer for KubeRay resources. Some organizations use it internally
to back user interfaces for KubeRay resource management.
The KubeRay APIServer is community-managed and is not officially endorsed by the
Ray maintainers. At this time, the only officially supported methods for
managing KubeRay resources are:
- Direct management of KubeRay custom resources via kubectl, kustomize, and Kubernetes language clients.
- Helm charts.
KubeRay APIServer maintainer contacts (GitHub handles):
@Jeffwan @scarlet25151
## Installation
### Helm
Make sure the version of Helm is v3+. Currently, [existing CI tests](https://github.com/ray-project/kuberay/blob/master/.github/workflows/helm-lint.yaml) are based on Helm v3.4.1 and v3.9.4.
```sh
helm version
```
### Install KubeRay APIServer
* Install a stable version via Helm repository (only supports KubeRay v0.4.0+)
```sh
helm repo add kuberay https://ray-project.github.io/kuberay-helm/
# Install KubeRay APIServer v0.4.0.
helm install kuberay-apiserver kuberay/kuberay-apiserver --version 0.4.0
# Check the KubeRay APIServer Pod in `default` namespace
kubectl get pods
# NAME                                 READY   STATUS    RESTARTS   AGE
# kuberay-apiserver-67b46b88bf-m7dzg   1/1     Running   0          6s
```
* Install the nightly version
```sh
# Step 1: Clone the KubeRay repository
git clone https://github.com/ray-project/kuberay.git
# Step 2: Move to `helm-chart/kuberay-apiserver`
cd kuberay/helm-chart/kuberay-apiserver
# Step 3: Install the KubeRay APIServer
helm install kuberay-apiserver .
```
### List the chart
To list the `kuberay-apiserver` release:
```sh
helm ls
# NAME               NAMESPACE  REVISION  UPDATED                                  STATUS    CHART                    APP VERSION
# kuberay-apiserver  default    1         2023-02-07 09:28:15.510869781 -0500 EST  deployed  kuberay-apiserver-0.4.0
```
### Uninstall the Chart
```sh
# Uninstall the `kuberay-apiserver` release
helm uninstall kuberay-apiserver
# The KubeRay APIServer Pod should be removed.
kubectl get pods
# No resources found in default namespace.
```
## Usage
After deployment, use `{{baseUrl}}` to access the APIServer:
- (default) For NodePort access, the service exposes the default HTTP port `31888`.
- For Ingress access, you will need to create your own Ingress resource.
Request parameters are described in detail in the [KubeRay swagger](https://github.com/ray-project/kuberay/tree/master/proto/swagger); here we only present some basic examples:
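As a sketch of the NodePort case (assuming the APIServer runs on a locally reachable node with the default Helm values), `{{baseUrl}}` can be set once and reused for the calls below:

```shell
# With the default NodePort service, the APIServer is reachable on any
# cluster node at port 31888. Adjust the host for your environment.
export BASEURL="http://localhost:31888"

# Any request to an unknown path returns a JSON error body, which is a
# quick way to confirm connectivity.
curl "${BASEURL}"
```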
### Setup end-to-end test
0. (Optional) You may use a local kind cluster or minikube. The following creates a kind cluster with the port mappings used in this guide:
```bash
cat <<EOF | kind create cluster --name ray-test --config -
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 30379
    hostPort: 6379
    listenAddress: "0.0.0.0"
    protocol: tcp
  - containerPort: 30265
    hostPort: 8265
    listenAddress: "0.0.0.0"
    protocol: tcp
  - containerPort: 30001
    hostPort: 10001
    listenAddress: "0.0.0.0"
    protocol: tcp
  - containerPort: 8000
    hostPort: 8000
    listenAddress: "0.0.0.0"
  - containerPort: 31888
    hostPort: 31888
    listenAddress: "0.0.0.0"
- role: worker
- role: worker
EOF
```
1. Deploy the KubeRay APIServer in the same cluster as the KubeRay operator
```bash
helm repo add kuberay https://ray-project.github.io/kuberay-helm/
helm -n ray-system install kuberay-apiserver kuberay/kuberay-apiserver --create-namespace
```
2. The APIServer exposes its service using `NodePort` by default. You can test access via your host and port; the default port is `31888`.
```
curl localhost:31888
{"code":5, "message":"Not Found"}
```
3. You can create `RayCluster`, `RayJob`, or `RayService` objects by calling the endpoints. The following is a simple example of creating a `RayService` object; follow [swagger support](https://ray-project.github.io/kuberay/components/apiserver/#swagger-support) for the complete API definitions.
```shell
curl -X POST 'localhost:31888/apis/v1alpha2/namespaces/ray-system/compute_templates' \
  --header 'Content-Type: application/json' \
  --data '{
    "name": "default-template",
    "namespace": "ray-system",
    "cpu": 2,
    "memory": 4
  }'
curl -X POST 'localhost:31888/apis/v1alpha2/namespaces/ray-system/services' \
  --header 'Content-Type: application/json' \
  --data '{
    "name": "user-test-1",
    "namespace": "ray-system",
    "user": "user",
    "serveDeploymentGraphSpec": {
      "importPath": "fruit.deployment_graph",
      "runtimeEnv": "working_dir: \"https://github.com/ray-project/test_dag/archive/c620251044717ace0a4c19d766d43c5099af8a77.zip\"\n",
      "serveConfigs": [
        {
          "deploymentName": "OrangeStand",
          "replicas": 1,
          "userConfig": "price: 2",
          "actorOptions": {
            "cpusPerActor": 0.1
          }
        },
        {
          "deploymentName": "PearStand",
          "replicas": 1,
          "userConfig": "price: 1",
          "actorOptions": {
            "cpusPerActor": 0.1
          }
        },
        {
          "deploymentName": "FruitMarket",
          "replicas": 1,
          "actorOptions": {
            "cpusPerActor": 0.1
          }
        },
        {
          "deploymentName": "DAGDriver",
          "replicas": 1,
          "routePrefix": "/",
          "actorOptions": {
            "cpusPerActor": 0.1
          }
        }
      ]
    },
    "clusterSpec": {
      "headGroupSpec": {
        "computeTemplate": "default-template",
        "image": "rayproject/ray:2.3.0",
        "serviceType": "NodePort",
        "rayStartParams": {
          "dashboard-host": "0.0.0.0",
          "metrics-export-port": "8080"
        },
        "volumes": []
      },
      "workerGroupSpec": [
        {
          "groupName": "small-wg",
          "computeTemplate": "default-template",
          "image": "rayproject/ray:2.3.0",
          "replicas": 1,
          "minReplicas": 0,
          "maxReplicas": 5,
          "rayStartParams": {
            "node-ip-address": "$MY_POD_IP"
          }
        }
      ]
    }
  }'
```
The Ray resource will then be created in your Kubernetes cluster.
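If the requests succeed, the APIServer translates them into Kubernetes custom resources, which you can inspect directly (a sketch; the resource names come from the payloads above):

```shell
# The RayService created through the APIServer appears as a custom resource.
kubectl -n ray-system get rayservices

# The RayService controller in turn creates a RayCluster with head and
# worker Pods; watch them come up.
kubectl -n ray-system get raycluster
kubectl -n ray-system get pods
```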
## Full definition of payload
### Compute Template
To simplify resource configuration, the Pod template resources are abstracted
into a `compute template`. You define resources once in a `compute template`
and then reference the appropriate template for the `head` and `workergroup`
specs when creating the actual `RayCluster`, `RayJob`, or `RayService` objects.
#### Create compute templates in a given namespace
```
POST {{baseUrl}}/apis/v1alpha2/namespaces/<namespace>/compute_templates
```
```json
{
  "name": "default-template",
  "namespace": "<namespace>",
  "cpu": 2,
  "memory": 4,
  "gpu": 1,
  "gpuAccelerator": "Tesla-V100"
}
```
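Combined with the endpoint above, the request can be issued with `curl` as follows (a sketch assuming the default NodePort address `localhost:31888` and a `ray-system` namespace):

```shell
curl -X POST 'localhost:31888/apis/v1alpha2/namespaces/ray-system/compute_templates' \
  --header 'Content-Type: application/json' \
  --data '{
    "name": "default-template",
    "namespace": "ray-system",
    "cpu": 2,
    "memory": 4,
    "gpu": 1,
    "gpuAccelerator": "Tesla-V100"
  }'
```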
#### List all compute templates in a given namespace
```
GET {{baseUrl}}/apis/v1alpha2/namespaces/<namespace>/compute_templates
```
```json
{
  "compute_templates": [
    {
      "name": "default-template",
      "namespace": "<namespace>",
      "cpu": 2,
      "memory": 4,
      "gpu": 1,
      "gpu_accelerator": "Tesla-V100"
    }
  ]
}
```
#### List all compute templates in all namespaces
```
GET {{baseUrl}}/apis/v1alpha2/compute_templates
```
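For example (assuming the default NodePort address), listing templates across all namespaces:

```shell
# No namespace segment in the path: returns templates from every namespace.
curl 'localhost:31888/apis/v1alpha2/compute_templates'
```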
#### Get compute template by name
```
GET {{baseUrl}}/apis/v1alpha2/namespaces/<namespace>/compute_templates/<compute_template_name>
```
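As a sketch (again assuming the default NodePort address), fetching a single template by name:

```shell
# Returns the single template, or a JSON error if it does not exist.
curl 'localhost:31888/apis/v1alpha2/namespaces/ray-system/compute_templates/default-template'
```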
#### Delete compute template by name
```
DELETE {{baseUrl}}/apis/v1alpha2/namespaces/<namespace>/compute_templates/<compute_template_name>
```