# Prerequisites
- [Packer](https://developer.hashicorp.com/packer/downloads)
- [Terraform](https://developer.hashicorp.com/terraform/downloads)
# Build AMI
Set the necessary variables by creating a file `ds-ami.pkrvars.hcl` and populating it as follows, adjusting the values to your environment.
```shell
cat <<EOF > ds-ami.pkrvars.hcl
aws_access_key = ""
aws_secret_key = ""
aws_region = "cn-north-1"
ds_ami_name = "my-test-ds-2"
# To use an official release tar, set `ds_version` to the release you want.
ds_version = "3.1.1"
# To use a locally built tar instead, set `ds_tar` to its location.
ds_tar = "~/workspace/dolphinscheduler/dolphinscheduler-dist/target/apache-dolphinscheduler-3.1.3-SNAPSHOT-bin.tar.gz"
EOF
```
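Before invoking Packer, you can sanity-check that the file contains the keys the templates expect. The snippet below is a minimal sketch: it re-creates the example file in a temporary location so it is self-contained, and the key list is an assumption based on the variables shown above.

```shell
# Sketch: write the example variables to a temp file and check that the
# always-required keys are present (key list assumed from the example above).
tmpfile=$(mktemp)
cat <<EOF > "$tmpfile"
aws_access_key = ""
aws_secret_key = ""
aws_region = "cn-north-1"
ds_ami_name = "my-test-ds-2"
ds_version = "3.1.1"
EOF
missing=0
for key in aws_access_key aws_secret_key aws_region ds_ami_name; do
  grep -q "^${key} " "$tmpfile" || missing=$((missing + 1))
done
echo "missing keys: ${missing}"
rm -f "$tmpfile"
```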
Then run the following commands to initialize Packer and build a custom AMI.
- If you want to use the official distribution tar:
```shell
packer init packer/ds-ami-official.pkr.hcl
packer build --var-file=ds-ami.pkrvars.hcl packer/ds-ami-official.pkr.hcl
```
- If you want to use a locally built distribution tar:
```shell
packer init packer/ds-ami-local.pkr.hcl
packer build --var-file=ds-ami.pkrvars.hcl packer/ds-ami-local.pkr.hcl
```
# Create resources
Set the necessary variables by creating a file `terraform.tfvars` and populating it as follows, adjusting the values to your environment.
Make sure `ds_ami_name` matches the AMI name used in `ds-ami.pkrvars.hcl` above.
```shell
cat <<EOF > terraform.tfvars
aws_access_key = ""
aws_secret_key = ""
aws_region = ""
name_prefix = "test-ds-terraform"
ds_ami_name = "my-test-ds-2"
ds_component_replicas = {
master = 1
worker = 1
alert = 1
api = 1
standalone_server = 0
}
EOF
```
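Assuming one EC2 instance per replica (suggested by the per-component `vm_instance_type` and `vm_*_volume_*` inputs documented below; the demo ZooKeeper node and the RDS database are created separately), the example above yields a four-instance fleet. A sketch of that arithmetic:

```shell
# One EC2 instance per replica (assumption, see lead-in);
# the values mirror the terraform.tfvars example above.
master=1; worker=1; alert=1; api=1; standalone_server=0
total=$((master + worker + alert + api + standalone_server))
echo "DolphinScheduler EC2 instances: ${total}"
```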
Then run the following commands to initialize Terraform and create the resources. Terraform loads `terraform.tfvars` automatically, so no `-var-file` flag is needed.
```shell
terraform init
terraform apply -auto-approve
```
# Open DolphinScheduler UI
```shell
open http://$(terraform output -json api_server_instance_public_dns | jq -r '.[0]'):12345/dolphinscheduler/ui
```
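The command above pulls the first public DNS name out of the JSON list that `terraform output -json` prints. If `jq` is not available, the same extraction can be sketched with plain shell parameter expansion; the sample value below is a placeholder, not a real hostname:

```shell
# Placeholder for what `terraform output -json api_server_instance_public_dns`
# returns: a JSON array with one entry per API server instance.
sample='["ec2-1-2-3-4.cn-north-1.compute.amazonaws.com.cn"]'
dns=${sample#\[\"}   # strip the leading ["
dns=${dns%%\"*}      # keep everything up to the next quote (first element)
echo "http://${dns}:12345/dolphinscheduler/ui"
```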
# Inputs
| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
| <a name="input_aws_access_key"></a> [aws\_access\_key](#input\_aws\_access\_key) | AWS access key | `string` | n/a | yes |
| <a name="input_aws_region"></a> [aws\_region](#input\_aws\_region) | AWS region | `string` | `"cn-north-1"` | no |
| <a name="input_aws_secret_key"></a> [aws\_secret\_key](#input\_aws\_secret\_key) | AWS secret key | `string` | n/a | yes |
| <a name="input_db_instance_class"></a> [db\_instance\_class](#input\_db\_instance\_class) | Database instance class | `string` | `"db.t3.micro"` | no |
| <a name="input_db_password"></a> [db\_password](#input\_db\_password) | Database password | `string` | n/a | yes |
| <a name="input_db_username"></a> [db\_username](#input\_db\_username) | Database username | `string` | `"dolphinscheduler"` | no |
| <a name="input_ds_ami_name"></a> [ds\_ami\_name](#input\_ds\_ami\_name) | Name of DolphinScheduler AMI | `string` | `"dolphinscheduler-ami"` | no |
| <a name="input_ds_component_replicas"></a> [ds\_component\_replicas](#input\_ds\_component\_replicas) | Replicas of the DolphinScheduler Components | `map(number)` | <pre>{<br> "alert": 1,<br> "api": 1,<br> "master": 1,<br> "standalone_server": 0,<br> "worker": 1<br>}</pre> | no |
| <a name="input_ds_version"></a> [ds\_version](#input\_ds\_version) | DolphinScheduler Version | `string` | `"3.1.1"` | no |
| <a name="input_name_prefix"></a> [name\_prefix](#input\_name\_prefix) | Name prefix for all resources | `string` | `"dolphinscheduler"` | no |
| <a name="input_private_subnet_cidr_blocks"></a> [private\_subnet\_cidr\_blocks](#input\_private\_subnet\_cidr\_blocks) | Available CIDR blocks for private subnets | `list(string)` | <pre>[<br> "10.0.101.0/24",<br> "10.0.102.0/24",<br> "10.0.103.0/24",<br> "10.0.104.0/24"<br>]</pre> | no |
| <a name="input_public_subnet_cidr_blocks"></a> [public\_subnet\_cidr\_blocks](#input\_public\_subnet\_cidr\_blocks) | CIDR blocks for the public subnets | `list(string)` | <pre>[<br> "10.0.1.0/24",<br> "10.0.2.0/24",<br> "10.0.3.0/24",<br> "10.0.4.0/24"<br>]</pre> | no |
| <a name="input_s3_bucket_prefix"></a> [s3\_bucket\_prefix](#input\_s3\_bucket\_prefix) | n/a | `string` | `"dolphinscheduler-test-"` | no |
| <a name="input_subnet_count"></a> [subnet\_count](#input\_subnet\_count) | Number of subnets | `map(number)` | <pre>{<br> "private": 2,<br> "public": 1<br>}</pre> | no |
| <a name="input_tags"></a> [tags](#input\_tags) | Tags to apply to all resources | `map(string)` | <pre>{<br> "Deployment": "Test"<br>}</pre> | no |
| <a name="input_vm_associate_public_ip_address"></a> [vm\_associate\_public\_ip\_address](#input\_vm\_associate\_public\_ip\_address) | Associate a public IP address to the EC2 instance | `map(bool)` | <pre>{<br> "alert": true,<br> "api": true,<br> "master": true,<br> "standalone_server": true,<br> "worker": true<br>}</pre> | no |
| <a name="input_vm_data_volume_size"></a> [vm\_data\_volume\_size](#input\_vm\_data\_volume\_size) | Data volume size of the EC2 Instance | `map(number)` | <pre>{<br> "alert": 10,<br> "api": 10,<br> "master": 10,<br> "standalone_server": 10,<br> "worker": 10<br>}</pre> | no |
| <a name="input_vm_data_volume_type"></a> [vm\_data\_volume\_type](#input\_vm\_data\_volume\_type) | Data volume type of the EC2 Instance | `map(string)` | <pre>{<br> "alert": "gp2",<br> "api": "gp2",<br> "master": "gp2",<br> "standalone_server": "gp2",<br> "worker": "gp2"<br>}</pre> | no |
| <a name="input_vm_instance_type"></a> [vm\_instance\_type](#input\_vm\_instance\_type) | EC2 instance type | `map(string)` | <pre>{<br> "alert": "t2.micro",<br> "api": "t2.small",<br> "master": "t2.medium",<br> "standalone_server": "t2.small",<br> "worker": "t2.medium"<br>}</pre> | no |
| <a name="input_vm_root_volume_size"></a> [vm\_root\_volume\_size](#input\_vm\_root\_volume\_size) | Root Volume size of the EC2 Instance | `map(number)` | <pre>{<br> "alert": 30,<br> "api": 30,<br> "master": 30,<br> "standalone_server": 30,<br> "worker": 30<br>}</pre> | no |
| <a name="input_vm_root_volume_type"></a> [vm\_root\_volume\_type](#input\_vm\_root\_volume\_type) | Root volume type of the EC2 Instance | `map(string)` | <pre>{<br> "alert": "gp2",<br> "api": "gp2",<br> "master": "gp2",<br> "standalone_server": "gp2",<br> "worker": "gp2"<br>}</pre> | no |
| <a name="input_vpc_cidr"></a> [vpc\_cidr](#input\_vpc\_cidr) | CIDR for the VPC | `string` | `"10.0.0.0/16"` | no |
| <a name="input_zookeeper_connect_string"></a> [zookeeper\_connect\_string](#input\_zookeeper\_connect\_string) | Zookeeper connect string, if empty, will create a single-node zookeeper for demonstration, don't use this in production | `string` | `""` | no |
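For anything beyond a demo, `zookeeper_connect_string` should point at an existing ensemble rather than the single-node ZooKeeper the module creates by default. A sketch of the override, where the `zk-*.internal` hostnames are placeholders:

```shell
# Append the override to terraform.tfvars; zk-{1,2,3}.internal are
# placeholder hostnames for an existing ZooKeeper ensemble.
cat <<EOF >> terraform.tfvars
zookeeper_connect_string = "zk-1.internal:2181,zk-2.internal:2181,zk-3.internal:2181"
EOF
```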
# Outputs
| Name | Description |
|------|-------------|
| <a name="output_alert_server_instance_id"></a> [alert\_server\_instance\_id](#output\_alert\_server\_instance\_id) | Instance IDs of alert instances |
| <a name="output_alert_server_instance_private_ip"></a> [alert\_server\_instance\_private\_ip](#output\_alert\_server\_instance\_private\_ip) | Private IPs of alert instances |
| <a name="output_alert_server_instance_public_dns"></a> [alert\_server\_instance\_public\_dns](#output\_alert\_server\_instance\_public\_dns) | Public domain names of alert instances |
| <a name="output_alert_server_instance_public_ip"></a> [alert\_server\_instance\_public\_ip](#output\_alert\_server\_instance\_public\_ip) | Public IPs of alert instances |
| <a name="output_api_server_instance_id"></a> [api\_server\_instance\_id](#output\_api\_server\_instance\_id) | Instance IDs of api instances |
| <a name="output_api_server_instance_private_ip"></a> [api\_server\_instance\_private\_ip](#output\_api\_server\_instance\_private\_ip) | Private IPs of api instances |
| <a name="output_api_server_instance_public_dns"></a> [api\_server\_instance\_public\_dns](#output\_api\_server\_instance\_public\_dns) | Public domain names of api instances |