elasticdump
==================
Tools for moving and saving indices.
![picture](https://raw.github.com/elasticsearch-dump/elasticsearch-dump/master/elasticdump.jpg)
---
[![Build Status](https://github.com/elasticsearch-dump/elasticsearch-dump/actions/workflows/elasticdump.yaml/badge.svg)](https://github.com/elasticsearch-dump/elasticsearch-dump)
[![npm version](https://badge.fury.io/js/elasticdump.svg)](https://npmjs.org/package/elasticdump)
[![NPM Weekly stats](https://img.shields.io/npm/dw/elasticdump.svg)](https://npmjs.org/package/elasticdump)
[![NPM Monthly stats](https://img.shields.io/npm/dm/elasticdump.svg)](https://npmjs.org/package/elasticdump)
[![DockerHub Badge](https://img.shields.io/docker/pulls/elasticdump/elasticsearch-dump.svg)](https://hub.docker.com/r/elasticdump/elasticsearch-dump/)
[![DockerHub Badge](https://img.shields.io/docker/pulls/taskrabbit/elasticsearch-dump.svg)](https://hub.docker.com/r/taskrabbit/elasticsearch-dump/)
## Version Warnings!
- Version `1.0.0` of Elasticdump changes the format of the files created by the dump. Files created with version `0.x.x` of this tool are likely not to work with later versions. To learn more about the breaking changes, visit the release notes for version [`1.0.0`](https://github.com/elasticsearch-dump/elasticsearch-dump/releases/tag/v1.0.0). If you receive an "out of memory" error, this is most likely the cause.
- Version `2.0.0` of Elasticdump removes the `bulk` options. These options were buggy, and differ between versions of Elasticsearch. If you need to export multiple indexes, look for the `multielasticdump` section of the tool.
- Version `2.1.0` of Elasticdump moves from using `scan/scroll` (ES 1.x) to just `scroll` (ES 2.x). This is a backwards-compatible change within Elasticsearch, but performance may suffer on Elasticsearch versions prior to 2.x.
- Version `3.0.0` of Elasticdump updates the default queries to only work with Elasticsearch 5+. The tool *may* be compatible with earlier versions of Elasticsearch, but our version-detection method may not work for all ES cluster topologies.
- Version `5.0.0` of Elasticdump contains a breaking change for the s3 transport. The _s3Bucket_ and _s3RecordKey_ params are no longer supported; please use s3urls instead.
- Version `6.1.0` and higher of Elasticdump changes the upload/dump process to allow overlapping promise processing. The benefit is improved performance from increased parallelism, but a side effect is that records (the data set) are no longer processed in sequential order (ordering is no longer guaranteed).
- Version `6.67.0` and higher of Elasticdump will quit if the node version does not match the minimum requirement needed (v10.0.0)
- Version `6.76.0` and higher of Elasticdump added support for OpenSearch (forked from Elasticsearch 7.10.2)
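Given the Node.js minimum mentioned above, you can verify your runtime before installing (a simple sanity check):

```shell
# print the Node.js version; elasticdump >= 6.67.0 refuses to run below v10.0.0
node --version
```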
## Installing
(local)
```bash
npm install elasticdump
./bin/elasticdump
```
(global)
```bash
npm install elasticdump -g
elasticdump
```
## Use
### Standard Install
Elasticdump works by sending an `input` to an `output`. Both can be either an Elasticsearch URL or a file.
Elasticsearch:
- format: `{protocol}://{host}:{port}/{index}`
- example: `http://127.0.0.1:9200/my_index`
File:
- format: `{FilePath}`
- example: `/Users/evantahler/Desktop/dump.json`
Stdio:
- format: `stdin` / `stdout`
- example: `$` (use `$` as the input to read from stdin, or as the output to write to stdout)
You can then do things like:
```bash
# Copy an index from production to staging with analyzer and mapping:
elasticdump \
--input=http://production.es.com:9200/my_index \
--output=http://staging.es.com:9200/my_index \
--type=analyzer
elasticdump \
--input=http://production.es.com:9200/my_index \
--output=http://staging.es.com:9200/my_index \
--type=mapping
elasticdump \
--input=http://production.es.com:9200/my_index \
--output=http://staging.es.com:9200/my_index \
--type=data
# Backup index data to a file:
elasticdump \
--input=http://production.es.com:9200/my_index \
--output=/data/my_index_mapping.json \
--type=mapping
elasticdump \
--input=http://production.es.com:9200/my_index \
--output=/data/my_index.json \
--type=data
# Backup an index to a gzip file using stdout:
elasticdump \
--input=http://production.es.com:9200/my_index \
--output=$ \
| gzip > /data/my_index.json.gz
# Backup the results of a query to a file
elasticdump \
--input=http://production.es.com:9200/my_index \
--output=query.json \
--searchBody="{\"query\":{\"term\":{\"username\": \"admin\"}}}"
# Specify searchBody from a file
elasticdump \
--input=http://production.es.com:9200/my_index \
--output=query.json \
--searchBody=@/data/searchbody.json
# Copy data from a single shard:
elasticdump \
--input=http://es.com:9200/api \
--output=http://es.com:9200/api2 \
--input-params="{\"preference\":\"_shards:0\"}"
# Backup aliases to a file
elasticdump \
--input=http://es.com:9200/index-name/alias-filter \
--output=alias.json \
--type=alias
# Import aliases into ES
elasticdump \
--input=./alias.json \
--output=http://es.com:9200 \
--type=alias
# Backup templates to a file
elasticdump \
--input=http://es.com:9200/template-filter \
--output=templates.json \
--type=template
# Import templates into ES
elasticdump \
--input=./templates.json \
--output=http://es.com:9200 \
--type=template
# Split files into multiple parts
elasticdump \
--input=http://production.es.com:9200/my_index \
--output=/data/my_index.json \
--fileSize=10mb
# Import data from S3 into ES (using s3urls)
elasticdump \
--s3AccessKeyId "${access_key_id}" \
--s3SecretAccessKey "${access_key_secret}" \
--input "s3://${bucket_name}/${file_name}.json" \
--output=http://production.es.com:9200/my_index
# Export ES data to S3 (using s3urls)
elasticdump \
--s3AccessKeyId "${access_key_id}" \
--s3SecretAccessKey "${access_key_secret}" \
--input=http://production.es.com:9200/my_index \
--output "s3://${bucket_name}/${file_name}.json"
# Import data from MINIO (s3 compatible) into ES (using s3urls)
elasticdump \
--s3AccessKeyId "${access_key_id}" \
--s3SecretAccessKey "${access_key_secret}" \
--input "s3://${bucket_name}/${file_name}.json" \
--output=http://production.es.com:9200/my_index \
--s3ForcePathStyle true \
--s3Endpoint https://production.minio.co
# Export ES data to MINIO (s3 compatible) (using s3urls)
elasticdump \
--s3AccessKeyId "${access_key_id}" \
--s3SecretAccessKey "${access_key_secret}" \
--input=http://production.es.com:9200/my_index \
--output "s3://${bucket_name}/${file_name}.json" \
--s3ForcePathStyle true \
--s3Endpoint https://production.minio.co
# Import data from a CSV file into ES (using csvurls)
# the csv:// prefix is required so the input is parsed as a CSV file,
# e.g. --input "csv://${file_path}.csv"
elasticdump \
--input "csv:///data/cars.csv" \
--output=http://production.es.com:9200/my_index \
--csvSkipRows 1 \
--csvDelimiter ";"
# --csvSkipRows skips parsed rows (this does not include the header row);
# the default --csvDelimiter is ','
```
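For the `--searchBody=@/data/searchbody.json` form above, the file contains a raw Elasticsearch query body rather than an escaped command-line string. A minimal sketch of such a file (the path and the query are illustrative, mirroring the inline `searchBody` example above):

```json
{
  "query": {
    "term": {
      "username": "admin"
    }
  }
}
```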
### Non-Standard Install
If Elasticsearch is not being served from the root directory, the `--input-index` and
`--output-index` options are required. If they are not provided, the additional sub-directories will
be parsed for index and type.
Elasticsearch:
- format: `{protocol}://{host}:{port}/{sub}/{directory...}`
- example: `http://127.0.0.1:9200/api/search`
```bash
# Copy a single index from Elasticsearch:
elasticdump \
--input=http://es.com:9200/api/search \
--input-index=my_index \
--output=http://es.com:9200/api/search \
--output-index=my_index \
--type=mapping
# Copy a single type:
elasticdump \
--input=http://es.com:9200/api/search \
--input-index=my_index/my_type \
--output=http://es.com:9200/api/search \
--output-index=my_index \
--type=mapping
```
### Docker install
If you prefer to run elasticdump via Docker, you can pull the image from Docker Hub:
```bash
docker pull elasticdump/elasticsearch-dump
```
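Once pulled, the image runs the same CLI; a minimal sketch, assuming the `elasticdump/elasticsearch-dump` image (the hostnames are placeholders, and writing to a file requires mounting a host directory into the container):

```shell
# run elasticdump from the container; flags are identical to the installed CLI
docker run --rm -ti elasticdump/elasticsearch-dump \
  --input=http://production.es.com:9200/my_index \
  --output=http://staging.es.com:9200/my_index \
  --type=data

# dump to a file by mounting a host directory (here /data) into the container
docker run --rm -ti -v /data:/tmp elasticdump/elasticsearch-dump \
  --input=http://production.es.com:9200/my_index \
  --output=/tmp/my_index.json \
  --type=data
```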