# Virtlet pod example
To try out the examples, do the following on a cluster that has nodes
with Virtlet on them (see [the instructions](../deploy/README.md) in
the `deploy/` directory):
1. Create a sample VM:
   ```bash
   kubectl create -f cirros-vm.yaml
   ```
2. Wait for the `cirros-vm` pod to become `Running`:
   ```bash
   kubectl get pods -w
   ```
3. Connect to the VM console:
   ```bash
   kubectl attach -it cirros-vm
   ```
4. Once the VM has booted, you can SSH into it using the
   [virtletctl tool](../docs/virtletctl.md) (available as part of each
   Virtlet release on GitHub starting from Virtlet 1.0):
   ```bash
   virtletctl ssh cirros@cirros-vm -- -i examples/vmkey [command...]
   ```
Besides [cirros-vm.yaml](cirros-vm.yaml), there's also [ubuntu-vm.yaml](ubuntu-vm.yaml), which starts an Ubuntu Xenial VM, and [fedora-vm.yaml](fedora-vm.yaml), which starts a Fedora VM. These VMs can also be accessed using `virtletctl ssh` after they boot:
```bash
virtletctl ssh ubuntu@ubuntu-vm -- -i examples/vmkey [command...]
virtletctl ssh fedora@fedora-vm -- -i examples/vmkey [command...]
```
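All of these manifests follow the same pattern: a Virtlet-specific runtime annotation plus a `virtlet.cloud/` image reference. A rough, abbreviated sketch based on the Virtlet documentation (see [cirros-vm.yaml](cirros-vm.yaml) itself for the exact fields):

```yaml
# Abbreviated sketch of a Virtlet VM pod manifest; not the complete example.
apiVersion: v1
kind: Pod
metadata:
  name: cirros-vm
  annotations:
    # Route this pod to Virtlet instead of the default container runtime
    kubernetes.io/target-runtime: virtlet.cloud
spec:
  nodeSelector:
    extraRuntime: virtlet    # schedule only onto nodes running Virtlet
  containers:
  - name: cirros-vm
    # The virtlet.cloud/ prefix marks the image as a QCOW2 VM image
    image: virtlet.cloud/cirros
```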
# Kubernetes on VM-based StatefulSet
[Another example](k8s.yaml) involves starting several VMs using a `StatefulSet` and deploying
Kubernetes on them using `kubeadm`.
You can create the cluster like this:
```bash
kubectl create -f k8s.yaml
```
Watch progress of the cluster setup via the VM console:
```bash
kubectl attach -it k8s-0
```
After the setup is complete, you can log into the master node:
```bash
virtletctl ssh root@k8s-0 -- -i examples/vmkey
```
Inside the VM, wait a bit for the Kubernetes nodes and pods to become ready.
You can watch them using the following commands:
```bash
kubectl get nodes -w
# Press Ctrl-C when all 3 nodes are present and Ready
kubectl get pods --all-namespaces -o wide -w
# Press Ctrl-C when all the pods are ready
```
You can then deploy and test nginx on the inner cluster:
```bash
kubectl run nginx --image=nginx --expose --port 80
kubectl get pods -w
# Press Ctrl-C when the pod is ready
kubectl run bbtest --rm --attach --image=docker.io/busybox --restart=Never -- wget -O - http://nginx
```
After that you can follow
[the instructions](../deploy/real-cluster.md) to install Virtlet on
the cluster if you want, but note that you'll have to disable KVM
because nested virtualization is not yet supported by Virtlet.
# Using local block PVs
To use the block PV examples, you need to enable the `BlockVolume`
[feature gate](https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/)
for your Kubernetes cluster components. When using
[kubeadm-dind-cluster](https://github.com/Mirantis/kubeadm-dind-cluster)
for testing, you can use this command to start the cluster with
`BlockVolume` and Ceph support:
```bash
FEATURE_GATES="BlockVolume=true" \
KUBELET_FEATURE_GATES="BlockVolume=true" \
ENABLE_CEPH=1 \
./dind-cluster-v1.14.sh up
```
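On a cluster not managed by kubeadm-dind-cluster, the same gate has to be enabled on the control plane components and each kubelet. For a kubelet configured via a `KubeletConfiguration` file, the relevant fragment would look roughly like this (the rest of the file is omitted):

```yaml
# Fragment of a KubeletConfiguration, e.g. /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  BlockVolume: true
```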
[ubuntu-vm-local-block-pv.yaml](ubuntu-vm-local-block-pv.yaml)
demonstrates the use of local block volumes. For simplicity, it uses a
file named `/var/lib/virtlet/looptest` instead of a real block device;
from the user's perspective the usage is the same, except that in most
real-world cases a `/dev/...` path would be specified instead of
`/var/lib/virtlet/looptest`. The path is chosen to be under
`/var/lib/virtlet` because this directory is mounted into the Virtlet
pod by default, and Virtlet must have access to the file or block
device backing the block PV.
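The manifest combines a `local` PV with `volumeMode: Block` and a matching PVC; schematically it looks like this (names and sizes here are illustrative — see the yaml file itself for the real definitions):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-block-pv            # illustrative name
spec:
  capacity:
    storage: 1Gi
  accessModes: ["ReadWriteOnce"]
  volumeMode: Block               # expose the volume as a raw block device
  local:
    path: /var/lib/virtlet/looptest
  nodeAffinity:                   # local PVs must be pinned to a node
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["kube-node-1"]
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-block-pvc           # illustrative name
spec:
  accessModes: ["ReadWriteOnce"]
  volumeMode: Block               # must match the PV's volumeMode
  resources:
    requests:
      storage: 1Gi
```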
First, you need to create the file to be used for the contents
of the local block PV:
```bash
docker exec kube-node-1 dd if=/dev/zero of=/var/lib/virtlet/looptest bs=1M count=1000
docker exec kube-node-1 mkfs.ext4 /var/lib/virtlet/looptest
```
Let's create the PV, PVC and the pod that uses them:
```bash
kubectl apply -f examples/ubuntu-vm-local-block-pv.yaml
```
After the VM boots, we can log into it and verify that the PV is
indeed mounted:
```console
$ virtletctl ssh ubuntu@ubuntu-vm -- -i examples/vmkey
...
ubuntu@ubuntu-vm:~$ sudo touch /mnt/foo
ubuntu@ubuntu-vm:~$ ls -l /mnt
total 16
-rw-r--r-- 1 root root 0 Oct 1 17:27 foo
drwx------ 2 root root 16384 Oct 1 14:41 lost+found
$ exit
```
Then we can delete and re-create the pod:
```bash
kubectl delete pod ubuntu-vm
# wait till the pod disappears
kubectl get pod -w
kubectl apply -f examples/ubuntu-vm-local-block-pv.yaml
```
And, after the VM boots, log in again to verify that the file `foo` is
still there:
```console
$ virtletctl ssh ubuntu@ubuntu-vm -- -i examples/vmkey
...
ubuntu@ubuntu-vm:~$ ls -l /mnt
total 16
-rw-r--r-- 1 root root 0 Oct 1 17:27 foo
drwx------ 2 root root 16384 Oct 1 14:41 lost+found
$ exit
```
# Using Ceph block PVs
For the Ceph examples you'll also need to start a Ceph test container
(the `--privileged` flag and the `-v` mounts of `/sys/bus` and `/dev`
are needed for `rbd map` to work from within the `ceph_cluster`
container; they're not needed for the persistent root filesystem
example in the next section):
```bash
MON_IP=$(docker exec kube-master route | grep default | awk '{print $2}')
CEPH_PUBLIC_NETWORK=${MON_IP}/16
docker run -d --net=host -e MON_IP=${MON_IP} \
--privileged \
-v /dev:/dev \
-v /sys/bus:/sys/bus \
-e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} \
-e CEPH_DEMO_UID=foo \
-e CEPH_DEMO_ACCESS_KEY=foo \
-e CEPH_DEMO_SECRET_KEY=foo \
-e CEPH_DEMO_BUCKET=foo \
-e DEMO_DAEMONS="osd mds" \
--name ceph_cluster docker.io/ceph/daemon demo
# wait for the cluster to start
docker logs -f ceph_cluster
```
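`CEPH_PUBLIC_NETWORK` above is formed by appending `/16` to the monitor IP. If you'd rather pass a normalized network address, a small POSIX-shell helper could compute it (the `cidr16` name is made up for this sketch, and no input validation is done):

```shell
#!/bin/sh
# cidr16: zero the host part of a dotted-quad IPv4 address and append /16.
# Assumes well-formed IPv4 input.
cidr16() {
    printf '%s.%s.0.0/16\n' \
        "$(echo "$1" | cut -d. -f1)" \
        "$(echo "$1" | cut -d. -f2)"
}

cidr16 172.18.0.3   # prints 172.18.0.0/16
```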
Create a pool there:
```bash
docker exec ceph_cluster ceph osd pool create kube 8 8
```
Create an image for testing (it's important to use `rbd create` with
`layering` feature here so as not to get a feature mismatch error
later when creating a pod):
```bash
docker exec ceph_cluster rbd create tstimg \
--size 1G --pool kube --image-feature layering
```
Set up a Kubernetes secret for use with Ceph:
```bash
admin_secret="$(docker exec ceph_cluster ceph auth get-key client.admin)"
kubectl create secret generic ceph-admin \
--type="kubernetes.io/rbd" \
--from-literal=key="${admin_secret}"
```
To test the block PV, we also need to create a filesystem on the node
(this is not needed for testing the persistent rootfs below).
You may need to load the RBD kernel module on the Docker host to be able to do this:
```bash
modprobe rbd
```
Then we can map the RBD, create a filesystem on it and unmap it again:
```bash
rbd=$(docker exec ceph_cluster rbd map tstimg --pool=kube)
docker exec kube-node-1 mkfs.ext4 "${rbd}"
docker exec ceph_cluster rbd unmap tstimg --pool=kube
```
After that, you can create the block PV, PVC, and the pod that uses
them, and verify that the PV is mounted into `ubuntu-vm` the same way
as in the previous section:
```bash
kubectl apply -f examples/ubuntu-vm-rbd-block-pv.yaml
```
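The PV side of that manifest ties together the pieces created above: the `kube` pool, the `tstimg` image, and the `ceph-admin` secret. Schematically (the name and the monitor address are placeholders — use your cluster's `MON_IP`):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: rbd-block-pv              # illustrative name
spec:
  capacity:
    storage: 1Gi
  accessModes: ["ReadWriteOnce"]
  volumeMode: Block               # raw block device, no filesystem mount
  rbd:
    monitors:
    - "10.192.0.1:6789"           # placeholder: use your MON_IP
    pool: kube
    image: tstimg
    secretRef:
      name: ceph-admin            # the kubernetes.io/rbd secret created above
```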
# Using the persistent root filesystem
[cirros-vm-persistent-rootfs-local.yaml](cirros-vm-persistent-rootfs-local.yaml)
demonstrates the use of a persistent root filesystem. The most important
part is the `volumeDevices` section in the pod's container definition:
```yaml
volumeDevices:
- devicePath: /
name: testpvc
```
Unlike the local PV example above, we can't use a file instead of a
real block device here, as Virtlet uses the device mapper internally,
which can't work with plain files. We don't need to run `mkfs.ext4`
this time, though, as Virtlet copies the VM image over the contents of
the device. Let's create a loop device to be used for the PV:
```bash
docker exec kube-node-1 dd if=/dev/zero of=/rawtest bs=1M count=1000
docker exec kube-node-1 /bin/bash -c 'ln -s $(losetup -f /rawtest --show) /dev/rootdev'
```
We use a symbolic link to the actual block device here so we don't
need to edit the example yaml.
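With the symlink in place, the PV can keep a stable `path` regardless of which loop device `losetup` happens to allocate; only the relevant fragment is shown here (the example yaml may use different names):

```yaml
# PV fragment: the stable symlink stands in for the actual loop device
spec:
  volumeMode: Block
  local:
    path: /dev/rootdev
```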
After that, we create the PV, PVC and the pod:
```bash
kubectl apply -f examples/cirros-vm-persistent-rootfs-local.yaml
```
After the VM boots, we can log into it and verify, the same way as in
the local block PV example above, that files written to the root
filesystem survive pod re-creation.