
NVIDIA Driver Installation

The following steps were performed on Red Hat 7.6, with Kubernetes already installed and Docker as the container runtime.

Check the server's GPU information:

yum install pciutils
lspci | grep "NVIDIA"

Download the driver

Download the driver from the official NVIDIA website.

Install the driver

When installing from offline RPM packages on RedHat/CentOS, EPEL is needed to provide some required dependencies:

yum install -y epel-release
curl -OL https://cn.download.nvidia.cn/tesla/515.48.07/nvidia-driver-local-repo-rhel7-515.48.07-1.0-1.x86_64.rpm
rpm -ivh nvidia-driver-local-repo-rhel7-515.48.07-1.0-1.x86_64.rpm
yum install cuda-drivers
reboot
nvidia-smi

Install nvidia-docker

distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
&& curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.repo | sudo tee /etc/yum.repos.d/nvidia-container-toolkit.repo
yum install -y nvidia-docker2
# The installation backs up the original configuration; add default-runtime to the
# original settings and overwrite /etc/docker/daemon.json with the result.
# Edit daemon.json: add default-runtime if it is missing, keeping the previous contents.
vim /etc/docker/daemon.json
"default-runtime": "nvidia",

systemctl restart docker

docker run --rm -e NVIDIA_VISIBLE_DEVICES=all nvidia/cuda:11.0.3-base-ubuntu20.04 nvidia-smi
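For reference, a minimal daemon.json with the NVIDIA runtime set as default could look like the sketch below; the runtime path is the one nvidia-docker2 installs, and any options you already had should be merged in. The /tmp path is only for illustration.

```shell
# Sketch: a minimal daemon.json making nvidia the default runtime.
# Written to /tmp here for illustration; merge with your real config first.
cat << 'EOF' > /tmp/daemon.json
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
EOF
# Sanity-check the JSON before restarting docker
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json OK"
```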

Install k8s-device-plugin

Deployment notes

Official documentation:
gpu-feature-discovery
k8s-device-plugin

Deployment files

  • Deployment uses the Helm chart
    # templates/_helpers.tpl may have issues; before deploying, try rendering with helm template .
    # or: helm install ndp nvidia-device-plugin --dry-run
    allowPrivilegeEscalation: false
    capabilities:
      drop: ["ALL"]
  • The latest version of Helm is required
  • Required images:
    k8s.gcr.io/nfd/node-feature-discovery:v0.11.0
    nvcr.io/nvidia/gpu-feature-discovery:v0.6.1
    k8s.gcr.io/nfd/node-feature-discovery:v0.11.0
    nvcr.io/nvidia/k8s-device-plugin:v0.12.2

Installation

# The available MIG_STRATEGY values are described in the official documentation
MIG_STRATEGY=none
./helm -n nvidia-device-plugin install \
 ndp \
 --set migStrategy=${MIG_STRATEGY} \
 --set gfd.enabled=true \
 nvidia-device-plugin
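Once the plugin pods are running, nodes should advertise an nvidia.com/gpu resource. As a hedged sketch, a test pod requesting one GPU might look like this (the pod name, image tag, and /tmp path are illustrative):

```shell
# Sketch: generate a pod spec that requests one GPU from the device plugin.
cat << 'EOF' > /tmp/gpu-test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test
spec:
  restartPolicy: Never
  containers:
  - name: cuda
    image: nvidia/cuda:11.0.3-base-ubuntu20.04
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1
EOF
echo "wrote /tmp/gpu-test.yaml"
# Then: kubectl apply -f /tmp/gpu-test.yaml && kubectl logs gpu-test
```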

On macOS, podman is always used via a remote connection to a VM. The VM can be created with VM-management software; this post covers multipass and podman machine.

Using podman machine

Some dependencies need to be prepared before configuration.

Install dependencies

The Fedora CoreOS qemu image can be downloaded in advance, or fetched automatically during installation:

mkdir fedora-coreos
cd fedora-coreos
curl -L 'https://builds.coreos.fedoraproject.org/prod/streams/next/builds/36.20220426.1.0/x86_64/fedora-coreos-36.20220426.1.0-qemu.x86_64.qcow2.xz' -o fedora-coreos-36.20220426.1.0-qemu.x86_64.qcow2.xz

Get the podman binary:

curl -L 'https://github.com/containers/podman/releases/download/v4.0.3/podman-remote-release-darwin_amd64.zip' -o podman-remote-release-darwin_amd64.zip
unzip podman-remote-release-darwin_amd64.zip
sudo cp podman-4.0.3/usr/bin/podman /usr/local/bin/podman

Get the gvproxy binary:

curl -L 'https://github.com/containers/gvisor-tap-vsock/releases/download/v0.3.0/gvproxy-darwin' -o gvproxy
chmod +x gvproxy # the downloaded file is not executable by default
sudo cp gvproxy /usr/local/bin/gvproxy

Install qemu:

brew install qemu

Create the machine

Initialize it with podman machine init; run podman machine init --help to see the available options.
podman machine can only run one VM at a time.

podman machine init --cpus 3 --disk-size 50 --memory 3072 --image-path ~/fedora-coreos/fedora-coreos-36.20220426.1.0-qemu.x86_64.qcow2.xz
podman machine start # start the default VM
podman machine ls # list existing VMs

After initialization, check that the podman configuration is working:

podman system connection ls
podman version

Using Multipass

Multipass is a tool for quickly launching cloud-style Ubuntu VMs on Linux, macOS, and Windows.

Download multipass

Configure the VM

Initialize the VM:

multipass launch -n podman -c 3 -m 3G -d 50G 22.04

If the VM cannot reach the internet, try changing its DNS; see using-a-custom-dns.
For other issues, see troubleshooting-networking-on-macos.

multipass shell podman
vim /etc/netplan/50-cloud-init.yaml
# After editing, run the command below, then press Enter to confirm the change
sudo netplan try

Ubuntu 22.04 requires adding ssh-rsa to sshd's supported PubkeyAcceptedKeyTypes algorithms:

echo "PubkeyAcceptedKeyTypes=+ssh-rsa" >>/etc/ssh/sshd_config
systemctl restart sshd
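Blindly appending with >> adds a duplicate line every time it runs. A hedged, idempotent variant is sketched below; it operates on a copy of sshd_config in /tmp for illustration, so point it at the real file inside the VM:

```shell
# Sketch: append the sshd option only if it is not already present.
cp /etc/ssh/sshd_config /tmp/sshd_config 2>/dev/null || touch /tmp/sshd_config
grep -q '^PubkeyAcceptedKeyTypes' /tmp/sshd_config || \
  echo 'PubkeyAcceptedKeyTypes=+ssh-rsa' >> /tmp/sshd_config
grep -c '^PubkeyAcceptedKeyTypes' /tmp/sshd_config # count does not grow on repeated runs
```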

Configure podman

Installation steps for both podman 3 and podman 4 are recorded below; use whichever you prefer.

podman3

Ubuntu 22.04 ships Podman v3 (the LTS version) by default; install it as follows:

sudo su
sed -i 's?archive.ubuntu.com?mirrors.aliyun.com?g' /etc/apt/sources.list
sed -i 's?security.ubuntu.com?mirrors.aliyun.com?g' /etc/apt/sources.list
apt-get update
apt-get -y upgrade
apt-get -y install podman

podman4

Ubuntu 22.04 has no podman 4 package; it can be obtained from Debian's experimental repository:

sed -i 's?archive.ubuntu.com?mirrors.aliyun.com?g' /etc/apt/sources.list
sed -i 's?security.ubuntu.com?mirrors.aliyun.com?g' /etc/apt/sources.list
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 648ACFD622F3D138 0E98404D386FA1D9
echo 'deb http://deb.debian.org/debian experimental main' >> /etc/apt/sources.list.d/debian-experimental.list
apt-get update
apt-get -y upgrade
apt-get -t experimental -y install podman

Connect to podman through its socket

Add an SSH public key to the VM:

echo 'your local macOS public key' >> /root/.ssh/authorized_keys

podman 3 speaks an older API version, so the client version must match; download the podman 3 client:

curl -L https://github.com/containers/podman/releases/download/v3.4.4/podman-remote-release-darwin.zip -o podman-remote-release-darwin.zip
unzip podman-remote-release-darwin.zip
cp podman-3.4.4/podman /usr/local/bin/podman3

Then add the remote connection.
Replace the IP below with your VM's actual IP:

podman system connection add podman3 --identity ~/.ssh/id_rsa ssh://root@192.168.64.2/run/podman/podman.sock
podman system connection default podman3
podman3 version

podman4

podman system connection add podman4 --identity ~/.ssh/id_rsa ssh://root@192.168.64.4/run/podman/podman.sock
podman system connection default podman4
podman version

Using podman

With podman installed by any of the methods above, some extra configuration is useful, such as trusting a local registry and setting up registry mirrors.

Trusting a registry

A local registry typically uses a self-signed certificate; configure podman to trust it as follows:

cat << EOF >> /etc/containers/registries.conf.d/001-192.168.110.35-insecure.conf
[[registry]]
location="192.168.110.35"
prefix="192.168.110.35"
insecure=true
EOF

Registry mirrors

Image downloads can be slow from mainland China, so mirrors are usually configured, for example:

cat << EOF >> /etc/containers/registries.conf.d/002-mirrors.conf
[[registry]]
prefix="docker.io"
location="docker.m.daocloud.io"

[[registry]]
prefix="cr.l5d.io"
location="l5d.m.daocloud.io"

[[registry]]
prefix="docker.elastic.co"
location="elastic.m.daocloud.io"

[[registry]]
prefix="gcr.io"
location="gcr.m.daocloud.io"

[[registry]]
prefix="k8s.gcr.io"
location="k8s-gcr.m.daocloud.io"

[[registry]]
prefix="mcr.microsoft.com"
location="mcr.m.daocloud.io"

[[registry]]
prefix="nvcr.io"
location="nvcr.m.daocloud.io"

[[registry]]
prefix="quay.io"
location="quay.m.daocloud.io"

[[registry]]
prefix="registry.jujucharms.com"
location="jujucharms.m.daocloud.io"

[[registry]]
prefix="rocks.canonical.com"
location="rocks-canonical.m.daocloud.io"
EOF

Test image pulls

➜  ~ podman pull 192.168.110.35/library/nginx:1.19.4
Trying to pull 192.168.110.35/library/nginx:1.19.4...
Getting image source signatures
Copying blob sha256:232bf38931fc8c7f00f73e6d2be46776bd5b0999eb4c190c810a74cf203b1474
Copying blob sha256:c5df295936d31cee0907f9652ff1b0518482ea87102f4cd2a872ed720e72314b
Copying blob sha256:a29b129f410924b8ca6289b0e958f3d5ac159e29b54e4d9ab33e51eb87857474
Copying blob sha256:b3ddf1fa5595a82768da495f49d416bae8806d06ffe705935b4573035d8cfbad
Copying blob sha256:852e50cd189dfeb54d97680d9fa6bed21a6d7d18cfb56d6abfe2de9d7f173795
Copying config sha256:daee903b4e436178418e41d8dc223b73632144847e5fe81d061296e667f16ef2
Writing manifest to image destination
Storing signatures
daee903b4e436178418e41d8dc223b73632144847e5fe81d061296e667f16ef2

➜ ~ podman pull k8s.gcr.io/kube-apiserver:v1.20.1
Trying to pull k8s.gcr.io/kube-apiserver:v1.20.1...
Getting image source signatures
Copying blob sha256:f398b465657ed53ee83af22197ef61be9daec6af791c559ee5220dee5f3d94fe
Copying blob sha256:d7d21f5bdd8303a60bac834f99867a58e6f3e1abcb6d486158a1ccb67dbf85bf
Copying blob sha256:cbcdf8ef32b41cd954f25c9d85dee61b05acc3b20ffa8620596ed66ee6f1ae1d
Copying blob sha256:f398b465657ed53ee83af22197ef61be9daec6af791c559ee5220dee5f3d94fe
Copying blob sha256:cbcdf8ef32b41cd954f25c9d85dee61b05acc3b20ffa8620596ed66ee6f1ae1d
Copying blob sha256:d7d21f5bdd8303a60bac834f99867a58e6f3e1abcb6d486158a1ccb67dbf85bf
Copying config sha256:e1822562bf942868d700a2f08eb368f2c88987e473aae12997cc07cc83e789d1
Writing manifest to image destination
Storing signatures
e1822562bf942868d700a2f08eb368f2c88987e473aae12997cc07cc83e789d1

Run a container

➜  ~ podman run -d --name adminer --hostname adminer -p 8080:8080 --network bridge 192.168.110.35/library/adminer:4.8.1-standalone
➜ ~ podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
99b21f9f723b 192.168.110.35/library/adminer:4.8.1-standalone php -S [::]:8080 ... 7 seconds ago Up 7 seconds ago 0.0.0.0:8080->8080/tcp adminer

Accessing containers

With the containers running as above, the podman connections now look like this:

➜  ~ podman system connection ls
Name URI Identity Default
podman ssh://core@localhost:63215/run/user/501/podman/podman.sock /Users/lucas/.ssh/podman true
podman-root ssh://root@localhost:63215/run/podman/podman.sock /Users/lucas/.ssh/podman false
podman3 ssh://root@192.168.64.2:22/run/podman/podman.sock /Users/lucas/.ssh/id_rsa false
podman4 ssh://root@192.168.64.4:22/run/podman/podman.sock /Users/lucas/.ssh/id_rsa false
➜ ~ podman system connection default podman3
➜ ~ podman3 ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
99b21f9f723b 192.168.110.35/library/adminer:4.8.1-standalone php -S [::]:8080 ... 30 minutes ago Up 30 minutes ago 0.0.0.0:8080->8080/tcp adminer
➜ ~ podman system connection default podman4
➜ ~ podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
72b3ad0ac98d 192.168.110.35/library/adminer:4.8.1-standalone php -S [::]:8080 ... 38 seconds ago Up 38 seconds ago 0.0.0.0:8080->8080/tcp adminer
➜ ~ podman system connection default podman
➜ ~ podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a38dc8af9a8c 192.168.110.35/library/adminer:4.8.1-standalone php -S [::]:8080 ... 3 seconds ago Up 4 seconds ago 0.0.0.0:8080->8080/tcp adminer

With podman machine, containers can be reached directly via 127.0.0.1:

curl 127.0.0.1:8080

It can also be used together with podman-desktop.

With multipass, use the VM's IP instead:

curl 192.168.64.2:8080
curl 192.168.64.4:8080

Most podman commands mirror docker's, so an alias is convenient:

echo 'alias docker="podman"' >> .zshrc
source .zshrc
docker version

References

[1] multipass using a custom dns
[2] multipass troubleshooting networking on macos
[3] podman containers registries.conf.d
[4] podman mac experimental
[5] podman tutorial
[6] debian experimental

Preface

While using Envoy, its access logs kept accumulating, and I could not find a relevant option in the documentation. I found the related GitHub issues: 139621109. From their contents, logrotate is used to rotate the logs. As is well known, logrotate is the standard log rotation tool on Linux; plenty of articles already cover it, so it is not introduced in depth here.

Overview

logrotate rotates log files according to the configured rules. To rotate logs continuously, it needs crond; that is, crond runs logrotate on a schedule.

crond

crond is used to run logrotate periodically; on CentOS the configuration lives under /etc/cron.daily/:

[root@localhost ~]# ls /etc/cron.daily/logrotate
/etc/cron.daily/logrotate

[root@localhost ~]# cat /etc/cron.daily/logrotate
#!/bin/sh

/usr/sbin/logrotate /etc/logrotate.conf
EXITVALUE=$?
if [ $EXITVALUE != 0 ]; then
/usr/bin/logger -t logrotate "ALERT exited abnormally with [$EXITVALUE]"
fi
exit $EXITVALUE

logrotate

The default configuration file is /etc/logrotate.conf, which includes /etc/logrotate.d at the end; custom configuration generally goes into that directory.

Testing

Here docker is used to run a CentOS container for testing.

Install

First, install logrotate:

[root@localhost ~]# docker run --rm --name centos -it centos:centos7 bash
[root@0c4a0d0885e8 /]# yum install logrotate -y # install logrotate
[root@0c4a0d0885e8 ~]# ls -la /etc/ | grep -E 'cron|logrotate'
drwxr-xr-x 2 root root 23 Apr 15 13:36 cron.daily
-rw-r--r-- 1 root root 662 Jul 31 2013 logrotate.conf
drwxr-xr-x. 1 root root 6 Apr 1 2020 logrotate.d

After installing logrotate, you can see it created a cron.daily directory plus the logrotate configuration file and directory. crond still needs to be installed:

[root@0c4a0d0885e8 ~]# yum install cronie -y
[root@0c4a0d0885e8 ~]# ls -la /etc/ | grep cron # cron directories appear after installation
-rw------- 1 root root 541 Jan 13 16:52 anacrontab
drwxr-xr-x 2 root root 21 Apr 15 13:43 cron.d
drwxr-xr-x 2 root root 23 Jun 9 2014 cron.daily
-rw------- 1 root root 0 Jan 13 16:52 cron.deny
drwxr-xr-x 2 root root 22 Apr 15 13:43 cron.hourly
drwxr-xr-x 2 root root 6 Jun 9 2014 cron.monthly
drwxr-xr-x 2 root root 6 Jun 9 2014 cron.weekly
-rw-r--r-- 1 root root 451 Jun 9 2014 crontab

Configuration

The configuration below keeps at most 3 rotated files, rotates when the file size exceeds 100K, and removes rotated logs older than 3 days:

[root@0c4a0d0885e8 ~]# cat << EOF > /etc/logrotate.d/test.conf
nomail
dateformat %s
start 0
compress
/var/log/test.log {
rotate 3
missingok
copytruncate
size 100K
maxage 3
}
EOF
[root@0c4a0d0885e8 ~]# cp -r /etc/cron.daily/ /etc/cron.min/
[root@0c4a0d0885e8 ~]# cat << EOF > /etc/cron.d/min
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
* * * * * root run-parts /etc/cron.min
EOF
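To check a logrotate config without waiting for cron, logrotate can be run by hand: -d does a dry run and -f forces a rotation. A small self-contained sketch (paths under /tmp are illustrative):

```shell
# Sketch: a standalone config plus sample log for hand-testing logrotate.
cat << 'EOF' > /tmp/test-rotate.conf
/tmp/test.log {
    rotate 3
    missingok
    copytruncate
    size 1k
}
EOF
seq 1 500 > /tmp/test.log   # ~2K of data, above the 1k threshold
# Dry run, then force a rotation (using a private state file):
#   logrotate -d /tmp/test-rotate.conf
#   logrotate -f -s /tmp/rotate.state /tmp/test-rotate.conf
```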

Generate test logs

A simple infinite loop that keeps writing log lines:

[root@0c4a0d0885e8 ~]# while true;do echo "$(date)" >> /var/log/test.log;done

Check the rotation status

[root@0c4a0d0885e8 ~]# ls -lth /var/log/
total 248K
-rw-r--r--  1 root root 101K Apr 15 14:05 test.log
-rw-r--r--  1 root root 1.2K Apr 15 14:05 test.log1650031501.gz
-rw-r--r--  1 root root 1.2K Apr 15 14:04 test.log1650031441.gz
-rw-r--r--  1 root root 6.6K Apr 15 14:03 test.log1650031381.gz
[root@0c4a0d0885e8 ~]# ls -lth /var/log/
total 180K
-rw-r--r--  1 root root  75K Apr 15 14:06 test.log
-rw-r--r--  1 root root  865 Apr 15 14:06 test.log1650031561.gz
-rw-r--r--  1 root root 1.2K Apr 15 14:05 test.log1650031501.gz
-rw-r--r--  1 root root 1.2K Apr 15 14:04 test.log1650031441.gz

The rotation history shows that 3 rotated files are kept, matching the configured rules.

Containerized practice

See docker-logrotate.

Configuration

Effect

References

[1] cron-in-docker
[2] logrotate

Installation

Sphinx depends on Python; I generally use Python 3:

pip3 install -U Sphinx

Quick start

Create a project

The sphinx-quickstart command is provided for creating a project; see quickstart:

sphinx-quickstart demo
tree demo
# resulting directory layout
demo
|-- Makefile
|-- build
|-- make.bat
`-- source
    |-- _static
    |-- _templates
    |-- conf.py
    `-- index.rst

Theme setup

Sphinx ships with a few built-in themes; see the official documentation.
Here sphinx_rtd_theme is used; install and enable it as follows:

pip3 install sphinx_rtd_theme
vi demo/source/conf.py
# set html_theme and add html_theme_path (this also requires `import sphinx_rtd_theme`
# at the top of conf.py)
import sphinx_rtd_theme
html_theme = 'sphinx_rtd_theme'
html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]

Build

cd demo
make html
tree build -L 2
###
build
├── doctrees
│   ├── environment.pickle
│   └── index.doctree
└── html
    ├── _sources
    ├── _static
    ├── genindex.html
    ├── index.html
    ├── objects.inv
    ├── search.html
    └── searchindex.js

View the result

demo

API 2

By default, etcdctl uses the v3 API; to use the v2 API, set the environment variable ETCDCTL_API=2, as shown below:

export ETCDCTL_API=2
etcdctl ls /
# or
ETCDCTL_API=2 etcdctl ls /

Check cluster status

ETCDCTL_API=2 etcdctl member list

API 3

With the v3 API, listing only the keys requires extra flags, as shown below:

export prefix_keys='--prefix --keys-only'
etcdctl get / $prefix_keys
# or
etcdctl get / --prefix --keys-only

Check cluster status

export ETCDCTL_ENDPOINTS=https://10.95.35.76:12379,https://10.95.35.77:12379,https://10.95.35.78:12379
export ETCDCTL_CA_FILE=/etc/ssl/private/ca.crt
export ETCDCTL_CERT_FILE=/etc/ssl/private/etcd/peer.crt
export ETCDCTL_KEY_FILE=/etc/ssl/private/etcd/peer.key
etcdctl endpoint status --write-out=table
etcdctl endpoint health


Backup and restore

When v2 and v3 data coexist, export the v2 data as key/value pairs and back up the v3 data with etcd's snapshot command:

## v2 data export
for k in $(etcdctl ls --recursive -p | grep -v "/$")
do
  v=$(etcdctl get $k)
  if [ $? -eq 0 ]; then
    value=${v//\'/\'\\\'\'}
    num=$((num+1))
    echo "ETCDCTL_API=2 etcdctl set $k '$value'" >> /backup_v2_.sh
  else
    rm -rf /backup_v2_.sh
    exit 1
  fi
done
## v3 data backup
etcdctl snapshot save /backup_v3.db
etcdctl --write-out=table snapshot status /backup_v3.db

To restore, first restore the v3 data from the snapshot, then replay the v2 data:

etcdctl snapshot restore /backup_v3.db
bash /backup_v2_.sh
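The ${v//\'/\'\\\'\'} substitution in the export loop escapes embedded single quotes so that the generated set commands survive re-parsing. A tiny standalone demonstration of that quoting:

```shell
# Demonstrate the single-quote escaping used when generating the v2 restore script.
v="it's a value"
value=${v//\'/\'\\\'\'}      # turns ' into '\'' inside a single-quoted string
cmd="echo '$value'"
result=$(eval "$cmd")
echo "$result"               # prints: it's a value
```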

This post records some commonly used Maven commands.

Common commands

Build and package

mvn -U -B clean package

Reference: package

Build and deploy

Standard deployment

mvn -U -B clean deploy

Deploying to a private repository

# For plugin versions below 3.0. Note that passing -DaltDeploymentRepository twice
# would let the second value override the first, so the release/snapshot-specific
# properties are used here instead.
mvn -B -U clean deploy \
-DaltReleaseDeploymentRepository=maven-release::default::http://192.168.110.35:8081/repository/maven-release \
-DaltSnapshotDeploymentRepository=maven-snapshots::default::http://192.168.110.35:8081/repository/maven-snapshots

mvn deploy:deploy-file -DgroupId=<groupId> -DartifactId=<artifactId> -Dversion=<version> -Dpackaging=<package> -Dfile=<file> -Durl=<url> -DrepositoryId=<repositoryId>

mvn deploy:deploy-file -Dfile=<file> -DrepositoryId=<repositoryId> -Durl=<url> -DpomFile=<pomFile> -Dpackaging=jar

mvn deploy:deploy-file -Dfile=<file> -DrepositoryId=<repositoryId> -Durl=<url> -DpomFile=<pomFile> -Dpackaging=pom

Reference: maven-deploy-plugin

Getting project information

mvn help:evaluate -Dexpression=project.artifactId -q -DforceStdout
mvn help:evaluate -Dexpression=project.version -q -DforceStdout

Reference: evaluate-mojo
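The evaluate commands above are handy in CI scripts, for example to capture the project coordinates into variables. A sketch; the fallback to "unknown" and the registry host are illustrative assumptions, not part of the original commands:

```shell
# Sketch: capture Maven coordinates for use in a CI pipeline.
# Falls back to "unknown" so the snippet degrades gracefully without mvn/pom.xml.
ARTIFACT=$(mvn help:evaluate -Dexpression=project.artifactId -q -DforceStdout 2>/dev/null || echo unknown)
VERSION=$(mvn help:evaluate -Dexpression=project.version -q -DforceStdout 2>/dev/null || echo unknown)
IMAGE="registry.example.com/${ARTIFACT}:${VERSION}"   # hypothetical registry host
echo "$IMAGE"
```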

sonar scan

mvn -U -B clean package sonar:sonar \
-Dmaven.test.skip=true \
-Dsonar.scm.disabled=true \
-Dsonar.projectName=$SONAR_PROJECT \
-Dsonar.projectKey=$SONAR_PROJECT \
-Dsonar.host.url=$SONAR_HOST_URL \
-Dsonar.login=$SONAR_LOGIN \
-Dsonar.sources=$SONAR_SOURCES \
-Dsonar.java.binaries=$SONAR_JAVA_BINARIES \
-Dsonar.exclusions=$SONAR_EXCLUSIONS \
-Dsonar.java.coveragePlugin=jacoco \
-Dsonar.jacoco.reportPaths=target/jacoco.exec

Reference: sonarscanner-for-maven

Common configuration

Mirrors

The settings file is normally placed at ${user.home}/.m2/settings.xml:

<settings>
...
  <mirrors>
    <mirror>
      <id>aliyunmaven</id>
      <mirrorOf>central</mirrorOf>
      <name>Aliyun public repository</name>
      <url>https://maven.aliyun.com/repository/public</url>
    </mirror>
    <mirror>
      <id>local</id>
      <name>Local Mirror Repository</name>
      <url>http://192.168.110.35:8081/repository/maven-public</url>
      <mirrorOf>maven-release</mirrorOf>
    </mirror>
  </mirrors>
...
</settings>

Reference: guide-mirror-settings