Compare commits

...

56 Commits

Author SHA1 Message Date
Jérémy Lecour 5ac990473e remove monolithic script
2023-03-22 14:19:29 +01:00
Jérémy Lecour a6573c6db3 changelog
2023-03-22 14:17:42 +01:00
Jérémy Lecour ea054f314c Add some comments 2023-03-22 14:17:10 +01:00
Jérémy Lecour 70e541dd6d zzz_evobackup.sh: LIBDIR="/usr/local/lib/evobackup"
2023-03-22 14:11:03 +01:00
Jérémy Lecour 5aeba28d5c utilities.sh: fix line count 2023-03-22 14:10:27 +01:00
Jérémy Lecour 4475ee9af8 dump.sh: improve options handling
* default values,
* reset variable each time
* masterdata option only if present
2023-03-22 14:10:11 +01:00
Jérémy Lecour feafe01692 Delete error directories recursively
2023-03-08 09:22:28 +01:00
Jérémy Lecour 1fa1eb7793 Delete README containing dead links 2023-02-27 14:56:45 +01:00
Jérémy Lecour 50f81f2716 Add options for dump functions
2023-02-08 22:53:28 +01:00
Jérémy Lecour 2e9eb4a946 variable for script path 2023-02-08 22:51:38 +01:00
Jérémy Lecour 149b5d0e8d comments
2023-01-28 21:14:22 +01:00
Jérémy Lecour d532ac83da client: declare variable earlier
2023-01-28 16:20:51 +01:00
Jérémy Lecour 767d509390 deploy evobackup beta with configured MAIL and LIBDIR 2023-01-28 16:20:36 +01:00
Jérémy Lecour 70fbab9bb0 Test presence of old config file before trying to delete it
2023-01-28 16:07:39 +01:00
Jérémy Lecour c5d82eda68 deployment playbook
2023-01-16 14:26:15 +01:00
Jérémy Lecour 0491598c1f hook functions 2023-01-16 14:26:04 +01:00
Jérémy Lecour 2bf4d0dd0f mtree includes must be directories 2023-01-16 14:25:31 +01:00
Jérémy Lecour ed7f9e79ae default value 2023-01-16 13:16:19 +01:00
Jérémy Lecour 7784ba5548 load libraries just before calling main
2023-01-16 09:58:17 +01:00
Jérémy Lecour 2ea9614e3c WIP: separate lib and custom code
2023-01-15 22:56:03 +01:00
Jérémy Lecour f9aa722ac9 log errors as they happen 2023-01-14 18:51:37 +01:00
Jérémy Lecour 86f0046797 send rsync full log file if it exists 2023-01-13 18:17:54 +01:00
Jérémy Lecour 518fa9d1e7 Store errors in dedicated and persistent directories 2023-01-13 17:17:56 +01:00
Jérémy Lecour 27568820bf revert "declare -a" on array variables 2023-01-13 17:15:33 +01:00
Jérémy Lecour 22814bc5d7 Ldap dump file name 2023-01-13 17:13:06 +01:00
Jérémy Lecour 9665a4ef00 command arguments (long options and whitespaces) 2023-01-13 16:58:24 +01:00
Jérémy Lecour 46c012f5fc skip mtree if disabled or missing 2023-01-13 13:30:57 +01:00
Jérémy Lecour e9cf39ad40 remove PING_BEFORE_SSH 2023-01-13 11:26:41 +01:00
Jérémy Lecour 22ba5ed823 declare bash arrays 2023-01-13 11:26:19 +01:00
Jérémy Lecour 7f4cb78826 shellcheck 2023-01-13 11:17:20 +01:00
Jérémy Lecour 7199ffc64f Add PING_BEFORE_SSH (enabled by default) 2023-01-09 11:45:39 +01:00
Jérémy Lecour 4ff1bc5976 better comments 2023-01-06 16:59:12 +01:00
Jérémy Lecour aeebb815c8 Use bash array for temp_files 2023-01-06 14:45:02 +01:00
Jérémy Lecour c2d08ed80e create and sync mtree files 2023-01-06 14:34:51 +01:00
Jérémy Lecour c3c98b64f2 Use bash array for list of paths to include 2023-01-06 14:33:20 +01:00
Jérémy Lecour 053c339e8f better comments 2023-01-05 13:45:17 +01:00
Jérémy Lecour d75d75cd4c Use an array to build the rsync commands, instead of eval 2023-01-04 23:32:12 +01:00
Jérémy Lecour 58f41963a7 store temp_files in TMPDIR instead of current directory 2023-01-04 14:51:10 +01:00
Jérémy Lecour f6c8d966d7 shellcheck 2023-01-04 14:20:12 +01:00
Jérémy Lecour 82df2b38e9 move variables around to simplify common usage 2023-01-04 14:19:48 +01:00
Jérémy Lecour f5660b1e46 doc 2023-01-04 12:34:17 +01:00
Jérémy Lecour 8d4105cf31 sync only the Rsync stats alongside the canary file 2023-01-04 11:34:42 +01:00
Jérémy Lecour a957498b6f push rsync log file with the canary file 2023-01-04 09:40:26 +01:00
Jérémy Lecour 17c2868fee shellcheck fixes 2023-01-04 09:20:41 +01:00
Jérémy Lecour c3f65a1722 extract variables 2023-01-04 09:19:47 +01:00
Jérémy Lecour 9ee784509d add whitespace to align log outputs with start/stop 2023-01-04 09:16:00 +01:00
Jérémy Lecour b6d50cc921 remove trailing slash in dump_dir 2023-01-04 09:15:26 +01:00
Jérémy Lecour 0235906546 fix dump_file 2023-01-04 09:13:46 +01:00
Jérémy Lecour b1c5b693ee Output error file if size is not null 2023-01-04 09:13:22 +01:00
Jérémy Lecour 65ba8695ad Add documentation comments 2023-01-04 07:45:47 +01:00
Jérémy Lecour c6a89cbc32 Reorder functions 2023-01-04 07:35:26 +01:00
Jérémy Lecour c368c9b11a typo 2023-01-03 23:50:34 +01:00
Jérémy Lecour 910a7398fb error codes 2023-01-03 23:50:23 +01:00
Jérémy Lecour e3c7da32a9 Add logs and error control 2023-01-03 23:30:50 +01:00
Jérémy Lecour 4496ea883a explicit canary file 2023-01-03 09:59:13 +01:00
Jérémy Lecour cb5c842979 Extract functions for each local task
2023-01-01 23:04:44 +01:00
9 changed files with 2314 additions and 605 deletions


@@ -10,6 +10,8 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### Changed
* split functions into libraries
### Deprecated
### Removed


@@ -1,3 +0,0 @@
For the installation of `zzz_evobackup`, see <https://intra.evolix.net/Installation_jail_backup_Evolix#installation-du-client-evobackup>
For `update-evobackup-canary`, see <https://intra.evolix.net/OutilsInternes/update-evobackup-canary>


@@ -0,0 +1,49 @@
---
- hosts: all
gather_facts: yes
become: yes
vars:
evobackup_script_path: /etc/cron.daily/zzz_evobackup_beta
evobackup_mail: alert4@evolix.net
evobackup_libdir: "/usr/local/lib/evobackup"
tasks:
- name: LIBDIR is present
file:
path: "{{ evobackup_libdir }}"
state: directory
- name: libraries are installed
copy:
src: "{{ item }}"
dest: "{{ evobackup_libdir }}/"
remote_src: False
owner: root
group: root
mode: "0640"
force: yes
loop: "{{ lookup('fileglob', 'lib/*.sh', wantlist=True) }}"
- name: script is present
copy:
src: zzz_evobackup.sh
dest: "{{ evobackup_script_path }}"
remote_src: False
owner: root
group: root
mode: "0750"
force: no
- name: Email is customized
replace:
dest: "{{ evobackup_script_path }}"
regexp: "^MAIL=.*"
replace: "MAIL={{ evobackup_mail }}"
- name: LIBDIR is customized
replace:
dest: "{{ evobackup_script_path }}"
regexp: "^LIBDIR=.*"
replace: "LIBDIR=\"{{ evobackup_libdir }}\""

1476
client/lib/dump.sh Normal file

@@ -0,0 +1,1476 @@
#!/bin/bash
# shellcheck disable=SC2034,SC2317
mysql_list_databases() {
port=${1:-"3306"}
mysql --defaults-extra-file=/etc/mysql/debian.cnf --port="${port}" --execute="show databases" --silent --skip-column-names \
| grep --extended-regexp --invert-match "^(Database|information_schema|performance_schema|sys)"
}
### BEGIN Dump functions ####
#######################################################################
# Dump LDAP files (config, data, all)
#
# Arguments: <none>
#######################################################################
dump_ldap() {
## OpenLDAP : example with slapcat
local dump_dir="${LOCAL_BACKUP_DIR}/ldap"
rm -rf "${dump_dir}"
# shellcheck disable=SC2174
mkdir -p -m 700 "${dump_dir}"
log "LOCAL_TASKS - start dump_ldap to ${dump_dir}"
slapcat -n 0 -l "${dump_dir}/config.bak"
slapcat -n 1 -l "${dump_dir}/data.bak"
slapcat -l "${dump_dir}/all.bak"
log "LOCAL_TASKS - stop dump_ldap"
}
#######################################################################
# Dump a single compressed file of all databases of an instance
#
# Arguments:
# --masterdata (default: <absent>)
# --port=[Integer] (default: 3306)
#######################################################################
dump_mysql_global() {
local dump_dir="${LOCAL_BACKUP_DIR}/mysql-global"
local errors_dir=$(errors_dir_from_dump_dir "${dump_dir}")
rm -rf "${dump_dir}" "${errors_dir}"
# shellcheck disable=SC2174
mkdir -p -m 700 "${dump_dir}" "${errors_dir}"
local error_file="${errors_dir}/mysql.bak.err"
local dump_file="${dump_dir}/mysql.bak.gz"
log "LOCAL_TASKS - start ${dump_file}"
local option_masterdata=""
local option_port="3306"
# Parse options, based on https://gist.github.com/deshion/10d3cb5f88a21671e17a
while :; do
case ${1:-''} in
--masterdata)
option_masterdata="--masterdata"
;;
--port)
# port options, with value separated by space
if [ -n "$2" ]; then
option_port="${2}"
shift
else
log_error "LOCAL_TASKS - '--port' requires a non-empty option argument."
exit 1
fi
;;
--port=?*)
# port options, with value separated by =
option_port="${1#*=}"
;;
--port=)
# port options, without value
log_error "LOCAL_TASKS - '--port' requires a non-empty option argument."
exit 1
;;
--)
# End of all options.
shift
break
;;
-?*|[[:alnum:]]*)
# ignore unknown options
log_error "LOCAL_TASKS - unkwnown option (ignored): '${1}'"
;;
*)
# Default case: If no more options then break out of the loop.
break
;;
esac
shift
done
declare -a options
options=()
options+=(--defaults-extra-file=/etc/mysql/debian.cnf)
options+=(--port="${option_port}")
options+=(--opt)
options+=(--force)
options+=(--events)
options+=(--hex-blob)
options+=(--all-databases)
if [ -n "${option_masterdata}" ]; then
options+=("${option_masterdata}")
fi
mysqldump "${options[@]}" 2> "${error_file}" | gzip --best > "${dump_file}"
local last_rc=$?
# shellcheck disable=SC2086
if [ ${last_rc} -ne 0 ]; then
log_error "LOCAL_TASKS - mysqldump to ${dump_file} returned an error ${last_rc}" "${error_file}"
GLOBAL_RC=${E_DUMPFAILED}
else
rm -f "${error_file}"
fi
log "LOCAL_TASKS - stop ${dump_file}"
}
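A usage sketch (the port value is illustrative): both option styles handled by the parser above are accepted, for example from a local_tasks block:

# "--port 3307" and "--port=3307" are equivalent here
dump_mysql_global --masterdata --port=3307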
#######################################################################
# Dump a compressed file per database of an instance
#
# Arguments:
# --port=[Integer] (default: 3306)
#######################################################################
dump_mysql_per_base() {
local dump_dir="${LOCAL_BACKUP_DIR}/mysql-per-base"
local errors_dir=$(errors_dir_from_dump_dir "${dump_dir}")
rm -rf "${dump_dir}" "${errors_dir}"
# shellcheck disable=SC2174
mkdir -p -m 700 "${dump_dir}" "${errors_dir}"
local option_port="3306"
# Parse options, based on https://gist.github.com/deshion/10d3cb5f88a21671e17a
while :; do
case ${1:-''} in
--port)
# port options, with value separated by space
if [ -n "$2" ]; then
option_port="${2}"
shift
else
log_error "LOCAL_TASKS - '--port' requires a non-empty option argument."
exit 1
fi
;;
--port=?*)
# port options, with value separated by =
option_port="${1#*=}"
;;
--port=)
# port options, without value
log_error "LOCAL_TASKS - '--port' requires a non-empty option argument."
exit 1
;;
--)
# End of all options.
shift
break
;;
-?*|[[:alnum:]]*)
# ignore unknown options
log_error "LOCAL_TASKS - unkwnown option (ignored): '${1}'"
;;
*)
# Default case: If no more options then break out of the loop.
break
;;
esac
shift
done
declare -a options
options=()
options+=(--defaults-extra-file=/etc/mysql/debian.cnf)
options+=(--port="${option_port}")
options+=(--force)
options+=(--events)
options+=(--hex-blob)
databases=$(mysql_list_databases ${option_port})
for database in ${databases}; do
local error_file="${errors_dir}/${database}.err"
local dump_file="${dump_dir}/${database}.sql.gz"
log "LOCAL_TASKS - start ${dump_file}"
mysqldump "${options[@]}" "${database}" 2> "${error_file}" | gzip --best > "${dump_file}"
local last_rc=$?
# shellcheck disable=SC2086
if [ ${last_rc} -ne 0 ]; then
log_error "LOCAL_TASKS - mysqldump to ${dump_file} returned an error ${last_rc}" "${error_file}"
GLOBAL_RC=${E_DUMPFAILED}
else
rm -f "${error_file}"
fi
log "LOCAL_TASKS - stop ${dump_file}"
done
}
#######################################################################
# Dump grants, variables and database schemas for an instance
#
# Arguments:
# --port=[Integer] (default: 3306)
#######################################################################
dump_mysql_meta() {
local dump_dir="${LOCAL_BACKUP_DIR}/mysql-meta"
local errors_dir=$(errors_dir_from_dump_dir "${dump_dir}")
rm -rf "${dump_dir}" "${errors_dir}"
# shellcheck disable=SC2174
mkdir -p -m 700 "${dump_dir}" "${errors_dir}"
local option_port="3306"
# Parse options, based on https://gist.github.com/deshion/10d3cb5f88a21671e17a
while :; do
case ${1:-''} in
--port)
# port options, with value separated by space
if [ -n "$2" ]; then
option_port="${2}"
shift
else
log_error "LOCAL_TASKS - '--port' requires a non-empty option argument."
exit 1
fi
;;
--port=?*)
# port options, with value separated by =
option_port="${1#*=}"
;;
--port=)
# port options, without value
log_error "LOCAL_TASKS - '--port' requires a non-empty option argument."
exit 1
;;
--)
# End of all options.
shift
break
;;
-?*|[[:alnum:]]*)
# ignore unknown options
log_error "LOCAL_TASKS - unkwnown option (ignored): '${1}'"
;;
*)
# Default case: If no more options then break out of the loop.
break
;;
esac
shift
done
## Dump all grants (requires 'percona-toolkit' package)
local error_file="${errors_dir}/all_grants.err"
local dump_file="${dump_dir}/all_grants.sql"
log "LOCAL_TASKS - start ${dump_file}"
declare -a options
options=()
options+=(--port "${option_port}")
options+=(--flush)
options+=(--no-header)
pt-show-grants "${options[@]}" 2> "${error_file}" > "${dump_file}"
local last_rc=$?
# shellcheck disable=SC2086
if [ ${last_rc} -ne 0 ]; then
log_error "LOCAL_TASKS - pt-show-grants to ${dump_file} returned an error ${last_rc}" "${error_file}"
GLOBAL_RC=${E_DUMPFAILED}
else
rm -f "${error_file}"
fi
log "LOCAL_TASKS - stop ${dump_file}"
## Dump all variables
local error_file="${errors_dir}/variables.err"
local dump_file="${dump_dir}/variables.txt"
log "LOCAL_TASKS - start ${dump_file}"
declare -a options
options=()
options+=(--port="${option_port}")
options+=(--no-auto-rehash)
options+=(-e "SHOW GLOBAL VARIABLES;")
mysql "${options[@]}" 2> "${error_file}" > "${dump_file}"
local last_rc=$?
# shellcheck disable=SC2086
if [ ${last_rc} -ne 0 ]; then
log_error "LOCAL_TASKS - mysql 'show variables' returned an error ${last_rc}" "${error_file}"
GLOBAL_RC=${E_DUMPFAILED}
else
rm -f "${error_file}"
fi
log "LOCAL_TASKS - stop ${dump_file}"
## Schema only (no data) for each database
databases=$(mysql_list_databases "${option_port}")
for database in ${databases}; do
local error_file="${errors_dir}/${database}.schema.err"
local dump_file="${dump_dir}/${database}.schema.sql"
log "LOCAL_TASKS - start ${dump_file}"
declare -a options
options=()
options+=(--defaults-extra-file=/etc/mysql/debian.cnf)
options+=(--port="${option_port}")
options+=(--force)
options+=(--no-data)
options+=(--databases "${database}")
mysqldump "${options[@]}" 2> "${error_file}" > "${dump_file}"
local last_rc=$?
# shellcheck disable=SC2086
if [ ${last_rc} -ne 0 ]; then
log_error "LOCAL_TASKS - mysqldump to ${dump_file} returned an error ${last_rc}" "${error_file}"
GLOBAL_RC=${E_DUMPFAILED}
else
rm -f "${error_file}"
fi
log "LOCAL_TASKS - stop ${dump_file}"
done
}
#######################################################################
# Dump "tabs style" separate schema/data for each database of an instance
#
# Arguments:
# --port=[Integer] (default: 3306)
#######################################################################
dump_mysql_tabs() {
databases=$(mysql_list_databases 3306)
for database in ${databases}; do
local dump_dir="${LOCAL_BACKUP_DIR}/mysql-tabs/${database}"
local errors_dir=$(errors_dir_from_dump_dir "${dump_dir}")
rm -rf "${dump_dir}" "${errors_dir}"
# shellcheck disable=SC2174
mkdir -p -m 700 "${dump_dir}" "${errors_dir}"
chown -RL mysql "${dump_dir}"
local error_file="${errors_dir}.err"
log "LOCAL_TASKS - start ${dump_dir}"
local option_port="3306"
# Parse options, based on https://gist.github.com/deshion/10d3cb5f88a21671e17a
while :; do
case ${1:-''} in
--port)
# port options, with value separated by space
if [ -n "$2" ]; then
option_port="${2}"
shift
else
log_error "LOCAL_TASKS - '--port' requires a non-empty option argument."
exit 1
fi
;;
--port=?*)
# port options, with value separated by =
option_port="${1#*=}"
;;
--port=)
# port options, without value
log_error "LOCAL_TASKS - '--port' requires a non-empty option argument."
exit 1
;;
--)
# End of all options.
shift
break
;;
-?*|[[:alnum:]]*)
# ignore unknown options
log_error "LOCAL_TASKS - unkwnown option (ignored): '${1}'"
printf 'WARN: Unknown option (ignored): %s\n' "$1" >&2
;;
*)
# Default case: If no more options then break out of the loop.
break
;;
esac
shift
done
declare -a options
options=()
options+=(--defaults-extra-file=/etc/mysql/debian.cnf)
options+=(--port="${option_port}")
options+=(--force)
options+=(--quote-names)
options+=(--opt)
options+=(--events)
options+=(--hex-blob)
options+=(--skip-comments)
options+=(--fields-enclosed-by='\"')
options+=(--fields-terminated-by=',')
options+=(--tab="${dump_dir}")
options+=("${database}")
mysqldump "${options[@]}" 2> "${error_file}"
local last_rc=$?
# shellcheck disable=SC2086
if [ ${last_rc} -ne 0 ]; then
log_error "LOCAL_TASKS - mysqldump to ${dump_dir} returned an error ${last_rc}" "${error_file}"
GLOBAL_RC=${E_DUMPFAILED}
else
rm -f "${error_file}"
fi
log "LOCAL_TASKS - stop ${dump_dir}"
done
}
#######################################################################
# Dump a single file for all databases of an instance
# using a custom authentication, instead of /etc/mysql/debian.cnf
#
# Arguments:
# --port=[Integer] (default: 3306)
# --user=[String] (default: <blank>)
# --password=[String] (default: <blank>)
#######################################################################
dump_mysql_instance() {
local dump_dir="${LOCAL_BACKUP_DIR}/mysql-instances"
local errors_dir=$(errors_dir_from_dump_dir "${dump_dir}")
rm -rf "${dump_dir}" "${errors_dir}"
# shellcheck disable=SC2174
mkdir -p -m 700 "${dump_dir}" "${errors_dir}"
local option_port=""
local option_user=""
local option_password=""
# Parse options, based on https://gist.github.com/deshion/10d3cb5f88a21671e17a
while :; do
case ${1:-''} in
--port)
# port options, with value separated by space
if [ -n "$2" ]; then
option_port="${2}"
shift
else
log_error "LOCAL_TASKS - '--port' requires a non-empty option argument."
exit 1
fi
;;
--port=?*)
# port options, with value separated by =
option_port="${1#*=}"
;;
--port=)
# port options, without value
log_error "LOCAL_TASKS - '--port' requires a non-empty option argument."
exit 1
;;
--user)
# user options, with value separated by space
if [ -n "$2" ]; then
option_user="${2}"
shift
else
log_error "LOCAL_TASKS - '--user' requires a non-empty option argument."
exit 1
fi
;;
--user=?*)
# user options, with value separated by =
option_user="${1#*=}"
;;
--user=)
# user options, without value
log_error "LOCAL_TASKS - '--user' requires a non-empty option argument."
exit 1
;;
--password)
# password options, with value separated by space
if [ -n "$2" ]; then
option_password="${2}"
shift
else
log_error "LOCAL_TASKS - '--password' requires a non-empty option argument."
exit 1
fi
;;
--password=?*)
# password options, with value separated by =
option_password="${1#*=}"
;;
--password=)
# password options, without value
log_error "LOCAL_TASKS - '--password' requires a non-empty option argument."
exit 1
;;
--)
# End of all options.
shift
break
;;
-?*|[[:alnum:]]*)
# ignore unknown options
log_error "LOCAL_TASKS - unkwnown option (ignored): '${1}'"
;;
*)
# Default case: If no more options then break out of the loop.
break
;;
esac
shift
done
declare -a options
options=()
options+=(--port="${option_port}")
options+=(--user="${option_user}")
options+=(--password="${option_password}")
options+=(--force)
options+=(--opt)
options+=(--all-databases)
options+=(--events)
options+=(--hex-blob)
local error_file="${errors_dir}/${option_port}.err"
local dump_file="${dump_dir}/${option_port}.bak.gz"
log "LOCAL_TASKS - start ${dump_file}"
mysqldump "${options[@]}" 2> "${error_file}" | gzip --best > "${dump_file}"
local last_rc=$?
# shellcheck disable=SC2086
if [ ${last_rc} -ne 0 ]; then
log_error "LOCAL_TASKS - mysqldump to ${dump_file} returned an error ${last_rc}" "${error_file}"
GLOBAL_RC=${E_DUMPFAILED}
else
rm -f "${error_file}"
fi
log "LOCAL_TASKS - stop ${dump_file}"
}
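An illustrative call for an extra instance that cannot rely on /etc/mysql/debian.cnf (port and credentials are placeholders):

dump_mysql_instance --port=3307 --user=backup --password=PASSWORD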
#######################################################################
# Dump a single file of all PostgreSQL databases
#
# Arguments: <none>
#######################################################################
dump_postgresql_global() {
local dump_dir="${LOCAL_BACKUP_DIR}/postgresql-global"
local errors_dir=$(errors_dir_from_dump_dir "${dump_dir}")
rm -rf "${dump_dir}" "${errors_dir}"
# shellcheck disable=SC2174
mkdir -p -m 700 "${dump_dir}" "${errors_dir}"
## example with pg_dumpall and with compression
local dump_file="${dump_dir}/pg.dump.bak.gz"
log "LOCAL_TASKS - start ${dump_file}"
(sudo -u postgres pg_dumpall) 2> "${error_file}" | gzip --best > "${dump_file}"
local last_rc=$?
# shellcheck disable=SC2086
if [ ${last_rc} -ne 0 ]; then
log_error "LOCAL_TASKS - pg_dumpall to ${dump_file} returned an error ${last_rc}" "${error_file}"
GLOBAL_RC=${E_DUMPFAILED}
else
rm -f "${error_file}"
fi
log "LOCAL_TASKS - stop ${dump_file}"
## example with pg_dumpall and without compression
## WARNING: you need space in ~postgres
# local dump_file="${dump_dir}/pg.dump.bak"
# log "LOCAL_TASKS - start ${dump_file}"
#
# (su - postgres -c "pg_dumpall > ~/pg.dump.bak") 2> "${error_file}"
# mv ~postgres/pg.dump.bak "${dump_file}"
#
# log "LOCAL_TASKS - stop ${dump_file}"
}
#######################################################################
# Dump a compressed file per database
#
# Arguments: <none>
#######################################################################
dump_postgresql_per_base() {
local dump_dir="${LOCAL_BACKUP_DIR}/postgresql-per-base"
local errors_dir=$(errors_dir_from_dump_dir "${dump_dir}")
rm -rf "${dump_dir}" "${errors_dir}"
# shellcheck disable=SC2174
mkdir -p -m 700 "${dump_dir}" "${errors_dir}"
(
# shellcheck disable=SC2164
cd /var/lib/postgresql
databases=$(sudo -u postgres psql -U postgres -lt | awk -F\| '{print $1}' | grep -v "template.*")
for database in ${databases} ; do
local error_file="${errors_dir}/${database}.err"
local dump_file="${dump_dir}/${database}.sql.gz"
log "LOCAL_TASKS - start ${dump_file}"
(sudo -u postgres /usr/bin/pg_dump --create -s -U postgres -d "${database}") 2> "${error_file}" | gzip --best > "${dump_file}"
local last_rc=$?
# shellcheck disable=SC2086
if [ ${last_rc} -ne 0 ]; then
log_error "LOCAL_TASKS - pg_dump to ${dump_file} returned an error ${last_rc}" "${error_file}"
GLOBAL_RC=${E_DUMPFAILED}
else
rm -f "${error_file}"
fi
log "LOCAL_TASKS - stop ${dump_file}"
done
)
}
#######################################################################
# Dump a compressed file per database
#
# Arguments: <none>
#
# TODO: add arguments to include/exclude tables
#######################################################################
dump_postgresql_filtered() {
local dump_dir="${LOCAL_BACKUP_DIR}/postgresql-filtered"
local errors_dir=$(errors_dir_from_dump_dir "${dump_dir}")
rm -rf "${dump_dir}" "${errors_dir}"
# shellcheck disable=SC2174
mkdir -p -m 700 "${dump_dir}" "${errors_dir}"
local error_file="${errors_dir}/pg-backup.err"
local dump_file="${dump_dir}/pg-backup.tar"
log "LOCAL_TASKS - start ${dump_file}"
## example with all tables from MYBASE except TABLE1 and TABLE2
# pg_dump -p 5432 -h 127.0.0.1 -U USER --clean -F t --inserts -f "${dump_file}" -T 'TABLE1' -T 'TABLE2' MYBASE 2> "${error_file}"
## example with only TABLE1 and TABLE2 from MYBASE
# pg_dump -p 5432 -h 127.0.0.1 -U USER --clean -F t --inserts -f "${dump_file}" -t 'TABLE1' -t 'TABLE2' MYBASE 2> "${error_file}"
local last_rc=$?
# shellcheck disable=SC2086
if [ ${last_rc} -ne 0 ]; then
log_error "LOCAL_TASKS - pg_dump to ${dump_file} returned an error ${last_rc}" "${error_file}"
GLOBAL_RC=${E_DUMPFAILED}
else
rm -f "${error_file}"
fi
log "LOCAL_TASKS - stop ${dump_file}"
}
#######################################################################
# Copy dump file of Redis instances
#
# Arguments:
# --instances=[String] (default: all)
#######################################################################
dump_redis() {
all_instances=$(find /var/lib/ -mindepth 1 -maxdepth 1 -type d -name 'redis*')
local option_instances=""
# Parse options, based on https://gist.github.com/deshion/10d3cb5f88a21671e17a
while :; do
case ${1:-''} in
--instances)
# instances options, with key and value separated by space
if [ -n "$2" ]; then
if [ "${2}" == "all" ]; then
read -a option_instances <<< "${all_instances}"
else
IFS="," read -a option_instances <<< "${2}"
fi
shift
else
log_error "LOCAL_TASKS - '--instances' requires a non-empty option argument."
exit 1
fi
;;
--instances=?*)
# instances options, with key and value separated by =
if [ "${1#*=}" == "all" ]; then
read -a option_instances <<< "${all_instances}"
else
IFS="," read -a option_instances <<< "${1#*=}"
fi
;;
--instances=)
# instances options, without value
log_error "LOCAL_TASKS - '--instances' requires a non-empty option argument."
exit 1
;;
--)
# End of all options.
shift
break
;;
-?*|[[:alnum:]]*)
# ignore unknown options
log_error "LOCAL_TASKS - unkwnown option (ignored): '${1}'"
;;
*)
# Default case: If no more options then break out of the loop.
break
;;
esac
shift
done
for instance in "${option_instances[@]}"; do
name=$(basename "${instance}")
local dump_dir="${LOCAL_BACKUP_DIR}/${name}"
local errors_dir=$(errors_dir_from_dump_dir "${dump_dir}")
rm -rf "${dump_dir}" "${errors_dir}"
# shellcheck disable=SC2174
mkdir -p -m 700 "${dump_dir}" "${errors_dir}"
if [ -f "${instance}/dump.rdb" ]; then
local error_file="${errors_dir}/${instance}.err"
log "LOCAL_TASKS - start ${dump_dir}"
cp -a "${instance}/dump.rdb" "${dump_dir}/" 2> "${error_file}"
local last_rc=$?
# shellcheck disable=SC2086
if [ ${last_rc} -ne 0 ]; then
log_error "LOCAL_TASKS - cp ${instance}/dump.rdb to ${dump_dir} returned an error ${last_rc}" "${error_file}"
GLOBAL_RC=${E_DUMPFAILED}
else
rm -f "${error_file}"
fi
log "LOCAL_TASKS - stop ${dump_dir}"
else
log_error "LOCAL_TASKS - '${instance}/dump.rdb' not found."
fi
done
}
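Each entry of the instance list is later checked for <instance>/dump.rdb, so explicit values are instance paths; an illustrative call (paths are placeholders):

# every /var/lib/redis* instance
dump_redis --instances=all
# or an explicit list of instance paths
dump_redis --instances=/var/lib/redis,/var/lib/redis-cache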
#######################################################################
# Dump all collections of a MongoDB database
# using custom authentication
#
# Arguments:
# --user=[String] (default: <blank>)
# --password=[String] (default: <blank>)
#######################################################################
dump_mongodb() {
## don't forget to create a user with read-only access
## > use admin
## > db.createUser( { user: "mongobackup", pwd: "PASS", roles: [ "backup", ] } )
local dump_dir="${LOCAL_BACKUP_DIR}/mongodump"
local errors_dir=$(errors_dir_from_dump_dir "${dump_dir}")
rm -rf "${dump_dir}" "${errors_dir}"
# shellcheck disable=SC2174
mkdir -p -m 700 "${dump_dir}" "${errors_dir}"
local error_file="${errors_dir}.err"
log "LOCAL_TASKS - start ${dump_dir}"
local option_user=""
local option_password=""
# Parse options, based on https://gist.github.com/deshion/10d3cb5f88a21671e17a
while :; do
case ${1:-''} in
--user)
# user options, with value separated by space
if [ -n "$2" ]; then
option_user="${2}"
shift
else
log_error "LOCAL_TASKS - '--user' requires a non-empty option argument."
exit 1
fi
;;
--user=?*)
# user options, with value separated by =
option_user="${1#*=}"
;;
--user=)
# user options, without value
log_error "LOCAL_TASKS - '--user' requires a non-empty option argument."
exit 1
;;
--password)
# password options, with value separated by space
if [ -n "$2" ]; then
option_password="${2}"
shift
else
log_error "LOCAL_TASKS - '--password' requires a non-empty option argument."
exit 1
fi
;;
--password=?*)
# password options, with value separated by =
option_password="${1#*=}"
;;
--password=)
# password options, without value
log_error "LOCAL_TASKS - '--password' requires a non-empty option argument."
exit 1
;;
--)
# End of all options.
shift
break
;;
-?*|[[:alnum:]]*)
# ignore unknown options
log_error "LOCAL_TASKS - unkwnown option (ignored): '${1}'"
;;
*)
# Default case: If no more options then break out of the loop.
break
;;
esac
shift
done
declare -a options
options=()
options+=(--username="${option_user}")
options+=(--password="${option_password}")
options+=(--out="${dump_dir}/")
mongodump "${options[@]}" 2> "${error_file}" > /dev/null
local last_rc=$?
# shellcheck disable=SC2086
if [ ${last_rc} -ne 0 ]; then
log_error "LOCAL_TASKS - mongodump to ${dump_dir} returned an error ${last_rc}" "${error_file}"
GLOBAL_RC=${E_DUMPFAILED}
else
rm -f "${error_file}"
fi
log "LOCAL_TASKS - stop ${dump_dir}"
}
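Matching the createUser example in the comments above, a call would look like this (the password is a placeholder):

dump_mongodb --user=mongobackup --password=PASS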
#######################################################################
# Dump MegaCLI configuration
#
# Arguments: <none>
#######################################################################
dump_megacli_config() {
local dump_dir="${LOCAL_BACKUP_DIR}/megacli"
local errors_dir=$(errors_dir_from_dump_dir "${dump_dir}")
rm -rf "${dump_dir}" "${errors_dir}"
# shellcheck disable=SC2174
mkdir -p -m 700 "${dump_dir}" "${errors_dir}"
local dump_file="${dump_dir}/megacli.cfg"
local error_file="${errors_dir}/megacli.err"
log "LOCAL_TASKS - start ${dump_file}"
megacli -CfgSave -f "${dump_file}" -a0 2> "${error_file}" > /dev/null
local last_rc=$?
# shellcheck disable=SC2086
if [ ${last_rc} -ne 0 ]; then
log_error "LOCAL_TASKS - megacli to ${dump_file} returned an error ${last_rc}" "${error_file}"
GLOBAL_RC=${E_DUMPFAILED}
else
rm -f "${error_file}"
fi
log "LOCAL_TASKS - stop ${dump_file}"
}
#######################################################################
# Save some traceroute/mtr results
#
# Arguments:
# --targets=[IP,HOST] (default: <none>)
#######################################################################
dump_traceroute() {
local dump_dir="${LOCAL_BACKUP_DIR}/traceroute"
local errors_dir=$(errors_dir_from_dump_dir "${dump_dir}")
rm -rf "${dump_dir}" "${errors_dir}"
# shellcheck disable=SC2174
mkdir -p -m 700 "${dump_dir}" "${errors_dir}"
local option_targets=""
# Parse options, based on https://gist.github.com/deshion/10d3cb5f88a21671e17a
while :; do
case ${1:-''} in
--targets)
# targets options, with key and value separated by space
if [ -n "$2" ]; then
IFS="," read -a option_targets <<< "${2}"
shift
else
log_error "LOCAL_TASKS - '--targets' requires a non-empty option argument."
exit 1
fi
;;
--targets=?*)
# targets options, with key and value separated by =
IFS="," read -a option_targets <<< "${1#*=}"
;;
--targets=)
# targets options, without value
log_error "LOCAL_TASKS - '--targets' requires a non-empty option argument."
exit 1
;;
--)
# End of all options.
shift
break
;;
-?*|[[:alnum:]]*)
# ignore unknown options
log_error "LOCAL_TASKS - unkwnown option (ignored): '${1}'"
;;
*)
# Default case: If no more options then break out of the loop.
break
;;
esac
shift
done
mtr_bin=$(command -v mtr)
if [ -n "${mtr_bin}" ]; then
for target in "${option_targets[@]}"; do
local dump_file="${dump_dir}/mtr-${target}"
log "LOCAL_TASKS - start ${dump_file}"
${mtr_bin} -r "${target}" > "${dump_file}"
log "LOCAL_TASKS - stop ${dump_file}"
done
fi
traceroute_bin=$(command -v traceroute)
if [ -n "${traceroute_bin}" ]; then
for target in "${option_targets[@]}"; do
local dump_file="${dump_dir}/traceroute-${target}"
log "LOCAL_TASKS - start ${dump_file}"
${traceroute_bin} -n "${target}" > "${dump_file}" 2>&1
log "LOCAL_TASKS - stop ${dump_file}"
done
fi
}
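An illustrative call, with placeholder targets (comma-separated, as parsed above):

dump_traceroute --targets=8.8.8.8,www.example.com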
#######################################################################
# Save various system information, using dump-server-state
#
# Arguments:
# any option for dump-server-state (except --dump-dir) is usable
# (default: --all)
#######################################################################
dump_server_state() {
local dump_dir="${LOCAL_BACKUP_DIR}/server-state"
rm -rf "${dump_dir}"
# Do not create the directory
# shellcheck disable=SC2174
# mkdir -p -m 700 "${dump_dir}"
log "LOCAL_TASKS - start ${dump_dir}"
# pass all options
read -a options <<< "${@}"
# if no option is given, use "--all" as fallback
if [ ${#options[@]} -le 0 ]; then
options=(--all)
fi
# add "--dump-dir" in case it is missing (as it should)
options+=(--dump-dir "${dump_dir}")
dump_server_state_bin=$(command -v dump-server-state)
if [ -z "${dump_server_state_bin}" ]; then
log_error "LOCAL_TASKS - dump-server-state is missing"
rc=1
else
${dump_server_state_bin} "${options[@]}"
local last_rc=$?
# shellcheck disable=SC2086
if [ ${last_rc} -ne 0 ]; then
log_error "LOCAL_TASKS - dump-server-state returned an error ${last_rc}, check ${dump_dir}"
GLOBAL_RC=${E_DUMPFAILED}
fi
fi
log "LOCAL_TASKS - stop ${dump_dir}"
}
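Any dump-server-state option except --dump-dir can be forwarded; with no argument the function falls back to --all, so these two calls behave the same:

dump_server_state
dump_server_state --all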
#######################################################################
# Save RabbitMQ data
#
# Arguments: <none>
#
# Warning: This has been poorly tested
#######################################################################
dump_rabbitmq() {
local dump_dir="${LOCAL_BACKUP_DIR}/rabbitmq"
local errors_dir=$(errors_dir_from_dump_dir "${dump_dir}")
rm -rf "${dump_dir}" "${errors_dir}"
# shellcheck disable=SC2174
mkdir -p -m 700 "${dump_dir}" "${errors_dir}"
local error_file="${errors_dir}.err"
local dump_file="${dump_dir}/config"
log "LOCAL_TASKS - start ${dump_file}"
rabbitmqadmin export "${dump_file}" 2> "${error_file}" >> "${LOGFILE}"
local last_rc=$?
# shellcheck disable=SC2086
if [ ${last_rc} -ne 0 ]; then
log_error "LOCAL_TASKS - pg_dump to ${dump_file} returned an error ${last_rc}" "${error_file}"
GLOBAL_RC=${E_DUMPFAILED}
else
rm -f "${error_file}"
fi
log "LOCAL_TASKS - stop ${dump_file}"
}
#######################################################################
# Save file ACLs on various partitions.
#
# Arguments: <none>
#######################################################################
dump_facl() {
local dump_dir="${LOCAL_BACKUP_DIR}/facl"
local errors_dir=$(errors_dir_from_dump_dir "${dump_dir}")
rm -rf "${dump_dir}" "${errors_dir}"
# shellcheck disable=SC2174
mkdir -p -m 700 "${dump_dir}" "${errors_dir}"
log "LOCAL_TASKS - start ${dump_dir}"
getfacl -R /etc > "${dump_dir}/etc.txt"
getfacl -R /home > "${dump_dir}/home.txt"
getfacl -R /usr > "${dump_dir}/usr.txt"
getfacl -R /var > "${dump_dir}/var.txt"
log "LOCAL_TASKS - stop ${dump_dir}"
}
#######################################################################
# Snapshot Elasticsearch data (single-node cluster)
#
# Arguments:
# --protocol=[String] (default: http)
# --host=[String] (default: localhost)
# --port=[Integer] (default: 9200)
# --user=[String] (default: <none>)
# --password=[String] (default: <none>)
# --repository=[String] (default: snaprepo)
# --snapshot=[String] (default: snapshot.daily)
#######################################################################
dump_elasticsearch_snapshot_singlenode() {
log "LOCAL_TASKS - start dump_elasticsearch_snapshot_singlenode"
local option_protocol="http"
local option_host="localhost"
local option_port="9200"
local option_user=""
local option_password=""
local option_repository="snaprepo"
local option_snapshot="snapshot.daily"
# Parse options, based on https://gist.github.com/deshion/10d3cb5f88a21671e17a
while :; do
case ${1:-''} in
--protocol)
# protocol options, with value separated by space
if [ -n "$2" ]; then
option_protocol="${2}"
shift
else
log_error "LOCAL_TASKS - '--protocol' requires a non-empty option argument."
exit 1
fi
;;
--protocol=?*)
# protocol options, with value separated by =
option_protocol="${1#*=}"
;;
--protocol=)
# protocol options, without value
log_error "LOCAL_TASKS - '--protocol' requires a non-empty option argument."
exit 1
;;
--host)
# host options, with value separated by space
if [ -n "$2" ]; then
option_host="${2}"
shift
else
log_error "LOCAL_TASKS - '--host' requires a non-empty option argument."
exit 1
fi
;;
--host=?*)
# host options, with value separated by =
option_host="${1#*=}"
;;
--host=)
# host options, without value
log_error "LOCAL_TASKS - '--host' requires a non-empty option argument."
exit 1
;;
--port)
# port options, with value separated by space
if [ -n "$2" ]; then
option_port="${2}"
shift
else
log_error "LOCAL_TASKS - '--port' requires a non-empty option argument."
exit 1
fi
;;
--port=?*)
# port options, with value separated by =
option_port="${1#*=}"
;;
--port=)
# port options, without value
log_error "LOCAL_TASKS - '--port' requires a non-empty option argument."
exit 1
;;
--user)
# user options, with value separated by space
if [ -n "$2" ]; then
option_user="${2}"
shift
else
log_error "LOCAL_TASKS - '--user' requires a non-empty option argument."
exit 1
fi
;;
--user=?*)
# user options, with value separated by =
option_user="${1#*=}"
;;
--user=)
# user options, without value
log_error "LOCAL_TASKS - '--user' requires a non-empty option argument."
exit 1
;;
--password)
# password options, with value separated by space
if [ -n "$2" ]; then
option_password="${2}"
shift
else
log_error "LOCAL_TASKS - '--password' requires a non-empty option argument."
exit 1
fi
;;
--password=?*)
# password options, with value separated by =
option_password="${1#*=}"
;;
--password=)
# password options, without value
log_error "LOCAL_TASKS - '--password' requires a non-empty option argument."
exit 1
;;
--repository)
# repository options, with value separated by space
if [ -n "$2" ]; then
option_repository="${2}"
shift
else
log_error "LOCAL_TASKS - '--repository' requires a non-empty option argument."
exit 1
fi
;;
--repository=?*)
# repository options, with value separated by =
option_repository="${1#*=}"
;;
--repository=)
# repository options, without value
log_error "LOCAL_TASKS - '--repository' requires a non-empty option argument."
exit 1
;;
--snapshot)
# snapshot options, with value separated by space
if [ -n "$2" ]; then
option_snapshot="${2}"
shift
else
log_error "LOCAL_TASKS - '--snapshot' requires a non-empty option argument."
exit 1
fi
;;
--snapshot=?*)
# snapshot options, with value separated by =
option_snapshot="${1#*=}"
;;
--snapshot=)
# snapshot options, without value
log_error "LOCAL_TASKS - '--snapshot' requires a non-empty option argument."
exit 1
;;
--)
# End of all options.
shift
break
;;
-?*|[[:alnum:]]*)
# ignore unknown options
log_error "LOCAL_TASKS - unkwnown option (ignored): '${1}'"
;;
*)
# Default case: If no more options then break out of the loop.
break
;;
esac
shift
done
## Take a snapshot as a backup.
## Warning: You need to have a path.repo configured.
## See: https://wiki.evolix.org/HowtoElasticsearch#snapshots-et-sauvegardes
local base_url="${option_protocol}://${option_host}:${option_port}"
local snapshot_url="${base_url}/_snapshot/${option_repository}/${option_snapshot}"
if [ -n "${option_user}" ] || [ -n "${option_password}" ]; then
local option_auth="--user ${option_user}:${option_password}"
else
local option_auth=""
fi
curl -s -XDELETE "${option_auth}" "${snapshot_url}" >> "${LOGFILE}"
curl -s -XPUT "${option_auth}" "${snapshot_url}?wait_for_completion=true" >> "${LOGFILE}"
# Clustered version here
# It is basically the same thing, except that you need to check that NFS is mounted
# if ss | grep ':nfs' | grep -q 'ip\.add\.res\.s1' && ss | grep ':nfs' | grep -q 'ip\.add\.res\.s2'
# then
# curl -s -XDELETE "${option_auth}" "${snapshot_url}" >> "${LOGFILE}"
# curl -s -XPUT "${option_auth}" "${snapshot_url}?wait_for_completion=true" >> "${LOGFILE}"
# else
# echo 'Cannot make a snapshot of elasticsearch, at least one node is not mounting the repository.'
# fi
log "LOCAL_TASKS - stop dump_elasticsearch_snapshot_singlenode"
}
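An illustrative call with HTTP authentication (host, user and password are placeholders; the repository and snapshot shown are the defaults):

dump_elasticsearch_snapshot_singlenode --protocol=https --host=localhost --port=9200 \
    --user=elastic --password=PASSWORD --repository=snaprepo --snapshot=snapshot.daily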
#######################################################################
# Snapshot Elasticsearch data (multi-node cluster)
#
# Arguments:
# --protocol=[String] (default: http)
# --host=[String] (default: localhost)
# --port=[Integer] (default: 9200)
# --user=[String] (default: <none>)
# --password=[String] (default: <none>)
# --repository=[String] (default: snaprepo)
# --snapshot=[String] (default: snapshot.daily)
# --nfs-server=[IP|HOST] (default: <none>)
#######################################################################
dump_elasticsearch_snapshot_multinode() {
log "LOCAL_TASKS - start dump_elasticsearch_snapshot_multinode"
local option_protocol="http"
local option_host="localhost"
local option_port="9200"
local option_user=""
local option_password=""
local option_repository="snaprepo"
local option_snapshot="snapshot.daily"
local option_nfs_server=""
# Parse options, based on https://gist.github.com/deshion/10d3cb5f88a21671e17a
while :; do
case ${1:-''} in
--protocol)
# protocol options, with value separated by space
if [ -n "$2" ]; then
option_protocol="${2}"
shift
else
log_error "LOCAL_TASKS - '--protocol' requires a non-empty option argument."
exit 1
fi
;;
--protocol=?*)
# protocol options, with value separated by =
option_protocol="${1#*=}"
;;
--protocol=)
# protocol options, without value
log_error "LOCAL_TASKS - '--protocol' requires a non-empty option argument."
exit 1
;;
--host)
# host options, with value separated by space
if [ -n "$2" ]; then
option_host="${2}"
shift
else
log_error "LOCAL_TASKS - '--host' requires a non-empty option argument."
exit 1
fi
;;
--host=?*)
# host options, with value separated by =
option_host="${1#*=}"
;;
--host=)
# host options, without value
log_error "LOCAL_TASKS - '--host' requires a non-empty option argument."
exit 1
;;
--port)
# port options, with value separated by space
if [ -n "$2" ]; then
option_port="${2}"
shift
else
log_error "LOCAL_TASKS - '--port' requires a non-empty option argument."
exit 1
fi
;;
--port=?*)
# port options, with value separated by =
option_port="${1#*=}"
;;
--port=)
# port options, without value
log_error "LOCAL_TASKS - '--port' requires a non-empty option argument."
exit 1
;;
--user)
# user options, with value separated by space
if [ -n "$2" ]; then
option_user="${2}"
shift
else
log_error "LOCAL_TASKS - '--user' requires a non-empty option argument."
exit 1
fi
;;
--user=?*)
# user options, with value separated by =
option_user="${1#*=}"
;;
--user=)
# user options, without value
log_error "LOCAL_TASKS - '--user' requires a non-empty option argument."
exit 1
;;
--password)
# password options, with value separated by space
if [ -n "$2" ]; then
option_password="${2}"
shift
else
log_error "LOCAL_TASKS - '--password' requires a non-empty option argument."
exit 1
fi
;;
--password=?*)
# password options, with value separated by =
option_password="${1#*=}"
;;
--password=)
# password options, without value
log_error "LOCAL_TASKS - '--password' requires a non-empty option argument."
exit 1
;;
--repository)
# repository options, with value separated by space
if [ -n "$2" ]; then
option_repository="${2}"
shift
else
log_error "LOCAL_TASKS - '--repository' requires a non-empty option argument."
exit 1
fi
;;
--repository=?*)
# repository options, with value separated by =
option_repository="${1#*=}"
;;
--repository=)
# repository options, without value
log_error "LOCAL_TASKS - '--repository' requires a non-empty option argument."
exit 1
;;
--snapshot)
# snapshot options, with value separated by space
if [ -n "$2" ]; then
option_snapshot="${2}"
shift
else
log_error "LOCAL_TASKS - '--snapshot' requires a non-empty option argument."
exit 1
fi
;;
--snapshot=?*)
# snapshot options, with value separated by =
option_snapshot="${1#*=}"
;;
--snapshot=)
# snapshot options, without value
log_error "LOCAL_TASKS - '--snapshot' requires a non-empty option argument."
exit 1
;;
--nfs-server)
# nfs-server options, with value separated by space
if [ -n "$2" ]; then
option_nfs_server="${2}"
shift
else
log_error "LOCAL_TASKS - '--nfs-server' requires a non-empty option argument."
exit 1
fi
;;
--nfs-server=?*)
# nfs-server options, with value separated by =
option_nfs_server="${1#*=}"
;;
--nfs-server=)
# nfs-server options, without value
log_error "LOCAL_TASKS - '--nfs-server' requires a non-empty option argument."
exit 1
;;
--)
# End of all options.
shift
break
;;
-?*|[[:alnum:]]*)
# ignore unknown options
log_error "LOCAL_TASKS - unkwnown option (ignored): '${1}'"
;;
*)
# Default case: If no more options then break out of the loop.
break
;;
esac
shift
done
## Take a snapshot as a backup.
## Warning: You need to have a path.repo configured.
## See: https://wiki.evolix.org/HowtoElasticsearch#snapshots-et-sauvegardes
local base_url="${option_protocol}://${option_host}:${option_port}"
local snapshot_url="${base_url}/_snapshot/${option_repository}/${option_snapshot}"
if [ -n "${option_user}" ] || [ -n "${option_password}" ]; then
local option_auth="--user ${option_user}:${option_password}"
else
local option_auth=""
fi
# Clustered version here
# It is basically the same thing, except that you need to check that NFS is mounted
if ss | grep ':nfs' | grep -q -F "${option_nfs_server}"; then
curl -s -XDELETE "${option_auth}" "${snapshot_url}" >> "${LOGFILE}"
curl -s -XPUT "${option_auth}" "${snapshot_url}?wait_for_completion=true" >> "${LOGFILE}"
else
echo 'Cannot make a snapshot of elasticsearch, at least one node is not mounting the repository.'
fi
log "LOCAL_TASKS - stop dump_elasticsearch_snapshot_multinode"
}
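Taken together, a local_tasks function in the zzz_evobackup stub can simply chain a selection of these helpers; which dumps to enable is site-specific and the list below is only a sketch:

local_tasks() {
    dump_server_state
    dump_facl
    dump_mysql_global
    dump_ldap
}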

445
client/lib/main.sh Normal file

@@ -0,0 +1,445 @@
#!/bin/bash
# shellcheck disable=SC2034,SC2317
readonly VERSION="23.1-pre"
# set all programs to C language (english)
export LC_ALL=C
# If expansion is attempted on an unset variable or parameter, the shell prints an
# error message, and, if not interactive, exits with a non-zero status.
set -u
# The pipeline's return status is the value of the last (rightmost) command
# to exit with a non-zero status, or zero if all commands exit successfully.
set -o pipefail
source "${LIBDIR}/utilities.sh"
source "${LIBDIR}/dump.sh"
# Called from main, it wraps the local_tasks function defined in the calling script
local_tasks_wrapper() {
log "START LOCAL_TASKS"
# Remove old error directories (recursively)
find "${LOCAL_BACKUP_DIR}/" -type d -name "${PROGNAME}.errors-*" -ctime +30 -exec rm -rf {} \;
local_tasks_type="$(type -t local_tasks)"
if [ "${local_tasks_type}" = "function" ]; then
local_tasks
else
log_error "There is no 'local_tasks' function to execute"
fi
# TODO: check if this is still needed
# print_error_files_content
log "STOP LOCAL_TASKS"
}
# Called from main, it wraps the sync_tasks function defined in the calling script
sync_tasks_wrapper() {
declare -a SERVERS # Indexed array for server/port values
declare -a RSYNC_INCLUDES # Indexed array for includes
declare -a RSYNC_EXCLUDES # Indexed array for excludes
case "${SYSTEM}" in
linux)
declare -a rsync_default_includes=(
/bin
/boot
/lib
/opt
/sbin
/usr
)
;;
*bsd)
declare -a rsync_default_includes=(
/bin
/bsd
/sbin
/usr
)
;;
*)
echo "Unknown system '${SYSTEM}'" >&2
exit 1
;;
esac
if [ -f "${CANARY_FILE}" ]; then
rsync_default_includes+=("${CANARY_FILE}")
fi
readonly rsync_default_includes
declare -a rsync_default_excludes=(
/dev
/proc
/run
/sys
/tmp
/usr/doc
/usr/obj
/usr/share/doc
/usr/src
/var/apt
/var/cache
/var/db/munin/*.tmp
/var/lib/amavis/amavisd.sock
/var/lib/amavis/tmp
/var/lib/clamav/*.tmp
/var/lib/elasticsearch
/var/lib/metche
/var/lib/mongodb
/var/lib/munin/*tmp*
/var/lib/mysql
/var/lib/php/sessions
/var/lib/php5
/var/lib/postgres
/var/lib/postgresql
/var/lib/sympa
/var/lock
/var/run
/var/spool/postfix
/var/spool/smtpd
/var/spool/squid
/var/state
/var/tmp
lost+found
.nfs.*
lxc/*/rootfs/tmp
lxc/*/rootfs/usr/doc
lxc/*/rootfs/usr/obj
lxc/*/rootfs/usr/share/doc
lxc/*/rootfs/usr/src
lxc/*/rootfs/var/apt
lxc/*/rootfs/var/cache
lxc/*/rootfs/var/lib/php5
lxc/*/rootfs/var/lib/php/sessions
lxc/*/rootfs/var/lock
lxc/*/rootfs/var/run
lxc/*/rootfs/var/state
lxc/*/rootfs/var/tmp
/home/mysqltmp
)
readonly rsync_default_excludes
sync_tasks_type="$(type -t sync_tasks)"
if [ "${sync_tasks_type}" = "function" ]; then
sync_tasks
else
log_error "There is no 'sync_tasks' function to execute"
fi
}
sync() {
local sync_name=${1}
local -a rsync_servers=("${!2}")
local -a rsync_includes=("${!3}")
local -a rsync_excludes=("${!4}")
## Initialize variable to store SSH connection errors
declare -a SSH_ERRORS=()
# echo "### sync ###"
# for server in "${rsync_servers[@]}"; do
# echo "server: ${server}"
# done
# for include in "${rsync_includes[@]}"; do
# echo "include: ${include}"
# done
# for exclude in "${rsync_excludes[@]}"; do
# echo "exclude: ${exclude}"
# done
local -i n=0
local server=""
if [ "${SERVERS_FALLBACK}" = "1" ]; then
# We try to find a suitable server
while :; do
server=$(pick_server ${n})
test $? = 0 || exit ${E_NOSRVAVAIL}
if test_server "${server}"; then
break
else
server=""
n=$(( n + 1 ))
fi
done
else
# we force the server
server=$(pick_server "${n}")
fi
rsync_server=$(echo "${server}" | cut -d':' -f1)
rsync_port=$(echo "${server}" | cut -d':' -f2)
log "START SYNC_TASKS - \"${sync_name}\" : server=${server}"
# Rsync complete log file for the current run
RSYNC_LOGFILE="/var/log/${PROGNAME}.${sync_name}.rsync.log"
# Rsync stats for the current run
RSYNC_STATSFILE="/var/log/${PROGNAME}.${sync_name}.rsync-stats.log"
# reset Rsync log file
if [ -n "$(command -v truncate)" ]; then
truncate -s 0 "${RSYNC_LOGFILE}"
truncate -s 0 "${RSYNC_STATSFILE}"
else
printf "" > "${RSYNC_LOGFILE}"
printf "" > "${RSYNC_STATSFILE}"
fi
# Initialize variable here, we need it later
local -a mtree_files=()
if [ "${MTREE_ENABLED}" = "1" ]; then
mtree_bin=$(command -v mtree)
if [ -n "${mtree_bin}" ]; then
# Dump filesystem stats with mtree
log "SYNC_TASKS - start mtree"
# Loop over Rsync includes
for i in "${!rsync_includes[@]}"; do
include="${rsync_includes[i]}"
if [ -d "${include}" ]; then
# … but exclude for mtree what will be excluded by Rsync
mtree_excludes_file="$(mktemp --tmpdir "${PROGNAME}.${sync_name}.mtree-excludes.XXXXXX")"
add_to_temp_files "${mtree_excludes_file}"
for j in "${!rsync_excludes[@]}"; do
echo "${rsync_excludes[j]}" | grep -E "^([^/]|${include})" | sed -e "s|^${include}|.|" >> "${mtree_excludes_file}"
done
mtree_file="/var/log/evobackup.$(basename "${include}").mtree"
add_to_temp_files "${mtree_file}"
${mtree_bin} -x -c -p "${include}" -X "${mtree_excludes_file}" > "${mtree_file}"
mtree_files+=("${mtree_file}")
fi
done
if [ "${#mtree_files[@]}" -le 0 ]; then
log_error "SYNC_TASKS - ERROR: mtree didn't produce any file"
fi
log "SYNC_TASKS - stop mtree (files: ${mtree_files[*]})"
else
log "SYNC_TASKS - skip mtree (missing)"
fi
else
log "SYNC_TASKS - skip mtree (disabled)"
fi
rsync_bin=$(command -v rsync)
# Build the final Rsync command
# Rsync main options
rsync_main_args=()
rsync_main_args+=(--archive)
rsync_main_args+=(--itemize-changes)
rsync_main_args+=(--quiet)
rsync_main_args+=(--stats)
rsync_main_args+=(--human-readable)
rsync_main_args+=(--relative)
rsync_main_args+=(--partial)
rsync_main_args+=(--delete)
rsync_main_args+=(--delete-excluded)
rsync_main_args+=(--force)
rsync_main_args+=(--ignore-errors)
rsync_main_args+=(--log-file "${RSYNC_LOGFILE}")
rsync_main_args+=(--rsh "ssh -p ${rsync_port} -o 'ConnectTimeout ${SSH_CONNECT_TIMEOUT}'")
# Rsync excludes
for i in "${!rsync_excludes[@]}"; do
rsync_main_args+=(--exclude "${rsync_excludes[i]}")
done
# Rsync local sources
rsync_main_args+=("${rsync_includes[@]}")
# Rsync remote destination
rsync_main_args+=("root@${rsync_server}:${REMOTE_BACKUP_DIR}/")
# … log it
log "SYNC_TASKS - \"${sync_name}\" Rsync main command : ${rsync_bin} ${rsync_main_args[*]}"
# … execute it
${rsync_bin} "${rsync_main_args[@]}"
rsync_main_rc=$?
# Copy last lines of rsync log to the main log
tail -n 30 "${RSYNC_LOGFILE}" >> "${LOGFILE}"
# Copy Rsync stats to special file
tail -n 30 "${RSYNC_LOGFILE}" | grep --invert-match --extended-regexp " [\<\>ch\.\*]\S{10} " > "${RSYNC_STATSFILE}"
# We ignore rc=24 (vanished files)
if [ ${rsync_main_rc} -ne 0 ] && [ ${rsync_main_rc} -ne 24 ]; then
log_error "SYNC_TASKS - ${sync_name} Rsync main command returned an error ${rsync_main_rc}" "${LOGFILE}"
GLOBAL_RC=${E_SYNCFAILED}
else
# Build the report Rsync command
local -a rsync_report_args
rsync_report_args=()
# Rsync options
rsync_report_args+=(--rsh "ssh -p ${rsync_port} -o 'ConnectTimeout ${SSH_CONNECT_TIMEOUT}'")
# Rsync local sources
if [ "${#mtree_files[@]}" -gt 0 ]; then
# send mtree files if there is any
rsync_report_args+=("${mtree_files[@]}")
fi
if [ -f "${RSYNC_LOGFILE}" ]; then
# send rsync full log file if it exists
rsync_report_args+=("${RSYNC_LOGFILE}")
fi
if [ -f "${RSYNC_STATSFILE}" ]; then
# send rsync stats log file if it exists
rsync_report_args+=("${RSYNC_STATSFILE}")
fi
# Rsync remote destination
rsync_report_args+=("root@${rsync_server}:${REMOTE_LOG_DIR}/")
# … log it
log "SYNC_TASKS - ${sync_name} Rsync report command : ${rsync_bin} ${rsync_report_args[*]}"
# … execute it
${rsync_bin} "${rsync_report_args[@]}"
fi
log "STOP SYNC_TASKS - ${sync_name} server=${server}"
}
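Because sync() dereferences its 2nd to 4th arguments with "${!2}"-style indirect expansion, a sync_tasks function passes array names rather than values; a minimal sketch with placeholder backup servers and paths:

sync_tasks() {
    SERVERS=(
        backup1.example.org:2222
        backup2.example.org:2222
    )
    RSYNC_INCLUDES=(
        "${rsync_default_includes[@]}"
        /etc
        /root
        /var
    )
    RSYNC_EXCLUDES=(
        "${rsync_default_excludes[@]}"
    )
    sync "main" "SERVERS[@]" "RSYNC_INCLUDES[@]" "RSYNC_EXCLUDES[@]"
}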
setup() {
# Default return-code (0 == success)
GLOBAL_RC=0
# Possible error codes
readonly E_NOSRVAVAIL=21 # No server is available
readonly E_SYNCFAILED=20 # Failed sync task
readonly E_DUMPFAILED=10 # Failed dump task
# explicit PATH
PATH=/sbin:/usr/sbin:/bin:/usr/bin:/usr/local/sbin:/usr/local/bin
# System name (linux, openbsd…)
: "${SYSTEM:=$(uname | tr '[:upper:]' '[:lower:]')}"
# Hostname (for logs and notifications)
: "${HOSTNAME:=$(hostname)}"
# Store pid in a file named after this program's name
: "${PROGNAME:=$(basename "$0")}"
: "${PIDFILE:="/var/run/${PROGNAME}.pid"}"
# Customize the log path if you want multiple scripts to have separate log files
: "${LOGFILE:="/var/log/evobackup.log"}"
# Canary file to update before executing tasks
: "${CANARY_FILE:="/zzz_evobackup_canary"}"
# Date format for log messages
: "${DATE_FORMAT:="%Y-%m-%d %H:%M:%S"}"
# Should we fallback on other servers when the first one is unreachable?
: "${SERVERS_FALLBACK:=1}"
# timeout (in seconds) for SSH connections
: "${SSH_CONNECT_TIMEOUT:=90}"
: "${LOCAL_BACKUP_DIR:="/home/backup"}"
# shellcheck disable=SC2174
mkdir -p -m 700 "${LOCAL_BACKUP_DIR}"
: "${ERRORS_DIR:="${LOCAL_BACKUP_DIR}/${PROGNAME}.errors-${START_TIME}"}"
# shellcheck disable=SC2174
mkdir -p -m 700 "${ERRORS_DIR}"
# Backup directory on remote server
: "${REMOTE_BACKUP_DIR:="/var/backup"}"
# Log directory on the remote server
: "${REMOTE_LOG_DIR:="/var/log"}"
# Email address for notifications
: "${MAIL:="root"}"
# Email subject for notifications
: "${MAIL_SUBJECT:="[info] EvoBackup - Client ${HOSTNAME}"}"
# Enable/disable local tasks (default: enabled)
: "${LOCAL_TASKS:=1}"
# Enable/disable sync tasks (default: enabled)
: "${SYNC_TASKS:=1}"
# Enable/disable mtree (default: enabled)
: "${MTREE_ENABLED:=1}"
# If "setup_custom" exists and is a function, let's call it
setup_custom_type="$(type -t setup_custom)"
if [ "${setup_custom_type}" = "function" ]; then
setup_custom
fi
## Force umask
umask 077
# Initialize a list of temporary files
declare -a TEMP_FILES=()
# Any file in this list will be deleted when the program exits
trap "clean_temp_files" EXIT
}
main() {
# Start timer
START_EPOCH=$(/bin/date +%s)
START_TIME=$(/bin/date +"%Y%m%d%H%M%S")
# Configure variables and environment
setup
log "START GLOBAL - VERSION=${VERSION} LOCAL_TASKS=${LOCAL_TASKS} SYNC_TASKS=${SYNC_TASKS}"
# /!\ Only one backup process can run at the same time /!\
# Based on PID file, kill any running process before continuing
enforce_single_process "${PIDFILE}"
# Update canary to keep track of each run
update-evobackup-canary --who "${PROGNAME}" --file "${CANARY_FILE}"
if [ "${LOCAL_TASKS}" = "1" ]; then
local_tasks_wrapper
fi
if [ "${SYNC_TASKS}" = "1" ]; then
sync_tasks_wrapper
fi
STOP_EPOCH=$(/bin/date +%s)
case "${SYSTEM}" in
*bsd)
start_time=$(/bin/date -f "%s" -j "${START_EPOCH}" +"${DATE_FORMAT}")
stop_time=$(/bin/date -f "%s" -j "${STOP_EPOCH}" +"${DATE_FORMAT}")
;;
*)
start_time=$(/bin/date --date="@${START_EPOCH}" +"${DATE_FORMAT}")
stop_time=$(/bin/date --date="@${STOP_EPOCH}" +"${DATE_FORMAT}")
;;
esac
duration=$(( STOP_EPOCH - START_EPOCH ))
log "STOP GLOBAL - start='${start_time}' stop='${stop_time}' duration=${duration}s"
send_mail
exit ${GLOBAL_RC}
}

136
client/lib/utilities.sh Normal file

@@ -0,0 +1,136 @@
#!/bin/bash
# Output a message to the log file
log() {
local msg="${1:-$(cat /dev/stdin)}"
local pid=$$
printf "[%s] %s[%s]: %s\\n" \
"$(/bin/date +"${DATE_FORMAT}")" "${PROGNAME}" "${pid}" "${msg}" \
>> "${LOGFILE}"
}
log_error() {
local error_msg=${1}
local error_file=${2:-""}
if [ -n "${error_file}" ] && [ -f "${error_file}" ]; then
printf "\n### %s\n" "${error_msg}" >&2
# shellcheck disable=SC2046
if [ $(wc -l "${error_file}" | cut -d " " -f 1) -gt 30 ]; then
printf "~~~{%s (tail -30)}\n" "${error_file}" >&2
tail -n 30 "${error_file}" >&2
else
printf "~~~{%s}\n" "${error_file}" >&2
cat "${error_file}" >&2
fi
printf "~~~\n" >&2
log "${error_msg}, check ${error_file}"
else
printf "\n### %s\n" "${error_msg}" >&2
log "${error_msg}"
fi
}
add_to_temp_files() {
TEMP_FILES+=("${1}")
}
# Remove all temporary files created during the execution
clean_temp_files() {
# shellcheck disable=SC2086
rm -f "${TEMP_FILES[@]}"
}
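# Usage note (illustrative, not in the original file): register a temporary file
# right after creating it, e.g.
#     includes_file="$(mktemp "${PROGNAME}.includes.XXXXXX")"
#     add_to_temp_files "${includes_file}"
# The EXIT trap installed in setup() then calls clean_temp_files to remove it.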
enforce_single_process() {
local pidfile=$1
if [ -e "${pidfile}" ]; then
pid=$(cat "${pidfile}")
# Does process still exist?
if kill -0 "${pid}" 2> /dev/null; then
# Kill the children of evobackup.
for ppid in $(pgrep -P "${pid}"); do
kill -9 "${ppid}";
done
# Then kill the main PID.
kill -9 "${pid}"
printf "%s is still running (PID %s). Process has been killed" "$0" "${pid}\\n" >&2
else
rm -f "${pidfile}"
fi
fi
add_to_temp_files "${pidfile}"
echo "$$" > "${pidfile}"
}
# Build the error directory (inside ERRORS_DIR) based on the dump directory path
errors_dir_from_dump_dir() {
local dump_dir=$1
local relative_path=$(realpath --relative-to="${LOCAL_BACKUP_DIR}" "${dump_dir}")
# return absolute path
realpath --canonicalize-missing "${ERRORS_DIR}/${relative_path}"
}
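# Example (illustrative): with LOCAL_BACKUP_DIR=/home/backup, a dump directory
# /home/backup/mysql maps to "${ERRORS_DIR}/mysql".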
# Call test_server with "HOST:PORT" string
# It will return with 0 if the server is reachable.
# It will return with 1 and a message on stderr if not.
test_server() {
local item=$1
# split HOST and PORT from the input string
local host=$(echo "${item}" | cut -d':' -f1)
local port=$(echo "${item}" | cut -d':' -f2)
local new_error
# Test if the server is accepting connections
ssh -q -o "ConnectTimeout ${SSH_CONNECT_TIMEOUT}" "${host}" -p "${port}" -t "exit"
# shellcheck disable=SC2181
if [ $? = 0 ]; then
# SSH connection is OK
return 0
else
# SSH connection failed
new_error=$(printf "Failed to connect to \`%s' within %s seconds" "${item}" "${SSH_CONNECT_TIMEOUT}")
log "${new_error}"
SSH_ERRORS+=("${new_error}")
return 1
fi
}
# Call pick_server with an optional positive integer to get the nth server in the list.
pick_server() {
local -i increment=${1:-0}
local -i list_length=${#SERVERS[@]}
if (( increment >= list_length )); then
# We've reached the end of the list
new_error="No more server available"
log "${new_error}"
SSH_ERRORS+=("${new_error}")
# Log errors to stderr
for i in "${!SSH_ERRORS[@]}"; do
printf "%s\n" "${SSH_ERRORS[i]}" >&2
done
return 1
fi
# Extract the day of month, without leading 0 (which would give an octal based number)
today=$(/bin/date +%e)
# A salt is useful to randomize the starting point in the list
# but stay identical each time it's called for a server (based on hostname).
salt=$(hostname | cksum | cut -d' ' -f1)
# Pick an integer between 0 and the length of the SERVERS list
# It changes each day
n=$(( (today + salt + increment) % list_length ))
echo "${SERVERS[n]}"
}
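# Typical fallback loop (sketch, same pattern as the sync tasks): try servers
# one by one until a reachable one is found, give up when the list is exhausted.
#     n=0
#     while :; do
#         server=$(pick_server "${n}") || exit 2
#         if test_server "${server}"; then break; fi
#         n=$(( n + 1 ))
#     done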
send_mail() {
tail -20 "${LOGFILE}" | mail -s "${MAIL_SUBJECT}" "${MAIL}"
}

client/zzz_evobackup Executable file → Normal file
@@ -13,658 +13,260 @@
# and others.
#
# Licence: AGPLv3
#######################################################################
#
# /!\ DON'T FORGET TO SET "MAIL" and "SERVERS" VARIABLES
# You must configure the MAIL variable to receive notifications.
#
# There is some optional configuration that you can do
# at the end of this script.
#
# The library (usually installed at /usr/local/lib/evobackup/main.sh)
# also has many variables that you can override for fine-tuning.
#
#######################################################################
##### Configuration ###################################################
VERSION="22.12"
# email address for notifications
# Email address for notifications
MAIL=jdoe@example.com
# list of hosts (hostname or IP) and SSH port for Rsync
SERVERS="node0.backup.example.com:2XXX node1.backup.example.com:2XXX"
#######################################################################
#
# The "sync_tasks" function will be called by the main function.
#
# You can customize the variables:
# * "sync_name" (String)
# * "SERVERS" (Array of HOST:PORT)
# * "RSYNC_INCLUDES" (Array of paths to include)
# * "RSYNC_EXCLUDES" (Array of paths to exclude)
#
# The "sync" function can be called multiple times
# with a different set of variables.
# That way you can sync to various destinations.
#
#######################################################################
# explicit PATH
PATH=/sbin:/usr/sbin:/bin:/usr/bin:/usr/local/sbin:/usr/local/bin
sync_tasks() {
# Should we fall back on other servers when the first one is unreachable?
SERVERS_FALLBACK=${SERVERS_FALLBACK:-1}
########## System-only backup (to Evolix servers) #################
# timeout (in seconds) for SSH connections
SSH_CONNECT_TIMEOUT=${SSH_CONNECT_TIMEOUT:-90}
# Name your sync task, for logs
sync_name="evolix-system"
# We use /home/backup : feel free to use your own dir
LOCAL_BACKUP_DIR="/home/backup"
# List of host/port for your sync task
# shellcheck disable=SC2034
SERVERS=(
node0.backup.evolix.net:2234
node1.backup.evolix.net:2234
)
# You can set "linux" or "bsd" manually or let it choose automatically
SYSTEM=$(uname | tr '[:upper:]' '[:lower:]')
# What to include in your sync task
# Add or remove paths if you need
# shellcheck disable=SC2034
RSYNC_INCLUDES=(
"${rsync_default_includes[@]}"
/etc
/root
/var
)
# Store pid in a file named after this program's name
PROGNAME=$(basename "$0")
PIDFILE="/var/run/${PROGNAME}.pid"
# What to exclude from your sync task
# Add or remove paths if you need
# shellcheck disable=SC2034
RSYNC_EXCLUDES=(
"${rsync_default_excludes[@]}"
)
# Customize the log path if you have multiple scripts with separate logs
LOGFILE="/var/log/evobackup.log"
# Full Rsync log file, reset each time
RSYNC_LOGFILE="/var/log/${PROGNAME}.rsync.log"
HOSTNAME=$(hostname)
DATE_FORMAT="%Y-%m-%d %H:%M:%S"
# Enable/disable local tasks (default: enabled)
: "${LOCAL_TASKS:=1}"
# Enable/disable sync tasks (default: enabled)
: "${SYNC_TASKS:=1}"
CANARY_FILE="/zzz_evobackup_canary"
# Source paths can be customized
# Empty lines, and lines containing # or ; are ignored
RSYNC_INCLUDES="
/etc
/root
/var
/home
"
# Excluded paths can be customized
# Empty lines, and lines beginning with # or ; are ignored
RSYNC_EXCLUDES="
/dev
/proc
/run
/sys
/tmp
/usr/doc
/usr/obj
/usr/share/doc
/usr/src
/var/apt
/var/cache
/var/db/munin/*.tmp
/var/lib/amavis/amavisd.sock
/var/lib/amavis/tmp
/var/lib/clamav/*.tmp
/var/lib/elasticsearch
/var/lib/metche
/var/lib/mongodb
/var/lib/munin/*tmp*
/var/lib/mysql
/var/lib/php/sessions
/var/lib/php5
/var/lib/postgres
/var/lib/postgresql
/var/lib/sympa
/var/lock
/var/run
/var/spool/postfix
/var/spool/smtpd
/var/spool/squid
/var/state
/var/tmp
lost+found
.nfs.*
lxc/*/rootfs/tmp
lxc/*/rootfs/usr/doc
lxc/*/rootfs/usr/obj
lxc/*/rootfs/usr/share/doc
lxc/*/rootfs/usr/src
lxc/*/rootfs/var/apt
lxc/*/rootfs/var/cache
lxc/*/rootfs/var/lib/php5
lxc/*/rootfs/var/lib/php/sessions
lxc/*/rootfs/var/lock
lxc/*/rootfs/var/run
lxc/*/rootfs/var/state
lxc/*/rootfs/var/tmp
/home/mysqltmp
"
# Call the sync task
sync "${sync_name}" "SERVERS[@]" "RSYNC_INCLUDES[@]" "RSYNC_EXCLUDES[@]"
##### FUNCTIONS #######################################################
########## Full backup (to client servers) ########################
# Name your sync task, for logs
sync_name="client-full"
# List of host/port for your sync task
# shellcheck disable=SC2034
SERVERS=(
client-backup00.evolix.net:2221
client-backup01.evolix.net:2221
)
# What to include in your sync task
# Add or remove paths if you need
# shellcheck disable=SC2034
RSYNC_INCLUDES=(
"${rsync_default_includes[@]}"
/etc
/root
/var
/home
/srv
)
# What to exclude from your sync task
# Add or remove paths if you need
# shellcheck disable=SC2034
RSYNC_EXCLUDES=(
"${rsync_default_excludes[@]}"
)
# Call the sync task
sync "${sync_name}" "SERVERS[@]" "RSYNC_INCLUDES[@]" "RSYNC_EXCLUDES[@]"
}
#######################################################################
#
# The "local_tasks" function will be called by the main function.
#
# You can call any available "dump_xxx" function
# (usually installed at /usr/local/lib/evobackup/dump.sh)
#
# You can also write some custom functions and call them.
# A "dump_custom" example is available further down.
#
#######################################################################
local_tasks() {
log "START LOCAL_TASKS"
# You can comment or uncomment sections below to customize the backup
########## OpenLDAP ###############
## OpenLDAP : example with slapcat
# slapcat -n 0 -l ${LOCAL_BACKUP_DIR}/config.ldap.bak
# slapcat -n 1 -l ${LOCAL_BACKUP_DIR}/data.ldap.bak
# slapcat -l ${LOCAL_BACKUP_DIR}/ldap.bak
### dump_ldap
## MySQL
########## MySQL ##################
## Purge previous dumps
# rm -f ${LOCAL_BACKUP_DIR}/mysql.*.gz
# rm -rf ${LOCAL_BACKUP_DIR}/mysql
# rm -rf ${LOCAL_BACKUP_DIR}/mysqlhotcopy
# rm -rf /home/mysqldump
# find ${LOCAL_BACKUP_DIR}/ -type f -name '*.err' -delete
# Dump all grants (permissions), config variables and schema of databases
### dump_mysql_meta [--port=3306]
## example with global and compressed mysqldump
# mysqldump --defaults-extra-file=/etc/mysql/debian.cnf -P 3306 \
# --opt --all-databases --force --events --hex-blob 2> ${LOCAL_BACKUP_DIR}/mysql.bak.err | gzip --best > ${LOCAL_BACKUP_DIR}/mysql.bak.gz
# last_rc=$?
# if [ ${last_rc} -ne 0 ]; then
# error "mysqldump (global compressed) returned an error ${last_rc}, check ${LOCAL_BACKUP_DIR}/mysql.bak.err"
# rc=101
# fi
# Dump all databases in a single compressed file
### dump_mysql_global [--port=3306] [--masterdata]
# Dump each database separately, in a compressed file
### dump_mysql_per_base [--port=3306]
# Dump multiple instances, each in a single compressed file
### dump_mysql_instance [--port=3306]
## example with compressed SQL dump (with data) for each database
# mkdir -p -m 700 ${LOCAL_BACKUP_DIR}/mysql/
# for i in $(mysql --defaults-extra-file=/etc/mysql/debian.cnf -P 3306 -e 'show databases' -s --skip-column-names \
# | grep --extended-regexp --invert-match "^(Database|information_schema|performance_schema|sys)"); do
# mysqldump --defaults-extra-file=/etc/mysql/debian.cnf --force -P 3306 --events --hex-blob $i 2> ${LOCAL_BACKUP_DIR}/${i}.err | gzip --best > ${LOCAL_BACKUP_DIR}/mysql/${i}.sql.gz
# last_rc=$?
# if [ ${last_rc} -ne 0 ]; then
# error "mysqldump (${i} compressed) returned an error ${last_rc}, check ${LOCAL_BACKUP_DIR}/${i}.err"
# rc=102
# fi
# done
# Dump each table in schema/data files, for all databases
### dump_mysql_tabs [--port=3306] [--user=foo] [--password=123456789]
## Dump all grants (requires 'percona-toolkit' package)
# mkdir -p -m 700 ${LOCAL_BACKUP_DIR}/mysql/
# pt-show-grants --flush --no-header 2> ${LOCAL_BACKUP_DIR}/mysql/all_grants.err > ${LOCAL_BACKUP_DIR}/mysql/all_grants.sql
# last_rc=$?
# if [ ${last_rc} -ne 0 ]; then
# error "pt-show-grants returned an error ${last_rc}, check ${LOCAL_BACKUP_DIR}/mysql/all_grants.err"
# rc=103
# fi
########## PostgreSQL #############
# Dump all variables
# mysql -A -e"SHOW GLOBAL VARIABLES;" 2> ${LOCAL_BACKUP_DIR}/MySQLCurrentSettings.err > ${LOCAL_BACKUP_DIR}/MySQLCurrentSettings.txt
# last_rc=$?
# if [ ${last_rc} -ne 0 ]; then
# error "mysql (variables) returned an error ${last_rc}, check ${LOCAL_BACKUP_DIR}/MySQLCurrentSettings.err"
# rc=104
# fi
# Dump all databases in a single file (compressed or not)
### dump_postgresql_global
## example with SQL dump (schema only, no data) for each database
# mkdir -p -m 700 ${LOCAL_BACKUP_DIR}/mysql/
# for i in $(mysql --defaults-extra-file=/etc/mysql/debian.cnf -P 3306 -e 'show databases' -s --skip-column-names \
# | grep --extended-regexp --invert-match "^(Database|information_schema|performance_schema|sys)"); do
# mysqldump --defaults-extra-file=/etc/mysql/debian.cnf --force -P 3306 --no-data --databases $i 2> ${LOCAL_BACKUP_DIR}/${i}.schema.err > ${LOCAL_BACKUP_DIR}/mysql/${i}.schema.sql
# last_rc=$?
# if [ ${last_rc} -ne 0 ]; then
# error "mysqldump (${i} schema) returned an error ${last_rc}, check ${LOCAL_BACKUP_DIR}/${i}.schema.err"
# rc=105
# fi
# done
# Dump a specific database with only some tables, or all but some tables (must be configured)
### dump_postgresql_filtered
## example with *one* uncompressed SQL dump for *one* database (MYBASE)
# mkdir -p -m 700 ${LOCAL_BACKUP_DIR}/mysql/MYBASE
# chown -RL mysql ${LOCAL_BACKUP_DIR}/mysql/
# mysqldump --defaults-extra-file=/etc/mysql/debian.cnf --force -Q \
# --opt --events --hex-blob --skip-comments -T ${LOCAL_BACKUP_DIR}/mysql/MYBASE MYBASE 2> ${LOCAL_BACKUP_DIR}/mysql/MYBASE.err
# last_rc=$?
# if [ ${last_rc} -ne 0 ]; then
# error "mysqldump (MYBASE) returned an error ${last_rc}, check ${LOCAL_BACKUP_DIR}/mysql/MYBASE.err"
# rc=106
# fi
# Dump each database separately, in a compressed file
### dump_postgresql_per_base
## example with two dumps for each table (.sql/.txt) for all databases
# for i in $(echo SHOW DATABASES | mysql --defaults-extra-file=/etc/mysql/debian.cnf -P 3306 \
# | grep --extended-regexp --invert-match "^(Database|information_schema|performance_schema|sys)" ); do
# mkdir -p -m 700 /home/mysqldump/$i ; chown -RL mysql /home/mysqldump
# mysqldump --defaults-extra-file=/etc/mysql/debian.cnf --force -P 3306 -Q --opt --events --hex-blob --skip-comments \
# --fields-enclosed-by='\"' --fields-terminated-by=',' -T /home/mysqldump/$i $i 2> /home/mysqldump/$i.err
# last_rc=$?
# if [ ${last_rc} -ne 0 ]; then
# error "mysqldump (${i} files) returned an error ${last_rc}, check /home/mysqldump/$i.err"
# rc=107
# fi
# done
########## MongoDB ################
### dump_mongodb [--user=foo] [--password=123456789]
## example with mysqlhotcopy
# mkdir -p -m 700 ${LOCAL_BACKUP_DIR}/mysqlhotcopy/
# mysqlhotcopy MYBASE ${LOCAL_BACKUP_DIR}/mysqlhotcopy/ 2> ${LOCAL_BACKUP_DIR}/mysqlhotcopy/MYBASE.err
# last_rc=$?
# if [ ${last_rc} -ne 0 ]; then
# error "mysqlhotcopy returned an error ${last_rc}, check ${LOCAL_BACKUP_DIR}/mysqlhotcopy/MYBASE.err"
# rc=108
# fi
########## Redis ##################
## example for multiple MySQL instances
# mysqladminpasswd=$(grep -m1 'password = .*' /root/.my.cnf|cut -d" " -f3)
# grep --extended-regexp "^port\s*=\s*\d*" /etc/mysql/my.cnf | while read instance; do
# instance=$(echo "$instance"|awk '{ print $3 }')
# if [ "$instance" != "3306" ]
# then
# mysqldump -P $instance --opt --all-databases --hex-blob -u mysqladmin -p$mysqladminpasswd 2> ${LOCAL_BACKUP_DIR}/mysql.${instance}.err | gzip --best > ${LOCAL_BACKUP_DIR}/mysql.${instance}.bak.gz
# last_rc=$?
# if [ ${last_rc} -ne 0 ]; then
# error "mysqldump (instance ${instance}) returned an error ${last_rc}, check ${LOCAL_BACKUP_DIR}/mysql.${instance}.err"
# rc=107
# fi
# fi
# done
# Copy data file for all instances
### dump_redis [--instances=<all|instance1|instance2>]
## PostgreSQL
########## Elasticsearch ##########
## Purge previous dumps
# rm -rf ${LOCAL_BACKUP_DIR}/pg.*.gz
# rm -rf ${LOCAL_BACKUP_DIR}/pg-backup.tar
# rm -rf ${LOCAL_BACKUP_DIR}/postgresql/*
# Snapshot data for a single-node cluster
### dump_elasticsearch_snapshot_singlenode [--protocol=http] [--host=localhost] [--port=9200] [--user=foo] [--password=123456789] [--repository=snaprepo] [--snapshot=snapshot.daily]
## example with pg_dumpall (warning: you need space in ~postgres)
# su - postgres -c "pg_dumpall > ~/pg.dump.bak"
# mv ~postgres/pg.dump.bak ${LOCAL_BACKUP_DIR}/
# Snapshot data for a multi-node cluster
### dump_elasticsearch_snapshot_multinode [--protocol=http] [--host=localhost] [--port=9200] [--user=foo] [--password=123456789] [--repository=snaprepo] [--snapshot=snapshot.daily] [--nfs-server=192.168.2.1]
## another method with gzip directly piped
# (
# cd /var/lib/postgresql;
# sudo -u postgres pg_dumpall | gzip > ${LOCAL_BACKUP_DIR}/pg.dump.bak.gz
# )
########## RabbitMQ ###############
## example with all tables from MYBASE except TABLE1 and TABLE2
# pg_dump -p 5432 -h 127.0.0.1 -U USER --clean -F t --inserts -f ${LOCAL_BACKUP_DIR}/pg-backup.tar -T 'TABLE1' -T 'TABLE2' MYBASE
### dump_rabbitmq
## example with only TABLE1 and TABLE2 from MYBASE
# pg_dump -p 5432 -h 127.0.0.1 -U USER --clean -F t --inserts -f ${LOCAL_BACKUP_DIR}/pg-backup.tar -t 'TABLE1' -t 'TABLE2' MYBASE
########## MegaCli ################
## example with compressed PostgreSQL dump for each database
# mkdir -p -m 700 ${LOCAL_BACKUP_DIR}/postgresql
# chown postgres:postgres ${LOCAL_BACKUP_DIR}/postgresql
# (
# cd /var/lib/postgresql
# dbs=$(sudo -u postgres psql -U postgres -lt | awk -F\| '{print $1}' |grep -v template*)
# for databases in $dbs ; do sudo -u postgres /usr/bin/pg_dump --create -s -U postgres -d $databases | gzip --best -c > ${LOCAL_BACKUP_DIR}/postgresql/$databases.sql.gz ; done
# )
# Copy RAID config
### dump_megacli_config
## MongoDB
########## Network ################
## don't forget to create a user with read-only access
## > use admin
## > db.createUser( { user: "mongobackup", pwd: "PASS", roles: [ "backup", ] } )
## Purge previous dumps
# rm -rf ${LOCAL_BACKUP_DIR}/mongodump/
# mkdir -p -m 700 ${LOCAL_BACKUP_DIR}/mongodump/
# mongodump --quiet -u mongobackup -pPASS -o ${LOCAL_BACKUP_DIR}/mongodump/
# if [ $? -ne 0 ]; then
# echo "Error with mongodump!"
# fi
# Dump network routes with mtr and traceroute (warning: could be long with aggressive firewalls)
### dump_traceroute --targets=host_or_ip[,host_or_ip]
dump_traceroute --targets=8.8.8.8,www.evolix.fr,travaux.evolix.net
## Redis
########## Server state ###########
## Purge previous dumps
# rm -rf ${LOCAL_BACKUP_DIR}/redis/
# rm -rf ${LOCAL_BACKUP_DIR}/redis-*
## Copy dump.rdb file for each found instance
# for instance in $(find /var/lib/ -mindepth 1 -maxdepth 1 -type d -name 'redis*'); do
# if [ -f "${instance}/dump.rdb" ]; then
# name=$(basename $instance)
# mkdir -p ${LOCAL_BACKUP_DIR}/${name}
# cp -a "${instance}/dump.rdb" "${LOCAL_BACKUP_DIR}/${name}"
# fi
# done
# Run dump-server-state to extract system information
### dump-server-state [any dump-server-state option]
dump_server_state
## ElasticSearch
# Dump file access control lists
### dump_facl
## Take a snapshot as a backup.
## Warning: You need to have a path.repo configured.
## See: https://wiki.evolix.org/HowtoElasticsearch#snapshots-et-sauvegardes
# curl -s -XDELETE "localhost:9200/_snapshot/snaprepo/snapshot.daily" >> "${LOGFILE}"
# curl -s -XPUT "localhost:9200/_snapshot/snaprepo/snapshot.daily?wait_for_completion=true" >> "${LOGFILE}"
## Clustered version here
## It's basically the same thing, except that you need to check that NFS is mounted
# if ss | grep ':nfs' | grep -q 'ip\.add\.res\.s1' && ss | grep ':nfs' | grep -q 'ip\.add\.res\.s2'
# then
# curl -s -XDELETE "localhost:9200/_snapshot/snaprepo/snapshot.daily" >> "${LOGFILE}"
# curl -s -XPUT "localhost:9200/_snapshot/snaprepo/snapshot.daily?wait_for_completion=true" >> "${LOGFILE}"
# else
# echo 'Cannot make a snapshot of elasticsearch, at least one node is not mounting the repository.'
# fi
## If you need to keep older snapshots, for example the last 10 daily snapshots, replace the XDELETE and XPUT lines with:
# for snapshot in $(curl -s -XGET "localhost:9200/_snapshot/snaprepo/_all?pretty=true" | grep -Eo 'snapshot_[0-9]{4}-[0-9]{2}-[0-9]{2}' | head -n -10); do
# curl -s -XDELETE "localhost:9200/_snapshot/snaprepo/${snapshot}" | grep -v -Fx '{"acknowledged":true}'
# done
# date=$(/bin/date +%F)
# curl -s -XPUT "localhost:9200/_snapshot/snaprepo/snapshot_${date}?wait_for_completion=true" >> "${LOGFILE}"
## RabbitMQ
## export config
# rabbitmqadmin export ${LOCAL_BACKUP_DIR}/rabbitmq.config >> "${LOGFILE}"
## MegaCli config
# megacli -CfgSave -f ${LOCAL_BACKUP_DIR}/megacli_conf.dump -a0 >/dev/null
## Dump network routes with mtr and traceroute (warning: could be long with aggressive firewalls)
network_targets="8.8.8.8 www.evolix.fr travaux.evolix.net"
mtr_bin=$(command -v mtr)
if [ -n "${mtr_bin}" ]; then
for addr in ${network_targets}; do
${mtr_bin} -r "${addr}" > "${LOCAL_BACKUP_DIR}/mtr-${addr}"
done
fi
traceroute_bin=$(command -v traceroute)
if [ -n "${traceroute_bin}" ]; then
for addr in ${network_targets}; do
${traceroute_bin} -n "${addr}" > "${LOCAL_BACKUP_DIR}/traceroute-${addr}" 2>&1
done
fi
server_state_dir="${LOCAL_BACKUP_DIR}/server-state"
dump_server_state_bin=$(command -v dump-server-state)
if [ -z "${dump_server_state_bin}" ]; then
error "dump-server-state is missing"
rc=1
else
if [ "${SYSTEM}" = "linux" ]; then
${dump_server_state_bin} --all --force --dump-dir "${server_state_dir}"
last_rc=$?
if [ ${last_rc} -ne 0 ]; then
error "dump-server-state returned an error ${last_rc}, check ${server_state_dir}"
rc=1
fi
else
${dump_server_state_bin} --all --force --dump-dir "${server_state_dir}"
last_rc=$?
if [ ${last_rc} -ne 0 ]; then
error "dump-server-state returned an error ${last_rc}, check ${server_state_dir}"
rc=1
fi
fi
fi
## Dump rights
# getfacl -R /var > ${server_state_dir}/rights-var.txt
# getfacl -R /etc > ${server_state_dir}/rights-etc.txt
# getfacl -R /usr > ${server_state_dir}/rights-usr.txt
# getfacl -R /home > ${server_state_dir}/rights-home.txt
log "STOP LOCAL_TASKS"
}
build_rsync_main_cmd() {
###################################################################
# /!\ WARNING /!\ WARNING /!\ WARNING /!\ WARNING /!\ WARNING /!\ #
###################################################################
# DO NOT USE COMMENTS in rsync lines #
# DO NOT ADD WHITESPACES AFTER \ in rsync lines #
# It breaks the command and destroys data #
# You should not modify this, unless you are really REALLY sure #
###################################################################
# Create a temp file for excludes and includes
includes_file="$(mktemp "${PROGNAME}.includes.XXXXXX")"
excludes_file="$(mktemp "${PROGNAME}.excludes.XXXXXX")"
# … and add them to the list of files to delete at exit
temp_files="${temp_files} ${includes_file} ${excludes_file}"
# Store includes/excludes in files
# without blank lines or comments (# or ;)
echo "${RSYNC_INCLUDES}" | sed -e 's/\s*\(#\|;\).*//; /^\s*$/d' > "${includes_file}"
echo "${RSYNC_EXCLUDES}" | sed -e 's/\s*\(#\|;\).*//; /^\s*$/d' > "${excludes_file}"
# Rsync command
cmd="$(command -v rsync)"
# Rsync main options
cmd="${cmd} --archive"
cmd="${cmd} --itemize-changes"
cmd="${cmd} --quiet"
cmd="${cmd} --stats"
cmd="${cmd} --human-readable"
cmd="${cmd} --relative"
cmd="${cmd} --partial"
cmd="${cmd} --delete"
cmd="${cmd} --delete-excluded"
cmd="${cmd} --force"
cmd="${cmd} --ignore-errors"
cmd="${cmd} --log-file=${RSYNC_LOGFILE}"
cmd="${cmd} --rsh='ssh -p ${SSH_PORT} -o \"ConnectTimeout ${SSH_CONNECT_TIMEOUT}\"'"
# Rsync excludes
while read line ; do
cmd="${cmd} --exclude ${line}"
done < "${excludes_file}"
# Rsync local sources
cmd="${cmd} ${default_includes}"
while read line ; do
cmd="${cmd} ${line}"
done < "${includes_file}"
# Rsync remote destination
cmd="${cmd} root@${SSH_SERVER}:/var/backup/"
# output final command
echo "${cmd}"
}
build_rsync_canary_cmd() {
# Rsync command
cmd="$(command -v rsync)"
# Rsync options
cmd="${cmd} --rsh='ssh -p ${SSH_PORT} -o \"ConnectTimeout ${SSH_CONNECT_TIMEOUT}\"'"
# Rsync local source
cmd="${cmd} ${CANARY_FILE}"
# Rsync remote destination
cmd="${cmd} root@${SSH_SERVER}:/var/backup/"
# output final command
echo "${cmd}"
}
sync_tasks() {
n=0
server=""
if [ "${SERVERS_FALLBACK}" = "1" ]; then
# We try to find a suitable server
while :; do
server=$(pick_server "${n}")
test $? = 0 || exit 2
if test_server "${server}"; then
break
else
server=""
n=$(( n + 1 ))
fi
done
else
# we force the server
server=$(pick_server "${n}")
fi
SSH_SERVER=$(echo "${server}" | cut -d':' -f1)
SSH_PORT=$(echo "${server}" | cut -d':' -f2)
log "START SYNC_TASKS - server=${server}"
# default paths, depending on system
if [ "${SYSTEM}" = "linux" ]; then
default_includes="/bin /boot /lib /opt /sbin /usr"
else
default_includes="/bsd /bin /sbin /usr"
fi
# reset Rsync log file
if [ -n "$(command -v truncate)" ]; then
truncate -s 0 "${RSYNC_LOGFILE}"
else
printf "" > "${RSYNC_LOGFILE}"
fi
# Build the final Rsync command
rsync_main_cmd=$(build_rsync_main_cmd)
# … log it
log "SYNC_TASKS - Rsync main command : ${rsync_main_cmd}"
# … execute it
eval "${rsync_main_cmd}"
rsync_main_rc=$?
# Copy last lines of rsync log to the main log
tail -n 30 "${RSYNC_LOGFILE}" >> "${LOGFILE}"
if [ ${rsync_main_rc} -ne 0 ]; then
error "rsync returned an error ${rsync_main_rc}, check ${LOGFILE}"
rc=201
else
# Build the canary Rsync command
rsync_canary_cmd=$(build_rsync_canary_cmd)
# … log it
log "SYNC_TASKS - Rsync canary command : ${rsync_canary_cmd}"
# … execute it
eval "${rsync_canary_cmd}"
fi
log "STOP SYNC_TASKS - server=${server}"
# No-op, in case nothing is enabled
:
}
# Call test_server with "HOST:PORT" string
# It will return with 0 if the server is reachable.
# It will return with 1 and a message on stderr if not.
test_server() {
item=$1
# split HOST and PORT from the input string
host=$(echo "${item}" | cut -d':' -f1)
port=$(echo "${item}" | cut -d':' -f2)
# This is an example for a custom dump function
# Uncomment, customize and call it from the "local_tasks" function
### dump_custom() {
### # Set dump and errors directories and files
### local dump_dir="${LOCAL_BACKUP_DIR}/custom"
### local dump_file="${dump_dir}/dump.gz"
### local errors_dir=$(errors_dir_from_dump_dir "${dump_dir}")
### local error_file="${errors_dir}/dump.err"
###
### # Reset dump and errors directories
### rm -rf "${dump_dir}" "${errors_dir}"
### # shellcheck disable=SC2174
### mkdir -p -m 700 "${dump_dir}" "${errors_dir}"
###
### # Log the start of the command
### log "LOCAL_TASKS - start ${dump_file}"
###
### # Execute your dump command
### # Send errors to the error file and the data to the dump file
### my-dump-command 2> "${error_file}" > "${dump_file}"
###
### # Check result and deal with potential errors
### local last_rc=$?
### # shellcheck disable=SC2086
### if [ ${last_rc} -ne 0 ]; then
### log_error "LOCAL_TASKS - my-dump-command to ${dump_file} returned an error ${last_rc}" "${error_file}"
### GLOBAL_RC=${E_DUMPFAILED}
### else
### rm -f "${error_file}"
### fi
###
### # Log the end of the command
### log "LOCAL_TASKS - stop ${dump_file}"
### }
# Test if the server is accepting connections
ssh -q -o "ConnectTimeout ${SSH_CONNECT_TIMEOUT}" "${host}" -p "${port}" -t "exit"
# shellcheck disable=SC2181
if [ $? = 0 ]; then
# SSH connection is OK
return 0
else
# SSH connection failed
new_error=$(printf "Failed to connect to \`%s' within %s seconds" "${item}" "${SSH_CONNECT_TIMEOUT}")
log "${new_error}"
SERVERS_SSH_ERRORS=$(printf "%s\\n%s" "${SERVERS_SSH_ERRORS}" "${new_error}" | sed -e '/^$/d')
########## Optional configuration #####################################
return 1
fi
}
# Call pick_server with an optional positive integer to get the nth server in the list.
pick_server() {
increment=${1:-0}
list_length=$(echo "${SERVERS}" | wc -w)
setup_custom() {
# If you set a value (like "linux", "openbsd"…) it will be used.
# Default: uname(1) in lowercase.
### SYSTEM="linux"
if [ "${increment}" -ge "${list_length}" ]; then
# We've reached the end of the list
new_error="No more server available"
log "${new_error}"
SERVERS_SSH_ERRORS=$(printf "%s\\n%s" "${SERVERS_SSH_ERRORS}" "${new_error}" | sed -e '/^$/d')
# If you set a value it will be used.
# Default: hostname(1).
### HOSTNAME="example-host"
# Log errors to stderr
printf "%s\\n" "${SERVERS_SSH_ERRORS}" >&2
return 1
fi
# Email subject for notifications
### MAIL_SUBJECT="[info] EvoBackup - Client ${HOSTNAME}"
# Extract the day of month, without leading 0 (which would give an octal based number)
today=$(/bin/date +%e)
# A salt is useful to randomize the starting point in the list
# but stay identical each time it's called for a server (based on hostname).
salt=$(hostname | cksum | cut -d' ' -f1)
# Pick an integer between 0 and the length of the SERVERS list
# It changes each day
item=$(( (today + salt + increment) % list_length ))
# cut starts counting fields at 1, not 0.
field=$(( item + 1 ))
echo "${SERVERS}" | cut -d' ' -f${field}
}
log() {
msg="${1:-$(cat /dev/stdin)}"
pid=$$
printf "[%s] %s[%s]: %s\\n" \
"$(/bin/date +"${DATE_FORMAT}")" "${PROGNAME}" "${pid}" "${msg}" \
>> "${LOGFILE}"
}
error() {
msg="${1:-$(cat /dev/stdin)}"
pid=$$
printf "[%s] %s[%s]: %s\\n" \
"$(/bin/date +"${DATE_FORMAT}")" "${PROGNAME}" "${pid}" "${msg}" \
>&2
# No-op in case nothing is executed
:
}
main() {
START_EPOCH=$(/bin/date +%s)
log "START GLOBAL - VERSION=${VERSION} LOCAL_TASKS=${LOCAL_TASKS} SYNC_TASKS=${SYNC_TASKS}"
########## Libraries ##################################################
# shellcheck disable=SC2174
mkdir -p -m 700 ${LOCAL_BACKUP_DIR}
# Change this to wherever you install the libraries
LIBDIR="/usr/local/lib/evobackup"
## Force umask
umask 077
source "${LIBDIR}/main.sh"
## Initialize variable to store SSH connection errors
SERVERS_SSH_ERRORS=""
########## Let's go! ##################################################
## Verify other evobackup process and kill if needed
if [ -e "${PIDFILE}" ]; then
pid=$(cat "${PIDFILE}")
# Does process still exist?
if kill -0 "${pid}" 2> /dev/null; then
# Kill the children of evobackup.
for ppid in $(pgrep -P "${pid}"); do
kill -9 "${ppid}";
done
# Then kill the main PID.
kill -9 "${pid}"
printf "%s is still running (PID %s). Process has been killed\\n" "$0" "${pid}" >&2
else
rm -f "${PIDFILE}"
fi
fi
echo "$$" > "${PIDFILE}"
# Initialize a list of files to delete at exit
# Any file added to the list will also be deleted at exit
temp_files="${PIDFILE}"
# shellcheck disable=SC2064
trap "rm -f ${temp_files}" EXIT
# Update canary to keep track of each run
update-evobackup-canary --who "${PROGNAME}"
if [ "${LOCAL_TASKS}" = "1" ]; then
local_tasks
fi
if [ "${SYNC_TASKS}" = "1" ]; then
sync_tasks
fi
STOP_EPOCH=$(/bin/date +%s)
if [ "${SYSTEM}" = "openbsd" ]; then
start_time=$(/bin/date -f "%s" -j "${START_EPOCH}" +"${DATE_FORMAT}")
stop_time=$(/bin/date -f "%s" -j "${STOP_EPOCH}" +"${DATE_FORMAT}")
else
start_time=$(/bin/date --date="@${START_EPOCH}" +"${DATE_FORMAT}")
stop_time=$(/bin/date --date="@${STOP_EPOCH}" +"${DATE_FORMAT}")
fi
duration=$(( STOP_EPOCH - START_EPOCH ))
log "STOP GLOBAL - start='${start_time}' stop='${stop_time}' duration=${duration}s"
tail -20 "${LOGFILE}" | mail -s "[info] EvoBackup - Client ${HOSTNAME}" ${MAIL}
}
# set all programs to C language (english)
export LC_ALL=C
# Error on unassigned variable
set -u
# Default return-code (0 == success)
rc=0
# execute main function
main
exit ${rc}
main

@@ -16,6 +16,8 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### Fixed
* Test presence of old config file before trying to delete it
### Security
## [22.11] - 2022-11-28

@@ -49,7 +49,7 @@ fi
"${LIBDIR}/bkctld-is-on" "${jail_name}" && "${LIBDIR}/bkctld-stop" "${jail_name}"
rm -f "${CONFDIR}/${jail_name}"
test -f "${CONFDIR}/${jail_name}" && rm -f "${CONFDIR}/${jail_name}"
rm -rf "$(jail_config_dir "${jail_name}")"
btrfs_bin=$(command -v btrfs)