- Create ConfigMaps for backup configuration and scripts
- Define Secrets for S3 credentials
- Implement Role and RoleBinding for access control
- Set up a DaemonSet for running backup containers
- Add a CronJob to schedule backups daily
This commit establishes a comprehensive backup solution within the Kubernetes cluster, allowing for automated backups of specified directories to S3 storage. It includes necessary configurations and scripts to ensure proper execution and notification of backup status.
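A minimal sketch of how the Secret and CronJob pieces could be wired together — the namespace, resource names, schedule, and image below are assumptions for illustration, not values taken from the actual manifests:

```yaml
# Hypothetical sketch; names, namespace, schedule, and image are illustrative.
apiVersion: v1
kind: Secret
metadata:
  name: s3-credentials            # S3 keys consumed by the backup script
  namespace: backup-system
type: Opaque
stringData:
  S3_ACCESS_KEY: "<access-key>"
  S3_SECRET_KEY: "<secret-key>"
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: backup-trigger
  namespace: backup-system
spec:
  schedule: "0 2 * * *"           # daily; the real schedule may differ
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: backup-sa
          restartPolicy: OnFailure
          containers:
            - name: trigger
              image: bitnami/kubectl:latest
              command: ["/bin/sh", "/scripts/trigger-backup.sh"]
              volumeMounts:
                - name: scripts
                  mountPath: /scripts
          volumes:
            - name: scripts
              configMap:
                name: backup-scripts   # ConfigMap carrying the backup scripts
```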
- Change the provisioner name in zgo-us1.yaml to reference the correct provisioner
- Update the node selector in multiple StatefulSet manifests so they target the correct nodes
- Replace the storage class nfs-zgo-us1 with local-vkus2 for better resource management
These changes ensure that the application components are
correctly configured to use the appropriate storage and
node resources, improving deployment stability and
performance.
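For illustration, the relevant StatefulSet fragment could look roughly like this; the node label used for selection is a placeholder (the actual selector and the new provisioner name are not stated in the commit):

```yaml
# Illustrative StatefulSet fragment; the selector label is a placeholder.
spec:
  template:
    spec:
      nodeSelector:
        node-role/storage: "true"        # placeholder; actual label not shown here
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: local-vkus2    # was nfs-zgo-us1
        resources:
          requests:
            storage: 10Gi
```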
- Add logic to clean up old backups
- Add support for handling PostgreSQL data directories
- Ensure temporary directories are cleaned up after use
This update improves the backup process by ensuring that old backups
are properly cleaned up to save storage space and enhance efficiency.
It also includes logic to handle specific cases for PostgreSQL
directories, providing a more robust backup operation.
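One way the cleanup could look, as a fragment of the backup script ConfigMap; the retention window, paths, and archive naming below are assumptions, not the script's actual values:

```yaml
# Hypothetical fragment of the backup script ConfigMap illustrating cleanup logic.
apiVersion: v1
kind: ConfigMap
metadata:
  name: backup-scripts
data:
  cleanup.sh: |
    #!/bin/sh
    set -eu
    TEMP_DIR=$(mktemp -d /backup/tmp/backup.XXXXXX)
    # Remove the temporary directory even if the backup fails mid-way.
    trap 'rm -rf "$TEMP_DIR"' EXIT

    # Prune local archives older than the retention window (7 days here).
    RETENTION_DAYS=7
    find /backup/archives -type f -name '*.tar.gz' -mtime +"$RETENTION_DAYS" -delete
```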
- Introduce a sleep command before triggering backup for each pod
- This change prevents simultaneous execution of backup tasks
- Ensures system stability by spreading out resource usage during backups
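A sketch of what the staggered trigger might look like, assuming the CronJob walks the DaemonSet pods and starts each backup over `kubectl exec`; the label, interval, and script path are placeholders:

```yaml
# Hypothetical trigger script fragment showing staggered execution.
apiVersion: v1
kind: ConfigMap
metadata:
  name: backup-scripts
data:
  trigger-backup.sh: |
    #!/bin/sh
    set -eu
    # Start the backup in each DaemonSet pod one at a time, sleeping between
    # pods so the backups do not all hit the nodes at the same moment.
    for POD in $(kubectl get pods -l app=backup-agent -o name); do
      sleep 30
      kubectl exec "$POD" -- /scripts/backup.sh &
    done
    wait
```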
- Introduce SOURCE_SIZE variable in cm-script.yaml
- Remove redundant SOURCE_SIZE calculation
This change calculates the size of the data directory
prior to initiating the backup process. It ensures that
the backup script has accurate information about the size
of the source data, enhancing the logging and monitoring
of backup activities.
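A sketch of the size calculation described above; the directory variable and ConfigMap key are placeholders:

```yaml
# Hypothetical cm-script.yaml fragment showing the SOURCE_SIZE calculation.
apiVersion: v1
kind: ConfigMap
metadata:
  name: backup-scripts
data:
  backup.sh: |
    #!/bin/sh
    set -eu
    SOURCE_DIR="${SOURCE_DIR:-/data}"
    # Measure the data directory once, before the backup starts, so the size
    # can be reported in logs and notifications.
    SOURCE_SIZE=$(du -sh "$SOURCE_DIR" | awk '{print $1}')
    echo "Backing up ${SOURCE_DIR} (${SOURCE_SIZE})"
```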
- Add special handling for PostgreSQL data directories to ensure the `pg_wal` folder is backed up correctly.
- Distinguish between standard and PostgreSQL-specific backup logic, improving the reliability of database backups.
- Suppress warnings for unchanged files to reduce clutter during backups.
This update improves the robustness of the backup process, ensuring
that PostgreSQL data is handled correctly while also maintaining
standard directory backup functionality, providing better clarity
and usability to users performing backups.
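One possible shape for this branching, assuming `pg_wal` may be a symlink to a separate volume and that noisy per-file output is filtered; the rsync flags and the filtered message are illustrative, not the script's exact behaviour:

```yaml
# Hypothetical backup.sh fragment; detection logic and rsync flags may differ.
apiVersion: v1
kind: ConfigMap
metadata:
  name: backup-scripts
data:
  backup.sh: |
    #!/bin/sh
    set -eu
    SOURCE_DIR="${SOURCE_DIR:-/data}"
    TEMP_DIR="${TEMP_DIR:-/backup/tmp/current}"

    if [ -d "${SOURCE_DIR}/pg_wal" ]; then
      # PostgreSQL data directory: dereference links so a pg_wal symlink
      # pointing at a separate volume is copied into the archive.
      rsync -a --copy-links "${SOURCE_DIR}/" "${TEMP_DIR}/" 2>&1 \
        | grep -v 'is uptodate' || true   # filtered pattern is illustrative
    else
      # Standard directory backup.
      rsync -a "${SOURCE_DIR}/" "${TEMP_DIR}/" 2>&1 \
        | grep -v 'is uptodate' || true
    fi
```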
- Implement node affinity to prevent scheduling on vkvm-us2
- Update affinity section in daemonset.yaml
- Ensure that the DaemonSet runs only on specific nodes
This change introduces a node affinity rule to the DaemonSet configuration,
allowing it to avoid scheduling on nodes labeled with `kubernetes.io/hostname`
set to `vkvm-us2`. Restricting the DaemonSet to the desired nodes helps keep
resource allocation and performance predictable.
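For reference, the affinity section in daemonset.yaml would take roughly this shape (only the relevant fragment is shown):

```yaml
# DaemonSet affinity fragment along the lines described above.
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/hostname
                    operator: NotIn
                    values:
                      - vkvm-us2   # keep backup pods off this node
```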
- Change TEMP_DIR to use a more structured temporary path
- Adjust rsync command to reflect the new directory structure
- Improve MSG_TEXT formatting for better clarity
- Add 'jq' to the dependencies for JSON processing
These changes address issues with the previous temporary directory location and enhance the output messages for a more informative backup success notification.
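A rough sketch of how these pieces could fit together in the script; the path layout, variable names, and message fields are assumptions:

```yaml
# Hypothetical backup.sh fragment; paths, variables, and message fields are illustrative.
apiVersion: v1
kind: ConfigMap
metadata:
  name: backup-scripts
data:
  backup.sh: |
    #!/bin/sh
    set -eu
    NODE_NAME="${NODE_NAME:-unknown}"
    SOURCE_DIR="${SOURCE_DIR:-/data}"
    # Structured temporary path: one directory per node and per run.
    TEMP_DIR="/backup/tmp/${NODE_NAME}/$(date +%Y%m%d-%H%M%S)"
    mkdir -p "$TEMP_DIR"

    rsync -a "${SOURCE_DIR}/" "${TEMP_DIR}/"

    # Build the notification text with jq so the payload is always valid JSON.
    MSG_TEXT=$(jq -n --arg node "$NODE_NAME" --arg dir "$SOURCE_DIR" \
      '{msg_type: "text", content: {text: ("Backup succeeded on " + $node + " for " + $dir)}}')
    echo "$MSG_TEXT"
```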
- Include a webhook URL to send notifications after backups.
- Capture and log the duration and size of the backups.
- Create a new Kubernetes secret for storing the Feishu webhook URL.
- Enhance the backup script to notify users of backup success with details.
This change improves monitoring and user notification of backup events,
allowing for better awareness and response times in case of failure or
success of the backup processes.
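A sketch of the webhook Secret and the notification step; the secret name, variable names, and payload shape are assumptions, and the actual Feishu payload may differ:

```yaml
# Illustrative Secret and notification script; names and payload are placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: feishu-webhook
  namespace: backup-system
type: Opaque
stringData:
  WEBHOOK_URL: "<feishu-webhook-url>"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: backup-scripts
data:
  notify.sh: |
    #!/bin/sh
    set -eu
    # DURATION and SOURCE_SIZE are assumed to be captured by the backup script.
    TEXT="Backup finished: size=${SOURCE_SIZE:-n/a}, duration=${DURATION:-n/a}s"
    curl -sS -X POST -H 'Content-Type: application/json' \
      -d "$(jq -n --arg text "$TEXT" '{msg_type: "text", content: {text: $text}}')" \
      "$WEBHOOK_URL"
```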
- Create a new namespace for the backup system
- Implement a cron job for scheduled backups
- Add a daemon set to handle backup tasks across nodes
- Introduce necessary service accounts, roles, and role bindings
- Include environment variable handling and configuration via secrets and config maps
- Streamline the workflow for triggering and executing backups
This commit establishes a new backup system that utilizes both a cron job and a daemon set to automate backups. It organizes the configurations and credentials needed for S3-compatible storage, allowing for seamless backup management across the specified nodes in the Kubernetes cluster.
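The RBAC side of such a setup might look roughly like this; all names are placeholders, and the actual verbs granted may differ:

```yaml
# Illustrative namespace and RBAC wiring for the backup system.
apiVersion: v1
kind: Namespace
metadata:
  name: backup-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: backup-sa
  namespace: backup-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: backup-role
  namespace: backup-system
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: backup-rolebinding
  namespace: backup-system
subjects:
  - kind: ServiceAccount
    name: backup-sa
    namespace: backup-system
roleRef:
  kind: Role
  name: backup-role
  apiGroup: rbac.authorization.k8s.io
```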
- Create a cronjob to back up node1 data to node8
- Define schedule for daily backups at 3:00 AM
- Include error handling and notifications via Feishu
- Use SSH and rsync for secure and efficient data transfer
This commit introduces a new cronjob that automates the backup process
for node1 to node8, enabling easier management and recovery of data.
The setup includes necessary security measures and proper logging of backups,
ensuring smoother operation and notifications in case of failures.
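Assuming the cronjob is a Kubernetes CronJob rather than a host crontab, a sketch could look like the following; the image, paths, host names, and secret names are illustrative:

```yaml
# Hypothetical node1 -> node8 backup CronJob; all names and paths are placeholders.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: node1-to-node8-backup
  namespace: backup-system
spec:
  schedule: "0 3 * * *"            # daily at 3:00 AM
  concurrencyPolicy: Forbid        # never run two backups at once
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          nodeSelector:
            kubernetes.io/hostname: node1   # run where the source data lives
          containers:
            - name: rsync-backup
              image: instrumentisto/rsync-ssh:latest
              command:
                - /bin/sh
                - -c
                - |
                  set -eu
                  # Push data to node8 over SSH; notify Feishu if the transfer fails.
                  rsync -az -e "ssh -i /ssh/id_rsa -o StrictHostKeyChecking=no" \
                    /data/ backup@node8:/backup/node1/ \
                    || curl -sS -X POST -H 'Content-Type: application/json' \
                         -d '{"msg_type":"text","content":{"text":"node1 backup failed"}}' \
                         "$WEBHOOK_URL"
              env:
                - name: WEBHOOK_URL
                  valueFrom:
                    secretKeyRef:
                      name: feishu-webhook
                      key: WEBHOOK_URL
              volumeMounts:
                - name: ssh-key
                  mountPath: /ssh
                  readOnly: true
                - name: data
                  mountPath: /data
          volumes:
            - name: ssh-key
              secret:
                secretName: node8-ssh-key
                defaultMode: 0400
            - name: data
              hostPath:
                path: /data
```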