- Change the APK repository URL to use a mirror site
- This ensures better availability and potentially faster downloads
The update to the repository URL is intended to improve the
reliability of package installations in the daemonset's
configuration.
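The switch is typically a one-line rewrite of /etc/apk/repositories. A minimal sketch, assuming an Alpine base image; the mirror hostname below is illustrative, not necessarily the one adopted in this change:

```shell
# Rewrite the Alpine package index to point at a mirror.
# File contents and the mirror hostname are example values.
REPO_FILE=$(mktemp)
cat > "$REPO_FILE" <<'EOF'
https://dl-cdn.alpinelinux.org/alpine/v3.19/main
https://dl-cdn.alpinelinux.org/alpine/v3.19/community
EOF
sed -i 's|dl-cdn.alpinelinux.org|mirrors.aliyun.com|g' "$REPO_FILE"
cat "$REPO_FILE"
```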
- Introduce S3_PROVIDER variable in cm-script.yaml
- Update s3cmd configuration to include provider
- Modify daemonset.yaml to support tencent-gz1 and tencent-sh1 in node affinity
These changes allow the backup system to utilize multiple S3 providers, enhancing its compatibility and deployment options across different cloud environments.
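The affinity change can be sketched as follows; the `kubernetes.io/hostname` label key is an assumption based on common practice, and the surrounding DaemonSet fields are omitted:

```yaml
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/hostname   # assumed label key
                    operator: In
                    values:
                      - tencent-gz1
                      - tencent-sh1
```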
- Introduce S3_PROVIDER environment variable in daemonset.yaml
- Update secret.yaml to include provider information
This change allows the application to specify the S3 provider type, improving
flexibility in storage configuration. The new variable is sourced from the
existing s3-credentials secret, ensuring secure access to the provider
information.
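A minimal sketch of how the variable might be wired up in daemonset.yaml; the key name `provider` inside the `s3-credentials` secret is an assumption, since the commit message does not state it:

```yaml
env:
  - name: S3_PROVIDER
    valueFrom:
      secretKeyRef:
        name: s3-credentials
        key: provider   # assumed key name
```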
- Create ConfigMaps for backup configuration and scripts
- Define Secrets for S3 credentials
- Implement Role and RoleBinding for access control
- Set up a DaemonSet for running backup containers
- Add a CronJob to schedule backups daily
This commit establishes a comprehensive backup solution within the Kubernetes cluster, allowing for automated backups of specified directories to S3 storage. It includes necessary configurations and scripts to ensure proper execution and notification of backup status.
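The daily CronJob could look roughly like the sketch below; the schedule time, image, and command are placeholders, since the commit message does not state them:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: backup
spec:
  schedule: "0 0 * * *"   # daily; the exact time is a placeholder
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: backup-trigger
              image: alpine:3.19                 # placeholder image
              command: ["/scripts/backup.sh"]    # placeholder script path
```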
- Correct the provisioner name in zgo-us1.yaml to reference the
provisioner actually deployed in the cluster.
- Update node selector in multiple statefulset manifests to
ensure they target the correct nodes.
- Change the storage class name from nfs-zgo-us1 to local-vkus2
for better resource management.
These changes ensure that the application components are
correctly configured to use the appropriate storage and
node resources, improving deployment stability and
performance.
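In a statefulset manifest the two changes land in the pod spec and the volume claim template; the node name, claim name, and size below are illustrative assumptions, with other fields omitted:

```yaml
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/hostname: vkvm-us2   # example target node, an assumption
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        storageClassName: local-vkus2   # was nfs-zgo-us1
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi   # placeholder size
```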
- Adjust the script to include logic for cleaning up old backups
- Add support for handling PostgreSQL data directories
- Ensure temporary directories are cleaned after use
This update improves the backup process by ensuring that old backups
are properly cleaned up to save storage space and enhance efficiency.
It also includes logic to handle specific cases for PostgreSQL
directories, providing a more robust backup operation.
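The cleanup logic can be sketched like this; the 7-day retention window and the archive naming pattern are assumptions, not values taken from the script:

```shell
# Prune old backup archives and guarantee temp-dir cleanup.
# Retention window and file pattern are illustrative assumptions.
BACKUP_DIR=$(mktemp -d)
TEMP_DIR=$(mktemp -d)
trap 'rm -rf "$TEMP_DIR"' EXIT   # temp dir is removed even on failure

touch -d '10 days ago' "$BACKUP_DIR/old.tar.gz"
touch "$BACKUP_DIR/new.tar.gz"

# Delete archives older than the retention window.
find "$BACKUP_DIR" -name '*.tar.gz' -mtime +7 -delete
ls "$BACKUP_DIR"
```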
- Implement node affinity to prevent scheduling on vkvm-us2
- Update affinity section in daemonset.yaml
- Ensure that the DaemonSet runs only on specific nodes
This change introduces a node affinity rule to the DaemonSet configuration,
allowing it to avoid scheduling on nodes labeled with `kubernetes.io/hostname`
set to `vkvm-us2`. This helps to ensure resource allocation and performance
by restricting the DaemonSet to the desired nodes.
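The exclusion described above maps to a `NotIn` expression in the DaemonSet's affinity section:

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: NotIn
              values:
                - vkvm-us2
```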
- Change TEMP_DIR to use a more structured temporary path
- Adjust rsync command to reflect the new directory structure
- Improve MSG_TEXT formatting for better clarity
- Add 'jq' to the dependencies for JSON processing
These changes address issues with the previous temporary directory location and enhance the output messages for a more informative backup success notification.
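One reason to pull in jq is safe JSON construction for the notification payload. A sketch of the idea; the field layout follows Feishu's text-message shape, but the script's actual MSG_TEXT schema is an assumption:

```shell
# Build the success message with jq so quoting and escaping are handled
# correctly. Field names here are assumptions, not the script's schema.
DURATION="42s"
SIZE="1.3G"
MSG_TEXT=$(jq -n --arg d "$DURATION" --arg s "$SIZE" \
  '{msg_type: "text", content: {text: ("Backup finished, duration: " + $d + ", size: " + $s)}}')
printf '%s\n' "$MSG_TEXT"
```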
- Include a webhook URL to send notifications after backups.
- Capture and log the duration and size of the backups.
- Create a new Kubernetes secret for storing the Feishu webhook URL.
- Enhance the backup script to notify users of backup success with details.
This change improves monitoring and user notification of backup events,
allowing for better awareness and response times in case of failure or
success of the backup processes.
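The webhook secret might be declared as below; the secret and key names are placeholders, and the URL token is elided:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: feishu-webhook   # placeholder name
type: Opaque
stringData:
  webhook-url: https://open.feishu.cn/open-apis/bot/v2/hook/<token>   # token elided
```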
- Create a new namespace for the backup system
- Implement a cron job for scheduled backups
- Add a daemon set to handle backup tasks across nodes
- Introduce necessary service accounts, roles, and role bindings
- Include environment variable handling and configuration via secrets and config maps
- Ensure the backup trigger-and-execution workflow runs efficiently
This commit establishes a new backup system that utilizes both a cron job and a daemon set to automate backups. It organizes the configurations and credentials needed for S3-compatible storage, allowing for seamless backup management across the specified nodes in the Kubernetes cluster.
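The RBAC pieces typically look like the sketch below; the resource names, namespace, and granted verbs are assumptions, since the commit message does not list them:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: backup
  namespace: backup-system   # placeholder namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: backup
  namespace: backup-system
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]   # assumed minimal verbs
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: backup
  namespace: backup-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: backup
subjects:
  - kind: ServiceAccount
    name: backup
    namespace: backup-system
```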
- Create a cronjob to back up node1 data to node8
- Define schedule for daily backups at 3:00 AM
- Include error handling and notifications via Feishu
- Use SSH and rsync for secure and efficient data transfer
This commit introduces a new cronjob that automates the backup process
for node1 to node8, enabling easier management and recovery of data.
The setup includes necessary security measures and proper logging of backups,
ensuring smoother operation and notifications in case of failures.