Cron Jobs and Task Automation: Scheduling Tasks Like a Sysadmin

One of the most powerful aspects of Linux system administration is the ability to automate repetitive tasks. Whether you're backing up files, updating systems, monitoring services, or performing maintenance tasks, automation saves time, reduces errors, and ensures consistency. In this comprehensive guide, we'll explore cron jobs, systemd timers, and practical automation scripts.

Understanding Task Automation in Linux

Why Automate Tasks?

- Consistency: Tasks run the same way every time
- Reliability: No human error or forgotten tasks
- Efficiency: Free up time for more important work
- 24/7 Operations: Tasks can run when you're not available
- Scalability: Manage multiple systems efficiently

Types of Automation

    1. Time-based: Run tasks at specific times or intervals
    2. Event-based: Trigger tasks based on system events
    3. Conditional: Run tasks based on system state
    4. Chain automation: Link multiple tasks together

    Introduction to Cron: The Time-Based Task Scheduler

    What is Cron?

    Cron is a time-based job scheduler in Linux that runs tasks (called "cron jobs") at specified times and dates. It's perfect for automating routine maintenance tasks, backups, system monitoring, and more.

    How Cron Works

    1. Cron Daemon: The crond service runs continuously in the background
    2. Crontab Files: Store the schedule and commands for each user
    3. Cron Jobs: Individual tasks with their timing specifications
    4. Execution: Cron executes jobs at their scheduled times

    Cron Components

    bash
    # Check if cron is running
    sudo systemctl status cron     # Debian/Ubuntu
    sudo systemctl status crond    # RHEL/CentOS
    
    # Cron-related files and directories
    /etc/crontab                   # System-wide cron jobs
    /etc/cron.d/                   # Additional system cron jobs
    /etc/cron.daily/               # Daily scripts
    /etc/cron.hourly/              # Hourly scripts
    /etc/cron.weekly/              # Weekly scripts
    /etc/cron.monthly/             # Monthly scripts
    /var/spool/cron/crontabs/      # User crontab files
    /var/log/cron                  # Cron log file (RHEL/CentOS)
    /var/log/syslog                # Contains cron logs (Debian/Ubuntu)
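
    On Debian-based systems the drop-in directories above are processed by run-parts, which by default skips file names containing dots, so scripts placed there should be executable and named without an extension. A minimal sketch (the script name and contents are illustrative):

    bash
    #!/bin/bash
    # /etc/cron.daily/cleanup-tmp  (no .sh suffix, or run-parts would skip it)
    # Remove files in /tmp that have not been modified for more than 7 days
    find /tmp -type f -mtime +7 -delete

    # Install it with the right permissions:
    # sudo install -m 755 cleanup-tmp /etc/cron.daily/cleanup-tmp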

    Working with Crontab

    Basic Crontab Commands

    bash
    # Edit your crontab
    crontab -e
    
    # List your cron jobs
    crontab -l
    
    # Remove all your cron jobs
    crontab -r
    
    # Edit another user's crontab (as root)
    sudo crontab -e -u username
    
    # List another user's cron jobs
    sudo crontab -l -u username

    Crontab Syntax: The Five Fields

    plaintext
    * * * * * command-to-execute
    │ │ │ │ │
    │ │ │ │ └─── Day of week (0-7, Sunday = 0 or 7)
    │ │ │ └───── Month (1-12)
    │ │ └─────── Day of month (1-31)
    │ └───────── Hour (0-23)
    └─────────── Minute (0-59)

    Basic Cron Examples

    bash
    # Run every minute
    * * * * * /path/to/script.sh
    
    # Run at 2:30 AM every day
    30 2 * * * /path/to/backup.sh
    
    # Run at 9 AM on weekdays
    0 9 * * 1-5 /path/to/workday-script.sh
    
    # Run every hour
    0 * * * * /path/to/hourly-task.sh
    
    # Run at midnight on the 1st of every month
    0 0 1 * * /path/to/monthly-cleanup.sh
    
    # Run every 15 minutes
    */15 * * * * /path/to/frequent-check.sh
    
    # Run twice a day (6 AM and 6 PM)
    0 6,18 * * * /path/to/twice-daily.sh

    Advanced Cron Scheduling

    bash
    # Special time strings
    @reboot     /path/to/startup-script.sh    # Run at startup
    @yearly     /path/to/annual-task.sh       # Run once a year (0 0 1 1 *)
    @annually   /path/to/annual-task.sh       # Same as @yearly
    @monthly    /path/to/monthly-task.sh      # Run once a month (0 0 1 * *)
    @weekly     /path/to/weekly-task.sh       # Run once a week (0 0 * * 0)
    @daily      /path/to/daily-task.sh        # Run once a day (0 0 * * *)
    @midnight   /path/to/midnight-task.sh     # Same as @daily
    @hourly     /path/to/hourly-task.sh       # Run once an hour (0 * * * *)
    
    # Complex scheduling examples
    # Run every weekday at 8:30 AM
    30 8 * * 1-5 /path/to/workday-reminder.sh
    
    # Run every 30 minutes during business hours
    */30 9-17 * * 1-5 /path/to/business-check.sh
    
    # Run on the 1st and 15th of every month
    0 0 1,15 * * /path/to/bimonthly-task.sh
    
    # Run every quarter (Jan, Apr, Jul, Oct) on the 1st at midnight
    0 0 1 1,4,7,10 * /path/to/quarterly-report.sh
    
    # Run every 6 hours
    0 */6 * * * /path/to/six-hour-task.sh

    Environment Variables in Crontab

    Cron runs with a minimal environment, so you often need to set variables:

    bash
    # Set environment variables at the top of crontab
    SHELL=/bin/bash
    PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
    MAILTO=admin@example.com
    HOME=/home/username
    
    # Or use full paths in commands
    0 2 * * * /usr/bin/python3 /home/user/scripts/backup.py
    
    # Source your environment
    0 2 * * * /bin/bash -l -c '/home/user/scripts/backup.sh'

    Practical Cron Job Examples

    System Maintenance Tasks

    bash
    # Daily system cleanup at 3 AM
    0 3 * * * /usr/bin/apt update && /usr/bin/apt autoremove -y
    
    # Weekly log rotation at Sunday 2 AM
    0 2 * * 0 /usr/sbin/logrotate /etc/logrotate.conf
    
    # Monthly disk usage report
    0 9 1 * * df -h | mail -s "Monthly Disk Usage Report" admin@example.com
    
    # Clean temporary files daily at midnight
    0 0 * * * find /tmp -type f -mtime +7 -delete
    
    # Update locate database daily
    30 2 * * * /usr/bin/updatedb

    Backup Tasks

    bash
    # Daily database backup at 1 AM
    0 1 * * * /usr/bin/mysqldump -u backup_user -p'password' database_name > /backup/db_$(date +\%Y\%m\%d).sql
    
    # Weekly full system backup
    0 2 * * 0 /usr/bin/rsync -av /home/ /backup/home_backup/
    
    # Daily web directory backup
    30 1 * * * /bin/tar -czf /backup/website_$(date +\%Y\%m\%d).tar.gz /var/www/html/
    
    # Hourly incremental backup during business hours
    0 9-17 * * 1-5 /home/user/scripts/incremental_backup.sh
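
    Note the backslash before each percent sign in the date format strings above: in a crontab, an unescaped % is turned into a newline and everything after the first % is passed to the command as standard input, so % must be written as \% inside cron commands:

    bash
    # Wrong: cron cuts the command at the first %
    # 0 1 * * * tar -czf /backup/site_$(date +%Y%m%d).tar.gz /var/www/html/

    # Right: escape each % with a backslash
    0 1 * * * tar -czf /backup/site_$(date +\%Y\%m\%d).tar.gz /var/www/html/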

    Monitoring and Alerts

    bash
    # Check disk space every hour
    0 * * * * /home/user/scripts/check_disk_space.sh
    
    # Monitor website uptime every 5 minutes
    */5 * * * * /usr/bin/curl -f http://example.com || echo "Website down!" | mail admin@example.com
    
    # Daily system health check
    0 8 * * * /home/user/scripts/system_health.sh | mail -s "Daily System Report" admin@example.com
    
    # Check for failed SSH logins every 10 minutes
    */10 * * * * /home/user/scripts/check_failed_logins.sh
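
    The helper scripts referenced above live in the user's scripts directory and are not shown here; as an example, a minimal check_disk_space.sh might look like the following sketch (the threshold and alert address are assumptions):

    bash
    #!/bin/bash
    # check_disk_space.sh - email an alert when any filesystem exceeds a usage threshold
    THRESHOLD=80
    ALERT_EMAIL="admin@example.com"

    # Print "use% mountpoint" pairs, skipping the header line
    df --output=pcent,target | tail -n +2 | while read -r pcent mount; do
        usage=${pcent%\%}                          # strip the trailing %
        [[ "$usage" =~ ^[0-9]+$ ]] || continue     # skip entries with no numeric percentage
        if [ "$usage" -gt "$THRESHOLD" ]; then
            echo "Disk usage on $mount is at ${usage}%" | \
                mail -s "Disk space alert on $(hostname)" "$ALERT_EMAIL"
        fi
    done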

    Creating Automated Backup Scripts

    Simple File Backup Script

    bash
    #!/bin/bash
    # simple_backup.sh - Basic file backup script
    
    # Configuration
    SOURCE_DIR="/home/user/documents"
    BACKUP_DIR="/backup/documents"
    DATE=$(date +"%Y%m%d_%H%M%S")
    BACKUP_NAME="documents_backup_$DATE"
    
    # Create backup directory if it doesn't exist
    mkdir -p "$BACKUP_DIR"
    
    # Create compressed backup
    echo "Starting backup of $SOURCE_DIR..."
    tar -czf "$BACKUP_DIR/$BACKUP_NAME.tar.gz" -C "$(dirname "$SOURCE_DIR")" "$(basename "$SOURCE_DIR")"
    
    if [ $? -eq 0 ]; then
        echo "Backup completed successfully: $BACKUP_DIR/$BACKUP_NAME.tar.gz"
        
        # Remove backups older than 30 days
        find "$BACKUP_DIR" -name "documents_backup_*.tar.gz" -mtime +30 -delete
        echo "Old backups cleaned up"
    else
        echo "Backup failed!" >&2
        exit 1
    fi
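
    To put this script on a schedule, an entry like the following could be added with crontab -e (the path and time are illustrative), redirecting output so failures can be reviewed later:

    bash
    # Run the document backup every night at 2:15 AM and keep a log of its output
    15 2 * * * /home/user/scripts/simple_backup.sh >> /var/log/simple_backup.log 2>&1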

    Advanced MySQL Database Backup Script

    bash
    #!/bin/bash
    # mysql_backup.sh - Advanced MySQL backup script
    
    # Configuration
    DB_USER="backup_user"
    DB_PASS="secure_password"
    DB_HOST="localhost"
    BACKUP_DIR="/backup/mysql"
    DATE=$(date +"%Y%m%d_%H%M%S")
    RETENTION_DAYS=7
    LOG_FILE="/var/log/mysql_backup.log"
    
    # Email settings
    ADMIN_EMAIL="admin@example.com"
    SMTP_SERVER="smtp.example.com"
    
    # Functions
    log_message() {
        echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | tee -a "$LOG_FILE"
    }
    
    send_notification() {
        local subject="$1"
        local message="$2"
        echo "$message" | mail -s "$subject" "$ADMIN_EMAIL"
    }
    
    # Create backup directory
    mkdir -p "$BACKUP_DIR"
    
    # Get list of databases
    DATABASES=$(mysql -u"$DB_USER" -p"$DB_PASS" -h"$DB_HOST" -e "SHOW DATABASES;" | grep -Ev "(Database|information_schema|performance_schema|mysql|sys)")
    
    log_message "Starting MySQL backup process"
    
    # Backup each database
    for DB in $DATABASES; do
        log_message "Backing up database: $DB"
        
        mysqldump -u"$DB_USER" -p"$DB_PASS" -h"$DB_HOST" \
            --single-transaction \
            --routines \
            --triggers \
            --events \
            --hex-blob \
            "$DB" | gzip > "$BACKUP_DIR/${DB}_$DATE.sql.gz"
        
        if [ $? -eq 0 ]; then
            log_message "Successfully backed up database: $DB"
        else
            log_message "ERROR: Failed to backup database: $DB"
            send_notification "MySQL Backup Failed" "Failed to backup database: $DB"
        fi
    done
    
    # Cleanup old backups
    log_message "Cleaning up backups older than $RETENTION_DAYS days"
    find "$BACKUP_DIR" -name "*.sql.gz" -mtime +$RETENTION_DAYS -delete
    
    # Calculate backup sizes
    TOTAL_SIZE=$(du -sh "$BACKUP_DIR" | cut -f1)
    log_message "Backup process completed. Total backup size: $TOTAL_SIZE"
    
    # Send success notification
    send_notification "MySQL Backup Completed" "All databases backed up successfully. Total size: $TOTAL_SIZE"
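
    Putting the password on the command line (as this script and the earlier crontab example do) exposes it to other local users via the process list. A common alternative, sketched here with an assumed file path, is to keep the credentials in an option file readable only by the backup user and point the client tools at it:

    bash
    # /etc/automation/.my.cnf  (chmod 600, owned by the backup user)
    [client]
    user=backup_user
    password=secure_password
    host=localhost

    # --defaults-extra-file must be the first option on the command line
    mysqldump --defaults-extra-file=/etc/automation/.my.cnf --single-transaction database_name > backup.sql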

    Incremental Backup with Rsync

    bash
    #!/bin/bash
    # incremental_backup.sh - Incremental backup using rsync
    
    # Configuration
    SOURCE_DIRS=("/home/user/documents" "/home/user/projects" "/etc")
    BACKUP_ROOT="/backup/incremental"
    DATE=$(date +"%Y-%m-%d")
    CURRENT_BACKUP="$BACKUP_ROOT/current"
    DATED_BACKUP="$BACKUP_ROOT/backup-$DATE"
    LOG_FILE="/var/log/incremental_backup.log"
    EXCLUDE_FILE="/home/user/.backup_exclude"
    
    # (Re)create the rsync exclude list (overwritten on every run)
    cat > "$EXCLUDE_FILE" << 'EOF'
    *.tmp
    *.log
    .cache/
    .thumbnails/
    node_modules/
    .git/
    *.iso
    *.img
    EOF
    
    # Functions
    log_message() {
        echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | tee -a "$LOG_FILE"
    }
    
    # Create backup directories
    mkdir -p "$BACKUP_ROOT"
    
    log_message "Starting incremental backup"
    
    # If current backup exists, create hard-link copy for incremental
    if [ -d "$CURRENT_BACKUP" ]; then
        log_message "Creating incremental backup based on previous backup"
        # Dereference the "current" symlink so the previous tree (not the link itself) is hard-linked
        cp -al "$(readlink -f "$CURRENT_BACKUP")" "$DATED_BACKUP"
    fi
    
    # Perform rsync backup for each source directory
    for SOURCE in "${SOURCE_DIRS[@]}"; do
        if [ -d "$SOURCE" ]; then
            log_message "Backing up: $SOURCE"
            
            # Create destination directory
            DEST_DIR="$DATED_BACKUP$(dirname "$SOURCE")"
            mkdir -p "$DEST_DIR"
            
            # Perform rsync
            rsync -av \
                --delete \
                --exclude-from="$EXCLUDE_FILE" \
                "$SOURCE/" \
                "$DATED_BACKUP$SOURCE/"
            
            if [ $? -eq 0 ]; then
                log_message "Successfully backed up: $SOURCE"
            else
                log_message "ERROR: Failed to backup: $SOURCE"
            fi
        else
            log_message "WARNING: Source directory not found: $SOURCE"
        fi
    done
    
    # Update current backup link
    rm -f "$CURRENT_BACKUP"
    ln -s "$DATED_BACKUP" "$CURRENT_BACKUP"
    
    # Cleanup old backups (keep last 14 days)
    find "$BACKUP_ROOT" -maxdepth 1 -name "backup-*" -type d -mtime +14 -exec rm -rf {} \;
    
    log_message "Incremental backup completed"
    
    # Calculate backup size
    BACKUP_SIZE=$(du -sh "$DATED_BACKUP" | cut -f1)
    log_message "Backup size: $BACKUP_SIZE"

    System Monitoring Script

    bash
    #!/bin/bash
    # system_monitor.sh - Comprehensive system monitoring
    
    # Configuration
    REPORT_FILE="/tmp/system_report_$(date +%Y%m%d).txt"
    ADMIN_EMAIL="admin@example.com"
    DISK_THRESHOLD=80
    MEMORY_THRESHOLD=80
    CPU_THRESHOLD=80
    
    # Functions
    check_disk_usage() {
        echo "=== DISK USAGE ===" >> "$REPORT_FILE"
        df -h >> "$REPORT_FILE"
        echo "" >> "$REPORT_FILE"
        
        # Check for high disk usage
        df -h | awk 'NR>1 {print $5 " " $6}' | while read line; do
            USAGE=$(echo $line | awk '{print $1}' | sed 's/%//')
            PARTITION=$(echo $line | awk '{print $2}')
            
            if [ "$USAGE" -gt "$DISK_THRESHOLD" ]; then
                echo "WARNING: High disk usage on $PARTITION: $USAGE%" >> "$REPORT_FILE"
            fi
        done
        echo "" >> "$REPORT_FILE"
    }
    
    check_memory_usage() {
        echo "=== MEMORY USAGE ===" >> "$REPORT_FILE"
        free -h >> "$REPORT_FILE"
        echo "" >> "$REPORT_FILE"
        
        # Check memory usage percentage
        MEMORY_USAGE=$(free | awk 'NR==2{printf "%.0f", $3*100/$2}')
        if [ "$MEMORY_USAGE" -gt "$MEMORY_THRESHOLD" ]; then
            echo "WARNING: High memory usage: $MEMORY_USAGE%" >> "$REPORT_FILE"
        fi
        echo "" >> "$REPORT_FILE"
    }
    
    check_cpu_usage() {
        echo "=== CPU USAGE ===" >> "$REPORT_FILE"
        top -bn1 | head -20 >> "$REPORT_FILE"
        echo "" >> "$REPORT_FILE"
        
        # Check average CPU load
        LOAD_AVG=$(uptime | awk -F'load average:' '{print $2}' | awk '{print $1}' | sed 's/,//')
        CPU_CORES=$(nproc)
        CPU_USAGE=$(echo "$LOAD_AVG * 100 / $CPU_CORES" | bc -l | cut -d. -f1)
        
        if [ "$CPU_USAGE" -gt "$CPU_THRESHOLD" ]; then
            echo "WARNING: High CPU usage: $CPU_USAGE%" >> "$REPORT_FILE"
        fi
        echo "" >> "$REPORT_FILE"
    }
    
    check_services() {
        echo "=== CRITICAL SERVICES ===" >> "$REPORT_FILE"
        SERVICES=("ssh" "nginx" "mysql" "cron")
        
        for service in "${SERVICES[@]}"; do
            if systemctl is-active --quiet "$service"; then
                echo "$service: RUNNING" >> "$REPORT_FILE"
            else
                echo "$service: STOPPED" >> "$REPORT_FILE"
                echo "WARNING: Critical service $service is not running!" >> "$REPORT_FILE"
            fi
        done
        echo "" >> "$REPORT_FILE"
    }
    
    check_failed_logins() {
        echo "=== FAILED LOGIN ATTEMPTS ===" >> "$REPORT_FILE"
        # Note: this grep assumes ISO-style timestamps in auth.log; classic syslog format needs "$(date '+%b %e')" instead
        FAILED_LOGINS=$(grep "Failed password" /var/log/auth.log | grep "$(date +%Y-%m-%d)" | wc -l)
        echo "Failed login attempts today: $FAILED_LOGINS" >> "$REPORT_FILE"
        
        if [ "$FAILED_LOGINS" -gt 10 ]; then
            echo "WARNING: High number of failed login attempts: $FAILED_LOGINS" >> "$REPORT_FILE"
        fi
        echo "" >> "$REPORT_FILE"
    }
    
    # Initialize report
    echo "SYSTEM MONITORING REPORT - $(date)" > "$REPORT_FILE"
    echo "========================================" >> "$REPORT_FILE"
    echo "" >> "$REPORT_FILE"
    
    # Run checks
    check_disk_usage
    check_memory_usage
    check_cpu_usage
    check_services
    check_failed_logins
    
    # Add system uptime
    echo "=== SYSTEM UPTIME ===" >> "$REPORT_FILE"
    uptime >> "$REPORT_FILE"
    echo "" >> "$REPORT_FILE"
    
    # Send report via email
    mail -s "Daily System Report - $(hostname)" "$ADMIN_EMAIL" < "$REPORT_FILE"
    
    # Cleanup
    rm -f "$REPORT_FILE"

    Advanced Scheduling with Systemd Timers

    Introduction to Systemd Timers

    Systemd timers are a modern alternative to cron jobs, offering more flexibility and better integration with the systemd ecosystem.

    Advantages of Systemd Timers

    - Better logging: Integrated with journald
    - Dependencies: Can depend on other services
    - Resource control: CPU and memory limits
    - Failure handling: Restart policies and failure detection
    - Calendar events: More flexible time specifications

    Creating a Systemd Timer

    Step 1: Create the Service File

    bash
    # /etc/systemd/system/backup.service
    [Unit]
    Description=Daily Backup Service
    Wants=backup.timer
    
    [Service]
    Type=oneshot
    ExecStart=/home/user/scripts/backup.sh
    User=backup
    Group=backup
    
    [Install]
    WantedBy=multi-user.target

    Step 2: Create the Timer File

    bash
    # /etc/systemd/system/backup.timer
    [Unit]
    Description=Run backup daily
    Requires=backup.service
    
    [Timer]
    OnCalendar=daily
    Persistent=true
    RandomizedDelaySec=30m
    
    [Install]
    WantedBy=timers.target

    Step 3: Enable and Start the Timer

    bash
    # Reload systemd configuration
    sudo systemctl daemon-reload
    
    # Enable and start the timer
    sudo systemctl enable backup.timer
    sudo systemctl start backup.timer
    
    # Check timer status
    sudo systemctl status backup.timer
    
    # List all timers
    sudo systemctl list-timers

    Systemd Timer Calendar Events

    bash
    # Daily at 3 AM
    OnCalendar=*-*-* 03:00:00
    
    # Every 15 minutes
    OnCalendar=*:0/15
    
    # Weekdays at 9 AM
    OnCalendar=Mon..Fri 09:00
    
    # Monthly on the 1st at midnight
    OnCalendar=*-*-01 00:00:00
    
    # Every 6 hours
    OnCalendar=0/6:00:00
    
    # Specific date and time
    OnCalendar=2025-12-25 10:30:00
    
    # Multiple times
    OnCalendar=08:00
    OnCalendar=20:00
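
    On recent systemd releases, OnCalendar expressions can be checked before use with systemd-analyze, which prints the normalized form and the next elapse time:

    bash
    # Validate a calendar expression and show when it would next trigger
    systemd-analyze calendar "Mon..Fri 09:00"

    # Show the next several occurrences
    systemd-analyze calendar --iterations=5 "*-*-01 00:00:00"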

    Advanced Systemd Timer Example

    bash
    # /etc/systemd/system/system-monitor.service
    [Unit]
    Description=System Monitoring Service
    After=network.target
    
    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/system-monitor.sh
    User=monitor
    Group=monitor
    Environment=PATH=/usr/local/bin:/usr/bin:/bin
    WorkingDirectory=/home/monitor
    StandardOutput=journal
    StandardError=journal
    
    # Resource limits
    MemoryMax=256M
    CPUQuota=50%
    
    # Security settings
    NoNewPrivileges=true
    PrivateTmp=true
    ProtectSystem=strict
    ProtectHome=read-only
    ReadWritePaths=/var/log/monitoring
    
    [Install]
    WantedBy=multi-user.target

    bash
    # /etc/systemd/system/system-monitor.timer
    [Unit]
    Description=Run system monitoring every hour
    Requires=system-monitor.service
    
    [Timer]
    OnCalendar=hourly
    Persistent=true
    RandomizedDelaySec=300
    AccuracySec=1min
    
    [Install]
    WantedBy=timers.target

    Troubleshooting Automation Issues

    Debugging Cron Jobs

    bash
    # Check if cron is running
    sudo systemctl status cron
    
    # View cron logs
    sudo tail -f /var/log/syslog | grep CRON  # Debian/Ubuntu
    sudo tail -f /var/log/cron                # RHEL/CentOS
    
    # Test cron job manually
    # Run the exact command from your crontab to test
    
    # Common issues and solutions:
    # 1. PATH problems - use full paths
    # 2. Environment variables - set them in crontab
    # 3. Permissions - check file and directory permissions
    # 4. Output redirection - capture output for debugging
    
    # Example with debugging
    */5 * * * * /path/to/script.sh >> /tmp/script.log 2>&1
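
    A quick way to see exactly what environment cron gives your jobs is to dump it from a temporary entry and compare it with your interactive shell:

    bash
    # Temporary crontab entry: capture cron's environment once a minute
    * * * * * env > /tmp/cron_env.txt 2>&1

    # Then, in your shell, compare the two environments
    diff <(sort /tmp/cron_env.txt) <(env | sort)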

    Debugging Systemd Timers

    bash
    # Check timer status
    sudo systemctl status backup.timer
    
    # View timer logs
    sudo journalctl -u backup.timer -f
    
    # Check service logs
    sudo journalctl -u backup.service -f
    
    # List all timers with next run times
    sudo systemctl list-timers --all
    
    # Test service manually
    sudo systemctl start backup.service
    
    # Check service configuration
    sudo systemctl cat backup.service
    sudo systemctl cat backup.timer

    Common Automation Problems

    Environment Issues

    bash
    # Problem: Script works manually but fails in cron
    # Solution: Set environment variables
    
    # In crontab
    SHELL=/bin/bash
    PATH=/usr/local/bin:/usr/bin:/bin
    HOME=/home/username
    
    # Or in script
    export PATH="/usr/local/bin:/usr/bin:/bin"
    source ~/.bashrc

    Permission Issues

    bash
    # Problem: Permission denied errors
    # Solution: Check file permissions and ownership
    
    # Check script permissions
    ls -la /path/to/script.sh
    
    # Make script executable
    chmod +x /path/to/script.sh
    
    # Check directory permissions
    ls -ld /path/to/directory
    
    # Run cron job as specific user
    sudo crontab -e -u username

    Logging and Debugging

    bash
    # Add logging to scripts
    #!/bin/bash
    LOG_FILE="/var/log/myscript.log"
    
    log() {
        echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" >> "$LOG_FILE"
    }
    
    log "Script started"
    # ... rest of script
    log "Script completed"
    
    # Capture all output in cron
    0 2 * * * /path/to/script.sh >> /var/log/script.log 2>&1
    
    # Email output (if MAILTO is set)
    MAILTO=admin@example.com
    0 2 * * * /path/to/script.sh

    Best Practices for Task Automation

    1. Script Design Principles

    bash
    #!/bin/bash
    # follow-best-practices.sh
    
    # Set strict error handling
    set -euo pipefail
    
    # Configuration section at top
    CONFIG_FILE="/etc/myapp/config.conf"
    LOG_FILE="/var/log/myapp.log"
    LOCK_FILE="/var/run/myapp.lock"
    
    # Functions for common operations
    log() {
        echo "$(date '+%Y-%m-%d %H:%M:%S') [$1] $2" >> "$LOG_FILE"
    }
    
    error_exit() {
        log "ERROR" "$1"
        exit 1
    }
    
    # Lock file to prevent concurrent runs
    acquire_lock() {
        if [ -f "$LOCK_FILE" ]; then
            PID=$(cat "$LOCK_FILE")
            if ps -p "$PID" > /dev/null 2>&1; then
                error_exit "Script is already running (PID: $PID)"
            else
                log "WARN" "Removing stale lock file"
                rm -f "$LOCK_FILE"
            fi
        fi
        echo $$ > "$LOCK_FILE"
    }
    
    cleanup() {
        rm -f "$LOCK_FILE"
        log "INFO" "Script completed"
    }
    
    # Set trap for cleanup
    trap cleanup EXIT
    
    # Main script logic
    main() {
        log "INFO" "Script started"
        acquire_lock
        
        # Your automation logic here
        
        log "INFO" "All tasks completed successfully"
    }
    
    # Run main function
    main "$@"
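
    For simpler cases, the manual lock-file handling above can be replaced with flock from util-linux, which handles stale locks and races automatically. A sketch (lock file paths are illustrative):

    bash
    # In a crontab: run the script only if no other instance holds the lock; -n exits immediately instead of waiting
    */5 * * * * /usr/bin/flock -n /var/lock/myapp.lock /path/to/script.sh

    # Or near the top of the script itself:
    exec 200>/var/lock/myapp.lock
    flock -n 200 || { echo "Already running" >&2; exit 1; }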

    2. Monitoring and Alerting

    bash
    #!/bin/bash
    # monitoring-wrapper.sh - Wrapper for monitoring script execution
    
    SCRIPT_NAME="$1"
    EXPECTED_RUNTIME="$2"  # in minutes
    ADMIN_EMAIL="admin@example.com"
    
    if [ -z "$SCRIPT_NAME" ] || [ -z "$EXPECTED_RUNTIME" ]; then
        echo "Usage: $0 <script-path> <expected-runtime-minutes>"
        exit 1
    fi
    
    # Start monitoring
    START_TIME=$(date +%s)
    LOG_FILE="/var/log/automation/$(basename "$SCRIPT_NAME" .sh).log"
    
    # Run the script and capture output
    if timeout "${EXPECTED_RUNTIME}m" "$SCRIPT_NAME" >> "$LOG_FILE" 2>&1; then
        END_TIME=$(date +%s)
        RUNTIME=$((END_TIME - START_TIME))
        echo "Script completed successfully in ${RUNTIME}s" >> "$LOG_FILE"
    else
        EXIT_CODE=$?
        END_TIME=$(date +%s)
        RUNTIME=$((END_TIME - START_TIME))
        
        if [ $EXIT_CODE -eq 124 ]; then
            # Timeout
            echo "ALERT: Script $SCRIPT_NAME timed out after ${EXPECTED_RUNTIME} minutes" | \
                mail -s "Script Timeout Alert" "$ADMIN_EMAIL"
        else
            # Other failure
            echo "ALERT: Script $SCRIPT_NAME failed with exit code $EXIT_CODE" | \
                mail -s "Script Failure Alert" "$ADMIN_EMAIL"
        fi
    fi
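
    The wrapper might then be called from cron like this (the path and the 30-minute runtime budget are illustrative):

    bash
    # Run the nightly backup through the wrapper with a 30-minute timeout
    0 2 * * * /usr/local/bin/monitoring-wrapper.sh /home/user/scripts/backup.sh 30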

    3. Configuration Management

    bash
    # /etc/automation/global.conf
    # Global configuration for automation scripts
    
    # Email settings
    ADMIN_EMAIL="admin@example.com"
    SMTP_SERVER="localhost"
    
    # Backup settings
    BACKUP_ROOT="/backup"
    RETENTION_DAYS=30
    
    # Monitoring thresholds
    DISK_THRESHOLD=80
    MEMORY_THRESHOLD=80
    CPU_THRESHOLD=80
    
    # Paths
    SCRIPT_DIR="/usr/local/automation"
    LOG_DIR="/var/log/automation"
    LOCK_DIR="/var/run/automation"
    
    # Database settings
    DB_HOST="localhost"
    DB_USER="backup_user"
    DB_PASS_FILE="/etc/automation/.db_password"

    4. Testing Automation Scripts

    bash
    #!/bin/bash
    # test-automation.sh - Test framework for automation scripts
    
    TEST_DIR="/tmp/automation-test"
    SCRIPT_TO_TEST="$1"
    
    setup_test_environment() {
        mkdir -p "$TEST_DIR"/{data,backup,logs}
        echo "Test data" > "$TEST_DIR/data/testfile.txt"
    }
    
    cleanup_test_environment() {
        rm -rf "$TEST_DIR"
    }
    
    run_test() {
        local test_name="$1"
        local test_command="$2"
        
        echo "Running test: $test_name"
        if eval "$test_command"; then
            echo "✓ $test_name passed"
            return 0
        else
            echo "✗ $test_name failed"
            return 1
        fi
    }
    
    # Test cases
    test_script_exists() {
        [ -f "$SCRIPT_TO_TEST" ]
    }
    
    test_script_executable() {
        [ -x "$SCRIPT_TO_TEST" ]
    }
    
    test_script_runs() {
        "$SCRIPT_TO_TEST" --dry-run
    }
    
    # Run tests
    setup_test_environment
    
    TESTS_PASSED=0
    TESTS_FAILED=0
    
    for test in test_script_exists test_script_executable test_script_runs; do
        if run_test "$test" "$test"; then
            ((TESTS_PASSED++))
        else
            ((TESTS_FAILED++))
        fi
    done
    
    cleanup_test_environment
    
    echo "Tests passed: $TESTS_PASSED"
    echo "Tests failed: $TESTS_FAILED"
    
    [ $TESTS_FAILED -eq 0 ]

    Real-World Automation Examples

    Complete Backup Solution

    bash
    #!/bin/bash
    # enterprise-backup.sh - Enterprise-grade backup solution
    
    # Source configuration
    source /etc/automation/backup.conf
    
    # Global variables
    SCRIPT_NAME="$(basename "$0")"
    LOG_FILE="/var/log/automation/${SCRIPT_NAME%.sh}.log"
    LOCK_FILE="/var/run/automation/${SCRIPT_NAME%.sh}.lock"
    START_TIME=$(date +%s)
    
    # Functions
    log() {
        echo "$(date '+%Y-%m-%d %H:%M:%S') [$1] $2" | tee -a "$LOG_FILE"
    }
    
    send_notification() {
        local subject="$1"
        local message="$2"
        echo "$message" | mail -s "[$HOSTNAME] $subject" "$ADMIN_EMAIL"
    }
    
    acquire_lock() {
        if [ -f "$LOCK_FILE" ]; then
            local pid=$(cat "$LOCK_FILE")
            if ps -p "$pid" > /dev/null 2>&1; then
                log "ERROR" "Backup already running (PID: $pid)"
                exit 1
            fi
        fi
        echo $$ > "$LOCK_FILE"
    }
    
    cleanup() {
        rm -f "$LOCK_FILE"
        local end_time=$(date +%s)
        local runtime=$((end_time - START_TIME))
        log "INFO" "Backup completed in ${runtime}s"
    }
    
    backup_files() {
        log "INFO" "Starting file backup"
        
        for source in "${FILE_SOURCES[@]}"; do
            if [ -d "$source" ]; then
                local dest="$BACKUP_ROOT/files/$(basename "$source")_$(date +%Y%m%d)"
                log "INFO" "Backing up $source to $dest"
                
                rsync -av --delete "$source/" "$dest/" || {
                    log "ERROR" "Failed to backup $source"
                    return 1
                }
            fi
        done
    }
    
    backup_databases() {
        log "INFO" "Starting database backup"
        
        local db_backup_dir="$BACKUP_ROOT/databases/$(date +%Y%m%d)"
        mkdir -p "$db_backup_dir"
        
        mysql -u"$DB_USER" -p"$(cat "$DB_PASS_FILE")" -e "SHOW DATABASES;" | \
        grep -Ev "(Database|information_schema|performance_schema|mysql|sys)" | \
        while read database; do
            log "INFO" "Backing up database: $database"
            mysqldump -u"$DB_USER" -p"$(cat "$DB_PASS_FILE")" \
                --single-transaction \
                --routines \
                --triggers \
                "$database" | gzip > "$db_backup_dir/${database}.sql.gz" || {
                log "ERROR" "Failed to backup database: $database"
                return 1
            }
        done
    }
    
    cleanup_old_backups() {
        log "INFO" "Cleaning up old backups"
        find "$BACKUP_ROOT" -type f -mtime +$RETENTION_DAYS -delete
        find "$BACKUP_ROOT" -type d -empty -delete
    }
    
    # Main execution
    trap cleanup EXIT
    
    log "INFO" "Starting backup process"
    acquire_lock
    
    # Perform backups
    if backup_files && backup_databases; then
        cleanup_old_backups
        
        # Calculate backup size
        backup_size=$(du -sh "$BACKUP_ROOT" | cut -f1)
        log "INFO" "Backup completed successfully. Total size: $backup_size"
        send_notification "Backup Successful" "Backup completed. Size: $backup_size"
    else
        log "ERROR" "Backup failed"
        send_notification "Backup Failed" "Backup process encountered errors. Check logs."
        exit 1
    fi

    System Maintenance Automation

    bash
    #!/bin/bash
    # system-maintenance.sh - Automated system maintenance
    
    # Configuration
    MAINTENANCE_LOG="/var/log/system-maintenance.log"
    REBOOT_REQUIRED_FILE="/var/run/reboot-required"
    UPDATE_AVAILABLE_FILE="/var/lib/update-notifier/updates-available"
    
    log() {
        echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" >> "$MAINTENANCE_LOG"
    }
    
    update_system() {
        log "Starting system updates"
        
        # Update package lists
        apt update >> "$MAINTENANCE_LOG" 2>&1
        
        # Upgrade packages
        DEBIAN_FRONTEND=noninteractive apt upgrade -y >> "$MAINTENANCE_LOG" 2>&1
        
        # Remove unnecessary packages
        apt autoremove -y >> "$MAINTENANCE_LOG" 2>&1
        
        # Clean package cache
        apt autoclean >> "$MAINTENANCE_LOG" 2>&1
        
        log "System updates completed"
    }
    
    cleanup_logs() {
        log "Starting log cleanup"
        
        # Rotate logs
        logrotate -f /etc/logrotate.conf >> "$MAINTENANCE_LOG" 2>&1
        
        # Clean old journal logs (keep 30 days)
        journalctl --vacuum-time=30d >> "$MAINTENANCE_LOG" 2>&1
        
        # Clean temporary files
        find /tmp -type f -atime +7 -delete
        find /var/tmp -type f -atime +7 -delete
        
        log "Log cleanup completed"
    }
    
    check_disk_usage() {
        log "Checking disk usage"
        
        # Check for partitions over 80% usage
        df -h | awk 'NR>1 {print $5 " " $6}' | while read line; do
            usage=$(echo $line | awk '{print $1}' | sed 's/%//')
            partition=$(echo $line | awk '{print $2}')
            
            if [ "$usage" -gt 80 ]; then
                log "WARNING: High disk usage on $partition: $usage%"
                echo "Warning: Partition $partition is $usage% full" | \
                    mail -s "High Disk Usage Alert" admin@example.com
            fi
        done
    }
    
    update_security_patches() {
        log "Installing security updates"
        
        # Install only security updates
        unattended-upgrade -d >> "$MAINTENANCE_LOG" 2>&1
        
        log "Security updates completed"
    }
    
    optimize_database() {
        log "Optimizing databases"
        
        # MySQL optimization
        if systemctl is-active --quiet mysql; then
            mysql -e "OPTIMIZE TABLE mysql.innodb_index_stats, mysql.innodb_table_stats;" \
                >> "$MAINTENANCE_LOG" 2>&1
        fi
        
        log "Database optimization completed"
    }
    
    # Main execution
    log "=== Starting system maintenance ==="
    
    update_system
    cleanup_logs
    check_disk_usage
    update_security_patches
    optimize_database
    
    # Check if reboot is required
    if [ -f "$REBOOT_REQUIRED_FILE" ]; then
        log "Reboot required after updates"
        echo "System requires reboot after maintenance" | \
            mail -s "Reboot Required" admin@example.com
    fi
    
    log "=== System maintenance completed ==="

    Key Takeaways

    - Cron Jobs: Time-based task scheduling with flexible timing options
    - Crontab Management: User and system-wide job scheduling
    - Systemd Timers: Modern alternative with better integration and features
    - Backup Automation: Essential for data protection and disaster recovery
    - Script Best Practices: Error handling, logging, and monitoring
    - Troubleshooting: Common issues and debugging techniques

    What's Next?

    With solid automation skills using cron jobs and systemd timers, you're ready to explore more advanced system administration topics like network configuration, service management, and performance monitoring. Automation is the foundation of scalable system administration.

    Quick Reference

    bash
    # Cron Management
    crontab -e          # Edit user crontab
    crontab -l          # List cron jobs
    crontab -r          # Remove all cron jobs
    
    # Cron Syntax
    # Min Hour Day Month DayOfWeek Command
    0 2 * * * /script   # Daily at 2 AM
    */15 * * * * /check # Every 15 minutes
    0 9 * * 1-5 /work   # Weekdays at 9 AM
    
    # Systemd Timers
    systemctl enable timer.timer    # Enable timer
    systemctl start timer.timer     # Start timer
    systemctl list-timers          # List all timers
    journalctl -u service.service  # View logs
    
    # Special Cron Times
    @reboot    # At startup
    @daily     # Once a day
    @weekly    # Once a week
    @monthly   # Once a month
    @yearly    # Once a year

    Remember: Good automation reduces manual work, prevents errors, and ensures consistency. Always test your scripts thoroughly and implement proper monitoring and alerting for critical automated tasks.

    ---

    🚀 Continue Your Linux Journey

    This is Part 11 of our comprehensive Linux mastery series - completing the Intermediate Skills section!

    Previous: Environment Variables - Master shell configuration and customization

    Next: System Logs Analysis - Learn advanced log monitoring and troubleshooting

    📚 Complete Linux Series Navigation

    Intermediate Skills Complete!

  • Part 6: Text Processing
  • Part 7: Package Management
  • Part 8: User & Group Management
  • Part 9: Process Management
  • Part 10: Environment Variables
  • Part 11: Automation with Cron (You are here)

    Advanced Skills Await!

  • Part 12: System Logs Analysis
  • Part 13: Network Configuration
  • Part 14: Systemd Deep Dive

    Ready for Advanced Topics? Continue with system logs analysis to master troubleshooting and monitoring!

    ---

    Ready to dive deeper into Linux system administration? Next, we'll explore advanced topics like network configuration, security hardening, and performance optimization to complete your journey to becoming a Linux expert.
