Integration Methods
Xi-Batch integrates with external systems via:
- Command-line tools
- Shell script interfaces
- API (requires network mode)
- File-based triggers
- Database interactions
- Email notifications
- Log monitoring
Command-Line Integration
Submit jobs programmatically:
```bash
#!/bin/bash
# Application triggers batch job
# From external system
btr -t "now + 5 minutes" /apps/process-order.sh
```
Query job status:
```bash
#!/bin/bash
# Check if job completed
JOB_NUM=$1
if btstat "$JOB_NUM" done; then
    echo "Job $JOB_NUM completed"
    # Trigger next step
    run-next-process.sh
fi
```
Monitor variables:
```bash
#!/bin/bash
# Wait for external system to set variable
while [ "$(btvar -v integration_status | awk '{print $2}')" != "Ready" ]; do
    sleep 30
done
echo "External system ready, proceeding..."
btr next-step.sh
```
Database Integration
Job writes to database:
```bash
#!/bin/bash
# process-orders.sh
# Connect to database
psql -d production <<EOF
UPDATE orders SET status='processing' WHERE batch_id=$BATCH_ID;
EOF
# Process orders
./process.py
# Update completion
psql -d production <<EOF
UPDATE orders SET status='complete' WHERE batch_id=$BATCH_ID;
UPDATE batch_runs SET end_time=now() WHERE id=$BATCH_ID;
EOF
```
Database triggers job:
```bash
#!/bin/bash
# poll-database.sh (runs every 5 minutes)
# Check for pending work (-tA: tuples only, unaligned, so COUNT is a bare number)
COUNT=$(psql -tA -d production -c "SELECT count(*) FROM orders WHERE status='pending'")
if [ "$COUNT" -gt 0 ]; then
    # Submit processing job if not already running
    if ! btjlist | grep -q "process-orders"; then
        btr -T "process-orders" process-orders.sh
    fi
fi
```
Schedule as:
```bash
btr -r Minutes:5 poll-database.sh
```
Email Integration
Job completion notifications:
```bash
#!/bin/bash
# backup.sh with email notification
# Perform backup
if /usr/local/bin/run-backup.sh; then
    echo "Backup completed successfully at $(date)" | \
        mail -s "Backup Success" ops@example.com
else
    echo "Backup failed at $(date)" | \
        mail -s "Backup FAILURE" ops@example.com
fi
```
Variable-triggered alerts:
```bash
#!/bin/bash
# monitor-errors.sh (runs every minute)
ERROR_COUNT=$(btvar -v error_count | awk '{print $2}')
if [ "$ERROR_COUNT" -gt 10 ]; then
    echo "Error count exceeded threshold: $ERROR_COUNT" | \
        mail -s "ALERT: High Error Count" ops@example.com
    # Reset counter
    btvar -s error_count 0
fi
```
File-Based Integration
Watch directory for trigger files:
```bash
#!/bin/bash
# file-watcher.sh
WATCH_DIR=/data/incoming
for file in "$WATCH_DIR"/*.trigger; do
    if [ -f "$file" ]; then
        # Extract job name from trigger file
        JOB=$(basename "$file" .trigger)
        # Submit job
        btr -T "$JOB-$(date +%Y%m%d-%H%M%S)" \
            "/apps/process-$JOB.sh"
        # Remove trigger
        rm "$file"
    fi
done
```
Schedule watcher:
```bash
btr -r Minutes:1 file-watcher.sh
```
Job creates completion marker:
```bash
#!/bin/bash
# data-export.sh
# Capture the date once so the marker name always matches the export file
DATE=$(date +%Y%m%d)
# Export data
./export-process.sh > "/exports/data-$DATE.csv"
# Create completion marker for external system
touch "/exports/data-$DATE.complete"
```
REST API Integration
Call external API from job:
```bash
#!/bin/bash
# notify-external-system.sh
WEBHOOK_URL="https://external-system/api/webhook"
# Send notification
curl -X POST "$WEBHOOK_URL" \
    -H "Content-Type: application/json" \
    -d "{\"status\":\"complete\",\"timestamp\":\"$(date -Iseconds)\"}"
```
External system triggers job:
```bash
#!/bin/bash
# api-trigger.sh (external system calls this)
# Receive parameters
BATCH_ID=$1
PRIORITY=$2
# Submit job
btr -p "$PRIORITY" \
    -A "batch_id = $BATCH_ID @ Job start" \
    process-batch.sh
```
Expose this script as an HTTP endpoint using your web server's CGI support or similar.
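As a concrete sketch of the CGI approach, the handler below parses `batch_id` and `priority` out of the query string and hands them to a submission script. All names here are illustrative (the script path, the `BTR_CMD` override, the URL layout); adapt them to your web server's CGI configuration.

```bash
#!/bin/bash
# Hypothetical CGI endpoint: an external system calls
#   http://host/cgi-bin/submit-batch.cgi?batch_id=42&priority=5
# and this script passes the parameters on to api-trigger.sh.
# BTR_CMD can be overridden so the handler is testable without Xi-Batch.
BTR_CMD=${BTR_CMD:-/apps/api-trigger.sh}

# Extract one key's value from a query string like "a=1&b=2"
query_param() {
    printf '%s\n' "$2" | tr '&' '\n' | sed -n "s/^$1=//p"
}

handle_request() {
    batch_id=$(query_param batch_id "$QUERY_STRING")
    priority=$(query_param priority "$QUERY_STRING")
    # CGI responses start with a header block and a blank line
    printf 'Content-Type: text/plain\r\n\r\n'
    if [ -n "$batch_id" ] && [ -n "$priority" ]; then
        "$BTR_CMD" "$batch_id" "$priority" && \
            echo "submitted batch $batch_id at priority $priority"
    else
        echo "missing batch_id or priority"
    fi
}

handle_request
```

Note that this sketch does no authentication; in practice you would restrict access to the endpoint at the web-server level.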
Monitoring Integration
Export metrics:
```bash
#!/bin/bash
# export-metrics.sh
# Get current queue statistics
PENDING=$(btjlist | grep -c "^ ")
RUNNING=$(btjlist | grep -c " Run ")
LOAD=$(btvar -v CLOAD | awk '{print $2}')
# Export to monitoring system
echo "xibatch.pending.jobs:$PENDING|g" | nc -u -w1 statsd-server 8125
echo "xibatch.running.jobs:$RUNNING|g" | nc -u -w1 statsd-server 8125
echo "xibatch.current.load:$LOAD|g" | nc -u -w1 statsd-server 8125
```
Schedule metrics export:
```bash
btr -r Minutes:1 export-metrics.sh
```
Log shipping:
```bash
#!/bin/bash
# ship-logs.sh
# Send scheduler log to central logging
tail -n 100 /var/spool/batch/btsched_reps | \
    logger -t xibatch -n log-server -P 514
```
Complex Workflow Integration
ETL Pipeline:
```bash
# Extract job
btr -t "01:00" \
    -A "extract_done = Yes @ Job completed" \
    extract.sh

# Transform job (waits for extract)
btr -c "extract_done = Yes" \
    -A "extract_done = No @ Job start" \
    -A "transform_done = Yes @ Job completed" \
    transform.sh

# Load job (waits for transform)
btr -c "transform_done = Yes" \
    -A "transform_done = No @ Job start" \
    -A "load_done = Yes @ Job completed" \
    -A "pipeline_complete = Yes @ Job completed" \
    load.sh

# Notification job (waits for pipeline)
btr -c "pipeline_complete = Yes" \
    -A "pipeline_complete = No @ Job start" \
    notify-completion.sh
```
External system monitors progress:
```bash
#!/bin/bash
# check-pipeline-status.sh (external monitoring)
EXTRACT=$(btvar -v extract_done | awk '{print $2}')
TRANSFORM=$(btvar -v transform_done | awk '{print $2}')
LOAD=$(btvar -v load_done | awk '{print $2}')
echo "Pipeline Status:"
echo "  Extract:   $EXTRACT"
echo "  Transform: $TRANSFORM"
echo "  Load:      $LOAD"
```
Error Handling Integration
Job failure notification:
```bash
#!/bin/bash
# critical-job.sh
error_handler() {
    status=$?
    # Log to external system (the ERR trap has no positional
    # arguments, so report the failing command's exit status)
    curl -X POST https://logging-service/error \
        -d "job=critical-job&error=$status"
    # Update tracking variable
    btvar -s critical_job_failed "Yes"
    exit 1
}
# Set trap for failures
trap 'error_handler' ERR
# Job logic
./run-critical-process.sh
```
Recovery job:
```bash
# Auto-recovery job
btr -c "critical_job_failed = Yes" \
    -A "critical_job_failed = No @ Job start" \
    recovery-job.sh
```
Best Practices
Use variables for state:
Variables provide reliable state sharing between Xi-Batch and external systems.
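For example, a simple handshake can be built on one shared variable: one side sets it with `btvar -s`, the other polls with `btvar -v` until the expected value appears. A minimal sketch, assuming the `btvar` output format used earlier in this chapter (variable name in column 1, value in column 2); the helper name and `POLL_INTERVAL` knob are illustrative:

```bash
# wait_for_var: poll a Xi-Batch variable until it holds the wanted
# value, giving up after a fixed number of checks.
POLL_INTERVAL=${POLL_INTERVAL:-5}

wait_for_var() {
    name=$1 want=$2 tries=$3
    i=0
    while [ "$i" -lt "$tries" ]; do
        val=$(btvar -v "$name" | awk '{print $2}')
        [ "$val" = "$want" ] && return 0
        i=$((i + 1))
        sleep "$POLL_INTERVAL"
    done
    return 1
}
```

A job would then gate on the external system with `wait_for_var integration_status Ready 20 || exit 1`.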
Implement timeouts:
When waiting for external systems:
```bash
TIMEOUT=300  # 5 minutes
START=$(date +%s)
# condition_not_met stands in for your actual readiness check
while condition_not_met; do
    if [ $(($(date +%s) - START)) -gt "$TIMEOUT" ]; then
        echo "Timeout waiting for external system"
        exit 1
    fi
    sleep 10
done
```
Log integration points:
Record when external systems interact:
```bash
echo "$(date): External API called" >> /var/log/xibatch-integration.log
```
Version control integration scripts:
Keep all integration code in git or similar.
Document external dependencies:
Maintain list of systems Xi-Batch integrates with.
Test error scenarios:
Verify graceful handling when external systems fail.
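One way to make failure handling testable is to route every external call through a small retry wrapper, then exercise it with a deliberately failing command. A sketch (the wrapper name is illustrative, not part of Xi-Batch):

```bash
# retry_external: run a command up to a fixed number of times with a
# pause between attempts, so transient external-system failures do not
# immediately fail the whole job.
retry_external() {
    attempts=$1   # maximum number of tries
    delay=$2      # seconds to wait between tries
    shift 2
    n=1
    until "$@"; do
        if [ "$n" -ge "$attempts" ]; then
            echo "external call failed after $n attempts" >&2
            return 1
        fi
        n=$((n + 1))
        sleep "$delay"
    done
    return 0
}
```

In a test run you can substitute the real call with `false` (always fails) or `true` (always succeeds) to confirm both paths behave as expected.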
Use API for complex integration:
For sophisticated integration, consider Xi-Batch API (requires network mode and separate documentation).