
Oracle Export Cleaner (Example Script)

This Linux shell script monitors regular Oracle database exports and cleans up old dump files while keeping a desired number of successful exports. There are two files: a config file containing schema names and their corresponding export paths, and the export cleaner script itself.
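The retention rule boils down to: list a schema's export files newest first, keep the two newest, delete the rest. A minimal, self-contained sketch of that rule (the directory and file names below are made up for the demo):

```shell
#!/bin/bash
# Demo of the keep-newest-2 retention rule on throwaway files.
demo="$(mktemp -d)"
cd "$demo" || exit 1

# create four fake export logs with distinct, increasing mtimes
for i in 1 2 3 4; do
    touch -d "2024-01-0${i}" "schema1_expdp_0${i}012024.log"
done

# newest first; skip the first two lines; delete whatever remains
ls -t schema1_expdp*.log | awk 'NR>2' | xargs -r rm -f

remaining="$(ls schema1_expdp*.log | wc -l)"
echo "$remaining"   # 2
```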

app.config file contents:
schema1_name,schema1_export_path
schema2_name,schema2_export_path
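The cleaner expects each day's export to produce files named schema_expdp_ddmmYYYY.dmp and schema_expdp_ddmmYYYY.log in the schema's export path. A hypothetical expdp invocation matching that convention might look like the following (DUMP_DIR, the credentials, and the schema name are illustrative assumptions, not part of the original setup):

```shell
#!/bin/bash
# Hypothetical nightly export job matching the cleaner's naming scheme:
# <schema>_expdp_<ddmmYYYY>.dmp and .log
today="$(date +%d%m%Y)"
schema=schema1
dumpfile="${schema}_expdp_${today}.dmp"
logfile="${schema}_expdp_${today}.log"

# echoed here so the sketch runs without an Oracle client installed;
# drop the echo to actually launch the export
echo expdp system/password schemas="$schema" directory=DUMP_DIR \
     dumpfile="$dumpfile" logfile="$logfile"
```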

export cleaner script:
#!/bin/bash

logfile=/root/scripts/export_cleaner.log
mailfile=/tmp/cleaner.txt

rm -f "$mailfile"

today="$(date +%d%m%Y)"
printf '%s Oracle Export Cleaner Log\n\n' "$today" >> "$mailfile"
echo "----- ${today} Export Logs -----" >> "$logfile"

# Each app.config line is "schema,path"; split on the first comma
while IFS=, read -r schema path
do
    jobname="${schema}_expdp_${today}"
    filename="${jobname}.dmp"
    logname="${jobname}.log"

    cd "$path" || { echo "$schema: cannot cd to $path" >> "$logfile"; continue; }

    if [ ! -f "$logname" ]
    then
        echo "$schema $logname does not exist" >> "$logfile"
        printf '%s %s does not exist\n\n' "$schema" "$logname" >> "$mailfile"
    else
        # expdp writes "successfully completed" on the last line of a good run
        is_job_completed=$(tail -1 "$logname" | grep "successfully completed")
        if [ -z "$is_job_completed" ]
        then
            echo "$schema $logname unsuccessful" >> "$logfile"
            printf '%s %s unsuccessful\n\n' "$schema" "$logname" >> "$mailfile"
        else
            echo "$schema $logname successful" >> "$logfile"
            printf '%s %s successful\n\n' "$schema" "$logname" >> "$mailfile"
            # keep the two newest logs and dumps, delete the rest
            ls -t "${schema}"_expdp*.log | awk 'NR>2' | xargs -r rm -f
            ls -t "${schema}"_expdp*.dmp | awk 'NR>2' | xargs -r rm -f
        fi
    fi
done < /root/scripts/app.config

mail -s "export cleaner" mail@example.com < "$mailfile"
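To run the cleaner unattended after the nightly exports, it can be added to root's crontab. The script path and the 03:00 schedule below are assumptions; adjust both to match your environment:

```shell
# root's crontab (crontab -e): run the cleaner daily at 03:00,
# after the nightly expdp jobs have finished
0 3 * * * /root/scripts/export_cleaner.sh
```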
