Thursday 13 December 2012

Brief introduction into Materialized Views

A great Materialized Views article from http://uhesse.com/2009/07/08/brief-introduction-into-materialized-views/



This week, I am teaching an Oracle Database 11g Data Warehouse Administration course in Munich. One of the focus areas of that course is Materialized Views, and I have developed some examples for it that I would like to share with the Oracle community. Other topics of that four-day course are Partitioning, ETL and Parallelization.
Materialized Views can speed up queries (sometimes dramatically) while remaining transparent to those queries, much like indexes or partitioned tables: we do not have to modify our queries in order to benefit from these structures. Unlike an ordinary view, which is only a stored select statement that runs whenever the view is queried, a materialized view stores the result set of its select statement in a container table. Let me demonstrate the benefit. I have a (for my little database) relatively large table and run an aggregation query on it:
SQL> desc sales
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 PROD_ID                                   NOT NULL NUMBER
 CUST_ID                                   NOT NULL NUMBER
 TIME_ID                                   NOT NULL DATE
 CHANNEL_ID                                NOT NULL NUMBER
 PROMO_ID                                  NOT NULL NUMBER
 QUANTITY_SOLD                             NOT NULL NUMBER(10,2)
 AMOUNT_SOLD                               NOT NULL NUMBER(10,2)

SQL> select count(*) from sales;

 COUNT(*)
----------
 7350744

SQL> select bytes/1024/1024 as mb from user_segments where segment_name='SALES';

 MB
----------
 286

SQL> show sga

Total System Global Area  373293056 bytes
Fixed Size                  1219496 bytes
Variable Size             134218840 bytes
Database Buffers          234881024 bytes
Redo Buffers                2973696 bytes

SQL> set timing on
SQL> select channel_id,sum(amount_sold) from sales group by channel_id;

CHANNEL_ID SUM(AMOUNT_SOLD)
---------- ----------------
 2        210770739
 4        109654416
 3        463002085
 9       2219410.08

Elapsed: 00:00:04.51
As you can see, the aggregation takes around 5 seconds. Now I will speed that up using a Materialized View:
SQL> create materialized view mv1 enable query rewrite
 2  as select channel_id,sum(amount_sold) from sales group by channel_id;

Materialized view created.

Elapsed: 00:00:05.69
SQL> select channel_id,sum(amount_sold) from sales group by channel_id;

CHANNEL_ID SUM(AMOUNT_SOLD)
---------- ----------------
 2        210770739
 4        109654416
 3        463002085
 9       2219410.08

Elapsed: 00:00:00.01
The very same statement now takes way less time! Why is that so?
SQL> set autotrace on explain
SQL> select channel_id,sum(amount_sold) from sales group by channel_id;

CHANNEL_ID SUM(AMOUNT_SOLD)
---------- ----------------
 2        210770739
 4        109654416
 3        463002085
 9       2219410.08

Execution Plan
----------------------------------------------------------
Plan hash value: 2958490228
--------------------------------------------------------------------------------
| Id  | Operation                    | Name | Rows  | Bytes | Cost (%CPU)| Time
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |      |     4 |   104 | 3   (0)| 00:00:01
|   1 |  MAT_VIEW REWRITE ACCESS FULL| MV1  |     4 |   104 | 3   (0)| 00:00:01
--------------------------------------------------------------------------------
Because the optimizer transparently rewrote the query against the Materialized View! But what if we now change the content of the base table sales?
SQL> update sales set amount_sold=amount_sold+1 where rownum<2;

1 row updated.

SQL> commit;

Commit complete.
The materialized view is now stale and will no longer be used for query rewrite (as we can already determine by query runtime):
SQL> select mview_name,staleness from user_mviews;

MVIEW_NAME                     STALENESS
------------------------------ -------------------
MV1                            NEEDS_COMPILE

SQL> set timing on
SQL> select channel_id,sum(amount_sold) from sales group by channel_id;

CHANNEL_ID SUM(AMOUNT_SOLD)
---------- ----------------
 2        210770739
 4        109654416
 3        463002086
 9       2219410.08

Elapsed: 00:00:04.52
In order to get statements rewritten against the materialized view again, we must refresh it by some method.
One method is on demand with a procedure call like in this example:
SQL> exec dbms_mview.refresh(list=>'MV1',method=>'C')

PL/SQL procedure successfully completed.

Elapsed: 00:00:04.62

SQL> select channel_id,sum(amount_sold) from sales group by channel_id;

CHANNEL_ID SUM(AMOUNT_SOLD)
---------- ----------------
 2        210770739
 4        109654416
 3        463002086
 9       2219410.08

Elapsed: 00:00:00.01
Now MV1 can be used again, as the runtime shows. That was a complete refresh, which can take a long time. We would like to get this done faster.
A convenient way to arrive at a fast-refreshable materialized view is to use the DBMS_ADVISOR package:
SQL> vari t varchar2(50)
SQL> begin
 2   dbms_advisor.tune_mview(task_name=>:t,
 3                           mv_create_stmt=>'create materialized view mv1'
 4                               || ' refresh fast as'
 5                               || ' select channel_id,sum(amount_sold)'
 6                               || ' from sales group by channel_id');
 7  end;
 8/  
PL/SQL procedure successfully completed.

SQL> set long 5000
SQL> select statement from user_tune_mview where task_name=:t order by action_id;

STATEMENT
--------------------------------------------------------------------------------
CREATE MATERIALIZED VIEW LOG ON "ADAM"."SALES" WITH ROWID, SEQUENCE ("CHANNEL_ID
","AMOUNT_SOLD")  INCLUDING NEW VALUES

ALTER MATERIALIZED VIEW LOG FORCE ON "ADAM"."SALES" ADD ROWID, SEQUENCE ("CHANNE
L_ID","AMOUNT_SOLD")  INCLUDING NEW VALUES

CREATE MATERIALIZED VIEW ADAM.MV1   REFRESH FAST WITH ROWID DISABLE QUERY REWRIT
E AS SELECT ADAM.SALES.CHANNEL_ID C1, SUM("ADAM"."SALES"."AMOUNT_SOLD") M1, COUN
T("ADAM"."SALES"."AMOUNT_SOLD") M2, COUNT(*) M3 FROM ADAM.SALES GROUP BY ADAM.SA
LES.CHANNEL_ID

STATEMENT
--------------------------------------------------------------------------------
DROP MATERIALIZED VIEW ADAM.MV1

SQL> exec dbms_advisor.delete_task(:t)
PL/SQL procedure successfully completed.
Usually, we need Materialized View Logs on the base tables and special expressions in the materialized view query to be able to do a fast refresh.
I simply reuse the generated statements from above:
SQL> CREATE MATERIALIZED VIEW LOG ON "ADAM"."SALES"
WITH ROWID, SEQUENCE ("CHANNEL_ID","AMOUNT_SOLD")
INCLUDING NEW VALUES;

Materialized view log created.

SQL> CREATE MATERIALIZED VIEW ADAM.MV1
 REFRESH FAST WITH ROWID
 enable QUERY REWRITE AS
 SELECT ADAM.SALES.CHANNEL_ID C1,
 SUM("ADAM"."SALES"."AMOUNT_SOLD") M1,
 COUNT("ADAM"."SALES"."AMOUNT_SOLD") M2,
 COUNT(*) M3
 FROM ADAM.SALES
 GROUP BY ADAM.SALES.CHANNEL_ID;

Materialized view created.
After modifying the base table sales again, I will try a fast refresh of MV1:
SQL> set timing off
SQL> update sales set amount_sold=amount_sold*1 where rownum<2;

1 row updated.

SQL> commit;

Commit complete.

SQL> select mview_name,staleness from user_mviews;

MVIEW_NAME                     STALENESS
------------------------------ -------------------
MV1                            NEEDS_COMPILE

SQL> set timing on
SQL> exec dbms_mview.refresh('MV1','F')

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.62
It worked! But do I always have to refresh manually? No, we can also schedule a regular refresh like this:
SQL>  alter materialized view mv1 refresh start with sysdate next sysdate + interval '1' minute;

Materialized view altered.

SQL> alter session set nls_date_format='hh24:mi:ss';

Session altered.

SQL> select MVIEW_NAME,LAST_REFRESH_TYPE,LAST_REFRESH_DATE from user_mviews;

MVIEW_NAME                     LAST_REF LAST_REF
------------------------------ -------- --------
MV1                            FAST     10:40:05

SQL> select MVIEW_NAME,LAST_REFRESH_TYPE,LAST_REFRESH_DATE from user_mviews;

MVIEW_NAME                     LAST_REF LAST_REF
------------------------------ -------- --------
MV1                            FAST     10:41:04
Of course, a one-minute refresh interval is extraordinarily short. Another possibility is to refresh the MV very conveniently on commit:
CREATE MATERIALIZED VIEW ADAM.MV1
 REFRESH FAST on commit 
 WITH ROWID
 enable QUERY REWRITE AS
 SELECT ADAM.SALES.CHANNEL_ID C1,
 SUM("ADAM"."SALES"."AMOUNT_SOLD") M1,
 COUNT("ADAM"."SALES"."AMOUNT_SOLD") M2,
 COUNT(*) M3
 FROM ADAM.SALES
 GROUP BY ADAM.SALES.CHANNEL_ID;

Materialized view created.

SQL> select MVIEW_NAME,LAST_REFRESH_TYPE,LAST_REFRESH_DATE from user_mviews;

MVIEW_NAME                     LAST_REF LAST_REF
------------------------------ -------- --------
MV1                            COMPLETE 11:16:28

SQL> update sales set amount_sold=amount_sold*1 where rownum<2;

1 row updated.

SQL> commit;

Commit complete.

SQL> select MVIEW_NAME,LAST_REFRESH_TYPE,LAST_REFRESH_DATE from user_mviews;

MVIEW_NAME                     LAST_REF LAST_REF
------------------------------ -------- --------
MV1                            FAST     11:19:56

Wednesday 12 December 2012

Tablespace size

spool $ORACLE_SBIN/logs/tablespace.log;
SELECT  'Tablespace '||df.tablespace_name "TBS"
        ,round((sum(nvl(fs.bytes,0))/ (df.bytes)) * 100) "%Free"
        ,round(((df.bytes - sum(nvl(fs.bytes,0))) / (df.bytes) ) * 100) "%Used"
        ,round(sum(nvl(fs.bytes/1024/1024,0))) "Mb Free"
        ,round(df.bytes/1024/1024 - sum(nvl(fs.bytes/1024/1024,0))) "Mb Used"
        ,df.autoextensible "AutoExtensible"
FROM   dba_free_space fs,
       dba_data_files df
WHERE  fs.file_id(+) = df.file_id
GROUP BY df.tablespace_name, df.file_id, df.bytes, df.autoextensible
ORDER BY df.file_id;
SELECT round(sum(round(sum(nvl(fs.bytes/1024/1024,0)))) / sum(round(sum(nvl(fs.bytes/1024/1024,0))) + round(df.bytes/1024/1024 - sum(nvl(fs.bytes/1024/1024,0)))) * 100, 0) "%Free"
,round(sum(round(df.bytes/1024/1024 - sum(nvl(fs.bytes/1024/1024,0)))) / sum(round(sum(nvl(fs.bytes/1024/1024,0))) + round(df.bytes/1024/1024 - sum(nvl(fs.bytes/1024/1024,0)))) * 100, 0) "%Used"
        ,sum(round(sum(nvl(fs.bytes/1024/1024,0)))) "Mb Free"
        ,sum(round(df.bytes/1024/1024 - sum(nvl(fs.bytes/1024/1024,0)))) "Mb Used"
        ,sum(round(sum(nvl(fs.bytes/1024/1024,0))) + round(df.bytes/1024/1024 - sum(nvl(fs.bytes/1024/1024,0)))) "Size"
FROM   dba_free_space fs,
       dba_data_files df
WHERE  fs.file_id(+) = df.file_id
GROUP BY df.tablespace_name, df.file_id, df.bytes, df.autoextensible
ORDER BY df.file_id;
spool off;

Validating Archives Between Production and Standby Databases


On the Production Database

#!/bin/ksh

#
# ---------------------------------------------------------------------------------------------------------------------------------
#

ORACLE_HOME=/<mount-point>/oracle/product/8.1.7
LD_LIBRARY_PATH=/<mount-point>/oracle/product/8.1.7/lib:/usr/ucblib:/usr/openwin/lib
JAVA_HOME=/<mount-point>/oracle/product/jre/1.1.8
ORACLE_SID=<SID>
ORACLE_SBIN=/<mount-point>/oracle/product/8.1.7/sbin
ORACLE_ALERT=/<mount-point>/oracle/admin/sudbp/bdump

PATH=/bin:/usr/bin:/etc:/usr/ccs/bin:/usr/openwin/bin:/usr/ucb:/usr/local/bin:/usr/sbin:/usr/X/bin:$JAVA_HOME
PATH=${PATH}:$ORACLE_HOME/bin:$ORACLE_HOME/sbin

host=`hostname`

if [ -z "$ORACLE_HOME" ] || [ -z "$LD_LIBRARY_PATH" ] || [ -z "$JAVA_HOME" ]\
|| [ -z "$ORACLE_SID" ] || [ -z "$ORACLE_ALERT" ] || [ -z "$ORACLE_SBIN" ]\
|| [ -z "$PATH" ] || [ -z "$host" ]; then 
        echo "$0 aborted - variable not defined"
        mailx -s "$0 aborted - variable not defined" $email < /dev/null
        exit 1
fi

USAGE="$0: Incorrect arguments, Usage: $0 <\"email1 email2 ... emailn\">" 

if [ `whoami` != 'oracle' ]; then    
        echo "$0 aborted - user `whoami` is incorrect, must be user oracle"
        mailx -s "$0 aborted - user `whoami` is incorrect, must be user oracle" $email < /dev/null
        exit 1
elif [ -z "$1" ]; then
        echo "$USAGE"
        mailx -s "$USAGE" "emergency@xyz.com" < /dev/null
        exit 1
fi

email=$1

echo "Executing $0 on $host"

#
# ---------------------------------------------------------------------------------------------------------------------------------
#

$ORACLE_HOME/bin/sqlplus internal<<!
set head off term off echo off feedback off show off trim off trims off verify off linesize 132 escape on;
spool $ORACLE_SBIN/logs/archives_production.log;
select 'SEQUENCE '||to_char(sequence#)||' DATE '||to_char(first_time,'yyyy/mm/dd hh:mi:ss') 
from v\$log_history where sequence# = (select max(sequence#)-1 from
v\$log_history);
spool off;
exit
!

grep "SEQUENCE" $ORACLE_SBIN/logs/archives_production.log | grep -v SQL\> | awk '{ print $2 }' > $ORACLE_SBIN/logs/archives_production.tmp
mv $ORACLE_SBIN/logs/archives_production.tmp $ORACLE_SBIN/logs/archives_production.log

if [ -s $ORACLE_SBIN/logs/archives_production.log ]; then
        /usr/bin/rcp -p $ORACLE_SBIN/logs/archives_production.log <Standby Database Hostname>:/<mount-point>/oracle/product/8.1.7/sbin/logs/archives_production.log
else
        sub="Script $0 on $host in database $ORACLE_SID detected missing archive logs"
        echo $sub
        mailx -s "$sub" $email < /dev/null
        exit 1
fi

exit 0 

On the Standby Database

#!/bin/ksh

#
# ---------------------------------------------------------------------------------------------------------------------------------
#

ORACLE_HOME=/<mount-point>/oracle/product/8.1.7
LD_LIBRARY_PATH=/<mount-point>/oracle/product/8.1.7/lib:/usr/ucblib:/usr/openwin/lib
JAVA_HOME=/<mount-point>/oracle/product/jre/1.1.8
ORACLE_SID=<SID>
ORACLE_SBIN=/<mount-point>/oracle/product/8.1.7/sbin
ORACLE_ALERT=/<mount-point>/oracle/admin/sudbp/bdump

PATH=/bin:/usr/bin:/etc:/usr/ccs/bin:/usr/openwin/bin:/usr/ucb:/usr/local/bin:/usr/sbin:/usr/X/bin:$JAVA_HOME
PATH=${PATH}:$ORACLE_HOME/bin:$ORACLE_HOME/sbin

host=`hostname`

if [ -z "$ORACLE_HOME" ] || [ -z "$LD_LIBRARY_PATH" ] || [ -z "$JAVA_HOME" ]\
|| [ -z "$ORACLE_SID" ] || [ -z "$ORACLE_ALERT" ] || [ -z "$ORACLE_SBIN" ]\
|| [ -z "$PATH" ] || [ -z "$host" ]; then 
        echo "$0 aborted - variable not defined"
        mailx -s "$0 aborted - variable not defined" $email < /dev/null
        exit 1
fi

USAGE="$0: Incorrect arguments, Usage: $0 <\"email1 email2 ... emailn\">" 

if [ `whoami` != 'oracle' ]; then    
        echo "$0 aborted - user `whoami` is incorrect, must be user oracle"
        mailx -s "$0 aborted - user `whoami` is incorrect, must be user oracle" $email < /dev/null
        exit 1
elif [ -z "$1" ]; then
        echo "$USAGE"
        mailx -s "$USAGE" "emergency@xyz.com" < /dev/null
        exit 1
fi

email=$1

echo "Executing $0 on $host"

#
# ---------------------------------------------------------------------------------------------------------------------------------
#

$ORACLE_HOME/bin/sqlplus internal<<!
set head off term off echo off feedback off show off trim off trims off verify off linesize 132 escape on;
spool $ORACLE_SBIN/logs/archives_standby.log;
select 'SEQUENCE '||to_char(sequence#)||' DATE '||to_char(first_time,'yyyy/mm/dd hh:mi:ss') 
from v\$log_history where sequence# = (select max(sequence#) from v\$log_history);
spool off;
exit
!

grep "SEQUENCE" $ORACLE_SBIN/logs/archives_standby.log | grep -v SQL\> | awk '{ print $2 }' > $ORACLE_SBIN/logs/archives_standby.tmp
mv $ORACLE_SBIN/logs/archives_standby.tmp $ORACLE_SBIN/logs/archives_standby.log

if [ -s $ORACLE_SBIN/logs/archives_production.log -a -s $ORACLE_SBIN/logs/archives_standby.log ]; then
 export prod=`/bin/cat $ORACLE_SBIN/logs/archives_production.log`
 export stby=`/bin/cat $ORACLE_SBIN/logs/archives_standby.log`
 export diffs=$(($prod-$stby))
 if [ $diffs -gt 2 ]; then 
         sub="Script $0 on $host in database $ORACLE_SID detected missing archive logs ($diffs)"
         echo $sub
         mailx -s "$sub" $email < /dev/null
         exit 1
 fi
else
        sub="Script $0 on $host in database $ORACLE_SID detected missing archive logs"
        echo $sub
        mailx -s "$sub" $email < /dev/null
        exit 1
fi

exit 0 

Tuesday 11 December 2012

UNIX for DBAs

Collect server statistics
$ sadc

Collect system activity
$ sar

The vmstat utility is used to report virtual memory statistics.

$ vmstat

The Unix (w)atch Command

$ w

iostat is used to report I/O statistics.
# iostat

Memory and CPUs

  • Display the top CPU users with the command ps -ef | sort +6 | tail -10.
    # ps -ef | sort +6 | tail -10
Display allocated memory and semaphores using ipcs -mqs and ipcs -pmb as shown below. Use ipcrm -m to deallocate memory segments. Execute oradebug ipc in svrmgrl to write the semaphores used by Oracle into a trace file.

 
# ipcs -mqs

  • Display RAM size with the command prtconf | grep "Memory Size".
     
  • $ prtconf|grep -i "Memory Size"
    Memory size: 1024 Megabytes
    
  • Display the number of CPUs using the psrinfo -v command.
    # psrinfo -v
    


    Display system details using the uname -X command.
    # uname -X
    


    Find the number of CPUs, their speed, and the memory size using dmesg | grep -i cpu and dmesg | grep -i mem.
    # dmesg | grep -i cpu
    

    # dmesg | grep -i mem

    To find the files that contain a string we can use:
    find . -exec grep -l "string to find" {} \;
    This starts the search from the current directory, looks for files that contain the specified string, and then prints their names.
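A quick illustration of the same pattern, run in a throwaway directory created with mktemp (the file names are made up for the demo), shows that only the matching file is printed:

```shell
#!/bin/sh
# Demo of find + grep -l in a scratch directory (hypothetical file names).
tmp=$(mktemp -d)
printf 'here is the string to find\n' > "$tmp/match.txt"
printf 'nothing relevant here\n'      > "$tmp/other.txt"
cd "$tmp"
find . -type f -exec grep -l "string to find" {} \;   # prints ./match.txt
```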

    Monitoring Oracle

    Display active dedicated connections as shown below. Run the same command without | wc -l to view the connected dedicated server processes themselves.
    ps -ef |grep $ORACLE_SID|grep -v grep|grep -v ora_|wc -l

    # ps -ef |grep $ORACLE_SID|grep -v grep|grep -v ora_
    oracle 24440 24439 0 16:35:42 ? 0:00 oracle<SID> (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))

    Examining Processes

    Find all Oracle processes. grep -v grep removes the grep command from the list of processes. Also note that the results of one unix shell command can be passed to the next using the | (pipe) command.
    ps -ef | grep ora_ | grep -v grep

    Check the listener and the agent.
    ps -ef | egrep 'lsnr|dbsnmp' | grep -v grep


    List available instances on a server:
    $ ps -ef | grep smon

    List available listeners on a server:
    $ ps -ef | grep listener | grep -v grep

    Find out file system usage for the Oracle archive destination:
    $ df -k | grep oraarch

    List the number of lines in the alert.log file:
    $ cat alert.log | wc -l

    List all Oracle error messages from the alert.log file:
    $ grep ORA- alert.log

    To view a crontab file, type:
    crontab -l

    Check Oracle Instance Availability
    The oratab file lists all the databases on a server:
    $ cat /var/opt/oracle/oratab

    File Permissions and Ownership:

    The chown command changes who owns the file. It does not actually change what permissions the file's owner has. So, for example, if you have a file set to 555 (r-xr-xr-x) and you want to write to it, then chown'ing it won't do you any good. Note that you can only chown a file if you already own it (so you're giving ownership to someone else), or if you are root.

    By the way, there's also a chgrp command, to change the group with which the file is associated. The example I used for chown holds here as well. You must own the file before you can chgrp it, and you can usually only chgrp the file to a group that you belong to. Root, of course, can chgrp any file to any group it wants.

    The chmod command changes the actual permissions on a file. So if you want to write to the file, you can add write permissions to it. I believe you can only chmod a file if you could chown it (again, meaning you must either own the file or be root).

    Which should you use? In the end, it depends on the circumstances. Each is appropriate at different times. You'll probably use chmod the most, followed by chown and finally chgrp.
     
    Permission values:
    read=4
    write=2
    execute=1

    Add the numbers to get the combination you want; for example, rw-r--r-- (owner read/write, group and others read-only) is 644.

    For example:

    $ chmod 777 <filename>                                  
    $ chown oracle:dba <filename>
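A minimal sketch of that octal arithmetic (the file here is just a scratch file created with mktemp): 644 = 4+2 for the owner plus 4 each for group and others, which ls shows as rw-r--r--.

```shell
#!/bin/sh
# Demonstrate that chmod 644 yields rw-r--r-- (owner rw, group/other read-only).
tmp=$(mktemp)
chmod 644 "$tmp"
ls -l "$tmp" | cut -c1-10   # prints -rw-r--r--
rm -f "$tmp"
```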
    Command to kill all Oracle processes for an instance:
    ps -ef | grep ora_ | grep -v grep | grep $ORACLE_SID | awk '{print $2}' | xargs kill -9
    Finding a file:
    find . -name "listener.ora" -print

    Disk space:
    # df -k

    The command below could constitute a basis for a disk space running-low warning script.
     
    # df -k | awk '{print $1 " " $5}'
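A minimal sketch of such a warning filter, assuming a 90-percent threshold (the sample df lines below are made up): it strips the % sign from the capacity column and prints only the filesystems above the limit.

```shell
#!/bin/sh
# Filter `df -k` output (read on stdin) and report filesystems over 90% full.
check_space() {
  awk -v limit=90 'NR > 1 { gsub(/%/, "", $5); if ($5 + 0 > limit) print $1, $5 "%" }'
}

# Canned example input instead of live `df -k` output:
printf '%s\n' \
  'Filesystem kbytes used avail capacity Mounted' \
  '/dev/dsk/c0d0s0 100 95 5 95% /u08/archive' \
  '/dev/dsk/c0d0s1 100 10 90 10% /u01' \
  | check_space                          # prints: /dev/dsk/c0d0s0 95%
```

In a real script you would replace the printf with `df -k | check_space` and mail the output if it is non-empty.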

    Find and Display Files

    • ls -lat - list recently touched files.
    • ls -lac - list recently changed files.
    • du -s * | sort -n - du -s shows file size only, piping to sort orders the results in ascending order.
      # du -s $ORACLE_HOME | sort -n
      1801978 /<mount-point-1>/oracle/product/8.1.7
      
    • du -sk $ORACLE_HOME - reports the size in kilobytes (1024-byte units) instead of 512-byte blocks.
      # du -sk $ORACLE_HOME
      900989  /<mount-point-1>/oracle/product/8.1.7
      
    • find . -print | xargs grep -i oracle - finds lines in files, in the current directory recursively, which have the word oracle in them.
    • find . -mtime -1 -print - files created or altered today.
    • find . -size +1048576c -print - files greater than 1Mb.
    • find . -mtime +7 -exec rm {} \; - remove files in the current directory created more than 7 days ago.
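The deletion pattern in the last item can be tried safely in a scratch directory first (the .trc file names here are made up); list the candidates before deleting them:

```shell
#!/bin/sh
# Dry-run of the "-mtime +7 -exec rm" pattern in a scratch directory.
tmp=$(mktemp -d)
touch -t 201201010000 "$tmp/old.trc"   # backdated file, well over 7 days old
touch "$tmp/new.trc"                   # fresh file
find "$tmp" -type f -mtime +7          # lists only old.trc
find "$tmp" -type f -mtime +7 -exec rm {} \;
ls "$tmp"                              # prints new.trc
```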

    Delete Old Trace and Audit Files

    #!/bin/ksh
    
    #
    # ---------------------------------------------------------------------------------------------------------------------------------
    #
    
    export ORACLE_BASE=/<mount-point-1>/oracle
    export ORACLE_HOME=$ORACLE_BASE/product/8.1.7
    export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/usr/openwin/lib
    export PATH=/bin:/usr/bin:/etc:/usr/ccs/bin:/usr/openwin/bin:/usr/ucb:/usr/local/bin:/usr/sbin:/usr/X/bin:$JAVA_HOME
    
    export host=`hostname`
    export email=$1
    export USAGE="$0: Incorrect arguments, Usage: $0 <\"email1 email2 ... emailn\">" 
    
    if [ -z "$email" ]; then
            echo "$USAGE"
            mailx -s "$USAGE" emergencyEMail@whatever.com < /dev/null
            exit 1
    elif [ `whoami` != 'oracle' ]; then    
            echo "$0 aborted - user `whoami` is incorrect, must be user oracle"
            mailx -s "$0 aborted - user `whoami` is incorrect, must be user oracle" $email < /dev/null
            exit 1
    fi
    
    if [ -z "$PATH" ] || [ -z "$host" ] || [ -z "$ORACLE_BASE" ] || [ -z "$ORACLE_HOME" ]; then 
            echo "$0 aborted - variable not defined"
            mailx -s "$0 aborted - variable not defined" $email < /dev/null
            exit 1
    fi
    
    echo "Executing $0 on $host"
    
    #
    # ---------------------------------------------------------------------------------------------------------------------------------
    #
    
    echo "Cleaning trace and transaction audit files for $1 up to 14 days ago"
    unalias rm
    find $ORACLE_BASE/admin/$1/bdump -name "*.trc" -mtime +14 | xargs rm -f
    find $ORACLE_BASE/admin/$1/udump -name "*.trc" -mtime +14 | xargs rm -f
    find $ORACLE_BASE/admin/$1/cdump -type f -mtime +14 | xargs rm -f
    find $ORACLE_HOME/rdbms/audit -name "*.aud" -mtime +14 | xargs rm -f
    alias rm='rm -i'
    

    ORA-600 Errors in the Alert Log

    The script below will detect and email ORA-00600 errors in the Oracle database alter log file.
    #!/bin/ksh
    
    #
    # ---------------------------------------------------------------------------------------------------------------------------------
    #
    
    export ORACLE_BASE=/<mount-point-1>/oracle
    export ORACLE_HOME=$ORACLE_BASE/product/8.1.7
    export TNS_ADMIN=$ORACLE_HOME/network/admin
    export ORACLE_SBIN=$ORACLE_HOME/sbin
    export PATH=/bin:/usr/bin:/etc:/usr/ccs/bin:/usr/openwin/bin:/usr/ucb:/usr/local/bin:/usr/sbin:/usr/X/bin:$JAVA_HOME
    
    export host=`hostname`
    export email=$1
    
    USAGE="$0: Incorrect arguments, Usage: $0 <email>" 
    
    if [ -z "$email" ]; then
            echo "$USAGE"
            mailx -s "$USAGE" emergencyEMail@whatever.com < /dev/null
            exit 1
    elif [ `whoami` != 'oracle' ]; then    
            echo "$0 aborted - user `whoami` is incorrect, must be user oracle"
            mailx -s "$0 aborted - user `whoami` is incorrect, must be user oracle" $email < /dev/null
            exit 1
    fi
    
    if [ -z "$PATH" ] || [ -z "$host" ] || [ -z "$ORACLE_BASE" ] || [ -z "$ORACLE_HOME" ]\
    || [ -z "$ORACLE_SID" ] || [ -z "$ORACLE_ALERT" ]; then 
            echo "$0 aborted - variable not defined"
            mailx -s "$0 aborted - variable not defined" $email < /dev/null
            exit 1
    fi
    
    echo "Executing $0 on $host"
    
    #
    # ---------------------------------------------------------------------------------------------------------------------------------
    #
    
    err="`tail -50 $ORACLE_ALERT/alert_$ORACLE_SID.log | grep ORA-00600 | grep -v grep | wc -l`"
    
    if [ $err -gt 0 ]; then
    
            tail -50 $ORACLE_ALERT/alert_$ORACLE_SID.log | grep ORA-00600 | grep -v grep > $ORACLE_SBIN/logs/ora600.log
            sub="Script $0 on $host detected ORA-00600 for SID $ORACLE_SID"
            echo $sub
            mailx -s "$sub" $email < $ORACLE_SBIN/logs/ora600.log
            exit 1
    
    fi
    
    exit 0
    

    Changing Strings in Files Using the sed Editor

    Be extremely careful with a script like the one below, which makes global changes. Note that sed must not write its output back to the file it is reading (that would truncate it), so the backup copy is used as the input:
    #!/bin/ksh
    for file in "$@"; do
        cp "$file" "$file.old"
        sed -e 's/find/replace/g' "$file.old" > "$file"
    done
    

    The following script checks all the databases listed in the oratab file and reports the status (up or down) of each:
    ###################################################################
    ## ckinstance.ksh ##
    ###################################################################
    ORATAB=/var/opt/oracle/oratab
    echo "`date`   "
    echo  "Oracle Database(s) Status `hostname` :\n"

    db=`egrep -i ":Y|:N" $ORATAB | cut -d":" -f1 | grep -v "\#" | grep -v "\*"`
    pslist="`ps -ef | grep pmon`"
    for i in $db ; do
      echo  "$pslist" | grep  "ora_pmon_$i"  > /dev/null 2>&1
      if (( $? )); then
            echo "Oracle Instance - $i:       Down"
      else
            echo "Oracle Instance - $i:       Up"
      fi
    done     


    Check Oracle Listener's Availability
    A similar script checks for the Oracle listener. If the listener is down, the  script will restart the listener:


    #######################################################################
    ## cklsnr.sh                                                         ##
    #######################################################################
    #!/bin/ksh
    DBALIST="primary.dba@company.com,another.dba@company.com"; export DBALIST
    cd /var/opt/oracle
    rm -f lsnr.exist
    ps -ef | grep mylsnr | grep -v grep  > lsnr.exist
    if [ -s lsnr.exist ]
    then
        echo
    else
    echo "Alert" | mailx -s "Listener 'mylsnr' on `hostname` is down" $DBALIST
        TNS_ADMIN=/var/opt/oracle; export TNS_ADMIN
        ORACLE_SID=db1; export ORACLE_SID  
        ORAENV_ASK=NO; export ORAENV_ASK
        PATH=$PATH:/bin:/usr/local/bin; export PATH
        . oraenv
        LD_LIBRARY_PATH=${ORACLE_HOME}/lib;export LD_LIBRARY_PATH
        lsnrctl start mylsnr
    fi


    Clean Up Old Archived Logs
    The following script cleans up old archive logs if the log file system  reaches 90-percent capacity:

    $ df -k | grep arch
    Filesystem                kbytes   used     avail    capacity  Mounted on
    /dev/vx/dsk/proddg/archive 71123968 30210248 40594232   43%  /u08/archive
    #######################################################################
    ## clean_arch.ksh                                                    ##
    #######################################################################
    #!/bin/ksh
    df -k | grep arch > dfk.result
    archive_filesystem=`awk '{ print $6 }' dfk.result`
    archive_capacity=`awk '{ print $5 }' dfk.result | sed 's/%//'`

    if [ $archive_capacity -gt 90 ]
    then
        echo "Filesystem ${archive_filesystem} is ${archive_capacity}% filled"
        # try one of the following options, depending on your needs:
        # delete old archive logs, or back them up first with tar or rman
        find $archive_filesystem -type f -mtime +2 -exec rm -r {} \;
    fi


    Check Tablespace Usage
    This script checks tablespace usage. If a tablespace has less than 10 percent free, it sends an alert e-mail.

    #####################################################################
    ## ck_tbsp.sh ##
    #####################################################################
    #!/bin/ksh
    sqlplus -s <<!
    oracle/$1@$2
    set feed off
    set linesize 100
    set pagesize 200
    spool tablespace.alert
    SELECT F.TABLESPACE_NAME,
           TO_CHAR ((T.TOTAL_SPACE - F.FREE_SPACE),'999,999') "USED (MB)",
           TO_CHAR (F.FREE_SPACE, '999,999') "FREE (MB)",
           TO_CHAR (T.TOTAL_SPACE, '999,999') "TOTAL (MB)",
           TO_CHAR ((ROUND ((F.FREE_SPACE/T.TOTAL_SPACE)*100)),'999')||' %' PER_FREE
    FROM   (
           SELECT       TABLESPACE_NAME,
                        ROUND (SUM (BLOCKS*(SELECT VALUE/1024
                                            FROM V\$PARAMETER
                                            WHERE NAME = 'db_block_size')/1024)
                               ) FREE_SPACE
           FROM DBA_FREE_SPACE
           GROUP BY TABLESPACE_NAME
           ) F,
           (
           SELECT TABLESPACE_NAME,
           ROUND (SUM (BYTES/1048576)) TOTAL_SPACE
           FROM DBA_DATA_FILES
           GROUP BY TABLESPACE_NAME
           ) T
    WHERE F.TABLESPACE_NAME = T.TABLESPACE_NAME
    AND (ROUND ((F.FREE_SPACE/T.TOTAL_SPACE)*100)) < 10;
    spool off
    exit
    !
    if [ `cat tablespace.alert | wc -l` -gt 0 ]
    then
        mailx -s "TABLESPACE ALERT for ${2}" $DBALIST < tablespace.alert
    fi


    The following script finds invalid database objects:

    #####################################################################         
    ## invalid_object_alert.sh ##
    #####################################################################         
    #!/bin/ksh
    . /etc/oracle.profile
    sqlplus -s <<!
    oracle/$1@$2
    set feed off
    set heading off
    column object_name format a30
    spool invalid_object.alert         
    SELECT  OWNER, OBJECT_NAME, OBJECT_TYPE, STATUS
    FROM    DBA_OBJECTS
    WHERE   STATUS = 'INVALID'
    ORDER BY OWNER, OBJECT_TYPE, OBJECT_NAME;
    spool off         
    exit
    !
    if [ `cat invalid_object.alert|wc -l` -gt 0 ]
    then
        mailx -s "INVALID OBJECTS for ${2}" $DBALIST < invalid_object.alert
    fi

    Monitor Users and Transactions (Dead Locks, et al)
    This script sends out an alert e-mail if a deadlock occurs:

    ###################################################################
    ## deadlock_alert.sh ##
    ###################################################################
    #!/bin/ksh
    . /etc/oracle.profile
    sqlplus -s <<!
    oracle/$1@$2
    set feed off
    set heading off
    spool deadlock.alert
    SELECT   SID, DECODE(BLOCK, 0, 'NO', 'YES' ) BLOCKER,
                  DECODE(REQUEST, 0, 'NO','YES' ) WAITER
    FROM     V$LOCK
    WHERE    REQUEST > 0 OR BLOCK > 0
    ORDER BY block DESC;
    spool off
    exit
    !
    if [ `cat deadlock.alert|wc -l` -gt 0 ]
    then
        mailx -s "DEADLOCK ALERT for ${2}" $DBALIST < deadlock.alert
    fi
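
    The V$LOCK query above flags blockers and waiters by SID. On 10g and later, V$SESSION also exposes a BLOCKING_SESSION column, which shows directly who is blocking whom; a sketch (column list is illustrative, adjust to taste):

```sql
-- Sessions that are currently blocked, and the SID blocking them (10g+):
SELECT sid, serial#, username, blocking_session, seconds_in_wait
FROM   v$session
WHERE  blocking_session IS NOT NULL;
```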

    dbverify utility

    dbv is a great command-line utility for checking corruption at the datafile or segment level.

    An example of a datafile-level check:

    dbv userid=bert/bert file=D:\Oracle\oradata\ORDB1\ABC.DBF blocksize=4096

    Segment level check:

    dbv userid=bert/bert segment_id=4.1024.10278

    To find the above segment information, run the following query:

    select tablespace_name, segment_name, TABLESPACE_ID, HEADER_FILE, HEADER_BLOCK
    from sys.sys_user_segs
    where tablespace_name='ABC' and SEGMENT_NAME like 'XYZ%';

    TABLESPACE_NAME  SEGMENT_NAME  TABLESPACE_ID  HEADER_FILE  HEADER_BLOCK
    ---------------  ------------  -------------  -----------  ------------
    ABC              XYZ                       4         1024         10278
    ABC              XYZ2                      4         1024         10534

    Block corruption can also be detected by querying the v$database_block_corruption dynamic performance view. To repair block corruption, the dbms_repair package can be used with Oracle 11g.
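
    For example, the corruption view can be queried directly, and a minimal DBMS_REPAIR sequence might look like the following sketch (the object SCOTT.EMP and the repair table name are placeholders; consult Oracle Support before repairing production data):

```sql
-- Blocks flagged as corrupt by RMAN backup/validate operations:
SELECT file#, block#, blocks, corruption_type
FROM   v$database_block_corruption;

-- Create the repair table, then check an object for corrupt blocks:
SET SERVEROUTPUT ON
BEGIN
  DBMS_REPAIR.ADMIN_TABLES(
    table_name => 'REPAIR_TABLE',
    table_type => DBMS_REPAIR.REPAIR_TABLE,
    action     => DBMS_REPAIR.CREATE_ACTION);
END;
/
DECLARE
  corrupt_cnt NUMBER;
BEGIN
  DBMS_REPAIR.CHECK_OBJECT(
    schema_name       => 'SCOTT',
    object_name       => 'EMP',
    repair_table_name => 'REPAIR_TABLE',
    corrupt_count     => corrupt_cnt);
  DBMS_OUTPUT.PUT_LINE('Corrupt blocks found: ' || corrupt_cnt);
END;
/
```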



    Non critical error ORA-48913 caught while writing to trace file

    Error message: ORA-48913: Writing into trace file failed, file size limit [10485760] reached

    The error text and the suggested action are:

    ORA-48913: Writing into trace file failed
    *Cause: An attempt was made to write into a trace file that exceeds the trace's file size limit.
    *Action: Increase the trace file size limit.

    The limit comes from the MAX_DUMP_FILE_SIZE initialization parameter, here set to 10240 OS blocks (10 MB, matching the 10485760 bytes in the error). Increase it, or set it to UNLIMITED:

    alter system set max_dump_file_size = 'UNLIMITED';

    Read ORA- errors from the alert log using a SQL statement:

    SQL> select ORIGINATING_TIMESTAMP, MESSAGE_TEXT from V$DIAG_ALERT_EXT where MESSAGE_TEXT like '%ORA-%';
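
    Since the alert log can be large, a time filter is usually added; for example, to see only the last 24 hours (a sketch):

```sql
SELECT originating_timestamp, message_text
FROM   v$diag_alert_ext
WHERE  message_text LIKE '%ORA-%'
AND    originating_timestamp > SYSTIMESTAMP - INTERVAL '1' DAY;
```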

    Wednesday 5 December 2012

    exp & expdp parameter gap mapping

    Original Export Parameter    Comparable Data Pump Export Parameter
    -------------------------    -------------------------------------
    FILE                         DUMPFILE
    LOG                          LOGFILE
    OWNER                        SCHEMAS
    USERID                       USERID
    GRANTS                       EXCLUDE=GRANT
    STATISTICS                   Not needed; statistics are always saved for tables.
    BUFFER                       Not needed; Data Pump Export and Import are self-tuning,
                                 so BUFFER is neither required nor supported.
    COMPRESS                     Not needed.
    CONSISTENT                   Not needed; use FLASHBACK_SCN or FLASHBACK_TIME for this
                                 functionality.
    CONSTRAINTS                  EXCLUDE=CONSTRAINT
    DIRECT                       Not needed; Data Pump Export automatically chooses the
                                 best method (direct path mode or external tables mode).
    FEEDBACK                     STATUS
    FILESIZE                     FILESIZE
    FLASHBACK_SCN                FLASHBACK_SCN
    FLASHBACK_TIME               FLASHBACK_TIME
    FULL                         FULL (an export with FULL=Y followed by an import with
                                 SCHEMAS=HR produces the same result as an export with
                                 SCHEMAS=HR followed by an import with FULL=Y)
    INDEXES                      EXCLUDE=INDEX
    PARFILE                      PARFILE
    QUERY                        QUERY
    ROWS=N                       CONTENT=METADATA_ONLY
    ROWS=Y                       CONTENT=ALL
    TABLES                       TABLES
    TABLESPACES                  TABLESPACES (same parameter; slightly different behavior)
    TRANSPORT_TABLESPACE         TRANSPORT_TABLESPACES (same parameter; slightly
                                 different behavior)
    TRIGGERS                     EXCLUDE=TRIGGER
    TTS_FULL_CHECK               TRANSPORT_FULL_CHECK
    VOLSIZE                      Not needed.
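
    To illustrate the mapping, here is a classic exp command next to a hypothetical Data Pump equivalent (the HR schema, file names, and the DATA_PUMP_DIR directory object are placeholders; the directory object must already exist):

```shell
# Original export utility:
exp hr/hr file=hr.dmp log=hr.log owner=hr

# Data Pump equivalent:
expdp hr/hr dumpfile=hr.dmp logfile=hr.log schemas=hr directory=DATA_PUMP_DIR
```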


    Imp & impdp parameter gap mapping

    Original Import Parameter    Comparable Data Pump Import Parameter
    -------------------------    -------------------------------------
    BUFFER                       Not needed.
    CHARSET                      Not needed.
    COMMIT                       Not supported.
    COMPILE                      Not supported.
    CONSTRAINTS                  EXCLUDE=CONSTRAINT
    DATAFILES                    TRANSPORT_DATAFILES
    DESTROY                      REUSE_DATAFILES
    FEEDBACK                     STATUS
    FILE                         DUMPFILE
    FILESIZE                     Not needed; the information is included in the dump
                                 file set.
    FROMUSER                     SCHEMAS
    FULL                         FULL
    GRANTS                       EXCLUDE=GRANT
    HELP                         HELP
    IGNORE                       TABLE_EXISTS_ACTION
    INDEXES                      EXCLUDE=INDEX
    INDEXFILE                    SQLFILE with INCLUDE=INDEX
    LOG                          LOGFILE
    PARFILE                      PARFILE
    RECORDLENGTH                 Not needed.
    RESUMABLE                    Not needed; automatically defaulted for privileged users.
    RESUMABLE_NAME               Not needed; automatically defaulted for privileged users.
    RESUMABLE_TIMEOUT            Not needed; automatically defaulted for privileged users.
    ROWS=N                       CONTENT=METADATA_ONLY
    ROWS=Y                       CONTENT=ALL
    SHOW                         SQLFILE
    SKIP_UNUSABLE_INDEXES        SKIP_UNUSABLE_INDEXES
    STATISTICS                   Not needed; if the source table has statistics, they
                                 are imported.
    STREAMS_CONFIGURATION        STREAMS_CONFIGURATION
    STREAMS_INSTANTIATION        Not needed.
    TABLES                       TABLES
    TABLESPACES                  Still exists, but some of its functionality is now
                                 performed using TRANSPORT_TABLESPACES.
    TOID_NOVALIDATE              Not needed; OIDs are no longer used for type validation.
    TOUSER                       REMAP_SCHEMA
    TRANSPORT_TABLESPACE         TRANSPORT_TABLESPACES (see command description)
    TTS_OWNERS                   Not needed; the information is stored in the dump
                                 file set.
    USERID                       Not needed; supplied as username/password when you
                                 invoke Import.
    VOLSIZE                      Not needed; tapes are not supported.
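
    Again as an illustration, a classic imp command with a schema remap next to a hypothetical Data Pump equivalent (users, file names, and the directory object are placeholders):

```shell
# Original import utility, remapping HR to HR2:
imp system/manager file=hr.dmp log=imp.log fromuser=hr touser=hr2

# Data Pump equivalent:
impdp system/manager dumpfile=hr.dmp logfile=imp.log remap_schema=hr:hr2 directory=DATA_PUMP_DIR
```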