Friday, January 06, 2012

Display only the active archive log destinations

One thing I find annoying is when I want to see the archive log destinations in an Oracle database.
I usually want to see only those that are enabled and have a non-null value for the destination.

show parameter log_archive_dest shows more than I care to look at.

Try this:


select name, value
from v$parameter
where name = 'log_archive_dest'
and value is not null
union all
select p.name, p.value
from v$parameter p
where p.name like 'log_archive_dest%'
and p.name not like '%state%'
and p.value is not null
and 'enable' = (
   select lower(p2.value)
   from v$parameter p2
   where p2.name =  substr(p.name,1,instr(p.name,'_',-1)) || 'state' || substr(p.name,instr(p.name,'_',-1))
)
union all
select p.name, p.value
from v$parameter p
where p.name like 'log_archive_dest_stat%'
and lower(p.value) = 'enable'
and (
   select p2.value
   from v$parameter p2
   where name = substr(p.name,1,16) || substr(p.name,instr(p.name,'_',-1))
) is not null
/
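
As an aside, roughly the same information is also exposed through the v$archive_dest view, which carries the destination text and its status directly. Here is a minimal sketch (assuming you only care about destinations currently in the VALID state; other status values may also be of interest on your system):

select dest_name, destination, status
from v$archive_dest
where destination is not null
and status = 'VALID'
/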


Monday, December 05, 2011

SSH root attacks on the rise

This is not directly Oracle related, but probably still of interest.

SSH Password Brute Forcing may be on the Rise

Out of curiosity I pulled the ssh login attempts from /var/log/messages on an internet-facing server, and the data corresponds to what was shown in the article.

What was interesting was that all of the ssh attempts I saw were for root.  In the past when I have looked at these, there were a number of different accounts being attacked, but now the attacks are all for root.

Friday, July 15, 2011

Blogging at Pythian

Since I started working for Pythian at the beginning of the year, I have started blogging there as well.

First post is today:  Applying External Timing Data to Untimed Events

I may still post here from time to time. Work at Pythian is quite enjoyable, but it is always so busy that there is less time for blogging.  At least for me, anyway, as I have non-Oracle interests to attend to as well.

Saturday, November 27, 2010

You can always learn something new.

It’s high time for this blog to come off hiatus.  I really don’t know why I let it go so long, just preoccupied with work and extracurricular activities I guess.

One of those activities was to contribute two chapters to a new book from Apress, Pro Oracle SQL.  Though it was only two chapters, it did consume a significant amount of time.  Some folks seem to be able to bang out well written prose and code with seemingly little effort.  It seems that I labor over it more than most, at least it feels that way at times.

On to something new.  Not really new, but it was new to me the other day.  Or if it was not new, I had completely forgotten about it.

It has to do with the innocuous date formats used with to_date().  I ran into some unexpected behavior from to_date() while running one of the scripts used for the aforementioned book.
When logging into a database, part of my normal login includes setting the nls_date_format for my session:

 alter session set nls_date_format='mm/dd/yyyy hh24:mi:ss'  

The main purpose of doing so is that DBA scripts that include dates will display them in my preferred format, without resorting to to_date(to_char()) to get the preferred display while preserving the data type.
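
For example, with the session format set, a date displays the way I want by default; without it, an explicit to_char() mask is needed to get the same output. A quick sketch:

-- default display follows the session nls_date_format
select sysdate from dual;

-- without the session setting, a format mask is needed for the same display
select to_char(sysdate,'mm/dd/yyyy hh24:mi:ss') from dual;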

When writing scripts that may be used by others, or any circumstance where I cannot rely on a setting for nls_date_format, I will use to_char() and to_date() with format masks to ensure the script will run without error.

When developing scripts for use in published writing, I normally do not set nls_date_format for my sessions, but this time I had forgotten to disable it.

So, when double checking the scripts to be included with the book chapters, I was rather surprised to see that one of them did not work.

 SQL> l  
 1 select  
 2   to_date('01/01/2011 12:00:00','mm/dd/yyyy hh24:mi:ss')  
 3   , to_date('01/01/2011')  
 4* from dual; 
 SQL>  
 , to_date('01/01/2011')  
 *  
 ERROR at line 3:  
 ORA-01843: not a valid month  

The SQL session I was checking it from was connected to a completely new and different database, set up just for the purpose of verifying that the scripts all worked as I expected, but one script failed on the to_date().  At first I thought it was just due to not having a format mask specified in the second to_date(), but then immediately wondered why the script had always worked previously. You can probably guess why, though at first I did not understand what was occurring.

The new environment was not setting nls_date_format upon login.  I had inadvertently set up my initial test environment, where the scripts were developed, with nls_date_format='mm/dd/yyyy hh24:mi:ss'.

What surprised me was that to_date('01/01/2011') had worked properly without a specific date format mask, even though the date string did not fully match the nls_date_format.

The “new” bit is that as long as the date string corresponds to part of the session's nls_date_format setting, the conversion will work.

So, with nls_date_format set to 'mm/dd/yyyy hh24:mi:ss', we should expect to_date('01/01/2011') to succeed.
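
That is easy enough to verify. A minimal sketch of the successful case:

alter session set nls_date_format = 'mm/dd/yyyy hh24:mi:ss';

-- the literal matches only the leading mm/dd/yyyy portion of the format,
-- yet the conversion succeeds without an explicit mask
select to_date('01/01/2011') from dual;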

This can easily be tested by setting a more restrictive nls_date_format, and then attempting to use to_date() without a format mask.

 SQL> alter session set nls_date_format = 'mm/dd/yyyy';  
 Session altered.  
 SQL> select to_date('01/01/2011 12:00') from dual;  
 select to_date('01/01/2011 12:00') from dual  
 *  
 ERROR at line 1:  
 ORA-01830: date format picture ends before converting entire input string  

When I saw that error message, I then understood what was happening. to_date() could be used without a format mask, as long as the date corresponded to a portion of the nls_date_format.  When the specified date exceeded what could be described by nls_date_format, an ORA-01830 error would be raised.

In this sense it is much like number formats.  I was a little surprised that I didn’t already know this, or had forgotten it so completely.
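
By way of illustration (my own sketch of the analogy, with made-up values), to_number() with an explicit format mask behaves the same way: input that fits within the mask converts cleanly, while input that exceeds it raises an error.

-- two digits fit comfortably within a four-digit mask
select to_number('12','9999') from dual;

-- five digits exceed the mask and raise ORA-01722: invalid number
select to_number('12345','9999') from dual;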

But, here’s the real surprise.  The following to_date calls will also be correctly translated by nls_date_format.

 SQL> select to_date('Jan-01 2011') from dual;  
 TO_DATE('JAN-012011  
 -------------------  
 01/01/2011 00:00:00  
 1 row selected.  

 SQL> select to_date('Jan-01 2011 12:00:00') from dual;  
 TO_DATE('JAN-012011  
 -------------------  
 01/01/2011 12:00:00  
 1 row selected.  

This was quite unexpected.  It also is not new.  I tested it on various Oracle versions going back to 9.2.0.8, and it worked the same way on all of them.

There’s always something to learn when working with complex pieces of software such as Oracle, even something as seemingly simple as formatting dates.

Friday, March 05, 2010

Who's using a database link?

Every once in a while it is useful to find out which sessions are using a database link in an Oracle database. It's one of those things that you may not need very often, but when you do need it, it is usually rather important.

Unfortunately for those of us charged with the care and feeding of the Oracle RDBMS, this information is not terribly easy to track down.

Some years ago when I first had need to determine which sessions were at each end of a database link, I found a script that was supplied courtesy of Mark Bobak. When asked, Mark said he got the script from Tom Kyte. Yong Huang includes this script on his website, and notes that Mark further attributed authorship in Metalink Forum thread 524821.994. Yong has informed me that this note is no longer available.  I have found the script in other locations as well, such as Dan Morgan's website. So now you know the script's provenance.

Here's the script, complete with comments.  Following the script is an example of usage.


-- who is querying via dblink?
-- Courtesy of Tom Kyte, via Mark Bobak
-- this script can be used at both ends of the database link
-- to match up which session on the remote database started
-- the local transaction
-- the GTXID will match for those sessions
-- just run the script on both databases
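-- note: the X$ views referenced here are normally visible only to SYS,
-- so run this while connected as SYS (e.g. as sysdba)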

Select /*+ ORDERED */
substr(s.ksusemnm,1,10)||'-'|| substr(s.ksusepid,1,10)      "ORIGIN",
substr(g.K2GTITID_ORA,1,35) "GTXID",
substr(s.indx,1,4)||'.'|| substr(s.ksuseser,1,5) "LSESSION" ,
s2.username,
substr(
   decode(bitand(ksuseidl,11),
      1,'ACTIVE',
      0, decode( bitand(ksuseflg,4096) , 0,'INACTIVE','CACHED'),
      2,'SNIPED',
      3,'SNIPED',
      'KILLED'
   ),1,1
) "S",
substr(w.event,1,10) "WAITING"
from  x$k2gte g, x$ktcxb t, x$ksuse s, v$session_wait w, v$session s2
where  g.K2GTDXCB =t.ktcxbxba
and   g.K2GTDSES=t.ktcxbses
and  s.addr=g.K2GTDSES
and  w.sid=s.indx
and s2.sid = w.sid


Now let's take a look at the results of the script.

Logging on to DB1 as system, create a database link to db2 using the SCOTT account:

create database link scott_link connect to scott identified by "tiger" using 'db2';

Make sure it works:

system@db1 SQL> select sysdate from dual@scott_link;

SYSDATE
-------------------
03/05/2010 10:13:00

1 row selected.


Now logon to DB1 as sysdba and run who_dblink.sql:

sys@db1 SQL> @who_dblink

ORIGIN                GTXID                               LSESSION   USERNAME   S WAITING
--------------------- ----------------------------------- ---------- ---------- - ----------
oraserver.-21901      DB1.d6d6d69e.3.16.7190              500.15059  SYSTEM     I SQL*Net me

1 row selected.

Now do the same on DB2:

sys@db2 SQL> @who_dblink

ORIGIN                GTXID                               LSESSION   USERNAME   S WAITING
--------------------- ----------------------------------- ---------- ---------- - ----------
ordevdb01.-21903      DB1.d6d6d69e.3.16.7190             138.28152  SCOTT      I SQL*Net me

1 row selected.

How do you identify the session on the database where the database link connection was initiated?

Notice that the output from DB1 shows the PID in the ORIGIN column.  In this case it is 21901.

Running the following SQL on DB1 we can identify the session from which the SYSTEM user initiated the database link connection:

select
   b.username,
   b.sid SID,
   b.serial# SERIAL#,
   b.osuser,
   b.process
from v$session b,
     v$process d,
     v$sess_io e,
     v$timer
where
   b.sid = e.sid
   and  b.paddr = d.addr
   and  b.username is not null
   -- the next predicate is commented out so that all sessions
   -- are shown, whether they have done IO or not
   --and  (e.consistent_Gets + e.block_Gets) > 0
   -- uncomment to see only your own session
   --and userenv('SESSIONID') = b.audsid
order by
         b.username,
         b.osuser,
         b.serial#,
         d.spid,
         b.process,
         b.status,
         e.consistent_Gets,
         e.block_Gets,
         e.Physical_reads;


USERNAME      SID  SERIAL# OSUSER               PROCESS
---------- ------ -------- -------------------- ------------------------
SYS           507    12708 oracle               22917

SYSTEM        500    15059 oracle               21901

2 rows selected.

The session created using the SCOTT account via the database link can also be seen using the same query on DB2 and looking for the session with a PID of 21903:

USERNAME    SID SERIAL# OSUSER  PROCESS
---------- ---- ------- ------- ------------
SCOTT       138   28152 oracle  21903
SYS         147   33860 oracle  22991
            146   40204 oracle  24096


3 rows selected.
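
Since the client PID is already visible in the ORIGIN column of the who_dblink output, the lookup can also be narrowed to just that process. A minimal sketch, using the 21901 PID from the DB1 example above (v$session.process holds the client OS process id as a string):

select b.username, b.sid, b.serial#, b.osuser, b.process
from v$session b
where b.process = '21901';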

Tuesday, March 02, 2010

Treasure Trove of Oracle Security Documents

This morning I stumbled across a veritable treasure trove of Oracle Security documents at the web site of the Defense Information Systems Agency.

I've only spent a few minutes glancing through a small sampling of them, and they appear to be fairly comprehensive. When you consider that these are documents used to audit sensitive database installations, it makes a lot of sense that they would cross all the t's and dot all the i's.

The index of documents can be found here: DISA IT Security Documents

As you can see for yourself, there are documents covering security concerns for a number of other systems.

Friday, February 19, 2010

Cool but unknown RMAN feature

Unknown to me anyway until just this week.

Some time ago I read a post about RMAN on Oracle-L that detailed what seemed like a very good idea.

The poster's RMAN scripts were written so that the only connection made while taking backups was a local one, using only the control file for the RMAN repository:
rman target sys/manager nocatalog

After the backups were made, a connection was made to the RMAN catalog and a RESYNC CATALOG command was issued.

The reason for this was that if the catalog was unavailable for some reason, the backups would still succeed, which would not be the case with this command:


rman target sys/manager catalog rman/password@rcat

This week I found out this is not true.

Possibly this is news to no one but me, but I'm sharing anyway. :)

Last week I cloned an apps system and created a new OID database on a server. I remembered to do nearly everything, but I did forget to set up TNS so that the catalog database could be found.

After setting up the backups via NetBackup, the logs showed that there was an error condition, but the backup obviously succeeded:

archive log filename=/u01/oracle/oradata/oiddev/archive/oiddev_arch_1_294_709899427.dbf recid=232 stamp=710999909
deleted archive log
archive log filename=/u01/oracle/oradata/oiddev/archive/oiddev_arch_1_295_709899427.dbf recid=233 stamp=710999910
Deleted 11 objects


Starting backup at 16-FEB-10
released channel: ORA_DISK_1
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: sid=369 devtype=SBT_TAPE
channel ORA_SBT_TAPE_1: VERITAS NetBackup for Oracle - Release 6.0 (2008081305)
channel ORA_SBT_TAPE_1: starting full datafile backupset
channel ORA_SBT_TAPE_1: specifying datafile(s) in backupset
including current controlfile in backupset
channel ORA_SBT_TAPE_1: starting piece 1 at 16-FEB-10
channel ORA_SBT_TAPE_1: finished piece 1 at 16-FEB-10
piece handle=OIDDEV_T20100216_ctl_s73_p1_t711086776 comment=API Version 2.0,MMS Version 5.0.0.0
channel ORA_SBT_TAPE_1: backup set complete, elapsed time: 00:00:45
Finished backup at 16-FEB-10

Starting Control File and SPFILE Autobackup at 16-FEB-10
piece handle=c-3982952863-20100216-02 comment=API Version 2.0,MMS Version 5.0.0.0
Finished Control File and SPFILE Autobackup at 16-FEB-10

RMAN> RMAN>

Recovery Manager complete.

Script /usr/openv/netbackup/scripts/oiddev/oracle_db_rman.sh
==== ended in error on  Tue Feb 16 04:07:59 PST 2010  ====


That seemed rather strange, and it was happening in both of the new databases.
The key to this was to look at the top of the log file, where I found the following:

BACKUP_MODE: lvl_0
BACKUP_TYPE: INCREMENTAL LEVEL=0
ORACLE_SID : oiddev
PWD_SID    : oiddev
ORACLE_HOME: /u01/oracle/oas
PATH: /sbin:/usr/sbin:/bin:/usr/bin:/usr/X11R6/bin

Recovery Manager: Release 10.1.0.5.0 - Production

Copyright (c) 1995, 2004, Oracle.  All rights reserved.


RMAN;
connected to target database: OIDDEV (DBID=3982952863)

RMAN;
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-04004: error from recovery catalog database: ORA-12154: TNS:could not resolve the connect identifier specified

RMAN;
Starting backup at 16-FEB-10
using target database controlfile instead of recovery catalog
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: sid=369 devtype=SBT_TAPE
channel ORA_SBT_TAPE_1: VERITAS NetBackup for Oracle - Release 6.0 (2008081305)
channel ORA_SBT_TAPE_1: starting incremental level 0 datafile backupset

Notice the line near the bottom of the displayed output?

The one that says "using target database controlfile instead of recovery catalog"?

RMAN will go ahead with the backup of the database even though the connection to the catalog database failed.  This apparently only works when running in a scripted environment; when I tried connecting on the command line, RMAN would simply exit when the connection to the catalog could not be made.

The RMAN scripts are being run on a linux server in the following format:

$OH/bin/rman target sys/manager catalog rman/password@rcat <<-EOF >> $LOGFILE

rman commands go here

EOF


This was quite interesting to discover, and may be old news to many of you, but it was new to me.

This is not exactly a new feature either - one of the databases being backed up is 9.2.0.6. And of course there is now no need to update the backup scripts.