Oracle Server Architecture in Simple Words


In order to manage an Oracle database efficiently and intelligently, you need a sound knowledge of its underlying architecture. This knowledge pays off because it is directly linked to the DBA's job: the real work starts after the installation, when the DBA sets up the database and then maintains it. Naturally, a DBA cannot configure and maintain a database without knowing the ins and outs of its architecture. The following section provides a short overview of the Oracle architecture.

The Oracle Server can be divided into a physical portion and a logical portion. The physical portion is the Oracle database, which resides on disk; the logical portion is the instance, which resides in memory. The instance is basically the means of accessing the database.

Oracle Instance: In order to access the database, the Oracle Server starts some background processes and allocates some shared memory. Together, these background processes and shared memory structures make up the Oracle instance. The properties of the instance (i.e. the properties of the background processes and memory structures) are recorded in the parameter file as parameters with values. The Oracle Server reads these parameters to create the instance during startup.

Shared Memory Structures: The shared memory structure of the instance is called the System Global Area (SGA). It holds the most frequently used data and code, along with some control information. The SGA is divided into several components.

Database Buffer Cache: Commonly called the 'buffer cache', it stores the most recently and frequently used data blocks. Any data read from the database first comes into the buffer cache, and from there multiple users can access it.

Redo Log Buffer: It records the changes made to data and is used for recovery purposes.

Shared Pool: The part of the SGA that stores the most frequently and recently used SQL and PL/SQL code along with data dictionary information. The code (SQL, PL/SQL) is stored in the library cache of the shared pool, and data about schema objects is stored in the data dictionary cache of the shared pool.

Large Pool: It is an optional area and used to facilitate large jobs.

Java Pool: It is also an optional area used to facilitate Java code.

Streams Pool: It is also an optional area, used by Oracle Streams.
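
As a quick way of seeing how these areas are sized on a running instance, you can query the SGA views from SQL*Plus. This is only a minimal sketch; the exact components listed vary by version and configuration.

SQL> show sga
SQL> select name, bytes from v$sgainfo;
SQL> select component, current_size from v$sga_dynamic_components;

The first command gives a one-screen summary, v$sgainfo breaks the SGA down by area, and v$sga_dynamic_components shows the current sizes of the buffer cache, shared pool, large pool, java pool and streams pool.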

Background Processes: The background processes support the operation of the instance against the database; they are responsible for the efficient interaction between the instance's memory structures and the database. Some background processes are always present, and others are added as you configure more database features. Some common background processes are:

Database Writer: It takes data from buffer cache and writes it to data files.

Log Writer: It takes information from redo log buffer and writes it to online redo logs on disk.

Checkpoint: It ensures that, at regular intervals, all modified data from the buffer cache is written to the data files.

System Monitor: It performs crash recovery and some space management.

Process Monitor: It performs process recovery if a user process fails.

Archiver: It makes offline copies of online redo logs and helps to recover from media failure.

Private Memory Area: Besides the SGA, there is a private memory structure called the Program Global Area (PGA). It contains session information for a single server process. A server process is a foreground (shadow) process that acts on behalf of one or more users. If the database is in dedicated server mode, each user gets a separate server process; if it is in shared server mode, a pool of server processes services all user processes between them.
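
A minimal sketch of how you might look at the private memory side from SQL*Plus (the views are standard, though the statistics available vary by version): v$pgastat shows overall PGA usage, and the SERVER column of v$session shows whether each session is using a dedicated or shared server process.

SQL> select name, value from v$pgastat where name in ('total PGA allocated', 'total PGA inuse');
SQL> select username, server from v$session where username is not null;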


A Few Words on Oracle Licenses

Despite reams of legalese and countless blog/forum posts, basic Oracle RDBMS licensing still remains a mystery to many. While I won’t even begin to discuss components like Oracle Apps, hopefully this post may help you attain a basic understanding of Oracle, what you can license, and what it costs.
Editions

The first and most important thing to learn are the editions of Oracle RDBMS. These editions are:

Oracle Express Edition (XE) – A free use version of Oracle which can use a single CPU (though it can be installed on a server with as many CPUs as you want), 1GB RAM, and 11GB data.
Oracle Personal Edition (PE) – Single User, Single Machine development/deployment license which can use any SE1, SE, or EE feature with the exception of RAC and Management Packs.
Oracle Enterprise Edition (EE) – Full featured edition of Oracle RDBMS which allows all core features along with the option to purchase add-on features (like Partitioning) and Management Packs (like the Diagnostics Pack).
Oracle Standard Edition (SE) – A scaled-down edition of Oracle which can run on a server with a maximum of four CPU sockets. It cannot take advantage of add-ons or Management Packs. It does, however, include Oracle RAC as long as all nodes combined have no more than 4 sockets.
Oracle Standard Edition One (SE1) – A further reduced edition which shares the same features with SE but is limited to 2 CPU Sockets with no RAC option.

License Type

On top of knowing the editions, you also need to know about the type of license. The most common is a Processor based license, where your license count is based on the number of CPU Sockets/Cores and the CPU Type for EE or the number of CPU Sockets for SE and SE1. The other type is a Named User License, where you buy a number of seats on the database instead of licensing it for unlimited users. Please remember if you are pursuing this type of license: non-human processes are also named users. So any application server, script, or even a scheduled job is a ‘named user’.
Enterprise Edition Core Factor

For Enterprise Edition and all of its add-ons and management packs, there is a concept called “core factor” which is applied for licensing purposes. This factor is based on the type of CPU you are using in your server. For instance, if you are using a SUN M5000 with SPARC VII processors, the core factor is 0.75. That means 4 Quad-Core CPUs (16 cores) would equal 16 * 0.75 = 12 licenses. You would purchase 12 EE licenses to cover the server, and then purchase 12 licenses of whatever add-ons or management packs you might be using.

If, on the other hand, you used SPARC VII+ processors, the core factor is 0.5. The same CPU setup would only require 8 licenses in that case.

For the most part, x86/x86_64 architecture CPUs have a 0.5 core factor. A small number of processors are 0.25, but there is a reason: they are not very suitable for running Oracle, generally System On a Chip (SoC) solutions with a very high amount of hyperthreading.

You can find the full core factor table in the core factor document published on Oracle's site.
Special Cases

Two types of databases are exempt from licensing restrictions: RMAN catalogs and Grid/Cloud Control repositories. If one of these repositories is the only database on a server, that server does not have to be licensed. But if the server or those databases are used for any custom or other Oracle purpose, they must be licensed. Generally speaking, it can be beneficial to put these databases on the same server. See Oracle's licensing documentation under "Infrastructure Repository Databases" for details.
Add-Ons and Management Packs

Let’s get one thing out of the way: Standard Edition and Standard Edition One CAN NOT use any add-on or Management Pack features (except RAC for SE). That means you can’t even buy Diagnostics Pack for Oracle SE. The option does not exist. Based on licensing requirements, AWR/ASH/ADDM are completely off limits on an SE or SE1 database.

However, Enterprise Edition has options for add-ons like RAC, Partitioning, and Active DataGuard. These add-ons are extra cost and apply to any database on which they are used. For instance, if you want to use Enterprise Edition with Partitioning and RAC across four nodes, each of the four nodes must be fully licensed for EE, Partitioning, and RAC. If Oracle is running on it, you must license it.
Misconceptions

Your standby databases need to be licensed. I’ve heard a lot of mixed opinions on DataGuard and whether your standby database must be licensed. The definitive answer is YES. A DataGuard standby is running Oracle (in MOUNT mode), which means it must be fully licensed for all features.

This goes for Development, QA, and Testing servers as well. This one has a lot of misinformation surrounding it. While you can install Oracle on your PC and use it for your own development/deployment purposes with Oracle Personal Edition, you cannot have a dedicated production-cycle development or QA server without a license. If it is part of the development cycle, then it must be paid for. The only way around that is if it is single-machine, single-user, personal development.

One other big misconception is Oracle on a virtual server, like an ESX environment. Oracle does not recognize soft partitioning of CPUs. If you host Oracle on a VM in ESX, you must license the entire ESX server. (Note: some published articles describe ways of working with Oracle on this.)
Cost Cases

So let’s take a look at some costs. The license costs are:

Enterprise Edition – $47,500 per unit (sockets * cores per socket * core factor)
Standard Edition – $17,500 per unit (sockets)
Standard Edition One – $5,800 per unit (sockets)

You can find the full cost list on Oracle's site, which includes all license types, add-ons, and management packs.

2 Node RAC + DataGuard + AWR/ASH

In this example we are going to set up a 2 node RAC cluster with DataGuard to a standby 2 node RAC cluster. We also want AWR/ASH for diagnostics. Each server (4 total) will have 2 Intel Xeon X7560 processors (8 core).

Our costs for Enterprise Edition would be (in list prices):

4 servers * 2 sockets per server * 8 cores per socket * 0.5 core factor = 32 units
32 Units Enterprise Edition ($47,500ea) = $1,520,000
32 Units RAC ($23,000ea) = $736,000
32 Units Diagnostics Pack ($5,000ea) = $160,000

Our grand total for this setup in EE would be $2,416,000. Hopefully you have a good discount!

Now, if we wanted to do the same thing in Standard Edition, we would have to make some exceptions. DataGuard does not work on SE, so we would have to use a third party software like DBVisit or homegrown scripts. Also, AWR/ASH don’t work in SE and can’t be added, so you might have to fall back to Statspack and adapt with excellent tools like Tanel Poder’s Snapper. In return for these trade-offs, let’s look at pricing:

4 servers * 2 sockets per server = 8 units
8 units Standard Edition ($17,500ea) = $140,000

The total price for two 2-node RAC clusters with 2 sockets per server (you can't have more than 4 sockets in a single SE RAC cluster) is $140,000. Quite a big savings, and definitely enough left over to buy some extra utilities.

Production, QA, and Development with Partitioning and AWR

In this example we’ll have three databases which must be licensed as they are part of the development production cycle. Each server is a SUN M5000 with 4 Quad-Core SPARC VII processors.

Enterprise Edition:

3 servers * 4 sockets per server * 4 cores per socket * 0.75 core factor = 36 units
36 units Enterprise Edition ($47,500ea) = $1,710,000
36 units Partitioning ($11,500ea) = $414,000
36 units Diagnostics Pack ($5,000ea) = $180,000

The grand total is $2,304,000.

If we were able to forego Partitioning and Diagnostics Pack, the requirement in Standard Edition would be:

3 servers * 4 sockets per server = 12 units
12 units Standard Edition ($17,500ea) = $210,000

Conclusion

Remember, a lot of this is up to you and your sales rep or third-party reseller. Make sure you work out all the details and understand exactly what you can and can't deploy. It is highly recommended that you keep an eye on license use; the last thing you want is a licensing audit you are not prepared for, with database installations scattered all over the environment. This paragraph also serves as a DISCLAIMER: this post is based on my personal knowledge and references available online and is not a substitute for actual information from Oracle Corporation. Please check with your sales rep on all licensing questions.

As you can see in these prices, it is VERY important that you buy what you need and nothing more. Instead of going the whole hog and getting Active DataGuard, Diagnostics and Tuning Packs, Partitioning, etc. on every single server, really take a look and see what your requirements are. You can save a huge amount of money that way.

Also note that there are special types of licensing structures out there such as site licenses. These are special deals worked out with Oracle in order to license your entire enterprise for a certain period, and can often save you a lot of money on licensing depending on your plans. If you decide to pursue this kind of agreement with Oracle, it is important to really work with your team to decide what you need and what you don’t, as well as anticipated business requirements for the near and long-term future.

Ref:http://www.oraclealchemist.com/news/a-few-words-on-oracle-licenses/

=============================================================================================================================================

How does an instance register with the listener?

If you check the status of the listener, sometimes you will see the instance status as UNKNOWN and sometimes as READY. So what do these messages mean, and when does each appear?

Let's explore this now.

There are basically 2 ways of getting a database registered with the listener:

  1. Static Registration
  2. Dynamic Registration

Static Registration:

This is the older method of registering a database with the listener.

In this scenario the instance is listed in the SID_LIST section of the listener.ora file, like below:

LISTENER_bharatdb =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac1.localdomain)(PORT = 1522))
)
)

SID_LIST_LISTENER_bharatdb =
(SID_LIST =
(SID_DESC =
(SID_NAME = bharatdb)
(ORACLE_HOME = /u01/app/oracle/product/11.2.0/db_1)
)
)

Start the listener

[[email protected] admin]$ lsnrctl start LISTENER_bharatdb

LSNRCTL for Linux: Version 11.2.0.3.0 – Production on 22-AUG-2013 06:12:07
Copyright (c) 1991, 2011, Oracle.  All rights reserved.
Starting /u01/app/oracle/product/11.2.0/db_1/bin/tnslsnr: please wait…
TNSLSNR for Linux: Version 11.2.0.3.0 – Production
System parameter file is /u01/app/oracle/product/11.2.0/db_1/network/admin/listener.ora
Log messages written to /u01/app/oracle/product/11.2.0/db_1/log/diag/tnslsnr/rac1/listener_bharatdb/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=rac1)(PORT=1522)))
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=rac1.localdomain)(PORT=1522)))

STATUS of the LISTENER
————————
Alias                     LISTENER_bharatdb
Version                   TNSLSNR for Linux: Version 11.2.0.3.0 – Production
Start Date                22-AUG-2013 06:12:07
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/11.2.0/db_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/product/11.2.0/db_1/log/diag/tnslsnr/rac1/listener_bharatdb/alert/log.xml
Listening Endpoints Summary…
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=rac1)(PORT=1522)))
Services Summary…
Service “bharatdb” has 1 instance(s).
Instance “bharatdb”, status UNKNOWN, has 1 handler(s) for this service…
The command completed successfully

As you can see, the listener started and reports that it knows about a service named bharatdb, but its status is UNKNOWN.

Check connection to database:

[[email protected] admin]$ sqlplus [email protected]

SQL*Plus: Release 11.2.0.3.0 Production on Thu Aug 22 06:15:47 2013
Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Enter password:
ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
Linux-x86_64 Error: 2: No such file or directory
Process ID: 0
Session ID: 0 Serial number: 0

We got an error stating that the database isn't available.

Check for the pmon process to confirm it:

[[email protected] admin]$ ps -ef | grep pmon
oracle 3830 2218 0 06:16 pts/1 00:00:00 grep pmon

DATABASE IS DOWN !!!!!!

Now we understand why the listener reported that it knows the service even though its status is UNKNOWN: we have hardcoded the instance name in the listener.ora file, so the listener lists the service whether or not the instance is actually running.

Dynamic Registration:

Dynamically registering the instance with the listener helps us solve the above issue.

By default, an instance automatically registers with the default listener named LISTENER, which listens on port 1521.

With this, it is no longer necessary to list the database instance in the listener.ora file. Instead, the database instance contacts the listener directly and registers itself. The PMON process is responsible for dynamic registration.
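
Before walking through it, here is a minimal sketch of the commands involved (the service and listener names are whatever your environment uses): service_names shows which services PMON will try to register, local_listener shows where it will register them (empty means the default listener on port 1521), ALTER SYSTEM REGISTER asks PMON to register immediately instead of waiting for its next cycle, and lsnrctl services shows, on the server, which handlers the listener currently knows about.

SQL> show parameter service_names
SQL> show parameter local_listener
SQL> alter system register;

$ lsnrctl services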

Let's check this out.

I have removed my listener.ora by renaming it.

[[email protected] admin]$ pwd
/u01/app/oracle/product/11.2.0/db_1/network/admin

[[email protected] rac1.localdomain admin]$ ls -lrt
total 24
-rw-r–r–. 1 oracle oinstall  205 May 11  2011 shrept.lst
drwxr-xr-x. 2 oracle oinstall 1024 Aug 21 06:37 samples
-rw——-. 1 oracle oinstall  310 Aug 22 04:12 listener.ora_orig

NOTE: The default listener is able to run without a listener.ora file.

Start the listener

[[email protected] admin]$ lsnrctl start

LSNRCTL for Linux: Version 11.2.0.3.0 – Production on 22-AUG-2013 05:30:47
Copyright (c) 1991, 2011, Oracle.  All rights reserved.
Starting /u01/app/oracle/product/11.2.0/db_1/bin/tnslsnr: please wait…
TNSLSNR for Linux: Version 11.2.0.3.0 – Production
Log messages written to /u01/app/oracle/product/11.2.0/db_1/log/diag/tnslsnr/rac1/listener/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.0.0.31)(PORT=1521)))
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
————————
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 11.2.0.3.0 – Production
Start Date                22-AUG-2013 05:30:47
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Log File         /u01/app/oracle/product/11.2.0/db_1/log/diag/tnslsnr/10.0.0.31/listener/alert/log.xml
Listening Endpoints Summary…
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.0.0.31)(PORT=1521)))
The listener supports no services
The command completed successfully

Observe that the listener has started but doesn't know about any services (i.e. instances) yet.

Check the availability of the database.

[[email protected]] $ ps -ef | grep pmon

Database is down.

Start the database now

[[email protected]] sqlplus / as sysdba
Sql> startup

Now check the listener status again

[[email protected] admin]$ lsnrctl status

LSNRCTL for Linux: Version 11.2.0.3.0 – Production on 22-AUG-2013 05:34:48
Copyright (c) 1991, 2011, Oracle.  All rights reserved.
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
————————
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 11.2.0.3.0 – Production
Start Date                22-AUG-2013 05:30:47
Uptime                    0 days 0 hr. 4 min. 1 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Log File         /u01/app/oracle/product/11.2.0/db_1/log/diag/tnslsnr/rac1/listener/alert/log.xml
Listening Endpoints Summary…
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=rac1)(PORT=1521)))
Services Summary…
Service “bharatdb” has 1 instance(s).
Instance “bharatdb”, status READY, has 1 handler(s) for this service…
The command completed successfully

Now you can see that the listener knows about the service bharatdb and the status is shown as READY. This didn't come from the listener.ora file, because I have removed it.

Also notice that, unlike with static registration, this time the status is READY. The listener knows the instance is ready because the instance itself told the listener it was ready.

This works only if the listener is running on default port 1521.

Let's change the port number and verify dynamic registration.

$cat listener.ora
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac1.localdomain)(PORT = 1522))
)
)

Start the listener,

[[email protected] admin]$ lsnrctl start

LSNRCTL for Linux: Version 11.2.0.3.0 – Production on 22-AUG-2013 05:46:13
Copyright (c) 1991, 2011, Oracle.  All rights reserved.
Starting /u01/app/oracle/product/11.2.0/db_1/bin/tnslsnr: please wait…
TNSLSNR for Linux: Version 11.2.0.3.0 – Production
System parameter file is /u01/app/oracle/product/11.2.0/db_1/network/admin/listener.ora
Log messages written to /u01/app/oracle/product/11.2.0/db_1/log/diag/tnslsnr/rac1/listener/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=rac1)(PORT=1522)))
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=rac1.localdomain)(PORT=1522)))
STATUS of the LISTENER
————————
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 11.2.0.3.0 – Production
Start Date                22-AUG-2013 05:46:13
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/11.2.0/db_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/product/11.2.0/db_1/log/diag/tnslsnr/rac1/listener/alert/log.xml
Listening Endpoints Summary…
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=rac1)(PORT=1522)))
The listener supports no services
The command completed successfully

Start the database

SQL> startup
ORACLE instance started.
Total System Global Area 492781568 bytes
Fixed Size 2229504 bytes
Variable Size 343935744 bytes
Database Buffers 134217728 bytes
Redo Buffers 12398592 bytes
Database mounted.
Database opened.

Check the listener status,

[[email protected] admin]$ lsnrctl status

LSNRCTL for Linux: Version 11.2.0.3.0 – Production on 22-AUG-2013 05:46:35
Copyright (c) 1991, 2011, Oracle.  All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=rac1.localdomain)(PORT=1522)))
STATUS of the LISTENER
————————
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 11.2.0.3.0 – Production
Start Date                22-AUG-2013 05:46:13
Uptime                    0 days 0 hr. 0 min. 21 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/11.2.0/db_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/product/11.2.0/db_1/log/diag/tnslsnr/rac1/listener/alert/log.xml
Listening Endpoints Summary…
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=rac1)(PORT=1522)))
The listener supports no services
The command completed successfully

As you can see, the database was unable to locate the listener and couldn't register itself.

So how do we solve this?

We have 2 options:

Go with the static registration process, or
Set the LOCAL_LISTENER initialization parameter.

The LOCAL_LISTENER parameter:

Setting this parameter allows the database to register itself with a listener running on a non-default port.

This parameter either holds the listener address directly or specifies a network name that is resolved through tnsnames.ora.

Checking for this parameter in my database,

SQL> sho parameter db_name

NAME                                 TYPE        VALUE
———————————— ———– ——————————
db_name                              string      bharatdb

SQL> sho parameter local_listener

NAME                                 TYPE        VALUE
———————————— ———– ——————————
local_listener                       string

Here I don't have this parameter set, so let's set it.

Option 1:

Provide the network address details directly

SQL> alter system set local_listener='(ADDRESS = (PROTOCOL=TCP)(HOST=rac1)(PORT=1522))' scope=both;
System altered.

Option 2:

Set a value and resolve it through a tnsnames entry

SQL> alter system set local_listener=listener_bharatdb scope=both;
System altered.

Now resolve listener_bharatdb in tnsnames.ora

In tnsnames.ora:

[[email protected] admin]$ cat tnsnames.ora
listener_bharatdb=(ADDRESS = (PROTOCOL=TCP)(HOST=rac1)(PORT=1522))

Start the database & Check the listener status now,

[[email protected] admin]$ lsnrctl status

LSNRCTL for Linux: Version 11.2.0.3.0 – Production on 22-AUG-2013 05:59:43
Copyright (c) 1991, 2011, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=rac1.localdomain)(PORT=1522)))

STATUS of the LISTENER

Alias LISTENER
Version TNSLSNR for Linux: Version 11.2.0.3.0 – Production
Start Date 22-AUG-2013 05:46:13
Uptime 0 days 0 hr. 13 min. 29 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/oracle/product/11.2.0/db_1/network/admin/listener.ora
Listener Log File /u01/app/oracle/product/11.2.0/db_1/log/diag/tnslsnr/rac1/listener/alert/log.xml
Listening Endpoints Summary…
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=rac1)(PORT=1522)))
Services Summary…
Service “bharatdb” has 1 instance(s).
Instance “bharatdb”, status READY, has 1 handler(s) for this service…
The command completed successfully

Here also you can see the listener status as READY, because the database itself registered with the listener.

Another case,

If the database is started before the listener, the database cannot find any listener to register with. So we need to force a registration of the instance, as below.

Of course, the LOCAL_LISTENER parameter should be set for this to work when the listener is on a non-default port.

SQL> alter system register;
System altered.

So now, when you see an instance with status READY, you know that PMON is communicating with the listener. An instance with status UNKNOWN is statically registered.

–Ref: https://startupforce.wordpress.com/category/oracle-basics/

=============================================================================================================================================

AWR Reports

The AWR report scripts are available under $ORACLE_HOME/rdbms/admin. Below is a list of the important ones (a basic usage sketch follows the list):

awrrpt.sql — basic AWR report
awrsqrpt.sql — Standard SQL statement Report
awrddrpt.sql — Period diff on current instance
awrrpti.sql — Workload Repository Report Instance (RAC)
awrgrpt.sql — AWR Global Report (RAC)
awrgdrpt.sql — AWR Global Diff Report (RAC)
awrinfo.sql — Script to output general AWR information
awrblmig.sql — AWR Baseline Migrate
awrload.sql — AWR LOAD: load awr from dump file
awrextr.sql — AWR Extract
awrddinp.sql — Get inputs for diff report
awrddrpi.sql — Workload Repository Compare Periods Report
awrgdinp.sql — Get inputs for global diff reports
awrgdrpi.sql — Workload Repository Global Compare Periods Report
awrginp.sql — AWR Global Input
awrgrpti.sql — Workload Repository RAC (Global) Report
awrinpnm.sql — AWR INput NaMe
awrinput.sql — Get inputs for AWR report
awrsqrpi.sql — Workload Repository SQL Report Instance
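
As a minimal usage sketch, the basic report is generated by running the script from SQL*Plus and answering its prompts (report type, number of days of snapshots to list, begin and end snapshot IDs, and report name):

SQL> @?/rdbms/admin/awrrpt.sql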

=============================================================================================================================================

Recreating your Oracle Inventory

From 10g onwards, you can reverse engineer and recreate your Oracle inventory if it gets corrupted or accidentally deleted, thereby avoiding a time-consuming re-installation of the Oracle software or other unsupported tricks.

If the Oracle inventory is corrupted or missing, you generally get the error below when an opatch command is issued.

=============================================

[email protected]:/app/oracle$ opatch lsinventory
Invoking OPatch 11.2.0.1.6

Oracle Interim Patch Installer version 11.2.0.1.6
Copyright (c) 2011, Oracle Corporation. All rights reserved.

Oracle Home : /app/oracle/product/10.2/db
Central Inventory : /app/oracleai/oraInventory
from : /etc/oraInst.loc
OPatch version : 11.2.0.1.6
OUI version : 10.2.0.3.0
Log file location : /app/oracle/product/10.2/db/cfgtoollogs/opatch/opatch2011-12-27_13-19-08PM.log

OPatch failed to locate Central Inventory.
Possible causes are:
The Central Inventory is corrupted
The oraInst.loc file specified is not valid.
LsInventorySession failed: OPatch failed to locate Central Inventory.
Possible causes are:
The Central Inventory is corrupted
The oraInst.loc file specified is not valid.

OPatch failed with error code 73
[email protected]:/app/oracle$

=============================================


You may also get this error because of an incorrect inventory location, so it is a good idea to make sure the location of the inventory is specified correctly in one of the following files, depending on your OS:

/var/opt/oracle/oraInst.loc
/etc/oraInst.loc

Contents of oraInst.loc:

bash-3.2$ cat /etc/oraInst.loc
inventory_loc=/app/oraInventory
inst_group=dba

If the error occurred due to a missing or corrupt inventory, then you can recreate the inventory by following the steps below.

Backup your existing corrupted Oracle inventory if it exists.
Run the following OUI command from the Oracle home whose inventory is corrupt or missing:

cd $ORACLE_HOME/oui/bin

./runInstaller -silent -attachHome ORACLE_HOME="/app/oracle/product/10.2/db" ORACLE_HOME_NAME="Ora10202Home"

Note: Even though -attachHome was introduced with OUI version 10.1, it is documented with OUI 10.2 and higher.
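
Once the home is attached, a quick sanity check is to re-run the command that failed earlier; this is just a sketch using the same Oracle home path as the example above. If the inventory was recreated correctly, lsinventory should now list the Oracle home and its patches instead of failing with error code 73.

cd /app/oracle/product/10.2/db/OPatch
./opatch lsinventory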

=============================================================================================================================================

Convert non-asm standalone database to RAC with ASM – 11gR2

In this post we will see how to convert a standalone database to a RAC database with ASM.
The Setup

11gR2 GI has been installed and configured.
2 Oracle homes have been installed: one is RAC-aware and the other is for the standalone database. A standalone database has also been created.
The Oracle home and the Grid home have been installed by 2 separate users, oracle and grid, respectively.

Overview of the non-RAC environment:

Hostname   Database Name   Instance Name   Database Storage
rac1       racdb           racdb           ext3

Overview of the RAC environment:

Hostname   Database Name   Instance Name   Database Storage
rac1       racdb           racdb1          ASM
rac2       racdb           racdb2          ASM
The Process

  1. Create 2 diskgroups, DATA and FRA

SQL> create diskgroup DATA external redundancy disk '/dev/asm-disk2';
Diskgroup created.

SQL> create diskgroup FRA external redundancy disk '/dev/asm-disk3';
Diskgroup created.

  2. Set the following parameters related to OMF & FRA to a shared location

SQL> alter system set db_recovery_file_dest_size=1G scope=both;
System altered.

SQL> alter system set db_recovery_file_dest='+FRA' scope=both;
System altered.

SQL> alter system set db_create_file_dest='+DATA' scope=both;
System altered.

  3. Create redo and undo for the second instance

Each instance requires an independent set of redo and undo segments, as redo and undo are handled on a per-instance basis.

Drop log groups 1, 2, 3 and recreate log groups 1, 2 under thread 1 and log groups 3, 4 under thread 2.

SQL> alter database drop logfile group 1;
Database altered.

SQL> alter database add logfile thread 1 group 1 ('+DATA') size 10M;
Database altered.

SQL> alter database drop logfile group 2;
Database altered.

SQL> alter database add logfile thread 1 group 2 ('+DATA') size 10M;
Database altered.

SQL> alter database drop logfile group 3;
Database altered.

SQL> alter database add logfile thread 2 group 3 ('+DATA') size 10M;
Database altered.

SQL> alter database add logfile thread 2 group 4 ('+DATA') size 10M;
Database altered.

  4. Enable thread 2

SQL> alter database enable thread 2;
Database altered.

  5. Create a new undo tablespace for the second instance

SQL> create undo tablespace UNDOTBS2 datafile '+DATA' size 5M;
Tablespace created.

  6. Move the control files to ASM

a) SQL> shut immediate

b) As grid home user

[[email protected] ~]$ asmcmd
ASMCMD> cp /u02/oradata/racdb/control01.ctl +DATA/RACDB/DATAFILE/
copying /u02/oradata/racdb/control01.ctl -> +DATA/RACDB/DATAFILE/control01.ctl

ASMCMD> cp /u02/oradata/racdb/control02.ctl +DATA/RACDB/DATAFILE/
copying /u02/oradata/racdb/control02.ctl -> +DATA/RACDB/DATAFILE/control02.ctl

c) AS oracle user

[[email protected] ~]$ sqlplus / as sysdba
SQL> startup nomount
SQL> alter system set control_files='+DATA/RACDB/DATAFILE/control01.ctl','+DATA/RACDB/DATAFILE/control02.ctl' scope=spfile;
System altered.
SQL> shut immediate
SQL> startup

  7. Move the database files to ASM. Take an image copy backup to a shared location using RMAN.

SQL> shut immediate

[[email protected] ~]$ rman target /
RMAN> startup mount
RMAN> backup as copy database format '+DATA';

Starting backup at 19-AUG-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=8 device type=DISK
channel ORA_DISK_1: starting datafile copy
input datafile file number=00001 name=/u02/oradata/racdb/system01.dbf
output file name=+DATA/racdb/datafile/system.260.823906197 tag=TAG20130819T224955 RECID=1 STAMP=823906213
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting datafile copy
input datafile file number=00003 name=/u02/oradata/racdb/undotbs1.dbf
output file name=+DATA/racdb/datafile/undotbs1.264.823906223 tag=TAG20130819T224955 RECID=2 STAMP=823906227
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting datafile copy
input datafile file number=00002 name=/u02/oradata/racdb/sysaux01.dbf
output file name=+DATA/racdb/datafile/sysaux.265.823906229 tag=TAG20130819T224955 RECID=3 STAMP=823906233
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting datafile copy
copying current control file
output file name=+DATA/racdb/controlfile/backup.266.823906237 tag=TAG20130819T224955 RECID=4 STAMP=823906241
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting datafile copy
input datafile file number=00004 name=/u02/oradata/racdb/users01.dbf
output file name=+DATA/racdb/datafile/users.267.823906245 tag=TAG20130819T224955 RECID=5 STAMP=823906245
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting datafile copy
input datafile file number=00005 name=+DATA/racdb/datafile/undotbs2.258.823905699
output file name=+DATA/racdb/datafile/undotbs2.268.823906247 tag=TAG20130819T224955 RECID=6 STAMP=823906246
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 19-AUG-13
channel ORA_DISK_1: finished piece 1 at 19-AUG-13
piece handle=+DATA/racdb/backupset/2013_08_19/nnsnf0_tag20130819t224955_0.269.823906247 tag=TAG20130819T224955 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 19-AUG-13

RMAN> switch database to copy;

datafile 1 switched to datafile copy “+DATA/racdb/datafile/system.260.823906197”
datafile 2 switched to datafile copy “+DATA/racdb/datafile/sysaux.265.823906229”
datafile 3 switched to datafile copy “+DATA/racdb/datafile/undotbs1.264.823906223”
datafile 4 switched to datafile copy “+DATA/racdb/datafile/users.267.823906245”
datafile 5 switched to datafile copy “+DATA/racdb/datafile/undotbs2.268.823906247”

RMAN> alter database open;
database opened

  8. Create a temporary tablespace

SQL> create temporary tablespace temp01 tempfile '+DATA';
Tablespace created.

SQL> alter database default temporary tablespace temp01;
Database altered.

SQL> drop tablespace temp including contents and datafiles;
Tablespace dropped.

  9. Set the required cluster parameters

SQL> alter system set cluster_database=true scope=spfile;
System altered.

SQL> alter system set cluster_database_instances=2 scope=spfile;
System altered.

SQL> alter system set thread=1 scope=spfile sid='racdb1';
System altered.

SQL> alter system set thread=2 scope=spfile sid='racdb2';
System altered.

SQL> alter system set undo_tablespace=UNDOTBS1 scope=spfile sid='racdb1';
System altered.

SQL> alter system set undo_tablespace=UNDOTBS2 scope=spfile sid='racdb2';
System altered.

SQL> alter system set instance_number=1 scope=spfile sid='racdb1';
System altered.

SQL> alter system set instance_number=2 scope=spfile sid='racdb2';
System altered.

SQL> alter system set remote_listener='rac-scan.localdomain:1521' scope=spfile sid='racdb1';
System altered.

SQL> alter system set remote_listener='rac-scan.localdomain:1521' scope=spfile sid='racdb2';
System altered.

  10. Create an spfile for each node

on Node1:

SQL> create pfile='/u01/app/oracle/product/11.2.0/db_3/dbs/initracdb1.ora' from spfile;
File created.

SQL> shut immediate

Set your new environment

Set your ORACLE_HOME to the home which is installed with RAC feature

[[email protected] admin]$ export ORACLE_SID=racdb1

[[email protected] admin]$ sqlplus / as sysdba
SQL> startup pfile='/u01/app/oracle/product/11.2.0/db_3/dbs/initracdb1.ora';

SQL> create spfile='+DATA' from pfile;
File created.

SQL> shut immediate

[[email protected] dbs]$ echo "spfile='+data/racdb/PARAMETERFILE/spfile.271.823907679'" > $ORACLE_HOME/dbs/initracdb1.ora

SQL> startup

On Node2:

Set your new environment

Set ORACLE_SID=racdb2 and ORACLE_HOME pointing to the home with RAC feature

echo "spfile='+data/racdb/PARAMETERFILE/spfile.271.823907679'" > $ORACLE_HOME/dbs/initracdb2.ora

SQL> startup

  11. Create the dictionary views needed for RAC

@?/rdbms/admin/catclust.sql

  12. Register the instances with CRS

As ORACLE_HOME owner,

[[email protected] dbs]$ srvctl add database -d racdb -o /u01/app/oracle/product/11.2.0/db_3 -y AUTOMATIC
[[email protected] dbs]$ srvctl add instance -d racdb -i racdb1 -n rac1
[[email protected] dbs]$ srvctl add instance -d racdb -i racdb2 -n rac2

  13. Start and stop the database with srvctl to check that the configuration is valid

[[email protected] ~]$ srvctl start database -d racdb
[[email protected] ~]$ srvctl stop database -d racdb

  14. Use the following command to check the status of all instances of the converted RAC database

SQL> select * from v$active_instances;

INST_NUMBER INST_NAME
———– ——————————
1 rac1.localdomain:racdb1
2 rac2.localdomain:racdb2

  15. Make the relevant database entries in /etc/oratab on both nodes
  16. Set up the listener

a. Create a new listener

[[email protected] ~]$ srvctl add listener -l LISTENER_RACDB -p TCP:1522
[[email protected] ~]$ srvctl start listener -l LISTENER_RACDB

b. Set the local_listener parameter in the database (because the listener isn't on the default port)

SQL> alter system set LOCAL_LISTENER=LISTENER_RACDB scope=both sid='*';
System altered.

c. Resolve this local_listener in the tnsnames.ora on both nodes

on Node 1:
LISTENER_RACDB=(ADDRESS = (PROTOCOL=TCP)(HOST=10.0.0.31)(PORT=1522))

on Node 2:
LISTENER_RACDB=(ADDRESS = (PROTOCOL=TCP)(HOST=10.0.0.32)(PORT=1522))

This completes the conversion process.
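
As a quick post-conversion sanity check (a sketch; the database name follows this example), srvctl can confirm what was registered with the clusterware and whether both instances are running:

srvctl config database -d racdb
srvctl status database -d racdb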

Ref: https://startupforce.wordpress.com/2013/08/21/convert-non-asm-standalone-database-to-rac-with-asm-11gr2/

=============================================================================================================================================

xhost and DISPLAY – again!

It’s that time of year again, when people try to install their databases on a Linux or Unix host. And run into issues with xhost and DISPLAY.

The typical complaint is “I issued the DISPLAY command and it tells me I don’t have permission.” or “I tried to runInstaller”

This is often an indication of doing things in the wrong order.

The correct sequence is:

Log on to the computer that *owns* an XWindows screen.  This could be a Linux machine with gnome/KDE/twm, or perhaps a Windows machine with cygwin/X, Xming, or perhaps Hummingbird (now OpenText) eXceed ... these are known as XServers.
Tell that Xserver (screen) to accept remote requests, by issuing "xhost +" (note that the + disables security) 
Log on to the computer / user that is to use the XWindows - the one that contains the runInstaller 
Tell that session to send its display to the computer that owns the screen, using export DISPLAY=(host):display.screen
Run the program that needs a display

Many people, especially students, log in as root on the computer and then su to oracle to do the installs. While this is fundamentally a stupid thing to do, because using root unnecessarily increases potential security risks, is NOT good practice, and will NOT be allowed in any self-respecting business, the following would be the sequence:

Log on to XWindows as root.  Do NOT su to root!  Do NOT pass GO.  Do NOT collect $200.
Open a terminal
Enter: xhost +
Find out which DISPLAY is used by entering: set | grep -i DISPLAY
Enter: su - oracle
Enter: export DISPLAY=:0.0 # of course, base this on step 4
Enter: runInstaller # (or whatever is required, such as xeyes) 

The most common mistake is running xhost after su. This will not work. The xhost must be issued by the first user – the one that owns the screen.

=============================================================================================================================================

What is LOGGING, NOLOGGING and FORCE LOGGING?

Oracle gives us the ability to limit redo generation on tables and indexes by setting them in NOLOGGING mode.

NOLOGGING affects recoverability. Before going into how to limit redo generation, it is important to clear up the misunderstanding that NOLOGGING eliminates redo generation entirely; there are some points to keep in mind:

NOLOGGING is designed to handle bulk inserts of data which can be easily reproduced.

Regardless of the LOGGING status, writing to undo blocks still causes generation of redo.
LOGGING should not be disabled on a primary database if it has one or more standby databases. For this reason Oracle introduced the ALTER DATABASE FORCE LOGGING command in Oracle 9i R2: if the database is in FORCE LOGGING mode, the NOLOGGING attribute has no effect on segments. FORCE LOGGING can also be set at the tablespace level using ALTER TABLESPACE … FORCE LOGGING.

FORCE LOGGING can be used at the tablespace or database level to force logging of changes to the redo. This may be required for sites that are mining log data, using Oracle Streams, or using Data Guard (standby databases).
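
A minimal sketch of how this looks in SQL (sales_stage and sales_history are hypothetical tables): the first query shows whether FORCE LOGGING is enabled, the ALTER DATABASE turns it on, and the last statements show the typical NOLOGGING pattern of a direct-path (APPEND) insert into a NOLOGGING table, which FORCE LOGGING silently overrides.

SQL> select force_logging from v$database;
SQL> alter database force logging;

SQL> alter table sales_stage nologging;
SQL> insert /*+ append */ into sales_stage select * from sales_history;
SQL> commit;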

=============================================================================================================================================

External tables with Datapump

External tables are largely used as a convenient way of moving data into and out of the database. They let you query data in a flat file as though the file were an Oracle table.

Oracle uses SQL*Loader functionality, through the ORACLE_LOADER access driver, to move data from the flat file into the database; it uses a Data Pump access driver to move data out of the database into a file in an Oracle-proprietary format, and back into the database from files of that format.

Real-Life use of External Tables:

Suppose that you receive a daily .csv file from a reporting department. Instead of writing a SQL*Loader script to import each day’s .csv file into your database, you can simply create an external table and write an “insert … select” SQL query to insert the data directly into your tables. Place the day’s CSV file in the location specified in the external table definition, run the query, and you’re done.

Lets see how its done:

Choose or create an OS directory where you will put your .csv (flat) file.

In my case I have used: 'D:\et\'

My CSV file, named "emp.csv", looks like this:

001,Hutt,Jabba,896743856,[email protected],18
002,Simpson,Homer,382947382,[email protected],20
003,Kent,Clark,082736194,[email protected],5
004,Kid,Billy,928743627,[email protected],9
005,Stranger,Perfect,389209831,[email protected],23

The next step is to create this directory in Oracle, and grant read/write access on it to the Oracle user who will be creating the external table.

SQL> create directory opump as 'D:\et\';

Directory created.

SQL> grant read,write on directory opump to gg;

Grant succeeded.

Now, create the external table corresponding to the .csv file.

SQL> conn gg/gg
Connected.

SQL> create table xtern_empl_rpt
2 ( empl_id varchar2(3),
3 last_name varchar2(50),
4 first_name varchar2(50),
5 ssn varchar2(9),
6 email_addr varchar2(100),
7 years_of_service number(2,0)
8 )
9 organization external
10 ( default directory opump
11 access parameters
12 ( records delimited by newline
13 fields terminated by ','
14 )
15 location ('emp.csv')
16 );

Table created.

The external table has been created successfully. Let's check the data in Oracle:

SQL> sho user;
USER is “GG”
SQL> select * from xtern_empl_rpt ;

Note: If you add more records to the CSV file (emp.csv), the Oracle table xtern_empl_rpt will automatically reflect them, because the external table reads the flat file at query time.

For example, I will add a new record to my CSV file:

7,Vishu,DBA,123,[email protected],25

Save the file.

Let's see whether the new record shows up in the Oracle table.

Unloading data into an external file using DATAPUMP.

Oracle 10g lets you create a new external table from data in your database, which goes into a flat file pushed from the database using the ORACLE_DATAPUMP access driver. This flat file is in an Oracle-proprietary format that can be read by DataPump. The syntax is similar to the CREATE TABLE… ORGANIZATION EXTERNAL above, but simpler — since you can’t specify the data format, you can specify very few access_parameters. The key difference is that you must specify the access driver, ORACLE_DATAPUMP, since the access driver defaults to ORACLE_LOADER.

Lets see how its done..

First create a user called “mm” and provide the privileges as below.

SQL> create user mm identified by mm;

User created.

SQL> grant read,write on directory opump to mm;

Grant succeeded.

Now, connect as mm and create the dump file using an external table.

SQL> conn mm/mm
Connected.

SQL> create table import_empl_info
2 ( empl_id varchar2(3),
3 last_name varchar2(50),
4 first_name varchar2(50),
5 ssn varchar2(9),
6 birth_dt date
7 )
8 organization external
9 ( type oracle_datapump
10 default directory opump
11 location ('empl_info_rpt.dmp')
12 ) ;

Table created.

It will create the dump file called "empl_info_rpt.dmp", which can be moved and used in any other database, or in the same one.
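
Note that, as far as I understand it, the dump file only gets written when the CREATE TABLE includes an AS SELECT clause; without one, the definition expects an existing dump file to read from. A minimal unload sketch, reusing the xtern_empl_rpt table from earlier as the source (run it as that table's owner or adjust the schema and grants; the table and file names here are just illustrative):

SQL> create table export_empl_info
     organization external
     ( type oracle_datapump
       default directory opump
       location ('empl_info_unload.dmp')
     )
     as select empl_id, last_name, first_name, ssn from xtern_empl_rpt;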

IMPORT THE TABLE BACK IN AGAIN..

We will now load the data back into Oracle from the dump file created through the external table.

First drop the table “import_empl_info”.

SQL> drop table import_empl_info;

Table dropped.

Now load the data again ..

SQL> create table import_empl_info
2 ( empl_id varchar2(3),
3 last_name varchar2(50),
4 first_name varchar2(50),
5 ssn varchar2(9)
6 )
7 organization external
8 ( type oracle_datapump
9 default directory opump
10 location ('empl_info_rpt.dmp')
11 ) ;

We have now seen how external tables in 9i and 10g provide a convenient, seamless way to move data in and out of the database.

Restrictions imposed on external tables:

1.) External tables are not usable in many of the ways regular Oracle tables are.
2.) You cannot perform DML operations (INSERT, UPDATE, DELETE) on external tables; data gets into them only at table creation or by changing the underlying file.
3.) You can't create an index on an external table.

=============================================================================================================================================

Automatic Storage Management (ASM)

What is ASM:

Automatic Storage Management (ASM) simplifies administration of Oracle-related files by allowing the administrator to reference disk groups rather than individual disks and files, which are managed by ASM. The ASM functionality is an extension of the Oracle Managed Files (OMF) functionality that also includes striping and mirroring to provide balanced and secure storage. The new ASM functionality can be used in combination with existing raw and cooked file systems, along with OMF and manually managed files.
The ASM functionality is controlled by an ASM instance. This is not a full database instance, just the memory structures, and as such it is very small and lightweight.
The main components of ASM are disk groups, each of which comprises several physical disks that are controlled as a single unit. The physical disks are known as ASM disks, while the files that reside on the disks are known as ASM files. The locations and names of the files are controlled by ASM, but user-friendly aliases and directory structures can be defined for ease of reference.
The level of redundancy and the granularity of the striping can be controlled using templates. Default templates are provided for each file type stored by ASM, but additional templates can be defined as needed.
Failure groups are defined within a disk group to support the required level of redundancy. For two-way mirroring you would expect a disk group to contain two failure groups, so individual files are written to two locations.

In summary ASM provides the following functionality:

Manages groups of disks, called disk groups.
Manages disk redundancy within a disk group.
Provides near-optimal I/O balancing without any manual tuning.
Enables management of database objects without specifying mount points and filenames.
Supports large files.

Configuration of ASM:

Use the following command to identify the kernel version

[[email protected] u01]$ uname -r
2.6.18-53.el5

Download Oracle ASMLib from following site ==>
http://www.oracle.com/technetwork/server-storage/linux/downloads/rhel5-084877.html

[[email protected] u01]$ rpm -ivh --force --nodeps *

oracleasm-support-2.1.7-1.el5.x86_64.rpm
oracleasm-2.6.18-274.el5-2.0.5-1.el5.x86_64.rpm
oracleasmlib-2.0.4-1.el5.x86_64.rpm
oracleasm-2.6.18-53.1.19.el5xen-2.0.4-1.el5.i686.rpm
oracleasm-2.6.18-53.1.19.el5debug-2.0.4-1.el5.i686.rpm
oracleasm-2.6.18-53.1.19.el5PAE-2.0.4-1.el5.i686.rpm
oracleasm-2.6.18-53.1.19.el5-2.0.4-1.el5.i686.rpm

Configure Oracle ASM Library driver

[[email protected] init.d]# /etc/init.d/oracleasm configure

Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [OK ]
Scanning the system for Oracle ASMLib disks: [OK]

You can check the Oracle ASM status
[[email protected] init.d]# oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes

Creating Raw disks in VM:

Creating a new Partition for ASM disk

[[email protected] ~]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won’t be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help): p
Disk /dev/sdb: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-652, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-652, default 652): +5000M

Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.

[[email protected] ~]# fdisk -l
Disk /dev/sda: 32.2 GB, 32212254720 bytes
255 heads, 63 sectors/track, 3916 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 * 1 38 305203+ 83 Linux
/dev/sda2 39 3405 27045427+ 83 Linux
/dev/sda3 3406 3916 4104607+ 82 Linux swap / Solaris

Disk /dev/sdb: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 1 609 4891761 83 Linux

To create a ASM disk using Oracle ASMLib

[[email protected] ~]# oracleasm createdisk DISK1 /dev/sdb1
Writing disk header: done
Instantiating disk: done
[[email protected] ~]# oracleasm createdisk DISK2 /dev/sdc1
Writing disk header: done
Instantiating disk: done
[[email protected] ~]# oracleasm createdisk DISK3 /dev/sdd1
Writing disk header: done
Instantiating disk: done

To List and check the Oracle ASM disks

[[email protected] ~]# oracleasm listdisks
[[email protected] ~]# oracleasm querydisk DISK1
Disk “DISK1” is a valid ASM disk

Creating password file for ASM instance:

orapwd file=$ORACLE_HOME/dbs/orapwasm password=oracle

Creating directories:

+ASM,bdump,cdump,udump

Creating pfile for ASM instance:

db_unique_name = 'ASM'
instance_type = 'asm'
background_dump_dest = '/u01/app/oracle/admin/+ASM/bdump'
core_dump_dest = '/u01/app/oracle/admin/+ASM/cdump'
user_dump_dest = '/u01/app/oracle/admin/+ASM/udump'

Starting the ASM instance:

export ORACLE_SID=ASM
sqlplus / as sysdba

SQL> startup nomount pfile='/u01/initasm.ora';
ASM instance started
Total System Global Area 79691776 bytes
Fixed Size 1217812 bytes
Variable Size 53308140 bytes
ASM Cache 25165824 bytes

Now create the database through DBCA, using the ASM disks.
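
Before DBCA can place the database in ASM, at least one disk group has to exist. A minimal sketch, run from the ASM instance (the disk group name and redundancy level are just examples; the ORCL: names are the usual ASMLib labels for the disks created above):

SQL> create diskgroup DATA normal redundancy
     disk 'ORCL:DISK1', 'ORCL:DISK2', 'ORCL:DISK3';

SQL> select name, state, total_mb, free_mb from v$asm_diskgroup;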

=============================================================================================================================================

Why should we perform consistent export?

As per Tom Kyte:

it makes all of the queries run by exp be “as of the same point in time – consistent with regards to each other”.

imagine if you started an export at 9am of the EMP and DEPT tables.

EMP started exporting at 9am and DEPT at 9:15am.

Now, the EMP data would be “as of 9am”, but the DEPT data would be as of 9:15am. What if
you import that data now — will it work? maybe, maybe not — perhaps at 9:10am, someone
fired the last employee in department 50 and deleted department 50. Your EMP export
would have them in this deptno, your DEPT export would not have this deptno. The import
would fail.

If you use consistent=y, Oracle will export all of the data “as of 9am”, so deptno=50
will be included in all tables and the import will succeed.

=============================================================================================================================================

Advantages and Disadvantages of Datapump

Advantages:

===> Better control on the job running – it provides features like start, stop, restart

===> Improved performance, because it is a server-side technology with a parallel streams option

===> Using the parallel streams option, Data Pump can back up large volumes of data quickly

===> Data pump is 15-50% faster than the conventional export import.

===> It has the ability to estimate the job times

===> Failed jobs can be restarted

===> Using exclude/include option we can perform fine-grained object selection

===> Backup jobs can be monitored

===> It has the remapping capabilities

===> It supports export/import operations over the network; the NETWORK_LINK parameter initiates the export using a database link

===> Using the QUERY parameter, the DBA can extract data from tables with a SELECT-style WHERE predicate

===> The CONTENT parameter gives flexibility over what to import/export, for example metadata only, data only, or both (a usage sketch combining several of these parameters follows the lists below)

===> It supports full range of data types

===> It supports cross platform compatibility

===> No need to specify the buffer size like in normal exp/imp

===> It has its own performance tuning features

===> V$session_longops view can be used for the time estimation for the data pump jobs

===> It supports interactive mode that allows the dba to monitor or interact with ongoing jobs

===> Dumps can be compressed

===> Data can be encrypted

===> XML schemas and XML type is supported by the Data Pump

Disadvantages:

===> Export cannot be taken to tape

===> Import will work only with Oracle 10g or above

===> Cannot use with Unix pipes
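
As a usage sketch tying several of the parameters mentioned above together (the directory object, schema and file names are placeholders):

expdp system/password schemas=hr directory=DATA_PUMP_DIR dumpfile=hr_%U.dmp logfile=hr_exp.log parallel=2 exclude=statistics content=all

A stopped or interrupted job can later be re-attached with expdp system/password attach=<job_name>, where the job name looks something like SYS_EXPORT_SCHEMA_01.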

Related Views:

DBA_DATAPUMP_JOBS
USER_DATAPUMP_JOBS
DBA_DIRECTORIES
DATABASE_EXPORT_OBJECTS
SCHEMA_EXPORT_OBJECTS
TABLE_EXPORT_OBJECTS
DBA_DATAPUMP_SESSIONS

Enjoy:-)

=============================================================================================================================================

DATAPUMP.

Oracle 10g lets you create a new external table from data in your database, which goes into a flat file pushed from the database using the ORACLE_DATAPUMP access driver. This flat file is in an Oracle-proprietary format that can be read by DataPump. The syntax is similar to the CREATE TABLE… ORGANIZATION EXTERNAL above, but simpler — since you can’t specify the data format, you can specify very few access_parameters. The key difference is that you must specify the access driver, ORACLE_DATAPUMP, since the access driver defaults to ORACLE_LOADER.

Let's see how it's done.

First create a user called "mm" and grant it the privileges below (the user also needs CREATE SESSION and CREATE TABLE, and the directory opump must already exist).

SQL> create user mm identified by mm;

User created.

SQL> grant read,write on directory opump to mm;

Grant succeeded.

Now,connect to mm and create the dumpfile using external table.

SQL> conn mm/mm
Connected.

SQL> create table import_empl_info
2 ( empl_id varchar2(3),
3 last_name varchar2(50),
4 first_name varchar2(50),
5 ssn varchar2(9),
6 birth_dt date
7 )
8 organization external
9 ( type oracle_datapump
10 default directory opump
11 location ('empl_info_rpt.dmp')
12 ) ;

Table created.

It will create the dump file called "empl_info_rpt.dmp", which can be moved to and used in any other database (or the same one).
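Note that the ORACLE_DATAPUMP driver generally writes the dump file only when the external table is created with an AS SELECT clause, so to unload real data the statement would normally be populated from a source table. A minimal sketch, assuming a hypothetical source table empl_info with matching columns exists:

-- empl_info is an assumed source table, used only for illustration
create table export_empl_info
organization external
( type oracle_datapump
  default directory opump
  location ('empl_info_rpt.dmp')
)
as select empl_id, last_name, first_name, ssn, birth_dt from empl_info;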

IMPORT THE TABLE BACK IN AGAIN..

We will now again load the data into oracle from the dump being created through external table.

First drop the table “import_empl_info”.

SQL> drop table import_empl_info;

Table dropped.

Now load the data again ..

SQL> create table import_empl_info
2 ( empl_id varchar2(3),
3 last_name varchar2(50),
4 first_name varchar2(50),
5 ssn varchar2(9)
6 )
7 organization external
8 ( type oracle_datapump
9 default directory opump
10 location ('empl_info_rpt.dmp')
11 ) ;

We have now seen how external tables in 9i and 10g provide a convenient, seamless way to move data in and out of the database.

Restrictions imposed on External tables:

1.) External tables are not usable in many of the ways regular Oracle tables are.
2.) You cannot perform DML operations (INSERT, UPDATE, DELETE) on an external table; it can only be written at creation time.
3.) You can't create an index on an external table.
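For example, attempting DML against the external table typically fails with an error along these lines (an illustrative sketch, not captured output):

SQL> delete from import_empl_info;
ERROR at line 1:
ORA-30657: operation not supported on external organized table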

Enjoy:-)

=============================================================================================================================================

Difference between Conventional path and Direct Path

Conventional path means that a SQL INSERT statement is used to load data into tables. Direct path deals with data much faster than conventional path. Direct path achieves this performance gain by inserting data directly, bypassing the SQL command processing layer and saves on data copies whenever possible.

This also means that, depending on logging settings, little or no redo may be generated for the loaded data, so once your load is complete you should take a backup immediately in order to keep your data safe.

Let's elaborate on them.

–Conventional path Export.

Conventional path Export uses the SQL SELECT statement to extract data from tables. Data is read from disk into the buffer cache, and rows are transferred to the evaluating buffer. The data, after passing expression evaluation, is transferred to the Export client, which then writes the data into the export file.

–Direct path Export.

When using a Direct path Export, the data is read from disk directly into the export session’s program global area (PGA): the rows are transferred directly to the Export session’s private buffer. This also means that the SQL command-processing layer (evaluation buffer) can be bypassed, because the data is already in the format that Export expects. As a result, unnecessary data conversion is avoided. The data is transferred to the Export client, which then writes the data into the export file.
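With the original exp utility, direct path mode is requested with the DIRECT parameter, for example (user, password and file name are placeholders):

exp scott/tiger tables=emp file=emp_direct.dmp direct=y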

=============================================================================================================================================

Datapump enhancements in Oracle 11g

Below are some enhancements done in Oracle 11g:

-COMPRESSION: this parameter can now be used to compress both data and metadata by specifying the value ALL.

expdp quest/quest DIRECTORY=quest_pump_dir DUMPFILE=quest_comp.dmp compression=ALL

-Data Pump Encryption Enhancements:

Different encryption modes are available during the export. These include Dual, Password, and Transparent. The mode that is used is set using the encryption_mode parameter.
Here is an example of using the new encryption parameters for a data pump export:

expdp quest/quest DIRECTORY=quest_pump_dir DUMPFILE=quest_comp.dmp encryption=all encryption_password=not_quest encryption_algorithm=AES128 encryption_mode=PASSWORD

-Data Pump Rename Table(or remap_table)

Sometimes it's desirable to be able to rename a table during the import process. In Oracle 11g you can now do this with Oracle Data Pump. To rename a table during the import process, use the remap_table parameter: define the original schema and table name, and then indicate the new name for the table. Here is an example where the table QUEST.NAMES is renamed to COPY_NAMES:

impdp quest/quest DIRECTORY=quest_pump_dir DUMPFILE=quest.dmp tables=QUEST.NAMES remap_table=QUEST.NAMES:COPY_NAMES

-Data Pump and Partitioned Tables:

If you want to have some control over the partitioning of tables during an Oracle Data Pump import, you can use the partition_options parameter, which is a new impdp feature in Oracle 11g. The following options are available when using the partition_options parameter:
None – Tables will be imported such that they will look like those on the system on which the export was created.
Departition – Partitions will be created as individual tables rather than partitions of a partitioned table.

Merge – Causes all partitions to be merged into one, unpartitioned table.

In this example we use the partition_options parameter to take all partitions of the NAMES table and combine them into one unpartitioned table on the destination side:

impdp quest/quest DIRECTORY=quest_pump_dir DUMPFILE=quest.dmp tables=QUEST.NAMES partition_options=merge

-Overwrite Dump Files

When using expdp in Oracle 11g you can now overwrite a dump file that already exists by using the new reuse_dumpfiles parameter, as seen in this example:

expdp quest/quest DIRECTORY=quest_pump_dir DUMPFILE=quest.dmp tables=QUEST.NAMES reuse_dumpfiles=y

Data Pump Data_Options Parameter

Have you ever loaded records into a table with Oracle Data Pump Import, only to have the entire load fail because a few duplicate records were in the import file? Assume that your EMP table has three rows in it. EMPNO is the primary key, and here is what a query of the table looks like:

SQL> select empno from emp;

EMPNO

  7369
  8000
  8001

3 rows selected.

When you try to load your Data Pump Export file the load fails because it has these three rows in it and they will violate the primary key of the table. The load and the resulting error messages would look like this:

C:\oracle\product\11.1.0\db_1\NETWORK\ADMIN>impdp scott/robert dumpfile=new_emp.dmp directory=data_pump_dir tables=emp table_exists_action=append

Import: Release 11.1.0.6.0 – Production on Tuesday, 01 January, 2008 23:51:32
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 – Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table “SCOTT”.”SYS_IMPORT_TABLE_01″ successfully loaded/unloaded
Starting “SCOTT”.”SYS_IMPORT_TABLE_01″: scott/ dumpfile=new_emp.dmp directory=data_pump_dir tables=emp table_exists_action=
append
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table “SCOTT”.”EMP” exists. Data will be appended to existing table but all dependent metadata will be skipped due to tab
le_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object “SCOTT”.”EMP” failed to load/unload and is being skipped due to error:
ORA-00001: unique constraint (SCOTT.PK_EMP) violated
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/USER_PREF_STATISTICS
Job “SCOTT”.”SYS_IMPORT_TABLE_01″ completed with 2 error(s) at 23:51:35
And the resulting table looks just the same:

SQL> select empno from emp;

EMPNO

  7369
  8000
  8001

3 rows selected.

The load of the EMP table has completely failed. Data Pump does not come with a reject limit clause like SQL*Loader does, so the whole load of the table just fails, even if it’s just one record causing the problem.

Oracle 11g solves this problem with a new parameter for use with Data Pump Import. This parameter is data_options=skip_constraint_errors. When this parameter is used Data Pump Import will ignore the rows that generate a constraint error (while still not loading them of course) and will continue to load those rows that do not generate an error. Here is an example of an import using this new parameter:

SQL> select empno from emp;

EMPNO

  7369
  8000
  8001

3 rows selected.

C:\oracle\product\11.1.0\db_1\NETWORK\ADMIN>impdp scott/robert dumpfile=new_emp.dmp directory=data_pump_dir tables=emp table_exists_action=append data_options=skip_constraint_errors

Import: Release 11.1.0.6.0 – Production on Tuesday, 01 January, 2008 23:55:33
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 – Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table “SCOTT”.”SYS_IMPORT_TABLE_01″ successfully loaded/unloaded
Starting “SCOTT”.”SYS_IMPORT_TABLE_01″: scott/ dumpfile=new_emp.dmp directory=data_pump_dir tables=emp table_exists_action=
append data_options=skip_constraint_errors
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table “SCOTT”.”EMP” exists. Data will be appended to existing table but all dependent metadata will be skipped due to tab
le_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported “SCOTT”.”EMP” 8.187 KB 13 out of 16 rows
3 row(s) were rejected with the following error:
ORA-00001: unique constraint (SCOTT.PK_EMP) violated

Rejected rows with the primary keys are:
Rejected row #1:
column EMPNO: 8000
Rejected row #2:
column EMPNO: 8001
Rejected row #3:
column EMPNO: 7369
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/USER_PREF_STATISTICS
Job “SCOTT”.”SYS_IMPORT_TABLE_01″ completed with 1 error(s) at 23:55:36
Note in this case that 3 rows were rejected, but the remaining 13 rows were successfully loaded. Here is a select query against the table. This looks much better:

SQL> select empno from emp;

EMPNO

  7369
  7499
  7521
  7566
  7654
  7698
  7782
  7788
  7839
  7844
  7876
  7900
  7902
  7934
  8000
  8001

16 rows selected.

There is one main restriction with this feature and that is that the Data Pump import must use the external table method when loading data. Data Pump can use two methods of importing (or exporting) data, direct path and external tables. When you import using data_options=skip_constraint_errors Data Pump must use external tables. Normally this will happen as a default, so you don’t have to worry about it. However, if you do get an error because Oracle Data Pump is not using the external table method you can use the undocumented parameter access_method=external_table to try to force Data Pump to use external tables.

The new Oracle Data Pump parameter data_options also provides for special handling of other types of data-related issues. Use the XML_CLOBS setting of the data_options parameter of the Data Pump Export utility to override the default behavior of Oracle Data Pump, which is to compress the format of the XMLType CLOB. If you use this option be aware that the XML schemas at the source and destination must be the same.

-The Transportable Parameter:

When the transportable parameter is used in impdp or expdp only the metadata associated with specific tables, partitions, or sub-partitions will be extracted, rather than all metadata. You can then proceed to transport the associated data files as you normally would. You export the data with expdp:
expdp quest/quert DIRECTORY=quest_pump_dir DUMPFILE=quest.dmp tables=QUEST.NAMES transportable=always
You then copy the data files and the dump file set to the destination and plug in the database. You would use impdp as a part of this process to import the metadata into the database, as seen in this example:

impdp quest/quert DIRECTORY=quest_pump_dir DUMPFILE=quest.dmp tables=QUEST.NAMES remap_schema=Robert:new_Robert

=============================================================================================================================================

TRANSFORM in datapump

TRANSFORM – In case you don't want to import the objects' storage attributes but just the content, you can use this parameter. The TRANSFORM parameter instructs the Data Pump import job to modify the DDL that creates the objects during import. You can modify two basic attributes: SEGMENT_ATTRIBUTES and STORAGE.

SEGMENT_ATTRIBUTES – If the value is specified as y, then segment attributes (physical attributes, storage attributes, tablespaces, and logging) are included, with appropriate DDL. The default is y.

STORAGE – If the value is specified as y, the storage clauses are included, with appropriate DDL.

Example:

For the following example, assume that you have exported the employees table in the hr schema. The SQL CREATE TABLE statement that results when you then import the table is similar to the following:

CREATE TABLE “HR”.”EMPLOYEES”
( “EMPLOYEE_ID” NUMBER(6,0),
“FIRST_NAME” VARCHAR2(20),
“LAST_NAME” VARCHAR2(25) CONSTRAINT “EMP_LAST_NAME_NN” NOT NULL ENABLE,
“EMAIL” VARCHAR2(25) CONSTRAINT “EMP_EMAIL_NN” NOT NULL ENABLE,
“PHONE_NUMBER” VARCHAR2(20),
“HIRE_DATE” DATE CONSTRAINT “EMP_HIRE_DATE_NN” NOT NULL ENABLE,
“JOB_ID” VARCHAR2(10) CONSTRAINT “EMP_JOB_NN” NOT NULL ENABLE,
“SALARY” NUMBER(8,2),
“COMMISSION_PCT” NUMBER(2,2),
“MANAGER_ID” NUMBER(6,0),
“DEPARTMENT_ID” NUMBER(4,0)
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE(INITIAL 10240 NEXT 16384 MINEXTENTS 1 MAXEXTENTS 121
PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE “SYSTEM” ;

If you do not want to retain the STORAGE clause or TABLESPACE clause, you can remove them from the CREATE STATEMENT by using the Import TRANSFORM parameter. Specify the value of SEGMENT_ATTRIBUTES as n. This results in the exclusion of segment attributes (both storage and tablespace) from the table.

C:> impdp hr/hr TABLES=hr.employees DIRECTORY=dpump_dir1 DUMPFILE=hr_emp.dmp TRANSFORM=SEGMENT_ATTRIBUTES:n:table

The resulting CREATE TABLE statement for the employees table would then look similar to the following

CREATE TABLE “HR”.”EMPLOYEES”
( “EMPLOYEE_ID” NUMBER(6,0),
“FIRST_NAME” VARCHAR2(20),
“LAST_NAME” VARCHAR2(25) CONSTRAINT “EMP_LAST_NAME_NN” NOT NULL ENABLE,
“EMAIL” VARCHAR2(25) CONSTRAINT “EMP_EMAIL_NN” NOT NULL ENABLE,
“PHONE_NUMBER” VARCHAR2(20),
“HIRE_DATE” DATE CONSTRAINT “EMP_HIRE_DATE_NN” NOT NULL ENABLE,
“JOB_ID” VARCHAR2(10) CONSTRAINT “EMP_JOB_NN” NOT NULL ENABLE,
“SALARY” NUMBER(8,2),
“COMMISSION_PCT” NUMBER(2,2),
“MANAGER_ID” NUMBER(6,0),
“DEPARTMENT_ID” NUMBER(4,0)
);

Enjoy:-)

=============================================================================================================================================

STREAMS_CONFIGURATION parameter in Datapump.

It specifies whether or not to import any general Streams metadata that may be present in the export dump file.

Its default value is “y”.

Example:

The following is an example of using the STREAMS_CONFIGURATION parameter. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See FULL.

C:> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp STREAMS_CONFIGURATION=n

Enjoy:-)

=============================================================================================================================================

REUSE_DATAFILES

Specifies whether or not the import job should reuse existing datafiles for tablespace creation.

REUSE_DATAFILES={y | n}

If the default (n) is used and the datafiles specified in CREATE TABLESPACE statements already exist, an error message from the failing CREATE TABLESPACE statement is issued, but the import job continues.

If this parameter is specified as y, a warning is issued and the existing datafiles are reinitialized. Be aware that specifying Y can result in a loss of data.

Example:

The following is an example of using the REUSE_DATAFILES parameter. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See FULL.

impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp LOGFILE=reuse.log REUSE_DATAFILES=Y

This example reinitializes datafiles referenced by CREATE TABLESPACE statements in the expfull.dmp file.

=============================================================================================================================================

REMAP_DATAFILE parameter in datapump

You are moving some tables from another database and the tablespace in which these tables exist is also new in the target database. Importing those tables will create the tablespaces in the target database too; but here is a small problem – the filesystems are different. For example, the datafile for tablespace was “/u01/data1.dbf” but the target database does not have a mount point called /u01; instead it has “/u02”. Ordinarily you would have to create the tablespace manually and then import the table into the tablespace.

Data Pump eliminates the extra step in one elegant motion. All you have to do is use the REMAP_DATAFILE option as follows:

$ impdp remap_datafile='/u01/data1.dbf':'/u02/data1.dbf'

This creates the same datafile as /u02/data1.dbf. You can also use this option to recreate the test data across different platforms. For instance, the following option converts the datafile path from a UNIX format to a Windows format:

remap_datafile='/u01/data1.dbf':'c:\oradata\data1.dbf'
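A fuller command line might look like the following sketch (the credentials, directory and dump file name are assumptions for illustration; on some shells the quotes may need escaping, or the option can be placed in a parameter file):

impdp system/password FULL=y DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp REMAP_DATAFILE='/u01/data1.dbf':'/u02/data1.dbf'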

Enjoy:-)

=============================================================================================================================================

VERSION parameter in datapump

VERSION:

With the Export Data Pump parameter VERSION, you can control the version of the dumpfile set, and make the set compatible to be imported into a database with a lower compatibility level. Note that this does not mean that dumpfile set can be used with versions of Oracle Database prior to 10.1.0 because Export and Import Data Pump only work with Oracle Database 10g release 1 (10.1) or later.

The VERSION parameter can be useful in a downgrade situation, for example. Assume you have a 10.2 database and you export with VERSION=9.2. Only database objects and attributes that are compatible with 9.2 will be exported. You could then import the dump file into a 10.1 database whose compatibility is set to 9.2. A 10.1 database set to 9.2 compatibility could theoretically be downgraded to 9.2.

VERSION={COMPATIBLE | LATEST | version_string}

COMPATIBLE = The version of the metadata corresponds to the database compatibility level. Database compatibility must be set to 9.2 or higher. This is the default value.
LATEST = The version of the metadata corresponds to the database release version.
version_string = A specific database version (e.g.: 11.1.0).

If database compatibility is 11.1.* then datapump dumpfile set compatibility is 2.1
If database compatibility is 10.2.* then datapump dumpfile set compatibility is 1.1
If database compatibility is 10.1.* then datapump dumpfile set compatibility is 0.1

Examples:

If you specify a VERSION for an Export Data Pump job that is older than the current database version, certain Export Data Pump features may be unavailable. For example, specifying VERSION=10.1 will cause an error if data compression is also specified for the job (e.g.: COMPRESSION=all)

SQL> host expdp system/<password>@delhi version=10.2 compression=all dumpfile=my.dmp

Export: Release 11.2.0.1.0 – Production on Mon Sep 26 12:28:39 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 – Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-39005: inconsistent arguments
ORA-39055: The COMPRESSION feature is not supported in version 10.2.

Export from Oracle database version 11.2:

expdp hr/hr TABLES=hr.employees2 VERSION=10.2 DIRECTORY=data_pump_dir DUMPFILE=emp2.dmp LOGFILE=emp2.log

Import into Oracle database version 10.2:

impdp hr/hr TABLES=hr.employees2 DIRECTORY=data_pump_dir DUMPFILE=emp2.dmp LOGFILE=emp2_imp.log

Enjoy:-)

=============================================================================================================================================

What are invalid objects in the database?

Some types of schema objects reference other objects. For example, a view contains a query that references tables or other views, and a PL/SQL subprogram might invoke other subprograms and might use static SQL to reference tables or views. An object that references another object is called a dependent object, and an object being referenced is a referenced object. These references are established at compile time, and if the compiler cannot resolve them, the dependent object being compiled is marked as an invalid object.

An invalid dependent object must be recompiled against the new definition of a referenced object before the dependent object can be used. Recompilation occurs automatically when the invalid dependent object is referenced.

Do invalid objects affect performance?

Object invalidation affects applications in two ways. First, an invalid object must be revalidated before it can be used by an application. Re-validation adds latency to application execution. If the number of invalid objects is large, the added latency on the first execution can be significant. Second, invalidation of a procedure, function or package can cause exceptions in other sessions concurrently executing the procedure, function or package. If a patch is applied when the application is in use in a different session, the session executing the application notices that an object in use has been invalidated and raises one of the following 4 exceptions: ORA-4061, ORA-4064, ORA-4065 or ORA-4068.

DEMO:

Create one table, one invalid view and one valid view:

SQL> create table first_table ( col1 number, col2 number);
Table created.

SQL> create force view first_view as select col3 from first_table;
Warning: View created with compilation errors.

SQL> create view second_view as select col2 from first_table;
View created.

Check Status:

SQL> select object_name, object_type,status from user_objects;
OBJECT_NAME OBJECT_TYPE STATUS


FIRST_TABLE TABLE VALID

FIRST_VIEW VIEW INVALID

SECOND_VIEW VIEW VALID

SQL> alter table FIRST_TABLE add (col3 number);
Table altered.

RECOMPILING NOW,

SQL> alter view FIRST_VIEW compile;

SQL> select object_name, object_type,status from user_objects;

OBJECT_NAME OBJECT_TYPE STATUS


FIRST_TABLE TABLE VALID

FIRST_VIEW VIEW VALID

How to recompile a large number of invalid objects?

There will be occasions when you are faced with the need to recompile hundreds or thousands of invalid objects. Typically, this occurs after an upgrade to an application, or perhaps after applying patches. Rather than recompiling them individually, use the supplied utility script. On Unix,

SQL> @?/rdbms/admin/utlrp.sql

How to recompile a package?

ALTER PACKAGE pkg1 COMPILE REUSE SETTINGS;
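To see which objects are currently invalid before and after recompiling, a standard dictionary query can be used:

select owner, object_name, object_type
from dba_objects
where status = 'INVALID'
order by owner, object_type, object_name;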

=============================================================================================================================================

TABLESPACES,TRANSPORT_TABLESPACES,TRANSPORT_FULL_CHECK

Since all the above parameters are used in conjunction we should know the functionality of each of them.

TABLESPACES:In tablespace mode, only the tables contained in a specified set of tablespaces are unloaded. If a table is unloaded, its dependent objects are also unloaded.

TRANSPORT_TABLESPACES:Use this parameter to specify a list of tablespace names for which object metadata will be exported from the source database into the target database.

TRANSPORT_FULL_CHECK:Specifies whether or not to check for dependencies between those objects inside the transportable set and those outside the transportable set. This parameter is applicable only to a transportable-tablespace mode export.

TRANSPORT_FULL_CHECK={y | n}

If TRANSPORT_FULL_CHECK=y, then Export verifies that there are no dependencies between those objects inside the transportable set and those outside the transportable set. The check addresses two-way dependencies. For example, if a table is inside the transportable set but its index is not, a failure is returned and the export operation is terminated. Similarly, a failure is also returned if an index is in the transportable set but the table is not.

If TRANSPORT_FULL_CHECK=n, then Export verifies only that there are no objects within the transportable set that are dependent on objects outside the transportable set. This check addresses a one-way dependency. For example, a table is not dependent on an index, but an index is dependent on a table, because an index without a table has no meaning. Therefore, if the transportable set contains a table, but not its index, then this check succeeds. However, if the transportable set contains an index, but not the table, the export operation is terminated.
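Putting these together, a transportable-tablespace export might be launched like the following sketch (the credentials, tablespace, directory and file names are placeholders, TRANSPORT_FULL_CHECK only applies with TRANSPORT_TABLESPACES, and the tablespace must first be made read-only):

expdp system/password DIRECTORY=dpump_dir1 DUMPFILE=tts.dmp TRANSPORT_TABLESPACES=users TRANSPORT_FULL_CHECK=y LOGFILE=tts.log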

=============================================================================================================================================

STATUS:

Specifies the frequency at which the job status display is updated.
If you supply a value for integer, it specifies how frequently, in seconds, job status should be displayed in logging mode.

DEMO:

SQL> host expdp system/sys dumpfile=EasdXPgjhDAT.dmp directory=ipump full=y status=10

=============================================================================================================================================

SAMPLE parameter in datapump

SAMPLE:

This parameter allows you to export subsets of data by specifying the percentage of data to be sampled and exported. The sample_percent indicates the probability that a block of rows will be selected as part of the sample. It does not mean that the database will retrieve exactly that amount of rows from the table.

DEMO:

C:>expdp system/rman tables=scott1.countries dumpfile=oe.dmp sample="scott1"."countries":50

Note: every time you perform the export the row count may be different, because a sample subset of the table is taken.

Enjoy:-)

=============================================================================================================================================

QUERY parameter in datapump

QUERY:

Enables you to filter the data that is exported by specifying a clause for a SQL SELECT statement, which is applied to all tables in the export job or to a specific table.

The QUERY parameter cannot be used in conjunction with the following parameters:

CONTENT=METADATA_ONLY

ESTIMATE_ONLY

TRANSPORT_TABLESPACES

DEMO:

SQL> create table empquery as select * from scott.emp;

Table created.

SQL> select deptno from empquery;

DEPTNO

    20
    30
    30
    20
    30
    30
    10
    20
    10
    30
    20

DEPTNO

    30
    20
    10

SQL> create directory ipump as ‘D:\’;

Directory created.

SQL> grant read,write on directory ipump to scott;

Grant succeeded.

SQL> host expdp scott/<password>@delhi query=empquery:\"WHERE deptno>20\" directory=ipump

Export: Release 11.2.0.1.0 – Production on Sat Sep 24 11:46:58 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 – Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting “SCOTT”.”SYS_EXPORT_SCHEMA_01″: scott/@delhi query=empquery:”WHERE deptno>20″ directory=ipump
Estimate in progress using BLOCKS method…
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 320 KB
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported “SCOTT”.”DEPT” 5.937 KB 4 rows
. . exported “SCOTT”.”EMP” 8.570 KB 14 rows
. . exported “SCOTT”.”EMPQUERY” 8.265 KB 6 rows
. . exported “SCOTT”.”SALGRADE” 5.867 KB 5 rows
. . exported “SCOTT”.”TEST” 5.046 KB 5 rows
. . exported “SCOTT”.”BONUS” 0 KB 0 rows
Master table “SCOTT”.”SYS_EXPORT_SCHEMA_01″ successfully loaded/unloaded


Dump file set for SCOTT.SYS_EXPORT_SCHEMA_01 is:
D:\EXPDAT.DMP
Job “SCOTT”.”SYS_EXPORT_SCHEMA_01″ successfully completed at 11:47:07

You can see that only the 6 rows matching the criteria in the query clause were exported for EMPQUERY; the other tables were exported in full.

Enjoy:-)

=============================================================================================================================================

PARFILE parameter in datapump

The PARFILE (parameter file) is opened and read by the client running the expdp image. All parameters are set in this file.

Demo: –

Create a parfile below as scott1.par and perform the export.

SCHEMAS=SCOTT
DUMPFILE=exscott.dmp
LOGFILE=expsdd.log

SQL> host expdp system/sys parfile=D:\scott1.par.txt

Export: Release 11.2.0.1.0 – Production on Sat Sep 24 10:46:11 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 – Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting “SYSTEM”.”SYS_EXPORT_SCHEMA_03″: system/ parfile=D:\scc.par.txt
Estimate in progress using BLOCKS method…
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 256 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
…..

Enjoy:-)

=============================================================================================================================================

PARALLEL parameter in datapump

Nowadays DBAs work with databases holding gigabytes or terabytes of data. Exporting can be a very time-consuming job when we need to export lots of data in a short period of time. Data Pump expdp has the ability to export data in compressed format, thus achieving faster write times, but this uses more processor time. Data Pump also has the ability to open several parallel write channels to improve the overall export time.

Today I'll explain how the PARALLEL parameter works. PARALLEL allows you to launch several dump processes, thus exporting data much faster. You can specify the number of processes you want to launch by setting parallel=n. You can specify the filename of each process individually, or the filename can contain a substitution variable (%U), which implies that multiple files may be generated. The substitution variable is expanded in the resulting filenames into a 2-digit, fixed-width, incrementing integer starting at 01 and ending at 99.

DEMO:

We will here export the scott schemas with parallel=2 thus two dump files will be created.

SQL> host expdp system/sys schemas=scott dumpfile=scott%u.dmp parallel=2

Export: Release 11.2.0.1.0 – Production on Sat Sep 24 10:31:17 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 – Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting “SYSTEM”.”SYS_EXPORT_SCHEMA_03″: system/ schemas=scott dumpfile=scott%u.dmp parallel=2
Estimate in progress using BLOCKS method…
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 256 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
. . exported “SCOTT”.”DEPT” 5.937 KB 4 rows
. . exported “SCOTT”.”EMP” 8.570 KB 14 rows
. . exported “SCOTT”.”SALGRADE” 5.867 KB 5 rows
. . exported “SCOTT”.”TEST” 5.429 KB 2 rows
. . exported “SCOTT”.”BONUS” 0 KB 0 rows
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Master table “SYSTEM”.”SYS_EXPORT_SCHEMA_03″ successfully loaded/unloaded


Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_03 is:
D:\APP\BISHWANATH\ADMIN\DELHI\DPDUMP\SCOTT01.DMP
D:\APP\BISHWANATH\ADMIN\DELHI\DPDUMP\SCOTT02.DMP
Job “SYSTEM”.”SYS_EXPORT_SCHEMA_03″ successfully completed at 10:32:11

Now we drop SCOTT and import it again. Notice that during the import we also don't need to list all the dump file names; we just use the same %U substitution together with the PARALLEL parameter.

SQL> drop user scott cascade;

User dropped.

SQL> host impdp system/sys schemas=scott dumpfile=scott%u.dmp parallel=2

Import: Release 11.2.0.1.0 – Production on Sat Sep 24 10:33:27 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 – Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table “SYSTEM”.”SYS_IMPORT_SCHEMA_01″ successfully loaded/unloaded
Starting “SYSTEM”.”SYS_IMPORT_SCHEMA_01″: system/ schemas=scott dumpfile=scott%u.dmp parallel=2
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported “SCOTT”.”DEPT” 5.937 KB 4 rows
. . imported “SCOTT”.”EMP” 8.570 KB 14 rows
. . imported “SCOTT”.”SALGRADE” 5.867 KB 5 rows
. . imported “SCOTT”.”TEST” 5.429 KB 2 rows
. . imported “SCOTT”.”BONUS” 0 KB 0 rows
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job “SYSTEM”.”SYS_IMPORT_SCHEMA_01″ successfully completed at 10:33:35

Enjoy:-)

=============================================================================================================================================

NETWORK LINK in datapump

NETWORK_LINK in Data Pump provides the ability to transfer data directly from one instance to another without using a dump file to transport the data. The data is transported across a database link from one instance to another. Data Pump still requires a directory on the server to store some information, but the data itself is transferred directly. This is a huge time and space saver for large data transfer operations. This is how it is done:

Here is the DEMO:

We have two databases on two different servers:

Source database: delhi—->server1

Target database: noida—–>server2

Connect to database “delhi” on server1:

c:>sqlplus sys/<password>@delhi as sysdba

SQL> create user abc identified by abc;

User created.

SQL> grant CREATE PUBLIC DATABASE LINK,CREATE SESSION to abc;

Grant succeeded.

SQL> grant CREATE TABLE,insert any table to abc;

Grant succeeded.

SQL> grant create session to abc;

Grant succeeded.

SQL> conn abc/abc
Connected.

SQL> create table p
2 (id number);

Table created.

SQL> insert into p values(3);

1 row created.

SQL> /

1 row created.

SQL> /

1 row created.

SQL> /

1 row created.

SQL> /

1 row created.

SQL> commit;

Commit complete.

SQL> select * from p;

ID

     3
     3
     3
     3
     3

Now,connect to database “noida” on server2:

SQL> create directory gpump as 'D:\network\';

SQL> grant read,write on directory gpump to system;

SQL> create public database link h_link connect to abc identified by abc using 'delhi';

Database link created.

Checking database link:

SQL> select * from p@h_link;

ID

     3
     3
     3
     3
     3

Now start the import operation, which will import schema "abc" from database "delhi" into database "noida" without actually creating any dump file:

c:>impdp system/<password>@noida LOGFILE=import.schema.log NETWORK_LINK=h_link SCHEMAS=abc directory=gpump

Import: Release 11.2.0.1.0 – Production on Thu Sep 22 16:10:47 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 – Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting “SYSTEM”.”SYS_IMPORT_SCHEMA_01″: system/@noida LOGFILE=import.schema.log NETWORK_LINK=h_link SCHEMAS=abc directory=gpump
Estimate in progress using BLOCKS method…
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . imported “ABC”.”P” 5 rows
Job “SYSTEM”.”SYS_IMPORT_SCHEMA_01″ successfully completed at 16:10:47

SQL> conn abc/abc@noida
Connected.

SQL> select * from tab;

TNAME TABTYPE CLUSTERID


P TABLE

SQL> select * from p;

ID

     3
     3
     3
     3
     3

Now, we will export the schema “abc” present in database “delhi:server1” to “noida:server2” but before that drop the user “abc” from database “noida” to avoid ambiguity:

C:>sqlplus sys/<password>@noida as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Thu Sep 22 16:51:08 2011

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 – Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> drop user abc cascade;

User dropped.

SQL> host expdp system/<password>@noida LOGFILE=export_schema.log dumpfile=ff.dmp NETWORK_LINK=h_link SCHEMAS=abc directory=gpump

Export: Release 11.2.0.1.0 – Production on Thu Sep 22 16:52:42 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 – Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting “SYSTEM”.”SYS_EXPORT_SCHEMA_02″: system/@noida LOGFILE=export_schema.log dumpfile=ff.dmp NETWORK_LINK=h_link SCHEMAS=abc directory=gpump
Estimate in progress using BLOCKS method…
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . exported “ABC”.”P” 5.039 KB 5 rows
Master table “SYSTEM”.”SYS_EXPORT_SCHEMA_02″ successfully loaded/unloaded


Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_02 is:
D:\NETWORK\FF.DMP
Job “SYSTEM”.”SYS_EXPORT_SCHEMA_02″ successfully completed at 16:52:46

Notice that the export dump file is created on server2 (where "noida" runs), even though the data is pulled over the database link from the "delhi" database on server1.

Enjoy:-)

=============================================================================================================================================

FLASHBACK_TIME parameter in Data Pump

This parameter is used to export objects as they existed at a specified point in time.

DEMO:

Here I am creating a table called "tes" in the SCOTT schema and inserting some rows, noting the timestamp.

SQL> show user
USER is “SCOTT”
SQL> create table tes (id number);

Table created.

SQL> insert into tes values(3);

1 row created.

SQL> /

1 row created.

SQL> /

1 row created.

SQL> /

1 row created.

SQL> /

1 row created.

SQL> /

1 row created.

SQL> commit;

Commit complete.

Note down the timestamp at which the table contains all the data; let's say it was

21-09-2011 15:41:00

Now, at timestamp 21-09-2011 15:43:00, delete all rows from it.

SQL> delete from tes;

6 rows deleted.

SQL> select * from tes;

no rows selected

SQL> host expdp system/<password>@delhi schemas=scott dumpfile=my.dmp FLASHBACK_TIME=\"TO_TIMESTAMP('21-09-2011 15:41:00', 'DD-MM-YYYY HH24:MI:SS')\"

Export: Release 11.2.0.1.0 – Production on Wed Sep 21 15:44:49 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 – Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting “SYSTEM”.”SYS_EXPORT_SCHEMA_03″: system/@delhi schemas=scott dumpfile=my.dmp FLASHBACK_TIME=”TO_TIMESTAMP(’21-09-2011 15:41:00′, ‘DD-MM-YYYY HH24:M
Estimate in progress using BLOCKS method…
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 0 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
. . exported “SCOTT”.”BONUS” 0 KB 0 rows
. . exported “SCOTT”.”DEPT” 5.937 KB 4 rows
. . exported “SCOTT”.”EMP” 8.570 KB 14 rows
. . exported “SCOTT”.”SALGRADE” 5.867 KB 5 rows
. . exported “SCOTT”.”TES” 5.046 KB 6 rows
Master table “SYSTEM”.”SYS_EXPORT_SCHEMA_03″ successfully loaded/unloaded


Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_03 is:
D:\APP\BISHWANATH\ADMIN\DELHI\DPDUMP\MY.DMP
Job “SYSTEM”.”SYS_EXPORT_SCHEMA_03″ successfully completed at 15:45:14

As you can see, table "tes" is exported with all 6 rows intact, as of timestamp 21-09-2011 15:41:00.

Now just take a normal export without FLASHBACK_TIME and you will see that 0 rows are exported for "tes" this time.

SQL> host expdp system/<password>@delhi schemas=scott dumpfile=my1.dmp

Export: Release 11.2.0.1.0 – Production on Wed Sep 21 15:45:56 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 – Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting “SYSTEM”.”SYS_EXPORT_SCHEMA_03″: system/@delhi schemas=scott dumpfile=my1.dmp
Estimate in progress using BLOCKS method…
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 192 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
. . exported “SCOTT”.”DEPT” 5.937 KB 4 rows
. . exported “SCOTT”.”EMP” 8.570 KB 14 rows
. . exported “SCOTT”.”SALGRADE” 5.867 KB 5 rows
. . exported “SCOTT”.”BONUS” 0 KB 0 rows
. . exported “SCOTT”.”TES” 0 KB 0 rows
Master table “SYSTEM”.”SYS_EXPORT_SCHEMA_03″ successfully loaded/unloaded


Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_03 is:
D:\APP\BISHWANATH\ADMIN\DELHI\DPDUMP\MY1.DMP
Job “SYSTEM”.”SYS_EXPORT_SCHEMA_03″ successfully completed at 15:46:22

Enjoy:-)

=============================================================================================================================================

FILESIZE parameter in Datapump

FILESIZE:

Specifies the maximum size of each dump file. If the size is reached for any member of the dump file set, that file is closed and an attempt is made to create a new file, if the file specification contains a substitution variable.( such as %U.dmp)

Example 1:

expdp system/<password>@delhi SCHEMAS=scott DUMPFILE=demo%U.dmp FILESIZE=50k

The above command will create multiple dump files, each at most 50 KB in size.

Example 2:

The following shows an example in which the size of the dump file is set to 3 megabytes:

expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr_3m.dmp FILESIZE=3M

If three megabytes had not been sufficient to hold all the exported data, then the following error would have been displayed and the job would have stopped:

ORA-39095: Dump file space has been exhausted: Unable to allocate 217088 bytes

Enjoy:-)

=============================================================================================================================================

DIRECTORY and ENCRYPTION parameter in Datapump

DIRECTORY:

Specifies the location to which Export can write the dump file set and the log file.

DATA_PUMP_DIR is the default directory name which points to D:\app\Vishwanath\admin\delhi\dpdump location.

Creation of directory involves two stages:

1.) create directory dpump as 'D:\backup\dump\';

Grant read and write permission on the directory to the user who will perform the import or export operation:

2.) grant read,write on directory dpump to scott;

ENCRYPTION:

To enable encryption, either the ENCRYPTION or ENCRYPTION_PASSWORD parameter, or both, must be specified. If only the ENCRYPTION_PASSWORD parameter is specified, then the ENCRYPTION parameter defaults to ALL. If neither ENCRYPTION nor ENCRYPTION_PASSWORD is specified, then ENCRYPTION defaults to NONE.
Data pump encryption is specified by the encryption parameter, the algorithm of the encryption and the mode of the encryption

ENCRYPTION: Encrypt part or all of the dump file, where valid keyword values are:
ALL: Both metadata and data are encrypted
DATA_ONLY: Only data is encrypted
ENCRYPTED_COLUMNS_ONLY: Only encrypted columns are written to the dump file in an encrypted format
METADATA_ONLY: Only metadata is encrypted
NONE: Nothing is encrypted

ENCRYPTION_ALGORITHM Specify how encryption should be done where valid keyword values are: (AES128 | AES192 | AES256 ).

ENCRYPTION_MODE Method of generating encryption key where valid keyword values are :
( DUAL | PASSWORD | TRANSPARENT ) .

ENCRYPTION_PASSWORD Password key for creating encrypted column data

c:>expdp system/<password>@delhi dumpfile=scott.dmp logfile=scott_log.log schemas=scott encryption=all encryption_password=pass encryption_algorithm=AES256 ENCRYPTION_MODE=password

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 – Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting “SYSTEM”.”SYS_EXPORT_SCHEMA_01″: system/@delhi dumpfile=scott.dmp logfile=scott_log.log schemas=scott encryption=all encryption_password= encryption_algorithm=AES256 ENCRYPTION_MODE=password
Estimate in progress using BLOCKS method…
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 192 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
. . exported “SCOTT”.”DEPT” 5.945 KB 4 rows
. . exported “SCOTT”.”EMP” 8.578 KB 14 rows
. . exported “SCOTT”.”SALGRADE” 5.875 KB 5 rows
. . exported “SCOTT”.”BONUS” 0 KB 0 rows
Master table “SYSTEM”.”SYS_EXPORT_SCHEMA_01″ successfully loaded/unloaded


Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
D:\APP\BISHWANATH\ADMIN\DELHI\DPDUMP\SCOTT.DMP
Job “SYSTEM”.”SYS_EXPORT_SCHEMA_01″ successfully completed at 11:53:54

SQL> host impdp system/<password>@delhi dumpfile=scott.dmp logfile=scott_log_imp.log

Import: Release 11.2.0.1.0 – Production on Wed Sep 21 11:55:11 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 – Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-39002: invalid operation
ORA-39174: Encryption password must be supplied.

c:>impdp system/<password>@delhi dumpfile=scott.dmp logfile=scott_log_imp_1.log encryption_password=pass

Import: Release 11.2.0.1.0 – Production on Wed Sep 21 11:57:28 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 – Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table “SYSTEM”.”SYS_IMPORT_FULL_01″ successfully loaded/unloaded
Starting “SYSTEM”.”SYS_IMPORT_FULL_01″: system/@delhi dumpfile= logfile=scott_log_imp_1.log encryption_password=
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported “SCOTT”.”DEPT” 5.945 KB 4 rows
. . imported “SCOTT”.”EMP” 8.578 KB 14 rows
. . imported “SCOTT”.”SALGRADE” 5.875 KB 5 rows
. . imported “SCOTT”.”BONUS” 0 KB 0 rows
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Job “SYSTEM”.”SYS_IMPORT_FULL_01″ successfully completed at 11:57:31

Enjoy:-)

=============================================================================================================================================

Some more linux stuff for DBA

The “who” command can be used to list all users who have OS connections.

root> who
root> who | head -5
root> who | tail -5
root> who | grep -i ora
root> who | wc -l

The “head -5” command restricts the output to the first 5 lines of the who command.
The “tail -5” command restricts the output to the last 5 lines of the who command.
The “grep -i ora” command restricts the output to lines containing “ora”.
The “wc -l” command returns the number of lines from “who”, and hence the number of connected users.

Checking alert log for ORA errors:

You can return the error lines in a file using.

root> cat alert_LIN1.log | grep -i ORA-

The "grep -i ORA-" command limits the output to lines containing "ORA-". The "-i" flag makes the comparison case insensitive. A count of the error lines can be returned using the "wc" command. This normally gives a word count, but the "-l" flag alters it to give a line count.

root> cat alert_LIN1.log | grep -i ORA- | wc -l

How do you see how many instances are running?

In Linux, Unix the command: ps -ef|grep pmon

Enjoy:-)

=============================================================================================================================================

Useful commands in linux

./program (Execute a program in the current directory)

Piping Commands:

The pipe character, “|”, is used to chain two or more commands together. The output of the first command is “piped” into the next program, and if there is a second pipe, the output is sent to the third program, etc.

For example:

ls -la /usr/bin | less

In this example, we run the command “ls -la /usr/bin”, which gives us a long listing of all of the files in /usr/bin. Because the output of this command is typically very long, we pipe the
output to a program called “less”, which displays the output for us one screen at a time.

Redirecting Program Output to Files:

There are times when it is useful to save the output of a command to a file, instead of displaying it to the screen. For example, if we want to create a file that lists all of the MP3 files in a directory, we can do something like this, using the “>” redirection character:

ls -l /home/vic/MP3/*.mp3 > mp3files.txt

A similar command can be written so that instead of creating a new file called mp3files.txt, we can append to the end of the original file:

ls -l /home/vic/extraMP3s/*.mp3 >> mp3files.txt

which

Shows the full path of shell commands found in your path. For example, if
you want to know exactly where the “grep” command is located on the
filesystem, you can type “which grep”. The output should be something
like: /bin/grep

find

A very powerful command, but sometimes tricky to use. It can be used to
search for files matching certain patterns, as well as many other types of
searches. A simple example is:

find . -name *mp3

This example starts searching in the current directory “.” and all subdirectories,
looking for files with “mp3” at the end of their names.

ps

Typing ps alone would list the current running processes. Below is an example of the output that would be generated by the ps command.

PID TTY TIME CMD
6874 pts/9 0:00 ksh
6877 pts/9 0:01 csh
418 pts/9 0:00 csh

ps -ef

Display full information about each of the processes currently running.

UID PID PPID C STIME TTY TIME CMD
hope 29197 18961 0 Sep27 ? 00:00:06 sshd: hope@pts/87
hope 32097 29197 0 Sep27 pts/87 00:00:00 -csh
hope 7209 32097 0 12:17 pts/87 00:00:00 ps -ef

ps U oracle

Displays process information for a specific user (such as oracle).

You may also kill a process, if needed, using kill -9 <pid>.

ps -l

Displays processes including those that are in a wait state, similar to the below example.

F S UID PID PPID C PRI NI ADDR SZ WCHAN TTY TIME CMD
0 T 0 12308 29722 0 80 0 - 16136 finish pts/0 00:00:00 pico
0 R 0 12530 29722 0 80 0 - 15884 - pts/0 00:00:00 ps
4 S 0 29722 29581 0 80 0 - 16525 wait pts/0 00:00:00 bash

w

Displays information about the users currently on the machine, and their processes. The header shows, in this order, the current time, how long the system has been running, how many users are currently logged on, and the system load averages for the past 1, 5 and 15 minutes.

11:12am up 608 day(s), 19:56, 6 users, load average: 0.36, 0.36, 0.37
User tty login@ idle what
smithj pts/5 8:52am w
jonesm pts/23 20Apr06 28 -bash
harry pts/18 9:01am 9 pine
peterb pts/19 21Apr06 emacs -nw html/index.html
janetmcq pts/8 10:12am 3days -csh
singh pts/12 16Apr06 5:29 /usr/bin/perl -w perl/test/program.pl

id

Prints your user ID and group IDs.

df

Reports how much free disk space is available for each mounted filesystem.

df -h—>Human readable form

df -k—>like --block-size=1K

df -i—>inode information instead of block usage

An inode is what the Linux file system uses to identify each file. When a file system is created (using the mkfs command), the file system is created with a fixed number of inodes. If all these inodes become used, a file system cannot store any more files even though there may be free disk space.

du

Disk Usage in a particular directory. “du -s” provides a summary
for the current directory.
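
For example, a quick human-readable summary of a single directory (using the -h flag, purely as an illustration) looks like:

$ du -sh /u01/app/oracle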

Enjoy:-)

=============================================================================================================================================

What is Mounting and Mount Point in linux.

As you know, you can store your data in different physical storage devices, like floppies, CD-ROMs, and hard disk drives. Your hard disk or disks are also very likely split up into different partitions with different filesystems.

If you’re migrating to Linux from Microsoft Windows, you’re probably used to accessing all your filesystems very easily: you just boot up your computer, go to My Computer, and find all your Windows partitions there immediately. For example, if you have a second hard drive (or a second Windows partition), it automatically appears as D:\ and you can immediately access it. The same goes for floppies, CD-ROMs, digital cameras, and other storage devices: you just plug them in, and you’ll be able to immediately access them. However, this isn’t the case in Linux.

You’re probably a bit confused at first: you put your floppy or CD into the drive and start wondering why you’re not able to access it! This is because your floppies, CDs, hard disk partitions, and other storage devices must be attached to some existing directory on your system before they can be accessed. This attaching is called mounting, and the directory where the device is attached is called a mount point.

After the device is mounted, you can access the files on that device by accessing the directory where the device is attached. When you’re done and want to remove the floppy or CD or other device, you need to detach it (unmount it) before removing it.

How to mount

When mounting, you must tell the mount command which device or partition you want to mount and what the mount point is. The mount point must be a directory that already exists on your system. For example, to mount your floppy:

$ mount /dev/fd0 /mnt/floppy

In this example, /dev/fd0 is your floppy drive, and /mnt/floppy is the mount point.
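
To detach the device when you are done (assuming no files under the mount point are still in use), use the umount command with either the device or the mount point:

$ umount /mnt/floppy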

Enjoy:-)

=============================================================================================================================================

The TAR command

It is used to collect a number of files into a single archive file. Later you may compress the archive with the gzip command.

Syntax

$ tar -cvf <archive_name>.tar <files to archive>

Examples:

$ tar -cvf my.tar /u01/app/oracle/test/*

Options

-c——>create file.

-v——>verbose means list or show all processed files on screen.

-f——>Name of the tar file.

Now, you may compress it with gzip command.

$ gzip my.tar

It now becomes my.tar.gz.

Now, you may uncompress it using the gunzip command:

$ gunzip my.tar.gz

To extract (unarchive) an archive created by tar:

$ tar -xvf my.tar

-x——->Extract the files.
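
As a small aside (a common shortcut, not required), the -z option lets tar call gzip itself, so the archive-and-compress and uncompress-and-extract steps can each be done in one command:

$ tar -czvf my.tar.gz /u01/app/oracle/test/*

$ tar -xzvf my.tar.gz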

Enjoy:-)

=============================================================================================================================================

Setting FGA at schema level

FGA (Fine-Grained Auditing): Fine-grained auditing allows us to audit users accessing data that meets certain criteria. Standard auditing lets us audit select, insert, update and delete operations at the object level; FGA goes further and audits access to the data itself. We use the dbms_fga package to add, remove, enable and disable FGA policies, and it is virtually impossible to bypass them. One nice feature is that we can attach a handler (like a trigger) to a policy, which can execute a procedure.

Setting FGA at schema level: Below is the code to set FGA at the schema level, so that all tables in the schema are audited.

DECLARE
   -- all tables owned by the TEST schema
   CURSOR Get_Tab IS
      SELECT Table_Name
      FROM   dba_Tables
      WHERE  Owner = 'TEST';   -- schema name
BEGIN
   FOR t IN Get_Tab LOOP
      dbms_fga.Add_Policy(Object_Schema   => 'TEST',
                          Object_Name     => t.Table_Name,
                          Policy_Name     => 'chk_' || t.Table_Name,
                          Statement_Types => 'insert,update,delete,select');
   END LOOP;
END;
/

Now, do some activities in test schema:

SQL> conn test/[email protected]
Connected.
SQL> select * from tab;

TNAME TABTYPE CLUSTERID


B TABLE
A TABLE

SQL> select * from a;

ID

     1
     1
     1
     1

SQL> select * from b;;
select * from b;
*
ERROR at line 1:
ORA-00911: invalid character

SQL> select * from b;

ID

     2
     2
     2
     2
     2
     2

6 rows selected.

SQL> update a set id=0;

4 rows updated.

SQL> delete from a ;

4 rows deleted.

SQL> insert into b values(22);

1 row created.

SQL> commit;

Commit complete.

Checking the FGA audit trail for schema ‘TEST’:

select db_user, scn, sql_text from dba_fga_audit_trail where db_user = 'TEST';

Output:

The query lists every SQL_TEXT fired against the tables of schema TEST, confirming that the policies captured the activity above.
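
To undo this setup later, a similar loop can drop the policies. This is a minimal sketch assuming every table still carries a policy named with the same 'chk_' convention used above:

BEGIN
   FOR t IN (SELECT Table_Name FROM dba_Tables WHERE Owner = 'TEST') LOOP
      dbms_fga.Drop_Policy(Object_Schema => 'TEST',
                           Object_Name   => t.Table_Name,
                           Policy_Name   => 'chk_' || t.Table_Name);
   END LOOP;
END;
/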

=============================================================================================================================================

Temporary Tablespace Group

Tablespace groups allow users to use more than one tablespace to store temporary segments. A group contains only temporary tablespaces. It is created implicitly when the first temporary tablespace is assigned to it, and it is deleted when the last temporary tablespace is removed from the group.

Benefits:

-It allows the user to use multiple temporary tablespaces in different sessions at the same time.
-It allows a single SQL operation to use multiple temporary tablespaces for sorting.

How can we create a tablespace group?

ALTER TABLESPACE temp TABLESPACE GROUP temp_ts_group;

How do we add a new tablespace to this group?

CREATE TEMPORARY TABLESPACE temp2
TEMPFILE '/u01/app/oracle/oradata/DB10G/temp201.dbf' SIZE 20M
TABLESPACE GROUP temp_ts_group;

How can we view the tablespace groups?

SELECT * FROM dba_tablespace_groups;

GROUP_NAME      TABLESPACE_NAME
--------------- ---------------
TEMP_TS_GROUP   TEMP
TEMP_TS_GROUP   TEMP2
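
Once the group exists, it can be used anywhere a temporary tablespace name is expected. For example (illustrative only, using a hypothetical user scott), it can be assigned as a user's or the database's default temporary tablespace:

ALTER USER scott TEMPORARY TABLESPACE temp_ts_group;

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp_ts_group;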

How can we delete a tablespace group?

ALTER TABLESPACE temp TABLESPACE GROUP '';

HTH:-)

=============================================================================================================================================

What is a Bigfile tablespace?

Bigfile tablespaces are tablespaces with a single, very large datafile (meaning you cannot add more than one datafile to them).

Why do I use it?

-An Oracle database can have a maximum of 64,000 datafiles which limits its total capacity. By allowing tablespaces to have a single large datafile the total capacity of the database is increased. A bigfile tablespace with 8K and 32K blocks can contain 32 terabyte and 128 terabyte datafiles respectively.

-Using fewer larger datafiles allows the DB_FILES and MAXDATAFILES parameters to be reduced, saving SGA and controlfile space.

-The ALTER TABLESPACE syntax has been updated to allow operations at the tablespace level, rather than datafile level.

The size of the file can be specified in kilobytes (K), megabytes (M), gigabytes (G), or terabytes (T).

How do I create it?

CREATE BIGFILE TABLESPACE mybigtbs
DATAFILE 'd:\mybigtb.dbf' SIZE 100G;

How do I resize it?

ALTER TABLESPACE mybigtbs AUTOEXTEND ON NEXT 10G;
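
Because a bigfile tablespace has only one datafile, it can also be resized directly at the tablespace level (one of the syntax benefits mentioned above), for example:

ALTER TABLESPACE mybigtbs RESIZE 200G;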

How do I recognize it?

A BIGFILE column (YES or NO) has been added to the DBA_TABLESPACES, USER_TABLESPACES and V$TABLESPACE views to indicate the tablespace type.

HTH:-)

=============================================================================================================================================

What are Change Vectors?

Change vectors describe changes made to database blocks. Each change vector describes a single transactional change to a database block and is marked with a unique number, the System Change Number (SCN). Change vectors are also stored for changes to rollback segment data blocks and to the transaction table of the rollback segments.

HTH:-)

=============================================================================================================================================


Top Command

Displays running processes and CPU usage in a full-screen, continuously updating display. A great way to see the activity on your computer in real time. Type “q” to quit.

Press M

By default the top command displays the processes in order of CPU usage. While top is running, press M (upper case) to display the processes sorted by memory usage instead.

Press O

To sort top output by any column

Press K

Kill a Task Without Exiting From Top

Display Processes of a Selected User in Top Output using:

$ top -u vishu

Change Refresh Interval

Press d (or s) and then type the required interval in seconds.

Press z or b

Highlight Running Processes in the Linux Top Command Output

Press n

Decrease Number of Processes Displayed in Top Output

The PID column shown by top can then be matched with the SPID column of the V$PROCESS view to provide more information on the process within the database:

SELECT a.username,
       a.osuser,
       a.program,
       b.spid,
       a.sid,
       a.serial#
FROM   v$session a,
       v$process b
WHERE  a.paddr = b.addr
AND    b.spid = '&pid';
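
As a side note (standard top behaviour, not Oracle-specific), top can also be run non-interactively, which is handy for capturing a snapshot of the busiest processes into a file or a script:

$ top -b -n 1 | head -20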

Enjoy:-)

=============================================================================================================================================

What are bdump,udump,dpdump,adump and cdump in oracle?

Below is the description of all. Have a look!

BDUMP:

SQL> show parameter BACKGROUND_DUMP_DEST

C:\ORACLE\PRODUCT\10.2.0\ADMIN\\BDUMP

This shows the bdump directory you were referring to in your question. This is a directory where debugging trace files for the background processes (LGWR, DBWn, and so on) are written.

UDUMP:

SQL> show parameter USER_DUMP_DEST

C:\ORACLE\PRODUCT\10.2.0\ADMIN\\UDUMP

This specifies the pathname for a directory where debugging trace files will be written on behalf of a user process.

DPDUMP:

The dpdump directory has files created by the data pump export/import utility.

ADUMP:

The adump directory is the area where auditing trace files are stored.

CDUMP:

CDUMP specifies the core dump directory, where OS error logs (core dump files) are placed.
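
To check all of these destinations at once, a simple illustrative query against v$parameter can be used:

SQL> SELECT name, value FROM v$parameter WHERE name LIKE '%dump_dest%';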

Enjoy:-)

=============================================================================================================================================

Users are complaining that their application is hanging.

Every DBA has encountered a situation like this at some point.

Below are a few steps you can follow in sequence to tackle it.

If a user complains of a hang problem:

1) First try to connect to the database yourself and fire a simple query to check whether connections are still being accepted.
2) Check for locked objects by querying v$locked_object, dba_waiters and dba_blockers (see the query sketch below).
3) Then check at the OS level which process is consuming most of the time.
4) Analyse whether the problem relates to a single user process.
5) If it does, check what that user is doing by looking at the SQL statement he is firing.
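
A minimal sketch for the lock check, assuming Oracle 10g or later where v$session exposes a blocking_session column, is to list the sessions that are currently blocked and who is blocking them:

SELECT sid, serial#, username, blocking_session, event
FROM v$session
WHERE blocking_session IS NOT NULL;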

HTH:-)

=============================================================================================================================================

Difference between Chaining and Migration of a row

Chaining:


It occurs when the row is too large to fit into one data block when it is first
inserted. In this case, Oracle stores the data for the row in a chain of data
blocks (one or more) reserved for that segment. Row chaining most often occurs with large rows, such as rows that contain a column of datatype LONG, LONG RAW, LOB, etc. Row chaining in these cases is unavoidable.

Migration:


It occurs when a row that originally fitted into one data block is updated so
that the overall row length increases, and the block’s free space is already
completely filled. In this case, Oracle migrates the data for the entire row
to a new data block, assuming the entire row can fit in a new block. Oracle
preserves the original row piece of a migrated row to point to the new block
containing the migrated row: the rowid of a migrated row does not change.
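
A common way to check for both conditions is the ANALYZE ... LIST CHAINED ROWS command. This is a sketch assuming the CHAINED_ROWS table has already been created with the utlchain.sql script shipped under $ORACLE_HOME/rdbms/admin, and reusing the demo table test.a from the earlier example:

ANALYZE TABLE test.a LIST CHAINED ROWS INTO chained_rows;

SELECT owner_name, table_name, head_rowid FROM chained_rows;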

HTH:-)

=============================================================================================================================================

Steps involved in startup of database:

NOMOUNT STAGE:

-SGA is allocated.
-Background processes are started.
-Alert log file is opened.

MOUNT STAGE:

-Controlfile is read.
-Names & locations of the datafiles and redo logfiles are obtained from it.

OPEN STAGE:

All datafiles and redo logfiles are opened for use.
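
The stages can also be walked through one at a time from SQL*Plus, which is useful when maintenance has to be done at a particular stage:

SQL> STARTUP NOMOUNT;
SQL> ALTER DATABASE MOUNT;
SQL> ALTER DATABASE OPEN;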

HTH:-)

=============================================================================================================================================

Stages involved in shutdown

Shutting down a database involves the following steps:

1) Close the database.
2) Unmount the database.
3) Shut down the instance.

1) Close the database:
When we close the database, Oracle writes all the data and recovery data in the SGA to the datafiles and redo log files. Next, Oracle closes all online datafiles and redo log files.
Any offline datafiles of offline tablespaces have already been closed.

2) Unmount the database:

After the database has been closed, Oracle unmounts the database to dissociate it from its instance. At this point, the instance remains in the memory of the computer.
After the database is unmounted, Oracle closes the control files of the database.

3) Shut down the instance:

The final step in shutting down the database is shutting down the instance: the SGA is removed from memory and the background processes are terminated.
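
For reference, all three steps are performed by the SHUTDOWN command; the mode only controls how aggressively existing sessions and transactions are dealt with:

SHUTDOWN NORMAL——>Waits for all users to disconnect.
SHUTDOWN TRANSACTIONAL——>Waits for active transactions to complete.
SHUTDOWN IMMEDIATE——>Rolls back active transactions and disconnects users.
SHUTDOWN ABORT——>Terminates the instance at once; crash recovery is performed on the next startup.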

HTH:-)

=============================================================================================================================================

Quiescing the database.

Quiesce means pausing or altering the state of running processes.

Some operations on database objects will fail if non-DBA queries reference the object during the operation, such as moving a table to a new schema. To prevent users being affected by these operations, DBAs often shut down the database and open it in restricted mode. This has an obvious effect on availability, as users are locked out of the system until the restriction is lifted.

Why do we quiesce our database?

1.) The quiesce feature is useful when performing table maintenance or complicated data maintenance.

2.) The main advantage of this method is that users do not lose their sessions during the process.

3.) The shared pool does not have to “warm up” after a shutdown so performance should return to normal instantly.

4.) This method is advantageous when performing ALTER TABLE, CREATE OR REPLACE PACKAGE and EXP/IMP operations.

5.) Without the ability to quiesce the database, you would need to shut down the database and reopen it in restricted mode. This is a serious restriction, especially for systems requiring 24 x 7 availability. Quiescing a database is a much smaller restriction, because it eliminates the disruption to users and the downtime associated with shutting down and restarting the database.

How to quiesce the database?

Connect to the database as sys or system user and type the below command.

SQL>ALTER SYSTEM QUIESCE RESTRICTED;

As soon as the above statement is issued, all attempts to activate an inactive non-DBA session are blocked; only DBA sessions can continue working. Once the maintenance is finished, the database can be returned to a fully available state as shown below.

How to unquiesce the database?

SQL>ALTER SYSTEM UNQUIESCE;

When you type the above command, everything goes back to normal.
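
To confirm the current state (a quick check, assuming Oracle 10g or later), the ACTIVE_STATE column of V$INSTANCE reports NORMAL, QUIESCING or QUIESCED:

SQL> SELECT active_state FROM v$instance;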

Enjoy:-)

=============================================================================================================================================

Difference between Statspack and AWR..

1.) STATSPACK does not store the Active Session History (ASH) statistics which are available in the AWR dba_hist_active_sess_history view.

2.) An important difference between STATSPACK and the AWR is that STATSPACK does not store history for the new metric statistics introduced in Oracle 10g. The key AWR views for these metrics are dba_hist_sysmetric_history and dba_hist_sysmetric_summary.

3.)Statspack snapshots must be run by an external scheduler (dbms_jobs, CRON, etc.). AWR snapshots are scheduled every 60 minutes by default. Administrators can manually adjust the snapshot interval if so desired.

4.) The AWR (whose data is analysed by ADDM) captures a much greater depth and breadth of statistics than Statspack does.

5.) Statspack snapshot purges must be scheduled manually. AWR snapshots are purged automatically by MMON every night. MMON, by default, tries to keep one week’s worth of AWR snapshots available. If AWR detects that the SYSAUX tablespace is in danger of running out of space, it will free space in SYSAUX by automatically deleting the oldest set of snapshots.

Enjoy:-)

=============================================================================================================================================

The AWR Report

The AWR Report:

AWR is an enhancement of Statspack which helps us to tune the database.

Use the below links to know all about AWR:

What is an AWR in oracle? How to create it?

How to interpret AWR report?
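
For quick reference (standard AWR usage, assuming the Diagnostics Pack is licensed), a snapshot can be taken manually and a report generated from SQL*Plus like this:

SQL> EXEC dbms_workload_repository.create_snapshot;
SQL> @?/rdbms/admin/awrrpt.sql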

Enjoy:-)