1) Oracle Data Pump export/import is considerably faster than exp/imp (in my tests on a test server, roughly 20 to 40 times), even with parallelism set to 1, because Data Pump uses direct path (when possible).
2) The INCLUDE and EXCLUDE parameters can be used to limit the export/import to specific objects. For example, if we have taken a schema-level export and at import time we want to exclude a specific table, we simply specify the name of the table to exclude:
impdp test/test schemas=test exclude=TABLE:"= 'TABLE_NAME'" directory=TEST_DIR dumpfile=export_data.dmp logfile=impdptest.log
We can INCLUDE or EXCLUDE indexes and constraints, or filter rows with QUERY, instead of the INDEXES=y/n and CONSTRAINTS=y/n switches of the old exp/imp.
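For example (a sketch; the schema, table, and file names are hypothetical), the same schema export can skip indexes and constraints, or restrict the rows exported with a QUERY clause:

expdp test/test schemas=test exclude=INDEX,CONSTRAINT directory=TEST_DIR dumpfile=export_data.dmp logfile=expdptest.log

expdp test/test tables=test.emp query='test.emp:"WHERE deptno=10"' directory=TEST_DIR dumpfile=export_emp.dmp logfile=expdpemp.log

Note that quoting of the QUERY clause on the command line depends on your shell; putting it in a parameter file avoids the escaping issues.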
3) Using the NETWORK_LINK parameter we can export/import over a database link, i.e. objects are imported directly from the source into the local server without being written to a dump file.
impdp test/[email protected] tables=table_name network_link=REMOTE_TEST directory=TEST_DIR logfile=impdptest.log remap_schema=test:test_new
In the above example, the specified table of the test schema on the remote server is imported directly into the test_new schema on the local server.
4) Data Pump has the ability to write to many dump files simultaneously using the PARALLEL parameter.
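A sketch (the schema, directory, and file names are hypothetical): the %U substitution in the dump file name generates a separate file for each parallel worker.

expdp test/test schemas=test parallel=4 dumpfile=export_%U.dmp directory=TEST_DIR logfile=expdptest.log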
5) The ESTIMATE and ESTIMATE_ONLY parameters help you estimate the size of an export without actually performing it. This lets you arrange the required space, or take the backup on a disk with sufficient space, reducing the probability that the export fails for lack of space.
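For example (a sketch; names are hypothetical), ESTIMATE_ONLY=y writes the expected dump size to the log without producing any dump file:

expdp test/test schemas=test estimate_only=y logfile=estimate.log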
6) The remapping capabilities of Data Pump let you remap schemas and tablespaces, making it easy to transfer data from one schema or tablespace to another.
impdp system/[email protected] directory=data_pump_dir dumpfile=mydump01.dmp remap_schema=source_schema:dest_schema remap_tablespace=source_tablespace:dest_tablespace
7) With expdp the dump file is always created on the server, even when the export is initiated from a client. This improves performance and avoids transferring the data over the network.
8) Data Pump has the ability to restart a job that has failed due to lack of space or for other reasons.
9) The ability to detach from and reattach to long-running jobs without affecting the job itself. This allows DBAs and other operations personnel to monitor jobs from multiple locations. With exp, if an export/import fails you have to perform it again; with expdp you don't need to worry about this, because you can attach to or detach from the job.
A) Start the job with an explicit job name:
expdp mm_stg/mm_stg dumpfile=DPUMP:export.dmp logfile=DPUMP:export.log schemas=mm_stg job_name=SYS_EXPORT_SCHEMA
B) If you want to pause the job for some time, press CTRL+C and type the command given below:
Export> STOP_JOB=IMMEDIATE
Are you sure you wish to stop this job ([yes]/no): yes
C) When you want to restart the job, attach to it:
$ expdp mm_stg/mm_stg attach=SYS_EXPORT_SCHEMA
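Once reattached, the job can be resumed from the interactive prompt with the standard START_JOB command (a sketch):

Export> START_JOB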
10) It has its own performance tuning features (for example, the PARALLEL parameter discussed above).
11) It supports an interactive mode that allows the DBA to monitor or interact with ongoing jobs.
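For example, pressing CTRL+C during a running job drops into the interactive prompt, where the STATUS command shows the job's current progress (a sketch):

Export> STATUS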
12) The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).
expdp username/password TABLES=test.emp VERSION=10.1 dumpfile=DPUMP:export.dmp logfile=DPUMP:export.log
CONCLUSION
Oracle Data Pump is a great tool for fast movement of data between databases, and much of this performance improvement derives from the use of the PARALLEL parameter. The new export and import clients, expdp and impdp, fully exploit the Data Pump infrastructure. They are implemented as complete supersets of the original exp and imp, and will eventually replace them.