Oracle 10g Free Training - Data Pump Components

In this tutorial we will learn about the Data Pump components (Export and Import), the new features in Data Pump Export and Import, and how the original Export and Import utilities differ from Data Pump Export and Data Pump Import.

Data Pump Components

Oracle Data Pump is made up of three distinct parts:

  • The command-line clients, expdp and impdp
  • The DBMS_DATAPUMP PL/SQL package (also known as the Data Pump API)
  • The DBMS_METADATA PL/SQL package (also known as the Metadata API)

The Data Pump clients, expdp and impdp, invoke the Data Pump Export utility and Data Pump Import utility, respectively. They provide a user interface that closely resembles the original export (exp) and import (imp) utilities.
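As a quick illustration, a schema-level export and the corresponding import might look like the following; the scott/tiger credentials, the directory object dpump_dir1, and the file names are placeholders for your own environment:

    expdp scott/tiger DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp LOGFILE=scott_exp.log SCHEMAS=scott
    impdp scott/tiger DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp LOGFILE=scott_imp.log SCHEMAS=scott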

The expdp and impdp clients use the procedures provided in the DBMS_DATAPUMP PL/SQL package to execute export and import commands, using the parameters entered at the command-line. These parameters enable the exporting and importing of data and metadata for a complete database or subsets of a database.
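To see that relationship concretely, here is a minimal PL/SQL sketch of roughly what the expdp client does for a schema export; the job name, schema, and directory object are assumptions for illustration:

    DECLARE
      h NUMBER;
    BEGIN
      -- Open a schema-mode export job through the Data Pump API
      h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT',
                              job_mode  => 'SCHEMA',
                              job_name  => 'SCOTT_EXP_JOB');
      -- Name the dump file and the directory object it is written to
      DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'scott.dmp',
                             directory => 'DPUMP_DIR1');
      -- Restrict the job to the SCOTT schema
      DBMS_DATAPUMP.METADATA_FILTER(handle => h, name => 'SCHEMA_EXPR',
                                    value => 'IN (''SCOTT'')');
      -- Start the job; DETACH leaves it running on the server
      DBMS_DATAPUMP.START_JOB(h);
      DBMS_DATAPUMP.DETACH(h);
    END;
    /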

New Features In Data Pump Export and Import

The new Data Pump Export and Import utilities (invoked with the expdp and impdp commands, respectively) have a similar look and feel to the original Export (exp) and Import (imp) utilities, but they are completely separate. Dump files generated by the new Data Pump Export utility are not compatible with dump files generated by the original Export utility. Therefore, files generated by the original Export (exp) utility cannot be imported with the Data Pump Import (impdp) utility.

Oracle recommends that you use the new Data Pump Export and Import utilities because they support all Oracle Database 10g features, except for XML schemas. Original Export and Import support the full set of Oracle database release 9.2 features. Also, the design of Data Pump Export and Import results in greatly enhanced data movement performance over the original Export and Import utilities.

The following are the major new features that provide this increased performance, as well as enhanced ease of use:

The ability to specify the maximum number of threads of active execution operating on behalf of the Data Pump job. This enables you to adjust resource consumption versus elapsed time. This feature is available only in the Enterprise Edition of Oracle Database 10g.
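This is controlled with the PARALLEL parameter. For example, the following full export runs with four parallel streams; the %U substitution variable generates a separate dump file for each stream (credentials and directory object are placeholders):

    expdp system/manager DIRECTORY=dpump_dir1 DUMPFILE=full%U.dmp FULL=y PARALLEL=4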

The ability to restart Data Pump jobs.

The ability to detach from and reattach to long-running jobs without affecting the job itself. This allows DBAs and other operations personnel to monitor jobs from multiple locations. The Data Pump Export and Import utilities can be attached to only one job at a time; however, you can have multiple clients or jobs running at one time. (If you are using the Data Pump API, the restriction on attaching to only one job at a time does not apply.) You can also have multiple clients attached to the same job.
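For example, assuming a job was started with JOB_NAME=scott_exp_job, you can later reattach to it from a different session; attaching drops you into the interactive-command mode described below:

    expdp scott/tiger DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp SCHEMAS=scott JOB_NAME=scott_exp_job
    expdp scott/tiger ATTACH=scott_exp_job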

Support for export and import operations over the network, in which the source of each operation is a remote instance.
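This is done with the NETWORK_LINK parameter. Assuming a database link named source_db already exists, the following import pulls the SCOTT schema directly from the remote instance, with no intermediate dump file (the link name and credentials are placeholders):

    impdp system/manager DIRECTORY=dpump_dir1 NETWORK_LINK=source_db SCHEMAS=scott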

The ability, in an import job, to change the name of the source datafile to a different name in all DDL statements where the source datafile is referenced.
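This is the REMAP_DATAFILE parameter. A sketch, with placeholder paths (because of the quotation marks, this parameter is usually easier to put in a parameter file than on the command line):

    impdp system/manager DIRECTORY=dpump_dir1 DUMPFILE=full.dmp FULL=y REMAP_DATAFILE='/u01/oradata/users01.dbf':'/disk2/oradata/users01.dbf'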

Enhanced support for remapping tablespaces during an import operation.
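For example, the REMAP_TABLESPACE parameter moves all objects from one tablespace into another during the import (the tablespace names here are placeholders):

    impdp scott/tiger DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp REMAP_TABLESPACE=users:example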

Support for filtering the metadata that is exported and imported, based upon objects and object types.
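Filtering is done with the INCLUDE and EXCLUDE parameters, which cannot be used together in the same job. For example, the first command below skips indexes, and the second exports only the named tables; depending on your shell, the quotation marks may need escaping or a parameter file:

    expdp scott/tiger DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp SCHEMAS=scott EXCLUDE=INDEX
    expdp scott/tiger DIRECTORY=dpump_dir1 DUMPFILE=tabs.dmp INCLUDE=TABLE:"IN ('EMP','DEPT')"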

Support for an interactive-command mode that allows monitoring of and interaction with ongoing jobs.
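Pressing Ctrl+C in a running client (or attaching with ATTACH, as shown earlier) switches to an interactive prompt. A few of the available commands:

    Export> STATUS
    Export> PARALLEL=8
    Export> STOP_JOB=IMMEDIATE
    Export> START_JOB

STOP_JOB stops the job but leaves it restartable; START_JOB resumes a stopped job, which is how the restart capability mentioned above is exercised; KILL_JOB would abandon the job entirely.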

The ability to estimate how much space an export job would consume, without actually performing the export.
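This is done with the ESTIMATE_ONLY parameter; no dump file is written:

    expdp scott/tiger DIRECTORY=dpump_dir1 ESTIMATE_ONLY=y SCHEMAS=scott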

The ability to specify, with the VERSION parameter, the version of database objects to be moved. In export jobs, VERSION applies to the version of the database objects to be exported. In import jobs, VERSION applies only to operations over the network, where it governs the version of database objects to be extracted from the source database.
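For example, the following export writes a dump file containing only objects compatible with release 10.1, so that it can be imported into a 10.1 database (credentials and file names are placeholders):

    expdp scott/tiger DIRECTORY=dpump_dir1 DUMPFILE=scott_v101.dmp SCHEMAS=scott VERSION=10.1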

Most Data Pump export and import operations occur on the Oracle database server. (This contrasts with original export and import, which were primarily client-based.)
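Because the dump and log files live on the server, a suitably privileged user must first create a directory object that maps to an operating-system path and grant access on it; this is the directory object referenced by the DIRECTORY parameter in the examples above (the path and grantee are placeholders):

    CREATE DIRECTORY dpump_dir1 AS '/u01/app/oracle/dpump';
    GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO scott;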

Original Export and Import vs. Data Pump Export and Import

If you are familiar with the original Export (exp) and Import (imp) utilities, it is important to understand that many of the concepts behind them do not apply to Data Pump Export (expdp) and Data Pump Import (impdp). In particular:

  • Data Pump Export and Import operate on a group of files called a dump file set rather than on a single sequential dump file.
  • Data Pump Export and Import access files on the server rather than on the client. This results in improved performance. It also means that directory objects are required when you specify file locations.
  • Data Pump Export and Import use parallel execution rather than a single stream of execution, for improved performance. This means that the order of data within dump file sets is more variable.
  • Data Pump Export and Import represent metadata in the dump file set as XML documents rather than as DDL commands. This provides improved flexibility for transforming the metadata at import time.
  • Data Pump Export and Import are self-tuning utilities. Tuning parameters that were used in original Export and Import, such as BUFFER and RECORDLENGTH, are neither required nor supported by Data Pump Export and Import.
  • At import time there is no option to perform interim commits during the restoration of a partition. This was provided by the COMMIT parameter in original Import.
  • There is no option to merge extents when you re-create tables. In original Import, this was provided by the COMPRESS parameter. Instead, extents are reallocated according to the storage parameters of the target table.
  • Sequential media, such as tapes and pipes, are not supported.
  • When you are importing data into an existing table using either APPEND or TRUNCATE, if any row violates an active constraint, the load is discontinued and no data is loaded. This is different from original Import, which logs any rows that are in violation and continues with the load.
