Sunday, September 21, 2008

Data Pump Export

The new utility is known as expdp to differentiate it from exp, the original export. In this example, we will use Data Pump to export a large table, CASES, about 3GB in size. Data Pump creates and reads its files on the database server, not on the client, so file locations are specified as directory objects. In this case, we will use the filesystem /u02/dpdata1 to hold the dump files, so we create a directory object pointing to it and grant access to the exporting user:

create directory dpdata1 as '/u02/dpdata1';
grant read, write on directory dpdata1 to bob;
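
As a quick sanity check (not part of the original example), BOB can confirm that the directory object is visible to him by querying the ALL_DIRECTORIES data dictionary view:

select directory_name, directory_path
from   all_directories
where  directory_name = 'DPDATA1';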

Next, we will export the data:

expdp bob/abc123 tables=CASES directory=DPDATA1
dumpfile=expCASES.dmp job_name=CASES_EXPORT
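
If the command line grows unwieldy, or your shell mangles special characters, the same invocation can be driven from a parameter file instead; a minimal sketch, using expcases.par as a hypothetical file name:

# expcases.par
tables=CASES
directory=DPDATA1
dumpfile=expCASES.dmp
job_name=CASES_EXPORT

expdp bob/abc123 parfile=expcases.par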

Let's analyze the various parts of the original expdp command. The userid/password combination and the tables and dumpfile parameters are self-explanatory. Unlike the original export, the file is created on the server, not the client. The location is specified by the directory parameter value DPDATA1, which points to /u02/dpdata1 as created earlier. The process also creates a log file, again on the server, in the location specified by the directory parameter. By default, the process uses a directory object named DPUMP_DIR, so you could create that one instead of DPDATA1.
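
If you want control over the log file's name rather than taking the default, the logfile parameter sets it explicitly; for example:

expdp bob/abc123 tables=CASES directory=DPDATA1
dumpfile=expCASES.dmp logfile=expCASES.log job_name=CASES_EXPORT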

Note the job_name parameter above, a special one not found in the original export. All Data Pump work is done through jobs. Data Pump jobs, unlike DBMS jobs, are merely server processes that process the data on behalf of the main process. The main process, known as the master control process, coordinates this effort via Advanced Queuing; it does so through a special table created at runtime, known as the master table. In our example, if you check the schema of the user BOB while expdp is running, you will notice the existence of a table CASES_EXPORT, corresponding to the job_name parameter. This table is dropped when expdp finishes.
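
You can watch all of this happen while the export runs; a sketch of two quick checks, the first run as a DBA and the second as BOB:

select owner_name, job_name, operation, state
from   dba_datapump_jobs;

select object_name, object_type
from   user_objects
where  object_name = 'CASES_EXPORT';

If your client session is disconnected, running expdp bob/abc123 attach=CASES_EXPORT reattaches you to the still-running job.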
