Oracle 10g Tutorials : Oracle Net Services

In this tutorial you will learn about Oracle Net Services: an overview, using Oracle Net Manager, creating listeners, choosing general parameters, and enabling tracing and logging for database transactions. We will also learn how to use TNSPING to test Oracle Net connectivity.

Document Summary

This document gives an overview of Oracle Net Services. It also explains the uses of Oracle Net Manager.

Oracle Net Services Overview

Oracle Net Services provides enterprise-wide connectivity solutions in distributed, heterogeneous computing environments. Oracle Net Services eases the complexities of network configuration and management, maximizes performance, and improves network diagnostic capabilities.

Oracle Net, a component of Oracle Net Services, enables a network session from a client application to an Oracle database server. Once a network session is established, Oracle Net acts as a data courier for the client application and the database server. It is responsible for establishing and maintaining the connection between the client application and database server, as well as exchanging messages between them. Oracle Net is able to perform these jobs because it is located on each computer in the network.

Network sessions are established with the help of a listener. The listener is a separate process that resides on the database server. The listener receives incoming client connection requests and manages the traffic of these requests to the server, brokering each request and handing it off to the server. Every time a client requests a network session with a server, the listener receives the actual request. If the client's information matches the listener's information, the listener grants a connection to the server.

Use Oracle Net Manager to create and configure listeners

Navigation – Start->Programs->Oracle-Oracle Client10g Home->Configuration and Migration Tools->Net Manager

Once you click on the Net Manager link on the above navigation path, the following screen will appear.

Click on the Local link in the above screen and the following screen will appear.

Click on the Listeners tab to see the list of existing listeners.

Enter the name of the listener you want to create in the following pop-up screen. You can choose any user-friendly name.

You will see LISTENER1 created in the following screen. Click on the Add Address button to add the listening location's address.

Choose General Parameters from the drop-down and you will see the following screen.

Leave the default values in this screen.

Click on the ‘Logging & Tracing’ tab and you will see the following screen.

You can enable tracing and logging for the database transactions; you will be required to specify the files in which to store the logs or traces.

Click on the ‘Authentication’ tab and you will see the following screen. You can set the password for listener-related operations in this screen.

Choose Database Services from the drop-down and you will see the following screen.

Give the Global Database Name, SID, and Oracle Home Directory to configure the database service for the listener. Save the information and your listener is created and configured. You can change any of the listener's parameters later through Oracle Net Manager.
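
Behind the scenes, Oracle Net Manager writes these settings into the listener.ora file under ORACLE_HOME/network/admin. A rough sketch of the generated entries is shown below; the host, port, global database name, Oracle home path, and SID are illustrative values and your file may differ:

LISTENER1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
  )

SID_LIST_LISTENER1 =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = orcl.example.com)
      (ORACLE_HOME = C:\oracle\product\10.2.0\db_1)
      (SID_NAME = orcl)
    )
  )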

Use TNSPING to test Oracle Net connectivity

You can also use the tnsping utility at the command prompt to check Oracle Net connectivity. Follow these steps to check the connection to a particular net service name.

  1. Open Command prompt.
  2. Type tnsping <net_service_name> (see the example below).
  3. If you get the message “Used TNSNAMES adapter to resolve the alias”, then the tnsping utility is able to resolve the service name.
  4. If you get the “TNS-03505: Failed to resolve name” message, then the tnsping utility is not able to resolve the service name and you need to check the SQLNET.ora and TNSNAMES.ora files on your machine.
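
For example, assuming a net service name ORCL is defined in your TNSNAMES.ora (the alias is purely illustrative), the check would look like this:

C:\> tnsping ORCL

A successful run reports the “Used TNSNAMES adapter to resolve the alias” message followed by an OK line with the round-trip time; a failed lookup reports the TNS-03505 error described in step 4.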

Frequently Asked Questions

1. What tasks can be performed using Oracle Net Manager?

2. What is the use of the tnsping utility?

3. How do you enable tracing or logging for a particular listener?

4. What is the significance of the “TNS-03505: Failed to resolve name” message from the tnsping utility?


Oracle 10g Free Training - Synonyms

In this tutorial you will learn about Synonyms, Creating Synonyms, Using Synonyms in DML Statements, Dropping Synonyms, and Viewing Information About Views, Synonyms, and Sequences.

Synonyms

This section describes aspects of managing synonyms, and contains the following topics:

  • About Synonyms
  • Creating Synonyms
  • Using Synonyms in DML Statements
  • Dropping Synonyms

About Synonyms

A synonym is an alias for a schema object. Synonyms can provide a level of security by masking the name and owner of an object and by providing location transparency for remote objects of a distributed database. Also, they are convenient to use and reduce the complexity of SQL statements for database users.

Synonyms allow underlying objects to be renamed or moved; only the synonym needs to be redefined, and applications based on the synonym continue to function without modification.

You can create both public and private synonyms. A public synonym is owned by the special user group named PUBLIC and is accessible to every user in a database. A private synonym is contained in the schema of a specific user and available only to the user and the user's grantees.

Creating Synonyms

To create a private synonym in your own schema, you must have the CREATE SYNONYM privilege. To create a private synonym in another user's schema, you must have the CREATE ANY SYNONYM privilege. To create a public synonym, you must have the CREATE PUBLIC SYNONYM system privilege.

Create a synonym using the CREATE SYNONYM statement. The underlying schema object need not exist, nor do you need privileges to access the object. The following statement creates a public synonym named public_emp on the emp table contained in the schema of jward:

CREATE PUBLIC SYNONYM public_emp FOR jward.emp;


Figure 29. Creating a synonym.

When you create a synonym for a remote procedure or function, you must qualify the remote object with its schema name. Alternatively, you can create a local public synonym on the database where the remote object resides, in which case the database link must be included in all subsequent calls to the procedure or function.

Using Synonyms in DML Statements

You can successfully use any private synonym contained in your schema or any public synonym, assuming that you have the necessary privileges to access the underlying object, either explicitly, from an enabled role, or from PUBLIC. You can also reference any private synonym contained in another schema if you have been granted the necessary object privileges for the private synonym.

You can only reference another user's synonym using the object privileges that you have been granted. For example, if you have the SELECT privilege for the jward.emp_tab synonym, then you can query the jward.emp_tab synonym, but you cannot insert rows using the synonym for jward.emp_tab.

A synonym can be referenced in a DML statement the same way that the underlying object of the synonym can be referenced. For example, if a synonym named emp_tab refers to a table or view, then the following statement is valid:

INSERT INTO Emp_tab (Empno, Ename, Job)
VALUES (Emp_sequence.NEXTVAL, 'SMITH', 'CLERK');


Figure 30. Inserting in a table using a synonym (Emp_tab).

If the synonym named fire_emp refers to a standalone procedure or package procedure, then you could execute it with the command

EXECUTE Fire_emp(7344);

Dropping Synonyms

You can drop any private synonym in your own schema. To drop a private synonym in another user's schema, you must have the DROP ANY SYNONYM system privilege. To drop a public synonym, you must have the DROP PUBLIC SYNONYM system privilege.

Drop a synonym that is no longer required using the DROP SYNONYM statement. To drop a private synonym, omit the PUBLIC keyword. To drop a public synonym, include the PUBLIC keyword.

For example, the following statement drops the private synonym named emp:

DROP SYNONYM emp;


Figure 31. Dropping a synonym.

The following statement drops the public synonym named public_emp:

DROP PUBLIC SYNONYM public_emp;


Figure 32. Dropping a public synonym.

When you drop a synonym, its definition is removed from the data dictionary. All objects that reference a dropped synonym remain. However, they become invalid (not usable).

Viewing Information About Views, Synonyms, and Sequences

The following views display information about views, synonyms, and sequences:

  • DBA_VIEWS, ALL_VIEWS, USER_VIEWS: The DBA view describes all views in the database; the ALL view is restricted to views accessible to the current user; the USER view is restricted to views owned by the current user.
  • DBA_SYNONYMS, ALL_SYNONYMS, USER_SYNONYMS: These views describe synonyms.
  • DBA_SEQUENCES, ALL_SEQUENCES, USER_SEQUENCES: These views describe sequences.
  • DBA_UPDATABLE_COLUMNS, ALL_UPDATABLE_COLUMNS, USER_UPDATABLE_COLUMNS: These views describe all columns in join views that are updatable.

Oracle 10g Training - Triggers And Triggering Events

In this tutorial you will learn about Triggers, Triggering Events, Types of Triggers - DML Triggers, System Triggers, Instead-of Triggers and Syntax for Trigger Creation.

Triggers:

Triggers are named PL/SQL blocks that get executed implicitly when a triggering event occurs. Rather than being executed when called (as is the case with procedures and functions), triggers get fired automatically when certain events occur in the system. The action of executing a trigger is called ‘firing’ the trigger. A trigger fires when a triggering event occurs.

Triggering Events:

Triggering events are events, caused by user actions or system activity, that cause a trigger to be fired. Triggering events can be insertion, deletion, update, and so on. When any of these events occurs, the triggers defined on that event are executed implicitly.

Types of Triggers:

Although triggers can be classified in many ways, there are basically three types of triggers:

DML Triggers:

DML triggers are fired by the execution of a DML statement. The DML triggers can be defined on insert, update or delete operations. Whenever a DML operation occurs on a table, the trigger will execute. Also, the triggers can be created in such a way that they get executed either before or after the DML operation occurs.

System Triggers:

System triggers fire when a system event such as a database startup or shutdown happens. System triggers can also be fired on DDL operations such as create table.

Instead-of Triggers:

Instead-of triggers can be defined only on operations performed on views. When you define an instead-of trigger on an operation on a view, the trigger code is executed instead of the operation that fired it. Triggers of this type can only be row-level.

Syntax for trigger creation:

The syntax for trigger creation is as below:

CREATE [OR REPLACE] TRIGGER trigger_name
{BEFORE/AFTER/INSTEAD OF} triggering_event
[FOR EACH ROW]
[WHEN (trigger_condition)]
trigger_body;

Below is an example of creating a trigger:

Figure -4 Trigger Creation
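
Since the figure is not reproduced here, a minimal sketch of a trigger is shown below; the emp table and the audit_log table with its columns are purely illustrative:

CREATE OR REPLACE TRIGGER emp_salary_audit
AFTER UPDATE OF sal ON emp
FOR EACH ROW
BEGIN
  -- record the old and new salary in an illustrative audit table
  INSERT INTO audit_log (empno, old_sal, new_sal, changed_on)
  VALUES (:OLD.empno, :OLD.sal, :NEW.sal, SYSDATE);
END;
/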

Frequently Asked Questions

1. What are the various PL/SQL objects?

2. What is a package?

3. What is a procedure?

4. What is a function?

5. What is a trigger?

6. What are the differences between procedures and functions?

7. Where are the procedures and functions stored?

8. What are triggering events?

9. What are the various types of triggers?

10. What are instead-of triggers?


Oracle 10g Tutorials - Identifying PL/SQL Objects

In this tutorial you will learn about identifying PL/SQL objects such as packages, procedures, and functions, using SQL*Plus-based examples and illustrations.

Document Summary

This document is a simple guide to understanding PL/SQL objects such as stored procedures, functions, and triggers. It provides insight into creating these objects using SQL*Plus.

Identifying PL/SQL Objects

PL/SQL objects are named blocks of PL/SQL with declarative, executable, and exception-handling sections. They are stored in the data dictionary by name and can be reused. PL/SQL objects include packages, stored procedures, functions, and triggers.

Packages:

A package is a PL/SQL construct that allows related objects to be stored together. Apart from grouping related objects, packages also provide further advantages, such as improved performance and the ability to reference the PL/SQL objects contained within a package from other PL/SQL blocks.

A Package consists of two portions:

a) Package Specification

The package specification consists of information about the contents of the package. Essentially, it contains ‘forward declarations’ of the procedures and functions in the package but does not contain the code for any of these subprograms.

The syntax for creating package specification is as below:

CREATE [OR REPLACE] PACKAGE package_name {IS/AS}
type_definition/
procedure_specification/
function_specification/
variable_declaration/
exception_declaration/
cursor_declaration/
pragma_declaration
END [package_name];

b) Package Body

The package body contains the actual code for the subprograms in the package. The package body is a separate data dictionary object from the package specification. It cannot be compiled unless the package specification has been compiled successfully, and it must contain definitions for all subprograms declared in the package specification.

The syntax for creating a package body is as below:

CREATE [OR REPLACE] PACKAGE BODY package_name {IS/AS}
procedure_definitions
function_definitions
END [package_name];

Below is an example of creating a package:

Figure -1 Package Creation
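
As the figure is not shown here, a small illustrative package is sketched below; the package name emp_pkg, its subprograms, and the emp table are assumptions made for the example:

CREATE OR REPLACE PACKAGE emp_pkg IS
  FUNCTION get_emp_count RETURN NUMBER;
  PROCEDURE raise_salary(p_empno NUMBER, p_amount NUMBER);
END emp_pkg;
/

CREATE OR REPLACE PACKAGE BODY emp_pkg IS
  FUNCTION get_emp_count RETURN NUMBER IS
    v_count NUMBER;
  BEGIN
    SELECT COUNT(*) INTO v_count FROM emp;
    RETURN v_count;
  END get_emp_count;

  PROCEDURE raise_salary(p_empno NUMBER, p_amount NUMBER) IS
  BEGIN
    UPDATE emp SET sal = sal + p_amount WHERE empno = p_empno;
  END raise_salary;
END emp_pkg;
/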

Procedures:

A procedure is a named PL/SQL block with a declarative section, an executable section, and an exception-handling section. Procedures are stored in the data dictionary and are executed whenever they are invoked from another PL/SQL block. Procedures can be called from any PL/SQL block, and arguments can be passed in the call. A procedure call is a PL/SQL statement and cannot be part of an expression.

The syntax for creating a procedure is as below:

CREATE [OR REPLACE] PROCEDURE procedure_name
[(argument_1 [{IN/OUT/IN OUT}] data_type,
  ...
  argument_n [{IN/OUT/IN OUT}] data_type)] {IS/AS}
/*declarative section*/
BEGIN
/*executable section*/
EXCEPTION
/*exception section*/
END [procedure_name];

Below is an example of creating a procedure:

Figure -2 Procedure Creation
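
In place of the missing figure, here is a minimal sketch of a procedure that shows all three sections; the emp table and all names are illustrative:

CREATE OR REPLACE PROCEDURE give_raise (
  p_empno  IN NUMBER,
  p_amount IN NUMBER
) IS
  v_missing EXCEPTION;   -- declarative section
BEGIN
  -- executable section
  UPDATE emp SET sal = sal + p_amount WHERE empno = p_empno;
  IF SQL%ROWCOUNT = 0 THEN
    RAISE v_missing;
  END IF;
EXCEPTION
  -- exception section
  WHEN v_missing THEN
    DBMS_OUTPUT.PUT_LINE('No employee found for ' || p_empno);
END give_raise;
/

It could then be invoked from another PL/SQL block or from SQL*Plus with EXECUTE give_raise(7369, 500);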

Functions:

A function, just like a procedure, is a named PL/SQL block that has an executable section and an exception-handling section. Functions, too, are stored in the data dictionary and can be called from other PL/SQL blocks when required. Functions can also have parameters, and arguments can be passed when calling them.

There are some differences between procedures and functions, though. A function can only be called as part of an expression (as opposed to a procedure call, which is a PL/SQL statement by itself). The function body contains a RETURN statement that returns the result of the function to the calling expression, so the function has a return type that must be declared in the function header.

The syntax for creating a function is as below:

CREATE [OR REPLACE] FUNCTION function_name
[(argument_1 [{IN/OUT/IN OUT}] data_type,
  ...
  argument_n [{IN/OUT/IN OUT}] data_type)]
RETURN return_type {IS/AS}
/*declarative section*/
BEGIN
/*executable section*/
RETURN expression;
EXCEPTION
/*exception section*/
END [function_name];

Below is an example of creating a function:

Figure -3 Function Creation
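
Since the figure is missing, a short illustrative function is sketched below (the emp table is an assumption):

CREATE OR REPLACE FUNCTION get_salary (p_empno IN NUMBER)
RETURN NUMBER IS
  v_sal NUMBER;
BEGIN
  SELECT sal INTO v_sal FROM emp WHERE empno = p_empno;
  RETURN v_sal;
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    RETURN NULL;
END get_salary;
/

Because it is a function, it is called as part of an expression, for example SELECT get_salary(7369) FROM dual;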


Oracle 10g Tutorials - SQL*Loader Discarded and Rejected Records

In this tutorial you will learn about discarded and rejected records: the bad file, SQL*Loader rejects, Oracle database rejects, the discard file, and the log file and logging information.

Discarded and Rejected Records

Records read from the input file might not be inserted into the database. Such records are placed in either a bad file or a discard file.

The Bad File

The bad file contains records that were rejected, either by SQL*Loader or by the Oracle database. Some of the possible reasons for rejection are discussed in the next sections.

SQL*Loader Rejects

Datafile records are rejected by SQL*Loader when the input format is invalid. For example, if the second enclosure delimiter is missing, or if a delimited field exceeds its maximum length, SQL*Loader rejects the record. Rejected records are placed in the bad file.

Oracle Database Rejects

After a datafile record is accepted for processing by SQL*Loader, it is sent to the Oracle database for insertion into a table as a row. If the Oracle database determines that the row is valid, then the row is inserted into the table. If the row is determined to be invalid, then the record is rejected and SQL*Loader puts it in the bad file. The row may be invalid, for example, because a key is not unique, because a required field is null, or because the field contains invalid data for the Oracle datatype.

The Discard File

As SQL*Loader executes, it may create a file called the discard file. This file is created only when it is needed, and only if you have specified that a discard file should be enabled. The discard file contains records that were filtered out of the load because they did not match any record-selection criteria specified in the control file.

The discard file therefore contains records that were not inserted into any table in the database. You can specify the maximum number of such records that the discard file can accept. Data written to any database table is not written to the discard file.

Log File and Logging Information

When SQL*Loader begins execution, it creates a log file. If it cannot create a log file, execution terminates. The log file contains a detailed summary of the load, including a description of any errors that occurred during the load.

The other loading parameters, such as the log file, the bad file, the discard file, and the maximum number of errors, are specified in the screen below.

The screen below is the final screen to initiate the data load process using SQL*Loader. The data load can be scheduled to run immediately or at a specific time later.
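
If you prefer the command line to the OEM screens, the same load could be started with an sqlldr command along the lines of the sketch below; the user name and file names are illustrative:

sqlldr scott/tiger CONTROL=emp.ctl LOG=emp.log BAD=emp.bad DISCARD=emp.dsc DISCARDMAX=100 ERRORS=50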


Oracle 10g Free Training - SQL*Loader - Input Data and Datafiles

In this tutorial you will learn about SQL*Loader - Input Data and Datafiles, Fixed Record Format, Variable Record Format and Stream Record Format.

SQL*Loader - Input Data and Datafiles

SQL*Loader reads data from one or more files (or operating system equivalents of files) specified in the control file. From SQL*Loader's perspective, the data in the datafile is organized as records. A particular datafile can be in fixed record format, variable record format, or stream record format. The record format can be specified in the control file with the INFILE parameter. If no record format is specified, the default is stream record format.

Fixed Record Format

A file is in fixed record format when all records in a datafile are the same byte length. Although this format is the least flexible, it results in better performance than variable or stream format. Fixed format is also simple to specify. For example:

INFILE datafile_name "fix n"

Variable Record Format

A file is in variable record format when the length of each record in a character field is included at the beginning of each record in the datafile. This format provides some added flexibility over the fixed record format and a performance advantage over the stream record format. For example, you can specify a datafile that is to be interpreted as being in variable record format as follows:

INFILE "datafile_name" "var n"

Stream Record Format

A file is in stream record format when the records are not specified by size; instead SQL*Loader forms records by scanning for the record terminator. Stream record format is the most flexible format, but there can be a negative effect on performance. The specification of a datafile to be interpreted as being in stream record format looks similar to the following:

INFILE datafile_name ["str terminator_string"]

The terminator_string is specified as either 'char_string' or X'hex_string' where:

  • 'char_string' is a string of characters enclosed in single or double quotation marks
  • X'hex_string' is a byte string in hexadecimal format

When the terminator_string contains special (nonprintable) characters, it should be specified as an X'hex_string'. However, some nonprintable characters can be specified in a 'char_string' by using a backslash escape. For example:

  • \n indicates a line feed
  • \t indicates a horizontal tab
  • \f indicates a form feed
  • \v indicates a vertical tab
  • \r indicates a carriage return

If the character set specified with the NLS_LANG parameter for your session is different from the character set of the datafile, character strings are converted to the character set of the datafile. This is done before SQL*Loader checks for the default record terminator.

Hexadecimal strings are assumed to be in the character set of the datafile, so no conversion is performed.

On UNIX-based platforms, if no terminator_string is specified, SQL*Loader defaults to the line feed character, \n.

On Windows NT, if no terminator_string is specified, then SQL*Loader uses either \n or \r\n as the record terminator, depending on which one it finds first in the datafile. This means that if you know that one or more records in your datafile has \n embedded in a field, but you want \r\n to be used as the record terminator, you must specify it.
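
To make the three record formats concrete, the INFILE clauses below show one possible specification of each; the file names, the 80-byte fixed length, the 5-byte length field, and the CR/LF terminator are all illustrative values:

INFILE 'emp_fixed.dat' "fix 80"
INFILE 'emp_var.dat' "var 5"
INFILE 'emp_stream.dat' "str '\r\n'"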

The screen below asks for the datafile details if they are not already specified in the control file.

The screen below asks for the load method options when loading data using SQL*Loader.


Oracle 10g - SQL*Loader

In this tutorial we will learn about SQL*Loader, how to make use of SQL*Loader, SQL*Loader parameters, and the SQL*Loader control file, along with syntax considerations.

SQL*Loader

SQL*Loader loads data from external files into tables of an Oracle database. It has a powerful data parsing engine that puts little limitation on the format of the data in the datafile. You can use SQL*Loader to do the following:

  • Load data across a network. This means that you can run the SQL*Loader client on a different system from the one that is running the SQL*Loader server.
  • Load data from multiple datafiles during the same load session.
  • Load data into multiple tables during the same load session.
  • Specify the character set of the data.
  • Selectively load data (you can load records based on the records' values).
  • Manipulate the data before loading it, using SQL functions.
  • Generate unique sequential key values in specified columns.
  • Use the operating system's file system to access the datafiles.
  • Load data from disk, tape, or named pipe.
  • Generate sophisticated error reports, which greatly aid troubleshooting.
  • Load arbitrarily complex object-relational data.
  • Use secondary datafiles for loading LOBs and collections.
  • Use either conventional or direct path loading. While conventional path loading is very flexible, direct path loading provides superior loading performance.

A typical SQL*Loader session takes as input a control file, which controls the behavior of SQL*Loader, and one or more datafiles. The output of SQL*Loader is an Oracle database (where the data is loaded), a log file, a bad file, and potentially, a discard file.

The figure below is the OEM screen that guides you to SQL*Loader. Click on Load Data from File to invoke data loading using SQL*Loader.

SQL*Loader Parameters

SQL*Loader is invoked when you specify the sqlldr command and, optionally, parameters that establish session characteristics.

In situations where you always use the same parameters for which the values seldom change, it can be more efficient to specify parameters using the following methods, rather than on the command line:

  • Parameters can be grouped together in a parameter file. You could then specify the name of the parameter file on the command line using the PARFILE parameter.
  • Certain parameters can also be specified within the SQL*Loader control file by using the OPTIONS clause.

Parameters specified on the command line override any parameter values specified in a parameter file or OPTIONS clause.
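
As a rough illustration (all names are hypothetical), a parameter file named emp.par might contain:

userid=scott/tiger
control=emp.ctl
log=emp.log
bad=emp.bad
errors=25

SQL*Loader would then be invoked with:

sqlldr PARFILE=emp.par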

SQL*Loader Control File

The control file is a text file written in a language that SQL*Loader understands. The control file tells SQL*Loader where to find the data, how to parse and interpret the data, where to insert the data, and more.

Although not precisely defined, a control file can be said to have three sections.

The first section contains sessionwide information, for example:

  • Global options such as bindsize, rows, records to skip, and so on
  • INFILE clauses to specify where the input data is located
  • Data to be loaded

The second section consists of one or more INTO TABLE blocks. Each of these blocks contains information about the table into which the data is to be loaded, such as the table name and the columns of the table.

The third section is optional and, if present, contains input data.
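
A hedged sketch of a small control file showing all three sections is given below; the table, columns, and data values are purely illustrative:

-- first section: session-wide options and the data location
OPTIONS (ERRORS=10)
LOAD DATA
INFILE *
-- second section: the target table and its columns
INTO TABLE emp
FIELDS TERMINATED BY ','
(empno, ename, job)
-- third section: the input data itself
BEGINDATA
7369,SMITH,CLERK
7499,ALLEN,SALESMAN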

Some control file syntax considerations to keep in mind are:

  • The syntax is free-format (statements can extend over multiple lines).
  • It is case insensitive; however, strings enclosed in single or double quotation marks are taken literally, including case.
  • In control file syntax, comments extend from the two hyphens (--) that mark the beginning of the comment to the end of the line. The optional third section of the control file is interpreted as data rather than as control file syntax; consequently, comments in this section are not supported.
  • The keywords CONSTANT and ZONE have special meaning to SQL*Loader and are therefore reserved. To avoid potential conflicts, Oracle recommends that you do not use either CONSTANT or ZONE as a name for any tables or columns.

The control file for SQL*Loader is specified in the screen below.


Oracle 10g - Using Data Pump Export

In this tutorial we shall learn about using Data Pump Export - Invoking Data Pump Export, Data Pump Export Interfaces, Data Pump Export Modes like Full Export Mode, Schema Mode, Table Mode, Tablespace Mode and Transportable Tablespace Mode.

Using Data Pump Export

Data Pump Export (invoked with the expdp command) is a new utility as of Oracle Database 10g. Although its functionality and its parameters are similar to those of the original Export utility (exp), they are completely separate utilities and their files are not compatible.

Data Pump Export (hereinafter referred to as Export for ease of reading) is a utility for unloading data and metadata into a set of operating system files called a dump file set. The dump file set can be imported only by the Data Pump Import utility. The dump file set can be imported on the same system or it can be moved to another system and loaded there.

The dump file set is made up of one or more disk files that contain table data, database object metadata, and control information. The files are written in a proprietary, binary format. During an import operation, the Data Pump Import utility uses these files to locate each database object in the dump file set.

Because the dump files are written by the server, rather than by the client, the database administrator (DBA) must create directory objects.

Data Pump Export enables you to specify that a job should move a subset of the data and metadata, as determined by the export mode. This is done using data filters and metadata filters, which are specified through Export parameters.

Invoking Data Pump Export

The Data Pump Export utility is invoked using the expdp command. The characteristics of the export operation are determined by the Export parameters you specify. These parameters can be specified either on the command line or in a parameter file.
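
For example, a command-line invocation might look like the following sketch, which assumes a directory object named DPUMP_DIR has already been created and that an HR schema exists (all names are illustrative):

expdp hr/hr SCHEMAS=hr DIRECTORY=DPUMP_DIR DUMPFILE=hr_schema.dmp LOGFILE=hr_export.log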

The figure below shows the OEM screen to initiate the export. Click on Export to Files to initiate the export process.

Data Pump Export Interfaces

You can interact with Data Pump Export by using a command line, a parameter file, or an interactive-command mode.

  • Command-Line Interface: Enables you to specify most of the Export parameters directly on the command line.
  • Parameter File Interface: Enables you to specify command-line parameters in a parameter file. The only exception is the PARFILE parameter, because parameter files cannot be nested.
  • Interactive-Command Interface: Stops logging to the terminal and displays the Export prompt, from which you can enter various commands, some of which are specific to interactive-command mode. This mode is enabled by pressing Ctrl+C during an export operation started with the command-line interface or the parameter file interface. Interactive-command mode is also enabled when you attach to an executing or stopped job.

Data Pump Export Modes

One of the most significant characteristics of an export operation is its mode, because the mode largely determines what is exported. Export provides different modes for unloading different portions of the database. The mode is specified on the command line, using the appropriate parameter. The available modes are as follows:

Full Export Mode

A full export is specified using the FULL parameter. In a full database export, the entire database is unloaded. This mode requires that you have the EXP_FULL_DATABASE role.

Schema Mode

A schema export is specified using the SCHEMAS parameter. This is the default export mode. If you have the EXP_FULL_DATABASE role, then you can specify a list of schemas and optionally include the schema definitions themselves, as well as system privilege grants to those schemas. If you do not have the EXP_FULL_DATABASE role, you can export only your own schema.

Cross-schema references are not exported unless the referenced schema is also specified in the list of schemas to be exported. For example, a trigger defined on a table within one of the specified schemas, but that resides in a schema not explicitly specified, is not exported. This is also true for external type definitions upon which tables in the specified schemas depend. In such a case, it is expected that the type definitions already exist in the target instance at import time.

Table Mode

A table export is specified using the TABLES parameter. In table mode, only a specified set of tables, partitions, and their dependent objects are unloaded. You must have the EXP_FULL_DATABASE role to specify tables that are not in your own schema, and only one schema can be specified. Note that type definitions for columns are not exported in table mode. It is expected that the type definitions already exist in the target instance at import time. Also, as in schema exports, cross-schema references are not exported.

Tablespace Mode

A tablespace export is specified using the TABLESPACES parameter. In tablespace mode, only the tables contained in a specified set of tablespaces are unloaded. If a table is unloaded, its dependent objects are also unloaded. Both object metadata and data are unloaded. In tablespace mode, if any part of a table resides in the specified set, then that table and all of its dependent objects are exported. You must have the EXP_FULL_DATABASE role to use tablespace mode.

Transportable Tablespace Mode

A transportable tablespace export is specified using the TRANSPORT_TABLESPACES parameter. In transportable tablespace mode, only the metadata for the tables (and their dependent objects) within a specified set of tablespaces are unloaded. This allows the tablespace datafiles to then be copied to another Oracle database and incorporated using transportable tablespace import. This mode requires that you have the EXP_FULL_DATABASE role.

Unlike tablespace mode, transportable tablespace mode requires that the specified tables be completely self-contained. That is, the components of all objects in the set must also be in the set.

Transportable tablespace exports cannot be restarted once stopped. Also, they cannot have a degree of parallelism greater than 1.

The figure below shows the screen for the specifications of the type of the export.

The figure below asks for the log file name and the estimated disk space required for the export function.

The figure below shows the OEM screen which asks for the directory and the file name for the export data.

The figure below shows the final screen in executing the Export command. You can schedule the export to run immediately or to run at a specific time.


Oracle 10g Free Training - Using Data Pump Import

In this tutorial we shall learn about using Data Pump Import - Invoking Data Pump Import, Data Pump Import Interfaces, Data Pump Import Modes like Full Import Mode, Schema Mode, Table Mode, Tablespace Mode and Transportable Tablespace Mode.

Using Data Pump Import

Oracle export and import utilities enable you to move existing data in Oracle format between one Oracle Database and another. For example, export files can archive database data or move data among different databases that run on the same or different operating systems.

Data Pump Import (invoked with the impdp command) is a new utility as of Oracle Database 10g. Although its functionality and its parameters are similar to those of the original Import utility (imp), they are completely separate utilities and their files are not compatible.

Data Pump Import (hereinafter referred to as Import for ease of reading) is a utility for loading an export dump file set into a target system. The dump file set is made up of one or more disk files that contain table data, database object metadata, and control information. The files are written in a proprietary, binary format. During an import operation, the Data Pump Import utility uses these files to locate each database object in the dump file set.

Import can also be used to load a target database directly from a source database with no intervening dump files. This allows export and import operations to run concurrently, minimizing total elapsed time. This is known as a network import.

Data Pump Import enables you to specify whether a job should move a subset of the data and metadata from the dump file set or the source database (in the case of a network import), as determined by the import mode. This is done using data filters and metadata filters, which are specified through Import parameters.

Invoking Data Pump Import

The Data Pump Import utility is invoked using the impdp command from the command prompt or the OEM. The characteristics of the import operation are determined by the import parameters you specify. These parameters can be specified either on the command line or in a parameter file.
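
As a sketch, assuming a directory object DPUMP_DIR and a dump file produced by a previous Data Pump export (all names are illustrative), a command-line import might look like:

impdp hr/hr SCHEMAS=hr DIRECTORY=DPUMP_DIR DUMPFILE=hr_schema.dmp LOGFILE=hr_import.log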

The figure below shows the OEM screen with which the Import/Export can be performed.

Figure. Import From Files

Data Pump Import Interfaces

You can interact with Data Pump Import by using a command line, a parameter file, or an interactive-command mode.

  • Command-Line Interface: Enables you to specify the Import parameters directly on the command line.
  • Parameter File Interface: Enables you to specify command-line parameters in a parameter file. The only exception is the PARFILE parameter, because parameter files cannot be nested.
  • Interactive-Command Interface: Stops logging to the terminal and displays the Import prompt, from which you can enter various commands, some of which are specific to interactive-command mode. This mode is enabled by pressing Ctrl+C during an import operation started with the command-line interface or the parameter file interface. Interactive-command mode is also enabled when you attach to an executing or stopped job.

Data Pump Import Modes

One of the most significant characteristics of an import operation is its mode, because the mode largely determines what is imported. The specified mode applies to the source of the operation, either a dump file set or another database if the NETWORK_LINK parameter is specified.

When the source of the import operation is a dump file set, specifying a mode is optional. If no mode is specified, then Import attempts to load the entire dump file set in the mode in which the export operation was run.

The mode is specified on the command line, using the appropriate parameter. The available modes are as follows:

Full Import Mode

A full import is specified using the FULL parameter. In full import mode, the entire content of the source (dump file set or another database) is loaded into the target database. This is the default for file-based imports. You must have the IMP_FULL_DATABASE role if the source is another database or if the export operation that generated the dump file set required the EXP_FULL_DATABASE role.

Schema Mode

A schema import is specified using the SCHEMAS parameter. In a schema import, only objects owned by the current user are loaded. The source can be a full or schema-mode export dump file set or another database. If you have the IMP_FULL_DATABASE role, then a single schema other than your own schema, or a list of schemas can be specified and the schemas themselves (including system privilege grants) are created in the database in addition to the objects contained within those schemas.

Cross-schema references are not imported. For example, a trigger defined on a table within one of the specified schemas, but residing in a schema not explicitly specified, is not imported.

Table Mode

A table-mode import is specified using the TABLES parameter. In table mode, only the specified set of tables, partitions, and their dependent objects are loaded. The source can be a full, schema, tablespace, or table-mode export dump file set or another database. You must have the IMP_FULL_DATABASE role to specify tables that are not in your own schema.

Tablespace Mode

A tablespace-mode import is specified using the TABLESPACES parameter. In tablespace mode, all objects contained within the specified set of tablespaces are loaded. The source can be a full, schema, tablespace, or table-mode export dump file set or another database.

Transportable Tablespace Mode

A transportable tablespace import is specified using the TRANSPORT_TABLESPACES parameter. In transportable tablespace mode, the metadata from a transportable tablespace export dump file set or from another database is loaded. The datafiles specified by the TRANSPORT_DATAFILES parameter must be made available from the source system for use in the target database, typically by copying them over to the target system.

This mode requires the IMP_FULL_DATABASE role.

The figure below shows the directory name and the dump file name for the import file.

Figure. Parameters for the Import.


Oracle 10g Free Training - Data Pump Components

In this tutorial we will learn about the Data Pump components, the new features in Data Pump Export and Import, and how the original Export and Import utilities differ from Data Pump Export and Data Pump Import.

Data Pump Components

Oracle Data Pump is made up of three distinct parts:

  • The command-line clients, expdp and impdp
  • The DBMS_DATAPUMP PL/SQL package (also known as the Data Pump API)
  • The DBMS_METADATA PL/SQL package (also known as the Metadata API)

The Data Pump clients, expdp and impdp, invoke the Data Pump Export utility and Data Pump Import utility, respectively. They provide a user interface that closely resembles the original export (exp) and import (imp) utilities.

The expdp and impdp clients use the procedures provided in the DBMS_DATAPUMP PL/SQL package to execute export and import commands, using the parameters entered at the command-line. These parameters enable the exporting and importing of data and metadata for a complete database or subsets of a database.
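
The same API can also be called directly from PL/SQL. The block below is only a rough sketch, assuming a directory object named DPUMP_DIR already exists; it relies on the documented DBMS_DATAPUMP procedures OPEN, ADD_FILE, START_JOB, and DETACH and is not a complete, production-ready job definition:

DECLARE
  h NUMBER;
BEGIN
  -- create a schema-mode export job and get its handle
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA');
  -- attach a dump file in an existing directory object (illustrative names)
  DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'hr_schema.dmp', directory => 'DPUMP_DIR');
  -- start the job and detach; the job continues to run on the server
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.DETACH(h);
END;
/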

New Features In Data Pump Export and Import

The new Data Pump Export and Import utilities (invoked with the expdp and impdp commands, respectively) have a similar look and feel to the original Export (exp) and Import (imp) utilities, but they are completely separate. Dump files generated by the new Data Pump Export utility are not compatible with dump files generated by the original Export utility. Therefore, files generated by the original Export (exp) utility cannot be imported with the Data Pump Import (impdp) utility.

Oracle recommends that you use the new Data Pump Export and Import utilities because they support all Oracle Database 10g features, except for XML schemas. Original Export and Import support the full set of Oracle database release 9.2 features. Also, the design of Data Pump Export and Import results in greatly enhanced data movement performance over the original Export and Import utilities.

The following are the major new features that provide this increased performance, as well as enhanced ease of use:

  • The ability to specify the maximum number of threads of active execution operating on behalf of the Data Pump job. This enables you to adjust resource consumption versus elapsed time. This feature is available only in the Enterprise Edition of Oracle Database 10g.
  • The ability to restart Data Pump jobs.
  • The ability to detach from and reattach to long-running jobs without affecting the job itself. This allows DBAs and other operations personnel to monitor jobs from multiple locations. The Data Pump Export and Import utilities can be attached to only one job at a time; however, you can have multiple clients or jobs running at one time. (If you are using the Data Pump API, the restriction on attaching to only one job at a time does not apply.) You can also have multiple clients attached to the same job.
  • Support for export and import operations over the network, in which the source of each operation is a remote instance.
  • The ability, in an import job, to change the name of the source datafile to a different name in all DDL statements where the source datafile is referenced.
  • Enhanced support for remapping tablespaces during an import operation.
  • Support for filtering the metadata that is exported and imported, based upon objects and object types.
  • Support for an interactive-command mode that allows monitoring of and interaction with ongoing jobs.
  • The ability to estimate how much space an export job would consume, without actually performing the export.
  • The ability to specify the version of database objects to be moved. In export jobs, VERSION applies to the version of the database objects to be exported. In import jobs, VERSION applies only to operations over the network, that is, to the version of database objects to be extracted from the source database.

Most Data Pump export and import operations occur on the Oracle database server. (This contrasts with original export and import, which were primarily client-based.)

Original Export and Import vs. Data Pump Export and Import

If you are familiar with the original Export (exp) and Import (imp) utilities, it is important to understand that many of the concepts behind them do not apply to Data Pump Export (expdp) and Data Pump Import (impdp). In particular:

  • Data Pump Export and Import operate on a group of files called a dump file set rather than on a single sequential dump file.
  • Data Pump Export and Import access files on the server rather than on the client. This results in improved performance. It also means that directory objects are required when you specify file locations.
  • Data Pump Export and Import use parallel execution rather than a single stream of execution, for improved performance. This means that the order of data within dump file sets is more variable.
  • Data Pump Export and Import represent metadata in the dump file set as XML documents rather than as DDL commands. This provides improved flexibility for transforming the metadata at import time.
  • Data Pump Export and Import are self-tuning utilities. Tuning parameters that were used in original Export and Import, such as BUFFER and RECORDLENGTH, are neither required nor supported by Data Pump Export and Import.
  • At import time there is no option to perform interim commits during the restoration of a partition. This was provided by the COMMIT parameter in original Import.
  • There is no option to merge extents when you re-create tables. In original Import, this was provided by the COMPRESS parameter. Instead, extents are reallocated according to storage parameters for the target table.
  • Sequential media, such as tapes and pipes, are not supported.
  • When you are importing data into an existing table using either APPEND or TRUNCATE, if any row violates an active constraint, the load is discontinued and no data is loaded. This is different from original Import, which logs any rows that are in violation and continues with the load.

Oracle 10g Free Training - Manipulating data through SQL

This tutorial will teach users how to manipulate data through SQL using four types of DML (Data Manipulation Language) statements: the INSERT statement, the UPDATE statement, the DELETE statement, and the MERGE statement.

Manipulating data through SQL

Insert statements

There are two ways of inserting data into an Oracle 10g database. You can insert rows directly through SQL*Plus by entering the commands, or you can use Enterprise Manager (through Internet Explorer) to execute the insert statement.

a) Using SQL*Plus to insert data into a table.

  1. Log in to 10g SQL*Plus using your user ID and password.
  2. Run the insert statement as shown in the SQL window in figure 1.

Figure 1: Inserting data into a table through SQL*Plus
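
Since the screenshot is not reproduced here, a representative INSERT statement is sketched below; the employees table and its columns are purely illustrative:

INSERT INTO employees (employee_id, first_name, salary)
VALUES (2011, 'JOHN', 4500);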

Update Statements

a) Using SQL*Plus to update data in a table

  1. Log in to 10g SQL*Plus using your user ID and password.
  2. Run the update statement in the SQL*Plus window as shown in figure 3.
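
A representative UPDATE statement, again with illustrative names, might be:

UPDATE employees
SET salary = salary * 1.10
WHERE employee_id = 2011;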

Delete Statements

a) Using SQL*Plus to delete data from a table

  1. Log in to 10g SQL*Plus using your user ID and password.
  2. Run the delete statement in the SQL*Plus window as shown in figure 3.
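
A representative DELETE statement might be:

DELETE FROM employees
WHERE employee_id = 2011;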

Merge Statements

a) Using SQL*Plus to merge data in a table

  1. Log in to 10g SQL*Plus using your user ID and password.
  2. Run the merge statement in the SQL*Plus window as shown in figure 3.
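
MERGE combines insert and update logic in a single statement. A sketch, assuming illustrative employees and new_hires tables with matching columns:

MERGE INTO employees e
USING new_hires n
ON (e.employee_id = n.employee_id)
WHEN MATCHED THEN
  UPDATE SET e.salary = n.salary
WHEN NOT MATCHED THEN
  INSERT (employee_id, first_name, salary)
  VALUES (n.employee_id, n.first_name, n.salary);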

Oracle 10g Free Training - Sequences

In this tutorial you will learn about Sequences, Creating Sequences, Altering Sequences, Using Sequences, and Dropping Sequences. You will also learn about:

  • Referencing a Sequence, Generating Sequence Numbers with NEXTVAL and Using Sequence Numbers with CURRVAL.
  • Caching Sequence Numbers, the Number of Entries in the Sequence Cache and the Number of Values in Each Sequence Cache Entry.

Sequences

This section describes aspects of managing sequences, and contains the following topics:

  • About Sequences
  • Creating Sequences
  • Altering Sequences
  • Using Sequences
  • Dropping Sequences

About Sequences

Sequences are database objects from which multiple users can generate unique integers. The sequence generator generates sequential numbers, which can help to generate unique primary keys automatically, and to coordinate keys across multiple rows or tables.

Without sequences, sequential values can only be produced programmatically. A new primary key value can be obtained by selecting the most recently produced value and incrementing it. This method requires a lock during the transaction and causes multiple users to wait for the next value of the primary key; this waiting is known as serialization. If developers have such constructs in applications, then you should encourage the developers to replace them with access to sequences. Sequences eliminate serialization and improve the concurrency of an application.

Creating Sequences

To create a sequence in your schema, you must have the CREATE SEQUENCE system privilege. To create a sequence in another user's schema, you must have the CREATE ANY SEQUENCE privilege.

Create a sequence using the CREATE SEQUENCE statement. For example, the following statement creates a sequence used to generate employee numbers for the empno column of the emp table:

CREATE SEQUENCE emp_sequence
INCREMENT BY 1
START WITH 1
NOMAXVALUE
NOCYCLE
CACHE 10;


Figure 20. Creating a sequence.

Notice that several parameters can be specified to control the function of sequences. You can use these parameters to indicate whether the sequence is ascending or descending, the starting point of the sequence, the minimum and maximum values, and the interval between sequence values. The NOCYCLE option indicates that the sequence cannot generate more values after reaching its maximum or minimum value.

The CACHE clause preallocates a set of sequence numbers and keeps them in memory so that sequence numbers can be accessed faster. When the last of the sequence numbers in the cache has been used, the database reads another set of numbers into the cache.

The database might skip sequence numbers if you choose to cache a set of sequence numbers. For example, when an instance abnormally shuts down (for example, when an instance failure occurs or a SHUTDOWN ABORT statement is issued), sequence numbers that have been cached but not used are lost. Also, sequence numbers that have been used but not saved are lost as well. The database might also skip cached sequence numbers after an export and import.

Altering Sequences

To alter a sequence, your schema must contain the sequence, or you must have the ALTER ANY SEQUENCE system privilege. You can alter a sequence to change any of the parameters that define how it generates sequence numbers except the sequence starting number. To change the starting point of a sequence, drop the sequence and then re-create it.

Alter a sequence using the ALTER SEQUENCE statement. For example, the following statement alters the emp_sequence:

ALTER SEQUENCE emp_sequence
INCREMENT BY 10
MAXVALUE 10000
CYCLE
CACHE 20;


Figure 21. Altering a sequence.

Using Sequences

To use a sequence, your schema must contain the sequence or you must have been granted the SELECT object privilege for another user's sequence. Once a sequence is defined, it can be accessed and incremented by multiple users (who have SELECT object privilege for the schema containing the sequence) with no waiting. The database does not wait for a transaction that has incremented a sequence to complete before that sequence can be incremented again.

The examples outlined in the following sections show how sequences can be used in master/detail table relationships. Assume an order entry system is partially comprised of two tables, orders_tab (master table) and line_items_tab (detail table), that hold information about customer orders. A sequence named order_seq is defined by the following statement:

CREATE SEQUENCE Order_seq
START WITH 1
INCREMENT BY 1
NOMAXVALUE
NOCYCLE
CACHE 20;


Figure 22. Creating the “ORDER_SEQ” sequence.

Dropping Sequences

You can drop any sequence in your schema. To drop a sequence in another schema, you must have the DROP ANY SEQUENCE system privilege. If a sequence is no longer required, you can drop the sequence using the DROP SEQUENCE statement. For example, the following statement drops the order_seq sequence:

DROP SEQUENCE order_seq;


Figure 28. Dropping the sequence.

When a sequence is dropped, its definition is removed from the data dictionary. Any synonyms for the sequence remain, but return an error when referenced.

Referencing a Sequence

A sequence is referenced in SQL statements with the NEXTVAL and CURRVAL pseudocolumns; each new sequence number is generated by a reference to the sequence pseudocolumn NEXTVAL, while the current sequence number can be repeatedly referenced using the pseudo-column CURRVAL.

NEXTVAL and CURRVAL are not reserved words or keywords and can be used as pseudocolumn names in SQL statements such as SELECT, INSERT, or UPDATE.

Generating Sequence Numbers with NEXTVAL

To generate and use a sequence number, reference seq_name.NEXTVAL. For example, assume a customer places an order. The sequence number can be referenced in a values list. For example:

INSERT INTO Orders_tab (Orderno, Custno)
VALUES (Order_seq.NEXTVAL, 1032);


Figure 23. Inserting values in the “Orders_tab” using the “Order_seq” sequence.

Or, the sequence number can be referenced in the SET clause of an UPDATE statement. For example:

UPDATE Orders_tab
SET Orderno = Order_seq.NEXTVAL
WHERE Orderno = 10112;


Figure 24. Updating the “Orders_tab” table using the “Order_seq” sequence.

The sequence number can also be referenced in the outermost SELECT of a query or subquery. For example:

SELECT Order_seq.NEXTVAL FROM dual;


Figure 25. Selecting the next value of the sequence.

As defined, the first reference to order_seq.NEXTVAL returns the value 1. Each subsequent statement that references order_seq.NEXTVAL generates the next sequence number (2, 3, 4,. . .). The pseudo-column NEXTVAL can be used to generate as many new sequence numbers as necessary. However, only a single sequence number can be generated for each row. In other words, if NEXTVAL is referenced more than once in a single statement, then the first reference generates the next number, and all subsequent references in the statement return the same number.

Once a sequence number is generated, the sequence number is available only to the session that generated the number. Independent of transactions committing or rolling back, other users referencing order_seq.NEXTVAL obtain unique values. If two users are accessing the same sequence concurrently, then the sequence numbers each user receives might have gaps because sequence numbers are also being generated by the other user.

Using Sequence Numbers with CURRVAL

To use or refer to the current sequence value of your session, reference seq_name.CURRVAL. CURRVAL can only be used if seq_name.NEXTVAL has been referenced in the current user session (in the current or a previous transaction). CURRVAL can be referenced as many times as necessary, including multiple times within the same statement. The next sequence number is not generated until NEXTVAL is referenced. Continuing with the previous example, you would finish placing the customer's order by inserting the line items for the order:

INSERT INTO Line_items_tab (Orderno, Partno, Quantity)
VALUES (Order_seq.CURRVAL, 20321, 3);

INSERT INTO Line_items_tab (Orderno, Partno, Quantity)
VALUES (Order_seq.CURRVAL, 29374, 1);


Figure 26. Using CURRVAL to insert data in the table.

Assuming the INSERT statement given in the previous section generated a new sequence number of 347, both rows inserted by the statements in this section have an order number of 347.

Uses and Restrictions of NEXTVAL and CURRVAL

CURRVAL and NEXTVAL can be used in the following places:

  • VALUES clause of INSERT statements
  • The SELECT list of a SELECT statement
  • The SET clause of an UPDATE statement

CURRVAL and NEXTVAL cannot be used in these places:

  • A subquery
  • A view query or materialized view query
  • A SELECT statement with the DISTINCT operator
  • A SELECT statement with a GROUP BY or ORDER BY clause
  • A SELECT statement that is combined with another SELECT statement with the UNION, INTERSECT, or MINUS set operator
  • The WHERE clause of a SELECT statement
  • DEFAULT value of a column in a CREATE TABLE or ALTER TABLE statement
  • The condition of a CHECK constraint

Caching Sequence Numbers

Sequence numbers can be kept in the sequence cache in the System Global Area (SGA). Sequence numbers can be accessed more quickly in the sequence cache than they can be read from disk.

The sequence cache consists of entries. Each entry can hold many sequence numbers for a single sequence.

Follow these guidelines for fast access to all sequence numbers:

  • Be sure the sequence cache can hold all the sequences used concurrently by your applications.
  • Increase the number of values for each sequence held in the sequence cache.

The Number of Entries in the Sequence Cache

When an application accesses a sequence in the sequence cache, the sequence numbers are read quickly. However, if an application accesses a sequence that is not in the cache, then the sequence must be read from disk to the cache before the sequence numbers are used.

If your applications use many sequences concurrently, then your sequence cache might not be large enough to hold all the sequences. In this case, access to sequence numbers might often require disk reads. For fast access to all sequences, be sure your cache has enough entries to hold all the sequences used concurrently by your applications.

The Number of Values in Each Sequence Cache Entry

When a sequence is read into the sequence cache, sequence values are generated and stored in a cache entry. These values can then be accessed quickly. The number of sequence values stored in the cache is determined by the CACHE parameter in the CREATE SEQUENCE statement. The default value for this parameter is 20.

This CREATE SEQUENCE statement creates the seq2 sequence so that 50 values of the sequence are stored in the sequence cache:

CREATE SEQUENCE Seq2
CACHE 50;


Figure 27. Creating a sequence.

The first 50 values of seq2 can then be read from the cache. When the 51st value is accessed, the next 50 values will be read from disk.

Choosing a high value for CACHE lets you access more successive sequence numbers with fewer reads from disk to the sequence cache. However, if there is an instance failure, then all sequence values in the cache are lost. Cached sequence numbers also could be skipped after an export and import if transactions continue to access the sequence numbers while the export is running.

If you use the NOCACHE option in the CREATE SEQUENCE statement, then the values of the sequence are not stored in the sequence cache. In this case, every access to the sequence requires a disk read. Such disk reads slow access to the sequence. This CREATE SEQUENCE statement creates the SEQ3 sequence so that its values are never stored in the cache:

CREATE SEQUENCE Seq3
NOCACHE;
