How to use an Oracle sequence in a SQL*Loader control file

I have successfully used a sequence from my Oracle 10g database to populate a primary key field during an sqlldr run.
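
A minimal sketch of such a control file, with placeholder names (table emp, sequence emp_seq, datafile emp.dat); the EXPRESSION keyword tells SQL*Loader not to read the column from the datafile but to let the database evaluate the quoted SQL string for each row:

    LOAD DATA
    INFILE 'emp.dat'
    APPEND
    INTO TABLE emp
    FIELDS TERMINATED BY ','
    (
      empno EXPRESSION "emp_seq.NEXTVAL",  -- primary key generated by the sequence
      ename CHAR,                          -- read from the datafile
      sal   INTEGER EXTERNAL               -- read from the datafile
    )

Because EXPRESSION fields are not read from the datafile, each data record here needs to supply only the ename and sal values.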

Note that I am not using a direct path load.

When SQL*Loader rejects records during a load, it writes them to the bad file. If you have specified that a bad file is to be created, the following applies: if one or more records are rejected, the bad file is created and the rejected records are written to it. If no records are rejected, then the bad file is not created; when this occurs, you must reinitialize the bad file for the next run. If the bad file is created, it overwrites any existing file with the same name; ensure that you do not overwrite a file you wish to retain.

Additional Information: On some systems, a new version of the file is created if a file with the same name already exists. See your Oracle operating system-specific documentation to find out if this is the case on your system.

If you do not specify a name for the bad file, the name defaults to the name of the datafile with an extension or file type of BAD. The bad file is created in the same record and file format as the datafile so that the data can be reloaded after corrections.

The syntax is BADFILE filename; the BADFILE keyword specifies that a filename for the bad file follows. SQL*Loader rejects a record when its input format is invalid; that is, when it cannot determine whether the record meets WHEN-clause criteria, as in the case of a field that is missing its final delimiter. If the data can be evaluated according to the WHEN-clause criteria (even with unbalanced delimiters), then it is either inserted or rejected. If a record is rejected on insert, then no part of that record is inserted into any table. For example, if data in a record is to be inserted into multiple tables, and most of the inserts succeed, but one insert fails, then all inserts from that record are rolled back.

The record is then written to the bad file, where it can be corrected and reloaded. Previous inserts from records without errors are not affected. The log file indicates the Oracle error for each rejected record. Case 4: Loading Combined Physical Records demonstrates rejected records.
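
A sketch of where the BADFILE clause sits in a control file (all filenames, columns, and positions are placeholders):

    LOAD DATA
    INFILE  'emp.dat'
    BADFILE 'emp.bad'       -- rejected records are written here, in datafile format
    INTO TABLE emp
    (empno POSITION(1:4)  INTEGER EXTERNAL,
     ename POSITION(6:15) CHAR)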

This is to ensure that the row can be repaired in the bad file and reloaded to all tables consistently. Also, if a row is loaded into one table, it should be loaded into all other tables that do not filter it out. Otherwise, reloading a fixed version of the row from the bad file could cause the data to be loaded into some tables twice. Data from LOB files or secondary data files is not written to a bad file when there are rejected rows.

If there is an error loading a LOB, the row is not rejected; rather, the LOB column is left empty. As SQL*Loader executes, it may also create a discard file. The records contained in this file are called discarded records. Discarded records do not satisfy any of the WHEN clauses specified in the control file. These records differ from rejected records.

Discarded records do not necessarily have any bad data. No insert is attempted on a discarded record. If no records are discarded, then a discard file is not created. You can specify the discard file directly, with a parameter specifying its name, or indirectly, by specifying the maximum number of discards. The DISCARDFILE keyword specifies that a discard filename follows.

The default filename is the name of the datafile, and the default file extension or file type is DSC. A discard filename specified on the command line overrides one specified in the control file. If a discard file with that name already exists, it is either overwritten or a new version is created, depending on your operating system. The discard file is created with the same record and file format as the datafile. Therefore, it can easily be used for subsequent loads with the existing control file, after you change the WHEN clauses or edit the data.

For example, the DISCARDFILE clause can name a discard file notappl with an explicit file extension or file type. If a table is loaded without a WHEN clause, an attempt is made to insert every record into that table; therefore, records may be rejected, but none are discarded. Case 4: Loading Combined Physical Records provides an example of using a discard file. Data from LOB files or secondary data files is not written to a discard file when there are discarded rows. You can limit the number of records to be discarded for each datafile by specifying an integer in the DISCARDMAX clause.

When the discard limit is reached, processing of the datafile terminates and continues with the next datafile, if one exists. You can specify a different number of discards for each datafile; alternatively, if the number of discards is only specified once, then the maximum number of discards specified applies to all files. Similarly, a bad filename specified on the command line overrides any bad file that you may have specified in the control file.
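
A sketch combining a discard file with a discard limit (names, positions, and the limit are placeholders):

    LOAD DATA
    INFILE      'dept.dat'
    BADFILE     'dept.bad'
    DISCARDFILE 'dept.dsc'  -- records that satisfy no WHEN clause go here
    DISCARDMAX  99          -- stop processing this datafile after 99 discards
    INTO TABLE dept
    WHEN (1) = 'D'
    (deptno POSITION(3:5)  INTEGER EXTERNAL,
     dname  POSITION(7:20) CHAR)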

See the Oracle8i National Language Support Guide for information about supported character encoding schemes. The following sections provide a brief introduction to some of the supported schemes. Multibyte character sets support Asian languages.

Data can be loaded in multibyte format, and database objects (fields, tables, and so on) can be specified with multibyte characters. In the control file, comments and object names may also use multibyte characters. The session character set is the character set supported by your terminal.

During a direct path load, data is converted directly into the database character set. The direct path load method therefore allows data in a character set that is not supported by your terminal to be loaded. Note: when data conversion is required, the target character set must contain a representation of all characters that exist in the data.

Otherwise, characters that have no equivalent in the target character set are converted to a default character, with consequent loss of data. When you are using the direct path load method, the database character set should be a superset of, or equivalent to, the datafile character sets.

Similarly, during a conventional path load, the session character set should be a superset of, or equivalent to, the datafile character sets. Different datafiles can be specified with different character sets; however, only one character set can be specified for each datafile. Delimiters and comparison clause values must be specified to match the character set in use in the datafile. To ensure that the specifications are correct, you may prefer to specify hexadecimal strings rather than character string values.

Data that uses a different character set must be in a separate file. The INSERT method requires the table to be empty before loading. The REPLACE method instead deletes all existing rows first; after the rows are successfully deleted, a commit is issued. You cannot recover the data that was in the table before the load, unless it was saved with Export or a comparable utility. With the APPEND method, if data does not already exist, the new rows are simply loaded. Case 4: Loading Combined Physical Records provides an example.

The row deletes cause any delete triggers defined on the table to fire. For more information on cascaded deletes, see the information about data integrity in Oracle8i Concepts.
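
A sketch of a REPLACE load; the method keyword sits between the INFILE and INTO TABLE clauses, and INSERT, APPEND, or TRUNCATE would occupy the same position (names and positions are placeholders):

    LOAD DATA
    INFILE 'emp.dat'
    REPLACE                 -- delete all existing rows (firing delete triggers), then load
    INTO TABLE emp
    (empno POSITION(1:4)  INTEGER EXTERNAL,
     ename POSITION(6:15) CHAR)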

To update existing rows, use the following procedure: load your data into a work table, update the target table with a SQL UPDATE statement using correlated subqueries, and then drop the work table. A load may also be discontinued because SQL*Loader runs out of space; for example, the table might reach its maximum number of extents. Discontinued loads can be continued after more space is made available. When a load is discontinued, any data already loaded remains in the tables, and the tables are left in a valid state. If the conventional path is used, all indexes are left in a valid state. If the direct path load method is used, any indexes that run out of space are left in an unusable state.

They must be dropped before the load can continue. Other indexes are valid provided no other errors occurred. See Indexes Left in Index Unusable State for other reasons why an index might be left in an unusable state. The log file records the state of the tables and indexes and the number of logical records already read from the input datafile; use this information to resume the load where it left off. Any indexes that are left in an unusable state must be dropped before continuing the load. The indexes can then be re-created either before continuing or after the load completes.

To continue a discontinued direct or conventional path load involving only one table, specify the number of logical records to skip with the command-line parameter SKIP.
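
For example, if the log file shows that 100 logical records were already loaded before the load was discontinued, a command along these lines resumes after them (credentials and filenames are placeholders):

    sqlldr userid=scott/tiger control=emp.ctl log=emp.log skip=100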

It is not possible for multiple tables in a conventional path load to become unsynchronized. Therefore, a multiple-table conventional path load can also be continued with the command-line parameter SKIP; use the same procedure that you would use for single-table loads, as described in Continuing Single-Table Loads. When a multiple-table direct path load is discontinued, however, the number of logical records processed may differ from table to table. If so, the tables are not synchronized and continuing the load is slightly more complex.

Check the log file to determine the number of logical records processed for each table. If the numbers are the same, you can use the same procedure that you would use for single-table loads, as described in Continuing Single-Table Loads. If the numbers differ, you must use the CONTINUE_LOAD statement and the table-level SKIP clause; these exist to handle unsynchronized interrupted loads, as sketched below.
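
A sketch of such an unsynchronized continuation, assuming the log file showed 10 rows already loaded into emp but only 9 into dept (all names, positions, and counts are placeholders):

    CONTINUE_LOAD DATA
    INFILE 'mix.dat'
    INTO TABLE emp SKIP 10           -- resume emp after its 10 loaded rows
    (empno POSITION(1:4)  INTEGER EXTERNAL)
    INTO TABLE dept SKIP 9           -- resume dept after its 9 loaded rows
    (deptno POSITION(6:9) INTEGER EXTERNAL)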

Because Oracle8i supports user-defined record sizes larger than 64 KB (see READSIZE (read buffer)), the need to break up logical records into multiple physical records is reduced. However, there may still be situations in which you want to do so. When you want to combine those multiple physical records back into one logical record, you can use either the CONCATENATE or the CONTINUEIF clause, depending on your data.

For example, two records might be combined if there were a pound sign in character position 80 of the first record. If any other character were there, the second record would not be added to the first.

If the condition is false, then the current physical record becomes the last physical record of the current logical record.

THIS is the default. NEXT: if the condition is true in the next record, then the current physical record is concatenated to the current logical record, continuing until the condition is false. For the equal operator, the field and comparison string must match exactly for the condition to be true. For the not equal operator, they may differ in any character.

LAST: if the last nonblank character in the current physical record meets the test, then the next physical record is read and concatenated to the current physical record, continuing until the condition is false.

If the condition is false in the current record, then the current physical record is the last physical record of the current logical record. Column numbers start with 1. Either a hyphen or a colon is acceptable (start-end or start:end). If you omit end, the length of the continuation field is the length of the byte string or character string. If you use end, and the length of the resulting continuation field is not the same as that of the byte string or the character string, the shorter one is padded.

Character strings are padded with blanks, hexadecimal strings with zeros. The comparison string str must be enclosed in double or single quotation marks; the comparison is made character by character, blank-padding on the right if necessary. X'hex-str' is a string of bytes in hexadecimal format, used in the same way as str.

For example, X'1FB033' would represent the three bytes with values 1F, B0, and 33 (hexadecimal). This is the only time you refer to character positions in physical records; all other references are to logical records. This allows data values to span the records with no extra characters (continuation characters) in the middle. For example, assume that physical data records are 12 characters long and that a period represents a space; trailing blanks in the physical records are then part of the logical records.

You cannot fragment records in secondary data files (SDFs) into multiple physical records. In the first example, assume that you specify that if the current physical record (record1) has an asterisk in column 1, then the next physical record (record2) should be appended to it. If record2 also has an asterisk in column 1, then record3 is appended as well. If record2 does not have an asterisk in column 1, then it is still appended to record1, but record3 begins a new logical record.

In the next example, you specify that if the current physical record (record1) has a comma in the last nonblank data column, then the next physical record (record2) should be appended to it.

If a record does not have a comma in the last column, it is the last physical record of the current logical record. In the last example, you specify that if the next physical record (record2) has a "10" in columns 7 and 8, then it should be appended to the preceding physical record (record1). If a record does not have a "10" in columns 7 and 8, then it begins a new logical record. All three tests are sketched below.
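
Sketches of the three tests just described (positions and comparison strings are placeholders):

    CONTINUEIF THIS (1:1) = '*'    -- asterisk in column 1: append the next record
    CONTINUEIF LAST = ','          -- comma in the last nonblank column: append the next record
    CONTINUEIF NEXT (7:8) = '10'   -- "10" in columns 7-8 of the next record: append it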

The INTO TABLE clause defines the relationship between records in the datafile and tables in the database. The specification of fields and datatypes is described in later sections. The table named in the INTO TABLE clause must already exist.

If the table is not in your schema, the table name should be prefixed by the username of its owner, as follows: owner.tablename. A table-level OPTIONS clause can also appear here; it is only valid for a parallel load. You can choose to load or discard a logical record by using the WHEN clause to test a condition in the record.

The WHEN clause appears after the table name and is followed by one or more field conditions. For example, the following clause indicates that any record with the value "q" in the fifth column position should be loaded: WHEN (5) = 'q'. Parentheses are optional, but should be used for clarity with multiple comparisons joined by AND, for example WHEN (deptno = '10') AND (job = 'SALES'). Once a record has been assembled, the WHEN clause is evaluated; a record is inserted into the table only if the WHEN clause is true.

Field conditions are discussed in detail in Specifying Field Conditions. If a WHEN directive fails on a record, that record is discarded (skipped). The skipped record is assumed to be contained completely in the main datafile; therefore, a secondary data file will not be affected if present. If all data fields are terminated similarly in the datafile, you can use the FIELDS clause to indicate the default delimiters. Note: terminators are strings, not limited to a single character.

Note: Enclosure strings do not have to be a single character. You can override the delimiter for any given column by specifying it after the column name. See Specifying Delimiters for more information on delimiter specification.
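
A sketch pulling these clauses together (owner, table, and field names are placeholders):

    LOAD DATA
    INFILE 'emp.dat'
    APPEND
    INTO TABLE scott.emp             -- table prefixed by its owner's username
    WHEN (5) = 'q'                   -- load only records with 'q' in column 5
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    (empno, ename, sal INTEGER EXTERNAL)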

Syntax for this feature is given in High-Level Syntax Diagrams. The SINGLEROW option inserts each index entry directly into the index, one record at a time. By default, this option is not used; instead, index entries are put into a separate, temporary storage area and merged with the original index at the end of the load. This method achieves better performance and produces an optimal index, but it requires extra storage space.

During the merge, the original index, the new index, and the space for new entries all simultaneously occupy storage space. The resulting index may not be as optimal as a freshly sorted one, but it takes less space to produce.

It also takes more time, because additional UNDO information is generated for each index insert. This option is suggested for use when either of the following situations exists: the available storage is limited, or the number of records to be loaded is small compared to the size of the table (a ratio of 1:20, or less, is recommended).

Specifying Field Conditions

A field condition is a statement about a field in a logical record that evaluates as true or false. Field conditions are similar to the conditions used in the CONTINUEIF clause, with two important differences. First, positions in the field condition refer to the logical record, not to the physical record.

Second, you may specify either a position in the logical record or the name of a column that is being loaded. Either start-end or start:end is acceptable.

If you omit end, the length of the field is determined by the length of the comparison string. If the lengths are different, the shorter field is padded.

If the field col2 is an attribute of a column object col1, then when referring to col2 in one of the directives you must use the notation col1.col2. If the comparison is true, the current record is inserted into the table. The BLANKS keyword can be used in place of a literal string in any field comparison; the condition is TRUE whenever the column is entirely blank. Using BLANKS is the same as specifying an appropriately sized literal string of blanks.

For example, the following specifications are equivalent: (1:4) = BLANKS and (1:4) = '    '. Note: there can be more than one blank in a multibyte character set. It is a good idea to use the BLANKS keyword with these character sets instead of specifying a string of blank characters. The character string will match only a specific sequence of blank characters, while the BLANKS keyword will match combinations of different blank characters.

For more information on multibyte character sets, see Multibyte Asian Character Sets. When a data field is compared to a literal string that is shorter than the data field, the string is padded. Character strings are padded with blanks, for example: (1:4) = ' '. This clause compares the data in position 1:4 with 4 blanks; the one-character string is blank-padded to the length of the field, so if position 1:4 contains 4 blanks, the clause evaluates as true. You may load any number of a table's columns. Columns defined in the database, but not specified in the control file, are assigned null values (this is the proper way to insert null values).

A column specification is the name of the column, followed by a specification for the value to be put in that column. The list of columns is enclosed by parentheses and separated with commas. If the column's value is generated by SQL*Loader rather than read from the datafile, see Generating Data. If the column's value is read from the datafile, the data field that contains the column's value is specified. In this case, the column specification includes a column name that identifies a column in the database table, and a field specification that describes a field in a data record.

The field specification includes position, datatype, null restrictions, and defaults. It is not necessary to specify all attributes when loading column objects. Any missing attributes will be set to NULL.
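
A sketch of a column list combining explicit positions, datatypes, and a filler field (all names and positions are placeholders):

    INTO TABLE emp
    (empno    POSITION(1:4)          INTEGER EXTERNAL,  -- explicit position and datatype
     ename    POSITION(6:15)         CHAR,
     rectype  FILLER POSITION(16:17) CHAR,              -- parsed, but not loaded into the table
     hiredate POSITION(19:27)        DATE "DD-MON-YY")  -- date with an explicit mask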

Filler fields have names, but they are not loaded into the table. Filler fields can occur anywhere in the data file. A CHAR field can contain any character data.

You may specify one datatype for each field; if unspecified, CHAR is assumed. The position may be stated either explicitly or relative to the preceding field. The first character position in a logical record is 1. If you omit end, the length of the field is derived from the datatype in the datafile. Note that CHAR data specified without start or end is assumed to have length 1.

If it is impossible to derive a length from the datatype, an error message is issued. A field specified as, for example, POSITION(28) CHAR TERMINATED BY '/' has no end position; it starts in column 28 and continues until a slash is encountered. When you are determining field positions, be alert for TABs in the datafile; they can make a load fail with multiple "invalid number" and "missing field" errors.

These kinds of errors occur when the data contains tabs. When printed, each tab expands to consume several columns on the paper. In the datafile, however, each Tab is still only one character.

The use of delimiters to specify relative positioning of fields is discussed in detail in Specifying Delimiters; for an example, see the second example in Extracting Multiple Logical Records. A logical record may contain data for one of two tables, but not both. The remainder of this section details important ways to make use of that behavior. Some data storage and transfer media have fixed-length physical records. When the data records are short, more than one can be stored in a single physical record to use the storage space efficiently.

For example, assume that each physical record holds two logical employee records. Such a record can be loaded with a control file that uses fixed positioning, or the same record could be loaded with a different specification that uses relative positioning instead; both are sketched below.
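
A sketch of both specifications, assuming two logical records per physical record (all positions and names are placeholders):

    -- Fixed positioning: each logical record's fields are at known columns
    INTO TABLE emp
      (empno POSITION(1:4)   INTEGER EXTERNAL,
       ename POSITION(6:15)  CHAR)
    INTO TABLE emp
      (empno POSITION(17:20) INTEGER EXTERNAL,
       ename POSITION(21:30) CHAR)

    -- Relative positioning: the second clause picks up where scanning left off
    INTO TABLE emp
      (empno INTEGER EXTERNAL TERMINATED BY ' ',
       ename CHAR TERMINATED BY WHITESPACE)
    INTO TABLE emp
      (empno INTEGER EXTERNAL TERMINATED BY ' ',
       ename CHAR TERMINATED BY WHITESPACE)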

With relative positioning, the second INTO TABLE clause does not start scanning at column 1; instead, scanning continues where it left off. A single datafile might also contain records in a variety of formats; for example, department records might be intermixed with employee records, with a record ID field distinguishing between the two formats.

Department records have a "1" in the first column, while employee records have a "2". A control file can use exact positioning to load this data, or the records could instead be loaded as delimited data; in the delimited version, the POSITION(1) keyword causes field scanning to start over at column 1 when checking for data that matches the second format. Both are sketched below.
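
Sketches of both approaches (all positions and names are placeholders):

    -- Exact positioning, with the record ID tested in column 1
    INTO TABLE dept
      WHEN (1) = '1'
      (deptno POSITION(3:5)  INTEGER EXTERNAL,
       dname  POSITION(7:20) CHAR)
    INTO TABLE emp
      WHEN (1) = '2'
      (empno  POSITION(3:6)  INTEGER EXTERNAL,
       ename  POSITION(8:22) CHAR)

    -- Delimited data: POSITION(1) restarts scanning at column 1 for the second format
    INTO TABLE dept
      WHEN (1) = '1'
      FIELDS TERMINATED BY ','
      (recid FILLER POSITION(1) CHAR, deptno, dname)
    INTO TABLE emp
      WHEN (1) = '2'
      FIELDS TERMINATED BY ','
      (recid FILLER POSITION(1) CHAR, empno, ename)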

The following data-generation functions are described: CONSTANT, RECNUM, SYSDATE, and SEQUENCE. It is possible to generate data entirely from these functions, without reading any datafile at all; the LOAD keyword is required in this situation to specify how many rows to insert, and the SKIP keyword is not permitted.

In addition, no memory is required for a bind array. A constant value, specified with the CONSTANT keyword, is the simplest form of generated data. It does not vary during the load, and it does not vary between loads. It is converted, as necessary, to the database column type. You may enclose the value within quotation marks, and you must do so if it contains white space or reserved words. Be sure to specify a legal value for the target column.

If the value is bad, every record is rejected. To set a column to null, do not specify that column at all; Oracle automatically sets that column to null when loading the record. Use the RECNUM keyword after a column name to set that column to the number of the logical record from which that record was loaded.

Records are counted sequentially from the beginning of the first datafile, starting with record 1; thus RECNUM increments for records that are discarded, skipped, rejected, or loaded. The SYSDATE keyword sets a column to the system date at the time of loading. If the column is of type CHAR, then the date is loaded in the form 'dd-mon-yy'.

If the system date is loaded into a DATE column, then it can be accessed in a variety of forms that include the time and the date.
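
A sketch of a column list combining these generated values with a single field read from the datafile (names and positions are placeholders):

    INTO TABLE audit_log
    (loader_tag CONSTANT 'sqlldr',    -- the same literal value in every row
     rec_no     RECNUM,               -- logical record number
     load_date  SYSDATE,              -- system date at load time
     payload    POSITION(1:80) CHAR)  -- the only value read from the datafile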

A column set with the SEQUENCE keyword, by contrast, does not increment for records that are discarded or skipped. With SEQUENCE(MAX), the sequence starts with the current maximum value for the column plus the increment. If a record is rejected (that is, it has a format error or causes an Oracle error), the generated sequence numbers are not reshuffled to mask this. If four rows are assigned sequence numbers 10, 12, 14, and 16 in a particular column, and the row with 12 is rejected, the three rows inserted are numbered 10, 14, and 16, not 10, 12, and 14. This allows the sequence of inserts to be preserved despite data errors.

When you correct the rejected data and reinsert it, you can manually set the columns to agree with the sequence. Because a unique sequence number is generated for each logical input record, rather than for each table insert, the same sequence number can be used when inserting data into multiple tables.

This is frequently useful behavior. For example, your data format might define three logical records in every input record.

To generate sequence numbers for these records, you must generate unique numbers for each of the three inserts. There is a simple technique to do so: use the number of table-inserts per record as the sequence increment, and start the sequence numbers for each insert with successive numbers. Suppose you want to load department names into the DEPT table from records that each contain three department names, and you want to generate the department numbers automatically.

You could use control file entries like those sketched below to generate unique department numbers. All three entries use 3 as the sequence increment (the number of department names in each record). This control file loads Accounting as department number 1, Personnel as 2, and Manufacturing as 3.
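
A sketch of those entries, assuming each record holds three 14-character department names (positions are placeholders):

    INTO TABLE dept
      (deptno SEQUENCE(1, 3), dname POSITION(1:14)  CHAR)
    INTO TABLE dept
      (deptno SEQUENCE(2, 3), dname POSITION(16:29) CHAR)
    INTO TABLE dept
      (deptno SEQUENCE(3, 3), dname POSITION(31:44) CHAR)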

The sequence numbers are then incremented for the next record, so Shipping loads as 4, Purchasing as 5, and so on. SQL*Loader datatypes are grouped into portable and nonportable datatypes; within each of these two groups, the datatypes are subgrouped into length-value datatypes and value datatypes. The main grouping, portable versus nonportable, refers to the platform dependency of the datatype.

This issue arises due to a number of platform specifics, such as differences in the byte-ordering schemes of different platforms (big-endian versus little-endian), differences in platform word size (16-bit, 32-bit, 64-bit), differences in signed-number representation schemes (2's complement versus 1's complement), and so on.

Note that not all of these problems apply to all nonportable datatypes.