Default Parallel SQL Execution: Tables and Constraints
If you work with large tables, you may want to consider creating them with the PARALLEL clause. This instructs Oracle to set a default degree of parallelism for queries and for any subsequent INSERT, UPDATE, DELETE, and MERGE statements against the table. This example creates a table with a PARALLEL clause of 2:
SQL> create table inventory (inv_id number,
inv_desc varchar2(30), create_dt date default sysdate) parallel 2;
You can specify PARALLEL, NOPARALLEL, or PARALLEL n. If you specify PARALLEL without n, Oracle computes a default degree of parallelism based on the PARALLEL_THREADS_PER_CPU initialization parameter (PARALLEL_THREADS_PER_CPU × CPU_COUNT × INSTANCE_COUNT). The main issue to be aware of here is that if a table has been created with a default degree of parallelism, any subsequent queries against it will execute with parallel threads. This can leave you wondering why a query or DML statement is executing in parallel even though you never explicitly invoked a parallel operation.
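To find out why statements against a table run in parallel, check the table's DEGREE setting in the data dictionary; you can also remove the default parallelism. A minimal sketch, using the INVENTORY table from the example above:

SQL> select table_name, degree from user_tables
where table_name = 'INVENTORY';

SQL> alter table inventory noparallel;

After the ALTER, queries against INVENTORY revert to serial execution unless parallelism is requested explicitly (for example, with a hint).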
Parallel query operations spawn parallel execution server processes (named Pnnn), and if tables are created with a large default parallel degree, there is a risk of hitting the "ORA-00020: maximum number of processes exceeded" error. The Resource Manager directive attribute PQ_TIMEOUT_ACTION can time out parallel statements that have been waiting in the parallel statement queue, allowing higher-priority parallel queries to get the resources they need to execute. There is also a simpler way to cancel runaway SQL, without having to manually kill the parallel processes if they get out of control because of a default parallel degree of something like 64: the ALTER SYSTEM CANCEL SQL statement.
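As a sketch, assuming you have already identified the session's SID and serial number from V$SESSION (the values below are placeholders), the runaway statement can be canceled like this:

SQL> alter system cancel sql '123, 45678';

The session itself survives; only the currently executing SQL statement is canceled, and its parallel execution servers are released. You can optionally append the SQL_ID in the quoted string to make sure only that specific statement is canceled.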
Parallelism can be important for performance tuning; however, it is also complicated and can fail in many ways, causing regressions in other queries. It is not recommended to use parallelism until you understand your database's configuration and how a table's degree of parallelism could affect the entire system.
Compressing Table Data
As your database grows, you may want to consider table-level compression. Compressed data uses less disk space and less memory and reduces I/O. Queries that read compressed data can run faster because there are fewer blocks to process. However, CPU usage increases because data must be compressed and decompressed as writes and reads occur, so there is a trade-off.
Four types of compression are available:
• Basic compression
• Advanced row compression
• Warehouse compression (hybrid columnar compression)
• Archive compression (hybrid columnar compression)
Basic compression is enabled with the COMPRESS or COMPRESS BASIC clause (they are synonymous). This example creates a table with basic compression:
SQL> create table inventory (inv_id number,
inv_desc varchar2(300), create_dt timestamp) compress basic;
Basic compression compresses data only as it is inserted into the table via direct-path operations; conventionally inserted rows are stored uncompressed.
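Because only direct-path inserts are compressed with basic compression, loads into such a table are typically done with the APPEND hint or a direct-path load. A minimal sketch (the INVENTORY_STAGE staging table is an assumed name for illustration):

SQL> insert /*+ append */ into inventory
select * from inventory_stage;

SQL> commit;

A conventional INSERT ... VALUES into the same table would store its rows uncompressed.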
Note Basic compression requires Oracle Enterprise Edition, but it does not require an extra license. The other types of compression are separately licensed database options. As with any database option, evaluate the storage savings and compression ratio to build the right cost analysis for this option.
Advanced row compression is enabled with the ROW STORE COMPRESS ADVANCED clause:
SQL> create table inventory (inv_id number,
inv_desc varchar2(300), create_dt timestamp)
row store compress advanced;
Advanced row compression provides compression when initially inserting data into the table as well as in any subsequent DML operations. You can verify the compression for a table via the following SELECT statement:
SQL> select table_name, compression, compress_for from user_tables
where table_name = 'INVENTORY';

TABLE_NAME   COMPRESS   COMPRESS_FOR
----------   --------   ------------
INVENTORY    ENABLED    ADVANCED
You can also create a tablespace with the compression clause. Any table created in the tablespace will inherit the tablespace compression settings.
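A minimal sketch of tablespace-level compression (the tablespace name, datafile path, and size here are made-up placeholders):

SQL> create tablespace comp_data
datafile '/u01/oradata/comp_data01.dbf' size 100m
default row store compress advanced;

Any table created in COMP_DATA without its own compression clause inherits advanced row compression from the tablespace.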
The following are a couple of table maintenance considerations with compression:
• The table must not have more than 255 columns.
• Altering a table to enable compression does not compress its existing data. You will need to rebuild (move) the table to compress it.
• Moving the table invalidates any associated indexes, so you will need to rebuild them.
• Compression can be disabled via the NOCOMPRESS clause. Again, this does not affect the existing data.
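The maintenance points above can be sketched as follows, using the INVENTORY table from the earlier examples (the index name INVENTORY_PK is an assumption for illustration):

SQL> alter table inventory row store compress advanced;

SQL> alter table inventory move;

SQL> alter index inventory_pk rebuild;

The first statement affects only future DML; the MOVE rewrites and compresses the existing rows, which leaves the table's indexes unusable, so the final step rebuilds them.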