=================Must Read======================
During a STARTUP operation, Oracle first reads the initialization file, then mounts the database by reading the control file, and finally attempts to open the datafiles referenced in the control file. If the datafiles are not synchronized, instance recovery is initiated.
Instance recovery occurs in two distinct phases: the first phase uses the online redo log files to restore the datafiles to the state before instance failure in a roll forward operation; after this step is completed, Oracle uses the undo tablespace to roll back any uncommitted transactions. The roll forward operation includes data in the undo tablespace; without a consistent undo tablespace, the roll back operation cannot succeed. Once the roll forward operation completes, the database is open to users while the roll back operation completes. After the roll back phase, the datafiles contain only committed data.
Checkpoints keep track of what still needs to be written from the redo log files to the datafiles. Any transactions not yet written to the datafiles are at an SCN after the last checkpoint.
The amount of time required for instance recovery depends on how long it takes to bring the datafiles up-to-date from the last checkpoint position to the latest SCN in the control file. To prevent performance problems, the distance between the checkpoint position and the end of the redo log group cannot be more than 90 percent of the size of the redo log group.
You can tune instance recovery by setting an MTTR target, in seconds, using the initialization parameter FAST_START_MTTR_TARGET. The default value for this parameter is zero; the maximum is 3,600 seconds (1 hour).
With a setting of zero, which disables the target, the likelihood that writes to the redo logs wait for writes to the datafiles is reduced. However, if FAST_START_MTTR_TARGET is set to a low nonzero value, writes to the redo logs most likely wait for writes to the datafiles. Although this reduces the amount of time it takes to recover the instance in the case of an instance failure, it affects performance and response time. Setting this value too high can result in an unacceptable amount of time needed to recover the instance after an instance failure.
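For example (the 60-second target is an arbitrary illustration), you can set the target dynamically and compare it with Oracle's current estimate in the V$INSTANCE_RECOVERY view:
SQL> alter system set fast_start_mttr_target = 60 scope=both;
SQL> select target_mttr, estimated_mttr from v$instance_recovery;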
Although tablespace point-in-time recovery could effectively restore a table and its contents to a point in time before it was dropped, it was potentially time-consuming and had the side effect of losing work from other transactions that occurred within the same tablespace after the table was dropped.
The recycle bin is a logical structure within each tablespace that holds dropped tables and objects related to the tables, such as indexes. The space associated with the dropped table is not immediately available but shows up in the data dictionary view DBA_FREE_SPACE. When space pressure occurs in the tablespace, objects in the recycle bin are deleted in a first-in first-out (FIFO) fashion, maximizing the amount of time that the most recently dropped object remains in the recycle bin.
The dropped object still belongs to the owner and still counts against the quota for the owner in the tablespace; in fact, the table itself is still directly accessible from the recycle bin.
Retrieving Dropped Tables from the Recycle Bin
A dropped table can be retrieved from the recycle bin at the SQL command line by using the FLASHBACK TABLE…TO BEFORE DROP command.
Syntax: flashback table order_items to before drop;
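A short illustrative session (the restored table name is hypothetical) that inspects the recycle bin first and then restores the table under a new name, which is useful when an object with the original name already exists:
SQL> show recyclebin
SQL> flashback table order_items to before drop rename to order_items_restored;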
Recycle Bin Considerations and Limitations
A few limitations are associated with the recycle bin:
Only non-SYSTEM locally managed tablespaces can have a recycle bin.
A table's dependent objects are saved in the recycle bin when the table is dropped, except for bitmap join indexes, referential integrity constraints (foreign key constraints), and materialized view logs.
Indexes are protected only if the table is dropped first: explicitly dropping an index does not place the index into the recycle bin.
Using Flashback Table
Flashback Table allows you to recover one or more tables to a specific point in time without having to use more time-consuming recovery operations such as tablespace point-in-time recovery or Flashback Database that can also affect the availability of the rest of the database. Flashback Table works in-place by rolling back only the changes made to the table or tables and their dependent objects, such as indexes. Flashback Table is different from Flashback Drop: Flashback Table undoes recent transactions to an existing table, whereas Flashback Drop recovers a dropped table. Flashback Table uses data in the undo tablespace, whereas Flashback Drop uses the recycle bin.
The FLASHBACK TABLE command brings one or more tables back to a point in time before any number of logical corruptions have occurred on the tables. To be able to flash back a table, you must enable row movement for the table. Because DML operations are used to bring the table back to its former state, the ROWIDs in the table change. As a result, Flashback Table is not a viable option for applications that depend on the table's ROWIDs to remain constant.
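As a sketch (the table name and the 15-minute window are illustrative), enabling row movement and flashing a table back might look like this:
SQL> alter table order_items enable row movement;
SQL> flashback table order_items to timestamp systimestamp - interval '15' minute;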
===============================================================================
Requirements for maximizing a database's availability and minimizing recovery time:
scheduled backups
multiplexing control files
multiplexing redo log files
configuring the database in ARCHIVELOG mode and
using a Flash Recovery area.
–Backing Up Control Files
The control file is a relatively small (in the megabyte range) binary file that records the names and locations of the datafiles and redo log files that constitute a database. The control file is created when the database is created and is updated with physical changes, for example, whenever a datafile is added or renamed.
The control file is updated continuously and should be available at all times. Don't edit the contents of the control file; only Oracle processes should update its contents. When the database is started, Oracle uses the control file to identify the datafiles and redo log files and opens them. Control files play a major role when recovering a database.
Contents of the control file include:
The database name to which the control file belongs. A control file can belong to only one database.
The database creation time stamp.
The name, location, and online/offline status of the datafiles.
The name and location of the redo log files.
Redo log archive information.
Tablespace names.
The current log sequence number, a unique identifier that is incremented and recorded when an online redo log file is switched.
The most recent checkpoint information. (Checkpoints are discussed in more detail later in this chapter in the section "Understanding Checkpoints.")
The beginning and ending of undo segments.
Recovery Manager's backup information.
Any structural change, such as adding or renaming a file, is immediately recorded in the control file. The size of the control file does not change with these updates; its space is preallocated by Oracle.
Back up the control file after any structural changes. Information in the control file is updated as follows:
The log writer (LGWR) process updates the control file with the current log sequence number.
The checkpoint (CKPT) process updates the control file with the most recent checkpoint information.
When the database is in ARCHIVELOG mode, the archiver (ARCn) processes update the control file with information such as the archived log filename and log sequence number.
When multiplexing control files, Oracle updates all the control files at the same time, but uses only the first control file listed in the CONTROL_FILES parameter for reading.
When creating a database, control file names can be listed in the CONTROL_FILES parameter, and Oracle creates as many control files as are listed. A database can have a maximum of eight multiplexed control file copies.
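For illustration (the file paths are hypothetical), a multiplexed configuration in the parameter file might look like this:
CONTROL_FILES = ('/u01/oradata/orcl/control01.ctl',
                 '/u02/oradata/orcl/control02.ctl',
                 '/u03/oradata/orcl/control03.ctl')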
When an online redo log file is full and LGWR starts writing to the next redo log file, ARCn copies the completed redo log file to the archive destination.
Keep enough space available for online archived redo log files to recover and roll forward from the last full backup of each online datafile; the remaining archived logs and any previous datafile backups can be moved to another disk or to tape.
When you recover the database by using redo log files, Oracle reads the change vectors in the redo records and applies the changes to the relevant blocks.
LGWR always writes its records to the online redo log file before DBWn writes new or modified database buffer cache records to the datafiles.
The redo log file contains both committed and uncommitted transactions. Whenever a transaction is committed, a system change number is assigned to the redo records to identify the committed transaction.
Oracle recommends that all redo log groups be the same size.
–Log Switch Operations
The LGWR process writes to only one redo log file group at any time. The file that is actively being written to is known as the current log file. The log files that are required for instance recovery are known as the active log files. The other log files are known as inactive.
Oracle automatically recovers an instance when starting up the instance by using the online redo log files. Instance recovery can be needed if you do not shut down the database cleanly or if your database server crashes.
Whenever a log switch occurs, Oracle assigns a log sequence number to the new redo log group before writing to it. If there are lots of transactions or changes to the database, the log switches can occur too frequently. Size the redo log files appropriately to avoid frequent log switches. Oracle writes to the alert log file whenever a log switch occurs.
Redo log files are written sequentially on the disk, so the I/O will be fast if there is no other activity on the disk. (The disk head is always properly positioned.) Keep the redo log files on a separate disk for better performance. If you have to store a datafile on the same disk as the redo log file, do not put the SYSTEM, UNDOTBS, SYSAUX, or any very active data or index tablespace file on this disk. A commit cannot complete until a transaction's information has been written to the redo logs, so maximizing the throughput of the redo log files is a top priority.
Database checkpoints are closely tied to redo log file switches. We introduced checkpoints earlier in the chapter in the section "Understanding Checkpoints." A checkpoint is an event that flushes the modified data from the buffer cache to the disk and updates the control file and datafiles. The CKPT process updates the headers of datafiles and control files; the actual blocks are written to the files by the DBWn process. A checkpoint is initiated when:
the redo log file is filled and a log switch occurs;
the instance is shut down with NORMAL, TRANSACTIONAL, or IMMEDIATE;
a tablespace status is changed to read-only or put into BACKUP mode; or
other values specified by certain parameters (discussed later in this section) are reached.
You can force a checkpoint if needed, as shown here:
ALTER SYSTEM CHECKPOINT;
Forcing a checkpoint ensures that all changes to the database buffers are written to the datafiles on disk.
Another way to force a checkpoint is by forcing a log file switch:
ALTER SYSTEM SWITCH LOGFILE;
The size of the redo log affects the checkpoint performance.
All copies of the same redo log file are the same size and are collectively known as a group, which is identified by an integer. Each redo log file in the group is known as a member. You must have at least two redo log groups for normal database operation.
When multiplexing redo log files, keeping the members of a group on different disks is preferable so that one disk failure will not affect the continuing operation of the database.
If a database is running in ARCHIVELOG mode, redo log members cannot be deleted unless the redo log group has been archived.
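As a sketch (group numbers, paths, and size are illustrative), a new multiplexed group or an additional member for an existing group can be added as follows:
SQL> alter database add logfile group 3
  2  ('/u01/oradata/orcl/redo03a.log', '/u02/oradata/orcl/redo03b.log') size 50m;
SQL> alter database add logfile member '/u02/oradata/orcl/redo01b.log' to group 1;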
–Flash Recovery Area
Using disk space as the primary medium for all database recovery operations is the key component of Oracle 10g's Flash Recovery area. The Flash Recovery area is a single, unified storage area for all recovery-related files and recovery activities in an Oracle database.
The Flash Recovery area can be a single directory, an entire file system, or an Automatic Storage Management (ASM) disk group. To further optimize the use of disk space for recovery operations, a Flash Recovery area can be shared by more than one database.
The major aspects of a Flash Recovery area covered here are: what can be kept in the Flash Recovery area; how to set one up using initialization parameters and SQL commands; and how to manage it using the Enterprise Manager (EM) Database Control and other advanced management techniques.
Flash Recovery Area Occupants
All the files needed to recover a database from a media failure or a logical error are contained in the Flash Recovery area. The Flash Recovery area can contain the following:
Control files: A copy of the control file is created in the Flash Recovery area when the database is created. This copy of the control file can be used as one of the mirrored copies of the control file to ensure that at least one copy of the control file is available after a media failure.
Archived log files: When the Flash Recovery area is configured, the initialization parameter LOG_ARCHIVE_DEST_10 is automatically set to the Flash Recovery area location. The corresponding ARCn processes create archived log files in the Flash Recovery area and any other defined LOG_ARCHIVE_DEST_n locations.
Flashback logs: If Flashback Database is enabled, its flashback logs are stored in the Flash Recovery area.
Control file and SPFILE autobackups: The Flash Recovery area holds control file and SPFILE autobackups generated by RMAN if RMAN is configured for control file autobackup. When RMAN backs up datafile #1, which is part of the SYSTEM tablespace, the control file is automatically included in the RMAN backup.
Datafile copies: For RMAN BACKUP AS COPY image files, the default destination for the datafile copies is the Flash Recovery area.
RMAN backup sets: By default, RMAN uses the Flash Recovery area for both backup sets and image copies. In addition, RMAN puts restored archived log files from tape into the Flash Recovery area in preparation for a recovery operation.
The Flash Recovery Area and SQL Commands
You must define two initialization parameters to set up the Flash Recovery area: DB_RECOVERY_FILE_DEST_SIZE and DB_RECOVERY_FILE_DEST. Because both of these are dynamic parameters, the instance need not be shut down and restarted for the Flash Recovery area to be usable.
DB_RECOVERY_FILE_DEST_SIZE, which must be defined before DB_RECOVERY_FILE_DEST, defines the size of the Flash Recovery area. To maximize the benefits of the Flash Recovery area, it should be large enough to hold a copy of all datafiles, all incremental backups, online redo logs, archived redo logs not yet backed up to tape, control files, and control file autobackups.
At a bare minimum, you need enough space to hold the archived log files not yet copied to tape.
Here is an example of configuring DB_RECOVERY_FILE_DEST_SIZE:
SQL> alter system
  2  set db_recovery_file_dest_size = 8g scope=both;
The size of the Flash Recovery area will be 8GB, and because we used the SCOPE=BOTH parameter in the ALTER SYSTEM command, the initialization parameter takes effect immediately and stays in effect even after a database restart. All instances in a RAC database must have the same values for DB_RECOVERY_FILE_DEST_SIZE and DB_RECOVERY_FILE_DEST.
The parameter DB_RECOVERY_FILE_DEST specifies the physical location where all Flash Recovery files are stored. The ASM disk group or file system must have at least as much space as the amount specified with DB_RECOVERY_FILE_DEST_SIZE, and it can have significantly more. DB_RECOVERY_FILE_DEST_SIZE, however, can be increased on the fly if more space is needed and the file system where the Flash Recovery area resides has the space available.
In the following example, we use the directory /OraFlash for the Flash Recovery area, like so:
SQL> alter system
  2  set db_recovery_file_dest = '/OraFlash' scope=both;
Clearing the value of DB_RECOVERY_FILE_DEST disables the Flash Recovery area;
the parameter DB_RECOVERY_FILE_DEST_SIZE cannot be cleared until the DB_RECOVERY_FILE_DEST parameter has been cleared.
The Flash Recovery Area and the EM Database Control
You can create and maintain the Flash Recovery area using the EM Database Control. Click the Maintenance tab, and then click the Configure Recovery Settings link to display the Configure Recovery Settings screen.
In the Flash Recovery Area section, the Flash Recovery area has been configured for this database in the file system /OraFlash, with a maximum size of 15,000MB (15GB). Just more than 3GB of space is currently used in the Flash Recovery area. Flashback logging has not yet been enabled for this database.
Flash Recovery Area Management
Because the space in the Flash Recovery area is limited by the initialization parameter DB_RECOVERY_FILE_DEST_SIZE, the Oracle database keeps track of which files are no longer needed on disk so that they can be deleted when there is not enough free space for new files. Each time a file is deleted from the Flash Recovery area, a message is written to the alert log.
A message is also written to the alert log in other circumstances. If no files can be deleted, and the recovery area used space is at 85 percent, a warning message is issued. When the space used is at 97 percent, a critical warning is issued. These warnings are recorded in the alert log file, can be viewed in the data dictionary view DBA_OUTSTANDING_ALERTS, and are available in the main screen of the EM Database Control.
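For example, a query along these lines surfaces the outstanding alerts (filtering on OBJECT_TYPE = 'RECOVERY AREA' is an assumption about how these alerts are categorized):
SQL> SELECT reason, suggested_action
  2  FROM dba_outstanding_alerts
  3  WHERE object_type = 'RECOVERY AREA';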
When you receive these alerts, you have a number of options. If your retention policy can be adjusted to keep fewer copies of datafiles or reduce the number of days in the recovery window, this can help alleviate the space problems in the Flash Recovery area. Assuming that your retention policy is sound, you should instead add more disk space or back up some of the files in the Flash Recovery area to another destination such as another disk or a tape device.
—Performing Backup
Your backup strategy depends on the activity of your database, the level of availability required by your service-level agreements (SLAs), and how much downtime you can tolerate during a recovery effort.
Understanding Backup Terminology
You can make a whole backup, which backs up the entire database, or you can back up only part of the database, which is called a partial backup. Whole backups and partial backups are known as Oracle backup strategies. The backup type can be divided into two general categories: full backups and incremental backups. Depending on whether you make your database backups when the database is open or closed, backups can be further categorized into the backup modes known as consistent and inconsistent backups.
Your backups can be managed using operating system and SQL commands or entirely by RMAN. Many backup types are only available using RMAN, such as incremental backups; unless you have some specific requirements, it is highly recommended that you use RMAN to implement your backup strategy.
Whole database: A whole database backup includes all datafiles and at least one control file. Online redo log files are never backed up; restoring backed up redo log files and replacing the current redo log files will result in loss of data during media recovery. Only one of the control files needs to be backed up; all copies of the control file are identical.
Partial database: A partial database backup includes zero or more tablespaces, which in turn includes zero or more datafiles; a control file is optional in a partial database backup. As you may infer, a partial database backup that includes no tablespaces and does not include the control file backs up zero bytes of data to the backup destination.
Full: A full backup includes all blocks of every datafile backed up in a whole or partial database backup.
Incremental: An incremental backup makes a copy of all data blocks that have changed since a previous backup. Oracle 10g supports five levels of incremental backups, from 0 to 4. An incremental backup at level 0 is considered a baseline backup; it is the equivalent of a full backup and contains all data blocks in the datafile(s) that are backed up. Although incremental backups can take less time, the potential downside is that you must first restore the baseline backup and then apply all incremental backups performed since the baseline backup.
Consistent: A consistent backup, also known as an offline backup, is performed while the database is not open. These backups are consistent because the SCN in the control file matches the SCN in every datafile's header. Although recovering using a consistent backup requires no additional recovery operation after a failure, you reduce your database's availability during a consistent backup as well as risk the loss of committed transactions performed since the consistent backup.
Inconsistent: Although the term inconsistent backup may sound like something you might avoid in a database, it is a way to maintain availability of the database while performing backups. An inconsistent backup, also known as an online backup, is performed while the database is open and available to users. The backup is inconsistent because the SCN in the control file is most likely out of synch with the SCN in the header of the datafiles. Inconsistent backups require recovery when they are used for recovering from a media failure, but keep availability high because the database is open while the backup is performed.
Backing Up the Control File
In addition to multiplexing the control file, you can guard against the loss of all control files by backing up the control file to an editable text file; this backup is called a backup to trace. The trace backup is created in the directory specified by the initialization parameter USER_DUMP_DEST, and its filename format is sid_ora_pid.trc, where sid is the instance name and pid is the process ID of the server process creating the trace backup. This special backup of the control file is not a trace file per se; in other words, it is not a dump file or an error report for a failed user or system process. It is a proactive, rather than reactive, report of the contents of the control file, and the report happens to end up in a directory with other trace files. Back up the control file to trace after any change to the structure of the database, such as adding or dropping a tablespace or creating a new redo log file group. Using the command line to create a backup of the control file is almost as easy as clicking the Backup to Trace button within the EM Database Control:
SQL> alter database backup controlfile to trace;
Database altered.
Another way to back up the control file is to make a binary copy of it using a similar ALTER DATABASE command, as in the following example:
SQL> alter database backup controlfile to '/u03/oradata/ctlfile20040911.bkp';
Database altered.
Using RMAN to Create Backups
RMAN is the primary component of the Oracle database used to perform backup and recovery operations. You can use RMAN to back up all types: whole or partial databases, full or incremental, and consistent or inconsistent.
RMAN has a command-line interface for advanced configuration and backup operations; the most common backup functions are available via a GUI interface within the EM Database Control. It includes a scripting language to make it easy to automate backups, and it can back up the most critical types of files in your database: datafiles, control files, archived log files, and SPFILEs. It is not used to back up online redo log files (which you should never back up anyway), password files, or text-based init.ora files. In other words, RMAN is a "one-stop shopping" solution for all your backup and recovery needs. In the rare circumstance that you have to back up a file outside RMAN, you can register the file created during this backup with RMAN for future use in an RMAN recovery scenario.
Due to the relatively static nature of password files and text-based init.ora files, these can be included in the regular operating system backups, or you can back them up manually whenever they are changed.
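For example (the file path is hypothetical), an image copy made outside RMAN can be registered with RMAN like this:
RMAN> catalog datafilecopy '/u04/backup/users01.dbf';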
Configuring RMAN Backup Settings
Configuring RMAN backup settings is straightforward using the EM Database Control. In the home page, click the Maintenance tab, and then click Configure Backup Settings to open the Device tab screen. There is a separate section in this screen for your disk device and any tape devices. Under the Disk Settings section, you can control the following parameters:
Parallelism: To take advantage of multiple CPUs or disk controllers, increase the value of this parameter to reduce the overall backup time by performing different portions of the backup in parallel.
Disk Backup Location: If you are not backing up to the Flash Recovery area, change this value to the location where you want the backups stored.
Disk Backup Type: You can choose image copy, backup set, or compressed backup set.
Infrequently used parameters, such as the control file autobackup filename format and the snapshot control file destination filename, are not available from the GUI interface; you must use the RMAN command-line interface to change these values. The following RMAN command-line session uses the RMAN SHOW ALL command to display all default RMAN backup settings:
$ rman target /
RMAN>show all;
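Individual defaults can then be changed with CONFIGURE commands; for example (the retention window and parallelism values are arbitrary illustrations):
RMAN> configure controlfile autobackup on;
RMAN> configure retention policy to recovery window of 7 days;
RMAN> configure device type disk parallelism 2;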
Click the Backup Set tab, and specify the maximum size for a backup set piece (a single file). In this case, set the maximum backup set piece size to 2GB to make it easier to move files around on file systems whose file size limit is 2GB.
You use the last tab in the Configure Backup Settings screen, the Policy tab, to set a number of other default backup settings, such as automatically backing up the control file with each backup, skipping read-only and offline datafiles, and using a block- change tracking file. A block-change tracking file keeps track of changed blocks in each tablespace so that incremental backups need not read every block in every datafile to determine which blocks need to be backed up during an incremental backup.
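At the SQL command line, block-change tracking can be enabled as in this sketch (the tracking file path is hypothetical):
SQL> alter database enable block change tracking
  2  using file '/u01/oradata/orcl/change_tracking.chg';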
Understanding Image Copies and Backup Sets
Image copies are duplicates of datafiles or archived redo log files, which means that every block of every file is backed up; you can use RMAN or operating system commands to make image copies. In contrast, backup sets are copies of one or more datafiles or archived redo log files that are stored in a proprietary format readable only by RMAN; backup sets consist of one or more physical files and do not include never used blocks in the datafiles being backed up. Backup sets can save even more space by using a compression algorithm designed specifically for the type of data found in an Oracle datafile.
Another difference between image copies and backup sets is that image copies can only be copied to a disk location; backup sets can be written to disk or directly to a tape or other secondary storage device.
Creating Full and Incremental Backups
The Oracle recommended backup strategy uses RMAN to make a whole-database baseline incremental level 0 online backup weekly, and then level 1 incremental backups on the other days of the week. You can easily fine-tune this strategy for your own needs by making, for example, a level 2 incremental backup at noon on weekdays if heavy DML (Data Manipulation Language) activity is occurring in the database.
Using RMAN, you can accomplish this backup strategy with just a couple of the RMAN commands that follow. First, here is the baseline level 0 backup at the RMAN command prompt:
RMAN>backup incremental level 0 as compressed backupset database;
The entire database is backed up using compression to save disk space, in addition to the space savings already gained by using backup sets instead of image copies. Starting with a baseline level 0 incremental backup, you can make level 1 incremental backups during the rest of the week, as in the following example:
RMAN>backup incremental level 1 as compressed backupset database;
The options are the same as in our previous example, except that only the blocks that have changed since the last backup are copied to the backup set. Another variation is to make an incrementally updated backup. An incrementally updated backup uses an incremental backup and updates the changed blocks in an existing image copy as if the entire image copy were backed up. In a recovery scenario, you can restore the image copy of the datafile(s) without using an incremental backup; the incremental backup is already applied, saving a significant amount of time during a recovery operation. The following RMAN script shows how an incrementally updated backup works at the command line:
run {
  recover copy of database with tag 'inc_upd_img';
  backup incremental level 1 for recover of copy with tag 'inc_upd_img' database;
}
This short and cryptic script demonstrates the advantages of using a GUI interface to perform incrementally updated backups. In the Schedule Backup: Options screen, you can click a check box to perform an incrementally updated backup, in addition to the full backup or incremental backup that we discussed previously in this section.
Managing Backups
Managing your database backups is straightforward using the EM Database Control. In this section, we will give you an overview of the RMAN backup and catalog maintenance commands and show you how to monitor the Flash Recovery area and automate backups using the scheduler.
Catalog Maintenance: A number of backup management functions are available in the same screen in the EM Database Control. In the Maintenance tab, click the Manage Current Backups link to open the Manage Current Backups screen.
The four buttons at the top perform the following functions (RMAN command-line equivalents appear after the list):
Catalog Additional Files: Adds any image copy backups made outside RMAN to the RMAN catalog.
Crosscheck All: Double-checks the backup files listed in the catalog against the actual files on disk to make sure that they are all available.
Delete All Obsolete: Deletes all backup files not needed to satisfy the existing retention policy.
Delete All Expired: Deletes the catalog entry for any backups not found when a crosscheck was performed.
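At the RMAN command prompt, rough equivalents of the last three buttons are as follows:
RMAN> crosscheck backup;
RMAN> crosscheck copy;
RMAN> delete obsolete;
RMAN> delete expired backup;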
Monitoring the Flash Recovery Area
If you are using the Flash Recovery area for all your backups, you want to make sure that you don't run out of space on the disk used for the Flash Recovery area. If your archived redo logs are sent to the Flash Recovery area and you run out of disk space, redo log archiving halts, and the database is unavailable until you add more disk space or specify an alternate location.
We showed you how to create or modify the settings of the Flash Recovery area. In this example, we will use the same screen to monitor the status of the Flash Recovery area. In the Maintenance tab on the database home page, click Configure Recovery Settings to open the screen that displays the current status of the Flash Recovery area after you scroll to the bottom of the screen.
It appears that the Flash Recovery area is almost full with 14.621GB out of 15GB used. The Flash Recovery area settings screen includes the following information and options:
The location of the Flash Recovery area
The size of the Flash Recovery area
How much of the Flash Recovery area is in use
An option to configure an advanced recovery feature, Flashback Database
Click Delete All Obsolete to free up almost 10GB of disk space in the Flash Recovery area. At the RMAN command prompt, the equivalent command is as follows:
RMAN>delete noprompt obsolete;
Not all information about the Flash Recovery area is available via the EM Database Control interface. For all the details concerning the Flash Recovery area, use the dynamic performance view V$RECOVERY_FILE_DEST. After cleaning up obsolete files in the Flash Recovery area, run this query:
SQL>select name, space_limit max_size,
2 space_used used, space_reclaimable obsolete,
3 number_of_files num_files
4 from v$recovery_file_dest;
NAME        MAX_SIZE    USED        OBSOLETE  NUM_FILES
----------  ----------  ----------  --------  ---------
/OraFlash   1.5729E+10  5367296000  0         526
1 row selected.
Now, no obsolete files are left in the Flash Recovery area, and we have approximately 5GB used out of 15GB. Currently, 526 files are in the Flash Recovery area; this information is not available via the EM Database Control interface.
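Starting with Oracle 10g Release 2, the related view V$FLASH_RECOVERY_AREA_USAGE breaks this usage down by file type; a sample query:
SQL> SELECT file_type, percent_space_used, percent_space_reclaimable
  2  FROM v$flash_recovery_area_usage;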
Automating Backups
Automating any of the backup operations via the EM Database Control is straightforward.
In the database home page, click the Maintenance tab, click Schedule Backup, and select your backup options such as strategy and destination. In step 3, you can specify when the backup will first run, whether it will repeat, and how many times it will repeat.
Media failures require restoration and then recovery.
A word on Restore and Recovery:
Restoration is simply replacing a corrupt or missing file with an available backup. Recovery refers to applying all the changes made to the file since the backup was taken. Thus, by restoring a file and then applying the recovery process, the database is brought to the most recent possible state.
–LogMiner
Oracle LogMiner is another tool to view past activity in the database. The LogMiner tool can help find changed records in redo log files by using a set of PL/SQL procedures and functions. LogMiner extracts all DDL and DML activity from the redo log files for viewing via the dynamic performance view V$LOGMNR_CONTENTS. In addition, the V$LOGMNR_CONTENTS view contains the DML statements needed to reverse changes made to the database. This tool is useful for determining when changes were made to a table, as well as for automatically generating the SQL statements needed to reverse those changes.
Unlike the Flashback Query feature, which allows a user to see the contents of a table at a specified time in the past, LogMiner can search a time period for all DDL against a table. A Flashback Query uses the undo information stored in the undo tablespace; LogMiner uses the redo logs, both online and archived. Both tools are useful for tracking down how and when changes to database objects took place.
LogMiner does not actually undo the change; it only provides the statements that you can use to undo the change.
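A minimal LogMiner session might look like this sketch (the archived log filename and table name are hypothetical):
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/u01/arch/arch_1_123.arc', DBMS_LOGMNR.NEW);
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
SQL> SELECT scn, sql_redo, sql_undo
  2  FROM v$logmnr_contents
  3  WHERE table_name = 'ORDER_ITEMS';
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR;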
Performance Tuning
==============================================================
Sources of information about database performance include:
The Alert log
Background and user trace files
Dynamic performance views
Data dictionary views
A partial listing of some of the V$ views that are frequently used in performance tuning:
V$SGASTAT: Shows information about the size of the SGA's components.
V$EVENT_NAME: Shows database events that may require waits when requested by the system or by an individual session. There are approximately 200 possible wait events.
V$SYSTEM_EVENT: Shows events for which waits have occurred for all sessions accessing the system.
V$SESSION_EVENT: Shows events for which waits have occurred, individually identified by session.
V$SESSION_WAIT: Shows events for which waits are currently occurring, individually identified by session.
V$STATNAME: Matches the name to the statistics listed only by number in V$SESSTAT and V$SYSSTAT.
V$SYSSTAT: Shows overall system statistics for all sessions, both currently and previously connected.
V$SESSTAT: Shows statistics on a per-session basis for currently connected sessions.
V$SESSION: Shows current connection information on a per-session basis.
V$WAITSTAT: Shows statistics related to block contention.
In general, queries that incorporate V$SYSSTAT show statistics for the entire instance since the time it was started. By joining this view to the other relevant views, you get the overall picture of performance in the database. Alternatively, queries that incorporate V$SESSTAT show statistics for a particular session. These queries are better suited for examining the performance of an individual operation or process.
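For example, the following query (the SID value and statistic name are illustrative) reports a single statistic for a single session by joining V$SESSTAT to V$STATNAME:
SQL> SELECT n.name, s.value
  2  FROM v$sesstat s JOIN v$statname n ON s.statistic# = n.statistic#
  3  WHERE s.sid = 42 AND n.name = 'session logical reads';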
A Sampling of Data Dictionary Views:
DBA_TABLES: Table storage, row, and block information.
DBA_INDEXES: Index storage, row, and block information.
INDEX_STATS: Index depth and dispersion information.
DBA_DATA_FILES: Datafile location, naming, and size information.
DBA_SEGMENTS: General information about any space-consuming segment in the database.
DBA_HISTOGRAMS: Table and index histogram definition information.
DBA_OBJECTS: General information about all objects in the database, including tables, indexes, triggers, sequences, and partitions.
The DBA_OBJECTS data dictionary view contains a STATUS column that indicates, through the use of a VALID or an INVALID value, whether a database object is valid and ready to be used or invalid and in need of some attention before it can be used. Common examples of invalid objects are PL/SQL code that contains errors or references to other invalid objects and indexes that are unusable due to maintenance operations or failed direct-path load processes. Some invalid objects, such as some types of PL/SQL code, dynamically recompile the next time they are accessed, and they then take on a status of VALID again. But you must manually correct other invalid objects, such as unusable indexes. Therefore, proactive database management techniques dictate that you identify and remedy invalid objects before they cause problems for database users.
Identifying Unusable Objects Using the Data Dictionary
One way to identify invalid objects is to query the DBA_OBJECTS and DBA_INDEXES data dictionary views to find any invalid objects or unusable indexes and then correct them using the commands shown here:
SQL> SELECT owner, object_name, object_type
  2  FROM dba_objects
  3  WHERE status = 'INVALID';

OWNER  OBJECT_NAME             OBJECT_TYPE
-----  ----------------------  -----------
SH     P_UPDATE_SALES_HISTORY  PROCEDURE
SYS    DBA_HIST_LATCH          VIEW

SQL> ALTER VIEW sys.dba_hist_latch COMPILE;

View altered.

SQL> ALTER PROCEDURE sh.p_update_sales_history COMPILE;

Procedure altered.

SQL> SELECT owner, index_name, index_type
  2  FROM dba_indexes
  3  WHERE status = 'UNUSABLE';

OWNER  INDEX_NAME  INDEX_TYPE
-----  ----------  ----------
HR     JOB_ID_PK   NORMAL

SQL> ALTER INDEX hr.job_id_pk REBUILD;

Index altered.
The ALTER … COMPILE command also works on invalid PL/SQL triggers, packages, package bodies, and functions.
When rebuilding an index using the REBUILD command, the amount of space used by the index is temporarily larger than the actual space needed to store the index. Make sure that adequate space exists in the tablespace before starting the rebuild process; up to 1.5 times the size of the original index is a good rule of thumb.
Manually Gathering Statistics Using DBMS_STATS
You can also call the DBMS_STATS PL/SQL package directly from a SQL*Plus session. Some of the options for the DBMS_STATS package include the following:
Back up old statistics before new statistics are gathered. This feature allows you to restore some or all of the original statistics if the CBO performs poorly after updated statistics are gathered.
Gather table statistics much faster by performing the analysis in parallel.
Automatically gather statistics on highly volatile tables and bypass gathering statistics on static tables.
The following example shows how the DBMS_STATS package can be used to gather statistics on the PRODUCT_HISTORY table in the SH schema:
SQL> EXECUTE DBMS_STATS.GATHER_TABLE_STATS('SH','PRODUCT_HISTORY');
You can use the DBMS_STATS package to analyze tables, indexes, an entire schema, or the whole database. A sample of the procedures available within the DBMS_STATS package is shown below:
Procedures within the DBMS_STATS Package
GATHER_INDEX_STATS: Gathers statistics on a specified index.
GATHER_TABLE_STATS: Gathers statistics on a specified table.
GATHER_SCHEMA_STATS: Gathers statistics on a specified schema.
GATHER_DATABASE_STATS: Gathers statistics on an entire database.
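As a sketch (the parameter values are illustrative), gathering statistics for an entire schema in parallel, cascading to its indexes, might look like this:
SQL> BEGIN
  2    DBMS_STATS.GATHER_SCHEMA_STATS(
  3      ownname          => 'SH',
  4      estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
  5      cascade          => TRUE,
  6      degree           => 4);
  7  END;
  8  /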
The presence of accurate optimizer statistics has a big impact on two important measures of overall system performance: throughput and response time.
==============================================================
Detecting Lock Conflicts
To detect locks in Oracle, query V$SESSION, V$TRANSACTION, V$LOCK, and V$LOCKED_OBJECT to see who is locking what resource.
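For example, the following queries (a sketch using standard columns in these views) identify blocked sessions and the sessions holding blocking locks:
SQL> SELECT sid, serial#, blocking_session
  2  FROM v$session
  3  WHERE blocking_session IS NOT NULL;
SQL> SELECT sid FROM v$lock WHERE block = 1;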
==============================================================
Automatic Database Diagnostic Monitoring and AWR
Following each AWR statistics collection process, the Automated Database Diagnostic Monitoring (ADDM) feature automatically analyzes the gathered statistics and compares them to the statistics gathered by the previous two AWR snapshots. By comparing the current statistics to these two previous snapshots, the ADDM can easily identify potential database problems such as CPU and I/O bottlenecks, resource-intensive SQL or PL/SQL, lock contention, and the utilization of Oracle's memory structures within the SGA.
Based on these findings, the ADDM may recommend possible remedies.
The goal of these recommendations is to minimize DB Time. DB Time is composed of two types of time measures for non-idle database users: CPU time and wait time. This information is stored as the cumulative time that all database users have spent either using CPU resources or waiting for access to resources such as CPU, I/O, or Oracle's memory structures. High or increasing values for DB Time indicate that users are requesting increasingly more server resources and may also be experiencing waits for those resources, which can lead to less than optimal performance. In this way, minimizing DB Time is a much better way to measure overall database performance than Oracle's old ratio-based tuning methodologies.
DB Time is calculated by combining all the times from all non-idle user sessions into one number.
Therefore, it is possible for the DB Time value to be larger than the total time that the instance has been running.
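The cumulative figures behind DB Time are visible in the V$SYS_TIME_MODEL view (values are reported in microseconds); a sample query:
SQL> SELECT stat_name, ROUND(value/1000000) seconds
  2  FROM v$sys_time_model
  3  WHERE stat_name IN ('DB time', 'DB CPU');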
Once ADDM completes its comparison of the newly collected statistics to the previously collected statistics, the results are stored in the AWR. You can use these statistics to establish baselines against which future performance will be compared, and you can use deviations from these baseline measures to identify areas that need attention. In this manner, ADDM not only allows you to detect and be alerted to potential management and performance problems in the database, but also allows you to take corrective actions to rectify those problems quickly and with little or no manual intervention.