Oracle DBA Interview Questions and Answers
Oracle DBA Interview Questions and answers for beginners and experts. This list of frequently asked Oracle DBA Interview Questions with answers was prepared by Besant Technologies. We hope these Oracle DBA Interview Questions and answers are useful and will help you get the best job in the industry. These Oracle DBA Interview Questions and answers were prepared by Oracle DBA professionals based on MNC companies' expectations. Stay tuned; we will update new Oracle DBA Interview Questions with answers frequently. If you want practical Oracle DBA training, please go through our Oracle DBA Training in Chennai and Oracle DBA Training in Bangalore.
Best Oracle DBA Interview Questions and answers
Besant Technologies supports students by providing Oracle DBA Interview Questions and answers for job placement purposes. Oracle DBA is an important course at present because there are many job openings and high salaries for Oracle DBA and related roles. We also provide Oracle DBA online training for students around the world through the GangBoard medium. These top Oracle DBA Interview Questions and answers were prepared by our institute's experienced trainers.
Oracle DBA Interview Questions and answers for the job placements
Here is the list of the most frequently asked Oracle DBA Interview Questions and answers in technical interviews. These questions and answers are suitable for both freshers and experienced professionals at any level. The questions target intermediate to somewhat advanced Oracle DBA professionals, but even if you are just a beginner or fresher you should be able to understand the answers and explanations given here.
Oracle Database is a database management system in which data is inserted and retrieved.
A DBA's duties are to take backups of the growing data, install patches, and secure the data.
Oracle, IBM DB2, Microsoft SQL Server, Microsoft Access, MySQL, SQLite, PostgreSQL, MariaDB
It is the medium used to connect to the database. As soon as you start a database, the instance sits in RAM and retrieves and updates data through the SGA. On a 64-bit machine, about 40 percent of RAM can be used for sizing the SGA; the rest is reserved for the OS and other processes. On a 32-bit machine, the maximum SGA that can be configured is only about 1.5 GB.
Library cache, Database Buffer Cache, Data Dictionary cache, Redo log Buffer cache, Shared Pool.
The library cache is used to store Oracle statements. The data dictionary cache contains the definitions of database objects and the privileges granted to users.
The database buffer cache holds copies of frequently accessed data blocks so that they can be retrieved faster for any future requests.
Redo log buffer cache records all changes made to the data files.
Data files, Control files, Redo log files, Password files, Parameter files
The SCN (System Change Number) is a unique ID that Oracle generates automatically for every committed transaction. It is recorded for every change in the redo entry. A DBA can get the current SCN by querying select current_scn from v$database; from SQL*Plus.
The PFILE is read at instance startup time to get specific instance characteristics. It is text-based and can be edited in an editor such as vi on UNIX or Notepad on Windows. Any changes made in the PFILE take effect only after the database is restarted.
The control file is a binary file which has the physical structure of a database.
It contains the number and locations of the log files, the database name, the timestamp of database creation, and checkpoint information. It records when the last database backup was taken and whether it failed or succeeded, as well as the last SCN. The control file is the heart of the database; without it, the database cannot be brought up. For this reason the control file should be multiplexed, so that even if one copy is corrupted or lost, the database can be restarted from a surviving copy or a backup.
The control file locations are shown in the initialization parameter file.
We can query v$controlfile to display the names of control files
From SQL*Plus we can execute: SQL> show parameter control_files;
The primary function of the redo log file is to record all changes made to data. When an instance crash happens and changes have not yet been written to the data files, they can be recovered from the redo log, so work is never lost.
COLD backup(User managed & RMAN) and HOT backup(User managed & RMAN)
- Recovery catalog contains the inventory of backups taken by RMAN from the database.
- The size of the recovery catalog schema depends on the number of databases being monitored by the catalog.
- It is used for restoring a physical backup, reconstruct it, and make it available to the Oracle server.
- RMAN can be used without recovery catalog.
- Recovery catalog also holds RMAN stored scripts.
When the database is active and running and during this time a backup is taken then it is hot backup.
Create spfile from pfile;
Then shut down the instance and start it up again.
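A minimal sketch of the full sequence, assuming a default $ORACLE_HOME/dbs location and an init file name that are illustrative only (adjust both for your environment):

```sql
-- Create the server parameter file from the text PFILE
-- (path and SID are assumptions, not from the original)
CREATE SPFILE FROM PFILE='/u01/app/oracle/product/19.0.0/dbhome_1/dbs/initORCL.ora';

-- Restart so the instance starts from the new SPFILE
SHUTDOWN IMMEDIATE;
STARTUP;

-- Verify the instance is now using the SPFILE
SHOW PARAMETER spfile;
```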
$ cp *.ctl /u01/app/backup/db_name (this command copies the control files)
$ cp *.log /u01/app/backup/db_name (this command copies all redo log files)
$ cp *.dbf /u01/app/backup/db_name (this command copies all data files)
You can query the V$ACTIVE_INSTANCES view to determine the member instances of the RAC cluster.
Select status from v$instance;
Select instance_name from v$instance;
Show parameter pfile;
alter system kill session 'SID,SERIAL#';
startup mount – mounts the control file
startup nomount – does not mount the control file
Select * from v$version;
Start the database in the nomount mode
- Create the control file from the control file backup and place it in the correct location
- Mount the database
- Recover the database
- Open the database
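The steps above can be sketched roughly as follows (a hedged example; the control file script path is an assumption, e.g. one produced earlier by ALTER DATABASE BACKUP CONTROLFILE TO TRACE):

```sql
STARTUP NOMOUNT;

-- Recreate the control file from the trace script and place the copies
-- in the locations named in CONTROL_FILES (path is an assumption)
@/u01/app/oracle/admin/ORCL/create_controlfile.sql

ALTER DATABASE MOUNT;

-- Recover using the backup control file, then open with RESETLOGS
RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
ALTER DATABASE OPEN RESETLOGS;
```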
Yes, the damaged blocks can be recovered.
You cannot bring up the database if the control file is lost or damaged.
$ ps -ef | grep databasename
Taking backups is an everyday task, so that data is not lost, as well as tracking every activity in the database.
1. Logical backup (exp/imp –> 9i , expdp/impdp –> 10g)
2. Physical backup
Physical backup – User Managed backup( Cold and Hot backup)
Physical backup – Server Manager backup( RMAN – Recovery Manager )
Logical backup (exp/imp)
We can take export(exp) in four levels.
1. Full database Level export/import
exp system/manager file='/u01/app/oracle/ctsfull.dmp' log='/u01/app/oracle/ctsfull.log' full=y
imp system/manager file='/u01/app/oracle/ctsfull.dmp' log='/u01/app/oracle/ctsfull_imp.log' full=y
2. Schema(User) Level export/import
Schema user level export/import
exp system/manager file='/u01/app/oracle/ctsuser.dmp' log='/u01/app/oracle/ctsuser.log' owner=ram
imp system/manager file='/u01/app/oracle/ctsuser.dmp' log='/u01/app/oracle/ctsuser_imp.log' fromuser=ram touser=ram
Note: ram is a user in the database
3.Table Level export/import
exp system/manager file='/u01/app/oracle/ctstab.dmp' log='/u01/app/oracle/ctstab.log' tables=ram.emp
imp system/manager file='/u01/app/oracle/ctstab.dmp' log='/u01/app/oracle/ctstab_imp.log' fromuser=ram touser=ram tables=emp
4.Tablespace Level export/import
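Following the pattern of the examples above, a tablespace-level export might look like this (a hedged sketch; the tablespace name users and file names are assumptions, not from the original):

```shell
exp system/manager file='/u01/app/oracle/ctstbs.dmp' log='/u01/app/oracle/ctstbs.log' tablespaces=users
imp system/manager file='/u01/app/oracle/ctstbs.dmp' log='/u01/app/oracle/ctstbs_imp.log' full=y
```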
The shared pool, database buffer cache, and redo log buffer are components of the SGA.
Generates the hash value and execution plan for SQL statements.
Data consistency and data recovery
- When a checkpoint occurs
- When dirty buffers reach the threshold
- When a buffer timeout occurs
- When a tablespace is put offline/read only or is dropped or truncated
- Full ckpt
- Thread ckpt
- File level ckpt
- Object level
- log switch
- Init file(Pfile /spfile) is read
- SGA is allocated
- Instance is started
- Alert/trace logs start being written
Data Control Language, i.e., GRANT, REVOKE
- Shrink temp files
- Recreate temp files
A Java-based utility that applies or rolls back patches to Oracle software.
It is a volume manager which holds datafiles in its disk groups.
Contents are evenly distributed and striped.
- ARBn (for rebalance)
- ASMB (runs when a copy command executes)
- GMON (disk membership)
Modify UNDO_RETENTION or use larger rollback segments
A profile is a set of resource limits (sessions per user, connect time, password expiry, failed login attempts) for a user.
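A minimal sketch of creating and assigning such a profile (the profile name, the user scott, and the limit values are all illustrative assumptions):

```sql
-- Illustrative resource and password limits
CREATE PROFILE app_user_profile LIMIT
  SESSIONS_PER_USER      3
  CONNECT_TIME           480   -- minutes
  FAILED_LOGIN_ATTEMPTS  5
  PASSWORD_LIFE_TIME     90;   -- days

-- Assign the profile to a user
ALTER USER scott PROFILE app_user_profile;
```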
- sysdba: Can do all kinds of DBA work and can also create a database.
- sysoper: Can do all kinds of DBA work but cannot create a database.
TNS – Transparent Network Substrate. An alias for OCI which has the connect string for the DB.
- Hot Backup: Backup is taken when DB is up and running.
- Cold Backup: Backup is taken when DB is down and in mount stage
- Physical -> Mount stage
- Active -> Read only mode
- Logical -> Read write mode
- Maximum Protection
- Maximum Availability
- Maximum Performance
- LMD(GES process)
- LMS(GCS process)
The cluster nodes lose connectivity with each other and start to function independently (split brain).
- OCR bkup –> $CRS_HOME/cdata/server_name
- OLR –> $CRS_HOME/cdata/ **.olr
Creates the oratab entry and copies the oraenv/dbhome scripts to the bin location.
Every 3 seconds, when 1 MB of redo has accumulated, when 1/3 of the buffer is full, or when a commit occurs.
- Block level backup can be taken
- Block level recovery
- Incremental backup can be taken
- uses Large Pool
- Skips unused blocks
Current –> LGWR currently writing
ACTIVE –> Logfile used for recovery
INACTIVE –>Logfile not used for recovery
INVALID –> The log file cannot be written or accessed
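These states can be checked with a query like the following sketch:

```sql
-- One row per redo log group, with its current state
SELECT group#, thread#, sequence#, status, archived
FROM v$log
ORDER BY group#;
```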
- LGWR (Log Writer)
- LNS (Log Network Server)
- ARCn (Archiver process)
- RFS (Remote File Server)
- MRP (Managed Recovery Process)
A collection of high-impact, low-risk, and proven fixes for a specific product.
All node –> Patches done at all nodes at a time
Rolling –> Each node is patched one by one.
Minimum Downtime Patching –> some nodes stay up while the other nodes are being patched.
- External –> No mirroring
- Normal –> Two way mirroring
- High –> Three way mirroring
- Fast start failover
- Active DataGuard added
- ACFS File system Introduced
Cleans up unwanted or abnormally terminated processes
- Releases locks or other resources
- Restarts dead dispatchers
Datablock –> Extent –> Segment –> Tablespace –> Database
A synonym is an alias for a table, view, sequence, or program unit.
- Redo Transport service
- Redo Apply service
- Role Transition
For a standalone database, if there are n redo log files in the primary, then (n+1) standby redo log files should be created on the standby.
For RAC, if there are n redo log files in the primary, then number of threads * (n+1) standby redo log files should be created on the standby.
alter database enable block change tracking using file 'Location/blk_change.dbf';
select * from v$block_change_tracking;
- Generates a set of plans for SQls
- Estimates cost for SQLs
- Compares plan and choose lowest cost
- Shutdown Services to DB
- shutdown DB
- shutdown ASM
- shutdown Nodeapps(VIP,ONS,Network/GSD)
- shutdown Cluster
It is the heart of the RAC feature, used for data integrity. Each instance has its own buffer cache; Oracle RAC combines these buffers logically and maintains integrity. GES and GCS are the services involved in it.
It is done when maintenance is planned. It is a reversible role transition: the primary is converted to standby and vice versa.
- awrrpt.sql (for standalone)
- awrgrpt.sql (for RAC at the DB level)
- awrrpti.sql (for RAC at the instance level)
- Db file sequential read
- DB file scattered read
- Log file parallel write
- Log file sync
- Buffer busy waits
- Free buffer waits
CRSd –> Cluster Ready Services daemon, mainly for managing HA (High Availability).
It takes an OCR backup every 4 hours.
To update the DB components with the new patch details and get them validated.
Check in dba_registry_history.
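For example, the patch history can be listed with a query like this sketch:

```sql
-- One row per registered patch/upgrade action
SELECT action_time, action, version, comments
FROM dba_registry_history
ORDER BY action_time;
```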
A binary file that records the physical structure of the DB, needed to start up and operate the database.
It includes the DB name, the datafile and redo log file names and locations, the timestamp of DB creation, the current log sequence number, checkpoint info, etc.
I will check MRP status in Standby DB
Try to restart MRP and make sure it is not waiting for ‘WAIT_FOR_GAP’ message. If they are lagging, then we must restore the missing logs.
Also, we can check alert logs on both standby and primary database.
Start/stop MRP on the standby DB.
On the primary, to check the error:
SQL> select status,error from gv$archive_dest where dest_id=2;
ERROR ORA-00270: error creating archive log
Try toggling log shipping with DEFER and ENABLE:
alter system set log_archive_dest_state_2='DEFER' scope=both sid='*';
alter system set log_archive_dest_state_2='ENABLE' scope=both sid='*';
We can get the current time with the query:
SQL> alter session set nls_date_format='yyyy-mm-dd hh24:mi:ss';
Session altered.
SQL> select sysdate from dual;
2014-02-08 11:05:52
I will use the query below to find out, from the views v$session and v$process.
$ mkdir -p /db_name/oracle/admin/scripts
$ cp -R (copies a directory tree)
$ rm -rf (deletes a directory)
$ scp rman_driver_10g.sh firstname.lastname@example.org:/GDSNP/oracle/admin/scripts
I will use a dynamic query to kill multiple sessions, like:
select 'alter system kill session '''||sid||','||serial#||''' immediate;' from v$session where SERVICE_NAME='Processing' and status='INACTIVE';
Check now sessions should be gone.
select inst_id,sid,serial#,machine,username,OSUSER,status from gv$session where SERVICE_NAME='Processing' and status='INACTIVE';
select tablespace_name,sum(bytes)/1024/1024/1024 as GB from dba_data_files group by tablespace_name;
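To complement the allocated size above, the free space per tablespace can be checked the same way (a hedged sketch):

```sql
-- Free space (GB) per tablespace, from the free-extent view
SELECT tablespace_name, SUM(bytes)/1024/1024/1024 AS free_gb
FROM dba_free_space
GROUP BY tablespace_name;
```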
I will investigate V$SYSAUX_OCCUPANTS:
col OCCUPANT_NAME for a30
col SCHEMA_NAME for a20
select OCCUPANT_NAME, SPACE_USAGE_KBYTES/1024 "Space (MB)", SCHEMA_NAME from V$SYSAUX_OCCUPANTS;
It is a great feature, one of my favorites, which enables a DBA to flash the database back in time with minimal effort. To use this feature, flashback must be enabled on the database (see the FLASHBACK_ON column of v$database) and a restore point should be created beforehand.
SQL> select flashback_on from v$database;
FLASHBACK_ON
RESTORE POINT ONLY
*ERROR at line 1:
ORA-03206: maximum file size of (4194304) blocks in AUTOEXTEND clause is out of range
I will limit the max size within the allowed range, i.e., < 32 GB:
alter tablespace ORA_DATA add datafile '+MRCP_DATA' size 1G autoextend on next 25M maxsize 31G;
I will check if database is reachable and do ping test
tnsping '(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=alphaddbp001-vip)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=ORAPDB1)))'
The shrinkage is reported in blocks; to find out how much space will be released, multiply SHRINKAGE_POSSIBLE by the database block size (shown by show parameter block_size).
SQL> select file_name, hwm, blocks total_blocks, blocks-hwm+1 shrinkage_possible
from dba_data_files a,
( select file_id, max(block_id+blocks) hwm
from dba_extents
group by file_id ) b
where a.file_id = b.file_id;
FILE_NAME HWM TOTAL_BLOCKS SHRINKAGE_POSSIBLE
SQL> show parameter block_size
set linesize 4000
set pagesize 0
set markup html on spool on
spool select_result.html
Run the required query
select table_name, tablespace_name from all_tables
where owner = 'SYS'
and tablespace_name is not null;
spool off;
Data Guard protects production data and keeps critical business applications running in case of a disaster. An Oracle Data Guard configuration consists of one production database and one or more standby databases, irrespective of location (provided they can communicate with each other). Both the primary and standby databases can be either standalone or RAC databases. 11g also supports heterogeneous configurations in which the primary and standby systems have different bit/processor architectures or OS versions. There are 3 types of standby database.
Physical Standby Database
It is a block-for-block replica of the production database, and so has exactly the size of the production database. A physical standby database is a robust, efficient, and easy-to-manage DR solution. It reduces downtime for the application with fast role-switch capabilities in case of planned or unplanned outages, and can prevent data loss and corruption. It supports all data types and can offload workload from the primary database. The physical standby database is kept in sync with production by Redo Apply.
Logical Standby Database
This type of standby database can be altered to have a different structure, and so it is not always an exact replica of production. The database is updated using SQL statements generated by mining the redo logs. It is always open and allows users to access it for queries and reporting at any time.
Snapshot standby database
It allows users to use the standby database in read-write mode (like a standalone database) while still receiving redo logs from the primary. However, the redo logs received from the primary are applied only when the snapshot is converted back to a physical standby. This is a good feature for DR drills/testing, bug fixing on production, or reporting purposes.
There are 3 types of protection mode for standby database.
Maximum Protection
- Highest level of protection and zero data loss in case of any DR event.
- Uses SYNC/AFFIRM I/O, where the primary waits for acknowledgement that redo has been transmitted synchronously and written to disk. The primary database commits only when the transaction has been applied to the standby database.
- At least one standby database must be synchronized before a transaction commits at the primary. If the standby site is unavailable because of the network or any other reason, the primary database is brought down to maintain the level of data protection. It is therefore recommended to have a minimum of 2 standbys when configuring maximum protection mode, to avoid unwanted outages on production.
- Standby redo logs are required for this configuration.
Maximum Performance
- Highest level of performance and lowest degree of protection compared to the other modes.
- Data loss can happen in case of failover.
- Uses ASYNC/NOAFFIRM I/O, where the primary does not wait for acknowledgement of redo transmission. Updates are committed to the production database before the log file arrives at the DR site.
Maximum Availability
- A mixture of Maximum Protection and Maximum Performance. It behaves like maximum protection; however, if the standby site becomes unavailable, it starts working in maximum performance mode.
- Can use either SYNC/AFFIRM (the primary waits for acknowledgement that redo has been transmitted synchronously and written to disk) or SYNC/NOAFFIRM (the primary waits only for the acknowledgement that redo has been transmitted synchronously). The DBA selects the option based on whether the application needs the benefit of performance or of protection.
- Standby redo logs are required for this configuration.
We can activate it by following the steps below.
alter database mount standby database;
select name from v$database;
alter system set standby_file_management=manual;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH ;
shutdown immediate
startup mount
CREATE RESTORE POINT DR_TEST_25032016 GUARANTEE FLASHBACK DATABASE;
ALTER DATABASE ACTIVATE STANDBY DATABASE;
ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE;
ALTER DATABASE OPEN;
Metadata of a user:
set long 90000 pages 0 lines 400 trimsp on
select dbms_metadata.get_ddl('USER','NAVA') from dual;
select dbms_metadata.get_granted_ddl('SYSTEM_GRANT','NAVA') from dual;
select dbms_metadata.get_granted_ddl('OBJECT_GRANT','NAVA') from dual;
select dbms_metadata.get_granted_ddl('ROLE_GRANT','NAVA') from dual;
ORA-39095: Dump file space has been exhausted: Unable to allocate 8192 bytes
Job "SYS"."SYS_EXPORT_FULL_07" stopped due to fatal error at Mon Oct 29 00:56:43 2018 elapsed 0 03:42:59
If the dump size is huge and we limit the file size using 'filesize=...' together with the wildcard specification (%U), the dump can expand to at most 99 files in a single directory. Beyond that, we get this "space has been exhausted" error.
Calculate the number of dump files that will be generated with the help of the filesize and estimate_only clauses. If the estimate goes beyond 99 dump files, add an additional directory.
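A hedged sketch of the workaround, spreading the %U dump files across two directory objects (the directory names DIR1/DIR2 and the 5G filesize are assumptions, not from the original):

```shell
# First estimate the total dump size without writing any files
expdp system/manager full=y estimate_only=y

# Then spread the dump files over two directory objects so neither
# hits the 99-file limit for a single %U wildcard
expdp system/manager full=y filesize=5G \
      dumpfile=DIR1:full_%U.dmp,DIR2:full2_%U.dmp \
      logfile=DIR1:full_exp.log
```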
- Tablespaces are not put into backup mode, so there is no extra redo generated during online backups.
- Incremental backups back up only those data blocks that changed since the last backup.
- Detection of corrupt blocks.
- Built in reporting and listing commands.
- Parallelization of I/O operations.
- You should change your init<SID>.ora file with the following information
- log_archive_dest='/u01/oradata/archlog' (for example)
- log_archive_start=true (prior to 10g)
- sql>startup mount;
- sql> alter database archivelog;
- sql>alter database open;
- Make sure you backup your database before switching to ARCHIVELOG mode.
SHUTDOWN (or) SHUTDOWN NORMAL
New connections are not accepted, and the DB will not come down until existing connections exit or close themselves.
SHUTDOWN TRANSACTIONAL
New connections are not accepted, and the DB will not come down until existing transactions commit and their sessions exit or close themselves.
SHUTDOWN IMMEDIATE
New connections are not accepted, and the DB will not come down until all committed transactions are written to the DB (rolled forward) and all uncommitted data is removed (rolled back).
SHUTDOWN ABORT
Not my recommendation; it is simply switching the database off.
SHUTDOWN NORMAL, TRANSACTIONAL, IMMEDIATE
are clean shutdown methods as database maintains its consistency.
- Oracle provides several Indexing schemas
- B-tree index – Retrieves a small amount of information from a large table.
- Domain Index – An application-specific index.
- Hash cluster Index – Refers to the index that is defined specifically for a hash cluster.
- The alert log is a log file that records database-wide events and is used for troubleshooting.
- We can find its location in the BACKGROUND_DUMP_DEST parameter.
- Following events are recorded in ALERT log file:
- Database shutdown and startup information.
- All non-default parameters.
- Oracle internal (ORA-600) errors.
- Information about a modified control file.
- Log switch change.
- It is generated only if the value of SQL_TRACE parameter is set to true for a session.
- SQL_TRACE parameter can be set at database, instance, or session level.
- The location of user process trace file is specified in the USER_DUMP_DEST parameter.
- There are different types of locks, which are given as follows:
- System locks – controlled by oracle and held for a very brief period of time.
- User locks – Created and managed using dbms_lock package.
- Different types of user locks are given as follows
- UL Lock – Defined with dbms_lock package.
- TM Lock – Acquired once for each object, which is being changed. It is a DML lock. The ID1 column identifies the object being modified.
- Db_file_sequential_read event generally indicates index usage.
- It shows an access by row id.
- While the db_file_scattered_read event indicates a full table scan.
- Db_file_sequential_read event reads a single block at one time.
- Whereas db_file_scattered_read event reads multiple blocks.
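System-wide totals for the two events can be compared with a query like this sketch:

```sql
-- Compare cumulative waits for single-block (index) vs
-- multi-block (full scan) reads since instance startup
SELECT event, total_waits, time_waited
FROM v$system_event
WHERE event IN ('db file sequential read', 'db file scattered read');
```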
select round((i.instance/zn.db)*100,2) as "%AGE_CONN" from (select count(*) as instance from v$session where username not like '%SYS%') i, (select count(*) as db from gv$session where username not like '%SYS%') zn;
WHERE TABLE_NAME = 'MFST_NBR_T'
AND table_owner = 'JESIESFS'
and operation = 'INSERT'
order by start_timestamp desc
select object_name,object_type,owner from dba_objects where object_name='USER_MCC_REPORT_UPDATE';
set long 9999999
set pages 0
select text from user_source where name = 'ENDUR.USER_MCC_REPORT_UPDATE' order by line;
- select text from dba_source where name = 'USER_MCC_REPORT_UPDATE' order by line;
- To find out how many sessions are connected to the temporary tablespaces: select count(*),tablespace_name from v$sort_segment group by tablespace_name;
- The database stores the most crucial data of a business, so it is important to keep the data safe, which is achieved by backups.
- The following files must be backed up
- Database files (the header of the datafiles is frozen during backup)
- Control files
- Archived log files
- Parameter files (spfile and pfile)
- Password file
- A full backup is a backup of all the control files, data files, and the parameter file (both the binary SPFILE and the static PFILE).
- You must also backup your ORACLE_HOME binaries which are used for cloning.
- A full backup can be performed when our database runs in NON ARCHIVE LOG mode.
- As a rule of thumb, you must shut down your database before you perform a full backup.
- There are 2 types of backup’s
- COLD backup(User managed & RMAN)
- HOT backup(User managed & RMAN)
- Hot backup is taken when the database is still online, and the database should be in ARCHIVELOG mode.
- Cold backup is taken when the database is in offline mode.
- Hot backup is an inconsistent backup, whereas cold backup is a consistent backup.
- RMAN executable
- Server process
- Target database
- Recovery catalog database
- Media management layer
- Backup sets and backup pieces
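A minimal RMAN session touching these components might look like this (a hedged sketch that connects to the target with OS authentication and no recovery catalog):

```shell
rman target /
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
RMAN> LIST BACKUP SUMMARY;
RMAN> EXIT;
```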
- As long as all data files are safe and the backup completed successfully, it is OK.
- We can restore the control file by performing the following commands: CONNECT INTERNAL, STARTUP MOUNT, and then
- you can TAKE ANY OFFLINE TABLESPACE (Read-only)
- after that you have to ALTER DATABASE DATAFILE (OFFLINE)
- RECOVER DATABASE USING BACKUP CONTROL FILE
- ALTER DATABASE OPEN RESETLOGS
- BRING READ ONLY TABLE SPACE BACK ONLINE
- Shutdown and back up the system. Now you may start again the DB
- Then give the command ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
- This output can be used for control file recovery as well.
- If control file backup is not available, then the following will be required
- CONNECT INTERNAL STARTUP NOMOUNT
- CREATE CONTROL FILE …..;
- But we need to know all of the datafiles, logfiles, and the settings of MAXLOGFILES, MAXLOGMEMBERS, MAXLOGHISTORY, and MAXDATAFILES for the database to use the command.
- DBWR is a background process which writes data blocks from the DB buffer cache to the data files.
- Every 3 seconds
- Whenever checkpoint occurs
- When server process needs free space in database buffer cache to read new blocks.
- Whenever number of changed blocks reaches a maximum value.
- LGWR writes redo or changed information from redo log buffer cache in cyclic fashion to redo log files
- LGWR writes to redo files when the redo log buffer is 1/3 rd full.
- It also writes for every 3 seconds.
- Before DBWR writes modified blocks to the datafiles, LGWR writes to the log file.
- SYSTEM tablespace is created automatically during database creation and has to be online always.
- SYSAUX tablespace
- UNDO tablespace
- TEMP tablespace
- UNDO & TEMP tablespace are optional when you create a database.
- Init<SID>.ora parameter file or SPFILE is accessed first
- parameters in this file are necessary to start a DB.
- The SPFILE is created by default during database creation, whereas a PFILE has to be created manually (or from the SPFILE).
- PFILE is static text file whereas SPFILE is binary file
- You cannot edit the binary SPFILE, but the static PFILE can be edited.
- Changes made in SPFILE are dynamically effected with running database whereas PFILE changes are effected after restarting the database.
- We can backup SPFILE using RMAN.
- SPFILE is available from Oracle 9i and above.
- Parameters in the SPFILE can be changed dynamically.
- RMAN can't back up a PFILE, but it can back up an SPFILE.
- Using the SPFILE is a best practice, as it reduces human typing errors.
- In UNIX they are stored in $ORACLE_HOME/dbs, and in $ORACLE_HOME/database on Windows.
- Oracle by default starts with SPFILE located in $ORACLE_HOME/dbs.
- You can easily create PFILE from SPFILE simply by giving create PFILE from SPFILE;
- All the parameter values are now updated with SPFILE.
- Similarly, create SPFILE from PFILE; command creates SPFILE from PFILE.
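For instance, a dynamic change written to both memory and the SPFILE (the parameter and value here are illustrative only):

```sql
-- SCOPE=BOTH changes the running instance and persists in the SPFILE;
-- SCOPE=SPFILE would take effect only after a restart
ALTER SYSTEM SET sga_target = 2G SCOPE=BOTH;
```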
- The PGA_AGGREGATE_TARGET parameter specifies the target aggregate PGA memory available to all server processes attached to an instance.
- Oracle sets its default value to 20% of the SGA size.
- It is used to set overall size of work-area required by various components.
- Its value can be known by querying v$pgastat dynamic performance view.
- It can also be checked from SQL*Plus with the command show parameter pga.
- SELECT blocked_locks.pid AS blocked_pid, blocked_activity.usename AS blocked_user, blocking_locks.pid AS blocking_pid, blocking_activity.usename AS blocking_user, blocked_activity.query AS blocked_statement, blocking_activity.query AS blocking_statement FROM pg_catalog.pg_locks blocked_locks
- JOIN pg_catalog.pg_stat_activity blocked_activity ON blocked_activity.pid = blocked_locks.pid
- JOIN pg_catalog.pg_locks blocking_locks
- ON blocking_locks.locktype = blocked_locks.locktype
- AND blocking_locks.DATABASE IS NOT DISTINCT FROM blocked_locks.DATABASE
- AND blocking_locks.relation IS NOT DISTINCT FROM blocked_locks.relation
- AND blocking_locks.page IS NOT DISTINCT FROM blocked_locks.page
- AND blocking_locks.tuple IS NOT DISTINCT FROM blocked_locks.tuple
- AND blocking_locks.virtualxid IS NOT DISTINCT FROM blocked_locks.virtualxid
- AND blocking_locks.transactionid IS NOT DISTINCT FROM blocked_locks.transactionid
- AND blocking_locks.classid IS NOT DISTINCT FROM blocked_locks.classid
- AND blocking_locks.objid IS NOT DISTINCT FROM blocked_locks.objid
- AND blocking_locks.objsubid IS NOT DISTINCT FROM blocked_locks.objsubid
- AND blocking_locks.pid != blocked_locks.pid
- JOIN pg_catalog.pg_stat_activity blocking_activity ON blocking_activity.pid = blocking_locks.pid
- WHERE NOT blocked_locks.granted;
- With this we can easily find the culprit session and kill it.
- set line 400 pages 100
- col COLUMN_NAME for a30
- SELECT cols.table_name, cols.column_name, cols.position, cons.status, cons.owner FROM all_constraints cons, all_cons_columns cols
- WHERE cols.table_name = 'PMOTN_CD_ASSN_T_OLD' AND cons.constraint_type = 'P'
- AND cons.constraint_name = cols.constraint_name AND cons.owner = cols.owner
- ORDER BY cols.table_name, cols.position;