1) What errors have you faced in your career, and how can we prevent them?
> manual error
> patching missed on other nodes -- prevent with a maker-checker process
> block corruption : a 15-min apply lag on the DR site kept the corruption from reaching the standby
-- Rollback
-- Fragmentation
##############################
Data Pump Questions
##############################
##############################
Compression
##############################
##############################
Backup & Recovery (RMAN) Questions
##############################
1) What is intrablock recovery and interblock recovery
Intrablock corruption is corruption (logical or physical) within a single database block; interblock corruption is (logical) corruption between blocks. Intrablock corruption can be detected by RMAN VALIDATE, ANALYZE, or DBVERIFY, but interblock corruption can only be detected by ANALYZE (e.g. ANALYZE TABLE ... VALIDATE STRUCTURE CASCADE).
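As a sketch, hedged commands for detecting each kind (the table name, file path, and block size are illustrative):

```sql
RMAN> VALIDATE DATABASE;                                  -- populates v$database_block_corruption
SQL> SELECT * FROM v$database_block_corruption;
SQL> ANALYZE TABLE scott.emp VALIDATE STRUCTURE CASCADE;  -- can catch interblock issues
$ dbv file=/u01/oradata/PROD/users01.dbf blocksize=8192
```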
2) Can we take an RMAN backup on a client machine?
Yes, you can run the RMAN client on a client machine and connect to a remote target database. However, the actual backup files are created by server processes running on the database server host and stored in a location accessible to that server, not on the client machine's local disk, unless you configure a shared drive.
3) Can I take an RMAN backup if the database is in NOARCHIVELOG mode?
Yes, but only a consistent backup taken with the database in MOUNT state after a clean shutdown; open (hot) backups require ARCHIVELOG mode.
4) What is a fractured block, and does it occur in user-managed or RMAN backups?
5) How to check fuzzy datafiles and the archives needed for restoration
Ans : query the FUZZY and CHECKPOINT_CHANGE# columns of v$datafile_header; RMAN RESTORE DATABASE PREVIEW lists the backups and archives needed.
6) What happens when we put a tablespace in backup mode? Is data still written to the datafiles?
Ans : Yes, the datafiles remain accessible and writable while the tablespace is in backup mode. The datafile headers are frozen (the checkpoint SCN stops advancing), data continues to be written to the datafiles, and the first time each block is modified its whole image is written to the redo log, hence the extra redo generation.
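As a sketch, a user-managed hot backup relying on this behavior (the tablespace name and paths are illustrative):

```sql
SQL> ALTER TABLESPACE users BEGIN BACKUP;
$ cp /u01/oradata/PROD/users01.dbf /backup/users01.dbf   -- OS-level copy
SQL> SELECT file#, status FROM v$backup;                 -- shows ACTIVE while in backup mode
SQL> ALTER TABLESPACE users END BACKUP;
```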
7) What happens on instance recovery? (Is uncommitted data also written during a crash?)
8) Difference between obsolete and expired
In Oracle's Recovery Manager (RMAN), the key difference is: obsolete means the backup is no longer needed according to your retention policy, while expired means the actual backup files are missing from the disk or tape where RMAN expects to find them.
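The distinction maps onto different RMAN commands; a quick sketch:

```sql
RMAN> REPORT OBSOLETE;        -- backups no longer needed per retention policy
RMAN> DELETE OBSOLETE;
RMAN> CROSSCHECK BACKUP;      -- marks pieces missing on disk/tape as EXPIRED
RMAN> LIST EXPIRED BACKUP;
RMAN> DELETE EXPIRED BACKUP;
```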
9) Block Corruption :
10) How to restore from obsolete backups
Ans : a) increase the retention policy b) run CROSSCHECK (and CATALOG if needed) after placing the backup pieces back in their location
11) If a newly created datafile is lost and we don't have any backup :
Ans : ALTER DATABASE CREATE DATAFILE, then recover it, provided we have all the archives generated since the datafile was created.
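A minimal sketch of that recovery (the file path below is an assumption):

```sql
SQL> ALTER DATABASE CREATE DATAFILE '/u01/oradata/PROD/users02.dbf';
SQL> RECOVER DATAFILE '/u01/oradata/PROD/users02.dbf';
SQL> ALTER DATABASE DATAFILE '/u01/oradata/PROD/users02.dbf' ONLINE;
```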
12) Steps for restoring a RAC database : Ans : set cluster_database=FALSE, restore and recover from one node as for a single instance, set cluster_database=TRUE, then start the database on all nodes.
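A minimal sketch of that flow (the database name proddb and use of an SPFILE are assumptions):

```sql
SQL> ALTER SYSTEM SET cluster_database=FALSE SCOPE=SPFILE SID='*';
-- restart one instance, then restore/recover as for a single instance:
RMAN> STARTUP MOUNT;
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN;
SQL> ALTER SYSTEM SET cluster_database=TRUE SCOPE=SPFILE SID='*';
$ srvctl start database -d proddb
```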
13) using archives of previous incarnation
Ans : RMAN recovery through RESETLOGS: RMAN> reset database to incarnation 3;
Check that the log_archive_format includes '%r' (the resetlogs ID).
Oracle 10g introduces a new format specification for archived log files. This new format avoids overwriting archived redo log files with the same sequence number across incarnations.
If Flashback Database is enabled on both primary and standby, then no action is required from the DBA side. I have tested the same.
If flashback is NOT enabled on the standby database and the standby has applied redo past the new resetlogs SCN, then recreate the standby database.
-- Primary
RMAN> list incarnation of database;
RMAN> reset database to incarnation 8;
run
{
set until scn 1759315;
restore database;
recover database;
}
SQL> SELECT TO_CHAR(RESETLOGS_CHANGE# - 2) FROM V$DATABASE;
-- Standby
SQL> SELECT TO_CHAR(CURRENT_SCN) FROM V$DATABASE;
If the value of CURRENT_SCN is larger than resetlogs_change# - 2 (from the primary query above), flash back the standby database:
SQL> FLASHBACK STANDBY DATABASE TO SCN resetlogs_change# - 2;
5) What are the different protection modes in Data Guard?
Here's a brief overview of the modes:
Maximum Performance (Default): Asynchronous (ASYNC) redo transport; minimal performance impact, slight risk of data loss.
Maximum Availability: Synchronous (SYNC) transport; zero data loss when the standby is available, but primary continues if standby fails.
Maximum Protection: Synchronous (SYNC) transport with mandatory acknowledgment; guarantees zero data loss, but the primary shuts down if the standby is unavailable, making it rarely used in production.
show configuration
show database proddb
show database proddb_st
edit database proddb_st set state=apply-on;
EDIT DATABASE proddb_st SET PROPERTY LogXptMode='SYNC';
EDIT CONFIGURATION SET PROTECTION MODE AS MaxAvailability;
6) What are the benefits of Active Data Guard?
Enhanced Data Protection and Automatic Corruption Repair
Offloading Read-Only Workloads
7) Can we take an RMAN backup on Data Guard?
Ans : Yes, backups can be offloaded to a physical standby; use a recovery catalog so that backups taken on the standby are usable for restoring the primary.
##############################
Upgrade Questions
##############################
##############################
Migration Questions
##############################
1) 10g migration : backup of dblink , roll back / downgrade , revert compatibility to lower version
2) What does catupgrd.sql do?
The catupgrd.sql script is an essential utility in earlier versions of Oracle Database, used to perform manual database upgrades from one major release to another. Its primary function is to recreate the data dictionary in the new release's format, ensuring all internal components are compatible.
Check violations and the endian format before we start.
Below is the traditional transportable tablespace method; however, we can also use the Oracle-provided rman_xttconvert package (per MOS note 2471245.1), which uses incremental backups to reduce downtime.
4) How to lower the compatibility version , to roll back the migration .
Lowering the COMPATIBLE initialization parameter in Oracle Database is generally not supported once it has been raised to a higher version, as the upgrade process modifies on-disk structures (control files, data files, redo logs).
If you have already raised COMPATIBLE to the new version (e.g., 19.0.0 or 23.0.0), a direct downgrade (lowering the COMPATIBLE parameter) is not possible.
Here are the supported scenarios and methods for rolling back a migration, depending on whether COMPATIBLE was raised.
Scenario A: COMPATIBLE was NOT raised (Recommended Migration Practice)
If you upgraded to a new release (e.g., 12c or 19c) but did not change the COMPATIBLE parameter from the old value (e.g., 11.2.0), you can perform a traditional downgrade.
Stop the Upgraded Database: Shutdown the database instance.
Start in Downgrade Mode: STARTUP DOWNGRADE;
Run Downgrade Script: Execute the catdwgrd.sql script located in the new Oracle home's rdbms/admin directory.
Shutdown: SHUTDOWN IMMEDIATE;
Revert Oracle Home: Set your environment variables (ORACLE_HOME, PATH) back to the old release.
Run Reload Script: Start the database in UPGRADE mode using the old home, then run catrelod.sql.
Recompile Objects: Run utlrp.sql.
Scenario B: COMPATIBLE was raised (Failed/Unsuccessful Migration)
If COMPATIBLE was increased, you cannot simply change it back. If you try, the database will fail to open, likely with ORA-00201: control file version incompatible.
You must use one of the following, in order of preference:
1. Restore from Full Backup (Highest Success Rate)
The safest way to roll back is to restore the database from a full backup taken before the migration.
2. Flashback Database
If you created a guaranteed restore point before the migration, flash back to that point.
3. Data Pump Export/Import (Logical Migration)
If a physical backup is not available, you must move the data logically to an old version database:
Install the original (lower) Oracle version and create a new database.
Use Data Pump Export (expdp) from the new version, but set the VERSION parameter to match the old database version.
Use Data Pump Import (impdp) to load the data into the old version database.
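A hedged sketch of that export/import (the directory object, dump file names, and 12.2 target version are assumptions):

```
# From the NEW home: export in a format the old release can read
expdp system FULL=Y VERSION=12.2 DIRECTORY=dp_dir DUMPFILE=full_v122.dmp LOGFILE=full_v122.log
# From the OLD home/database: import
impdp system FULL=Y DIRECTORY=dp_dir DUMPFILE=full_v122.dmp LOGFILE=imp_v122.log
```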
Important Considerations
Time Zone Files: If you upgraded the time zone file version (e.g., to v32) during the migration, you must have that same time zone file available in the old Oracle home, or downgrade will fail.
Multitenant: If you migrated a non-CDB to a PDB, you cannot use downgrade scripts to go back to non-CDB; you must use Data Pump.
Oracle ASM: If you raised compatible.asm or compatible.rdbms for disk groups, those disk groups cannot be mounted by the older version of Oracle.
Disclaimer: Oracle strongly recommends creating a full backup before altering the COMPATIBLE parameter or starting a downgrade.
##############################
Patching Questions
##############################
1) Steps of patching using opatchauto
2) Why do we apply the sticky bit?
##############################
Oracle RAC and ASM Questions
##############################
1) What is the difference between node eviction and split brain?
Ans : Split brain: the condition where cluster nodes lose interconnect communication and form independent sub-clusters, each believing it owns the shared resources. Node eviction: the protective action taken by Clusterware to resolve a split-brain scenario and maintain data consistency.
2) Why do we need a SCAN listener if we already have VIPs?
Why use a VIP:
1) failover even if TAF is not used; the user can connect using the VIP of a down node
2) no need to wait for a TCP/IP timeout for error detection when a node is down
Before SCAN, client-side configuration (like the tnsnames.ora file) had to list every node's VIP in the cluster. If a node was added or removed, every client's configuration needed updating.
Rapid Node Failure Detection: If a node fails, its VIP fails over to another node. The failed-over VIP immediately returns a connection refused error to the client, allowing the client to quickly try another address in its connection string instead of waiting for a lengthy TCP timeout. This makes the failover process fast
In pre-11gR2 environments, before VIPs, clients could take up to 2 minutes to decide (at the TCP level) that a node was down, purely because of TCP timeouts, which differ from platform to platform. Those 2 minutes were unacceptable, and it was a good thing that Oracle understood and addressed this issue.
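For contrast, a hypothetical tnsnames.ora entry using SCAN: one stable name replaces listing every node's VIP (the host and service names below are made up):

```
PRODDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = prod-scan.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = proddb))
  )
```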
3) How to recover from loss of the OCR if you don't have a backup
To recover a lost OCR (Oracle Cluster Registry) without a backup, you must rebuild it: stop the clusterware, use rootdelete.sh (if available) and root.sh from the primary node to recreate the OCR and voting disks, re-run root.sh on all other nodes, and then re-apply any patches and reconfigure resources such as VIPs and ONS.
4) List background processes specific to rac
Core Global Processes
LMON (Global Enqueue Service Monitor): Monitors the entire RAC cluster to manage global resources and handle cluster reconfigurations, such as node joins, leaves, or evictions. It monitors the health of other core RAC processes like LMD and LMSn.
LMSn (Global Cache Service Process): These processes (e.g., LMS0, LMS1, etc., up to 10) are responsible for managing the Global Cache Service (GCS) and performing the actual cache fusion data block transfers between instances via the interconnect. They create consistent read versions of blocks and handle messages to remote instances.
LMD (Global Enqueue Service Daemon): Manages global lock requests and handles lock conversions and deadlock detection across the cluster. It communicates lock status between instances to ensure data integrity.
LCKn (Lock Process): Works with LMD to manage global lock structures and states at the instance level (e.g., LCK0).
Supporting RAC Processes
DIAG (Diagnosability Daemon): Monitors the health of instance processes and captures diagnostic data for process failures and general cluster health.
ACMS (Atomic Controlfile Memory Service): An agent process (from Oracle 11g) that ensures distributed SGA memory updates related to the control file are globally committed or aborted across all instances, maintaining consistency.
GTXn (Global Transaction Process): These processes (e.g., GTX0-j, from Oracle 11g) provide transparent support for XA global transactions in a RAC environment, with the number of processes auto-tuned based on workload.
RMSn (RAC Management Processes): Perform manageability tasks for RAC, such as creating resources when new instances are added to the cluster.
RSMN (Remote Slave Monitor): Manages the creation and communication of background slave processes on remote instances, performing tasks on behalf of a coordinating process in another instance.
LMHB (Global Cache/Enqueue Service Heartbeat Monitor): Monitors the heartbeat of the LMON, LMD, and LMSn processes to ensure they are running normally.
These processes work in concert with the standard Oracle background processes (like SMON, PMON, DBWn, LGWR, etc.) to deliver a highly available and scalable database solution.
5) What are the RMAN methods to move a database to ASM?
Answer : Backup as Copy and RMAN Convert
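A minimal sketch of the "backup as copy" route (the diskgroup name is an assumption; moving the controlfile and spfile to ASM is omitted here):

```sql
RMAN> BACKUP AS COPY DATABASE FORMAT '+DATA';
RMAN> SWITCH DATABASE TO COPY;   -- repoint the controlfile to the ASM copies (database mounted)
RMAN> RECOVER DATABASE;
SQL> ALTER DATABASE OPEN;
```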
6) What is Use of HAIP
Oracle HAIP (Highly Available IP) provides redundant, load-balanced, and fault-tolerant private network communication (cluster interconnect) for Oracle Real Application Clusters (RAC), eliminating single points of failure by automatically failing over traffic from a down interface to a working one, without requiring OS-level network bonding. It creates virtual IPs on multiple physical network adapters, ensuring uninterrupted communication for database processes and heartbeats between cluster nodes
8) Why could we not store the voting disk and OCR on ASM (pre-11gR2)?
Ans : Because the OCR and voting disk are required to bring CRS resources up, and CRS would bring the node apps up before the ASM instance, a chicken-and-egg problem.
But with the GPnP profile (and the OLR on the local file system), this is possible from 11gR2 onward.
9) Why do we use a switch rather than a direct (crossover) cable for the private interconnect?
For configurations with more than two nodes, a crossover cable obviously would not work. The other reason is that with a switch, when one NIC or cable breaks, the healthy node still sees its own physical link as up and can decide to evict the broken node. With a crossover cable, the clusterware would have to guess, since both physical links would be down.
10) Taking a manual backup of the OCR and voting disk
ocrconfig -manualbackup
ocrconfig -export /path/to/ocr_export.dmp (needs CRS stopped)
Backing up Voting Disks:-
Run the below command to back up the voting disk.
$ dd if=voting_disk_name of=backup_file_name
You don't directly back up the voting disk (VD) with crsctl in modern Oracle Clusterware (11gR2+); instead, it's implicitly backed up within the Oracle Cluster Registry (OCR) which is automatically backed up every 4 hours
As the OCR files are in an ASM diskgroup, CRS cannot access the OCR at startup to find the cluster resource information, because at that point the ASM instance is also down. So the OLR file (which is on the local file system) is read to start the required local resources.
13) How to stop multiple databases sharing the same home, in RAC and non-RAC
-- For RAC (the -statefile argument is required; it records what was stopped so srvctl start home can restart it)
srvctl stop home -oraclehome /u01/app/oracle/product/19c/dbhome_1 -statefile /tmp/dbhome1_state.txt -stopoption IMMEDIATE
-- For non-RAC, use a shell script driven by /etc/oratab
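For the non-RAC case, a minimal dry-run sketch: it parses oratab for SIDs registered under a given home and prints the shutdown plan. The sample oratab, the stop_home name, and the paths are all illustrative; the real shutdown (commented out) would set ORACLE_SID/ORACLE_HOME and run sqlplus per SID.

```shell
#!/bin/sh
# Dry run: list the SIDs in oratab that use a given ORACLE_HOME,
# i.e. the databases we would shut down one by one.
ORATAB=/tmp/oratab.demo
cat > "$ORATAB" <<'EOF'
# SID:ORACLE_HOME:AUTOSTART
PROD1:/u01/app/oracle/product/19c/dbhome_1:Y
PROD2:/u01/app/oracle/product/19c/dbhome_1:N
OTHER:/u01/app/oracle/product/12c/dbhome_1:Y
EOF
stop_home() {
  home="$1"
  grep -v '^#' "$ORATAB" | awk -F: -v h="$home" '$2 == h {print $1}' |
  while read -r sid; do
    # Real run: export ORACLE_SID="$sid" ORACLE_HOME="$home" and issue
    # "shutdown immediate" via "$home/bin/sqlplus" -s / as sysdba.
    echo "would stop $sid using $home"
  done
}
stop_home /u01/app/oracle/product/19c/dbhome_1
```

Removing the echo (and uncommenting the sqlplus call) turns the dry run into the actual stop script.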
16) User equivalence : what is its use, and can we disable it after RAC installation?
User equivalence in Oracle RAC (passwordless SSH between nodes for the oracle/grid users) ensures that the clusterware and installer processes can access resources (files, directories, and other nodes) without being prompted for a password. The running cluster does not need it, so it can be disabled after installation, but operations such as addnode, opatchauto, and upgrades will require it again.
17) What is the difference between GPnP and OLR functionality?
The OLR is the first file read during clusterware startup; it helps initialize OHASD. OHASD then brings up CSSD and GPnPD, and CSSD reads the GPnP profile (which holds bootstrap cluster configuration such as the interconnect definition and the ASM discovery string).
19) What is exclusive mode in CRS?
crsctl start crs -excl
This mode starts the clusterware stack on one node only and allows the ASM instance to start without a mounted voting disk, which is necessary for OCR and voting disk recovery.
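A hedged outline of the OCR/voting disk restore that uses this mode (the backup path and diskgroup name are assumptions):

```
# crsctl stop crs -f                 (on all nodes)
# crsctl start crs -excl -nocrs     (one node, exclusive mode)
# ocrconfig -restore /u01/app/grid/cdata/mycluster/backup00.ocr
# crsctl replace votedisk +OCR_DG
# crsctl stop crs -f
# crsctl start crs                   (then on all nodes)
```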
##############################
Performance Tuning Questions
##############################
1) Can we use AMM together with HugePages?
In Oracle, Automatic Memory Management (AMM) and HugePages are generally incompatible: AMM uses /dev/shm (shared memory files) for SGA allocation, while HugePages require reserving large, fixed-size pages (vm.nr_hugepages) at boot, which /dev/shm cannot use. Use ASMM (sga_target) rather than AMM (memory_target) when HugePages are configured.
2) What is better for fixing a plan: SQL Plan Baselines or SQL Profiles?
SQL Plan Baselines are generally preferred for pinning a plan, since they constrain the optimizer to accepted plans; a SQL Profile only supplies corrected estimates and does not guarantee a specific plan. Profiles persist across database restarts, but can sometimes be lost during major upgrades.
3) What is cardinality feedback
Oracle's Cardinality Feedback, renamed Statistics Feedback in 12c, is a feature that automatically improves execution plans for frequently run queries by learning from past errors: if the optimizer significantly misestimates the number of rows (cardinality) during the first execution, it stores the actual row counts and uses them to generate a better plan for subsequent runs, avoiding repeated mistakes.
4) Name the different types of indexes with an explanation, and what are clustered / non-clustered indexes?
Oracle offers various index types, primarily B-tree (the default), bitmap, function-based, and reverse key, designed to speed up data retrieval. A clustered index physically orders the table data by the index key (like a dictionary), meaning the data is the index, allowing only one per table; a non-clustered index is a separate structure with pointers to the data, allowing multiple per table. Oracle's closest analog to a clustered index is the index-organized table (IOT).
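Illustrative DDL for each type (the table and column names are hypothetical):

```sql
CREATE INDEX emp_name_ix  ON emp (ename);            -- B-tree (default)
CREATE BITMAP INDEX emp_dept_bx ON emp (deptno);     -- bitmap, low cardinality
CREATE INDEX emp_uname_ix ON emp (UPPER(ename));     -- function-based
CREATE INDEX emp_id_rx    ON emp (empno) REVERSE;    -- reverse key
-- Closest Oracle analog of a clustered index: an index-organized table
CREATE TABLE emp_iot (empno NUMBER PRIMARY KEY, ename VARCHAR2(30))
  ORGANIZATION INDEX;
```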
5) What is the purpose of histograms?
Ans : Histograms let the optimizer recognize skewed (non-uniform) column data distributions, so it can estimate cardinality accurately for individual values instead of assuming uniformity.
6) difference between ora-600 / ora-7445
The main difference is that an ORA-00600 is a handled exception discovered by a predefined check within the Oracle kernel, while an ORA-07445 is an unhandled exception resulting from a fatal signal or access violation at the operating system level.
7) Will uncommitted transactions reside in datafiles?
Yes, uncommitted transactions can and often do reside temporarily in Oracle datafiles: DBWR may write dirty blocks containing uncommitted changes to disk at any checkpoint.
This is a normal part of how the Oracle architecture manages data consistency and recovery, and it is not a problem, because the undo recorded for those changes allows rollback and consistent reads of the pre-change values.
8)What is transparent Hugepages and how its different from Huge pages
Transparent Huge Pages (THP) automatically manages large memory pages (like 2MB) in the Linux kernel, hiding complexity, while standard HugePages require manual pre-allocation at system startup for specific memory areas (like an Oracle SGA). The key difference for Oracle is that THP can cause performance issues and latency spikes due to its dynamic nature and background processes (like khugepaged), leading Oracle to recommend disabling it in favor of manually configured, static HugePages for database stability and predictable performance
9) What are optimizer_adaptive_features and optimizer_adaptive_plans / optimizer_adaptive_statistics?
OPTIMIZER_ADAPTIVE_FEATURES was a single parameter in Oracle 12c Release 1 that controlled all Adaptive Query Optimization (AQO) features, allowing the database to dynamically adjust SQL execution plans during runtime based on real data, but it became obsolete in Oracle 12c Release 2, replaced by more granular parameters like OPTIMIZER_ADAPTIVE_PLANS and OPTIMIZER_ADAPTIVE_STATISTICS for finer control. These features help the optimizer adapt to data skew and other real-world conditions, improving performance by switching plans on-the-fly (e.g., from index scan to table scan).
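The granular parameters can be checked and set like this (the values shown are the 12.2+ defaults):

```sql
SQL> SHOW PARAMETER optimizer_adaptive
SQL> ALTER SYSTEM SET optimizer_adaptive_plans      = TRUE  SCOPE=BOTH;
SQL> ALTER SYSTEM SET optimizer_adaptive_statistics = FALSE SCOPE=BOTH;
```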
Oracle Adaptive Cursor Sharing (ACS) is a performance feature, introduced in Oracle 11g, that allows a single SQL statement with bind variables to use multiple, optimized execution plans, rather than just one, adapting to different bind values to prevent suboptimal performance caused by data skew or uneven data distribution, ensuring efficiency for queries
Check the status of adaptive cursor sharing (underscore parameters, for reference only):
col "Parameter" for a35
col "Session Value" for a10
col "Instance Value" for a10
SELECT x.ksppinm "Parameter", y.ksppstvl "Session Value", z.ksppstvl "Instance Value"
FROM x$ksppi x, x$ksppcv y, x$ksppsv z
WHERE x.indx = y.indx AND x.indx = z.indx
AND x.ksppinm LIKE '_optimizer_adaptive_cursor_sharing%';
alter system set "_optimizer_adaptive_cursor_sharing"=false scope= both;
--Disable bind variable or other parameter
alter system set "cursor_sharing"=exact scope=both;
alter system set "_optimizer_extended_cursor_sharing_rel"=none scope=both;
alter system set "_optimizer_extended_cursor_sharing"=none scope= both;
Enable the Adaptive cursor sharing
alter system set "_optimizer_adaptive_cursor_sharing"=TRUE scope= both;
--Enable bind variable
alter system set "cursor_sharing"=FORCE scope=both;
alter system set "_optimizer_extended_cursor_sharing_rel"=SIMPLE scope=both;
alter system set "_optimizer_extended_cursor_sharing"=UDO scope= both;
select sql_id, child_number, is_bind_sensitive, is_bind_aware from v$sql where sql_id='XXX';
select * from v$sql_cs_statistics where sql_id = '9bmm6cmwa8saf';
select * from v$sql_cs_selectivity where sql_id = '9bmm6cmwa8saf';
From 11.1.0.7 onward it is possible to skip the monitoring that is required to detect bind-sensitive queries by using the BIND_AWARE hint. In the following example, the presence of the hint tells the optimizer that we believe the query is bind-sensitive, so it should use bind-aware cursor sharing from the first execution.
select /*+ bind_aware */ max(id) from acs_test_tab where record_type = :l_record_type;
11) What is bind peeking / bind seeking?
In Oracle, bind peeking is an optimizer feature that allows the database to inspect the actual value of a bind variable during the initial hard parse of a SQL statement to create an optimal execution plan. The risk is that a plan built for the peeked value may be poor for other values when the data is skewed, which is what adaptive cursor sharing was later introduced to address. ("Bind seeking" is not an Oracle term.)
12) List top wait events :
buffer busy waits
read by other session
free buffer waits
control file single write / control file parallel write
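A hedged query to list the top non-idle events on a live system (FETCH FIRST requires 12c+):

```sql
SELECT event, total_waits, time_waited
FROM   v$system_event
WHERE  wait_class <> 'Idle'
ORDER  BY time_waited DESC
FETCH FIRST 10 ROWS ONLY;
```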
1) What is partition pruning, does it work with global indexes, and what is dynamic pruning?
Oracle partition pruning is a powerful optimization technique where the query optimizer identifies and ignores irrelevant partitions of a partitioned table, dramatically reducing I/O by accessing only the partitions needed to satisfy a query. It is a core performance feature for large datasets, especially data warehouses. Local indexes are equipartitioned with the table, so pruning applies to them naturally; a global index spans all partitions, so pruning applies to the table access rather than to the global index itself. Dynamic pruning happens when the pruning predicates are known only at run time (e.g. bind variables or join filters) rather than at parse time (static pruning).
2) What are the different types of partitioning?
Single-level partitioning: Range, List, Hash
Composite partitioning: [Range | List | Hash | Interval] - [Range | List | Hash]
Partitioning extensions: Interval, Reference, Interval-Reference, Virtual Column Based, Auto
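For example, interval partitioning (a range extension) on a hypothetical table; Oracle creates new monthly partitions automatically as data arrives:

```sql
CREATE TABLE sales_demo (
  sale_id   NUMBER,
  sale_date DATE
)
PARTITION BY RANGE (sale_date)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
( PARTITION p_initial VALUES LESS THAN (DATE '2024-01-01') );
```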
Materialized view notes: we had a situation demanding tuning of MV refresh, and options like TUNE_MVIEW and ATOMIC_REFRESH were explored. After a migration, if MV refresh is slow, we can explore re-creating the materialized view. Real-Time Materialized Views are also worth exploring. To note:
1) We can create an MV ON PREBUILT TABLE.
2) We can create indexes on an MV.
3) We can gather stats on an MV just like any other table.
4) An MV can have REFRESH_MODE ON COMMIT or REFRESH ON DEMAND.
5) The MV REFRESH_METHOD can be COMPLETE, FAST, or FORCE; a FORCE attempts a FAST...
Purging old Optimizer Statistics Advisor records (MOS; applies to Oracle Database EE 12.2.0.1 and later, and OCI Database Service): huge numbers of old records retained in WRI$_ADV_OBJECTS for AUTO_STATS_ADVISOR_TASK or INDIVIDUAL_STATS_ADVISOR_TASK consume heavy SYSAUX space. AUTO_STATS_ADVISOR_TASK is the automatic Statistics Advisor task, while INDIVIDUAL_STATS_ADVISOR_TASK is for manual Statistics Advisor runs. DBA_ADVISOR_PARAMETERS displays all advisor task parameters and their current values; the EXECUTION_DAYS_TO_EXPIRE parameter (set in days) controls when executions older than that value are purged automaticall...
Compression notes: we had multiple requirements for compression, hence these handy notes. To automate compression there is Oracle Automatic Data Optimization, to be discussed later. We can see which compression types are in use with:
SQL> select name, currently_used from dba_feature_usage_statistics where lower(name) like '%compress%';
BASIC compression: introduced back in Oracle 8, only recommended for data warehouses
OLTP compression: introduced in Oracle 11, recommended for OLTP databases as well
QUERY LOW compression (Exadata only): recommended for data warehouses with load time as a critical factor
QUERY HIGH compression (Exadata only): recommended for data warehouses with a focus on space saving
ARCHIVE LOW compression (Exadata only): recommended for archival data with load time as a critical factor
A...