Posts

Showing posts from December, 2024

Flash-forward Oracle database to restore point - Moving back and forth in time without resetlogs

We had a requirement to flash back a database in order to take an export backup of a table that had issues. We all know we can flash back a database to a restore point, but less well known is that we can also flash the database forward to a restore point. Thanks to the blog posted in the reference link, we came to know this is possible. We performed these steps on a standby, though the same can be done on production too. Below are the steps performed: 1) Enable Flashback Database and create restore point ONE. 2) The application performs its deployment and then notices it needs to flash back. 3) Before we flash back the database, create a new restore point TWO. 4) Flash back to restore point ONE. 5) Perform snapshot standby to tak...
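A minimal SQL sketch of the flow above, assuming a physical standby with Flashback Database licensed and enabled; the restore point names ONE and TWO are illustrative, and the "flash forward" step relies on the referenced blog's observation that it works as long as no OPEN RESETLOGS was issued in between:

-- Stop redo apply on the standby before creating guaranteed restore points
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
CREATE RESTORE POINT one GUARANTEE FLASHBACK DATABASE;

-- ... application deployment happens here ...

-- Capture the current state before going back, so we can return to it later
CREATE RESTORE POINT two GUARANTEE FLASHBACK DATABASE;

-- Flash back to the pre-deployment state (database must be mounted)
SHUTDOWN IMMEDIATE
STARTUP MOUNT
FLASHBACK DATABASE TO RESTORE POINT one;

-- Later, "flash forward" to the newer restore point instead of discarding it;
-- per the referenced blog, this works because RESETLOGS was never issued in between
FLASHBACK DATABASE TO RESTORE POINT two;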

Oracle Data Pump sample commands

EXPDP to multiple directories: Suppose you wish to take an expdp backup of a big table, but you don't have sufficient space in a single mount point to keep the dump. In this case, we write the expdp dump to multiple directories. Create directories pointing to different paths: SQL> create directory DIR1 as '/home/oracle/DIR1'; Directory created. SQL> create directory DIR2 as '/home/oracle/DIR2'; Directory created. Parfile content: dumpfile=DIR1:test_%U.dmp, DIR2:test_%U.dmp logfile=test.log directory=DIR1 parallel=2 tables=raj.test EXCLUDE/INCLUDE option in EXPDP: dumpfile=test.dmp logfile=test1.log directory=TEST exclude=TABLE:"IN ('EMP_TAB','DEPT')" schemas=DBACLASS Exclude a few schemas during import: dumpfile=test.dmp logfile=test1.log directory=TEST EXCLUDE=SCHEMA:"IN ('WMSYS', 'OUTLN')" Export/import only TABLE and INDEX object types: dumpfile=FULL.dmp logfile=full.log directory=exp_dir directory=DBATEST INCLUDE=TABLE,INDEX...
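For reference, a rough PL/SQL equivalent of the multi-directory table export using the DBMS_DATAPUMP API; the job name is made up here, and the schema RAJ and table TEST simply mirror the parfile above, so treat this as a sketch rather than a drop-in script:

DECLARE
  h NUMBER;
BEGIN
  -- Table-mode export job (job name is illustrative)
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'TABLE', job_name => 'EXP_TEST_MULTI_DIR');
  -- Spread dump pieces across both directories, as in the parfile
  DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'test_%U.dmp', directory => 'DIR1');
  DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'test_%U.dmp', directory => 'DIR2');
  DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'test.log', directory => 'DIR1',
                         filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
  -- Limit the job to RAJ.TEST
  DBMS_DATAPUMP.METADATA_FILTER(handle => h, name => 'SCHEMA_EXPR', value => 'IN (''RAJ'')');
  DBMS_DATAPUMP.METADATA_FILTER(handle => h, name => 'NAME_EXPR', value => 'IN (''TEST'')');
  DBMS_DATAPUMP.SET_PARALLEL(handle => h, degree => 2);
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.DETACH(h);
END;
/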

Data Guard monitoring information

  ==>  On Standby  -- Applicable for 2 NODE RAC ALSO column applied_time for a30 set linesize 140 select to_char(sysdate,'mm-dd-yyyy hh24:mi:ss') "Current Time" from dual; SELECT DB_NAME, APPLIED_TIME, LOG_ARCHIVED-LOG_APPLIED LOG_GAP , (case when ((APPLIED_TIME is not null and (LOG_ARCHIVED-LOG_APPLIED) is null) or (APPLIED_TIME is null and (LOG_ARCHIVED-LOG_APPLIED) is not null) or ((LOG_ARCHIVED-LOG_APPLIED) > 5)) then 'Error! Log Gap is ' else 'OK!' end) Status FROM ( SELECT INSTANCE_NAME DB_NAME FROM GV$INSTANCE where INST_ID = 1 ), ( SELECT MAX(SEQUENCE#) LOG_ARCHIVED FROM V$ARCHIVED_LOG WHERE DEST_ID=1 AND ARCHIVED='YES' and THREAD#=1 ), ( SELECT MAX(SEQUENCE#) LOG_APPLIED FROM V$ARCHIVED_LOG WHERE DEST_ID=2 AND APPLIED='YES' and THREAD#=1 ), ( SELECT TO_CHAR(MAX(COMPLETION_TIME),'DD-MON/HH24:MI') APPLIED_TIME FROM V$ARCHIVED_LOG WHERE DEST_ID=2 AND APPLIED='YES' and THREAD#=1 ) UNION SELECT DB_NAME, APPLIED...
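As a quick cross-check alongside the gap query above, a couple of simpler standby-side queries, assuming the v$ views are accessible on the standby:

-- Transport and apply lag as computed by the standby itself
SELECT name, value, time_computed
FROM   v$dataguard_stats
WHERE  name IN ('transport lag', 'apply lag');

-- Highest applied archive log sequence per thread (useful on RAC standbys)
SELECT thread#, MAX(sequence#) AS last_applied
FROM   v$archived_log
WHERE  applied = 'YES'
GROUP BY thread#;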

Oracle SQL Developer settings for TCPS

Please note that SQL Developer can have issues connecting over TCPS. Configure SQL Developer To Connect To TCPS Enabled DB (Doc ID 2908673.1): 1. Install the Oracle Client and SQL Developer (preferably the latest versions to avoid any incompatibility). 2. Open SQL Developer, go to Tools --> Preferences --> Database --> Advanced. 3. Check "Use Oracle Client". Click Configure, set the client type and client location, and click Test. You should see Success; if so, click OK. 4. Check "Use OCI/Thick Driver". Click Browse, select the correct location for the Tnsnames Directory, and click OK. 5. Click on New Connection. Here you have the option to choose two types of connection: TNS and Custom JDBC. 5.1 TNS Connection: Select connection type TNS, select the correct network alias as per your tnsnames.ora, and click Test. You should see Success; then click Connect. 5.2 Custom JDBC: Select connection type Custom JDBC and enter following custom ...
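Once connected, a quick sanity check (not part of the MOS note) to confirm the session really came in over TCPS rather than plain TCP:

SQL> SELECT sys_context('USERENV', 'NETWORK_PROTOCOL') AS protocol FROM dual;
-- Expected value for a TCPS connection: tcps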

TCPS connection to Oracle database from GCP

##################### GCP ##################### 1) Open the firewall for all SCAN IPs and VIPs. 2) Use the latest version of the Oracle JDBC driver; use the following dependency in pom.xml: <dependency> <groupId>com.oracle.database.jdbc</groupId> <artifactId>ojdbc8</artifactId> <version>21.5.0.0</version> </dependency> 3) The certificate is stored in a GCS bucket and needs to be fetched programmatically by the microservice. If we use a different bucket, we need to upload the certificate to that storage bucket and set the certificate name in the code/property file, plus the trust-store password in the secret ORADB_EXACC_TRUSTSTORE_PWD. Also add the JVM trust-store properties and the certificate name in the application.yml file: certificateName:  projectId:  storeBucketName :  httpProtocols :...
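To verify from the database side that the JDBC session from the microservice actually negotiated TCPS/TLS, a hedged check against v$session_connect_info for the current session (run from the same connection):

SELECT DISTINCT network_service_banner
FROM   v$session_connect_info
WHERE  sid = sys_context('USERENV', 'SID');
-- Look for banner lines mentioning the TCP/IP SSL/TLS adapter and the negotiated encryption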

Concurrent Writes May Corrupt LOB Segment When Using Auto Segment Space Management (ORA-1555) (Doc ID 253131.1)

Applies to: Oracle Database - Enterprise Edition - Version 9.2.0.1 to 9.2.0.3 [Release 9.2] Oracle Database Cloud Schema Service - Version N/A and later Gen 1 Exadata Cloud at Customer (Oracle Exadata Database Cloud Machine) - Version N/A and later Oracle Database Exadata Express Cloud Service - Version N/A and later Oracle Cloud Infrastructure - Database Service - Version N/A and later Information in this document applies to any platform. Symptoms: Due to unpublished bug 2643723 in the space allocation algorithm, concurrent writes to the same LOB segment may cause LOB data corruption. One session may overwrite blocks in the LOB segment that have just been allocated and written to by another session, which would cause a block to be allocated to 2 different LOBs - of which one will become corrupted. This issue only happens in the event of a race condition where process A needs a rollback segment extended while trying to write to a LOB segment...
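Not part of the note itself, but a simple dictionary query (assuming access to the DBA views) to see which tablespaces use Auto Segment Space Management and which LOB segments live in them, i.e. which segments could be exposed to this bug:

-- Tablespaces using AUTO segment space management
SELECT tablespace_name, segment_space_management
FROM   dba_tablespaces
ORDER BY tablespace_name;

-- LOB segments stored in ASSM tablespaces
SELECT owner, table_name, column_name, segment_name, tablespace_name
FROM   dba_lobs
WHERE  tablespace_name IN (SELECT tablespace_name
                           FROM   dba_tablespaces
                           WHERE  segment_space_management = 'AUTO');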

Oracle Automatic Data Compression using Information Lifecycle Management

We had a requirement from a customer to share a strategy for automatic data compression, which directed us to ILM. With Oracle 12c, the feature Automatic Data Optimization (ADO) can help us compress data automatically under predefined conditions. ADO is part of Information Lifecycle Management (ILM) and requires the Advanced Compression option. Enable Heat Map: To use ADO for compression, the Heat Map must be enabled. Once enabled, Heat Map collects the statistics required for ADO actions; all accesses are tracked by the in-memory activity tracking module. SQL> alter system set heat_map=ON scope=both; Before adding the ADO policy, we can verify that heat map statistics are already being collected: SELECT OBJECT_NAME, to_char(TRACK_TIME,'DD-MON-YYYY HH24:MI:SS'), SEGMENT_WRITE "Seg_write", SEGMENT_READ "Seg_read", FULL_SCAN, LOOKUP_SCAN FROM v$heat_map_segment WHERE object_name='ARTICLE'; Adding a segment level r...
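A hedged example of the kind of segment-level ADO policy the (truncated) post goes on to add; the ARTICLE table comes from the query above, while the 30-day threshold and compression level are illustrative and require the Advanced Compression option:

-- Compress the whole segment after 30 days with no modification
ALTER TABLE article ILM ADD POLICY
  ROW STORE COMPRESS ADVANCED SEGMENT
  AFTER 30 DAYS OF NO MODIFICATION;

-- Confirm the policy was registered
SELECT policy_name, action_type, condition_type, condition_days
FROM   user_ilmdatamovementpolicies;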

Oracle Database Table and Index Partitioning Handy Cookbook

We have many partitioning maintenance tasks, so it is good to have handy commands and the newer features in one place. Types of Partitioning: Range Partitioning: Range partitioning maps data to partitions based on ranges of partition key values that you establish for each partition. It is the most common type of partitioning and is often used with dates. SQL> create table details(order_id number,order_date date) partition by range (order_date) (partition p1 values less than (to_date('01-jan-2018','dd-mon-yyyy')) tablespace t1, partition p2 values less than (to_date('01-mar-2018','dd-mon-yyyy')) tablespace t2, partition p3 values less than (to_date('01-aug-2018','dd-mon-yyyy')) tablespace t3, partition p4 values less than (to_date('01-dec-2018','dd-mon-yyyy')) tablespace t4); List Partitioning: List partitioning is used to group unrelated data together into partitions. It is a technique where you specify a list of discrete values for the partitioning key in the description for each partition. SQL> CREATE TABLE sales_list (salesman_id NUMBER(5),sales_state VAR...
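A few of the handy maintenance commands alluded to above, sketched against the DETAILS table from the range example; the partition names, dates and the staging table DETAILS_STAGE are illustrative:

-- Add a new range partition at the top end
ALTER TABLE details ADD PARTITION p5
  VALUES LESS THAN (TO_DATE('01-JAN-2019','DD-MON-YYYY')) TABLESPACE t4;

-- Truncate or drop an old partition while keeping global indexes usable
ALTER TABLE details TRUNCATE PARTITION p1 UPDATE INDEXES;
ALTER TABLE details DROP PARTITION p1 UPDATE INDEXES;

-- Swap a pre-loaded staging table into a partition
ALTER TABLE details EXCHANGE PARTITION p2 WITH TABLE details_stage
  INCLUDING INDEXES WITHOUT VALIDATION;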