Oracle E-Business Suite R12 | SGA vs PGA vs NOT being a DBA!

May 7, 2018

Hi guys

I know the subject title is not very professional this time. Still, I want to claim that I finally figured something out, something I had chased for more than a couple of years across several Oracle community threads without much to show for it.

We had to retire the hardware recommended by our Oracle EBS implementation partner within two years of going live with the R12 instance. The instance was running 10g (10.2.0.3), things were getting messy and slow, and the new support partner recommended better hardware.

I always had my eye on this retired server. It ran Linux, so we never found a practical reason to integrate it with our Windows domain environment, and it stayed switched off until the virtualization project came online.

We needed “something” to hold a copy of the EBS instance while it was being virtualized.

So, I cloned the instance onto this machine, and before continuing let me describe what this hardware is like:

Processor:


Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 2
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 23
Model name: Intel(R) Xeon(R) CPU E5420 @ 2.50GHz
Stepping: 6
CPU MHz: 1992.000
BogoMIPS: 4987.29
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 6144K
NUMA node0 CPU(s): 0-7

Storage space


[root@erp-bak ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sdj2 59G 12G 44G 22% /
tmpfs 9.7G 3.9M 9.7G 1% /dev/shm
/dev/sdj1 2.0G 325M 1.5G 18% /boot
/dev/sdj5 738G 531G 170G 76% /u01
/dev/mapper/mpathbp1 591G 320G 242G 57% /u02
/dev/mapper/mpathcp1 197G 38G 150G 20% /u03
/dev/sdf1 591G 70M 561G 1% /u04
/dev/sdi1 269G 59M 256G 1% /u05

In addition to the local disks, this server has partitions mounted from an IBM SAN.

Once the clone was done, I realized the instance was extremely slow, and our part-time DBA started making excuses like “See, that’s why we are changing the hardware” (he had configured a 2G SGA and 1G PGA with 20 job_queue_processes against a nearly 1TB database).

I opened a few discussions in the Oracle communities and was pointed towards a ton of documents on how to fine-tune the hardware and database for better performance. In practice nothing was applicable, as I didn’t have much hands-on experience with a database, and I couldn’t find a person who could really HELP me.

Then I started taking an interest in database technology, which I should have done years back, came across SGA/PGA, the JVM and so on, and since I had an idle instance, started trying out whatever I had “learned” against it.

While learning 11gR2 the hard way, I realized I could use AMM (Automatic Memory Management) and forget about tuning individual memory parameters. Well, the goddamn instance still lagged like hell, and I was almost done with it!
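For context, moving to AMM boils down to a couple of parameters; a minimal sketch (the 6G/5G values are placeholders, not what I used, and MEMORY_MAX_TARGET is static, so the changes go to the spfile and need a restart):

SQL> alter system set memory_max_target=6G scope=spfile;
SQL> alter system set memory_target=5G scope=spfile;
SQL> alter system set sga_target=0 scope=spfile;            -- let AMM manage the SGA entirely
SQL> alter system set pga_aggregate_target=0 scope=spfile;  -- let AMM manage the PGA entirely
SQL> shutdown immediate
SQL> startup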

Few of the persistent issues were:

After a cold boot

  • The login form would load at the client end only after a 3-4 minute wait, getting faster on consecutive attempts.
  • It took ages to open the concurrent programs window.
  • Our custom forms and LOVs lagged to extremes, and so on.

Even shutting down the instance for anything turned into a nightmare: the database always took more than 15-20 minutes, and I had to kill multiple processes manually to bring it offline!

Then, on a different note, while trying to learn SQL I landed on an Ask Tom thread where the asker says “I have set up both SGA and PGA at 3GB” and still the SQL runs slow…

I did a fresh clone. Our database had been upgraded to 11g almost a year back, and the default clone had 1G for both SGA and PGA. I altered them to 3G each, and bullseye!

I then went back and raised both SGA and PGA to 4G, which was about 40% of the physical memory available on the hardware. I did three shutdowns and restarts of the physical server and a dozen application and database startups to confirm that what I was experiencing was NOT a once-in-a-blue-moon phenomenon. Every graceful database shutdown completed within a few seconds, and not once did I have to kill Linux processes to bring it down!
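On 11g this kind of change typically boils down to two parameters; a minimal sketch (not necessarily the exact commands I used; 4G as in this post, size it to your own RAM, and if AMM/MEMORY_TARGET is still enabled, zero it first, otherwise these values only act as minimums):

SQL> alter system set memory_target=0 scope=spfile;         -- only if AMM was enabled earlier
SQL> alter system set sga_target=4G scope=spfile;
SQL> alter system set pga_aggregate_target=4G scope=spfile;
SQL> shutdown immediate
SQL> startup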

I also modified one of the main forms of a custom application, replacing a few VIEW calls with better logic, and I couldn’t be happier!

Now, having said that, don’t rush to me saying “I also did 4G for SGA and PGA and, moron, I still have a slow instance”. There are many factors affecting the performance of your database and application, and the most important few are:

The age of your hardware, especially the spinning disks. The older they are, the worse the performance is going to be, since a huge amount of I/O happens when you access and process data in a database.
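If you want a rough idea of how much time an instance spends waiting on disk reads, the standard wait-event view gives a quick, cumulative-since-startup picture:

SELECT event, total_waits, ROUND(time_waited_micro / 1e6) AS seconds_waited
  FROM v$system_event
 WHERE event IN ('db file sequential read', 'db file scattered read')
 ORDER BY time_waited_micro DESC;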

Recently I was going through an MS SQL discussion about multi-tenant architecture, and one of the contributors mentioned a hosting firm that replaces their hardware every six months. I think he was just BLUFFING! ;)

I hope someone benefits from this minor finding I made YESTERDAY (6th May 2018)!

regards,

rajesh

 

 


RMAN | Restore Linux Backup to Windows

March 3, 2018

 

Update(06-March-2018)

After I built the Windows instance from the Linux backups, I started a thread on community.oracle.com, expecting answers to a few concerns. Please have a look at the thread below:

https://community.oracle.com/message/14730577#14730577

jgarry points out that, since the redo logs are not applied, I am risking data loss. Now, I don’t really think many people would move an Oracle database from Linux to Windows; the opposite is what happens most of the time.

Consider the entire exercise a limited LAB. If for some reason this has to be performed against a production instance, make sure AN IMAGE BACKUP/cold backup is available to avoid possible data loss. A cold backup routine looks like this:

RMAN> shutdown immediate;

RMAN> startup mount;

RMAN> backup database; #replace with your backup routine

RMAN>

 

Hi guys

Greetings. I have been extremely busy during the last few weeks: traveling, fixing stuff and, as usual, learning new “things”. This time I am working with RMAN, the recovery manager for the Oracle database, trying to establish something that is NOT very orthodox, or at least that is how it feels after going through many documents. Well, remember we have done hack installations before and got “stuff” to work ;). So why not give this a try?

Attempted: Restoring Linux RMAN backup(s) to Windows. Doable? Well, YES.

Cons: None known yet (update 6-March-2018: possible data loss, as the redo log files will not be readable)

Could be used at Production: At your own risk

 

I am not going to break the flow anywhere; it is a lengthy, one-shot walkthrough. Before giving it a try, make sure you have copied the backups from the Linux machine to your Windows machine.


Microsoft Windows [Version 10.0.16299.248]
(c) 2017 Microsoft Corporation. All rights reserved.

C:\WINDOWS\system32>d:

D:\>cd Oracle11g64\product\11.2.0\dbhome_1\BIN

D:\Oracle11g64\product\11.2.0\dbhome_1\BIN>oradim -new -sid LINUXDB
Instance created.

D:\Oracle11g64\product\11.2.0\dbhome_1\BIN>set ORACLE_SID=LINUXDB

D:\Oracle11g64\product\11.2.0\dbhome_1\BIN>rman target /

Recovery Manager: Release 11.2.0.4.0 - Production on Thu Mar 1 13:34:58 2018

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

connected to target database (not started)

RMAN> set DBID=439294518  shutdown immediate

using target database control file instead of recovery catalog
Oracle instance shut down
--We will use a copy of a Windows-environment-specific INIT file to
--manually create an INIT file for our new database
--sample below
--make sure you have created all paths mentioned in the INIT file before restarting the database
LINUXDB.__db_cache_size=486539264
LINUXDB.__java_pool_size=8388608
LINUXDB.__large_pool_size=8388608
LINUXDB.__shared_pool_size=746586112
LINUXDB.__streams_pool_size=0
*.audit_file_dest='D:\Oracle11g64\admin\LINUXDB\adump'
*.compatible='11.2.0.4.0'
*.control_files='D:\Oracle11g64\oradata\LINUXDB\control01.ctl','D:\Oracle11g64\oradata\LINUXDB\control02.ctl','D:\Oracle11g64\oradata\LINUXDB\control03.ctl'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='DB11G' #source database name
*.db_recovery_file_dest='D:\Oracle11g64\oradata\LINUXDB\flash_recovery_area'
*.db_recovery_file_dest_size=2147483648
*.dispatchers='(PROTOCOL=TCP) (SERVICE=DB11GXDB)'
*.job_queue_processes=10
*.log_archive_dest_1='LOCATION=H:\db11g\archivelog'
*.log_archive_format='DB11G_ARC%S_%R.%T'
*.open_cursors=300
*.pga_aggregate_target=418381824
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=1256194048
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
--Sample INITFILE end
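--The plain "startup nomount" below assumes the new INIT file was saved under the default Windows
--location, %ORACLE_HOME%\database\initLINUXDB.ora. If you keep it elsewhere, point the instance
--at it explicitly instead (the path here is only an example):
--
--  startup nomount pfile='D:\Oracle11g64\product\11.2.0\dbhome_1\database\initLINUXDB.ora';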

RMAN> startup nomount;

connected to target database (not started)
Oracle instance started

Total System Global Area 1252663296 bytes

Fixed Size 2280816 bytes
Variable Size 402653840 bytes
Database Buffers 838860800 bytes
Redo Buffers 8867840 bytes

RMAN> restore controlfile from 'H:\db11g\backup\bkpcontrol_file.ctl_DB11G_20180301';

Starting restore at 01-MAR-18
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=63 device type=DISK

channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:03
output file name=D:\ORACLE11G64\ORADATA\LINUXDB\CONTROL01.CTL
output file name=D:\ORACLE11G64\ORADATA\LINUXDB\CONTROL02.CTL
output file name=D:\ORACLE11G64\ORADATA\LINUXDB\CONTROL03.CTL
Finished restore at 01-MAR-18

RMAN> alter database mount;

database mounted
released channel: ORA_DISK_1

RMAN> catalog start with 'H:\db11g\backup';

Starting implicit crosscheck backup at 01-MAR-18
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=191 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=129 device type=DISK
Crosschecked 18 objects
Finished implicit crosscheck backup at 01-MAR-18

Starting implicit crosscheck copy at 01-MAR-18
using channel ORA_DISK_1
using channel ORA_DISK_2
Finished implicit crosscheck copy at 01-MAR-18

searching for all files in the recovery area
cataloging files...
no files cataloged

searching for all files that match the pattern H:\db11g\backup

List of Files Unknown to the Database
=====================================
File Name: H:\DB11G\backup\archive_DB11G_lvl0_07ssmmpl_1_1
File Name: H:\DB11G\backup\archive_DB11G_lvl0_08ssmmpl_1_1
File Name: H:\DB11G\backup\archive_DB11G_lvl0_0kssmnh4_1_1
File Name: H:\DB11G\backup\archive_DB11G_lvl0_0lssmnh4_1_1
File Name: H:\DB11G\backup\bkpcontrol_file.ctl_DB11G_20180301
File Name: H:\DB11G\backup\ctl_c-439294518-20180301-00
File Name: H:\DB11G\backup\ctl_c-439294518-20180301-01
File Name: H:\DB11G\backup\ctl_c-439294518-20180301-02
File Name: H:\DB11G\backup\ctl_c-439294518-20180301-03
File Name: H:\DB11G\backup\ctl_c-439294518-20180301-04
File Name: H:\DB11G\backup\rman_comp_DB11G_lvl0_01ssmmoc_1_1
File Name: H:\DB11G\backup\rman_comp_DB11G_lvl0_02ssmmoc_1_1
File Name: H:\DB11G\backup\rman_comp_DB11G_lvl0_03ssmmoc_1_1
File Name: H:\DB11G\backup\rman_comp_DB11G_lvl0_04ssmmoc_1_1
File Name: H:\DB11G\backup\rman_comp_DB11G_lvl0_05ssmmoe_1_1
File Name: H:\DB11G\backup\rman_comp_DB11G_lvl0_0essmng7_1_1
File Name: H:\DB11G\backup\rman_comp_DB11G_lvl0_0fssmng7_1_1
File Name: H:\DB11G\backup\rman_comp_DB11G_lvl0_0gssmng7_1_1
File Name: H:\DB11G\backup\rman_comp_DB11G_lvl0_0hssmng7_1_1
File Name: H:\DB11G\backup\rman_comp_DB11G_lvl0_0issmnga_1_1

Do you really want to catalog the above files (enter YES or NO)? YES
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: H:\DB11G\backup\archive_DB11G_lvl0_07ssmmpl_1_1
File Name: H:\DB11G\backup\archive_DB11G_lvl0_08ssmmpl_1_1
File Name: H:\DB11G\backup\archive_DB11G_lvl0_0kssmnh4_1_1
File Name: H:\DB11G\backup\archive_DB11G_lvl0_0lssmnh4_1_1
File Name: H:\DB11G\backup\bkpcontrol_file.ctl_DB11G_20180301
File Name: H:\DB11G\backup\ctl_c-439294518-20180301-00
File Name: H:\DB11G\backup\ctl_c-439294518-20180301-01
File Name: H:\DB11G\backup\ctl_c-439294518-20180301-02
File Name: H:\DB11G\backup\ctl_c-439294518-20180301-03
File Name: H:\DB11G\backup\ctl_c-439294518-20180301-04
File Name: H:\DB11G\backup\rman_comp_DB11G_lvl0_01ssmmoc_1_1
File Name: H:\DB11G\backup\rman_comp_DB11G_lvl0_02ssmmoc_1_1
File Name: H:\DB11G\backup\rman_comp_DB11G_lvl0_03ssmmoc_1_1
File Name: H:\DB11G\backup\rman_comp_DB11G_lvl0_04ssmmoc_1_1
File Name: H:\DB11G\backup\rman_comp_DB11G_lvl0_05ssmmoe_1_1
File Name: H:\DB11G\backup\rman_comp_DB11G_lvl0_0essmng7_1_1
File Name: H:\DB11G\backup\rman_comp_DB11G_lvl0_0fssmng7_1_1
File Name: H:\DB11G\backup\rman_comp_DB11G_lvl0_0gssmng7_1_1
File Name: H:\DB11G\backup\rman_comp_DB11G_lvl0_0hssmng7_1_1
File Name: H:\DB11G\backup\rman_comp_DB11G_lvl0_0issmnga_1_1

--Once the catalog is built, query the schema for datafile details
RMAN> report schema;

RMAN-06139: WARNING: control file is not current for REPORT SCHEMA
Report of database schema for database with db_unique_name DB11G

List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1 0 SYSTEM *** /u01/app/oracle/oradata/DB11G/system01.dbf
2 0 SYSAUX *** /u01/app/oracle/oradata/DB11G/sysaux01.dbf
3 0 UNDOTBS1 *** /u01/app/oracle/oradata/DB11G/undotbs01.dbf
4 0 USERS *** /u01/app/oracle/oradata/DB11G/users01.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1 20 TEMP 32767 /u01/app/oracle/oradata/DB11G/temp01.dbf

--We can get the last archivelog sequence number by running the query below,
--which we will use for media recovery

RMAN> list archivelog all;

List of Archived Log Copies for database with db_unique_name DB11G
=====================================================================

Key Thrd Seq S Low Time
------- ---- ------- - ---------
1 1 5 A 01-MAR-18
Name: /u02/archivelog/DB11G_ARC0000000005_0969624950_0001.arc

2 1 6 A 01-MAR-18
Name: /u02/archivelog/DB11G_ARC0000000006_0969624950_0001.arc

3 1 7 A 01-MAR-18
Name: /u02/archivelog/DB11G_ARC0000000007_0969624950_0001.arc

4 1 8 A 01-MAR-18
Name: /u02/archivelog/DB11G_ARC0000000008_0969624950_0001.arc

--As 8 is the last sequence, we will use 8+1=9 as the UNTIL SEQUENCE for the media recovery
RMAN> run
2> {
3> SET UNTIL SEQUENCE 9 THREAD 1;
4> set newname for datafile 1 to 'D:\Oracle11g64\oradata\LINUXDB\system01.dbf';
5> set newname for datafile 2 to 'D:\Oracle11g64\oradata\LINUXDB\sysaux01.dbf';
6> set newname for datafile 3 to 'D:\Oracle11g64\oradata\LINUXDB\undotbs01.dbf';
7> set newname for datafile 4 to 'D:\Oracle11g64\oradata\LINUXDB\users01.dbf';
8> restore database;
9> switch datafile all;
10> recover database;
11> }

executing command: SET until clause

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting restore at 01-MAR-18
using channel ORA_DISK_1
using channel ORA_DISK_2

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00004 to D:\Oracle11g64\oradata\LINUXDB\users01.dbf
channel ORA_DISK_1: reading from backup piece H:\DB11G\BACKUP\RMAN_COMP_DB11G_LVL0_0ISSMNGA_1_1
channel ORA_DISK_2: starting datafile backup set restore
channel ORA_DISK_2: specifying datafile(s) to restore from backup set
channel ORA_DISK_2: restoring datafile 00003 to D:\Oracle11g64\oradata\LINUXDB\undotbs01.dbf
channel ORA_DISK_2: reading from backup piece H:\DB11G\BACKUP\RMAN_COMP_DB11G_LVL0_0GSSMNG7_1_1
channel ORA_DISK_1: piece handle=H:\DB11G\BACKUP\RMAN_COMP_DB11G_LVL0_0ISSMNGA_1_1 tag=DAILYFULL_DB_LVL0_BKP
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:02
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00002 to D:\Oracle11g64\oradata\LINUXDB\sysaux01.dbf
channel ORA_DISK_1: reading from backup piece H:\DB11G\BACKUP\RMAN_COMP_DB11G_LVL0_0FSSMNG7_1_1
channel ORA_DISK_2: piece handle=H:\DB11G\BACKUP\RMAN_COMP_DB11G_LVL0_0GSSMNG7_1_1 tag=DAILYFULL_DB_LVL0_BKP
channel ORA_DISK_2: restored backup piece 1
channel ORA_DISK_2: restore complete, elapsed time: 00:00:08
channel ORA_DISK_2: starting datafile backup set restore
channel ORA_DISK_2: specifying datafile(s) to restore from backup set
channel ORA_DISK_2: restoring datafile 00001 to D:\Oracle11g64\oradata\LINUXDB\system01.dbf
channel ORA_DISK_2: reading from backup piece H:\DB11G\BACKUP\RMAN_COMP_DB11G_LVL0_0ESSMNG7_1_1
channel ORA_DISK_1: piece handle=H:\DB11G\BACKUP\RMAN_COMP_DB11G_LVL0_0FSSMNG7_1_1 tag=DAILYFULL_DB_LVL0_BKP
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:22
channel ORA_DISK_2: piece handle=H:\DB11G\BACKUP\RMAN_COMP_DB11G_LVL0_0ESSMNG7_1_1 tag=DAILYFULL_DB_LVL0_BKP
channel ORA_DISK_2: restored backup piece 1
channel ORA_DISK_2: restore complete, elapsed time: 00:00:35
Finished restore at 01-MAR-18

datafile 1 switched to datafile copy
input datafile copy RECID=5 STAMP=969632161 file name=D:\ORACLE11G64\ORADATA\LINUXDB\SYSTEM01.DBF
datafile 2 switched to datafile copy
input datafile copy RECID=6 STAMP=969632161 file name=D:\ORACLE11G64\ORADATA\LINUXDB\SYSAUX01.DBF
datafile 3 switched to datafile copy
input datafile copy RECID=7 STAMP=969632161 file name=D:\ORACLE11G64\ORADATA\LINUXDB\UNDOTBS01.DBF
datafile 4 switched to datafile copy
input datafile copy RECID=8 STAMP=969632162 file name=D:\ORACLE11G64\ORADATA\LINUXDB\USERS01.DBF

Starting recover at 01-MAR-18
using channel ORA_DISK_1
using channel ORA_DISK_2

starting media recovery

channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=7
channel ORA_DISK_1: reading from backup piece H:\DB11G\BACKUP\ARCHIVE_DB11G_LVL0_0KSSMNH4_1_1
channel ORA_DISK_2: starting archived log restore to default destination
channel ORA_DISK_2: restoring archived log
archived log thread=1 sequence=8
channel ORA_DISK_2: reading from backup piece H:\DB11G\BACKUP\ARCHIVE_DB11G_LVL0_0LSSMNH4_1_1
channel ORA_DISK_1: piece handle=H:\DB11G\BACKUP\ARCHIVE_DB11G_LVL0_0KSSMNH4_1_1 tag=TAG20180301T132700
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
archived log file name=H:\DB11G\ARCHIVELOG\DB11G_ARC0000000007_0969624950.0001 thread=1 sequence=7
channel ORA_DISK_2: piece handle=H:\DB11G\BACKUP\ARCHIVE_DB11G_LVL0_0LSSMNH4_1_1 tag=TAG20180301T132700
channel ORA_DISK_2: restored backup piece 1
channel ORA_DISK_2: restore complete, elapsed time: 00:00:01
archived log file name=H:\DB11G\ARCHIVELOG\DB11G_ARC0000000008_0969624950.0001 thread=1 sequence=8
media recovery complete, elapsed time: 00:00:02
Finished recover at 01-MAR-18

RMAN> exit

Recovery Manager complete.

D:\Oracle11g64\product\11.2.0\dbhome_1\BIN>

--SQL Activities
Microsoft Windows [Version 10.0.16299.248]
(c) 2017 Microsoft Corporation. All rights reserved.

C:\WINDOWS\system32>d:

D:\>cd D:\Oracle11g64\product\11.2.0\dbhome_1\BIN

D:\Oracle11g64\product\11.2.0\dbhome_1\BIN>set ORACLE_SID=LINUXDB

D:\Oracle11g64\product\11.2.0\dbhome_1\BIN>sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Thu Mar 1 14:18:44 2018

Copyright (c) 1982, 2013, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> SELECT GROUP#, STATUS FROM V$LOG
2 /

GROUP# STATUS
---------- ----------------
1 INACTIVE
3 CURRENT
2 INACTIVE

SQL> SELECT MEMBER FROM V$LOGFILE;

MEMBER
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/DB11G/redo03.log
/u01/app/oracle/oradata/DB11G/redo02.log
/u01/app/oracle/oradata/DB11G/redo01.log

--As the redo log members still point to the Linux file system,
--we need to recreate the redo log files for the Windows environment.
--Before that, we will try to drop the redo log groups that are inactive.
--In our case redo log group 3 is the current one, so groups 1 and 2 should be droppable.

SQL> alter database drop logfile group 1;

Database altered.

SQL> alter database drop logfile group 2;
alter database drop logfile group 2
*
ERROR at line 1:
ORA-01567: dropping log 2 would leave less than 2 log files for instance
linuxdb (thread 1)
ORA-00312: online log 2 thread 1: '/u01/app/oracle/oradata/DB11G/redo02.log'

--A minimum of 2 redo log groups is required for the instance, so dropping group 2 fails at this point.
--We will add more redo log groups to the database first.

SQL> alter database add logfile group 4
2 ('D:\Oracle11g64\oradata\LINUXDB\redo04.rdo','D:\Oracle11g64\oradata\LINUXDB\redo04a.rdo') size 50M;

Database altered.

SQL> alter database add logfile group 5
2 ('D:\Oracle11g64\oradata\LINUXDB\redo05.rdo','D:\Oracle11g64\oradata\LINUXDB\redo05a.rdo') size 50M;

Database altered.

SQL> alter database add logfile group 6
2 ('D:\Oracle11g64\oradata\LINUXDB\redo06.rdo','D:\Oracle11g64\oradata\LINUXDB\redo06a.rdo') size 50M;

Database altered.

--Now try to drop redo log group 2 again
SQL> alter database drop logfile group 2;

Database altered.

--Can we drop the redo group 3 finally?
SQL> alter database drop logfile group 3;
alter database drop logfile group 3
*
ERROR at line 1:
ORA-01623: log 3 is current log for instance linuxdb (thread 1) - cannot drop
ORA-00312: online log 3 thread 1: '/u01/app/oracle/oradata/DB11G/redo03.log'

--We cannot drop redo log group 3 because it is the current group for the database,
--and we cannot switch the logfile because the database is not open:

SQL> alter system switch logfile;
alter system switch logfile
*
ERROR at line 1:
ORA-01109: database not open

--We also cannot rename the redo log members, because the Linux file names do not exist in the Windows environment

SQL> alter database rename file '/u01/app/oracle/oradata/DB11G/redo03.log' to 'D:\Oracle11g64\oradata\LINUXDB\redo03.log';
alter database rename file '/u01/app/oracle/oradata/DB11G/redo03.log' to 'D:\Oracle11g64\oradata\LINUXDB\redo03.log'
*
ERROR at line 1:
ORA-01511: error in renaming log/data files
ORA-01516: nonexistent log file, data file, or temporary file
"/u01/app/oracle/oradata/DB11G/redo03.log"

--Though we know the database cannot be opened, let us give it an attempt
SQL> alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-00344: unable to re-create online log
'/u01/app/oracle/oradata/DB11G/redo03.log'
ORA-27040: file create error, unable to create file
OSD-04002: unable to open file
O/S-Error: (OS 3) The system cannot find the path specified.
--Expected, we will proceed to next step

SQL> shutdown immediate
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
SQL> startup nomount
ORACLE instance started.

Total System Global Area 1252663296 bytes
Fixed Size 2280816 bytes
Variable Size 402653840 bytes
Database Buffers 838860800 bytes
Redo Buffers 8867840 bytes
SQL> alter database backup controlfile to trace as 'D:\Oracle11g64\oradata\LINUXDB\ctrlfile.trc';
alter database backup controlfile to trace as 'D:\Oracle11g64\oradata\LINUXDB\ctrlfile.trc'
*
ERROR at line 1:
ORA-01507: database not mounted

SQL> alter database mount
2 /

Database altered.
--We will trace the current control file to a readable format now

SQL> alter database backup controlfile to trace as 'D:\Oracle11g64\oradata\LINUXDB\ctrlfile.trc';

Database altered.

SQL> shutdown immediate;
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.

--Start the database in nomount state so that we can try to create a fresh control file,
--first using NORESETLOGS.
--COPY AND PASTE THE CREATE CONTROLFILE SEGMENT FROM THE control file trace.
--Do not forget to remove the line(s) pointing to the redo log files from the Linux environment.
SQL> startup nomount
ORACLE instance started.

Total System Global Area 1252663296 bytes
Fixed Size 2280816 bytes
Variable Size 402653840 bytes
Database Buffers 838860800 bytes
Redo Buffers 8867840 bytes
SQL> CREATE CONTROLFILE REUSE DATABASE "DB11G" NORESETLOGS ARCHIVELOG
2 MAXLOGFILES 16
3 MAXLOGMEMBERS 3
4 MAXDATAFILES 100
5 MAXINSTANCES 8
6 MAXLOGHISTORY 292
7 LOGFILE
8 GROUP 4 (
9 'D:\ORACLE11G64\ORADATA\LINUXDB\REDO04.RDO',
10 'D:\ORACLE11G64\ORADATA\LINUXDB\REDO04A.RDO'
11 ) SIZE 50M BLOCKSIZE 512,
12 GROUP 5 (
13 'D:\ORACLE11G64\ORADATA\LINUXDB\REDO05.RDO',
14 'D:\ORACLE11G64\ORADATA\LINUXDB\REDO05A.RDO'
15 ) SIZE 50M BLOCKSIZE 512,
16 GROUP 6 (
17 'D:\ORACLE11G64\ORADATA\LINUXDB\REDO06.RDO',
18 'D:\ORACLE11G64\ORADATA\LINUXDB\REDO06A.RDO'
19 ) SIZE 50M BLOCKSIZE 512
20 -- STANDBY LOGFILE
21 DATAFILE
22 'D:\ORACLE11G64\ORADATA\LINUXDB\SYSTEM01.DBF',
23 'D:\ORACLE11G64\ORADATA\LINUXDB\SYSAUX01.DBF',
24 'D:\ORACLE11G64\ORADATA\LINUXDB\UNDOTBS01.DBF',
25 'D:\ORACLE11G64\ORADATA\LINUXDB\USERS01.DBF'
26 CHARACTER SET AL32UTF8
27 ;
CREATE CONTROLFILE REUSE DATABASE "DB11G" NORESETLOGS ARCHIVELOG
*
ERROR at line 1:
ORA-01503: CREATE CONTROLFILE failed
ORA-01192: must have at least one enabled thread

--So, as a next attempt, we will create the control file using RESETLOGS

SQL> CREATE CONTROLFILE REUSE DATABASE "DB11G" RESETLOGS ARCHIVELOG
2 MAXLOGFILES 16
3 MAXLOGMEMBERS 3
4 MAXDATAFILES 100
5 MAXINSTANCES 8
6 MAXLOGHISTORY 292
7 LOGFILE
8 GROUP 4 (
9 'D:\ORACLE11G64\ORADATA\LINUXDB\REDO04.RDO',
10 'D:\ORACLE11G64\ORADATA\LINUXDB\REDO04A.RDO'
11 ) SIZE 50M BLOCKSIZE 512,
12 GROUP 5 (
13 'D:\ORACLE11G64\ORADATA\LINUXDB\REDO05.RDO',
14 'D:\ORACLE11G64\ORADATA\LINUXDB\REDO05A.RDO'
15 ) SIZE 50M BLOCKSIZE 512,
16 GROUP 6 (
17 'D:\ORACLE11G64\ORADATA\LINUXDB\REDO06.RDO',
18 'D:\ORACLE11G64\ORADATA\LINUXDB\REDO06A.RDO'
19 ) SIZE 50M BLOCKSIZE 512
20 -- STANDBY LOGFILE
21 DATAFILE
22 'D:\ORACLE11G64\ORADATA\LINUXDB\SYSTEM01.DBF',
23 'D:\ORACLE11G64\ORADATA\LINUXDB\SYSAUX01.DBF',
24 'D:\ORACLE11G64\ORADATA\LINUXDB\UNDOTBS01.DBF',
25 'D:\ORACLE11G64\ORADATA\LINUXDB\USERS01.DBF'
26 CHARACTER SET AL32UTF8
27 ;

Control file created.

SQL> alter database open resetlogs;

Database altered.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 1252663296 bytes
Fixed Size 2280816 bytes
Variable Size 402653840 bytes
Database Buffers 838860800 bytes
Redo Buffers 8867840 bytes
Database mounted.
Database opened.
SQL> create temporary tablespace TEMP01 TEMPFILE 'D:\Oracle11g64\oradata\LINUXDB\TEMP01.dbf' SIZE 200M;

Tablespace created.

SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE TEMP01;

Database altered.

SQL> drop tablespace TEMP including contents and datafiles;

Tablespace dropped.

SQL> create temporary tablespace TEMP TEMPFILE 'D:\Oracle11g64\oradata\LINUXDB\TEMP02.dbf' SIZE 200M;

Tablespace created.

SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE TEMP;

Database altered.

--You may drop the TEMP01 tablespace and its contents after a shutdown/startup cycle
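--A minimal sketch of that cleanup, once the instance has been bounced and TEMP is the default again:
--
--  drop tablespace TEMP01 including contents and datafiles;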

--Now we will attempt to change the DBID & DATABASE name for the database

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area 1252663296 bytes
Fixed Size 2280816 bytes
Variable Size 402653840 bytes
Database Buffers 838860800 bytes
Redo Buffers 8867840 bytes
Database mounted.
SQL> quit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

D:\Oracle11g64\product\11.2.0\dbhome_1\BIN>nid target=/ DBNAME=LINUXDB

DBNEWID: Release 11.2.0.4.0 - Production on Thu Mar 1 14:52:32 2018

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

Connected to database DB11G (DBID=439294518)

Connected to server version 11.2.0

Control Files in database:
D:\ORACLE11G64\ORADATA\LINUXDB\CONTROL01.CTL
D:\ORACLE11G64\ORADATA\LINUXDB\CONTROL02.CTL
D:\ORACLE11G64\ORADATA\LINUXDB\CONTROL03.CTL

Change database ID and database name DB11G to LINUXDB? (Y/[N]) => Y

Proceeding with operation
Changing database ID from 439294518 to 3227660209
Changing database name from DB11G to LINUXDB
Control File D:\ORACLE11G64\ORADATA\LINUXDB\CONTROL01.CTL - modified
Control File D:\ORACLE11G64\ORADATA\LINUXDB\CONTROL02.CTL - modified
Control File D:\ORACLE11G64\ORADATA\LINUXDB\CONTROL03.CTL - modified
Datafile D:\ORACLE11G64\ORADATA\LINUXDB\SYSTEM01.DB - dbid changed, wrote new name
Datafile D:\ORACLE11G64\ORADATA\LINUXDB\SYSAUX01.DB - dbid changed, wrote new name
Datafile D:\ORACLE11G64\ORADATA\LINUXDB\UNDOTBS01.DB - dbid changed, wrote new name
Datafile D:\ORACLE11G64\ORADATA\LINUXDB\USERS01.DB - dbid changed, wrote new name
Datafile D:\ORACLE11G64\ORADATA\LINUXDB\TEMP02.DB - dbid changed, wrote new name
Control File D:\ORACLE11G64\ORADATA\LINUXDB\CONTROL01.CTL - dbid changed, wrote new name
Control File D:\ORACLE11G64\ORADATA\LINUXDB\CONTROL02.CTL - dbid changed, wrote new name
Control File D:\ORACLE11G64\ORADATA\LINUXDB\CONTROL03.CTL - dbid changed, wrote new name
Instance shut down

Database name changed to LINUXDB.
Modify parameter file and generate a new password file before restarting.
Database ID for database LINUXDB changed to 3227660209.
All previous backups and archived redo logs for this database are unusable.
Database is not aware of previous backups and archived logs in Recovery Area.
Database has been shutdown, open database with RESETLOGS option.
Succesfully changed database name and ID.
DBNEWID - Completed succesfully.

D:\Oracle11g64\product\11.2.0\dbhome_1\BIN>sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Thu Mar 1 14:52:57 2018

Copyright (c) 1982, 2013, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup nomount;
ORACLE instance started.

Total System Global Area 1252663296 bytes
Fixed Size 2280816 bytes
Variable Size 402653840 bytes
Database Buffers 838860800 bytes
Redo Buffers 8867840 bytes

SQL> create spfile from pfile;

File created.

SQL> shutdown immediate;
ORA-01507: database not mounted

ORACLE instance shut down.
SQL> startup nomount;
ORACLE instance started.

Total System Global Area 1252663296 bytes
Fixed Size 2280816 bytes
Variable Size 402653840 bytes
Database Buffers 838860800 bytes
Redo Buffers 8867840 bytes
SQL> alter system set DB_NAME=LINUXDB scope=spfile;

System altered.

SQL> shutdown immediate;
ORA-01507: database not mounted

ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.

Total System Global Area 1252663296 bytes
Fixed Size 2280816 bytes
Variable Size 402653840 bytes
Database Buffers 838860800 bytes
Redo Buffers 8867840 bytes
Database mounted.
SQL> alter database open resetlogs;

Database altered.

Give it a try and recompile the invalid objects (in my case there were none).
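If the restored database does end up with invalid objects, a quick check followed by the standard utlrp.sql script shipped with the database usually suffices; a short sketch, run as SYSDBA:

SQL> SELECT COUNT(*) FROM dba_objects WHERE status = 'INVALID';

SQL> @?/rdbms/admin/utlrp.sql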

That’s all folks


Oracle| Altering table structures through PL/SQL block

February 21, 2018

Hi guys

Before moving to Oracle EBS (R12) we were using a mini ERP system that was as robust as possible, but lacked many features required in modern business environments. When the company that had distributed the software for over a decade finally dissolved, we were given a full copy of the source code, and it has been in my custody all these years.

Recently I wanted to revive it, after WebLogic 12c and seeing the possibilities of “Java Web Start” for small and mid-scale industries.

One of the major hurdles I came across was the columns that store document numbers. This software, designed during the late 90s, allowed a maximum of 6-digit document numbers across all modules, spread over 496 tables!

It was not feasible to go through each table and modify it using a GUI tool like Oracle SQL Developer. Instead, I went for a PL/SQL block.

And here comes the PL/SQL block:


SET SERVEROUTPUT ON SIZE UNLIMITED;

DECLARE

    -- All tables owned by the given schema
    CURSOR T1(OWNER_NAME VARCHAR2) IS
        SELECT TABLE_NAME FROM DBA_TABLES
        WHERE OWNER = OWNER_NAME;

    -- All columns of a given table, with their data type, precision and scale
    CURSOR C1(OWNER_NAME VARCHAR2, TBL_NAME VARCHAR2) IS
        SELECT COLUMN_NAME, DATA_TYPE, DATA_LENGTH, DATA_PRECISION, DATA_SCALE
        FROM DBA_TAB_COLUMNS
        WHERE OWNER = OWNER_NAME
          AND TABLE_NAME = TBL_NAME;

    V_EXECUTE_MODIFY BOOLEAN := FALSE; -- set to TRUE to actually run the ALTER TABLE statements

BEGIN
    FOR T IN T1('OPMAN') LOOP
        FOR C IN C1('OPMAN', T.TABLE_NAME) LOOP
            -- Document-number columns in this application are NUMBER(6,0)
            IF (C.DATA_TYPE = 'NUMBER' AND C.DATA_PRECISION = 6 AND C.DATA_SCALE = 0) THEN

                DBMS_OUTPUT.PUT_LINE(T.TABLE_NAME || ' COLUMN NAME -->' || C.COLUMN_NAME);

                IF (V_EXECUTE_MODIFY) THEN
                    -- Skip columns that must NOT be touched, e.g. the YYYYMM period column;
                    -- extend the condition to NOT IN ('ABC','DEF','GHI') for more exclusions
                    IF (C.COLUMN_NAME <> 'T_YR_MTH') THEN
                        -- Owner-qualified so the block can be run from a DBA account
                        EXECUTE IMMEDIATE 'ALTER TABLE OPMAN.' || T.TABLE_NAME ||
                                          ' MODIFY ' || C.COLUMN_NAME || ' NUMBER';
                    END IF;
                END IF;
            END IF;
        END LOOP;
    END LOOP;
END;
/

So what does this block do?


CURSOR T1(OWNER_NAME VARCHAR2) IS
    SELECT TABLE_NAME FROM DBA_TABLES
    WHERE OWNER = OWNER_NAME;

This cursor fetches all table names from DBA_TABLES for the owner passed in as a parameter, in this specific case ‘OPMAN’.


CURSOR C1(OWNER_NAME VARCHAR2, TBL_NAME VARCHAR2) IS
    SELECT COLUMN_NAME, DATA_TYPE, DATA_LENGTH, DATA_PRECISION, DATA_SCALE
    FROM DBA_TAB_COLUMNS
    WHERE OWNER = OWNER_NAME
      AND TABLE_NAME = TBL_NAME;

The above cursor takes two parameters, the owner name and the table name, the latter coming from the first cursor.


FOR T IN T1('OPMAN') LOOP
    FOR C IN C1('OPMAN', T.TABLE_NAME) LOOP
        -- Document-number columns in this application are NUMBER(6,0)
        IF (C.DATA_TYPE = 'NUMBER' AND C.DATA_PRECISION = 6 AND C.DATA_SCALE = 0) THEN

            DBMS_OUTPUT.PUT_LINE(T.TABLE_NAME || ' COLUMN NAME -->' || C.COLUMN_NAME);

            IF (V_EXECUTE_MODIFY) THEN
                -- Skip columns that must NOT be touched, e.g. the YYYYMM period column;
                -- extend the condition to NOT IN ('ABC','DEF','GHI') for more exclusions
                IF (C.COLUMN_NAME <> 'T_YR_MTH') THEN
                    EXECUTE IMMEDIATE 'ALTER TABLE OPMAN.' || T.TABLE_NAME ||
                                      ' MODIFY ' || C.COLUMN_NAME || ' NUMBER';
                END IF;
            END IF;
        END IF;
    END LOOP;
END LOOP;

Let us analyze this part of the block now.

The first cursor passes each table name to the second cursor, and the inner loop walks through all columns matching the data type, precision and scale. I check for NUMBER with precision 6 and scale 0, which is common to all columns defined for document numbers. However, there were a few columns holding YearMonth (YYYYMM) values captured for month-end processes, which must not be touched. As our mini ERP is a standard business application designed with a consistent naming convention across tables, it was easy to isolate the columns matching that data type, precision and scale, and to limit the ALTER commands to only the columns defined for document numbers.

The above example could be developed further to alter tables in bulk as the entire application is modernized, for instance to widen columns to bigger sizes; a dry-run variation is sketched below.
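A minimal dry-run sketch (same OPMAN schema and NUMBER(6,0) filter as above): it only prints the ALTER statements, so they can be reviewed or spooled into a script before anything is executed.

SET SERVEROUTPUT ON SIZE UNLIMITED;

BEGIN
    FOR C IN (SELECT TABLE_NAME, COLUMN_NAME
                FROM DBA_TAB_COLUMNS
               WHERE OWNER = 'OPMAN'
                 AND DATA_TYPE = 'NUMBER'
                 AND DATA_PRECISION = 6
                 AND DATA_SCALE = 0
                 AND COLUMN_NAME <> 'T_YR_MTH') LOOP
        -- Print instead of EXECUTE IMMEDIATE: review/spool the output, then run it as a script
        DBMS_OUTPUT.PUT_LINE('ALTER TABLE OPMAN.' || C.TABLE_NAME ||
                             ' MODIFY ' || C.COLUMN_NAME || ' NUMBER;');
    END LOOP;
END;
/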

Interesting? Why don’t you give it a try and post your comments?

regards,

rajesh


ORA-00837: Specified value of MEMORY_TARGET greater than MEMORY_MAX_TARGET

January 7, 2018

Hi guys

Recently I was trying to understand the AUTOMATIC MEMORY MANAGEMENT (AMM) feature available from 11g onwards, and ended up in a drastic situation where I set the MEMORY_TARGET parameter to a value higher than MEMORY_MAX_TARGET.

This blocked the database from starting, and a quick googling brought me to the correct solution.

Connect to the idle database as SYS:

# sqlplus / as sysdba

SQL> create pfile from spfile;

-- Applies where an spfile is used instead of a pfile

Edit the pfile, comment out or delete the MEMORY_MAX_TARGET parameter, then get back to SQL*Plus:

SQL> create spfile from pfile;

SQL> startup;

This should resolve the error ORA-00837: Specified value of MEMORY_TARGET greater than MEMORY_MAX_TARGET
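To stay out of this situation in the first place, both parameters can be set together, keeping the maximum at least as large as the target; a small sketch with placeholder values (MEMORY_MAX_TARGET is static, hence scope=spfile and a restart):

SQL> alter system set memory_max_target=4G scope=spfile;
SQL> alter system set memory_target=3G scope=spfile;
SQL> shutdown immediate
SQL> startup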

Hope this helps a few folks out there!

regards,

rajesh


Oracle EBS R12(12.0.x) | Disable concurrent programs executing immediately after a clone

November 28, 2017

Hi guys

One of the annoyances the business suffers is alerts and concurrent programs that start sending highly confusing messages to users immediately after a clone completes. Application tier cloning finishes by trying to start all related services, so the best way to stop the concurrent managers/alerts from firing up is NOT to let the cloning process start the application tier services at all, giving you time to disable the alerts/concurrent programs that are NOT required in the cloned instance.

The first step is to alter $COMMON_TOP/clone/bin/adcfgclone.pl (a Perl script; please refer to the image below).

[Image: the section of adcfgclone.pl that starts the application services]

Change the highlighted portion as shown below. This will stop the clone script from kick-starting the application services after a successful clone.

elsif (!($PLATFORM eq "win32")) {
print "\n Not Starting application Services for $s_dbSid:\n";
#bugfix:5838814 subgovin
#runPipedCmd("$s_config_home/admin/scripts/adstrtal.$scrExt -nopromptmsg", "$s_apps_user", "$gConfig{'appspass'}");
}

Now you may proceed to disable all or selected alerts/concurrent programs from the back end before starting the application services.
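For the concurrent programs side, one commonly used back-end approach is to place all pending requests on hold from SQL*Plus as the APPS user; a hedged sketch only (verify the phase/status code meanings against your own instance before relying on it):

UPDATE fnd_concurrent_requests
   SET hold_flag = 'Y'
 WHERE phase_code = 'P';  -- 'P' = Pending, which includes scheduled requests

COMMIT;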

Please note, this workaround has ONLY been tested on 12.0.x releases of Oracle Applications. Please adjust it to your environment.

References:

http://onlineappsdba.blogspot.com/2008/06/prevent-concurrent-requests-executing.html

http://oracle-latest-technology.blogspot.com/2013/11/how-to-disable-all-scheduled-concurrent.html

regards,

rajesh


Oracle Backup to Google Drive?

June 6, 2017

Hi guys

This is a follow-up to my previous post about using a simple batch script to create an Oracle dump export file on a regular basis.

Backup. The most essential, yet often ignored, element of the digital world even today, as many small-scale industries feel the investment in it hardly pays off unless a disaster strikes. In my personal experience, convincing management to go for a sophisticated backup solution was always the toughest part, until we had a HUGE disaster.

As a rule of thumb, the first thing I always set up for an Oracle database (if it is truly small) is a nightly dump export after normal working hours, in addition to RMAN backups. These export files are kept on a different partition, monitored regularly and purged at the beginning of each month, keeping only the last day of the previous month, which in turn is deleted at the beginning of a new year.

Keeping the backups on the same hardware can prove fatal when that hardware fails; almost all servers are configured with RAID at various levels, and in such scenarios, even if the drives are intact, retrieving data from RAID volumes is a job for professionals, costing money and time.

For small databases, like the one mentioned in my previous post, we can design multiple options, such as mapping a network folder and copying the files automatically as part of the backup script whenever a new dump file is created.

I devised two methods for my party:

  1. FTP the latest compressed dump file to another machine hosting an FTP server
  2. Upload the latest compressed dump file to Google Drive (free 15GB)

The first method was already explained here, so I will go to the second method, in which Google Drive sync is used to ensure the party has a valid backup stored somewhere in the cloud.

  • Database dump export size: 300MB approximately
  • Zipped dump file size: 50MB approximately

Install Google Drive on your Windows 2008 or Windows 2012 server machine. You may need to install the corresponding Visual C++ Redistributable packages to get past Python-related errors. Please read more here for solutions.

Once Google Drive is working fine, you can use the following script, which first creates a dump file, then zips the latest dump file, and finally copies the zip file to the Google Drive folder for cloud syncing.

Please note, I have moved the Google Drive folder from its default location to somewhere else, E:\Google_Drive, so that my batch file has the shortest possible path for copying. If you plan to do the same, change the default location by exiting the application first, then pointing Google Drive to your folder of choice when it complains about the missing default location.

Windows batch file for creating, zipping and copying the files to Google Drive:

@echo off
::Create today's dump export; the date/t output is split to build a date-stamped file name
::(adjust tokens/delims to your regional date format, and use your own credentials/connect string)
FOR /F "tokens=2-4 delims=/ " %%a IN ('date/t') DO exp system/password@connectionstring full=y file=d:\Orabackup\exp_%%b%%a%%c.dmp

SETLOCAL
::Get the name of the latest dump file generated by the exp command
::Switch to the folder where the dump (.dmp) files are stored
CD D:\Orabackup\
::D:\Orabackup is the folder where the daily dump files are stored
for /f "tokens=*" %%a in ('dir *.dmp /o:-d /b') do set NEWEST=%%a&& goto :next
:next
REM echo The most recently created file is %NEWEST%
::http://stackoverflow.com/questions/15567809/batch-extract-path-and-filename-from-a-variable
FOR %%i IN ("%NEWEST%") DO (
REM ECHO filedrive=%%~di
REM ECHO filepath=%%~pi
SET ZIPNAME=%%~ni
REM ECHO fileextension=%%~xi
)

SET ZIPNAME=%ZIPNAME%.zip
::Requires a command-line zip utility (e.g. Info-ZIP zip.exe or 7-Zip) available on the PATH
zip %ZIPNAME% %NEWEST%
::E:\Google_Drive is the folder used by Google Drive in my setup
COPY %ZIPNAME% E:\Google_Drive

del %ZIPNAME%

::Exit

While this method looks pretty awesome for small databases, please note that it may not be feasible at all for larger ones. I would opt for this method only when the backup dump compresses to 400-500MB at most, keeping in mind the possibility of corrupt compressed files.

Either way, as long as the party has a reliable internet connection with decent bandwidth relative to the size of the compressed file, they will always have access to a recent backup dump file, stored free in the cloud!

Does it look decent? ;)

Tip: run Google Drive sync as a Windows service.

regards,

rajesh


Oracle Applications R12 | error while loading shared libraries: libig.so.0…

December 11, 2016

 

applprod@erp-prodbak.my.home:/u01/applprod/PROD/apps/apps_st/appl/au/12.0.0/forms/US>frmcmp_batch module=$AU_TOP/forms/US/XXITMDTLS.fmb userid=apps/apps output_file=$ONT_TOP/forms/US/XXITMDTLS.fmx Module_Type=FORM compile_all=Special
frmcmp_batch: error while loading shared libraries: libig.so.0: cannot open shared object file: No such file or directory
applprod@erp-prodbak.my.home:/u01/applprod/PROD/apps/apps_st/appl/au/12.0.0/forms/US>

Recently I cloned our production instance to check the feasibility of OEL 6.7 and, as usual, copied over most of the bash profiles for the oracle and application manager users from the production server.

I came across the “cannot open shared object file: No such file or directory” error while trying to compile an old form module, and I didn’t have a clue how to deal with it; it happened to me for the first time in eight years of working with Oracle Applications.

I cross-checked the folder permissions against the production instance and everything was fine. The only thing I could think of that might have gone wrong was the bash profile for the application user, and after validating it I realized I was sourcing the wrong environment file!

Instead of sourcing the “/apps/apps_st/appl/APPSPRODBAK_hostname.env” file, I was sourcing “/apps/apps_st/appl/PRODBAK_hostname.env”. I realized it while trying to echo the $ORACLE_HOME variable as application manager user.

So, if you come across an “error while loading shared libraries” error while trying to compile, before you start dismantling your instance, check whether you have sourced the environment file correctly.

regards,

rajesh