Let us see how Oracle APEX (which, whether you like it or not, will eventually replace Oracle Forms for small to medium applications) can be installed on your Windows 10/Server this time.
We'll be trying the latest version, Oracle APEX 18.2, on Windows 10 Pro running the latest build 1809 (fully patched as of this date) and Oracle Database 11g R2 (11.2.0.4). Oracle depends heavily upon VC++ components, so make sure your Windows box is fully patched.
Download the Oracle APEX installer from the APEX repository. Please note, for APEX 18.x you MUST have an 11g R2 11.2.0.4 or later database. 11g R2 11.2.0.4 is only available to customers with valid support, hence if you are a student or freelancer, your best bet is the 12c database, which is still freely available to download.
Unzip the downloaded .zip file to a clearly structured directory. For example, you can extract the zip file to D:\APEX, which will create an "apex" folder within "D:\APEX\". Check the attached image.
I usually keep all the Oracle installers under a single folder, "D:\Oracle_Installers", so for the rest of the instructions I will be using that directory structure. Please don't get confused.
Switch to the apex folder from an elevated command prompt, in my case "D:\Oracle_Installers\apex_18.2\apex". Source the database environment (eg: SET ORACLE_SID=ORCL) in case you have multiple Oracle products installed on the same box.
Start SQLPLUS as sysdba
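At the prompt, the two steps above look like this (the ORCL SID is just the example from above; use your own instance name):

```shell
SET ORACLE_SID=ORCL
sqlplus / as sysdba
```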
A default installation of 11g R2 installs APEX, which you should remove at this stage. If you are too eager to know which version is installed, you may run the following query:
select version from dba_registry where comp_id='APEX';
If you haven't removed the APEX installation, it should return a row with the installed APEX version information.
You should remove the existing APEX prior to installing the 18.x version(s). To remove the installed version, please execute the following:
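The removal is done with the apxremov.sql script that ships in the same apex folder of the distribution (run it from SQL*Plus while connected as sysdba):

```sql
-- run from the apex folder, connected as sysdba
@apxremov.sql
```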
Wait for the uninstallation to complete, exit SQL*Plus and start it once again as sysdba (mandatory).
Now we can start the installation of APEX 18.x. Prior to starting the installation, let us create a dedicated tablespace for APEX.
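A minimal sketch of such a tablespace; the datafile path and size here are assumptions, so adjust them to your own layout (the name APEX matches the tablespace parameters passed to apexins below):

```sql
-- datafile location and size are illustrative
CREATE TABLESPACE apex
  DATAFILE 'D:\ORACLE\ORADATA\ORCL\APEX01.DBF'
  SIZE 500M AUTOEXTEND ON NEXT 100M;
```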
@apexins APEX APEX TEMP /i/
(Note the forward slash "/")
Wait for the installation to finish. My box has an SSD, hence the installation completes in a few minutes (3-5 minutes, I believe). It may take a bit more time depending upon the hardware resources available.
Once the installation completes successfully, we have to configure APEX. To configure it, execute "apex_epg_config.sql", which is available in the current path. Please note, here is the catch: as with the installation, Oracle expects forward slashes around the parent folder where you have the image files. I have extracted the installation files to "D:\Oracle_Installers\apex_18.2\" & this is the PARENT directory for the apex installer. Hence we will call the EPG script as below (correct your path appropriately):
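With the parent directory above, the call looks like this:

```sql
@apex_epg_config.sql D:/Oracle_Installers/apex_18.2
```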
(Note all back slashes are replaced with forward slashes)
Once the script finally exits after successfully copying the image files to the APEX repositories, you need to unlock the following Oracle database schemas:
ALTER USER anonymous ACCOUNT UNLOCK;
ALTER USER xdb ACCOUNT UNLOCK;
ALTER USER apex_public_user ACCOUNT UNLOCK;
ALTER USER flows_files ACCOUNT UNLOCK;
Now, go ahead with the last configuration by executing
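That last step is the apxchpwd.sql script from the same apex folder, which creates or updates the APEX instance administrator account:

```sql
-- run from the apex folder, connected as sysdba
@apxchpwd.sql
```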
This script will ask for ADMIN user details. Select ADMIN as the admin user and provide an email address. Please note, you must include a punctuation symbol in the password, and the allowed punctuation characters are as below.
That's all folks. If there are no errors, you are all good to go. You may need to configure your database server for better performance, which we will discuss a bit later.
As organizations grow, they end up with multiple software solutions taking care of different areas of the business. Depending on availability, there will be situations where a business ends up with different database technologies, and data exchange between them becomes mandatory to avoid duplication and additional effort.
Today we will discuss a requirement that involves MS SQL Server and an Oracle database. While Microsoft has a well established solution called Linked Servers, which allows MS SQL to connect to heterogeneous database technologies, Oracle's approach is pretty tedious and may require more effort than one could anticipate. Yes, I am talking about Oracle "Golden Gate", which allows an Oracle database to access other database technologies through a "gateway".
Instead of discussing the complex "Golden Gate", we will see how simple (& complex at the same time) it is to set up Linked Servers from Microsoft SQL Server (for Oracle).
Install the client
Depending upon your OS architecture (32-bit/64-bit), you need to install the appropriate Oracle client. I suggest installing the 64-bit Oracle client on a 64-bit OS and the 32-bit client on a 32-bit OS; you save loads of effort by doing so. I always do a full installation of the Oracle client and later add the missing components that are mandatory for "Distributed Transactions". So what is a distributed transaction?
Your MS SQL Database table receives a row (record)
Your expectation is to replicate the same row to Oracle table
You have an “after insert” trigger defined with the MS SQL Table
A full installation of the Oracle client (Administrator) doesn't install the component mandatory to facilitate the above requirement. You must install "Oracle Services for Microsoft Transaction Server" in order to do such a distributed transaction from MS SQL to an Oracle database, by invoking the client installer again. Make sure that when the installer kicks in, you select the already existing Oracle home, so that the installation will not create another home for the additional components you install. Cross-check whether the Oracle OLE DB driver is installed & install the Oracle Services for Microsoft Transaction Server component.
Analyze your Oracle provider and make changes to the stack
Make sure your provider, in our case "OraOLEDB.Oracle", is configured prior to creating linked servers.
Review the following options:
Support "Like" operator: enable this option.
Index as access path: disable this option; disabling it is mandatory for "Distributed Transactions". If your Oracle table has indexes and this option is not disabled, an insert attempt from a table trigger will fail with the following error:
The OLE DB provider “OraOLEDB.Oracle” for linked server “ERPPROD” returned a “NON-CLUSTERED and NOT INTEGRATED” index “XXFPPUNCHM_N1” with the incorrect bookmark ordinal 0.
Create a linked Server
Right click on the "Linked Servers" node and select "New Linked Server…". I am providing the details of a linked server that is already created at my end; adjust your linked server details accordingly. Make sure the Oracle client installation folder is in your "PATH" & that the tnsnames.ora file has an entry for the Oracle service (which you will enter in the "Provider string" column).
If you have entered the mandatory elements correctly, you have successfully created a linked server. If anything goes wrong you will be prompted about it; address the issues. Please note, you can always revisit and change the server options at later stages. However, options under "General" cannot be modified; if they require modifications, you need to drop and recreate the linked server.
Test the newly created linked Server
The actual issues start now. Please note, the laptop I use for all development has multiple versions of Oracle databases and clients installed, in addition to .Net development tools, Android, PHP etc., to name a few. If you are planning to implement the linked server solution for a production environment, make sure you have only one Oracle product installed along with MS SQL Server. Otherwise you are going to run into any number of complexities, few of which are easily addressed.
For example, my development machine has:
Oracle database 11g
Oracle Client 12c (12.1)
Oracle Database 12c (12.2)
This is a more than complex situation to address when it comes to MS SQL linked servers. One of the toughest issues to address is the following error:
The OLE DB provider “OraOLEDB.Oracle” for linked server “” supplied inconsistent metadata for a column. The column “” (compile-time ordinal 2) of object “””.””” was reported to have a “DBCOLUMNFLAGS_ISFIXEDLENGTH” of 16 at compile time and 0 at run time.
It took me almost 1.5 days to figure out what could be wrong, as a simple SQL query like the following against the newly created linked server continuously gave me the above error.
Select count(*) from [ERPTEST]..[APPS].[XXFPPUNCHM]
I came across a post on stackoverflow.com which said this could be due to multiple Oracle products being installed on the same box, & there were a few instructions to overcome it, which didn't work out for me. However, I was successful with the production server, on which I only had the Oracle 11g client installed. To ensure real-time replication of the data from MS SQL to the Oracle database, I had to alter a few registry values & restart the server. To my utter surprise, I tested the same scenario on three different boxes & all three experiences were different from each other.
The production server, where I have SQL Server 2014 Standard Edition, would not post rows to the Oracle database as part of a "Distributed Transaction" without the registry hacks.
My development laptop wouldn't even fetch rows from the Oracle database without tweaking the PATH environment variable & the registry with proper .dll paths.
My home PC does everything without having to tweak the PATH or registry, though it has almost the same setup as my development laptop. The ONLY difference with my home PC is that instead of the 12c Oracle client, I have the 11g client.
Now we will address each of these situations. Please note, the following exercises require you to make registry changes, so please make sure that you take a full backup of the registry prior to attempting any of the possible solutions. (If you have ONLY one Oracle product installed (database or client), please move to Step#2.)
Step#1: Register the Oracle OLEDB driver (this is to ensure that we are using the same stack across the solution). Only one version of the OLEDB driver can be active at a time, regardless of how many Oracle products are installed. If you installed 12c after 11g, you will have the 12c OLEDB driver active.
From an elevated command prompt, switch to Oracle Client/Database BIN folder (eg: D:\oracle\product\11.2.0\dbhome_1\BIN)
Issue the following command
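The registration is done with regsvr32 against the Oracle OLEDB dll sitting in that BIN folder. The dll carries the major version number in its name, so for an 11g home it is typically OraOLEDB11.dll; adjust the name for your version:

```shell
regsvr32 OraOLEDB11.dll
```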
This should register the OLE DB driver for you.
Step#2: Check your OS PATH environment variable; your client/database bin path must be the first Oracle product entry, eg:
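For example, with the 11g home used above, the front of PATH would look like this (a sketch; the path is the one from my box):

```shell
SET PATH=D:\oracle\product\11.2.0\dbhome_1\BIN;%PATH%
```

Note that SET only affects the current session; for a permanent change, edit the PATH variable from System Properties and move the 11g BIN entry to the front.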
This will ensure that tnsnames.ora is sought in this path, in addition to the Oracle dlls. As we are using the Oracle 11g database as the first Oracle product in the PATH environment variable (refer to the image above), we will hack the registry with all the elements related to this specific product (again, please make a backup of the registry, at minimum of the specific key).
Refer to the image above and adjust the entries as per your Oracle installation. Once the registry is modified, restart your computer (mandatory).
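For reference, the entries in question live under the MSDTC MTxOCI key, per Microsoft's documentation on using MSDTC with Oracle. A hedged sketch for an 11g client (dll names are 11g specific; on 64-bit Windows a 32-bit client uses the Wow6432Node branch of the same path instead):

```shell
reg add "HKLM\SOFTWARE\Microsoft\MSDTC\MTxOCI" /v OracleXaLib /t REG_SZ /d oraclient11.dll /f
reg add "HKLM\SOFTWARE\Microsoft\MSDTC\MTxOCI" /v OracleSqlLib /t REG_SZ /d orasql11.dll /f
reg add "HKLM\SOFTWARE\Microsoft\MSDTC\MTxOCI" /v OracleOciLib /t REG_SZ /d oci.dll /f
```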
Once the box has restarted, try to insert a row into the Oracle table. Example:
insert into [PRODBAK]..[APPS].[XXFPPUNCHM]
Right after setting up the correct .dll files and the relevant paths in the registry, SQL Management Studio must stop complaining about "The OLE DB provider “OraOLEDB.Oracle” for linked server “” supplied inconsistent metadata for a column. The column “” (compile-time ordinal 2) of object “””.””” was reported to have a “DBCOLUMNFLAGS_ISFIXEDLENGTH” of 16 at compile time and 0 at run time.”
Now create your table trigger, through which you want to push a row to the Oracle table. A simple after-insert trigger could be defined like the following:
create trigger addRecordsToERPTable2 on [UNIS].[dbo].[tRajesh]
after insert as
insert into [PRODBAK]..[APPS].[XXFPPUNCHM]
select * from inserted; -- the column lists were omitted in the original post; map your columns explicitly
Step#3: Now we will try to initiate a “Distributed Transaction” with the new registry and other hacks
You may come across an error while a distributed transaction is initiated, with Management Studio complaining "Msg 8501, Level 16, State 3, Procedure addRecordsToERPTable2, Line 13 MSDTC on server ” is unavailable.” This is a pretty simple error to address. Open Windows Services & check whether the "Distributed Transaction Coordinator" service has started. My development machine initially had this service's startup mode set to "Manual"; I changed it to "Automatic (Delayed Start)" and started the service. Adjust according to your situation.
Basically, the few things above should address most of the common issues you would face with an Oracle linked server from MS SQL.
Finally, Oracle clearly states there are limitations to using their driver for linked servers from MS SQL. So expect the unexpected while using such a setup. For me, it was simple transactions. If you are expecting rapid replication based on complex business requirements, please test your scenarios as much as possible prior to adopting the above hacks.
A pretty long title? Well, recently I came across a situation where I needed a trigger on an MS SQL Server table to insert some information into our Oracle database.
The MS SQL Server is hosted on a 64-bit Windows OS, with the 64-bit Oracle 11g client installed (for a 64-bit OS, you must install the 64-bit Oracle client for the Oracle OLEDB provider).
I did some sample inserts using Management Studio and created a trigger like the following on one of the sample tables:
create trigger addRecordsToERPTable on [UNIS].[dbo].[tRajesh]
after insert as
insert into [XYZ]..[APPS].[XXFPPUNCHM]
select * from inserted; -- the column lists were omitted in the original post; map your columns explicitly
So the idea was pretty simple: like an audit, as soon as the SQL table "tRajesh" had a new row inserted, the after-insert trigger should send the same row to the underlying table in Oracle. Instead I started getting the following error:
OLE DB provider “OraOLEDB.Oracle” for linked server “XYZ” returned message “New transaction cannot enlist in the specified transaction coordinator. “. Msg 7391, Level 16, State 2, Procedure addRecordsToERPTable, Line 5 The operation could not be performed because OLE DB provider “OraOLEDB.Oracle” for linked server “XYZ” was unable to begin a distributed transaction.
I'm not very familiar with MS SQL or the complexities related to linked server environments, so I started my next series of Google searches. I referred to tons of discussions, however I was not getting anywhere with the dreaded situation. During the frantic search for a solution, I executed the instructions available at different links.
Even after making the changes mentioned in the above threads, I still kept receiving the same errors whenever a row was inserted into my SQL sample table. So I continued searching for a solution and came across a thread.
Although the article addresses pretty old OS and Oracle environments, the solution is still applicable to later OSes and Oracle clients. For example, my MS SQL Server is installed on Windows 2008 R2 and the Oracle client I am using with the server is 11g R2 64-bit.
Let us see quickly what Microsoft provides as a solution.
I checked the registry of my server and found something pretty interesting like below:
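If you want to inspect the same branch on your own box, query the MSDTC MTxOCI key; the values of interest are OracleXaLib, OracleSqlLib and OracleOciLib:

```shell
reg query "HKLM\SOFTWARE\Microsoft\MSDTC\MTxOCI"
```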
Now, Oracle names almost all of their major dll files in a particular fashion. Most of the time you will find the dll files have the major version number at the end of the filename; for example, if your Oracle database is 8.0, your client dll file will be "Oraclient8.dll", and if you are using Oracle 11g, the filename would be "Oraclient11.dll".
After taking a full backup of the registry, I modified the values to be 11g specific & restarted the server (as per the instructions available for Oracle 8.1 in the Microsoft document).
Once the server was back up, I went ahead and tried to insert a new row into my sample table, and that was it: no more errors, and the row was inserted into both the MS SQL table and the Oracle table at the same time.
So if you were frantically searching for a solution, this post may help you to resolve it.
I confirm I am not a DBA, yet my last 10 years of Oracle EBS experiments have given me opportunities to learn a few facts that are not generally documented.
The general approach to deploying a new setup/custom development for EBS is to develop and test it thoroughly first, for which a business needs a testing environment. To provide one, a DBA "clones", or duplicates, the existing environment, and this process is called cloning.
The majority of environments may not have hardware with the same configuration as is available for production; this causes the cloned instances to perform poorly, making the entire testing exercise pretty painful.
During the initial stages of learning, I had a tough time digesting our part-time DBA's arguments and reasoning for the slowness, tied to the hardware limitations. Our TEST server has dual processors and 20GB of memory, with more than 3 TB of storage hooked up to a 1Gb network interface.
As my confidence kept building, I started exploring the database setup and realized the DBA had allocated only 2GB for the SGA & 1GB for the PGA. When questioned, he gave me all possible excuses, like "Well, your instance doesn't require more" and so on.
We got rid of him & had another DBA in due course, who was far better and more experienced at handling bigger environments. Yet we had these nagging issues with the test instance being slow, lagging beyond our extended patience, & I started asking Oracle communities how to speed things up.
"Not being a DBA" had its own limitations. Many instructions were beyond my understanding, many sounded impractical or illogical, & finally one day I decided to set up the R12 instance (12.0.6, 11gR2 database) with 3GB for both the SGA and PGA, following an old discussion available on asktom.
That was the beginning. Our TEST instance performance improved to a level where it became difficult to distinguish between the production and test instances, even though the production instance is a beast (at least for our environment).
Although the manual SGA/PGA settings resolved some of the major performance issues (login page load, opening large forms etc.), I was still NOT convinced by the performance (queries taking a long time...). For other reasons, I had to restart the TEST server (which usually happens once in many months) & was amazed to see performance gains distinguishable from earlier times. As soon as I tried to access the instance, the login screen loaded faster (not ignoring the cache mechanisms), as did the rest of the accesses. Forms-based interfaces loaded faster, queries returned instantly, etc.
I opted for another cloning: dropped the entire R12 instance, repeated the cloning, adjusted the database parameters and "RESTARTED" the physical server, then started the database and application tiers and made sure that the performance gains I observed were not just hit and miss, but an actual, permanent HIT.
I continued my experiments further. My next attempt was to give the SGA and PGA more memory, and this time I opted for 4GB each. After the database & application restart, I realized the performance was poor. I was expecting the instance to be faster! Instead, I was dealing with an instance that was as horrible as it had been prior to my 3GB/restart pattern.
So I rolled back to the 3GB SGA/PGA setup and lived with it until our finance consultant asked me for a fresh clone with the most recent data. This time I decided to duplicate the database using RMAN rather than a traditional cold backup restore and clone.
As RMAN duplicate database doesn't require me to alter the database (the spfile is already available), all I needed was to run "adautoconfig" from the application tier and go online. Yet I felt something was awkward.
Although the instance was "okay", a few things were NOT at all okay. The database alert log was showing redo log file related issues. I checked the redo log files; things looked okay against the documents I was referring to for the RMAN duplicate database... yet.
I took the application tier offline and altered the database like the following:
Increased the SGA max size to 5GB and the SGA target to 4GB, then restarted the database (40% of the total available memory)
Left the PGA at 3GB (thus the total memory dedicated to the database is 8GB out of the 20GB available)
From the database tier, executed “adautoconfig”
Execute “adautoconfig” from the application tier
Shut down the database and restarted the physical server
Restarted the database and application tiers &, boom, the instance was once again giving me a buttery smooth experience. I'm not sure whether it is technically correct; for me it was all about executing "adautoconfig" on both tiers after the database parameter changes. I will ask the Oracle community soon to confirm it.
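The parameter changes in the steps above can be sketched as follows (connected as sysdba; the values are the ones from my 20GB box, so adjust to roughly 40% of your own memory):

```sql
ALTER SYSTEM SET sga_max_size = 5G SCOPE = SPFILE;
ALTER SYSTEM SET sga_target = 4G SCOPE = SPFILE;
ALTER SYSTEM SET pga_aggregate_target = 3G SCOPE = SPFILE;
-- sga_max_size is a static parameter, so a bounce is required
SHUTDOWN IMMEDIATE
STARTUP
```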
Now, all the above experiments, with an obsolete release of Oracle Applications R12 (12.0.6) on an unsupported OS (Linux 6), were performed over many months. Lack of true technical training must be one of the reasons contributing to the painful experiences.
Well, whatever; today I feel like a winner, just because I am able to host an instance of the application on hardware that was labelled obsolete simply because someone didn't know how to fine-tune an instance of Oracle Applications, even though he is certified to do so!
If you are like me, not certified, a jack of all trades who loves to keep experimenting, do give the above a try & post me your experiences!
Recently one of our accountants forwarded me a screenshot showing "FRM-40735 WHEN-VALIDATE-ITEM trigger raised unhandled exception ORA-01403" while he was trying to enter invoices against a "NEWLY" created vendor/supplier.
Our Oracle Applications R12 (12.0.6) is considered 99.99999% stable, without a single technical or functional issue that really became a showstopper throughout the last many years.
Well, this particular issue looked perplexing, as it was not dealt with by Oracle Applications' error reporting, & slowly we started dwelling on the Oracle support documents dealing with the given forms error "FRM-40735 WHEN-VALIDATE-ITEM trigger raised unhandled exception ORA-01403".
Most of the documentation mentioned an IBY duplicate pay party, which was not our case. Hence, I decided to open the associated form, APXINWKB.fmb, & located the WHEN-VALIDATE-ITEM trigger associated with the column "Purchase Order Number". I couldn't find any irregularities between an order that didn't raise the error and this particular purchase order that did raise the exception, which was unhandled.
After two days of continuous attempts, I remembered that such errors happen in other forms modules when we have missing information for new vendors/suppliers. It must be due to a bug: there were times when site-level details were NOT populated to organization-level details for a vendor/customer, & I decided to go through all the mandatory elements expected while creating a new vendor/supplier.
I sat with my colleague and we reached "Payment Method", and realized that the default payment method was not set for this particular vendor for the organization where we were getting this unhandled exception.
Once the payment method was set, the invoice was posted for the vendor successfully! So, if you come across this kind of unhandled exception in Oracle's proprietary forms dealing with payments/invoices, prior to exhausting yourself with cloning and patching, make sure all the mandatory elements for customers/vendors are properly filled in and assigned to all the organizations.
Hope this finding helps few consultants out there!
Okay, I was silent for a couple of months. I took a much needed break and am back to work now. As a few of you may already know, I am not a (certified) DBA, yet I have dealt with Oracle databases throughout my career, & today was "another day" when I came across something new after restoring an RMAN backup to a TEST environment.
Actually, the entire "how-to" document was provided by an APPS DBA friend (thanks to such geeks, who are never bothered about someone else "learning the tricks" and challenging them! Geeks remain geeks) & without giving much attention to a few elements, I "successfully" duplicated the 11g R2 database. Once the database came online, I realized that the instance was pretty slow & immediately monitored the alert logs.
I started reading entries like the following:
Thread 1 cannot allocate new log, sequence 56
Private strand flush not complete
Current log# 3 seq# 55 mem# 0: /u01/oratest/TEST/db/apps_st/data/redo03a.log
Current log# 3 seq# 55 mem# 1: /u01/oratest/TEST/db/apps_st/data/redo03b.log
Beginning log switch checkpoint up to RBA [0x38.2.10], SCN: 5986177240123
Thread 1 advanced to log sequence 56 (LGWR switch)
Current log# 4 seq# 56 mem# 0: /u01/oratest/TEST/db/apps_st/data/redo04a.log
Current log# 4 seq# 56 mem# 1: /u01/oratest/TEST/db/apps_st/data/redo04b.log
Completed checkpoint up to RBA [0x38.2.10], SCN: 5986177240123
Thu Oct 04 12:14:14 2018
Beginning log switch checkpoint up to RBA [0x39.2.10], SCN: 5986177240998
Thread 1 advanced to log sequence 57 (LGWR switch)
Current log# 1 seq# 57 mem# 0: /u01/oratest/TEST/db/apps_st/data/redo01a.log
Current log# 1 seq# 57 mem# 1: /u01/oratest/TEST/db/apps_st/data/redo01b.log
Thread 1 cannot allocate new log, sequence 58
Private strand flush not complete
Current log# 1 seq# 57 mem# 0: /u01/oratest/TEST/db/apps_st/data/redo01a.log
Current log# 1 seq# 57 mem# 1: /u01/oratest/TEST/db/apps_st/data/redo01b.log
Thu Oct 04 12:14:25 2018
Beginning log switch checkpoint up to RBA [0x3a.2.10], SCN: 5986177241136
Thread 1 advanced to log sequence 58 (LGWR switch)
Current log# 2 seq# 58 mem# 0: /u01/oratest/TEST/db/apps_st/data/redo02a.log
Current log# 2 seq# 58 mem# 1: /u01/oratest/TEST/db/apps_st/data/redo02b.log
Thu Oct 04 12:14:47 2018
Thread 1 cannot allocate new log, sequence 59
I landed on a discussion at https://community.oracle.com/thread/364032?start=0&tstart=0 & a few others, and in many places I read suggestions that the redo log files were getting filled too fast because of the small sizes allocated. I checked the production instance: the redo log files were 1000M in size, while the TEST instance log files were only 100M!
So my next requirement was to resize the redo log files without damaging the database.
I came across an excellent post here, https://uhesse.com/2010/01/20/how-to-change-the-size-of-online-redologs/, that explains how to create new redo log files and drop the old ones without affecting the database (or users). The best part is, you don't even have to take the database offline for any of the suggested activities.
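The approach from that post, roughly: add new, bigger log groups, switch logs until the old groups go inactive, then drop the old ones. A hedged sketch using my TEST paths (group numbers and file paths are illustrative; always check STATUS in V$LOG before dropping a group):

```sql
-- add a new 1000M group alongside the existing 100M ones
ALTER DATABASE ADD LOGFILE GROUP 4
  ('/u01/oratest/TEST/db/apps_st/data/redo04a.log',
   '/u01/oratest/TEST/db/apps_st/data/redo04b.log') SIZE 1000M;

-- force switches until the old group shows STATUS = 'INACTIVE' in V$LOG
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM CHECKPOINT;

-- then drop the old, undersized group (repeat per group)
ALTER DATABASE DROP LOGFILE GROUP 1;
```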
So if you ever face such a situation, give it a try. You would be happy like me :)
I know the subject title is not very professional this time. Yet I want to claim that I figured out something on which I spent more than a couple of years, having followed up on a few Oracle community threads (without much interesting result).
We had to retire hardware that was recommended by the Oracle EBS implementation partner within 2 years of going online with the R12 instance. We had 10g (10.2.0.3) on the instance; things were getting messy and slow & the new support partner recommended better hardware.
I always had my eyes on this retired server. It ran Linux, hence we couldn't come up with a practical requirement to integrate it with our Windows domain environment, and it was kept switched off until the virtualization project came online.
We needed “something” to hold a copy of the EBS instance while it was being virtualized.
So, I cloned this machine & before continuing let me describe what this hardware is like:
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
On-line CPU(s) list: 0-7
Thread(s) per core: 1
Core(s) per socket: 4
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model name: Intel(R) Xeon(R) CPU E5420 @ 2.50GHz
CPU MHz: 1992.000
L1d cache: 32K
L1i cache: 32K
L2 cache: 6144K
NUMA node0 CPU(s): 0-7
In addition to the local disks, this server has partitions mounted from an IBM SAN.
Once the clone was done, I realized that the instance was extremely slow & our part-time DBA started making excuses like "See, that's why we are changing the hardware" (he had a 2G SGA and 1G PGA with 20 job_queue_processes against a nearly 1TB database).
I opened a few discussions with Oracle communities and was pointed towards a ton of documents suggesting how to fine-tune the hardware and the database for better performance. Actually, none of it was applicable, as I didn't have much hands-on experience with databases & I couldn't find a person who could really HELP me.
Then I started taking an interest in database technology, which I should have done years back, & came across the SGA/PGA and JVM etc. & as I had an idle instance, I started trying out whatever I had "learned" against it.
While doing 11g R2 the hard way, I realized that I could use AMM and forget about tuning different parameters for memory optimization. Well, the goddamn instance still lagged like hell & I was almost done with it!
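For the record, enabling AMM boils down to two parameters; a sketch of what I mean (the sizes are illustrative, and on Linux memory_target requires /dev/shm to be at least that large):

```sql
ALTER SYSTEM SET memory_max_target = 8G SCOPE = SPFILE;
ALTER SYSTEM SET memory_target = 6G SCOPE = SPFILE;
-- with these set, sga_target and pga_aggregate_target can be left
-- at 0 so the database manages both areas automatically
```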
Few of the persistent issues were:
After a cold boot
The login form would load at the client end after a wait of almost 3-4 minutes, getting faster during consecutive attempts.
It took ages to open the concurrent programs window.
Our custom forms & LOVs lagged to extremes, and so on.
Even shutting down the instance for anything was turning into a nightmare, as the database always took more than 15-20 minutes and I had to kill multiple processes manually in order to bring it offline!
Then, on a different note, while trying to learn SQL I landed on an Ask Tom thread where the asker said "I have set up both SGA and PGA at 3GB", and still the SQL ran slow...
I did a fresh clone. Our database had been upgraded to 11g almost a year back. The default clone had 1G for both the SGA and PGA. I altered them to 3G and 3G & bullseye!
I went back and altered the SGA and PGA to 4G, which was 40% of the total physical memory available on the hardware. I did three shutdowns and restarts of the physical server, and a dozen application and database startups, to confirm that what I was experiencing was NOT a once-in-a-blue-moon phenomenon. Each of my attempts to shut down the database gracefully completed within a few seconds; not a single time did I have to kill Linux processes to bring it down!
I modified one of the main forms of a custom application and changed a few VIEW calls to use better logic & I couldn't be happier!
Now, that said, don't rush to me saying "I also did 4G for SGA and PGA and, moron, I still have a slow instance". There are many factors affecting the performance of your database and application & the most important few are:
The age of your hardware, especially the spinning disks. The older they are, the worse the performance is going to be, as a hell of a lot of I/O happens when you are accessing/processing data from a database.
Recently I was going through an MS SQL discussion about multi-tenant architecture and one of the contributors mentioned a hosting firm that keeps changing their hardware every 6 months. I think he was just BLUFFING! ;)
I hope someone benefits from the minor finding I made YESTERDAY (6th May 2018)!