Oracle Backup to Google Drive?

June 6, 2017

Hi guys

This is a follow-up to my previous post about using a simple batch script to create a dump export file for an Oracle database on a regular basis.

Backup: the most essential, yet often the most ignored, element of the digital world even today, as many small-scale businesses find that the investment in this particular mechanism hardly pays off unless a disaster arises. In my experience, convincing management to go for sophisticated backup solutions was always the toughest part of the job, until we had a HUGE disaster.

As a rule of thumb, the first thing I always did for an Oracle database was to set up a dump export every night (if the database is truly small in size) after normal working hours, in addition to RMAN backups. These export files are kept on a different partition & regularly monitored and purged at the beginning of each new month, keeping the last day's backup of the previous month, which is deleted at the beginning of a new year.
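The age-based part of that purge can be scheduled too; below is a minimal sketch using forfiles (available on Windows Server 2008 and later) that simply deletes dumps older than 31 days. The keep-the-last-day-of-the-month exception described above would need extra scripting.

@echo off
rem Minimal purge sketch: remove dump files older than 31 days
rem NOTE: does not implement the keep-the-last-day-of-the-month exception
forfiles /P D:\Orabackup /M *.dmp /D -31 /C "cmd /c del @path"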

Keeping the backup on the same hardware can prove fatal when that hardware fails, and almost all servers are configured with RAID at various levels. In such scenarios, even if the drives are intact, retrieving the data from RAID volumes is a job for professionals, costing both money and time.

For small databases, like the one mentioned in my previous post, we can design multiple options, like mapping a network folder & copying the files automatically as part of the backup script once a new dump file is created.

I devised two methods for my client, and they were:

  1. FTP the latest compressed dump file to another machine hosting an FTP server
  2. Using Google Drive (free 15 GB), upload the latest compressed dump file

The first method was already explained here, so I will go to the second method, in which Google Drive sync is used to ensure that the client has a valid backup stored somewhere in the cloud.

  • Database dump export size: approximately 300 MB
  • Zipped dump file size: approximately 50 MB

Install Google Drive on your Windows Server 2008/2012 machine. You may need to install the corresponding Visual C++ Redistributable packages in order to get past Python-related errors. Please read more here for solutions.

Once Google Drive starts working fine, you can use the following script, which first creates a dump file, then creates a zip file from the latest dump, and finally copies the zip file to the Google Drive folder for cloud syncing.

Please note, I have moved the Google Drive folder from the default location to somewhere else, E:\Google_Drive, to make sure my batch file has the shortest possible path for the copy. If you plan the same, you can change the default location by exiting the application first, then pointing Google Drive to your folder of choice when it complains about the missing default location.

Windows batch file for creating, zipping & copying the files to Google Drive

@echo off
::Create a dated dump file; the date parsing below is locale-dependent (expects a DD/MM/YYYY or MM/DD/YYYY style date)
FOR /F "tokens=2-4 delims=/ " %%a IN ('date /t') DO exp system/password@connectionstring full=y file=D:\Orabackup\exp_%%b%%a%%c.dmp

SETLOCAL
::Get the latest dump file name, generated using the exp command
::Switch to the folder where the dump (.dmp) files are stored (/D also switches drive)
CD /D D:\Orabackup\
for /f "tokens=*" %%a in ('dir *.dmp /o:-d /b') do set NEWEST=%%a&& goto :next
:next
REM echo The most recently created file is %NEWEST%
::http://stackoverflow.com/questions/15567809/batch-extract-path-and-filename-from-a-variable
FOR %%i IN ("%NEWEST%") DO (
REM %%~ni is the file name without its extension
SET ZIPNAME=%%~ni
)

SET ZIPNAME=%ZIPNAME%.zip
::Requires a command-line zip utility (e.g. Info-ZIP's zip.exe or 7-Zip) on the PATH; Windows has no built-in zip command
zip %ZIPNAME% %NEWEST%
::E:\Google_Drive is the folder used by Google Drive in my setup
COPY %ZIPNAME% E:\Google_Drive

del %ZIPNAME%

ENDLOCAL

While this method looks pretty awesome for small databases, please note that it may not be feasible at all for larger ones. I would opt for this method only for a backup dump file that compresses to 400-500 MB at most, keeping in mind the possibility of corrupt compressed files.

In any case, as long as the client has a reliable internet connection with decent bandwidth for the size of the compressed file, they will always have access to a recent backup dump file, stored free in the cloud!

Does it look decent? ;)

Tip: Running Google Drive sync as a Windows service

regards,

rajesh


Oracle database 11g on Windows 2008 R2 & later | ORA-12518 error

June 6, 2017

Hi guys

Recently I was approached by a client to migrate their 14+ year old client/server mini ERP system to new hardware. This legacy application has such a small footprint that the export dump hardly reached 300 MB after a full database export.

Scenario

  1. OS: Windows 2003 SP3, 32-Bit
  2. Oracle Database: 10g Release 1
  3. Client side, Developer 6i with Patch 18
  4. Clients using Windows 7, 64-Bit with DLL hacks for running forms/reports based application

Requirement(s)

  1. Database upgrade to 11g R2 64-bit, in order to maximize performance and properly utilize the new hardware (HP DL380 G9 with 32 GB memory and more than 1 TB storage)

We initiated the migration by testing all possible scenarios using Oracle VirtualBox: created a VM for Windows Server 2008 R2, and created both Windows 10 & Windows 7 SP1 VMs for client-side testing. After thorough checking to ensure that there were no technical errors, we decided to move the solution to the physical server. Throughout the VM testing we never changed any database parameters (not even case-sensitive logon), yet all clients happily connected and executed forms and reports as expected.

The following were performed on the physical server after installing & updating Windows Server 2008 R2:

  1. Installed Developer Suite 6i
  2. Installed Oracle Database 11g R2 (11.2.0.4), allocating 40% of the physical memory and setting memory management to automatic
  3. Configured RMAN
  4. Imported specific users from the latest export dump

We tried to start the application from one of the clients: the application started & the queries executed at lightning speed. The client exited the application, tried to restart it, and hit an 'Oracle not found' error!

The smirks suddenly gave way to panic among everyone involved. We shut down the services and restarted them. The client connected, and the second attempt returned the same 'Oracle not found' error.

Again the client connected, and trying to run the reports prompted a popup window asking for a database logon.

After cross-checking between the VMs and the physical server, we confirmed that both scenarios shared the same database and client settings (totally missing the PROCESSES parameter). Yet the physical server kept falling victim to the 'Oracle not found' error.

A quick googling brought up a blog page that suggested checking the PROCESSES parameter set for the database instance, which in our case was at the default of 150. I cross-checked with the VM instance and found its value to be 200. I have yet to redo the exercise to figure out whether I changed that parameter after creating the database with DBCA.
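For reference, the current setting can be checked straight from SQL*Plus with the standard commands:

show parameter processes
show parameter sessions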

Next, I ran the following SQL as SYSDBA to figure out the maximum process count registered by the database:

select resource_name, current_utilization, max_utilization
from v$resource_limit
where resource_name in ('processes', 'sessions');

which returned the following:

RESOURCE_NAME      CURRENT_UTILIZATION MAX_UTILIZATION
------------------ ------------------- ---------------
processes                           47             175
sessions                            54             173

Obviously, max_utilization had crossed the default of 150 processes, so I raised the parameter to 450 using the ALTER SYSTEM command:

alter system set processes=450 scope=spfile;

After altering the system, a shutdown immediate followed by startup fixed the nightmare.

Further reading gave me a fairly good idea that 11g R2 has not only the above issues related to the number of processes, but many others related to the network stack.

So, if you are planning to set up 11g to work with Developer 6i, which is not a combination certified by Oracle, be prepared to bite the bullet(s).

regards,

rajesh


ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance

June 5, 2017

Hi guys

I'm back to blogging after a couple of busy weeks and quite a bit of traveling. I'm currently playing around with Oracle 12c database Release 2 & Developer 6i with Patch 18, again hacked using Patch 3 DLL files in order to run forms/reports on Windows 7-10.

Obviously, starting with Oracle 11g, Oracle introduced stricter password policies: case-sensitive logons, limits on failed attempts, password aging and so on. I hardly believe small businesses will ever implement these policies as Oracle intended, since doing so requires a full-time DBA plus a lot of tracking and auditing (which, in my experience, never happens).

Anyway, for testing, I have always kept the commands to disable all three of those security elements ready. Once a new test database is made, before attempting anything else, I change the password complexity, expiry & reuse times using the ALTER commands below (a quick verification query follows the list):

  • alter system set sec_case_sensitive_logon=false scope=both;
  • alter profile DEFAULT limit PASSWORD_REUSE_TIME unlimited;
  • alter profile DEFAULT limit PASSWORD_LIFE_TIME  unlimited;
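To confirm the profile changes took effect, the DEFAULT profile can be queried from the standard data dictionary view DBA_PROFILES:

select resource_name, limit
from dba_profiles
where profile = 'DEFAULT'
and resource_name in ('PASSWORD_LIFE_TIME', 'PASSWORD_REUSE_TIME');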

With 12c, Oracle has made many changes to security, SQL*Net connections and so on. If you are truly interested, please refer to this document to understand what has been deprecated: Upgrade Guide 12c Release 1 (12.1), E41397-11.

According to the documentation, SEC_CASE_SENSITIVE_LOGON is maintained only for backward compatibility & will most probably be dropped from future builds. I've checked that altering sec_case_sensitive_logon on a 12c R2 database still works. However, during each startup, I receive a notification that says "ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance".

image

and executing the following command returns a number of parameters that are not supposed to be used with the 12c release ;)


SELECT name from v$parameter WHERE isdeprecated = 'TRUE' ORDER BY name;

and the following will be listed

NAME
--------------------------------------------------------------------------------
O7_DICTIONARY_ACCESSIBILITY
active_instance_count
asm_preferred_read_failure_groups
background_dump_dest
buffer_pool_keep
buffer_pool_recycle
commit_write
cursor_space_for_time
db_block_buffers
fast_start_io_target
instance_groups
lock_name_space
log_archive_start
parallel_adaptive_multi_user
plsql_debug
plsql_v2_compatibility
rdbms_server_dn
remote_os_authent
resource_manager_cpu_allocation
sec_case_sensitive_logon
serial_reuse
sql_trace
standby_archive_dest
unified_audit_sga_queue_size
user_dump_dest
utl_file_dir

26 rows selected.

SQL>

So, we can see that sec_case_sensitive_logon is listed as deprecated. Now, how exactly we work around this little annoyance is totally up to us, developers & DBAs. While I prefer a test environment with no password-related hassles, a production environment should be designed to accommodate case-sensitive logons & the other recommended password policies, as Oracle may not reintroduce the parameter in future builds.

While the notification/warning we receive is generic to all deprecated parameters, in this post I have only dealt with case-sensitive passwords.

regards,

rajesh


Oracle 12c database installation | Points to consider

May 7, 2017

Hi guys

Have you noticed the major change to the installation media of Oracle 12c database Release 2? Oracle has made it a single package, eliminating the need to merge folders from two different installation media. While I wholeheartedly welcome such a move from Oracle, there are once again a few difficulties in getting the software installed properly.

One of the major issues arises when the administrator has disabled the hidden admin shares, to keep the box from being accessed via C$, D$ calls from a remote machine with domain/local administrative privileges. Disabling the admin shares is done by tweaking the registry & is easily forgotten in the long run. Without prior experience installing Oracle software, one could end up formatting the box instead of simply undoing that registry tweak.

A few of the errors presented look like the following:

e2

e1

And the error message reads like following:

Cause – Failed to access the temporary location.
Action – Ensure that the current user has required permissions to access the temporary location.
Additional Information:
– PRVG-1901 : failed to setup CVU remote execution framework directory "C:\Users\username\AppData\Local\Temp\CVU_12.2.0.1.0_username\" on nodes "hostname"
– Cause: An operation requiring remote execution could not complete because the attempt to set up the Cluster Verification Utility remote execution framework failed on the indicated nodes at the indicated directory location because the CVU remote execution framework version did not match the CVU java verification framework version. The accompanying message provides detailed failure information.
– Action: Ensure that the directory indicated exists or can be created and the user executing the checks has sufficient permission to overwrite the contents of this directory. Also review the accompanying error messages and respond to them.
Summary of the failed nodes: hostname
– Version of exectask could not be retrieved from node "hostname"
– Cause: Cause Of Problem Not Available
– Action: User Action Not Available
– Version of exectask could not be retrieved from node "hostname"
– Cause: Cause Of Problem Not Available
– Action: User Action Not Available


Hence, the first place to look is the registry, precisely Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters: check whether a REG_DWORD named "AutoShareWks" exists with a value of "0". If it does, change the value to "1" and restart the box.
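The same check and fix can be done from an elevated command prompt; a minimal sketch follows (note that on server SKUs the value is typically named AutoShareServer, so use whichever name your registry actually holds):

rem Check whether the admin shares were disabled via the registry
reg query "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v AutoShareWks

rem Re-enable the admin shares, then reboot the box
reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v AutoShareWks /t REG_DWORD /d 1 /f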

Second, if you have a 32-bit Oracle client installed prior to attempting the 12c/11g 64-bit database installation, make sure you stop the "OracleRemExecServiceV2" service using the Windows Services console. There is a conflict between the 32-bit & 64-bit installation procedures, and unless this particular service is stopped, the 64-bit installation will not proceed.
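Stopping it from an elevated command prompt works too:

net stop OracleRemExecServiceV2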

Now attempt the 12c/11g 64-bit database installation.

If you come across more errors, please update us using the comments section and we would love to investigate them.

regards,

rajesh


Migrate from Microsoft SQL Server to Oracle 11g

April 11, 2017

Hi guys

This time I am going to share my experience with migrating a small MS SQL database to Oracle database using Oracle’s SQL Developer

Scenario

Microsoft SQL Server 2008 or later

Oracle database 11g (or later, 12c not tested)

Requirement

The business requires a 3rd-party software product that depends upon MS SQL Server to be migrated to the Oracle platform.

Pre-requisites

Oracle database 11g installed and instance is up and online

Oracle SQL Developer 4.2 (used for this demonstration). I cannot confirm whether 4.1 uses the same approach. Try it and let me know.

JDBC driver for MS SQL connectivity. For SQL Developer 4.x you need to download the driver from the following link:

http://sourceforge.net/projects/jtds/files/

Please follow https://kentgraziano.com/2013/01/14/tech-tip-connect-to-sql-server-using-oracle-sql-developer/ to learn how to install and configure the driver in order to establish a connection from SQL developer to MS SQL server.

Scope

I am going to migrate a database called “OPMS” from SQL Server 2008 R2 express edition to Oracle database 11g R2 64Bit

image

Please note, the JDBC driver fails to connect to the SQL Server using Windows Authentication, hence you must define a SQL Server login for your database and change the instance authentication mode to mixed in order to establish a successful connection.
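If you need to create such a login quickly, a minimal T-SQL sketch could look like the following; opms_user and its password are hypothetical, and switching the instance to mixed mode itself is done through the server properties in SSMS followed by a service restart:

-- Hypothetical login for the migration; OPMS is the database being migrated
CREATE LOGIN opms_user WITH PASSWORD = 'Str0ng#Password';
USE OPMS;
CREATE USER opms_user FOR LOGIN opms_user;
EXEC sp_addrolemember 'db_owner', 'opms_user';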

image

As you can see in the above image, I don't have any Oracle database connections defined yet.

For the database migration, we must define two connections: one using the SYSTEM user & a second using the MWREP user, which we will create as follows.

DROP USER MWREP CASCADE
/
CREATE USER MWREP IDENTIFIED BY mwrep
DEFAULT TABLESPACE BAYAN
TEMPORARY TABLESPACE TEMP1
QUOTA UNLIMITED ON BAYAN
/
GRANT DBA, CONNECT, RESOURCE TO MWREP
/

Once the migration is over, you may safely drop this repository, so don't worry about the grants given to the new user. I failed to get things working properly without the DBA role assigned to this migration schema.

I struggled for hours to understand why the tables were not created by a migration process that reported "Successfully completed", and my probing through the log files pointed, in indirect wording, towards the OPMS schema not existing.

(I'm pretty sure I missed something & the intended schema was NOT created during the migration run. Regardless, creating the schema manually gives many tuning choices, like tablespace selection and quota settings.)

So, to get the migration to work successfully, you need to create a schema with the same name as your SQL Server database. In my example the database name is "OPMS", and I pre-created the same in the Oracle database, even though the script generated by the migration contains DDL for creating the user against the default tablespace "USERS". Well, I didn't want my OPMS schema using the "USERS" tablespace…

I created the OPMS user as below

DROP USER OPMS CASCADE
/
CREATE USER OPMS IDENTIFIED BY opms
DEFAULT TABLESPACE BAYAN
TEMPORARY TABLESPACE TEMP1
QUOTA UNLIMITED ON BAYAN
/
GRANT DBA, CONNECT, RESOURCE TO OPMS
/

I am all set to start the migration now, and you should be too!

Created a new connection for user “System”

image

Created another connection for “MWREP” user, which will hold the migration repository

image

Now we have to create the migration repository. Right-click the MWREP connection, expand Migration Repository, then choose Associate Migration Repository.

image

Progress

image

Finished

image

If the repository association fails for any reason, you have to restart by dropping the migration schema you created and going through the steps once again.

Having created the migration repository, the next step is to connect SQL Developer to MS SQL Server (in my case, SQL Server 2008 R2 Express Edition).

image

Now you have all three connections required for the migration.

image

As soon as you connect as MWREP, you will notice the Migration window showing an entry like the one in the image below.

image

Now we will start the migration.

image

The welcome screen gives you an overview of the activities that will be performed for the SQL database migration. Move ahead.

image

By default your migration repository will be selected; cross-check it anyway and click "Next".

image

Provide a meaningful name for your project and select an output directory.

image

Make sure you have selected the correct Source database

image

The default database for the currently connected SQL Server user will be selected for capturing. Confirm and click "Next".

image

Under the convert step (6), make the necessary changes. Refer to the image for more details.

image

A number of objects will be selected; unless you are pretty confident about which objects you don't want to migrate, leave the default selection intact.

image

Select the “System” connection for Target Database

image

Make sure you select the SQL Server connection as source and the MWREP connection as target in the move data step (9).

image

Click finish & the migration starts immediately. Depending upon the size of your source database, it may take a while for the process to create and move data between the technologies.

image


Progress

image


image


image

Create a connection to the Oracle database for the newly created schema (we created the OPMS schema prior to the migration) & verify that the objects were created by the migration process.

image
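From that new connection, a standard dictionary query is a quick way to verify the result (OPMS is the schema name used in this walkthrough):

select object_type, count(*)
from all_objects
where owner = 'OPMS'
group by object_type
order by object_type;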

That’s all folks!

rajesh


RMAN vs Dump Export

April 9, 2017

Hi guys

Recently I spent a pretty good amount of time trying out RMAN & was able to apply what I learned to multiple production environments that had depended purely on export dumps for over a decade. After committing many hours to a recovery using RMAN to a standby instance, I started wondering whether such efforts are really worth it for a database that was hardly a couple of GBs after 10+ years of data collection. I read many articles from reputed Oracle-related sites, including this one.

A majority of small-scale businesses don't invest in DBAs because, to them, DBAs appear to do "nothing" at all between paydays, & many DBAs have an unmatched ego that doesn't allow them to learn anything beyond what they are "certified" in. I have had some pretty bad experiences with a bunch of DBAs who didn't even have a clue about SGA and PGA setup for a 10g database because they were "certified" for 11g.

Over the years we had many hardware crashes and always restored the database(s) from export dumps (.dmp). Clearly documented schema/tablespace details were all we needed, as the data to be restored was always sized in a few GBs.

An Oracle developer with moderate database skills can, within an hour, bring a restored database back online using a simple import, which may not be the case with RMAN. RMAN depends heavily on many factors for backup & restore and, from my limited DBA experience, is better suited to bigger database environments.

So, you have a very small database and want to restore it from a dump file after a hardware crash or while switching to better hardware, and you are happy to learn that importing from a dump export is much easier than RMAN (about which you don't have a clue). Is it a fail-proof method? Well, it depends. If you are dealing with a poorly documented environment, you can land in hot soup, with import screaming about non-existent objects referenced by the schema being imported. You may find an interesting discussion here, closed as answered after I confirmed it.
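For context, the kind of restore I am talking about boils down to a pair of one-liners like the sketch below; the connection strings, paths and password are placeholders:

exp system/password@sourcedb full=y file=D:\Orabackup\exp_full.dmp log=D:\Orabackup\exp_full.log
imp system/password@targetdb full=y file=D:\Orabackup\exp_full.dmp log=D:\Orabackup\imp_full.log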

Obviously, I started this topic explaining how easy it looks to use plain dump files for restoration, but I will end it by saying: if you have a supported database, please implement RMAN immediately. It is a beautiful piece of technology that helps you recover the entire database with a moderate level of DBA skill.

regards,

rajesh


Windows | ORA-12560: TNS:protocol adapter error

February 25, 2017

Hi guys

Not many DBAs prefer Windows for their Oracle databases. Most prefer Linux, & most of the DBAs I know set up the bash profile under the oracle user so the environment is set at each logon to the server.

Our legacy business application database runs on Windows 2003 &, trust me, we never had a single database crash (other than the physical hardware failure that forced us to recover the database once). Depending upon how big the database and application are, the choices for hosting the Oracle database differ from one business to another.

We decided to upgrade our Oracle 10g 10.1.x.x 32-bit database to 11g R2 &, as usual, I replicated the environment on my semi-server-class desktop at home before touching the production environment at work.

I installed 10g 32-bit and created the database from a dump export file (the total size of the database is less than 7 GB, hence I skipped the hectic RMAN backup-and-restore part), then:

  1. Configured RMAN against the new database & made full backups of the archive logs and database
  2. Installed the 11g 11.2.0.4 64-bit database (software-only installation)
  3. Created a new Windows service using oradim (a sample command follows this list)
  4. Restored the database from the RMAN backups & upgraded the database to 11g
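For reference, the oradim call was along these lines; the SID and password below are placeholders for my actual values:

oradim -NEW -SID ORCL11G -SYSPWD password -STARTMODE auto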

So far so good. I had to restart the computer, and after rechecking that the database was up and running, I tried to access the instance using sqlplus & was presented with:

ORA-12560: TNS:protocol adapter error

REG_SID_MISSING

I set ORACLE_SID at the CMD window & sqlplus was happy after that.
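That is, something like the following, where ORCL11G stands in for the actual SID:

set ORACLE_SID=ORCL11G
sqlplus / as sysdba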

Usually, Windows doesn't need environment variables set explicitly for the database, as the Windows registry takes care of it. This works well when the box runs only one database. If you have more than one database or multiple Oracle homes, the scenario changes.

In addition, Oracle always locates its executables based on the PATH information it reads. For example, my box has 10g, 11g & 12c database software installed, without any databases created at installation time.

Consider a scenario where I didn't reorder the PATH entries after the latest installation of 12c & try to open SQL*Plus or RMAN. The call will find the executable in the 12c BIN path entry by default, and a beginner could get quite confused by it.

In my case, I needed my 10g instance first, hence I moved the 10g home to the first position among the Oracle PATH entries, and once I finished with 10g, I moved the 11g home folder to the first position.

SID_Missing

Anyway, after confirming the PATH settings, my immediate attention turned to the registry, as Oracle services depend completely on the registry values for each registered service.

To my utter surprise, I found that the 11g service entry didn't have the ORACLE_SID string that should have been created during instance creation with ORADIM.exe.

REG_SID_MISSING

Oracle 11g 11.2.0.4 has a huge bug list and interim patches that should be applied before moving to a production instance. I really don't know whether the missing ORACLE_SID string entry was due to one of those bugs.

So I stopped the Oracle service and added an ORACLE_SID string entry with the value for my database.
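The equivalent from an elevated command prompt looks like the sketch below, assuming the string belongs under the Oracle home key as usual; KEY_OraDb11g_home1 and ORCL11G are placeholders, so check HKLM\SOFTWARE\ORACLE for your actual home key and SID:

rem Add the missing ORACLE_SID string under the Oracle home key
reg add "HKLM\SOFTWARE\ORACLE\KEY_OraDb11g_home1" /v ORACLE_SID /t REG_SZ /d ORCL11G /f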

REG_SID_ADD

I restarted the service & sqlplus connected to the instance happily, without setting the environment variable via set ORACLE_SID=SIDNAME.

REG_SID_ADDED

While the easiest solution is to set both ORACLE_HOME & ORACLE_SID whenever you want to use sqlplus or RMAN against a particular database, the above method is a definite way to deal with "ORA-12560: TNS:protocol adapter error".

regards,

rajesh