Linux | Send mail using internal mail command

November 16, 2016

Hi guys

We have been on a Veeam + VMware infrastructure for a while, yet I remain paranoid about keeping copies of the backups on different media after going through a couple of nightmares. We take a weekly cold backup of our ERP production server, move the tar files to a standby Linux server, and then copy those backups once again to an external HDD.

So basically I have a full VM backed up, the same VM holds a weekly cold backup, a standby Linux server holds a copy of the cold backup files and, to finish it off, the files are copied again to an external HDD. The funniest part is, we are moving the entire VMs to a TANDBERG Quick Station as well!

Everything has worked fine to date, but the last part of the deal needs to notify me of successful completion of copying the tar files to the external media, i.e., the HDD, which is formatted with NTFS so that I can use it in both Linux and Windows environments.

Be warned: the bash script below only works in an environment that has an internal SMTP server (I don't know how to relay the messages through an external SMTP relay and, to disappoint you further, I don't care about relaying through an external SMTP server). In addition, you must be on an Enterprise Linux 6 (RHEL/CentOS/OEL) or later release to use the internal mail command as demonstrated below; the mailx shipped with release 5 doesn't support many of the switches used in the example.

The example below also demonstrates basic error capturing in bash scripts.

#!/bin/bash
# Copy the weekly cold backup tar files to the external HDD, discarding cp's error output
/bin/cp -rf /u02/backup/PROD_DAILY_BACKUP*.* /media/Elements/ 2> /dev/null

# Mail a success or failure notification based on cp's exit status
if [ $? -eq 0 ]
then
    echo "The files were successfully copied to external hard disk" | mailx -v -s "ERP Tar Files Moved to External HDD | Success" -S smtp=smtp://server.domain.com -S from="ERP Alerts <someone@example.com>" someoneelse@example.com,someone2@example.com
else
    echo "Files were not copied to external HDD" | mailx -v -s "ERP Tar Files to External HDD | Failed" -S smtp=smtp://server.domain.com -S from="ERP Alerts <someone@example.com>" someoneelse@example.com,someone2@example.com
fi
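
One refinement worth considering: if the external HDD happens to be unmounted, cp will quietly copy the files into the empty /media/Elements directory on the local disk and the script will still report success. A minimal pre-check sketch, reusing the same mount point and mailx switches as above:

#!/bin/bash
# Abort (and send the failure mail) if the external HDD is not mounted
if ! mountpoint -q /media/Elements
then
    echo "External HDD is not mounted, nothing was copied" | mailx -v -s "ERP Tar Files to External HDD | Failed" -S smtp=smtp://server.domain.com -S from="ERP Alerts <someone@example.com>" someoneelse@example.com,someone2@example.com
    exit 1
fi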

Try it and let me know whether it works for you :)

regards,

rajesh


Oracle E-Business Suite R12.0 | Automating clone process

February 16, 2016

Hi guys

A clone is an exact replica of a production instance, against which you do all testing, custom development and patch deployments to ensure that your attempts will NOT break the PRODUCTION instance once they are moved over.

How often consultants and users request a fresh clone (with the latest data) depends on many factors. During implementation, a DBA could be bombarded with cloning requests almost every couple of days. Although I am "not a dba", I have been doing cloning for the last couple of years to learn and understand the technology, and trust me, it is NOT fun at all, especially once you are familiar with the tasks.

Over the last few months I have been trying to "fully automate" the entire cloning process and have made significant progress. I will share my experiences with you today.

Scenario:

We have a cron job, initiated by the "root" user, starting at 2:30 PM every Friday, that runs pre-clone and then shuts down the application and database. The same script makes tar balls for the database and application nodes in separate files and then copies the tar balls to our TEST instance server.
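
For reference, the root crontab entry driving this looks roughly like the following (the script path and log file here are placeholders, not the actual files on our server):

# m h dom mon dow command -- 14:30 every Friday (dow 5)
30 14 * * 5 /root/scripts/prod_cold_backup.sh > /var/log/prod_cold_backup.log 2>&1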

Logically, once the tar balls are copied to the TARGET (TEST) server, the following activities are expected from the DBA:

  1. Stop the application and database instances that are running on the TEST server
  2. Extract the tar balls copied from the PRODUCTION instance into the relevant folders
  3. Clone the database tier, followed by the application tier
  4. Tune the database with TEST-server-specific SGA, job queue processes and other parameters

What if I am too lazy, and a scripting junkie who wants to automate all of these activities using shell scripts? The following demonstrates such an attempt.

Why not a cron job? ;)

The first step is creating auto response files for both the database and application nodes. I have already detailed a how-to here.

ebsclone.sh | This shell script calls a number of other shell scripts to facilitate the entire cloning process

Please note, my Oracle database user is "oraprod" and the application manager is "applprod". If you are planning to copy the script(s) below, make sure you change the physical paths and user details according to your specific environment.

Both the database and application manager user accounts are enhanced with .bash_profile values, hence most of my scripts do not populate the environment variables prior to executing other scripts.
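
In other words, each account sources its EBS environment file from .bash_profile at login, so ORACLE_HOME, PATH and the rest are already in place when the scripts run. A rough sketch of what I mean (the env file names below are illustrative; yours depend on your own context name):

# ~oraprod/.bash_profile (excerpt) -- database tier environment (file name illustrative)
. /u01/oraprod/PROD/db/tech_st/10.2.0/PRODBAK_erp-prodbak.env

# ~applprod/.bash_profile (excerpt) -- application tier environment (file name illustrative)
. /u03/applprod/PROD/apps/apps_st/appl/APPSPRODBAK_erp-prodbak.env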

I am using "expect", which YOU must install if it is not already present, in order to automate the cloning process. If you are using Oracle Linux, you can install expect by issuing the following command:

yum install expect -y

#!/bin/bash
# As a precaution, to make sure the port pool is available during automated
# cloning, we kill all orphaned processes that were not closed during the
# DB and APPS stop

# Kill all processes for both oraprod & applprod users
echo "Killing all processes for Oracle user"
pkill -u oraprod
echo "Killing all processes for Application user"
pkill -u applprod
echo "All processes for both Oracle and Application users were killed"

sleep 30

echo "$(date)"

#Remove the existing physical folder for database files
cd /u01
find oraprod -delete
echo "finished deleting Oracle top at $(date)"
#Extract files for database top from the cold backup archive
echo "Extract database backup file at $(date)"
time tar -zxf /u02/backup/PROD_DAILY_BACKUP_db.tar.gz
echo "Finished extracting database backup file at $(date)"

#Remove the existing physical folder for application files

cd /u03
find applprod -delete
echo "finished deleting Application top at $(date)"

#Extract files for application top from the cold backup archive
echo "Extract application backup file at $(date)"
time tar -zxf /u02/backup/PROD_DAILY_BACKUP_apps.tar.gz u06/applprod/PROD/apps
echo "Finished extracting application backup file at $(date)"

#Move the files around based on your configuration files (db.xml & apps.xml)

mv /u01/u05/oraprod /u01
mv /u03/u06/applprod /u03

#Change the ownership of the folders, so that corresponding users could read & execute files

chown -R oraprod:oinstall /u01/oraprod
chown -R applprod:oinstall /u03/applprod

########################################
#Start the cloning
########################################

echo "Database cloning phase starts now, $(date)"

#su - oraprod -c "perl /u01/oraprod/PROD/db/tech_st/10.2.0/appsutil/clone/bin/adcfgclone.pl dbTier /u01/clonescripts/db.xml"

/root/scripts/dbclone.sh

sleep 30

echo "Application cloning phase starts now, $(date)"

# su - applprod -c "perl /u03/applprod/PROD/apps/apps_st/comn/clone/bin/adcfgclone.pl appsTier /u01/clonescripts/apps.xml"

/root/scripts/appsclone.sh

echo "EBS Cloning completed, $(date)"
######################################
#Optional steps for changing database SGA parameters,
#startup configuration files to spfile etc
######################################
echo "Changing database parameters, $(date)"

/root/scripts/dbfix.sh

echo "Done! Application online with changed database parameters, $(date)"

Now I will share the code for each script called within the ebsclone.sh script.

dbclone.sh | an expect-enabled script that supplies the apps password so the clone does not prompt for it

#!/usr/bin/expect -f
# set to 1 to force conservative mode even if
# script wasn't run conservatively originally
set force_conservative 0

if {$force_conservative} {
        set send_slow {1 .1}
        proc send {ignore arg} {
                sleep .1
                exp_send -s -- $arg
        }
}

set timeout -1

spawn su - oraprod -c "perl /u01/oraprod/PROD/db/tech_st/10.2.0/appsutil/clone/bin/adcfgclone.pl dbTier /u01/clonescripts/db.xml"

match_max 100000

expect -exact "\r
Enter the APPS password : "
send -- "apps\r"

expect eof

appsclone.sh | an expect-enabled script that supplies the apps password so the clone does not prompt for it

#!/usr/bin/expect -f
# set to 1 to force conservative mode even if
# script wasn't run conservatively originally
set force_conservative 0

if {$force_conservative} {
        set send_slow {1 .1}
        proc send {ignore arg} {
                sleep .1
                exp_send -s -- $arg
        }
}

set timeout -1

spawn su - applprod -c "perl /u03/applprod/PROD/apps/apps_st/comn/clone/bin/adcfgclone.pl appsTier /u01/clonescripts/apps.xml"

match_max 100000

expect -exact "\r
Enter the APPS password : "
send -- "apps\r"

expect eof

dbfix.sh | Changing database parameters like SGA, job queue processes etc

#!/bin/bash
su - applprod -c "/u03/applprod/PROD/inst/apps/PRODBAK_erp-prodbak/admin/scripts/adstpall.sh apps/apps"
su - oraprod -c /root/scripts/dbalter.sh
su - applprod -c "/u03/applprod/PROD/inst/apps/PRODBAK_erp-prodbak/admin/scripts/adstrtal.sh apps/apps"

Finally, dbalter.sh | all the database ALTER commands are included within this file

#!/bin/bash
export ORACLE_HOME=/u01/oraprod/PROD/db/tech_st/10.2.0
export ORACLE_SID=PRODBAK

#http://www.cyberciti.biz/faq/unix-linux-test-existence-of-file-in-bash/
#Check whether spfile already exist
file="/u01/oraprod/PROD/db/tech_st/10.2.0/dbs/spfilePRODBAK.ora"
if [ -f "$file" ]
then
	echo "$file found. Aborting database configuration now"
	exit
else
	echo "$file not found."
fi

sqlplus "/ as sysdba" <<EOF
create spfile from pfile;
shutdown immediate;
startup;
alter system set sga_max_size=8G scope=spfile;
alter system set sga_target=8G scope=spfile;
alter system set job_queue_processes=10 scope=both;
shutdown immediate;
! cp /u01/oraprod/PROD/db/tech_st/10.2.0/dbs/initPRODBAK.ora /u01/oraprod/PROD/db/tech_st/10.2.0/dbs/initPRODBAK.ora.original
! >/u01/oraprod/PROD/db/tech_st/10.2.0/dbs/initPRODBAK.ora
! echo "spfile=/u01/oraprod/PROD/db/tech_st/10.2.0/dbs/spfilePRODBAK.ora" >>/u01/oraprod/PROD/db/tech_st/10.2.0/dbs/initPRODBAK.ora
startup;
exit;
EOF

The above five scripts should do what they are meant to. Just copy the files into the same folder and make ebsclone.sh executable:

chmod +x ebsclone.sh

and execute ebsclone.sh as "root" (attempts made with other users will fail the cloning):

#./ebsclone.sh

Before attempting this, please make sure all the above scripts are modified with absolute paths referring to your own partitions and environment.

Download the scripts here

References:

Sample expect script

https://community.oracle.com/thread/2558592?start=0&tstart=0

Linux: Find whether a file already exists

http://www.cyberciti.biz/faq/unix-linux-test-existence-of-file-in-bash/

Happy cloning!


Linux LVM (Logical Volume Manager) | AKA Spanned volumes

February 3, 2016

Hi guys

We have an EBS instance totalling almost 1TB in physical size, hosted on a high-end IBM server, and we periodically clone the instance to ensure that the cold backups are reliable for DR purposes.

Recently we decommissioned an HP ML110 G6 server (single Xeon processor, 8GB memory) that had been dedicated to obsolete biometric monitoring and reporting running Windows 2003. I thought of using the same server for future restorations of EBS cold backups and realized that the server doesn't support RAID 5; moreover, its built-in RAID is categorized as "fakeRAID", which depends on the CPU for the crippled RAID processing.

Using the HP Easy Setup CD, I created an array and, to my total disappointment, found that Linux doesn't read the fakeRAID when an installation is attempted.

I attempted the above because the ML110 G6 had four 500GB SATA HDDs and I needed 1TB on a single volume. My database instance size as of today is 493GB, which would quickly run out of space on a single 500GB partition. So I started reading about software RAID, which was too complex to set up with my minimal exposure to Linux. Further reading brought me to LVM (Logical Volume Manager), with which one can create spanned volumes just like in Windows.

Before proceeding further, please be aware of the RISKS associated with spanned volumes, AKA LVM with multiple drives.

How to?

We'll consider a fresh installation of CentOS 6/RHEL 6/OEL 6 for this exercise.

Source thread (Please, please read)

Hardware: HP ML110 G6, 8GB memory, 4x500GB SATA HDD

Linux installation details

Installed Linux on HDD #1 (/dev/sda): 10GB /boot, 4GB swap, 110GB / and the balance as an extended partition

Now I am left with 3 HDDs, which are "untouched", i.e., no partitions have been made:

  1. /dev/sdb
  2. /dev/sdc
  3. /dev/sdd

As I mentioned, my requirement was to have 1TB of storage for cloning purposes, hence I chose 2x500GB (/dev/sdb, /dev/sdc).

First I created partitions using "fdisk", the age-old command-line utility, even though better-structured GUI tools are available with the latest Linux distributions.

Log in to a terminal as "root"

$fdisk /dev/sdb

n (new partition) -> p (primary partition) -> 1 (partition number) -> w (write changes)

Repeated the same for /dev/sdc

$fdisk /dev/sdc

n (new partition) -> p (primary partition) -> 1 (partition number) -> w (write changes)
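
If you prefer not to answer the fdisk prompts by hand, the same keystrokes can be piped in (a sketch only; double-check the device names, as this writes partition tables without asking, and the extra blank answers simply accept the default first/last cylinders):

$printf 'n\np\n1\n\n\nw\n' | fdisk /dev/sdb
$printf 'n\np\n1\n\n\nw\n' | fdisk /dev/sdc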

We’ll use the following 3 commands to create our LVM

  1. pvcreate
  2. vgcreate
  3. lvcreate

create two physical volumes

$pvcreate /dev/sdb1 /dev/sdc1

create one volume group with the two physical volumes

$vgcreate VG_DATA /dev/sdb1 /dev/sdc1

create one logical volume

$lvcreate -l 100%FREE -n DATA VG_DATA
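
Before creating the file system, it does no harm to confirm the physical volumes, volume group and logical volume look the way you expect:

$pvs   #physical volumes and the volume group they belong to
$vgs   #volume group size and free space
$lvs   #logical volumes, their volume group and size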

create the file system on your new volume

$mkfs.ext4 /dev/VG_DATA/DATA  #You may use ext3, based on your Linux distribution

$mkdir -p /u01

mount the volume (mount /dev/VG_DATA/DATA /u01)
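
Note that the mount above does not survive a reboot. Assuming you want it permanent, append an entry to /etc/fstab (verify the device path and mount point match your own setup before doing this):

$echo "/dev/VG_DATA/DATA  /u01  ext4  defaults  0  2" >> /etc/fstab
$mount -a   #re-reads fstab and mounts anything not yet mounted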

That's all folks, I have created my first LVM AKA spanned volume in Linux.

If you are planning to create logical volumes using multiple disks, be aware of the risks. You may lose millions' worth of data if proper backups are not taken, and recovery could be a nightmare!

Beyond total data loss, performance should also be considered, especially when such a setup hosts databases that require fast I/O.

for Windows7bugs

rajesh


Oracle R12 Cloning | dbTier "ouicli.pl INSTE8_APPLY 1"

October 28, 2015

Hi guys

There could be thousands of (exaggerated) reasons why an Oracle cloning process could go bad. I'm not an application DBA; however, I have enough experience with the architecture and technology, as I interact with it every day as part of my job.

A few months back, I started doing something a DBA should do: cloning. My prior attempts were mostly at home, using virtual machines and test instances, and they were NOT as mission critical as what we do at work.

So, after the storage device was revamped with a new partition structure, I was asked to do a clone of the production instance. Let me explain how the application was deployed prior to the storage restructuring:

  1. We had the database tier on mount point /u05
  2. The application on mount point /u06

So, I recreated the same mount points and started the cloning process for the dbTier, and the process got terminated at 2% with the log files showing me an error that I was not familiar with:

“ouicli.pl INSTE8_APPLY 1”

Google searches fetched me hundreds of results for "ouicli.pl INSTE8_APPLY", however the error codes were mostly 255 or "-1", and apparently I didn't have any clue what was going wrong.

So I unzipped the tar ball for the database tier once again, and the cloning process got aborted at 2% again; I was getting nervous, as I was expected to have the instance online by 7AM the next morning.

Most of the reference materials were talking about non-existent Oracle inventory locations, and I was sure that was not the case on my side (obviously, I was overlooking this very constraint!).

After tasting failure half a dozen times, I finally looked at what was written inside the oraInst.loc file:

oraprod@erp-prod:/home/oraprod>cd $ORACLE_HOME 
oraprod@erp-prod:/u05/oraprod/PROD/db/tech_st/10.2.0>cat oraInst.loc 
inventory_loc=/u01/oraprod/PROD/db/tech_st/10.2.0/admin/oui/PROD_erp-prod/oraInventory 

and I realized that the inventory location was wrongly pointing to a non-existent mount point and physical location!

I modified the oraInst.loc content with the correct mount point:

inventory_loc=/u05/oraprod/PROD/db/tech_st/10.2.0/admin/oui/PROD_erp-prod/oraInventory 

and the cloning process went ahead without any further errors.
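
If you want to catch this before wasting another run, a quick sanity check of the inventory location is easy to script (a sketch only, using the same oraInst.loc path shown above):

#!/bin/bash
# Run as oraprod with ORACLE_HOME set; warn if the inventory path recorded
# in oraInst.loc does not exist on disk
inv_loc=$(grep '^inventory_loc=' $ORACLE_HOME/oraInst.loc | cut -d= -f2)
if [ -d "$inv_loc" ]
then
    echo "Inventory location looks fine: $inv_loc"
else
    echo "Inventory location does not exist: $inv_loc -- fix oraInst.loc before cloning"
fi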

We had an instance that had been running for the last 6 years, cloned only once from a cold backup during a storage device change, and somehow the inventory location had remained unchanged in the repository files.

I hope this finding helps a few newbies like me out there.


regards,

rajesh


How to get a computer by computer view of installed software using the MAP toolkit

November 23, 2014

As an administrator maintaining Windows domains, one of the herculean tasks one usually runs into is building a software asset inventory. There is plenty of excellent software to do the job for you, obviously at some cost.

Here we suggest a cheaper alternative using Microsoft's own MAP toolkit. Be ready to sweat a bit, and we are sure you will love the outcome.

The entire write-up is copied from a Microsoft blog and tested by us; if you follow the instructions as given, within a few hours you will have a neat software inventory list.

The original link is here

One of the most frequent questions we get at MAPFDBK@microsoft.com is how to get a list of the software discovered by the MAP toolkit on a computer-by-computer basis. Most of the users who ask are using this to help them answer a licensing question, but it can be used in a number of other scenarios as well, for example Software Asset Management or user profiling for VDI (see http://blogs.technet.com/b/mapblog/archive/2012/07/09/planning-for-desktop-virtualization-with-the-map-toolkit-7-0-4-of-4.aspx).

In MAP 7.0, this information is provided through a database view and Microsoft Excel. The name of the view is InstalledProducts_view.

In MAP 8.0, this view has been renamed to [UT_WinServer_Reporting].[InstalledProductsView].

This view contains several key pieces of information that you can use to do a number of things including:

  • Understand what applications and versions are installed throughout your organization
  • See the Operating Systems on which these apps are running and whether the machine is physical or virtual
  • See who is using the machines on which the apps are running
  • Get important license related information such as processor counts, total cores and logical processor counts

To get started, you will need to open Excel and connect to your local SQL Server database that is storing the MAP data that you want to view.  There are two different ways to connect, depending on the version of SQL Server that you are using.

Using your own SQL Server instance

If you are using your own instance (the non-default MAP install), you will select the Data option on the Excel ribbon and select the ‘From other sources’ option.  Then select ‘From SQL Server’.

[screenshot]

Enter your server name and instance name and click ‘Next’.

[screenshot]

Select the database that contains the data you want and then pick the InstalledProducts_view row under ‘Name’ for databases created with MAP 7.0.

For MAP 8.0, use [UT_WinServer_Reporting].[InstalledProductsView].

[screenshot]

You can also add some additional information to help describe the connection.  Then click ‘Finish’ and select the location where you want the query results to populate.

[screenshot]

Using the default (LocalDB) instance

In MAP 7.0, the default database install moved to SQL Server 2012 LocalDB. There are a couple of steps that are different from those used with other versions of SQL Server.

First, make sure that you have the SQL Server 2012 Native Client installed.  You can get it from

http://www.microsoft.com/en-us/download/details.aspx?id=29065.

With Excel open and the Data ribbon highlighted, select the ‘From other data sources’ option and select ‘From Data Connection Wizard’.

[screenshot]

Select the ‘Other/Advanced’ option.

[screenshot]

Then select the option for SQL Server Native Client 11.0 as highlighted below.  If this option is not available, make sure that you have the native client installed – http://www.microsoft.com/en-us/download/details.aspx?id=29065.

[screenshot]

Next, enter the server name. If you are using the default install, the server name will be: (localdb)\maptoolkit.

Set the option in #2 to Use Windows NT Integrated Security

Hit ‘Test Connection’

[screenshot]

If you’ve done it correctly, you will get a success message!

[screenshot]

Then follow the same steps as above where you select the database name and the InstalledProducts_view for 7.0.  For 8.0, use [UT_WinServer_Reporting].[InstalledProductsView].

Populate the results in your spreadsheet!
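
If you prefer the command line over Excel, the same view can also be queried with the sqlcmd utility (a sketch; the database name below is an example, so substitute the name of your own MAP database):

sqlcmd -S "(localdb)\maptoolkit" -E -d MAP_Inventory -Q "SELECT TOP 10 * FROM [UT_WinServer_Reporting].[InstalledProductsView]"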

What do I do next?

Well – that is entirely up to you. One thing that we like to do is create a pivot table and drill down into this information. Here is one that I created. I filtered the application name down to include only those that had SQL Server components. I could look at this by physical/virtual and by operating system.

Pretty cool – huh!

[screenshot]

As a reminder, here is a link to some valuable MAP community supported content.

Enjoy!


Oracle Payroll | R12 | Simple view for employee paid salaries

June 19, 2014


Recently I was requested by the HR/Payroll team to build a report with which they can generate the salary paid details for employees, i.e., a tabular listing of paid month and total salary earned, grouped by year.


I found the request to be one of the toughest, as my exposure to the Payroll module and base tables was close to none, other than knowing the person and assignment tables and views!

Gradually I started going through the custom reports developed by our implementer and restructured a few of their custom functions into the best possible view that meets our current requirements. As we are not using customized packages for the salary calculations, you should be able to alter the SQL below and create your own with almost no effort. We hope you will enjoy the solution!

Script for view

CREATE OR REPLACE VIEW XXEMPLOYEE_SALARIES_MONTHLY
AS
SELECT pap.person_id, pap.employee_number,to_char(ppa.date_earned,'Mon-YYYY') earned_month,
TO_NUMBER(to_char(ppa.date_earned,'MM')) MONTH_NUMBER,
TO_NUMBER(to_char(ppa.date_earned,'YYYY')) YEAR_FACTOR,
 sum(to_number(prrv.result_value)) PAID_AMOUNT
 FROM PAY_ELEMENT_TYPES_F petf
   ,PAY_INPUT_VALUES_F pivf
   ,PAY_PAYROLL_ACTIONS ppa
   ,PAY_ASSIGNMENT_ACTIONS paa
   ,PAY_RUN_RESULTS prr
   ,PAY_RUN_RESULT_VALUES prrv
   ,PER_ALL_ASSIGNMENTS_F paaf
   ,PER_ALL_PEOPLE_F pap
   ,PAY_ELEMENT_CLASSIFICATIONS pec
WHERE 1=1
  AND pec.classification_id = petf.classification_id
 and prrv.input_value_id = pivf.input_value_id
AND CLASSIFICATION_NAME IN ('Earnings','Supplemental Earnings')--Add in more based on your setup
  and pivf.name in ('Pay Value')
  AND petf.element_type_id = prr.element_type_id
  AND paa.assignment_action_id = prr.assignment_action_id
  AND prr.run_result_id = prrv.run_result_id
  AND petf.business_group_id = 81
  AND ppa.business_group_id = pap.business_group_id
  AND ppa.payroll_action_id = paa.payroll_action_id
  AND SYSDATE BETWEEN TRUNC(petf.effective_start_date) AND TRUNC(petf.effective_end_date)
  AND last_day(ppa.date_earned) BETWEEN TRUNC(pap.effective_start_date) AND TRUNC(pap.effective_end_date)
  AND last_day(ppa.date_earned) BETWEEN TRUNC(paaf.effective_start_date) AND TRUNC(paaf.effective_end_date)
  AND paaf.assignment_id = paa.assignment_id
  AND paaf.person_id = pap.person_id
 --and prrv.result_value > '0'
  AND paaf.business_group_id = pap.business_group_id
 AND pap.business_group_id = 81--double check
 GROUP BY pap.person_id,pap.employee_number, to_char(ppa.date_earned,'Mon-YYYY'),to_char(ppa.date_earned,'MM'),to_char(ppa.date_earned,'YYYY')
UNION ALL
SELECT pap.person_id, pap.employee_number,to_char(ppa.date_earned,'Mon-YYYY') earned_month,
TO_NUMBER(to_char(ppa.date_earned,'MM')) MONTH_NUMBER,
TO_NUMBER(to_char(ppa.date_earned,'YYYY')) YEAR_FACTOR,
 nvl(sum(to_number(prrv.result_value)),0)*-1 PAID_AMOUNT
 FROM PAY_ELEMENT_TYPES_F petf
   ,PAY_INPUT_VALUES_F pivf
   ,PAY_PAYROLL_ACTIONS ppa
   ,PAY_ASSIGNMENT_ACTIONS paa
   ,PAY_RUN_RESULTS prr
   ,PAY_RUN_RESULT_VALUES prrv
   ,PER_ALL_ASSIGNMENTS_F paaf
   ,PER_ALL_PEOPLE_F pap
   ,PAY_ELEMENT_CLASSIFICATIONS pec
WHERE 1=1
  AND pec.classification_id = petf.classification_id
  and prrv.input_value_id = pivf.input_value_id
  AND CLASSIFICATION_NAME IN ('Voluntary Deductions','Involuntary Deductions','Social Insurance')--Add in more based on your setup
  and pivf.name in ('Pay Value')
  AND petf.element_type_id = prr.element_type_id
  AND paa.assignment_action_id = prr.assignment_action_id
  AND prr.run_result_id = prrv.run_result_id
  AND ppa.business_group_id = pap.business_group_id
  AND ppa.payroll_action_id = paa.payroll_action_id
  AND SYSDATE BETWEEN TRUNC(petf.effective_start_date) AND TRUNC(petf.effective_end_date)
  AND last_day(ppa.date_earned) BETWEEN TRUNC(pap.effective_start_date) AND TRUNC(pap.effective_end_date)
  AND last_day(ppa.date_earned) BETWEEN TRUNC(paaf.effective_start_date) AND TRUNC(paaf.effective_end_date)
  AND paaf.assignment_id = paa.assignment_id
  AND paaf.person_id = pap.person_id
  --and prrv.result_value > '0.00'
  AND paaf.business_group_id = pap.business_group_id
  AND pap.business_group_id = 81--double check
 GROUP BY pap.person_id,pap.employee_number, to_char(ppa.date_earned,'Mon-YYYY'),to_char(ppa.date_earned,'MM'),to_char(ppa.date_earned,'YYYY')
 order by 2,5,4;

Sample Query

SELECT PERSON_ID, EMPLOYEE_NUMBER,earned_month,year_factor,
 SUM(PAID_AMOUNT) PAID_SALARY
 FROM XXEMPLOYEE_SALARIES_MONTHLY
 WHERE
 1=1
 AND EMPLOYEE_NUMBER =:P_EMPLOYEE_NUMBER
 AND YEAR_FACTOR BETWEEN NVL(:P_START_YEAR,YEAR_FACTOR) AND NVL(:P_END_YEAR,YEAR_FACTOR)
 GROUP BY PERSON_ID,EMPLOYEE_NUMBER,earned_month,YEAR_FACTOR, MONTH_NUMBER
 ORDER BY YEAR_FACTOR, MONTH_NUMBER
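
To quickly test the view from the command line before wiring it into a report, something like the following works (a sketch only; the apps credentials and the employee number are placeholders, not real values):

sqlplus -s apps/apps <<EOF
SELECT earned_month, year_factor, SUM(paid_amount) paid_salary
  FROM xxemployee_salaries_monthly
 WHERE employee_number = '1001'
 GROUP BY earned_month, year_factor, month_number
 ORDER BY year_factor, month_number;
EOF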

Enjoy another quality post from us guys :)

for Windows7bugs

rajesh


Linux shell script file for Start/Stop Weblogic Services

April 23, 2014


This time we are sharing a shell script for starting and stopping Weblogic services. This shell script can:

  1. Start the Weblogic Admin Server (after starting the Node Manager)
  2. Once the Admin Server has started, you can start the forms and reports services using the Admin Console
  3. Unless there are conflicts, you should be able to access the Oracle Enterprise Manager console as well

Copy the content below into a plain text file first, change the extension to .sh and set the execute permissions.


#!/bin/sh

if [ -z "$1" ]; then
echo "You must supply either start or stop command while calling this script! correct usage: weblogic_start_stop.sh start|stop"
exit
fi

bold=`tput bold`
normal=`tput sgr0`

case "$1" in
        'start')
        echo "Starting Management Node & Weblogic Server 10.3.6"

echo "Starting NodeManager"

nohup $WLS_HOME/server/bin/startNodeManager.sh > /dev/null 2>&1 &

sleep 10

output=`ps -ef | grep -i nodemanager.javahome | grep -v grep | awk {'print $2'} | head -1`

set $output

pid=$1

echo "Weblogic NodeManager Service was started with process id : ${bold}$pid${normal}"

echo "Starting WebLogic Domain"

nohup $MW_HOME/user_projects/domains/ClassicDomain/bin/startWebLogic.sh > /dev/null 2>&1 &

# Sleep until exiting
sleep 60
echo "All done, exiting"
exit
esac

################################Stopping the services##################################

case "$1" in
        'stop')
echo "Stopping Weblogic Server & Node Manager"

nohup $MW_HOME/user_projects/domains/ClassicDomain/bin/stopWebLogic.sh > /dev/null 2>&1 &

sleep 30

# echo "Killing Nodemanager process now"

output=`ps -ef | grep -i nodemanager.javahome | grep -v grep | awk {'print $2'} | head -1`
set $output
pid=$1
echo "Killing Weblogic NodeManager Service Process : ${bold}$pid${normal}"

kill -9 $pid

echo "All done, exiting"
exit
esac
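
For completeness, this is how the script is meant to be invoked once saved (the file name here matches the usage string printed by the script itself):

chmod +x weblogic_start_stop.sh    # one time only
./weblogic_start_stop.sh start     # starts Node Manager, then the Admin Server
./weblogic_start_stop.sh stop      # stops the Admin Server, then kills Node Manager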


If you have queries, please ask them using comments section.

for Windows7bugs

rajesh