
Sunday, 20 April 2014

How to upgrade WebLogic 10.3.5 to 10.3.6



How to upgrade WebLogic Server (WLS) 10.3.5 to 10.3.6 on Windows 64-bit.

Prerequisites before upgrading:

1. Shut down WebLogic Server and Oracle Business Intelligence by selecting:

Start > All Programs > Oracle Business Intelligence > Stop BI Services, and stop the Oracle WebLogic NodeManager (c_Middleware_wlserver_10.3) service.

2. Download the patch p13529623_1036_Generic.zip from the Oracle Support site.

How to download the patch: the screenshot below shows the patch for Linux; choose Windows from the drop-down instead.



3. Unzip p13529623_1036_Generic.zip; it contains the following file:
     wls1036_upgrade_generic.jar
4. Back up the Middleware folder and the inventory directory.

5. Check the Java version. WebLogic 10.3.5 installations require JDK 6, which can also be used for WebLogic 10.3.6 installations.

6. WebLogic 10.3.6 installations can use either JDK6 or JDK7 depending on your requirements. This is the same JDK7 you will use for development or production servers. Do not use JDK7 for an Oracle BI Apps 11.1.1.7.0 installation.


Upgrade:

1. (UNIX or Linux only) Include the -d64 flag in the installation command when using a 32/64-bit hybrid JDK (such as on the HP-PA, HPIA, and Solaris64 platforms). For example, if installing in graphical mode using the Package installer:
  • java -d64 -jar wls<version>_generic.jar

2. If you are using the Sun 64-bit JDK, navigate to the patch folder and run:
    java -Xmx1024m -jar wls1036_upgrade_generic.jar
   or, for 32-bit Java:
    java -jar wls1036_upgrade_generic.jar

3. To upgrade, choose 'Use an existing Middleware Home'. The Middleware Home directory will appear greyed out.

4. Follow the on-screen GUI instructions to complete the upgrade.

Post-installation:

How to check/verify the WebLogic version?

1. Once the server is started, access the administration console at "http://hostname:7001/console" and log in using the admin username and password; the version is shown on the console home page.

2. Alternatively, run the following command (adjust the path to your Middleware home):

java -cp /optional/Oracle/Middleware/wlserver_10.3/server/lib/weblogic.jar weblogic.version -verbose

 


Wednesday, 4 September 2013

DAC 10g Installation error in Linux 5 : oracle reinstall error: “Installation cannot continue...

oracle reinstall error: “Installation cannot continue. Make sure that you have read/write permissions to the inventory directory…”

 $ ./runInstaller

You do not have sufficient permissions to access the inventory '/u01/app/oraInventory'. Installation cannot continue. Make sure that you have read/write permissions to the inventory directory and restart the installer.: Permission denied

Oracle thinks there is still an oraInventory, even though I got rid of it. There is a file somewhere (not in /u01) telling Oracle that oraInventory exists, so I delete the install dir (home/oracle/database) and start over:

 $ rm -rf database/

$ unzip linux.x64_11gR2_database_1of2.zip
$ unzip linux.x64_11gR2_database_2of2.zip
$ cd database/
$ ./runInstaller
You do not have sufficient permissions to access the inventory '/u01/app/oraInventory'. Installation cannot continue. Make sure that you have read/write permissions to the inventory directory and restart the installer.: Permission denied

It wasn't in the database dir; it was /etc/oraInst.loc. After doing a few Oracle installs I guessed it was in /etc, but you could use find or locate to find this file.

as root:
rm /etc/oraInst.loc

 as oracle:

$ ./runInstaller
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB. Actual 9801 MB Passed
Checking swap space: must be greater than 150 MB. Actual 2015 MB Passed
..

It works!
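What was going on can be simulated in a throwaway directory (all paths below are temporary stand-ins, not the real /etc or /u01):

```shell
# Simulate the stale inventory pointer: the installer reads the inventory path
# from oraInst.loc, so deleting the inventory directory alone is not enough.
root=$(mktemp -d)
mkdir -p "$root/etc" "$root/u01/app/oraInventory"
printf 'inventory_loc=%s/u01/app/oraInventory\ninst_group=oinstall\n' "$root" > "$root/etc/oraInst.loc"

rm -rf "$root/u01/app/oraInventory"   # what I did first -- the pointer file survives
cat "$root/etc/oraInst.loc"           # stale pointer still names the deleted inventory

rm -f "$root/etc/oraInst.loc"         # the actual fix; a fresh install can now start clean
```

On a real box this is simply `rm /etc/oraInst.loc` as root; `find /etc -name oraInst.loc` or `locate oraInst.loc` will confirm where the pointer lives.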

Or, the same error while running the DAC installation on Linux:

 [dac10g@slc01mmi ~]$ cd dac/Disk1

[dac10g@slc01mmi Disk1]$ sh runInstaller

Platform is Linux X86_64

You do not have sufficient permissions to access the inventory '/scratch/rj/oraInventory'. Installation cannot continue. Make sure that you have read/write permissions to the inventory directory and restart the installer.: Permission denied

$ rm -r /scratch/rj/oraInventory

 $ ./runInstaller

You do not have sufficient permissions to access the inventory '/scratch/rj/oraInventory'. Installation cannot continue. Make sure that you have read/write permissions to the inventory directory and restart the installer.: Permission denied

Again, it wasn't in the install dir; the culprit was /etc/oraInst.loc, found the same way as before.

as root:
rm /etc/oraInst.loc

Then, as dac10g (su - dac10g), rerun the installer.


Linux New User Account

Create New User Accounts in Linux

Execute the following commands as root, or as a regular user via sudo.

1. To find the path of useradd:

whereis useradd

2. To add the user:

sudo useradd -d directory_path username

For example:

sudo useradd -d /scratch/oracle oracle1

Or:

sudo useradd -d /scratch/fmwumbtest/ testUser

(You should create the directory on the local disk. On Linux, that means on /scratch.)

3. Then, change the password:

sudo passwd username

(You get prompted for the password.)

4. Add the user to a group (such as dba or g579):

usermod -aG group_name username

(Use -a to append; with -G alone, the user's other supplementary groups are replaced.)

5. Test the account by changing user:

su - username

(The dash makes su start a login shell, running the new account's setup scripts and placing you in its home directory.)

 

Removing User Accounts

On occasion, you may wish to remove a user's access from your server altogether.

If you are a Red Hat user, the easiest way to remove an unneeded user account is with the userdel command, which must be run as root. An example follows:

/usr/sbin/userdel baduser

The above command removes the entry matching the username "baduser" from the /etc/passwd file and, if you are using the Shadow password format (which you should be; see Section 6.6 for details), from /etc/shadow as well.

Note: /etc/group is not modified, to avoid removing a group that other users may also belong to. This isn't a big deal, but if it bothers you, you can edit the group file and remove the entry manually.

Should you wish to remove the user's home directory as well, add the -r option to the userdel command. For example:

/usr/sbin/userdel -r baduser

I recommend not removing an account right away, but first simply disabling it, especially on a corporate server with many users. The former user may one day need the account again, may request a file or two stored in their home directory, or a new user (such as an employee replacement) may require access to the former user's files.

 

Examples

find -name 'mypage.htm'

In the above command the system would search for any file named mypage.htm in the current directory and any subdirectory.

find / -name 'mypage.htm'

In the above example the system would search for any file named mypage.htm on the root and all subdirectories from the root.

find -name 'file*'

In the above example the system would search for any file beginning with file in the current directory and any subdirectory.
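The examples above can be tried safely in a scratch directory (the directory and file names here are made up for the demo):

```shell
# Build a small tree, then repeat the find patterns from above against it
dir=$(mktemp -d)
touch "$dir/mypage.htm" "$dir/file1.txt"
mkdir "$dir/sub" && touch "$dir/sub/file2.txt"

find "$dir" -name 'mypage.htm'   # exact name, searched recursively
find "$dir" -name 'file*'        # any file beginning with "file", at any depth
```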

Error while connecting to Database

[rjkm@slc01mmi dbhome_1]$ sqlplus / as sysdba

 SQL*Plus: Release 11.2.0.3.0 Production on Sun Sep 1 12:34:23 2013

 Copyright (c) 1982, 2011, Oracle.  All rights reserved.

 Connected to an idle instance.

 SQL> CREATE TABLESPACE INFA_901_TS

       DATAFILE '/scratch/rjkm/app/oradata/OBIAPPS/INFA_901_TS.dbf'

       SIZE 1024M

       EXTENT MANAGEMENT LOCAL AUTOALLOCATE;  2    3    4

     CREATE TABLESPACE INFA_901_TS

*

ERROR at line 1:

ORA-01034: ORACLE not available

Process ID: 0

Session ID: 0 Serial number: 0

(SQL*Plus connected "to an idle instance": the database has not been started. Start the listener, then start the instance, as below.)

 [rjkm@slc01mmi dbhome_1]$ lsnrctl start

 LSNRCTL for Linux: Version 11.2.0.3.0 - Production on 01-SEP-2013 12:38:00

 Copyright (c) 1991, 2011, Oracle.  All rights reserved.

 Starting /scratch/rjkm/app/product/11.2.0/dbhome_1/bin/tnslsnr: please wait...

 TNSLSNR for Linux: Version 11.2.0.3.0 - Production

System parameter file is /scratch/rjkm/app/product/11.2.0/dbhome_1/network/admin/listener.ora

Log messages written to /scratch/rjkm/app/diag/tnslsnr/slc01mmi/listener/alert/log.xml

Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))

Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=1521)))

 Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1521)))

STATUS of the LISTENER

------------------------

Alias                     LISTENER

Version                   TNSLSNR for Linux: Version 11.2.0.3.0 - Production

Start Date                01-SEP-2013 12:38:02

Uptime                    0 days 0 hr. 0 min. 0 sec

Trace Level               off

Security                  ON: Local OS Authentication

SNMP                      OFF

Listener Parameter File   /scratch/rjkm/app/product/11.2.0/dbhome_1/network/admin/listener.ora

Listener Log File         /scratch/rjkm/app/diag/tnslsnr/slc01mmi/listener/alert/log.xml

Listening Endpoints Summary...

  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))

  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=1521)))

The listener supports no services

The command completed successfully

[rjkm@slc01mmi dbhome_1]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Sun Sep 1 12:41:00 2013

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup

ORACLE instance started.

Total System Global Area 1820540928 bytes

Fixed Size                  2229304 bytes

Variable Size             469765064 bytes

Database Buffers         1342177280 bytes

Redo Buffers                6369280 bytes

Database mounted.

Database opened.

SQL> conn infa901/infa901

Connected.

SQL>

Friday, 16 August 2013

Change Data Capture using Mapping variable

Informatica Parameters and Variables


Purpose

A mapping can use parameters and variables to store information during execution. Each parameter and variable is defined with a specific data type, and their main purpose is to provide increased development flexibility.

Parameters are different from variables in that:


  • The value of a parameter is fixed during the run of the mapping.

  • The value of a variable can change during run-time.


Both parameters and variables can be accessed from any component in the mapping that supports them. To create a parameter or variable, go to Mapping -> Parameters and Variables from within the Mapping Designer in the Designer client.


The format is $$VariableName or $$ParameterName .

What is Mapping Variable?


These are variables created in PowerCenter Designer, which you can use in any expression in a mapping; you can also use mapping variables in a source qualifier filter, user-defined join, or extract override, and in the Expression Editor of reusable transformations.

Mapping Variable Starting Value


A mapping variable can take its starting value from:



  1. Parameter file


  2. Pre-session variable assignment


  3. Value saved in the repository


  4. Initial value


  5. Default Value


The Integration Service looks for the start value in the order listed above. The value of the mapping variable can be changed within the session using an expression, and the final value of the variable is saved to the repository. The saved value is retrieved in the next session run and used as the start value.


Changing values of Variables


To change the value of a variable, one of the following functions can be used within an expression: SETMAXVARIABLE($$Variable, value), SETMINVARIABLE($$Variable, value), SETVARIABLE($$Variable, value), or SETCOUNTVARIABLE($$Variable), where:


  • SETVARIABLE sets the variable to a value that you specify (executes only if a row is marked as insert or update). At the end of a successful session, the Integration Service saves either the MAX or MIN of (start value, final value) to the repository, depending on the aggregation type of the variable. Unless overridden, it uses the saved value as the start value of the variable for the next session run.

  • SETCOUNTVARIABLE increments a counter variable: +1 if the row type is Insert, -1 if Delete; 0 is used for Update and Reject rows.

  • SETMAXVARIABLE compares the current value to the value passed into the function, returns the higher of the two, and sets the current value to it.

  • SETMINVARIABLE compares the current value to the value passed into the function, returns the lower of the two, and sets the current value to it.


At the end of a successful session, the values of variables are saved to the repository. The SETVARIABLE function writes the final value of a variable to the repository based on the Aggregation Type selected when the variable was defined.

Change Data Capture Implementation using Mapping variable.


We will implement Change Data Capture for the CUSTOMER data load. We need to load any new or changed customer records to a flat file. Since the column UPDATE_TS changes for any new or updated customer record, we can identify new or changed customers using the UPDATE_TS column.


As the first step, let's start the mapping and create a mapping variable as shown in the image below.



  • $$M_DATA_END_TIME as Date/Time





Now bring the source and source qualifier into the mapping designer workspace. Open the source qualifier and give the filter condition to pull only the latest data from the source, as shown below.



  • STG_CUSTOMER_MASTER.UPDATE_TS > CONVERT(DATETIME,'$$M_DATA_END_TIME')





Note: This filter condition ensures that only the latest data is pulled from the source table on each run. The latest value of the variable $$M_DATA_END_TIME is retrieved from the repository every time the session runs.


Now map the column UPDATE_TS to an expression transformation and create a variable expression as below.



  • SETMAXVARIABLE($$M_DATA_END_TIME,UPDATE_TS)





Note: This expression ensures that the latest value of the UPDATE_TS column is stored in the repository after a successful session run.


Now you can map all the remaining columns to the downstream transformations and complete the other transformations required in the mapping.





That's all you need to configure Change Data Capture. Now create your workflow and run it.


Once you look into the session log file, you can see the mapping variable value retrieved from the repository and used in the source SQL, as shown in the image below.





You can see the mapping variable value stored in the repository from Workflow Manager: choose the session in the workspace, right-click, and select 'View Persistent Value'. The mapping variable appears in a pop-up window, as shown below.
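The save-and-reuse cycle of the persistent variable can be imitated outside Informatica. This is only an analogy, not Informatica code: a temp file stands in for the repository, and the timestamps are invented.

```shell
#!/bin/bash
# SETMAXVARIABLE analogy: persist the max UPDATE_TS seen across "runs"
repo=$(mktemp)                                 # stands in for the repository
echo "2013-08-01 00:00:00" > "$repo"           # value saved by the previous run

start=$(cat "$repo")                           # session start value
max=$start
# rows arriving in this run; the source filter would pass only ts > start
for ts in "2013-08-10 09:00:00" "2013-08-15 17:30:00"; do
  if [[ "$ts" > "$max" ]]; then max=$ts; fi    # lexical compare works for this format
done

echo "$max" > "$repo"                          # saved at the end of a successful run
cat "$repo"                                    # the next run starts from this value
```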




Parameter files


A parameter file is an ASCII file used to set values of mapping parameters and variables. Parameters can be set at the workflow, worklet, or session level. The physical location of a parameter file is set in Workflow Manager under Workflows -> Edit; it can also be specified with the pmcmd command when starting a session task.


Parameter file structure


Parameters can be grouped into the following sections:



  • [Global]


  • [Service: service name]


  • [folder name.WF:workflow name]


  • [folder name.WF:workflow name.WT:worklet name]


  • [folder name.WF:workflow name.WT:worklet name.WT:worklet name...]


  • [folder name.WF:workflow name.ST:session name]


  • [folder name.session name]


  • [session name]


Examples / useful tips



  • The value is initialized by the specification that defines it; however, it can be set to a different value in a parameter file specified for the session task.


  • Initialization priority of Parameters: Parameter file, Declared initial value, Default value


  • Initialization priority of Variables: Parameter file, Repository value, Declared initial value, Default value


  • Parameters and variables can only be utilized inside of the object that they are created in.


  • Parameters and variables can be used in pre and post-SQL


  • Sample parameter file:

[Service:IntegrationSvc_01]
$$SuccessEmail=dwhadmin@etl-tools.info
$$FailureEmail=helpdesk@etl-tools.info

[DWH_PROJECT.WF:wkf_daily_loading]
$$platform=hpux
$$DBC_ORA=oracle_dwh

[DWH_PROJECT.WF:wkf_daily_loading.ST:s_src_sa_sapbw]
$$DBC_SAP=sapbw.etl-tools.info
$$DBC_ORA=oracle_sap_staging
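Scope precedence in the sample above can be checked mechanically. The awk one-liner below is a simplification for illustration, not how the Integration Service actually parses the file; it shows that the session-level [..ST:..] section wins over the workflow-level one for $$DBC_ORA:

```shell
# Which $$DBC_ORA does session s_src_sa_sapbw get? The most specific section wins.
pf=$(mktemp)
cat > "$pf" <<'EOF'
[DWH_PROJECT.WF:wkf_daily_loading]
$$platform=hpux
$$DBC_ORA=oracle_dwh
[DWH_PROJECT.WF:wkf_daily_loading.ST:s_src_sa_sapbw]
$$DBC_SAP=sapbw.etl-tools.info
$$DBC_ORA=oracle_sap_staging
EOF

# remember the current [section]; print the value defined in the session section
awk -F= '/^\[/ {sec=$0}
         sec ~ /ST:s_src_sa_sapbw/ && $1=="$$DBC_ORA" {print $2}' "$pf"
# prints: oracle_sap_staging
```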



Update Without Update Strategy

Suppose we have a requirement to implement updates without using an Update Strategy transformation, and only a few rows will be updated. To improve ETL performance, is there another way?

One option is to use a Lookup transformation to identify which rows should be inserted or updated, by comparing against the target table data in the lookup cache. This has a drawback: the lookup cache may grow in the future, so it may not be a good idea.

Instead, we can use the session property "Treat Source Rows As". The following explains this with an illustration.

During session configuration, you can select a single database operation for all rows using the Treat Source Rows As setting from the 'Properties' tab of the session.

  • Insert :- Treat all rows as inserts. 
  • Delete :- Treat all rows as deletes.
  • Update :- Treat all rows as updates. 
  • Data Driven :- The Integration Service follows the instructions coded in the Update Strategy transformation, which flags rows for insert, delete, update, or reject.

Specifying Operations for Individual Target Rows

Once you determine how to treat all rows in the session, you can also set options for individual rows, which gives additional control over how each row behaves. Define these options in the Transformations view on the Mapping tab of the session properties.

  • Insert :- Select this option to insert a row into a target table. 
  • Delete :- Select this option to delete a row from a table. 
  • Update :- You have the following options in this situation:
    • Update as Update :- Update each row flagged for update if it exists in the target table.
    • Update as Insert :- Insert each row flagged for update.
    • Update else Insert :- Update the row if it exists; otherwise, insert it.
  • Truncate Table :- Select this option to truncate the target table before loading data.

Implementation

Now that we understand the properties needed for our design, we can create the mapping just like an 'INSERT'-only mapping, without a Lookup or Update Strategy transformation. During session configuration, let's set the session properties so that the session can both insert and update.

First, set the Treat Source Rows As property as shown in the image below.


Now let's set the properties for the target table as shown below. Choose the properties Insert and Update else Insert.


That's all we need to set up the session for update and insert without an Update Strategy transformation.