Monday 31 December 2018

Find Weblogic PSU

Solution:

[user@hostname bin]$ cd /u02/app/Oracle/Middleware/wlserver_10.3/server/bin
[user@hostname bin]$ . setWLSEnv.sh

[user@hostname bin]$ java weblogic.version

WebLogic Server 10.3.6.0.181016  Tue Nov 15 08:52:36 PST 2011 1441050

Use 'weblogic.version -verbose' to get subsystem information

Use 'weblogic.utils.Versions' to get version information for all modules
[user@hostname bin]$
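The PSU identifier is the fifth dot-separated component of the version string (181016 in the output above). A minimal sketch of pulling it out of the `java weblogic.version` output — the sample line below is hard-coded to mirror the session above; in practice you would pipe the real command through the same awk/cut:

```shell
# Minimal sketch: extract the PSU identifier from weblogic.version output.
# Sample line copied from the session above; in practice:
#   java weblogic.version | grep 'WebLogic Server'
version_line='WebLogic Server 10.3.6.0.181016  Tue Nov 15 08:52:36 PST 2011 1441050'

# The full version is the 3rd whitespace-separated field; the PSU is its
# 5th dot-separated component (absent on an unpatched 10.3.6.0 install).
full_version=$(echo "$version_line" | awk '{print $3}')
psu=$(echo "$full_version" | cut -d. -f5)

echo "Version: $full_version"
echo "PSU:     ${psu:-none}"
```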

Find Weblogic Version

Solution-1:

[user@hostname ~]$ cd /u02/app/Oracle/Middleware/wlserver_10.3/server/lib/
[user@hostname lib]$ java -cp weblogic.jar weblogic.version

WebLogic Server 10.3.6.0  Tue Nov 15 08:52:36 PST 2011 1441050

Use 'weblogic.version -verbose' to get subsystem information

Use 'weblogic.utils.Versions' to get version information for all modules
[user@hostname lib]$

Thursday 27 December 2018

RHEL : Enable/Disable IPv6

Disable IPv6:
As the root user, run:
sysctl -w net.ipv6.conf.all.disable_ipv6=1
sysctl -w net.ipv6.conf.default.disable_ipv6=1

Enable IPv6:
As the root user, run:
sysctl -w net.ipv6.conf.all.disable_ipv6=0
sysctl -w net.ipv6.conf.default.disable_ipv6=0
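Note that `sysctl -w` changes only the running kernel, so the setting is lost on reboot. To make it persistent on RHEL, the usual approach is to put the keys in /etc/sysctl.conf and reload — a sketch (run as root):

```shell
# Persist the disable across reboots: append to /etc/sysctl.conf and reload.
cat >> /etc/sysctl.conf <<'EOF'
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
EOF
sysctl -p
```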

Sunday 2 December 2018

CRS-1714: Unable to discover any voting files after restarting the server

Error:

In ../../../crs/trace/alert.log

CRS-1714: Unable to discover any voting files, retrying discovery in 15 seconds;

CRS-8503 [] [] [] [] [] [] [] [] [] [] [] []

Solution:

Check that the shared ASM disks are owned by the grid user and its primary group.

Example:

Here the shared ASM disks are owned by root:

[root@testdb01 dev]# ls -ltr test_data test_fra test_ocr
brw-rw---- 1 root disk 253, 3 Dec  2 12:36 test_data
brw-rw---- 1 root disk 253, 5 Dec  2 12:36 test_fra
brw-rw---- 1 root disk 253, 6 Dec  2 12:36 test_ocr
[root@testdb01 dev]#

Change the ownership to grid:oinstall and start the services

[root@testdb01 dev]# chown grid:oinstall test_data test_fra test_ocr
[root@testdb01 dev]# ls -ltr test_data test_fra test_ocr
brw-rw---- 1 grid oinstall 253, 3 Dec  2 12:36 test_data
brw-rw---- 1 grid oinstall 253, 5 Dec  2 12:36 test_fra
brw-rw---- 1 grid oinstall 253, 6 Dec  2 12:36 test_ocr
[root@testdb01 dev]#
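Since the error appeared after a server restart, keep in mind that a plain `chown` on /dev nodes does not survive a reboot — the ownership resets and CRS-1714 comes back. One common way to make it persistent is a udev rule; the file name and the match pattern below are illustrative assumptions, adjust them to how your ASM devices are actually named:

```shell
# Persist grid:oinstall ownership of the ASM device-mapper devices across
# reboots (rule file name and DM_NAME pattern are illustrative; adapt to
# your environment). Run as root:
cat > /etc/udev/rules.d/99-oracle-asmdevices.rules <<'EOF'
KERNEL=="dm-*", ENV{DM_NAME}=="test_*", OWNER="grid", GROUP="oinstall", MODE="0660"
EOF
udevadm control --reload-rules
```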

Thursday 15 November 2018

DB : Enable archive in RAC

Solution:

ALTER SYSTEM SET log_archive_dest_1='location=/oradata/ORCL/archive/' SCOPE=spfile;
ALTER SYSTEM SET log_archive_format='arch_%t_%s_%r.arc' SCOPE=spfile;
ALTER SYSTEM SET cluster_database=FALSE SCOPE=spfile;
srvctl stop database -d ORCL

Connect to a single node as SYSDBA and execute the statements below

sqlplus / as sysdba
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER SYSTEM SET cluster_database=TRUE SCOPE=spfile;
SHUTDOWN IMMEDIATE;
srvctl start database -d ORCL
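Once the database is back up, it can be confirmed on any node that archiving is actually enabled — a quick check via sqlplus:

```shell
# Verify the database is now running in ARCHIVELOG mode.
sqlplus -s / as sysdba <<'EOF'
archive log list;
select log_mode from v$database;
EOF
```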

Sunday 7 October 2018

utlrp.sql & adutlrpcmp.sql hangs

Issue:

Compiling the invalid objects hangs in both utlrp.sql and adadmin

SQL> @utlrp.sql

TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_BGN  2018-10-07 08:50:38

DOC>   The following PL/SQL block invokes UTL_RECOMP to recompile invalid
DOC>   objects in the database. Recompilation time is proportional to the
DOC>   number of invalid objects in the database, so this command may take
DOC>   a long time to execute on a database with a large number of invalid
DOC>   objects.
DOC>
DOC>   Use the following queries to track recompilation progress:
DOC>
DOC>   1. Query returning the number of invalid objects remaining. This
DOC>      number should decrease with time.
DOC>         SELECT COUNT(*) FROM obj$ WHERE status IN (4, 5, 6);
DOC>
DOC>   2. Query returning the number of objects compiled so far. This number
DOC>      should increase with time.
DOC>         SELECT COUNT(*) FROM UTL_RECOMP_COMPILED;
DOC>
DOC>   This script automatically chooses serial or parallel recompilation
DOC>   based on the number of CPUs available (parameter cpu_count) multiplied
DOC>   by the number of threads per CPU (parameter parallel_threads_per_cpu).
DOC>   On RAC, this number is added across all RAC nodes.
DOC>
DOC>   UTL_RECOMP uses DBMS_SCHEDULER to create jobs for parallel
DOC>   recompilation. Jobs are created without instance affinity so that they
DOC>   can migrate across RAC nodes. Use the following queries to verify
DOC>   whether UTL_RECOMP jobs are being created and run correctly:
DOC>
DOC>   1. Query showing jobs created by UTL_RECOMP
DOC>         SELECT job_name FROM dba_scheduler_jobs
DOC>            WHERE job_name like 'UTL_RECOMP_SLAVE_%';
DOC>
DOC>   2. Query showing UTL_RECOMP jobs that are running
DOC>         SELECT job_name FROM dba_scheduler_running_jobs
DOC>            WHERE job_name like 'UTL_RECOMP_SLAVE_%';
DOC>#
^CDECLARE
*
ERROR at line 1:
ORA-01013: user requested cancel of current operation
ORA-06512: at "SYS.DBMS_UTILITY", line 1294
ORA-06512: at line 1
ORA-06512: at "SYS.UTL_RECOMP", line 413
ORA-06512: at "SYS.UTL_RECOMP", line 559
ORA-06512: at "SYS.UTL_RECOMP", line 807
ORA-06512: at line 4



SQL>


Fix:

Find the JOB_NAME responsible for recompiling the objects. The JOB_NAME will be of the form "UTL_RECOMP_SLAVE_#".
SELECT * FROM DBA_SCHEDULER_RUNNING_JOBS;

Find out the OBJ# of those jobs:
SELECT * FROM SCHEDULER$_JOB;

Change the status to 1 for those jobs:
UPDATE SCHEDULER$_JOB SET JOB_STATUS = 1 WHERE OBJ# IN (1577769);
COMMIT;

Now rerun the recompilation using utlrp.sql or adadmin


Thursday 19 July 2018

DB : MEMORY_MAX_TARGET & MEMORY_TARGET are set to 0

When MEMORY_MAX_TARGET & MEMORY_TARGET are set to 0, the database runs with a variation of ASMM or with manual memory management.

If MEMORY_TARGET is set to a non-zero value:

If SGA_TARGET and PGA_AGGREGATE_TARGET are set, they will be considered the minimum values for the sizes of SGA and the PGA respectively. MEMORY_TARGET values can range from SGA_TARGET + PGA_AGGREGATE_TARGET to MEMORY_MAX_TARGET.
If SGA_TARGET is set and PGA_AGGREGATE_TARGET is not set, we will still auto-tune both parameters. PGA_AGGREGATE_TARGET will be initialized to a value of MEMORY_TARGET - SGA_TARGET.
If PGA_AGGREGATE_TARGET is set and SGA_TARGET is not set, we will still auto-tune both parameters. SGA_TARGET will be initialized to the minimum non-zero value of MEMORY_TARGET - PGA_AGGREGATE_TARGET and SGA_MAX_SIZE and will auto tune its components.
If neither is set, they will be auto-tuned without any minimum or default values. We will have a policy of distributing the total memory set by the MEMORY_TARGET parameter in a fixed ratio to the SGA and PGA during initialization. The policy is to give 60% to the SGA and 40% to the PGA at startup.

If MEMORY_MAX_TARGET has not been explicitly set, but MEMORY_TARGET has, the instance automatically sets MEMORY_MAX_TARGET to the same value as MEMORY_TARGET. If MEMORY_TARGET has not been explicitly set, but MEMORY_MAX_TARGET has, then MEMORY_TARGET defaults to 0. After instance startup, it then is possible to dynamically change MEMORY_TARGET to a non-zero value, provided that it does not exceed the value of MEMORY_MAX_TARGET.

If MEMORY_TARGET is not set, or is explicitly set to 0 (the default value in 11g):

If SGA_TARGET is set we will only auto-tune the sizes of the components of the SGA. PGA will be autotuned independent of whether it is explicitly set or not. However, the combination of SGA and PGA will not be auto-tuned, i.e. the SGA and PGA will not share memory and resize as with the case of MEMORY_TARGET being set to a non-zero value.
If neither SGA_TARGET nor PGA_AGGREGATE_TARGET is set, we will follow the same policy as we have today; PGA will be auto-tuned and the SGA will not be auto-tuned and parameters for some of the SGA components will have to be set explicitly (for SGA_TARGET).
If only MEMORY_MAX_TARGET is set, MEMORY_TARGET will default to 0 and we will not auto tune the SGA and PGA. It will default to 10gR2 behavior.
If SGA_MAX_SIZE is not set by the user, it is internally set to MEMORY_MAX_TARGET.
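The 60/40 startup policy described above is easy to illustrate with plain arithmetic — the figures below are purely illustrative, not queried from a database:

```shell
# Illustration of the documented startup policy: with only MEMORY_TARGET
# set, 60% goes to the SGA and 40% to the PGA. Values in MB, integer math.
memory_target=2048
sga=$(( memory_target * 60 / 100 ))
pga=$(( memory_target * 40 / 100 ))
echo "MEMORY_TARGET=${memory_target}M -> SGA=${sga}M PGA=${pga}M"
```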

Saturday 7 April 2018

EBS : Concurrent manager not starting after cloning R12.1

Error:

CONC-SM TNS FAIL
Routine AFPEIM encountered an error while starting concurrent manager STANDARD


Solution:
Step 1:

Stop the Concurrent Manager

Step 2:

SQL> select count(*) from fnd_nodes;

  COUNT(*)
----------
         2

SQL> select unique(node_name) from fnd_concurrent_queues;

NODE_NAME
------------------------------
PROD
DEV

SQL> update fnd_concurrent_queues set node_name='DEV' where node_name='PROD';

43 rows updated.

SQL> commit;

Commit complete.

Step 3:

Start the Concurrent Manager

Wednesday 31 January 2018

RMAN : Clear archivelogs from flash_recovery_area

Manually delete the archive logs in the flash_recovery_area, then run the commands below so RMAN marks the missing logs as expired and removes them from its catalog.


$ rman target /
RMAN> crosscheck archivelog all;
RMAN> delete expired archivelog all;
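Alternatively, RMAN can remove the archive logs itself, which keeps the catalog consistent without any manual OS-level deletion — a sketch keeping a 7-day window (adjust the window to your retention needs):

```shell
# Let RMAN delete archive logs older than 7 days and update its catalog
# in one step.
rman target / <<'EOF'
DELETE NOPROMPT ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE-7';
EOF
```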

SQL Developer displaying Junk characters (??????)

Change the encoding in SQL Developer preferences as follows: Tools --> Preferences --> Environment --> Encoding --> UTF...