Friday, January 07, 2011

Installing Fusion Middleware Control 11g (Enterprise Manager)

Installing Fusion Middleware Control 11g onto a Weblogic domain is a troublesome affair. By default, Weblogic 10.3.3 does not ship with Enterprise Manager. Note also that even if you install Fusion Middleware 11g (which includes JDeveloper, ADF and Weblogic), Enterprise Manager is not bundled.

Go to the Oracle download site and download the following:
1) Weblogic (at time of writing, 10.3.3)
2) RCU - Repository Creation utility (at time of writing,
3) Application Development Runtime (at time of writing, and

To install Enterprise Manager, do the following -
1) Install Weblogic
2) Run RCU (rcuHome\BIN\rcu.bat)
3) Install Application Development Runtime
4) Patch Application Development Runtime

Create a domain and select Oracle Enterprise Manager – this will automatically select its dependency, Oracle JRF.

Please note Oracle JRF is the ADF runtime that is needed for all ADF applications.

To test,
1) Start Admin Server
2) Open browser and point to the Admin Server URL/em (example http://localhost:7001/em)
3) Login with the admin user credentials.

Thursday, November 04, 2010

Toplink Cache and Weblogic Cluster

JPA cache

JPA (Java Persistence) implementations use a level 2 (L2) cache, which is a cache behind the session cache (aka unit of work cache). This cache is typically used by the EntityManager's find operation, or when querying for entities by primary key. It is also used to initialize collection members after loading up an entity's collection.

JPA cache in an application server cluster such as that of Weblogic

When the JPA application is deployed on a single node of an application server, and there is no out-of-band access to the database, the cache is really a boon. However, as soon as there are external writes to the database, cache staleness becomes a problem. The external write could come either from some other application writing to the database, or from the application itself when deployed across an application server cluster.

For example, consider an application which manages a car distributorship. The application queries for cars in the inventory that are on sale and, on a purchase, updates the inventory as sold. When such a query is made, the Car entity may be loaded into the L2 cache of the JPA implementation. Now, if the purchase operation updates one node in the cluster and the entity is not refreshed or invalidated on the other node, then the application is potentially dealing with stale data in the cache.
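The staleness problem can be illustrated with plain maps standing in for each node's L2 cache. This is a sketch of the failure mode only, not of Toplink itself – the class and method names are invented:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative only: each "node" keeps its own L2 cache (a plain map here,
// standing in for the JPA provider's shared cache).
public class StaleCacheDemo {
    static Map<Integer, String> database = new HashMap<>();
    static Map<Integer, String> node1Cache = new HashMap<>();
    static Map<Integer, String> node2Cache = new HashMap<>();

    // find() mimics EntityManager.find(): check the L2 cache, else hit the DB.
    static String find(Map<Integer, String> cache, int id) {
        return cache.computeIfAbsent(id, database::get);
    }

    // An update on node 1 writes through to the DB and refreshes node 1's
    // cache only; node 2's cache is never told about it.
    static void updateOnNode1(int id, String value) {
        database.put(id, value);
        node1Cache.put(id, value);
    }

    public static void main(String[] args) {
        database.put(1, "Car{status=IN_STOCK}");
        find(node1Cache, 1);                     // both nodes warm their caches
        find(node2Cache, 1);
        updateOnNode1(1, "Car{status=SOLD}");
        System.out.println(find(node1Cache, 1)); // Car{status=SOLD}
        System.out.println(find(node2Cache, 1)); // Car{status=IN_STOCK} <- stale!
    }
}
```

Node 2 keeps serving the sold car from its cache until something refreshes or invalidates the entry – which is exactly what the strategies below address.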

To handle such situations, some cache synchronization technique obviously needs to be employed. In this entry, I will document three strategies that can be used with Oracle Toplink in a Weblogic cluster environment –

1. Disable the cache

2. Use Toplink Cache Coordination

3. Use Toplink Grid (Oracle Coherence integration)

Disabling the Toplink L2 cache

The L2 cache can be disabled per entity or as a whole. To disable the cache for all entities, add the following property to persistence.xml –

<property name="eclipselink.cache.shared.default" value="false"/>

To disable the cache per entity, use the following entry in eclipselink-orm.xml –

<cache shared="false" />

For example

<?xml version="1.0" encoding="UTF-8"?>
<entity-mappings version="2.1"
    xmlns="http://www.eclipse.org/eclipselink/xsds/persistence/orm"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <entity class="mypackage.MyEntity">
        <cache shared="false" />
    </entity>
</entity-mappings>

However, please note that there is a bug in the current implementation; the workaround suggested in the bug report needs to be applied.

Use Toplink Cache Coordination

Cache coordination is a mechanism of EclipseLink (Toplink) which allows the JPA caches on the individual nodes to communicate and synchronize changes. The communication itself can be done over the following transports –

1. JMS

2. RMI


JMS also allows for asynchronous coordination.

The following strategies can be employed to synchronize the changes in the cache –

1. SEND_OBJECT_CHANGES – This is the default; it sends update events only for changes to the attributes of an entity. New object creations (for example, adding a new member to a collection) are not propagated.

2. INVALIDATE_CHANGED_OBJECTS – This option invalidates the entity in the peer caches whenever it changes.

3. SEND_NEW_OBJECTS_WITH_CHANGES – This option extends the first to also send newly created entities, which takes care of propagating additions to a collection.

4. NONE – No updates are sent.
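The difference between the first two strategies can be sketched with plain maps standing in for the node caches. The method names here are invented for illustration; the real mechanics live inside EclipseLink:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the two most common coordination types.
public class CoordinationDemo {
    static Map<Integer, String> database = new HashMap<>();

    // SEND_OBJECT_CHANGES: the peer cache is patched with the new state,
    // but only if it already holds the entity.
    static void sendObjectChanges(Map<Integer, String> peerCache, int id, String newState) {
        if (peerCache.containsKey(id)) peerCache.put(id, newState);
    }

    // INVALIDATE_CHANGED_OBJECTS: the peer simply drops its copy; the next
    // read falls through to the database.
    static void invalidate(Map<Integer, String> peerCache, int id) {
        peerCache.remove(id);
    }

    static String read(Map<Integer, String> cache, int id) {
        return cache.computeIfAbsent(id, database::get);
    }

    public static void main(String[] args) {
        database.put(1, "v1");
        Map<Integer, String> peer = new HashMap<>();
        read(peer, 1);                      // peer caches "v1"

        database.put(1, "v2");              // node A commits a change...
        sendObjectChanges(peer, 1, "v2");   // ...and broadcasts the new state
        System.out.println(read(peer, 1));  // v2, served from the peer cache

        database.put(1, "v3");
        invalidate(peer, 1);                // alternative: just invalidate
        System.out.println(read(peer, 1));  // v3, re-read from the database
    }
}
```

Sending changes keeps the peer caches warm at the cost of bigger messages; invalidation sends less over the wire but forces a database round trip on the next read.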

To set up cache coordination, two configurations need to be done –

1. Set up the coordination transport

2. Set up the coordination type for the entities

To set up the cache coordination transport, edit the persistence.xml and add the following properties –

<property name="eclipselink.cache.coordination.protocol" value="rmi"/>

<property name="eclipselink.cache.coordination.rmi.multicast-group" value=""/>

<property name="eclipselink.cache.coordination.rmi.multicast-group.port" value="9872"/>

<property name="eclipselink.cache.coordination.jndi.user" value="weblogic"/>

<property name="eclipselink.cache.coordination.jndi.password" value="Welcome1"/>

<property name="eclipselink.cache.coordination.propagate-asynchronously" value="false"/>

<property name="eclipselink.cache.coordination.naming-service" value="jndi"/>

<property name="eclipselink.cache.coordination.rmi.url" value="t3://localhost:7004"/>

<property name="eclipselink.cache.coordination.packet-time-to-live" value="4"/>

This sets up the RMI configuration. Please note that the RMI URL can point to any of the Weblogic managed servers, since the JNDI tree is replicated across the cluster. Alternatively, the port can be left out.

To set up the cache coordination type, edit the eclipselink-orm.xml and add the following –

<cache coordination-type="INVALIDATE_CHANGED_OBJECTS" />

For example –

<?xml version="1.0" encoding="UTF-8"?>
<entity-mappings version="2.1"
    xmlns="http://www.eclipse.org/eclipselink/xsds/persistence/orm"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <entity class="mypackage.MyEntity">
        <cache coordination-type="INVALIDATE_CHANGED_OBJECTS" />
    </entity>
</entity-mappings>

However, please note that there is a bug in the current implementation; the workaround suggested in the bug report needs to be applied.

Use Toplink Grid

Toplink Grid is the integration of Toplink with Oracle Coherence. Toplink Grid is part of Active Cache which also includes Coherence Web.

Toplink Grid provides three strategies to integrate a JPA (Toplink) application with Coherence –

1. Grid Cache

2. Grid Read

3. Grid Write

Grid Cache is the simplest and the least intrusive option for a vanilla JPA application. It basically ties the L2 cache of Toplink to Coherence, so that every read from the JPA cache results in a get from Coherence and, similarly, every write to the JPA cache results in a put to Coherence.

Grid Read and Grid Write require code changes and allow Toplink to read through or write through Coherence. With these, however, the full benefit of the data grid can be realized.

In this entry, the configuration for Grid Cache is described. The following steps need to be performed –

1. Create Coherence Cache configuration and refer to this from the JPA application

2. Configure Coherence Cluster and refer to this from the JPA application

3. Set up related shared libraries in Weblogic and refer to these libraries from the JPA application

4. Configure JPA entities to use Grid Cache

Coherence Cache configuration

Create a coherence-cache-config.xml file in some known location, say D:\, and add the cache configuration to this file. The scheme names below are illustrative; the significant settings are the cache size limit (high-units) and the eviction policy.

<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config>
    <caching-scheme-mapping>
        <cache-mapping>
            <cache-name>*</cache-name>
            <scheme-name>eclipselink-distributed</scheme-name>
        </cache-mapping>
    </caching-scheme-mapping>
    <caching-schemes>
        <distributed-scheme>
            <scheme-name>eclipselink-distributed</scheme-name>
            <service-name>EclipseLinkJPA</service-name>
            <backing-map-scheme>
                <local-scheme>
                    <high-units>10000</high-units>
                    <eviction-policy>LFU</eviction-policy>
                </local-scheme>
            </backing-map-scheme>
            <autostart>true</autostart>
        </distributed-scheme>
    </caching-schemes>
</cache-config>

After this, create a JAR file containing the above file, add the JAR as a shared library (targeted to all relevant servers) in the Weblogic console, and refer to this shared library from MyApp.ear\META-INF\weblogic-application.xml as follows (assuming the library was registered as coherence-cache-config) –

<library-ref>
    <library-name>coherence-cache-config</library-name>
</library-ref>
Coherence cluster configuration

In Weblogic console, find “Coherence Clusters” under Services. Create a new Coherence Cluster. Specify the following –

Name: CoherenceCluster

Unicast Listen Address: localhost

Unicast Listen Port: Unique port number

Unicast Port Auto Adjust: true

Multicast Listen Address:

Multicast Listen Port: Unique port number

Refer to the above Coherence Cluster in MyApp.ear\META-INF\weblogic-application.xml as follows –

<coherence-cluster-ref>
    <coherence-cluster-name>CoherenceCluster</coherence-cluster-name>
</coherence-cluster-ref>
Related Library configurations

Create shared libraries (target to all relevant servers) for the following in Weblogic console –

1. D:\Oracle\Middleware11.1.1.3\wlserver_10.3\common\deployable-libraries\active-cache-1.0.jar

2. D:\Oracle\Middleware11.1.1.3\wlserver_10.3\common\deployable-libraries\toplink-grid-1.0.jar

3. D:\Oracle\Middleware11.1.1.3\coherence_3.5\lib\coherence.jar

Refer to the above shared libraries from MyApp.ear\META-INF\weblogic-application.xml by adding the following elements (the library names here assume what was chosen when registering the libraries) –

<library-ref>
    <library-name>coherence-cache-config</library-name>
</library-ref>
<library-ref>
    <library-name>active-cache</library-name>
</library-ref>
<library-ref>
    <library-name>toplink-grid</library-name>
</library-ref>
<library-ref>
    <library-name>coherence</library-name>
</library-ref>
Note that the reference to the cache configuration should appear above the reference to coherence.

Configure JPA entities to use Grid Cache

To set up the Grid Cache, edit the eclipselink-orm.xml

For example –

<?xml version="1.0" encoding="UTF-8"?>
<entity-mappings version="2.1"
    xmlns="http://www.eclipse.org/eclipselink/xsds/persistence/orm"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <entity class="mypackage.MyEntity">
        <customizer class="oracle.eclipselink.coherence.integrated.config.GridCacheCustomizer"/>
    </entity>
</entity-mappings>

Running Coherence

To start Coherence Server, run the following

D:\>java -server -Xms512m -Xmx512m -javaagent:D:\Oracle\Middleware11.1.1.3\modules\org.eclipse.persistence_1.0.0.0_2-0.jar -cp D:\Oracle\Middleware11.1.1.3\cohe…
…0.0_11-1-1-3-0.jar;D:\Oracle\Middleware11.1.1.3\modules\;MyApp.jar -Dtangosol.coherence.cacheconfig=d:\coherence-cache-config.xml -Dtangosol.coherence.distributed.localstorage=true -Dtangosol.coherence.clusterport=7777 -Dtango…



Wednesday, November 03, 2010

Java Garbage collection

Garbage collection

The Java garbage collection strategy and configuration chosen have a significant impact on the behavior of an application, particularly server-side enterprise applications. There are two aspects to this –

1. Memory usage pattern of the application

2. Type of application

The garbage collection configuration needed for an application that creates a lot of short-lived objects is different from that for an application that creates more persistent objects.

Similarly, the type of the application determines the GC to be used. Real-time or near-real-time applications cannot tolerate the application pauses caused by GC processing.

Garbage collection strategies

There are two parts to garbage collection –

1. Process of identifying stale objects and marking them

2. Process of garbage collection itself

Most modern collectors use either reference counting or object traversal to identify stale objects. Object traversal is more popular, whereby the collector starts from a set of well-known root objects, traverses the object graph, and marks every reachable object; anything left unmarked is not referenced anywhere and is garbage.
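The traversal-based marking can be sketched in a few lines of plain Java. This is an illustration only – a map of invented object names standing in for real references, not how a JVM implements it:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Tiny reachability sketch: objects form a graph, roots are known, and
// anything not reached from a root is garbage.
public class MarkDemo {
    static Map<String, List<String>> refs = new HashMap<>();

    static Set<String> mark(List<String> roots) {
        Set<String> live = new HashSet<>();
        Deque<String> work = new ArrayDeque<>(roots);
        while (!work.isEmpty()) {
            String obj = work.pop();
            if (live.add(obj)) {                     // first visit: mark it...
                work.addAll(refs.getOrDefault(obj, List.of())); // ...and follow its refs
            }
        }
        return live;
    }

    public static void main(String[] args) {
        refs.put("root", List.of("a"));
        refs.put("a", List.of("b"));
        refs.put("c", List.of("d"));   // c and d are unreachable from root
        System.out.println(mark(List.of("root"))); // root, a and b are live
    }
}
```

Everything not in the returned set – here c and d – is what the sweep (or copy) phase reclaims.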

Following are some of the garbage collection algorithms –

1. Mark and Sweep - In this strategy, after marking, the GC sweeps the heap and frees the memory held by the unmarked (unreachable) objects. It is not very suitable where lots of new objects are being created, and it also leaves the memory fragmented.

2. Mark, Sweep and Compact - In this strategy, the GC not only frees the memory of unreachable objects but also compacts the heap so that contiguous blocks of free memory are made available. While this strategy does not fragment memory, it is still expensive under heavy object allocation.

3. Incremental - This strategy breaks the memory into "train cars" and "trains" and performs allocation and collection incrementally on these small, managed regions.

4. Copy - In this strategy, the heap space is broken into two semi-spaces – a to-semi-space and a from-semi-space. All new memory allocation is performed in the from-semi-space. At some threshold, the garbage collector kicks in and copies all live objects over to the to-semi-space. After the copy, the to-semi-space becomes the new from-semi-space. Stale objects are left behind and are simply overwritten during the next cycle. This strategy is very good for heavy allocation of short-lived objects. However, if many persistent objects linger on, they are copied over and over, adding to the cost.
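The copy strategy above can be sketched the same way; the liveSet parameter is a stand-in for the marking step, and lists stand in for the two semi-spaces:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Semi-space sketch: live objects are copied from the from-space to the
// to-space, then the spaces swap roles; dead objects are simply left behind.
public class CopyCollectorDemo {
    static List<String> fromSpace = new ArrayList<>();
    static List<String> toSpace = new ArrayList<>();

    static void allocate(String obj) {
        fromSpace.add(obj);           // all allocation happens in the from-space
    }

    static void collect(Set<String> liveSet) {
        for (String obj : fromSpace) {
            if (liveSet.contains(obj)) toSpace.add(obj); // copy survivors only
        }
        fromSpace = toSpace;          // the to-space becomes the new from-space
        toSpace = new ArrayList<>();  // the old from-space is reused, empty
    }

    public static void main(String[] args) {
        allocate("a");
        allocate("b");
        allocate("c");
        collect(Set.of("a", "c"));     // "b" is dead: it is never copied
        System.out.println(fromSpace); // [a, c]
    }
}
```

Note that collection cost is proportional to the number of live objects, not to the amount of garbage – which is exactly why the strategy suits short-lived allocations.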

Garbage collectors could use any of the above collection algorithms and execute in the following modes –

1. Stop the world - Typically in this mode, the garbage collector stops all other JVM threads when it is processing. This results in intermittent pauses in application processing because of GC runs. This is generally optimized for application throughput.

2. Parallel - In this mode, the garbage collector has multiple threads (often equal to the number of CPUs, though the parallelism can generally be controlled) sharing the garbage collection load. It usually still stops the other JVM threads, resulting in application pauses, albeit smaller ones. This approach is also generally targeted at application throughput.

3. Concurrent - In this mode, the garbage collector threads run alongside the other JVM threads, allowing garbage collection to proceed while the application runs. The collection itself is broken into phases, and application threads may be paused for only a couple of those phases. This allows near-real-time application processing, with probably lower throughput.

So, from an application perspective, it is not really possible to get both maximum throughput and near-real-time behavior; this is a compromise that the application designers and deployers have to make.

GC strategies in Hotspot

Hotspot breaks the heap space into three areas -

1. New or Young generation area – This area uses the copy strategy as discussed before and is optimized for new object creation and is really dedicated for newly created objects and objects with short life cycle.

This area is further sub divided into Eden and two survivor spaces (To-semi-space and from-semi-space).

Eden is the area where all new objects are created. When a threshold is reached, the GC copies the currently live objects into the from-survivor semi-space.

When the survivor threshold is reached in the from-survivor semi-space, live objects are further copied into the to-survivor semi-space, which thereby becomes the new from-space. An object which is still alive after a few such GC runs is said to have tenured and is then moved to the old generation area.

2. Old generation area - This is the area of the heap space dedicated for long standing objects. Typically this area uses Mark, Sweep and Compact collectors.

3. Permanent area - This area is used by JVM for storing permanent objects such as Classes and Methods.

Configuring GC in Hotspot

The total heap size in Hotspot can be configured using –Xms (initial size, default is 2M) and –Xmx (max size, default is 64M)

Please note that the total heap size covered by the above configuration includes only the young and old generation areas and EXCLUDES the permanent area. To configure the permanent area size, use -XX:PermSize (initial size of the permanent space, default is 4M) and -XX:MaxPermSize (max size of the permanent space, default is 32M). It is generally advisable to set both the initial and maximum to the same, appropriately high value, because every resize of the permanent space causes a full GC run.

The young and old generation sizes can be further controlled via -XX:NewRatio (ratio of the old generation to the young generation; the default ranges from 2 to 12 depending on the processor and on the client or server setting). A value of 2 means the young generation is half the size of the old generation, i.e. one third of the total heap. If more control is needed, then -XX:NewSize (initial size of YG) and -XX:MaxNewSize (max size of YG) can be used.

To control the sizes of Eden and the survivor areas, -XX:SurvivorRatio (default is 8) can be used. This controls the ratio of Eden to one survivor semi-space; the default of 8 means each survivor space is 1/8th the size of Eden.
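The arithmetic implied by these flags can be checked with a small calculation. The 600 MB heap and flag values below are just example numbers, and real HotSpot rounds and aligns the results:

```java
// Given -Xmx, -XX:NewRatio and -XX:SurvivorRatio, derive the region sizes
// the flags imply (in MB, simplified).
public class HeapSizing {
    // NewRatio = OG / YG, so YG = heap / (NewRatio + 1)
    static long young(long heap, int newRatio) {
        return heap / (newRatio + 1);
    }

    // YG = Eden + 2 survivors, and Eden = SurvivorRatio * survivor,
    // so survivor = YG / (SurvivorRatio + 2)
    static long survivor(long young, int survivorRatio) {
        return young / (survivorRatio + 2);
    }

    static long eden(long young, int survivorRatio) {
        return young - 2 * survivor(young, survivorRatio);
    }

    public static void main(String[] args) {
        long heap = 600;                     // -Xmx600m
        long yg = young(heap, 2);            // -XX:NewRatio=2
        System.out.println(yg);              // 200 (young generation)
        System.out.println(heap - yg);       // 400 (old generation)
        System.out.println(survivor(yg, 8)); // 20  (-XX:SurvivorRatio=8)
        System.out.println(eden(yg, 8));     // 160 (Eden)
    }
}
```

So with NewRatio=2 and SurvivorRatio=8, a 600 MB heap splits into a 400 MB old generation and a 200 MB young generation made up of 160 MB of Eden plus two 20 MB survivor spaces.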

To control when objects are copied between the survivor spaces, use -XX:TargetSurvivorRatio (default is 50). This is the target percentage of a survivor space that should be in use after a copy. For large heaps, this should be set higher, at 80 or 90, to avoid frequent copies.

To control the threshold at which objects tenure out of the survivor spaces, use -XX:MaxTenuringThreshold. It specifies the number of times objects are copied between survivor spaces before tenuring.

Apart from the above heap size configuration, new garbage collectors were introduced in JDK 1.4 -

1. Low pause collector – A parallel copy collector is used on the new generation area, along with a concurrent mark, sweep and compact collector for the old generation. To choose this strategy, use -XX:+UseParNewGC (for the parallel copy collector) and -XX:+UseConcMarkSweepGC (for the concurrent mark, sweep and compact collector).

By default, the parallel copy collector starts as many threads as there are CPUs on the machine, but if the degree of parallelism needs to be controlled, it can be specified with -XX:ParallelGCThreads=<n>

This collector will give relatively less pauses for the application.

2. Throughput collector – Here, only a parallel copy collector is used on the new generation area. To enable this, use -XX:+UseParallelGC

Configuring GC on JRockit

JRockit typically divides the heap into two areas –

Nursery – This area is meant for newly created objects; typically after two runs of collection, surviving objects are tenured into the old area.

Old area – This area is meant for the older, persistent objects.

The heap size can be configured using –Xms (initial size of heap) and –Xmx (max size of heap). Nursery size can be configured using –Xns value.

The JRockit collection strategy can be either dynamic or static. When configured as dynamic (the default), its priority can be specified – whether it should optimize for near-real-time performance or for application throughput – using the –XgcPrio flag. Its values are throughput or pausetime.

To take control of the GC strategies, the mode can be switched to static with the use of –Xgc flag. Values for this are singlepar, genpar, singlecon, gencon.

1. Single heap area with Parallel collector (singlepar) – Uses a single heap (not partitioned into nursery and old area) with parallel collectors (with stop-the-world semantics) which use the mark, sweep and compact algorithm. For applications that don't allocate a lot of short-lived objects, this improves memory utilization and throughput, although with possibly long GC waits.

2. Single heap area with Concurrent collector (singlecon) – Uses a single heap with concurrent collectors that work alongside the application threads. GC pauses are shorter; however, application throughput is impacted. Not good for applications that generate a lot of short-lived objects.

3. Generational heap area with Parallel collector (genpar) – Uses a parallel collector on a partitioned heap (nursery and old area) with stop-the-world semantics. This is optimal for throughput applications that allocate large numbers of short-lived objects, though it may have longer pause times.

4. Generational heap area with Concurrent collector (gencon) – Uses concurrent collector on partitioned heap. Optimal for real time semantics for applications that also have high number of short lived objects.


Monday, August 23, 2010

Internationalization and Localization

Internationalization (i18n) is the process of architecting software to make it locale sensitive. During this process, the software is broken into localizable modules, which are then used from the code. Translating a localizable module for a specific locale is called localization (l10n).

More specifically, during internationalization the software is decomposed into code and resources, and then during localization a version of the resources is created for each locale. The user chooses the locale in which to use the software, and the software automatically uses the resources specific to that locale.

A locale represents a region or country and is typically represented by (1) language, (2) country and (3) further variations. For example, "en" represents English and "en_US" represents US English. Language and country codes are standardized by ISO (ISO 639 and ISO 3166, respectively). Variations, on the other hand, are quite proprietary to the computing environment.

Depending on the type of the software, the user can set the locale on the OS, in the browser, or in the software itself. For thick applications like Microsoft Word, the locale information is read from the operating system. For web-based applications, the locale is typically read from the user's browser settings via the Accept-Language HTTP header. Other applications (thick or web based) may expose the locale as a preference setting within the application.

In the Java world, a locale is abstracted by the class java.util.Locale and a resource (actually a collection of resources) by java.util.ResourceBundle. A resource itself is accessed as a name-value pair from the ResourceBundle.

Resource bundles are organized in families in a tree structure with the resource bundle with the base name forming the root.

For example, consider a resource bundle family MyResource. Suppose it has bundles for English (with US and UK variants) and French (with France and Canada variants). The names of the resource bundles would then be MyResource, MyResource_en, MyResource_en_US, MyResource_en_UK, MyResource_fr, MyResource_fr_FR and MyResource_fr_CA.

This forms a hierarchy of bundles as below -
MyResource
-- MyResource_en
-- -- MyResource_en_US
-- -- MyResource_en_UK
-- MyResource_fr
-- -- MyResource_fr_FR
-- -- MyResource_fr_CA

A resource bundle family also has a default locale setting. For example "en" could be the default locale.

When a resource is looked up in a bundle family, if the locale is specified, then the bundle matching the exact locale in the hierarchy is looked up. If an exact bundle corresponding to the locale is not found, then the nearest bundle for the locale is used. If the locale is not specified, then, the bundle corresponding to the default locale is looked up.

For example, if we look up a resource by specifying as below

String baseName = "MyResource";
Locale locale = new Locale("en", "US");
String resourceName = "Name";
ResourceBundle bundle = ResourceBundle.getBundle(baseName, locale);
String resource = bundle.getString(resourceName);

If the input locale is null, the default locale which is en is chosen. In this case, the resource is looked up in MyResource_en and then MyResource.

If the input locale is fr, then the resource is looked up in MyResource_fr and if not found, in MyResource.

If the input locale is en_US (as in the snippet above), then the resource is looked up in MyResource_en_US, if not found, then in MyResource_en and if not found again, in MyResource.

When looking for bundles with the above names, it searches for classes with those names on the classpath and uses them if present. Otherwise, it looks for property files by appending ".properties" to the names and loads them up as the bundles.
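The lookup order described above can be verified with a self-contained snippet that writes a small bundle family (names as in the example above, contents invented) to a temporary directory and loads it through a URLClassLoader:

```java
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Locale;
import java.util.ResourceBundle;

public class BundleDemo {
    static ClassLoader loader;

    // Write three property files forming a small bundle family.
    static void setUp() throws Exception {
        Path dir = Files.createTempDirectory("bundles");
        Files.write(dir.resolve("MyResource.properties"), "Name=Base\n".getBytes());
        Files.write(dir.resolve("MyResource_en.properties"), "Name=English\n".getBytes());
        Files.write(dir.resolve("MyResource_en_US.properties"), "Name=US English\n".getBytes());
        loader = new URLClassLoader(new URL[] { dir.toUri().toURL() });
        Locale.setDefault(Locale.ROOT); // make the fallback chain deterministic
    }

    static String lookup(Locale locale) {
        return ResourceBundle.getBundle("MyResource", locale, loader).getString("Name");
    }

    public static void main(String[] args) throws Exception {
        setUp();
        System.out.println(lookup(new Locale("en", "US"))); // exact match: US English
        System.out.println(lookup(new Locale("en", "UK"))); // falls back to en: English
        System.out.println(lookup(new Locale("fr")));       // no fr bundle: Base
    }
}
```

Note that without the Locale.setDefault call, a miss on "fr" would first fall back through the JVM's default locale before reaching the base bundle.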

Coming to JEE world, JSF specifications have the faces-config/application/message-bundle and faces-config/application/resource-bundle tags which can be used in the faces-config.xml configuration for specifying the known resource and message bundles.

faces-config/application/locale-config/default-locale can be used to specify the default locale and faces-config/application/locale-config/supported-locale to specify the supported locales.

Typically, following is done to configure -
  1. Resource bundle family property files are created
  2. Using faces-config/application/resource-bundle, the bundle family is declared specifying the base name and a variable name which can then be used later to refer the bundle using Expression Language statements in JSF/JSPX files
  3. Specify the faces-config/application/locale-config/default-locale and set of faces-config/application/locale-config/supported-locale for all the supported locales in the family
  4. In JSF/JSPX, the bundle is directly used using expression language and the resource referenced using the subscript operator
For example,

Create the following files

Enter the resource Colour=Colour in org.myapp.MyResources and then translate it for US English in org.myapp.MyResources_en_US as Colour=Color

Edit faces-config.xml and add the following tags -

-- [faces-config]
---- [application]
------ [resource-bundle]
-------- [base-name] org.myapp.MyResources [/base-name]
-------- [var] bundle [/var]
------ [/resource-bundle]
------ [locale-config]
-------- [default-locale]en[/default-locale]
-------- [supported-locale]en_US[/supported-locale]
-------- [supported-locale]it[/supported-locale]
------ [/locale-config]
---- [/application]
-- [/faces-config]

Then in the JSF/JSPX, it can be used as below

[af:inputText label="#{bundle.Colour}" id="it1"/]

Please note that from JSF 1.2 onwards, the loadBundle tag is no longer necessary.

JSF also has internal messages used by its components – for example, the messages shown by the built-in validators. To override these messages, the message-bundle tag can be used.

At runtime, depending on the user's browser settings, the HTTP request will carry the Accept-Language header, which the JSF runtime uses to set the locale appropriately, thereby loading the correct resource bundles.

Thursday, August 19, 2010

Security Auditing in WebLogic

Weblogic provides an extensible mechanism to audit the activities of the Weblogic security framework – the framework that handles all security-related features in a Weblogic server, including authentication and authorization. Auditing is turned off by default.

The security framework comes out of the box with a DefaultAuditProvider which can be installed and configured into the security framework. Once installed and configured, it starts logging audit information to a file named DefaultAuditRecorder.log in the server log directory. If the default audit provider is not sufficient, users can write custom audit providers, install them, and perform custom activities such as storing the events in a database. The security framework is designed in a pluggable fashion, and there can be multiple audit providers if desired.

All the components of the security framework, such as the authentication and authorization providers, emit audit events. If an audit provider is installed, it receives these audit events and can do whatever it wants with them. Obviously, the DefaultAuditProvider simply logs them.

A generic audit event carries the following information – (1) the event type; (2) a severity level (information, warning, error, success or failure – the severity levels also have ranks associated with them, with information the lowest and failure the highest); and (3) optional "context info" about the event (things like the EJB method name and parameters in an authorization audit event).

The DefaultAuditProvider has configuration to choose which severities and which context information are filtered or propagated. This configuration is of course specific to the DefaultAuditProvider; a custom audit provider can have any other configuration as desired. Once an audit provider is installed, its settings can be changed at runtime without having to bounce the WLS instance. All the settings and configurations are done through the WLS console.
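The rank-based severity filtering can be illustrated with a small model. This is just a sketch of the semantics described above, not the actual WebLogic auditing SPI – the class and method names are invented:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative model of severity-based audit filtering.
public class AuditFilterDemo {
    // Declaration order mirrors the ranking described above:
    // INFORMATION lowest, FAILURE highest.
    enum Severity { INFORMATION, WARNING, ERROR, SUCCESS, FAILURE }

    static List<String> log = new ArrayList<>();

    // An auditor configured with a minimum severity ignores lower-ranked events.
    static void writeEvent(Severity min, Severity sev, String event) {
        if (sev.ordinal() >= min.ordinal()) log.add(sev + ": " + event);
    }

    public static void main(String[] args) {
        Severity min = Severity.ERROR;                       // configured threshold
        writeEvent(min, Severity.INFORMATION, "user login"); // filtered out
        writeEvent(min, Severity.FAILURE, "account locked"); // recorded
        System.out.println(log);
    }
}
```

With the threshold set to ERROR, only the ERROR, SUCCESS and FAILURE events reach the log, which is the effect of raising the severity setting on the provider.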

The security framework components also add extra data to the event by inheriting from the generic audit event. For example, events generated by the authentication provider have sub type which specifies the action being performed – such as authentication, identity assertion, user being locked etc. An authorization provider adds information regarding the resource being accessed and the subject accessing the resource. The interfaces for the sub events are well defined and a custom auditor can be more intelligent when dealing with the events. The DefaultAuditProvider does not bother much and just logs it using the “toString” semantics.