Wednesday, March 27, 2013

Career, Work, and Health (4)

Treating bad nutritional choices with drugs, choices that lead to morbidity in later life after years and years of self-abuse, will never be an efficacious solution.

We have to be responsible for our own health and rely on vigilant avoidance of the underlying causes of disease. We need to adopt scientifically supported superior nutrition and rid ourselves of the idea that doctors and pharmaceutical companies are our saviors, capable of enabling us to live long and productive lives.

Western Diets and Western Diseases


Today, the American diet takes over 60% of its calories from processed foods.  Processed foods are generally mixed with additives, coloring agents, and preservatives to extend shelf life, and they're placed in plastic bags and cardboard boxes.  Americans consume less than 10% of their calories from unrefined plant foods such as fruits, beans, seeds, and vegetables.  Even worse, half of their vegetable consumption comes from white potato products, including fries and chips.  Many phytochemicals in freshly harvested plant foods are lost or destroyed by modern processing techniques, including cooking.  Since neither processed foods nor animal products contain a significant load of antioxidant nutrients or any phytochemicals, the Western diet is dramatically disease-promoting[2].

As processed foods and fast foods expanded into the underdeveloped world, rural areas began to develop higher rates of cancer and obesity.  A 2006 CDC study also shows that recent immigrants to the United States are far healthier than their US-born counterparts. The reason? The diets and lifestyles in the US are far less healthy than those in many other countries.  The result today is a nation with exploding numbers of people with immune system disorders, allergies, autoimmune diseases, cardiovascular diseases, diabetes, and cancers.



Analyzing the Performance Issue Caused by WebLogic Session Size Too Big

In this article, we will show you:
  1. How to investigate an issue with too many live objects kept in the old generation that cannot be reclaimed during a full GC by the HotSpot VM
    • For background information on garbage collectors, read [1-7]
  2. How important the session-timeout element in web.xml is
  3. How to use the jmap tool to generate a heap dump
  4. How to use Eclipse Memory Analyzer to analyze the heap dump (i.e., the fod.hprof file)

The Issues


We have seen slow performance in one of our benchmarks (i.e., FOD).  Here are the symptoms:
  • GC took about 50% of the application's CPU time
  • Full GCs were triggered not by a full permanent generation but by a full old generation
    • The old generation was completely full, and full GCs reclaimed little beyond a few soft references.
So the symptoms all point to too many live objects being kept in the old generation, causing frequent full GCs.  There could be different causes for these symptoms.  To investigate, you need to create heap dumps and examine which objects the HotSpot VM is holding onto.  Below we will show you how.

Generating Heap Dump


First, we used the JDK's jmap tool to generate a summary of the heap contents:

$./jmap -histo 16345 >/scratch/aime1/tmp/fod.summary

 num     #instances         #bytes  class name
----------------------------------------------
   1:      15777231      822030680  [Ljava.lang.Object;
   2:       3074270      463615192  [C
   3:       4076972      331840256  [Ljava.util.HashMap$Entry;
   4:       9287101      297187232  java.util.HashMap$Entry
   5:       2105437      151591464  org.apache.myfaces.trinidad.bean.util.PropertyHashMap
   6:       4075444       97810656  java.util.HashMap$FrontCache
   7:       1893014       90864672  java.util.HashMap
   8:       2018043       80721720  org.apache.myfaces.trinidad.bean.util.FlaggedPropertyMap
   9:       3017161       72411864  java.lang.String


Unfortunately, the histogram didn't tell us much about the biggest consumer, the object arrays that took ~800 MB.  But we did notice that two property-map classes also took up a lot of memory.  So the next step was to generate a full dump as follows:

$./jmap -dump:live,file=/scratch/aime1/tmp/fod.hprof 16345

To analyze the full heap dump, you need Eclipse Memory Analyzer (MAT); we chose the standalone version.

Analyzing Heap Dump With Memory Analyzer (MAT)


I started the Memory Analyzer with the following command:

$ cd /scratch/sguan/mat/
$ ./MemoryAnalyzer &

When I opened the heap dump file (i.e., fod.hprof), MAT failed with a message about Java heap space.  So I needed to modify the following lines in the MemoryAnalyzer.ini file:

-vmargs
-Xmx1024m

by changing -Xmx1024m to -Xmx6240m.  How large you should set the -Xmx option depends on:
  • The size of the heap dump
    • Our original file was 6.4 GB and was reduced to 3.6 GB after loading; MAT removes unreachable objects during the loading process.
  • The size of the physical RAM in your system
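After the edit, the relevant fragment of MemoryAnalyzer.ini reads as follows (6240m reflects our 6.4 GB dump; adjust it to your dump size and available RAM):

```
-vmargs
-Xmx6240m
```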

The Culprit: WebLogic Session Objects


With MAT, we found that the biggest live object kept in the heap was:
  • weblogic.servlet.internal.session.MemorySessionContext @ 0x705e0ff08 
The two previously mentioned property-map objects are also related to the session object.  If you trace them back to the GC root, you will see that they are referenced by the session state; they are the reason the session state is so big.

session-timeout Element in web.xml


The WebLogic container uses a session object to track each user of the StoreFrontModule web application in our FOD benchmark.  Every time a new user comes in, the container starts a session. The session object holds the user's data and state. When the user leaves, the session is considered idle, and the server retains it in memory for a period of time before reclaiming it.

Java web applications can be configured with a session-timeout value, which specifies the number of minutes a session can be idle before it is abandoned.  This session-timeout element is defined in the Web Application Deployment Descriptor, web.xml[9]. For example, this was the session-timeout value we had:

<session-config>
<session-timeout>35</session-timeout>
</session-config>

Our benchmark's slowness was due to this session-timeout value being too high, which caused sessions to pile up and take up lots of memory. After we set it to a lower value (see [10] on how to patch web.xml in an EAR), the benchmark performed normally.
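For illustration, a lower setting might look like this (5 minutes is a hypothetical value; choose one that matches your users' real think time):

```xml
<session-config>
<session-timeout>5</session-timeout>
</session-config>
```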

Saturday, March 23, 2013

WARNING: VERSION_DISPLAY_ENABLED_IN_PRODUCTION_STAGE

We have seen the following warnings in the WebLogic server log file:
  • WARNING: VERSION_DISPLAY_ENABLED_IN_PRODUCTION_STAGE The setting to enable version display will be ignored because the application is running in production mode.

In this article, we will show you:
  • How to eliminate those warnings from the server log file
  • How to patch web.xml in the StoreFrontModuleNew.ear

How to Eliminate "WARNING: VERSION_DISPLAY_ENABLED_IN_PRODUCTION_STAGE"?


To eliminate those warnings, you need to modify the following context parameter in the web.xml file:

<context-param>
<description>Whether the 'Generated by...' comment at the bottom of ADF Faces HTML pages should contain version number information.</description>
<param-name>oracle.adf.view.rich.versionString.HIDDEN</param-name>
<param-value>false</param-value>
</context-param>

by changing its value from false to true. The parameter
  • oracle.adf.view.rich.versionString.HIDDEN 
determines whether the 'Generated by...' comment at the bottom of ADF Faces HTML pages contains version number information.

How to Patch web.xml in Java EAR File?


StoreFrontModuleNew.ear is the application deployed on the WebLogic server for our ADF FOD benchmark.  Here is the location of web.xml inside StoreFrontModuleNew.ear:

StoreFrontModuleNew.ear
  |
  +--StoreFrontWebAppNew.war
       | 
       +--WEB-INF
            | 
            +--web.xml
                          
To extract the EAR, first create a working directory, tmp.  Then copy the EAR file into tmp and un-jar it there with:
  • jar xvf StoreFrontModuleNew.ear
Similarly, to extract StoreFrontWebAppNew.war, create another working directory, tmp2, at its level, copy the WAR into tmp2, and un-jar it there.  Here is the whole sequence of steps:

$cd /scratch/aime1/kjeyaram/
$mkdir tmp
$cd tmp
$cp ../StoreFrontModuleNew.ear .
$jar -xvf StoreFrontModuleNew.ear
$ls
$mkdir tmp2
$cd tmp2
$cp ../StoreFrontWebAppNew.war .
$jar xvf StoreFrontWebAppNew.war
$ls -lrt
$cd WEB-INF/
$vi web.xml
$cd ..
$zip -f ../StoreFrontWebAppNew.war WEB-INF/web.xml
$cd ..
$zip -f ../StoreFrontModuleNew.ear StoreFrontWebAppNew.war
$cd ..
$rm -rf tmp

We used the zip command[1] to replace the existing web.xml in the archive with the following option:

-f
Replace (freshen) an existing entry in the zip archive only if it has been modified more recently than the version already in the zip archive; unlike the update option (-u), this will not add files that are not already in the zip archive.

Note that you can also use the jar command as follows:

  • jar uf ../StoreFrontWebAppNew.war WEB-INF/web.xml



Tuesday, March 19, 2013

Career, Work, and Health (3)

We all enjoy our daily work, and some of us may even be addicted to it.  But remember to get enough sleep to live a longer and healthier life.
Below are some highlights from the article.

Sleep—The Most Important Predictor of How Long You'll Live

There is strong evidence supporting the argument that the amount of time you sleep—even more than whether you smoke, exercise, or have high blood pressure or cholesterol levels—could be the most important predictor of how long you'll live.

On average, adults need 7-9 hours of sleep every night to stay healthy. Those who slept 5 hours or less a night had a 15% greater mortality risk compared with those sleeping 7 hours. While not getting enough sleep is clearly associated with increased health risks, so is getting too much sleep. Those who slept 9 hours had a 42% increase in mortality risk.

Too Little Sleep May Fuel Insulin Resistance
  • Sleep deficiency results in a higher than normal blood sugar level, which may increase your risk for diabetes.
  • After four nights of sleep deprivation (sleep time was only 4.5 hours per night), study participants' insulin sensitivity was 16 percent lower, while their fat cells' insulin sensitivity was 30 percent lower, and rivaled levels seen in those with diabetes or obesity.
  • Researchers at the University of Chicago found that losing just 3 to 4 hours of sleep over a period of several days is enough to trigger metabolic changes that are consistent with a prediabetic state.


Verifying SSH Key Fingerprint and More

We have two different users on a remote server and wish to set up public key authentication over SSH for both of them.

At the beginning, we were able to connect as user bench but not as user aime1, which spurred our investigation into the issue. (Yes, SSH does support public-key authentication for multiple users.) It turns out the culprit was that we had forgotten to change the access permission of the authorized_keys file for user aime1:

$chmod 600 authorized_keys

In this article, we will show you:
  • How to investigate SSH connection issues
  • What is going on underneath an SSH session

The Solution


As described in [1], you need to copy the contents of id_rsa.pub from the local server into the authorized_keys file on the remote server (note that each user has their own .ssh folder).  You also need to change the permission of authorized_keys to 600.  If you forget to do the latter, SSH won't automatically authenticate the local server to the remote server when the local server offers its RSA public key. Note that we chose RSA, instead of DSA, for the authentication keys over SSH.
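The two steps can be sketched as follows, using a scratch directory in place of the remote user's home directory (the paths and the key line are placeholders, not our real setup):

```shell
# Sketch of the setup on the remote account; /tmp/ssh_demo stands in
# for the remote user's home directory (hypothetical paths and key).
home=/tmp/ssh_demo
mkdir -p "$home/.ssh"
# Step 1: append the local server's public key to authorized_keys
echo "ssh-rsa AAAAB3...example aime1@localServer" >> "$home/.ssh/authorized_keys"
# Step 2: tighten permissions; sshd refuses keys in group/world-writable files
chmod 700 "$home/.ssh"
chmod 600 "$home/.ssh/authorized_keys"
```

With the real home directory in place of /tmp/ssh_demo, sshd will then accept the offered key without prompting for a password.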

What's Going on Underneath?


To debug any connection issue, you can add the -v option as shown below:

$ ssh -v aime1@remoteServer
OpenSSH_6.0p1, OpenSSL 1.0.1c 10 May 2012
debug1: Reading configuration data HKEY_CURRENT_USER/SOFTWARE/Mortice Kern Syste
ms/etc/ssh_config
debug1: Reading configuration data HKEY_LOCAL_MACHINE/SOFTWARE/Mortice Kern Syst
ems/etc/ssh_config
debug1: Connecting to remoteServer [10.133.188.166] port 22.
debug1: Connection established.
debug1: identity file /C=/Documents and Settings/aroot/.ssh/id_rsa type 1
debug1: identity file /C=/Documents and Settings/aroot/.ssh/id_rsa-cert type -1
debug1: identity file /C=/Documents and Settings/aroot/.ssh/id_dsa type -1
debug1: identity file /C=/Documents and Settings/aroot/.ssh/id_dsa-cert type -1
debug1: identity file /C=/Documents and Settings/aroot/.ssh/id_ecdsa type -1
debug1: identity file /C=/Documents and Settings/aroot/.ssh/id_ecdsa-cert type -1
debug1: Remote protocol version 2.0, remote software version OpenSSH_4.3
debug1: match: OpenSSH_4.3 pat OpenSSH_4*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_6.0
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-ctr hmac-md5 none
debug1: kex: client->server aes128-ctr hmac-md5 none
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
debug1: Server host key: RSA 97:a2:85:f3:f1:28:ab:c2:70:df:58:2f:c3:61:65:9a

debug1: Host 'remoteServer' is known and matches the RSA host key.
debug1: Found key in /C=/Documents and Settings/aroot/.ssh/known_hosts:1
debug1: ssh_rsa_verify: signature correct
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: Roaming not allowed by server
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey,gssapi-with-mic,password
debug1: Next authentication method: publickey
debug1: Offering RSA public key: /C=/Documents and Settings/aroot/.ssh/id_rsa
debug1: Authentications that can continue: publickey,gssapi-with-mic,password
debug1: Trying private key: /C=/Documents and Settings/aroot/.ssh/id_dsa
debug1: Trying private key: /C=/Documents and Settings/aroot/.ssh/id_ecdsa
debug1: Next authentication method: password
aime1@remoteServer's password:

As shown above, the local server failed to automatically authenticate itself to the remote server (i.e., SSH still prompted the user for a password).  In the middle of the output, the following three lines are interesting:

debug1: Server host key: RSA 97:a2:85:f3:f1:28:ab:c2:70:df:58:2f:c3:61:65:9a
debug1: Host 'remoteServer' is known and matches the RSA host key.
debug1: Found key in /C=/Documents and Settings/aroot/.ssh/known_hosts:1

What happened is that the fingerprint of the remote server's RSA public key was returned, and it matched an entry stored in the known_hosts file on the local server.  In public-key cryptography, a public key fingerprint is a short sequence of bytes used to authenticate or look up a longer public key[5]. Fingerprints are created by applying a cryptographic hash function to a public key. Since fingerprints are shorter than the keys they refer to, they can be used to simplify certain key management tasks.
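As an aside, you can reproduce such an MD5 fingerprint by hand: hash the base64-decoded key blob (the second field of the .pub file). A sketch on a throwaway key (on newer OpenSSH, -E md5 is needed because SHA256 is the default fingerprint hash):

```shell
# Generate a throwaway RSA key pair (no passphrase) to demonstrate on
rm -f /tmp/fp_demo_key /tmp/fp_demo_key.pub
ssh-keygen -t rsa -b 2048 -N '' -f /tmp/fp_demo_key -q
# A fingerprint is just a cryptographic hash of the raw public-key blob
awk '{print $2}' /tmp/fp_demo_key.pub | base64 -d | md5sum
# Compare with what ssh-keygen reports (same bytes, colon-separated)
ssh-keygen -l -E md5 -f /tmp/fp_demo_key.pub
```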

The RSA public key from the remote server (i.e., a Linux box) is stored here:
  • /etc/ssh/ssh_host_rsa_key.pub

To verify that, you can run:

$ssh-keygen -l -f /etc/ssh/ssh_host_rsa_key.pub

2048 97:a2:85:f3:f1:28:ab:c2:70:df:58:2f:c3:61:65:9a /etc/ssh/ssh_host_rsa_key.pub

or you can try:

[aime1@remoteServer ~]$ ssh localhost
The authenticity of host 'localhost (127.0.0.1)' can't be established.
RSA key fingerprint is 97:a2:85:f3:f1:28:ab:c2:70:df:58:2f:c3:61:65:9a.


This remote server's fingerprint was matched against entries stored in the local server's known_hosts file.  To verify it, you can run:

$ ssh-keygen -l -f known_hosts
2048 97:a2:85:f3:f1:28:ab:c2:70:df:58:2f:c3:61:65:9a remoteServer,10.133.188.166 (RSA)


The verbose output also told us that it matched the first entry in the known_hosts list.

Later, the output also told us that something was not quite working:

debug1: Authentications that can continue: publickey,gssapi-with-mic,password
debug1: Trying private key: /C=/Documents and Settings/aroot/.ssh/id_dsa
debug1: Trying private key: /C=/Documents and Settings/aroot/.ssh/id_ecdsa
debug1: Next authentication method: password
aime1@remoteServer's password:

SSH on the local server failed to authenticate itself to the remote server with the RSA key it offered.  So it tried the following other authentication methods in sequence:
  • id_dsa
  • id_ecdsa
  • password

With that, we knew what went wrong and fixed the issue.  When we tried again, the local server was authenticated automatically.  From the verbose output, we can see the difference (i.e., it succeeded in authenticating with the RSA key):

debug1: Server accepts key: pkalg ssh-rsa blen 277
debug1: read PEM private key done: type RSA
debug1: Authentication succeeded (publickey).
Authenticated to remoteServer ([10.133.188.166]:22).
debug1: channel 0: new [client-session]
debug1: Entering interactive session.
Last login: Mon Mar 18 18:30:26 2013 from localServer


Recap


We needed to change the file access permission of authorized_keys as shown below:

Before
-rw-rw-r-- 1 aime1 svrtech  396 Mar 19 08:48 authorized_keys

After
-rw------- 1 aime1 svrtech  396 Mar 19 08:48 authorized_keys


After the change, we can establish an ssh session without being asked for a password:

$ ssh aime1@remoteServer
Last login: Mon Mar 18 20:17:51 2013 from localServer
[aime1@remoteServer ~]$

Where Are Server's Authentication Keys Stored?


If you're on a Linux box, they are probably stored in the following files:
  • RSA
    • /etc/ssh/ssh_host_rsa_key.pub 
  • DSA 
    • /etc/ssh/ssh_host_dsa_key.pub 

If you're on Mac OS X, they are stored in one of the following files:
  • /etc/ssh_host_rsa_key.pub
  • /etc/ssh_host_key.pub
  • /etc/ssh_host_dsa_key.pub

Monday, March 18, 2013

How to List Current Message Levels of All Loggers Using WLST?

Our Fusion application deployed on WebLogic Server has generated too many messages, so we would like to disable most of them. However, before we do that, we want to check the current message levels of the loggers.

In this article, we will show you how to:
  • Redirect WLST print messages to an output file
  • List the current message levels of all loggers associated with a managed server (i.e., "MS_1")
    • With the listLoggers command

Redirect WLST print Statement


You might try to redirect WLST output to a log file using the following WLST command:

wls:/fod_domain/serverConfig> redirect('./logs/wlst.log', 'false')

It won't work.  Instead, you should follow the instructions below[1] to redirect WLST's print statements to a file:

from java.io import File
from java.io import FileOutputStream
f = File("/scratch/aime1/tmp/wlst.log")
fos = FileOutputStream(f)
theInterpreter.setOut(fos)
print "start the script"

How to List Current Message Levels of All Loggers?


To get the current message levels, you can use the listLoggers command. Note that you must be connected to WebLogic Server before you use the configuration commands.

Below is the full list of commands that achieved the task:

$cd $MW_HOME/wlserver/common/bin
$./wlst.sh


Initializing WebLogic Scripting Tool (WLST) ...
Welcome to WebLogic Server Administration Scripting Shell
Type help() for help on available commands

wls:/offline> connect ('weblogic','weblogic1','t3://localhost:7001')
Connecting to t3://localhost:7001 with userid weblogic ...
Successfully connected to Admin Server "AdminServer" that belongs to domain "fod_domain".

Warning: An insecure protocol was used to connect to the
server. To ensure on-the-wire security, the SSL port or
Admin port should be used instead.

wls:/fod_domain/serverConfig> from java.io import File
wls:/fod_domain/serverConfig> from java.io import FileOutputStream
wls:/fod_domain/serverConfig> f = File("/scratch/aime1/tmp/wlst.log")
wls:/fod_domain/serverConfig> fos = FileOutputStream(f)
wls:/fod_domain/serverConfig> theInterpreter.setOut(fos)
print "start the script"
listLoggers(target='MS_1')

Note that we typed CTRL-D to exit from WLST at the end.

The Output

Here is the output from the listLoggers command (note that it lists the message levels of all loggers associated with a server named MS_1):

wls:/fod_domain/serverConfig> start the script
wls:/fod_domain/serverConfig> Location changed to domainRuntime tree. This is a read-only tree with DomainMBean as the root.
For more help, use help('domainRuntime')
-------------------------+-----------------
Logger                   | Level
-------------------------+-----------------
<root>                   | NOTIFICATION:1
Security                 | <Inherited>
com                      | <Inherited>
com.oracle.coherence     | TRACE:16
...


Saturday, March 16, 2013

Career, Work, and Health (2)

Needless to say, one of the most important health articles I have written is this one:

Below are some highlights from that article. 

Cancer and Fever 

In clinical practice, doctors observe that many cancer patients simply don't develop fevers, and cancer patients often report that they were never ill.  Two of my friends told me exactly that; one of them is a cancer victim and the other a cancer survivor.

Fever and Heat Therapy

Historically, heat has been recognized for its beneficial effects.  The following list shows how various cultures have used simple forms of heat as a way of both cleansing and healing.

  • Ancient Greek medicine
  • Roman hot sulfur baths
  • Finnish saunas
  • European and American spa treatments
  • Japanese hot tubs
  • Native American Indian sweat lodges
  • Therapeutic hot springs worldwide


Tuesday, March 12, 2013

HotSpot—java.lang.OutOfMemoryError: PermGen space

There are different causes that can lead to an out-of-memory error in the HotSpot VM.  For example, you can run out of memory in the PermGen space:
  • java.lang.OutOfMemoryError: PermGen space

In this article, we will discuss:
  • Java Objects vs Java Classes
  • PermGen Collection[2]
  • Class unloading
  • How to find the classes allocated in PermGen?
  • How to enable class unloading for CMS?
Note that this article is mainly based on Jon's excellent article[1].

Java Objects vs Java Classes


Java objects are instantiations of Java classes. HotSpot VM has an internal representation of those Java objects and those internal representations are stored in the heap (in the young generation or the old generation[2]). HotSpot VM also has an internal representation of the Java classes and those are stored in the permanent generation.

PermGen Collector


The internal representation of a Java object and an internal representation of a Java class are very similar.  From now on, we use Java objects and Java classes to refer to their internal representations.  The Java objects and Java classes are similar to the extent that during a garbage collection both are viewed just as objects and are collected in exactly the same way.

Besides its basic fields, a Java class also includes the following:
  • Methods of a class (including the bytecodes)
  • Names of the classes (in the form of an object that points to a string also in the permanent generation)
  • Constant pool information (data read from the class file, see chapter 4 of the JVM specification for all the details).
  • Object arrays and type arrays associated with a class (e.g., an object array containing references to methods).
  • Internal objects created by the JVM (java/lang/Object or java/lang/exception for instance)
  • Information used for optimization by the compilers (JITs)
There are a few other bits of information that end up in the permanent generation but nothing of consequence in terms of size. All these are allocated in the permanent generation and stay in the permanent generation.

Class Loading/Unloading


Back in the old days, most classes were static and custom class loaders were rarely used, so class unloading was often unnecessary.  However, things have changed, and sometimes you can run into the following message:
  • java.lang.OutOfMemoryError: PermGen space
In this case, there are at least two options:
  • Increasing the size of PermGen
  • Enabling class unloading

Increasing the Size of PermGen


Sometimes there is a legitimate need to increase PermGen size by setting the following options:
  • -XX:PermSize=384m -XX:MaxPermSize=384m 
However, before you do that, you may want to find out what Java classes were allocated in PermGen by running:
  • jmap -permstat
This is supported in JDK 5 and later on both Solaris and Linux.

Enabling Class Unloading


By default, most HotSpot garbage collectors perform class unloading, except the CMS collector[2] (enabled by -XX:+UseConcMarkSweepGC).

If you use CMS collector and run into PermGen's out-of-memory error, you could consider enabling class unloading by setting:
  • -XX:+CMSClassUnloadingEnabled
  • -XX:+CMSPermGenSweepingEnabled

Depending on the release you have, earlier versions (i.e., Java 6 Update 3 or earlier) require you to set both options.  In later releases, you only need to specify:
  • -XX:+CMSClassUnloadingEnabled

The following are cases where you want to enable class unloading with the CMS collector:
  • If your application uses multiple class loaders and/or reflection, you may need to enable collection of garbage in the permanent space.
  • Objects in the permanent space may have references into the normal old space; thus, even if the permanent space is not full itself, references from perm to old space may keep some dead objects unreachable for CMS if class unloading is not enabled.
  • Lots of redeployment may put pressure on the PermGen space
    • A class and its classloader both have to be unreachable in order for them to be unloaded. A class X loaded by classloader A and the same class X loaded by classloader B will result in two distinct objects (klasses) in the permanent generation. 

References

  1. Understanding GC pauses in JVM, HotSpot's CMS collector.
  2. Understanding Garbage Collection
  3. Presenting the Permanent Generation
  4. Diagnosing Java.lang.OutOfMemoryError
  5. A Case Study of java.lang.OutOfMemoryError: GC overhead limit exceeded
  6. Understanding Garbage Collector Output of Hotspot VM
  7. Java HotSpot VM Options

Monday, March 11, 2013

Starting Up the Bash Subshell with the -x Option

You can enable debugging mode within a bash script, say myscript.sh, by adding the following command:
  • set -x

For example,

$vi myscript.sh 
#!/bin/bash
set -x


Adding "set -x" to the script enables the printing of command traces before each command executes.  This is equivalent to the long-form notation:

  • set -o xtrace

However, there are cases where you cannot modify the bash script itself because of permissions, or you simply don't want to modify the file.  Then there is another option:

$ bash -x myscript.sh

This allows you to execute the script with command traces, which is useful for debugging.
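A minimal demonstration with a throwaway script (the file name is arbitrary; note that the trace lines, prefixed with +, go to stderr):

```shell
# Create a tiny throwaway script
cat > /tmp/trace_demo.sh <<'EOF'
#!/bin/bash
msg="hello"
echo "$msg"
EOF
# Run it with tracing: each command is printed before it executes
bash -x /tmp/trace_demo.sh
```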


Sunday, March 10, 2013

Career, Work, and Health


Read my article on "The Pros and Cons of Flu Vaccination."  Here are the cons of flu vaccination:
  • Flu vaccine appeared to actually increase people's risk of getting sick with H1N1, and cause more serious bouts of illness to boot.
  • Natural infection is more beneficial
  • Seasonal flu vaccine may weaken your natural immunity
  • The flu shot covers less than 10 percent of the circulating viruses causing these illnesses.
  • Some patients developed Guillain-Barre syndrome after getting a seasonal flu shot (see the video)
  • The age-related decline of the immune system affects the body’s response to vaccination
  • Could we vaccinate our kids too early and too much?
  • Each dose of flu vaccine contains traces of formaldehyde and 25 micrograms of thimerosal (a mercury-containing compound used as a preservative).
  • There are known adverse reactions to the flu vaccine
You can find more health articles here.

Friday, March 8, 2013

JRockit: Unable to open temporary file /mnt/hugepages/jrock8SadIG

After system maintenance, our JRockit failed to create the Java virtual machine because it could not acquire large pages, as shown below:

$ bin/java -Xms2560m -Xmx2560m -XlargePages -Xgc:genpar -XlargePages:exitOnFailure -version
[ERROR][osal   ] Unable to open temporary file /mnt/hugepages/jrock8SadIG
[ERROR][memory ] Could not acquire large pages for Java heap.
[ERROR][memory ] Could not setup java heap.
Could not create the Java virtual machine.

In this article, we will discuss how to investigate and resolve this issue.

What to Check?


When using JRockit, we have to make a hugetlbfs file system available at the directory /mnt/hugepages. Since the message says that it cannot open the temporary file /mnt/hugepages/jrock8SadIG, the first thing to check is whether that directory is mounted (note that it was mounted before the system maintenance).

$ mount -l
nodev on /mnt/hugepages type hugetlbfs (rw)

As shown above, the /mnt/hugepages directory was mounted.  The next thing to check is why JRockit could not open a temporary file in that directory.  Could it be a permissions issue?

As described in [1], to make the /mnt/hugepages directory accessible for the oracle user, you need to do:
  • chmod -R 777 /mnt/hugepages

That turned out to be the culprit.  After issuing the above command, JRockit was able to create the Java virtual machine.

References

  1. Tune the JVM that runs Coherence
  2. JRockit: Could not acquire large pages for 2Mbytes code
  3. How to Test Large Page Support on Your Linux System
  4. Understanding Application Memory Performance - Red Hat

Monday, March 4, 2013

JRockit: Could not acquire large pages for 2Mbytes code

When we tried to enable large pages for JRockit, we saw the following message in the WebLogic Server console output:

  • WARN codegc Could not acquire large pages for 2Mbytes code (at 0x2aaab0622000).

In this article, we will show you how to investigate and resolve this issue.

How to Test for Large Pages Support?


Similar to [1], here are the VM options for testing large-pages support for JRockit:

$  bin/java -Xms2560m -Xmx2560m -XlargePages -Xgc:genpar -XlargePages:exitOnFailure -version

When we ran the above command, we saw the following warning:

[WARN ][codegc ] Could not acquire large pages for 2Mbytes code (at 0x2aaab0622000).
[WARN ][codegc ] Falling back to normal page size.
java version "1.6.0_29"
Java(TM) SE Runtime Environment (build 1.6.0_29-b11)
Oracle JRockit(R) (build R28.2.0-79-146777-1.6.0_29-20111005-1807-linux-x86_64, compiled mode)

Enabling Large Pages Support in Linux Kernel


We followed the procedure described in [2,5] to enable huge-pages support in the Linux kernel.  One of the requirements is to mount a hugepages directory on a hugetlbfs-type filesystem[6] (note that this step is required for JRockit, but not for HotSpot).  For example, we had the hugepages directory mounted as follows:

$ mount -l
nodev on /mnt/hugepages type hugetlbfs (rw,noexec,nosuid,nodev,sync,uid=59951)

How to Investigate?


When we explicitly disabled large pages for Java code (but not the Java heap), the following VM options ran fine:

$  bin/java -Xms2560m -Xmx2560m -XlargePages -Xgc:genpar -XlargePages:exitOnFailure -XX:+UseLargePagesForHeap -XX:-UseLargePagesForCode -XX:+FlightRecorder -XX:FlightRecorderOptions=defaultrecording=false  -version
java version "1.6.0_29"
Java(TM) SE Runtime Environment (build 1.6.0_29-b11)
Oracle JRockit(R) (build R28.2.0-79-146777-1.6.0_29-20111005-1807-linux-x86_64, compiled mode)


So the issue is related to reserving huge pages for Java code.  After further investigation, we found the real problem: when we mounted the hugepages directory, we had chosen the following option:
  • noexec
    • Do not allow direct execution of any binaries on the mounted filesystem
After we removed that constraint and rebooted the system, the issue was resolved, as shown below:

$  bin/java -Xms2560m -Xmx2560m -XlargePages -Xgc:genpar -XlargePages:exitOnFailure -XX:+UseLargePagesForHeap -XX:+UseLargePagesForCode -XX:+FlightRecorder -XX:FlightRecorderOptions=defaultrecording=false -version
java version "1.6.0_29"
Java(TM) SE Runtime Environment (build 1.6.0_29-b11)
Oracle JRockit(R) (build R28.2.0-79-146777-1.6.0_29-20111005-1807-linux-x86_64, compiled mode)

Here are the settings of the newly mounted hugepages directory:

$ mount -l
nodev on /mnt/hugepages type hugetlbfs (rw)
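To make the exec-permitted mount survive reboots, a matching /etc/fstab entry might look like the sketch below.  The mount point comes from the listing above; verify the exact options for your distribution.  Note that the line deliberately omits noexec:

```
nodev  /mnt/hugepages  hugetlbfs  rw  0  0
```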

Acknowledgement


This issue was resolved based on feedback from Scott Oats.

References

  1. How to Test Large Page Support on Your Linux System?
  2. Java SE Tuning Tip: Large Pages on Windows and Linux 
  3. Oracle® JRockit Command-Line Reference Release R28
  4. Memlock limit too small (one of the requirements for large page support)
  5. How to acquire large pages for Java heap
  6. Linux / Unix Command: mount
  7. Oracle® JRockit Performance Tuning Guide Release R28


Saturday, March 2, 2013

Managing OATS Services Manually

Similar to LoadRunner[1], OATS (Oracle Application Testing Suite)[2] provides a solution that enables you to define and manage your application testing process, validate application functionality, and ensure that your applications will perform under load.

At installation time, there are three OATS services:
  • OracleATSAgent
  • OracleATSHelper
  • OracleATSServer
deployed in the /etc/rc[0-6].d hierarchy and configured to start automatically in runlevels 3, 4, and 5. In this article, we will show how to manage them using the following Linux commands:
  • chkconfig[3]
  • service[4]

Checking Current Status and Startup Information



To list the current status of services, you can do:

# /sbin/service --status-all | grep OATS
OATS Agent Manager is running
OATS Helper Service is running

To list the current startup information for services, you can do:

bash-3.2# /sbin/chkconfig --list | grep OracleATS
OracleATSAgent  0:off   1:off   2:off   3:on    4:on    5:on    6:off
OracleATSHelper 0:off   1:off   2:off   3:on    4:on    5:on    6:off
OracleATSServer 0:off   1:off   2:off   3:on    4:on    5:on    6:off
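If you only care about one runlevel, you can filter the chkconfig listing with awk.  The sketch below parses sample lines copied from the output above; on a live system you would pipe from /sbin/chkconfig --list instead:

```shell
# Sketch: report which OATS services are enabled at runlevel 3.
# Field 5 of each chkconfig line is the runlevel-3 state ("3:on" or "3:off").
chkconfig_sample='OracleATSAgent  0:off   1:off   2:off   3:on    4:on    5:on    6:off
OracleATSHelper 0:off   1:off   2:off   3:on    4:on    5:on    6:off
OracleATSServer 0:off   1:off   2:off   3:on    4:on    5:on    6:off'
echo "$chkconfig_sample" | awk '$5 == "3:on" { print $1 }'
```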

Managing OATS Services Manually


You can use chkconfig to disable starting the services at boot time. By default, OATS services are configured to start automatically in runlevels 3, 4, and 5.

To disable the OATS init scripts, run the following:
# chkconfig --level 345 OracleATSHelper off
# chkconfig --level 345 OracleATSServer off
# chkconfig --level 345 OracleATSAgent off
After you disable all OATS services, you can confirm the results with:

bash-3.2# /sbin/chkconfig --list | grep OracleATS
OracleATSAgent  0:off   1:off   2:off   3:off   4:off   5:off   6:off
OracleATSHelper 0:off   1:off   2:off   3:off   4:off   5:off   6:off
OracleATSServer 0:off   1:off   2:off   3:off   4:off   5:off   6:off

Then you can manage them manually with the "service <name> [start|stop]" command:

bash-3.2# /sbin/service OracleATSAgent status
OATS Agent Manager is running

bash-3.2# /sbin/service OracleATSAgent stop
Shutting down oats-am:                                     [  OK  ]

bash-3.2# /sbin/service OracleATSAgent status
OATS Agent Manager is not running

bash-3.2# /sbin/service OracleATSAgent start
Starting oats-am:                                          [  OK  ]

bash-3.2# /sbin/service OracleATSServer start
Starting OracleATSServer:Server is starting in running mode...

bash-3.2# /sbin/service OracleATSHelper start
Starting oats-hs:                                          [  OK  ]
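The per-service commands above can be driven in one loop.  A minimal sketch follows; since "service" needs root, it just prints the commands it would run — drop the echo to actually execute them:

```shell
# Sketch: apply one action (stop here) to all three OATS services.
# Remove "echo" to run the commands for real (root required).
action=stop
for svc in OracleATSAgent OracleATSHelper OracleATSServer; do
  echo /sbin/service "$svc" "$action"
done
```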

Final Words


Be warned that the OATS support team recommends using the original service setup, not this manual way of starting and stopping services.