
How to Install Liferay 6.x on a GlassFish v2 Clustered Environment


We recently started experimenting with Liferay 6.0.5 on a Linux-based GlassFish v2 clustered environment. The bundled version of Liferay ships with GlassFish v3.0.1, which does not support clustering, so we wanted to see whether Liferay would work on v2, which does. If you just want the short answer: yes, Liferay 6.0.5 works on a GlassFish v2 cluster with flying colors :). Read on for the steps we followed.

The installation was fairly simple on our two-node clustered setup. This post assumes you already have a GlassFish v2 cluster up and running and want to deploy the Liferay war to it. Here are the steps:

  1. Download the Liferay GlassFish bundle

     Download the latest Liferay 6.0.5 GlassFish bundle from Liferay.com. We downloaded the GlassFish v3 bundle just to speed up the deployment process; this way we did not have to hunt for the many dependency jars that Liferay needs. Extract the zip file into a folder, say liferay.

  2. Extract the Liferay war and dependencies from the bundle

     Copy these three jar files from

     liferay/liferay-portal-6.0.5/glassfish-3.0.1/domains/domain1/lib

     • portal-service.jar
     • portlet.jar
     • hsql.jar

     and place them in the [GlassfishV2 Home]/lib directory.
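     Assuming the bundle zip sits in the current directory and GLASSFISH_V2_HOME points at your GlassFish v2 installation (both are placeholders for your environment, and the zip file name may differ for your download), the extract-and-copy step looks roughly like this:

     ```shell
     # Unzip the Liferay 6.0.5 GlassFish bundle (file name is an assumption)
     unzip liferay-portal-glassfish-6.0.5.zip -d liferay

     # Copy the three dependency jars into the GlassFish v2 global lib directory
     cd liferay/liferay-portal-6.0.5/glassfish-3.0.1/domains/domain1/lib
     cp portal-service.jar portlet.jar hsql.jar "$GLASSFISH_V2_HOME/lib/"
     ```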

  3. Deploy the Liferay war to GlassFish v2

     Take liferay-portal.war from

     liferay/liferay-portal-6.0.5/glassfish-3.0.1/domains/domain1/autodeploy

     and deploy it to the GlassFish v2 cluster with the following asadmin command:

     asadmin deploy --target [cluster_name] [war_location]/liferay-portal.war
    
  4. Restart the cluster

     Restart the cluster servers and hit the cluster nodes. Liferay is deployed on the root context by default, so hitting the server's listening port should bring up the Liferay home page. You can also go to the GlassFish Admin Console > Web Applications and click the Launch button to open the portal application.
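     The restart can also be scripted with asadmin; cluster1 and the admin host/port/user below are assumptions for your environment:

     ```shell
     # Stop and start the whole cluster from the DAS (adjust host, port and user)
     asadmin stop-cluster --host localhost --port 4848 --user admin cluster1
     asadmin start-cluster --host localhost --port 4848 --user admin cluster1

     # Verify that liferay-portal is listed among the deployed components
     asadmin list-components --host localhost --port 4848 --user admin --target cluster1
     ```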

  5. Problems you may face

    • Out of Memory Error

      You may encounter out-of-memory errors when starting GlassFish with the Liferay portal. To avoid them, set larger initial and maximum heap sizes with the -Xms and -Xmx options in the Admin Console:

      • Go to Configurations > cluster1-config > JVM Settings > JVM Options (tab)
      • Change the value of the -Xmx option to the maximum available on your system, e.g. -Xmx1024m
      • Save the settings
      • Restart the cluster.
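      The same heap change can be made from the command line with asadmin; cluster1-config and the option values here are examples, and the exact quoting of the leading dash may vary by asadmin version:

      ```shell
      # Replace the default max-heap option on the cluster configuration
      asadmin delete-jvm-options --target cluster1-config "-Xmx512m"
      asadmin create-jvm-options --target cluster1-config "-Xmx1024m"
      ```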
       
    • HSQL DB setup needs write permission on the file system

      Liferay's default setup uses the HSQL database for quick installation, which creates a local database instance on the application server's file system. If the user running the application server does not have write permission on those directories, you may see errors similar to the stack trace below in the log:

      12:29:29,588 INFO  [[/liferay-portal]:646] Initializing Spring root WebApplicationContext
      Loading jar:file:/usr/local/tomcat/webapps/liferay-portal/WEB-INF/lib/portal-impl.jar!/portal.properties
      12:29:54,699 WARN  [ThreadPoolAsynchronousRunner:608] com.mchange.v2.async.ThreadPoolAsynchronousRunner$DeadlockDetector@1a0283e -- APPARENT DEADLOCK!!! Creating emergency threads for unassigned pending tasks!
      12:29:54,708 WARN  [ThreadPoolAsynchronousRunner:624] com.mchange.v2.async.ThreadPoolAsynchronousRunner$DeadlockDetector@1a0283e -- APPARENT DEADLOCK!!! Complete Status:
              Managed Threads: 3
              Active Threads: 3
              Active Tasks:
                      com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask@eeabe8 (com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#2)
                      com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask@15837e8 (com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#0)
                      com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask@be6108 (com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#1)
              Pending Tasks:
                      com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask@d47880
                      com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask@1335b86
                      com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask@bdec44
                      com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask@e29f36
                      com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask@429be9
                      com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask@10a0d51
                      com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask@f04dae
      Pool thread stack traces:
              Thread[com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#2,5,main]
                      java.lang.Thread.sleep(Native Method)
                      com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask.run(BasicResourcePool.java:1805)
                      com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:547)
              Thread[com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#1,5,main]
                      java.lang.Thread.sleep(Native Method)
                      com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask.run(BasicResourcePool.java:1805)
                      com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:547)
              Thread[com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#0,5,main]
                      java.lang.Thread.sleep(Native Method)
                      com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask.run(BasicResourcePool.java:1805)
                      com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:547)
      
      
      12:30:04,272 WARN  [BasicResourcePool:1841] com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask@be6108 -- Acquisition Attempt Failed!!! Clearing pending acquires. While trying to acquire a needed new resource, we failed to succeed more than the maximum number of allowed acquisition attempts (30). Last acquisition attempt exception:
      java.sql.SQLException: The database is already in use by another process: lockFile: org.hsqldb.persist.LockFile@d7a94b3d[file =/usr/local/data/hsql/lportal.lck, exists=false, locked=false, valid=false, ] method: openRAF reason: java.io.FileNotFoundException: /usr/local/data/hsql/lportal.lck (No such file or directory)
              at org.hsqldb.jdbc.Util.sqlException(Unknown Source)
              at org.hsqldb.jdbc.jdbcConnection.<init>(Unknown Source)
              at org.hsqldb.jdbcDriver.getConnection(Unknown Source)
              at org.hsqldb.jdbcDriver.connect(Unknown Source)
              at com.mchange.v2.c3p0.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:134)
              at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:182)
              at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool$1PooledConnectionResourcePoolManager.acquireResource(C3P0PooledConnectionPool.java:148)
              at com.mchange.v2.resourcepool.BasicResourcePool.doAcquire(BasicResourcePool.java:1014)
              at com.mchange.v2.resourcepool.BasicResourcePool.access$800(BasicResourcePool.java:32)
              at com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask.run(BasicResourcePool.java:1810)
              at com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:547)
      12:30:04,290 WARN  [BasicResourcePool:1841] com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask@eeabe8 -- Acquisition Attempt Failed!!! Clearing pending acquires. While trying to acquire a needed new resource, we failed to succeed more than the maximum number of allowed acquisition attempts (30). Last acquisition attempt exception:
      
      

      To avoid this problem, make sure the user running the application server has write access to the directory where the Liferay portal creates the HSQL database. This is not needed when you use an external database. To use another database, say Oracle, you need the respective driver jars on the server CLASSPATH and the data-source configuration set in portal-ext.properties.
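      For reference, pointing Liferay at an external database is done in a portal-ext.properties file in the Liferay home directory. The connection values below are illustrative, not from our setup; the jdbc.default.* keys are standard Liferay 6 properties:

      ```properties
      # Option 1: direct JDBC connection (driver jar must be on the server classpath)
      jdbc.default.driverClassName=oracle.jdbc.OracleDriver
      jdbc.default.url=jdbc:oracle:thin:@dbhost:1521:ORCL
      jdbc.default.username=lportal
      jdbc.default.password=lportal

      # Option 2: use a JNDI data source configured in GlassFish instead
      jdbc.default.jndi.name=jdbc/LiferayPool
      ```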

Hope you found this post useful! What issues did you face during this install? Please share them in the comments; I will try to help where I can.

