I am trying to configure GC logging in the WebLogic startup scripts on one of our Linux servers (OEL7):
MEM_ARGS="-Xms2048m -Xmx2048m -XX:PermSize=512m -XX:MaxPermSize=512m -Xloggc:/scratch/oracle/middleware/user_projects/domains/oamdomain/servers/oam_server2/logs/oamserver2$$-gc.log -XX:+PrintGC -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution"
With the above parameters I am unable to start the WebLogic servers and get the error below:
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
Invalid file name for use with -Xloggc: Filename can only contain the characters [A-Z][a-z][0-9]-_.%[p|t] but it has been oamserver2-gc.log -XX:+PrintGC
Note %p or %t can only be used once
The fix: MEM_ARGS has been split into two parts:
export MEM_ARGS="-Xms2048m -Xmx2048m -XX:PermSize=512m -XX:MaxPermSize=512m -Xloggc:/scratch/oracle/middleware/user_projects/domains/oamdomain/servers/oam_server2/logs/oamserver2$$-gc.log"
export JAVA_OPTIONS="-XX:+PrintGC -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution $JAVA_OPTIONS"
The WebLogic startup script had problems starting the server when everything was part of MEM_ARGS. Maybe some sort of validation was happening somewhere 😉
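As the error message itself notes, the -Xloggc filename may contain %p or %t (once each). An alternative sketch, assuming the same paths as above, is to let the JVM substitute the PID via %p instead of relying on the shell's $$ expansion:

```shell
# Sketch (paths are illustrative): let the JVM insert the PID via the %p
# placeholder instead of expanding $$ in the shell.
LOG_DIR=/scratch/oracle/middleware/user_projects/domains/oamdomain/servers/oam_server2/logs
export MEM_ARGS="-Xms2048m -Xmx2048m -XX:PermSize=512m -XX:MaxPermSize=512m -Xloggc:${LOG_DIR}/oamserver2-%p-gc.log"
export JAVA_OPTIONS="-XX:+PrintGC -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution ${JAVA_OPTIONS}"
```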
Hope it helps.
All Java processes pertaining to the IDM stack were being killed periodically, and the servers had to be restarted very often.
Environment: WebLogic 11g IDM stack, Java 1.7.0_79.
- There were no thread dumps or core dumps.
- System logs were clean.
- Grepped the WebLogic logs for errors; could not find anything.
- After carefully going through each and every line of the WebLogic logs, I found the messages below:
####<Jul 26, 2017 6:36:47 PM IST> <Notice> <WebLogicServer> <myhost.me.me.com> <AdminServer> <Thread-1> <<WLS Kernel>> <> <ae8121cbb691c6b3:-fc73acf:15d7e34bd34:-8000-00000000000002ff> <1501074407979> <BEA-000388> <JVM called WLS shutdown hook. The server will force shutdown now>
####<Jul 26, 2017 6:36:47 PM IST> <Alert> <WebLogicServer> <myhost.me.me.com> <AdminServer> <Thread-1> <<WLS Kernel>> <> <ae8121cbb691c6b3:-fc73acf:15d7e34bd34:-8000-00000000000002ff> <1501074407980> <BEA-000396> <Server shutdown has been requested by <WLS Kernel>>
The logs clearly indicate that the JVM itself initiated the shutdown of the servers and the OS had nothing to do with it.
The problem can occur due to software incompatibility. In our case there was no customization and none of our own software was deployed; ours was a clean install.
This pointed me to the JDK version. After pointing the domain to the correct Java, the issue was resolved.
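A quick way to confirm which JDK a WebLogic domain actually starts with is to source the domain environment script and print the resolved Java; a sketch (the domain path is illustrative):

```shell
# Sketch: confirm which JDK the domain scripts pick up
# (domain path is a placeholder; adjust to your installation).
. /scratch/oracle/middleware/user_projects/domains/oamdomain/bin/setDomainEnv.sh
echo "JAVA_HOME=${JAVA_HOME}"
"${JAVA_HOME}/bin/java" -version
```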
Hope it helps.
We are given Linux systems that have Python 2.7.5 installed, and our application requires a Python version greater than 2.7.9. When we try to upgrade or install the latest Python, it fails because the default installation requires writing to the read-only file system /usr/local/bin/. Even with root access we cannot modify the /usr/local/bin directory.
So how do we upgrade or install a new version of Python? Below are the steps:
- Download the version of Python we are interested in.
- Untar the source tarball to a directory.
- cd to the directory where it was untarred.
- Run the configure command: ./configure --prefix=/myworks/softwares/pythonV2.7.13 --enable-shared --with-ensurepip=yes && make
- Run the install command: make altinstall. This installs Python into the directory specified by the prefix path.
- In .bash_profile, add the line: export LD_LIBRARY_PATH=/myworks/softwares/pythonV2.7.13/lib:$LD_LIBRARY_PATH
- Reload .bash_profile. It's done.
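The steps above can be consolidated into a script like this (the version, download URL, and prefix are the examples used above; adjust as needed):

```shell
# Consolidated sketch of the steps above; version and prefix are examples.
PREFIX=/myworks/softwares/pythonV2.7.13
curl -O https://www.python.org/ftp/python/2.7.13/Python-2.7.13.tgz
tar -xzf Python-2.7.13.tgz
cd Python-2.7.13
./configure --prefix="$PREFIX" --enable-shared --with-ensurepip=yes && make
make altinstall
# Make the shared libraries visible at runtime.
echo "export LD_LIBRARY_PATH=$PREFIX/lib:\$LD_LIBRARY_PATH" >> ~/.bash_profile
source ~/.bash_profile
```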
Hope it helps
One of my applications uses the Derby database to store configuration information, and I just wanted to see what tables are present in the Derby DB. As I was using Derby for the first time, I did not know the basic commands to list the tables from the SQL prompt on the Linux system where the Derby DB is installed. I wanted a SQL client that could connect to the remote DB and list the available tables and the data in them. This is when I started googling for a SQL client and found a suitable installer (squirrel-sql-3.7.1-standard.jar).
Installing and configuring the SQL Client:
- Run the command "java -jar squirrel-sql-3.7.1-standard.jar"; a GUI will open. Click Next through the installer.
- Download Apache Derby (db-derby-10.10.2.0-bin.zip).
- Unzip the Derby zip file to a folder.
- Launch "squirrel-sql.bat" from the location where it is installed; the "SQuirreL SQL Client Version 3.7.1" GUI editor will open.
- Under the Drivers tab on the top left of the editor, double-click on Apache Derby Client.
- Click on the "Extra Class Path" tab in the new window and add derbyclient.jar from the folder where the Derby package was unzipped previously. Click OK; the driver will be registered.
- Click on Aliases, then "Add Alias", and input the required details. Test the connection and click OK.
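If a GUI is not available, the same Derby download also ships a command-line tool, ij, which can connect to a remote DB and list tables; a sketch (host, port, and database name are placeholders):

```shell
# Sketch: list tables from the shell with Derby's bundled ij tool
# (host, port, and database name are placeholders).
java -cp "db-derby-10.10.2.0-bin/lib/derbyclient.jar:db-derby-10.10.2.0-bin/lib/derbytools.jar" \
  org.apache.derby.tools.ij <<'EOF'
connect 'jdbc:derby://localhost:1527/mydb';
show tables;
exit;
EOF
```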
Hope it helps.
set serveroutput on

DECLARE
  stmt_task VARCHAR2(128);
BEGIN
  stmt_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(sql_id => '8bk0dw24d58jg');
  DBMS_OUTPUT.PUT_LINE('Tuning task created: ' || stmt_task);
END;
/

-- Use the task name printed above (here it was TASK_1342).
EXEC DBMS_SQLTUNE.execute_tuning_task(task_name => 'TASK_1342');

set long 9999999
SET PAGESIZE 1000
SET LINESIZE 32767

SELECT DBMS_SQLTUNE.report_tuning_task('TASK_1342') AS recommendations FROM dual;
Below are a few of the Spark configurations that I used while tuning an Apache Spark job.
- "-XX:MetaspaceSize=100M": Before adding this parameter, full GCs were observed due to metaspace resizing. After adding it, no full GCs on account of metaspace resizing were observed.
- "-XX:+DisableExplicitGC": In standalone mode, System.gc() was being invoked by the code every 30 minutes, which triggers a full GC (not a good practice). After adding this parameter, no full GCs on account of System.gc() were observed.
- "-Xmx2G": OOM was observed with the default heap size (1G) when executing the run with more than 140K messages in the aggregation window. After the heap was increased to 2 GB (the maximum allowed on this box), I was able to process 221K messages successfully in the aggregation window. At 250K messages we were still getting OOM.
- spark.memory.fraction = 0.85: In Spark 1.6.0 the default value is 0.75; this is the fraction of the heap used for storage and execution memory. The value was increased to give more memory to storage/execution and avoid OOM.
- Storage level has been changed to DISK_ONLY: Before the change, we were getting OOM when processing 250K messages during the 300-second aggregation window. After the change, we could process 540K messages in the aggregation window without getting OOM. Although in-memory gives better performance, due to hardware limitations I had to use DISK_ONLY.
- spark.serializer is set to KryoSerializer: the Java serializer has a bigger memory footprint; to avoid the high memory footprint and for better performance we used this serializer.
- "-Xloggc:~/etl-gc.log -XX:+PrintGC -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -XX:+PrintGCCause": these parameters should be added as good performance practice; they also help in diagnosing problems from the GC logs, and their overhead in production is very minimal.
- spark.streaming.backpressure.enabled – true: This enables the Spark Streaming to control the receiving rate based on the current batch scheduling delays and processing times so that the system receives only as fast as the system can process.
- spark.io.compression.codec set to org.apache.spark.io.LZFCompressionCodec: The codec used to compress internal data such as RDD partitions, broadcast variables and shuffle outputs.
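Wiring the settings above into a job might look like the following spark-submit sketch (the application jar and memory values are illustrative):

```shell
# Sketch: passing the settings above to spark-submit (values illustrative).
spark-submit \
  --driver-memory 2G --executor-memory 2G \
  --conf spark.memory.fraction=0.85 \
  --conf spark.streaming.backpressure.enabled=true \
  --conf spark.serializer=org.apache.spark.serializer.KryoSerializer \
  --conf spark.io.compression.codec=org.apache.spark.io.LZFCompressionCodec \
  --conf "spark.executor.extraJavaOptions=-XX:MetaspaceSize=100M -XX:+DisableExplicitGC -Xloggc:/tmp/etl-gc.log -XX:+PrintGC -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -XX:+PrintGCCause" \
  my-streaming-app.jar
```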
Hope it helps.
AWS Elastic Load Balancer has an idle timeout value set to 60 seconds by default. If there is no activity for 60 seconds, the connection is torn down and an HTTP 504 error is returned to the customer. Here are the steps to change the idle timeout value in the AWS Elastic Load Balancer:
- Sign in to AWS Console
- Go to EC2 Services
- On the left panel, click on the Load Balancing > Load Balancers
- In the top panel, select the Load Balancer for which you want to change the idle timeout
- Now in the bottom panel, under the 'Attributes' section, click on the 'Edit idle timeout' button. The default value will be 60 seconds; change it to the value you would like (say, 180 seconds)
- Click on ‘Save’ button
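The same change can be scripted with the AWS CLI for a Classic ELB; a sketch (the load balancer name is a placeholder):

```shell
# Sketch: set the idle timeout to 180 seconds on a Classic ELB
# (load balancer name is a placeholder).
aws elb modify-load-balancer-attributes \
  --load-balancer-name my-load-balancer \
  --load-balancer-attributes '{"ConnectionSettings":{"IdleTimeout":180}}'
```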
Here is the use case: I have 3 scenarios named A, B and C, which are to be load tested with 6, 3 and 1 threads respectively.
These 3 scenarios cover 7 use cases (T1 to T7), which are to be executed with defined percentages as shown below:
How do we configure this in JMeter? Had it been just the users with 3 scenarios, we would have configured them in the Thread Group. But what about T1 to T7?
First, create 3 thread groups with the desired number of users as shown below:
Then, under each thread group, add a Throughput Controller (found under Logic Controller). Configure the percentage and add the requests under the Throughput Controller as shown below:
Hope this helps.
- Excessive Layering – Most of the underlying performance issues start with the excessive layering antipattern. The application design has grown around the usage of controllers, commands and facades. In order to decouple each layer, the designers add facades at each of the tiers. Now, for every request at the web tier, the call goes through multiple layers just to fetch the results. Imagine doing this for thousands of incoming requests and the load the JVM needs to handle to process them. The number of objects that get created and destroyed when making these calls adds to the memory overhead, which further limits the number of requests each server node can handle. Based on the size of the application, the deployment model and the number of users, an appropriate decision needs to be taken to reduce the number of layers. E.g. if the entire application is deployed in the same container, there is no need to create multiple layers of process beans, service beans (business beans), data access objects, etc. Similarly, when developing an internet-scale application, a large number of layers starts adding overhead to request processing. Remember, a large number of layers means a large number of classes, which effectively starts impacting overall application maintainability.
- Round Tripping – With the advent of ORM mappings and Session/DAO objects, the programmer starts making calls to beans for every piece of data. This leads to excessive calls between the layers. Another side issue is the number of method calls each layer has to support for this model. The worst case is when the beans are web-service based: a client tier making multiple web service calls within a single user request has a direct impact on application performance. To reduce the round tripping, the application needs to handle or combine multiple requests at the business tier.
- Overstuffed Session – The session object is a feature provided by the JEE container to track a user's session during a web site visit. The application starts with the promise of putting very minimal information in the session, but over a period of time the session object keeps growing. Too much data, or the wrong kind of data, is stuffed into the session object. Large data objects mean that the objects placed in the session will linger until the session object is destroyed, which impacts the number of users an application server node can serve. Further, I have seen applications use session clustering to meet availability requirements, adding significant overhead to network traffic and to the application's ability to handle a higher number of users. To unstuff the session object, take an inventory of everything that goes into it, see what is necessary, and decide which objects can be moved to request scope. For the others, remove the objects from the session when their usage is over.
- Golden Hammer (Everything is a Service) – With the advent of SOA, there is a tendency to expose business services, which can be orchestrated into process services. In older applications, one can observe a similar pattern implemented with EJBs. This pattern, coupled with a bottom-up design approach, at times means exposing each and every data entity as a business service. Such a design might work correctly functionally, but from the performance and maintenance point of view it soon becomes a nightmare. Every web service call adds overhead in terms of data serialization and deserialization. At times, the data (XML) being passed with web service calls is also huge, leading to performance issues. The usage of services or EJBs should be evaluated from the application usage perspective, and attention needs to be paid to the contract design.
- Chatty Services – Another pattern observed is a service implemented via multiple web service calls, each of which communicates a small piece of data. This results in an explosion of web services, which leads to degraded performance and unmaintainable code. From the deployment perspective, the application also starts running into problems. I have come across projects with a hundred-plus services all crammed into a single deployment unit; when the application comes up, the base heap requirement is already in the 2 GB range, leaving not much space for the application to run. If the application has too many fine-grained services, that is an indication of this antipattern.
Refer to : https://www.linkedin.com/pulse/application-performance-antipatterns-munish-kumar-gupta