Distributed Performance Testing in the Cloud with JMeter and AWS

JMeter is a wonderful tool for stress testing your website and your application architecture. However, if you are trying to simulate many users (>1000), one JMeter instance (i.e. one machine) will not be sufficient; you will have to set up a JMeter cluster with multiple machines. JMeter is capable of running distributed tests, but it comes with limitations.

Since most of us don’t have multiple servers lying around, we usually go to a cloud service provider like AWS, spin up a couple of EC2 instances and turn them off whenever we’re done. Here is the problem: JMeter uses Java RMI (Remote Method Invocation) to communicate with its slaves, but these connections require all machines to be on the same subnet, and this is not feasible with EC2 instances.

Below, I explain how to get around this problem using a 3-node configuration in AWS to execute tests. I assume that you have already written the test and have the .jmx file ready to go.

The basic configuration:

The idea is that we have 1 master instance that sends the test to 2 slaves. The slaves execute the test and send the results back to the master, which collects and combines them.

A few notes:

  • Your test will be executed on both slave machines and not divided across them — this means that if you want to run 300 threads in your test, your target will be hit with 600 threads.
  • The master does NOT execute any tests. Gathering the results and orchestrating the tests is enough for one machine to handle.
  • The test will be sent out to the slaves from the master. There’s no need to copy the test file to all slaves.

Before we start:

It is important to understand how JMeter communicates. Between the master and each slave, JMeter creates three connections: two are used for RMI (A: executing methods, B: receiving results) and the third is used by JMeter itself.

To get around the RMI limitations, we are going to set up a few SSH tunnels (SSH port forwarding). This will make our slaves reachable from our master. We are going to use the port range from 24000 to 26999 (we don’t really need that many ports, but spreading out the range makes it easier to tell which ports go out to the slaves vs. which ports come back in to the master).

Preparing EC2 instances:

We are using 3 EC2 instances (medium instances usually do the job) running Ubuntu Server (any other Linux distribution will work as well). Java has been installed on all instances, and in addition to that I downloaded and extracted JMeter on all of the machines.
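If you are starting from fresh instances, the per-machine setup can look roughly like the sketch below. It assumes Ubuntu’s default packages and the Apache archive; the JMeter version number is only an example, so use whatever release you need.

# run on every instance: install a JRE, then download and extract JMeter
sudo apt-get update && sudo apt-get install -y default-jre
wget https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-5.6.3.tgz
tar -xzf apache-jmeter-5.6.3.tgz
cd apache-jmeter-5.6.3/bin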

SSH keep alive

Since we are using SSH tunnels to the slaves (and we don’t know how long our coffee breaks are going to take), it is a good idea to keep the SSH connections alive (stale connections are the worst nightmare when it comes to debugging a JMeter cluster).

You can do that on the master in /etc/ssh/ssh_config by adding (or changing) the following lines:

ServerAliveInterval 60
ServerAliveCountMax 3

This will send null packets to the respective slaves and keep the connections open.

Optional

You could give those instances static IP addresses and configure the hosts file to use domain names. Here is a possible hosts file (/etc/hosts) on the master:

123.123.123.123 slave01
123.123.123.124 slave02

By doing this, you can create a central script that sets up the tunnels every time you start the machines.
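A minimal sketch of such a script, using the hostnames above and the tunnel commands explained in the next section (the ubuntu user is the default on Ubuntu AMIs; adjust the user and ports to your setup):

#!/bin/bash
# open-tunnels.sh - run on the master after the slaves are up
for i in 1 2; do
  ssh -L 2400${i}:127.0.0.1:2400${i} \
      -R 25000:127.0.0.1:25000 \
      -L 2600${i}:127.0.0.1:2600${i} \
      -N -f ubuntu@slave0${i}
done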

SSH Port Forwarding

The idea is that we use the localhost address and forward certain ports. JMeter will be configured to use the localhost IP (+ port) to connect to the server.

Based on the connections described above, we need to create three tunnels (2 outgoing, 1 incoming) for each slave. Luckily, we can set that up with one command per slave:

For slave 1:

ssh -L 24001:127.0.0.1:24001 \
    -R 25000:127.0.0.1:25000 \
    -L 26001:127.0.0.1:26001 -N -f ubuntu@slave01

For slave 2:

ssh -L 24002:127.0.0.1:24002 \
    -R 25000:127.0.0.1:25000 \
    -L 26002:127.0.0.1:26002 -N -f ubuntu@slave02

Wait, why is the return port the same (25000)? The master listens on only one port to receive results, which means that all of the slaves send their results back through this port.

JMeter configuration

Next, we move on to the JMeter configuration. The slaves need to know that they are slaves and that they need to listen on local ports. We update the following lines in bin/jmeter.properties:

Slave 1:

server_port=24001
server.rmi.localhostname=127.0.0.1
server.rmi.localport=26001

Slave 2:

server_port=24002
server.rmi.localhostname=127.0.0.1
server.rmi.localport=26002

The master needs to know where to send the tests and how to receive the results. This is done by changing these lines in bin/jmeter.properties:

remote_hosts=127.0.0.1:24001, 127.0.0.1:24002
client.rmi.localport=25000
mode=Statistical

The mode is important because we want to make sure not to stress test our master with returning connections from the slaves. “Statistical” will only send a summary of the stress test back to the master.

Finally, we can start JMeter on our slaves in the background, telling it to use 127.0.0.1 as its RMI server:

nohup ./jmeter-server -Djava.rmi.server.hostname=127.0.0.1 > /dev/null 2>&1 &

This executes JMeter in the background, so you can close the terminal sessions to these machines once the servers are running.

To start the test on our master, just execute:

./jmeter -n -t jmetertestplan.jmx -r -l jmeteroutput.csv

The parameter -r tells JMeter to use all defined remote hosts. You can also connect to a single server using:

./jmeter -n -t jmetertestplan.jmx -R 127.0.0.1:24001 -l jmeteroutput.csv

To stop the test, just execute:

./stoptest.sh

Troubleshooting

This section could fill another blog post, but I do want to share some of my immediate troubleshooting tips.

1. Server reports: Can’t connect to server / Connection timeout

It seems that sometimes the JVM doesn’t know about localhost, and setting the localhost variable might be necessary. Executing

export JVM_ARGS="-Djava.rmi.server.hostname=localhost"

before running the test helped me. (If someone knows exactly what the problem is, please enlighten me!)

2. OutOfMemoryError: unable to create new native thread (too many threads, each thread has a large stack)

Decrease the -Xss (thread stack size) value in your jmeter startup script on the master machine (see the sketch after these tips).

3. StackOverflowError (the required stack is larger than the stack size limit)

Increase the -Xss value in your jmeter startup script on the master machine.

4. GC overhead limit exceeded

Increase/decrease the heap size (-Xms2048m -Xmx2048m) in 512 MB increments in your jmeter startup script.

When changing the heap size, adjust the following line as well:

NEW="-XX:NewSize=683m -XX:MaxNewSize=683m"
(NewSize = round(Xms / 3) )
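For reference, here is a sketch of what those adjustments could look like. The HEAP and NEW lines live near the top of the bin/jmeter startup script (exact defaults vary between JMeter versions), and the stack size can also be passed via JVM_ARGS as in tip 1; all values are examples.

# in bin/jmeter on the master (example values)
HEAP="-Xms2048m -Xmx2048m"                  # tip 4: adjust in 512 MB increments
NEW="-XX:NewSize=683m -XX:MaxNewSize=683m"  # keep NewSize = round(Xms / 3)

# tips 2 and 3: thread stack size (example value), set before starting the test
export JVM_ARGS="-Xss256k $JVM_ARGS"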

Sources

My article is mostly based on this:

https://cloud.google.com/compute/docs/tutorials/how-to-configure-ssh-port-forwarding-set-up-load-testing-on-compute-engine/

Great resources regarding JMeter performance:

https://blazemeter.com/blog/jmeter-performance-and-tuning-tips

The JMeter user manual:

http://jmeter.apache.org/usermanual/index.html

What is your experience with JMeter and distributed testing? Anything to add to this? Please let me know in the comments section.
