Learnings with Spring Boot and Thymeleaf

Creating a visually appealing web application from scratch with Spring Boot and Thymeleaf is straightforward and fast, if you take care of the following points:

Take care of conventions

  • Place your messages in src/main/resources/messages.properties
  • Place static web content under src/main/resources/static (this will be the root directory for your web application for static content like images, CSS and JavaScript files)
  • The Thymeleaf templates must go into src/main/resources/templates and if required, subdirectories there

Be clean on configuration

  • Use src/main/resources/application.properties as sparingly as possible
  • Use src/main/resources/applicationContext.xml as sparingly as possible
  • Try to use annotation-based configuration as much as possible

All these points are very important: if you don’t follow them, you will get into trouble when you try to run your application standalone instead of from your IDE!
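
As a minimal sketch of how those conventions fit together (controller, template and attribute names here are made-up examples): the controller returns a plain template name, Spring Boot resolves it against src/main/resources/templates, and the template can pull its texts from messages.properties via #{...} keys.

import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.RequestMapping;

@Controller
public class HomeController {

    @RequestMapping("/")
    public String home(Model model) {
        model.addAttribute("pageTitle", "Record collection");
        // resolved by Thymeleaf to src/main/resources/templates/home.html;
        // images, CSS and JavaScript referenced there are served from src/main/resources/static
        return "home";
    }
}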

Code analysis with SonarQube, JaCoCo and Gradle

When you work on a Java project, you want to get an idea of your code quality.

Of course, “good” code doesn’t mean the code is error-free, but on the other hand, if your code is seen as “bad”, you can be pretty sure that it will become unmaintainable very soon.

Because of this, tools like SonarQube are helpful to give an unbiased insight into how good your code is, according to established coding standards.

First, you have to set up a SonarQube server, which is a very easy task if you’re on an Ubuntu system:

Add the following line to your /etc/apt/sources.list:

deb http://downloads.sourceforge.net/project/sonar-pkg/deb binary/

and then run the usual

apt-get update
apt-get install sonar

commands.

Assuming that you already have a PostgreSQL database running, create a user “sonar” with password “sonar” and enable the few PostgreSQL-related settings in /opt/sonar/conf/sonar.properties.

Finally, as root, start SonarQube with

/etc/init.d/sonar start

and maybe add it to /etc/rc.local.

The next step is to prepare your project’s build.gradle script to ensure that SonarQube is not only filled with data, but that your test coverage is also measured.

The relevant parts are:

apply plugin: "sonar-runner"
apply plugin: "jacoco"

sonarRunner {
        sonarProperties {
                property "sonar.host.url", "http://localhost:9000"
                property "sonar.jdbc.url", "jdbc:postgresql://localhost:5432/sonar"
                property "sonar.jdbc.driverClassName", "org.postgresql.Driver"
                property "sonar.username", "sonar"
                property "sonar.password", "sonar"
                property "sonar.projectName", "rmmusic"
                property "sonar.jacoco.reportPath", "build/jacoco/test.exec"
                property "sonar.java.source property", "1.8"
        }
}

jacoco {
    reportsDir = file("build/tmp/jacoco.exec")
}

Additionally, log in as admin user into your SonarQube instance and in Settings->System->Update Center, add a few plugins:

  • Java
  • Checkstyle
  • Sonargraph
  • PMD
  • Timeline
  • Findbugs

and restart SonarQube.

As admin user, you should then set a quality profile, e.g. the FindBugs profile.

Now, when you run the Gradle task sonarRunner, all those analyses are executed automatically and you’ll get detailed insights into your code and its quality.
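
On the command line, that boils down to something like this (a sketch assuming the Gradle wrapper is present; the test run produces the JaCoCo data, the sonarRunner task uploads the analysis):

./gradlew clean test sonarRunner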

A short look at Java 8 streams

With Java 8, alongside the new language feature of lambdas, the concept of streams was also introduced, and if you work with streams, you will certainly use lambdas, too.

The advantage of streams is that they increase the understandability and readability of your code. And in theory, if you use parallel streams the correct way, you can speed up your processing, but from my observations that won’t happen if you only work with small datasets and/or simple operations.

To show you how to use streams, let’s implement a small task:

Imagine you have got a record collection system and want to calculate the value of your collection and the average price of each record for which you still know how much you paid (which is not necessarily the case for all of your records!).

In a traditional approach, you would implement it more or less like this:

List<Medium> media = mediumRepository.findAll();

double sumValue = 0;
long boughtMediaCount=0;
for (Medium medium : media) {
  if (medium.getBuyPrice() != null) {
    sumValue += medium.getBuyPrice();
    boughtMediaCount++;
  }
}

System.out.println("Total price="+String.format("EUR %.02f", sumValue));
System.out.println("Average price="+String.format("EUR %.02f", (sumValue / (double) boughtMediaCount)))

Now, let’s analyze what we do here:

After retrieving the whole dataset as a list, we iterate over each element. In each iteration, we check whether the property “buyPrice” is set, and if so, we add that value to the total and increase the counter for records where we know the price. At the end, we want two values: the total price and the average price.

In other words:

We look at each element (“stream“), only process one single property (“map“), only use those properties with a certain value (“filter“) and calculate a result (“collect“).

That description can now be nicely transformed into Java 8 code, which is almost identical to the non-technical description above:

List<Medium> media = mediumRepository.findAll();

Averager averagePrice = media.stream().
    map(Medium::getBuyPrice).
    filter(v -> v != null).
    collect(Averager::new, Averager::accept, Averager::combine);

System.out.println("Total price="+String.format("EUR %.02f", averagePrice.getTotal()));
System.out.println("Average price="+String.format("EUR %.02f", averagePrice.getAverage()));

Isn’t that nice? No brace hell any more, no boring iterations.

OK, you need an extra class, the Averager, which looks like this:

public class Averager implements DoubleConsumer {
    private double total = 0;
    private int count = 0;

    public double getTotal() {
        return total;
    }

    public double getAverage() {
        return count > 0 ? (total / (double) count) : 0;
    }

    public int getCount() {
        return count;
    }

    // merges the partial result of another Averager; used as the combiner
    // argument of collect(), e.g. when the stream is processed in parallel
    public void combine(Averager other) {
        total += other.total;
        count += other.count;
    }

    @Override
    public void accept(double value) {
        total += value;
        count++;
    }
}

For a single occurrence, you end up writing a little more code here (OK, quite a bit more), but even then, readability and testability increase, and that’s what counts in the end.

A few final observations:

  • You can parallelize the work on your stream by using the parallel() method of the streams API (see the sketch after this list). But be warned that, as with every parallelization, there are cases where you actually lose performance.
  • The order of invoking the stream methods is important:
    In my example, using filter() before map() was faster on a sequentially executed stream, but equal or even slower on a parallel stream.
  • On small datasets (in my benchmarks, I worked with roughly 1000 items), the traditional approach with the for-loop is much faster than working with streams. I don’t know how much that changes with larger datasets and/or more complex items.
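
For illustration, the parallel variant of the example above might look like this (a sketch reusing the Averager and Medium classes from before; whether it actually pays off depends on your data):

// same calculation as above, but executed as a parallel stream; the
// Averager's combine() now merges the partial results of the worker threads
Averager averagePrice = media.stream()
        .parallel()
        .map(Medium::getBuyPrice)
        .filter(v -> v != null)
        .collect(Averager::new, Averager::accept, Averager::combine);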

 

Machine monitoring with OpenTSDB

(Partially updated in July 2016)

Inspired by a nice presentation at http://de.slideshare.net/oliverhankeln/opentsdb-metrics-for-a-distributed-world, I wanted to set up an OpenTSDB environment on my machine to replace the old Munin monitoring I’m still using and fighting with.

The following guide describes the steps for setting up OpenTSDB monitoring on an Ubuntu machine.

A word on disk space

According to the SlideShare presentation referenced above, a data point consumes less than 3 bytes on disk if compressed and less than 40 bytes if uncompressed.

With those numbers in mind, you should be able to estimate how much data you will gather over the next year(s).

Installing HBase

Follow https://hbase.apache.org/book/quickstart.html, download the latest binary (at the time of writing: hbase-1.2.2-bin.tar.gz ) and install it e.g. in /opt:

clorenz@machine:~/Downloads $ cd /opt
clorenz@machine:/opt $ tar -xzvf ~/Downloads/hbase-1.2.2-bin.tar.gz
clorenz@machine:/opt $ ln -s hbase-1.2.2 hbase

Next, edit conf/hbase-site.xml:

<configuration>
 <property>
  <name>hbase.zookeeper.quorum</name>
  <value>127.0.0.1</value>
 </property>
 <property>
  <name>hbase.rootdir</name>
  <value>file:///opt/hbase</value>
 </property>
 <property>
  <name>hbase.zookeeper.property.dataDir</name>
  <value>/opt/zookeeper</value>
 </property>
</configuration>

If you already have a running ZooKeeper instance, you must instruct HBase not to start its own ZooKeeper. For that, add the following configuration to conf/hbase-site.xml:

<property>
 <name>hbase.cluster.distributed</name>
 <value>true</value>
</property>

And in conf/hbase-env.sh set the following line:

# Tell HBase whether it should manage its own instance of ZooKeeper or not.
export HBASE_MANAGES_ZK=false

Now, in any case, regardless of ZooKeeper, continue editing conf/hbase-env.sh:

...
export JAVA_HOME=/opt/java8
...

Make sure that your local hostname is resolved properly; the easiest check is:

clorenz@machine:/opt/hbase $ ping machine
PING localhost (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.050 ms
64 bytes from localhost (127.0.0.1): icmp_seq=2 ttl=64 time=0.042 ms
^C

Finally, start HBase by running

clorenz@machine:/opt/hbase $ ./bin/start-hbase.sh

With ps -ef | grep -i hbase you can verify that your HBase instance is running properly:

clorenz@machine:/opt/hbase/logs $ ps -ef | grep -i hbase
root 31701 2795 0 14:37 pts/2 00:00:00 bash /opt/hbase-0.98.9-hadoop2/bin/hbase-daemon.sh --config /opt/hbase-0.98.9-hadoop2/bin/../conf internal_start master
root 31715 31701 44 14:37 pts/2 00:00:07 /opt/java7/bin/java -Dproc_master -XX:OnOutOfMemoryError=kill -9 %p -Xmx1000m -XX:+UseConcMarkSweepGC -Dhbase.log.dir=/opt/hbase-0.98.9-hadoop2/bin/../logs -Dhbase.log.file=hbase-root-master-ls023.log -Dhbase.home.dir=/opt/hbase-0.98.9-hadoop2/bin/.. -Dhbase.id.str=root -Dhbase.root.logger=INFO,RFA -Dhbase.security.logger=INFO,RFAS org.apache.hadoop.hbase.master.HMaster start

Congratulations: You’ve finished your first step. Let’s take the next one:

Installing OpenTSDB

First, download the latest source code from GitHub:

clorenz@machine:/opt/git $ git clone git://github.com/OpenTSDB/opentsdb.git
Cloning into 'opentsdb'...
remote: Counting objects: 5518, done.
remote: Total 5518 (delta 0), reused 0 (delta 0)
Receiving objects: 100% (5518/5518), 27.09 MiB | 6.39 MiB/s, done.
Resolving deltas: 100% (3704/3704), done.
Checking connectivity... done.

Next, build a debian package:

clorenz@machine:/opt/git/opentsdb (master)$ sh build.sh debian

If you encounter an error (e.g. ./bootstrap: 17: exec: autoreconf: not found), you’re most likely missing prerequisite packages. Be sure to install at least the following ones:

  • dh-autoreconf
  • gnuplot

If the Debian build went well, you can install it:

clorenz@machine:/opt/git/opentsdb (master)$ sudo dpkg -i build/opentsdb-2.2.1-SNAPSHOT/opentsdb-2.2.1-SNAPSHOT_all.deb

Initial preparations for OpenTSDB

Before you can run OpenTSDB, you have to create the HBase tables:

clorenz@machine:/opt/git/opentsdb (master)$ env COMPRESSION=GZ HBASE_HOME=/opt/hbase ./src/create_table.sh

and at least in the beginning, it is helpful if OpenTSDB creates metrics automatically. For that, set the following line in /etc/opentsdb/opentsdb.conf:

tsd.core.auto_create_metrics = true

Starting OpenTSDB

sudo service opentsdb start

When you access http://localhost:4242/ you will see the OpenTSDB GUI.

Now it’s time to start gathering data. We’ll use TCollector for the most basic data:

Installing TCollector

Again, we’re fetching the source code from GitHub:

clorenz@machine:/opt/git $ git clone git://github.com/OpenTSDB/tcollector.git

Let’s configure tcollector so that it uses our own OpenTSDB instance by adding one single line to /opt/git/tcollector/startstop:

TSD_HOST=localhost

Starting tcollector is pretty easy:

clorenz@machine:/opt/git/tcollector (master *)$ sudo ./startstop start

It’s done!

Now you can access your very first graph in the interface by selecting a timeframe and the metric df.bytes.free. You should see a graph now!

Writing custom collectors

Any collector writes one or more lines with the following format:

metric timestamp value tag1=data1 tag2=data2
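
For example, a concrete line emitted by a collector could look like this (metric name, timestamp and tag are made-up values):

wavfiles.total 1467370800 42 type=yesterdayradio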

Within the subdirectory collectors of your tcollector installation, there are numerical subdirectories which denote how often a collector is executed. A directory name of 0 stands for “runs indefinitely, like a daemon”; values greater than zero stand for “runs every n seconds”.

With that in mind, it should be fairly easy to write your own collectors now, as in the following example. Note that collectors don’t necessarily have to be written in Python; you can use basically any language:

#!/usr/bin/python
import glob
import sys
import time

def main():
    # current Unix timestamp, as expected by OpenTSDB
    ts = int(time.time())
    yesterdayradio = glob.glob('/home/clorenz/data/wav/yesterdayradio-*')

    # output format: metric timestamp value tag1=data1 ...
    print "wavfiles.total %d %d type=yesterdayradio" % (ts, len(yesterdayradio))

    sys.stdout.flush()


if __name__ == "__main__":
    sys.stdin.close()
    sys.exit(main())

You can test your collector by executing it on the shell:

PYTHONPATH=/opt/tcollector /usr/bin/python /opt/tcollector/collectors/300/mystuff.py

Pretty straightforward, isn’t it?

If you generated wrong data for some reason, you can delete it, but beware: this command is very dangerous, and the “1h-ago” parameter in the following command actually means “now”, since the resolution is about one hour:

/usr/share/opentsdb/bin $ sudo ./tsdb scan --delete 2h-ago 1h-ago sum wavfiles.total type=*

Find more about manipulating the raw collected data at http://opentsdb.net/docs/build/html/user_guide/cli/scan.html

Let’s now polish the whole installation with a nicer frontend to get a real dashboard:

Installing StatusWolf as frontend

Since the standard GUI of OpenTSDB is a little raw, it’s a good idea to install an alternative. The one which currently looks best (not only visually, but also in terms of features, like anomaly detection) is StatusWolf. To install StatusWolf, you only need a few steps:

  • install apache2
  • install libapache2-mod-php5
  • install php5-mysql
  • install php5-curl
  • ensure that mod_rewrite is working:
    sudo a2enmod rewrite
    sudo a2enmod actions
    sudo service apache2 restart
  • download StatusWolf and install it into /opt
  • install pkg-php-tools
  • install composer ( https://getcomposer.org/download/ ):
    sudo mkdir -p /usr/local/bin
    sudo chown clorenz /usr/local/bin
    curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
  • install mysql-server (remember the root user password for later creation of the database)
  • Follow the StatusWolf setup instructions (be sure to remove all comment lines in the JSON configuration!)
  • Ensure that all files belong to the www-data user:
    clorenz@machine:/opt $ sudo chown -R www-data StatusWolf
  • Create /etc/apache2/sites-available/statuswolf.conf:
    Listen 9653
    <VirtualHost your.host.name:9653>
     ServerRoot /opt/StatusWolf
     DocumentRoot /opt/StatusWolf
     <Directory /opt/StatusWolf>
     Order allow,deny
     Allow from all
     Options FollowSymLinks
     AllowOverride All
     Require all granted
     </Directory>
    </VirtualHost>
  • Link this file to /etc/apache2/sites-enabled
  • Ensure that in /etc/apache2/mods-available/php5.conf, the php_admin_flag is disabled:
    # php_admin_flag engine Off
  • Create a user in the database (please use different values unless you want to create a security hole!):
    mysql statuswolf -u statuswolf -p
    INSERT INTO auth VALUES('statuswolf',MD5('statuswolf'),'Statuswolf User');
    INSERT INTO users VALUES(2,'statuswolf','ROLE_SUPER_USER','mysql');

Selecting an ORM for Android

When you develop an app for Android, sooner or later you will have to answer the question of how you will maintain your data.

Unless you want to mess with flat files, you will most certainly use the standard SQL database mechanism Android offers, SQLite. And when you use it, you’ll need a decent ORM.

Manual database access

In the early days of Android, the usual way of working with SQLite was to write your own ContentProvider, fiddling around with ContentResolver, Uri, manual data mapping, managed SQL queries and so on.

Of course, you can still do that, but since programming for Android is generally pretty ugly, writing your own database layer is even uglier, and I would not recommend it any more unless you really have a reason to.

SugarORM

At W-JAX 13, there was a talk about recommendable Android libraries, and SugarORM was one of them.

SugarORM is a small and lightweight OR mapper which allows really easy access to simple database structures. It even offers a mechanism for simple database schema modifications and allows very clean data access like this:

Book.findById(Book.class, 1);

Nice, isn’t it?
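
For context, the Book entity behind such a call could look roughly like this (a sketch; depending on the SugarORM version, the base class may additionally need a type parameter and a Context-taking constructor):

// hypothetical SugarORM entity; fields are mapped to table columns automatically
public class Book extends SugarRecord {
    String title;
    String author;

    public Book() {
        // SugarORM requires a no-argument constructor
    }

    public Book(String title, String author) {
        this.title = title;
        this.author = author;
    }
}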

However, if you have a more complex database with more than just text and integer fields, or if you have to implement a database migration which cannot be done with plain SQL instructions, you are unfortunately lost.

Since the community around SugarORM is pretty small, you can’t expect it to become production-ready for complex databases. Anything beyond simple database object access will result in writing manual code with limited SQL statements. Retrieving just one or two fields from a table is not possible, and writing more complex database migration scripts is not supported. I, for example, managed to find a hack to bypass the limitations of the database migration, basically by bypassing SugarORM, but finally failed on the fact that the Timestamp field of one of my tables was not read in correctly. The necessary work of forking and patching SugarORM wasn’t worth the effort for me, since I then found….

GreenDAO

If you take a look at which Android ORMs are really popular, you can’t get past greenDAO, an SQLite ORM which has not only been popular for many years, but is also designed with performance in mind. It’s no surprise that some of the top apps on the market, like Pinterest, AirDroid or ICQ, use it.

Although greenDAO is a little more complicated to use compared with SugarORM, it still offers a very nice database abstraction layer, supports the standard (powerful!) SQLite database migration mechanism without proprietary interference and does not suffer from bugs like wrongly implemented database fields. There’s no timestamp field, but using date is OK, as long as you know about that. The only minor drawback is that you have to pass the Context around, which uglifies your code a little bit. But hey, Android is ugly 🙂

A speciality of greenDAO is that it comes with a generator, which means that you don’t have to work with SQL commands, not even when creating your database schema (see the sketch below). But if you want to use SQL, of course, you still can.
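
As an illustration of that generator approach, a schema definition could look roughly like this (a sketch against the classic greenDAO generator API; package, entity and property names are made up):

import de.greenrobot.daogenerator.DaoGenerator;
import de.greenrobot.daogenerator.Entity;
import de.greenrobot.daogenerator.Schema;

public class MyDaoGenerator {
    public static void main(String[] args) throws Exception {
        // schema version 1, target package for the generated DAO classes
        Schema schema = new Schema(1, "de.example.app.dao");

        // each property becomes a column of the generated table
        Entity record = schema.addEntity("Record");
        record.addIdProperty();
        record.addStringProperty("title");
        record.addDateProperty("boughtAt");
        record.addDoubleProperty("buyPrice");

        // writes the DAO classes into the given source folder
        new DaoGenerator().generateAll(schema, "./app/src/main/java");
    }
}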

On blog.surecase.eu, there’s a great starter tutorial. If you follow it, you will have your database access up and running within a few minutes.

jmap – so small, so powerful

I wrote an article about jmap some time ago, but since I had a real-world incident recently, I want to present jmap again, because it’s so incredibly helpful:

Recently, we encountered strange behaviour with a Java service which had been running for several weeks without interruption. Every twenty minutes, it showed really weird behaviour, comparable to short-term amnesia.

Logfiles, network stats, manual tests: everything was OK apart from the actual amnesia effects. No trace of a real problem.

That’s when I brought jmap into the game. It’s a really, really useful tool to inspect your running Java process and, most importantly, its heap. A simple command

jmap -histo 1234

for the Java process 1234 (find it with ps -ef | grep java) showed me that we had three million objects with “connection” in their names on the heap. Since I knew that we don’t even need thirty thousand connections at the same time, it was a clear indication that we had some kind of leak here.

Since it was a third-party application, we decided that instead of time-consuming debugging, we would just restart the service, and voilà, all problems were solved.

So, if you encounter weird behaviour, it’s not the worst idea to take a look at the heap of your process to check whether you have a memory and/or object leak somewhere.

Troubleshooting Java applications – Part 1: jmap

If you encounter abnormal behaviour in your Java application, Java offers you a set of tools to effectively identify the culprit. The first one, which I want to present, is jmap.

jmap

jmap, which is part of the JDK, is the tool of choice if you get the impression that you have a resource leak. With a simple command, you can create a histogram of all Java objects on the heap of your process:

jmap -histo PID

(where PID is the process ID of your Java process)

The output will look like this:

 num   #instances      #bytes  class name
----------------------------------------------
 1:       2031082   185199616  [C
 2:       2013868    64443776  java.lang.String
 3:        656635    21012320  java.util.HashMap$Entry
 4:        614312    14743488  de.christophlorenz.foo.Culprit
 5:         81443    12737392  <constMethodKlass>
 6:        123267    11564632  [Ljava.util.HashMap$Entry;
 7:         81443    11083656  <methodKlass>
...
3263:          1          16   sun.reflect.GeneratedMethodAccessor63
3264:          1          16   de.christophlorenz.foo.SomeFactory
Total    7182799   426077152

As you can see in number four, there are no less than 614,312 instances of the Culprit class, consuming a little less than 15 MB in total. It’s now up to you to decide whether that is the desired behaviour or not.

(Don’t worry about the large number of [C and [I entries; they are native char and int arrays, of which you will certainly use a lot.)

Now, as another example, imagine your RabbitMQ server is stressed by too many connections, maybe even blocking clients from connecting, and someone has the suspicion that your application might be running wild. With a simple jmap call, you can check your application and verify (or deny) that it is the cause:

jmap -histo 12345 | grep -i rabbitmq
 ...
 130:    14360   344640 [Lcom.rabbitmq.client.Address;
 131:    14356   344544 com.rabbitmq.client.Address
 188:     6250   100000 com.rabbitmq.client.impl.LongStringHelper$ByteArrayLongString
 212:      625    75000 com.rabbitmq.client.impl.AMQConnection$MainLoop
 226:      625    65000 com.rabbitmq.client.impl.AMQConnection
 235:      625    60000 com.rabbitmq.client.impl.ChannelN
 ...

Now, are you sure, you really need 625 connections to your RabbitMQ? No? Just go ahead and fix it 🙂

Of course, jmap has many more options, like generating full heap dumps, which can later be analyzed with jvisualvm, but I’ll talk about that later.
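
For example, a binary heap dump that you can later open in jvisualvm can be created like this (the dump file name is arbitrary; live restricts the dump to objects that are still reachable):

jmap -dump:live,format=b,file=heap.hprof 12345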

Application monitoring with JMX and Jolokia

(or: it’s the inner values that count)

Remember the last time your application was all green in your monitoring suite, but you got complaints because it did not do what it was expected to? Or have you ever been in the situation where you wanted to measure what your application does without going through megabytes of logfiles? Do you need some KPI-based monitoring? Don’t want to reinvent the wheel?

For any of these cases, the following monitoring approach, using standard Java JMX together with Jolokia as an HTTP bridge, is perfect!

First, let’s take a look at JMX, the Java Management Extensions:

JMX is a Java API for resource management. It is a standard from the early days (JSR 3: JMX API, JSR 160: JMX Remote API), got an overhaul later (Java 6: merge of both APIs into JSR 255 – the JMX API version 1.3) and since Java 7, we have the JMX API version 2.0. Basically, JMX consists of three layers: the Instrumentation Layer (the MBeans), the Agent Layer (the MBean Server) and the Distributed Layer (connectors and management clients).

Although you can use JMX for managing virtually everything (even services), we concentrate here on using JMX for monitoring purposes. The same goes for MBeans.

What are MBeans?

Generally speaking, MBeans are resources (e.g. a configuration, a data container, a module, or even a service) with attributes and operations on them. Everything else, like notifications or dynamic structures, is out of scope for us now.

Technically, an MBean is a class which implements an interface and follows a naming convention where the interface name is the class name plus “MBean” at the end:

class MyClass implements MyClassMBean

Now, let’s create a sample counting MBean:

public interface MyEventCounterMBean {
  public long getEventCount();
  public void addEventCount();
  public void setEventCount(long count);
}
package my.monitoring;
public class MyEventCounter implements MyEventCounterMBean {
  public static final String OBJECT_NAME="my.monitoring:type=MyEventCounter";
  private long eventCount=0;

  @Override
  public long getEventCount() {
    return eventCount;
  }

  @Override
  public void addEventCount() {
    eventCount++;
  }

  @Override
  public void setEventCount(long count) {
    this.eventCount = count;
  }
}

Before we can use the bean, we have to make it available. For that, we need to wire it with the MBean server. The MBean server acts as a registry for MBeans, where each MBean is registered by its unique object name. Those object names consist of two parts, a Domain and a number of KeyProperties. The Domain can be seen as the package name of the bean, and one of these KeyProperties, the type, is its class name. If you use the “name” property, it denotes one of its attributes.

So, for our example above, the ObjectName would be:

my.monitoring:type=MyEventCounter

In every JVM, there’s at least one standard MBean server, the PlatformMBeanServer, which can be reached via

MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();

In theory, you could use more than one MBean server per JVM, but normally, using only the PlatformMBeanServer is sufficient.

Next step: Accessing MBeans

To access our MBean, we can either use Spring and its magic, or we do it manually.

The manual way looks like this:

We have to register our bean once, e.g. in an init method:

MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
ObjectName myEventCounterName = new ObjectName(MyEventCounter.OBJECT_NAME);
MyEventCounter myEventCounter = new MyEventCounter();
mbs.registerMBean(myEventCounter, myEventCounterName);

And for every access, we have to retrieve it from the MBeanServer so that we can invoke the methods:

MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
ObjectName myEventCounterName = new ObjectName(MyEventCounter.OBJECT_NAME);
mbs.invoke(myEventCounterName, "addEventCount", null, null);

Have you seen the second argument of the invoke method? It’s the name of the operation you want to invoke. If you want to pass arguments, you pass their values as an object array in the third argument and their signatures as a string array in the fourth, e.g.

mbs.invoke(myEventCounterName, "setEventCount", new Object[] {number}, new String[] {int.class.getName()});

If we’re lucky and our whole application is managed by Spring, it’s sufficient to work with configuration and annotations only.

The MBean needs to be annotated as @Component and @ManagedResource with the object name as parameter:

@Component
@ManagedResource(objectName="my.monitoring:type=MyEventCounter")

and the attributes get a @ManagedAttribute, while operations like addEventCount() get a @ManagedOperation:

@Override
@ManagedOperation
public void addEventCount() {
  eventCount++;
}

In your Spring configuration, besides the <context:component-scan> tag, you need one additional line to export the MBeans:

<context:mbean-export/>

And the classes which want to use the bean just have to inject it with the @Autowired annotation:

@Autowired
private MyEventCounterMBean myEventCounterMBean;

Accessing MBeans from outside, using Jolokia

Of course, you can access your MBeans with jconsole, but a more elegant and more firewall-friendly way is to use an HTTP bridge, which allows you to access the MBeans over HTTP. That’s where Jolokia comes into play.

Jolokia is a JMX-JSON-HTTP bridge which allows access to your MBeans over HTTP and returns their attributes as JSON. Nice, isn’t it? Besides that, it allows bulk requests for improved performance, has a security layer to restrict access and is really easy to install.

If you want to monitor a webapp running inside Tomcat, all you need to do is deploy the Jolokia agent webapp (available as a .war file) into your Tomcat.

For a standalone Java application, just apply the Jolokia JVM agent (which in fact acts as an internal HTTP server) as javaagent in your start script:

java -javaagent:$BASE_DIR/libs/jolokia-jvm-1.2.1-agent.jar=port=9999,host=*

And if you build with Gradle, add the following line to your build.gradle:

runtime (group:"org.jolokia", name:"jolokia-jvm", classifier:"agent", version:"1.2.1")

Helpers – jmx4perl

Now that we can access our MBeans from outside, it would be nice to have a tool to just read the values on the command line. The best tool for that is jmx4perl, which is available on GitHub at https://github.com/rhuss/jmx4perl

The installation is reminiscent of the good old Perl days with CPAN. If you’ve never worked with CPAN, just install jmx4perl according to the documentation and acknowledge all questions.

Now, let’s get an overview of all available MBeans:

jmx4perl http://your.application.host:9999/jolokia list

And if you want a specific bean, run:

jmx4perl http://your.application.host:9999/jolokia read my.monitoring:type=MyEventCounter

The output is JSON and will look like:

{
 EventCount => 234,
 Name => 'MyEventCounter'
}

And finally, if you just need one attribute, run:

jmx4perl http://your.application.host:9999/jolokia read my.monitoring:type=MyEventCounter EventCount

In that case, you’ll get nothing but the value as a result.

Let’s go!

With these tools and figures, you can monitor virtually everything inside your application. All you have to do now is provide the data (and you, as the developer of your application, know what exactly should be monitored) and monitor it with Nagios, OpenTSDB, or whatever you want. All these tools are able, either directly or with helpers like jmx4perl, to access, process and monitor the data.

Configuring Eclipse Kepler

By default, Eclipse Kepler comes with a way too large font for the package explorer and a pretty weird tab order.

Fixing both is pretty easy:

1. Set up a file called .gtkrc-eclipse in your home directory with the following contents:

style "eclipse" {
font_name = "Lucida Grande 8"
GtkButton::default_border={0,0,0,0}
GtkButton::default_outside_border={0,0,0,0}
GtkButtonBox::child_min_width=0
GtkButtonBox::child_min_height=0
GtkButtonBox::child_internal_pad_x=0
GtkButtonBox::child_internal_pad_y=0
GtkMenu::vertical-padding=1
GtkMenuBar::internal_padding=0
GtkMenuItem::horizontal_padding=4
GtkToolbar::internal-padding=0
GtkToolbar::space-size=0
GtkOptionMenu::indicator_size=0
GtkOptionMenu::indicator_spacing=0
GtkPaned::handle_size=4
GtkRange::trough_border=0
GtkRange::stepper_spacing=0
GtkScale::value_spacing=0
GtkScrolledWindow::scrollbar_spacing=0
GtkExpander::expander_size=10
GtkExpander::expander_spacing=0
GtkTreeView::vertical-separator=0
GtkTreeView::horizontal-separator=0
GtkTreeView::expander-size=8
GtkTreeView::fixed-height-mode=TRUE
GtkWidget::focus_padding=0
}

class "GtkWidget" style "eclipse"

style "gtkcompactextra" {
xthickness=0
ythickness=0
}
class "GtkButton" style "gtkcompactextra"
class "GtkToolbar" style "gtkcompactextra"
class "GtkPaned" style "gtkcompactextra"

2. Modify your eclipse start script to use this resource file by prepending it with

GTK2_RC_FILES=$GTK2_RC_FILES:/home/yourname/.gtkrc-eclipse ./eclipse

3. Install the Eclipse CSS editor and modify the appearance according to http://wiki.eclipse.org/Eclipse4/CSS , especially the part

.MPartStack {
    swt-mru-visible: true;
}

and restart Eclipse.