Mounting Windows shares in a robust way

If you have to mount a share from an unreliable (as in “not available 24/7”) Windows machine, the traditional way with mount.cifs and /etc/fstab is a pain in the ass, since you will have to remount manually every now and then.

Let’s use the much better way with autofs:

We assume that you want to mount your private “Documents” Windows share from the machine with the IP address 192.168.1.116:

  1. Install autofs: sudo apt-get install autofs
  2. As root, add the following line to /etc/auto.master:
    /cifs /etc/auto.smb.top --timeout=60

    This line makes autofs mount SMB shares on demand under /cifs and unmount them again after 60 seconds of inactivity.

  3. Create as root /etc/auto.smb.top with the following content:
    * -fstype=autofs,-Dhost=& file:/etc/auto.smb.sub

    This line matches any host name or IP address requested below /cifs (the * wildcard), passes it on as the host variable (-Dhost=&) and delegates the actual mounting to /etc/auto.smb.sub.

  4. Create as root /etc/auto.smb.sub with the following content:
    * -fstype=cifs,credentials=/home/youruser/.smbcredentials,uid=1000,gid=100 ://${host}/&

    Here, each share of that host is mounted via cifs with the credentials from /home/youruser/.smbcredentials; adjust uid and gid to your local user and group IDs:

  5. Create as your user /home/youruser/.smbcredentials with the following content:
    username=yourwindowsusername
    password=yourwindowspassword

    Set its permissions to 600 with chmod 600 /home/youruser/.smbcredentials

  6. Finally, restart the autofs service as root with service autofs restart

Now you can access (or symlink) your Windows share “Documents” under /cifs/192.168.1.116/Users/yourwindowsusername/Documents, and you no longer have to worry about unresponsive Windows shares blocking your file manager.

 

Credits go to the CentOS Wiki, which gave the idea at https://wiki.centos.org/TipsAndTricks/WindowsShares

Learnings with Spring Boot and Thymeleaf

Creating a visually appealing web application from scratch with Spring Boot and Thymeleaf is pretty straightforward and fast if you take care of the following points:

Take care of conventions

  • Put your messages into src/main/resources/messages.properties (the default resource bundle basename is messages)
  • Place static web content under src/main/resources/static (this becomes the root directory for static content like images, CSS and JavaScript files)
  • The Thymeleaf templates must go into src/main/resources/templates and, if required, subdirectories thereof (see the sketch below)
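
To illustrate these conventions, a minimal controller could look like the following sketch (class, mapping and template name are made up; it expects a template at src/main/resources/templates/books.html):

import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.RequestMapping;

@Controller
public class BookController {

    @RequestMapping("/books")
    public String books(Model model) {
        // can be rendered in the template, e.g. with th:text="${pageTitle}"
        // (or via #{...} for keys from messages.properties)
        model.addAttribute("pageTitle", "My books");
        // resolved by Thymeleaf to src/main/resources/templates/books.html
        return "books";
    }
}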

Be clean on configuration

  • Use src/main/resources/application.properties as sparingly as possible
  • Use src/main/resources/applicationContext.xml as sparingly as possible (ideally not at all)
  • Try to use annotation-based configuration as much as possible

All those points are very important: if you don’t follow them, you will get into trouble as soon as you try to run your application standalone instead of from your IDE!
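
As an example of annotation-based configuration, the whole application can be bootstrapped from a single class instead of an applicationContext.xml; a minimal sketch (package and class name are placeholders):

package com.example.myapp;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// @SpringBootApplication combines @Configuration, @EnableAutoConfiguration
// and @ComponentScan, so no XML configuration is needed
@SpringBootApplication
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}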

Code analysis with SonarQube, JaCoCo and Gradle

When you work on a Java project, you want to get an idea of your code quality.

Of course, “good” code doesn’t mean the code is error-free, but on the other hand, if your code is rated “bad”, you can be pretty sure that it will become unmaintainable very soon.

Because of this, tools like SonarQube can be helpful to give an unbiased insight into how good your code is, measured against established coding standards.

First, you have to set up a SonarQube server, which is a very easy task if you’re on an Ubuntu system:

Add the following line to your /etc/apt/sources.list:

deb http://downloads.sourceforge.net/project/sonar-pkg/deb binary/

and then run the well-known

apt-get update
apt-get install sonar

commands.

Assuming that you already have a PostgreSQL database running, create a database user “sonar” with password “sonar” and enable the few PostgreSQL-related settings in /opt/sonar/conf/sonar.properties.

Finally, as root, start SonarQube with

/etc/init.d/sonar start

and maybe add it to /etc/rc.local

The next step is to prepare your project’s build.gradle so that SonarQube is not only filled with data, but also at least measures your test coverage.

The relevant parts are:

apply plugin: "sonar-runner"
apply plugin: "jacoco"

sonarRunner {
        sonarProperties {
                property "sonar.host.url", "http://localhost:9000"
                property "sonar.jdbc.url", "jdbc:postgresql://localhost:5432/sonar"
                property "sonar.jdbc.driverClassName", "org.postgresql.Driver"
                property "sonar.username", "sonar"
                property "sonar.password", "sonar"
                property "sonar.projectName", "rmmusic"
                property "sonar.jacoco.reportPath", "build/jacoco/test.exec"
                property "sonar.java.source property", "1.8"
        }
}

jacoco {
    reportsDir = file("build/tmp/jacoco.exec")
}

Additionally, log in to your SonarQube instance as the admin user and add a few plugins under Settings -> System -> Update Center:

  • Java
  • Checkstyle
  • Sonargraph
  • PMD
  • Timeline
  • Findbugs

and restart SonarQube.

As the admin user, you should then activate a quality profile, e.g. the FindBugs profile.

Now, when you run the Gradle task sonarRunner, all those checks are executed automatically and you’ll get detailed insight into your code and its quality.

A short look at Java 8 streams

With Java 8, along with the new language feature of lambdas, the concept of streams was also introduced, and if you look at streams, you will certainly use lambdas, too.

The advantage of streams is that they increase the understandability and readability of your code. In theory, if you use parallel streams the correct way, you can also speed up your processing, but from my observations that won’t happen if you only work with small datasets and/or simple operations.

To show you how to use streams, let’s implement a small task:

Imagine you have a record collection system and want to calculate the value of your collection and the average price per record, counting only those records for which you still know how much you paid (that’s not necessarily the case for all of your records!).

In a traditional approach, you would implement it more or less like this:

List<Medium> media = mediumRepository.findAll();

double sumValue = 0;
long boughtMediaCount=0;
for (Medium medium : media) {
  if (medium.getBuyPrice() != null) {
    sumValue += medium.getBuyPrice();
    boughtMediaCount++;
  }
}

System.out.println("Total price="+String.format("EUR %.02f", sumValue));
System.out.println("Average price="+String.format("EUR %.02f", (sumValue / (double) boughtMediaCount)))

Now, let’s analyze what we do here:

After retrieving the whole dataset as a list, we iterate over each element. In each iteration we check whether the property “buyPrice” is set, and if so, we add that value to the total and increase the counter for records where we know the price. At the end, we want two values: the total price and the average price.

In other words:

We look at each element (“stream“), only process one single property (“map“), only use those properties with a certain value (“filter“) and calculate a result (“collect“).

That description can now be nicely transformed into Java 8 code, which is almost identical to the non-technical description above:

List<Medium> media = mediumRepository.findAll();

Averager averagePrice = media.stream().
    map(Medium::getBuyPrice).
    filter(v -> v != null).
    collect(Averager::new, Averager::accept, Averager::combine);

System.out.println("Total price="+String.format("EUR %.02f", averagePrice.getTotal()));
System.out.println("Average price="+String.format("EUR %.02f", averagePrice.getAverage()));

Isn’t that nice? No brace hell any more, no boring iterations.

Ok, you have to use an extra class, the Averager, which looks like this:

public class Averager implements DoubleConsumer {
    private double total=0;
    private int count=0;

    public double getAverage() {
        return count>0? (total/(double)count) : 0;
    }

    public int getCount() {
        return count;
    }

    public void combine(Averager other) {
        total += other.total;
        count += other.count;
    }

    @Override
    public void accept(double value) {
        total += value;
        count++;
    }

    // andThen() is inherited as a default method from DoubleConsumer,
    // so it does not need to be overridden here

    public double getTotal() {
        return total;
    }
}

For one single occurrence, you will write a bit more code here (quite a bit more, actually), but even then, your readability and testability increase, and that’s what finally counts.
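
As a side note: if you only need the sum and the average, a sketch using the JDK’s built-in DoubleSummaryStatistics (with the same Medium and mediumRepository as above) gets by without a custom class:

// needs java.util.Objects and java.util.DoubleSummaryStatistics;
// summaryStatistics() collects count, sum, min, max and average in one pass
List<Medium> media = mediumRepository.findAll();

DoubleSummaryStatistics stats = media.stream().
    map(Medium::getBuyPrice).
    filter(Objects::nonNull).
    mapToDouble(Double::doubleValue).
    summaryStatistics();

System.out.println("Total price="+String.format("EUR %.02f", stats.getSum()));
System.out.println("Average price="+String.format("EUR %.02f", stats.getAverage()));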

A few final observations:

  • You can parallelize the work on your stream by using the parallel() method of the stream API (or parallelStream() on the collection). But be warned that, as with every parallelization, there are cases where you actually lose performance.
  • The order of invoking the stream methods is important:
    In my example, using filter() before map() (see the sketch after this list) is faster on a sequentially executed stream, but equally fast or even slower on a parallel stream.
  • On small datasets (in my benchmarks I worked with roughly 1000 items), the traditional approach with the for loop is much faster than working with streams. I don’t know how much that changes with larger datasets and/or more complex items.
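
For completeness, here is a sketch of the two variants mentioned above, reusing the Medium and Averager classes from before:

// filter() before map(): the null check is done on the Medium objects
Averager sequentialResult = media.stream().
    filter(m -> m.getBuyPrice() != null).
    map(Medium::getBuyPrice).
    collect(Averager::new, Averager::accept, Averager::combine);

// the same pipeline executed in parallel; collect() merges the
// per-thread Averager instances via combine()
Averager parallelResult = media.parallelStream().
    filter(m -> m.getBuyPrice() != null).
    map(Medium::getBuyPrice).
    collect(Averager::new, Averager::accept, Averager::combine);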

 

Using nagios4 as a babysitter for your environment

Out of the many different monitoring solutions available, Nagios is one of the most widely used. The following article briefly describes how to set up some useful monitoring tasks:

Defining custom checks

In /etc/nagios4/commands.cfg you can define custom commands. Let’s do this with an HTTP check that matches the response body against a regex:

define command {
 command_name check_http_host_regex
 command_line $USER1$/check_http -H $HOSTADDRESS$ -p $ARG1$ -u $ARG2$ -r $ARG3$ -t30
}

This check is to be called from your /etc/nagios4/$cfg_dir/your_host.cfg:

define service {
 use generic-service
 host_name your.host.name
 service_description nginx_http_result
 is_volatile 0
 notification_options c,r
 check_command check_http_host_regex!80!/foo.css!your_regex
}
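
In this example, the arguments separated by the exclamation marks are passed to the command definition above as $ARG1$ (the port 80), $ARG2$ (the URI /foo.css) and $ARG3$ (the regular expression the response body has to match).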

After that, you can run nagios4 -v /etc/nagios4/nagios.cfg to verify that your configuration is valid.

Machine monitoring with OpenTSDB

(Partially updated in July 2016)

Inspired by a nice presentation at http://de.slideshare.net/oliverhankeln/opentsdb-metrics-for-a-distributed-world, I wanted to set up an OpenTSDB environment on my machine to replace the old Munin monitoring I’m still using and fighting with.

The following guide describes the steps for setting up OpenTSDB monitoring on an Ubuntu machine.

A word on disk space

According to the SlideShare presentation referenced above, a data point consumes less than 3 bytes on disk if compressed and less than 40 bytes if uncompressed.

With those numbers in mind, you should be able to estimate how much data you will gather over the next year(s).
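
As a rough worked example: one metric collected every 15 seconds produces about 2.1 million data points per year, which translates to roughly 6 MB compressed (or about 80 MB uncompressed); a hundred such metrics would therefore need on the order of 600 MB of compressed disk space per year.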

Installing HBase

Follow https://hbase.apache.org/book/quickstart.html, download the latest binary (at the time of writing: hbase-1.2.2-bin.tar.gz ) and install it e.g. in /opt:

clorenz@machine:~/Downloads $ cd /opt
clorenz@machine:/opt $ tar -xzvf ~/Downloads/hbase-1.2.2-bin.tar.gz
clorenz@machine:/opt $ ln -s hbase-1.2.2 hbase

Next, edit conf/hbase-site.xml:

<configuration>
 <property>
  <name>hbase.zookeeper.quorum</name>
  <value>127.0.0.1</value>
 </property>
 <property>
  <name>hbase.rootdir</name>
  <value>file:///opt/hbase</value>
 </property>
 <property>
  <name>hbase.zookeeper.property.dataDir</name>
  <value>/opt/zookeeper</value>
 </property>
</configuration>

If you already have a running ZooKeeper instance, you must instruct HBase not to start its own ZooKeeper. For that, add the following configuration to conf/hbase-site.xml:

<property>
 <name>hbase.cluster.distributed</name>
 <value>true</value>
</property>

And in conf/hbase-env.sh set the following line:

# Tell HBase whether it should manage it's own instance of Zookeeper or not.
export HBASE_MANAGES_ZK=false

Now, in any case, regardless of zookeeper, continue and edit conf/hbase-env.sh:

...
export JAVA_HOME=/opt/java8
...

Take care that your local hostname is resolved properly; the easiest way to check is:

clorenz@machine:/opt/hbase $ ping machine
PING localhost (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.050 ms
64 bytes from localhost (127.0.0.1): icmp_seq=2 ttl=64 time=0.042 ms
^C

Finally, start hbase by running

clorenz@machine:/opt/hbase $ ./bin/start-hbase.sh

With ps -ef | grep -i hbase you can verify that your HBase instance is running properly:

clorenz@machine:/opt/hbase/logs $ ps -ef | grep -i hbase
root 31701 2795 0 14:37 pts/2 00:00:00 bash /opt/hbase-0.98.9-hadoop2/bin/hbase-daemon.sh --config /opt/hbase-0.98.9-hadoop2/bin/../conf internal_start master
root 31715 31701 44 14:37 pts/2 00:00:07 /opt/java7/bin/java -Dproc_master -XX:OnOutOfMemoryError=kill -9 %p -Xmx1000m -XX:+UseConcMarkSweepGC -Dhbase.log.dir=/opt/hbase-0.98.9-hadoop2/bin/../logs -Dhbase.log.file=hbase-root-master-ls023.log -Dhbase.home.dir=/opt/hbase-0.98.9-hadoop2/bin/.. -Dhbase.id.str=root -Dhbase.root.logger=INFO,RFA -Dhbase.security.logger=INFO,RFAS org.apache.hadoop.hbase.master.HMaster start

Congratulations: You’ve finished your first step. Let’s take the next one:

Installing OpenTSDB

First, download the latest source code from GitHub:

clorenz@machine:/opt/git $ git clone git://github.com/OpenTSDB/opentsdb.git
Cloning into 'opentsdb'...
remote: Counting objects: 5518, done.
remote: Total 5518 (delta 0), reused 0 (delta 0)
Receiving objects: 100% (5518/5518), 27.09 MiB | 6.39 MiB/s, done.
Resolving deltas: 100% (3704/3704), done.
Checking connectivity... done.

Next, build a debian package:

clorenz@machine:/opt/git/opentsdb (master)$ sh build.sh debian

If you encounter an error (e.g. ./bootstrap: 17: exec: autoreconf: not found), it’s likely that you’re missing prerequisite packages. Be sure to install at least the following ones:

  • dh-autoreconf
  • gnuplot

If everything went well with the debian build, you can install it:

clorenz@machine:/opt/git/opentsdb (master)$ sudo dpkg -i build/opentsdb-2.2.1-SNAPSHOT/opentsdb-2.2.1-SNAPSHOT_all.deb

Initial preparations for OpenTSDB

Before you can run OpenTSDB, you have to create the HBase tables:

clorenz@machine:/opt/git/opentsdb (master)$ env COMPRESSION=GZ HBASE_HOME=/opt/hbase ./src/create_table.sh

and at least in the beginning it is helpful if OpenTSDB creates metrics automatically. For that, set the following line in /etc/opentsdb/opentsdb.conf:

tsd.core.auto_create_metrics = true

Starting OpenTSDB

sudo service opentsdb start

When you access http://localhost:4242/ you will see the OpenTSDB GUI.

Now it’s time to start gathering data. We’ll use TCollector for the most basic data:

Installing TCollector

Again, we fetch the source code from GitHub:

clorenz@machine:/opt/git $ git clone git://github.com/OpenTSDB/tcollector.git

Let’s configure tcollector so that it uses our own OpenTSDB instance by adding a single line to /opt/git/tcollector/startstop:

TSD_HOST=localhost

Starting tcollector is pretty easy:

clorenz@machine:/opt/git/tcollector (master *)$ sudo ./startstop start

It’s done!

Now you can access your very first graph in the interface by selecting a timeframe and the metric df.bytes.free. You should see a graph!

Writing custom collectors

Any collector writes one or more lines with the following format:

metric timestamp value tag1=data1 tag2=data2
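
For example, the custom collector shown below emits a line like wavfiles.total 1468929600 42 type=yesterdayradio, i.e. a metric name, a Unix timestamp, the value and one tag.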

Within the collectors subdirectory of your tcollector installation there are numerical subdirectories, which denote how often a collector is executed. A directory name of 0 stands for “runs indefinitely, like a daemon”; values greater than zero stand for “runs every n seconds”.

With that in mind, it should be fairly easy to write your own collectors now, like in the following example. Note that collectors are not necessarily written in Python; you can basically use any language (a Java sketch follows further below).

#!/usr/bin/python
import os
import sys
import time
import glob

from collectors.lib import utils

def main():
    # emit one data point per run: the number of matching wav files
    ts = int(time.time())
    yesterdayradio = glob.glob('/home/clorenz/data/wav/yesterdayradio-*')

    # format: metric timestamp value tag1=value1 ...
    print "wavfiles.total %d %d type=yesterdayradio" % (ts, len(yesterdayradio))

    sys.stdout.flush()


if __name__ == "__main__":
    sys.stdin.close()
    sys.exit(main())

You can test your collector by executing it in the shell (placing it under collectors/300 means that tcollector runs it every 300 seconds):

PYTHONPATH=/opt/tcollector /usr/bin/python /opt/tcollector/collectors/300/mystuff.py

Pretty straightforward, isn’t it?
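
Since a collector only has to print lines in this format to stdout, it does not have to be written in Python. Purely as an illustration, a hypothetical collector doing the same as the Python example could look like this in Java (path, metric name and class name are just taken over from the example above):

import java.io.File;

// Hypothetical tcollector collector in Java: counts the matching wav files
// and prints a single data point in the "metric timestamp value tags" format.
public class WavFileCollector {

    public static void main(String[] args) {
        long timestamp = System.currentTimeMillis() / 1000L;
        File dir = new File("/home/clorenz/data/wav");
        File[] matches = dir.listFiles((d, name) -> name.startsWith("yesterdayradio-"));
        int count = (matches == null) ? 0 : matches.length;

        System.out.println("wavfiles.total " + timestamp + " " + count + " type=yesterdayradio");
        System.out.flush();
    }
}

You would then wrap it into a small script inside the numerical collectors directory that starts java with the proper classpath.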

If you generated wrong data for some reason, you can delete it. But beware: this command is very dangerous, and the “1h-ago” parameter in the following call effectively means “now”, since the resolution is about one hour:

/usr/share/opentsdb/bin $ sudo ./tsdb scan --delete 2h-ago 1h-ago sum wavfiles.total type=*

Find more about manipulating the raw collected data at http://opentsdb.net/docs/build/html/user_guide/cli/scan.html

Let’s now polish the whole installation with a nicer frontend to get a real dashboard:

Installing Status Wolf as frontend

Since the standard GUI of OpenTSDB is a little raw, it’s a good idea to install an alternative. The currently best-looking one (not only visually, but also in terms of features, like anomaly detection) is StatusWolf. To install StatusWolf, you only need a few steps:

  • install apache2
  • install libapache2-mod-php5
  • install php5-mysql
  • install php5-curl
  • ensure that mod_rewrite is working:
    sudo a2enmod rewrite
    sudo a2enmod actions
    sudo service apache2 restart
  • download StatusWolf and install it into /opt
  • install pkg-php-tools
  • install composer ( https://getcomposer.org/download/ ):
    sudo mkdir -p /usr/local/bin
    sudo chown clorenz /usr/local/bin
    curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
  • install mysql-server (remember the root user password for later creation of the database)
  • Follow the StatusWolf setup instructions (be sure to remove all comment lines in the JSON configuration!)
  • Ensure that all files belong to the www-data user:
    clorenz@machine:/opt $ sudo chown -R www-data StatusWolf
  • Create /etc/apache2/sites-available/statuswolf.conf:
    Listen 9653
    <VirtualHost your.host.name:9653>
     ServerRoot /opt/StatusWolf
     DocumentRoot /opt/StatusWolf
     <Directory /opt/StatusWolf>
     Order allow,deny
     Allow from all
     Options FollowSymLinks
     AllowOverride All
     Require all granted
     </Directory>
    </VirtualHost>
  • Link this file into /etc/apache2/sites-enabled (e.g. with a2ensite statuswolf)
  • Ensure that in /etc/apache2/mods-available/php5.conf the php_admin_flag is disabled:
    # php_admin_flag engine Off
  • Create a user in the database (please use different values unless you want to create a security hole!):
    mysql statuswolf -u statuswolf -p
    INSERT INTO auth VALUES('statuswolf',MD5('statuswolf'),'Statuswolf User');
    insert into users values(2,'statuswolf','ROLE_SUPER_USER','mysql');

Selecting an ORM for Android

When you develop an app for Android, sooner or later you will have to answer the question of how you will manage your data.

Unless you want to mess with flat files, you will most certainly use the standard SQL database mechanism Android offers, SQLite. And when you use it, you’ll need a decent ORM.

Manual database access

In the earlier days of Android, the usual way of working with SQLite was to write your own ContentProvider, fiddling around with ContentResolver, Uri, manual data mapping, managed SQL queries and so on.

Of course, you can still do that, but since programming for Android is generally pretty ugly, writing your own database layer is even uglier, and I would not recommend it any more unless you really have a reason to.

SugarORM

At W-JAX 2013, there was a talk about recommendable Android libraries, and SugarORM was one of them.

SugarORM is a small and lightweight OR mapper which allows really easy access to simple database structures. It even offers a mechanism for simple database schema modifications and allows very clean data access like this:

Book.findById(Book.class, 1);

Nice, isn’t it?

However, if you have a more complex database with more than just text and integer fields, or if you have to implement a database migration that cannot be done with plain SQL statements, you are unfortunately lost.

Since the community around SugarORM is pretty small, it’s not to be expected that it will become production-ready for complex databases. Anything beyond simple database object access results in writing manual code with limited SQL statements. Retrieving just one or two fields of a table is not possible, and writing more complex database migration scripts is not supported. I managed to find a hack around the limitations of the database migration, basically by bypassing SugarORM, but finally failed on the fact that the timestamp field of one of my tables was not read in correctly. The necessary work of forking and patching SugarORM wasn’t worth the effort for me, since I then found…

GreenDAO

If you take a look at which Android ORMs are really popular, you can’t get around greenDAO, an SQLite ORM that has not only been popular for many years, but is also designed with performance in mind. It’s no surprise that some of the top apps on the market, like Pinterest, AirDroid or ICQ, use it.

Although greenDAO is a little more complicated to use than SugarORM, it still offers a very nice database abstraction layer, supports the standard (powerful!) SQLite database migration mechanism without proprietary interference, and does not suffer from bugs like incorrectly implemented database fields. There’s no timestamp field, but using date is fine, as long as you’re aware of it. The only minor drawback is that you have to pass the Context around, which uglifies your code a little bit. But hey, Android is ugly 🙂

A speciality of greenDAO is its generator, which means that you don’t have to work with SQL commands, not even for creating your database schema. But if you want to use SQL, of course you still can.
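
To give a rough impression of the generator, a minimal sketch based on the greenDAO 2.x generator API could look like this (entity, package and output path are made-up placeholders):

import de.greenrobot.daogenerator.DaoGenerator;
import de.greenrobot.daogenerator.Entity;
import de.greenrobot.daogenerator.Schema;

// Run as a plain Java program; it generates the entity and DAO classes
// into the given source folder of your Android project.
public class MyDaoGenerator {

    public static void main(String[] args) throws Exception {
        Schema schema = new Schema(1, "com.example.myapp.db"); // schema version 1

        Entity record = schema.addEntity("Record");
        record.addIdProperty();
        record.addStringProperty("title");
        record.addDateProperty("boughtAt"); // greenDAO has no timestamp type, but date works fine

        new DaoGenerator().generateAll(schema, "./app/src-gen");
    }
}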

At blog.surecase.eu there’s a great beginners’ tutorial. If you follow it, you will have your database access up and running within a few minutes.

jmap – so small, so powerful

I wrote an article about jmap some time ago, but since I had a real-world incident recently, I want to present jmap again, because it’s so incredibly helpful:

Recently, we encountered strange behaviour with a Java service which had been running for several weeks without interruption. Every twenty minutes it showed really weird symptoms, comparable to a short-term amnesia.

Logfiles, network stats, manual tests: everything was OK apart from the actual amnesia effects. No trace of a real problem.

That’s when I brought jmap into the game. It’s a really, really useful tool to inspect your running Java process, and most importantly its heap. A simple command

jmap -histo 1234

for the Java process 1234 (find it with ps -ef | grep java) showed me that we had three million objects with “connection” in their name on the heap. Since I knew that we don’t even need thirty thousand connections at the same time, it was a clear indication that we had some kind of leak here.

Since it was a third-party application, we decided that instead of time-consuming debugging we would simply restart the service, and voilà: all problems were solved.

So if you encounter weird behaviour, it’s not the worst idea to take a look at the heap of your process to check whether you have a memory and/or object leak somewhere.

Upgrading Joomla 2.5 -> 3.3.x

Upgrading Joomla 2.5 (LTS) to 3.3.x (STS) is pretty hard if you’re using plugins or have reconfigured your system. I haven’t yet found the complete way to do the upgrade, but I’m trying to do it on a staging system. Here’s a guide on how to prepare the staging system:

1. Make a backup of your web content, e.g. with gFTP. Gzip it and keep it in a safe place, because you’ll most certainly need it again, since the Joomla update will fail for sure! After taking the backup, install it locally:

clorenz@christoph ~ $ sudo rm -rf uhrenbastler
tar -xzvf uhrenbastler.tar.gz
sudo chown -R www-data uhrenbastler

2. Make a backup of your MySQL database and install it locally, too:

#!/bin/bash

JOOMLA_DB="xxxxxxxxxx"
JOOMLA_DB_PASSWORD="yyyyyyyyy"
echo "Copying remote joomla db to localhost"

mysqldump $JOOMLA_DB --verbose --add-drop-table --host=www.christophlorenz.de --user=$JOOMLA_DB --password=$JOOMLA_DB_PASSWORD > /tmp/$JOOMLA_DB.dump
mysqladmin -f drop $JOOMLA_DB --user=$JOOMLA_DB --password=$JOOMLA_DB_PASSWORD
mysqladmin -f create $JOOMLA_DB --user=$JOOMLA_DB --password=$JOOMLA_DB_PASSWORD
mysql $JOOMLA_DB --user=$JOOMLA_DB --password=$JOOMLA_DB_PASSWORD < /tmp/$JOOMLA_DB.dump

If the user doesn’t exist yet, you must create it the following way (replace the $JOOMLA_… placeholders with the actual values, since the mysql prompt does not expand shell variables):

$ mysql --user=root --password mysql
mysql> CREATE USER '$JOOMLA_DB'@'localhost' IDENTIFIED BY '$JOOMLA_PASSWORD';
mysql> GRANT ALL PRIVILEGES ON *.* TO '$JOOMLA_DB'@'localhost' WITH GRANT OPTION;
mysql> flush privileges;
mysql> \q

3. Deactivate the “Remember Me” plugin (filename “remember”)

4. Uninstall the following plugins:

  • CacheControl plugin
  • Include Content Item (NOT, ouch!)
  • n3t template
  • widgetkit
  • Advanced Google Analytics
  • Xmap (ouch! But there’s a replacement)
  • Mavik Thumbnails (maybe)
  • EasyImageCaption (ouch!, but optional)

5. Update patched “languagedomains” from localhost

6. Select “Options” -> “Short Term Support”

7. “Erweiterungen” (Extensions) -> “Aktualisierungen” (Updates): clear the update cache

8. Do the upgrade (“Site” -> “Kontrollzentrum”, i.e. the control panel)

9. Reinstall the following plugins

  • Mavik Thumbnails (3)
  • EasyImageCaption or Multithumb
  • Advanced Google Analytics
  • Tooltips
  • mod_news_pro gk5
  • include_content_item (once installed, apply the patched file plugins/content/include_content_item/include_content_item/include_content_item.lib.php, where lang.id is replaced by lang.lang_id and lang.code by lang.lang_code)
  • Phoca Gallery

10. Reconfig

  • News Show Pro GK5

11. New Template and lots, really lots of work!