Plesk (11) – Redirect Webmail to HTTPS

By default, Plesk’s webmail interfaces run unsecured on port 80. That’s bad, really bad (shame on you, Parallels!).

There are some guides out there that try to fix this, but in my eyes they are all flawed. Some are written for old releases, others change files which get overwritten on updates or when the config files are regenerated.

But there’s a very simple way to do it. Plesk includes the webmail config files from a directory using a regular Apache Include, so you can simply add an additional .conf file there with a rewrite rule to HTTPS. This works fine with Plesk 11.

On CentOS (the location may differ slightly on other systems) create a new file “/etc/httpd/conf/plesk.conf.d/webmails/horde/1_ssl_redirect.conf” and add this content:

# Redirect every plain-HTTP request to HTTPS
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}

Because Plesk also supports two other webmail clients, copy the file to the “atmail” and “roundcube” directories (or add symlinks).

Now restart Apache and enjoy.
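On CentOS, putting it together could look roughly like this (paths as above, symlinks instead of copies; the Apache service is assumed to be called httpd):

cd /etc/httpd/conf/plesk.conf.d/webmails/
# reuse the Horde rule for the other two webmail clients
ln -s ../horde/1_ssl_redirect.conf atmail/1_ssl_redirect.conf
ln -s ../horde/1_ssl_redirect.conf roundcube/1_ssl_redirect.conf
# restart Apache to load the new config
service httpd restart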

I tested it myself with Horde and Atmail, but not Roundcube.

Posted in Apache, CentOS, Linux, Plesk | 3 Comments

Limit/prevent SSH brute force attempts

If you (have to) run a publicly available SSH server, you have probably already noticed that there are a lot of brute force attacks trying to guess a user and password (have a look at /var/log/secure). If you did it the correct way, you of course only allow public key authentication. But maybe you can’t, because the client doesn’t support key authentication, Cisco routers/switches for example. In that case (I personally do this on hosts which only allow public keys, too) you can add some rules to your iptables configuration (/etc/sysconfig/iptables on CentOS in this example):

### SSH - Do not block internal hosts/networks
-A INPUT -m state --state NEW -m tcp -p tcp --source 192.168.123.45 --dport 22 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --source 192.168.234.0/24 --dport 22 -j ACCEPT
### SSH - Block brute force attempts
-A INPUT -m state --state NEW -p tcp --dport 22 -m recent --set --name ssh --rsource
-A INPUT -m state --state NEW -p tcp --dport 22 -m recent --update --seconds 3600 --hitcount 20 --rttl --name ssh --rsource -j REJECT
-A INPUT -m state --state NEW -p tcp --dport 22 -m recent --update --seconds 3600 --hitcount 19 --rttl --name ssh --rsource -j LOG --log-prefix "SSH brute force attempt: "
-A INPUT -m state --state NEW -p tcp --dport 22 -m recent --update --seconds 3600 --hitcount 19 --rttl --name ssh --rsource -j REJECT
-A INPUT -m state --state NEW -p tcp --dport 22 -m recent --update --seconds 60 --hitcount 5 --rttl --name ssh --rsource -j REJECT
### SSH - No limit reached
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
  • Lines 2 and 3 add some exceptions which should never be blocked
  • Line 5 records the packet’s source IP in a list named “ssh”
  • Line 6 rejects all packets to port 22 if there were 20 or more attempts within an hour
  • Line 7 logs the packet on the 19th attempt and line 8 rejects it
  • Line 9 rejects the packet if there were five or more attempts within a minute
  • Line 11 finally accepts the packet if it was not rejected by one of the rules above

This way you achieve the following:

  • After the 4th attempt the request is rejected, but after a minute it will work again
  • On the 19th attempt it is logged once; all further attempts within an hour are rejected without logging each of them

Notes:
By default the recent module remembers at most 20 packets per source address, and you will get an error if you try to use a larger hitcount. You should be able to increase the limit with the modprobe option “options xt_recent ip_pkt_list_tot=100” (never tried it myself because 20 is enough for me).
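A minimal sketch of how that could look (the file name is my own choice, and as said, I have not tested this myself):

# /etc/modprobe.d/xt_recent.conf
# remember up to 100 packets per source instead of the default 20
options xt_recent ip_pkt_list_tot=100

The option only takes effect when the xt_recent module is (re)loaded.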

To flush the list you can execute “echo / > /proc/net/xt_recent/ssh”.
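The same proc file can also be used to inspect the list or to remove a single address (the IP below is just an example):

# show all currently tracked source addresses
cat /proc/net/xt_recent/ssh
# remove one address from the list
echo -192.0.2.1 > /proc/net/xt_recent/ssh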

Posted in CentOS, Firewalls, Linux, Security | 1 Comment

Quick tip: [bash] Execute multiple files in a directory at once

Today I created a bunch of scripts which I have to execute one by one from time to time. The order doesn’t matter, so I can simply use this one-liner:

for script in *.pl ; do ./$script ; done
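One caveat: if no file matches the pattern, bash passes the literal string *.pl to the loop, and the unquoted variable breaks on file names with spaces. A slightly more defensive variant could look like this:

shopt -s nullglob                    # loop zero times if nothing matches
for script in ./*.pl ; do
  [ -x "$script" ] && "$script"      # only run files that are executable
done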

Posted in Linux, Quick tip | Leave a comment

Slow DNS lookup on CentOS 6 machines

Slow DNS lookups can have many causes. Mostly they are easy to fix because it is simply a wrong IP address of the DNS server, for example. But today I had a harder one, although it was easy to fix once you know how…

Looking up an address with dig works fine every time, but when puppet fetches its plugins or other files from the puppet master, it takes five seconds per file and puppet fails with an “execution expired” error. So I started to debug it.
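A quick way to reproduce the delay outside of puppet is to compare a direct DNS query with a lookup that goes through the glibc resolver (the host name is just a placeholder):

# queries the DNS server directly, returns immediately
time dig +short puppetmaster.example.com
# goes through glibc/NSS like puppet does and shows the delay if the resolver is affected
time getent hosts puppetmaster.example.com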

It’s most likely not a bug in puppet, because I know that our CentOS 5 clients work just fine.

Next, tcpdump. It took five seconds until I saw a connection on port 8140. As expected, not an error within puppet. Next, DNS. Now it’s getting interesting.

The client sends out two DNS requests, one for the A record and one for the AAAA record; since glibc 2.9 they are sent in parallel by default. But I only get one answer back, and the resolver keeps waiting for the second. Looks like a problem with the DNS proxy which sits on our firewall, or with the DNS server behind it. Not sure yet.
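To see this on the wire, capturing the DNS traffic is enough (the interface name is an assumption, adjust it to your system):

# watch the outgoing A/AAAA queries and the replies coming back
tcpdump -nn -i eth0 udp port 53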

The man page of resolv.conf describes two options which could fix the problem:

single-request (since glibc 2.10)
  sets RES_SNGLKUP in _res.options. By default, glibc
  performs IPv4 and IPv6 lookups in parallel since
  version 2.9. Some appliance DNS servers cannot handle
  these queries properly and make the requests time out.
  This option disables the behavior and makes glibc
  perform the IPv6 and IPv4 requests sequentially (at the
  cost of some slowdown of the resolving process).

single-request-reopen (since glibc 2.9)
  The resolver uses the same socket for the A and AAAA
  requests. Some hardware mistakenly sends back only one
  reply. When that happens the client system will sit
  and wait for the second reply. Turning this option on
  changes this behavior so that if two requests from the
  same port are not handled correctly it will close the
  socket and open a new one before sending the second
  request.

The second option sounds exactly like my problem. Added “options single-request-reopen” to resolv.conf, and bang… it works!
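For reference, the resulting /etc/resolv.conf could look roughly like this (the search domain and nameserver IP are just placeholders). Keep in mind that dhclient or NetworkManager may overwrite this file, so you might have to make the change persistent:

# /etc/resolv.conf
options single-request-reopen
search example.com
nameserver 192.0.2.53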

Posted in CentOS, Linux | Leave a comment

Quickly find large vmware.log files

Because one of our VMs had an error while taking a snapshot, I wanted to have a look inside its vmware.log and noticed that it was more than three GB in size. That’s not normal at all.

In vSphere 5.1 VMware changed the way it rotates the vmware.log file. The setting “log.rotateSize” doesn’t exist anymore; logging is now limited by a bandwidth/rate limit instead, which is fine in most cases.

If that happened to one machine, it may also have happened to others. Because we only have four vSphere hosts which all share the same datastores, I could use a simple one-liner to get the log file size of all VMs:

cd /vmfs/volumes/; ls -lhdS [A-Z]*/*/vmware.log | head -10

This shows the 10 biggest log files. To prevent datastores from being listed twice, once by name and once by ID, I limited it to datastores starting with a capital letter (all our datastores start with an upper case letter; you may have to adjust the command to fit your environment).

To manually rotate the log, you can do two things as far as I know:

  1. Shut the VM down and power it off/on again (see the sketch below)
  2. Simply vMotion the VM (not the datastore) to another host
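For option 1, the power cycle can also be done from the ESXi shell; a rough sketch (take the VM ID placeholder from the list printed by the first command):

# get the IDs of all registered VMs
vim-cmd vmsvc/getallvms
# cleanly shut down the guest (needs VMware Tools), then power it on again
vim-cmd vmsvc/power.shutdown <vmid>
vim-cmd vmsvc/power.on <vmid>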
Posted in VMware | Leave a comment