
Generating passwords with Perl

DES:

    mkpasswd
      perl -e 'printf "%s\n", crypt("pass", "two-letter-salt")'

MD5:

    mkpasswd --hash=md5
    perl -e 'printf "%s\n", crypt("pass", "\$1\$6-8-letter-salt\$")'

PLAIN-MD5:

    perl -MDigest::MD5 -e 'printf "{PLAIN-MD5}%s\n", Digest::MD5::md5_hex("pass")'

DIGEST-MD5:

    perl -MDigest::MD5 -e 'printf "{DIGEST-MD5}%s\n", Digest::MD5::md5_hex("user:realm:pass")'
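The same crypt() call also works for verification: re-run crypt() with the stored hash itself as the salt and compare the results. A quick sketch ("secret" and the two-letter salt "ab" are placeholder values, not from the recipes above):

```shell
# Verify a password against a stored crypt() hash by reusing the hash
# as the salt; "secret"/"ab" are placeholders.
hash=$(perl -e 'print crypt("secret", "ab")')
check=$(perl -e "print crypt('secret', '$hash')")
[ "$hash" = "$check" ] && echo "match"
```

This works because crypt() only reads the leading salt characters of its second argument, so passing the full hash back in reproduces the same hash for the same password.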

FreeBSD FTP tips & tricks

Q. Does anyone know whether the default FTP server in FreeBSD lets me give users FTP-only access (no shell) so they can upload files to their home directories?

A. The default ftpd will work with a little tweaking.

    touch /bin/ftpshell
    echo "/bin/ftpshell" >> /etc/shells

When you add your users, set their shell to /bin/ftpshell:

    echo USERNAME >> /etc/ftpchroot

The users will be able to log in via FTP and nothing else, because their shell
is a do-nothing fake shell. The ftpchroot entry locks them into their home
directory very effectively.
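Put together, the whole setup is just a few commands. The block below rehearses it against a scratch root so it can be checked without touching a real system; for real use drop the $ROOT prefix and create the account with pw useradd ("ftpuser" is a placeholder name):

```shell
# Rehearsal of the FTP-only setup in a scratch root. On the real system
# the files live in /bin and /etc, and the user is created with
# something like: pw useradd ftpuser -m -s /bin/ftpshell -G psacln
ROOT=$(mktemp -d)
mkdir -p "$ROOT/bin" "$ROOT/etc"
touch "$ROOT/bin/ftpshell"                    # empty, do-nothing shell
chmod 555 "$ROOT/bin/ftpshell"
echo "/bin/ftpshell" >> "$ROOT/etc/shells"    # ftpd only accepts listed shells
echo "ftpuser" >> "$ROOT/etc/ftpchroot"       # chroot this user to $HOME
grep -q '^ftpuser$' "$ROOT/etc/ftpchroot" && echo "ftpuser will be chrooted"
```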


How to clone an OpenVZ virtual machine

I sometimes need to clone a VPS in an OpenVZ environment, so here are three methods for the task:

first option:

    # vzctl stop 101
    Stopping VE ...
    VE was stopped
    VE is unmounted
    # cp -a /vz/private/101 /vz/private/202
    # cp /etc/vz/conf/101.conf /etc/vz/conf/202.conf
    # vzctl start 202
    Starting VE ...
    Initializing quota ...
    VE is mounted
    Setting CPU units: 1000
    VE start in progress...

the second option:

    # mkdir /vz/private/NEW_VEID
    # cd /vz/private/OLD_VEID
    # tar cf - . | ( cd /vz/private/NEW_VEID && tar xpf - )
    # cd /etc/vz/conf
    # cp OLD_VEID.conf NEW_VEID.conf

and the third option:

    # OLDVE=222 NEWVE=333 # Just an example
    # vzctl stop $OLDVE
    # mkdir /vz/root/$NEWVE
    # cp /etc/vz/conf/$OLDVE.conf /etc/vz/conf/$NEWVE.conf
    # cp -a /vz/private/$OLDVE /vz/private/$NEWVE
    # vzctl start $NEWVE; vzctl start $OLDVE
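The tar pipe in the second option is easy to get wrong (without the `&&`, the extract runs in the source directory and tar copies files onto themselves). This scratch-directory rehearsal shows the intended shape without needing an OpenVZ host; SRC/DST stand in for /vz/private/OLD_VEID and /vz/private/NEW_VEID:

```shell
# Rehearse the tar-pipe copy with two scratch directories.
SRC=$(mktemp -d); DST=$(mktemp -d)
echo "hello" > "$SRC/file"
( cd "$SRC" && tar cf - . ) | ( cd "$DST" && tar xpf - )
cat "$DST/file"    # the copy made it across with permissions preserved
```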

A Simple Firewall for Linux Server

The firewall below lets in SSH, HTTP, and FTP. To fend off SSH brute-force dictionary attacks it uses the iptables recent match module for connection rate limiting. It is intended for a host with a single interface connected to the net, e.g. a web server.

Hint: before enabling it, add this to your /etc/crontab:

*/5 *   * * *   root /etc/init.d/simplefirewall stop >> /var/log/firewall.stop

And check /var/log/firewall.stop to make sure it runs. This will open your firewall again after 5 minutes, to avoid locking yourself out. When everything works as expected, comment it out.

#!/bin/bash

# Very simple firewall for a single interface

IF="eth0"             # interface
HIPORT="1024:65535"   # high ports (don't change)

IPTABLES=$(which iptables) || IPTABLES="/usr/sbin/iptables"

case $1 in
  close)
  $IPTABLES -F
  $IPTABLES -X
  $IPTABLES -F INPUT
  $IPTABLES -F OUTPUT
  $IPTABLES -P INPUT DROP
  $IPTABLES -P OUTPUT ACCEPT
  $IPTABLES -A INPUT -p icmp --icmp-type 8 -j ACCEPT
  $IPTABLES -A INPUT -i lo -j ACCEPT
  echo "Firewall closed, all connections blocked"
  exit 0
  ;;

  stop)
  $IPTABLES -F 
  $IPTABLES -X 
  $IPTABLES -F INPUT
  $IPTABLES -F OUTPUT
  $IPTABLES -P INPUT ACCEPT
  $IPTABLES -P OUTPUT ACCEPT
  echo "Firewall stopped, all connections allowed"
  exit 0
  ;;

  start)
  # First of all, flush all rules
  $IPTABLES -F
  $IPTABLES -F -t nat
  $IPTABLES -X 
  $IPTABLES -F INPUT
  $IPTABLES -F OUTPUT
  $IPTABLES -F FORWARD

  # set default policy and create additional chains
  $IPTABLES -P INPUT DROP
  $IPTABLES -P OUTPUT DROP
  $IPTABLES -P FORWARD DROP
  $IPTABLES -N dropchain
  $IPTABLES -N ssh
  $IPTABLES -N blacklist

  # enable additional kernel security
  echo "1" > /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts
  echo "1" > /proc/sys/net/ipv4/tcp_syncookies
  echo "1" > /proc/sys/net/ipv4/icmp_ignore_bogus_error_responses
  echo "1" > /proc/sys/net/ipv4/conf/$IF/rp_filter
  echo "0" > /proc/sys/net/ipv4/conf/$IF/accept_redirects
  echo "0" > /proc/sys/net/ipv4/conf/$IF/accept_source_route
  echo "0" > /proc/sys/net/ipv4/conf/$IF/bootp_relay
  echo "1" > /proc/sys/net/ipv4/conf/$IF/log_martians

  # tune tcp params see for info:
  # http://www.ussg.iu.edu/hypermail/linux/kernel/0202.1/0436.html
  echo "30"  > /proc/sys/net/ipv4/tcp_keepalive_intvl
  echo "5"   > /proc/sys/net/ipv4/tcp_keepalive_probes
  echo "900" > /proc/sys/net/ipv4/tcp_keepalive_time

  # local processes:
  $IPTABLES -A INPUT -i lo -j ACCEPT
  $IPTABLES -A OUTPUT -o lo -j ACCEPT

  # icmp stuff:
  $IPTABLES -A INPUT -i $IF -p icmp --icmp-type echo-request -j ACCEPT
  $IPTABLES -A INPUT -i $IF -p icmp --icmp-type echo-reply -j ACCEPT
  $IPTABLES -A OUTPUT -o $IF -p icmp --icmp-type echo-request -j ACCEPT
  $IPTABLES -A OUTPUT -o $IF -p icmp --icmp-type echo-reply -j ACCEPT
  $IPTABLES -A OUTPUT -o $IF -p icmp --icmp-type source-quench -j ACCEPT
  $IPTABLES -A INPUT -i $IF -p icmp --icmp-type time-exceeded -j ACCEPT
  $IPTABLES -A OUTPUT -o $IF -p icmp --icmp-type time-exceeded -j ACCEPT
  $IPTABLES -A INPUT -i $IF -p icmp --icmp-type parameter-problem -j ACCEPT
  $IPTABLES -A OUTPUT -o $IF -p icmp --icmp-type parameter-problem -j ACCEPT
  $IPTABLES -A INPUT -i $IF -p icmp --icmp-type fragmentation-needed -j ACCEPT
  $IPTABLES -A OUTPUT -o $IF -p icmp --icmp-type fragmentation-needed -j ACCEPT

  # let answers out:
  $IPTABLES -A OUTPUT -m state --state ESTABLISHED,RELATED -o $IF -p tcp -j ACCEPT
  $IPTABLES -A OUTPUT -m state --state ESTABLISHED -o $IF -p udp -j ACCEPT

  # let all answers in:
  $IPTABLES -A INPUT  -m state --state ESTABLISHED,RELATED -i $IF -p tcp -j ACCEPT
  $IPTABLES -A INPUT  -m state --state ESTABLISHED -i $IF -p udp -j ACCEPT

  # ssh rate limit support - iptables recent module needed!
  # see http://www.e18.physik.tu-muenchen.de/~tnagel/ipt_recent/

  # prepare blacklist
  $IPTABLES -A blacklist -m recent --name blacklist --set
  $IPTABLES -A blacklist -j LOG --log-level info --log-prefix "FW log BLACKLIST: "
  $IPTABLES -A blacklist -j DROP

  # drop everyone currently on the blacklist
  $IPTABLES -A ssh -m recent --update --name blacklist --seconds 600 --hitcount 1 -j DROP

  # count incomers
  $IPTABLES -A ssh -m recent --set    --name counting1
  $IPTABLES -A ssh -m recent --set    --name counting2
  $IPTABLES -A ssh -m recent --set    --name counting3
  $IPTABLES -A ssh -m recent --set    --name counting4

  # add to blacklist on rate exceed
  $IPTABLES -A ssh -m recent --update --name counting1 --seconds    20 --hitcount   5 -j blacklist
  $IPTABLES -A ssh -m recent --update --name counting2 --seconds   200 --hitcount  15 -j blacklist
  $IPTABLES -A ssh -m recent --update --name counting3 --seconds  2000 --hitcount  80 -j blacklist
  $IPTABLES -A ssh -m recent --update --name counting4 --seconds 20000 --hitcount 400 -j blacklist

  # accept at the end of SSH chain
  $IPTABLES -A ssh -j ACCEPT

  # put all SSH traffic in the ssh chain
  $IPTABLES -A INPUT -m state --state NEW -i $IF -p tcp --sport $HIPORT --dport ssh -j ssh

  ########### start of custom rules ############

  # let HTTP in
  $IPTABLES -A INPUT -m state --state NEW -i $IF -p tcp --sport $HIPORT --dport http -j ACCEPT

  # let FTP in (needs loaded ip_conntrack_ftp module)
  $IPTABLES -A INPUT -m state --state NEW -i $IF -p tcp --sport $HIPORT --dport ftp -j ACCEPT

  ########### end of custom rules ############

  # drop & log everything else
  $IPTABLES -A INPUT  -j dropchain
  $IPTABLES -A OUTPUT -j dropchain

  # dropchain: every packet will be dropped, and, if defined logged...
  $IPTABLES -A dropchain -p icmp -j DROP      #dont log outgoing icmp
  $IPTABLES -A dropchain -p tcp -m state --state INVALID -j LOG --log-level info --log-prefix "FW log INVALID: "
  $IPTABLES -A dropchain -j LOG --log-level info --log-prefix "FW log: "      #log everything
  $IPTABLES -A dropchain -j DROP

  #done
  echo "Firewall up and running..."
  exit 0
  ;;

  *)
  echo "usage: $0 start | stop | close"
  exit 1
  ;;
esac

exit 1;
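To get a feel for what the four counting chains allow, the hits/seconds pairs translate into these average new-connection rates. The block below is plain arithmetic, no iptables needed:

```shell
# Average SSH connection rates permitted by the four counting windows
# (hitcount / seconds from the recent-match rules above).
for pair in 5:20 15:200 80:2000 400:20000; do
  hits=${pair%:*}; secs=${pair#*:}
  awk -v h="$hits" -v s="$secs" \
    'BEGIN { printf "%3d hits / %5d s = %.3f conn/s\n", h, s, h/s }'
done
```

So short bursts of up to 5 attempts in 20 seconds are tolerated, while anything sustaining more than about one connection every 50 seconds over hours ends up blacklisted.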

Play an MP3 list over HTTP

All my MP3s are stored on my file server. Normally I just mount the MP3 directory on my current workstation (either via NFS or Samba). I also have some playlists which store all files with relative paths (relative to the playlist location). This works fine.

But sometimes I want an easy way to listen to a playlist without mounting anything (e.g. when using the laptop). So I made the MP3 files and playlists available on my local Apache web server and use the following short CGI to get a playlist with HTTP addresses for each file.

#!/usr/bin/perl

$WEBDIR = '/var/www';
$WEBSERVER = 'xerxes';

use CGI;
use File::Basename;

print "Content-type: audio/x-mpegurl\r\n\r\n";

$q = new CGI();

$list = $q->param('list');
$dir  = dirname($list);
$list = $WEBDIR.'/'.$list;

open (LIST, $list) or die("Could not read $list");
@m3u = <LIST>;
close LIST;

foreach $file (@m3u) {
  chomp($file);
  $file = 'http://'.$WEBSERVER.$dir.'/'.$file;
  print "$file\n";
}
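Outside the CGI, the same rewrite is a sed one-liner. Host name and directory below are placeholders standing in for $WEBSERVER and the playlist's directory:

```shell
# Turn a relative playlist into absolute HTTP URLs (placeholder host/dir).
printf 'song1.mp3\nalbum/song2.mp3\n' > list.m3u
sed 's|^|http://xerxes/mp3/|' list.m3u
```

This prints each playlist entry prefixed with the base URL, which most players will accept as an HTTP playlist.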

Panache and Peril with Plesk (SpamAssassin installation)

This is a copy of the original article, see it here

With this new dedicated server, I chose Plesk as a control panel solely because I hated it less than Ensim or cPanel. Normally I’d do all the installation, configuration, and tweaking of a server myself, but I just don’t have time for that anymore. The biggest problem I have with control panels is that they make it very hard to do manual configs the normal Linux way: changes either get overwritten or ignored. Plesk is a little easier to live with in this regard; what follows are my last mile tweaks.

While Plesk allows you to create a chrooted FTP user for a domain, it doesn't have a front end for adding another, much less one constrained to a particular subdirectory of that domain. Plesk's configuration of ProFTPD chroots any user with a group of psacln. Via the shell, just useradd a new user with the desired home directory, use /bin/false for the shell so they can't SSH in, and add them to group psacln. You'll also need to make sure that the httpdocs directory is mode 751, not the default 750.

Plesk charges $49.00 to provide a GUI for SpamAssassin configuration. While I didn't save any money doing this myself, maybe you will: here's how to get SpamAssassin going with Plesk's qmail implementation. If you have the Plesk SpamAssassin RPM installed (usually available in /root/swsoft), it provides a spammng command line utility for tweaking the SpamAssassin configuration. This utility appears to work without checking your license information: you can run /etc/init.d/spamassassin stop, then spammng -c -C start. This restarts the spamd daemon with the proper command line flags for Plesk's qmail install.

Going a bit further, try spammng -c -C -e --mailname "user@domain.com" start: this will enable SpamAssassin for the specified mailbox by editing the .qmail file in the relevant directory. For my install, this was /var/qmail/mailnames/domain.com/user. With these two bits of information, we can combine Plesk's qmail with the default SpamAssassin installation without spending any cash. I have not yet figured out how to do this server wide (every incoming piece of mail is processed) or domain wide (every mail for a specific domain is processed). You?

In the .qmail file, Plesk writes the following:

| /usr/local/psa/bin/psa-spamc accept
| true
./Maildir/

psa-spamc is a shell wrapper around the SpamAssassin spamc utility and allows one argument: whether to “accept”/deliver mail that is flagged as spam based on the /etc/mail/spamassassin/ configuration, or whether to “reject” it. It’d be nice to have some granularity to say “reject everything over score 10”, but eh, not a biggie.

The last thing is to reteach the /etc/init.d/spamassassin startup script. Since we haven’t paid Plesk to fiddle with SpamAssassin, we have to teach our default install how to interact with Plesk’s qmail without their help. If you’ve started the spamd daemon from Plesk’s spammng, run the following to capture the startup configuration: ps auwx | grep spamd. You’ll get something along the lines of:

/usr/bin/spamd --username=popuser --daemonize --nouser-config --helper-home-dir=/var/qmail --max-children 5 --create-prefs --virtual-config-dir=/var/qmail/mailnames/%d/%l/.spamassassin --pidfile=/var/run/spamd/spamd_full.pid --socketpath=/tmp/spamd_full.sock

Open up /etc/sysconfig/spamassassin and make it look like:

SPAMDOPTIONS="-d -c -m5 -H /var/qmail --username=popuser --nouser-config
--virtual-config-dir=/var/qmail/mailnames/%d/%l/.spamassassin --socketpath=/tmp/spamd_full.sock"
SPAMD_PID=/var/run/spamd/spamd_full.pid

SPAMDOPTIONS is one line. Restart spamd with /etc/init.d/spamassassin restart and check Plesk’s qmail log at /usr/local/psa/var/log/maillog. If everything goes right, you should be able to send yourself a piece of mail and notice two things: spamd will process each incoming message and report the results and a .spamassassin directory will show up in the right domain and user directory under /var/qmail/mailnames/. Done.

Plesk’s licensing is annoying in other ways: I can only use 30 domains within my current install. Thankfully, domain aliases don’t count against this limitation and, with Drupal’s multisite capabilities, I can run any number of domains on one code base with multiple databases. Unfortunately, this is a problem when it comes to logfiles and analysis: I’m not entirely sure whether Apache’s ServerAlias is considered %v for split-logfile. Needs more testing before I can fully implement domain aliases.

====

  • in order to have SpamAssassin process incoming mail, we need a .qmail file like this:
bijoux# cat /var/qmail/mailnames/domain.com/user/.qmail
| /usr/local/psa/bin/psa-spamc accept
| true
./Maildir/

for every user.

To obtain this you can edit the file by hand, or try the next command to update the .qmail file for a specific user.

bijoux# /usr/local/psa/admin/bin/spammng -c -C -e --mailname user@domain.com start
  • if you want SpamAssassin to start correctly at boot, put the following lines in /etc/init.d/rc_local:
/etc/init.d/psa-spamassassin stop
/etc/init.d/spamd stop
/usr/local/psa/admin/bin/spammng -c -C start
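As for the open question of server-wide processing, one untested workaround would be to loop spammng over every mailbox directory under /var/qmail/mailnames. The block below only rehearses the user@domain name construction against a scratch tree; on a real Plesk host you would glob the real path and call spammng instead of echo:

```shell
# Rehearse building user@domain names from the mailnames layout.
# Real use (assumption, untested): replace $ROOT with /var/qmail/mailnames
# and echo with /usr/local/psa/admin/bin/spammng -c -C -e --mailname ... start
ROOT=$(mktemp -d)
mkdir -p "$ROOT/domain.com/user" "$ROOT/example.org/bob"
for d in "$ROOT"/*/*/; do
  mbox="$(basename "$d")@$(basename "$(dirname "$d")")"
  echo "would enable SpamAssassin for $mbox"
done
```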


Metadata as a Service

OpenSUSE bug 276018 got me into thinking about software repositories and data transfer again.

Problem statement

Software distribution in the internet age is moving away from large piles of disks, CDs, or DVDs and towards online distribution servers providing software from a package repository. The next version of OpenSUSE, 10.3, will be distributed as a 1-CD installation with online access to more packages.
Accessing a specific package means the client needs to know what's available and whether a package depends on other packages. This information is kept in a table of contents of the repository, usually referred to as metadata.
First-time access to a repository requires the client to download all the metadata. If the repository changes, i.e. packages get version upgrades, large portions of the metadata have to be downloaded again (refreshed).

The EDOS project proposes peer-to-peer networks for distributing repository data.

But how much of this metadata is actually needed? How much bandwidth is wasted downloading metadata that gets outdated before first use?

And technology moves on. Network speeds rise, available bandwidth explodes, and internet access is as common as TV and telephone in more and more households. Internet flat rates and always-on connections will be as normal as electrical power from the wall socket in a couple of years. At the same time, CPUs get more powerful and memory prices are in constant decline.

But client systems can't keep up, since customers don't buy a new computer every year. The improvements in computing power, memory, and bandwidth are mostly on the server side.

And this brings me to Metadata as a Service.

Instead of wasting bandwidth for downloading and client computing power for processing the metadata, the repository server can provide a WebService, handling most of the load. Clients only download what they actually need and cache as they feel appropriate.

Client tools for software management are just frontends for the web service. Searching and browsing is handled on the server where load balancing and scaling are well understood and easily handled.

This could even be driven further by doing all the repository management server-side. Clients always talk to the same server which knows the repositories the client wants to access and also tracks software installed on the client. Then upgrade requests can be handled purely by the server, making client profile uploads obsolete. Certainly the way to go for mobile and embedded devices.
Google might offer such a service - knowing all the software installed on a client is certainly valuable data for them.

Just a thought ...