Monthly Archives: February 2009

More WordPress Performance Optimization

General WordPress performance info

After moving all static files to the CDN and moving JavaScript out of the HTML header, I also enabled a multi-CNAME CDN to see whether it increases page loading speed (not enabled in my CDN plugin yet, still testing). Here is the result:

[Waterfall chart: wordpress-loading]

It seems CSS loading still blocks CSS background images, even those referenced from the main HTML header. I also put Varnish in front of WordPress as a reverse proxy, because WordPress became extremely slow after I moved the database from MyISAM to InnoDB.

Update: I found two websites that help draw waterfall charts: Pagetest by AOL and Site-Perf.com.

Update 2: I partly solved the CSS images not preloading problem by inserting an invisible img tag like this:
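A minimal sketch of the idea (the theme path and file name here are hypothetical placeholders; point the src at whichever CSS background image you want fetched early):

```html
<!-- Hypothetical path: hidden img that triggers an early download of a CSS
     background image, so it is already cached when the stylesheet asks for it -->
<img src="/wp-content/themes/mytheme/images/autumn.jpg"
     style="display: none" alt="" />
```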

But this only works in Chrome. IE will not load any image before the CSS has loaded.
[Waterfall chart: preloading-img]
Look at the loading time of autumn.jpg: nearly a 200ms improvement in Chrome.

My 2nd WordPress plugin Real IP

Check this out.
If your WordPress comment log is filled with local IPs, give it a try.

You can start paying Google App Engine now

More cloud news this week.

Google just announced paid resources for GAE.

The price is similar to other cloud providers'. Interestingly, they also charge 0.01 cents per email recipient.

How to copy selected files to cloudfront

If you use my WordPress CDN plugin and Amazon CloudFront, you may have trouble putting files into S3 storage. Here is a simple way to do it on Linux without any commercial tools.

First, download s3sync and extract it somewhere; in this example I used my home directory.

mkdir ~/.s3conf

Edit ~/.s3conf/s3config.yml, which should look like this:

aws_access_key_id: your s3accesskey
aws_secret_access_key: your secret key

Enter the WordPress directory:

cd wordpress
find * -type f -readable \( -name '*.css' -o -name '*.js' -o \
    -name '*.png' -o -name '*.jpg' -o -name '*.gif' -o -name '*.jpeg' \) \
    -exec ~/s3sync/s3cmd.rb -v put bucket:prefix/{} {} \
    x-amz-acl:public-read Cache-Control:max-age=604800 \;

Change bucket to your real bucket name. If you don't need a prefix, leave it out, including the slash. Adjust the Cache-Control header to taste. ~/s3sync/s3cmd.rb should point to wherever you extracted s3sync.
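Before uploading anything, you can preview which files the find expression will actually match; the same expression with -print is a harmless dry run:

```shell
# Run from your WordPress directory: list the files the upload command
# would match, without touching S3.
find * -type f -readable \( -name '*.css' -o -name '*.js' \
    -o -name '*.png' -o -name '*.jpg' -o -name '*.gif' -o -name '*.jpeg' \) -print
```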

Update 1: Don’t forget to install mime-types if your Linux distro didn’t install it by default. Check whether /etc/mime.types exists.

Update 2: s3cmd.rb does not set the Content-Type at all (I think the Python version does). Anyway, I wrote a script to redo everything.

#!/bin/sh

# Set your bucket name
BUCKET=
# If you want to use a prefix, set it like PREFIX=blog/
PREFIX=

find * -type f -readable -name '*.css' -exec ~/s3sync/s3cmd.rb -v put $BUCKET:$PREFIX{} {} \
    x-amz-acl:public-read Cache-Control:max-age=604800 Content-Type:text/css \;

find * -type f -readable -name '*.js' -exec ~/s3sync/s3cmd.rb -v put $BUCKET:$PREFIX{} {} \
    x-amz-acl:public-read Cache-Control:max-age=604800 Content-Type:application/x-javascript \;

find * -type f -readable -name '*.png' -exec ~/s3sync/s3cmd.rb -v put $BUCKET:$PREFIX{} {} \
    x-amz-acl:public-read Cache-Control:max-age=604800 Content-Type:image/png \;

find * -type f -readable -name '*.gif' -exec ~/s3sync/s3cmd.rb -v put $BUCKET:$PREFIX{} {} \
    x-amz-acl:public-read Cache-Control:max-age=604800 Content-Type:image/gif \;

find * -type f -readable \( -name '*.jpg' -o -name '*.jpeg' \) -exec ~/s3sync/s3cmd.rb -v put $BUCKET:$PREFIX{} {} \
    x-amz-acl:public-read Cache-Control:max-age=604800 Content-Type:image/jpeg \;

Update 3: I just realized CloudFront does not gzip files, so I rewrote my script to force gzip encoding on CSS and JS files.

#!/bin/sh

# Set your bucket name
BUCKET=
# If you want to use a prefix, set it like PREFIX=blog/
PREFIX=
# Absolute path to s3cmd.rb
S3CMD=/home/user/s3sync/s3cmd.rb

find * -type f -readable -name '*.css' -exec sh -c "gzip -9 -c {} > /tmp/s3tmp && \
    $S3CMD -v put $BUCKET:$PREFIX{} /tmp/s3tmp x-amz-acl:public-read Cache-Control:max-age=604800 Content-Type:text/css Content-Encoding:gzip" \;

find * -type f -readable -name '*.js' -exec sh -c "gzip -9 -c {} > /tmp/s3tmp && \
    $S3CMD -v put $BUCKET:$PREFIX{} /tmp/s3tmp x-amz-acl:public-read Cache-Control:max-age=604800 Content-Type:application/x-javascript Content-Encoding:gzip" \;

find * -type f -readable -name '*.png' -exec $S3CMD -v put $BUCKET:$PREFIX{} {} \
    x-amz-acl:public-read Cache-Control:max-age=604800 Content-Type:image/png \;

find * -type f -readable -name '*.gif' -exec $S3CMD -v put $BUCKET:$PREFIX{} {} \
    x-amz-acl:public-read Cache-Control:max-age=604800 Content-Type:image/gif \;

find * -type f -readable \( -name '*.jpg' -o -name '*.jpeg' \) -exec $S3CMD -v put $BUCKET:$PREFIX{} {} \
    x-amz-acl:public-read Cache-Control:max-age=604800 Content-Type:image/jpeg \;
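The gzip step can be sanity-checked locally before anything is uploaded; gzip -t verifies the compressed temp copy is intact (the sample file below is a stand-in for one of your real CSS files):

```shell
# Compress a sample file the same way the script does, then verify the result.
echo 'body { margin: 0; }' > /tmp/style.css    # stand-in sample file
gzip -9 -c /tmp/style.css > /tmp/s3tmp
gzip -t /tmp/s3tmp && echo "compressed copy is valid"
```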

Update 4: 4/1/2009
I added the ability to copy a single file or a single directory.

#!/bin/sh

# Upload everything by default, or only the file/directory given as $1
if [ -n "$1" ]; then
    LOC=$1
else
    LOC="*"
fi

# Set your bucket name
BUCKET=
# If you want to use a prefix, set it like PREFIX=blog/
PREFIX=
# Absolute path to s3cmd.rb
S3CMD=/home/user/s3sync/s3cmd.rb

find $LOC -type f -readable -name '*.css' -exec sh -c "gzip -9 -c {} > /tmp/s3tmp && \
    $S3CMD -v put $BUCKET:$PREFIX{} /tmp/s3tmp x-amz-acl:public-read Cache-Control:max-age=604800 Content-Type:text/css Content-Encoding:gzip" \;

find $LOC -type f -readable -name '*.js' -exec sh -c "gzip -9 -c {} > /tmp/s3tmp && \
    $S3CMD -v put $BUCKET:$PREFIX{} /tmp/s3tmp x-amz-acl:public-read Cache-Control:max-age=604800 Content-Type:application/x-javascript Content-Encoding:gzip" \;

find $LOC -type f -readable -name '*.png' -exec ${S3CMD} -v put $BUCKET:$PREFIX{} {} \
    x-amz-acl:public-read Cache-Control:max-age=604800 Content-Type:image/png \;

find $LOC -type f -readable -name '*.gif' -exec ${S3CMD} -v put $BUCKET:$PREFIX{} {} \
    x-amz-acl:public-read Cache-Control:max-age=604800 Content-Type:image/gif \;

find $LOC -type f -readable \( -name '*.jpg' -o -name '*.jpeg' \) -exec ${S3CMD} -v put $BUCKET:$PREFIX{} {} \
    x-amz-acl:public-read Cache-Control:max-age=604800 Content-Type:image/jpeg \;

For example, if you saved this script to a file named cloudfront (and made it executable with chmod +x):

cd wordpress
cloudfront wp-content/uploads

Without any command-line argument, the script uploads all files under the current directory.

WordPress approved my first plugin

Check this out.

Passed all He.net IPv6 test

I finally transferred my domain from 1&1 to gkg.net, as I blogged a couple of days ago.

My IPv6 TLD glue record was inserted into the .net TLD DNS servers instantly via gkg.net's web interface. I highly recommend gkg as a domain registrar.

IPv6 Badge

I wrote a WP plugin

I wrote my first WordPress plugin. It rewrites js, css and theme file URLs to point at your own CDN network, so you don't have to hack WordPress core files.

You can download it here.

Pylons 0.9.7 released

Pylons 0.9.7 was released today.

Pylons is my new favorite web application framework; before that it was CodeIgniter, webpy and Django.

A lot Cloud buzz today

I have used the Ubuntu 8.10 EC2 release for a while, and I am very impressed by this beta version of Ubuntu cloud. It is a very lean build of Ubuntu Server: the default configuration runs very fast with a minimal number of services compared to CentOS or Red Hat.

Meanwhile, stock market indexes fell to 1997 levels, with the Nasdaq leading the decline.

Generate ShoreWall blacklist from Spamhaus and DShield

I wrote a bash script to automatically generate a Shorewall blacklist from the Spamhaus DROP list and dshield.org's block list.

Do not run this script automatically if SSH is the only means you use to connect to your server, because you could accidentally blacklist yourself. Also, do not run it more often than once per hour, due to Spamhaus rate limits.

#!/bin/sh

echo "#ADDRESS/SUBNET         PROTOCOL        PORT" > /tmp/blacklist
wget -q -O - http://feeds.dshield.org/block.txt | awk --posix '/^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.0\t/ { print $1 "/24"; }' >> /tmp/blacklist
wget -q -O - http://www.spamhaus.org/drop/drop.lasso | awk --posix '/^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\// { print $1; }' >> /tmp/blacklist
echo "#LAST LINE -- ADD YOUR ENTRIES BEFORE THIS ONE -- DO NOT REMOVE" >> /tmp/blacklist
mv /tmp/blacklist /etc/shorewall/blacklist

shorewall refresh >/dev/null 2>&1
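You can sanity-check the extraction logic without hitting the DShield feed by piping a sample block.txt line (tab-separated fields; 192.0.2.0 is a documentation address used here as a stand-in) through a simplified equivalent of the pattern, with + in place of {1,3} so it runs on any awk:

```shell
# Sample DShield line -> /24 blacklist entry
printf '192.0.2.0\t192.0.2.255\t24\n' | \
    awk '$1 ~ /^[0-9]+\.[0-9]+\.[0-9]+\.0$/ { print $1 "/24" }'
# prints: 192.0.2.0/24
```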

I also use fail2ban to generate a dynamic Shorewall ban list.

UPDATE: And don’t forget to enable the blacklist option in /etc/shorewall/shorewall.conf:

BLACKLIST_DISPOSITION=DROP