
Flash video player to html5 fallback

There are many JavaScript snippets [1, 2] for doing an HTML5-to-Flash fallback.

I slightly modified this script to do the opposite: it plays video in a Flash player whenever possible, but falls back to HTML5 video on devices like the iPad.

Here is my modified JS file. You will also need the Google Loader and OpenVideoPlayer.

Add this to your HTML head:

<script src="http://www.google.com/jsapi"></script>
<script src="/static/html5-video.js"></script>

Use the HTML5 video tag as usual:

<video width="592" height="336" preload="none" controls>
  <source src="http://video.mudy.netdna-cdn.com/elephants_dream_592x336.mp4" 
  type="video/mp4" />
</video>

[1] http://henriksjokvist.net/archive/2009/2/using-the-html5-video-tag-with-a-flash-fallback
[2] http://diveintohtml5.org/video.html


Use SimpleCDN for Silverlight streaming

Microsoft’s Smooth Streaming uses standard HTTP, so it is possible to use a SimpleCDN mirror bucket for delivery.

Here is my step-by-step guide to delivering HD video with open-source software and SimpleCDN.

Step 1: Server
Even though Microsoft opened the spec a while ago, there is currently only one open-source Smooth Streaming server, from code-shop.com.

It has a few server plugins, including Apache and nginx. I chose nginx as my backend server. You can download the nginx plugin from code-shop.com. It has two parts: an nginx module and a small utility, mp4split, which converts MP4 files to the fragmented format.

Nginx doesn’t use dynamically linked modules, so you have to recompile the entire binary. I compiled the Smooth Streaming module against nginx-0.7.62 without any problem.
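
The build itself is the usual static-module recompile. A minimal sketch, assuming the code-shop module source was unpacked next to the nginx tree (the module directory name and configure options here are illustrative):

wget http://nginx.org/download/nginx-0.7.62.tar.gz
tar xzf nginx-0.7.62.tar.gz
cd nginx-0.7.62
# point --add-module at the unpacked smooth streaming module from code-shop.com
./configure --prefix=/usr/local/nginx --add-module=../nginx-smooth-streaming-module
make && sudo make install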

Step 2: Encoding
The entire process is documented here. It requires AviSynth and a few other Windows utilities.

If you need to do it on Linux, here is how:
You will need both ffmpeg and x264. I tried using ffmpeg alone, but it cannot accept a stats file from a different bitrate setting, which is a required step.

Piping ffmpeg directly into x264 also doesn’t work, because x264 can’t recognize the file type, so you need a named pipe:

mkfifo video.y4m

This file can be reused many times.

Pass 1:

ffmpeg -i big_buck_bunny_720p_h264.mov -an -f yuv4mpegpipe - > video.y4m & 
x264 --threads auto --profile high --level 3.2 --preset slow --no-mbtree --b-pyramid  --min-keyint 24 --keyint 96 --pass 1 --bitrate 2524 -o /tmp/bbb_2524.mp4 video.y4m

You can also run these two commands in two different console windows. Setting --no-mbtree is important.

Pass 2:
If everything went OK in pass 1, you can now run pass 2 for multiple bitrates:

ffmpeg -i big_buck_bunny_720p_h264.mov -an -s 256x144 -f yuv4mpegpipe - > video.y4m & 
x264 --threads auto --profile high --level 3.2 --preset slow --no-mbtree --b-pyramid  --min-keyint 24 --keyint 96 --pass 2 --bitrate 260 -o /tmp/bbb_260.mp4 video.y4m

Repeat this step with your desired resolutions, bitrates, and filenames.

The output MP4 files from x264 cannot be streamed directly; you will need the small utility qt-faststart to fix them.
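
qt-faststart (shipped with ffmpeg) simply rewrites the file with the moov atom moved to the front. For the pass-2 output above it would look like this (the output filename is my own choice):

qt-faststart /tmp/bbb_260.mp4 /tmp/bbb_260_fast.mp4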

Problem: I can’t encode a playable audio track. If I include audio, the player simply stops.

Now follow the remaining steps described here to split your MP4 files.

Step 3: Player
Unfortunately, there is no working open-source player that can do Smooth Streaming. Code-shop’s website mentions that OpenVideoPlayer can be updated by dropping in a Smooth Streaming DLL, but I could not find where to put the file.

So I duplicated code-shop’s demo page.

Step 4: SimpleCDN
The simplest step: make a mirror bucket and point it at your web server.

Result:

Here is my 720p video streaming through SimpleCDN.

No audio yet. It looks exactly like every other demo page, but it streams through my nginx server and SimpleCDN. Microsoft’s player uses additional query parameters to speed up playback, but SimpleCDN strips them away.

Other thoughts:
I think it is possible to split the fragmented MP4 files into real files ahead of time, so you probably would not need a special server module at all.

Google chrome frame test

Long time no update.

I have just added Google Chrome Frame to my blog template. I was trying to find sites that actually use Chrome Frame, without any success, so out of curiosity I added it to my own website.

Here are screenshots taken after installing Chrome Frame in my IE8.
About box

Right Click Menu

Update 1: The display flickers when it switches rendering engines. IE8 also switches back to its own engine when I click through to a regular webpage.

Java in Cloud

This time it is real: Google has just announced Java for App Engine.

Updates: And cron jobs are in App Engine as well.

Two differences I noticed from the Python GAE: it deploys bytecode (a WAR file) to the cloud instead of source code, and it uses XML instead of YAML for configuration.

Disk IO: EC2 vs Mosso vs Linode

Recently I read an interesting idea on the Amazon EC2 forum about striping EBS volumes with RAID0 to improve disk performance, so I was curious whether it actually works. Technically it is also possible to set up a RAID array on Linode (referral link) as well, but it would be backed by the same hardware, so I didn’t test that.
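
For reference, RAID0 over EBS is just Linux software RAID. A minimal sketch, assuming four volumes are already attached as /dev/sdf through /dev/sdi (the device names and the ext3 filesystem are illustrative choices):

# assemble four attached EBS volumes into one striped array
mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/sdf /dev/sdg /dev/sdh /dev/sdi
# put a filesystem on the array and mount it
mkfs.ext3 /dev/md0
mkdir -p /mnt/ebs-raid0
mount /dev/md0 /mnt/ebs-raid0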

In this test I used bonnie++ 1.03e with direct IO support. The three VPSes have slightly different configurations. The Mosso server has 256MB of RAM, a 2.6.24 kernel, and 4 AMD virtual cores. The Linode VPS has 360MB of RAM, a custom-built 2.6.29 kernel, and 4 Intel virtual cores. The EC2 high-CPU medium instance has 1.7GB of RAM, a 2.6.21 kernel, and 2 Intel virtual cores.

Here are the raw test results. On each VPS I ran bonnie++ 3 times and used the median of the 3 runs as the final result. The summary result is the unweighted average of the different columns. Due to the difference in memory size, I used different test file sizes. The EBS setup I used here is a 4x10GB RAID0.
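
The post does not record the exact bonnie++ command line; a representative run looks roughly like this (the test directory, 4096MB file size, and user are illustrative; -D is the Direct IO switch in this patched build):

bonnie++ -d /mnt/test -s 4096 -u nobody -D   # test file size should be at least twice the RAM size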

In this table, -D means the test was run with the Direct IO option. The best result in each column is marked with an asterisk. The Direct IO test on EBS was taking forever, so I did not finish it.

            Write (MB/s)   Read (MB/s)   Seek (#/s)
Mosso -D    32.4           52.9           219
Mosso       56.9 *         52.6           225
Linode -D   37.7           76             187
Linode      41.5           76.1 *         201
EC2 -D      32.4           50.7           220
EC2         18.9           39.2           210
EBS Raid0   52.4           23.1          1076 *

In this chart, I used logarithmic scales and a shifted origin in order to show the relative differences, so the column heights do not reflect the raw test results. Higher is better.

Disk IO Chart

Conclusions: There is no clear winner in this test; each VPS scores highest in a different category. Only one thing is clear: O_DIRECT does not work very well on EBS. Due to the nature of VPS hosting, disk IO tests are very unreliable, so the performance shown here is not repeatable and may not reflect true disk performance.

My Varnish VCL for WordPress

On Varnish’s official website, there is a WordPress optimization guide: For The Impatient: Preparing Varnish/WordPress for a Slashdotting in 60 seconds or less….

The problem is that it removes cookies too aggressively: every non-admin page becomes virtually static. So I made my own VCL that removes cookies only for static files and the front page.

Here it is

backend default {
.host = "10.25.0.1";
.port = "80";
}

sub vcl_recv {
# Normalize Content-Encoding
    if (req.http.Accept-Encoding) {
        if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|lzma|tbz)(\?.*|)$") {
            remove req.http.Accept-Encoding;
        } elsif (req.http.Accept-Encoding ~ "gzip") {
            set req.http.Accept-Encoding = "gzip";
        } elsif (req.http.Accept-Encoding ~ "deflate") {
            set req.http.Accept-Encoding = "deflate";
        } else {
            remove req.http.Accept-Encoding;
        }
    }
# Remove cookies and query string for real static files
    if (req.url ~ "^/[^?]+\.(jpeg|jpg|png|gif|ico|js|css|txt|gz|zip|lzma|bz2|tgz|tbz|html|htm)(\?.*|)$") {
       unset req.http.cookie;
       set req.url = regsub(req.url, "\?.*$", "");
    }
# Remove cookies from front page
    if (req.url ~ "^/$") {
       unset req.http.cookie;
    }
}
sub vcl_fetch {
        if (req.url ~ "^/$") {
                unset obj.http.set-cookie;
        }
}

So all interactive pages are sent to the PHP backend with the correct cookies, while static files and the front page are served directly from the Varnish cache.
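
A quick way to verify the split is to compare response headers for a static file and an interactive page (the hostname and paths are placeholders). Varnish adds an X-Varnish header, and Age is greater than zero on a cache hit:

curl -sI http://example.com/wp-content/uploads/photo.jpg | grep -iE '^(age|x-varnish)'
curl -sI http://example.com/wp-login.php | grep -iE '^(age|x-varnish|set-cookie)'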

Archlinux EC2 Public AMI

I made two public Arch Linux EC2 AMIs.

Important Notes:

  • Most instructions on this page are outdated. All necessary packages are included in the latest AMI.
  • If you want to build your own AMI, I have released the build script on GitHub along with AUR packages.

Update 2011/1/25

Updated the kernel to 2.6.37 and fixed account creation.

Update 2010/8/30

Changed the static IP to kernel DHCP and removed the initrd.

Update 2010/8/28

The network configuration is saved the first time the image boots. If you want to revert to DHCP, for example because you need to rebuild or stop the instance, you should run this:

sudo /etc/rc.d/ec2 stop

I also changed the default CFLAGS, so if you want to recompile packages, you can use srcpac. For example:

sudo abs extra/python
sudo srcpac -Sb python

Update 2010/8/21

Added a user arch with the same SSH key as root.
The hostname is now static; if you want to rebundle, make sure you change HOSTNAME in rc.conf back to myhost and remove the last line of /etc/hosts.
Here is the new build script.

Update 2010/7/23:

Updated to BTRFS as root.

Update 2010/7/20:

Updated to pv-grub and EBS.
Here is the updated script to generate an EC2 EBS image.
I also made an AUR package for kernel26 with patches from Gentoo and openSUSE.
There is a simple patch for the mainline kernel from Amazon.

Arch AMI ID
i386 ami-5ae11133
x86_64 ami-84e111ed

Updates:
10/21/2009: Updated all packages and switched to Ubuntu kernels. Here is the new AMI-making script. Those kernels load some unnecessary modules; you will need to unload them manually. I will update again if I can find a more stable kernel.

They are very basic installations with just SSH. If you need tools like ec2-ami-tools or ec2-api-tools, you can find my AUR packages here, or you can add my private repo to your pacman.conf:

[iphash]
Server = http://static.iphash.net/public/i686/

or

[iphash]
Server = http://static.iphash.net/public/x86_64/

Then

pacman -Sy ec2-ami-tools ec2-api-tools
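
With the API tools installed and your EC2 credentials exported, launching one of the AMIs above is a one-liner; the keypair name here is a placeholder:

ec2-run-instances ami-5ae11133 -k my-keypair -t m1.small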

If you want to roll your own image, here (outdated – see the beginning of this post) is the script I used to make these AMIs.

If you wish to set the hostname and domain name, you can pass the following script as instance user-data.

MYHOST=yourhost    #set your real hostname here
MYDOMAIN=yourdomain  #set your domainname here

sed -i s/myhost/$MYHOST/ /etc/rc.conf
hostname $MYHOST

echo "NISDOMAINNAME="$MYDOMAIN"" >/etc/conf.d/nisdomainname
nisdomainname $MYDOMAIN

/etc/rc.d/syslog-ng restart

x=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
if [ $(echo $x|grep 404|wc -l) -eq 0 ]; then
cat <<EOF >/etc/hosts
#<ip-address>   <hostname.domain.org>   <hostname>
127.0.0.1               localhost.localdomain   localhost
$x  $MYHOST.$MYDOMAIN  $MYHOST
# End of file
EOF
fi
cp /etc/skel/.bash* /root/

IPv6 Sage

Just by doing some simple tasks, like digging AAAA records and running traceroute6, I am now in he.net’s top 10 IPv6 certs.


Updates: This daily dig actually reminds me of those boring daily quests in WoW.

Deploy Archlinux Chroot onto VPS

Update 7/20/2010: I updated this script to be more LXC-friendly, and I also made a small patch that modifies inittab, rc.sysinit, and rc.shutdown for LXC. If you are not using dhcpd, you will still need to modify /etc/rc.conf to set up the default route.

Download the new script here, and the LXC patch.


Most VPS providers do not have an Arch Linux image or allow changing the root device the way Linode does. Even though I am comfortable dealing with Debian or Ubuntu, the tiny differences between them become annoying over time. So I decided to install a mini chroot environment on all of them to normalize the Linux environment.

If you want to use an Ubuntu or Debian chroot, you should probably read DebootstrapChroot. My method here only applies to Arch Linux.

These scripts are only for Linux newbies like myself who are too lazy to type that many commands every time. If you are a Linux guru or sysadmin, you may find this method trivial, insecure, or laughable.

Prepare your local system

I assume you already have at least one working Arch Linux system installed. First you need to install some necessary tools. If you do not have Arch Linux installed, you can skip to the last section of this post and test the one I built.

pacman -Sy devtools lzma cpio

Devtools includes mkarchroot, a script that bootstraps a mini root, similar to debootstrap. If you just run “mkarchroot miniroot base”, it will make you a working mini Arch Linux, but the default installation is huge, about 500MB. You probably do not want all of that inside a VPS environment.

lzma and cpio are my choice of packaging; you can also use zip, tar, gzip, or bzip2 and modify the other parts of my script accordingly.

Make a working chroot

The first script makes a compact mini root and compresses it into a single file.
You can either download it (outdated) or copy/paste the following lines into a file named miniarch:

#!/bin/bash
# 2009 Copyright Yejun Yang (yejunx AT gmail DOT com)
# Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License.
# http://creativecommons.org/licenses/by-nc-sa/3.0/us/

PACKS="sed gawk coreutils filesystem texinfo grep pacman 
       module-init-tools wget curl net-tools procps nano tar cpio zip 
       gzip bzip2 lzma psmisc initscripts iputils dnsutils iproute2 
       less dash which"

if [[ $1 == i686 ]]; then
  ARCH=i686
else
  ARCH=x86_64
fi

ROOT=mini_$ARCH

cat <<EOF > pacman.conf
[options]
HoldPkg     = pacman glibc
SyncFirst   = pacman

[core]
Server = ftp://mirror.cs.vt.edu/pub/ArchLinux/\$repo/os/$ARCH
Server = http://archlinux.mirrors.uk2.net/\$repo/os/$ARCH
Include = /etc/pacman.d/mirrorlist
[extra]
Server = ftp://mirror.cs.vt.edu/pub/ArchLinux/\$repo/os/$ARCH
Server = http://archlinux.mirrors.uk2.net/\$repo/os/$ARCH
Include = /etc/pacman.d/mirrorlist
[community]
Server = ftp://mirror.cs.vt.edu/pub/ArchLinux/\$repo/os/$ARCH
Server = http://archlinux.mirrors.uk2.net/\$repo/os/$ARCH
Include = /etc/pacman.d/mirrorlist
EOF

mkarchroot -C pacman.conf $ROOT $PACKS

chmod 666 $ROOT/dev/null
mknod -m 666 $ROOT/dev/random c 1 8
mknod -m 666 $ROOT/dev/urandom c 1 9
mknod -m 600 $ROOT/dev/console c 5 1
mkdir -m 755 $ROOT/dev/pts
mkdir -m 1777 $ROOT/dev/shm

echo nameserver 4.2.2.1 > $ROOT/etc/resolv.conf
echo nameserver 4.2.2.2 >> $ROOT/etc/resolv.conf

find $ROOT -depth -print | cpio -ov | lzma -5 > $ROOT.cpio.lzma

Modify PACKS= to list the packages you want installed.

You should also modify the Server= lines to whichever mirror is fast for you. I used rankmirrors to find the fastest server.
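
rankmirrors ships with pacman; a typical invocation that keeps the six fastest mirrors (the count and output path are arbitrary) is:

rankmirrors -n 6 /etc/pacman.d/mirrorlist > /tmp/mirrorlist.ranked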

Run this script

./miniarch

or

./miniarch i686

or both.
It will make a minimal working Arch Linux chroot under the current directory and pack it into a single file, mini_x86_64.cpio.lzma or mini_i686.cpio.lzma. These files should be around 40MB each if everything worked correctly.

Copy these .lzma files to your web server root. Now you can safely delete the working directory.

Deploy to VPS

You can download the files you just made onto your VPS and unpack them by hand, but I made a simple script to do that.

You can download it, or copy/paste the following lines into a file named deploy:

#!/bin/bash
# 2009 Copyright Yejun Yang (yejunx AT gmail DOT com)
# Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License.
# http://creativecommons.org/licenses/by-nc-sa/3.0/us/

ARCH=$(uname -m)

if [[ $ARCH != x86_64 ]]; then
ARCH=i686
fi

URL=http://YOURWEBSERVER/

if [ -e /var/chroot/mini_${ARCH} ]; then
  echo "**** /var/chroot/mini_${ARCH} already exists. "
  echo "**** You have to remove previous deployment."
  exit 1
fi

mkdir -p /var/chroot
cd /var/chroot
echo "Start downloading ${URL}mini_${ARCH}.cpio.lzma , be patient ..."
wget -q -O - ${URL}mini_${ARCH}.cpio.lzma | lzma -d | cpio -idv

deploy_success () {
cat <<EOF
**** mini_${ARCH} has been deployed to /var/chroot/mini_${ARCH}
EOF
}

deploy_success

Change URL= to your own web server.

Before running this script on your target machine, make sure lzma, wget, and cpio are installed. If you are using Ubuntu, you can run:

sudo aptitude update
sudo aptitude install lzma wget cpio

Running this script deploys a mini, chrootable Arch Linux into /var/chroot/mini_i686 or /var/chroot/mini_x86_64. The unpacked size will be around 200MB.
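
Entering the deployed chroot is the standard routine; a minimal sketch for the x86_64 tree (the bind mounts shown are the usual minimum, add more if you need them):

sudo mount -t proc proc /var/chroot/mini_x86_64/proc
sudo mount --bind /dev /var/chroot/mini_x86_64/dev
sudo chroot /var/chroot/mini_x86_64 /bin/bash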

To simplify the process, you can copy this file to your web server as well:

wget -q -O - http://yourwebsite/deploy |sudo bash

done.

For lazy people or testing only

If you are too lazy to make your own Arch Linux mini root, or you don't have a working Arch Linux, you may test my prebuilt mini root by running the following line. You will still need lzma, cpio, and wget on your target machine.

wget -q -O - http://bit.ly/iZzq |sudo bash

Disclaimer

I DO NOT guarantee the correctness of these scripts or my prebuilt chroot. Be cautious running any command with sudo. You may not hold me responsible for anything that happens to your system.

Updates:
April 5, 2009: changed /bin/sh to /bin/bash

Plugin Image Optimizer

This plugin reduces the size of images uploaded to WordPress.
You will also need to install OptiPNG and jhead, and PHP must not be running in safe mode.

These tools only strip meta information from your images, so the result should be lossless.
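
Under the hood it boils down to shelling out to the two tools, roughly like this (the filenames are illustrative, not the plugin's exact invocation):

optipng -o2 uploaded-image.png       # lossless PNG recompression
jhead -purejpg uploaded-photo.jpg    # strip EXIF and comment sections, pixels untouched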

You can download this plugin here.