
Batch Process Image Optimizations

Posted by Mike Brittain on February 14, 2010

A couple of weeks ago I wrote a post about a script I had put together for batch processing JPEGs with jpegtran.  This week I extended that script so that it handles GIF and PNG images as well.  It’s now a project on GitHub called “Wesley”.

Wesley is a single Perl script that you run from the command line, supplying the path to a single file or to a directory where you keep your site’s images.  If you work on a Linux or Mac development server, you can quickly run this script against all new images that you add to your site code.  Additionally, you could tie it into your build process or a pre-commit hook for your preferred source control.  I haven’t spent time on this yet, but expect to add a write-up on it soon.
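
In the meantime, here is a rough, untested sketch of what a git pre-commit hook could look like.  Everything in it is hypothetical: it assumes wesley.pl is on your PATH and that your project lives in git.

#!/usr/bin/perl

# Hypothetical .git/hooks/pre-commit: run Wesley on staged images and
# re-stage the optimized files.  Assumes wesley.pl is on your PATH.

use strict;

my @images = grep { /\.(jpe?g|gif|png)$/i }
             split /\n/, `git diff --cached --name-only --diff-filter=ACM`;

exit 0 unless @images;

foreach my $file (@images) {
    system('wesley.pl', $file) == 0 or exit 1;   # abort the commit on failure
    system('git', 'add', $file);                 # pick up the optimized bytes
}

exit 0;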

The script strips metadata and comments from image files and optimizes them using lossless techniques.  You should be able to run Wesley on your images without any reduction in quality.
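
Concretely, the lossless passes boil down to commands along these lines (typical flags shown for illustration; Wesley’s exact invocations may differ):

   jpegtran -copy none -optimize in.jpg > out.jpg
   pngcrush -rem alla in.png out.png
   gifsicle -O2 in.gif -o out.gif

The -copy none switch drops JPEG markers such as EXIF data and comments, -rem alla removes ancillary PNG chunks, and -O2 re-optimizes how GIF frames are stored.  None of these touch the pixel data itself.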

Wesley makes use of locally installed copies of ImageMagick, jpegtran, pngcrush, and gifsicle.  Some of these are probably already installed on your machine (or shared hosting service).  If you are missing one or more of these packages, you can still run Wesley, and it will use whichever packages are available.
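
That sort of graceful fallback can be as simple as probing for each binary before doing any work.  Here is a sketch of the idea (illustrative only, and not Wesley’s actual code):

#!/usr/bin/perl

# Sketch: probe for optional helper programs so that any optimization pass
# whose tool is missing can simply be skipped.  (Not Wesley's actual code.)

use strict;

my %tool;
foreach my $bin (qw(convert jpegtran pngcrush gifsicle)) {
    chomp(my $path = `which $bin 2>/dev/null`);
    $tool{$bin} = $path if $path && -x $path;
}

print 'Available tools: ', (join(', ', sort keys %tool) || 'none'), "\n";
# Later, run the jpegtran pass only if $tool{jpegtran} exists, and so on.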

Usage

   wesley.pl  /path/to/images/

Sample Output

  ----------------------------
    Summary
  ----------------------------

  Converting the following GIFs to PNG would save additional file size.
  Bytes saved: 19173 (orig 149404, saved 12.83%)

    ./hd-sm.gif
    ./top_navigation.gif
    ./logo.gif

  Inspected 226 JPEG files.
  Modified 190 files.
  Huffman table optimizations: 138
  Progressive JPEG optimizations: 52
  Bytes saved: 408508 (orig 2099658, saved 19.45%)

  Inspected 105 PNG files.
  Modified 99 files.
  Bytes saved: 84618 (orig 315056, saved 26.85%)

  Inspected 129 GIF files.
  Modified 70 files.
  Bytes saved: 57535 (orig 1393120, saved 4.12%)

  Total bytes saved: 550661 (orig 3807834, saved 14.46%)


Batch Processing your JPEGs with jpegtran

Posted by Mike Brittain on January 27, 2010

UPDATE: Please read my post about a new version of this image processing script.

Stoyan Stefanov wrote up a nice post last year about installing jpegtran on a Mac or Unix/Linux system so that you can run optimizations on your JPEG files.  His conclusion on jpegtran is that you can save about 10% on your JPEG file sizes for “about a minute of work, or less.”

Sounds great!  I looked it over and, indeed, jpegtran cuts some of the junk out of the JPEG files I tested.  The only holdup, however, is that at CafeMom we have a few thousand JPEG files in our site code, and that number grows every week.  The only reasonable solution was to automate this process.

The following Perl script should work right out of the box for you, assuming you already have jpegtran installed on your server or shared hosting account.

optimize_jpegs.pl

#!/usr/bin/perl

#
# Lossless optimization for all JPEG files in a directory
#
# This script uses techniques described in this article about the use
# of jpegtran: http://www.phpied.com/installing-jpegtran-mac-unix-linux/
#

use strict;
use File::Find;
use File::Copy;

# Read image dir from input
if (!$ARGV[0]) {
    print "Usage: $0 path_to_images\n";
    exit 1;
}
my @search_paths;
my $images_path = $ARGV[0];
if (!-e $images_path) {
    print "Invalid path specified.\n";
    exit 1;
} else {
    push @search_paths, $ARGV[0];
}

# Compress JPEGs
my $count_jpegs = 0;
my $count_modified = 0;
my $count_optimize = 0;
my $count_progressive = 0;
my $bytes_saved = 0;
my $bytes_orig = 0;

find(\&jpegCompress, @search_paths);

# Write summary
print "\n\n";
print "----------------------------\n";
print "  Sumary\n";
print "----------------------------\n";
print "\n";
print "  Inspected $count_jpegs JPEG files.\n";
print "  Modified $count_modified files.\n";
print "  Huffman table optimizations: $count_optimize\n";
print "  Progressive JPEG optimizations: $count_progressive\n";
print "  Total bytes saved: $bytes_saved (orig $bytes_orig, saved "
       . (int($bytes_saved/$bytes_orig*10000) / 100) . "%)\n";
print "\n";

sub jpegCompress
{
    # File::Find calls this once per file, chdir'ed into the file's
    # directory with $_ set to the bare file name.
    if (m/\.jpe?g$/i) {
        $count_jpegs++;

        my $orig_size = -s $_;
        my $saved = 0;

        my $fullname = $File::Find::dir . '/' . $_;

        print "Inspecting $fullname\n";

        # Run Huffman table and progressive JPEG optimizations, then keep
        # whichever output is smaller.  Quote $_ so that file names
        # containing spaces survive the shell.
        `/usr/bin/jpegtran -copy none -optimize "$_" > "$_.opt"`;
        my $opt_size = -s "$_.opt";

        `/usr/bin/jpegtran -copy none -progressive "$_" > "$_.prog"`;
        my $prog_size = -s "$_.prog";

        if ($opt_size && $opt_size < $orig_size && $opt_size <= $prog_size) {
            move("$_.opt", "$_");
            $saved = $orig_size - $opt_size;
            $bytes_saved += $saved;
            $bytes_orig += $orig_size;
            $count_modified++;
            $count_optimize++;

            print " -- Huffman table optimization: "
				. "saved $saved bytes (orig $orig_size)\n";

        } elsif ($prog_size && $prog_size < $orig_size) {
            move("$_.prog", "$_");
            $saved = $orig_size - $prog_size;
            $bytes_saved += $saved;
            $bytes_orig += $orig_size;
            $count_modified++;
            $count_progressive++;

            print " -- Progressive JPEG optimization: "
				. "saved $saved bytes (orig $orig_size)\n";
        }

        # Clean up temp files
        unlink("$_.prog") if -e "$_.prog";
        unlink("$_.opt")  if -e "$_.opt";
    }
}

How to use this script

For starters, copy this script into a text file (such as optimize_jpegs.pl) and set it to be executable (chmod 755 optimize_jpegs.pl).

After the script is set up, pull the trigger…

$ ./optimize_jpegs.pl  /path/to/your/images/dir

That’s it.  The output should look something like this:

Inspecting ./phpXkWlcW.jpg
 -- Progressive JPEG optimization: saved 1089 bytes (orig 13464)
Inspecting ./phpCnBRri.jpg
 -- Progressive JPEG optimization: saved 1155 bytes (orig 34790)
Inspecting ./phpx6G3lD.jpg
 -- Progressive JPEG optimization: saved 742 bytes (orig 11493)

...

----------------------------
  Summary
----------------------------

  Inspected 21 JPEG files.
  Modified 21 files.
  Huffman table optimizations: 0
  Progressive JPEG optimizations: 21
  Total bytes saved: 63918

Wrap up

Many thanks to Stoyan for his post on jpegtran, and for all of the other performance ideas he has been sharing on his blog.  This script was easy to write once I knew the right techniques to run on our images.

optimize_jpegs.pl took about a minute to run on our thousands of images and shaved a few megabytes from all of those files combined.  This will be a great savings for us.


How to Improve JavaScript Latency in Mobile Browsers

Posted by Mike Brittain on January 20, 2009

Mobile browsers are really coming along.  Mobile Safari is built on top of WebKit and has just as much capability as the desktop version.  The same is true of Android’s browser.  BlackBerry’s browser, I understand, has improved tremendously over previous versions.  And the new offering from Palm centers application development around web technologies: HTML, CSS, and JavaScript.

As more applications and data move into the cloud, access to them through a browser must be easy and fast, and that is often not the case on mobile devices.  A web site can take anywhere from several seconds to several minutes to load all of its content.  And at the heart of many sites these days lie some common elements: JavaScript libraries.

Personally, I have avoided heavyweight libraries for mobile application development because I know they are a burden to the end user.  They are less of a burden for desktop users, who typically have broadband connections at home or at work.  So what do we do to improve this situation?

I propose that the mobile browser makers (or OS makers, in most cases) embed standard versions of common JavaScript libraries within their browsers.  Google already makes a number of these available as a hosted solution for web application developers: jQuery, YUI, Prototype, script.aculo.us, etc.  Other players, particularly in the CDN space, could also become involved in hosting these frameworks.  Nearly half of the libraries that Google hosts are larger than, for example, the 25 KB cache limit in mobile Safari.  By embedding a handful of these libraries, mobile browsers could cut out some of the overhead of mobile applications that rely on Ajax or heavy DOM manipulation.

How would you do this?  Likely by inspecting HTTP requests by URL.  Google’s hosted library URLs include version numbers, which lets developers pin their work to a specific version without worrying about quirks in future versions that could upset their apps.  When an application requests one of these embedded libraries, the browser can simply execute its local copy without making an external request.  If the application uses a newer version that is not embedded in the browser, the HTTP request proceeds as normal.  End users would get a slower experience than with an embedded framework, but no worse than they have now.
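
To make that concrete, the lookup a browser would need is tiny.  Here is a hypothetical sketch in Perl (the URLs and bundle names are illustrative; no browser exposes such an API today):

#!/usr/bin/perl

# Hypothetical sketch of the interception idea: version-pinned library URLs
# map to embedded copies; anything unrecognized falls back to a normal
# HTTP request.

use strict;

my %embedded = (
    'http://ajax.googleapis.com/ajax/libs/jquery/1.2.6/jquery.min.js'
        => 'bundled-jquery-1.2.6',
    'http://ajax.googleapis.com/ajax/libs/prototype/1.6.0.2/prototype.js'
        => 'bundled-prototype-1.6.0.2',
);

sub resolve_script {
    my ($url) = @_;
    return $embedded{$url} if exists $embedded{$url};  # skip the network
    return;                                            # fetch over HTTP
}

foreach my $url ('http://ajax.googleapis.com/ajax/libs/jquery/1.2.6/jquery.min.js',
                 'http://example.com/js/site.js') {
    print $url, ' -> ', (resolve_script($url) || 'fetch over HTTP'), "\n";
}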

I’m interested in hearing others’ thoughts about this idea.
