Monthly Archives: October 2011

Upload S3 files directly with AJAX

One great feature of S3 is that you can upload directly to your S3 bucket through a POST action. The flow is very simple: you only need to generate some params describing where the file is going to be uploaded, plus the permissions/metadata. With the Amazon PHP SDK you can do it easily:

require_once 'sdk.class.php';

$s3_uploader = new S3BrowserUpload();
$params = $s3_uploader->generate_upload_parameters($bucket, $expires, array(
	'acl' => AmazonS3::ACL_PUBLIC,
	'key' => $path_file,
	'Content-Type' => $mime,
));

The $params result will be an array; the values in $params['inputs'] need to be passed as hidden form fields, and finally you need to add a file input where the user will select the file (after all the other params):

<input type="file" name="file" />

You can also specify a redirect URL to send the user back to your page after the upload finishes.
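To make it concrete, rendering the form in PHP could look something like this (just a sketch, assuming the $params array generated above):

<form action="http://<?php echo $params['form']['action']; ?>" method="post" enctype="multipart/form-data">
	<?php foreach ($params['inputs'] as $name => $value): ?>
	<input type="hidden" name="<?php echo htmlspecialchars($name); ?>" value="<?php echo htmlspecialchars($value); ?>" />
	<?php endforeach; ?>
	<!-- the file input goes after all the other params -->
	<input type="file" name="file" />
	<input type="submit" value="Upload" />
</form>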

This works fine for “traditional uploads”, but what if you want to take advantage of newer APIs, like storing string content into a file, grabbing the current canvas image and saving it directly to S3 without sending it to a server, or making a thumbnail on the fly and uploading it directly to S3?

The first problem here is that you cannot make cross-domain calls, since S3 doesn't offer a way to add an Access-Control-Allow-Origin header which would allow posting directly; fortunately, with postMessage you can upload an HTML page and a JavaScript file to your S3 bucket and use them as a “proxy”.

The other problem is that postMessage only allows types that can be cloned; for security reasons a File/FileReader cannot be cloned, so you need to pass the contents directly instead of the object.

In this example I will show you how to:

  1. Select an image
  2. Resize it to half size through canvas
  3. Get the base64 encoded binary
  4. Pass it to postMessage along with the S3 POST data
  5. Decode the base64
  6. Pass it through AJAX to the S3 subdomain

In fact it's very simple and uses only standard features, but unfortunately, as HTML5 support is still a work in progress in most browsers, it currently only works in Firefox 4+ and Chrome 11+.

First let's review the part that will be stored in your S3 bucket and serve as a proxy; we will have three files:

  1. index.html
    Will contain only the code to load the JavaScript and nothing more (see the sketch below)
  2. base64-binary.js
    A handy script which decodes a base64 image into an ArrayBuffer, which can be converted into a Blob and sent as a “file” in the POST request
  3. upload.js
    This script contains the logic: it receives the message with the base64 encoded image and the POST params and converts them into an AJAX request

You can see the full files here:
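For reference, index.html really is as minimal as it sounds; a sketch:

<!DOCTYPE html>
<html>
<head>
	<!-- load the decoder and the upload logic; nothing else is needed -->
	<script type="text/javascript" src="base64-binary.js"></script>
	<script type="text/javascript" src="upload.js"></script>
</head>
<body>
</body>
</html>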

The core part of the upload.js is:

window.addEventListener("message", function(event) {
	var data = event.data; //get the data

	//upload data through a blob
	var separator = 'base64,';
	var index = data.content.indexOf(separator);
	if (index != -1) {
		var bb = new BlobBuilder();

		//decode the base64 binary into an ArrayBuffer
		var barray = Base64Binary.decodeArrayBuffer(data.content.substring(index + separator.length));
		bb.append(barray);

		var blob = bb.getBlob();

		//pass post params through FormData
		var formdata = new FormData();
		for (var param_key in data.params) {
			formdata.append(param_key, data.params[param_key]);
		}
		formdata.append("file", blob, "myblob.png"); //add the blob

		//finally post the file through AJAX
		var xhr = new XMLHttpRequest();
		xhr.open("POST", data.url, true);
		xhr.send(formdata);
	}
}, false);

As you can see, we are only decoding the base64 and converting it into a Blob, passing all the params into a FormData and uploading it through AJAX.

As this file will be stored on S3 there won't be any cross-domain issue, but the browser needs to support Blob and FormData for this to work.
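If you want to fail gracefully you can feature-detect first; a small sketch (the vendor-prefixed BlobBuilder names are the ones browsers were shipping at the time):

//check the APIs this proxy depends on
var BlobBuilderImpl = window.BlobBuilder || window.MozBlobBuilder || window.WebKitBlobBuilder;
if (!BlobBuilderImpl || !window.FormData || !window.postMessage) {
	alert("This browser can't do direct S3 uploads through this proxy");
}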


On the page that does the upload we are going to have a bit more code. First we need code to generate the params for S3; this is done easily in PHP with:

//init aws-sdk
require_once 'sdk.class.php';

//response will be in json
header('Content-Type: application/json');

$bucket = 'danguer-blog'; //your s3 bucket
$bucket_path = 'upload-html5/tmp'; //"dir" where it is going to be stored inside the bucket
$filename = isset($_POST['name']) ? $_POST['name'] : null;
$filemime = isset($_POST['mime']) ? $_POST['mime'] : null;

//handle errors
if (empty($filename)) {
	print json_encode(array(
		'error' => 'must provide the filename',
	));
	exit;
}

if (empty($filemime)) {
	print json_encode(array(
		'error' => 'must provide the mime',
	));
	exit;
}

if (strpos($filename, '..') !== false) {
	print json_encode(array(
		'error' => 'no relative paths',
	));
	exit;
}

$expires = '+15 minutes'; //token will be valid only for 15 minutes
$path_file = "{$bucket_path}/{$filename}";
$mime = $filemime; //help the browsers to interpret the content

//get the params for s3 to upload directly
$s3_uploader = new S3BrowserUpload();
$params = $s3_uploader->generate_upload_parameters($bucket, $expires, array(
	'acl' => AmazonS3::ACL_PUBLIC,
	'key' => $path_file,
	'Content-Type' => $mime,
));

print json_encode(array(
	'error' => '',
	'url' => "http://{$params['form']['action']}",
	'params' => $params['inputs'],
));


Next, we need to present the file input and create the iframe pointing to the S3 files we uploaded earlier:

	<!-- hidden frame -->
	<iframe id="postMessageFrame" src="<?=$url_iframe?>"></iframe>

	<h3>Upload Files</h3>
	<input type="file" accept="image/*" onchange="uploadFile(this.files)">

The JavaScript code is a bit longer, since once the file is selected we need to resize it:

function resizeImage(file, mime) {
	var canvas = document.createElement("canvas");
	var ctx = canvas.getContext("2d");
	var canvasCopy = document.createElement("canvas");
	var ctxCopy = canvasCopy.getContext("2d");

	var reader = new FileReader();
	reader.onload = function() {
		var image = new Image();
		image.onload = function() {
			//copy the image at full size
			canvasCopy.width = image.width;
			canvasCopy.height = image.height;
			ctxCopy.drawImage(image, 0, 0);

			//scale to just half the size
			canvas.width = image.width * 0.5;
			canvas.height = image.height * 0.5;
			ctx.drawImage(canvasCopy, 0, 0, canvasCopy.width, canvasCopy.height, 0, 0, canvas.width, canvas.height);

			//convert into an image and get it as binary base64 encoded
			//to pass through postMessage
			var url = canvas.toDataURL(mime, 0.80);
			uploadImage(file, url);
		};

		image.src = reader.result;
	};

	//read the contents of the selected file
	reader.readAsDataURL(file);
}
As you can see, we are opening the selected file, drawing it into a canvas, copying it into another canvas at half the size, and getting a base64 binary in the same mimetype as the original file; finally we call the function that uploads it:

function uploadImage(file, dataURL) {
	//load signature
	var signature_params = {
		name: file.name,
		mime: file.type
	};

	$.post(url_ajax_signature, signature_params, function(response) {
		//send through the crossdomain page
		var windowFrame = document.getElementById('postMessageFrame').contentWindow;
		var data = {
			params: response.params,
			url: response.url,
			content: dataURL
		};

		//send the data of the s3 request signature and the base64 binary data
		windowFrame.postMessage(data, 'http://<?=$url_iframe_host?>');
	}, 'json');
}

We first get the POST params for S3 through AJAX from the PHP file, and then we send that data through a postMessage to the S3 iframe, so it can process it and upload to S3 without first sending the file to our server and uploading it from there. With this you can upload directly to S3 from the client's browser, and if needed you can later check the contents to reject unwanted files or malware.

All the code can be seen here:


Base64 Binary Decoding in Javascript

Currently all the base64 decoders in JavaScript work on strings and are not suitable for binary data. The common example: you use a canvas element and get its base64 representation ( canvas.toDataURL() ), upload it, and do the base64 decoding on the server. If you try to process the data in JavaScript instead, you will find it gets corrupted, since it's processed as a string.

This script uses the new JavaScript typed arrays. The basic usage is decoding the base64 into a Uint8Array, which you can call like:

var uintArray = Base64Binary.decode(base64_string);

The other method wraps the Uint8Array into an ArrayBuffer; this is very helpful, for example, to upload a file through FormData ( formdata.append("file", arraybuffer, "test.png") ); you can use it with:

var byteArray = Base64Binary.decodeArrayBuffer(base64_string);
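For example, to decode what canvas.toDataURL() returns you first need to strip the data URL header; a quick sketch:

var canvas = document.querySelector("canvas");

//canvas.toDataURL() returns something like "data:image/png;base64,iVBOR..."
var dataURL = canvas.toDataURL();
var base64_string = dataURL.substring(dataURL.indexOf('base64,') + 'base64,'.length);

//decode into a Uint8Array and inspect the raw bytes
var uintArray = Base64Binary.decode(base64_string);
var isPNG = (uintArray[0] == 0x89 && uintArray[1] == 0x50); //first two PNG magic bytes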

You can check the code here:


Login through Google+ and OAuth API

One of the common scenarios is to log in through an external account like Google+, Twitter or Facebook.
In this code snippet I will show how to log in using the Google+ API.

First you need to register your API at the Google APIs Console, enable the Google+ API service and create a new application; in the “Redirect URI” you need to pass the path of the script and add the following: “googleplus.php?op=redirect”. I will explain why later, after you see the code; you can then split the code into other, more elegant paths/scripts.

From the generated application you will need the Redirect URI and the Client ID.

Google OAuth works in the following way:

  1. Generate a URL using params for the services you want to access (scope), your Client ID and your Redirect URI (see the sketch after this list)
  2. Redirect the user to this URL
  3. After the user has approved access, they will be redirected back with the params in the URL hash, so your server won't see them; you can pass them to your server through AJAX or, in this case, through a GET request
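For step 1, building that URL in PHP could look like this (a sketch; the accounts.google.com endpoint and the plus.me scope are from Google's OAuth documentation of the time, and $client_id/$redirect_uri are the values from your application):

$client_id = 'YOUR_CLIENT_ID';
$redirect_uri = 'http://example.com/googleplus.php?op=redirect';
$scope = 'https://www.googleapis.com/auth/plus.me';

//response_type=token requests the implicit flow, so the
//access_token comes back in the URL hash
$auth_url = 'https://accounts.google.com/o/oauth2/auth'
	. '?response_type=token'
	. '&client_id=' . urlencode($client_id)
	. '&redirect_uri=' . urlencode($redirect_uri)
	. '&scope=' . urlencode($scope);

header("Location: {$auth_url}");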

In the code, the “?op=redirect” operation shows just a white page with a JavaScript redirect that converts the hash location into a normal GET request; this lets the server grab the access_token, which you can then verify against Google to check that it is valid.
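The redirect page itself can be as small as this (a sketch; the “op=check” operation name is made up for illustration):

<script type="text/javascript">
//convert the hash params (e.g. #access_token=...) into a normal GET request
window.location.href = 'googleplus.php?op=check&' + window.location.hash.substring(1);
</script>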
Then this code part verifies the token:

$access_token = $_GET['access_token'];

//do something with the token; first check it is real
$data = @file_get_contents("https://www.googleapis.com/oauth2/v1/tokeninfo?access_token={$access_token}");
if ($data) {
	print $data;
} else {
	print "Token not valid!";
}
The code is not prepared to handle errors (they will show up as $_GET['error'] after the redirect), so you need to handle that for your scenario.




Using php for analyzing apache logs

Apache has a nice feature: it can send the log output through a pipe. This avoids having to configure syslog or create a listening server in PHP for syslog-forwarded logs.

The changes in Apache are really simple; you just need to write something like:

LogFormat "%v %A %D \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" mylog
CustomLog "|/usr/bin/php5 [PATH_TO_SCRIPT]/apache-stdin.php log" mylog
ErrorLog "|/usr/bin/php5 [PATH_TO_SCRIPT]/apache-stdin.php error"

In apache-stdin.php the flow is very simple; you just need to do:

$fp = fopen('php://stdin', 'r');
do {
	//read a line from apache; if there is none, this will block until there is one
	$data = fgets($fp);
	$data = trim($data); //remove line end

	if (empty($data)) {
		break; //no more data so finish
	}

	//process the data
} while (true);


As you can see, it's basically reading a line and processing it.
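For instance, the “process the data” step could split the line into fields; here is a naive sketch for the LogFormat above (it will break if a quoted field contains escaped quotes):

//fields: vhost, local ip, time taken (us), request line, status, bytes sent, referer, user agent
$pattern = '/^(\S+) (\S+) (\d+) "([^"]*)" (\d+) (\S+) "([^"]*)" "([^"]*)"$/';
if (preg_match($pattern, $data, $matches)) {
	list(, $hostname, $local_ip, $time_us, $request, $status, $bytes, $referer, $agent) = $matches;
	//e.g. store in a file, insert into a db, etc.
}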

I’ve built a helper script around this at:

You can pass an additional param to specify whether it's the normal log or the error log, as in the Apache configuration I posted above.

You can also configure the log format you are using in Apache so it gives you a simple description, like:

$format = '%v %A %D \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"';

With this, it will give you an array like:

'hostname' => 'example.com',
'local_ip' => '',
'time_ms' => 0.0002,
'first_line_request' => 'GET / HTTP/1.1',
'status_last' => 200,
'bytes_sent' => 2048

From there you can use it for storing in a file (the default behavior of this script), inserting into a DB, using SimpleDB, etc.


Installing php-fpm and apache2

I've read a lot of tutorials on using nginx and php-fpm.
I'm still using apache2, mostly due to the ease of .htaccess, which allows rewriting and password protection (besides using other modules like mod_svn, etc.).

Even though I haven't found a solution as transparent as nginx's, here is what I'm using to make php-fpm work with apache2. Here are the steps in Debian:

  • echo "deb http://packages.dotdeb.org stable all" >> /etc/apt/sources.list
  • apt-get update
  • apt-get install libapache2-mod-fastcgi apache2-mpm-worker php5-fpm php5-common php5-cli
  • apt-get install any other php5-* modules you want
  • Add the following to a new file: /etc/apache2/mods-enabled/fpm.load
    AddType application/x-httpd-fastphp5 .php .phtml
    Action application/x-httpd-fastphp5 /fast-cgi-fake-handler
  • in /etc/php5/fpm/pool.d/www.conf change:
    listen =

    to:

    ; listen =
    listen = /var/run/php-fpm.socket

    This will enable the unix socket, which should be faster

  • /etc/init.d/php-fpm restart
  • /etc/init.d/apache2 restart


With this you will be able to use php-fpm through a /fast-cgi-fake-handler; for example, in your /etc/apache2/sites-enabled/000-default your file will look like:

FastCGIExternalServer /var/www/fast-cgi-fake-handler -socket /var/run/php-fpm.socket
DocumentRoot /var/www

The system will rewrite the /index.php path into /fast-cgi-fake-handler/index.php, which will be passed to FastCGI.

This has two problems:

  1. You need to set the FastCGIExternalServer like ${DOCUMENT_ROOT}/fast-cgi-fake-handler in all your virtual hosts to make it work
  2. Zend Framework and others that use the “consume all URLs” pattern to pass everything through index.php will not work, as this will generate an endless loop. There is an easy solution: in your .htaccess, after RewriteEngine On, use:
    RewriteRule ^fast-cgi-fake-handler/ - [L,NC]

    This will skip processing all URLs which start with “fast-cgi-fake-handler”; or of course use a RewriteCond to avoid this. A full example follows below.
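Putting it together, the .htaccess for that scenario could look like this (a sketch; the index.php rule is the usual Zend Framework front controller rewrite, not something specific to this setup):

RewriteEngine On

# let requests for the fastcgi handler pass through untouched
RewriteRule ^fast-cgi-fake-handler/ - [L,NC]

# the usual "send everything else to index.php" front controller rule
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^.*$ index.php [NC,L]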