A new bridge between PHP and FFmpeg

AV is a new PHP extension that lets you make use of the functionality provided by FFmpeg, the popular open-source video processing tool. Unlike the existing FFmpeg extension, AV is designed for both decoding and encoding. One of the specific goals of this project is to make the creation of video files playable in HTML5-compliant web browsers as easy as possible. Used alongside the GD and QB extensions, AV makes PHP a powerful and flexible platform for authoring multimedia content.

The AV extension adds the following functions to PHP:

  • av_file_close
  • av_file_open
  • av_file_optimize
  • av_file_seek
  • av_file_stat
  • av_stream_close
  • av_stream_open
  • av_stream_read_image
  • av_stream_read_pcm
  • av_stream_write_image
  • av_stream_write_pcm

Consult the documentation for details on using these functions.

Extraction of contents from a file involves opening the file using av_file_open(), opening the appropriate streams using av_stream_open(), then reading from them using av_stream_read_image() or av_stream_read_pcm().
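For instance, grabbing a single frame as a thumbnail might look like the following sketch. The file name is a placeholder, and the exact semantics of the time argument are an assumption based on the transcoding example below; consult the documentation for the authoritative signatures.

```php
<?php
// Sketch: extract the first video frame of a movie as a PNG.
// "trailer.mov" is a placeholder file name.
$file = av_file_open("trailer.mov", "r");
$stat = av_file_stat($file);

// size the GD buffer to match the video stream's dimensions
$image = imagecreatetruecolor(
  $stat['streams']['video']['width'],
  $stat['streams']['video']['height']
);

$video = av_stream_open($file, "video");
$time = 0;
if (av_stream_read_image($video, $image, $time)) {
  imagepng($image, "thumbnail.png");
}
av_stream_close($video);
av_file_close($file);
```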

Creation of video files works in a similar manner: opening the file using av_file_open(), adding the appropriate streams using av_stream_open(), then writing to them using av_stream_write_image() or av_stream_write_pcm().
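A minimal creation sketch, assuming the same calling conventions as the transcoding example below (width/height passed as stream options, and a timestamp in seconds as the third argument); the dimensions, frame rate, and output name are arbitrary choices for illustration:

```php
<?php
// Sketch: render a 3-second clip from GD-drawn frames.
$file = av_file_open("moving-square.webm", "w");
$video = av_stream_open($file, "video",
  array("width" => 320, "height" => 240));

$image = imagecreatetruecolor(320, 240);
$black = imagecolorallocate($image, 0, 0, 0);
$white = imagecolorallocate($image, 255, 255, 255);
for ($t = 0.0; $t < 3.0; $t += 1 / 30) {
  // draw a white square sliding across a black background
  imagefilledrectangle($image, 0, 0, 319, 239, $black);
  $x = (int) ($t / 3.0 * 280);
  imagefilledrectangle($image, $x, 100, $x + 40, 140, $white);
  av_stream_write_image($video, $image, $t);
}
av_stream_close($video);
av_file_close($file);
```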

Video frames are transferred to and from FFmpeg as GD images. If the dimensions of a video frame do not match those of the GD image, the frame is scaled to fit.

PCM audio samples are transferred as binary strings. The sample format is 32-bit floating point, interleaved stereo, and the sample rate is always 44,100 Hz. Resampling occurs when the source material does not match these parameters.
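To illustrate the format, here is how one second of a 440 Hz test tone could be packed in pure PHP before being handed to av_stream_write_pcm() (the write call itself follows the conventions of the example below):

```php
<?php
// Build one second of a 440 Hz sine tone in the expected layout:
// 32-bit floats, interleaved stereo, 44,100 Hz.
$rate = 44100;
$pcm = "";
for ($i = 0; $i < $rate; $i++) {
  $v = sin(2 * M_PI * 440 * $i / $rate) * 0.5;
  $pcm .= pack("ff", $v, $v);   // left sample, then right sample
}
// 44,100 frames * 2 channels * 4 bytes = 352,800 bytes
```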

The following example demonstrates the transcoding of a QuickTime video file to MP4 and WebM:


$folder = dirname(__FILE__);

// open source file
$file_in = av_file_open("$folder/source-code.mov", "r");
$stat_in = av_file_stat($file_in);

// open output files
$file_out1 = av_file_open("$folder/source-code.mp4", "w");
$file_out2 = av_file_open("$folder/source-code.webm", "w");

// create image buffer
$width = $stat_in['streams']['video']['width'];
$height = $stat_in['streams']['video']['height'];
$image = imagecreatetruecolor($width, $height);

// open input streams
$a_strm_in = av_stream_open($file_in, "audio");
$v_strm_in = av_stream_open($file_in, "video");

// open output streams
$a_strm_out1 = av_stream_open($file_out1, "audio");
$v_strm_out1 = av_stream_open($file_out1, "video", array( "width" => $width, "height" => $height ));

$a_strm_out2 = av_stream_open($file_out2, "audio");
$v_strm_out2 = av_stream_open($file_out2, "video", array( "width" => $width, "height" => $height ));

$v_time = 0;
$a_time = 0;
while(!av_file_eof($file_in)) {
  // read from video stream if it's behind the audio stream
  if($v_time < $a_time) {
    // read video frame
    if(av_stream_read_image($v_strm_in, $image, $v_time)) {
      // write video frame
      av_stream_write_image($v_strm_out1, $image, $v_time);
      av_stream_write_image($v_strm_out2, $image, $v_time);
    } else {
      // no more video frames
      $v_time = INF;
    }
  } else {
    // read audio segment
    if(av_stream_read_pcm($a_strm_in, $pcm, $a_time)) {
      // write audio segment
      av_stream_write_pcm($a_strm_out1, $pcm, $a_time);
      av_stream_write_pcm($a_strm_out2, $pcm, $a_time);
    } else {
      // no more audio
      $a_time = INF;
    }
  }
}

// close files
av_file_close($file_in);
av_file_close($file_out1);
av_file_close($file_out2);

The code should be self-explanatory by and large. The only part worth extra attention is the while loop where the copying occurs. Instead of reading every frame from the video stream first and then proceeding to the audio stream, we alternate between the two. This arrangement reflects how video files are stored on disk. The various streams that make up a movie are "muxed" together: packets containing visual information interleave with those containing aural information:


If, instead of processing the streams in parallel, we had chosen to process the video stream first, then on the read side audio packets would pile up in memory (as they are needed later), and on the write side video packets would pile up as well, since they cannot be committed to disk until the audio packets ahead of them in time are written. Memory usage would thus be far higher.

The video file used in the example was obtained from www.hd-trailers.net. It's a clip from the movie "Source Code." Here's how it looks after the conversion:

Before we write a video frame to the output files, we could make changes to it. GD's imagefilter() is one way to do this. The following shows a sepia effect applied to the sample video:

Two lines were added to the script:

    if(av_stream_read_image($v_strm_in, $image, $v_time)) {
      imagefilter($image, IMG_FILTER_GRAYSCALE);
      imagefilter($image, IMG_FILTER_COLORIZE, 50, 25, 0);
      av_stream_write_image($v_strm_out1, $image, $v_time);
      av_stream_write_image($v_strm_out2, $image, $v_time);
    } else {

The QB extension offers another, more flexible way for manipulating the image. The following is a sepia filter implemented in PHP+QB:

/**
 * @engine qb
 * @param  image          $image
 * @param  float32        $intensity
 * @local  float32[4][4]  $YIQMatrix
 * @local  float32[4][4]  $inverseYIQ
 * @local  float32[4]     $k
 */
function sepia(&$image, $intensity) {
  $YIQMatrix = array(
    array(0.299,  0.596,  0.212, 0.000),
    array(0.587, -0.275, -0.523, 0.000),
    array(0.114, -0.321,  0.311, 0.000),
    array(0.000,  0.000,  0.000, 1.000),
  );
  $image = mv_mult($YIQMatrix, $image);
  $k = array(1, 0, 0, 1);
  $image *= $k;    // clear I and Q
  $k = array(0, $intensity, 0, 0);
  $image += $k;    // set I to intensity
  $inverseYIQ = inverse($YIQMatrix);
  $image = mv_mult($inverseYIQ, $image);
}

The output:

More interesting effects can be achieved given the high level of programmability. Here's the result from using the reflection filter developed in Tutorial 4:

And this is the result produced by the Ascii-Mii Pixel Bender kernel:

It's Source Code...in ASCII!