Composer on Pagoda Box

Composer recently switched from using GitHub’s HTTP interface to using the GitHub API to load dependencies. This caused issues on Pagoda Box (and probably elsewhere) as the number of unauthenticated API requests stacked up. For me, deployments were being aborted because GitHub blocked the further requests my application needed to fulfil its Composer requirements. The solution is to bypass the GitHub API by installing from source. Turns out this is actually very simple.

If you have something like this in your boxfile:

php composer.phar install

then you need to update this to force installation from source like:

php composer.phar install --prefer-source
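For context, a hedged sketch of how that might look in a boxfile — the surrounding keys here (web1, after_build) are assumptions about a typical setup, so match them to your own boxfile rather than copying this verbatim:

```yaml
# hypothetical boxfile fragment – hook and key names depend on your deployment
web1:
  after_build:
    - "php composer.phar install --prefer-source"
```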


Integrating offline documentation into my workflow with Dash

People like to compare programming languages, or their favourite frameworks. Everyone tends to have a favourite, and the subsequent discussions (arguments/flame wars) very rarely add any value. Lots of non-PHP programmers delight in the various “PHP sucks!” articles; sometimes they’re right, but often they miss the point. PHP can suck, but so can any programming language when used incorrectly. That said, I’m pretty fond of PHP. It’s been my programming language of choice for the last 10 years, give or take. I happen to like it, and I tend to think there’s a lot to like about it. If I were to distill this down to a killer feature, though (as people tend to do), I’d suggest it’s nothing in the language itself. For me the killer feature of PHP has always been the documentation.

The PHP docs rock. They look awful (hopefully not for much longer), but they add an untold amount of value to the language. The comments, too, are usually excellent. I’ve lost count of the number of times the answer I was looking for (whatever the question) was provided either by the documentation itself or by the user-contributed notes beneath. If you’re a PHP dev then you will have found this too – 100%. The only problem with the PHP docs is how much I rely on them, and that they rely on an active internet connection. Admittedly, times without one are few and far between, but internet access can be flaky or non-existent – travelling on trains, planes (or automobiles), for example. Recently, though, I’ve found a solution to this issue: a great Mac app called Dash.

[Screenshot: launching Dash from Alfred]

Dash describes itself as “… an API Documentation Browser and Code Snippet Manager. Dash stores snippets of code and searches offline documentation”. If you want a searchable copy of the PHP docs available offline, then Dash is for you. Of course, Dash is not limited to PHP; there are docsets for most languages and libraries you can think of. This alone is powerful. Recently, though, I’ve picked up on a couple more features in Dash that make it a real killer for me, and now, whether I have internet or not, I tend to use Dash for all documentation reading. One of these is the hookup it has with Alfred, another excellent Mac app. I can launch Alfred with the shortcut ⌥+SPACE, and by typing “dash SOMETHING” the docs are immediately searched for that item – for example “dash array_intersect”.

[Screenshot: a function definition file within Netbeans]

This is pretty sweet. It’s quick, it’s offline, and somehow it feels distraction-free as it’s outside of the browser. This definitely scratched an itch for me, and I wondered if Dash could do anything more that I would find useful. It turns out it could. I’m a big fan of the Netbeans IDE for all my PHP development. Yeah, I know all the cool kids are using Sublime Text with its infinite customisation, but I like Netbeans; I’ve been using it for years and I’m happy with it. One thing I’ve never liked so much, though, is the way it handles documentation for core PHP functions and libraries. Hovering over a function will display the PHPDoc for that code, whereas ⌘+Click will take you to the function. This works great for custom code but not so great for the core language code. I want to see this documentation on the PHP site; the Netbeans method of displaying it within a function definition file just doesn’t cut it. Fortunately, I’ve found a method of integrating Dash here too – using the “Look up in Dash” System Service. I’ve been a Mac user for about 3 years, and I’ve always been a little confused by Services; this is the first time I’ve made any use of them. Through a System Service I’ve been able to connect a shortcut in Netbeans to Dash: I’ve chosen ⌘+⇧+D. By highlighting a piece of code and using that shortcut, Dash is launched and immediately searches for that code. Of course you can use any shortcut you like, but this works for me. Pretty simple in the end, and now I have Dash linked into my workflow quite effectively.

A new home for my blog

It seems like every other post on this blog has been about a new location for my blog. For the last couple of years I’ve happily been hosting my site with a single provider. They’ve provided a great service, but I just felt like I needed a change. Predominantly this was because I was doing some investigation into cloud hosting solutions for work and realised I’ve never had a proper play with Amazon’s EC2 – the backbone of much of the “cloud” infrastructure on the web. So here I am, in a new home for the foreseeable future. I’m also attempting to blog more frequently. Hopefully both things will go well.

Using multitail for monitoring multiple log files

Like many developers, my job tends to include a number of low-level sysadmin tasks. I generally have a terminal open for most of the day with one thing or another, whether working locally or SSH’ed into one of our remote servers. Once an app is in production it’s really handy to keep an eye on the server logs to see what’s happening and be able to respond proactively to errors as they occur. Multitail is a great tool I found for monitoring multiple log files at the same time, helping to keep all of this monitoring in a single window.

In simple terms, multitail allows you to monitor multiple files simultaneously. In my case this is almost always the Apache error_log file, but it could be access logs, FTP logs or anything really.

A simple use of multitail could be:

multitail \
-l "ssh root@REMOTE.IP.1 tail -f /usr/local/apache/logs/error_log" \
-l "ssh root@REMOTE.IP.2 tail -f /usr/local/apache/logs/error_log"

One of the most powerful features in multitail is the ability to add exceptions based on regular expression patterns. This allows you to filter out any errors you’re not as interested in. For example, if you’re monitoring a log for PHP errors you may be less interested in 404 errors. This can lead to more advanced multitail usage like the following, which includes named windows and multitail divided into vertical columns:

multitail -du -C -s 2 \
-Ev "does not exist" -Ev "filter this" -Ev "dont show this" \
-t WindowName1 -l "ssh root@REMOTE.IP.1 tail -f /usr/local/apache/logs/error_log" \
-t WindowName2 -l "ssh root@REMOTE.IP.2 tail -f /usr/local/apache/logs/error_log" \
-t WindowName3 -l "ssh root@REMOTE.IP.3 tail -f /usr/local/apache/logs/error_log" \
-t WindowName4 -l "ssh root@REMOTE.IP.4 tail -f /usr/local/apache/logs/error_log"

Installation of multitail is really simple if you’re using Homebrew: simply run “brew install multitail” and you’re ready to go.

Removing duplicate rows in MySQL

It’s often the case that you find application issues only at the stage they become problematic. For me, MySQL seems to be one of the most common sources of these, whether it’s something as simple as a missing index or something far more fundamental in your schema. A recent issue I came across had been caused by some far-from-perfect code associated with updating elements within an ecommerce CMS over an API connection. A table that should realistically have no more than 10,000 rows had grown to over 4 million. This had caused an almost inevitable slowdown in all interactions with the table. Looking at the table, there was a huge amount of data duplication. The data tended to be duplicated on all columns but the primary key; the question was how to remove this duplicate data without running a long-running PHP or shell script against the production database. The answer was surprisingly simple – one of those times where a single SQL command is all that’s needed.

ALTER IGNORE TABLE `table_with_duplicates`
ADD UNIQUE INDEX `remove_duplicates` (`col_1`, `col_2`, `col_3`);

An explanation of how this works can be seen on the MySQL site:

IGNORE is a MySQL extension to standard SQL. It controls how ALTER TABLE works if there are duplicates on unique keys in the new table or if warnings occur when strict mode is enabled. If IGNORE is not specified, the copy is aborted and rolled back if duplicate-key errors occur. If IGNORE is specified, only the first row is used of rows with duplicates on a unique key. The other conflicting rows are deleted. Incorrect values are truncated to the closest matching acceptable value.
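As a sketch of what this does in practice, here is a hypothetical table (the names and columns are illustrative, matching the earlier statement). Note that ALTER IGNORE was removed in MySQL 5.7, so this applies to 5.6 and earlier:

```sql
-- hypothetical table: `id` is unique, but the data columns are duplicated
CREATE TABLE table_with_duplicates (
    id INT AUTO_INCREMENT PRIMARY KEY,
    col_1 VARCHAR(64),
    col_2 VARCHAR(64),
    col_3 VARCHAR(64)
);

INSERT INTO table_with_duplicates (col_1, col_2, col_3) VALUES
    ('a', 'b', 'c'),
    ('a', 'b', 'c'),   -- duplicate of the first row on the data columns
    ('x', 'y', 'z');

-- keeps the first row of each duplicate group, deletes the rest,
-- and leaves behind a unique index preventing future duplicates
ALTER IGNORE TABLE table_with_duplicates
    ADD UNIQUE INDEX remove_duplicates (col_1, col_2, col_3);

-- the table now holds two rows: ('a','b','c') and ('x','y','z')
```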

Working with Zend Tool in multiple dev environments

On any given Zend Framework project I can be working in 2 or 3 locations – my work PC, home PC or my MacBook. My source code will always be in Subversion and I usually work on a development server before pushing completed work to the production server. In this kind of environment I’ve never been too sure where exactly I should work with Zend_Tool.

As I see it, there are two options:

  • Set up Zend_Tool locally in each dev environment and push to the dev server from there, checking in the Zend_Tool manifest etc. with each Zend_Tool usage.
  • Use Zend_Tool directly on the dev server and then download each addition/alteration to push into SVN.

I would be inclined to say the most reliable way would be the multiple Zend_Tool setup, but I’d be interested to hear if people can think of any potential issues with this or reasons why I should make a different choice.

n.b. I originally posted this as a question on Stack Overflow – feel free to drop in over there and answer the question.

Simple introduction to using Oauth with Zend_Service_Twitter

The Zend Framework is slowly changing the way I develop websites. I say slowly because the documentation for much of the framework is sadly lacking: it is overly complex at times, and at other times it lacks required detail or is simply out of date. While putting together a Twitter-based application recently I came across such an occasion, relating to Twitter’s recent decision to turn off basic authentication. Hopefully this might help someone else too.

If you’re looking to build a Twitter application using the Zend Framework then there is some half-decent information on the Zend Framework site in the section dealing with OAuth. However, it is lacking in some detail, and looking at the documentation for Zend_Service_Twitter was also not particularly useful as it has not been updated with examples of how to use OAuth. What I’ve put together is a really simple example of the process you could follow to make this work. The beauty of the Zend Framework is that there are always a number of ways to achieve the same task, so this is by no means the “right” way to do it; that being said, it works for me.

First of all I added the configs I would need to my config file, application.ini:

oauth_consumer.callbackUrl = ""
oauth_consumer.siteUrl = ""
oauth_consumer.consumerKey = "MY_CONSUMER_KEY"
oauth_consumer.consumerSecret = "MY_CONSUMER_SECRET"

It’s probably worth noting that I use sessions to persist a few values in this example; they are set up like this within my TwitterController.php file:

class TwitterController extends Zend_Controller_Action
{
    protected $session;

    public function init()
    {
        $this->session = new Zend_Session_Namespace('Default');
        // etc..
    }
}

Given the multitude of options with regard to how your Zend application might be put together, I will not tell you where the rest of the code should be placed, as that’s completely dependent on your application.

First of all, get your request token and then redirect the user to Twitter so they can grant access to your application:

// within TwitterController::authAction
$config = $this->getInvokeArg('bootstrap')->getOption('oauth_consumer');
$consumer = new Zend_Oauth_Consumer($config);
$token = $consumer->getRequestToken();
$this->session->request_token = serialize($token);
$consumer->redirect(); // send the user to Twitter to authorise the app

Following the authorisation at Twitter, the user will be returned to your callback URL, identified in your config file as oauth_consumer.callbackUrl. Using a combination of your request token and the response from Twitter, the unique user access token is generated using the getAccessToken method of Zend_Oauth_Consumer.

// within TwitterController::callbackAction
$config = $this->getInvokeArg('bootstrap')->getOption('oauth_consumer');
$consumer = new Zend_Oauth_Consumer($config);
$access_token = $consumer->getAccessToken($this->_request->getQuery(), unserialize($this->session->request_token));

This was all pretty simple: you now have an access token that you can store for all later usage. It was unclear from the documentation, though, what the next step should be – all the examples of Zend_Service_Twitter used basic authentication. Looking at the source code, I noticed that for a valid signature you have to pass the Zend_Service_Twitter constructor an array of options that includes the config variables from your original request to Zend_Oauth_Consumer as well as your access token and Twitter screen name:

$token = unserialize($user->getUserToken()); // retrieve token from storage
$configs = $this->getInvokeArg('bootstrap')->getOption('oauth_consumer');
$twitter = new Zend_Service_Twitter(array(
    'username' => $token->screen_name,
    'accessToken' => $token,
    'consumerKey' => $configs['consumerKey'],
    'consumerSecret' => $configs['consumerSecret'],
    'callbackUrl' => $configs['callbackUrl']
));
$response = $twitter->account->verifyCredentials();

That’s it – you are now able to make use of all the methods within Zend_Service_Twitter. Happy tweeting!

Moving from Media Temple to Linode

This weekend I finally found the time and motivation to do something that I probably should have done a couple of years ago: I started moving my sites away from Media Temple’s Grid Server. If you’re reading this post then I guess the migration to my shiny new Linode VPS has been successful. I was very tempted to write an in-depth discussion of why I left Media Temple and why I chose Linode. I decided not to for a few reasons:

  • This could just be my opinion – feel free to do your own research though
  • Media Temple are likely to be too busy schmoozing at industry parties to care
  • It’s sunny outside and I can smell BBQ
  • I’ve already exhausted too many hours trying to make my Grid Server usable; I don’t intend to waste any more of my time even thinking about them
  • I’m a geek with a shiny new toy that doesn’t suck, and I intend to play with it now

Getting Zend Framework working on Media Temple’s Grid Server

I’ve just been through setting up Zend Framework on my Grid Server from Media Temple for the first time. There are a few gotchas in there, so I thought I’d share my experiences in the hope of saving headaches for others in the future.

First things first, you will need to grab an up-to-date version of Zend Framework for your account. I plan to use Zend for a number of sites, so I’ll create a folder in my home directory and call it Zend. Since I might want to switch between different versions of Zend, I’m going to check out each release into its own tagged directory, so from within the Zend folder I’ll run this command:

[shell]svn export[/shell]

You can then reference Zend for each site by creating a symbolic link, e.g. from within /domains/ run

[shell]ln -s ~/Zend/release-1.10.4/library/Zend[/shell]

Now then, at this stage this is all fairly obvious and will match your Zend setup on most server configurations. What makes Grid Server a special case is the fact that it still runs PHP 4 by default. If you run

[shell]php --version[/shell]

from within the shell, you will likely see PHP 4.4.8 reported for the CLI. This is lame for a number of reasons, but specifically for Zend Framework: Zend uses PHP 5-specific syntax, so when you first set up Zend_Tool you will run into your first major issue. If you have followed the instructions on the Zend site you will likely have created an alias so that “zf” points to your script file*, something like this:

[shell]alias zf='~/Zend/release-1.10.4/bin/'[/shell]

However, if you run that command you will get

[php]Parse error: syntax error, unexpected T_STRING, expecting T_OLD_FUNCTION or T_FUNCTION or T_VAR or '}'[/php]

which is basically the “you’re using PHP 4” error. Annoying, but actually simple to fix. Open the script up in vi and you’ll see a block of code that looks like this:

if test "@php_bin@" != '@'php_bin'@'; then
    PHP_BIN="@php_bin@"
elif command -v php 1>/dev/null 2>/dev/null; then
    PHP_BIN=`command -v php`

Basically, this block of code uses a number of methods to locate where PHP is installed. We don’t need it to do this hard work though: we know where it is, and we need to define which version we want it to use. So, at the end of this block, or in place of the entire block, we can add this line:
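The line itself has been lost from this post; the idea is to hard-code PHP_BIN to the PHP 5 CLI. A minimal sketch, where the binary name php5 is an assumption about Grid Server’s setup rather than something confirmed here:

```shell
# Hypothetical: skip the detection logic and force the PHP 5 CLI.
# "php5" is an assumed binary name – use the actual PHP 5 path on your server.
PHP_BIN=php5
echo "$PHP_BIN"
```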


Simple as that – Zend_Tool will now use the PHP 5 executable for all the command line goodness. From here on in, it’s up to you. Happy coding.

* There’s no point adding all these handy little aliases if you’re going to lose them each time your shell session ends. I would usually add my aliases to the .bashrc file, but since this file isn’t used on Grid Server you instead need to place them in your .bash_profile file, located in your home directory. For example:

alias ll='ls -la'
alias zf='~/Zend/release-1.10.4/bin/'

Creating a positive coding legacy

It’s a sad fact that for many developers day-to-day work will involve dealing with legacy code and, invariably, this code sucks. It can be global variables, lack of documentation, code without test coverage, or all of the above and more. We’ve probably all been angered by it, cursing the developers we inherited the code from. How then can you make sure that your coding legacy is a stable one and not a ticking timebomb?

If we are to contemplate creating a stable legacy then it’s worth considering for a moment what it means to inherit a problematic codebase. There are a number of very professional approaches to handling an inherited codebase. Rowan Merewood presented his ideas on the subject recently at PHPBenelux (slides here). He has some solid ideas about dealing with the legacy of inherited or poorly documented code. He also makes the point that all code we write is future legacy code in waiting. With that in mind, you need to start making sure your legacy will be a good one.

I was once told to comment my code as if my peers were psychopaths with an OCD-like interest in good code and bad anger management skills. This is a good starting point. Following a coding standard is probably one of the next things you should adopt. Developers are often very adaptable people, adept in multiple languages, but we have a massive dislike for inconsistency. I recently spent almost an hour updating a third-party library to follow a strict camel-casing pattern in variable names; the lack of consistency made me feel uneasy. Because the code followed no formalised pattern it had become fractured, inconsistent and harder to follow. A good example of a recognised standard is the Zend Framework Coding Standard for PHP. According to the documentation, the standard helps to “…ensure that the code is high quality, has fewer bugs, and can be easily maintained”. It will also keep the developers that inherit your code happier. Legacy++.

The benefits of a coding standard do not stop there. Another powerful tool in the PHP developer’s toolset is PHP_CodeSniffer, a tool designed to detect violations of a defined set of coding standards. In the same way it is possible to “validate” HTML against a specific Doctype Declaration, it is also possible to validate the syntax of your PHP against a defined rulebase. The ability to enforce coding style may seem restrictive at first, but the longer you follow this kind of regime the more it will become second nature. It will also become a massive tool within your bug detection routines.
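As an illustration of putting PHP_CodeSniffer to work, checking a single file against the bundled Zend standard looks something like this (the file path is purely an example):

```shell
# list coding-standard violations in one file, using the bundled Zend ruleset
phpcs --standard=Zend application/models/User.php
```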

Documentation of code, as I touched on before, is likely one of the greatest assets in making your code a positive legacy. It can also be directly linked to the adoption of a coding style. In the case of phpDocumentor, a formalised set of tags within your code comments allows for the automatic generation of documentation for your project. We all know that good documentation makes developers happy, so this is a great start. Most good IDEs will also use this documentation for “hints” as you code, which is invaluable.
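To make that concrete, here is a minimal sketch of a phpDocumentor-style docblock on a hypothetical function – the function and its parameters are invented for illustration, but the @param and @return tags are standard phpDocumentor syntax:

```php
<?php
/**
 * Calculate the gross price for a net amount.
 *
 * @param float $net  Net price
 * @param float $rate VAT rate as a fraction, e.g. 0.2 for 20%
 *
 * @return float Gross price including VAT
 */
function grossPrice($net, $rate)
{
    return $net * (1 + $rate);
}
```

Run phpDocumentor over a codebase commented like this and it will generate browsable API documentation; IDEs such as Netbeans also read these tags to drive their code hints.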

Hopefully, by following these few simple steps you can begin to see how easily you can make your coding legacy a better place. These tips are aimed very much at the beginner programmer, because I’m sure that no experienced programmer would ever fail to follow these principles (cough*). Of course this is only the start, as I have not touched on either Unit Testing or Continuous Integration. However, I think we can all learn something by recognising ourselves as the creators of coding legacies. Oh, and be scared of the people that inherit your code: if it’s bad and they’re mental, then you’d better sleep with one eye open.

* well, I try to most of the time :)
