Debugging Selenium with X11 Forwarding on Scrutinizer CI

Last week we ran into some issues on Scrutinizer CI with our Behat Selenium test suite. These things tend to be quite hard to debug: because Selenium runs headless in X virtual framebuffer (Xvfb), there is nothing for the developer to see. Taking screenshots is possible, but that usually requires code changes.

X11 Forwarding

One of the cool things you can do with Selenium (or more particularly with Firefox or Chrome) is X11 forwarding. If you’re running an X.Org window system it is possible to forward the display from one box to another. When you’re on a Linux desktop environment you’re golden, but if you’re on a Mac like me you have to install XQuartz to get it working. Follow the instructions on the site and don’t forget to log out and log in again after installation, otherwise your $DISPLAY environment variable is empty. By the way: if you’re a Windows user I honestly suggest getting a Mac or installing Ubuntu to get it working :).

If you want more information on this topic there is plenty of information to be found on the interwebz.

SSH Remote debugging session

To get this X11 forwarding working on Scrutinizer CI, first request a new SSH debugging session. This can be done on the inspection page: when the inspection fails you can retry, but in the same dropdown you can also choose “SSH Remote debugging”. The first time you do this it will ask to fetch your public keys from GitHub; you should accept the request. It may take some time, but after a few seconds or minutes you’ll receive an SSH login to connect to. Add an -X switch after ssh, so the command looks like this:
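The original post showed the exact login here; a sketch of what it looks like, with a hypothetical host and user (use the exact login Scrutinizer gives you):

```shell
# The -X switch enables X11 forwarding; host and user below are placeholders
ssh -X scrutinizer@example-debug-host.scrutinizer-ci.com
```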

Now log in on the remote machine and verify that your $DISPLAY environment variable is set (it should be something like localhost:10.0). Open ~/.profile in your favourite editor and remove the line:

Do not log out, but create a new SSH session with the command above. If all is fine you should be able to start firefox and see the browser appear. Kill this firefox instance, start Selenium in this session (java -jar /location/of/selenium.jar), and in the first session start your Behat tests.

Speed up your test suite on Codeship using ParallelCI

As I mentioned in an earlier blog post we use Codeship to test some of our private repositories. The folks at Codeship have improved their service a lot since we first used it: the UI has improved a lot (both visually and practically) and the list of notification services keeps growing too.

Lately they introduced a cool new feature called ParallelCI. Travis CI has a similar feature called build matrix. You can split up your test suite in multiple parallel builds, called pipelines. If you have a big or slow test suite (probably your Behat tests) you can speed up things a lot by splitting them into multiple pipelines.

Example configuration

Because our phpspec suite is fast enough, we’ve split only our Behat suite into multiple pipelines. Of course this is project dependent and will vary per use case. To enable ParallelCI open your project settings on Codeship and click on the “Test” link. Scroll down to the “Configure Test Pipelines” section. There will be one pipeline configured called “Test commands” in which all your current test commands are configured.
Click on the green “Add new pipeline” link and a new pipeline tab will be added. Give it a clear name and add your test command. To get an idea of how this can be done take a look at our configuration:

Tab #1: Behat user

Tab #2: Behat profile

Tab #3: phpspec
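The actual pipeline commands aren’t shown above; a sketch of what such a split could look like, with hypothetical suite names (yours will depend on your behat.yml):

```shell
vendor/bin/behat --suite=user      # Tab #1: Behat user
vendor/bin/behat --suite=profile   # Tab #2: Behat profile
vendor/bin/phpspec run             # Tab #3: phpspec
```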

When you save these settings (the pipeline edit form is a bit cumbersome, as you will notice when adding new tabs, but I guess this will be improved soon enough) and rerun your last build, you’ll see your suite split into multiple pipelines, which speeds things up drastically. So I definitely see the use of this new feature and I’m sure you’ll love it for your bigger test suites.

Symfony2 and RabbitMQ: Lessons learned

Last year we introduced RabbitMQ into our stack at Waarneembemiddeling.nl. We were in desperate need of a worker queue and after fiddling around with Gearman, Beanstalkd and RabbitMQ we made our choice: RabbitMQ it will be.

Now there’s quite a lot of information to be found on RabbitMQ and how to use it, but many things you have to find out for yourself. Questions like:

  • what happens to messages on the queue after a service restart?
  • what happens to messages on the queue after a reboot?
  • how do we notice that a worker crashed?
  • what happens to my message when the consumer dies while processing?
  • etc.

Using RabbitMQ and Symfony2 (or php in general) is quite easy. There is a bundle for Symfony2 called OldSoundRabbitMqBundle and a php library called php-amqplib which work very well. Both are from the same author, you should probably thank him for that :).

First try: pure php consumers

We’re running a fairly common setup. Because we’d been warned that PHP consumers die every now and then, we’re using Supervisor to start new consumers when needed. There is a lot of information out there on this subject so I won’t go into that here.

Despite the warnings we started with pure php consumers powered by the commands in OldSoundRabbitMqBundle. The first workers were started like this:
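The original snippet is missing; with OldSoundRabbitMqBundle a consumer is typically started roughly like this (the queue name async_event is taken from the text below):

```shell
# Consume from the async_event queue, no message limit
./app/console rabbitmq:consumer async_event
```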

This means we’re consuming from the async_event queue without any limit on the number of messages. Basically it will run forever, or better said: until PHP crashes. Or worse: your consumer ends up in a non-responsive state, in which it doesn’t process any messages anymore while Supervisor thinks all is fine because you still have a running process. This happened once to our mail queue. I can assure you it’s better to prevent this kind of thing.

Second try: pure php consumers with limited messages

So after the mail-gate I was searching for a quick way to make our setup more error-proof. OldSoundRabbitMqBundle supports limiting the number of messages to process, so I limited our workers so that they get restarted a couple of times a day:
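The original command is missing; using the bundle’s message-limit option it looks roughly like this (the limit value is an example):

```shell
# -m makes the consumer exit after N messages, so Supervisor spawns a fresh one
./app/console rabbitmq:consumer -m 500 async_event
```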

After that things ran more smoothly and it took a while before we encountered new problems. While sifting through the logs I noticed some consumers produced errors. A brief summary:

  • General error: 2006 MySQL server has gone away
  • Warning: Error while sending QUERY packet.

Because the consumer is one process that keeps running, the service container and everything else stays in memory too. When you’ve done some queries, the database connection stays open in the background. And if it’s quiet on our queue, it may take some time before we reach the message limit. If that time exceeds the connect_timeout of your MySQL server, you’ll run into the warnings and errors above about lost connections.

Of course we could close the connection after each message is processed, try/catch the Doctrine DBAL connection exceptions, or increase the connect_timeout setting, but that’s just denying the real problem: running consumers with a booted Symfony2 kernel just doesn’t work so well.

A final resort could be to strip down the consumers and not use the Symfony2 kernel and container, but we don’t like that. Our messages are most of the time serialized events which get dispatched again after the consumer picks them up. At the application level we don’t want to know whether we are in a RabbitMQ consumer or in a normal HTTP request.

Real solution: rabbitmq-cli-consumer

So it took a couple of months to learn the hard way that we needed a different solution for our consumers. I found this interesting blog post about the same problem; the author solved it with Java and Ruby consumers. We all learned Java in college, right? But I don’t like to run the memory-eating JVM on our servers. The Ruby consumer unfortunately lacks good documentation for a Ruby virgin like me, so I got a bit lost there.

That was the point where Go came in. Go is a kind of improved C: no real OO, but a lot of cool stuff in it. I wrote an application that makes it possible to consume messages from a RabbitMQ queue and pipe them into a command-line application. I called it: rabbitmq-cli-consumer.

The main advantages for using rabbitmq-cli-consumer are:

  • no more stability issues to deal with
  • lightweight and fast
  • no need to restart your workers after a fresh deployment

We still use Supervisor to start and stop the consumers, because it’s the right tool for the job. An example of how we start a consumer:
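The original example is gone; based on the rabbitmq-cli-consumer README, an invocation looks roughly like this (the paths and command are placeholders):

```shell
# -c points to the consumer configuration file,
# -e is the command each message body is piped to as an argument
rabbitmq-cli-consumer -c /etc/rabbitmq-cli-consumer/myapp.conf \
    -e "/var/www/app/console event:process" -V
```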

An example of a Symfony2 command we use:

Final tip: use the management plugin

Before even starting with RabbitMQ make sure you have the management plugin installed. It gives you a good overview of what’s happening. You can also purge queues, add users, add vhosts, etc.
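Enabling the plugin is a one-liner (the web UI then listens on port 15672 in recent RabbitMQ versions):

```shell
rabbitmq-plugins enable rabbitmq_management
# then browse to http://your-server:15672/
```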

Install Selenium headless on Debian Wheezy (optionally with Ansible)

When you start testing with Behat and the Mink Selenium2 driver you also need a browser running. Because we develop on a virtualised server, installing Firefox was a bit trickier than I expected. Of course a search yielded some interesting results, but also a lot of crap. So here is a little writeup on how I managed to get it running, to save you some time. An example playbook can be found at the bottom of this post. But beware: this is Debian only!

On Debian there is a package called iceweasel. This is a rebranded version of FireFox. Because of that there is no firefox package available in the default repositories.

We are using Ansible for configuration management (for both our production and development environments), so I prefer a package over compiling shit because that’s much easier to automate. There are a couple of options to install Firefox through a package manager:

  1. add Linux Mint repository
  2. add ubuntuzilla repository

Using the Linux Mint repository I experienced some problems. The Ubuntuzilla repository worked like a charm. If you want to install manually just follow the instructions in their Wiki. After adding the repository you can install the firefox package:
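The install command isn’t shown; after adding the Ubuntuzilla repository it is along these lines (note: Ubuntuzilla may ship the package under the name firefox-mozilla-build rather than firefox):

```shell
apt-get update
apt-get install firefox-mozilla-build
```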

To run Firefox headless you also need a display server; to emulate one we are going to use Xvfb. Selenium requires Java, thus we install:
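Something like the following (the exact Java package on Wheezy may differ; default-jre-headless is an alternative):

```shell
apt-get install xvfb openjdk-7-jre-headless
```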

Download Selenium somewhere:
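For example (the version number is just the one current around the time of writing; pick whatever is current for you):

```shell
cd /opt
wget http://selenium-release.storage.googleapis.com/2.44/selenium-server-standalone-2.44.0.jar
```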

You should be able to start Selenium now:
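A sketch of starting it against a virtual display (the display number and jar path are assumptions):

```shell
# Start a virtual framebuffer on display :99, then point Selenium at it
Xvfb :99 -ac &
DISPLAY=:99 java -jar /opt/selenium-server-standalone-2.44.0.jar
```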

Starting by hand is a bit lame, so we use this startup script:

Copy this to /etc/init.d/selenium and after that you can:
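That boils down to roughly:

```shell
chmod +x /etc/init.d/selenium
update-rc.d selenium defaults   # register it for boot
/etc/init.d/selenium start
```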

And when we create an Ansible playbook for all this we get:

How to use CodeShip with Symfony2, phpspec and Behat

My coworkers and I at waarneembemiddeling.nl are really fond of phpspec and Behat. Yes, we must confess: until a couple of months ago we didn’t test much. We skipped the phpunit age and started right away with phpspec and Behat. We also like services, so instead of setting up (and maintaining) our own CI server, we use CodeShip. To be honest we fell in love with Travis, but that was a little bit too expensive for us. And so our search ended at CodeShip.

There is some documentation on how to use it with PHP, but it’s not that in-depth about phpspec and friends. Let’s start with phpspec, as this is pretty easy. I’m assuming you install phpspec and Behat as dev dependencies using Composer:

phpspec

Now head over to codeship.io and edit your project’s configuration. Pick “PHP” as your technology (didn’t see that one coming). In the “setup commands” field we first select the desired PHP version:
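Codeship manages PHP versions with phpenv, so this is a one-liner (version number is an example):

```shell
phpenv local 5.5
```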

Next install the dependencies (I believe this line is placed there by default by the Codeship guys):
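This was, if I recall Codeship’s default correctly:

```shell
composer install --prefer-source --no-interaction
```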

Then add phpspec to the “test commands” field:
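With phpspec installed as a dev dependency, that is simply:

```shell
vendor/bin/phpspec run --format=pretty
```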

Et voila, phpspec should now be functioning. :)

Behat

Behat is a little bit more difficult. The first problem you need to solve is getting the MySQL credentials into your Symfony2 application. These are provided through environment vars, but their names differ from the naming convention in Symfony2.

We start by changing our app/config/config_test.yml:

Now to let Symfony2 pick up the environment vars we have to follow the convention I just mentioned. This means that an environment variable with the name SYMFONY__TEST_DATABASE_USER will be recognised when building the container. But let’s start by adding a bash script to ease the setup of the testing environment (locally and Codeship). Call it setup_test_env.sh and place it in the root of your project:
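The script itself isn’t shown; a minimal sketch, assuming Codeship exposes the credentials as MYSQL_USER and MYSQL_PASSWORD and that the parameter names match your config_test.yml:

```shell
#!/bin/bash
# Map Codeship's MySQL credentials onto Symfony2's SYMFONY__ convention
# so they become container parameters when the container is built.
export SYMFONY__TEST_DATABASE_USER=$MYSQL_USER
export SYMFONY__TEST_DATABASE_PASSWORD=$MYSQL_PASSWORD
export SYMFONY__TEST_DATABASE_NAME=test
```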

Then adjust your codeship setup commands and add:
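Sourcing the script (the leading dot) makes the exported variables survive in the build shell:

```shell
. ./setup_test_env.sh
```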

Last but not least add the behat command to the test commands:
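With Behat installed as a dev dependency that is simply:

```shell
vendor/bin/behat
```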

Things should be working now. Quickly enough you will run into the infamous xdebug “Fatal error: Maximum function nesting level of ‘100’ reached” error. Let’s fix this right away and add this in your setup commands:
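The original snippet is missing; the usual fix is raising xdebug.max_nesting_level. On Codeship something along these lines should work (the ini path under the rof user’s phpenv tree is an assumption):

```shell
echo "xdebug.max_nesting_level = 500" >> \
    /home/rof/.phpenv/versions/$(phpenv version-name)/etc/conf.d/xdebug.ini
```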

Summary

So the complete setup commands dialog for phpspec and Behat together looks like this:

And the test commands like this:

Everything should be working fine now! To run your tests locally, don’t forget to first execute the bash script (notice the extra dot, it is required):

Happy testing! 😉

Slow initialization time with Symfony2 on vagrant

A few days ago we moved our complete infrastructure to a new hosting provider. We also made the switch from CentOS to Debian, so we got a fresh new development environment using Debian and Vagrant (and the latest PHP and MySQL of course :)).

We expected the new dev box to be fast, but the opposite happened: it was slow as hell. And when I say slow, I mean terribly slow (10 – 20 seconds, also for the debug toolbar). In the past we had some more problems with performance on VirtualBox and Vagrant. There are some great posts out there on this subject (here and here) which we had already applied to our setup. In a nutshell:

  • change logs and cache dir in AppKernel
  • use NFS share

The cause: JMSDiExtraBundle

After some profiling I discovered there were so many calls originating from JMSDiExtraBundle that I tried disabling the bundle. And guess what: loading time dropped to around 200ms!

The real problem was the way the bundle was configured:

This causes the bundle to search through all your PHP files in those locations. Apparently in the old situation (PHP 5.3 and CentOS) this wasn’t as problematic as in the new situation (php-fpm 5.5, Debian).

Speed up your data migration with Spork

One of the blogs I like to keep an eye on is Kris Wallsmith’s personal blog. He is a Symfony2 contributor and also the author of Assetic and Buzz. Last year he wrote about a new experimental project called Spork: a wrapper around pcntl_fork to abstract away the complexities of spawning child processes with PHP. The article was very interesting, although I didn’t have any valid use case to try the library out. That was, until today.

As it happens, we were preparing a rather large data migration for an application with approximately 17,000 users. The legacy application stored the passwords in an unsafe way – plaintext – so we had to encrypt ’em all during the migration. Our weapon of choice was bcrypt, and with the BlowfishPasswordEncoderBundle the implementation was easy. Using bcrypt did introduce a new problem: encoding all these records would take a lot of time! That’s where Spork comes in!

Setting up the Symfony2 migration Command

If possible I wanted to fork between 8 and 15 processes to gain maximum speed. We run the command on a VPS with 8 virtual cores, so I want to stress the machine as much as possible ;). Unfortunately the example on GitHub, as well as the one on his blog, no longer functioned, so I had to dig in just a little bit. I came up with this to get the forking working:

The command generates the following output:

Make it a little bit more dynamic

To be really useful I’ve added some parameters so we can control the behavior a little more. As mentioned before, I wanted to control the number of forks, so I added an option for it. This value needs to be passed on to the constructor of the ChunkStrategy:

<?php
 
namespace Netvlies\AcmeMigrationBundle\Command;
 
use Spork\ProcessManager;
use Spork\Batch\Strategy\ChunkStrategy;
use Symfony\Bundle\FrameworkBundle\Command\ContainerAwareCommand;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Input\InputOption;
use Symfony\Component\Console\Output\OutputInterface;
use Symfony\Component\EventDispatcher\EventDispatcher;
 
class GeneratePasswordCommand extends ContainerAwareCommand
{
    protected function configure()
    {
        $this
            ->setName('generate:password')
            ->addOption('forks', 'f', InputOption::VALUE_REQUIRED, 'How many childs to be spawned', 4)
        ;
    }
 
    protected function execute(InputInterface $input, OutputInterface $output)
    {
        $forks = (int) $input->getOption('forks');
 
        ...

        $manager = new ProcessManager(new EventDispatcher(), true);
        $strategy = new ChunkStrategy($forks);
        $manager->process($iterator, $callback, $strategy);
    }
}

I also added a max option so we can run some tests on a small set of users instead of the whole database. When set, I pass it on to the setMaxResults method of the $query object.

Storing the results in MySQL: Beware!

In Symfony2 projects storing and reading data from the database is pretty straight forward using Doctrine2. However when you start forking your PHP process keep in mind the following:

  1. all the forks share the same database connection;
  2. when the first fork exits, it will also close the database connection;
  3. database operations in running forks will yield: General error: 2006 MySQL server has gone away

This is a known problem. In order to fix this problem I create and close a new connection in each fork:

That’s basically it. Running this command on a VPS comparable with a c1.xlarge Amazon EC2 server sped things up a lot. So if you’re also working on an import job like this, which can be split up into separate tasks, you know what to do… Give Spork a try! It’s really easy, I promise.

UPDATE 2013-03-19
As stated in the comments by Kris, you should close the connection just before forking. Example of how to do this:

Symfony2 authentication provider: authenticate against webservice

The past few days I have really been struggling with the Symfony2 security component. It is the most complex component of Symfony2 if you ask me! On the symfony.com website there is a pretty neat cookbook article about creating a custom authentication provider. Despite the fact that it covers the subject pretty well, it lacks support for form-based authentication use cases. In the current Symfony2 project I’m working on, we’re dealing with a web service that we need to authenticate against. So the cookbook article was nothing more than a good introduction, unfortunately.

Using DaoAuthenticationProvider as example

Since we don’t want to reinvent the wheel, a good place to start is investigating the providers that are in the Symfony2 core. The DaoAuthenticationProvider is a very good example, and it is used by the default form login. We are going to add a few pieces of code so we can reuse its listener and configuration settings. The only things we want to change are the authentication itself and the user provider. If you take a look at the link above, you will see that the only thing we need to change is the checkAuthentication method. But a few more steps are needed to make things function correctly. Let’s begin! :)

We also need a UserProvider!

First things first: we need a custom user provider. The task of the user provider is to load the user from a source so the authentication process can continue. Because a user can already be registered at the webservice, a traditional database user provider won’t work. We need to create a local record for every user that registers or logs in and doesn’t have an account yet. So basically the user provider is only responsible for loading and creating a user record. In this example I save the user immediately when there is no record; you probably want to do this after authenticating.

The code for the user provider looks like this:
[php]
<?php

namespace Acme\DemoBundle\Security\Core\User;

use Symfony\Component\Security\Core\User\UserProviderInterface;
use Symfony\Component\Security\Core\User\UserInterface;
use Symfony\Component\Security\Core\Exception\UsernameNotFoundException;
use Symfony\Component\Security\Core\Exception\UnsupportedUserException;
use Acme\DemoBundle\Service\Service;
use Acme\DemoBundle\Entity\User;
use Doctrine\ORM\EntityManager;

class WebserviceUserProvider implements UserProviderInterface
{
    private $service;
    private $em;

    public function __construct(Service $service, EntityManager $em)
    {
        $this->service = $service;
        $this->em = $em;
    }

    public function loadUserByUsername($username)
    {
        // Do we have a local record?
        if ($user = $this->findUserBy(array('email' => $username))) {
            return $user;
        }

        // Try service
        if ($record = $this->service->getUser($username)) {
            // Set some fields
            $user = new User();
            $user->setUsername($username);

            return $user;
        }

        throw new UsernameNotFoundException(sprintf('No record found for user %s', $username));
    }

    public function refreshUser(UserInterface $user)
    {
        return $this->loadUserByUsername($user->getUsername());
    }

    public function supportsClass($class)
    {
        return $class === 'Acme\DemoBundle\Entity\User';
    }

    protected function findUserBy(array $criteria)
    {
        $repository = $this->em->getRepository('Acme\DemoBundle\Entity\User');

        return $repository->findOneBy($criteria);
    }
}
[/php]

We add it to our services configuration (note that we use the XML format):
[xml]
<container xmlns="http://symfony.com/schema/dic/services"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://symfony.com/schema/dic/services http://symfony.com/schema/dic/services/services-1.0.xsd">

    <parameters>
        <parameter key="security.user.provider.acme.service.class">Acme\DemoBundle\Security\Core\User\WebserviceUserProvider</parameter>
        <parameter key="acme.service.class">Acme\DemoBundle\Service\Service</parameter>
    </parameters>

    <services>
        <service id="acme_demo_webservice" class="%acme.service.class%">
        </service>
        <service id="acme_demo_user_provider" class="%security.user.provider.acme.service.class%">
            <argument type="service" id="acme_demo_webservice" />
            <argument type="service" id="doctrine.orm.entity_manager" />
        </service>
    </services>
</container>
[/xml]

Creating the AuthenticationProvider

As I said earlier we are going to base our provider on the DaoAuthenticationProvider. In my bundle I created a new class called ServiceAuthenticationProvider. Like our example we are extending the abstract UserAuthenticationProvider. Besides the checkAuthentication method we also must implement the retrieveUser method. We inject the service through the constructor, so the class looks like this:

[php]
<?php

namespace Acme\DemoBundle\Security\Core\Authentication\Provider;

use Symfony\Component\Security\Core\Encoder\EncoderFactoryInterface;
use Symfony\Component\Security\Core\User\UserProviderInterface;
use Symfony\Component\Security\Core\User\UserCheckerInterface;
use Symfony\Component\Security\Core\User\UserInterface;
use Symfony\Component\Security\Core\Exception\UsernameNotFoundException;
use Symfony\Component\Security\Core\Exception\AuthenticationServiceException;
use Symfony\Component\Security\Core\Exception\BadCredentialsException;
use Symfony\Component\Security\Core\Authentication\Token\UsernamePasswordToken;
use Symfony\Component\Security\Core\Authentication\Provider\UserAuthenticationProvider;
use Acme\DemoBundle\Service\Service;

class ServiceAuthenticationProvider extends UserAuthenticationProvider
{
    private $encoderFactory;
    private $userProvider;
    private $service;

    /**
     * @param Service $service
     * @param UserProviderInterface $userProvider
     * @param UserCheckerInterface $userChecker
     * @param string $providerKey
     * @param EncoderFactoryInterface $encoderFactory
     * @param bool $hideUserNotFoundExceptions
     */
    public function __construct(Service $service, UserProviderInterface $userProvider, UserCheckerInterface $userChecker, $providerKey, EncoderFactoryInterface $encoderFactory, $hideUserNotFoundExceptions = true)
    {
        parent::__construct($userChecker, $providerKey, $hideUserNotFoundExceptions);

        $this->encoderFactory = $encoderFactory;
        $this->userProvider = $userProvider;
        $this->service = $service;
    }

    /**
     * {@inheritdoc}
     */
    protected function checkAuthentication(UserInterface $user, UsernamePasswordToken $token)
    {
        $currentUser = $token->getUser();

        if ($currentUser instanceof UserInterface) {
            if ($currentUser->getPassword() !== $user->getPassword()) {
                throw new BadCredentialsException('The credentials were changed from another session.');
            }
        } else {
            if (!$presentedPassword = $token->getCredentials()) {
                throw new BadCredentialsException('The presented password cannot be empty.');
            }

            if (!$this->service->authenticate($token->getUser(), $presentedPassword)) {
                throw new BadCredentialsException('The presented password is invalid.');
            }
        }
    }

    /**
     * {@inheritdoc}
     */
    protected function retrieveUser($username, UsernamePasswordToken $token)
    {
        $user = $token->getUser();
        if ($user instanceof UserInterface) {
            return $user;
        }

        try {
            $user = $this->userProvider->loadUserByUsername($username);

            if (!$user instanceof UserInterface) {
                throw new AuthenticationServiceException('The user provider must return a UserInterface object.');
            }

            return $user;
        } catch (UsernameNotFoundException $notFound) {
            throw $notFound;
        } catch (\Exception $repositoryProblem) {
            throw new AuthenticationServiceException($repositoryProblem->getMessage(), $token, 0, $repositoryProblem);
        }
    }
}
[/php]

Note the call to $this->service->authenticate: that’s where the magic happens. The retrieveUser method receives a User instance from our user provider. Although this is not really clear in the code above, it will be after configuring the service container. We use the configuration from the Symfony core and adjust it to our needs:

[xml]
<service id="security.authentication_provider.acme_demo_webservice" class="%security.authentication.provider.acme_service.class%" abstract="true" public="false">
    <argument type="service" id="acme_demo_webservice" />
    <argument /> <!-- User Provider -->
    <argument type="service" id="security.user_checker" />
    <argument /> <!-- Provider-shared Key -->
    <argument type="service" id="security.encoder_factory" />
    <argument>%security.authentication.hide_user_not_found%</argument>
</service>
[/xml]

Please note the empty arguments. Looks a bit strange, huh? These are magically filled in when the container is built by our factory! This is a bit tricky, and the cookbook explains it pretty well, so I suggest taking a look there. We are extending the FormLoginFactory because we want to change it a bit:

[php]
<?php

namespace Acme\DemoBundle\DependencyInjection\Factory;

use Symfony\Component\Config\Definition\Builder\NodeDefinition;
use Symfony\Component\DependencyInjection\DefinitionDecorator;
use Symfony\Component\DependencyInjection\ContainerBuilder;
use Symfony\Component\DependencyInjection\Reference;
use Symfony\Bundle\SecurityBundle\DependencyInjection\Security\Factory\FormLoginFactory;

class SecurityFactory extends FormLoginFactory
{
    public function getKey()
    {
        return 'webservice-login';
    }

    protected function getListenerId()
    {
        return 'security.authentication.listener.form';
    }

    protected function createAuthProvider(ContainerBuilder $container, $id, $config, $userProviderId)
    {
        $provider = 'security.authentication_provider.acme_demo_webservice.'.$id;
        $container
            ->setDefinition($provider, new DefinitionDecorator('security.authentication_provider.acme_demo_webservice'))
            ->replaceArgument(1, new Reference($userProviderId))
            ->replaceArgument(3, $id)
        ;

        return $provider;
    }
}
[/php]

Register the factory in the Acme\DemoBundle\AcmeDemoBundle.php file:
[php]
<?php

namespace Acme\DemoBundle;

use Symfony\Component\HttpKernel\Bundle\Bundle;
use Symfony\Component\DependencyInjection\ContainerBuilder;
use Acme\DemoBundle\DependencyInjection\Factory\SecurityFactory;

class AcmeDemoBundle extends Bundle
{
    public function build(ContainerBuilder $container)
    {
        $extension = $container->getExtension('security');
        $extension->addSecurityListenerFactory(new SecurityFactory());
    }
}
[/php]

Finally, change your security config:

The webservice-login key activates our authentication provider. The user provider is defined under providers as acme_provider with the corresponding service id.
I used the AcmeDemo bundle from the symfony-standard repository, so you can just copy-paste most of my code to see everything in action! The only thing you need to provide yourself is a dummy webservice.

Happy coding!

Western Digital Green Caviar WD10EADS and hdparm problems

A few weeks ago I ordered myself a new eco-friendly home server. The machine will act as my content server: running samba, sabnzbd, nginx and mysql. Most of the time it will be idling, so my goal was to build a server with energy-saving hardware. The old server already had a 1TB Western Digital Green Caviar drive in it which I bought in 2010. So I placed an order for the following parts:

  1. MSI H61M-P22 motherboard
  2. Intel Pentium G620
  3. Transcend JetRam JM1333KLN-2G
  4. be quiet! Pure Power L7 300W

Within a few days I received the whole shipment, and a few hours later my server was up and running again :). After installing and configuring Debian Squeeze I measured the power usage: 22 ~ 25 watt idle, not bad! I hadn’t tweaked anything at all yet, so I started my journey. Spinning down the disk after a period of idling was my next goal. For Linux users, hdparm is the tool that gets the job done. On Debian or Ubuntu all you need to do is:
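That is simply:

```shell
apt-get install hdparm
```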

To configure your drive you have to know its name. One way to figure that out is:
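For example, listing the disks and their partition tables:

```shell
fdisk -l
```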

Output on my system:

The /dev/sdb device is the USB key running the OS, while /dev/sda is my WD Green Caviar drive.
To set the spindown time all you have to do is:
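The command isn’t shown; with hdparm’s -S flag it would be along these lines (-S counts in units of 5 seconds, so 5 means 25 seconds, matching the wait mentioned in the text):

```shell
# spin down after 5 * 5 = 25 seconds of idle time
hdparm -S 5 /dev/sda
```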

Of course you should change the timeout if you want. However, after waiting 25 seconds I did a status check:
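The status check itself is missing; hdparm’s -C flag reports the drive’s power mode:

```shell
hdparm -C /dev/sda   # reports active/idle, standby, or sleeping
```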

What the? :( It obviously didn’t work! After trying some more timeouts I concluded it really didn’t work. Googling for my drive and hdparm I quickly found a lot of other people who ran into the same problem. Furthermore I discovered that Linux and the old Western Digital Green Caviar drives don’t play well with each other. To summarize: the drive parks its heads after 8 seconds of idle time! This causes a very high Load_Cycle_Count. To check this, you have to install the smartctl utility:
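On Debian, smartctl ships in the smartmontools package:

```shell
apt-get install smartmontools
```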

Then, check the S.M.A.R.T. data for your drive:
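Roughly like this, filtering for the attribute in question:

```shell
smartctl -A /dev/sda | grep Load_Cycle_Count
```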

Issuing this command repeatedly over a couple of minutes, I saw the number growing rapidly: a new increment every 3 minutes. Not good! :(

Fix the drive for Linux usage

Western Digital published an advisory about this problem. Basically we as Linux users are left in the dark. They provide a tool to reset or reconfigure the timeout, but it only runs on MS-DOS…

The good news is that the utility is present on the Ultimate Boot CD. Burn the ISO on a disc (or make a bootable USB key) and remove the timeout with:
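The original command is missing; the utility is wdidle3, and based on its documented switches, disabling the idle timer from the UBCD DOS environment looks like:

```shell
WDIDLE3 /D
```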

After that your drive will pay attention to the settings provided by hdparm, and the Load_Cycle_Count won’t grow as rapidly. The count on my server grows by 2 per day, instead of ~ 200! :) And when the drive is in standby my server consumes 18 ~ 20 watt!

How to create a VM with PHP 5.4 using Vagrant and Puppet

Every PHP developer who didn’t live under a rock the past few months must have heard of the upcoming release of PHP 5.4. Well, on March 1 it finally arrived: the official release of PHP 5.4!

Because it definitely will take some time before we can install it with our favorite package manager, I decided to create a small Puppet manifest, in combination with Vagrant, that will build a virtual machine. Normally you have to compile PHP from source in order to try it this quickly after a release. However, the nice dudes from dotdeb.org already compiled it for us, and provide it via their repository. Nice! :)
Furthermore, Vagrant provides us with a cool server image, ready to rock with Puppet. So, let’s get things off the ground, shall we? (pro tip: scroll all the way down to simply clone my git repository with all the code ;))

Prerequisites

In order to get things running smoothly you have:

  1. Installed VirtualBox 4.1.x
  2. Installed Vagrant
  3. Some IDE for editing Puppet manifests (I prefer Geppetto)

Creating our project structure

Let’s start with creating a basic directory structure for storing our files needed. Fire up Eclipse/Geppetto and start a new project in your workspace. Create the following structure:

  • manifests
  • modules
    • php54
      • files
      • manifests
  • www

Writing the Puppet manifest

There are a few things we need to accomplish with Puppet, in chronological order:

  1. Add the dotdeb.org repository to /etc/apt/sources.list
  2. Add the dotdeb.org GPG key
  3. Run apt-get update
  4. Run apt-get install php5

Because we can easily copy files to the VM with Puppet, I chose to supply a modified sources.list and let Puppet take care of copying it into the VM. Then I download the GPG key with the famous wget utility and pipe it into apt-key. The exec call to apt-get update speaks for itself, and last but not least I tell Puppet to install the latest php5 package.

With the require directive I make sure that all commands are executed in the right order.

The contents of the init.pp file in the php54 module looks like this:

Also we create a sources.list file in the “files” directory (you could change the Debian mirrors):

Last thing I do is create the entry point for Puppet, namely the site.pp file in the manifests directory:

All I do is including the php54 module which will handle all the magic for us.

Creating the virtual machine

Now Vagrant comes in to use. Create a Vagrantfile in your project root with the following content:

I’m using a Debian Squeeze box from vagrantbox.es here, credits go to the original author. I’m making use of the VirtualBox shared folders. These are not really fast, but will do for testing purposes. If you want some more advanced sharing I suggest NFS or Samba if you are on Windows.

Now, all left to do is start the VM. Open up a terminal and do vagrant up in your project root:

Navigate to http://33.33.33.10 with your favorite browser and have some happy testing :)

For all the lazy people out there, you can start the box with just 3 commands:
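The three commands boil down to clone, enter, boot (the repository URL below is a placeholder; substitute the actual repository mentioned above):

```shell
git clone https://github.com/example/php54-vagrant.git
cd php54-vagrant
vagrant up
```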