Would anyone want to read an email-based course on using Beanstalkd, and other queues? If I get enough interest I’m happy to write something up. Let me know below, or drop me a note via Twitter (@alister_b).
Having just watched Sebastian Bergmann’s “The State of PHPUnit” presentation from FOSDEM 2015, I was inspired to install and test a project of mine with the latest stable PHPUnit – v4.7. It was easily installed on the command line.
composer global require "phpunit/phpunit"
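One follow-up worth checking after a global Composer install is that Composer’s global bin directory is on your PATH, so the phpunit command actually resolves. The exact directory below is an assumption – it has varied between Composer versions and operating systems, so verify it locally:

```shell
# Ensure Composer's global bin directory is on PATH so `phpunit` resolves.
# NOTE: "$HOME/.composer/vendor/bin" is an assumption - some setups use
# "$HOME/.config/composer/vendor/bin" instead.
COMPOSER_BIN="$HOME/.composer/vendor/bin"
case ":$PATH:" in
  *":$COMPOSER_BIN:"*) ;;                        # already present, nothing to do
  *) PATH="$PATH:$COMPOSER_BIN"; export PATH ;;  # append for this session
esac
```

Putting the same lines in your shell profile makes the change permanent.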
I installed it as a new, global tool because in my project I am using the “ibuildings/qa-tools” repository to install and help run a number of QA tools – and the stable 1.1.* versions lock PHPUnit to v3.7, the last release of which was in April 2014.
A good part of the reason to upgrade – beyond simply using the latest version – was to enable the strict test settings:
<phpunit beStrictAboutTestsThatDoNotTestAnything="true"
         checkForUnintentionallyCoveredCode="true"
         beStrictAboutOutputDuringTests="true"
         beStrictAboutTestSize="true"
         ... more parameters ...
         colors="false"
         verbose="true">
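For reference, here is a minimal, well-formed phpunit.xml sketch with those strictness attributes written out in full. The testsuite name and “tests” directory are assumptions for illustration, the elided parameters from my real file are not reproduced, and colors is left out entirely – which, as it turned out, is what fixed the fatal error described below:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal sketch; the "unit" suite and "tests" directory are assumptions. -->
<phpunit beStrictAboutTestsThatDoNotTestAnything="true"
         checkForUnintentionallyCoveredCode="true"
         beStrictAboutOutputDuringTests="true"
         beStrictAboutTestSize="true"
         verbose="true">
    <testsuites>
        <testsuite name="unit">
            <directory>tests</directory>
        </testsuite>
    </testsuites>
</phpunit>
```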
This blog post is to help anyone else who tries it – and comes across the same issue I did:
PHP Fatal error: Class PHPUnit_Util_DeprecatedFeature_Logger contains 1 abstract method and must therefore be declared abstract or implement the remaining methods (PHPUnit_Framework_TestListener::addRiskyTest) in …vendor/phpunit/phpunit/PHPUnit/Util/DeprecatedFeature/Logger.php on line 201
Finding the fix took some systematic editing of the phpunit.xml file. At first, I tried commenting out the various add-ons I’ve got for PHPUnit – tools to report slow tests, and to automatically close and check the results of any Mockery expectations. None of that helped, so I started on the parameters in the opening XML tag of the file.
The actual fix was simple – the problematic line for me was:
colors = "false"
Removing that from the top of the phpunit.xml file solved my issue. I’ve now also gone on to update the “ibuildings/qa-tools” package to dev-master to get the latest-and-greatest (including automatically pulling in PHPUnit v4.* and Behat v3, among others). It was reassuring to know that I had the previous configuration safely stored in version control, so I could always revert to something that had worked. Running a separate copy of PHPUnit installed outside of the project didn’t hurt either.
I’ve said for a long time that “you don’t get paid the big bucks for knowing what to do – it’s for knowing how to fix it when you make the inevitable screw-ups”.
Now, when I run my PHPUnit tests, I get a lot more warnings about ‘risky’ tests (all of them “This test executed code that is not listed as code to be covered or used”) – but those aren’t big issues for me right now.
The take-away is: don’t be afraid to upgrade, and if there is a problem, systematically (and temporarily) commenting out or removing configuration or code can find the issue surprisingly quickly.
Just a quick note to point out a couple of presentations on Queuing. I’ve recently shown the second (which admittedly has some significant things in common with the original, and not just the web-based slides).
Either way, you are welcome to view them online; the original HTML source and some sample code are all at http://github.com/alister
It’s been one of those quiet spots around here for a while, so here’s the catch-up on what has been happening while I was not posting.
I’ve recently finished a short contract with an agency, Transform (part of the Engine group), working with a couple of government departments. The Office of the Public Guardian receives, checks and stores Lasting Powers of Attorney – a legal document that you write, while still mentally competent, to say what you would like to happen should the worst occur, and who you want to carry it out. The simpler cases aren’t actually very complicated, but there is a lot of work to get the form completed – and some of the information has to be written in triplicate across two or three different forms.
The project was to work with (and at the offices of) the new Government Digital Service (GDS), who are building the http://Gov.uk project, and I helped write the first-draft (but otherwise basically complete) prototype to put the form online. If nothing else, it allows someone to step through and only have to enter information once. One other developer and I, with a project manager and many others from the OPG and GDS, took nearly all of the 37 pages of duplicated paper forms and created a PHP/Zend Form based system that, in the end, produced PDFs ready to check and then have signed by everyone involved.
It was an interesting project – and it will be a valuable service, making the form easier to handle and, eventually, to process from the back-office perspective. It’s not quite what I would normally do – I’m far more infrastructure and back-end oriented, and not so used to building a large and complex flowing form – so I elected to move on at the end of the prototype rather than continue with the alpha/beta phases. With a little luck, it will go live later this year, after some extensive user-testing to make this important legal document as useful and easy as possible to fill in. The end result, in a few years, should be many times more people making an LPA for themselves, often as part of something as routine as making a will or buying a new house.
Rest assured, there’s been plenty of relaxing since I finished that project a couple of weeks ago (especially since I spent a good portion of the time working while distinctly under the weather).
In the last couple of weeks, I’ve been a software developing machine. I’ve also been looking for my next contract role, hopefully something to start just after Easter – though, as I write this, exactly what that will be is still up in the air.
First, I’ve been putting together a development VM – currently based on a beta of Ubuntu 12.04. There are a few things that go into making it.
Puppet config https://github.com/alister/puppet-ab
I’ve been occasionally tweaking this since before Christmas, when I took some time to do a deeper dive into Puppet after using it last year. Many of the modules I use are actually pulled in from other GitHub-based projects, especially a number by https://github.com/saz (Steffen Zieger).
Puppet-dropbox: (forked from https://github.com/cwarden/puppet-dropbox) Dropbox is a very useful tool on any desktop for copying files around your own machines, and shared folders enable easy access for others working on the same project – I found it nearly invaluable in my last contract. It was also good to be able to improve the upstream code – a small fix to allow Ubuntu to also be set as a destination for the installation.
In the end, I have elected to use it to install just the basic command-line tool (rather than the full client), which can then be used to install the main client if required. It saves having to store the username and password in the repository, and it is also useful, from a security standpoint, not to have copies of your files on every machine where the Puppet manifests might be run.
Dotfiles is more of a meta-project; many people appear to have a repo by that name, and a large number of them are hand-rolled. I forked one of the more common bases, by Ryan Bates (https://github.com/ryanb/dotfiles), who also runs the excellent http://railscasts.com/ (which is not all about Ruby on Rails). I have yet to find a good way to integrate it with another shell-oriented project – Oh-My-Zsh (https://github.com/robbyrussell/oh-my-zsh), an excellent improvement over the standard Bash shell that I’d been using for more than a dozen years.
The common thread between both of these is to take a basic machine with git, an SSH key and Puppet installed, and quickly bring it up to a full-spec development desktop/server. It’s a continuing project, but a valuable one – and not just as a learning tool.
There are two other projects that I’ve been working on.
guard-puppet-lint: Guard (see the Railscasts episode http://railscasts.com/episodes/264-guard) is a Ruby-based project that will watch for file changes in a subdirectory hierarchy. There are a lot of plug-ins for it (https://rubygems.org/search?utf8=%E2%9C%93&query=guard-), including PHP-oriented ones for PHPUnit and PHP_CodeSniffer. The project itself can be downloaded from https://rubygems.org/gems/guard-puppet-lint.
As the name suggests, this small Ruby gem adds a slightly easier way to run Puppet-Lint through Guard. As my first released Ruby code, there’s not much to it – in fact, it’s really just a hack of guard-shell that runs puppet-lint on the changed manifests. It does make things slightly cleaner though, so I’m happy enough. I’m also very pleased to have had a couple of (very minor) issues raised – literally one word missing from the readme file, and a single character to reduce the number of false-positive files that might be processed. There are some ideas I could add to make it even more useful, but that can wait for a little while – besides, I have to figure out how to better use Guard in the first place to be able to do so.
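For a feel of how such a plug-in is wired up, here is a hypothetical Guardfile sketch. The guard name, watch patterns and available options are assumptions – check the gem’s README for the exact DSL it supports:

```ruby
# Guardfile - hypothetical sketch for guard-puppet-lint; the guard name
# and watch patterns are assumptions, so verify against the gem's README.
guard 'puppet-lint' do
  watch(%r{^manifests/.+\.pp$})   # re-lint any changed Puppet manifest
  watch(%r{^modules/.+\.pp$})     # ...including manifests inside modules
end
```

Running `guard` in the project root then lints each `.pp` file as you save it.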
The code itself works fine, but only as a URL destination. One of the ideas that came up while working with the Office of the Public Guardian on their new LPA form was to put QR codes onto the final output PDF pages, to help verify automatically that all the pages that have been produced have been received at the back office – and also to be able to refer the paper form back to a digital version stored in the database.
It’s a classic refactoring though – taking a piece of code and, without changing the end results, making it possible to use in a slightly different, but useful, context. Eventually, the qr.php webpage would be a thin wrapper around the class – and the class itself could be used from back-end code to, for example, generate an image that can be placed into a PDF.
Recruiters: here are the rules.
All of the above have happened to me.
About that last point, about feedback? There’s one recruiter on my list (it’s not a good list) because he didn’t bother telling me what the employer was saying about me – a different recruiter found out and let me know. The second one still has a chance to place me; the first, not so much. Ironically, the comments were about some rants I posted on my LinkedIn profile. It’s also a potential employer I no longer care about working for.
The strangest story happened to me about 15 years ago. I was working with a small recruiter, spending a couple of days tweaking a CV so that it was just perfect for a potential job (this was when I still wrote CVs – this year, it’s all on websites to read, not MS Word documents). Then I got a phone call from one of the largest recruitment companies in town – they had sent my CV (without my knowledge) and the employer was interested in talking to me. WTF? That was so not good. It was even worse for the small recruiter though – it turns out he knew the rogue recruiter. He was married to her.
Finally, when you do contact a candidate with a potential role, make sure you send them your details – and the details of the job(s). Just a quick email with the what and the where. Without it, they will not know how to get back to you. I know you love to talk on the phone (and it avoids that pesky audit-trail), or you might make wonderful notes in your recruitment system for yourself, but when we developers are looking, we can get a dozen phone calls a day from different recruiters – quite likely on our mobile phones, to boot. We don’t get the chance to write it all down most of the time, so you should, and drop us a note about it. Otherwise, we can’t get back to you if it was interesting. It’s to your own benefit to keep us in the loop.
If you aren’t taking hiring seriously, other people can, and do, hire the people you need.
I’ve been guilty of it before – leaving it a couple of days, or even a week, before getting back to someone who sent in their CV – although, of course, most of the time it didn’t matter. The person wasn’t going to get hired because they were just not good enough (the generally poor quality of developers is a different rant).
A couple of times I have been bitten hard when hiring though – such as being introduced to a sysadmin on a Thursday night, following up late Friday afternoon, and finding out on Monday, when I chased him up, that he had just accepted an offer.
So, what to do? Well, to be honest, all you can do is be swift about things. Check all CVs that come in within a couple of hours at most, and for those that show promise, get back to them and arrange the next step as quickly as you can (probably a quick chat on the phone?), pencilling in – in your own calendar, if not theirs – a potential time to sit down with them properly.
Please though, after you’ve had the interview, get back to them quickly. Occasionally, I’ll have left a candidate with a little task (some code to write, or something to get back to me on); it’s a good idea to drop a quick email confirming that after they step out of the door. A couple of times when I was looking for a new job, I emailed back that afternoon, or before lunchtime the following day, to follow up with some code. Both times I was starting that role inside two weeks.
When I’ve been interviewing, I’ve even offered someone a job before he left the interview. It was obvious that the guy was a good developer – just searching for him online found a number of posts he’d made to relevant mailing lists. A few years later, I’d moved on myself and he was freelancing, so on my suggestion he was interviewed again – and promptly hired again.
There has been a cut-throat market for developers over the last few years, and that’s not likely to change. Really good people will always have a choice if they want it. You, as an employer, need to be worth working for.
A future post will touch more on my ‘perfect wish-list’ of working environments.
A quick fun post for those of you with an Amazon Kindle – some instructions on how to a) jailbreak your reader (trivially easy), and then b) put your own wallpapers on there, so you get a more interesting ‘screensaver’.
It’s really easy – no more than 20 minutes and a couple of reboots/software updates. Most of the time is literally waiting for the reader to restart after you’ve placed a file in the base directory.
All of the instructions are here: http://wiki.mobileread.com/wiki/Kindle_Screen_Saver_Hack_for_all_2.x_and_3.x_Kindles
A few notes to make it easier to understand:
I was out last night at The Big Xmas [bash], near Silicon Roundabout. It was a fun night out, meeting various people – tech, business and recruiters. Oh, the shame though – I was wearing the same T-shirt as someone else – and, yes, I have indeed replaced people with small shell scripts.
Now, to the main part of what this post is about – the rant. It’s not aimed at last night’s particular event alone, though; it’s alcohol at tech meetups in general. Look guys, you generally end up buying too much anyway, and all too often it’s also to the exclusion of those who may prefer not to get inebriated.
As an example, the Hacker News meetups will get dozens of pizzas (which are, admittedly, all eaten – there are usually 150+ people attending), but also a couple of stacks of drinks trays, with 24 cans or bottles in each – several hundred cans at least. It’s just as well they aren’t all drunk on the night – many of the event-goers would be unconscious by the end. At least they also add a few trays of soft drinks – lemonade and cola.
If you want a couple of drinks to help lubricate the social aspect of an evening out, I’ve got no problem at all. I don’t though. I prefer to save my brain cells for doing interesting things, like oh, writing code?
For other events, how about adding some more soft drinks to replace some of the alcohol? Last night, the choice was booze or fizzy water; that was all.
Thankfully, people don’t generally get blotto at the various meetups – at least not that I’ve seen – but I expect there have been one or two who have swerved their way home sometimes.
Do you have a comment about alcohol being served at the various meetups? Would you like more, less, or do you think that organisers and sponsors are doing it right? I would love to start a conversation here about the good or bad of it.
Capistrano makes deployment of code easy. If you need to run a number of additional steps as well, then the fact that they can be scripted and run automatically is a huge win.
If you’ve only got a single machine (or maybe two), then you could certainly write your own quite simple, entirely workable system – I described something just like this in a previous post: “SVN checkouts vs exports for live versions”. That was written and used before I was deploying to multiple machines, however, and had to be run from the command line of the machine itself. It was OK even when I had a couple of machines to deploy to – I just opened an SSH session to both and ran the command on each at the same time. When I attended the London Devops roundtable on deployment, I even advocated that as a valid deployment mechanism. But at the same time as I was saying that (and it’s in the video), I was also writing Chef cookbooks and a Capistrano script to be able to build, and then deploy code to, at least four different machines at once.
A number of people have already written about how to set up Capistrano to deploy PHP projects. I’ll not repeat their work; instead, I’ll tell you about some of the problems you might come across afterwards.
cap shell is a wonderful thing, until it bites you
The Capistrano shell will let you run a simple command or an internal task on one machine, or as many as you want. This can be useful when you are trying things out – and if you are in any way unsure where a command can be run, you can practice it. Just do:
cap> with web uptime
cap> on host.example.com uptime
Those two commands just show how long a machine has been up, plus the current load average. Easy and safe – and as they run, they show the list of machines they succeed on.
There are some other useful commands you can try:
## show the currently live REVISION file on each machine
cap> cat /mnt/html/deployed/current/REVISION
## This file is created as each new/updated checkout is done.
## change your path to the ./current/ path as appropriate
Since you should be deploying the same codebase to all your live machines at a time (or staging, or qa/test), the versions (or git SHA1s) should be the same as well.
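That consistency check is easy to script. Here is a hedged sketch: check_revisions only does the comparison – how you gather the “host revision” pairs (ssh’ing to each host to cat current/REVISION, or capturing cap shell output) depends on your setup and is left out:

```shell
# Compare deployed revisions across hosts. Reads "host sha1" pairs on
# stdin and reports OK when every host serves the same revision.
# Gathering those pairs (ssh, cap shell, ...) is environment-specific.
check_revisions() {
  distinct=$(awk '{print $2}' | sort -u | wc -l)
  if [ "$distinct" -eq 1 ]; then
    echo "OK"
  else
    echo "MISMATCH"
  fi
}

# Example with canned data - in real use, pipe in one line per host:
printf 'web1 4fd2a1c\nweb2 4fd2a1c\n' | check_revisions   # prints "OK"
```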
Finally, in the ‘useful’ list is cap deploy:cleanup – this will remove old deployments. Keeping a few around is useful, but they can take up a lot of space. As cap --explain deploy:cleanup says:
Clean up old releases. By default, the last 5 releases are kept on each server (though you can change this with the keep_releases variable). All other deployed revisions are removed from the servers. By default, this will use sudo to clean up the old releases, but if sudo is not available for your environment, set the :use_sudo variable to false instead.
If you want to change the default to something other than 5, that can be set with the line “set :keep_releases, 10” in deploy.rb.
I’ve found that the latest version available in the main source code repository is only checked when the Capistrano shell is first run. This can be useful if you want to deploy to a limited set of machines, run a test, and then deploy to all the machines (you end up with the same version checked out in the same-named ‘releases/’ directory) – but if you are sitting at the cap> prompt in the Capistrano shell and running multiple !deploy commands, you won’t get new versions of code that have been committed to the repository since. Exit the shell and re-run it to solve this.
Be wary if you are logged into a machine and sitting somewhere inside the ./current/ directory. Because the symlink is changed underneath you to point at a new directory (the newest subdirectory in releases/), unless you do a cd . to refresh your location, you will still be in an old copy of the code. The ‘cd’ makes sure you are in the latest place on disk, via the (now changed) symlink.
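That behaviour is easy to demonstrate locally. This sketch uses pure shell, no Capistrano – the directory names merely mimic its layout – and shows that only cd . picks up the swapped symlink:

```shell
# Simulate Capistrano swapping the ./current symlink under our feet.
tmp=$(mktemp -d)
mkdir -p "$tmp/releases/1" "$tmp/releases/2"
ln -s "$tmp/releases/1" "$tmp/current"

cd "$tmp/current"
before=$(basename "$(pwd -P)")   # physical dir: releases/1

rm "$tmp/current"                # a "deploy" repoints the symlink...
ln -s "$tmp/releases/2" "$tmp/current"
stale=$(basename "$(pwd -P)")    # ...but we are still inside releases/1

cd .                             # re-resolve via the logical path
after=$(basename "$(pwd -P)")    # now releases/2

echo "$before $stale $after"     # → 1 1 2
```

The trick is that POSIX shells resolve `cd .` against the logical path (the symlink), not the physical directory the kernel still has you in.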
Capistrano has the ability to remove the currently live version and change the ‘current’ symlink back to the previous location. Should the worst happen and a website deployment fail, this can help when ‘rolling forward’ – a fast fix, check-in and redeploy – may not be easily possible.
# to roll back to a previous deployment:
cap deploy:rollback
If you have rolled back the web servers (PHP/app servers), you will have to restart PHP-FPM (or maybe Apache) on them, as they do not necessarily pick up the (old) version of the code that is now live – I’ve found that PHP-FPM has this issue. The same is true if you have set APC to cache the byte-code and not check the time-stamps of files for changes.
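One way to automate that restart is to hook it onto the rollback task in deploy.rb. A hedged sketch using Capistrano 2’s DSL – the service name (php5-fpm here) and the use of sudo are assumptions about your servers:

```ruby
# deploy.rb - sketch: restart PHP-FPM after a rollback so the rolled-back
# code is actually what gets served. The service name is an assumption.
namespace :php do
  task :restart_fpm, :roles => :app do
    run "#{sudo} service php5-fpm restart"
  end
end

after "deploy:rollback", "php:restart_fpm"
```

The same hook on "deploy" itself also guards against stale byte-code caches after normal deployments.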
I’ve been pretty busy over the last couple of years, first at Binweevils and, in 2011, PeerIndex – hence the utter lack of posts. But as the note on my personal CV site says, I’m taking some time off while looking for my next role. This gives me the opportunity to write more about scaling PHP, and about the development tools I’ve been using over the last couple of years that have been piquing my curiosity.
So, it is my plan to investigate other languages such as Python and Ruby, and tools like Puppet and Node.js. Rest assured, I’ll keep up with the state of the art in PHP and technologies such as MongoDB though!
There are also a number of posts planned right here: more on Beanstalkd (and other queues), deployment with Capistrano, graphing and logging (including how to mark a Capistrano deployment on a graph!) and a few other things, including rants.