How experiences in my career have impacted me

First Job

My career as a software developer started in 1999, just before the non-event that was Y2K (fortunately!). I was working for a large networking company (3Com, now HP), which gave me lots of opportunities working in a defects and enhancements team. They had a great graduate training scheme that exposed me to many different aspects of business and finance as well as computer networks and development.

Working in software support gave me experience of what it is like to take other people’s code, understand it, read any documentation and update it. The organisation was actually really good at documenting designs, and there were a good number of experts available to answer questions.

I stayed there for over 10 years, moving from the sustaining group into new products and then into their applications team. However, I do believe my skills began to stagnate. Basically, I got comfortable working there; it was easy doing the same thing day in, day out, and although there were challenges, we had worked on similar things before so they did get resolved quickly. Redundancy came looming several times, and I avoided it more than once, but eventually it hit me!

What I learnt from this was not to get too comfortable in an organisation and to keep my skills up to date.

Redundancy & New Hope

I realised then that although I had transferable development skills, the industry had moved on, but I hadn’t! The big shift during my time at 3Com was from desktop applications to web-based applications. I had been using Java for a good few years by this point and had dabbled with JavaScript and HTML, but I didn’t know a lot about enterprise web application development. During my gardening leave I spent a lot of time sorting out my CV, but at the same time I was learning new skills and technologies that were relevant to the industry. I focused on the key ones at the time, Spring and Hibernate, as these seemed to be what the cool guys were using!

I was very fortunate in my timing because I found the right company at the right time, and they gave me the opportunity to use my existing skills and learn the rest on the job, which was a great opportunity. I was thrown in at the deep end straight away and given the task of building a custom migration application in Spring that pulled data from the old database and used all of the new business objects to push the data into the new system. There was a short time frame and the fear of migrating the data incorrectly, but it spurred me on to learn fast and get up to speed!

Whilst working for this organisation I got many opportunities to expand my technical skills as well as my development process and managerial skills. We adopted Agile in the organisation, which was a breath of fresh air compared to the previous way of working. The team was much happier working in this way and being trusted to do what they do best: make software. This was great for me; however, looking back I realise that this was still only one specific area of the business. I needed to widen those skills. For example, we weren’t doing TDD there at the time; we were barely doing automated testing. It was time to move on again when redundancy hit: the company ran out of cash and needed to downsize.

What I learnt from this was to keep learning languages and frameworks, but also to be a better developer by ensuring that quality isn’t just for QA to own. Automation and TDD give developers the power to be confident that their code will work now and in the future.

Time to try TDD, read more and aim to become a better developer

My next role took me to a small company near where I live, where I joined a small development team. Sitting in the same room as a collection of servers, it was a slightly noisy place to work, but I got used to it. A more positive aspect of this new organisation was that I finally had the opportunity to embrace TDD. It was great because I got to read about it from the master, Kent Beck, try it, use it and love it! It made me think about how I was writing my code and how that impacted my ability to write automated tests. I truly understood the impact of tightly coupled code and how it becomes rather irritating when writing tests. I must confess that even though I don’t stick to TDD all the time, what I have learnt ensures that the code I do write allows tests to be written afterwards without needing to change the code.
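As a quick sketch of the coupling problem (all names here are hypothetical, not from a real project): a class that constructs its own dependencies is hard to test, while one that accepts them through an interface can be handed a fake.

```java
// Hypothetical example: the greeter depends on an abstraction rather
// than constructing a concrete clock itself, so a test can substitute
// a fake without changing the production code.
interface Clock {
    int hourOfDay();
}

class Greeter {
    private final Clock clock;

    Greeter(Clock clock) {
        this.clock = clock; // dependency injected, not hard-wired
    }

    String greeting() {
        return clock.hourOfDay() < 12 ? "Good morning" : "Good afternoon";
    }
}

public class GreeterDemo {
    public static void main(String[] args) {
        // In a test we pass a fake clock with a known time.
        Greeter morning = new Greeter(() -> 9);
        System.out.println(morning.greeting()); // prints "Good morning"
    }
}
```

The test never needs a real clock; it just supplies the value it wants, which is exactly what tightly coupled code prevents.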

I also read, cover to cover, two books by Bob Martin: Clean Code, and The Clean Coder, a great and easy read on being a software professional.

Once I had experienced TDD and read and practised the principles in Clean Code, I was able to talk to others with confidence about the benefits and challenges. This drove a significant change in the development team: more unit tests were being written, and people would come to me for advice on how to test components.

In the end, though, I realised that the server noise was too much and wasn’t going to get resolved, so I felt it was time to move on.

What I learnt from my time in this organisation was to embrace best practices (although these do change all the time) and to try to conduct myself in a professional way. I think these days this is possible without needing to join a professional body, because professionalism is a matter of mindset rather than membership.

Putting logic in the database. Good or bad practice?

So, today I had a discussion with a colleague about whether it was a good idea to put business logic in a database trigger. My default position, probably because I’m an application developer, has always been to keep any logic in the application and just use the database to store data.

The discussion in particular was about whether to use a database trigger to copy comments into an audit table and then remove them from the current table, essentially moving them at a particular point in the workflow. The thing I don’t like about this is that the ground is moving beneath your feet: in certain situations the data will simply disappear, which a few months or years down the road could waste time while someone works out what is happening. The justification for the trigger was that it’s easy to do it there. My point was that you can just as easily write that in the application and also write automated tests as part of the build to verify it’s working. True, you could probably write SQL tests as well to prove the trigger.
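To make the application-side alternative concrete, here is an in-memory sketch (all names are hypothetical; a real version would run against the database in one transaction): the "move to audit" step becomes an ordinary method that a unit test can verify as part of the build.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical in-memory sketch of doing the "move comments to audit"
// step in the application instead of a trigger. Because it is plain
// code, the behaviour is visible in one searchable place and can be
// covered by an automated test in the build.
public class CommentArchiver {
    private final List<String> current = new ArrayList<>();
    private final List<String> audit = new ArrayList<>();

    public void addComment(String comment) {
        current.add(comment);
    }

    // Copy every comment into the audit list, then clear the current
    // list -- the same effect the trigger would have had.
    public void archiveComments() {
        audit.addAll(current);
        current.clear();
    }

    public List<String> currentComments() { return current; }
    public List<String> auditedComments() { return audit; }

    public static void main(String[] args) {
        CommentArchiver archiver = new CommentArchiver();
        archiver.addComment("Looks good to me");
        archiver.archiveComments();
        System.out.println(archiver.auditedComments()); // prints "[Looks good to me]"
    }
}
```

The point isn’t the data structure, it’s that the workflow step lives in the codebase where a developer would think to look for it, next to its tests.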

What do others think?

It got me thinking that I should really see what the general consensus is in the industry before I push hard either way. It seems that a lot of people are divided, maybe because some are database developers and others are application developers.

The question was raised here on Stack Exchange and the “answer” pretty much came back as “don’t do it”. This made me think: yes, I was right. However, looking further down the list of responses there is some middle ground.

Some people use databases as simple key/value pair systems, and from my perspective this isn’t a bad thing. You don’t have to have a specialist DBA; if performance of the operation isn’t critical then keep it in the application, and then all the logic is in one searchable place (developers often don’t think to look in the database for logic). You also avoid vendor lock-in with your database provider; although it is rare, organisations do switch database systems.

The arguments for putting things into the database (triggers specifically) come from this article. They say that the problems aren’t with the triggers, they are with the developers, pushing the onus onto the skill of the developer. There are also some capabilities that databases provide, like auto-generated columns, where triggers and stored procedures come in.


I think the conclusions aren’t clear-cut; the opinion in a couple of places seems to be that triggers generally aren’t great, for the reasons I gave above. It depends a lot on usage, but I still stand by my principles: software should be easy to understand, you shouldn’t have to hunt around many different systems to see what’s going on, and you should only do something another way if there is a compelling need, e.g. performance.

In the situation described above, though, I do believe that auditing is a good use case for a database trigger, and I said that in the end to wrap up the discussion. I think it is important to question and have an opinion on development matters; however, it is also important to step back when appropriate and learn from different points of view.

How to deploy a Spring Boot application to Amazon AWS using Elastic Beanstalk

I have recently started playing with Spring Boot and have been really impressed with everything I’ve seen. If I need to create a RESTful web service, no problem. If I need to create a web MVC application that uses JPA, no problem. Using the embedded Tomcat and H2 database out of the box enables any Java developer to rapidly create applications. Then, when I need to deploy this onto a Tomcat server as a war, it’s simply a case of changing the pom.xml and ensuring that the Tomcat dependencies are marked as provided. (I’ll cover this later.)

This article is the first in a series that will share what I learn about Spring Boot and deploying it into the cloud. The articles will include details on how to architect an application that uses Spring Profiles to run integration tests on local databases, while being able to deploy the same code onto a production server in the cloud.

Watch the YouTube video below that covers all of the steps in this article.

The code that goes along side this article can be found here

Resources that I have used to build my understanding of Spring Boot and Cloud deployment include:

– Rob Harrop’s demo of Spring deployment on Amazon AWS. There are some excellent tips on Amazon AWS and what it does well and not so well. It’s well worth a watch.

– Spring Guides available here. All the guides you need to get up and running with the basics.

Creating a Spring Boot War
1) Start up the Eclipse IDE with the Spring extensions installed. For Luna, add this link to your update installer.
2) You’ll also need Tomcat server installed in Eclipse. If you don’t have this setup then search Google for setup instructions before you continue.
3) Select File->New->Other->Spring->Spring Starter Project
4) Set the name and the artifact to spring-boot-aws
5) Change the packaging from jar to war (This does a couple of things that I’ll explain later)
6) Select Actuator and Remote Shell so that we have some RESTful services to test the app with.
7) Click Finish

So, what has this done? It has created a simple Spring Boot application with some REST services like /beans that will return a JSON object of all the beans in your application.

There are two differences between the war variant and the jar variant. The war variant doesn’t need an embedded Tomcat because it will be deployed into a Tomcat server, so the pom.xml has the spring-boot-starter-tomcat dependency set to “provided”. The jar variant has the scope tag removed so that this dependency is included in the jar.
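For reference, the relevant fragment of the war variant’s pom.xml looks something like this (a sketch using the standard Spring Boot coordinates; check your generated pom):

```xml
<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-tomcat</artifactId>
	<scope>provided</scope>
</dependency>
```

With the scope set to provided, the servlet container classes are available at compile time but are not packaged into the war, because the Tomcat server you deploy to supplies them.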


The second difference is the ServletInitializer:

public class ServletInitializer extends SpringBootServletInitializer {

	@Override
	protected SpringApplicationBuilder configure(SpringApplicationBuilder application) {
		return application.sources(SpringBootAwsApplication.class);
	}
}

Now that you have the application created, we need to generate the war file for deployment to Amazon AWS. Right-click on the pom.xml and select Run As->Maven Install. This will run the build and create the war file in the target folder of your application.

Deploy your application using Amazon Elastic Beanstalk

1) Login to Amazon AWS.
2) In the main control panel select Elastic Beanstalk under Deployment & Management.
3) Click on Create Application in the top right corner.
4) Enter the Application Name and click Next.
5) Environment Tier – Web Server
6) Predefined Configuration – Tomcat
7) Environment Type – Single instance
8) Click Next
9) Select Upload your own, click Browse and locate the war you created earlier.
10) When the application is uploaded you will see the next page where you select your URL.
11) Enter a name and click check availability to see if you can use it.
12) Click Next
13) We don’t need an RDS database in this example, so click Next here.
14) In this next step you are defining the EC2 instance that will be created, if you are using a free trial then stick to the free t1.micro instance type.
15) EC2 Key Pair, can be left unselected. You won’t need it for now and most likely you won’t have one configured yet. This will be covered in a later post.
16) Click Next.
17) In Environment Tags, click Next again; we don’t need any tags for this example.
18) Review the configuration, and then click Launch.

Amazon AWS will now provision your server, install the Tomcat server and deploy the war file that you uploaded. It does take a good 5-10 minutes for this action to complete.

Bizarrely, once it is up and running the health starts as red. When you see this you should be able to go to the URL you configured earlier. Remember to include the /beans part in the URL, as that will invoke the Spring Boot REST service and return a JSON object containing all the Spring beans in your application.

If this doesn’t work, it is worth going back and checking that your application works locally before you try diagnosing the issue on AWS. However, it should work just fine. Failing that, do a git clone and then a mvn package of this project, upload that war, and it should work.

To upload another war you will see the upload and deploy button in the middle of the screen.


Please come back for future posts on hooking up databases to the sample application. I will explain how to build the configuration in AWS as well as how to use Spring Profiles to control the datasource in the application.

Spring Boot NoSuchBeanDefinitionException

At the moment I’m in the process of getting to grips with Spring 4 and Spring Boot. I had created a service class, PeopleService, that implements an interface, IPeopleService. However, I added the following code to the main class of my Spring Boot application:

@Bean
public CommandLineRunner init(
	final PeopleService peopleService) {

	return new CommandLineRunner() {
		public void run(String... strings) throws Exception {
			// ...
		}
	};
}
I get the following error:

Caused by: org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type [] found for dependency: expected at least 1 bean which qualifies as autowire candidate for this dependency. Dependency annotations: {}

The reason for this is that Spring autowires to the interface and not the implementation. So, what I needed to do was use IPeopleService in my init() method instead.
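As a plain-Java sketch of the idea (Spring-free; the names mirror my classes but the bodies are hypothetical): if you declare the parameter by its interface, any implementation satisfies it, including any proxy that Spring may wrap the bean in.

```java
// Spring-free sketch of "program to the interface": the caller only
// needs the IPeopleService contract, so any implementation fits.
interface IPeopleService {
    int count();
}

class PeopleService implements IPeopleService {
    public int count() { return 42; } // hypothetical body
}

public class InitDemo {
    // Analogous to init(final IPeopleService peopleService): declared
    // against the interface, not the concrete class.
    static int run(IPeopleService peopleService) {
        return peopleService.count();
    }

    public static void main(String[] args) {
        System.out.println(run(new PeopleService())); // prints 42
    }
}
```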

Don’t make network requests on the UI thread!

Today I was making an HTTP request in an Android application and I got an exception that briefly confused me. The solution is an obvious one once I understood the reason, and to be fair the exception is pretty self-explanatory.

The code I had was the following; it just makes an HTTP connection and checks the size of the file I was requesting.

  try {
    HttpURLConnection urlConnection = null;
    URL url = new URL("");

    try {
      urlConnection = (HttpURLConnection) url.openConnection();

      int fileLength = urlConnection.getContentLength();
    } catch (IOException exception) {
      // Handle exception
    } finally {
      if (urlConnection != null) {
        urlConnection.disconnect();
      }
    }
  } catch (MalformedURLException malformedUrl) {
    // Handle exception
  }

The exception that I got from this code was

Caused by: android.os.NetworkOnMainThreadException
            at android.os.StrictMode$AndroidBlockGuardPolicy.onNetwork(

The reason for this exception was that I was making a network request from a method invoked by a button click in an activity. What I should have done is create a new thread or AsyncTask to make this call. Most people will see this exception and understand immediately, or not do such a daft thing in the first place, but if you are new to Android development then the solution below may help.

private class DownloadTask extends AsyncTask<Void, Void, Void> {
    protected Void doInBackground(Void... sUrl) {
        try {
            HttpURLConnection urlConnection = null;
            URL url = new URL("");

            try {
                urlConnection = (HttpURLConnection) url.openConnection();

                int fileLength = urlConnection.getContentLength();
            } catch (IOException exception) {
                // Handle exception
            } finally {
                if (urlConnection != null) {
                    urlConnection.disconnect();
                }
            }
        } catch (MalformedURLException malformedUrl) {
            // Handle exception
        }
        return null;
    }
}

The task would then be invoked using the following.

new DownloadTask().execute();

Using the new AndroidJUnitRunner from Espresso

So, a new version of Espresso should be out shortly that will be bundled with the Android Support Tools. It will include a new test instrumentation runner and the dependencies should be importable into your Gradle project.

Below shows what you’ll need to do to include the new runner and the dependencies; however, as it’s not out yet, the version for androidTestCompile is still TBD. 😦

Can’t wait!

android {
  defaultConfig {
    testInstrumentationRunner ''
  }
}

dependencies {
  androidTestCompile ''
}

Using Vagrant when behind a proxy server

Proxy servers are a pain and working round them is irritating. The instructions below should help you get your Vagrant server up and running when working behind a proxy server.

Two parts to the proxy problem
There are two parts to configuring proxy servers when using Vagrant: the first is configuring the host so that the vagrant tool can download the VM image, any plugins, etc.; the second is so that the Vagrant VM itself can access the internet through the proxy, which it will need if it installs any additional software.

There are some Q&As out there that say to use vagrant-proxyconf; however, this solves the second part of the problem, not the whole problem. You will quickly find that you cannot download the proxy plugin when you are behind a proxy! To get the plugin download to work you need to set the HTTP_PROXY environment variable; this will also help with tools like curl, apt-get, etc., so it is a useful general tip.

So, first let’s set the HTTP proxy variables on the host environment (substitute your own proxy details).
On Windows:
set HTTP_PROXY=http://user:password@myproxyserver.local:8181
set HTTPS_PROXY=http://user:password@myproxyserver.local:8181

On Linux/OS X:
export HTTP_PROXY=http://user:password@myproxyserver.local:8181
export HTTPS_PROXY=http://user:password@myproxyserver.local:8181

It’s not great, though, because your proxy password is now visible in your terminal history and environment variables, so be sensible with this approach.

To install the vagrant-proxyconf plugin type the following in your terminal or command prompt.

vagrant plugin install vagrant-proxyconf

If this works then great; however, it didn’t work for me, so I had to download the gem file and install it using the following:

– Download the gem file here.
– In your terminal/command prompt window change your directory to where your vagrant project is kept and run the following command:
vagrant plugin install vagrant-proxyconf-1.4.0.gem

To verify that the plugin is installed correctly type the command.

vagrant plugin list

Once you have the plugin installed you’ll need to tweak your Vagrantfile to include the proxy details for the plugin. Place this at the bottom of your Vagrantfile after last end line.

Vagrant.configure("2") do |config|
  if Vagrant.has_plugin?("vagrant-proxyconf")
    config.proxy.http = "http://myproxyserver.local:8181"
    config.proxy.https = "https://myproxyserver.local:8181"
    config.proxy.no_proxy = "localhost,"
  end
end