Author: marcthomas2012

@Cacheable usage and gotchas

Recently I was working with a colleague to implement caching in an application. Initially it looked fairly straightforward to integrate; however, because of the way the code was being called we came across a number of problems that I want to share.

To see code examples of where @Cacheable works and where it doesn't, clone the code from https://github.com/marcthomas2013/ehcache

Dependencies required

This example code is based on Spring Boot 1.2.5. When using Spring Boot with ehcache prior to Spring Boot 1.3 you will need to pull in the following dependency to get hold of the EhCacheCacheManager. I struggled to find the dependency needed for org.springframework.cache.ehcache.EhCacheCacheManager, so here it is.

<dependency>
   <groupId>org.springframework</groupId>
   <artifactId>spring-context-support</artifactId>
   <version>4.2.0.RELEASE</version>
</dependency>
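
For reference, here is a minimal sketch of the kind of Java configuration that wires up the EhCacheCacheManager from an ehcache.xml on the classpath. The class and bean names below are my own and the linked repository may wire this up differently.

import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.cache.ehcache.EhCacheCacheManager;
import org.springframework.cache.ehcache.EhCacheManagerFactoryBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.ClassPathResource;

@Configuration
@EnableCaching // switches on Spring's annotation-driven caching
public class CacheConfig {

    // Builds the underlying net.sf.ehcache.CacheManager from ehcache.xml on the classpath
    @Bean
    public EhCacheManagerFactoryBean ehCacheManagerFactoryBean() {
        EhCacheManagerFactoryBean factoryBean = new EhCacheManagerFactoryBean();
        factoryBean.setConfigLocation(new ClassPathResource("ehcache.xml"));
        factoryBean.setShared(true);
        return factoryBean;
    }

    // Wraps the ehcache CacheManager in Spring's CacheManager abstraction
    @Bean
    public CacheManager cacheManager() {
        return new EhCacheCacheManager(ehCacheManagerFactoryBean().getObject());
    }
}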

Cacheable annotation

Once you have the EhCacheCacheManager set up and an ehcache.xml config file in your resources folder, you are ready to start adding the @Cacheable annotation to your public methods.

<ehcache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:noNamespaceSchemaLocation="ehcache.xsd" updateCheck="true"
         monitoring="autodetect" dynamicConfig="true">

    <!-- <diskStore path="java.io.tmpdir" /> -->
    <diskStore path="~/cache" />

    <cache name="demoCache"
           maxEntriesLocalHeap="10000"
           maxEntriesLocalDisk="1000"
           eternal="false"
           diskSpoolBufferSizeMB="20"
           timeToIdleSeconds="300" timeToLiveSeconds="600"
           memoryStoreEvictionPolicy="LFU"
           transactionalMode="off">
        <persistence strategy="localTempSwap" />
    </cache>
</ehcache>

In the example code we have a cache set up in ehcache.xml with the name "demoCache"; this name needs to go into the value parameter of the annotation. The key that you want to use for the cache is set in the "key" field. This key can be defined as something more than just a parameter name; see Spring's documentation for more detail.

@Cacheable(value="demoCache", key="#url")
public String getURLResponse(String url) {
    System.out.println("This should only happen once!");
    return "Success";
}

 

Different ways of invoking the cache method

Some ways work and some don’t. In the example code referred to above there is a class MyController that invokes all the cache calls.

In the MyService class the getURLResponse method contains a log line that says "This should only happen once!"; this is there to show when the method is actually invoked. If the caching is working you should only ever see this line once, because subsequent calls should be answered from the cache. However, there are a number of demonstration method calls that show when this doesn't work, and this article explains why.

callServiceDirect explanation

The first call, callServiceDirect in MyController, is probably the most common scenario: a controller calling an autowired service method. You'll see in the log that the target method is called once and the response is cached; on the second call the value is returned straight from the cache.
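
As a rough sketch of what that scenario looks like (the mapping path and URL value here are illustrative, and the exact controller code in the repository may differ):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class MyController {

    @Autowired
    private MyService myService; // Spring injects a proxy, so @Cacheable is applied

    @RequestMapping("/callServiceDirect")
    public String callServiceDirect() {
        myService.getURLResponse("http://www.example.com"); // first call: method runs and logs
        return myService.getURLResponse("http://www.example.com"); // second call: served from the cache
    }
}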

callFromObjectWithDependencyPassedIn explanation

The next call, callFromObjectWithDependencyPassedIn, shows the scenario where another class needs to invoke the service method and you pass that dependency in. This approach works and is the way you should do it; we'll then compare it with the next method, callFromObjectCreatedByTheService, which doesn't work.
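
A sketch of the idea (the helper class name is mine, not necessarily the one used in the repository): the proxied MyService is handed in from outside, so calls made through it still go via the proxy and @Cacheable is honoured.

// Plain, non-Spring-managed class that receives the service as a dependency
public class ObjectWithDependency {

    private final MyService myService;

    public ObjectWithDependency(MyService myService) {
        this.myService = myService; // when passed in from MyController this is the Spring proxy
    }

    public String call(String url) {
        return myService.getURLResponse(url); // goes through the proxy, so caching works
    }
}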

callFromObjectCreatedByTheService explanation

There is a method called callFromObjectCreatedByTheService that is very quirky but is used to highlight a Spring Aspect Oriented Programming (AOP) proxy issue that people can get caught out by. When the code is run you will notice from the log that the method's @Cacheable annotation isn't applied. To explain why, we need to look at the createObjectThatCallsService method in MyService.

What this method does is create another object and pass the service itself into it as a form of dependency injection. You might think that this is fine; however, it doesn't work, and the reason is that Spring needs to proxy the objects that it manages. The Spring documentation describes this in detail, but essentially proxying means that, outside of the MyService class, references to the object are in fact a Spring proxy around the MyService object, whereas when code runs inside MyService the object's this reference is the MyService instance itself and not the Spring proxy.
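
In other words, the method looks something like the sketch below (the signature is illustrative and it re-uses the helper class sketched earlier):

// Inside MyService
public String createObjectThatCallsService(String url) {
    // 'this' is the raw MyService instance, not the Spring proxy,
    // so any call made through it skips the @Cacheable advice
    ObjectWithDependency helper = new ObjectWithDependency(this);
    return helper.call(url);
}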

For AOP-based annotations like @Cacheable to work, they need to be invoked through the proxied object and not directly on the object where the method exists. Under the hood, when a method annotated with @Cacheable is called through the proxy, Spring can apply the annotation-driven behaviour before the actual method is invoked. This is not possible if you are calling the MyService instance directly, which is why this approach doesn't work.

callingPrivateCacheMethodFromWithinService explanation

callingPrivateCacheMethodFromWithinService is another example of a call to an @Cacheable method that doesn't work, for the same reason as explained above but in a different use case. The Spring documentation on caching annotations explains, in a tip box titled "Method visibility and @Cacheable/@CachePut/@CacheEvict", that this won't work because of a limitation of Spring AOP, and it suggests that if you want to call private methods internally with AOP-based annotations you will need to set up AspectJ instead. Sadly, I haven't been able to find a good tutorial on setting up AspectJ instead of Spring AOP.
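
For illustration, an @Cacheable annotation on a private method like the one below is silently ignored by Spring AOP (the method name is made up for this example):

@Cacheable(value = "demoCache", key = "#url")
private String getURLResponsePrivate(String url) {
    System.out.println("This will print on every call - the cache is never consulted");
    return "Success";
}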

callingCacheMethodFromWithinService explanation

callingCacheMethodFromWithinService is another example of a call to an @Cacheable method that doesn't work, again for the same reason but in a different use case. You will notice that this is calling the same public method that works for all calls external to MyService; as long as those calls are made through the Spring-proxied object they will work. However, this example calls the @Cacheable method from inside MyService, so the call never goes via the proxy object.
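
A sketch of the kind of internal call involved (the wrapper method name is mine, not from the repository):

// Inside MyService
public String callCacheableInternally(String url) {
    // Equivalent to this.getURLResponse(url): the call never reaches the
    // Spring proxy, so the @Cacheable advice on getURLResponse is skipped
    return getURLResponse(url);
}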

How to create a Spring Boot Remote Shell Command

So today I've been looking at the actuator and remote shell functionality in Spring Boot and thought I'd have a go at getting a custom command working in my simple Spring Boot app.

One challenge with the documentation is that it only talks about how to set up the command using Groovy, so what if you are using Java? There are a few pitfalls that catch people out, so I thought I'd document them here in one article.

The documentation on the remote shell does say that commands will be looked for on the classpath at /commands or /crash/commands. The subtle point is that remote shell commands are compiled on first use, so their source needs to be packaged with your project, which means putting them in your /resources folder.

So, you should put the source code for any remote shell commands in /src/main/resources/commands/. The downside is that, once it is in the resources folder, the IDE won't highlight any syntax issues. My tip would be to write the command in your Java source tree and then, when you are happy with it, drag it into the resources folder; otherwise compilation issues will only show up when the command is invoked.

Dependency required:

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-remote-shell</artifactId>
</dependency>

Two annotations you’ll need on your command are:

@Command

@Usage("My Test Command")

These two annotations indicate that this is a shell command and that the command description is “My Test Command”.

The name of your class becomes the command name.

See my example source code below, which was in my Spring Boot project at /src/main/resources/commands/

 

package commands;
import org.crsh.cli.Command;
import org.crsh.cli.Usage;
import org.crsh.command.BaseCommand;
public class mycommand extends BaseCommand {
    @Command
    @Usage("My Test Command")
    public String main() {
        return "test command invoked";
    }
}
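
Once the application is running you should be able to connect to the remote shell (assuming the defaults: ssh on port 2000, user name user, and the auto-generated password printed in the application log) and type mycommand to see "test command invoked" returned.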

How to make a Spring Boot Jar into a War to deploy on tomcat

Spring Boot is a brilliant addition to the Spring Framework; however, not everyone wants to use the jar format that it encourages you to use. The information on switching from jar to war is out there on the internet, but it doesn't seem to be in one place in the documentation, so I thought I'd put it together as a single post.

If you create a WAR project straight away then the following will already be provided; however, if you started off with jar packaging, this article explains how to switch it to war.

pom.xml changes

The first thing you will need to do is change your pom.xml file. Change the <packaging> value from jar to war. Then add the following to tell the project that the Tomcat server is provided.

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-tomcat</artifactId>
  <scope>provided</scope>
</dependency>

Code changes

In your Spring Boot application class, the one with the @SpringBootApplication annotation, you need to extend SpringBootServletInitializer and override the configure method, e.g.

@SpringBootApplication
public class DemoApplication extends SpringBootServletInitializer {

    @Override
    protected SpringApplicationBuilder configure(SpringApplicationBuilder application) {
        return application.sources(DemoApplication.class);
    }

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}

So now you should be able to deploy this war file to your application server, e.g. Tomcat. You will also see that there are two war files in your target folder, one with a .original extension. This is because the Spring Boot Maven plugin has built the war file so that it can still be run using java, e.g. java -jar demo-0.0.1-SNAPSHOT.war. Hope this helps!

How to get a team ready for TDD on their projects

TDD is great! I've found it a brilliant way of thinking about how my code is put together and how to make it testable from the outset. However, for a lot of developers there are a few hurdles to overcome first.


The biggest two I would say are:

1) Why should I bother?

2) How to do it?

Why should I bother?

 

Beyond the benefits of writing unit tests in the first place, TDD brings more to the table. Here are a few examples:

– Forces the developer to think about what the code is meant to be doing.

– Forces the developer to think about the dependencies in the code.

– Prevents tight coupling of dependencies.

– Provides the developer with a security blanket when they need to make changes to the code at a later point, i.e. they can make changes and verify they haven't broken the rest of the system.

– Because the developer has to write tests before writing any code, it forces the code to be testable all the time, removing the issues around the code being untestable.

“Force” might sound like a harsh term, however, it does mean that the code that comes out of this process will be better for it. There is less opportunity to avoid challenges like dependency management and therefore improvements naturally follow when using TDD.

How to do it?

I have heard this statement many times from developers and management: "Okay, I understand the point of TDD now, but my projects are too complex to start using it."

This is a common response from many developers, particularly on long-established projects with code bases that don't have any tests in place. The result is code that is already very difficult to test, which makes starting TDD even more difficult. Another issue I have come across is teams that aren't writing code day in, day out, so their coding skills are starting from a slightly rusty position.

Today I came across a great article called "Kata – The only way to learn TDD", which presented a good solution to the problem. Rather than trying to get developers to use TDD in their own code straight away, get them used to following the principles with some simple challenges. This also has the benefit of getting developers familiar with their IDEs, learning shortcuts and speeding up development.

So, a great way to get going with TDD is to learn the basics of Red, Green, Refactor and then practice with some simple code kata challenges. These will hone your TDD skills as well as increase your knowledge of your IDE.
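
As a taste of the Red, Green, Refactor loop, here is a minimal sketch of a first kata step in Java with JUnit 4; the FizzBuzz kata and the class names are just an example, not something taken from the article mentioned above.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Red: write a failing test before any production code exists
public class FizzBuzzTest {

    @Test
    public void multiplesOfThreeReturnFizz() {
        assertEquals("Fizz", new FizzBuzz().say(3));
    }
}

// Green: write the simplest code that makes the test pass
class FizzBuzz {

    public String say(int number) {
        if (number % 3 == 0) {
            return "Fizz";
        }
        return String.valueOf(number);
    }
}

// Refactor: with the test green, tidy up the code and then write the next failing test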

Time to give it a go. Good luck!

Setting up a Jenkins continuous integration environment for Android

The aim of this article is to run through the steps needed to set up a Jenkins based CI environment to build an Android project, run unit tests, integration tests, etc. It assumes that you are starting from a clean Windows installation without even Java installed.

  • Install JDK
  • Install SCM (Git/SVN)
  • Install Jenkins
  • Install the Android SDK
  • Install any useful Jenkins plugins
  • Create your first build target

Install JDK

At the time of writing, JDK 1.8 is the latest version, so I would recommend installing that on your server; however, if for any reason you cannot run with this JDK, then install the version that your project requires. Just head over to Oracle's site, download it and install it in the default location.

Install SCM

Git

Our project used git, and coming from an SVN background I found TortoiseGit a useful client. This can be found here. It's not the best client, because it encourages you to use git in an SVN way, but it does the job for us.

You will also need to install the git client application which you can get from here. However, if you start up TortoiseGIT you will be prompted almost straight away to do this. Download it and install it on your machine.

It is useful to have these tools on your machine so that you can manually test cloning your repositories if you have issues when configuring the server.

Set the location of git in your system path

Jenkins needs to know where the git application is installed. The easiest and most useful way to do this is to add it to the system path. Click the start menu and type environment then select system environment variables from the search results. Then click Environment Variables and find Path and edit it. Assuming you installed git in the default location you should add ;C:\Program Files (x86)\Git\bin to the end of the path.

SVN

If you still use SVN, then install your preferred SVN tools at this point.

Install Jenkins

So this is the main reason for this article: time to install Jenkins. You can find it here. Jenkins is one of the most popular CI servers for Java at the moment. It is a fork of the Hudson CI project, so if you are unfamiliar with these servers and see some people talking about Hudson and some about Jenkins, they are effectively the same thing, and plugins are compatible with both servers.


This article is for Windows and therefore we'll be installing the native package. This has the benefit that it comes with a Windows service wrapper, so when the server restarts, Jenkins will automatically restart as well.

Now is the best time to update all of the plugins that come with Jenkins. To do this click on the Manage Jenkins link on the left hand side of http://localhost:8080/ then click Manage Plugins. Click Select All at the bottom of the screen and then click Download Now and install after restart. While it is downloading the plugins, check the Restart Jenkins when installation is complete and no jobs are running checkbox so that the plugins are updated.

Install Jenkins Plugins

There are a number of plugins that we’ll need for Android or are just recommended because they are really helpful in my opinion. To select these plugins:

Click Manage Jenkins

Click Manage Plugins

Click the available tab

Search for each of these plugins, click select and then install without restart. Some of the plugins will already be installed.

  • Android Emulator Plugin – This is required to be able to run the android emulator and run instrumented automated tests.
  • Git Client Plugin
  • Git Plugin
  • JaCoCo Plugin – This is useful for capturing code coverage information when running unit tests and any other automated tests.
  • JUnit Plugin – When running any JUnit tests in your build this plugin will allow Jenkins to report the results in a useful way.
  • Gradle Plugin – This allows Jenkins to run Gradle based builds which on modern Android projects will be a requirement.
  • JavaDoc Plugin – Use this plugin to provide JavaDoc output in your build of your source.

Install Android SDK

The next step before attempting to build anything is to download and install the Android SDK and point Jenkins at it. Head over to the Android SDK site here and download the standalone SDK. Install it using the default settings, except make sure that it is available for all users so the Jenkins service can access it.

Start up the SDK Manager and download the SDK that you need and any Extras that you might need. If you’re already building from an IDE like Android Studio you’ll already know what you need.

Once this is done you need to tell the OS where Android SDK is so that Jenkins can use the right location for building. To do this open up the Windows System Environment variables and create a new variable called ANDROID_HOME and set it to C:\Program Files (x86)\Android\android-sdk assuming that is where you installed it to.

Create your first build target

  • In the top level screen, click on New Item then enter the name of your project, select Freestyle Project and click OK.
  • Enter a description for the target.
  • Then enter the details for your SCM. I’m using git so I select git and enter the URL.
  • We also connect using a specific build account and password so we add that by clicking on the add button below the URL.
  • If you get the error Could not init C:\Windows\TEMP\hudson this is most likely because you haven’t added git to the system path for Jenkins to find it. Once you do this and restart the Jenkins service it will work.
  • Next you’ll need to setup up what to build. Under the Build panel, click the Add Build Step button.
  • Select Invoke Gradle Script and click on Use Gradle Wrapper.
  • Enter clean build into the Tasks field.

You'll likely want to archive the artifacts of the build, i.e. keep the APK files for future use, so:

  • Click on Add Post Build Action and select Archive the artifacts.
  • Enter **/*.apk into the Files to archive field.

That’s it. You can of course create other targets in Jenkins to build JavaDoc, run unit tests and integration tests on code commit etc. but this will get you the base system working.

Fear of fixing code

A while back I was working on a task of retrospectively documenting a project written a couple of years earlier. Whilst I was going through the code building up my understanding of it, I came across a number of code smells. Nothing major, fortunately, but things like printing errors to System.out rather than using loggers, code that appeared to no longer be required, things that should be refactored for clarity, etc. Although this isn't great, it's no surprise, and I've seen many projects like this in the past. Equally, other developers will have come across issues with code I wrote before I appreciated the benefits of automation.


Fear

The most painful part with these issues was the fear of breaking the product if I addressed the issues.

I had this fear because the project had no automated tests to speak of, which meant the only way of verifying whether things were still working would be to manually test the code myself, which isn't the best plan, or to schedule some time in with QA. Neither of these was an option because of cost, so the code was left as is.

The trouble is, this will only get worse and over time the code will begin to rot. Changes will take longer, more manual testing will be required, and all of this costs time and money.

Solution

When writing any code, whether it is new or an addition to existing code, make sure that it has automated tests to prevent the rot and reduce the fear of making changes, because those changes can then be verified using automation. You don't have to use TDD; however, once you get the hang of it, it will certainly make writing tests a lot quicker. I recommend starting as soon as you can, to understand the ways in which to write unit and integration tests; you will soon find comfort in knowing that whenever you need to refactor code, add functionality, etc. you will have the security blanket of your automated tests to keep you calm!

How experiences in my career have impacted me

First Job

My career as a software developer started in 1999, before the non-event that was Y2K (fortunately!). Working for a large networking company (3Com, now HP) gave me lots of opportunities in a defects and enhancements team. They had a great graduate training scheme that exposed me to many different aspects of business and finance as well as computer networks and development.

Working in software support provided me with experience of what it is like to take other people’s code, understand it, read any documentation and update it. The organisation was actually really good at documenting designs etc. and there were a good number of experts available to answer questions.

I stayed there for over 10 years, moving from the sustaining group into new products and then into their applications team; however, I do believe my skills began to stagnate. Basically, I got comfortable working there. It was easy doing the same thing day in, day out, and although there were challenges, we had worked on similar things before so they were resolved quickly. Redundancy came looming several times and I avoided it more than once, but eventually it hit me!

What I learnt from this was not to get too comfortable in an organisation and to keep my skills up to date.

Redundancy & New Hope

I realised then that although I had transferable development skills, the industry had moved on but I hadn't! The big shift during my time at 3Com was from desktop applications to web based applications. I had been using Java for a good few years by this point and had dabbled with JavaScript and HTML, but I didn't know a lot about enterprise web application development. During my gardening leave I spent a lot of time sorting out my CV but, at the same time, learning new skills and technologies that were relevant to the industry. I focused on the key ones at the time, Spring and Hibernate, as these seemed to be what the cool guys were using!

I was very fortunate at the time that I was looking because I found the right company at the right time and they gave me the opportunity to use my existing skills and learn the rest on the job. This was a great opportunity. I was thrown in the deep end straight away and given the task to build a custom migration application in Spring that pulled data from the old database and used all of the new business objects to push the data into the system. There was a short time frame and the fear of migrating the data incorrectly but it spurred me on to learn fast and get up to speed!!!

Whilst working for this organisation I got many opportunities to expand my technical skills as well as my development process and managerial skills. We adopted Agile, which was a breath of fresh air compared to the previous way of working. The team was much happier working in this way and being trusted to do what they do best: make software. This was great for me; however, looking back I realise that this was still only within one specific area of the business. I needed to widen those skills. For example, we weren't doing TDD there at the time and we were barely doing automated testing. It was time to move on again when redundancy hit, after the company ran out of cash and needed to downsize.

What I learnt from this was to keep learning languages and frameworks but also to be a better developer by ensuring that quality isn’t just for QA to own. Automation and TDD give the power to the developer to ensure they are confident that their code will work now and in the future.

Time to try TDD, read more and aim to become a better developer

My next role took me to a small company near where I live, where I joined a small development team. Sat in the same room as a collection of servers, it was a slightly noisy place to work, but I got used to it. A more positive aspect of this new organisation was that I finally had the opportunity to embrace TDD. It was great because I got to read about it from the master, Kent Beck, then try it, use it and love it! It made me think about how I was writing my code and how that impacted the ability to write automated tests. I truly understood the impact of tightly coupled code and how it becomes rather irritating when writing tests. I must confess that even though I don't stick to TDD all the time, what I have learnt ensures that the code I do write allows tests to be written afterwards without needing to change the code.

I also read, cover to cover, two books by Bob Martin: Clean Code, and a great and easy read on being a software professional, The Clean Coder.

Once I had experienced TDD and read and practiced the principles in Clean Code, I was able to talk to others with confidence about the benefits and challenges. This drove a significant change in the development team: more unit tests were being written and people would come to me for advice on how to test components.

In the end though I did realise that the server noise was too much and it wasn’t going to get resolved so I felt it time to move on.

What I learnt from my time in this organisation was to embrace best practices (although these do change all the time) and to try to conduct myself in a professional way. I think these days this is possible without needing to join a professional body, because it is a matter of mindset rather than membership.