My first course has been published!

Today is a big day for me. I have been working for a number of months now on a course on building Spring Cloud Functions on AWS. This is a course that is published by Packt publishing who approached me early in the new year to do a course on creating applications using AWS Lambda and Spring.

I will put more posts up about my experiences of this but for now please take a look here.

@Cacheable usage and gotchas

Recently I was working with a colleague to implement caching in an application. Initially it looks fairly straightforward to integrate; however, because of the way the code was being called we came across a number of problems that I want to share.

To see code examples of where @Cacheable works and where it doesn't, clone the code from

Dependencies required

This example code is based on Spring Boot 1.2.5. When using Spring Boot with Ehcache prior to Spring Boot 1.3, you will need to pull in an extra dependency to get hold of the EhCacheCacheManager. I struggled to find the dependency needed for org.springframework.cache.ehcache.EhCacheCacheManager, so here it is.
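As a sketch of what that looks like in a Maven pom.xml (versions omitted here; on Boot 1.2.x you may need to pin them explicitly): EhCacheCacheManager lives in spring-context-support, and you also need the Ehcache library itself:

```xml
<!-- Provides org.springframework.cache.ehcache.EhCacheCacheManager -->
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-context-support</artifactId>
</dependency>
<!-- The Ehcache 2.x library itself -->
<dependency>
    <groupId>net.sf.ehcache</groupId>
    <artifactId>ehcache</artifactId>
</dependency>
```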


Cacheable annotation

Once you have the ehcacheCacheManager set up and an ehcache.xml config file in your resources folder, you are ready to start adding the @Cacheable annotation to your public methods.

<ehcache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:noNamespaceSchemaLocation="ehcache.xsd" updateCheck="true"
         monitoring="autodetect" dynamicConfig="true">

    <diskStore path="~/cache" />

    <cache name="demoCache"
           timeToIdleSeconds="300" timeToLiveSeconds="600">
        <persistence strategy="localTempSwap" />
    </cache>
</ehcache>

In the example code we have a cache set up in ehcache.xml with the name "demoCache"; this name needs to go into the value parameter. The key that you want to use for the cache is then set in the key field. This key field can be defined as something more than just a parameter name; see Spring's documentation for more detail.

@Cacheable(value="demoCache", key="#url")
public String getURLResponse(String url) {
    System.out.println("This should only happen once!");
    return "Success";
}


Different ways of invoking the cache method

Some ways work and some don’t. In the example code referred to above there is a class MyController that invokes all the cache calls.

In the MyService class the getURLResponse method contains a log line that says "This should only happen once!"; this is there to show when the method is actually invoked. If the caching is working, you should only ever see this line once, because the cache should be returning the value. However, there are a number of demonstration method calls that show when this doesn't work, and this article explains why.

callServiceDirect explanation

The first call, callServiceDirect in MyController, is probably the most likely scenario: a controller calling an autowired service method. You'll see in the log that the target method is called once, and then the response is cached. The second call's value is returned straight from the cache.

callFromObjectWithDependencyPassedIn explanation

The next call, callFromObjectWithDependencyPassedIn, shows the scenario where you have another class that needs to invoke the service method and you pass that dependency in. This approach works and is the way you should do it; we'll then compare it with the next method, callFromObjectCreatedByTheService, which doesn't work.

callFromObjectCreatedByTheService explanation

There is a method called callFromObjectCreatedByTheService that is very quirky but is used to highlight a Spring Aspect Oriented Programming (AOP) proxy issue that people can get caught out by. When the code is run, you will notice from the log that the method's @Cacheable annotation has no effect. To explain why, we need to look into the createObjectThatCallsService method in MyService.

What this method is doing is creating another object and passing itself into that object as a form of dependency injection. You might think that this is fine; however, it doesn't work, because Spring needs to proxy the objects that it manages. The Spring documentation describes this in detail, but essentially proxying means that outside of the MyService class, references to the object are in fact a Spring proxy to the MyService object, whereas when you are running code within MyService, the this reference is the MyService instance itself and not the Spring proxy.

For AOP-based annotations like @Cacheable to work, they need to be invoked via the proxied object and not directly on the object where the method exists. Effectively, under the hood, when a method with @Cacheable is called through a proxied object, Spring can invoke any annotation-based behaviour before the actual method runs. This is not possible if you are calling MyService directly, which is why this approach doesn't work.
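To make the proxy behaviour concrete, here is a small self-contained sketch (my own illustration, not code from the example repository) that uses a plain JDK dynamic proxy as a stand-in for Spring's AOP proxy. Calls that cross the proxy boundary get the caching behaviour; the internal self-invocation bypasses it, just like the @Cacheable cases above:

```java
import java.lang.reflect.Proxy;
import java.util.HashMap;
import java.util.Map;

interface UrlService {
    String getURLResponse(String url);
    String callInternally(String url);
}

class MyService implements UrlService {
    static int targetInvocations = 0;

    public String getURLResponse(String url) {
        targetInvocations++; // stands in for "This should only happen once!"
        return "Success";
    }

    public String callInternally(String url) {
        // Self-invocation: goes straight to this.getURLResponse,
        // never through the proxy, so no caching happens.
        return getURLResponse(url);
    }
}

public class ProxyDemo {
    public static void main(String[] args) {
        MyService target = new MyService();
        Map<String, String> cache = new HashMap<>();

        // Hand-rolled caching proxy, standing in for Spring's AOP proxy.
        UrlService proxied = (UrlService) Proxy.newProxyInstance(
            UrlService.class.getClassLoader(),
            new Class<?>[]{UrlService.class},
            (proxy, method, methodArgs) -> {
                if (method.getName().equals("getURLResponse")) {
                    return cache.computeIfAbsent((String) methodArgs[0],
                        k -> target.getURLResponse(k));
                }
                return method.invoke(target, methodArgs);
            });

        proxied.getURLResponse("http://example.com"); // target runs, result cached
        proxied.getURLResponse("http://example.com"); // served from the cache
        System.out.println(MyService.targetInvocations); // prints 1

        proxied.callInternally("http://example.com");  // internal call skips the proxy
        System.out.println(MyService.targetInvocations); // prints 2
    }
}
```

Spring's @Cacheable interception works the same way: the cache check only happens when the call crosses the proxy boundary.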

callingPrivateCacheMethodFromWithinService explanation

callingPrivateCacheMethodFromWithinService is another example of a call to an @Cacheable method that doesn't work, for the same reason as explained above but in a different use case. The Spring documentation on caching annotations explains, in a tip box titled "Method visibility and @Cacheable/@CachePut/@CacheEvict", that this won't work because of a limitation of Spring AOP, and it suggests that if you want to be able to call private methods internally with AOP-based annotations then you will need to set up AspectJ instead. Sadly, I haven't been able to find a good tutorial on setting up AspectJ instead of Spring AOP.

callingCacheMethodFromWithinService explanation

callingCacheMethodFromWithinService is another example of a call to an @Cacheable method that doesn't work, again for the same reason as explained above. You will notice that it is calling the same public method that works for all calls external to MyService; as long as they are invoked using the Spring-proxied object, they will work. However, this example calls the @Cacheable method from inside MyService, which therefore won't go via the proxy object.

How to create a Spring Boot Remote Shell Command

So today I've been looking at the actuator and remote shell functionality in Spring Boot and thought I'd have a go at getting a custom command working in my simple Spring Boot app.

One challenge with the documentation is that it only talks about how to set up the command using Groovy, so what if you are using Java? There are a few pitfalls that catch people out, so I thought I'd document them here in one article.

So, the documentation on the remote shell does say that commands will be looked for on the classpath at /commands or /crash/commands. The subtle point is that the commands should be put in your project so that they are packaged as resources: remote shell commands are compiled on first use, so they need to be in your /resources folder.

So, you should put the source code for any remote shell commands in /src/main/resources/commands/. The downside is that, now that it is in the resources folder, the IDE won't highlight any syntax issues. My tip would be to write the command in your Java source tree and then, when you are happy with it, drag it into the resources folder, because any compilation issues will otherwise only show up when the command is invoked.

Dependency required:
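For Spring Boot 1.x, the remote shell support (based on the CRaSH project) is pulled in with the following starter; the version is managed by the Boot parent pom:

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-remote-shell</artifactId>
</dependency>
```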


Two annotations you'll need on your command are:

@Command
@Usage("My Test Command")

These two annotations indicate that this is a shell command and that the command description is "My Test Command".

The name of your class becomes the command name.

See my example source code below, that was in my Spring Boot Project at /src/main/resources/commands/


package commands;

import org.crsh.cli.Command;
import org.crsh.cli.Usage;
import org.crsh.command.BaseCommand;

public class mycommand extends BaseCommand {

    @Command
    @Usage("My Test Command")
    public String main() {
        return "test command invoked";
    }
}

How to make a Spring Boot Jar into a War to deploy on Tomcat

Spring Boot is a brilliant addition to the Spring Framework; however, not everyone wants to use the Jar format that it encourages. Searching on the internet, the information to switch from Jar to War is out there, but it doesn't seem to be in one place in the documentation, so I thought I'd put it together as a single post.

If you create a WAR project straight away then the following will be set up for you; however, if you started off with the Jar packaging type, this article explains how to switch it to War.

pom.xml changes

The first thing you will need to do is change your pom.xml file. Change the <packaging> value from jar to war, and add the following to tell the project that the Tomcat server is provided.
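A minimal sketch of that dependency change (the starter is standard Spring Boot; the version is inherited from the parent pom):

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-tomcat</artifactId>
    <scope>provided</scope>
</dependency>
```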


Code changes

In your Spring Boot application class that has the @SpringBootApplication annotation, you need to extend SpringBootServletInitializer and override the configure method, e.g.

@SpringBootApplication
public class DemoApplication extends SpringBootServletInitializer {

    @Override
    protected SpringApplicationBuilder configure(SpringApplicationBuilder application) {
        return application.sources(DemoApplication.class);
    }

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}

So now you should be able to deploy this war file to your application server, e.g. Tomcat. You will also see in your target folder that there are two war files, one with an .original extension. This is because the Spring Boot Maven plugin has built the war file such that it can still be run using Java, e.g. java -jar demo-0.0.1-SNAPSHOT.war. Hope this helps.

How to get a team ready for TDD on their projects

TDD is great! I’ve found it a brilliant way of thinking about how my code is put together and how to make it testable from the outset, however, for a lot of developers there are a few hurdles to overcome first.


The biggest two I would say are:

1) Why should I bother?

2) How to do it?

Why should I bother?


In addition to the benefits of writing unit tests in the first place, TDD brings additional benefits to the table. Here are a few below:

– Forces the developer to think about what the code is meant to be doing.

– Forces the developer to think about the dependencies in the code.

– Prevents tight coupling of dependencies.

– Provides the developer with a security blanket when they need to make changes to the code at a later point, i.e. they can make changes and verify they haven’t broken the rest of the system.

– Because the developer has to write tests before writing any code, it forces the code to be testable all the time, removing the issues around the code being untestable.

“Force” might sound like a harsh term, however, it does mean that the code that comes out of this process will be better for it. There is less opportunity to avoid challenges like dependency management and therefore improvements naturally follow when using TDD.

How to do it?

I have heard this statement many times from developers and management: "Okay, I understand the point of TDD now, but my projects are too complex to start using it."

This is a common response from many developers, particularly on long-established projects with code bases that don't have any tests in place. This results in code that is already very difficult to test, and therefore starting TDD is even more difficult. Another issue I have come across is in teams that aren't writing code day in, day out, and whose coding skills are therefore starting from a bit of a rusty position.

Today, I came across a great article called “Kata – The only way to learn TDD” which presented a good solution to the problem. Rather than trying to get developers used to using TDD in their own code, get them used to following the principles with some simple challenges. This also has the benefits of getting developers used to their IDEs as well, learning shortcuts and speeding up development.

So, a great way to get going with TDD is learn about the basics of Red, Green, Refactor and then get practicing with some simple code kata challenges. These will hone your TDD skills as well as increasing your knowledge of your IDE.
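As a concrete starting point, here is what the first Red-Green cycle of a simple FizzBuzz kata might look like (the class and helper names are my own invention; in a real kata you would typically use JUnit rather than a hand-rolled check method):

```java
public class FizzBuzzKata {

    // Green: the simplest production code that makes the failing tests pass.
    static String fizzBuzz(int n) {
        if (n % 15 == 0) return "FizzBuzz";
        if (n % 3 == 0) return "Fizz";
        if (n % 5 == 0) return "Buzz";
        return String.valueOf(n);
    }

    // Red: these expectations are written first, before fizzBuzz exists,
    // and are watched failing before any production code is added.
    public static void main(String[] args) {
        check(fizzBuzz(1), "1");
        check(fizzBuzz(3), "Fizz");
        check(fizzBuzz(5), "Buzz");
        check(fizzBuzz(15), "FizzBuzz");
        System.out.println("All green - time to refactor");
    }

    static void check(String actual, String expected) {
        if (!actual.equals(expected)) {
            throw new AssertionError(expected + " != " + actual);
        }
    }
}
```

The Refactor step then tidies the implementation (for example, extracting the divisibility checks) while the tests stay green.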

Time to give it a go. Good luck!

Setting up a Jenkins continuous integration environment for Android

The aim of this article is to run through the steps needed to set up a Jenkins-based CI environment to build an Android project, run unit tests, integration tests, etc. It will run through all of the tasks assuming that you have a clean Windows installation without even Java installed.

  • Install JDK
  • Install SCM (Git/SVN)
  • Install Jenkins
  • Install the Android SDK
  • Install any useful Jenkins plugins
  • Create your first build target

Install JDK

At the time of writing, JDK 1.8 is the latest version, so I would recommend installing that on your server; however, if for any reason you cannot run with this JDK, then install the version that your project requires. Just head over to Oracle's site, download it, and install it in the default location.

Install SCM


Our project used git and initially I came from an SVN background so I found TortoiseGIT a useful client. This can be found here. It’s not the best client because it encourages you to use git in an SVN way but it does the job for us.

You will also need to install the git client application which you can get from here. However, if you start up TortoiseGIT you will be prompted almost straight away to do this. Download it and install it on your machine.

It is useful to have these tools on your machine so you can manually test the cloning of your builds etc. if you have issues when configuring the server.

Set the location of git in your system path

Jenkins needs to know where the git application is installed. The easiest and most useful way to do this is to add it to the system path. Click the start menu and type environment, then select system environment variables from the search results. Then click Environment Variables, find Path, and edit it. Assuming you installed git in the default location, you should add ;C:\Program Files (x86)\Git\bin to the end of the path.


If you still use SVN, then install your preferred SVN tools at this point.

Install Jenkins

So, this is the main reason for this article: time to install Jenkins. You can find it here. Jenkins is one of the most popular CI servers for Java at the moment. It is a fork of the Hudson CI project, so if you are unfamiliar with these servers and you see some people talking about Hudson and some about Jenkins, these are effectively the same thing, and the plugins are compatible with both servers.


This article is for Windows and therefore we’ll be installing the native package. This has the benefit that it comes with Windows Service wrappers so when the server restarts then Jenkins will automatically restart as well.

Now is the best time to update all of the plugins that come with Jenkins. To do this, click on the Manage Jenkins link on the left-hand side of http://localhost:8080/, then click Manage Plugins. Click Select All at the bottom of the screen, then click Download now and install after restart. While it is downloading the plugins, check the Restart Jenkins when installation is complete and no jobs are running checkbox so that the plugins are updated.

Install Jenkins Plugins

There are a number of plugins that we’ll need for Android or are just recommended because they are really helpful in my opinion. To select these plugins:

Click Manage Jenkins

Click Manage Plugins

Click the available tab

Search for each of these plugins, select it, and then click Install without restart. Some of the plugins will already be installed.

  • Android Emulator Plugin – This is required to be able to run the android emulator and run instrumented automated tests.
  • Git Client Plugin
  • Git Plugin
  • JaCoCo Plugin – This is useful for capturing code coverage information when running unit tests and any other automated tests.
  • JUnit Plugin – When running any JUnit tests in your build this plugin will allow Jenkins to report the results in a useful way.
  • Gradle Plugin – This allows Jenkins to run Gradle based builds which on modern Android projects will be a requirement.
  • JavaDoc Plugin – Use this plugin to provide JavaDoc output in your build of your source.

Install Android SDK

The next step we need to take before attempting to build anything is to download and install the Android SDK and point Jenkins at it. Head over to the Android SDK site here and download the standalone SDK. Install it using the default settings, except make sure that it is available for all users so the service can access it.

Start up the SDK Manager and download the SDK that you need and any Extras that you might need. If you’re already building from an IDE like Android Studio you’ll already know what you need.

Once this is done, you need to tell the OS where the Android SDK is so that Jenkins can use the right location for building. To do this, open up the Windows System Environment variables and create a new variable called ANDROID_HOME and set it to C:\Program Files (x86)\Android\android-sdk, assuming that is where you installed it.

Create your first build target

  • In the top level screen, click on New Item then enter the name of your project, select Freestyle Project and click OK.
  • Enter a description for the target.
  • Then enter the details for your SCM. I’m using git so I select git and enter the URL.
  • We also connect using a specific build account and password so we add that by clicking on the add button below the URL.
  • If you get the error Could not init C:\Windows\TEMP\hudson this is most likely because you haven’t added git to the system path for Jenkins to find it. Once you do this and restart the Jenkins service it will work.
  • Next you’ll need to setup up what to build. Under the Build panel, click the Add Build Step button.
  • Select Invoke Gradle Script, click on Use Gradle Wrapper
  • Enter clean build into the Tasks field.

You'll likely want to archive the artifacts of the build, i.e. keep the APK files for future use, so:

  • Click on Add Post Build Action
  • Enter **/*.apk into the Files to archive field.

That’s it. You can of course create other targets in Jenkins to build JavaDoc, run unit tests and integration tests on code commit etc. but this will get you the base system working.

Fear of fixing code

A while back I was working on a task of retrospectively documenting a project written a couple of years earlier. While I was going through the code building up my understanding of it, I came across a number of code smells. Nothing major, fortunately, but things like printing errors to System.out rather than using loggers, code that appeared to no longer be required, things that should be refactored for clarity, etc. Although this isn't great, it's no surprise, and I've seen many projects like this in the past. Also, developers will have come across issues with my code written before I appreciated the benefits of automation.



The most painful part with these issues was the fear of breaking the product if I addressed the issues.

I had this fear because the project had no automated tests to speak of, which meant the only way of verifying whether things were still working would be to manually test the code myself, which isn't the best plan, or to schedule some time in with QA. Neither of these was an option because of cost, so the code was left as is.

The trouble is, this will only get worse, and over time the code will begin to rot. Changes will take longer, more manual testing will be required, etc., and all this costs time and money.


When writing any code, whether it is new or an addition to existing code, make sure that it has automated tests, to prevent the rot and to reduce the fear of making changes, which can then be verified using automation. You don't have to use TDD; however, once you get the hang of it, it will certainly make writing tests a lot quicker. I recommend starting as soon as you can, to understand the ways in which to write unit and integration tests; you will soon find comfort that whenever you need to refactor code, add functionality, etc. you will have the security blanket of your automated tests to keep you calm!

How experiences in my career have impacted me

First Job

My career as a software developer started in 1999, before the non-event that was Y2K (fortunately!). Working for a large networking company (3Com, now HP) gave me lots of opportunities in a defects and enhancements team. They had a great graduate training scheme that exposed me to many different aspects of business and finance as well as computer networks and development.

Working in software support provided me with experience of what it is like to take other people’s code, understand it, read any documentation and update it. The organisation was actually really good at documenting designs etc. and there were a good number of experts available to answer questions.

I stayed there for over 10 years, moving from the sustaining group into new products and then into their applications team; however, I do believe my skills began to stagnate. Basically, I got comfortable working there; it was easy doing the same thing day in, day out, and although there were challenges, we had worked on similar things before, so they got resolved quickly. Redundancy came looming several times and I avoided it more than once, but eventually it hit me!

What I learnt from this was not to get too comfortable in an organisation and to keep my skills up to date.

Redundancy & New Hope

I realised then that although I had transferable development skills, the industry had moved on, but I hadn't! The big shift in my time at 3Com was from desktop applications to web-based applications. I had been using Java for a good few years by this point and had dabbled with JavaScript and HTML, but I didn't know a lot about enterprise web application development. During my gardening leave I spent a lot of time sorting my CV out while also learning new skills and technologies that were relevant to the industry. I focused on the key ones at the time, Spring and Hibernate, as these seemed to be what the cool guys were using!

I was very fortunate at the time that I was looking because I found the right company at the right time and they gave me the opportunity to use my existing skills and learn the rest on the job. This was a great opportunity. I was thrown in the deep end straight away and given the task to build a custom migration application in Spring that pulled data from the old database and used all of the new business objects to push the data into the system. There was a short time frame and the fear of migrating the data incorrectly but it spurred me on to learn fast and get up to speed!!!

Whilst working for this organisation I did get many opportunities to expand both my technical skills and my development process and managerial skills. We adopted Agile in the organisation, which was a breath of fresh air compared to the previous way of working. The team was much happier working in this way and being trusted to do what they do best: make software. This was great for me; however, looking back, I realise that this was still only in a specific area of the business. I needed to widen those skills; for example, we weren't doing TDD there at the time, and we were barely doing automated testing. It was time to move on again when redundancy hit: the company ran out of cash and needed to downsize.

What I learnt from this was to keep learning languages and frameworks but also to be a better developer by ensuring that quality isn’t just for QA to own. Automation and TDD give the power to the developer to ensure they are confident that their code will work now and in the future.

Time to try TDD, read more and aim to become a better developer

My next role took me to a small company near where I live, where I joined a small development team. Sitting in the same room as a collection of servers, it was a slightly noisy place to work, but I got used to it. On the positive side, at this new organisation I finally had the opportunity to embrace TDD. It was great because I got to read about it from the master, Kent Beck, try it, use it and love it! It made me think about how I was writing my code and how that impacted the ability to write automated tests. I truly understood the impact of tightly coupled code and how it becomes rather irritating when writing tests. I must confess that even though I don't stick to TDD all the time, what I have learnt ensures that the code I do write enables tests to be written afterwards without needing to change the code.

I also read, cover to cover, two books by Bob Martin: Clean Code, and a great and easy read on being a software professional, The Clean Coder.

Once I had experienced TDD and read and practised the principles in Clean Code, I was able to talk to others with confidence about the benefits and challenges. This drove a significant change in the development team, where more unit tests were being written and people would come to me for advice on how to test components.

In the end though I did realise that the server noise was too much and it wasn’t going to get resolved so I felt it time to move on.

What I learnt from my time in this organisation was to embrace best practices (although these do change all the time) and try to conduct myself in a professional way. I think these days this is possible without needing to join a professional body because it is one of mindset rather than membership.

Putting logic in the database. Good or bad practice?

So, today I had a discussion with a colleague about whether it was a good idea to store business logic in a database trigger. My default position, probably because I’m an application developer has always been to keep any logic in the application and just use the database to store data.

The discussion in particular was around whether to use a database trigger to copy comments into an audit table and then remove the comments from the current table, essentially moving them at a particular point in the workflow. The thing I don't like about this is that the ground is moving beneath your feet: in certain situations the data will disappear, which a few months or years down the road could waste time in working out what is happening. The justification for this change was that it's easy to do it there. My point was that you can just as easily write it in the application, and also write automated tests as part of the build to verify it's working. True, you could probably write SQL tests as well to prove the trigger.

What do others think?

It got me thinking that I should really see what the general consensus is in the industry before I push hard either way. It seems that a lot of people are divided, and maybe because some are database developers and the others are application developers.

The question was raised here on Stack Exchange and the "answer" pretty much came back as "don't do it". This made me think: yes, I was right. However, looking further down the list of responses, there is some middle ground.

Some people use databases as simple key/value pair systems, and from my perspective this isn't a bad thing. You don't have to have a specialist DBA; if performance of the operation isn't critical then keep it in the application, and all the logic is in one searchable place (developers often don't think to look in the database for logic). You also don't end up with vendor tie-in with your database provider; although it is rare, organisations do switch DB systems.

The arguments for putting things into the database (triggers specifically) come from this article. They say that the problems aren't with the triggers, they are with the developers, pushing the onus onto the skill of the developer. There are also some capabilities that databases provide, like auto-generated columns, where triggers and stored procedures come in.


I think the conclusions aren't clear-cut; the opinion in a couple of places seems to be that triggers generally aren't great, for the reasons I gave above. It depends a lot on usage, but I still stand by my principles of ensuring that software should be easy to understand: you shouldn't have to hunt around many different systems to see what's going on, and you should only do something in other ways if there is a compelling need, i.e. performance etc.

In the situation described above, I do believe that auditing is a good use case for a database trigger, and I said that in the end to wrap up the discussion. I think it is important to question and have an opinion on development matters; however, it is also important to step back when suitable and learn from different points of view.

How to deploy a Spring Boot application to Amazon AWS using Elastic Beanstalk

I have recently started playing with Spring Boot and have been really impressed with everything I've seen. If I need to create a RESTful web service, no problem. If I need to create a web MVC application that uses JPA, no problem. Using embedded Tomcat and the H2 database out of the box enables any Java developer to rapidly create applications. Then, when I need to deploy this onto a Tomcat server as a war, it's simply a case of changing the pom.xml and ensuring that the Tomcat dependencies are provided. (I'll cover this later.)

This article is the first in a series that will share what I learn about Spring Boot and deploying it into the cloud. The articles will include details on how to architect an application that uses Spring Profiles to run integration tests on local databases, while being able to deploy the same code onto a production server in the cloud.

Watch the YouTube video below that covers all of the steps in this article.

The code that goes alongside this article can be found here.

Resources that I have used to build my understanding of Spring Boot and Cloud deployment include:

– Rob Harrop’s demo of Spring deployment on Amazon AWS. There are some excellent tips on Amazon AWS and what it does well and not so well. It’s well worth a watch.

– Spring Guides available here. All the guides you need to get up and running with the basics.

Creating a Spring Boot War
1) Start up the Eclipse IDE with the Spring Extensions installed. For Luna, add this link to your update installer.
2) You’ll also need Tomcat server installed in Eclipse. If you don’t have this setup then search Google for setup instructions before you continue.
3) Select File->New->Other->Spring->Spring Starter Project
4) Set the name and the artifact to spring-boot-aws
5) Change the packaging from jar to war (This does a couple of things that I’ll explain later)
6) Select Actuator and Remote Shell so that we have some RESTful services to test the app with.
7) Click Finish

So, what has this done? It has created a simple Spring Boot application with some REST services like /beans that will return a JSON object of all the beans in your application.

There are two differences between the War variant and the Jar variant. The War variant doesn't need an embedded Tomcat because it will be deployed in a Tomcat server, so the pom.xml has the spring-boot-starter-tomcat dependency set to "provided". The Jar variant has the scope tag removed, to include this dependency in the jar.
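For reference, this is the shape of the War variant's dependency (in the Jar variant, the same dependency simply has no scope element):

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-tomcat</artifactId>
    <scope>provided</scope>
</dependency>
```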


The second difference is the ServletInitializer:

public class ServletInitializer extends SpringBootServletInitializer {

    @Override
    protected SpringApplicationBuilder configure(SpringApplicationBuilder application) {
        return application.sources(SpringBootAwsApplication.class);
    }
}


Now that you have this application created, we need to generate the war file for deployment onto Amazon AWS. Right click on the pom.xml and select Run As->Maven Install. This will run the build and create the war file in the target folder of your application.

Deploy your application using Amazon Elastic Beanstalk

1) Login to Amazon AWS.
2) In the main control panel select Elastic Beanstalk under Deployment & Management.
3) Click on Create Application in the top right corner.
4) Enter the Application Name and click Next.
5) Environment Tier – Web Server
6) Predefined Configuration – Tomcat
7) Environment Type – Single instance
8) Click Next
9) Select Upload your own, click Browse and locate the war you created earlier.
10) When the application is uploaded you will see the next page where you select your URL.
11) Enter a name and click check availability to see if you can use it.
12) Click Next
13) We don’t need a RDB in this example so click next here.
14) In this next step you are defining the EC2 instance that will be created, if you are using a free trial then stick to the free t1.micro instance type.
15) EC2 Key Pair, can be left unselected. You won’t need it for now and most likely you won’t have one configured yet. This will be covered in a later post.
16) Click Next.
17) In Environment Tags click next again because we don’t care about this.
18) Review the configuration, and then click Launch.

Amazon AWS will now provision your server, install the Tomcat server and deploy the war file that you uploaded. It does take a good 5-10 minutes for this action to complete.

Once it is up and running, bizarrely the health starts as red. When you see this, you should be able to go to the URL you configured earlier; remember to include the beans part in the URL, as that will invoke the Spring Boot REST service to return a JSON object containing all the Spring beans in your application.

If this doesn't work, it is worth going back and checking whether your application works locally before trying to diagnose the issue on AWS. However, it should work just fine. Failing that, do a git clone and then a mvn package of this project, and upload that war instead.

To upload another war, use the Upload and Deploy button in the middle of the screen.


Please come back for future posts on hooking up databases to the sample application. I will explain how to build the configuration in AWS as well as how to use Spring Profiles to control the datasource in the application.