Microservices are all the rage right now. Everyone is taking their big monoliths and decomposing them into smaller services with exposed APIs. If you are doing this right, your services should be completely decoupled and independently releasable. Yet the way some APIs are designed makes this extremely hard, if not impossible, to accomplish. Let’s take a look at the problem and how to solve it.

If you’ve ever had to parse JSON from your terminal you probably know about jq. It’s basically sed for JSON and it works wonderfully well. If you’ve had to parse YAML from your terminal, however, the problem becomes a bit harder. You can either go for some super obscure 15-line sed-and-awk combination that has the advantage of being pure bash, or reach for a higher-level language (Ruby or Python come to mind) to actually do the parsing and output the result to stdout. In this post I’ll show jyparser, a simple tool (packaged as a nice docker image) that allows you to use a jq-like syntax to parse and also update JSON and YAML files from your terminal using exactly the same commands.
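To make the contrast concrete, here is the kind of jq one-liner this builds on, next to a sketch of the jyparser equivalent. The file names are placeholders, and the exact jyparser image name and subcommand are assumptions on my part; check the project’s README for the real invocation.

```bash
# Plain jq: extract a single field from a JSON file
jq '.version' package.json

# The jyparser idea: the same jq-like filter, but it also works on YAML
# (illustrative invocation; image name and "get" subcommand are assumed)
cat config.yml | docker run -i --rm jlordiales/jyparser get .version
```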

Most docker tutorials that you’ll find out there (the ones in this blog included) will assume that you have a single host running all your containers, or a few hosts that you manage manually. While this keeps things nice and simple for explaining the basic concepts, it is probably not the way you want to run your applications in production. In most cases you will have a cluster of servers all running different containers that need to talk to each other and keep functioning properly, even when some of those servers suddenly go offline.

If you have been following my posts on Docker then you know by now that I usually run on OSX with Boot2Docker. It is definitely a really useful tool if you are not on a native Linux kernel, and it makes using Docker on Mac and Windows almost as easy and transparent as if you were on Linux. That is, until you need to expose one or more ports from your containers and access them from your host. If you are on Linux, you can simply hit localhost on the exposed port and that’s it. If you are using boot2docker, however, you need to remember that your docker host is actually the boot2docker VM and not your laptop, so you first need to know what that VM’s IP is. In this very short post I want to describe a way in which you can access your containers on localhost even if you are using boot2docker.
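As a rough sketch of what this looks like in practice: you can always ask boot2docker for the VM’s IP, and VirtualBox can forward a host port into the VM so that localhost works too. The VM name, rule name and ports below are examples, not necessarily what the post ends up using.

```bash
# Find the boot2docker VM's IP and use it instead of localhost
boot2docker ip

# Or forward a host port to the VM through VirtualBox NAT so that
# localhost:8080 reaches the container (example VM name and ports)
VBoxManage controlvm "boot2docker-vm" natpf1 "tcp-8080,tcp,127.0.0.1,8080,,8080"
```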

In the previous post we talked about Registrator and how, combined with a service discovery backend like Consul, it allows us to have transparent discovery for our containers while still keeping their portability. One thing we didn’t talk about, though, is how we are supposed to access those services registered in Consul from our consumer applications, which could be running as containers themselves.
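For context, these are the two generic ways to look a registered service up in Consul; the open question here is how consumer containers should use them. The service name and Consul host are placeholders.

```bash
# 1. Consul's HTTP API: who provides the "redis" service?
curl http://consul-host:8500/v1/catalog/service/redis

# 2. Consul's DNS interface (served on port 8600 by default)
dig @consul-host -p 8600 redis.service.consul SRV
```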

In the previous post we talked about Consul and how it can help us towards highly available and efficient service discovery. We saw how to run a Consul cluster, register services, query them through its HTTP API as well as its DNS interface, and use the distributed key/value store. One thing we missed, though, was how to register the different services we run as docker containers with the cluster. In this post I’m going to talk about Registrator, an amazing tool that we can run as a docker container whose responsibility is to make sure that new containers are automatically registered with, and deregistered from, our service discovery tool.
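For a taste of how little ceremony this involves, here is roughly how Registrator is started according to the project’s current quickstart (the image name has changed over time, and the Consul address is a placeholder for your own setup):

```bash
# Run Registrator as a container; it watches the Docker socket and
# registers/deregisters containers against the given Consul endpoint
docker run -d --name=registrator --net=host \
  -v /var/run/docker.sock:/tmp/docker.sock \
  gliderlabs/registrator:latest \
  consul://localhost:8500
```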

In the previous post I talked a bit about Docker and the main benefits you can get from running your applications as isolated, loosely coupled containers. We then saw how to “dockerize” a small python web service and how to run this container in AWS, first manually and then using Elastic Beanstalk to quickly deploy changes to it. This was fine as an introduction to Docker, but in real life a single container running on one host will not cut it. You will need a set of related containers running together and collaborating, each with the ability to be deployed independently. This also means that you need a way to know which container is running what and where. In this post I want to talk a bit about service discovery. In particular, I’m going to show how you can use Consul running as a container to achieve this goal in a robust and scalable way.
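As a quick way to poke at these ideas locally, a single dev-mode Consul agent can itself run as a container. The official image appeared after this post was written, so the image name and flags below follow current upstream usage rather than the post’s exact setup:

```bash
# Single-node Consul in dev mode, exposing the HTTP API and DNS ports
docker run -d --name=consul -p 8500:8500 -p 8600:8600/udp \
  hashicorp/consul agent -dev -client=0.0.0.0

# List everything currently registered through the HTTP API
curl http://localhost:8500/v1/catalog/services
```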

By now I would imagine that Docker needs no introduction, given that it is one of the hottest technologies, and indeed buzzwords, in the industry today. But just in case, we’ll go through the basics of it. We’ll also see how you can quickly run a Docker container in AWS and how you can easily deploy your changes to it.
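Before AWS enters the picture, the local basics amount to two commands once you have a Dockerfile. The image name and port here are placeholders, not the post’s actual app:

```bash
# Build an image from the Dockerfile in the current directory
docker build -t my-web-service .

# Run it, mapping the container's port 5000 to the same port on the host
docker run -d -p 5000:5000 my-web-service
```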

In the previous post we saw an overview of what functional programming is and how the new features of Java 8 allow developers to write their applications in a more functional style. One of the main points of this new version of the language was the introduction of lambdas. Together with lambdas came the use of functional interfaces and method references. This post will explore these features in more detail, showing when to use them, the restrictions around them and how you can use them to make your code more readable and concise.
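To anchor the terminology, here is a minimal, self-contained illustration of all three features together (the class and variable names are made up for the example):

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;

public class Java8Sample {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("Ada", "Grace", "Alan");

        // A lambda implementing a functional interface: Predicate has a
        // single abstract method (test), so the lambda provides its body
        Predicate<String> startsWithA = name -> name.startsWith("A");

        // A method reference, equivalent to the lambda s -> System.out.println(s)
        names.stream()
             .filter(startsWithA)
             .forEach(System.out::println); // prints Ada and Alan
    }
}
```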

This is going to be the first in a series of posts where I’ll explore in a bit of detail the new functional programming ideas introduced by Java 8. In this post I’ll introduce some concepts and give a very high-level overview of all the new features in Java 8. Subsequent posts will dive into each specific topic in more detail.