AWS ECS Docker container and load balancing with service discovery

If you have a microservice architecture in AWS and you want to direct and balance traffic, you need an Elastic Load Balancer (ELB) with target groups.

From my experiments, this is what you need to do in order to direct traffic from a single ELB FQDN to multiple applications/containers.

This setup assumes that you have one webapp/client and one or more back-end services that the client talks to.

The steps:

  1. Make sure that your container has a host port of 0. This makes the ECS service automatically assign a dynamic port.
  2. Create a target group for each application (client app and all back-end services)
  3. Create the ELB and add rules to your listener, for example (see the CLI sketch after this list):
    1. ClientApp: no rules here; all traffic is assumed to go to the root of the DNS
    2. Back-end services: an IF rule with a path pattern like "/api/myapi*", associated with the wanted target group
      1. This routes all traffic that contains /api/myapi to the designated target group
  4. Next, go to ECS and in your cluster create a service for each client and back-end service to which you want to direct traffic. The reason you have to create a service for each app is that you can only associate one ELB and target group with each container and its port; even if you have multiple containers in your task definition, only one container can capture your traffic unless you do extra configuration on your Docker host.
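A rough sketch of steps 2 and 3 with the AWS CLI (the target group name, VPC ID, and ARNs below are placeholders, not from the original setup):

# Create a target group for one back-end service (placeholder name and VPC)
aws elbv2 create-target-group --name myapi-tg --protocol HTTP --port 80 --vpc-id vpc-0abc123

# Attach a path-based rule to an existing ALB listener (placeholder ARNs)
aws elbv2 create-rule --listener-arn arn:aws:elasticloadbalancing:...:listener/... \
  --priority 10 \
  --conditions Field=path-pattern,Values='/api/myapi*' \
  --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:...:targetgroup/myapi-tg/...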

 

 


Gradle + Java: Multi-project builds

Here are a few tips for making a Java multi-project build work with Gradle:

Step 1

Create a settings.gradle file in your main project and add this with your changes:

include(':{my secondary project name here}')
// Uncomment the line below if your secondary project is in a different path than the main project
// project(':{my secondary project name here}').projectDir = file('../{my secondary project folder name, one level up, on the same level as the main project}')
rootProject.name = 'my main project name here'

Step 2

Associate the secondary project with your main project:

Go to your main project build.gradle file and add the following:

dependencies {
    compile project(':{my secondary project name here}')
}

Step 3: Extra

This is just extra: if you want to simply refer to a local library file in your main project, you do this:

dependencies {
    compile files('lib/{my library name}.jar')
    testCompile files('lib/{my library name}.jar')
}
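With the projects wired together, you can build everything from the root or just the secondary project; a quick sketch, assuming the secondary project is named mylib:

# Build only the secondary project
gradle :mylib:build

# Build the main project together with its project dependencies
gradle build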

 

Spring Boot CORS Bean

Hi,

Here is a code sample for enabling CORS for your Spring Boot application.

The example below assumes that you have your configuration for allowed methods, origins, etc. defined and passed in from somewhere; in this case, a CORS settings class.

@Bean
public FilterRegistrationBean corsFilterRegistrationBean() {
    CorsConfiguration configuration = new CorsConfiguration();
    // Allowed origins, methods and headers come as comma-separated strings from the settings class
    configuration.setAllowedOrigins(Arrays.asList(this.corsSettings.getOrigin().split(",")));
    configuration.setAllowedMethods(Arrays.asList(this.corsSettings.getMethods().split(",")));
    for (String header : this.corsSettings.getHeaders().split(",")) {
        configuration.addAllowedHeader(header);
    }
    // Apply the CORS configuration to all paths
    UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource();
    source.registerCorsConfiguration("/**", configuration);
    // Register the filter with the highest precedence so it runs before other filters
    FilterRegistrationBean bean = new FilterRegistrationBean(new CorsFilter(source));
    bean.setOrder(Ordered.HIGHEST_PRECEDENCE);
    return bean;
}
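To verify the filter works, you can fire a preflight request by hand; a sketch assuming the app runs on localhost:8080 and /api/resource is one of your endpoints:

curl -i -X OPTIONS http://localhost:8080/api/resource \
  -H "Origin: http://example.com" \
  -H "Access-Control-Request-Method: GET"
# A working setup responds with Access-Control-Allow-Origin and related headers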

Creating self-signed certificates for AWS (or Azure)

You can use OpenSSL to create a self-signed certificate:

openssl genrsa 2048 > privatekey.pem
openssl req -new -key privatekey.pem -out csr.pem
openssl x509 -req -days 365 -in csr.pem -signkey privatekey.pem -out server.crt
Then you can upload it to AWS with:

aws iam upload-server-certificate --server-certificate-name {certname} --certificate-body file://server.crt --private-key file://privatekey.pem
Simple :).
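To double-check that the upload went through, you can list the certificates stored in IAM:

# List the server certificates known to IAM
aws iam list-server-certificates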

A docker self-hosted Logging API using Elasticsearch, Kibana and MongoDB

Hi,

In this post I wanted to show you a way of hosting a custom logging API that uses Elasticsearch, Kibana and MongoDB.

In the GitHub link you will find the following:

  • A docker-compose file that contains the definitions needed to create the environment
  • Docker image definitions for Kibana, Elasticsearch and MongoDB

Things to know

  • Neither Kibana nor Elasticsearch uses X-Pack as a security measure; the original image definition that I used as a basis used Search Guard, but I wanted to create something that is free to use.
  • For simple authentication I used Nginx reverse-proxy images (see the credentials sketch after this list).
  • MongoDB uses and requires credentials for access.
  • The LogAPI uses TLS and basic auth at the moment.
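As an aside, a common way to provision basic-auth credentials for such an Nginx proxy (an assumption on my part, not necessarily what the repo does) is htpasswd from apache2-utils:

# Create a credentials file with user 'loguser' (prompts for a password)
htpasswd -c ./nginx/.htpasswd loguser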

TODO

  • Adding token-based authentication
  • Adding the LogAPI Node.js code
  • Using ReadonlyREST to add security between Elasticsearch and Kibana

This is a work in progress.


My ELK stack + Spring Boot + PostgreSQL docker definition

Hi,

If you need an example of putting up a Docker environment for Spring Boot, PostgreSQL and the ELK stack, take a look at this:

https://github.com/lionadi/Templates/blob/master/docker-compose.yml

version: '2'

services:
  mydatabase:
    image: mydockerimageregistry.azurecr.io/mydatabase:0.1.0
    container_name: mydatabase
    restart: always
    build:
      context: .
      dockerfile: postgredockerfile
    ports:
      - 5432:5432
    environment:
      POSTGRES_PASSWORD: f130e10792
    volumes:
      - pgdata:/var/lib/postgresql/data
    networks:
      - mynetwork

  apiapp:
    image: mydockerimageregistry.azurecr.io/apiapp:0.1.0
    container_name: apiapp
    ports:
      - 8080:8080
    environment:
      SPRING_PROFILES_ACTIVE: localdevPostgre
      SPRING_DATASOURCE_URL: jdbc:postgresql://mydatabase:5432/postgres
    build:
      context: .
      dockerfile: DockerfileAPIApp
    networks:
      - mynetwork
    volumes:
      - applogs:/logs
    depends_on:
      - "elasticsearch1"
      - "mydatabase"
    links:
      - mydatabase

  elasticsearch1:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.4.1
    container_name: elasticsearch1
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    mem_limit: 1g
    cap_add:
      - IPC_LOCK
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300
    networks:
      - mynetwork

  dockbeat:
    image: mydockerimageregistry.azurecr.io/dockbeat:0.1.0
    build:
      context: .
      dockerfile: dockerfileDockbeat
    container_name: dockbeat
    networks:
      - mynetwork
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    depends_on:
      - "elasticsearch1"
    cap_add:
      - ALL

  filebeat:
    image: mydockerimageregistry.azurecr.io/filebeat:0.1.0
    container_name: filebeat
    restart: always
    build:
      context: .
      dockerfile: dockerfileFilebeat
    networks:
      - mynetwork
    volumes:
      - applogs:/usr/share/filebeat/logs
    depends_on:
      - "elasticsearch1"
      - "apiapp"

  packetbeat:
    image: mydockerimageregistry.azurecr.io/packetbeat:0.1.0
    container_name: packetbeat
    restart: always
    build:
      context: .
      dockerfile: dockerfilePacketbeat
    networks:
      - mynetwork
    volumes:
      - applogs:/logs
    depends_on:
      - "elasticsearch1"
      - "apiapp"
    cap_add:
      - NET_ADMIN

  kibana:
    image: docker.elastic.co/kibana/kibana:5.4.1
    container_name: kibana
    networks:
      - mynetwork
    ports:
      - "9900:5601"
    environment:
      SERVER_NAME: kibana
      SERVER_PORT: 5601
      ELASTICSEARCH_URL: http://elasticsearch1:9200
      XPACK_SECURITY_ENABLED: "true"
      ELASTICSEARCH_PASSWORD: changeme
      ELASTICSEARCH_USERNAME: elastic
    depends_on:
      - "elasticsearch1"

volumes:
  pgdata:
  esdata1:
    driver: local
  esdata2:
    driver: local
  applogs:

networks:
  mynetwork:
    driver: bridge
    ipam:
      config:
        - subnet: 172.10.0.0/16
          gateway: 172.10.5.254
          aux_addresses:
            kibana: 172.10.1.8
            packetbeat: 172.10.1.7
            filebeat: 172.10.1.6
            dockbeat: 172.10.1.5
            elasticsearch2: 172.10.1.4
            elasticsearch1: 172.10.1.3
            mydatabase: 172.10.1.2
            apiapp: 172.10.1.1
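To bring the whole stack up from the directory containing this file, the standard compose commands apply; a quick sketch:

# Build the images and start everything in the background
docker-compose up -d --build

# Follow the logs of the Spring Boot container
docker-compose logs -f apiapp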

Docker, Docker Swarm and Azure guide

Table of contents

  • Azure configurations
    • Docker on Ubuntu Server
    • Connect to Azure Virtual Image for Docker on Ubuntu Server
    • Container Registry
    • Build, push and remove images to and from registry
  • Docker Swarm Configurations
    • Create a swarm
    • Deploy a service
    • Inspect a service
  • Docker Stack Deploy
  • Docker Compose
  • Tips and Tricks
    • Docker Swarm
    • Spring Boot
    • Docker Swarm

Notice: This guide assumes you have Docker installed locally for some of the steps to work.

Azure configurations

Docker on Ubuntu Server

The first step is to create a new VM that supports Docker.

The VM creation wizard in the Azure portal is rather self-explanatory.

The only bigger decision is choosing between a password and an SSH key. You can generate a key with PuTTY, a Unix environment, or the Linux Bash support in Windows (type bash in PowerShell).

Then type ssh-keygen -f {filename or path + filename}

Then copy the public key from the generated key file {filename}.pub

Connect to Azure Virtual Image for Docker on Ubuntu Server

ssh -i {filename or path + filename} {username}@{virtual image IP or DNS name}

ssh -i /root/.ssh/mykeyfile dockeradmin@{virtual image IP or DNS name}

To get the virtual image IP, go to the newly created virtual image in the Azure Portal UI: select the resource group where the Docker on Ubuntu VM was created, and from there you should be able to find your virtual image. Click on it to open its overview screen.

There you should see the DNS name and virtual IP address. Use either with the SSH command.

Remember: to access images from an Azure Container Registry, you have to log in to the registry. See Container Registry below.

Container Registry

Next, create a private container registry for publishing and managing images.

Follow this guide from Microsoft; it's pretty good:

https://docs.microsoft.com/en-us/azure/container-registry/container-registry-get-started-portal

Now, to connect to the registry, open your terminal/PowerShell and type:

docker login {registry domain} -u {username} -p {password}

docker login myregistry.azurecr.io -u xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx -p myPassword

https://docs.microsoft.com/en-us/azure/container-registry/container-registry-get-started-docker-cli

Build, push and remove images to and from registry

Once you have your image definition ready, use the following command to build a new image (note that Docker repository names must be lowercase):

docker build -t {registry domain}/{image name}:{image version number} .

docker build -t myregistry.azurecr.io/coolimage:1.0 .

Then to push it:

docker push {registry domain}/{image name}:{image version number}

docker push myregistry.azurecr.io/coolimage:1.0

To pull an image:

docker pull {registry domain}/{image name}:{image version number}

And to remove a local image:

docker rmi {registry domain}/{image name}:{image version number}

docker rmi myregistry.azurecr.io/coolimage:1.0

Docker Swarm Configurations

Create a swarm

This is very simple: just type the following command on the Azure Docker on Ubuntu virtual image:

docker swarm init

https://docs.docker.com/engine/swarm/swarm-tutorial/create-swarm/

This will create the basic structure for your swarm. More info on Docker Swarm here:

https://docs.docker.com/engine/swarm/key-concepts/#services-and-tasks
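After docker swarm init, the manager prints a join command for adding worker nodes; a sketch with a placeholder token and manager address (yours comes from the init output):

# Run on each worker node to join the swarm
docker swarm join --token SWMTKN-1-{token} {manager IP}:2377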

Deploy a service

This will create a service and replicate it in your swarm, meaning that Docker Swarm will automatically create instances of your container as tasks on your swarm worker nodes.

docker service create --replicas 1 --name {service name} -p {docker network port}:{container internal port} --with-registry-auth {registry domain}/{image name}:{image version number}

docker service create --replicas 1 --name myservice -p 8080:8080 --with-registry-auth myregistry.azurecr.io/coolimage:1.0

Inspect a service

docker service inspect {service name}

docker service inspect --pretty {service name}

This will give you an overview of the service and its configuration.

Run docker service ps {service name} to see which nodes are running the service.

Run docker ps on the node where the task is running to see details about the container for the task.

Docker Stack Deploy

The services can also be created and updated using a Docker Compose file (v3 and above). Deployment using a Docker Compose file is otherwise similar to the steps described above, but instead of creating and updating services one by one, you edit the compose file and redeploy it.

Deploying a stack

Before deploying the stack for the first time, you need to authenticate with the container registry:

docker login {registry-domain}

A stack defined in a compose file can be deployed with the following command:

docker stack deploy --with-registry-auth --compose-file {path-to-compose-file} {stack-name}

Updating a stack

An existing stack can be updated with the same command used to create it. However, the --with-registry-auth option may be omitted.

docker stack deploy --compose-file {path-to-compose-file} {stack-name}
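For concreteness, a sketch with a hypothetical compose file and stack name, plus a follow-up check:

docker stack deploy --with-registry-auth --compose-file docker-compose.yml mystack

# List the services in the stack and their replica states
docker stack services mystack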

Docker Compose

Using Docker Compose is rather simple but can be a bit confusing at first. Basically, with docker-compose you can specify a cluster of containers with apps, databases, etc. in one shot: with a single command you can set everything up.

Compose file example

To start out, you need to create a docker-compose.yml file and start configuring the Docker environment.

In the example below the compose file defines two containers/services:

  1. apppostgres
    1. A PostgreSQL database
  2. app
    1. A Spring Boot application using the database

These two services are joined together with a common network and static IPs. Notice that the PostgreSQL database needs a volumes configuration to keep its data even if the database is shut down.

Also notice that the Spring Boot application is linked to and made to depend on the database service, and that the datasource URL refers to the container name of the database service/container.

More details: https://docs.docker.com/compose/overview/

version: '2'

services:
  apppostgres:
    image: myregistry.azurecr.io/apppostgres:0.1.0
    container_name: apppostgres
    restart: always
    build:
      context: .
      dockerfile: postgredockerfile
    ports:
      - 5432:5432
    environment:
      POSTGRES_PASSWORD: mypassword
      POSTGRES_DB: appdb
    volumes:
      - pgdata:/var/lib/postgresql/data
    networks:
      - mynetwork

  app:
    image: myregistry.azurecr.io/app:0.1.3
    container_name: app
    ports:
      - 8080:8080
    environment:
      SPRING_PROFILES_ACTIVE: devPostgre
      SPRING_DATASOURCE_URL: jdbc:postgresql://apppostgres:5432/appdb
    build:
      context: .
      dockerfile: Dockerfile
    links:
      - apppostgres
    depends_on:
      - "apppostgres"
    networks:
      - mynetwork

volumes:
  pgdata:

networks:
  mynetwork:
    driver: bridge
    ipam:
      config:
        - subnet: 172.10.0.0/16
          gateway: 172.10.5.254
          aux_addresses:
            apppostgres: 172.10.1.2
            app: 172.10.1.1

Commands

To use these commands, navigate to the location where the docker-compose.yml file resides.

To build the environment:

docker-compose build

To start the services (notice the -d parameter; it means detach from the services once they are up):

docker-compose up -d

To stop and to kill the services:

docker-compose stop

docker-compose kill

And to update a single service/container, try this link:

http://staxmanade.com/2016/09/how-to-update-a-single-running-docker-compose-container/
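The gist of that link is to rebuild and recreate just the one service; a sketch assuming the service to update is named app:

# Rebuild only the 'app' image and recreate its container without touching its dependencies
docker-compose build app
docker-compose up -d --no-deps app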

Tips and Tricks

Docker Swarm

Send registry authentication details to swarm agents:

docker service update --with-registry-auth {service name}

Publish or remove a port as a node port:

docker service update --publish-add 4578:4578 {service name}

docker service update --publish-rm 4578 {service name}

Publish a port with a specific protocol:

docker service update --publish-add 4578:4578/udp client

Update the used image in the service:

docker service update --image {registry domain}/{image name}:{image version number} {service name}

Add and remove an environment variable:

docker service update --env-add "MY_DATA_URL=http://test.com" {service name}

docker service update --env-rm "MY_DATA_URL" {service name}

Connect to the container terminal:

docker exec -it {container ID or name} /bin/sh

Look at the logs from the container:

docker logs --follow {container ID or name}

Spring Boot

Run a jar package with a Spring profile:

java -jar build/libs/myapp.jar --spring.profiles.active=profile1

Repackage the Spring Boot application:

gradle bootRepackage

Docker Swarm

Create a service with an environment variable (a Spring profile):

docker service create --replicas 1 --name {service name} -e "SPRING_PROFILES_ACTIVE=profile1" -p {docker network port}:{container internal port} --with-registry-auth {registry domain}/{image name}:{image version number}
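And the same command with the placeholders filled in, reusing the example registry and image names from earlier (purely illustrative):

docker service create --replicas 1 --name myservice -e "SPRING_PROFILES_ACTIVE=profile1" -p 8080:8080 --with-registry-auth myregistry.azurecr.io/coolimage:1.0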