AWS ECS Docker container and load balancing with service discovery

If you have a microservice architecture in AWS and you want to direct and balance traffic, you need an Elastic Load Balancer with target groups.

Based on my experiments, this is what you need to do in order to direct traffic from a single ELB FQDN to multiple applications/containers.

This setup assumes that you have one webapp/client and one or more back-end services that the client talks to.

The steps:

  1.  Make sure that your container has a host port of 0. This will make the ECS service automatically assign a dynamic port.
  2. Create a target group for each application (client app and all back-end services)
  3. Create the ELB and add rules to your listener, for example (see the CLI sketch after this list):
    1. ClientApp: no rules here, all traffic is assumed to go to the root of the DNS
    2. Backend services: an IF rule with a path pattern like "/api/myapi*", associated with the wanted target group
      1. This will direct all traffic that contains /api/myapi to the designated target group
  4. Next, go to ECS and in your cluster create a service for each client and back-end service to which you want to direct traffic. The reason you have to create a service for each app is that you can only associate one ELB and target group with each container and its port; even if you have multiple containers in your task definition, only one container can capture your traffic unless you do additional configuration on your Docker host.
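To make this concrete, here is a rough CLI sketch of the dynamic port and path rule setup described above. The port, names, ARNs and the VPC id are placeholders, not values from an actual setup:

# In the task definition, a hostPort of 0 gives the container a dynamic host port:
#   "portMappings": [ { "containerPort": 8080, "hostPort": 0, "protocol": "tcp" } ]

# Create a target group for one back-end service (placeholder VPC id)
aws elbv2 create-target-group --name myapi-tg --protocol HTTP --port 80 --vpc-id vpc-12345678

# Add a path-based rule to an existing ALB listener (placeholder ARNs)
aws elbv2 create-rule \
    --listener-arn arn:aws:elasticloadbalancing:eu-central-1:123456789012:listener/app/my-elb/1234567890abcdef/1234567890abcdef \
    --priority 10 \
    --conditions Field=path-pattern,Values='/api/myapi*' \
    --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:eu-central-1:123456789012:targetgroup/myapi-tg/1234567890abcdef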

 

 


Gradle + Java: Multi projects

Here are a few tips for making a Java multi-project build work with Gradle:

Step 1

Create a settings.gradle file in your main project and add this with your changes:

include(':{my secondary project name here}')
// Uncomment the line below if your secondary project is in a different path than the main project
// project(':{my secondary project name here}').projectDir = file('../{my secondary project folder name, one level up, on the same level as the main project}')
rootProject.name = 'my main project name here'

Step 2

Associate the secondary project with your main project:

Go to your main project build.gradle file and add the following:

dependencies {
    compile project(':{my secondary project name here}')
}

Step 3: Extra

This is just an extra: if you simply want to refer to a local library file in your main project, you do this:

dependencies {
    compile files('lib/{my library name}.jar')
    testCompile files('lib/{my library name}.jar')
}

 

AWS ECS and Bitbucket Pipelines: Deploy your docker application

Hi,

Here are some tips and tricks on how to update an existing AWS ECS deployment.

NOTICE: This post assumes that you have some knowledge of AWS, scripting, Docker and Bitbucket.

The scripts and guide below will do the following:

  1. Clone external libraries needed for your project (assumes that you have a multi-project application)
  2. Build your application
  3. Build the Docker image
  4. Push the Docker image into ECS's own container registry (ECR)
  5. Stop all tasks in a cluster to force the associated services to restart the tasks
    1. I am using this method of deploying to avoid constantly creating new task definitions; I find that unnecessary. My view is to have a deployment Docker tag that your target service task definitions use. This way you only have to make sure that you have one good task definition that you plan to use. If needed, later update your task definition and service to use a new one. This deployment approach does not care about any detail except the Docker image and the cluster name.
  6. Then test your API or web app in the cloud with some 3rd party tool; in this case I am using a Postman collection, Postman environment settings and Newman

The above steps can be performed automatically when you make changes to a branch, or manually from the commit view or branches view (on the row with the branch or commit id, move your mouse on top of "…" to get the option "Run pipeline for a branch" and select the manual pipeline option).

Needed steps:

  1.  Create/have IAM access keys for deployment into ECS and ECR from Bitbucket.
  2. Generate SSH keys in the Bitbucket repository where you plan to run your pipeline
  3. If you have any dependent Bitbucket repositories, copy the public key from Step 2 into that repository's settings.
  4. Then, in the primary repository from which you plan to deploy, set the environment variables needed for the deployment.
  5. Create your pipeline with the example Bitbucket pipeline definition file and supplementary scripts.

Step 1: AWS Access

You will need an AWS access key/secret with the following minimum policy settings:


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "ecs:ListTasks",
                "ecr:CompleteLayerUpload",
                "ecr:GetAuthorizationToken",
                "ecs:StopTask",
                "ecr:UploadLayerPart",
                "ecr:InitiateLayerUpload",
                "ecr:BatchCheckLayerAvailability",
                "ecr:PutImage"
            ],
            "Resource": "*"
        }
    ]
}

Step 2: SSH Keys for Bitbucket

More info here on how to generate a key:

https://confluence.atlassian.com/bitbucket/use-ssh-keys-in-bitbucket-pipelines-847452940.html?_ga=2.23419699.803528104.1518767114-2088289616.1517306997

Notice: Remember to get the public key
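If you prefer to generate the key pair yourself instead of letting Bitbucket generate one in the repository settings, a minimal sketch (the file name is just an example):

# Generate a key pair without a passphrase (Pipelines cannot type one in)
ssh-keygen -t rsa -b 4096 -N '' -f bitbucket_pipelines_key

# Paste the private key into Repository settings -> Pipelines -> SSH keys,
# then copy the public key for use in Step 3
cat bitbucket_pipelines_key.pub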

Step 3: Other repositories (Optional)

From bitbucket:

If you want your Pipelines builds to be able to access a different Bitbucket repository (other than the repo where the builds run):

  1. Add an SSH key to the settings for the repo where the build will run, as described in Step 1 above (you can create a new key in Bitbucket Pipelines or use an existing key).
  2. Add the public key from that SSH key pair directly to settings for the other Bitbucket repo (i.e. the repo that your builds need to have access to).
    See Use access keys for details on how to add a public key to a Bitbucket repo.

Step 4: Setting up environment variables

APPIMAGE_TESTENV_CLUSTER : The cluster name to which the Docker image is deployed, in this case a test environment that is triggered manually

APPIMAGE_DEVENV_CLUSTER : A dev target cluster that is associated with the master branch and starts automatically

APPIMAGE_NAME : The Docker image name (Notice: must match the one in your service -> task definition)

APPIMAGE_TAG : The Docker image tag (Notice: must match the one in your service -> task definition)

AWS_ACCESS_KEY_ID (SECURE)

AWS_SECRET_ACCESS_KEY (SECURE)

AWS_DEFAULT_REGION : The region where your cluster is located

REGISTRYNAME : The ECR registry name where the image is to be pushed
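To make these concrete, here is how the pipeline below combines the image variables into the full image name (the values shown are hypothetical examples, not real ones):

# Hypothetical example values set as Bitbucket repository variables:
#   REGISTRYNAME=123456789012.dkr.ecr.eu-central-1.amazonaws.com
#   APPIMAGE_NAME=my-backend
#   APPIMAGE_TAG=deploy

# The pipeline builds and pushes this exact name, which the service's task definition must reference
export IMAGE_NAME=$REGISTRYNAME/$APPIMAGE_NAME:$APPIMAGE_TAG
echo $IMAGE_NAME   # 123456789012.dkr.ecr.eu-central-1.amazonaws.com/my-backend:deploy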

Step 5: Bitbucket Pipeline

Pipeline definitions

The sample pipeline script has two options:
* Custom/manual deployment in the custom section of the script
* Branches/automatic deployment in the branches section of the script

# This is a sample build configuration for Java (Gradle).
# Check our guides at https://confluence.atlassian.com/x/zd-5Mw for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
image: atlassian/default-image:latest # Include Java support
options:
  max-time: 15 # 15 minutes in case something hangs up
  docker: true # Include Docker support
pipelines:
  custom: # Pipelines that can only be triggered manually
    test-env:
      - step:
          caches:
            - gradle
          script:
            # Modify the commands below to build your repository.
            # You must commit the Gradle wrapper to your repository
            # https://docs.gradle.org/current/userguide/gradle_wrapper.html
            - git clone {your external repository}
            - ls
            - bash ./scripts/bitbucket/buildproject.sh
            # Install AWS CLI and configure it
            #----------------------------------------
            - apt-get update
            - apt-get install -y jq
            - curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
            - unzip awscli-bundle.zip
            - ./awscli-bundle/install -b ~/bin/aws
            - export PATH=~/bin:$PATH
            #----------------------------------------
            - bash ./scripts/bitbucket/awsdev-dockerregistrylogin.sh
            - export IMAGE_NAME=$REGISTRYNAME/$APPIMAGE_NAME:$APPIMAGE_TAG
            - docker build -t $IMAGE_NAME .
            - docker push $IMAGE_NAME
            # This will stop all tasks in the AWS Cluster, by doing this the AWS Service will start the defined task definitions as new tasks.
            # NOTICE: This approach needs task definitions attached to services and no manually started tasks.
            - bash ./scripts/bitbucket/stopalltasks.sh $APPIMAGE_TESTENV_CLUSTER
            #----------------------------------------
            # Install Newman tool and test with postman collection and environmental settings your web app
            - npm install -g newman
            - ./scripts/newman-API-tests/run-testenv-tests.sh
            #----------------------------------------
    prod-env:
      - step:
          script:
            - echo "Manual triggers for deployments are awesome!"
  branches:
    master:
      - step:
          caches:
            - gradle
          script:
            #----------------------------------------
            # Modify the commands below to build your repository.
            # You must commit the Gradle wrapper to your repository
            # https://docs.gradle.org/current/userguide/gradle_wrapper.html
            - git clone {your external repository}
            - ls
            - bash ./scripts/bitbucket/buildproject.sh
            # Install AWS CLI and configure it
            #----------------------------------------
            - apt-get update
            - apt-get install -y jq
            - curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
            - unzip awscli-bundle.zip
            - ./awscli-bundle/install -b ~/bin/aws
            - export PATH=~/bin:$PATH
            #----------------------------------------
            # Build and install the newest docker image
            - bash ./scripts/bitbucket/awsdev-dockerregistrylogin.sh
            - export IMAGE_NAME=$REGISTRYNAME/$APPIMAGE_NAME:$APPIMAGE_TAG
            - docker build -t $IMAGE_NAME .
            - docker push $IMAGE_NAME
            #----------------------------------------
            # This will stop all tasks in the AWS Cluster, by doing this the AWS Service will start the defined task definitions as new tasks.
            # NOTICE: This approach needs task definitions attached to services and no manually started tasks.
            - bash ./scripts/bitbucket/stopalltasks.sh $APPIMAGE_DEVENV_CLUSTER
            #----------------------------------------
            # Install Newman tool and test with postman collection and environmental settings your web app
            - npm install -g newman
            - ./scripts/newman-API-tests/run-devenv-tests.sh
            #----------------------------------------

Stop all AWS tasks in the cloud

#!/bin/bash

# The target cluster is passed in as the first argument by the pipeline
APPIMAGE_TARGETCLUSTER=$1

echo "Getting tasks from AWS:"
tasks=$(aws ecs list-tasks --cluster $APPIMAGE_TARGETCLUSTER | jq -r '.taskArns | map(.[0:]) | reduce .[] as $item (""; . + $item + " ")')
echo "Tasks received"
for task in $tasks; do
    echo "Stopping task from AWS: " $task
    aws ecs stop-task --task $task --cluster $APPIMAGE_TARGETCLUSTER
    #echo "Task stopped."
done

Build your project

echo "Rebuilding project"
#gradlew_output=$(./gradlew build);
#echo "$gradlew_output"

./gradlew test
if [ $? -eq 0 ]; then
    echo Tests OK
    ./gradlew clean
    ./gradlew bootRepackage
else
    echo Tests Failed
fi

Get the AWS Login details for ECR docker login

#!/bin/bash

# Notice: To use a certain profile for login, define additional profiles like this: aws configure --profile awscli

function doAwsDockerRegistryLogin()
{
    local myresult=$(aws ecr get-login --no-include-email)
    echo "$myresult"
}

# Evaluate the "docker login ..." command printed by the AWS CLI
result=$(doAwsDockerRegistryLogin)
eval $result

Running API or WebApp tests with Newman and Postman

What you need to make the tests work:

  1. Create a new Postman collection
  2. Add your URLs to test
  3. Add scripts into the test tab
  4. When all the URLs in your collection are ready, export the collection from its "…" button
  5. (Optional) Then create environment settings that you can export and use with Newman

Bash script to run the newman tests

sleep 1m # Force a wait to make sure that all AWS services, your app, LBs etc are all loaded up and running

echo $DEVENV_URL
until curl --output /dev/null --silent --head --fail --insecure "$DEVENV_URL"; do
    printf '.'
    sleep 5
done

echo "Starting newman tests"
newman run {your postman collection}.postman_collection.json --environment "{your postman collection}.postman_environment.json" --insecure --delay-request 10

Postman scripts example

Retrieving a token from the body and inserting it into an environment variable

var jsonData = JSON.parse(responseBody);

console.log("TOKEN:" + jsonData.token);

var str_array = jsonData.token.split('.');
for (var i = 0; i < str_array.length - 1; i++) {
    console.log("Array Item: " + i);
    console.log(str_array[i]);
    console.log(CryptoJS.enc.Utf8.stringify(CryptoJS.enc.Base64.parse(str_array[i])));
}
postman.setEnvironmentVariable("token", jsonData.token);

Testing a response for success and body content

// example using pm.response.to.be*
pm.test("response must be valid and have a body", function () {
    // assert that the status code is 200
    pm.response.to.be.ok; // info, success, redirection, clientError, serverError, are other variants
    // assert that the response has a valid JSON body
    pm.response.to.be.withBody;
    pm.response.to.be.json; // this assertion also checks if a body exists, so the above check is not needed
});

console.log("BODY:" + responseBody);

Links

http://2mins4code.com/2017/11/08/building-a-cicd-environment-with-bitbucket-pipelines-docker-aws-ecs-for-an-angular-app/

https://bitbucket.org/awslabs/amazon-ecs-bitbucket-pipelines-python/overview

https://confluence.atlassian.com/bitbucket/deploy-to-amazon-ecs-892623902.html

https://bitbucket-pipelines.prod.public.atl-paas.net/validator

https://confluence.atlassian.com/bitbucket/configure-bitbucket-pipelines-yml-792298910.html

https://confluence.atlassian.com/bitbucketserver/getting-started-with-bitbucket-server-and-aws-776640193.html

https://confluence.atlassian.com/bitbucket/java-with-bitbucket-pipelines-872013773.html

https://confluence.atlassian.com/bitbucket/use-ssh-keys-in-bitbucket-pipelines-847452940.html?_ga=2.23419699.803528104.1518767114-2088289616.1517306997

https://confluence.atlassian.com/bitbucket/environment-variables-794502608.html

https://docs.aws.amazon.com/cli/latest/reference/index.html#cli-aws

https://confluence.atlassian.com/bitbucket/run-pipelines-manually-861242583.html

https://www.npmjs.com/package/newman

http://blog.getpostman.com/2017/10/25/writing-tests-in-postman/

Spring Boot Cache – Custom KeyGenerator

I created this kind of a bean to have truly unique keys for caching through annotations like @Cacheable, @CachePut etc.

@Bean
    public KeyGenerator keyGenerator() {
        return new KeyGenerator() {
            @Override
            public Object generate(Object o, Method method, Object... objects) {
                // This will generate a unique key of the class name
                // and all method parameters appended.
                StringBuilder sb = new StringBuilder();
                sb.append(o.getClass().getName());
                // Uncomment the line below to also include the method name in the key
                //sb.append(method.getName());
                for (Object obj : objects) {
                    sb.append(obj.toString());
                }
                return sb.toString();
            }
        };
    }

Spring Boot CORS Bean

Hi,

Here is a code sample for enabling CORS for your Spring Boot application.

In the example below it is assumed that you have your configuration for allowed methods, origins etc. defined and passed in from somewhere, in this case a CORS settings class.

@Bean
    public FilterRegistrationBean corsFilterRegistrationBean() {
        CorsConfiguration configuration = new CorsConfiguration();
        configuration.setAllowedOrigins(Arrays.asList(this.corsSettings.getOrigin().split(",")));
        configuration.setAllowedMethods(Arrays.asList(this.corsSettings.getMethods().split(",")));
        List<String> headers = Arrays.asList(this.corsSettings.getHeaders().split(","));
        for (String header : headers) {
            configuration.addAllowedHeader(header);
        }
        UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource();
        source.registerCorsConfiguration("/**", configuration);
        FilterRegistrationBean bean = new FilterRegistrationBean(new CorsFilter(source));
        bean.setOrder(Ordered.HIGHEST_PRECEDENCE);
        return bean;
    }

Creating self-signed certificates for AWS(or Azure)

OK, you can use openssl to create a self-signed cert:

openssl genrsa 2048 > privatekey.pem
openssl req -new -key privatekey.pem -out csr.pem
openssl x509 -req -days 365 -in csr.pem -signkey privatekey.pem -out server.crt

Then you can upload it to AWS by:

aws iam upload-server-certificate --server-certificate-name {certname} --certificate-body file://server.crt --private-key file://privatekey.pem
Simple :).
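As an optional sanity check (not part of the original steps), you can verify that the certificate is now in IAM:

# List uploaded server certificates and check that {certname} appears
aws iam list-server-certificates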

Using multiple AWS CLI profiles to manage development environments

To avoid setting global AWS credentials/access for the AWS CLI, you can use CLI profiles like this:

Create a new profile:
aws configure --profile {profilename}

Then use it by adding the profile after a command like in the example below:
aws ecr get-login --no-include-email --region eu-central-1 --profile {profilename}

 

That's it. This allows you to use different access keys and policies for different purposes without different AWS security configurations overriding each other.

 

This is especially useful when you want to test your code against real-world security settings in the cloud that cannot have higher-level rights.
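For reference, a rough sketch of what aws configure writes to ~/.aws/credentials once a second profile exists (the keys and the profile name are placeholders):

# Each profile gets its own section in ~/.aws/credentials:
cat ~/.aws/credentials
# [default]
# aws_access_key_id = ...
# aws_secret_access_key = ...
#
# [{profilename}]
# aws_access_key_id = ...
# aws_secret_access_key = ...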