Lessons learned from building Microservices – Part 3: Patterns, Architecture, Implementation And Testing

Introduction

In this blog post I will go over things I’ve learned when working with microservices. I will cover things to do and things to avoid. There won’t be a lot of code here; mostly theory and ideas.

I will discuss most of the topics from a smaller development team’s point of view. Some of these things are applicable to larger teams and projects, but not necessarily all of them. It all depends on your own project and needs.

  • Architecture
  • Sharing functionality (Common code base)
  • Continuous Integration / Continuous Delivery (CI/CD)
  • High Availability
  • Messaging and Event driven architecture
  • Security
  • Templates and scripting
  • Logging and Monitoring (+metrics)
  • Configuration pattern
  • Exception handling and Errors
  • Performance and Testing

General advice

Generally I advise you to consider and use existing technologies, products and solutions with your application and architecture to speed up your development and keep the amount of custom code to a minimum.

This will help you avoid errors and save time and money.

Still, make sure that you choose technologies and products that support your and your client’s solutions, not things that you think are “cool” right now or would be fun to use. Your choices should fit the needs and requirements of not only your project but the architecture as a whole and the future vision of your project.

Architecture

To keep things simple I would say there are two ways to approach microservices.

Approach one: Starting big but small

The first approach is the one you are probably familiar with: the big projects by big companies like Amazon, Netflix, Google, Uber etc.

Usually this involves creating hundreds or even thousands of microservices on multiple platforms and technologies. This usually requires large teams of people developing, deploying and maintaining the microservice solution they are working on.

This approach is definitely not for everyone; it requires a lot of people, resources and money.

So this is why I recommend approach number two.

Approach two: Starting small but plan big

Most likely you have a team of a few people and limited resources. In this case I recommend starting somewhere between a microservice architecture and a monolithic one.

By this I mean that you start by designing and implementing all the infrastructure of a microservice architecture, but do not start splitting your application into microservices from the start. Create one microservice and expand it; split it only when things have grown to the point where a new microservice feels needed.

By this time you have had time to understand and analyze your business domain. Now you have an idea of what kind of communication between microservices you need; perhaps HTTP based or decoupled, messaging based.

When you are creating your microservices, keep your design patterns simple. Do not implement overly complicated patterns to impress anyone, including yourself; it will make the upkeep of your microservices hell. You want to keep the code as simple and as consistent as possible within each microservice and across them.

Share as much as possible between microservices.

Create good Continuous Integration and Continuous Deployment procedures as soon as possible. It will save you time.

Verify you have proper availability and scalability based on your application needs.

Prefer scripting to automate and speed up development and upkeep.

Use templates everywhere you can, especially for creating new microservices.

Have a common way to do exception handling and logging.

Have a good configuration plan when deploying your Microservices.

You also need team members who are not afraid to do many different things with multiple technologies. You need people who can learn anything and can adapt and develop with any tool, technology or language.

With these approaches and this checklist you should be able to manage a microservice architecture with only a handful of people. For maintenance even one person is enough, but for constant development at least two or three people would be a good amount.

Sharing functionality (Common code base)

When it comes to code I prefer the “golden” rule of programming: do not repeat yourself. But with microservices you will end up with duplication.

The wise thing to do with microservices is to know when not to duplicate and to have common access to shareable code and functionality. Why do this? Because otherwise:

  • Developers end up writing similar code that is used again and again in multiple microservices.
  • These common pieces of code and functionality end up having the same kind of problems and bugs, which have to be corrected in every place.
  • The problems and bugs cause security issues.
  • They also cause performance issues.
  • They produce hard to understand code: the logic and functionality are the same, but the code ends up being slightly or vastly different in each place.
  • And lastly, all of the above combined cause you to spend time and money that you may not have.

The next question to ask is:

What should you share? The main rule is that it must be common. The code should not be specific to a certain microservice or domain.

Again it all depends on your project but here are a few things:

  • Logging; I highly recommend unifying the logging output format so that it is easy to index and analyze.
  • Access Logs
  • Security
    • User authorization but not authentication or registration. Registration is better suited as an external microservice as its own domain.
    • Encryption
    • JSON Web Token related security code and processing
    • API Key
    • Basic Auth
  • Metrics
  • HTTP Client class for HTTP requests. Create an instance of this class with different parameters to share common functionality for logging, metrics and security (a small sketch follows after this list).
  • Code to use and access Cloud based resources from AWS, Azure
    • CloudWatch
    • SQL Database
    • AppInsights
    • SQS
    • ServiceBus
    • Redis
    • etc…
  • Email Client
  • Web Related Base classes like for controllers
  • Validations and rules
  • Exception and error handling
  • Metrics logic
  • Configuration and settings logic
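
As an illustration of the shared HTTP client idea above, here is a minimal sketch, assuming Java 11+ and SLF4J; the class name, header name and constructor parameters are my own and purely hypothetical:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical shared wrapper: every microservice gets the same logging,
// timeout and security header handling by going through this one class.
public class CommonHttpClient {

    private static final Logger LOG = LoggerFactory.getLogger(CommonHttpClient.class);

    private final HttpClient client;
    private final String apiKey;

    public CommonHttpClient(String apiKey, Duration timeout) {
        this.apiKey = apiKey;
        this.client = HttpClient.newBuilder().connectTimeout(timeout).build();
    }

    public String get(String url) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("X-Api-Key", apiKey) // shared security handling
                .GET()
                .build();

        long start = System.currentTimeMillis();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        // Shared logging/metrics: every service logs outgoing calls in the same format.
        LOG.info("GET {} -> {} in {} ms", url, response.statusCode(), System.currentTimeMillis() - start);
        return response.body();
    }
}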

How should you distribute your shared functionality and code? Well it all depends on your project but here are a few ways:

  • One library to rule them all :D. Create one library which all projects need to use. Notice: this might become a problem later on as the amount of code in your common library grows; you will end up with functionality that you may not need in a particular microservice.
  • Create multiple libraries that are used on a need basis, so that each microservice uses only the bits of functionality it needs.
  • Create Web APIs or similar services that you request to perform a certain action. This might work for things like logging or caching but not for all functionality. Also notice that you will trade code speed for latency if you outsource your common functionality to a common service that runs independently from the code that needs it.
  • A combination of all of the above.

Dependency injection

Use your preferred dependency injection library to manage your classes and dependencies.

When using your DI I recommend combining classes into “packages” of functionality by feature, domain, logic, data source etc. By doing this you can target specific parts of your code without “contaminating” the project with unneeded code, even if you have a large common library.

For example you could pack a set of classes that provide the functionality to communicate with a CRM: get, modify and add data.

This would include your model classes, a CRM client class, some logic that operates on the models to clean them up, etc.

Once you identify them, structure your code so that you can add them to your project with the least amount of code.
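
As a rough sketch of such a “package”, assuming Spring, a single configuration class can pull a whole feature into a microservice; CrmClient and CrmDataCleaner are hypothetical classes standing in for the CRM example above:

import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Import;

// Hypothetical feature package: importing this one class wires up everything
// needed to talk to the CRM (client, clean-up logic) and nothing else.
@Configuration
@Import({CrmClient.class, CrmDataCleaner.class})
public class CrmFeatureConfiguration {
}

A microservice that needs CRM access then only adds @Import(CrmFeatureConfiguration.class) on its application or configuration class, which keeps unrelated code out of services that do not need it.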

Also consider creating logic that automatically tells a developer which configurations are missing once a set of functionality is added. The easiest way to achieve this is with compile-time and/or runtime checks.

See my previous article on this matter for a more detailed description:

https://lionadi.wordpress.com/2019/10/01/spring-boot-bean-management-and-speeding-development/

Continuous Integration/Continuous Delivery (CI/CD)

There are many ways of doing CI and CD, but the main point is to ADD it and automate as much as possible. This is especially important with microservices and small team sizes.

It will speed things up and keep things working.

Here are a few things to take into consideration:

  1. Create unit tests that are run during your pipelines
  2. Create API or Service level tests that verify that things work with mock or real life data. You can do this by mocking external dependencies or using them for real if available.
  3. Add performance tests and stability tests to your pipelines if possible to verify that things run smoothly.
  4. Think of using the same tool for creating your API or service tests when developing and when running the same tests in a pipeline. You can just reuse the same tests and be sure that what you test manually is the same that should work in production. For example: https://www.postman.com/ and https://github.com/postmanlabs/newman
  5. Script as much as possible and parametrize your scripts for reuse. Identify which scripts can be used and shared to avoid doing things twice.
  6. Use semantic versioning https://semver.org/
  7. Have a deployment plan on how you are going to use branches to deploy to different environments (https://www.atlassian.com/git/tutorials/comparing-workflows/gitflow-workflow), for example:
    1. You can use release branches to deploy to different environments based on pipeline steps
    2. Or have specific branches for specific environment. Once things are merged into them certain things start to happen
  8. Use automated build, test and deployment for dev environment once things are merged to your development branch.
  9. Use manual steps for deployments to other environments; this is to avoid testers receiving bad builds in QA, or production crashing on bugs that were not caught.
  10. If you do decide to automate everything all the way to production, make sure you have good safeguards so that things don’t blow up.
  11. And lastly; nothing is eternal. Experiment and re-iterate often and especially if you notice problems.

Common tools for CI/CD

https://www.sonatype.com/product-nexus-repository

https://www.ansible.com/

https://www.rudder.io/

https://www.saltstack.com/

https://puppet.com/try-puppet/puppet-enterprise/

https://cfengine.com/

https://about.gitlab.com/

https://www.jenkins.io/

https://codenvy.com/

https://www.postman.com/

https://www.sonarqube.org/

High Availability

The main point in high availability is that your solution will continue to work as well as possible or as normal even if some parts of it fail.

Here are the three main points:

  • Redundancy—ensuring that any elements critical to system operations have additional, redundant components that can take over in case of failure.
  • Monitoring—collecting data from a running system and detecting when a component fails or stops responding.
  • Failover—a mechanism that can switch automatically from the currently active component to a redundant component, if monitoring shows a failure of the active component.

Technical components enabling high availability

  • Data backup and recovery—a system that automatically backs up data to a secondary location, and recovers back to the source.
  • Load balancing—a load balancer manages traffic, routing it between more than one system that can serve that traffic.
  • Clustering—a cluster contains several nodes that serve a similar purpose, and users typically access and view the entire cluster as one unit. Each node in the cluster can potentially failover to another node if failure occurs. By setting up replication within the cluster, you can create redundancy between cluster nodes.

Things that help in high availability

  • Make your application stateless
  • Use messaging/events to ensure that business critical functionality is performed at some point in time. This is especially true for any write, update or delete operations.
  • Avoid heavy coupling between services if possible; if you have to couple them, use a lightweight messaging system. The most troublesome aspect of communication between microservices is going to be HTTP.
  • Have good health checks that are fast to respond when requested (see the sketch after this list). These can be divided into two categories:
    • Liveness: Checks if the microservice is alive, that is, if it’s able to accept requests and respond.
    • Readiness: Checks if the microservice’s dependencies (Database, queue services, etc.) are themselves ready, so the microservice can do what it’s supposed to do.
  • Use a “circuit breaker” to quickly stop calling unresponsive services and to resume calls once they recover
  • Make sure that you have enough physical resources(CPU, Memory, disk space etc) to run your solution and your architecture
  • Make sure you have enough request threads supported in your web server and web application
  • Make sure you verify how large an HTTP request your web server and application are allowed to receive; the headers are usually what will fail your application.
  • Test your solution broadly with stress and load tests to identify problems. Attach a profiler during these tests to see how your application performs, what bottlenecks there are in your code, what hogs resources etc.
  • Keep your microservice image sizes to a minimum for optimal runs in production and optimal deployment. Don’t add things that you don’t use; it will slow your application down and deployment will suffer, and all of this will lead to more physical resources and more money being needed.
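
As an example of the readiness check idea from the list above, here is a minimal sketch assuming Spring Boot Actuator; DatabaseConnectionChecker is a hypothetical helper standing in for your own dependency check:

import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;

// Hypothetical readiness indicator: reports DOWN when a dependency (here a
// database) is not reachable, so the orchestrator stops routing traffic to
// this instance until it recovers.
@Component
public class DatabaseReadinessIndicator implements HealthIndicator {

    private final DatabaseConnectionChecker checker; // hypothetical helper

    public DatabaseReadinessIndicator(DatabaseConnectionChecker checker) {
        this.checker = checker;
    }

    @Override
    public Health health() {
        if (checker.canConnect()) {
            return Health.up().withDetail("database", "reachable").build();
        }
        return Health.down().withDetail("database", "unreachable").build();
    }
}

Liveness and readiness probes in your orchestrator (for example Kubernetes) can then point at the corresponding health endpoints.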

Messaging and Event driven architecture

I will be covering this topic in an upcoming post but until then here are a few pointers.

Because microservices by nature can be quickly scaled up and down based on need, I would very highly recommend that you use messaging for business critical operations and logic.

The most important ones I would say are: Writing, Updating and Deleting data.

I also recommend using messaging for all long running operations.

Notice: One of the most important things, in my view, is that you log and monitor the success of messages being sent, processed and finished, with a trail back to the original request so you can connect logs and metrics together and get the whole picture when troubleshooting.
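
As a small illustration of connecting message processing back to the original request, here is a sketch assuming SLF4J’s MDC; the header name and the handler shape are hypothetical and not tied to any specific broker:

import java.util.Map;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

// Hypothetical message handler: the correlation id created for the original
// HTTP request travels in a message header, so every log line written while
// processing the message can be traced back to that request.
public class OrderMessageHandler {

    private static final Logger LOG = LoggerFactory.getLogger(OrderMessageHandler.class);

    public void handle(Map<String, String> headers, String payload) {
        MDC.put("correlationId", headers.getOrDefault("X-Correlation-Id", "unknown"));
        try {
            LOG.info("Processing message, payload size {}", payload.length());
            // ... business logic ...
            LOG.info("Message processed successfully");
        } catch (RuntimeException ex) {
            LOG.error("Message processing failed", ex);
            throw ex;
        } finally {
            MDC.clear();
        }
    }
}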

I have covered this in my previous logging post: https://lionadi.wordpress.com/2019/12/03/lessons-learned-from-building-microservices-part-1-logging/

Security

Generally security is an important aspect of any application and has many different topics and details to cover.

I covered security extensively in my previous post in this series on microservices; go check it out: https://lionadi.wordpress.com/2020/03/23/lessons-learned-from-building-microservices-part-2-security/

Templates and scripting

To speed up development, keep things consistent and thus avoid duplicate errors and unnecessary fixes, use templates where possible. This is especially true for microservices.

What are possible templates that you could have:

  • Templates for deploying Cloud resources like ARM for Azure or Cloudformation for AWS.
  • Backend application templates
  • Frontend application templates
  • CI/CD templates
  • Kubernetes templates
  • and so on…

Anything that you know you will end up having multiple copies of is good to standardize and turn into a template.

Also, for applications (frontend or backend), it is a very good practice to have them in a state where they are up and running as soon as you clone them from your repository and start them.

Script as much as possible and make the scripts reusable.

Parametrize all of the variables you can in your scripts and templates.

Here are a few things you would need for a backend application template:

  • Security such as authentication and authorization.
  • Logging and metrics
  • Configuration and settings logic
  • Access Logs
  • Exception handling and errors
  • Validations

Logging and Monitoring (+metrics)

Again, as with security, this is a large topic. I’ve also written about this in a previous post in the series and recommend checking it out:

https://lionadi.wordpress.com/2019/12/03/lessons-learned-from-building-microservices-part-1-logging/

Configuration pattern

For microservice configurations I recommend the following pattern: the configuration/settings values in your deployment environment (DEV, QA, PROD etc.) configuration files are left empty. You still have the configuration/setting keys in your configuration/settings files, but you leave the values empty.

Next you need to make sure that your code knows how to report empty configuration values when your application is started. You can achieve this by creating a common way to retrieve configuration/settings values and by being able to analyze which of the needed and loaded configurations are present.

This way when your docker image is started and the application inside the image starts running and retrieving configurations, you should be able to see what is missing in your environment.

This is mostly because you don’t want to have your environment specific configurations set in your git repository, especially the secrets. You will end up setting these values in your actual QA, PROD etc. environments through some mechanism. If you forget to add a setting/configuration in that mechanism your docker image may crash, and you can end up searching for the problem for a long time; even with proper logging it may not be immediately clear.
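
A minimal sketch of such a startup check, assuming Spring Boot and Java 11+; the required keys listed here are hypothetical placeholders for your own settings:

import java.util.List;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.ApplicationArguments;
import org.springframework.boot.ApplicationRunner;
import org.springframework.core.env.Environment;
import org.springframework.stereotype.Component;

// Runs once on startup and reports every required configuration key that is
// missing or left empty, so a bad deployment is visible immediately in the logs.
@Component
public class RequiredConfigurationCheck implements ApplicationRunner {

    private static final Logger LOG = LoggerFactory.getLogger(RequiredConfigurationCheck.class);

    // Hypothetical keys; replace with the settings your service actually needs.
    private static final List<String> REQUIRED_KEYS =
            List.of("app.database.url", "app.queue.name", "app.api.key");

    private final Environment environment;

    public RequiredConfigurationCheck(Environment environment) {
        this.environment = environment;
    }

    @Override
    public void run(ApplicationArguments args) {
        for (String key : REQUIRED_KEYS) {
            String value = environment.getProperty(key);
            if (value == null || value.isBlank()) {
                LOG.error("Missing or empty configuration value for '{}'", key);
            }
        }
    }
}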

I’ve written a previous post on this matter which opens things up at the code level:

https://lionadi.wordpress.com/2019/10/01/spring-boot-bean-management-and-speeding-development/

Exception handling and Errors

The main points with exceptions and errors:

  • Global exception handling
  • Make sure you do not “leak” exceptions to your clients
  • Use a standardized error response
  • Log things properly
  • And take into consideration security issues with errors

Again, for details on logging and security check my previous posts:

https://lionadi.wordpress.com/2019/12/03/lessons-learned-from-building-microservices-part-1-logging/

https://lionadi.wordpress.com/2020/03/23/lessons-learned-from-building-microservices-part-2-security/

For error responses, you have two choices:

  1. Make up your own
  2. Or use an existing system

I would say avoid making your own if possible but it all depends on your application and architecture.

First consider existing ones for reference:

https://www.hl7.org/fhir/operationoutcome.html

https://developers.google.com/search-ads/v2/standard-error-responses

https://developers.facebook.com/docs/graph-api/using-graph-api/error-handling/

There is also an official standard which you can use and which may be supported by your preferred framework or library: https://www.rfc-editor.org/rfc/rfc7807.html

RFC 7807 specifies the following for error responses (details from https://www.rfc-editor.org/rfc/rfc7807.html):

  • Error responses MUST use standard HTTP status codes in the 400 or 500 range to detail the general category of error.
  • Error responses will be of the Content-Type application/problem, appending a serialization format of either json or xml: application/problem+json, application/problem+xml.
  • Error responses will have each of the following keys (per the Internet Engineering Task Force (IETF)):
    • detail (string) – A human-readable description of the specific error.
    • type (string) – a URL to a document describing the error condition (optional, and “about:blank” is assumed if none is provided; should resolve to a human-readable document).
    • title (string) – A short, human-readable title for the general error type; the title should not change for given types.
    • status (number) – Conveying the HTTP status code; this is so that all information is in one place, but also to correct for changes in the status code due to the usage of proxy servers. The status member, if present, is only advisory as generators MUST use the same status code in the actual HTTP response to assure that generic HTTP software that does not understand this format still behaves correctly.
    • instance (string) – This optional key may be present, with a unique URI for the specific error; this will often point to an error log for that specific response.

RFC 7807 example error responses:

HTTP/1.1 403 Forbidden
Content-Type: application/problem+json
Content-Language: en

{
  "type": "https://example.com/invalid-account",
  "title": "Your account is invalid.",
  "detail": "Your account is invalid, your account is not confirmed.",
  "instance": "/account/34122323/data/abc",
  "balance": 30,
  "accounts": ["/account/34122323", "/account/8786875"]
}
HTTP/1.1 400 Bad Request
Content-Type: application/problem+json
Content-Language: en

{
  "type": "https://example.net/validation-error",
  "title": "Your request parameters didn't validate.",
  "invalid-params": [
    {
      "name": "age",
      "reason": "must be a positive integer"
    },
    {
      "name": "color",
      "reason": "must be 'green', 'red' or 'blue'"
    }
  ]
}
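
To tie global exception handling and the standardized error response together, here is a minimal sketch assuming Spring Boot Web; the problem fields follow the RFC 7807 keys listed above, and the handler is deliberately generic:

import java.util.LinkedHashMap;
import java.util.Map;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.http.HttpStatus;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.RestControllerAdvice;

// Global handler: catches anything not handled elsewhere, logs the real
// exception and returns a non-leaking application/problem+json response.
@RestControllerAdvice
public class GlobalExceptionHandler {

    private static final Logger LOG = LoggerFactory.getLogger(GlobalExceptionHandler.class);

    @ExceptionHandler(Exception.class)
    public ResponseEntity<Map<String, Object>> handleUnexpected(Exception ex) {
        LOG.error("Unhandled exception", ex); // full details stay in the logs

        Map<String, Object> problem = new LinkedHashMap<>();
        problem.put("type", "about:blank");
        problem.put("title", "Internal server error");
        problem.put("detail", "An unexpected error occurred."); // nothing internal leaks to the client
        problem.put("status", 500);

        return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR)
                .contentType(MediaType.parseMediaType("application/problem+json"))
                .body(problem);
    }
}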

Performance and Testing

Testing

To make sure that your solution and architecture work and perform well, I recommend doing extensive testing. Familiarize yourself with the testing pyramid, which holds the following test levels:

  • Unit tests:
    • Small tests of units of code, each preferably testing one specific thing in your code
    • The tests make sure things work as intended
    • The number of unit tests will outnumber all other tests
    • Your unit tests should run very fast
    • Mock things used in your tested functionality: replace a real thing with a fake version
    • Stub things: set up test data that is then returned and that tests are verified against
    • You end up leaving out external dependencies for better isolation and faster tests.
    • Test structure (a JUnit sketch follows at the end of this testing section):
      • Set up the test data
      • Call your method under test
      • Assert that the expected results are returned
  • Integration tests:
    • Here you test your code with external dependencies
    • Replace your real life dependencies with test doubles that perform and return the same kind of data
    • You can run them locally by spinning them up using technologies like docker images
    • You can run them as part of your pipeline by creating and starting a specific cluster that holds test double instances
    • Example database integration test:
      • start a database
      • connect your application to the database
      • trigger a function within your code that writes data to the database
      • check that the expected data has been written to the database by reading the data from the database
    • Example REST API test:
      • start your application
      • start an instance of the separate service (or a test double with the same interface)
      • trigger a function within your code that reads from the separate service’s API
      • check that your application can parse the response correctly
  • Contract tests
    • Tests that verify how two separate entities (provider/publisher and consumer/subscriber) communicate and function with each other based on a commonly predefined contract. Common communication between entities:
      • REST and JSON via HTTPS
      • RPC using something like gRPC
      • building an event-driven architecture using queues
    • Your tests should cover both the publisher and the consumer logic and data
  • UI Tests:
    • UI tests test that the user interface of your application works correctly.
    • User input should trigger the right actions, data should be presented to the user
    • The UI state should change as expected.
    • UI tests do not need to be performed end-to-end; the backend can be stubbed
  • End-to-End testing:
    • These tests are covering the whole spectrum of your application, UI, to backend, to database/external services etc.
    • These tests verify that your applications work as intended; you can use tools such as Selenium with the WebDriver Protocol.
    • Problems with end-to-end tests
      • End-to-end tests require a lot of maintenance; even the slightest change somewhere will affect the end result in the UI.
      • Failures are common and the reason may be unclear
      • Browser issues
      • Timing issues
      • Animation issues
      • Popup dialogs
      • Performance and long wait times for a test to be verified; long run times
    • Consider keeping end-to-end tests to the bare minimum due to the problems described above; test the main and most critical functionalities
  • Acceptance testing:
    • Making sure that your application works correctly from a user’s perspective, not just from a technical perspective.
    • These tests should describe what the user sees, experiences and gets as an end result.
    • Usually done through the user interface
  • Exploratory testing:
    • Manual testing by human beings who try to find creative ways to break the application, or unexpected ways an end user might use the application that could cause problems.
    • Based on these findings you can automate checks further down the testing pyramid: in unit tests, integration tests or UI tests.

All of the automated tests can be integrated into your integration and deployment pipeline, and you should consider doing so for as many of them as possible.
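
As a small illustration of the set up / call / assert structure mentioned in the unit test list above, here is a sketch assuming JUnit 5; PriceCalculator is a hypothetical class under test:

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class PriceCalculatorTest {

    @Test
    void addsTaxToNetPrice() {
        // 1. Set up the test data
        PriceCalculator calculator = new PriceCalculator(0.24); // hypothetical class, 24% tax rate

        // 2. Call the method under test
        double grossPrice = calculator.withTax(100.0);

        // 3. Assert that the expected result is returned
        assertEquals(124.0, grossPrice, 0.001);
    }
}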

Performance

For performance, the only good way to get an idea of how your solution and architecture perform is to break them and to see how they behave under long, sustained load.

Two test types are good for this:

  • Stress testing: Trying to break things by scaling the load up constantly until your application stops working completely. Then you analyze your findings based on logs, metrics, test tool results etc.
  • Load testing: A sustained test where you keep on making the same requests as you would expect in real life to get an idea how things work in the long run; these tests can go on from a few hours to a few days.

The main idea is that you see problems in your code like:

  • Memory leaks
  • CPU spikes
  • Resource hogging pieces of code
  • Slow pieces of code
  • Network problems
  • External dependency problems
  • etc

One of my favorite tools for this is JMeter: https://jmeter.apache.org/.

And to get the most out of these tests I recommend attaching a code profiler to your solutions and seeing what happens during the tests.

There is a HUGE difference between how your code behaves when you manually test it under a profiler and how it behaves when thousands or millions of requests are performed. Some problems only become evident when code is called thousands of times, especially memory allocations and releases.

And lastly, cover at least the most important and critical sections of your solutions and keep adding new ones when possible or when problem areas are discovered.

These tests can also be added as part of the pipelines.

Topology tests

Do your performance tests while simulating possible errors or downtime in your architecture.

  • Simulate slow start times for servers.
  • Simulate slow response times from servers.
  • Simulate servers going down; specific ones or random ones

Test how your system works under missing resources and problems.

Test how expensive your system is

When you are creating tests consider testing the financial impact of your overall system and architecture. By doing different levels of load tests and stress tests you should be able to get a view on what kind of costs you will end up with.

This is especially important with cloud resources, where what you pay is related to what you consume.

Create an index for each Cloudwatch logstream

  1. Go to AWS Lambda and search for the Elasticsearch lambda function associated with your target ES instance. The name of the function should start with: LogsToElasticsearch_
  2. Then in this JS file search for the line of code that generates the logging entry to be pushed to an ES index. This should be in a function named: function transform(payload) {…}
  3. In there, search for the line that creates the index: var indexName = [ … ]
  4. Change it to the following(NOTICE: The index name must be in lower case):
    var indexName = [
        'cwl-' + payload.logStream.toLowerCase() + '-' + timestamp.getUTCFullYear(), // year
        ('0' + (timestamp.getUTCMonth() + 1)).slice(-2), // month
        ('0' + timestamp.getUTCDate()).slice(-2) // day
    ].join('.');

Helper Scripts for Docker, git and Java developers

Hi,

Here are some of my own scripts that I use when developing to ease my life:

Building a Java Gradle project, then building a docker image and pushing it

./gradlew test
if [ $? -eq 0 ]; then
    echo Tests OK
    gradle clean
    gradle generateGitProperties
    gradle bootRepackage
    ./cleandocker.sh
    docker rmi {your image name + tag}
    docker build -t {your image name + tag} .
    ./dockerregistrylogin.sh
    docker push {your image name + tag}
else
    echo Tests Failed
    exit 1
fi

Clean docker from all running containers and stopped ones

echo "Stopping all containers"
docker stop $(docker ps -a -q)
echo "Removing all containers"
docker rm $(docker ps -a -q)
echo "Starting dev environment"

Commit your code to git after gradle tests are successful

./gradlew test
if [ $? -eq 0 ]; then
    echo Tests OK
    git add .
    git commit -m "$1"
    git push
else
    echo Tests Failed
    exit 1
fi

Merge your branch with your master

git checkout master
git pull origin master
git merge dev -m "$1"
git push origin master
git checkout dev

This one is for AWS Developers to run and get the AWS ECR docker login

#Notice: To use a certain profile for login define additional profiles like this: aws configure --profile awscli

function doAwsDockerRegistryLogin()
{
    local  myresult=$(aws ecr get-login --no-include-email --region eu-central-1 --profile awscli)
    echo "$myresult"
}

result=$(doAwsDockerRegistryLogin)   # or result=`myfunc`
eval $result

 

AWS ECS and Bitbucket Pipelines: Deploy your docker application

Hi,

Here are some tips and tricks on how to update an existing AWS ECS deployment.

NOTICE: This post assumes that you have some knowledge of AWS, scripting, docker and Bitbucket.

The scripts and guide below will do the following:

  1. Clone external libraries needed for your project (assumes that you have a multi-project application)
  2. Build your application
  3. Build the docker image
  4. Push the docker image into the Elastic Container Service’s own container registry (ECR)
  5. Stop all tasks in a cluster to force the associated services to restart the tasks
    1. I am using this method of deploying to avoid making constant new task definitions; I find that unnecessary. My view is to have a deployment docker tag that your target service task definitions use. This way you only need to make sure that you have one good task definition that you plan to use. If needed, later update your task definition and service to use a new one. This deployment suggestion does not care about any detail except the docker image and the cluster name.
  6. Then test your API or Web App in the cloud with some 3rd party tool; in this case I am using a Postman collection, Postman environment settings and Newman

The above steps can be performed automatically when you make changes to a branch, or manually from the commit view or branches view (on the row with the branch or commit id move your mouse on top of “…” to get the option “Run pipeline for a branch” and select the manual pipeline option).

Needed steps:

  1. Create/have IAM access keys for deployment into ECS and ECR from Bitbucket.
  2. Generate SSH keys in the Bitbucket repository where you plan to run your pipeline.
  3. If you have any dependent Bitbucket repositories, copy the public key from step 2 into that repository’s settings.
  4. Then in the primary repository from which you plan to deploy, set the environment variables needed for the deployment.
  5. Create your pipeline with the example Bitbucket pipeline definition file and the supplementing scripts.

Step 1: AWS Access

You will need an AWS access key/secret with the following minimum policy settings:


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "ecs:ListTasks",
                "ecr:CompleteLayerUpload",
                "ecr:GetAuthorizationToken",
                "ecs:StopTask",
                "ecr:UploadLayerPart",
                "ecr:InitiateLayerUpload",
                "ecr:BatchCheckLayerAvailability",
                "ecr:PutImage"
            ],
            "Resource": "*"
        }
    ]
}

Step 2: SSH Keys for Bitbucket

More info here on how to generate a key:

https://confluence.atlassian.com/bitbucket/use-ssh-keys-in-bitbucket-pipelines-847452940.html?_ga=2.23419699.803528104.1518767114-2088289616.1517306997

Notice: Remember to get the public key

Step 3: Other repositories (Optional)

From bitbucket:

If you want your Pipelines builds to be able to access a different Bitbucket repository (other than the repo where the builds run):

  1. Add an SSH key to the settings for the repo where the build will run, as described in Step 1 above (you can create a new key in Bitbucket Pipelines or use an existing key).
  2. Add the public key from that SSH key pair directly to settings for the other Bitbucket repo (i.e. the repo that your builds need to have access to).
    See Use access keys for details on how to add a public key to a Bitbucket repo.

Step 4: Setting up environmental variables

APPIMAGE_TESTENV_CLUSTER: The cluster name to which the docker image is deployed; in this case a test environment that is manually triggered

APPIMAGE_DEVENV_CLUSTER: A dev target cluster that is associated with the master branch and starts automatically

APPIMAGE_NAME: The docker image name (Notice: Must match the one in your service -> task definition)

APPIMAGE_TAG: The docker image tag (Notice: Must match the one in your service -> task definition)

AWS_ACCESS_KEY_ID (SECURE)

AWS_SECRET_ACCESS_KEY (SECURE)

AWS_DEFAULT_REGION : The region where your cluster is located

REGISTRYNAME: The ECR registry name where the image is to be pushed

Step 5: Bitbucket Pipeline

Pipeline definitions

The sample pipeline script has two options:
* Custom/manual deployment in the custom section of the script
* Branches/automatic deployment in the branches section of the script

# This is a sample build configuration for Java (Gradle).
# Check our guides at https://confluence.atlassian.com/x/zd-5Mw for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
image: atlassian/default-image:latest # Include Java support
options:
  max-time: 15 # 15 minutes in case something hangs up
  docker: true # Include Docker support
pipelines:
  custom: # Pipelines that can only be triggered manually
    test-env:
      - step:
          caches:
            - gradle
          script:
            # Modify the commands below to build your repository.
            # You must commit the Gradle wrapper to your repository
            # https://docs.gradle.org/current/userguide/gradle_wrapper.html
            - git clone {your external repository}
            - ls
            - bash ./scripts/bitbucket/buildproject.sh
            # Install AWS CLI and configure it
            #----------------------------------------
            - apt-get update
            - apt-get install jq
            - curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
            - unzip awscli-bundle.zip
            - ./awscli-bundle/install -b ~/bin/aws
            - export PATH=~/bin:$PATH
            #----------------------------------------
            - bash ./scripts/bitbucket/awsdev-dockerregistrylogin.sh
            - export IMAGE_NAME=$REGISTRYNAME/$APPIMAGE_NAME:$APPIMAGE_TAG
            - docker build -t $IMAGE_NAME .
            - docker push $IMAGE_NAME
            # This will stop all tasks in the AWS Cluster, by doing this the AWS Service will start the defined task definitions as new tasks.
            # NOTICE: This approach needs task definitions attached to services and no manually started tasks.
            - bash ./scripts/bitbucket/stopalltasks.sh $APPIMAGE_TESTENV_CLUSTER
            #----------------------------------------
            # Install Newman tool and test with postman collection and environmental settings your web app
            - npm install -g newman
            - ./scripts/newman-API-tests/run-testenv-tests.sh
            #----------------------------------------
    prod-env:
      - step:
          script:
            - echo "Manual triggers for deployments are awesome!"
  branches:
    master:
      - step:
          caches:
            - gradle
          script:
            #----------------------------------------
            # Modify the commands below to build your repository.
            # You must commit the Gradle wrapper to your repository
            # https://docs.gradle.org/current/userguide/gradle_wrapper.html
            - git clone {your external repository}
            - ls
            - bash ./scripts/bitbucket/buildproject.sh
            # Install AWS CLI and configure it
            #----------------------------------------
            - apt-get update
            - apt-get install jq
            - curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
            - unzip awscli-bundle.zip
            - ./awscli-bundle/install -b ~/bin/aws
            - export PATH=~/bin:$PATH
            #----------------------------------------
            # Build and install the newest docker image
            - bash ./scripts/bitbucket/awsdev-dockerregistrylogin.sh
            - export IMAGE_NAME=$REGISTRYNAME/$APPIMAGE_NAME:$APPIMAGE_TAG
            - docker build -t $IMAGE_NAME .
            - docker push $IMAGE_NAME
            #----------------------------------------
            # This will stop all tasks in the AWS Cluster, by doing this the AWS Service will start the defined task definitions as new tasks.
            # NOTICE: This approach needs task definitions attached to services and no manually started tasks.
            - bash ./scripts/bitbucket/stopalltasks.sh $APPIMAGE_DEVENV_CLUSTER
            #----------------------------------------
            # Install Newman tool and test with postman collection and environmental settings your web app
            - npm install -g newman
            - ./scripts/newman-API-tests/run-devenv-tests.sh
            #----------------------------------------

Stop all AWS tasks in the cloud

#!/usr/bin/env bash

echo "Getting tasks from AWS:"
echo "Cluster: $1 Service: $2"
#For a single task
#----------------
#task=$(aws ecs list-tasks --cluster "$1" --service-name "$2" | jq --raw-output '.taskArns[0] | split("/")[1]' )
 #echo "Stopping task: " $task

 #aws ecs stop-task --task $task --cluster "$1"
#----------------
tasks=$(aws ecs list-tasks --cluster "$1" --service-name "$2" | jq -r '.taskArns | map(.[0:]) | reduce .[] as $item (""; . + $item + " ")')
echo "Tasks received"
for task in $tasks; do
    echo "Stopping task from AWS: " $task
    aws ecs stop-task --task $task --cluster "$1"
    #echo "Task stopped."
done

Build your project

echo "Rebuilding project"
#gradlew_output=$(./gradlew build);
#echo "$gradlew_output"

./gradlew test
if [ $? -eq 0 ]; then
    echo Tests OK
    ./gradlew clean
    ./gradlew bootRepackage
else
    echo Tests Failed
    exit 1
fi

Get the AWS Login details for ECR docker login

#Notice: To use a certain profile for login define additional profiles like this: aws configure --profile awscli

function doAwsDockerRegistryLogin()
{
    local  myresult=$(aws ecr get-login --no-include-email)
    echo "$myresult"
}

result=$(doAwsDockerRegistryLogin)   # or result=`myfunc`
eval $result

Running API or WebApp tests with Newman and Postman

What you need to make the tests work:

  1. Create a new Postman collection
  2. Add your URLs to test
  3. Add scripts into the test tab
  4. When all the URLs in your collection are ready, export them via the collection’s “…” button
  5. (Optional) Then create environment settings that you can export and use with newman

Bash script to run the newman tests

sleep 1m # Force a wait to make sure that all AWS services, your app, LBs etc are all loaded up and running

echo $DEVENV_URL
until $(curl --output /dev/null --silent --head --fail --insecure "$DEVENV_URL"); do
    printf '.'
    sleep 5
done

echo "Starting newman tests"
newman run {your postman collection}.postman_collection.json --environment "{your postman collection}.postman_environment.json" --insecure --delay-request 10

Postman scripts example

Retrieving a token from the body and inserting it into an environment variable

var jsonData = JSON.parse(responseBody);

console.log("TOKEN:" + jsonData.token);

var str_array = jsonData.token.split('.');
for (var i = 0; i < str_array.length - 1; i++) {
    console.log("Array Item: " + i);
    console.log(str_array[i]);
    console.log(CryptoJS.enc.Utf8.stringify(CryptoJS.enc.Base64.parse(str_array[i])));
}
postman.setEnvironmentVariable("token", jsonData.token);

Testing a response for success and body content

// example using pm.response.to.be*
pm.test("response must be valid and have a body", function () {
    // assert that the status code is 200
    pm.response.to.be.ok; // info, success, redirection, clientError, serverError, are other variants
    // assert that the response has a valid JSON body
    pm.response.to.be.withBody;
    pm.response.to.be.json; // this assertion also checks if a body exists, so the above check is not needed
});

console.log("BODY:" + responseBody);

Links

http://2mins4code.com/2017/11/08/building-a-cicd-environment-with-bitbucket-pipelines-docker-aws-ecs-for-an-angular-app/

https://bitbucket.org/awslabs/amazon-ecs-bitbucket-pipelines-python/overview

https://confluence.atlassian.com/bitbucket/deploy-to-amazon-ecs-892623902.html

https://bitbucket-pipelines.prod.public.atl-paas.net/validator

https://confluence.atlassian.com/bitbucket/configure-bitbucket-pipelines-yml-792298910.html

https://confluence.atlassian.com/bitbucketserver/getting-started-with-bitbucket-server-and-aws-776640193.html

https://confluence.atlassian.com/bitbucket/java-with-bitbucket-pipelines-872013773.html

https://confluence.atlassian.com/bitbucket/use-ssh-keys-in-bitbucket-pipelines-847452940.html?_ga=2.23419699.803528104.1518767114-2088289616.1517306997

https://confluence.atlassian.com/bitbucket/environment-variables-794502608.html

https://docs.aws.amazon.com/cli/latest/reference/index.html#cli-aws

https://confluence.atlassian.com/bitbucket/run-pipelines-manually-861242583.html

https://www.npmjs.com/package/newman

http://blog.getpostman.com/2017/10/25/writing-tests-in-postman/

SharePoint change document set and items content type to a new content type

I’ll put it simply:

This is a PowerShell script that you can use to change a content type of a library or list to another one. The script can distinguish between Document Sets and library or list items.


param (
[string]$WebsiteUrl = "http://portal.spdev.com/",
[string]$OldCTName = "DSTestCT",
[string]$NewCTName = "DSTestCT"
)

if ( (Get-PSSnapin -Name MicroSoft.SharePoint.PowerShell -ErrorAction SilentlyContinue) -eq $null )
{
Add-PsSnapin MicroSoft.SharePoint.PowerShell
}

function Reset-ListContentType ($WebUrl, $ListName, $OldCTName, $NewCTName)
{
$web = $null
try
{
$web = Get-SPWeb $WebUrl

$list = $web.Lists.TryGetList($ListName)
$oldCT = $list.ContentTypes[$OldCTName]

$isChildOfCT = $list.ContentTypes.BestMatch($rootNewCT.ID).IsChildOf($rootNewCT.ID);
if($oldCT -ne $null -and $isChildOfCT -eq $false)
{
$hasOldCT = $true
$isFoldersCTReseted = Reset-SPFolderContentType -web $web -list $list -OldCTName $OldCTName -NewCTName $NewCTName
Reset-SPFileContentType -web $web -list $list -OldCTName $OldCTName -NewCTName $NewCTName
Remove-ListContentType -web $web -list $list -OldCTName $OldCTName -NewCTName $NewCTName
if($hasOldCT -eq $true)
{
Add-ListContentType -web $web -list $list -OldCTName $OldCTName -NewCTName $NewCTName
if($isFoldersCTReseted -eq $true)
{
Set-SPFolderContentType -web $web -list $list -OldCTName $OldCTName -NewCTName $NewCTName
}
}
}


}catch
{

}
finally
{
if($web)
{
$web.Dispose()
}
}

}

function Remove-ListContentType ($web, $list, $OldCTName, $NewCTName)
{


$oldCT = $list.ContentTypes[$OldCTName]

$isChildOfCT = $list.ContentTypes.BestMatch($oldCT.ID).IsChildOf($oldCT.ID);

if($isChildOfCT -eq $true)
{
$list.ContentTypes.Delete($oldCT.ID)
}
$web.Dispose()

return $isChildOfCT
}

function Add-ListContentType ($web, $list, $OldCTName, $NewCTName)
{



$list.ContentTypes.Add($rootNewCT)

$web.Dispose()
}

function Reset-SPFolderContentType ($web, $list, $OldCTName, $NewCTName)
{
#Get web, list and content type objects

$isFoldersCTReseted = $false


$isChildOfCT = $list.ContentTypes.BestMatch($rootNewCT.ID).IsChildOf($rootNewCT.ID);

$oldCT = $list.ContentTypes[$OldCTName]
$folderCT = $list.ContentTypes["Folder"]
$newCT = $rootNewCT

$newCTID = $newCT.ID

#Check if the values specified for the content types actually exist on the list
if (($oldCT -ne $null) -and ($newCT -ne $null))
{
$list.Folders | ForEach-Object {

if ($_.ContentType.ID.IsChildOf($rootNewCT.ID) -eq $false -and $_.ContentType.ID.IsChildOf($oldCT.ID) -eq $true -and $_.Folder.ProgID -eq "Sharepoint.DocumentSet")
{
Write-Host "Found a document set: " $_.Name "Processing document set"
$item = $list.GetItemById($_.ID);
$item["ContentTypeId"] = $folderCT.Id
$item.Update()
$isFoldersCTReseted = $true
}
}
}

$web.Dispose()

return $isFoldersCTReseted
}

function Set-SPFolderContentType ($web, $list, $OldCTName, $NewCTName)
{
#Get web, list and content type objects



$folderCT = $list.ContentTypes["Folder"]
$newCT = $list.ContentTypes[$NewCTName]

#Check if the values specified for the content types actually exist on the list
if (($newCT -ne $null))
{
$list.Folders | ForEach-Object {
if ($_.ContentType.ID.IsChildOf($newCT.ID) -eq $false -and $_.ContentType.ID.IsChildOf($folderCT.ID) -eq $true -and $_.Folder.ProgID -eq "Sharepoint.DocumentSet")
{
$item = $list.GetItemById($_.ID);
$item["ContentTypeId"] = $newCT.Id
$item.Update()
}
}
}

$web.Dispose()
}


function Reset-SPFileContentType ($web, $list, $OldCTName, $NewCTName)
{
#Get web, list and content type objects



$isChildOfCT = $list.ContentTypes.BestMatch($rootNewCT.ID).IsChildOf($rootNewCT.ID);

$oldCT = $list.ContentTypes[$OldCTName]
$folderCT = $list.ContentTypes["Folder"]
$newCT = $rootNewCT

$newCTID = $newCT.ID

#Check if the values specified for the content types actually exist on the list
if (($oldCT -ne $null) -and ($newCT -ne $null))
{
$list.Folders | ForEach-Object {
if ($_.ContentType.ID.IsChildOf($rootNewCT.ID) -eq $false -and $_.ContentType.ID.IsChildOf($oldCT.ID) -eq $true)
{
$_["ContentTypeId"] = $folderCT.Id
$_.Update()
}
}
#Go through each item in the list
$list.Items | ForEach-Object {
Write-Host "Item present CT ID :" $_.ContentType.ID
Write-Host "CT ID To change from :" $oldCT.ID
Write-Host "NEW CT ID to change to:" $rootNewCT.ID

#Check if the item content type currently equals the old content type specified
if ($_.ContentType.ID.IsChildOf($rootNewCT.ID) -eq $false -and $_.ContentType.ID.IsChildOf($oldCT.ID) -eq $true)
{
#Check the check out status of the file
if ($_.File.CheckOutType -eq "None")
{
#Change the content type association for the item
$item = $list.GetItemById($_.ID);
$item.File.CheckOut()
write-host "Resetting content type for file: " $_.Name "from: " $oldCT.Name "to: " $newCT.Name

$item["ContentTypeId"] = $newCTID
$item.UpdateOverwriteVersion()
Write-Host "Item changed CT ID :" $item.ContentType.ID
$item.File.CheckIn("Content type changed to " + $newCT.Name, 1)
}
else
{
write-host "File" $_.Name "is checked out to" $_.File.CheckedOutByUser.ToString() "and cannot be modified"
}
}
else
{
write-host "File" $_.Name "is associated with the content type" $_.ContentType.Name "and shall not be modified"
}
}
}
else
{
write-host "One of the content types specified has not been attached to the list"$list.Title
return
}

$web.Dispose()
}

$web = Get-SPWeb $WebsiteUrl
$rootWeb = $web.Site.RootWeb;
$rootNewCT = $rootWeb.AvailableContentTypes[$NewCTName]

Foreach ($list in $web.Lists) {
Write-Host $list.BaseType
if($list.Hidden -eq $false -and $list.BaseType -eq "DocumentLibrary")
{
Write-Host "Processing list: " $list.Title
Reset-ListContentType -WebUrl $WebsiteUrl -ListName $list.Title -OldCTName $OldCTName -NewCTName $NewCTName
}
}

$web.Dispose()

 

Tutorial guide: Learn Python basics

Hi, ok, here are my own notes on the basics of Python. I am working on a Udemy course on data science and got excited about Python (it has been a while since I’ve done something cool with Python, so I am excited 🙂 ). This is based on the Udemy course https://www.udemy.com/data-science-and-machine-learning-with-python-hands-on

Here it goes, hope it helps.

You can run python code as scripts or as a python notebook.

 

Running a script from a command prompt:

python "your script file location and name"

 

Basic basics

 

List definition:

listOfNumbers = [1, 2, 3, 4, 5, 6]

 

Iteration through a list of items:

for number in listOfNumbers:
    print number,
    if (number % 2 == 0):
        print "is even"
    else:
        print "is odd"

 

#Notice: In Python you differentiate between blocks of code by whitespace (indentation), not like in, say, Java or C# where the { and } chars are used to delimit a block of code. Pay attention to your formatting and indentation.

 

#Notice: The char , (comma) is used to tell that something is going to continue on the same print line, within the same block of code. See example above.

 

#Notice: Colons : differentiate clauses.

 

Importing modules

 

import numpy as np

 

A = np.random.normal(25.0, 5.0, 10)

print A

 

Data structures

 

Lists

 

Defining a list (Notice: The brackets [] define a mutable list):

x = [1, 2, 3, 4, 5, 6]

 

Printing the length of a list:

print len(x)

 

Sub setting lists:

 

First 3 elements(counting starts from zero):

x[:3]

 

Last 3 elements:

x[3:]

 

Last two elements, counting from the end of the list:

x[-2:]

 

Extend the list with a new list :

x.extend([7,8])

 

Add a new item to the list:

x.append(9)

 

Python is a dynamically typed language, which allows you to put whatever you want in a list:

 

Creating a multidimensional list:

y = [10, 11, 12]

listOfLists = [x, y]

listOfLists

 

Sort a list (ascending):

z = [3, 2, 1]

z.sort()

 

Sort a list (descending):

z.sort(reverse=True)

 

Tuples

 

Tuples are just like lists but immutable.

You cannot extend them, append to them or sort them. You cannot change them.

 

Example:

 

#Tuples are just immutable lists. Use () instead of []

x = (1, 2, 3)

len(x)

 

y = (4, 5, 6)

 

listOfTuples = [x, y]

 

A common usage of tuples in data science or data processing is to assign variables to input data as it is read in.

 

This example creates variables with values from a “source” where the data is split by a comma.

#Notice: It is important that you have the same number of variables in your tuple as you are retrieving/assigning from the data “source”.

(age, income) = "32,120000".split(',')

print age
print income

 

Dictionaries

 

A way to define a “lookup” table:

 

# Like a map or hash table in other languages

captains = {}

captains["Enterprise"] = "Kirk"
captains["Enterprise D"] = "Picard"
captains["Deep Space Nine"] = "Sisko"
captains["Voyager"] = "Janeway"

print captains["Voyager"]

 

print captains.get("Enterprise")

 

for ship in captains:
    print ship + ": " + captains[ship]

 

If something is not found, the result will be None:

print captains.get("NX-01")

 

Functions

 

Functions let you repeat a set of operations over and over again with different parameters.

 

Notice: use def to define a function, the () chars to define the parameters, and the return keyword to return a value from the function.

 

def SquareIt(x):
    return x * x

print SquareIt(2)

 

Pass functions around as parameters

#Notice: You have to make sure that what you are typing is correct because there is no compile-time checking in Python. Typing the wrong function name will cause errors at runtime.

 

#You can pass functions around as parameters

def DoSomething(f, x):
    return f(x)

print DoSomething(SquareIt, 3)

 

Lambda functions

This is functional programming: you can inline a function as a parameter of another function.

 

#Notice: The lambda keyword tells that you are defining an inline function to be used where you put it. In the example below it is used inside a function parameter: the parameter name x is followed by a colon : character, followed by what the function actually does. To pass multiple parameters to a lambda function, use a comma , to separate the variables.

 

#Lambda functions let you inline simple functions

print DoSomething(lambda x: x * x * x, 3)

 

Boolean Expressions

 

Value is false:

print 1 == 3

 

Value is true (the or keyword checks whether either operand is true):

print (True or False)

 

Check if something is a certain value (use the is keyword):

print 1 is 3

 

If else clauses

 

if 1 is 3:
    print "How did that happen?"
elif 1 > 3:
    print "Yikes"
else:
    print "All is well with the world"

 

Looping

 

Normal looping

for x in range(10):
    print x,

 

Continue and Break

 

#Process only the first 5 items but skip the number one

for x in range(10):
    if (x is 1):
        continue
    if (x > 5):
        break
    print x,

 

While

 

x = 0
while (x < 10):
    print x,
    x += 1

 

 

Hiding and showing HTML elements with ASP .NET

Hi,

Ok, this is probably a quite simple thing to do and maybe everyone knows it BUT just in case… :).

Well, you need two things: C# code (or some other way to output HTML) to generate an anchor link, and a JavaScript function to display or hide a given HTML element.

In this case, let’s say that you have a DIV element. You assign as the ID value the client ID which your control receives from ASP .NET (notice: not the UniqueID or the ID but the ClientID, which is HTML friendly).

Then you would do something like the code below. You are calling a JavaScript function which receives as parameters the full ID of the DIV element and the start of the DIV’s full ID.

The idea is to be able to identify all of the elements which are under this same hide and show logic while still being able to identify a single item.

Example ID:
id = "cars_ASP.NETUserControlClientID_carnumber"

The cars_ASP.NETUserControlClientID_ prefix would be used to identify ALL of the items which need to be processed at the same time, while the remaining value (carnumber) is the identifier for a single item.

The C# code below creates an anchor calling the function below the C# code. Notice the href definition "javascript:void(0)"; this is to avoid a page jump in certain browsers and their versions.

String.Format("<a href=\"javascript:void(0)\" onclick=\"showInfo('{0}','{2}')\">{1}</a>", "HTML Element ID to find unique counter or ID" + this.ClientID, "The link title/name", this.ClientID);

function showInfo(id, controlUniqueID) {
// Search for all items and hide them
 $('*[id*=' + controlUniqueID + ']').each(function () {
 $(this).hide();
 });

// Next open only the item which you want to see
 if ($('#' + id).css('display') == 'none') {
 $('#' + id).show();
 }
 else {
 $('#' + id).hide();
 }

 }