Category Archives: Programming

Lessons learned from building microservices – Part 1: Logging

This is part of a series of posts discussing things I learned while working with microservices. Nothing I write here is an absolute truth; these were the best solutions at the time my team and I used them. You might choose to do things differently, and I highly recommend finding out for yourself the practices and approaches that work best for you and your project.

I also assume that you have pre-existing knowledge of building microservices, APIs, programming languages, cloud providers, etc.

Notice: In the examples below I will omit “boilerplate” code to save space.

Base requirements for logging

Service instances

In a microservice architecture the most important thing is to be able to see what each microservice instance is doing. In the case of Kubernetes this means each pod; with Docker, each container; and so on.

So if you have a service named Customer and three instances of it running, you want your logs to tell you what each instance is doing. Here is a checklist of things to consider (a small sketch follows the list):

  • You need to know what each service instance is doing, because each instance processes its own logic and produces its own output based on what it is doing or asked to do
  • Each log entry should identify which service instance performed the log entry by providing a unique service instance id
  • Each log entry should identify which application version the service instance is running
  • Each log entry should tell which environment the service instance is operating in, for example: development, test, qa, prod
  • If possible, each log entry should tell where the service instance is running, such as its IP address or host name
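
As a small sketch of the idea (not from the original post), this kind of instance metadata can be attached to every log entry with SLF4J's MDC, so that a log pattern such as %X{serviceInstanceId} prints it automatically. Note that MDC is per thread, so in a thread-pooled server you would set or copy these values for each worker thread (for example in a request filter); the key names here are my own choices.

import org.slf4j.MDC;

import java.net.InetAddress;
import java.util.UUID;

public class ServiceInstanceContext {

    // Generated once per JVM so every log entry can be traced back to this exact instance
    private static final String INSTANCE_ID = UUID.randomUUID().toString();

    public static void register(String appVersion, String environmentId) throws Exception {
        // These values can be printed by the log pattern, e.g. %X{serviceInstanceId} %X{appVersion}
        MDC.put("serviceInstanceId", INSTANCE_ID);
        MDC.put("appVersion", appVersion);
        MDC.put("environmentId", environmentId);
        MDC.put("hostName", InetAddress.getLocalHost().getHostName());
    }
}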

Monitoring

Next you need a way to push logs to a central location, aggregate them, parse and index them, analyze them, and finally be able to easily search them, build graphs, set up alerts, etc.

A common stack, and the one I used, is Elasticsearch, Logstash and Kibana (ELK). You can mix and match different services and solutions to get the same results.

Log types

Next I’ll cover the different log types you might need; they will make your life easier.

General logging details

Before we cover the different types of logs you might need, we first need some common data attached to each log entry. This data will help in different ways depending on the solution you are building. In my example this data is related to an API backend, but you might find it useful in other types of solutions.

So consider adding these logging fields to other logs as metadata.

public class LogData {

    private String requestId;
    private String userId;
    private String environmentId;
    private String appName;
    private String appVersion;
    private Instant createdAt;

}
  • requestId (sample: 6f88dcd0-f628-44f1-850e-962a4ba086e3): A value that represents a single request to your API. This request id should be applied to all log entries so that all entries from one request can be grouped together.
  • userId (sample: 9ff4016d-d4e6-429f-bca8-6503b9d629e1): Same as the request id, but a user id that represents the possible user that made the API request.
  • environmentId (sample: DEV, TEST, PROD): Tells a person looking at a log entry which environment the entry came from. This is important when all log entries are pushed into one location and not separated physically.
  • appName (sample: Your Cool API): Same as the environment id but concerns the app name.
  • appVersion (sample: 2.1.7): Same as the environment id but concerns the app version.
  • createdAt (sample: 02/08/2019 12:37:59): When the log entry was created. This helps a lot in tracking the progress of the application logic in all environments when troubleshooting.
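
As a rough sketch of how this could be wired together (my own illustration; it assumes LogData has the usual getters and setters, which the post omits as boilerplate), you could build one LogData instance at the start of each request and reuse it for every log entry of that request:

import java.time.Instant;
import java.util.UUID;

public class LogDataFactory {

    // Values that stay the same for the whole service instance
    private final String environmentId;
    private final String appName;
    private final String appVersion;

    public LogDataFactory(String environmentId, String appName, String appVersion) {
        this.environmentId = environmentId;
        this.appName = appName;
        this.appVersion = appVersion;
    }

    // Called once per incoming request, for example from a servlet filter or interceptor
    public LogData newRequestLogData(String userId) {
        LogData data = new LogData();
        data.setRequestId(UUID.randomUUID().toString());
        data.setUserId(userId);
        data.setEnvironmentId(environmentId);
        data.setAppName(appName);
        data.setAppVersion(appVersion);
        data.setCreatedAt(Instant.now());
        return data;
    }
}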

Access log

Access logs are a great way to keep track of your API requests and the responses returned to clients. I won’t go deeper into them; there are plenty of detailed descriptions available which I recommend going through, for example:

https://httpd.apache.org/docs/2.4/logs.html#accesslog

https://en.wikipedia.org/wiki/Server_log

Here is some sample code:

public class AccessLog {
    private String clientIP;
    private String userId;
    private String timestamp;
    private String method;
    private String requestURL;
    private String protocol;
    private int statusCode;
    private int payloadSize;
    private String browserAgent;
    private String requestId;
}
  • clientIP (sample: 127.0.0.1): The IP address of the client that made the request to your API.
  • userId (sample: aa10318a-a9b7-4452-9616-0856a206da75): Preferably the same user id that was used in the LogData class above.
  • timestamp (sample: 02/08/2019 12:37:59): A date-time, in a format of your choice, of when the request occurred.
  • method (sample: GET, POST, PUT etc.): HTTP method of the request.
  • requestURL (sample: https://localhost:9000/api/customer/info): The URL of the request.
  • protocol (sample: HTTP/1.1): The protocol used to communicate with the API.
  • statusCode (sample: 200, 201, 401, 500 etc.): HTTP status code of the response.
  • payloadSize (sample: 2345): The size of the payload returned to the client.
  • browserAgent (sample: Mozilla/4.08 [en] (Win98; I ;Nav)): “The User-Agent request header contains a characteristic string that allows the network protocol peers to identify the application type, operating system, software vendor or software version of the requesting software user agent.” – https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/User-Agent
  • requestId: This should be the same request id used in the LogData class earlier.
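
Here is a minimal sketch of how the AccessLog above could be filled in a servlet filter (my own illustration: it assumes the javax.servlet 4.0 API so init/destroy defaults can be omitted, assumes getters and setters on AccessLog, and leaves out payloadSize since that needs a response wrapper):

import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;
import java.time.Instant;
import java.util.UUID;

public class AccessLogFilter implements Filter {

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest req = (HttpServletRequest) request;
        HttpServletResponse res = (HttpServletResponse) response;

        chain.doFilter(request, response); // let the request execute first so the status code is known

        AccessLog accessLog = new AccessLog();
        accessLog.setClientIP(req.getRemoteAddr());
        accessLog.setTimestamp(Instant.now().toString());
        accessLog.setMethod(req.getMethod());
        accessLog.setRequestURL(req.getRequestURL().toString());
        accessLog.setProtocol(req.getProtocol());
        accessLog.setStatusCode(res.getStatus());
        accessLog.setBrowserAgent(req.getHeader("User-Agent"));
        accessLog.setRequestId(UUID.randomUUID().toString()); // ideally reuse the request id created for LogData
        // Write the entry with your logger / push it to your log shipper here
    }
}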

Message Queue Log

This is a sample log you could use with events/message queues. Depending on which message queue you use and how it is configured, you will most likely have only minimal information about a message pushed to a queue.

From a troubleshooting and traceability point of view I recommend passing additional metadata with the message, describing the context in which the message originated.

Let’s take an API request as an example. What I did was add an additional property field to my message which held a JSON version of the class below. Looking at it you can see that it contains mostly the same fields as the LogData class earlier, with added metadata about the message itself, which can also be used to control the message logic at the receiving end.

public class MessageQueueLog {
    private String sourceHostname;
    private String sourceAppName;
    private String sourceAppVersion;
    private String sourceEnvironmentId;
    private String sourceRequestId;
    private String sourceUserId;
    private String message;
    private String messageType;
}
  • sourceHostname: See the LogData example earlier.
  • sourceAppName: See the LogData example earlier.
  • sourceAppVersion: See the LogData example earlier.
  • sourceEnvironmentId: See the LogData example earlier.
  • sourceRequestId: See the LogData example earlier.
  • sourceUserId: See the LogData example earlier.
  • message (sample: JSON data): JSON data representing a serialized object that holds important data to be used at the receiving end.
  • messageType (sample: UPDATE_USER, DELETE_USER): A simple unique static ID for the message. This ID tells the receiving end what it needs to do with the data in the message field.
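
A small sketch of how such a message payload could be put together and serialized to JSON before it is handed to whatever queue client you use (my own illustration; Jackson is assumed for the serialization and MessageQueueLog is assumed to have getters and setters):

import com.fasterxml.jackson.databind.ObjectMapper;

public class MessageQueueLogSender {

    private final ObjectMapper objectMapper = new ObjectMapper();

    // Wraps the actual message payload and the originating request's metadata into one JSON string
    public String buildMessagePayload(LogData requestData, String hostname,
                                      String messageJson, String messageType) throws Exception {
        MessageQueueLog log = new MessageQueueLog();
        log.setSourceHostname(hostname);
        log.setSourceAppName(requestData.getAppName());
        log.setSourceAppVersion(requestData.getAppVersion());
        log.setSourceEnvironmentId(requestData.getEnvironmentId());
        log.setSourceRequestId(requestData.getRequestId());
        log.setSourceUserId(requestData.getUserId());
        log.setMessage(messageJson);
        log.setMessageType(messageType);

        // Set the resulting JSON string as a message property/header or as the message body
        return objectMapper.writeValueAsString(log);
    }
}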

Metrics log

With metrics logs the idea is to track the things you care about in your application. A common thing to track is how calls from your code to external sources are performing. This lets you set up alerts and troubleshoot problems with external sources; combined with the access log, you can also see how long your request took to finish in total.

So you could track the following metrics:

  • Calls to external sources like a database, API, service, etc.
  • Your request’s total processing time, from start to finish of returning a response
  • Some important section of your code
public class MetricsLog {

    private String title;
    private String body;
    private String additional;
    private String url;
    private int statusCode;
    private Double payloadSize;
    private Long receivedResponseAtMillis = 0L;
    private Long sentRequestAtMillis = 0L;
    private MetricsLogTypes logType;
    private double elapsedTimeInSeconds = 0;
    private double elapsedTimeInMS = 0;
    private String category;
}
  • title (sample: User Database): A short title for the metric.
  • body (sample: Update user): The main content of the metric.
  • additional (sample: Some additional data): Any additional data.
  • url (sample: http://localhost:9200/api/car/types): If this is an API request to an external service, log the request URL.
  • statusCode (sample: 200, 401, 500 etc.): The HTTP status code returned by the external source.
  • payloadSize (sample: 234567): The size of the returned data.
  • receivedResponseAtMillis (sample: 1575364455): When the response was received, for example in UNIX epoch time.
  • sentRequestAtMillis (sample: 1575363455): When the request was sent, for example in UNIX epoch time.
  • logType (sample: API, DATABASE, CODE etc.): Identifies what kind of a metric this is.
  • elapsedTimeInSeconds (sample: 1): How long it took for the response to be received, in seconds.
  • elapsedTimeInMS (sample: 1000): How long it took for the response to be received, in milliseconds.
  • category (sample: Category1/2/3 etc.): Used to group different metrics together.
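
As an illustration (again my own sketch, assuming getters and setters and that the MetricsLogTypes enum contains a value like API as in the sample table), a metric like this can be filled by simply timing the external call:

public class MetricsLogExample {

    public String callExternalApi() {
        MetricsLog metric = new MetricsLog();
        metric.setTitle("Car type API");
        metric.setLogType(MetricsLogTypes.API);
        metric.setUrl("http://localhost:9200/api/car/types");

        long sentAt = System.currentTimeMillis();
        String response = doHttpGet("http://localhost:9200/api/car/types");
        long receivedAt = System.currentTimeMillis();

        metric.setSentRequestAtMillis(sentAt);
        metric.setReceivedResponseAtMillis(receivedAt);
        metric.setElapsedTimeInMS(receivedAt - sentAt);
        metric.setElapsedTimeInSeconds((receivedAt - sentAt) / 1000.0);
        // Log/ship the metric entry here
        return response;
    }

    private String doHttpGet(String url) {
        // Placeholder so the sketch is self-contained; replace with your actual HTTP client call
        return "";
    }
}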

Security Logs

I would also consider creating a separate security log that the logging indexer indexes into its own pattern or category.

This is to speed up troubleshooting of security-related issues, like when someone signs in, signs out, registers, etc.
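
The post does not give a class for this, but a minimal sketch of a security log entry could look like the following (the field names are my own suggestion, following the same pattern as the other log classes):

import java.time.Instant;

public class SecurityLog {
    private String requestId;   // same request id as in LogData
    private String userId;
    private String clientIP;
    private String eventType;   // e.g. SIGN_IN, SIGN_OUT, REGISTER, PASSWORD_RESET
    private boolean success;
    private String details;
    private Instant createdAt;
}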

Aggregated log entry

This is an example where you have a main log class that contains the desired log entry data and details for a system.

A possible use case is streaming log entries to CloudWatch or perhaps Elasticsearch.

public class CloudLog {
    private LocalDateTime timeStamp;
    private String logger;
    private Map<String, Object> metadata;
    private String message;
    private String level;
}
  • timeStamp: A timestamp of when the log entry was created.
  • logger: The logger entity name.
  • metadata: A map of key-value pairs which can be serialized into JSON for indexing.
  • message: The main message of the log entry.
  • level: Severity level of the log entry: DEBUG, INFO, ERROR, etc.
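
A small sketch (my own illustration, assuming getters and setters on CloudLog) of how the earlier LogData fields could be carried in the metadata map of such an aggregated entry:

import java.time.LocalDateTime;
import java.util.HashMap;
import java.util.Map;

public class CloudLogFactory {

    public CloudLog create(LogData requestData, String loggerName, String message, String level) {
        Map<String, Object> metadata = new HashMap<>();
        metadata.put("requestId", requestData.getRequestId());
        metadata.put("userId", requestData.getUserId());
        metadata.put("environmentId", requestData.getEnvironmentId());
        metadata.put("appName", requestData.getAppName());
        metadata.put("appVersion", requestData.getAppVersion());

        CloudLog cloudLog = new CloudLog();
        cloudLog.setTimeStamp(LocalDateTime.now());
        cloudLog.setLogger(loggerName);
        cloudLog.setMetadata(metadata);
        cloudLog.setMessage(message);
        cloudLog.setLevel(level);
        return cloudLog; // serialize to JSON and ship to CloudWatch/Elasticsearch from here
    }
}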

Spring Boot: Bean management and speeding development

Intro

In this blog post I’ll show a way to use Spring Boot functionality to create a more automated way of using beans that are grouped as components or features.

The idea is that we may have functionalities or features which we want easy and clear access to through code, so the following things should be true:

  • If I want, I can easily use a set of beans
  • If I want, I can use a specific bean or beans within the previous set of beans
  • It should be easy to tell Spring which beans to load, preferably with a one-liner
  • Configuration of beans should not be hidden from the developer; the developer should be notified if a configuration value is missing for a required bean (by configuration I mean application properties)
  • A bean or set of beans should be usable from a common library so that merely referencing the library in a project does not automatically create the beans, which would create mandatory dependencies that break the other project’s code and/or add functionalities which are not required

All of the above will happen if the following three things are created and used properly within a code base:

  1. Custom annotations to represent features or functionalities by tagging wanted code
  2. Usage of component scan to load up the wanted features or functionalities based on the set annotations
  3. Usage of properties classes which extend a base properties class that handles application property dependencies, configuration logic and logging

Notice: I assume that you are familiar with Java and Spring Boot, so I’ll skip some of the minor details regarding the implementation.

Implementation

Custom annotation

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
public @interface MyFeature {
   
}

To use this annotation, apply it to the bean creation methods which you want the component scan to pick up.

@Bean(name = "MY_FEATURE_BEAN")
        @Autowired
        @Profile({"primary"})
        @MyFeature
        public MyFeatureClass createMyFeatureBean(MyFeatureProperties myfeatureProperties) {
            MyFeatureClass myFeature = new MyFeatureClass(myfeatureProperties);
            // Do someething else with the class

            return myFeature; // Return the class to be used as a bean
        }

You can also apply it directly to a class; in that case the class itself is used to create the bean, as in the sketch below.
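
For example, a minimal sketch of the class-level variant (MyFeatureService is a hypothetical class of my own; it assumes MyFeatureProperties is available as a bean for constructor injection):

@MyFeature
public class MyFeatureService {

    private final MyFeatureProperties properties;

    // Spring injects the properties bean when the component scan registers this class as a bean
    public MyFeatureService(MyFeatureProperties properties) {
        this.properties = properties;
    }
}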

Component Scanning

You can use the Spring Boot component scanning in many different ways (I recommend looking at what the component scan can do).

In this example it is enough to tell the component scan which annotation to include; notice that you have to create a configuration class for this to work:


@Configuration
@ComponentScan(basePackages = "com.my.library.common",
        includeFilters = @ComponentScan.Filter(MyFeature.class))
public class MyFeaturesConfiguration {
}

Extended properties configuration

For this example we need two things to happen for the custom properties configuration and handling/logging to work:

  1. Create a properties class that represents a set of properties for a feature or a set of features and/or functionalities
  2. Extend it from a base properties class that will examine each field in the class and determine if a property has been set, not set or if it is optional.

What we want to achieve here is to show a developer which properties of a feature or functionality are missing and which are set. We don’t show the values, since they may contain sensitive data; we only list ALL of the properties in a properties class, whether they have values or not. This shows a developer all the needed fields and which ones are invalid, including optional properties.

This approach significantly reduces a developer’s or a system admin’s daily workload: you won’t have to guess what is missing. Combined with good documentation at the property level of a configuration class, you should easily figure out what needs to be set.

BaseProperties class

Extend this class in all classes in which you want to define properties.

import com.sato.library.common.general.exceptions.SettingsException;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.util.StringUtils;

import javax.annotation.PostConstruct;
import java.lang.reflect.Field;
import java.util.Optional;

public class BaseProperties {
    @PostConstruct
    private void init() throws Exception {
        boolean failedSettingsCheck = false;
        StringBuilder sb = new StringBuilder();

        // Go through every field in the class and record its status if it has problems (missing property value). NOTICE: A report of the settings properties is only logged IF a required field is not set
        for (Field f : getClass().getDeclaredFields()) {
            f.setAccessible(true);
            String optionalFieldPostFixText = " ";
            boolean isOptionalSetting = false;
            String classConfigurationPropertyFieldPrefixText = "";

            // Check to see if the class has a configuration properties annotation, if so add the defined property path to the logging
            if (getClass().getDeclaredAnnotation(ConfigurationProperties.class) != null) {
                final ConfigurationProperties configurationPropertiesAnnotation = getClass().getDeclaredAnnotation(ConfigurationProperties.class);
                if (!StringUtils.isEmpty(configurationPropertiesAnnotation.value()))
                    classConfigurationPropertyFieldPrefixText = configurationPropertiesAnnotation.value() + ".";

                if (StringUtils.isEmpty(classConfigurationPropertyFieldPrefixText) && !StringUtils.isEmpty(configurationPropertiesAnnotation.prefix()))
                    classConfigurationPropertyFieldPrefixText = configurationPropertiesAnnotation.prefix() + ".";
            }

            // Check to see if this field is optional
            if (f.getDeclaredAnnotation(OptionalProperty.class) != null) {
                optionalFieldPostFixText = " - Optional";
                isOptionalSetting = true;
            }

            // Check to see if a settings field is empty; if so, mark the check as failed so the application execution stops, and log the situation
            if (f.get(this) == null || (f.getType() == String.class && StringUtils.isEmpty(f.get(this)))) {
                // Skip empty field if they are set as optional
                if (!isOptionalSetting) {
                    failedSettingsCheck = true;
                }
                sb.append(classConfigurationPropertyFieldPrefixText + f.getName() + ": Missing" + optionalFieldPostFixText + System.lineSeparator());
            } else {
                // If the field is OK then mark that in the logging to give a better overview of the properties
                sb.append(classConfigurationPropertyFieldPrefixText + f.getName() + ": OK" + optionalFieldPostFixText + System.lineSeparator());
            }
        }

        // If even one required setting property is empty then stop the application execution and log the findings
        if(failedSettingsCheck) {
            throw new SettingsException(Optional.of(System.lineSeparator() + "SETTINGS FAILURE: You can't use these settings values of " + this.getClass() + " without setting all of the properties: " + System.lineSeparator() + sb.toString()));
        }
    }
}

Optional Annotation for optional properties

Use the following annotation to mark optional properties in properties classes. A missing optional property is reported but not treated as a fatal error that stops the execution of the application.

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
public @interface OptionalProperty {
}

Using all of the above

@ConfigurationProperties(prefix = "myfeature")
@MyFeature
public class MyFeatureProperties extends BaseProperties {
    @OptionalProperty
    private String secretKey;
    private String region;

    public String getSecretKey() {
        return secretKey;
    }

    public void setSecretKey(String secretKey) {
        this.secretKey = secretKey;
    }


    public String getRegion() {
        return region;
    }

    public void setRegion(String region) {
        this.region = region;
    }
}

Notice: In the usage example above I do not add a @Configuration annotation to the class. This is because the component scan picks the class up and treats it as a configuration class thanks to the @ConfigurationProperties annotation. Yes, this is a bit of a trick, but it works nicely.

My Kubernetes Cheat Sheet, things I find useful everyday

Hi,

Here is a list of my personal most used and useful commands with Kubernetes.

kubectl config current-context # Get the Kubernetes context you are currently operating in

kubectl get services # List all services in the namespace
kubectl get pods # Get all pods
kubectl get pods --all-namespaces # List all pods in all namespaces
kubectl get pods -o wide # List all pods in the namespace, with more details
kubectl get deployment my-dep # List a particular deployment
kubectl get pods --include-uninitialized # List all pods in the namespace, including uninitialized ones

kubectl describe nodes my-node
kubectl describe pods my-pod
kubectl describe deployment my-dep

kubectl scale --replicas=0 deployment/my-dep # scale a deployment down to zero instances
kubectl scale --replicas=1 deployment/my-dep # scale a deployment up to the desired number of instances

kubectl set image deployment/my-dep my-containers=my-image --record # Update the image of the given deployment's containers

kubectl apply -f my-file.yaml # Apply a Kubernetes configuration file: a secrets file, a deployment file, etc.

kubectl logs -f --tail=1 my-pod # Follow the pod's log output, starting from the last line

kubectl exec my-pod -- printenv | sort # Print all environment variables from a pod and sort them

kubectl get deployment my-dep --output=yaml # Print the YAML configuration the deployment is using

kubectl get pod my-pod --output=yaml # Print the pod's configuration

kubectl logs -p my-pod # Print the logs of the previous container instance, you can use this if there was a crash

kubectl run -i --tty busybox --image=busybox --restart=Never -- sh # Run a busybox pod for troubleshooting

More useful commands: https://kubernetes.io/docs/reference/kubectl/cheatsheet/

Redis caching with Spring Boot

Hi,

A few examples on how to use Redis with Spring Boot, plus some examples on how to handle Redis exceptions and issues.

The code below will help you initialize your Redis connection and show how to use it. One thing to notice is that Redis keys are global, so you must make sure that the method parameters you use in your keys are unique. For this reason there are samples of custom key generators below.

Redis Samples

 

Redis main configurations


import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.*;
import org.springframework.cache.interceptor.CacheErrorHandler;
import org.springframework.cache.interceptor.KeyGenerator;
import org.springframework.context.annotation.*;
import org.springframework.context.support.PropertySourcesPlaceholderConfigurer;
import org.springframework.data.redis.cache.RedisCacheManager;
import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;

import org.springframework.data.redis.serializer.StringRedisSerializer;
import org.springframework.util.StringUtils;

import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;


@Configuration
@ComponentScan
@EnableCaching
@Profile({"dev","test"})
public class RedisCacheConfig extends CachingConfigurerSupport {
    @Override
    public CacheErrorHandler errorHandler() {

        return new CustomCacheErrorHandler();

    }

    protected final org.slf4j.Logger logger = LoggerFactory.getLogger(RedisCacheConfig.class);


    // This is a custom default keygenerator that is used if no other explicit key generator is specified
    @Bean
    public KeyGenerator keyGenerator() {
        return new KeyGenerator() {
            protected final org.slf4j.Logger logger = LoggerFactory.getLogger(RedisCacheConfig.class);

            @Override
            public Object generate(Object o, Method method, Object... objects) {
                return RedisCacheConfig.keyGeneratorProcessor(logger, o, method, null, objects);

            }
        };
    }

    // A custom key generator that generates a key based on the first method parameter while ignoring all other parameters
    @Bean("keyGeneratorFirstParamKey")
    public KeyGenerator keyGeneratorFirstParamKey() {

        return new KeyGenerator() {
            protected final org.slf4j.Logger logger = LoggerFactory.getLogger(RedisCacheConfig.class);

            @Override
            public Object generate(Object o, Method method, Object... objects) {

                return RedisCacheConfig.keyGeneratorProcessor(logger, o, method, 0, objects);
            }
        };
    }

    // A custom key generator that generates a key based on the second method parameter while ignoring all other parameters

    @Bean("keyGeneratorSecondParamKey")
    public KeyGenerator keyGeneratorSecondParamKey() {

        return new KeyGenerator() {
            protected final org.slf4j.Logger logger = LoggerFactory.getLogger(RedisCacheConfig.class);

            @Override
            public Object generate(Object o, Method method, Object... objects) {

                return RedisCacheConfig.keyGeneratorProcessor(logger, o, method, 1, objects);
            }
        };
    }

    // This is the main logic for creating cache keys
    public static String keyGeneratorProcessor(org.slf4j.Logger logger, Object o, Method method, Integer keyIndex, Object... objects) {

        // Retrieve all cache names from each annotation and compose a cache key prefix
        CachePut cachePutAnnotation = method.getAnnotation(CachePut.class);
        Cacheable cacheableAnnotation = method.getAnnotation(Cacheable.class);
        CacheEvict cacheEvictAnnotation = method.getAnnotation(CacheEvict.class);
        org.springframework.cache.annotation.CacheConfig cacheConfigClassAnnotation = o.getClass().getAnnotation(org.springframework.cache.annotation.CacheConfig.class);
        String keyPrefix = "";
        String[] cacheNames = null;

        if (cacheConfigClassAnnotation != null)
            cacheNames = cacheConfigClassAnnotation.cacheNames();


        if (cacheEvictAnnotation != null)
            if (cacheEvictAnnotation.value() != null)
                if (cacheEvictAnnotation.value().length > 0)
                    cacheNames = org.apache.commons.lang3.ArrayUtils.addAll(cacheNames, cacheEvictAnnotation.value());

        if (cachePutAnnotation != null)
            if (cachePutAnnotation.value() != null)
                if (cachePutAnnotation.value().length > 0)
                    cacheNames = org.apache.commons.lang3.ArrayUtils.addAll(cacheNames, cachePutAnnotation.value());

        if (cacheableAnnotation != null)
            if (cacheableAnnotation.value() != null)
                if (cacheableAnnotation.value().length > 0)
                    cacheNames = org.apache.commons.lang3.ArrayUtils.addAll(cacheNames, cacheableAnnotation.value());

        if (cacheNames != null)
            if (cacheNames.length > 0) {
                for (String cacheName : cacheNames)
                    keyPrefix += cacheName + "_";
            }

        StringBuilder sb = new StringBuilder();


        int parameterIndex = 0;
        for (Object obj : objects) {
            if (obj != null && !StringUtils.isEmpty(obj.toString())) {
                if (keyIndex == null)
                    sb.append(obj.toString());
                else if (parameterIndex == keyIndex) {
                    sb.append(obj.toString());
                    break;
                }
            }
            parameterIndex++;
        }


        String fullKey = keyPrefix + sb.toString();

        logger.debug("REDIS KEYGEN for CacheNames: " + keyPrefix + " with KEY: " + fullKey);

        return fullKey;
        //---------------------------------------------------------------------------------------------------------

        // Another example how to do custom cache keys
        // This will generate a unique key of the class name, the method name,
        // and all method parameters appended.
                /*StringBuilder sb = new StringBuilder();
                sb.append(o.getClass().getName());
                sb.append("-" + method.getName() );
                for (Object obj : objects) {
                    if(obj != null)
                        sb.append("-" + obj.toString());
                }

                if(logger.isDebugEnabled())
                    logger.debug("REDIS KEYGEN: " + sb.toString());
                return sb.toString();*/
    }

    // The Redis password/token, read from application properties (the property name here is an assumption)
    @org.springframework.beans.factory.annotation.Value("${redis.token:}")
    private String mytoken;

    // Create the redis connection here
    @Bean
    public JedisConnectionFactory jedisConnectionFactory() {
        JedisConnectionFactory jedisConFactory = new JedisConnectionFactory();

        jedisConFactory.setUseSsl(true);
        jedisConFactory.setHostName("127.0.0.1");
        jedisConFactory.setPort(6379);

        if (!StringUtils.isEmpty(mytoken)) {
            jedisConFactory.setPassword(mytoken);
        }

        jedisConFactory.setUsePool(true);
        jedisConFactory.afterPropertiesSet();

        return jedisConFactory;
    }

    @Bean
    public static PropertySourcesPlaceholderConfigurer propertySourcesPlaceholderConfigurer() {
        return new PropertySourcesPlaceholderConfigurer();
    }

    @Bean
    public RedisTemplate redisTemplate() {
        RedisTemplate redisTemplate = new RedisTemplate();
        redisTemplate.setConnectionFactory(jedisConnectionFactory());
        redisTemplate.setKeySerializer(new StringRedisSerializer());

        return redisTemplate;
    }

    // Cache configurations like how long data is cached
    @Bean
    public CacheManager cacheManager(RedisTemplate redisTemplate) {
        RedisCacheManager cacheManager = new RedisCacheManager(redisTemplate);

        Map<String, Long> cacheExpiration = new HashMap<>();

        cacheExpiration.put("USERS", 120L);
        cacheExpiration.put("CARS", 3600L);

        // Number of seconds before expiration. Defaults to unlimited (0)
        cacheManager.setDefaultExpiration(60);
        cacheManager.setExpires(cacheExpiration);
        return cacheManager;
    }
}

 

Redis Error/Exception Handling

 

import com.google.gson.Gson;
import com.google.gson.GsonBuilder;
import org.slf4j.LoggerFactory;
import org.springframework.cache.Cache;
import org.springframework.cache.interceptor.CacheErrorHandler;

public class CustomCacheErrorHandler implements CacheErrorHandler {


    protected final org.slf4j.Logger logger = LoggerFactory.getLogger(this.getClass());

    protected Gson gson = new GsonBuilder().create();


    @Override
    public void handleCacheGetError(RuntimeException exception, Cache cache, Object key) {

        logger.error("Error in REDIS GET operation for KEY: " + key, exception);
        try
        {
            if (cache.get(key) != null && logger.isDebugEnabled())
                logger.debug("Possible existing data which for the cache GET operation in REDIS Cache by KEY: " + key + " with TYPE: " + cache.get(key).get().getClass() + " and DATA: " + this.gson.toJson(cache.get(key).get()));
        } catch (Exception ex)
        {
            // NOTICE: This exception is not logged because it most likely occurs when the cache connection is not established.
            // In that case the initial exception was probably the same (no connection to the cache server)
            // and it has already been logged above, before the try/catch.
        }
    }

    @Override
    public void handleCachePutError(RuntimeException exception, Cache cache, Object key, Object value) {

        logger.error("Error in REDIS PUT operation for KEY: " + key, exception);
        if(logger.isDebugEnabled())
            logger.debug("Error in REDIS PUT operation for KEY: " + key + " with TYPE: " + value.getClass() + " and DATA: " + this.gson.toJson(value), exception);
    }

    @Override
    public void handleCacheEvictError(RuntimeException exception, Cache cache, Object key) {

        logger.error("Error in REDIS EVICT operation for KEY: " + key, exception);
        try
        {
            if (cache.get(key) != null  && logger.isDebugEnabled())
                logger.debug("Possible existing data which for the cache EVICT operation in REDIS Cache by KEY: " + key + " with TYPE: " + cache.get(key).get().getClass() + " and DATA: " + this.gson.toJson(cache.get(key).get()));
        } catch (Exception ex)
        {
            // NOTICE: This exception is not logged because it most likely occurs when the cache connection is not established.
            // In that case the initial exception was probably the same (no connection to the cache server)
            // and it has already been logged above, before the try/catch.
        }
    }

    @Override
    public void handleCacheClearError(RuntimeException exception, Cache cache) {
        logger.error("Error in REDIS CLEAR operation ", exception);
    }

}

Custom Key Generator Example

 
@Cacheable(value = "USERS", keyGenerator = "keyGeneratorFirstParamKey")
    public UserData getUsers(String userId, Object data)
    {
        // Do something here
    }
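
Correspondingly, a small sketch (my own addition, not from the original post) of evicting the same entry when the user changes, using the same key generator so that the cached key and the evicted key match:

@CacheEvict(value = "USERS", keyGenerator = "keyGeneratorFirstParamKey")
public void updateUser(String userId, UserData newData)
{
    // Persist the change here; the entry cached under the key built from userId is evicted after this method runs
}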

Java + Spring Boot: Explicit Class Instances with Profiles, Beans and Qualifiers

Here is a simple example of how to use beans to create instances of classes based on profiles.

First, you need a base class or an interface that each class will inherit/implement.

The simple way of doing this is to use the @Profile annotation on a class with the desired profile name. For example:

Use the dev profile for a class that is created when your dev profile is active, and the test profile for the class used when your test profile is active.

Then simply use the @Autowired annotation on the base class/interface. The rest is deduced automatically based on your profile. BUT this approach only works if your classes and/or interface reside within the same package/library.

If you are using a base class or an interface from another library/package and want to create a different class to be used with different profiles, this might not work because you can’t change the profile names used in the base library/package.

In these cases you do the following:

  1. Create @Bean functions that instantiate the desired class based on the profile set on the bean function and return it. The return type can be the base class or interface.
  2. Give the same bean name to all functions.
  3. On the @Autowired class member add the @Qualifier annotation giving the bean name which you want.

Spring will in this case inject the right object instance based on the profile defined on each bean function.

@Bean(name="authenticationLogic")
    @Profile("dev")
    public BaseAuthentication getBaseAuth()
    {
        return new MockAuthenticationClient();
    }

    @Bean(name="authenticationLogic")
    @Autowired
    @Profile("test")
    public BaseAuthentication getMainAuth(MessageService messageService)
    {
        return new MainAuthClient(messageService);
    }



    @Autowired
    @Qualifier("authenticationLogic")
    private BaseAuthentication baseAuthentication;

Adding git information to your Spring Actuator Info endpoint with Gradle

Hi,

This is how you can add git-related information to your application, in case you need it to keep track of what functionality and code your development or test environments are running.

Configuration

First you need to add the following to your Gradle file:

plugins {
   id "com.gorylenko.gradle-git-properties" version "1.4.21"
}

apply plugin: 'com.gorylenko.gradle-git-properties'

After this you should have a new Gradle task that generates a git.properties file which your Actuator info endpoint can use. By default this file is generated into the build resources folder, so run this command before building your jar or Docker image etc.

gradle generateGitProperties

Bonus

If you want to access the info actuator endpoint and display that information from somewhere else, you can do this:
@Autowired
InfoEndpoint infoEndpoint;

return new JSONObject(this.infoEndpoint.invoke()).toString();
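
For context, here is a minimal sketch of that snippet wrapped in a REST controller (this assumes Spring Boot 1.x, where InfoEndpoint still exposes invoke(); the endpoint path and class name are my own choices, and imports are omitted as in the original snippet):

@RestController
public class BuildInfoController {

    @Autowired
    private InfoEndpoint infoEndpoint;

    @GetMapping("/build-info")
    public String buildInfo() {
        // Returns the same data the /info actuator endpoint exposes, including the git properties
        return new JSONObject(this.infoEndpoint.invoke()).toString();
    }
}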

Links

https://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#howto-git-info

https://github.com/n0mer/gradle-git-properties

Helper Scripts for Docker, git and Java developers

Hi,

Here are some of my own scripts that I use when developing to ease my life:

Building a Java Gradle project, then building a docker image and pushing it

./gradlew test
if [ $? -eq 0 ]; then
    echo Tests OK
    gradle clean
    gradle generateGitProperties
    gradle bootRepackage
    ./cleandocker.sh
    docker rmi {your image name + tag}
    docker build -t {your image name + tag} .
    ./dockerregistrylogin.sh
    docker push {your image name + tag}
else
    echo Tests Failed
    exit 1
fi

Clean docker by removing all containers, running and stopped

echo "Stoping all containers"
docker stop $(docker ps -a -q)
echo "Removing all containers"
docker rm $(docker ps -a -q)
echo "Starting dev environment"

Commit your code to git after Gradle tests are successful

./gradlew test
if [ $? -eq 0 ]; then
    echo Tests OK
    git add .
    git commit -m "$1"
    git push
else
    echo Tests Failed
    exit 1
fi

Merge your branch with your master

git checkout master
git pull origin master
git merge dev -m "$1"
git push origin master
git checkout dev

This one is for AWS developers: run it to get the AWS ECR docker login

#Notice: To use a certain profile for login define additional profiles like this: aws configure --profile awscli

function doAwsDockerRegistryLogin()
{
    local  myresult=$(aws ecr get-login --no-include-email --region eu-central-1 --profile awscli)
    echo "$myresult"
}

result=$(doAwsDockerRegistryLogin)   # or result=`myfunc`
eval $result

 

AWS ECS and Bitbucket Pipelines: Deploy your docker application

Hi,

Here are some tips and tricks on how to update an existing AWS ECS deployment.

NOTICE: This post assumes that you have some knowledge of AWS, scripting, docker and Bitbucket.

The scripts and guide below will do the following:

  1. Clone external libraries needed for your project (assumes that you have a multi-project application)
  2. Build your application
  3. Build the docker image
  4. Push the docker image into the Elastic Container Registry (ECR)
  5. Stop all tasks in a cluster to force the associated services to restart the tasks
    1. I am using this method of deploying to avoid constantly creating new task definitions, which I find unnecessary. My view is to have a deployment docker tag that your target service task definitions use. This way you only need to make sure that you have one good task definition that you plan to use; if needed, you can later update your task definition and service. This deployment approach cares about nothing except the docker image and the cluster name.
  6. Then test your API or web app in the cloud with a 3rd party tool; in this case I am using a Postman collection, Postman environment settings and Newman

The above steps can be performed automatically when you make changes to a branch, or manually from the commit view or branches view (on the row with the branch or commit id, move your mouse over “…” to get the option “Run pipeline for a branch” and select the manual pipeline option).

Needed steps:

  1. Create/have an IAM access key for deployment to ECS and ECR from Bitbucket.
  2. Generate SSH keys in the Bitbucket repository where you plan to run your pipeline.
  3. If you have any dependent Bitbucket repositories, copy the public key from step 2 into that repository’s settings.
  4. Then, in the primary repository from which you plan to deploy, set the environment variables needed for the deployment.
  5. Create your pipeline with the example Bitbucket pipeline definition file and supplementary scripts.

Step 1: AWS Access

You will need an AWS access key/secret with the following minimum policy settings:


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "ecs:ListTasks",
                "ecr:CompleteLayerUpload",
                "ecr:GetAuthorizationToken",
                "ecs:StopTask",
                "ecr:UploadLayerPart",
                "ecr:InitiateLayerUpload",
                "ecr:BatchCheckLayerAvailability",
                "ecr:PutImage"
            ],
            "Resource": "*"
        }
    ]
}

Step 2: SSH Keys for Bitbucket

More info here on how to generate a key:

https://confluence.atlassian.com/bitbucket/use-ssh-keys-in-bitbucket-pipelines-847452940.html?_ga=2.23419699.803528104.1518767114-2088289616.1517306997

Notice: Remember to get the public key

Step 3: Other repositories (Optional)

From bitbucket:

If you want your Pipelines builds to be able to access a different Bitbucket repository (other than the repo where the builds run):

  1. Add an SSH key to the settings for the repo where the build will run, as described in Step 1 above (you can create a new key in Bitbucket Pipelines or use an existing key).
  2. Add the public key from that SSH key pair directly to settings for the other Bitbucket repo (i.e. the repo that your builds need to have access to).
    See Use access keys for details on how to add a public key to a Bitbucket repo.

Step 4: Setting up environmental variables

APPIMAGE_TESTENV_CLUSTER: The cluster name to which the docker image is deployed, in this case a test environment that is deployed by a manual trigger

APPIMAGE_DEVENV_CLUSTER: A dev target cluster that is associated with the master branch and starts automatically

APPIMAGE_NAME: The docker image name (Notice: Must match the one in your service -> task definition)

APPIMAGE_TAG: The docker image tag (Notice: Must match the one in your service -> task definition)

AWS_ACCESS_KEY_ID (SECURE)

AWS_SECRET_ACCESS_KEY (SECURE)

AWS_DEFAULT_REGION : The region where your cluster is located

REGISTRYNAME: The ECR registry name where the image is to be pushed

Step 5: Bitbucket Pipeline

Pipeline definitions

The sample pipeline script has two options:
* Custom/manual deployment in the custom section of the script
* Branches/automatic deployment in the branches section of the script

# This is a sample build configuration for Java (Gradle).
# Check our guides at https://confluence.atlassian.com/x/zd-5Mw for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
image: atlassian/default-image:latest # Include Java support
options:
  max-time: 15 # 15 minutes incase something hangs up
  docker: true # Include Docker support
pipelines:
  custom: # Pipelines that can only be triggered manually
    test-env:
      - step:
          caches:
            - gradle
          script:
            # Modify the commands below to build your repository.
            # You must commit the Gradle wrapper to your repository
            # https://docs.gradle.org/current/userguide/gradle_wrapper.html
            - git clone {your external repository}
            - ls
            - bash ./scripts/bitbucket/buildproject.sh
            # Install AWS CLI and configure it
            #----------------------------------------
            - apt-get update
            - apt-get install jq
            - curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
            - unzip awscli-bundle.zip
            - ./awscli-bundle/install -b ~/bin/aws
            - export PATH=~/bin:$PATH
            #----------------------------------------
            - bash ./scripts/bitbucket/awsdev-dockerregistrylogin.sh
            - export IMAGE_NAME=$REGISTRYNAME/$APPIMAGE_NAME:$APPIMAGE_TAG
            - docker build -t $IMAGE_NAME .
            - docker push $IMAGE_NAME
            # This will stop all tasks in the AWS Cluster, by doing this the AWS Service will start the defined task definitions as new tasks.
            # NOTICE: This approach needs task definitions attached to services and no manually started tasks.
            - bash ./scripts/bitbucket/stopalltasks.sh $APPIMAGE_TESTENV_CLUSTER
            #----------------------------------------
            # Install Newman tool and test with postman collection and environmental settings your web app
            - npm install -g newman
            - ./scripts/newman-API-tests/run-testenv-tests.sh
            #----------------------------------------
    prod-env:
      - step:
          script:
            - echo "Manual triggers for deployments are awesome!"
  branches:
    master:
      - step:
          caches:
            - gradle
          script:
            #----------------------------------------
            # Modify the commands below to build your repository.
            # You must commit the Gradle wrapper to your repository
            # https://docs.gradle.org/current/userguide/gradle_wrapper.html
            - git clone {your external repository}
            - ls
            - bash ./scripts/bitbucket/buildproject.sh
            # Install AWS CLI and configure it
            #----------------------------------------
            - apt-get update
            - apt-get install jq
            - curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
            - unzip awscli-bundle.zip
            - ./awscli-bundle/install -b ~/bin/aws
            - export PATH=~/bin:$PATH
            #----------------------------------------
            # Build and install the newest docker image
            - bash ./scripts/bitbucket/awsdev-dockerregistrylogin.sh
            - export IMAGE_NAME=$REGISTRYNAME/$APPIMAGE_NAME:$APPIMAGE_TAG
            - docker build -t $IMAGE_NAME .
            - docker push $IMAGE_NAME
            #----------------------------------------
            # This will stop all tasks in the AWS Cluster, by doing this the AWS Service will start the defined task definitions as new tasks.
            # NOTICE: This approach needs task definitions attached to services and no manually started tasks.
            - bash ./scripts/bitbucket/stopalltasks.sh $APPIMAGE_DEVENV_CLUSTER
            #----------------------------------------
            # Install Newman tool and test with postman collection and environmental settings your web app
            - npm install -g newman
            - ./scripts/newman-API-tests/run-devenv-tests.sh
            #----------------------------------------

Stop all AWS tasks in the cloud

#!/usr/bin/env bash

echo "Getting tasks from AWS:"
echo "Cluster: $1 Service: $2"
#For a single task
#----------------
#task=$(aws ecs list-tasks --cluster "$1" --service-name "$2" | jq --raw-output '.taskArns[0] | split("/")[1]' )
 #echo "Stopping task: " $task

 #aws ecs stop-task --task $task --cluster "$1"
#----------------
tasks=$(aws ecs list-tasks --cluster "$1" --service-name "$2" | jq -r '.taskArns | map(.[0:]) | reduce .[] as $item (""; . + $item + " ")')
echo "Tasks received"
for task in $tasks; do
    echo "Stopping task from AWS: " $task
    aws ecs stop-task --task $task --cluster "$1"
    #echo "Task stopped."
done

Build your project

echo "Rebuilding project"
#gradlew_output=$(./gradlew build);
#echo "$gradlew_output"

./gradlew test
if [ $? -eq 0 ]; then
    echo Tests OK
    ./gradlew clean
    ./gradlew bootRepackage
else
    echo Tests Failed
    exit 1
fi

Get the AWS Login details for ECR docker login

#Notice: To use a certain profile for login define additional profiles like this: aws configure --profile awscli

function doAwsDockerRegistryLogin()
{
    local  myresult=$(aws ecr get-login --no-include-email)
    echo "$myresult"
}

result=$(doAwsDockerRegistryLogin)   # or result=`myfunc`
eval $result

Running API or WebApp tests with Newman and Postman

What you need to make the tests work

  1. Create a new Postman collection
  2. Add your URLs to test
  3. Add scripts into the test tab
  4. When all the URLs in your collection are ready, export the collection from its “…” button
  5. (Optional) Then create environment settings that you can export and use with newman

Bash script to run the newman tests

sleep 1m # Force a wait to make sure that all AWS services, your app, LBs etc are all loaded up and running

echo $DEVENV_URL
until $(curl --output /dev/null --silent --head --fail --insecure "$DEVENV_URL"); do
    printf '.'
    sleep 5
done

echo "Starting newman tests"
newman run {your postman collection}.postman_collection.json --environment "{your postman collection}.postman_environment.json" --insecure --delay-request 10

Postman scripts example

Retrieving a token from the body and inserting it into an environmental variable

var jsonData = JSON.parse(responseBody);

console.log("TOKEN:" + jsonData.token);

var str_array = jsonData.token.split('.');
for(var i = 0; i < str_array.length -1; i++) {
console.log("Array Item: " + i);
console.log(str_array[i])
console.log(CryptoJS.enc.Utf8.stringify(CryptoJS.enc.Base64.parse(str_array[i])));
}
postman.setEnvironmentVariable("token", jsonData.token);

Testing a response for success and body content

// example using pm.response.to.be*
pm.test("response must be valid and have a body", function () {
// assert that the status code is 200
pm.response.to.be.ok; // info, success, redirection, clientError, serverError, are other variants
// assert that the response has a valid JSON body
pm.response.to.be.withBody;
pm.response.to.be.json; // this assertion also checks if a body exists, so the above check is not needed
});

console.log("BODY:" + responseBody);

Links

http://2mins4code.com/2017/11/08/building-a-cicd-environment-with-bitbucket-pipelines-docker-aws-ecs-for-an-angular-app/

https://bitbucket.org/awslabs/amazon-ecs-bitbucket-pipelines-python/overview

https://confluence.atlassian.com/bitbucket/deploy-to-amazon-ecs-892623902.html

https://bitbucket-pipelines.prod.public.atl-paas.net/validator

https://confluence.atlassian.com/bitbucket/configure-bitbucket-pipelines-yml-792298910.html

https://confluence.atlassian.com/bitbucketserver/getting-started-with-bitbucket-server-and-aws-776640193.html

https://confluence.atlassian.com/bitbucket/java-with-bitbucket-pipelines-872013773.html

https://confluence.atlassian.com/bitbucket/use-ssh-keys-in-bitbucket-pipelines-847452940.html?_ga=2.23419699.803528104.1518767114-2088289616.1517306997

https://confluence.atlassian.com/bitbucket/environment-variables-794502608.html

https://docs.aws.amazon.com/cli/latest/reference/index.html#cli-aws

https://confluence.atlassian.com/bitbucket/run-pipelines-manually-861242583.html

https://www.npmjs.com/package/newman

http://blog.getpostman.com/2017/10/25/writing-tests-in-postman/

Using multiple AWS CLI profiles to manage development environments

To avoid setting global AWS credentials/access to the AWS CLI you can use CLI Profiles like this:

Create a new profile:
aws configure --profile {profilename}

Then use it by adding the profile after a command like in the example below:
aws ecr get-login --no-include-email --region eu-central-1 --profile {profilename}

 

That’s it. This allows you to use different access keys and policies for different purposes without one AWS security configuration overriding another.

 

This is especially useful when you want to test your code against real-world security settings in the cloud that can’t have higher-level rights.

Azure DocumentDB Code Samples: How to use Azure DocumentDB

Hi,

I’ve been working a bit with DocumentDB and thought I’d post some sample code on how to use it. It might save people time and energy, since I had to work around some issues and headaches.

 

Notice, there is one thing you should take care of. Define your functions with the async keyword and use the await keyword on async function calls. Failure to do this will result in hanging application code.

Also, make sure that you are not accidentally calling synchronous functions from the Task class or some other place related to an async call. This will also hang the application code. Calling the Wait() function is one of them; reading the Result property in the wrong place will cause the same problem.

A quote on the problem from a site:

“If you call the async method on the SAME thread that you then call Result or Wait() on, you will probably deadlock because once the async task has finished, it will wait to re-acquire the previous thread but it can’t because the thread is blocked on the call to Result/Wait()

you can use async tasks and await to avoid this problem but there is also another clever trick, certainly in newer versions of the .net framework and that is to invoke your async task on another thread, not on the one you are calling your method with. It is as simple as this:

var task = Task.Run(() => myMethodAsync());

which involves the method on a thread from the thread pool. When your calling thread then waits and blocks using Wait() or Result, the async task will NOT need to wait for your thread, it will re-acquire the one from the threadpool, finish and signal your waiting thread to allow it to continue!” http://lukieb.blogspot.fi/2016/07/calls-to-azure-documentdb-hang.html

 


/// <summary>
/// Sample class to be used for object serialization when handling data to the DocumentDB
/// </summary>
public class MyDocumentDBDataContainer
{
public String Title { get; set; }
public byte[] FileData { get; set; }

public String FileName { get; set; }

public class InnerDataContainer
{
public String Title { get; set; }
public int SomeNumber { get; set; }
}

public InnerDataContainer InnerData { get; set; }
}

public partial class Form1 : Form
{
/// <summary>
/// The DocumentDB address, end point where it exists
/// </summary>
private const string EndpointUrl = "https://mydocumentdbtest.documents.azure.com:443/";

/// <summary>
/// This can be the primary key you get from the Azure DocumentDB settings UI
/// </summary>
private const string AuthorizationKey =
"";

/// <summary>
/// A temp object for holding the documentDB database for processing
/// </summary>
private Database database;

/// <summary>
/// Same as above but for a collection
/// </summary>
private DocumentCollection collection;

public Form1()
{
InitializeComponent();
}

private async void button1_Click(object sender, EventArgs e)
{
Stream myStream = null;
OpenFileDialog openFileDialog1 = new OpenFileDialog();

openFileDialog1.InitialDirectory = "c:\\";
openFileDialog1.Filter = "txt files (*.txt)|*.txt|All files (*.*)|*.*";
openFileDialog1.FilterIndex = 2;
openFileDialog1.RestoreDirectory = true;

// Open a file to get some byte data to upload into DocumentDB
if (openFileDialog1.ShowDialog() == DialogResult.OK)
{
try
{
if ((myStream = openFileDialog1.OpenFile()) != null)
{
using (myStream)
{
try
{
MemoryStream ms = new MemoryStream();
myStream.CopyTo(ms);
await CreateDocumentClient(ms, openFileDialog1.FileName);
}
catch (Exception ex)
{
Exception baseException = ex.GetBaseException();
Console.WriteLine("Error: {0}, Message: {1}", ex.Message, baseException.Message);
}
}
}
}
catch (Exception ex)
{
MessageBox.Show("Error: Could not read file from disk. Original error: " + ex.Message);
}
}
}

/// <summary>
/// This is the main work horse here. The function will create a database, a collection and a sample document if they do not exist.
/// NOTICE: This is very important: define your functions with the async keyword and use the await keyword on async function calls. Failure to do this will result in hanging application code.
/// Also make sure that you are not accidentally calling synchronous functions from the Task class or some other place that is related to a async call. This will also hang the application code.
/// More on this: http://lukieb.blogspot.fi/2016/07/calls-to-azure-documentdb-hang.html
/// Also notice that documentDB uses "links" to identify things. You will run into DocumentDB objects and a property SelfLink. This seems to just be a way of how things work.
/// </summary>
/// <param name="fileDataStream"></param>
/// <param name="fileName"></param>
/// <returns></returns>
private async Task CreateDocumentClient(MemoryStream fileDataStream, String fileName)
{
// Create a new instance of the DocumentClient
var client = new DocumentClient(new Uri(EndpointUrl), AuthorizationKey);

var databaseID = "myDBTest";
var collectionID = "myCollectionTest";

// Get the database and if it does not exist create it
this.database = this.GetDatabase(client, databaseID);
if (database == null)
{
this.database = await CreateDatabase(client, databaseID);
}

// Get the collection and if it does not exist then create it
this.collection = this.GetDocumentCollection(client, collectionID);
if(this.collection == null)
{
this.collection = await this.CreateCollection(client, collectionID);
}

// Create a temp data container, pass forward to be created in DocumentDB
MyDocumentDBDataContainer data = new MyDocumentDBDataContainer() { Title = fileName, InnerData = new MyDocumentDBDataContainer.InnerDataContainer() { Title = "InnerDataTitle", SomeNumber = 1 }, FileData = fileDataStream.ToArray(), FileName = fileName };
var result = await this.CreateDocument(client, data);

// Get the newly created document. Notice: In these code examples I use a title but you can use any identifier you wish.
var dataFromDocumentDB = this.ReadDocument(client, data.Title);

// Re-Create the file from the byte data from the DocumentDB storage
File.WriteAllBytes(dataFromDocumentDB.FileName, dataFromDocumentDB.FileData);
}



#region DocumentManagement

private async Task<Document> DeleteDocument(DocumentClient client, String documentTitle)
{
var documentToDelete =
client.CreateDocumentQuery<MyDocumentDBDataContainer>(this.collection.SelfLink)
.Where(e => e.Title == documentTitle)
.AsEnumerable()
.First();

Document doc = GetDocument(client, documentToDelete.Title);

var result = await client.DeleteDocumentAsync(doc.SelfLink);
return result.Resource;
}

private async Task<Document> UpdateDocument(DocumentClient client, String documentTitle)
{
// Update a Document

var singleDocument =
client.CreateDocumentQuery<MyDocumentDBDataContainer>(this.collection.SelfLink)
.Where(e => e.Title == documentTitle)
.AsEnumerable()
.First();

Document doc = GetDocument(client, singleDocument.Title);
MyDocumentDBDataContainer employeUpdated = singleDocument;
singleDocument.InnerData.SomeNumber = singleDocument.InnerData.SomeNumber + 1;
var result = await client.ReplaceDocumentAsync(doc.SelfLink, singleDocument);

return result.Resource;
}

private Document GetDocument(DocumentClient client, string id)
{
return client.CreateDocumentQuery(this.collection.SelfLink)
.Where(e => e.Id == id)
.AsEnumerable()
.First();
}

private MyDocumentDBDataContainer ReadDocument(DocumentClient client, String documentTitle)
{
// Read the collection

//var data = client.CreateDocumentQuery<MyDocumentDBDataContainer>(this.collection.SelfLink).AsEnumerable();
//foreach (var item in data)
//{
// Console.WriteLine(item.Title);
// Console.WriteLine(item.FileData);
// Console.WriteLine(item.InnerData.Title);
// Console.WriteLine("----------------------------------");
//}

// Read A Document - Where Name == "John Doe"
var singleDocument =
client.CreateDocumentQuery<MyDocumentDBDataContainer>(this.collection.SelfLink)
.Where(e => e.Title == documentTitle)
.AsEnumerable()
.FirstOrDefault();

return singleDocument;

//Console.WriteLine("-------- Read a document---------");
//Console.WriteLine(singleDocument.Title);
//Console.WriteLine(singleDocument.FileData);
//Console.WriteLine(singleDocument.InnerData.Title);
//Console.WriteLine("-------------------------------");
}

private async Task<Document> CreateDocument(DocumentClient client, object documentObject)
{

var result = await client.CreateDocumentAsync(collection.SelfLink, documentObject);
var document = result.Resource;

Console.WriteLine("Created new document: {0}\r\n{1}", document.Id, document);
return document;
}

#endregion

private async Task<Database> CreateDatabase(DocumentClient client, String databaseID)
{
Console.WriteLine();
Console.WriteLine("******** Create Database *******");

var databaseDefinition = new Database { Id = databaseID };
var result = await client.CreateDatabaseIfNotExistsAsync(databaseDefinition);
var database = result.Resource;

Console.WriteLine(" Database Id: {0}; Rid: {1}", database.Id, database.ResourceId);
Console.WriteLine("******** Database Created *******");

return database;
}

private DocumentCollection GetDocumentCollection(DocumentClient client, String collectionID)
{
var collections = client.CreateDocumentCollectionQuery(database.CollectionsLink,
"SELECT * FROM c WHERE c.id = '" + collectionID + "'").AsEnumerable();
if(collections.Count() > 0)
return collections.First();

return null;
}

private async Task QueryDocumentsWithPaging(DocumentClient client)
{
Console.WriteLine();
Console.WriteLine("**** Query Documents (paged results) ****");
Console.WriteLine();
Console.WriteLine("Quering for all documents");

var sql = "SELECT * FROM c";
var query = client.CreateDocumentQuery(collection.SelfLink, sql).AsDocumentQuery();

while (query.HasMoreResults)
{
var documents = await query.ExecuteNextAsync();

foreach (var document in documents)
{
Console.WriteLine(" Id: {0}; Name: {1};", document.id, document.name);
}
}

Console.WriteLine();
}

private Database GetDatabase(DocumentClient client, String databaseID)
{
//bool databaseExists = false;
Console.WriteLine();
Console.WriteLine();
Console.WriteLine("******** Get Databases List ********");

var databases = client.CreateDatabaseQuery().ToList();

foreach (var database in databases)
{
Console.WriteLine(" Database Id: {0}; Rid: {1}", database.Id, database.ResourceId);
// Only return the database we are actually looking for
if (database.Id == databaseID)
return database;
}

Console.WriteLine();
Console.WriteLine("Total databases: {0}", databases.Count);

return null;
}

private async Task<DocumentCollection> CreateCollection(DocumentClient client, string collectionId, string offerType = "S1")
{

Console.WriteLine();
Console.WriteLine("**** Create Collection {0} in {1} ****", collectionId,
database.Id);

var collectionDefinition = new DocumentCollection { Id = collectionId };
var options = new RequestOptions { OfferType = offerType };
var result = await

client.CreateDocumentCollectionAsync(database.SelfLink,
collectionDefinition, options);
var collection = result.Resource;

Console.WriteLine("Created new collection");
//ViewCollection(collection);

return collection;
}

#region DifferentWaysOfDoingThings
private async Task<Document> CreateDocuments2(DocumentClient client, byte[] fileData)
{
Console.WriteLine();
Console.WriteLine("**** Create Documents ****");
Console.WriteLine();

dynamic document1Definition = new
{
name = "New Customer 1",
address = new
{
addressType = "Main Office",
addressLine1 = "123 Main Street",
location = new
{
city = "Brooklyn",
stateProvinceName = "New York"
},
postalCode = "11229",
countryRegionName = "United States"
},
fileDataBinary = fileData
};

Document document1 = await CreateDocument2(client, document1Definition);
Console.WriteLine("Created document {0} from dynamic object", document1.Id);
Console.WriteLine();

return document1;
}

private async Task<Document> CreateDocument2(DocumentClient client, object documentObject)
{

var result = await client.CreateDocumentAsync(collection.SelfLink, documentObject);
var document = result.Resource;

Console.WriteLine("Created new document: {0}\r\n{1}", document.Id, document);
return document;
}

#endregion
}