A self-hosted Docker logging API using Elasticsearch, Kibana and MongoDB

Hi,

In this post I wanted to show you a way of hosting a custom logging API that uses Elasticsearch, Kibana and MongoDB.

In the GitHub link you will find the following:

  • A docker-compose file that contains the definitions needed to create the environment
  • Docker image definitions for Kibana, Elasticsearch and MongoDB

Things to know

  • Neither Kibana nor Elasticsearch uses X-Pack as a security measure. The original image definition I used as a basis relied on SearchGuard, but I wanted to create something that is free to use.
  • For simple authentication I used Nginx reverse-proxy images.
  • MongoDB uses, and requires, credentials for access.
  • The Log API currently uses TLS and basic auth.
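To illustrate the Nginx layer, a reverse-proxy server block with basic auth in front of Kibana might look roughly like this; the hostname, certificate paths and htpasswd location are assumptions for the sketch, not the repository's actual config:

```nginx
server {
    listen 443 ssl;
    server_name logs.example.com;

    ssl_certificate     /etc/nginx/certs/server.crt;
    ssl_certificate_key /etc/nginx/certs/server.key;

    location / {
        # Basic auth challenge before anything is proxied
        auth_basic           "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;

        # Forward to the Kibana container on the Docker network
        proxy_pass http://kibana:5601;
        proxy_set_header Host $host;
    }
}
```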

TODO

  • Adding token based authentication
  • Adding the LogAPI nodejs code
  • Using Readonlyrest to add security between Elasticsearch and Kibana

This is a work in progress

The docker compose file


My ELK stack + Spring Boot + PostgreSQL docker definition

Hi,

If you need an example of setting up a Docker environment for Spring Boot, PostgreSQL and the ELK stack, take a look at this:

https://github.com/lionadi/Templates/blob/master/docker-compose.yml

version: '2'

services:
  mydatabase:
    image: mydockerimageregistry.azurecr.io/mydatabase:0.1.0
    container_name: mydatabase
    restart: always
    build:
      context: .
      dockerfile: postgredockerfile
    ports:
      - 5432:5432
    environment:
      POSTGRES_PASSWORD: f130e10792
    volumes:
      - pgdata:/var/lib/postgresql/data
    networks:
      - mynetwork

  apiapp:
    image: mydockerimageregistry.azurecr.io/apiapp:0.1.0
    container_name: apiapp
    ports:
      - 8080:8080
    environment:
      SPRING_PROFILES_ACTIVE: localdevPostgre
      SPRING_DATASOURCE_URL: jdbc:postgresql://mydatabase:5432/postgres
    build:
      context: .
      dockerfile: DockerfileAPIApp
    networks:
      - mynetwork
    volumes:
      - applogs:/logs
    depends_on:
      - "elasticsearch1"
      - "mydatabase"
    links:
      - mydatabase

  elasticsearch1:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.4.1
    container_name: elasticsearch1
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    mem_limit: 1g
    cap_add:
      - IPC_LOCK
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300
    networks:
      - mynetwork

  dockbeat:
    image: mydockerimageregistry.azurecr.io/dockbeat:0.1.0
    build:
      context: .
      dockerfile: dockerfileDockbeat
    container_name: dockbeat
    networks:
      - mynetwork
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    depends_on:
      - "elasticsearch1"
    cap_add:
      - ALL

  filebeat:
    image: mydockerimageregistry.azurecr.io/filebeat:0.1.0
    container_name: filebeat
    restart: always
    build:
      context: .
      dockerfile: dockerfileFilebeat
    networks:
      - mynetwork
    volumes:
      - applogs:/usr/share/filebeat/logs
    depends_on:
      - "elasticsearch1"
      - "apiapp"

  packetbeat:
    image: mydockerimageregistry.azurecr.io/packetbeat:0.1.0
    container_name: packetbeat
    restart: always
    build:
      context: .
      dockerfile: dockerfilePacketbeat
    networks:
      - mynetwork
    volumes:
      - applogs:/logs
    depends_on:
      - "elasticsearch1"
      - "apiapp"
    cap_add:
      - NET_ADMIN

  kibana:
    image: docker.elastic.co/kibana/kibana:5.4.1
    container_name: kibana
    networks:
      - mynetwork
    ports:
      - "9900:5601"
    environment:
      SERVER_NAME: kibana
      SERVER_PORT: 5601
      ELASTICSEARCH_URL: http://elasticsearch1:9200
      XPACK_SECURITY_ENABLED: "true"
      ELASTICSEARCH_PASSWORD: changeme
      ELASTICSEARCH_USERNAME: elastic
    depends_on:
      - "elasticsearch1"

volumes:
  pgdata:
  esdata1:
    driver: local
  esdata2:
    driver: local
  applogs:

networks:
  mynetwork:
    driver: bridge
    ipam:
      config:
        - subnet: 172.10.0.0/16
          gateway: 172.10.5.254
          aux_addresses:
            kibana: 172.10.1.8
            packetbeat: 172.10.1.7
            filebeat: 172.10.1.6
            dockbeat: 172.10.1.5
            elasticsearch2: 172.10.1.4
            elasticsearch1: 172.10.1.3
            mydatabase: 172.10.1.2
            apiapp: 172.10.1.1

Docker, Docker Swarm and Azure guide

Table of contents

Azure configurations
Docker on Ubuntu Server
Connect to Azure Virtual Image for Docker on Ubuntu Server
Container Registry
Build, push and remove images to and from registry
Docker Swarm Configurations
Create a swarm
Deploy a service
Inspect a service
Tips and Tricks
Docker Swarm
Spring Boot
Docker Swarm

Notice: This guide assumes you have Docker installed locally for some of the steps to work.

Azure configurations

Docker on Ubuntu Server

The first step is to create a new VM that supports Docker. The portal wizard is rather self-explanatory.

The only bigger decision is choosing between a password and an SSH key. You can generate a key with PuTTY, in a Unix environment, or with the Linux Bash support in Windows (type bash in PowerShell).

Then type: ssh-keygen {filename or path + filename}

Then copy the public key from the generated {filename}.pub file.
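The two steps above can be sketched as follows; the key file name is just an example, and -N "" skips the passphrase so it runs non-interactively:

```shell
# Generate a 4096-bit RSA key pair (file name is an example)
ssh-keygen -t rsa -b 4096 -N "" -f ./azure_docker_vm
# Print the public half; this is what you paste into the Azure portal
cat ./azure_docker_vm.pub
```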

Connect to Azure Virtual Image for Docker on Ubuntu Server

ssh -i {filename or path + filename} {username}@{virtual image IP or DNS name}

ssh -i /root/.ssh/mykeyfile dockeradmin@{virtual image IP or DNS name}

To get the virtual image's IP, go to the newly created virtual image in the Azure portal. Select the resource group where the Docker on Ubuntu VM was created; from there you should be able to find your virtual image. Click it to open its overview screen.

Next you should see the DNS name and Virtual IP address. Use either with the SSH command.

Remember: to access images from an Azure Container Registry, you have to log in to the registry first. See Container Registry.

Container Registry

Next, create a private container registry for publishing and managing images.

Follow this guide from Microsoft, it’s pretty good:

https://docs.microsoft.com/en-us/azure/container-registry/container-registry-get-started-portal

Now to connect to the registry open your terminal/powershell and type:

docker login {registry domain} -u {username} -p {password}

docker login myregistry.azurecr.io -u xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx -p myPassword

https://docs.microsoft.com/en-us/azure/container-registry/container-registry-get-started-docker-cli

Build, push and remove images to and from registry

Once you have your image definition ready use the following command to build a new image:

docker build -t {registry domain}/{image name}:{image version number} .

docker build -t myregistry.azurecr.io/coolimage:1.0 .

Then to push it:

docker push {registry domain}/{image name}:{image version number}

docker push myregistry.azurecr.io/coolimage:1.0

docker pull {registry domain}/{image name}:{image version number}

docker rmi {registry domain}/{image name}:{image version number}

docker rmi myregistry.azurecr.io/coolimage:1.0

Docker Swarm Configurations

Create a swarm

This is very simple, just type the following command in the Azure Docker on Ubuntu virtual image:

docker swarm init

https://docs.docker.com/engine/swarm/swarm-tutorial/create-swarm/

This will create a basic structure for your swarm. More info on Docker Swarm here:

https://docs.docker.com/engine/swarm/key-concepts/#services-and-tasks

Deploy a service

This will create a service and replicate it in your swarm, meaning that Docker Swarm will automatically create instances of your container as tasks on the swarm's worker nodes.

docker service create --replicas 1 --name {service name} -p {docker network port}:{container internal port} --with-registry-auth {registry domain}/{image name}:{image version number}

docker service create --replicas 1 --name myservice -p 8080:8080 --with-registry-auth myregistry.azurecr.io/coolimage:1.0

Inspect a service

docker service inspect {service name}

docker service inspect --pretty {service name}

This will give you an overview of the service and its configuration.

Run docker service ps {service name} to see which nodes are running the service.

Run docker ps on the node where the task is running to see details about the container for the task.

Docker Stack Deploy

The services can also be created and updated using a Docker Compose file (v3 and above). Deployment using a Docker Compose file is otherwise similar to the steps described above, but instead of creating and updating services, the compose file should be edited and deployed.
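A minimal sketch of such a v3 compose file, assuming a single service (the image name is a placeholder); note that swarm-specific settings such as replica counts live under the deploy key:

```yaml
version: '3'
services:
  apiapp:
    image: myregistry.azurecr.io/apiapp:0.1.0
    ports:
      - "8080:8080"
    deploy:
      replicas: 2
      restart_policy:
        condition: on-failure
```

Keep in mind that build: sections are ignored by docker stack deploy; the image must already be pushed to a registry.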

Deploying a stack

Before deploying the stack for the first time, you need to authenticate with the container registry:

docker login {registry-domain}

A stack defined in a compose file can be deployed with the following command:

docker stack deploy --with-registry-auth --compose-file {path-to-compose-file} {stack-name}

Updating a stack

An existing stack can be updated with the same command that created it; however, the --with-registry-auth option may be omitted.

docker stack deploy --compose-file {path-to-compose-file} {stack-name}

Docker Compose

Using docker-compose is rather simple but can be a bit confusing at first. Basically, with docker-compose you specify a whole cluster of containers (apps, databases, etc.) in one file, and a single command sets everything up.

Compose file example

To start out, you need to create a docker-compose.yml file and begin configuring the Docker environment.

In the example below the compose file defines two containers/services:

  1. apppostgres
    1. A PostgreSQL database
  2. app
    1. A Spring Boot application using the database

These two services are joined together with a common network and static IPs. Notice that the PostgreSQL database needs a volumes configuration to keep its data even if the database is shut down.

Also notice that the Spring Boot application is linked to, and depends on, the database service, and that the datasource URL refers to the database by its container name.

More details: https://docs.docker.com/compose/overview/

version: '2'

services:
  apppostgres:
    image: myregistry.azurecr.io/apppostgres:0.1.0
    container_name: apppostgres
    restart: always
    build:
      context: .
      dockerfile: postgredockerfile
    ports:
      - 5432:5432
    environment:
      POSTGRES_PASSWORD: mypassword
      POSTGRES_DB: appdb
    volumes:
      - pgdata:/var/lib/postgresql/data
    networks:
      - mynetwork

  app:
    image: myregistry.azurecr.io/app:0.1.3
    container_name: app
    ports:
      - 8080:8080
    environment:
      SPRING_PROFILES_ACTIVE: devPostgre
      SPRING_DATASOURCE_URL: jdbc:postgresql://apppostgres:5432/appdb
    build:
      context: .
      dockerfile: Dockerfile
    links:
      - apppostgres
    depends_on:
      - "apppostgres"
    networks:
      - mynetwork

volumes:
  pgdata:

networks:
  mynetwork:
    driver: bridge
    ipam:
      config:
        - subnet: 172.10.0.0/16
          gateway: 172.10.5.254
          aux_addresses:
            apppostgres: 172.10.1.2
            app: 172.10.1.

Commands

To use these commands, navigate to the location where the docker-compose.yml file resides.

To build the environment:

docker-compose build

To start the services (notice the -d parameter; it detaches from the services once they are up):

docker-compose up -d

To stop and kill(remove) the services:

docker-compose stop

docker-compose kill

And to update a single service/container, try this link:

http://staxmanade.com/2016/09/how-to-update-a-single-running-docker-compose-container/

Tips and Tricks

Docker Swarm

Send registry authentication details to swarm agents:

docker service update --with-registry-auth {service name}

Publish or remove a port as a node port:

docker service update --publish-add 4578:4578 {service name}

docker service update --publish-rm 4578 {service name}

Publish a port with a specific protocol:

docker service update --publish-add 4578:4578/udp client

Update the used image in the service:

docker service update --image {registry domain}/{image name}:{image version number} {service name}

Add and remove an environment variable:

docker service update --env-add "MY_DATA_URL=http://test.com" {service name}

docker service update --env-rm "MY_DATA_URL" {service name}

Connect to the container terminal:

docker exec -it {container ID or name} /bin/sh

Look at the logs from the container:

docker logs --follow {container ID or name}

Spring Boot

Run jar package with spring profile:

java -Dspring.profiles.active=profile1 -jar .\build\libs\myapp.jar

Repackage the spring boot application:

gradle bootRepackage

Docker Swarm

Create a service with an environment variable (a Spring profile):

docker service create --replicas 1 --name {service name} -e "SPRING_PROFILES_ACTIVE=profile1" -p {docker network port}:{container internal port} --with-registry-auth {registry domain}/{image name}:{image version number}

C# Parametrized Property

I am currently working on a project where I have to convert some VB.NET code to C#. One of the problems is that some VB features, like parameterized properties, are not supported in C#.

I also wanted the C# code to be usable from VB in the same way as before. Since parameterized properties are not supported in C#, the first instinct was to create getter and setter functions, but these are clunky to call from VB and break old code that assumes access to the data in a certain manner.

After some pondering, and not accepting defeat by C#, I came up with a solution.

It requires the following steps:

  1. Create a new class that represents the property
  2. Use the this keyword with the [] operator (an indexer) to define a get and set property; the class will function as the property itself
  3. To use functionality from the parent class, give the property class a reference to the parent. After this, you can access the parent class's functionality, as long as its visibility is public.

Here is the code:


public class PropertyName
{
    private ParentClass refParentClass = null;

    public PropertyName(ParentClass parentClass)
    {
        if (parentClass == null)
            throw new ArgumentNullException("parentClass", "parent class parameter can not be null");

        this.refParentClass = parentClass;
    }

    // The indexer makes this class behave like a parameterized property
    public string this[string path, string DefaultValue = ""]
    {
        get
        {
            ParentClass item = this.refParentClass.LocateItem(path, false);
            if (item == null)
            {
                return DefaultValue;
            }
            else if (string.IsNullOrEmpty(item.ItemValue))
            {
                return DefaultValue;
            }
            else
            {
                return item.ItemValue;
            }
        }
        set
        {
            ParentClass item = this.refParentClass.LocateItem(path, true);
            item.ItemValue = value;
        }
    }
}

This is then how you use it:

  1. Create a declaration of the property class as a property itself
  2. Instantiate it during the ParentClass constructor and pass the parent as reference

public class ParentClass
{
    public PropertyName Value { get; set; }

    private void InitializeParametrizedProperties()
    {
        this.Value = new PropertyName(this);
    }

    public ParentClass()
    {
        this.InitializeParametrizedProperties();
    }
}

Btw, nothing stops you from overloading the this[] definition to accept different kinds of parameters and return values. Just be careful with identical parameter definitions; the compiler won't know which one to call.
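For example, a second indexer overload with a different parameter type compiles fine alongside the first. This hypothetical Settings class (not from the original project) shows both the safe case and, in the comment, the ambiguous one:

```csharp
using System;
using System.Collections.Generic;

public class Settings
{
    private readonly Dictionary<string, string> data = new Dictionary<string, string>();

    // Indexer keyed by a string path, with a default for missing entries
    public string this[string path, string defaultValue = ""]
    {
        get
        {
            string value;
            return data.TryGetValue(path, out value) ? value : defaultValue;
        }
        set { data[path] = value; }
    }

    // A second overload with a different parameter type is unambiguous
    public int this[int index]
    {
        get { return index * 2; }
    }

    // But also declaring "public string this[string path]" would make a call
    // like settings["x"] ambiguous with the optional-parameter overload above.
}
```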

Problems running scripts and batch files with Task Scheduler

This is probably one of the most annoying things I've encountered. I've been trying to run a scheduled task using a cmd file, and every time I tried, it simply didn't work.

Then, thanks to search results, someone pointed out a solution, and it makes no sense; thank you, Microsoft. I guess there might be some "logical" explanation, but it is a mystery to me. Maybe something to do with UAC?

Anyway, here is the solution, should you happen to run into the same problem:

Do this when specifying an action: specify only the script/batch file name, and in Start in (optional) specify the full path to the folder where the file is located.


How to backup your private repositories

GitHub Repository Backup Steps

Contents

Install git
Use SSH key for communication with GitHub
Generate SSH Key
Generating a new SSH key
Adding your SSH key to the ssh-agent
Add the SSH key to your GitHub account or the organization
Setup a default user name and email
Set your username for every repository on your computer
Setting your email address for every repository on your computer
Create an access token if you want to clone repositories without a username and password
Ways to access the GitHub API with an access token
Cloning repositories
Sample git command
Sample node.js tool
Usage
Options
Examples
Exclude and Only options

Install git

https://git-scm.com/download/win

Use SSH key for communication with GitHub

https://help.github.com/articles/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent/

Generate SSH Key

Generating a new SSH key

1. Open Git Bash.

2. Paste the text below, substituting in your GitHub email address:

ssh-keygen -t rsa -b 4096 -C "your_email@example.com"

This creates a new SSH key, using the provided email as a label.

Generating public/private rsa key pair.

3. When you're prompted to "Enter a file in which to save the key," press Enter to accept the default file location:

Enter a file in which to save the key (/Users/you/.ssh/id_rsa): [Press enter]

4. At the prompt, type a secure passphrase. For more information, see "Working with SSH key passphrases":

Enter passphrase (empty for no passphrase): [Type a passphrase]
Enter same passphrase again: [Type passphrase again]

Adding your SSH key to the ssh-agent

Before adding a new SSH key to the ssh-agent to manage your keys, you should have checked for existing SSH keys and generated a new SSH key.

If you have GitHub for Windows installed, you can use it to clone repositories and not deal with SSH keys. It also comes with the Git Bash tool, which is the preferred way of running git commands on Windows.

1. Ensure the ssh-agent is running:

  • If you are using the Git Shell that's installed with GitHub Desktop, the ssh-agent should be running.
  • If you are using another terminal prompt, such as Git for Windows, you can use the "Auto-launching the ssh-agent" instructions in "Working with SSH key passphrases", or start it manually:

# start the ssh-agent in the background
eval $(ssh-agent -s)
Agent pid 59566

2. Add your SSH key to the ssh-agent. If you used an existing SSH key rather than generating a new one, replace id_rsa with the name of your existing private key file:

$ ssh-add ~/.ssh/id_rsa

https://help.github.com/articles/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent/

Add the SSH key to your GitHub account or the organization

For a single user

https://help.github.com/articles/adding-a-new-ssh-key-to-your-github-account/

For organization

Copy the SSH key to your clipboard (use Git Bash).

If your SSH key file has a different name than the example code, modify the filename to match your current setup. When copying your key, don’t add any newlines or whitespace.

$ clip < ~/.ssh/id_rsa.pub
# Copies the contents of the id_rsa.pub file to your clipboard

Tip: If clip isn’t working, you can locate the hidden .ssh folder, open the file in your favorite text editor, and copy it to your clipboard.

Notice that the .ssh folder is usually located at: C:\Users\{your account name}\.ssh

Go to your organization's management window and select SSH and GPG keys from the left navigation.

From this new view, press the New SSH key button.

Then type a title and paste the key from the clipboard.

Setup a default user name and email

You can do this for a specific repository or globally. For this situation, globally is preferable.

https://help.github.com/articles/setting-your-email-in-git/

https://help.github.com/articles/setting-your-username-in-git/

Set your username for every repository on your computer:

1. Navigate to your repository from a command-line prompt.

2. Set your username with the following command:

git config --global user.name "Billy Everyteen"

3. Confirm that you have set your username correctly:

git config --global user.name
Billy Everyteen

Setting your email address for every repository on your computer

1. Open Git Bash.

2. Set your email address with the following command:

git config --global user.email "your_email@example.com"

3. Confirm that you have set your email address correctly:

git config --global user.email
your_email@example.com

Create an access token if you want to clone repositories without a username and password

This is also useful if you want to create your own application that uses the GitHub API for something you need done.

For a single user

https://help.github.com/articles/creating-an-access-token-for-command-line-use/

For the organization

Navigate to the Organization management and from the left navigation go to: Developer settings -> Personal access tokens

From this new view press the button: Generate new token

From the new view, add a title to the access token, then select the needed privileges for your organization. For a repository backup, you will likely need the repo scope.

IMPORTANT: After you have created the token remember to copy and store the key because this is the only time you will see it.

Ways to access the GitHub API with an access token

https://developer.github.com/v3/#authentication

https://developer.github.com/v3/repos/

https://developer.github.com/v3/oauth_authorizations/#create-a-new-authorization

Cloning repositories

https://help.github.com/articles/duplicating-a-repository/

Sample git command:

git clone --bare https://github.com/exampleuser/old-repository.git
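The duplicating-a-repository flow behind that link boils down to a bare clone followed by a mirror push. Sketched here with local paths so it runs anywhere; against GitHub, the two paths would be the https clone URL and the backup repository URL:

```shell
# Stand-in for the original GitHub repository (one empty commit)
git init -q source
git -C source -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "init"

# 1. Bare-clone the original
git clone -q --bare ./source backup.git

# 2. Mirror-push all branches and tags to the backup location
git init -q --bare mirror.git
git -C backup.git push -q --mirror ../mirror.git
```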

Sample node.js tool

https://github.com/tegon/clone-org-repos

Usage

 cloneorg [OPTIONS] [ORG]

Options:

 -p, --perpage NUMBER number of repos per page (Default is 100)
 -t, --type STRING can be one of: all, public, private, forks, sources,
 member (Default is all)
 -e, --exclude STRING Exclude passed repos, comma separated
 -o, --only STRING Only clone passed repos, comma separated
 -r, --regexp BOOLEAN If true, exclude or only option will be evaluated as a
 regexp
 -u, --username STRING Username for basic authentication. Required to
 access github api
 --token STRING Token authentication. Required to access github api
 -a, --gitaccess Protocol to use in `git clone` command. Can be `ssh` (default), `https` or `git`
 -s, --gitsettings Additional parameters to pass to git clone command. Defaults to empty.
 --debug Show debug information
 -v, --version Display the current version
 -h, --help Display help and usage details

Examples:

Clones all github/twitter repositories with HTTP basic authentication; a password will be required:

cloneorg twitter -u GITHUB_USERNAME
cloneorg twitter --username=GITHUB_USERNAME

Clones all github/twitter repositories with an access token provided by GitHub:

cloneorg twitter --token GITHUB_TOKEN

If an environment variable GITHUB_TOKEN is set, it will be used.

export GITHUB_TOKEN='YOUR_GITHUB_API_TOKEN'

Add the -p or --perpage option to paginate the response:

cloneorg mozilla --token=GITHUB_TOKEN -p 10

Exclude and Only options

If you only need some repositories, you can pass -o or --only with their names:

cloneorg angular --token=GITHUB_TOKEN -o angular

This can be an array too:

cloneorg angular --token=GITHUB_TOKEN -o angular,material,bower-angular-i18n

This can also be a regular expression, with the -r or --regexp option set to true:

cloneorg marionettejs --token=GITHUB_TOKEN -o ^backbone -r true

The same rules apply to the exclude option:

cloneorg jquery --token=GITHUB_TOKEN -e css-framework # simple
cloneorg emberjs --token=GITHUB_TOKEN -e website,examples # array
cloneorg gruntjs --token=GITHUB_TOKEN -e $-docs -r true # regexp
cloneorg gruntjs --token=GITHUB_TOKEN -e $-docs -r true --gitaccess=git # Clone using git protocol
# Clone using git protocol and pass --recurse to `git clone` to clone submodules also
cloneorg gruntjs --token=GITHUB_TOKEN -e $-docs -r true --gitaccess=git --gitsettings="--recurse"

MongoDB Magic a.k.a simple database operations

Hi,

This is my little post on how to do simple database operations with MongoDB.

First, let's start with the needed MongoDB functionality. The simplest way is to open Visual Studio and manage your project's NuGet packages. Add these four packages to your project. Notice that you do not necessarily need the mongocsharpdriver package; it's a legacy driver. Add it only if you need it.

  • MongoDB.Driver
  • MongoDB.Driver.Core
  • MongoDB.Bson
  • mongocsharpdriver (the legacy driver)

Next, to use the classes for handling MongoDB database operations, include the following namespaces:

using MongoDB.Bson;
using MongoDB.Driver;

BSON is: “BSON is a binary serialization format used to store documents and make remote procedure calls in MongoDB” More info here: https://docs.mongodb.com/manual/reference/bson-types/

Driver is: “The official MongoDB C#/.NET Driver provides asynchronous interaction with MongoDB” More Info: https://docs.mongodb.com/ecosystem/drivers/csharp/

The code below will create a client connection to MongoDB and get a database from it.

var client = new MongoClient("mongodb://localhost:27017");
var database = client.GetDatabase("databaseName");

To list all collections in a database:

foreach (var item in database.ListCollectionsAsync().Result.ToListAsync<BsonDocument>().Result)
{
}

 

To get the name of the collection:

var name = item.Elements.FirstOrDefault().Value.RawValue as String;

To access a collection and all of its documents:

var collection = database.GetCollection<BsonDocument>(name);
// This filter simply means that all data is retrieved
var filter = new BsonDocument();

var documents = collection.Find(filter);
foreach (var document in documents.ToEnumerable())
{
}

To check if a document has elements (fields) of a certain data type:

var dateTimeFields = document.Elements.Where(o => o.Value.BsonType == BsonType.DateTime);
foreach (var dateTimeField in dateTimeFields)
{
}

To update a document with certain values:

var updateFilter = Builders<BsonDocument>.Filter.Eq("_id", document.Elements.FirstOrDefault().Value);
var value = BsonTypeMapper.MapToDotNetValue(dateTimeField.Value);
var updateDefinition = Builders<BsonDocument>.Update.Set(dateTimeField.Name, BsonTypeMapper.MapToBsonValue(((DateTime)value).AddDays(3)));

var result = collection.UpdateOne(updateFilter, updateDefinition);
if (result.ModifiedCount > 0)
{
    // Success
}
else
{
    // No updates
}

Retrieving documents based on a filter and copying them into another collection + deleting from the original collection:

var timeEntriesIntoTheFuture = Builders<BsonDocument>.Filter.Gt(dateTimeField.Name, BsonTypeMapper.MapToBsonValue(DateTime.Now));
var entries = sourceCollection.Find(timeEntriesIntoTheFuture);
if (entries.Count() > 0)
{
    tempTimeEntryCollection.InsertMany(entries.ToEnumerable());
    foreach (var entry in entries.ToEnumerable())
    {
        sourceCollection.DeleteOne(entry);
    }
}

Checking if a collection exists:

public static bool DoesCollectionExist(IMongoDatabase database, String collectionName)
{
    bool collectionExists = false;

    foreach (var item in database.ListCollectionsAsync().Result.ToListAsync<BsonDocument>().Result)
    {
        var name = item.Elements.FirstOrDefault().Value.AsString;
        if (name.Contains(collectionName))
        {
            collectionExists = true;
            break;
        }
    }

    return collectionExists;
}

Updating an element at a certain position:

var updateDefinition = Builders<BsonDocument>.Update.Set(firstDocument.ElementAt(1).Name, BsonTypeMapper.MapToBsonValue(DateTime.Now));

Inserting a document in a manually defined manner:

var dateTimeStampValue = new BsonDocument()
{
    { "ResetDate", BsonTypeMapper.MapToBsonValue(DateTime.Now) }
};
settingsCollection.InsertOne(dateTimeStampValue);