A Docker self-hosted logging API using Elasticsearch, Kibana and MongoDB

Hi,

In this post I wanted to show you a way of hosting a custom logging API that uses Elasticsearch, Kibana and MongoDB.

In the GitHub repository you will find the following:

  • A docker-compose file that contains the definitions needed to create the environment
  • Docker image definitions for Kibana, Elasticsearch and MongoDB

Things to know

  • Neither Kibana nor Elasticsearch uses X-Pack for security; the original image definition I used as a basis relied on Search Guard, but I wanted to create something that is free to use.
  • For simple authentication I used Nginx reverse proxy images (see the sketch after this list).
  • MongoDB requires credentials for access.
  • The LogAPI uses TLS and basic auth at the moment.
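
As an illustration, a minimal Nginx reverse proxy with basic auth in front of Kibana might look like the sketch below. This is my assumption of the shape of such a config, not the repository's actual one: the upstream address kibana:5601, the certificate paths and the .htpasswd user file are all placeholders.

 server {
     # Terminate TLS at the proxy
     listen 443 ssl;
     ssl_certificate /etc/nginx/certs/server.crt;
     ssl_certificate_key /etc/nginx/certs/server.key;

     # Simple username/password protection
     auth_basic "Restricted";
     auth_basic_user_file /etc/nginx/.htpasswd;

     location / {
         # Forward authenticated traffic to the Kibana container
         proxy_pass http://kibana:5601;
         proxy_set_header Host $host;
     }
 }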

TODO

  • Adding token-based authentication
  • Adding the LogAPI Node.js code
  • Using ReadonlyREST to add security between Elasticsearch and Kibana

This is a work in progress.


My ELK stack + Spring Boot + PostgreSQL docker definition

Hi,

If you need an example of how to put up a Docker environment for Spring Boot, PostgreSQL and the ELK stack, take a look at this:

https://github.com/lionadi/Templates/blob/master/docker-compose.yml

version: '2'

services:
  mydatabase:
    image: mydockerimageregistry.azurecr.io/mydatabase:0.1.0
    container_name: mydatabase
    restart: always
    build:
      context: .
      dockerfile: postgredockerfile
    ports:
      - 5432:5432
    environment:
      POSTGRES_PASSWORD: f130e10792
    volumes:
      - pgdata:/var/lib/postgresql/data
    networks:
      - mynetwork

  apiapp:
    image: mydockerimageregistry.azurecr.io/apiapp:0.1.0
    container_name: apiapp
    ports:
      - 8080:8080
    environment:
      SPRING_PROFILES_ACTIVE: localdevPostgre
      SPRING_DATASOURCE_URL: jdbc:postgresql://mydatabase:5432/postgres
    build:
      context: .
      dockerfile: DockerfileAPIApp
    networks:
      - mynetwork
    volumes:
      - applogs:/logs
    depends_on:
      - "elasticsearch1"
      - "mydatabase"
    links:
      - mydatabase

  elasticsearch1:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.4.1
    container_name: elasticsearch1
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    mem_limit: 1g
    cap_add:
      - IPC_LOCK
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300
    networks:
      - mynetwork

  dockbeat:
    image: mydockerimageregistry.azurecr.io/dockbeat:0.1.0
    build:
      context: .
      dockerfile: dockerfileDockbeat
    container_name: dockbeat
    networks:
      - mynetwork
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    depends_on:
      - "elasticsearch1"
    cap_add:
      - ALL

  filebeat:
    image: mydockerimageregistry.azurecr.io/filebeat:0.1.0
    container_name: filebeat
    restart: always
    build:
      context: .
      dockerfile: dockerfileFilebeat
    networks:
      - mynetwork
    volumes:
      - applogs:/usr/share/filebeat/logs
    depends_on:
      - "elasticsearch1"
      - "apiapp"

  packetbeat:
    image: mydockerimageregistry.azurecr.io/packetbeat:0.1.0
    container_name: packetbeat
    restart: always
    build:
      context: .
      dockerfile: dockerfilePacketbeat
    networks:
      - mynetwork
    volumes:
      - applogs:/logs
    depends_on:
      - "elasticsearch1"
      - "apiapp"
    cap_add:
      - NET_ADMIN

  kibana:
    image: docker.elastic.co/kibana/kibana:5.4.1
    container_name: kibana
    networks:
      - mynetwork
    ports:
      - "9900:5601"
    environment:
      SERVER_NAME: kibana
      SERVER_PORT: 5601
      ELASTICSEARCH_URL: http://elasticsearch1:9200
      XPACK_SECURITY_ENABLED: "true"
      ELASTICSEARCH_PASSWORD: changeme
      ELASTICSEARCH_USERNAME: elastic
    depends_on:
      - "elasticsearch1"

volumes:
  pgdata:
  esdata1:
    driver: local
  esdata2:
    driver: local
  applogs:

networks:
  mynetwork:
    driver: bridge
    ipam:
      config:
        - subnet: 172.10.0.0/16
          gateway: 172.10.5.254
          aux_addresses:
            kibana: 172.10.1.8
            packetbeat: 172.10.1.7
            filebeat: 172.10.1.6
            dockbeat: 172.10.1.5
            elasticsearch2: 172.10.1.4
            elasticsearch1: 172.10.1.3
            mydatabase: 172.10.1.2
            apiapp: 172.10.1.1
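
To try the stack out, run the usual Compose commands from the directory that contains the file (the same commands are covered in the guide below):

 docker-compose build
 docker-compose up -d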

Docker, Docker Swarm and Azure guide

Table of contents

Azure configurations
Docker on Ubuntu Server
Connect to Azure Virtual Image for Docker on Ubuntu Server
Container Registry
Build, push and remove images to and from registry
Docker Swarm Configurations
Create a swarm
Deploy a service
Inspect a service
Docker Stack Deploy
Docker Compose
Tips and Tricks
Docker Swarm
Spring Boot
Docker Swarm

Notice: This guide assumes you have Docker installed locally for some of the steps to work.

Azure configurations

Docker on Ubuntu Server

The first step is to create a new VM that supports Docker. The creation wizard is rather self-explanatory.

The only bigger decision is choosing between a password and an SSH key. You can generate a key with PuTTY, in a Unix environment, or with the Linux Bash support on Windows (type bash in PowerShell).

Then type: ssh-keygen -f {filename or path + filename}

Then copy the public key from the generated key file {filename}.pub

Connect to Azure Virtual Image for Docker on Ubuntu Server

ssh -i {filename or path + filename} {username}@{virtual image IP or DNS name}

ssh -i /root/.ssh/mykeyfile dockeradmin@{virtual image IP or DNS name}

To get the virtual image IP, go to the newly created virtual image in the Azure Portal UI: select the resource group where the Docker on Ubuntu VM was created, find your virtual image there, and click it to open its overview screen.

Next you should see the DNS name and Virtual IP address. Use either with the SSH command.

Remember: to access images from an Azure Container Registry you have to log in to the registry. See Container Registry.

Container Registry

Next, create a private container registry for publishing and managing images.

Follow this guide from Microsoft; it's pretty good:

https://docs.microsoft.com/en-us/azure/container-registry/container-registry-get-started-portal

Now, to connect to the registry, open your terminal/PowerShell and type:

docker login {registry domain} -u {username} -p {password}

docker login myregistry.azurecr.io -u xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx -p myPassword

https://docs.microsoft.com/en-us/azure/container-registry/container-registry-get-started-docker-cli

Build, push and remove images to and from registry

Once you have your image definition ready, use the following command to build a new image (note that repository names must be lowercase):

docker build -t {registry domain}/{image name}:{image version number} .

docker build -t myregistry.azurecr.io/coolimage:1.0 .

Then to push it:

docker push {registry domain}/{image name}:{image version number}

docker push myregistry.azurecr.io/coolimage:1.0

To pull an image:

docker pull {registry domain}/{image name}:{image version number}

And to remove a local image:

docker rmi {registry domain}/{image name}:{image version number}

docker rmi myregistry.azurecr.io/coolimage:1.0

Docker Swarm Configurations

Create a swarm

This is very simple: just type the following command in the Azure Docker on Ubuntu virtual image:

docker swarm init

https://docs.docker.com/engine/swarm/swarm-tutorial/create-swarm/

This will create the basic structure for your swarm. More info on Docker Swarm here:

https://docs.docker.com/engine/swarm/key-concepts/#services-and-tasks

Deploy a service

This will create a container and replicate it in your swarm, meaning that Docker Swarm will automatically create instances of your container as tasks on your swarm worker nodes.

docker service create --replicas 1 --name {service name} -p {docker network port}:{container internal port} --with-registry-auth {registry domain}/{image name}:{image version number}

docker service create --replicas 1 --name myservice -p 8080:8080 --with-registry-auth myregistry.azurecr.io/coolimage:1.0

Inspect a service

docker service inspect {service name}

docker service inspect --pretty {service name}

This will give you an overview of the service and its configuration.

Run docker service ps {service name} to see which nodes are running the service.

Run docker ps on the node where the task is running to see details about the container for the task.

Docker Stack Deploy

The services can also be created and updated using a Docker Compose file (v3 and above). Deployment with a Docker Compose file is otherwise similar to the steps described above, but instead of creating and updating the services individually, you edit the compose file and deploy it.

Deploying a stack

Before deploying the stack for the first time, you need to authenticate with the container registry:

docker login {registry-domain}

A stack defined in a compose file can be deployed with the following command:

docker stack deploy --with-registry-auth --compose-file {path-to-compose-file} {stack-name}

Updating a stack

An existing stack can be updated with the same command that created it. However, the --with-registry-auth option may be omitted.

docker stack deploy --compose-file {path-to-compose-file} {stack-name}

Docker Compose

Using Docker Compose is rather simple but can be a bit confusing at first. Basically, with docker-compose you can specify a whole cluster of containers (apps, databases, etc.) in one shot and set everything up with a single command.

Compose file example

To start out, you need to create a docker-compose.yml file and start configuring the Docker environment.

In the example below the compose file defines two containers/services:

  1. apppostgres
    1. A PostgreSQL database
  2. app
    1. A Spring Boot application using the database

These two services are joined together with a common network and static IPs. Notice that the PostgreSQL database needs a volumes configuration to keep its data even if the database is shut down.

Also notice that the Spring Boot application is linked to, and made to depend on, the database service. And in the datasource URL the configuration refers to the container name of the database service/container.

More details: https://docs.docker.com/compose/overview/

version: '2'

services:
  apppostgres:
    image: myregistry.azurecr.io/apppostgres:0.1.0
    container_name: apppostgres
    restart: always
    build:
      context: .
      dockerfile: postgredockerfile
    ports:
      - 5432:5432
    environment:
      POSTGRES_PASSWORD: mypassword
      POSTGRES_DB: appdb
    volumes:
      - pgdata:/var/lib/postgresql/data
    networks:
      - mynetwork

  app:
    image: myregistry.azurecr.io/app:0.1.3
    container_name: app
    ports:
      - 8080:8080
    environment:
      SPRING_PROFILES_ACTIVE: devPostgre
      SPRING_DATASOURCE_URL: jdbc:postgresql://apppostgres:5432/appdb
    build:
      context: .
      dockerfile: Dockerfile
    links:
      - apppostgres
    depends_on:
      - "apppostgres"
    networks:
      - mynetwork

volumes:
  pgdata:

networks:
  mynetwork:
    driver: bridge
    ipam:
      config:
        - subnet: 172.10.0.0/16
          gateway: 172.10.5.254
          aux_addresses:
            apppostgres: 172.10.1.2
            app: 172.10.1.1

Commands

To use these commands, navigate to the location where the docker-compose.yml file resides.

To build the environment:

docker-compose build

To start the services (notice the -d parameter; it means detach from the services once they are up):

docker-compose up -d

To stop and kill (remove) the services:

docker-compose stop

docker-compose kill

And to update a single service/container, try this link:

http://staxmanade.com/2016/09/how-to-update-a-single-running-docker-compose-container/

Tips and Tricks

Docker Swarm

Send registry authentication details to swarm agents:

docker service update --with-registry-auth {service name}

Publish or remove a port as a node port:

docker service update --publish-add 4578:4578 {service name}

docker service update --publish-rm 4578 {service name}

Publish a port with a specific protocol:

docker service update --publish-add 4578:4578/udp client

Update the used image in the service:

docker service update --image {registry domain}/{image name}:{image version number} {service name}

Add and remove an environmental variable:

docker service update --env-add "MY_DATA_URL=http://test.com" {service name}

docker service update --env-rm "MY_DATA_URL" {service name}

Connect to the container terminal:

docker exec -it {container ID or name} /bin/sh

Look at the logs from the container:

docker logs --follow {container ID or name}

Spring Boot

Run the jar package with a Spring profile:

java -Dspring.profiles.active=profile1 -jar .\build\libs\myapp.jar

Repackage the spring boot application:

gradle bootRepackage

Docker Swarm

Create a service with an environment variable (a Spring profile):

docker service create --replicas 1 --name {service name} -e "SPRING_PROFILES_ACTIVE=profile1" -p {docker network port}:{container internal port} --with-registry-auth {registry domain}/{image name}:{image version number}

How to fix internet connectivity errors in Android Studio emulator

This was a strange problem for me. I am not a Java or Android developer but had to create a small Android app to test something. I noticed that no internet connection was available to the emulated device.

After some Googling and wondering, I realized the solution:

Disable the Ethernet card on my laptop, then restart the emulator. I had to do this because, for some reason, the emulator or Android Studio doesn't recognize a WiFi card as the primary connection if both the Ethernet card and the WiFi card are enabled. There might be a configuration in Android Studio to make this work, but I had to do it from the Windows Control Panel: Control Panel\Network and Internet\Network Connections

Azure DocumentDB Code Samples: How to use Azure DocumentDB

Hi,

I’ve been working a bit on DocumentDB and thought of posting some sample code on how to use it. It might save people time and energy. I have had to work around some issues and headaches.

 

Notice: there is one thing you should take care of. Define your functions with the async keyword and use the await keyword on async function calls. Failure to do this will result in hanging application code.

Also, make sure that you are not accidentally calling synchronous functions from the Task class or anywhere else related to an async call; this will also hang the application code. Calling the Wait() function is one of these, and accessing the Result property in the wrong place will cause the same problem.

A quote on the problem from a site:

“If you call the async method on the SAME thread that you then call Result or Wait() on, you will probably deadlock because once the async task has finished, it will wait to re-acquire the previous thread but it can’t because the thread is blocked on the call to Result/Wait()

you can use async tasks and await to avoid this problem but there is also another clever trick, certainly in newer versions of the .net framework and that is to invoke your async task on another thread, not on the one you are calling your method with. It is as simple as this:

var task = Task.Run(() => myMethodAsync());

which involves the method on a thread from the thread pool. When your calling thread then waits and blocks using Wait() or Result, the async task will NOT need to wait for your thread, it will re-acquire the one from the threadpool, finish and signal your waiting thread to allow it to continue!” http://lukieb.blogspot.fi/2016/07/calls-to-azure-documentdb-hang.html
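
To make the safe pattern concrete, here is a minimal hedged sketch. The method name SaveDocumentAsync and its parameters are illustrative, not part of the sample below:

 // Deadlock-prone on a UI thread: .Result blocks the very thread the awaited task needs to resume on.
 // var doc = client.CreateDocumentAsync(collectionLink, data).Result;

 // Safe: stay async all the way up the call chain and await the task.
 private async Task<Document> SaveDocumentAsync(DocumentClient client, string collectionLink, object data)
 {
     var response = await client.CreateDocumentAsync(collectionLink, data);
     return response.Resource;
 }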

 


/// <summary>
/// Sample class to be used for object serialization when handling data to the DocumentDB
/// </summary>
public class MyDocumentDBDataContainer
{
    public String Title { get; set; }

    public byte[] FileData { get; set; }

    public String FileName { get; set; }

    public class InnerDataContainer
    {
        public String Title { get; set; }
        public int SomeNumber { get; set; }
    }

    public InnerDataContainer InnerData { get; set; }
}

public partial class Form1 : Form
{
    /// <summary>
    /// The DocumentDB address, end point where it exists
    /// </summary>
    private const string EndpointUrl = "https://mydocumentdbtest.documents.azure.com:443/";

    /// <summary>
    /// This can be the primary key you get from the Azure DocumentDB settings UI
    /// </summary>
    private const string AuthorizationKey = "";

    /// <summary>
    /// A temp object for holding the DocumentDB database for processing
    /// </summary>
    private Database database;

    /// <summary>
    /// Same as above but for a collection
    /// </summary>
    private DocumentCollection collection;

    public Form1()
    {
        InitializeComponent();
    }

    private async void button1_Click(object sender, EventArgs e)
    {
        Stream myStream = null;
        OpenFileDialog openFileDialog1 = new OpenFileDialog();

        openFileDialog1.InitialDirectory = "c:\\";
        openFileDialog1.Filter = "txt files (*.txt)|*.txt|All files (*.*)|*.*";
        openFileDialog1.FilterIndex = 2;
        openFileDialog1.RestoreDirectory = true;

        // Open a file to get some byte data to upload into DocumentDB
        if (openFileDialog1.ShowDialog() == DialogResult.OK)
        {
            try
            {
                if ((myStream = openFileDialog1.OpenFile()) != null)
                {
                    using (myStream)
                    {
                        try
                        {
                            MemoryStream ms = new MemoryStream();
                            myStream.CopyTo(ms);
                            await CreateDocumentClient(ms, openFileDialog1.FileName);
                        }
                        catch (Exception ex)
                        {
                            Exception baseException = ex.GetBaseException();
                            Console.WriteLine("Error: {0}, Message: {1}", ex.Message, baseException.Message);
                        }
                    }
                }
            }
            catch (Exception ex)
            {
                MessageBox.Show("Error: Could not read file from disk. Original error: " + ex.Message);
            }
        }
    }

    /// <summary>
    /// This is the main work horse here. The function will create a database, a collection and a sample document if they do not exist.
    /// NOTICE: This is very important: define your functions with the async keyword and use the await keyword on async function calls. Failure to do this will result in hanging application code.
    /// Also make sure that you are not accidentally calling synchronous functions from the Task class or some other place that is related to an async call. This will also hang the application code.
    /// More on this: http://lukieb.blogspot.fi/2016/07/calls-to-azure-documentdb-hang.html
    /// Also notice that DocumentDB uses "links" to identify things. You will run into DocumentDB objects and a property SelfLink. This seems to just be the way things work.
    /// </summary>
    /// <param name="fileDataStream"></param>
    /// <param name="fileName"></param>
    /// <returns></returns>
    private async Task CreateDocumentClient(MemoryStream fileDataStream, String fileName)
    {
        // Create a new instance of the DocumentClient
        var client = new DocumentClient(new Uri(EndpointUrl), AuthorizationKey);

        var databaseID = "myDBTest";
        var collectionID = "myCollectionTest";

        // Get the database and if it does not exist create it
        this.database = this.GetDatabase(client, databaseID);
        if (database == null)
        {
            this.database = await CreateDatabase(client, databaseID);
        }

        // Get the collection and if it does not exist then create it
        this.collection = this.GetDocumentCollection(client, collectionID);
        if (this.collection == null)
        {
            this.collection = await this.CreateCollection(client, collectionID);
        }

        // Create a temp data container, pass forward to be created in DocumentDB
        MyDocumentDBDataContainer data = new MyDocumentDBDataContainer() { Title = fileName, InnerData = new MyDocumentDBDataContainer.InnerDataContainer() { Title = "InnerDataTitle", SomeNumber = 1 }, FileData = fileDataStream.ToArray(), FileName = fileName };
        var result = await this.CreateDocument(client, data);

        // Get the newly created document. Notice: In these code examples I use a title but you can use any identifier you wish.
        var dataFromDocumentDB = this.ReadDocument(client, data.Title);

        // Re-create the file from the byte data in the DocumentDB storage
        File.WriteAllBytes(dataFromDocumentDB.FileName, dataFromDocumentDB.FileData);
    }



    #region DocumentManagement

    private async Task<Document> DeleteDocument(DocumentClient client, String documentTitle)
    {
        var documentToDelete =
            client.CreateDocumentQuery<MyDocumentDBDataContainer>(this.collection.SelfLink)
                .Where(e => e.Title == documentTitle)
                .AsEnumerable()
                .First();

        Document doc = GetDocument(client, documentToDelete.Title);

        var result = await client.DeleteDocumentAsync(doc.SelfLink);
        return result.Resource;
    }

    private async Task<Document> UpdateDocument(DocumentClient client, String documentTitle)
    {
        // Update a document
        var singleDocument =
            client.CreateDocumentQuery<MyDocumentDBDataContainer>(this.collection.SelfLink)
                .Where(e => e.Title == documentTitle)
                .AsEnumerable()
                .First();

        Document doc = GetDocument(client, singleDocument.Title);
        singleDocument.InnerData.SomeNumber = singleDocument.InnerData.SomeNumber + 1;
        var result = await client.ReplaceDocumentAsync(doc.SelfLink, singleDocument);

        return result.Resource;
    }

    private Document GetDocument(DocumentClient client, string id)
    {
        return client.CreateDocumentQuery(this.collection.SelfLink)
            .Where(e => e.Id == id)
            .AsEnumerable()
            .First();
    }

    private MyDocumentDBDataContainer ReadDocument(DocumentClient client, String documentTitle)
    {
        // Read the whole collection:

        //var data = client.CreateDocumentQuery<MyDocumentDBDataContainer>(this.collection.SelfLink).AsEnumerable();
        //foreach (var item in data)
        //{
        //    Console.WriteLine(item.Title);
        //    Console.WriteLine(item.FileData);
        //    Console.WriteLine(item.InnerData.Title);
        //    Console.WriteLine("----------------------------------");
        //}

        // Read a single document where Title == documentTitle
        var singleDocument =
            client.CreateDocumentQuery<MyDocumentDBDataContainer>(this.collection.SelfLink)
                .Where(e => e.Title == documentTitle)
                .AsEnumerable()
                .FirstOrDefault();

        return singleDocument;

        //Console.WriteLine("-------- Read a document---------");
        //Console.WriteLine(singleDocument.Title);
        //Console.WriteLine(singleDocument.FileData);
        //Console.WriteLine(singleDocument.InnerData.Title);
        //Console.WriteLine("-------------------------------");
    }

    private async Task<Document> CreateDocument(DocumentClient client, object documentObject)
    {
        var result = await client.CreateDocumentAsync(collection.SelfLink, documentObject);
        var document = result.Resource;

        Console.WriteLine("Created new document: {0}\r\n{1}", document.Id, document);
        return document;
    }

    #endregion

    private async Task<Database> CreateDatabase(DocumentClient client, String databaseID)
    {
        Console.WriteLine();
        Console.WriteLine("******** Create Database *******");

        var databaseDefinition = new Database { Id = databaseID };
        var result = await client.CreateDatabaseIfNotExistsAsync(databaseDefinition);
        var database = result.Resource;

        Console.WriteLine(" Database Id: {0}; Rid: {1}", database.Id, database.ResourceId);
        Console.WriteLine("******** Database Created *******");

        return database;
    }

    private DocumentCollection GetDocumentCollection(DocumentClient client, String collectionID)
    {
        var collections = client.CreateDocumentCollectionQuery(database.CollectionsLink,
            "SELECT * FROM c WHERE c.id = '" + collectionID + "'").AsEnumerable();
        if (collections.Count() > 0)
            return collections.First();

        return null;
    }

    private async Task QueryDocumentsWithPaging(DocumentClient client)
    {
        Console.WriteLine();
        Console.WriteLine("**** Query Documents (paged results) ****");
        Console.WriteLine();
        Console.WriteLine("Querying for all documents");

        var sql = "SELECT * FROM c";
        var query = client.CreateDocumentQuery(collection.SelfLink, sql).AsDocumentQuery();

        while (query.HasMoreResults)
        {
            var documents = await query.ExecuteNextAsync();

            foreach (var document in documents)
            {
                Console.WriteLine(" Id: {0}; Name: {1};", document.id, document.name);
            }
        }

        Console.WriteLine();
    }

    private Database GetDatabase(DocumentClient client, String databaseID)
    {
        Console.WriteLine();
        Console.WriteLine("******** Get Database ********");

        // Query the account's databases by id; without this filter the first
        // database in the account would be returned regardless of databaseID
        var databases = client.CreateDatabaseQuery()
            .Where(db => db.Id == databaseID)
            .AsEnumerable()
            .ToList();

        foreach (var database in databases)
        {
            Console.WriteLine(" Database Id: {0}; Rid: {1}", database.Id, database.ResourceId);
            return database;
        }

        return null;
    }

    private async Task<DocumentCollection> CreateCollection(DocumentClient client, string collectionId, string offerType = "S1")
    {
        Console.WriteLine();
        Console.WriteLine("**** Create Collection {0} in {1} ****", collectionId, database.Id);

        var collectionDefinition = new DocumentCollection { Id = collectionId };
        var options = new RequestOptions { OfferType = offerType };
        var result = await client.CreateDocumentCollectionAsync(database.SelfLink, collectionDefinition, options);
        var collection = result.Resource;

        Console.WriteLine("Created new collection");
        //ViewCollection(collection);

        return collection;
    }

    #region DifferentWaysOfDoingThings

    private async Task<Document> CreateDocuments2(DocumentClient client, byte[] fileData)
    {
        Console.WriteLine();
        Console.WriteLine("**** Create Documents ****");
        Console.WriteLine();

        // Documents can also be defined as plain dynamic/anonymous objects
        dynamic document1Definition = new
        {
            name = "New Customer 1",
            address = new
            {
                addressType = "Main Office",
                addressLine1 = "123 Main Street",
                location = new
                {
                    city = "Brooklyn",
                    stateProvinceName = "New York"
                },
                postalCode = "11229",
                countryRegionName = "United States"
            },
            fileDataBinary = fileData
        };

        Document document1 = await CreateDocument2(client, document1Definition);
        Console.WriteLine("Created document {0} from dynamic object", document1.Id);
        Console.WriteLine();

        return document1;
    }

    private async Task<Document> CreateDocument2(DocumentClient client, object documentObject)
    {
        var result = await client.CreateDocumentAsync(collection.SelfLink, documentObject);
        var document = result.Resource;

        Console.WriteLine("Created new document: {0}\r\n{1}", document.Id, document);
        return document;
    }

    #endregion
}

Ethical Hacking: Terminology – Part 1

I’ve started a new course on ethical hacking to get a better understanding of the internet, software security, personal security etc.

I’ll post a series of posts where I will write down my notes on what I’ve learned.

I’ll start today with some basic terminology:

White Hat Hacker: Hackers who hack to help others; legal and ethical.

Black Hat Hacker: Unethical and illegal activities.

Grey Hat Hacker: Somewhere between white and black hat.

Footprinting: Information gathering on your target: figuring out network-related information or software-related details, or getting information from real-world things or people. General information gathering in regard to your chosen target.

DoS (just you): Denial of Service. One person performs more requests than the server can handle, to make the server crash. Servers can handle only a certain amount of requests, and the requests that do not fit into the request pool limit are dropped. If the attack comes from a single location/machine, it should normally not succeed.

DDoS (multiple people): Distributed Denial of Service. With multiple computers/machines performing the attack, it is much harder for the software to know whom to kick out.

The attack is not hard to do but the preparation is hard. You need multiple machines, and to get them you usually have to infect other computers to create a bot farm of machines.

RAT: Remote Administration Tools. For DDoS attacks you need software that can be distributed onto other computers. This gives you control of a computer and allows you to hide your identity. The operations are not visible to a normal user; you can even hide them so that they do not show in normal operating system diagnostic tools.

FUD (anti-virus cannot detect it): Fully Undetectable. Also needed for DDoS attacks; not labeled as malicious by anti-virus programs.

Phishing: Applying a bait and someone acts on it. Example: you get an email from someone and you click on it. Either it uploads something malicious or you do something that compromises your data or security.

Usually these are done so that the links look authentic, but once you click on them you are redirected to some other server, which is not the one you would expect.

An easy way to spot these kinds of addresses is to look at the address. If it is not an HTTPS address then you are probably dealing with a false address; HTTPS addresses are much harder to fake.

SQL Injections: Passing SQL queries into HTTP requests, allowing SQL commands to run on a server to get or alter data that is not otherwise intended to be seen or used.

VPN: Virtual Private Network. Routing and encrypting traffic between you and the VPN server/provider; a way of anonymizing yourself.

There is no real easy way to identify you unless the VPN provider gives up your identity.

Proxy: A less reliable way of staying anonymous. You could route your traffic through many proxies, but the more proxies you have the harder it is to add new ones, mostly because of internet speed limitations: there is not enough available bandwidth, and it will slow down your actions.

You can use free proxies and you can use paid proxies, but paid ones leave a trace of who you are.

Tor: Open source; another way to hide your identity. Faster than proxies but slower than VPNs. Routes traffic through different routes, routers and places to hide your trace.

There is a very high chance of staying hidden (99.99%); there are tools and ways to find you, but it is highly unlikely.

VPS: Virtual Private Server. A "security layer", for example a virtual machine inside an actual machine that serves as a database server for your web server. This is done so that the database is not accessible from the outside directly.

In this way you can be specific about who can access that virtual machine, and from where.

Key Loggers: Tools that are used to extract information from a machine. These need to be deployed to a machine, where the tool gathers keystrokes and sends that information to a location for analysis.

Key loggers can extract existing information as well; you can modify the settings of a key logger (what, where and how to act), take screenshots, use a camera on a device, a microphone, etc.

Terminal: An interface to control your operating system. GUI tools are not nearly as powerful as terminal tools.

Most hacking tools are designed for the terminal. Once you know how to do it in the terminal, you'll know how to do it in the GUI.

Firewall: A firewall is configured through iptables commands (see the example after this list).

The Linux firewall is open source and it has a HUGE amount of options. On Windows, by default you have some of these options, but you will need to buy some package or application to get more.

Root Kit: A rootkit is a collection of computer software, typically malicious, designed to enable access to a computer or areas of its software that would not otherwise be allowed (for example, to an unauthorized user), and it often masks its existence or the existence of other software.

Reverse-shells: There are thousands of reverse-shells. You have a program that infects another device; that program opens a reverse connection from that device back to you. Therefore, you can keep on controlling the external device.

Usually you need to break through a router first and reconfigure it to give you more access to a network and its machines.
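
As a small illustration of iptables rules (a hedged sketch; the ports are arbitrary examples, not from the course):

 # Allow inbound SSH, but drop outside access to the database port
 iptables -A INPUT -p tcp --dport 22 -j ACCEPT
 iptables -A INPUT -p tcp --dport 5432 -j DROP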

C# Parametrized Property

I am currently working on a project where I have to convert some VB.NET code to C#. One of the problems is that some of VB's features are not supported in C#, like parameterized properties.

I also wanted to be able to create C# code that is usable from VB in the same way as before. Since parameterized properties are not supported in C#, the first thought was to create getter and setter functions, but this is clunky in VB and breaks old code that assumes access to a variable/data in a certain manner.

After some wondering and not accepting defeat by C# I came up with a solution.

It requires the following steps:

  1. Create a new class that represents the property
  2. Use the this keyword with the [] operator (an indexer) to define a get and a set accessor; the class will function as the property itself
  3. To use functionality from the parent class, provide the property class with a reference to the parent class. After this, you can access the parent class's functionality as long as its visibility is set to public.

Here is the code:


public class PropertyName
{
    private ParentClass refParentClass = null;

    public PropertyName(ParentClass parentClass)
    {
        if (parentClass == null)
            throw new Exception("parent class parameter can not be null");

        this.refParentClass = parentClass;
    }

    public string this[string path, String DefaultValue = ""]
    {
        get
        {
            ParentClass item = null;
            item = this.refParentClass.LocateItem(path, false);
            if (item == null)
            {
                return DefaultValue;
            }
            else if (string.IsNullOrEmpty(item.ItemValue))
            {
                return DefaultValue;
            }
            else
            {
                return item.ItemValue;
            }
        }
        set
        {
            ParentClass item = null;
            item = this.refParentClass.LocateItem(path, true);
            item.ItemValue = value;
        }
    }
}

This is then how you use it:

  1. Create a declaration of the property class as a property itself
  2. Instantiate it during the ParentClass constructor and pass the parent as reference

public class ParentClass
{
    public PropertyName Value { get; set; }

    private void InitializeParametrizedProperties()
    {
        this.Value = new PropertyName(this);
    }

    public ParentClass()
    {
        this.InitializeParametrizedProperties();
    }
}
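
For illustration, using the property then looks like this (a sketch assuming ParentClass also exposes the LocateItem method and ItemValue member used above):

 var parent = new ParentClass();
 // Set: reads like a VB parameterized property assignment
 parent.Value["some/path"] = "hello";
 // Get: the second indexer argument is the default value
 string text = parent.Value["some/path", "default"];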

By the way, nothing stops you from overloading the this[] definition to accept different kinds of parameters and return values. Just be careful with identical parameter definitions; the compiler won't know which one to call.