Create an index for each CloudWatch log stream

  1. Go to AWS Lambda and find the Elasticsearch streaming function associated with your target ES domain. The name of the function should start with: LogsToElasticsearch_
  2. Then in this JS file, search for the line of code that generates the index name for each log entry pushed to ES. It lives in a function named: function transform(payload) {…}
  3. In there, find the line that creates the index: var indexName = [ … ]
  4. Change it to the following (NOTICE: the index name must be in lower case):
    var indexName = [
        'cwl-' + payload.logStream.toLowerCase() + '-' + timestamp.getUTCFullYear(), // year
        ('0' + (timestamp.getUTCMonth() + 1)).slice(-2), // month
        ('0' + timestamp.getUTCDate()).slice(-2) // day
    ].join('.');
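For example, a log stream named My-Stream (a made-up name, purely to illustrate the pattern) processed on 15 February 2018 would produce the index name cwl-my-stream-2018.02.15.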

Helper Scripts for Docker, git and Java developers

Hi,

Here are some of my own scripts that I use when developing to ease my life:

Building a Java Gradle project, then building a docker image and pushing it

./gradlew test
if [ $? -eq 0 ]; then
    echo Tests OK
    ./gradlew clean
    ./gradlew generateGitProperties
    ./gradlew bootRepackage
    ./cleandocker.sh
    docker rmi {your image name + tag}
    docker build -t {your image name + tag} .
    ./dockerregistrylogin.sh
    docker push {your image name + tag}
else
    echo Tests Failed
    exit 1
fi

Clean docker from all running containers and stopped ones

echo "Stoping all containers"
docker stop $(docker ps -a -q)
echo "Removing all containers"
docker rm $(docker ps -a -q)
echo "Starting dev environment"

Commit your code to git after gradle tests are successful

./gradlew test
if [ $? -eq 0 ]; then
    echo Tests OK
    git add .
    git commit -m "$1"
    git push
else
    echo Tests Failed
    exit 1
fi
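If you save this script as, say, commit.sh (the file name is just an example), the commit message goes in as the first argument:

./commit.sh "Fix login validation"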

Merge your dev branch into master

git checkout master
git pull origin master
git merge dev -m "$1"
git push origin master
git checkout dev
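Here too the merge commit message is passed as the first argument, so a hypothetical call would look like:

./merge.sh "Merge dev into master"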

This one is for AWS developers: run it to get the AWS ECR docker login

#Notice: To use a certain profile for login define additional profiles like this: aws configure --profile awscli

function doAwsDockerRegistryLogin()
{
    local myresult=$(aws ecr get-login --no-include-email --region eu-central-1 --profile awscli)
    echo "$myresult"
}

result=$(doAwsDockerRegistryLogin)   # or result=`doAwsDockerRegistryLogin`
eval $result

 

AWS ECS and Bitbucket Pipelines: Deploy your docker application

Hi,

Here are some tips and tricks on how to update an existing AWS ECS deployment.

NOTICE: This post assumes that you have some knowledge of AWS, scripting, docker and Bitbucket.

The scripts and guide below will do the following:

  1. Clone external libraries needed for your project (assumes that you have a multi-project application)
  2. Build your application
  3. Build the docker image
  4. Push the docker image into Elastic Container Service own container registry
  5. Stop all tasks in a cluster to force the associated services to restart the tasks
    1. I use this method of deploying to avoid creating a new task definition for every deployment, which I find unnecessary. My approach is to have a dedicated deployment docker tag that your target service's task definitions use. That way you maintain a single known-good task definition; if needed, update the task definition and service later. This deployment approach cares about nothing except the docker image and the cluster name.
  6. Then test your API or web app in the cloud with a 3rd-party tool; in this case I am using a Postman collection, Postman environment settings and Newman

The above steps can be performed automatically when you push changes to a branch, or manually from the commit view or branches view (on the row with the branch or commit id, hover your mouse over “…” to get the option “Run pipeline for a branch” and select the manual pipeline option).

Needed steps:

  1. Create/have IAM access keys for deployment into ECS and ECR from Bitbucket.
  2. Generate SSH keys in the Bitbucket repository where you plan to run your pipeline.
  3. If you have any dependent Bitbucket repositories, copy the public key from Step 2 into that repository's settings.
  4. Then, in the primary repository from which you plan to deploy, set the environment variables needed for the deployment.
  5. Create your pipeline with the example Bitbucket pipeline definition file and the supplementary scripts.

Step 1: AWS Access

You will need an access key/secret for AWS with the following minimum policy settings:


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "ecs:ListTasks",
                "ecr:CompleteLayerUpload",
                "ecr:GetAuthorizationToken",
                "ecs:StopTask",
                "ecr:UploadLayerPart",
                "ecr:InitiateLayerUpload",
                "ecr:BatchCheckLayerAvailability",
                "ecr:PutImage"
            ],
            "Resource": "*"
        }
    ]
}

Step 2: SSH Keys for Bitbucket

More info here on how to generate a key:

https://confluence.atlassian.com/bitbucket/use-ssh-keys-in-bitbucket-pipelines-847452940.html

Notice: Remember to get the public key

Step 3: Other repositories (Optional)

From bitbucket:

If you want your Pipelines builds to be able to access a different Bitbucket repository (other than the repo where the builds run):

  1. Add an SSH key to the settings for the repo where the build will run, as described in Step 1 above (you can create a new key in Bitbucket Pipelines or use an existing key).
  2. Add the public key from that SSH key pair directly to settings for the other Bitbucket repo (i.e. the repo that your builds need to have access to).
    See Use access keys for details on how to add a public key to a Bitbucket repo.

Step 4: Setting up environmental variables

APPIMAGE_TESTENV_CLUSTER: The cluster to which the docker image is deployed; in this case a test environment that is triggered manually

APPIMAGE_DEVENV_CLUSTER: A dev target cluster that is associated with the master branch and deploys automatically

APPIMAGE_NAME: The docker image name (Notice: must match the one in your service -> task definition)

APPIMAGE_TAG: The docker image tag (Notice: must match the one in your service -> task definition)

AWS_ACCESS_KEY_ID (SECURE)

AWS_SECRET_ACCESS_KEY (SECURE)

AWS_DEFAULT_REGION: The region where your cluster is located

REGISTRYNAME: The ECR registry name where the image is to be pushed
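As a rough sketch, the variables might hold values like these (everything below is hypothetical and only illustrates the expected format):

APPIMAGE_TESTENV_CLUSTER=my-test-cluster
APPIMAGE_DEVENV_CLUSTER=my-dev-cluster
APPIMAGE_NAME=my-web-app
APPIMAGE_TAG=deploy
AWS_DEFAULT_REGION=eu-central-1
REGISTRYNAME=123456789012.dkr.ecr.eu-central-1.amazonaws.com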

Step 5: Bitbucket Pipeline

Pipeline definitions

The sample pipeline script has two options:
* Custom/manual deployment in the custom section of the script
* Branches/automatic deployment in the branches section of the script

# This is a sample build configuration for Java (Gradle).
# Check our guides at https://confluence.atlassian.com/x/zd-5Mw for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
image: atlassian/default-image:latest # Include Java support
options:
  max-time: 15 # 15 minutes in case something hangs up
  docker: true # Include Docker support
pipelines:
  custom: # Pipelines that can only be triggered manually
    test-env:
      - step:
          caches:
            - gradle
          script:
            # Modify the commands below to build your repository.
            # You must commit the Gradle wrapper to your repository
            # https://docs.gradle.org/current/userguide/gradle_wrapper.html
            - git clone {your external repository}
            - ls
            - bash ./scripts/bitbucket/buildproject.sh
            # Install AWS CLI and configure it
            #----------------------------------------
            - apt-get update
            - apt-get install -y jq
            - curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
            - unzip awscli-bundle.zip
            - ./awscli-bundle/install -b ~/bin/aws
            - export PATH=~/bin:$PATH
            #----------------------------------------
            - bash ./scripts/bitbucket/awsdev-dockerregistrylogin.sh
            - export IMAGE_NAME=$REGISTRYNAME/$APPIMAGE_NAME:$APPIMAGE_TAG
            - docker build -t $IMAGE_NAME .
            - docker push $IMAGE_NAME
            # This will stop all tasks in the AWS Cluster, by doing this the AWS Service will start the defined task definitions as new tasks.
            # NOTICE: This approach needs task definitions attached to services and no manually started tasks.
            - bash ./scripts/bitbucket/stopalltasks.sh $APPIMAGE_TESTENV_CLUSTER
            #----------------------------------------
            # Install Newman tool and test with postman collection and environmental settings your web app
            - npm install -g newman
            - ./scripts/newman-API-tests/run-testenv-tests.sh
            #----------------------------------------
    prod-env:
      - step:
          script:
            - echo "Manual triggers for deployments are awesome!"
  branches:
    master:
      - step:
          caches:
            - gradle
          script:
            #----------------------------------------
            # Modify the commands below to build your repository.
            # You must commit the Gradle wrapper to your repository
            # https://docs.gradle.org/current/userguide/gradle_wrapper.html
            - git clone {your external repository}
            - ls
            - bash ./scripts/bitbucket/buildproject.sh
            # Install AWS CLI and configure it
            #----------------------------------------
            - apt-get update
            - apt-get install -y jq
            - curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
            - unzip awscli-bundle.zip
            - ./awscli-bundle/install -b ~/bin/aws
            - export PATH=~/bin:$PATH
            #----------------------------------------
            # Build and install the newest docker image
            - bash ./scripts/bitbucket/awsdev-dockerregistrylogin.sh
            - export IMAGE_NAME=$REGISTRYNAME/$APPIMAGE_NAME:$APPIMAGE_TAG
            - docker build -t $IMAGE_NAME .
            - docker push $IMAGE_NAME
            #----------------------------------------
            # This will stop all tasks in the AWS Cluster, by doing this the AWS Service will start the defined task definitions as new tasks.
            # NOTICE: This approach needs task definitions attached to services and no manually started tasks.
            - bash ./scripts/bitbucket/stopalltasks.sh $APPIMAGE_DEVENV_CLUSTER
            #----------------------------------------
            # Install Newman tool and test with postman collection and environmental settings your web app
            - npm install -g newman
            - ./scripts/newman-API-tests/run-devenv-tests.sh
            #----------------------------------------

Stop all AWS tasks in the cloud

#!/usr/bin/env bash

echo "Getting tasks from AWS:"
echo "Cluster: $1 Service: $2"

#For a single task
#----------------
#task=$(aws ecs list-tasks --cluster "$1" --service-name "$2" | jq --raw-output '.taskArns[0] | split("/")[1]')
#echo "Stopping task: $task"
#aws ecs stop-task --task $task --cluster "$1"
#----------------

# The service name ($2) is optional; if it is omitted, all tasks in the cluster are listed.
if [ -n "$2" ]; then
    tasks=$(aws ecs list-tasks --cluster "$1" --service-name "$2" | jq -r '.taskArns[]')
else
    tasks=$(aws ecs list-tasks --cluster "$1" | jq -r '.taskArns[]')
fi
echo "Tasks received"
for task in $tasks; do
    echo "Stopping task from AWS: $task"
    aws ecs stop-task --task "$task" --cluster "$1"
done
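A hypothetical invocation (the cluster and service names below are placeholders) would look like:

bash ./scripts/bitbucket/stopalltasks.sh my-dev-cluster my-web-service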

Build your project

echo "Rebuilding project"
#gradlew_output=$(./gradlew build);
#echo "$gradlew_output"

./gradlew test
if [ $? -eq 0 ]; then
    echo Tests OK
    ./gradlew clean
    ./gradlew bootRepackage
else
    echo Tests Failed
    exit 1 # Fail the build step so the pipeline stops here
fi

Get the AWS Login details for ECR docker login

#Notice: To use a certain profile for login define additional profiles like this: aws configure --profile awscli

function doAwsDockerRegistryLogin()
{
    local  myresult=$(aws ecr get-login --no-include-email)
    echo "$myresult"
}

result=$(doAwsDockerRegistryLogin)   # or result=`doAwsDockerRegistryLogin`
eval $result

Running API or WebApp tests with Newman and Postman

What do you need to make the tests work

  1. Create a new Postman collection
  2. Add your URLs to test
  3. Add scripts into the test tab
  4. When all the URLs in your collection are ready, export the collection from its “…” button
  5. (Optional) Then create environment settings that you can export and use with newman

Bash script to run the newman tests

sleep 1m # Force a wait to make sure that all AWS services, your app, LBs etc are all loaded up and running

echo $DEVENV_URL
until curl --output /dev/null --silent --head --fail --insecure "$DEVENV_URL"; do
    printf '.'
    sleep 5
done

echo "Starting newman tests"
newman run {your postman collection}.postman_collection.json --environment "{your postman collection}.postman_environment.json" --insecure --delay-request 10

Postman scripts example

Retrieving a token from the body and inserting it into an environmental variable

var jsonData = JSON.parse(responseBody);

console.log("TOKEN:" + jsonData.token);

var str_array = jsonData.token.split('.');
for (var i = 0; i < str_array.length - 1; i++) {
    console.log("Array Item: " + i);
    console.log(str_array[i]);
    console.log(CryptoJS.enc.Utf8.stringify(CryptoJS.enc.Base64.parse(str_array[i])));
}
postman.setEnvironmentVariable("token", jsonData.token);

Testing a response for success and body content

// example using pm.response.to.be*
pm.test("response must be valid and have a body", function () {
    // assert that the status code is 200
    pm.response.to.be.ok; // info, success, redirection, clientError, serverError are other variants
    // assert that the response has a valid JSON body
    pm.response.to.be.withBody;
    pm.response.to.be.json; // this assertion also checks that a body exists, so the check above is not strictly needed
});

console.log("BODY:" + responseBody);

Links

http://2mins4code.com/2017/11/08/building-a-cicd-environment-with-bitbucket-pipelines-docker-aws-ecs-for-an-angular-app/

https://bitbucket.org/awslabs/amazon-ecs-bitbucket-pipelines-python/overview

https://confluence.atlassian.com/bitbucket/deploy-to-amazon-ecs-892623902.html

https://bitbucket-pipelines.prod.public.atl-paas.net/validator

https://confluence.atlassian.com/bitbucket/configure-bitbucket-pipelines-yml-792298910.html

https://confluence.atlassian.com/bitbucketserver/getting-started-with-bitbucket-server-and-aws-776640193.html

https://confluence.atlassian.com/bitbucket/java-with-bitbucket-pipelines-872013773.html

https://confluence.atlassian.com/bitbucket/use-ssh-keys-in-bitbucket-pipelines-847452940.html

https://confluence.atlassian.com/bitbucket/environment-variables-794502608.html

https://docs.aws.amazon.com/cli/latest/reference/index.html#cli-aws

https://confluence.atlassian.com/bitbucket/run-pipelines-manually-861242583.html

https://www.npmjs.com/package/newman

http://blog.getpostman.com/2017/10/25/writing-tests-in-postman/

SharePoint: change document set and item content types to a new content type

I’ll put it simply:

This is a PowerShell script that you can use to change the content type of a library or list to another one. The script distinguishes between document sets and regular library or list items.


param (
    [string]$WebsiteUrl = "http://portal.spdev.com/",
    [string]$OldCTName = "DSTestCT",
    [string]$NewCTName = "DSTestCT"
)

if ( (Get-PSSnapin -Name Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue) -eq $null )
{
    Add-PSSnapin Microsoft.SharePoint.PowerShell
}

function Reset-ListContentType ($WebUrl, $ListName, $OldCTName, $NewCTName)
{
    $web = $null
    try
    {
        $web = Get-SPWeb $WebUrl

        $list = $web.Lists.TryGetList($ListName)
        $oldCT = $list.ContentTypes[$OldCTName]

        $isChildOfCT = $list.ContentTypes.BestMatch($rootNewCT.ID).IsChildOf($rootNewCT.ID);
        if ($oldCT -ne $null -and $isChildOfCT -eq $false)
        {
            $hasOldCT = $true
            $isFoldersCTReseted = Reset-SPFolderContentType -web $web -list $list -OldCTName $OldCTName -NewCTName $NewCTName
            Reset-SPFileContentType -web $web -list $list -OldCTName $OldCTName -NewCTName $NewCTName
            Remove-ListContentType -web $web -list $list -OldCTName $OldCTName -NewCTName $NewCTName
            if ($hasOldCT -eq $true)
            {
                Add-ListContentType -web $web -list $list -OldCTName $OldCTName -NewCTName $NewCTName
                if ($isFoldersCTReseted -eq $true)
                {
                    Set-SPFolderContentType -web $web -list $list -OldCTName $OldCTName -NewCTName $NewCTName
                }
            }
        }
    }
    catch
    {
        Write-Host "Error processing list: $_" -ForegroundColor Red
    }
    finally
    {
        if ($web)
        {
            $web.Dispose()
        }
    }
}

function Remove-ListContentType ($web, $list, $OldCTName, $NewCTName)
{
    $oldCT = $list.ContentTypes[$OldCTName]

    $isChildOfCT = $list.ContentTypes.BestMatch($oldCT.ID).IsChildOf($oldCT.ID);

    if ($isChildOfCT -eq $true)
    {
        $list.ContentTypes.Delete($oldCT.ID)
    }
    $web.Dispose()

    return $isChildOfCT
}

function Add-ListContentType ($web, $list, $OldCTName, $NewCTName)
{
    $list.ContentTypes.Add($rootNewCT)

    $web.Dispose()
}

function Reset-SPFolderContentType ($web, $list, $OldCTName, $NewCTName)
{
    # Get the relevant content type objects
    $isFoldersCTReseted = $false

    $isChildOfCT = $list.ContentTypes.BestMatch($rootNewCT.ID).IsChildOf($rootNewCT.ID);

    $oldCT = $list.ContentTypes[$OldCTName]
    $folderCT = $list.ContentTypes["Folder"]
    $newCT = $rootNewCT

    $newCTID = $newCT.ID

    # Check if the values specified for the content types actually exist on the list
    if (($oldCT -ne $null) -and ($newCT -ne $null))
    {
        $list.Folders | ForEach-Object {
            if ($_.ContentType.ID.IsChildOf($rootNewCT.ID) -eq $false -and $_.ContentType.ID.IsChildOf($oldCT.ID) -eq $true -and $_.Folder.ProgID -eq "Sharepoint.DocumentSet")
            {
                Write-Host "Found a document set: " $_.Name "Processing document set"
                $item = $list.GetItemById($_.ID);
                $item["ContentTypeId"] = $folderCT.Id
                $item.Update()
                $isFoldersCTReseted = $true
            }
        }
    }

    $web.Dispose()

    return $isFoldersCTReseted
}

function Set-SPFolderContentType ($web, $list, $OldCTName, $NewCTName)
{
    # Get the folder and target content type objects
    $folderCT = $list.ContentTypes["Folder"]
    $newCT = $list.ContentTypes[$NewCTName]

    # Check if the values specified for the content types actually exist on the list
    if (($newCT -ne $null))
    {
        $list.Folders | ForEach-Object {
            if ($_.ContentType.ID.IsChildOf($newCT.ID) -eq $false -and $_.ContentType.ID.IsChildOf($folderCT.ID) -eq $true -and $_.Folder.ProgID -eq "Sharepoint.DocumentSet")
            {
                $item = $list.GetItemById($_.ID);
                $item["ContentTypeId"] = $newCT.Id
                $item.Update()
            }
        }
    }

    $web.Dispose()
}


function Reset-SPFileContentType ($web, $list, $OldCTName, $NewCTName)
{
    # Get the relevant content type objects
    $isChildOfCT = $list.ContentTypes.BestMatch($rootNewCT.ID).IsChildOf($rootNewCT.ID);

    $oldCT = $list.ContentTypes[$OldCTName]
    $folderCT = $list.ContentTypes["Folder"]
    $newCT = $rootNewCT

    $newCTID = $newCT.ID

    # Check if the values specified for the content types actually exist on the list
    if (($oldCT -ne $null) -and ($newCT -ne $null))
    {
        $list.Folders | ForEach-Object {
            if ($_.ContentType.ID.IsChildOf($rootNewCT.ID) -eq $false -and $_.ContentType.ID.IsChildOf($oldCT.ID) -eq $true)
            {
                $_["ContentTypeId"] = $folderCT.Id
                $_.Update()
            }
        }
        # Go through each item in the list
        $list.Items | ForEach-Object {
            Write-Host "Item present CT ID:" $_.ContentType.ID
            Write-Host "CT ID to change from:" $oldCT.ID
            Write-Host "New CT ID to change to:" $rootNewCT.ID

            # Check if the item content type currently equals the old content type specified
            if ($_.ContentType.ID.IsChildOf($rootNewCT.ID) -eq $false -and $_.ContentType.ID.IsChildOf($oldCT.ID) -eq $true)
            {
                # Check the check-out status of the file
                if ($_.File.CheckOutType -eq "None")
                {
                    # Change the content type association for the item
                    $item = $list.GetItemById($_.ID);
                    $item.File.CheckOut()
                    Write-Host "Resetting content type for file: " $_.Name "from: " $oldCT.Name "to: " $newCT.Name

                    $item["ContentTypeId"] = $newCTID
                    $item.UpdateOverwriteVersion()
                    Write-Host "Item changed CT ID:" $item.ContentType.ID
                    $item.File.CheckIn("Content type changed to " + $newCT.Name, 1)
                }
                else
                {
                    Write-Host "File" $_.Name "is checked out to" $_.File.CheckedOutByUser.ToString() "and cannot be modified"
                }
            }
            else
            {
                Write-Host "File" $_.Name "is associated with the content type" $_.ContentType.Name "and shall not be modified"
            }
        }
    }
    else
    {
        Write-Host "One of the content types specified has not been attached to the list" $list.Title
        return
    }

    $web.Dispose()
}

$web = Get-SPWeb $WebsiteUrl
$rootWeb = $web.Site.RootWeb;
$rootNewCT = $rootWeb.AvailableContentTypes[$NewCTName]

foreach ($list in $web.Lists) {
    Write-Host $list.BaseType
    if ($list.Hidden -eq $false -and $list.BaseType -eq "DocumentLibrary")
    {
        Write-Host "Processing list: " $list.Title
        Reset-ListContentType -WebUrl $WebsiteUrl -ListName $list.Title -OldCTName $OldCTName -NewCTName $NewCTName
    }
}

$web.Dispose()

 

Tutorial guide: Learn Python basics

Hi, ok, here are my own notes on the basics of Python. I am working on a Udemy course on data science and got excited about Python (it has been a while since I’ve done something cool with Python, so I am excited 🙂). This is based on the Udemy course https://www.udemy.com/data-science-and-machine-learning-with-python-hands-on

Here it goes; hope it helps.

You can run Python code as scripts or in a Python notebook.

Running a script from a command prompt:

python "your script file location and name"

 

Basic basics

 

List definition:

listOfNumbers = [1, 2, 3, 4, 5, 6]

Iteration through a list of items:

for number in listOfNumbers:
    print number,
    if (number % 2 == 0):
        print "is even"
    else:
        print "is odd"

 

#Notice: In Python you separate blocks of code by whitespace (indentation), not with the { and } characters used in, say, Java or C#. Pay attention to your formatting and indentation.

#Notice: A trailing comma (,) in a print statement tells Python to continue printing on the same line, within the same block of code. See the example above.

#Notice: Colons (:) introduce clauses.

 

Importing modules

 

import numpy as np

A = np.random.normal(25.0, 5.0, 10)

print A

 

Data structures

 

Lists

 

Defining a list (Notice: the brackets [] define a mutable list):

x = [1, 2, 3, 4, 5, 6]

Printing the length of a list:

print len(x)

Subsetting lists:

First 3 elements (counting starts from zero):

x[:3]

Elements from index 3 onwards (here, the last 3):

x[3:]

The last two elements, counting back from the end of the list:

x[-2:]

 

Extend the list with a new list:

x.extend([7, 8])

Add a new item to the list:

x.append(9)

Python is a weakly typed language, which allows you to put whatever you want in a list:

 

Creating a multidimensional list:

y = [10, 11, 12]

listOfLists = [x, y]

listOfLists

Sort a list (ascending):

z = [3, 2, 1]

z.sort()

Sort a list (descending):

z.sort(reverse=True)

 

Tuples

 

Tuples are just like lists, but immutable.

You cannot extend them, append to them, sort them, etc. You cannot change them.

 

Example:

#Tuples are just immutable lists. Use () instead of []

x = (1, 2, 3)

len(x)

y = (4, 5, 6)

listOfTuples = [x, y]

 

A common tuple usage in data science or data processing is to assign variables to input data as it is read in.

This example creates variables with values from a "source" where the data is split at the comma.

#Notice: It is important to have the same number of variables in your tuple as you are retrieving/assigning from the data "source".

(age, income) = "32,120000".split(',')

print age

print income

 

Dictionaries

 

A way to define a "lookup" table:

# Like a map or hash table in other languages

captains = {}

captains["Enterprise"] = "Kirk"

captains["Enterprise D"] = "Picard"

captains["Deep Space Nine"] = "Sisko"

captains["Voyager"] = "Janeway"

print captains["Voyager"]

print captains.get("Enterprise")

for ship in captains:
    print ship + ": " + captains[ship]

If a key is not found, the result will be None:

print captains.get("NX-01")

 

Functions

 

Functions let you repeat a set of operations over and over again with different parameters.

Notice: use def to define a function, the () characters to define the parameters, and the return keyword to return a value from the function.

def SquareIt(x):
    return x * x

print SquareIt(2)

 

Pass functions around as parameters

#Notice: You have to make sure that what you type is correct, because there is no compile-time checking in Python. Typing the wrong function name will cause a runtime error.

#You can pass functions around as parameters

def DoSomething(f, x):
    return f(x)

print DoSomething(SquareIt, 3)

 

Lambda functions

This is functional programming: you can inline a function into a function call.

#Notice: The lambda keyword tells Python that you are defining an inline function to be used right where you put it. In the example below, the parameter named x is followed by a colon (:) character, followed by what the function actually does. To pass multiple parameters to a lambda function, separate the variables with the comma (,) character.

#Lambda functions let you inline simple functions

print DoSomething(lambda x: x * x * x, 3)

 

Boolean Expressions

 

The value is False:

print 1 == 3

The value is True (the or keyword checks whether either operand is true):

print (True or False)

Check if something is a certain value (the is keyword tests object identity rather than equality):

print 1 is 3

 

If else clauses

 

if 1 is 3:
    print "How did that happen?"
elif 1 > 3:
    print "Yikes"
else:
    print "All is well with the world"

 

Looping

 

Normal looping

for x in range(10):
    print x,

Continue and Break

#Process only the first 5 items, but skip the number one

for x in range(10):
    if (x is 1):
        continue
    if (x > 5):
        break
    print x,

 

While

 

x = 0
while (x < 10):
    print x,
    x += 1

 

 

Hiding and showing HTML elements with ASP .NET

Hi,

Ok, this is probably quite a simple thing to do and maybe everyone knows it, BUT just in case… :).

Well, you need two things: C# code (or some other way to output HTML) to generate an anchor link that shows or hides a given HTML element.

In this case, let’s say that you have a DIV element. As its ID value you assign the client ID which your control receives from ASP .NET (notice that it is not the UniqueID or the ID but the ClientID, which is HTML friendly).

Then you would do something like the code below. You call a JavaScript function which receives as parameters the full ID of the DIV element and the start of that full ID.

The idea is to be able to identify all of the elements which share this same hide-and-show logic, while still being able to identify a single item.

Example ID:
id = "cars_ASP.NETUserControlClientID_carnumber"

The first part (cars_ASP.NETUserControlClientID_) identifies ALL of the items which need to be processed at the same time, while the final part (carnumber) identifies a single item.

The C# code below creates an anchor that calls the JavaScript function shown after it. Notice the href definition "javascript:void(0)"; this avoids a page jump in certain browsers and browser versions.

String.Format("<a href=\"javascript:void(0)\" onclick=\"showInfo('{0}','{2}')\">{1}</a>", "HTML Element ID to find unique counter or ID" + this.ClientID, "The link title/name", this.ClientID);

function showInfo(id, controlUniqueID) {
    // Search for all items and hide them
    $('*[id*=' + controlUniqueID + ']').each(function () {
        $(this).hide();
    });

    // Next, toggle only the item which you want to see
    if ($('#' + id).css('display') == 'none') {
        $('#' + id).show();
    }
    else {
        $('#' + id).hide();
    }
}

Export SharePoint solution packages from your environment

Hi,

I created a script that might help someone out there who wants to back up their SharePoint solution packages. The script can export specific solutions, or all of them if no solution names are passed to it.

Sample script call for specific solutions:

.\script.ps1 -Solutions MYWSPNUM1.wsp,MYWSPNUM2.wsp

Or if you want to export all available solutions then simply call the script without any parameters:

.\script.ps1

# Get script params, in this case solution names for the -Solutions attribute. Separate solution names with the , character
param([String[]] $Solutions)

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

#==================================================================================
# Functions definitions
#==================================================================================

#———————————————————————————-
# This function will export all available solutions in your environment
#———————————————————————————-
function ExportAllSolutions
{
    $location = Get-Location
    Write-Host "Exporting all available solutions to: $location"
    foreach ($solution in Get-SPSolution)
    {
        $id = $solution.SolutionID
        $title = $solution.Name
        $filename = $solution.SolutionFile.Name

        try {
            $solution.SolutionFile.SaveAs("$(Get-Location)\$filename")
            Write-Host "Exported solution package - '$title'" -ForegroundColor Green
        }
        catch
        {
            Write-Host "Error with solution package - '$title': $_" -ForegroundColor Red
        }
    }
}

#———————————————————————————-
# This Function will export a single solution by solution name
#———————————————————————————-
function ExportSolution
{
    param([String] $solutionName)
    $location = Get-Location
    Write-Host "Exporting solution to: $location"
    $solution = Get-SPSolution -Identity $solutionName

    $id = $solution.SolutionID
    $title = $solution.Name
    $filename = $solution.SolutionFile.Name

    try {
        $solution.SolutionFile.SaveAs("$(Get-Location)\$filename")
        Write-Host "Exported solution package - '$title'" -ForegroundColor Green
    }
    catch
    {
        Write-Host "Error with solution package - '$title': $_" -ForegroundColor Red
    }
}

#==================================================================================
# End of function definitions
#==================================================================================

#==================================================================================
# Main operations for the script
#==================================================================================

# If solution names are passed, then export only those solutions
if ($Solutions.count -gt 0)
{
    foreach ($solutionName in $Solutions)
    {
        Write-Host $solutionName
        ExportSolution $solutionName
    }
}
# If no solutions are passed, then export all solutions
else
{
    ExportAllSolutions
}
#==================================================================================
# End of main operations
#==================================================================================