Executing end-to-end tests in Kubernetes

March 19, 2019

As software applications transition toward a microservice architecture and platforms become more cloud native, development teams have changed how they build and test software. To deploy, scale, and manage containerized applications, organizations are turning to orchestration platforms like Kubernetes, whether their workloads run in a private, public, or hybrid cloud.

With increased infrastructure complexity and the need to deliver quality features to customers on time, automated end-to-end testing plays an important role in the continuous integration and delivery process. Let’s look at how we can execute these tests in a container within a Kubernetes cluster on Google Kubernetes Engine (GKE).

Building the Testing Container Image

We start by building our tests, which are written in TestNG using Selenium WebDriver, into a container image. The image includes all the test files, libraries, drivers, and a properties file, as well as the shell script to start the tests.

Below you’ll find some sample code that should give you a sense of how you can structure and configure your testing, including snippets from the following files:

Dockerfile:

FROM centos:7.3.1611

# Install the JDK required to run the TestNG suites
RUN yum install -y \
        java-1.8.0-openjdk \
        java-1.8.0-openjdk-devel

ENV JAVA_HOME /usr/lib/jvm/java-1.8.0-openjdk/

# Set locale to UTF-8
RUN localedef -i en_US -f UTF-8 en_US.UTF-8
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US.UTF-8
ENV LC_ALL en_US.UTF-8

# Directory the test artifacts are installed into
ENV test_dir /opt/automation

# Install automation tests
COPY ./build/libs ${test_dir}/bin
COPY ./build/lib ${test_dir}/lib
COPY ./testfiles ${test_dir}/testfiles
COPY ./build/resources ${test_dir}/resources
COPY ./build/drivers ${test_dir}/drivers
RUN chmod -R 755 ${test_dir}/drivers
COPY ./*.properties ${test_dir}/

WORKDIR ${test_dir}
USER root

CMD [ "./run-suite.sh", "TestSuite" ]
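To sanity-check the image outside the CI pipeline, you can build and run it locally once the Gradle build has produced the artifacts under build/ (the image tag and environment values below are illustrative, not from the original setup):

docker build -t gcr.io/automation/e2e-tests:local .
docker run --rm -e SELENIUM_GRID=zalenium.automation:4444 -e BROWSER_TYPE=chrome gcr.io/automation/e2e-tests:local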

run-suite.sh:

#!/bin/bash

# Resolve each comma-separated suite name passed in $1 to its TestNG XML file.
fullList=""
function listOfSuites() {
    for i in $(echo "$1" | sed "s/,/ /g")
    do
        echo "$i"
        fullPath=$(find . -type f -name "$i.xml")
        fullList="$fullList$fullPath "
    done
}
listOfSuites "$1"
echo "$fullList"

# Run the resolved TestNG suites
java -Dlog4j.configuration=resources/main/log4j.properties -cp "./lib/*:./bin/*:." \
    org.testng.TestNG $fullList

# Upload the test results to Google Cloud Storage
java -cp "./lib/*:./bin/*:." com.gcp.UploadTestData
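For example, to run two suites in a single container invocation (the suite names here are hypothetical and must match TestNG XML files baked into the image):

./run-suite.sh SmokeSuite,CheckoutSuite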

build.gradle:

buildscript { 
    repositories { maven { url "${nexus}" } } 
    dependencies { 
        classpath 'com.bmuschko:gradle-docker-plugin:3.6.2' 
    } 
} 
  
apply plugin: com.bmuschko.gradle.docker.DockerRemoteApiPlugin 
  
import com.bmuschko.gradle.docker.tasks.image.DockerBuildImage 
  
def gcpLocation = 'gcr.io' 
def gcpProject = 'automation' 
def dockerRegistryAndProject = "${gcpLocation}/${gcpProject}" 

// Provide a lazily resolved project.version to get the execution-time value,
// which may include -SNAPSHOT.

def projectVersionRuntime = "${-> project.version}-${buildNumber}" 
def projectVersionConfigurationTime = "${project.version}-${buildNumber}" 
def projectRpm = "${projectName}-${projectVersionConfigurationTime}.${arch}.rpm" 

// We want to use the branch name as part of the GCR tag, but not the raw
// branch name, so we strip out symbols and non-alphanumerics. We also strip
// the "remotes/origin" and "origin/" prefixes from the git branch text,
// since we don't care about those.

def sanitize = { input -> 
    return input.replaceAll("[^A-Za-z0-9.]", "_").toLowerCase().replaceAll("remotes_origin_", "").replaceAll("origin_", ""); 
} 
def gitbranchNameRev = 'git name-rev --name-only HEAD'.execute().text.trim() 
def gcpGitbranch = System.env.GIT_BRANCH ?: (project.hasProperty('gitbranch')) ? "${gitbranch}" : "${gitbranchNameRev}" 
def gitbranchTag = sanitize(gcpGitbranch) 
  
def projectVersionRuntimeTag = sanitize("${-> project.version}") 
def dockerTag = "${dockerRegistryAndProject}/${projectName}:${projectVersionRuntimeTag}-${buildNumber}-${gitbranchTag}-${githash}" 
def buildType = System.env.BUILD_NUMBER ? "JENKINS" : "LOCAL" 

// Create a gcpBuildVersion.properties file containing build information, for
// the build environment to pass on to upstream callers that cannot determine
// this information on their own.

task versionProp() { 
    onlyIf { true } 
    doLast { 
        new File("$project.buildDir/gcpBuildVersion.properties").text = """APPLICATION=${projectName} 
VERSION=${-> project.version} 
BUILD=${buildNumber} 
BRANCH=${gcpGitbranch} 
GIT_HASH=${githash} 
TAG_FULL=${dockerTag} 
TAG=${projectVersionRuntimeTag}-${buildNumber}-${gitbranchTag}-${githash} 
TIMESTAMP=${new Date().format('yyyy-MM-dd HH:mm:ss')} 
BUILD_TYPE=${buildType} 
  
""" 
    } 
} 

// Make sure version file generation always runs after the build:

build.finalizedBy versionProp 
task dockerPrune(type: Exec) { 
    description 'Run docker system prune --force' 
    group 'Docker' 
  
    commandLine 'docker', 'system', 'prune', '--force' 
} 
  
  
task buildDockerImage(type: DockerBuildImage) { 
    description 'Build docker image locally' 
    group 'Docker' 
    dependsOn buildRpm 
    inputDir project.buildDir 
  
    buildArgs = [ 
            'rpm': "${projectRpm}", 
            'version': "${projectVersionConfigurationTime}" 
    ] 
  
    doFirst { 

        // Copy the Dockerfile to the build directory so we can limit the
        // context provided to the Docker daemon.
        copy {
            from 'Dockerfile' 
            into "${project.buildDir}" 
        } 
  
        copy { 
            from 'docker' 
            into "${project.buildDir}/docker" 
            include "**/*jar" 
        } 
  
        println "Using the following build args: ${buildArgs}" 

        // Tag with the execution-time value of project.version, which may
        // include -SNAPSHOT.
        tag "${dockerTag}"
    } 
} 
  
task publishContainerGcp(type: Exec) { 
    description 'Publish docker image to GCP container registry' 
    group 'Google Cloud Platform' 
    dependsOn buildDockerImage 
  
    commandLine 'docker', 'push', "${dockerTag}" 
} 
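This build script assumes that properties such as nexus, projectName, buildNumber, githash, and arch are supplied externally, for example from gradle.properties or the CI environment. With those in place, building and publishing the image might look like this (the property values are illustrative):

./gradlew buildDockerImage publishContainerGcp -PbuildNumber=42 -Pgithash=abc1234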

Selenium Grid Infrastructure Setup

Our end-to-end tests use Selenium WebDriver to execute browser-related tests, and we have a scalable container-based Zalenium Selenium grid deployed in a Kubernetes cluster (you can see setup details here). You can configure the grid URL and browser in the test_common.properties file included in the test container image:

targetUrl=http://www.mywebsite.com 
#  webDriver settings 
webdriver_gridURL=http://${SELENIUM_GRID}/wd/hub 
webdriver_browserType=${BROWSER_TYPE} 
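The ${SELENIUM_GRID} and ${BROWSER_TYPE} placeholders are expected to be filled in from environment variables injected into the test container (see the Job manifest below). The substitution mechanism isn’t shown in the original setup, so here is a minimal sketch of how a test harness might resolve them when loading the file (the class and method names are illustrative):

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TestProperties {

    private static final Pattern ENV_VAR = Pattern.compile("\\$\\{([A-Z_]+)\\}");

    // Load a properties file and replace ${VAR} placeholders with values
    // from the process environment, leaving unresolved placeholders intact.
    public static Properties load(String path) throws IOException {
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream(path)) {
            props.load(in);
        }
        for (String key : props.stringPropertyNames()) {
            Matcher m = ENV_VAR.matcher(props.getProperty(key));
            StringBuffer resolved = new StringBuffer();
            while (m.find()) {
                String value = System.getenv(m.group(1));
                m.appendReplacement(resolved,
                        Matcher.quoteReplacement(value != null ? value : m.group(0)));
            }
            m.appendTail(resolved);
            props.setProperty(key, resolved.toString());
        }
        return props;
    }
}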


Deploying Test Containers in Kubernetes Cluster

Now that we have the container image built and pushed to Google Container Registry, let’s deploy the container in Kubernetes. Here’s a snippet of the template for the manifest file to execute tests as a Kubernetes job:

apiVersion: batch/v1 
kind: Job 
metadata:
  # Unique key of the Job instance
  name: run-suite-${JENKINS_JOB_INFO}-${TEST_SUITE_LOWER}
  namespace: automation
  labels:
    jobgroup: runtest
spec: 
  template: 
    metadata:
      name: runtest
      labels:
        jobgroup: runtest
    spec: 
      containers: 
      - name: testcontainer 
        image: gcr.io/automation/${IMAGE_NAME}:${IMAGE_TAG} 
        command: ["./run-suite.sh", "${TEST_SUITE}"] 
        env: 
        - name: SELENIUM_GRID 
          value: "${SELENIUM_GRID}" 
        - name: BROWSER_TYPE 
          value: "${BROWSER_TYPE}" 
        - name: TARGET_URL 
          value: "${TARGET_URL}" 
        - name: GCP_CREDENTIALS
          value: "${GCP_CREDENTIALS}"
        - name: GCP_BUCKET_NAME
          value: "${GCP_BUCKET_NAME}"
        - name: BUCKET_FOLDER
          value: "${BUCKET_FOLDER}"
      restartPolicy: Never 

And, here’s the command to create the job:

kubectl apply -f ./manifest.yaml --namespace automation
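Note that the manifest above is a template: the ${...} placeholders must be rendered with real values before kubectl sees the file. One way to do this (an assumption on our part; the original pipeline may differ, and the template filename here is hypothetical) is to export the variables in the shell, as Jenkins would, and render the template with envsubst:

envsubst < manifest.yaml.tpl > manifest.yaml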

Publishing Test Results

Once test execution is complete, you can upload test results to your Google Cloud storage bucket using a code snippet similar to this:

public static void main(String... args) throws Exception {

        // readEnvVariable, authExplicit, storage, GcpBucket, and the
        // constants used below are helpers defined elsewhere in the class.
        String gcp_credentials = readEnvVariable(GCP_CREDENTIALS);
        String gcp_bucket = readEnvVariable(GCP_BUCKET_NAME);
        String bucket_folder_name = readEnvVariable(BUCKET_FOLDER);

        // authenticate against Google Cloud
        authExplicit(gcp_credentials);

        // define source folder and destination folder
        String source_folder = String.format("%s/%s", System.getProperty("user.dir"), TEST_RESULTS_FOLDER_NAME);
        String destination_folder;
        if (!bucket_folder_name.isEmpty()) {
            destination_folder = bucket_folder_name;
        } else {
            String timeStamp = new SimpleDateFormat("yyyyMMdd_HHmmss").format(Calendar.getInstance().getTime());
            destination_folder = TEST_RESULTS_FOLDER_NAME + "-" + timeStamp;
            System.out.println("destination folder: " + destination_folder);
        }

        // get the GCP bucket that holds automation data
        Bucket myBucket = null;
        if (!gcp_bucket.isEmpty()) {
            myBucket = storage.get(gcp_bucket);
        }

        // upload files; the list collects the uploaded file names
        List<String> files = new ArrayList<>();
        GcpBucket qaBucket = new GcpBucket(myBucket, TEST_RESULTS_FOLDER_NAME);
        if (qaBucket.exists()) {
            qaBucket.createBlobFromDirectory(destination_folder, source_folder, files);
            System.out.println(files.size() + " files were uploaded to " + destination_folder);
        }
    }
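The snippet above relies on an authExplicit() helper and a storage client that aren’t shown. A minimal sketch, assuming the google-cloud-storage client library and a service account JSON key whose path arrives via the GCP_CREDENTIALS environment variable:

import com.google.auth.oauth2.GoogleCredentials;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import java.io.FileInputStream;
import java.io.IOException;

public class GcpAuth {

    static Storage storage;

    // Build an authenticated Storage client from an explicit
    // service account key file.
    static void authExplicit(String credentialsPath) throws IOException {
        GoogleCredentials credentials;
        try (FileInputStream in = new FileInputStream(credentialsPath)) {
            credentials = GoogleCredentials.fromStream(in);
        }
        storage = StorageOptions.newBuilder()
                .setCredentials(credentials)
                .build()
                .getService();
    }
}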

Next Steps

We’re looking into publishing test results to a centralized results database fronted by an API service, which will let users easily post results data for monitoring and analytics. I’ll cover that in a future post about building a centralized test results dashboard. Until then, I hope this has helped you put all the pieces together for executing automated end-to-end tests in Kubernetes.
