This article was authored by Supriya Sadhwani, a PGP-Cloud Computing alumnus of Great Learning.
Over the last few years, Continuous Integration and Continuous Delivery (CI/CD) have gained a lot of traction. Automating testing within CI/CD ensures that as soon as developers check in code, automated tests run against it, so any defects can be resolved proactively by the development team. Jenkins is a leading open-source automation server and the one most widely used by organizations to automate their software development processes. It provides hundreds of plugins to support building, deploying, and automating any project, including many plugins that integrate with AWS. Since Jenkins has been around for quite some time and is still used in almost 70% of the organizations practicing CI/CD automation, this project leverages AWS capabilities to simplify Jenkins configuration and management.
To set up Jenkins, an organization generally has to provision machines on premises and install Jenkins on each of them. A typical Jenkins setup consists of a master instance and one or more worker agents that run the whole time. There are then either too many agents (adding to costs) or too few (leading to wait times), and overloaded agents sometimes crash. These agents also accumulate many different tools over time and become unreliable. Whenever the configuration changes, each machine has to be checked and updated. Even if Jenkins is instead set up in the cloud, organizations still need to provision multiple EC2 instances with Jenkins deployed on them, then configure and manage each one.
Our Capstone Project, done as part of the Post Graduate Program in Cloud Computing at Great Learning, aimed to resolve these issues by implementing automation of testing in the CI/CD pipeline using Jenkins on AWS ECS. As the solution is built around Jenkins, the most widely used automation tool, any organization can easily transition its current Jenkins setup to this one. AWS CloudFormation is used to provision the entire AWS infrastructure, so an organization can migrate to this solution in a matter of a few clicks. Amazon Elastic File System (Amazon EFS) provides parallel shared access to all the Jenkins master instances in the ECS cluster. Amazon Simple Storage Service (S3, which is designed for 99.99% availability) stores the test results rather than keeping them on premises or on a drive inside a Jenkins instance. Amazon Simple Notification Service (SNS) sends email notifications containing test results to the testing teams immediately after a code check-in. Amazon QuickSight, AWS's business intelligence service, generates analytical dashboards for managers, giving them insight into test executions and code quality. Thus, various AWS services have been effectively leveraged as part of this solution.
Our project leverages the power of containers: containers are portable, and provisioning them is much faster than provisioning virtual machines. It uses Amazon ECS, a highly scalable, high-performance container management service, to run a cluster of EC2 instances as Jenkins masters in an auto-scalable fashion (using the Amazon Elastic Container Service plugin for Jenkins). Each Jenkins build is executed on a dedicated slave Docker container that is automatically wiped out at the end of the build. We have configured the slave containers to use our custom Docker image, cloudbees_jnlpslave_with_chromedriver, built on top of cloudbees/jnlp-slave-with-java-build-tools, the image generally used for ECS Jenkins slave agents, with added support for the Chrome browser for Selenium. Since the CloudBees slave image does not include Chrome and ChromeDriver, we built and used our own custom image.
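As a rough sketch, a Dockerfile for such a custom image could look like the following; the Chrome and ChromeDriver installation commands and the pinned driver version are illustrative assumptions, not the exact steps we used:

```dockerfile
FROM cloudbees/jnlp-slave-with-java-build-tools

USER root

# Install Google Chrome from Google's apt repository
# (illustrative; the exact steps depend on the base image's OS)
RUN curl -sSL https://dl.google.com/linux/linux_signing_key.pub | apt-key add - \
    && echo "deb [arch=amd64] https://dl.google.com/linux/chrome/deb/ stable main" \
       > /etc/apt/sources.list.d/google-chrome.list \
    && apt-get update \
    && apt-get install -y google-chrome-stable \
    && rm -rf /var/lib/apt/lists/*

# Install a ChromeDriver matching the installed Chrome version
# (the version below is a placeholder)
RUN curl -sSL -o /tmp/chromedriver.zip \
       https://chromedriver.storage.googleapis.com/114.0.5735.90/chromedriver_linux64.zip \
    && unzip /tmp/chromedriver.zip -d /usr/local/bin/ \
    && rm /tmp/chromedriver.zip

USER jenkins
```

The image is then pushed to a registry (for example Amazon ECR) and referenced in the ECS plugin's agent template.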
Jenkins stores the master node configuration in the $JENKINS_HOME directory rather than in a database. Therefore, $JENKINS_HOME has been mapped to EFS so that it is accessible to multiple Jenkins servers and the configuration need not be repeated on each replicated instance. Also, if the Auto Scaling Group scales out and a new master comes up, it will already have all the Jenkins configuration, required plugins, job configurations, etc., as it uses the same EFS drive.
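One way to express this mapping, assuming the EFS volume integration available in current ECS task definitions, is sketched below; the names, image, and file system ID are placeholders, and the same effect can be achieved by mounting EFS on the container instances and using a host path:

```json
{
  "family": "jenkins-master",
  "containerDefinitions": [
    {
      "name": "jenkins-master",
      "image": "jenkins/jenkins:lts",
      "mountPoints": [
        {
          "sourceVolume": "jenkins-home",
          "containerPath": "/var/jenkins_home"
        }
      ]
    }
  ],
  "volumes": [
    {
      "name": "jenkins-home",
      "efsVolumeConfiguration": {
        "fileSystemId": "fs-12345678"
      }
    }
  ]
}
```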
A CloudFormation template is used to build the entire infrastructure: VPC, public and private subnets, NAT and Internet Gateways, ECS cluster, ECS service and tasks, RDS, EFS, EC2 instances, Elastic Load Balancer, Auto Scaling Group and Launch Configuration, IAM roles, and Security Groups. It accepts parameters such as:
| Parameter | Description |
| --- | --- |
| Stack Name | Used to name the resources that are created |
| AZ1 and AZ2 | Availability Zones in which the Jenkins masters will be created |
| DB Username | Username for the RDS MySQL DB instance that will be created |
| DB Password | Password for the RDS MySQL DB |
| KeyPair | The .pem key pair used to SSH into the Jenkins masters |
| PublicAccessCIDR | The CIDR range granted SSH (port 22) access to the Jenkins masters; this can be set to our own IP address to restrict access to the EC2 instances |
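A sketch of how these could be declared in the template's Parameters section follows; the logical names are illustrative (the stack name itself is supplied at stack-creation time rather than declared as a parameter):

```yaml
Parameters:
  AZ1:
    Type: AWS::EC2::AvailabilityZone::Name
    Description: First Availability Zone for the Jenkins masters
  AZ2:
    Type: AWS::EC2::AvailabilityZone::Name
    Description: Second Availability Zone for the Jenkins masters
  DBUsername:
    Type: String
    Description: Username for the RDS MySQL DB instance
  DBPassword:
    Type: String
    NoEcho: true          # keep the password out of console output
    Description: Password for the RDS MySQL DB instance
  KeyPair:
    Type: AWS::EC2::KeyPair::KeyName
    Description: Key pair used to SSH into the Jenkins masters
  PublicAccessCIDR:
    Type: String
    Description: CIDR range granted SSH (port 22) access to the Jenkins masters
```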
The GitHub repository contains the Maven test automation framework, written in Selenium with Java, holding the test cases for functional testing of the application. Any code change in the GitHub test code repository triggers the Jenkins build process, which is handled by the ECS Jenkins cluster. GitHub webhooks are used to trigger the build whenever a developer commits to the repository (using the GitHub Integration Plugin for Jenkins). Different masters on the Jenkins ECS cluster can be configured to handle commits on different branches of the GitHub repository. The webhook URL on the GitHub repository is the URL of our Application Load Balancer that sits in front of the ECS cluster (a Route 53 domain name can be used here). The load balancer then directs the build to one of the Jenkins masters, where a dedicated slave Docker container comes up and runs the build.
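To illustrate the branch-based routing, the snippet below shows how the pushed branch can be read out of a GitHub push-event payload. The function name is ours, and in practice this logic is handled for us by the GitHub Integration Plugin; this is only a sketch of what the routing decision is based on:

```python
import json

def branch_from_push_event(payload: str) -> str:
    """Extract the branch name from a GitHub push-event payload.

    GitHub delivers the pushed ref as e.g. "refs/heads/develop";
    the branch name is the part after "refs/heads/".
    """
    event = json.loads(payload)
    ref = event["ref"]
    prefix = "refs/heads/"
    if not ref.startswith(prefix):
        raise ValueError(f"not a branch push: {ref}")
    return ref[len(prefix):]

# Example: a heavily truncated push payload as a webhook would deliver it
payload = json.dumps({"ref": "refs/heads/develop", "after": "abc123"})
print(branch_from_push_event(payload))  # → develop
```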
As part of build execution, once all the test scenarios have run, Jenkins creates test reports in two formats. Cucumber JSON test results are stored in the S3 bucket json-report (using the S3 Publisher plugin for Jenkins), and HTML report artefacts (HTML, CSS, images, etc.) are stored in the S3 bucket html-cucumber-reports (using the Post Build Task plugin for Jenkins and AWS CLI commands to copy the artefacts to S3). Jenkins then uses SNS email notifications to update the testing and development teams about the build status (using the Amazon SNS Build Notifier plugin for Jenkins). The email to the testing team contains the build result along with a link to the Cucumber HTML report showing the passed and failed tests (served from the S3 bucket html-cucumber-reports).
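The copy step can be sketched as a Post Build Task shell fragment like the one below; the local report path and the S3 key layout are assumptions based on a typical Maven/Cucumber setup, not our exact job configuration:

```shell
# Jenkins Post Build Task: copy the generated Cucumber HTML report to S3.
# JOB_NAME and BUILD_NUMBER are standard Jenkins build environment variables;
# target/cucumber-html-reports is the usual Maven output path (an assumption).
aws s3 cp target/cucumber-html-reports \
    "s3://html-cucumber-reports/${JOB_NAME}/${BUILD_NUMBER}/" \
    --recursive
```

Keying the objects by job name and build number keeps every build's report addressable, which is what makes the per-build link in the SNS email possible.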
A Lambda function has been configured to be triggered by the Put event on the json-report bucket. As soon as a build finishes and the JSON file lands in the json-report bucket, the Lambda function kicks in: it parses the JSON file and stores the required fields in a Multi-AZ RDS MySQL database. A QuickSight dashboard has been integrated with this RDS database, and custom dashboards have been configured for viewing by management. These analytical dashboards can be reviewed on a daily or weekly basis to gain insight into overall build success/failure rates and thus code quality.
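The parsing at the heart of such a Lambda can be sketched as follows. The summary fields and function name here are our own illustrative choices, and the part that writes the extracted rows to RDS is omitted:

```python
import json

def summarize_cucumber_report(report_json: str) -> dict:
    """Count passed/failed scenarios in a Cucumber JSON report.

    A Cucumber JSON report is a list of features; each feature has
    "elements" (scenarios), each scenario has "steps", and each step
    carries a "result" with a "status" field. A scenario counts as
    passed only if every one of its steps passed.
    """
    features = json.loads(report_json)
    passed = failed = 0
    for feature in features:
        for scenario in feature.get("elements", []):
            statuses = [step["result"]["status"] for step in scenario.get("steps", [])]
            if statuses and all(s == "passed" for s in statuses):
                passed += 1
            else:
                failed += 1
    return {"passed": passed, "failed": failed, "total": passed + failed}

# A tiny hand-made report: one passing scenario, one with a failed step
sample = json.dumps([{
    "name": "Login feature",
    "elements": [
        {"steps": [{"result": {"status": "passed"}}]},
        {"steps": [{"result": {"status": "passed"}},
                   {"result": {"status": "failed"}}]},
    ],
}])
print(summarize_cucumber_report(sample))  # → {'passed': 1, 'failed': 1, 'total': 2}
```

In the real function these counts, together with fields such as the feature name and build timestamp, would be inserted into the MySQL tables that QuickSight reads.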
Thus, in our Capstone Project, we have demonstrated automation of testing in the CI/CD pipeline using Jenkins on AWS ECS together with several AWS managed services. With this robust CI/CD automation architecture, organizations can focus more on their business processes, ensuring faster development cycles with higher success rates.