
Top DevOps Interview Questions and Answers 2022

Hi! Are you ready for the interview? Not feeling confident enough? That's okay; it's normal, and most people feel this way before an upcoming interview. Just remember that you will do fine. Life throws a lot of rocks at us; we have to take the hits and keep walking.

Now let's get into the meat of the topic: DevOps. You are probably looking to shift your career into DevOps, as it pays more and can be more satisfying if you are into it. After successfully completing a DevOps course, you are ready to face DevOps interview questions. So in this article, we list a set of DevOps interview questions drawn from different domains.

Domains:

Feel free to browse directly to the domains you think you need more confidence in: DevOps Overview, Continuous Development, Continuous Testing, Continuous Integration, Container Orchestration & Microservices, Continuous Monitoring, Configuration Management, Cloud, and the FAQs at the end.

DevOps Overview

Can you explain to us what DevOps is?

“DevOps is a set of practices that combines software development (Dev) and IT operations (Ops). It aims to shorten the systems development life cycle and provide continuous delivery with high software quality. DevOps is complementary with Agile software development; several DevOps aspects came from Agile methodology.” But if you ask me, DevOps is not just this definition. It is not exactly a methodology; rather, it is a culture or a mindset that focuses on delivering a product and creating a healthy working environment where everyone can freely exchange ideas without being bogged down by the silos present in traditional software development. You can even define DevOps as a set of tools, a culture, a methodology, or a set of values and principles.

But the name DevOps comes from Development and Operations. A DevOps engineer bridges the communication gap between the software developers and the IT operations teams.

How do agile & DevOps differ?

The whole idea of Agile is to make sure that teams can work faster and more efficiently in regular sprints and to keep on improving based on feedback given to them.

Earlier, agile was used only by development teams. It hadn't trickled down to the other parts of the software development process. So, while it was successful at improving development speed, testing and operations were still lacking.

With that, the need to improve testing and operations was realised. It was achieved with the help of automation, and that is where the concept of continuous integration emerged.

Companies small & big quickly took up the initiative and improved their delivery part as well.

Then onwards, agile and DevOps were used in unison to develop and deliver software.

But on a fundamental level this is how they differ:

Definition:

  • DevOps is a culture that focuses on bridging the gap between dev & ops teams and implementing automation in software development.
  • Agile is a methodology in which shorter development lifecycles are implemented with constant feedback.

Frameworks:

  • DevOps has no frameworks as it is more of a philosophy/culture.
  • With agile you have a lot of options – Lean, Scrum, XP, Kanban, Crystal, etc.

Team:

  • In DevOps all team members are on an equal footing; they all share the responsibility to deliver the software.
  • In agile, teams are divided by skill set and level.

Focus:

  • The focus of DevOps is on a silo-free environment, developing a quality product and implementing automation across the software development process.
  • Agile focuses on delivering the product on time based on relevant feedback.

Feedback:  

  • In DevOps, feedback can come from both customers and the team involved in software development.
  • In agile, feedback comes from customers.

Why DevOps?

In the early days of software development, when the field was still very new, requirements and demands were very different from what they are today. The waterfall model was prevalent back then, as it helped teams develop software in a structured manner. Waterfall is a standard model used in many different fields, not just software development, and it was very useful at the time because requirements were concrete and development cycles were long.

But it also had a lot of drawbacks, like:

  • Long development lifecycle
  • Difficult to backtrack
  • No finished product till the end
  • Tougher to plan for
  • High risk

All these drawbacks make it hard to use the waterfall model in today's market. Instead, we use methodologies like Agile, the Spiral model, Extreme Programming, FDD, Lean & DevOps.

What kind of advantages will you see upon implementing DevOps in software development?

Upon implementing DevOps, you will see benefits such as:

  • Higher efficiency of software development
  • Higher quality of product/service being produced
  • Higher revenue for the company/organization
  • Ability to rapidly develop and deploy software over the cloud.
  • A layer of robust security being introduced over the software development pipeline
  • Keeping up with a rapidly growing company through better scaling options.

How to implement DevOps within a project?

Step 1: Understand the currently existing software development process. Understand what kind of software is being developed and what kind of resources you have at hand. Identify the places where changes can improve efficiency.

Step 2: Once you have understood all the requirements, create a plan and prepare an environment for that plan to be executed. A good pipeline must be established on strong infrastructure. 

Step 3: Start with baby-steps. You know some of the glaring problems with the already existing pipeline, but try not to disturb the entire environment and instead try to implement small scale changes to test the waters. Maybe, first implement changes to one part of the development, if successful, move to the rest of the pipeline.

Step 4: Completely establish the new infrastructure and test the new pipeline in place. Once that is completed start developing and deploying the software using the new pipeline.

Step 5: Start implementing an agile/lean culture within the different teams. Make sure the boundaries between the different teams don't cause any miscommunication, but rather allow for insightful analysis. All teams should see deployment as a part of their goal.

Step 6: Understand the feedback from the team as well as the customer and implement changes based on them. Start a positive feedback loop.

What are the main phases implemented in DevOps?

There are five phases in DevOps; together they form the DevOps lifecycle:

  • Continuous Development – In this phase we keep on developing the software continuously based on the requirements set by the clients/stakeholders.
  • Continuous Integration – In this phase we configure the build jobs that will automate the rest of the software development process, like testing, staging, deploying and monitoring.
  • Continuous Testing – In this phase we test our software in various ways to make sure it is of the highest quality and meets all the requirements set by the stakeholders/clients.
  • Continuous Deployment – In this phase we go ahead and deploy our software so that the end users can use the service/product we provide to them.
  • Continuous Monitoring – In this final phase we monitor different aspects of our software and our software development process, like metrics, logs, business activity, etc. This gives us valuable feedback on our process and product, which we use to improve both in the next development cycle.

All of these phases are set so that they happen continuously with minimal input from humans.

What are the different tools that are implemented within DevOps?

  • Continuous Development: Git, Mercurial, Azure Repos
  • Building tools: Ant, Maven, Gradle, MSBuild
  • Database management tools: MySQL, MariaDB, Liquibase
  • Continuous Testing: SonarQube, Selenium, pytest, Katalon
  • Continuous Integration: Jenkins, CircleCI, GitLab, Bamboo, TeamCity
  • Continuous Deployment: XL Deploy, Juju, Octopus Deploy
  • Configuration management tools: Puppet, Ansible, Chef, SaltStack
  • Orchestration tools: Docker Swarm, Kubernetes, Nomad, Apache Mesos, EKS, ECS
  • Artifact repositories: JFrog Artifactory, npm, Sonatype Nexus
  • Cloud services: AWS, Azure, GCP, OpenShift, Cloud Foundry, DigitalOcean, Alibaba Cloud
  • Continuous Monitoring: ELK, Prometheus, Grafana, Splunk, Nagios, Google Analytics
  • Scripting: Python, PowerShell, Perl, Java, .NET

List the DevOps KPIs

  • Recovery rate (mean time to recovery)
  • Speed of deployment (deployment frequency)
  • Failed deployments (change failure rate)

Describe a basic devops workflow.

General workflow: Start -> Development -> VCS commit -> Code push to repository -> Build job trigger -> Building -> Testing -> Staging -> Deployment -> Production -> Monitoring -> Workflow end

List a few DevOps Best Practices.

  • Centralize all moving parts of all various DevOps tools
  • Reduce technical Debt
  • Regularly test
  • Automate wherever necessary
  • Maintain the mindset
  • Help others understand
  • Implement comprehensive dashboards & alerting systems

List the DevOps Principles

  • Take decisions with the customer's best interest in mind.
  • Always start a task with the end goal in mind.
  • Instil the idea of end-to-end responsibility within the team.
  • Assemble teams with the aim of removing silos.
  • Continuously improve every process.
  • Automate wherever possible.

Continuous Development

What is version control?

These days, software is not developed with the mindset that there will be only one piece of code that gets deployed and that's it. Smaller snippets of code are deployed in regular successions with regular feedback, which leads to many different versions of the code.

And that creates a need to organise the code and all of its different versions. This is where version control comes in: it is the practice of managing and storing the different versions of a source code.

This is especially true for larger companies that have multiple projects and multiple teams working on them.

What role does a VCS play in software development?

Just as we discussed, the purpose of a version control system is, as the name suggests, to control the different versions of the code. The idea of a VCS meshes very well with the DevOps ideology: it allows for quick delivery of code and automatic job triggering, which makes it easy to automate the whole software development pipeline.

Explain Git rebase

The git rebase command allows a user to move a set of commits onto a new base commit. It is similar to git merge in that it integrates changes from one branch into another, but instead of creating a merge commit, rebase replays the branch's commits on top of the new base, rewriting the history into a linear sequence.
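
As a small illustration (branch names are hypothetical), rebasing a feature branch onto master looks like this:

$ git checkout feature-branch
$ git rebase master    # replays the feature commits on top of the latest master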

How do you give an alias to a git command?

You can do this by defining an alias with git config:

$ git config --global alias.<name of the alias> <the command the alias will replace>

For example, I will alias commit to c like so:

$ git config --global alias.c commit

What does the git log command do?

It lists all the commits made to the local repository from where you have launched the command.
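
For a more condensed view of the history, you can combine it with a few common flags, for example:

$ git log --oneline --graph --decorate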

What is the difference between the add and commit commands in Git?

git add: stages the files whose changes you want git to track in the next commit.

For example, you are working on a Python script called mycode.py. To make sure git tracks any changes made to it, you go ahead and stage it like so:

$ git add mycode.py

git commit: this command records the staged changes in the repository's history.

So, for example, once you are done making all the changes you want to the mycode.py script, you can go ahead and commit like so:

$ git commit -m "Added a new feature to mycode.py"

Revert to an older git commit

To revert an older commit, you need to use the git revert command, like so:

$ git revert <commit id of the commit you want to revert>
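
Note that git revert creates a new commit that undoes the changes of the given commit; it does not delete history. A quick sketch with a hypothetical commit id:

$ git revert 7d9f2ab    # adds a new commit that reverses the changes introduced by 7d9f2ab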

How will you find all the changes that were made in a certain commit?

You will have to make use of the command:

git diff-tree -r <commit id>

What will you do if you happen to be working in a feature branch and quickly need to change to another branch to make a change?

Well, the solution is simple: the developer just needs to use the git stash command. This command takes all of the uncommitted changes and puts them away in a buffer (the stash). So the developer can stash all the uncommitted work on the feature branch, go and make the quick change on the other branch, and then come back and restore the files using the git stash pop command.
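
A rough sketch of that flow, with hypothetical branch names:

$ git stash                      # shelve the uncommitted changes on the feature branch
$ git checkout other-branch      # make the quick change there and commit it
$ git checkout feature-branch
$ git stash pop                  # restore the shelved changes and continue working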

Design a git workflow for a company that needs to push master code once every end of the month.

We will be using six main branches for this purpose (let's assume there are two features, A & B); a command-level sketch follows the list:

  • Feature A & Feature B – These branches are the ones that most junior developers will be working on. Any and all features of the software are coded in these branches. The devs pull the master branch onto their local systems, create a pull request, do their work and then push their code back. Their work is reviewed by a senior member and, if it is approved, the branch is merged into the develop branch.
  • Develop – This is the branch where all the features are merged, so it contains all of the functional code for the software. Once there are a good number of features on this branch, it is merged into the release branch.
  • Release – This is the branch where all of the code is reviewed for remaining bugs, release preparation is done, and the documentation is completed. The release branch is then merged with the master branch.
  • Master – This is the branch where the code is tagged with a version and is pushed to building, testing and deployment according to the schedule.
  • Hotfix – This branch, as the name suggests, is used to make quick fixes to the code. It allows bugs to be resolved without interrupting the main workflow, and it serves the singular purpose of fixing bugs quickly.
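
The command-level sketch mentioned above, with hypothetical branch and tag names:

$ git checkout -b feature-A develop          # start a feature from develop
$ git push origin feature-A                  # open a pull request for review
$ git checkout develop
$ git merge --no-ff feature-A                # merge once the review is approved
$ git checkout -b release/1.4 develop        # cut the monthly release branch
$ git checkout master
$ git merge --no-ff release/1.4              # end-of-month push to master
$ git tag -a v1.4 -m "Monthly release"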

Continuous Testing

Why do we need to continuously test?

When we are implementing a whole system that is able to run without any human intervention, it is obviously important to make sure that the test scripts that verify the acceptability of the software are also automated.

The software should be tested continuously because, with DevOps in mind, we try to make sure that the software we are developing is of the highest quality: it should have no bugs, no glitches or crashes in the future, no code smells, no security vulnerabilities, etc.

Can you tell us the various components used in selenium?

Selenium IDE – It's sort of a beginner's crutch when it comes to learning Selenium, as it is a simple record-and-playback tool: you can record any task you want and Selenium will replay it based on your requirements. Recorded tests can later be exported as WebDriver scripts.

Selenium RC – This component of Selenium is a legacy tool; it was one of the first to be developed. It is used to create test scripts in multiple languages like Ruby, Perl and JavaScript, and it can be used with different browsers. It requires the Selenium server.

Selenium WebDriver – This component is the eventual evolution of Selenium RC, overcoming many of its disadvantages: it does not require a Selenium server to run and it talks to the browser directly.

Selenium Grid – When you wish to do distributed or parallel testing, you use this component. You can use Selenium Grid and Selenium RC together to run your test scripts on different systems at the same time.

How will you setup continuous testing in DevOps?

The DevOps process is a continuous one, so the testing is also done continuously. You generally have the source code built into an artifact, and only then is testing done on it. So let's say you have set up a build job that triggers when new code is pushed to a repo: through the build job the building process starts, and once it is completed the build artifact is stored, and then the next stage of the development process, testing, begins. In this stage the build is pulled and run through various different tests. This is commonly done using Selenium scripts.

Another form of testing is static testing, which is done while the code is being written. Tools like SonarQube help in finding bugs, code smells and security issues during the creation of the code itself, which saves a lot of time & money. These tools can be attached to the pipeline as plugins in Jenkins; the SonarQube scanner used in the pipeline is called SonarScanner.
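
As a rough illustration (the server URL, project key and token are placeholders), a scan can be triggered from the pipeline with the sonar-scanner CLI:

$ sonar-scanner \
    -Dsonar.projectKey=my-app \
    -Dsonar.sources=. \
    -Dsonar.host.url=http://sonarqube.example.com:9000 \
    -Dsonar.login=<token>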

Are driver.close() and driver.quit() the same? If not, then explain the difference between them.

driver.quit() serves a larger purpose: it closes all the browser windows and ends the WebDriver session, whereas driver.close() only closes the currently focused browser window.

Please explain to us how you can use the Selenium WebDriver component to launch any browser.

It's simple, use the following syntax:

WebDriver driver = new <name of the browser driver class>();

Eg.

For Internet Explorer:

WebDriver driver = new InternetExplorerDriver();

For Chrome Browser:

WebDriver driver = new ChromeDriver();

What is maven?

Maven is a build tool that helps us build Java-based software. It reads a pom.xml file that tells it how to build the software, and it also helps in structuring the code by creating a standard project directory layout.
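
For example, from the directory containing pom.xml:

$ mvn clean package    # compile, run the unit tests and package the artifact (e.g. a .jar or .war)
$ mvn clean install    # additionally install the built artifact into the local Maven repository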

What is static testing?

Static testing is the process of testing your code while it is being written, without executing it. It helps us detect bugs, technical issues, security vulnerabilities and code smells, improve coding practices, and enforce project-specific rules.

You can use tools such as SonarQube, PyCharm, etc.

What is dynamic testing?

Dynamic testing is testing that happens while the software is running, to make sure it meets all of the requirements, integrates well with the other software components, works without crashing and can handle different inputs without failing. Tools used for dynamic testing: Selenium, Katalon, CasperJS, Cypress, etc.

Continuous Integration

Is Continuous delivery the same as Continuous deployment?

There is a very simple difference between the two. In continuous delivery the software is not automatically sent to the production server; it requires human approval for the final deployment to the client/customer on the production server.

With continuous deployment, on the other hand, the software is automatically sent to the production server and requires no human involvement; the only thing that can stop this process is a failing test case.

Explain Jenkins' architecture

Jenkins is a continuous integration tool, which means it creates build jobs to perform multiple different tasks in different phases. Doing this in an organization with multiple teams and multiple projects is difficult if you use only one server: it will lead to server overload more often than not, which will lead to bottlenecks in the software development process.

To make sure such a problem does not happen, Jenkins distributes all of the tasks it receives onto its slaves. So Jenkins uses a master-slave architecture, where one Jenkins server is the master from which all the tasks are distributed to multiple slaves (also called agents) that execute tasks based on your configuration. If you want, you can specify which task should be executed on which slave, or you can leave it up to the master to decide; it will do so by checking which slave is idle or which slave has enough resources to execute the task. That's the Jenkins architecture.

How can you migrate Jenkins from one server to another?

You can do this in a few ways, such as:

  • You can simply copy the Jenkins job directory from the system on which it was created to the other one (see the sketch after this list).
  • You can also create a copy of the Jenkins Job (clone it) and shift that onto the other system under a different name.
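
A minimal sketch of the first option, assuming the default JENKINS_HOME of /var/lib/jenkins and a hypothetical new-server host:

$ rsync -avz /var/lib/jenkins/jobs/ user@new-server:/var/lib/jenkins/jobs/
$ ssh user@new-server "sudo systemctl restart jenkins"    # reload Jenkins so it picks up the copied jobs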

How do you create a job in DevOps for automation purposes?

You can do this by making use of the plethora of continuous integration tools available in the market, such as Jenkins, Bamboo, TeamCity, etc.

Let’s take the scenario where Jenkins is the tool being used.

Go to the Home Page in Jenkins and then click on New Item or Create a New job. You will see a large list of possibilities over here such as:

  • Freestyle
  • Maven Project
  • Pipeline
  • Etc.

We will choose freestyle build. Upon selecting this option, we will have a lot of configurations to set, like delete previous builds or not, SCM used, web hook trigger, build options, notification options. We will set the build in such a way that it gets triggered when a specific condition happens like if we push code to our SCM repo. This will trigger our build job where we may execute the code, build it, archive it, test it or deploy it depending upon what is to be done.

What are the security measures that need to be kept in mind while working with Jenkins?

These measures are:

  • Keeping global security enabled at all times.
  • Using a proper authorization method, either via the company's own LDAP server or a third-party tool like Atlassian Crowd.
  • Analyzing Jenkins' health on a regular basis to keep it from becoming faulty and open to attacks.
  • Allowing limited access to users, on a need-to-use basis.
  • Managing secrets for all the tools in a dedicated place while following the correct protocols.

What are the KPI of Continuous integration?

It is usually tougher to measure performance here, as there can be a lot of issues that get overlooked, but generally you can list these KPIs when asked about them:

  • Build Job automation
  • Software deployment automation
  • Failed builds
  • Number of issues identified in development process
  • Deployment Time
  • Cost effective utilization of infrastructure

What is a Jenkinsfile?

A Jenkinsfile is a text file containing a pipeline script that defines the stages and steps of a Jenkins pipeline. You can keep it with the source code or write it in Jenkins itself.

Example Jenkinsfile:

pipeline {
  agent any
  stages {
    stage('Build') {
      steps {
        echo 'We are going to build our application in this stage'
      }
    }
    stage('Test') {
      steps {
        echo 'Tests are going to be now executed'
      }
    }
    stage('Deploy') {
      steps {
        echo 'Software is going to be deployed'
      }
    }
  }
}

Why is Jenkins a popular choice for a continuous integration tool?

This is because of a few reasons, such as:

  • It helps in automating a lot of processes and hence helps in decreasing software downtime.
  • It is open-source; compared to other CI/CD tools, Jenkins is preferred because it is open source and has a massive community.
  • It fits well with the agile & DevOps mindset.
  • It supports a huge number of plugins.
  • It can be used irrespective of the platform it is launched on.

Container Orchestration & Microservices

What is containerization in DevOps?

To understand Containerization in DevOps, we have to understand what kind of problem it helps us to resolve.

Let's say there is a very standard delivery pipeline where a developer writes code and sends it along the software development process, where it gets built and then sent to the tester. The thing here is that the code does not run on the tester's system. What can be the issue? It is usually some compatibility issue, as the code ran totally fine on the developer's system. Now this is a big headache, because the people developing the software have a lot of other concerns; they shouldn't have to deal with bottleneck-creating compatibility issues.

To resolve this, we make use of containerization technology. This technology basically acts like a software wrapper that wraps up the code, its dependencies and the environment (OS libraries, runtime, compiler) into a single unit called a container. You can compare a container to a VM; they are a bit similar but not the same. So this time, instead of sending the code directly for building and testing, the developer sends along the container containing all the previously mentioned items to the software development pipeline. All the processes, such as building and testing, take place with the help of the container, so we have removed the element of compatibility issues.

Now when it comes to deployment of software these days most companies make use of Docker containers to deploy their software. They usually do it as microservices.

Explain the Docker ecosystem

So in the Docker ecosystem you have a number of entities, these being:

Docker Engine, Docker objects (images, containers, volumes, networks), Docker registries, Docker Compose and Docker Swarm.

Explain how kubernetes benefits us in Software deployment

  • Increases productivity by reducing complexity in the software deployment environment.
  • Increases the overall stability of a software
  • It’s very cheap to implement
  • It’s very useful for very large projects with a lot of moving parts.
  • Allows you to automate software deployment process.
  • Allows you to easily update your software.
  • Allows you to easily scale your software and manage it in general.

Are the EXPOSE instruction and the publish flag in Docker the same? If not, explain the difference between them.

No, they aren't the same. They are used in different places: EXPOSE is used while writing a Dockerfile, and publish is used with the docker run command and in docker-compose.yml files.

EXPOSE exposes a container port within the Docker network, whereas the publish flag maps a container port to a host port, exposing the container to external traffic.

Example: EXPOSE 80 & --publish or -p 80:80

If a Docker container’s inner process isn’t working according to assumptions, what will you do to stop it?

We can start by trying to stop the container using the docker stop command; if it is taking too long, we can make use of the docker kill command instead.

E.g. docker stop mycontainer & docker kill mycontainer

Create a docker container that runs apache server with a sample html code in an ubuntu docker container.

First we will create our sample html file, like so:

$ nano index.html

<html>

<title>Website</title>

<h1>Hope you all have a wonderful day</h1>

</html>

Then we will go ahead and create our Dockerfile, like so:

$ nano dockerfile

FROM ubuntu:latest

RUN apt update && apt install -y apache2

WORKDIR /var/www/html

COPY . /var/www/html

EXPOSE 80

CMD ["apache2ctl", "-D", "FOREGROUND"]

Then we will build our dockerfile with the following command:

$ sudo docker build -t fun-image .

Then we will run the container using this command:

$ sudo docker run -it -d -p 80:80 --name mycontainer fun-image
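
Since the container's port 80 is published to the host with -p 80:80, you can quickly check that Apache is serving the page:

$ curl http://localhost:80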

What are the downsides of using Kubernetes over Docker Swarm?

Docker Swarm is very simple to install compared to Kubernetes.

With Kubernetes you are still using Docker, so you need to know the Kubernetes CLI as well as the Docker CLI, whereas with Docker Swarm most of the commands are very similar to plain Docker commands.

Docker Swarm offers faster container deployment, but Kubernetes provides a more unified group of APIs and stronger guarantees about the cluster state.

You can go more into the details point by point, but in the end Docker Swarm is more like a beginner's Kubernetes. That is not to say Docker Swarm is bad: if you have a specific requirement that calls for Docker Swarm, you should definitely use it; if not, go for Kubernetes.

What command will you use to enter into a docker container?

$ docker exec -it <name of the container> bash or $ docker exec -it <name of the container> sh

What are docker registries?

Docker registries are dedicated locations where you can store Docker images and then share them with whomever you want. E.g. Docker Hub, ECR, JFrog Artifactory, Azure Container Registry.

Which cloud platforms have container friendly environments?

Here are a few companies to list:

  • AWS
  • Azure
  • Kontena
  • Apache Mesos
  • GCP
  • Rackspace

Explain what is a dockerfile.

A Dockerfile is a file that is used to build Docker images, which in turn act as blueprints for creating Docker containers.

How will you scale up a service in a docker swarm?

You can do so by using the command:

$ sudo docker service scale <service id>=<no of replicas you want to exist of this service>

You can get the service id using:

$ sudo docker service ls

How do you create a docker image without a base image?

You can do this by starting your Dockerfile with FROM scratch and then typing in your configuration.

What are the different parts of the docker engine?

The docker engine consists of three parts:

  • The Docker CLI (Command Line Interface) – This is what you will use to give requests to the docker daemon
  • The Docker API (Application Program Interface) – This part communicates your request from the CLI to the Daemon.
  • The Docker Daemon – This is the core of the Docker engine, this is the part of the docker engine that manages and creates all of the docker processes and objects.

How do you push a docker image to docker hub?

Make sure the image you want to push to Docker Hub is named as follows:

<your-dockerhub-username>/<actual-name-of-image>

This is the nomenclature to be followed when pushing images to dockerhub.

eg. remi45/ubuntu-apache-image

then go ahead and login to dockerhub using docker login command:

$ sudo docker login

once successful go ahead and push your image:

$ sudo docker push remi45/ubuntu-apache-image

What other type of file can you use with docker compose other than a YAML file?

You can use the JSON file format if you want; the same commands are used, you just have to point docker-compose at the JSON file explicitly.
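
For example, assuming the definitions live in a file named docker-compose.json, you point Compose at it with the -f flag:

$ docker-compose -f docker-compose.json up -d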

Differentiate between docker swarm and kubernetes.

We will compare them using certain criteria:

Installation & Setup

  • Docker swarm: Docker swarm installation is very easy and the setup is simple. Hence it is also fast.
  • Kubernetes: Kubernetes takes a lot more time and is a much more complicated process.

Connectivity/Networking

  • Docker swarm: Containers on different Docker hosts are connected to each other through the use of an overlay network.
  • Kubernetes: All of the pods within a kubernetes service can communicate with each other.

Availability/Reliability

  • Docker swarm: Replication is easy for all the services that are running in a Docker swarm. Unhealthy replicas are replaced with healthy ones, which increases the availability of services.
  • Kubernetes: Pods have high availability thanks to replication and high fault tolerance; failing pods are replaced with healthy ones.

Auto-Scaling

  • Docker swarm: Docker Swarm performs scaling up faster than Kubernetes; however, its cluster is not as strong.
  • Kubernetes: Though scaling is a bit slower than in Docker Swarm, the resulting cluster is stronger and more stable.

Data Volumes

  • Docker swarm: Docker storage volumes can be shared between multiple containers within a Docker swarm.
  • Kubernetes: In kubernetes storage volumes can be shared between different containers but only within a pod.

GUI

  • Docker swarm: Docker Swarm offers no Graphical User Interface and has to be accessed and monitored from the terminal it is launched from. Needs external dashboards.
  • Kubernetes: Kubernetes offers a comprehensive Dashboard that can be used to monitor the Pods and services launched. Simple to understand.

Updates & Rollbacks

  • Docker swarm: In Docker Swarm you can easily update services and perform rollbacks, but rollbacks require manual input.
  • Kubernetes: Kubernetes supports rolling updates out of the box and lets you roll back a deployment to a previous revision.

Load Balancing

  • Docker swarm: Docker Swarm can load balance a set of tasks automatically among different docker swarm nodes.
  • Kubernetes: Kubernetes uses an ingress to perform load balancing. It needs to be manually configured

Logging & Monitoring

  • Docker swarm: Docker Swarm does not offer an in-built monitoring system, but an external one like ELK stack can be used to monitor the Docker swarm.
  • Kubernetes: Kubernetes offers an in-built monitoring system for all of its processes & services.

Deployment

  • Docker swarm: Applications can only be deployed as services (microservices) in a swarm; services running on the nodes are defined in YAML files using Docker Compose.
  • Kubernetes: Applications can be deployed in Kubernetes as a combination of deployments, pods and services (microservices).

Continuous Monitoring

Why should an organization continuously monitor?

An organization should always be continuously monitoring their software, servers, systems, performance, resource usage, logs and even business activity. All of this is important for a few reasons, such as:

  • To see how well everything is performing: are systems and processes using more resources than necessary, and if so, why?
  • To see if their systems and software are up and running, so that they can immediately repair or replace them and users always have access to the service/product.
  • If there is a problem, they need to check why that problem occurred. This can be done by going through the logs.
  • Monitoring also gives good feedback on how things can be improved for the next development cycle.
  • It also helps in monitoring business activity, which can give us insights into business matters that can be improved or help us form new ideas.

What is the purpose of Kibana in the Elastic Stack?

Kibana is a visualization tool used within the Elastic Stack to create visualisations and dashboards, and it basically acts as the graphical user interface for the whole stack. Once data has been collected in the Elasticsearch database, Kibana identifies the data to be visualized using index patterns.

What is Prometheus and how is it used with grafana?

Prometheus is an open-source tool that is primarily used for metrics monitoring and alerting. It was developed in 2012 to fulfil SoundCloud's need for a multidimensional data model, scalability and simplicity in its monitoring. It uses PromQL, a very powerful querying language. Prometheus is a pull-based metric monitoring system that scrapes metrics from specific endpoints. Grafana is an open-source monitoring solution that lets us visualize, in real time, the data collected by sources like Prometheus, using dashboards that give in-depth insights.
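
As a small illustration, assuming a Prometheus server running on its default port 9090, an instant PromQL query (here the built-in up metric) can be run through its HTTP API:

$ curl 'http://localhost:9090/api/v1/query?query=up'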

What is the Elastic stack?

Elastic stack is a set of open source tools that help you store and manage logs.

These are the tools involved:

  • Elasticsearch ( database – for storing & indexing )
  • Logstash ( collection agent – for collecting & processing data )
  • Kibana ( visualizer – for visualizing and analysing collected data )
  • Beats ( a set of lightweight data collectors – Filebeat, Metricbeat, etc. )
  • X-Pack ( extra set of tools – APM, security, ML, reporting and alerting )

You can set all of this up on your own systems, or you can make use of Elastic Cloud, which can be seamlessly integrated with your IT infra.

What are the features of Elastic stack?

It has many features such as:

  • System & Application Performance
  • Logging
  • Stack Security & Alerting
  • Scalability and Resiliency
  • Dashboards & Visualisations

How does data flow within Elastic Stack?

First, data is collected from the source using either Beats or Logstash, and then it is either sent directly to Elasticsearch for storage and indexing or it goes through Logstash, where it is filtered and preprocessed. Once Elasticsearch stores and indexes the data, it is kept there until Kibana asks for it; through Kibana we visualize and analyse the data.
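
For example, assuming a local Elasticsearch node on its default port 9200, you can list the indices that Logstash/Beats have created before pointing Kibana's index patterns at them:

$ curl -X GET 'http://localhost:9200/_cat/indices?v'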

Configuration Management

What is the role of configuration management in DevOps?

Configuration management allows us to manage and maintain systems, servers & software according to our requirements, and it also allows us to create automation scripts.

What are some configuration management tools?

Some configuration management tools are:

  • Puppet
  • Ansible
  • Chef
  • Salt Stack
  • JuJu
  • Rudder

When do you use ad-hoc commands and how are they different from playbooks in Ansible?

You use ad-hoc commands when there is a need to make a quick change without having to create a whole playbook file for it. They serve a different purpose than playbooks: playbooks are generally used to perform tasks repeatedly and for automation, whereas ad-hoc commands are used for quick, one-off fixes.
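
A couple of hedged examples (the webservers group and the nginx package are hypothetical inventory entries):

$ ansible webservers -m ping                                        # quick connectivity check
$ ansible webservers -m apt -a "name=nginx state=present" --become  # one-off package install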

What will you do when you want to share your ansible roles with your teammates but cannot do it physically?

You can make use of something called Ansible Galaxy, which acts as a repository for Ansible roles, so you can share your Ansible roles over Ansible Galaxy. It is also easy to set up and use.
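
For example (role names are placeholders):

$ ansible-galaxy init my-role                  # scaffold a new role with the standard directory layout
$ ansible-galaxy install <namespace>.<role>    # install a role that someone has shared on Galaxy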

Define the purpose of puppet manifests.

It is the file we use to define the configurations and resources that are to be enforced on a system or node.
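
For instance, a standalone manifest (here a hypothetical site.pp) can be applied locally with:

$ puppet apply site.pp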

Where is the puppet codedir located?

It is located in different places depending on the type of operating system you are using.

For windows you can find it in:

%PROGRAMDATA%\PuppetLabs\code (usually, C:\ProgramData\PuppetLabs\code)

For Linux systems you can find it in:

/etc/puppetlabs/code

Cloud

How is cloud integrated in software development?

Cloud services like AWS, Azure, GCP provide many different services that are useful for maintaining an infrastructure. If an organization requires so, its whole development infrastructure can be on a cloud – database, storage, source code repository, servers, security, etc.

What are the AWS services that you can use to deploy software?

You can use services such as ECS and EKS to deploy software, and you can store your Docker images on ECR.
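
A rough sketch of pushing an image to ECR (the region, account id and image name are placeholders):

$ aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin <account-id>.dkr.ecr.us-east-1.amazonaws.com
$ docker tag fun-image:latest <account-id>.dkr.ecr.us-east-1.amazonaws.com/fun-image:latest
$ docker push <account-id>.dkr.ecr.us-east-1.amazonaws.com/fun-image:latest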

DevOps Interview Questions FAQs

Q: How do I prepare for a DevOps interview?

A: Practice several questions of each kind and learn the basics of DevOps principles such as continuous delivery, automation, and rapid reaction to feedback. You should also learn about DevOps tools like Terraform, Docker, and Kubernetes, and you must know at least one CI/CD tool like Jenkins. You are advised to learn these tools on more than one cloud, such as GCP, AWS, or Azure.

Q: What are the 7 DevOps practices?

A: The seven DevOps practices are continuous development, continuous testing, continuous integration, continuous delivery, continuous deployment, continuous monitoring, and infrastructure as code.

Q: How to answer about DevOps project in an interview?

A: You have to choose a DevOps project that highlights your strengths along with a thorough understanding of the various stages of a DevOps project, including the problem statement, the process, challenges faced, and the impact of the project. You can also check some of the commonly asked questions about DevOps and keep the answers ready.

Q: What are DevOps tools?

A: Some of the notable DevOps tools include Puppet, Git, Ansible, Docker, Chef, Jenkins, Bamboo, Splunk, Nagios, Selenium, ELK Stack, Kubernetes, Gradle, Maven, Vagrant, etc.

Q: How is DevOps different from Agile?

  • A: DevOps focuses on test and delivery automation, whereas Agile focuses on iterative development.
  • Agile adds structure to the developers' work, while DevOps also accommodates unplanned work.

Q: What are the principles of DevOps?

A: There are various DevOps principles such as Continuous Integration, Automation, Continuous Delivery, Version Control, Feedback Sharing, DevOps Pipeline, Incremental Releases, etc.

Q: What is the goal of DevOps?

A: The basic goal of DevOps is to improve the flow of value from an idea to the end user. A cultural change has to happen for a company to be successful with DevOps, so culture is a vital point, but the goal of DevOps is to deliver value more efficiently and effectively.

Q: How do I monitor DevOps?

A: For DevOps continuous monitoring, you can monitor server status and health, application performance logs, user activity and behaviour, system vulnerabilities, development milestones, the network, the infrastructure, etc.

This brings us to the end of the blog on DevOps Interview Questions. We hope that you are now better equipped to attend an interview. If you wish to learn more about the concepts, then you can join Great Learning Academy’s free online courses and power ahead in your career.

Great Learning Team
Great Learning's Blog covers the latest developments and innovations in technology that can be leveraged to build rewarding careers. You'll find career guides, tech tutorials and industry news to keep yourself updated with the fast-changing world of tech and business.
