Top 50 DevOps Engineer Interview Questions & Answers

Here are the top 50 DevOps Engineer interview questions with answers.

  1. What are the popular DevOps tools that you use?
    We use the following tools in our DevOps work:
    I. Jenkins : This is an open source automation server used as a continuous integration tool. We can build, deploy and run automated tests with Jenkins.
    II. GIT : It is a version control tool used for tracking changes in files and software.
    III. Docker : This is a popular tool for containerization of services. It is very useful in Cloud based deployments.
    IV. Nagios : We use Nagios for monitoring of IT infrastructure.
    V. Splunk : This is a powerful tool for log search as well as monitoring production systems.
    VI. Puppet : We use Puppet for configuration management, so that our DevOps work is automated and reusable.
  2. What are the main benefits of DevOps?
    DevOps is a very popular trend in Software Development. Some of the main benefits of DevOps are as follows:
    I. Release Velocity : DevOps practices help in increasing the release velocity. We can release code to production more often and with more confidence.
    II. Development Cycle : With DevOps, the complete Development cycle from initial design to production
    deployment becomes shorter.
    III. Deployment Rollback : In DevOps, we plan for rolling back a deployment in case of a failure due to a bug in code or an issue in production. This gives confidence in releasing features without worrying about downtime caused by a rollback.
    IV. Defect Detection : With the DevOps approach, we can catch defects much earlier, before releasing to production. This improves the quality of the software.
    V. Recovery from Failure : In case of a failure, we can recover very fast with DevOps process.
    VI. Collaboration : With DevOps, collaboration between development and operations professionals increases.
    VII. Performance-oriented : With DevOps, an organization follows a performance-oriented culture in which teams become more productive and more innovative.
  3. What is the typical DevOps workflow you use in your organization?
    The typical DevOps workflow in our organization is as follows:
    I. We use Atlassian Jira for writing requirements and tracking tasks.
    II. Based on the Jira tasks, developers check code into the GIT version control system.
    III. The code checked into GIT is built by using Apache Maven.
    IV. The build process is automated with Jenkins.
    V. During the build process, automated tests run to validate the code checked in by developer.
    VI. Code built on Jenkins is sent to organization’s Artifactory.
    VII. Jenkins automatically picks the built libraries from Artifactory and deploys them to Production.
    VIII. During Production deployment, Docker images are used to deploy the same code on multiple hosts.
    IX. Once code is deployed to Production, we use Nagios to monitor the health of production servers.
    X. Splunk based alerts inform us of any issues or exceptions in production.
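    As a rough illustration, the shell commands below sketch the steps such a Jenkins job automates; the repository URL, image name and registry host are hypothetical placeholders.
      # Fetch the code checked into GIT and build it with Maven
      git clone https://git.example.com/myteam/myapp.git && cd myapp
      mvn clean verify          # compiles the code and runs the automated tests
      # Publish the built artifact to Artifactory
      # (assumes distributionManagement is configured in pom.xml)
      mvn deploy
      # Package the same code into a Docker image for multi-host deployment
      docker build -t registry.example.com/myteam/myapp:1.0.0 .
      docker push registry.example.com/myteam/myapp:1.0.0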
  4. How do you take DevOps approach with Amazon Web Services?
    Amazon Web Services (AWS) provides many tools and features to deploy and manage applications in AWS. In DevOps, we treat infrastructure as code. We mainly use the following two services from AWS for DevOps:
    I. CloudFormation : We use AWS CloudFormation to create and deploy AWS resources by using templates. We can describe our dependencies and pass special parameters in these templates. CloudFormation can read these templates and deploy the application and resources in AWS cloud.
    II. OpsWorks : AWS provides another service called OpsWorks that is used for configuration management by utilizing the Chef framework. We can automate server configuration, deployment and management by using OpsWorks. It helps in managing EC2 instances in AWS as well as any on-premises servers.
  5. How will you run a script automatically when a developer commits a
    change into GIT?
    GIT provides a feature to execute custom scripts when certain events occur in GIT. This feature is called hooks. We can write two types of hooks:
    I. Client-side hooks
    II. Server-side hooks
    For this case, we can write a client-side post-commit hook. In this hook we can add the custom script or commands that we want to run automatically after each commit, as shown in the sketch below.
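    As a minimal sketch, a client-side post-commit hook is just an executable script saved at .git/hooks/post-commit inside the repository; the message printed below is only an example.
      #!/bin/sh
      # .git/hooks/post-commit -- runs automatically after every commit.
      # Make it executable with: chmod +x .git/hooks/post-commit
      echo "Commit $(git rev-parse --short HEAD) recorded, running post-commit script..."
      # Add any custom commands here, e.g. a notification or a lint run.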
  6. What are the main features of AWS OpsWorks Stacks?
    Some of the main features of AWS OpsWorks Stacks are as follows:
    I. Server Support : With AWS OpsWorks Stacks, we can automate operational tasks on any server in AWS as well as in our own data center.
    II. Scalable Automation : We get automated scaling support with AWS OpsWorks Stacks. Each new instance in AWS can read its configuration from OpsWorks. It can even respond to system events in the same way as other instances do.
    III. Dashboard : We can create dashboards in OpsWorks to display the status of all the stacks in AWS.
    IV. Configuration as Code : AWS OpsWorks Stacks is built on the principle of “Configuration as Code”. We can define and maintain configurations like application source code. The same configuration can be replicated on multiple servers and environments.
    V. Application Support : OpsWorks supports almost all kinds of applications, so it is universal in nature.
  7. How does CloudFormation work in AWS?
    AWS CloudFormation is used for deploying AWS resources.
    In CloudFormation, we first have to create a template for a resource. A template is a simple text file that contains information about a stack on AWS. A stack is a collection of AWS resources that we want to deploy together as a group.
    Once the template is ready and submitted to AWS, CloudFormation will create all the resources in the template. This helps in automation of building new environments in AWS.
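    As a minimal sketch, assuming the AWS CLI is configured and a template file named my-stack.yaml exists (both names are hypothetical), a stack can be deployed as follows:
      # Check the template for syntax errors before using it
      aws cloudformation validate-template --template-body file://my-stack.yaml
      # Create all the resources described in the template as one stack
      aws cloudformation create-stack \
        --stack-name my-app-stack \
        --template-body file://my-stack.yaml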
  8. What is CICD in DevOps?
    CICD stands for Continuous Integration and Continuous Delivery. These are two different concepts that are complementary to each other.
    Continuous Integration (CI) : In CI, all the developers' work is merged into the main branch several times a day. This helps in reducing integration problems. In CI, we try to minimize the duration for which a branch remains checked out. With CI, a developer gets early feedback on new code added to the main repository.
    Continuous Delivery (CD) : In CD, a software team plans to deliver software in short cycles. They perform development, testing and release in such a short time that incremental changes can be easily delivered to production. In CD, as DevOps engineers we create a repeatable deployment process that helps achieve the objective of Continuous Delivery.
  9. What are the best practices of Continuous Integration (CI)?
    Some of the best practices of Continuous Integration (CI) are as follows:
    I. Build Automation : In CI, we create a build environment in which the complete build can be triggered with a single command. This automation extends all the way up to deployment to the Production environment (see the sketch after this list).
    II. Main Code Repository : In CI, we maintain a main branch in code repository that stores all the Production
    ready code. This is the branch that we can deploy to Production any time.
    III. Self-testing build : Every build in CI should be self-tested. It means with every build there is a set of tests that
    runs to ensure that changes are of high quality.
    IV. Every day commits to baseline : Developers commit all of their changes to the baseline every day. This ensures that there is no big pileup of code waiting to be integrated with the main repository for a long time.
    V. Build every commit to baseline: With Automated Continuous Integration, every time a commit is made into baseline, a build is triggered. This helps in confirming that every change integrates correctly.
    VI. Fast Build Process : One of the requirements of CI is to keep the build process fast so that we can quickly identify any problem.
    VII. Production-like environment testing : In CI, we maintain a production-like environment, also known as the pre-production or staging environment, which is very close to the Production environment. We perform testing in this environment to check for any integration issues.
    VIII. Publish Build Results : We publish build results on a common site so that everyone can see these and take corrective actions.
    IX. Deployment Automation : The deployment process is automated to the extent that in a build process we can add the step of deploying the code to a test environment. On this test environment all the stakeholders can access and test the latest delivery.
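    As a minimal sketch of the single-command build in point I, the script below wraps compile, test and package into one command; the script name is hypothetical and Maven is assumed as the build tool.
      #!/bin/sh
      # build.sh -- a single command triggers the complete build
      set -e            # fail fast: stop at the first failing step
      mvn clean verify  # compile the code and run the self-testing build
      mvn package       # produce the deployable artifact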
  10. What are the benefits of Continuous Integration (CI)?
    The benefits of Continuous Integration (CI) are as follows:
    I. CI makes the current build constantly available for testing, demo and release purposes.
    II. With CI, developers write modular code that works well with frequent code check-ins.
    III. In case of a unit test failure or bug, a developer can easily revert to the last bug-free state of the code.
    IV. There is drastic reduction in chaos on release day with CI practices.
    V. With CI, we can detect Integration issues much earlier in the process.
    VI. Automated testing is one very useful side effect of implementing CI.
    VII. All the stakeholders including business partners can see the small changes deployed into pre-production environment. This provides early feedback on the changes to software.
    VIII. Automated CI and testing generate metrics like code coverage and code complexity that help in improving the development process.
  11. What are the options for security in Jenkins?
    In Jenkins, it is very important to make the system secure by setting up user authentication and authorization. To do this, we have to configure the following:
    I. First we have to set up the Security Realm. We can integrate Jenkins with LDAP server to create user authentication.
    II. Second part is to set the authorization for users. This determines which user has access to what resources.
    In Jenkins, some of the options to set up security are as follows:
    I. We can use Jenkins’ own user database.
    II. We can use the LDAP plugin to integrate Jenkins with an LDAP server.
    III. We can also set up Matrix-based security in Jenkins.
  12. What are the main benefits of Chef?
    Chef is an automation tool for keeping infrastructure as code. It has many benefits. Some of these are as follows:
    I. Cloud Deployment : We can use Chef to perform automated deployment in Cloud environment.
    II. Multi-cloud support : With Chef we can even use multiple cloud providers for our infrastructure.
    III. Hybrid Deployment : Chef supports both Cloud based as well as datacenter-based infrastructure.
    IV. High Availability : With Chef automation, we can create a highly available environment. In case of hardware failure, Chef can start new servers in an automated way to keep the environment available.
  13. What is the architecture of Chef?
    Chef is composed of many components like Chef Server, Client etc. Some of the main components in Chef are as follows:
    I. Client : These are the nodes or individual users that communicate with the Chef server.
    II. Chef Manage : This is the web console that is used for interacting with the Chef server.
    III. Load Balancer : All Chef server API requests are routed through a Load Balancer. It is implemented with Nginx.
    IV. Bookshelf : This is the component that stores cookbooks. All the cookbooks are stored in a repository that is separate from the Chef server.
    V. PostgreSQL : This is the data repository for the Chef server.
    VI. Chef Server : This is the hub for configuration data. All the cookbooks and policies are stored in it. It can scale to the size of any enterprise.
  14. What is a Recipe in Chef?
    In Chef, a Recipe is the most fundamental configuration element within an organization.
    It is written in the Ruby language. It is a collection of resources defined by using patterns.
    A Recipe is stored in a Cookbook and it may have dependencies on other Recipes.
    We can tag a Recipe to create some kind of grouping. We have to add a Recipe to the run-list before it can be used by chef-client. Recipes are always executed in the order specified in the run-list.
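    As a minimal sketch, a Recipe in a run-list can be applied on a node with chef-client; the cookbook and recipe names here are hypothetical.
      # Apply one recipe by overriding the node's run-list for this run
      chef-client --override-runlist 'recipe[my_cookbook::my_recipe]'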
  15. What are the main benefits of Ansible?
    Ansible is a powerful tool for IT automation in large-scale and complex deployments. It increases the productivity of the team. Some of the main benefits of Ansible are as follows:
    I. Productivity : It helps in delivering and deploying with speed. It increases productivity in an organization.
    II. Automation : Ansible provides very good options for automation. With automation, people can focus on delivering smart solutions.
    III. Large-scale : Ansible can be used in small as well as very large-scale organizations.
    IV. Simple DevOps : With Ansible, we can write automation in a human-readable language. This simplifies the task of DevOps.
  16. What are the main use cases of Ansible?
    Some of the popular use cases of Ansible are as follows:
    I. App Deployment : With Ansible, we can deploy apps in a reliable and repeatable way.
    II. Configuration Management : Ansible supports the automation of configuration management across multiple environments.
    III. Continuous Delivery : We can release updates with zero downtime with Ansible.
    IV. Security : We can implement complex security policies with Ansible.
    V. Compliance : Ansible helps in verifying that an organization’s systems comply with rules and regulations.
    VI. Provisioning : We can provision new systems and resources for other users with Ansible.
    VII. Orchestration : Ansible can be used in orchestration of complex deployment in a simple way.
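    As a minimal sketch of how these use cases look on the command line, assuming a hypothetical inventory file and playbook:
      # Ad-hoc check that all managed hosts are reachable
      ansible all -i inventory.ini -m ping
      # Apply the desired configuration described in a playbook
      ansible-playbook -i inventory.ini site.yml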
  17. What is Docker Hub?
    Docker Hub is a cloud-based registry. We can use Docker Hub to link code repositories. We can even build images and store them in Docker Hub. It also provides links to Docker Cloud to deploy the images to our hosts. Docker Hub is a central repository for container image discovery, distribution, change management, workflow automation and team collaboration.
  18. What is your favorite scripting language for DevOps?
    In DevOps, we use different scripting languages for different purposes. There is no single language that can work in all the scenarios. Some of the popular scripting languages that we use are as follows:
    I. Bash : On Unix based systems we use Bash shell scripting for automating tasks.
    II. Python : For complicated programming and large modules we use Python. We can easily use a wide variety of standard libraries with Python.
    III. Groovy : This is a Java-based scripting language. We need a JVM installed in the environment to use Groovy. It is very powerful and provides rich scripting features.
    IV. Perl : This is another language that is very useful for text parsing. We use it in web applications.
  19. What is multi-factor authentication?
    In security implementation, we use multi-factor authentication (MFA). In MFA, a user is authenticated by multiple means before giving access to a resource or service. It is different from simple user/password-based authentication. The most popular implementation of MFA is Two-factor authentication. In most of the organizations, we use username/password and an RSA token as two factors for authentication.
    With MFA, the system becomes more secure and much harder to compromise.
  20. What are the main benefits of Nagios?
    Nagios is open source software to monitor systems, networks and infrastructure. The main benefits of Nagios are as follows:
    I. Monitor : A DevOps engineer can configure Nagios to monitor IT infrastructure components, system metrics and network protocols.
    II. Alert : Nagios sends alerts when a critical component in the infrastructure fails.
    III. Response : The DevOps team acknowledges alerts and takes corrective actions.
    IV. Report : Periodically Nagios can publish/send reports on outages, events and SLAs etc.
    V. Maintenance: During maintenance windows, we can also disable alerts.
    VI. Planning : Based on past data, Nagios helps in infrastructure planning and upgrades.
  21. What is State Stalking in Nagios?
    State Stalking is a very useful feature in Nagios. Though not all users use it all the time, it is very helpful when we want to investigate an issue. When we enable stalking on a host, Nagios monitors the state of the host very carefully and logs any changes in the state. This helps us identify what changes might be causing an issue on the host.
  22. What are the main features of Nagios?
    Some of the main features of Nagios are as follows:
    I. Visibility : Nagios provides a centralized view of the entire IT infrastructure.
    II. Monitoring : We can monitor all the mission critical infrastructure components with Nagios.
    III. Proactive Planning : With Capacity Planning and Trending we can proactively plan to scale up or scale down the infrastructure.
    IV. Extendable : Nagios can be extended with third-party tools through its APIs.
    V. Multi-tenant : Nagios supports a multi-tenant architecture.
  23. What is Puppet?
    Puppet Enterprise is a DevOps software platform that is used for automation of infrastructure operations. It runs on Unix as well as on Windows. We can define system configuration by using Puppet’s language or Ruby DSL. The system configuration described in Puppet’s language can be distributed to a target system by using REST API calls.
  24. What is the architecture of Puppet?
    Puppet is open-source software. It is based on a Client-server architecture and is a model-driven system. The client is also called the Agent, and the server is called the Master.
    It has the following architectural components:
    I. Configuration Language : Puppet provides a language that is used to configure Resources. We have to specify what Action has to be applied to which Resource.
    The Action has three items for each Resource: type, title and a list of attributes of the resource. Puppet code is written in Manifest files.
    II. Resource Abstraction : We can create Resource Abstraction in Puppet so that we can configure resources on different platforms. The Puppet agent uses Facter for passing information about an environment to the Puppet server. Facter holds information about the IP, hostname, OS etc. of the environment.
    III. Transaction : In Puppet, the Agent sends the facts collected by Facter to the Master. The Master sends back the catalog to the Agent. The Agent applies any configuration changes to the system. Once all changes are applied, the result is sent back to the Master (see the sketch below).
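    As a minimal sketch, the transaction described in point III can be triggered manually on an agent node: the agent sends facts, receives the catalog from the Master, applies it, and reports the result.
      # Run one agent transaction in the foreground with verbose output
      puppet agent --test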
  25. What are the main use cases of Puppet Enterprise?
    We can use Puppet Enterprise for following scenarios:
    I. Node Management: We can manage a large number of nodes with Puppet.
    II. Code Management: With Puppet we can define Infrastructure as code. We can review, deploy, and test the environment configuration for Development, Testing and Production environments.
    III. Reporting & Visualization : Puppet provides graphical tools to visualize the exact status of the infrastructure configuration.
    IV. Provisioning Automation: With Puppet we can automate deployment and creation of new servers and resources. So, users and business can get their infrastructure requirements completed very fast with Puppet.
    V. Orchestration: For a large Cluster of nodes, we can orchestrate the complete process by using Puppet. It can follow the order in which we want to deploy the infrastructure environments.
    VI. Automation of Configuration: With Configuration automation, the chances of manual errors are reduced. The process becomes more reliable with this.
  26. What is the use of Kubernetes?
    We use Kubernetes for automation of large-scale deployment of containerized applications. It is an open-source system based on the concepts behind Google’s process for deploying millions of containers. It can be used on cloud, on-premises datacenter and hybrid infrastructure. In Kubernetes, we can create a cluster of servers that are connected to work as a single unit. We can deploy a containerized application to the cluster without specifying a machine name. We have to package applications in such a way that they do not depend on a specific host.
  27. What is the architecture of Kubernetes?
    The architecture of Kubernetes consists of following components:
    Master : There is a master node that is responsible for managing the cluster. The Master performs the following functions in a cluster:
    I. Scheduling Applications
    II. Maintaining desired state of applications
    III. Scaling applications
    IV. Applying updates to applications
    Nodes : A Node in Kubernetes is responsible for running an application. A Node can be a Virtual Machine or a physical computer in the cluster. On each Node there is software called Kubelet, which is used for managing the node and communicating with the Master node in the cluster. Nodes use the Kubernetes API to communicate with the Master. When we deploy an application on Kubernetes, we request the Master to start application containers on Nodes.
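    As a minimal sketch of deploying an application to a cluster without naming any machine, assuming a hypothetical image name:
      # Ask the Master to run the containerized application on the cluster
      kubectl create deployment myapp --image=registry.example.com/myapp:1.0.0
      # Scale the application to three instances spread across the Nodes
      kubectl scale deployment myapp --replicas=3
      # See which Nodes the application containers were scheduled on
      kubectl get pods -o wide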
  28. How does Kubernetes provide high availability of applications in a
    Cluster?
    In a Kubernetes cluster, there is a Deployment Controller. This controller monitors the application instances created by Kubernetes in a cluster. Once an instance, or the node hosting it, goes down, the Deployment Controller starts a replacement instance on another node. This is a self-healing mechanism in Kubernetes to provide high availability of applications. Therefore, in a Kubernetes cluster, the Deployment Controller is responsible for starting instances as well as replacing them in case of a failure.
  29. Why Automated Testing is a must requirement for DevOps?
    In the DevOps approach, we release software to production with high frequency. We have to run tests to gain confidence in the quality of software deliverables.
    Running tests manually is a time-consuming process. Therefore, we first prepare automated tests and then deliver software. This ensures that we catch any defects early in the process.
  30. What is Chaos Monkey in DevOps?
    Chaos Monkey is a concept made popular by Netflix. With Chaos Monkey, we intentionally shut down services or create failures. By failing one or more services, we test the reliability and recovery mechanisms of the Production architecture. It checks whether our applications and deployments have a survival strategy built into them or not.
  31. How do you perform Test Automation in DevOps?
    We use Jenkins to create automated flows that run automation tests. The first part of test automation is to develop a test strategy and test cases. Once automation test cases are ready for an application, we plug them into each build run. In each build we run unit tests, integration tests and functional tests. With a Jenkins job, we can automate all these tasks. Once all the automated tests pass, we consider the build green. This helps the deployment and release processes build confidence in the application software.
  32. What are the main services of AWS that you have used?
    We use following main services of AWS in our environment:
    I. EC2 : This is the Elastic Compute Cloud by Amazon. It is used for providing compute capacity to a system. We can use it in place of standalone servers. We can deploy different kinds of applications on EC2.
    II. S3 : We use S3 in Amazon for our storage needs.
    III. DynamoDB : We use DynamoDB in AWS for storing data in NoSQL database form.
    IV. Amazon CloudWatch : We use CloudWatch to monitor our application in Cloud.
    V. Amazon SNS: We use Simple Notification Service to inform users about any issues in Production environment.
  33. Why GIT is considered better than CVS for version control system?
    GIT is a distributed system. In GIT, any person can create their own branch and start checking in code. Once the code is tested, it is merged into the main GIT repo. In between, Dev, QA and Product can validate the implementation of that code.
    In CVS, there is a centralized system that maintains all the commits and changes.
    GIT is open source software and there are plenty of extensions in GIT for use by our teams.
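    As a minimal sketch of this branching flow, with hypothetical branch and remote names:
      git checkout -b feature/login        # create a personal branch and work on it
      git add . && git commit -m "Implement login"
      git push origin feature/login        # share the branch for Dev/QA validation
      # Once the code is tested, merge it into the main GIT repo
      git checkout main
      git merge feature/login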
  34. What is the difference between a Container and a Virtual Machine?
    We need to select an Operating System (OS) to get a specific Virtual Machine (VM). A VM provides a full OS to an application running in a virtualized environment.
    A Container uses the APIs of an Operating System (OS) to provide a runtime environment to an application. A Container is very lightweight in comparison with a VM. A VM provides a higher level of security than a Container, since a Container just provides the APIs that are required by the application.
  35. What is Serverless architecture?
    Serverless Architecture is a term that refers to following:
    I. An Application that depends on a third-party service.
    II. An Application in which Code is run on ephemeral containers.
    In AWS, Lambda is a popular service to implement Serverless architecture.
    Another concept in Serverless Architecture is to treat code as a service, or Function as a Service (FaaS). We just write code that can be run in any environment or on any server, without the need to specify which server should run it.
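    As a minimal sketch of FaaS with AWS Lambda, note that no server is specified anywhere in the command; the function name, role ARN, handler and zip file below are hypothetical.
      aws lambda create-function \
        --function-name my-function \
        --runtime python3.12 \
        --role arn:aws:iam::123456789012:role/my-lambda-role \
        --handler app.handler \
        --zip-file fileb://function.zip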
  36. What are the main principles of DevOps?
    DevOps is different from Technical Operations. It has following main principles:
    I. Incremental : In DevOps, we aim to release software to production incrementally. We do releases to production more often than the Waterfall approach of one large release.
    II. Automated : To enable us to make releases more often, we automate operations from code check-in to deployment in Production.
    III. Collaborative : DevOps is not only the responsibility of the Operations team. It is a collaborative effort of Dev, QA, Release and DevOps teams.
    IV. Iterative: DevOps is based on Iterative principle of using a process that is repeatable. But with each iteration we aim to make the process more efficient and better.
    V. Self-Service: In DevOps, we automate things and give self-service options to other teams so that they are empowered to deliver the work in their domain.
  37. Are you more Dev or more Ops?
    This is a tricky question. DevOps is a new concept, and in any organization the maturity of DevOps varies from highly Operations-oriented to highly DevOps-oriented. In some projects, teams are very mature and practice DevOps in its true form.
    In some projects, teams rely more on the Operations team. As a DevOps person, I give first priority to the needs of the organization and project. At times I may have to perform a lot of operations work. But with each iteration, I aim to bring DevOps changes incrementally to the organization. Over time, the organization/project starts seeing the results of DevOps practices and embraces them fully.
  38. What is a REST service?
    REST is also known as Representational State Transfer. A REST service is a simple software function that is made available over the HTTP protocol. It is a lightweight service that is widely usable due to the popularity of HTTP. Since REST is lightweight, it gives very good performance in a software system. It is also one of the foundations for creating highly scalable systems that provide a service to a large number of clients. Another key feature of a REST service is that, as long as the interface is kept the same, we can change the underlying implementation. For example, clients of a REST service can keep calling the same service while we change the implementation from PHP to Java.
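    As a minimal sketch, a REST service is called with a plain HTTP request; the URL is hypothetical, and the client does not know or care whether the implementation behind it is PHP or Java.
      curl -X GET https://api.example.com/users/42 \
           -H "Accept: application/json"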
  39. What are the Three Ways of DevOps?
    Three Ways of DevOps refers to three basic principles of DevOps culture. These are as follows:
    I. The First Way: Systems Thinking : In this principle, we see DevOps as a flow of work from left to right: the time taken from code check-in to the feature being released to the end customer. In DevOps culture, we try to identify the bottlenecks in this flow.
    II. The Second Way: Feedback Loops : Whenever there is an issue in production, it is feedback about the whole development and deployment process. We try to make the feedback loop more efficient so that teams can get feedback much faster. It is a way of catching a defect much earlier in the process than it being reported by a customer.
    III. The Third Way: Continuous Learning : We make use of the first and second way principles to keep making improvements in the overall process. This is the third principle, in which over time we make the process and our operations highly efficient, automated and error-free by continuously improving them.
  40. How do you apply DevOps principles to make system Secure?
    Security of a system is one of the most important goals for an organization. We use the following ways to apply DevOps to security:
    I. Automated Security Testing : We automate and integrate security testing techniques like software penetration testing and fuzz testing into the software development process.
    II. Early Security Checks : We ensure that teams know about security concerns at the beginning of a project, rather than at the end of delivery. This is achieved by conducting security trainings and knowledge-sharing sessions.
    III. Standard Process : In DevOps, we try to follow a standard deployment and development process that has already gone through security audits. This helps in minimizing the introduction of new security loopholes due to changes in the process.
  41. What is Self-testing Code?
    Self-testing Code is an important feature of DevOps culture. In DevOps culture, development team members are expected to write self-testing code. It means we have to write code along with the tests that can test this code. Once the test passes, we
    feel confident to release the code. If we get an issue in production, we first write an automation test to validate that the issue happens in current release. Once the
    issue in release code is fixed, we run the same test to validate that the defect is not there. With each release we keep running these tests so that the issue does not appear anymore. One of the techniques of writing Self-testing code is Test Driven Development (TDD).
  42. What is a Deployment Pipeline?
    A Deployment Pipeline is an important concept in Continuous Delivery. In a Deployment Pipeline, we break the build process into distinct stages. In each stage we get the feedback needed to move on to the next stage. It is a collaborative effort between the various groups involved in delivering software. Often the first stage in a Deployment Pipeline is compiling the code and converting it into binaries.
    After that we run the automated tests. Depending on the scenario, there are stages like performance testing, security check, usability testing etc in a Deployment Pipeline.
    In DevOps, our aim is to automate all the stages of Deployment Pipeline. With a smooth-running Deployment Pipeline, we can achieve the goal of Continuous Delivery.
  43. What are the main features of Docker Hub?
    Docker Hub provides following main features:
    I. Image Repositories: In Docker Hub we can push, pull, find and manage Docker Images. It is a big library that has images from community, official as well as private sources.
    II. Automated Builds : We can use Docker Hub to automatically build new images when changes are pushed to the image’s source code repository.
    III. Webhooks : With Webhooks in Docker Hub, we can trigger actions in other systems when an image is pushed to a repository.
    IV. Github/Bitbucket integration : Docker Hub also provides integration with Github and Bitbucket systems.
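    As a minimal sketch of the basic push/pull workflow against Docker Hub, with hypothetical account and image names:
      docker login                                   # authenticate against Docker Hub
      docker tag myapp:1.0.0 myaccount/myapp:1.0.0   # name the image for the registry
      docker push myaccount/myapp:1.0.0              # publish it to Docker Hub
      docker pull myaccount/myapp:1.0.0              # fetch it from any other host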
  44. What are the security benefits of using Container based system?
    Some of the main security benefits of using a Container based system are as follows:
    I. Segregation : In a Container-based system, we segregate applications into different containers. Each application may be running on the same host but in a separate container. Each application has access only to the ports, files and other resources that are provided to it by the container.
    II. Transient : In a Container-based system, each application is considered a transient system. This is better than a static system with a fixed environment that can be exposed over time.
    III. Control : We use repeatable scripts to create the containers. This gives us tight control over the software application that we want to deploy and run. It also reduces the risk of unwanted changes in setup that can cause security loopholes.
    IV. Security Patch : In a Container-based system, we can deploy security patches on multiple containers in a uniform way. It is also easier to patch a Container with an application update.
  45. How many heads can you create in a GIT repository?
    There can be any number of heads in a GIT repository. By default, there is one head known as HEAD in each repository in GIT.
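    As a minimal sketch, every branch tip is a head, so creating branches creates more heads; the branch names here are hypothetical.
      git branch feature-a
      git branch feature-b
      git show-ref --heads   # lists all the heads in the repository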
  46. What is a Passive check in Nagios?
    In Nagios, we can monitor hosts and services by active checks. In addition, Nagios also supports Passive checks that are initiated by external applications. The results of Passive checks are submitted to Nagios. There are two main use cases of Passive checks:
    I. We use Passive checks to monitor asynchronous services that do not give a positive result with Active checks at regular intervals of time.
    II. We can use Passive checks to monitor services or applications that are located behind a firewall.
  47. What is a Docker container?
    A Docker Container is a lightweight system that can be run on a Linux operating system or a virtual machine. It is a package of an application and its related dependencies that can be run independently. Since a Docker Container is very lightweight, multiple containers can run simultaneously on a single server or virtual machine. With a Docker Container, we can create an isolated system with restricted services and processes. A Container has a private view of the operating system: it has its own process ID space, file system, and network interface. Multiple Docker Containers can share the same Kernel.
  48. How will you remove an image from Docker?
    We can use the docker rmi command to delete an image from our local system.
    The exact command is: docker rmi <image-id>
    If we want to find the IDs of all the Docker images in our local system, we can use the docker images command:
      docker images
    If we want to remove a Docker container, then we use the docker rm command:
      docker rm <container-id>
  49. What are the common use cases of Docker?
    Some of the common use cases of Docker are as follows:
    I. Setting up Development Environment : We can use Docker to set up the development environment with the applications on which our code is dependent.
    II. Testing Automation Setup: Docker can also help in creating the Testing Automation setup. We can setup different services and apps with Docker to create the automation-testing environment.
    III. Production Deployment: Docker also helps in implementing the Production deployment for an application. We can use it to create the exact environment and process that will be used for doing the production deployment.
  50. Can we lose our data when a Docker Container exits?
    A Docker Container has its own filesystem. An application running in a Docker Container can write to this filesystem. When the container exits, the data written to the filesystem still remains. When we restart the container, the same data can be accessed again. Only when we delete the container is the related data deleted.
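    As a minimal sketch of this behavior, with hypothetical names:
      docker run --name demo alpine sh -c 'echo hello > /data.txt'   # container runs and exits
      docker start demo            # restart the same container; its filesystem is intact
      docker cp demo:/data.txt .   # the file written earlier is still there
      docker rm demo               # deleting the container deletes the data with it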

Happy Learning!
