Linux Foundation KCNA Kubernetes and Cloud Native Associate Exam Practice Test
Kubernetes and Cloud Native Associate Questions and Answers
Which group of container runtimes provides additional sandboxed isolation and elevated security?
Options:
rune, cgroups
docker, containerd
runsc, kata
crun, cri-o
Answer:
C
Explanation:
The runtimes most associated with sandboxed isolation are gVisor’s runsc and Kata Containers, making C correct. Standard container runtimes (like containerd with runc) rely primarily on Linux namespaces and cgroups for isolation. That isolation is strong for many use cases, but it shares the host kernel, which can be a concern for multi-tenant or high-risk workloads.
gVisor (runsc) provides a user-space kernel-like layer that intercepts and mediates system calls, reducing the container’s direct interaction with the host kernel. Kata Containers takes a different approach: it runs containers inside lightweight virtual machines, providing hardware-virtualization boundaries (or VM-like isolation) while still integrating into container workflows. Both are used to increase isolation compared to traditional containers, and both can be integrated with Kubernetes through compatible CRI/runtime configurations.
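As a rough sketch of how such a sandboxed runtime is selected in Kubernetes (assuming the node's CRI runtime has already been configured with a handler named kata; the object names are placeholders):
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: sandboxed            # placeholder RuntimeClass name
handler: kata                # must match a handler configured in the node's container runtime
---
apiVersion: v1
kind: Pod
metadata:
  name: untrusted-workload   # placeholder Pod name
spec:
  runtimeClassName: sandboxed
  containers:
  - name: app
    image: nginx
Pods that omit runtimeClassName keep using the cluster's default (typically runc-based) runtime.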
The other options are incorrect for the question’s intent. “rune, cgroups” is not a meaningful pairing here (cgroups is a Linux resource mechanism, not a runtime). “docker, containerd” are commonly used container platforms/runtimes but are not specifically the “sandboxed isolation” category (containerd typically uses runc for standard isolation). “crun, cri-o” represents a low-level OCI runtime (crun) and a CRI implementation (CRI-O), again not specifically a sandboxed-isolation grouping.
So, when the question asks for the group that provides additional sandboxing and elevated security, the correct, well-established answer is runsc + Kata.
=========
What is the Kubernetes object used for running a recurring workload?
Options:
Job
Batch
DaemonSet
CronJob
Answer:
D
Explanation:
A recurring workload in Kubernetes is implemented with a CronJob, so the correct choice is D. A CronJob is a controller that creates Jobs on a schedule defined in standard cron format (minute, hour, day of month, month, day of week). This makes CronJobs ideal for periodic tasks like backups, report generation, log rotation, and cleanup tasks.
A Job (option A) is run-to-completion but is typically a one-time execution; it ensures that a specified number of Pods successfully terminate. You can use a Job repeatedly, but something else must create it each time—CronJob is that built-in scheduler. Option B (“Batch”) is not a standard workload resource type (batch is an API group, not the object name used here). Option C (DaemonSet) ensures one Pod runs on every node (or selected nodes), which is not “recurring,” it’s “always present per node.”
CronJobs include operational controls that matter in real clusters. For example, concurrencyPolicy controls what happens if a scheduled run overlaps with a previous run (Allow, Forbid, Replace). startingDeadlineSeconds can handle missed schedules (e.g., if the controller was down). History limits (successfulJobsHistoryLimit, failedJobsHistoryLimit) help manage cleanup and troubleshooting. Each scheduled execution results in a Job with its own Pods, which can be inspected with kubectl get jobs and kubectl logs.
So the correct Kubernetes object for a recurring workload is CronJob (D): it provides native scheduling and creates Jobs automatically according to the defined cadence.
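A minimal CronJob sketch illustrating these fields (names, image, and schedule are placeholders):
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-backup                 # placeholder name
spec:
  schedule: "0 2 * * *"                # every day at 02:00
  concurrencyPolicy: Forbid            # skip a run if the previous Job is still running
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: backup
            image: example/backup-tool:1.0   # placeholder image
Each time the schedule fires, the controller creates a Job from jobTemplate, and that Job creates the Pod that does the work.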
=========
Which of the following is the correct command to run an nginx deployment with 2 replicas?
Options:
kubectl run deploy nginx --image=nginx --replicas=2
kubectl create deploy nginx --image=nginx --replicas=2
kubectl create nginx deployment --image=nginx --replicas=2
kubectl create deploy nginx --image=nginx --count=2
Answer:
B
Explanation:
The correct answer is B: kubectl create deploy nginx --image=nginx --replicas=2. This uses kubectl create deployment (shorthand create deploy) to generate a Deployment resource named nginx with the specified container image. The --replicas=2 flag sets the desired replica count, so Kubernetes will create two Pod replicas (via a ReplicaSet) and keep that number stable.
Option A is incorrect because kubectl run is intended to run a Pod (older kubectl versions could generate other resources, but it is not the supported way to create a Deployment in modern kubectl). Option C is invalid syntax: the resource type must come before the name, as in kubectl create deployment nginx, not kubectl create nginx deployment. Option D uses a non-existent --count flag; the replica count is set with --replicas.
From a Kubernetes fundamentals perspective, this question tests two ideas: (1) Deployments are the standard controller for running stateless workloads with a desired number of replicas, and (2) kubectl create deployment is a common imperative shortcut for generating that resource. After running the command, you can confirm with kubectl get deploy nginx, kubectl get rs, and kubectl get pods -l app=nginx (label may vary depending on kubectl version). You’ll see a ReplicaSet created and two Pods brought up.
In production, teams typically use declarative manifests (kubectl apply -f) or GitOps, but knowing the imperative command is useful for quick labs and validation. The key is that replicas are managed by the controller, not by manually starting containers—Kubernetes reconciles the state continuously.
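For reference, a minimal declarative equivalent of the imperative command might look like this (a sketch; the label key/value is an assumption and may differ from what kubectl generates):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
Applying this with kubectl apply -f produces the same ReplicaSet-plus-two-Pods outcome described above.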
Therefore, B is the verified correct command.
=========
Which of the following cloud native proxies is used for ingress/egress in a service mesh and can also serve as an application gateway?
Options:
Frontend proxy
Kube-proxy
Envoy proxy
Reverse proxy
Answer:
C
Explanation:
Envoy Proxy is a high-performance, cloud-native proxy widely used for ingress and egress traffic management in service mesh architectures, and it can also function as an application gateway. It is the foundational data-plane component for popular service meshes such as Istio, Consul, and AWS App Mesh, making option C the correct answer.
In a service mesh, Envoy is typically deployed as a sidecar proxy alongside each application Pod. This allows Envoy to transparently intercept and manage all inbound and outbound traffic for the service. Through this model, Envoy enables advanced traffic management features such as load balancing, retries, timeouts, circuit breaking, mutual TLS, and fine-grained observability without requiring application code changes.
Envoy is also commonly used at the mesh boundary to handle ingress and egress traffic. When deployed as an ingress gateway, Envoy acts as the entry point for external traffic into the mesh, performing TLS termination, routing, authentication, and policy enforcement. As an egress gateway, it controls outbound traffic from the mesh to external services, enabling security controls and traffic visibility. These capabilities allow Envoy to serve effectively as an application gateway, not just an internal proxy.
Option A, “Frontend proxy,” is a generic term and not a specific cloud-native component. Option B, kube-proxy, is responsible for implementing Kubernetes Service networking rules at the node level and does not provide service mesh features or gateway functionality. Option D, “Reverse proxy,” is a general architectural pattern rather than a specific cloud-native proxy implementation.
Envoy’s extensibility, performance, and deep integration with Kubernetes and service mesh control planes make it the industry-standard proxy for modern cloud-native networking. Its ability to function both as a sidecar proxy and as a centralized ingress or egress gateway clearly establishes Envoy proxy as the correct and verified answer.
=========
How does dynamic storage provisioning work?
Options:
A user requests dynamically provisioned storage by including an existing StorageClass in their PersistentVolumeClaim.
An administrator creates a StorageClass and includes it in their Pod YAML definition file without creating a PersistentVolumeClaim.
A Pod requests dynamically provisioned storage by including a StorageClass and the Pod name in their PersistentVolumeClaim.
An administrator creates a PersistentVolume and includes the name of the PersistentVolume in their Pod YAML definition file.
Answer:
A
Explanation:
Dynamic provisioning is the Kubernetes mechanism where storage is created on-demand when a user creates a PersistentVolumeClaim (PVC) that references a StorageClass, so A is correct. In this model, the user does not need to pre-create a PersistentVolume (PV). Instead, the StorageClass points to a provisioner (typically a CSI driver) that knows how to create a volume in the underlying storage system (cloud disk, SAN, NAS, etc.). When the PVC is created with a storageClassName referencing that StorageClass, the provisioner automatically creates a matching PV and binds it to the claim.
This is why option B is incorrect: you do not put a StorageClass “in the Pod YAML” to request provisioning. Pods reference PVCs, not StorageClasses directly. Option C is incorrect because the PVC does not need the Pod name; binding is done via the PVC itself. Option D describes static provisioning: an admin pre-creates PVs and users claim them by creating PVCs that match the PV (capacity, access modes, selectors). Static provisioning can work, but it is not dynamic provisioning.
Under the hood, the StorageClass can define parameters like volume type, replication, encryption, and binding behavior (e.g., volumeBindingMode: WaitForFirstConsumer to delay provisioning until the Pod is scheduled, ensuring the volume is created in the correct zone). Reclaim policies (Delete/Retain) define what happens to the underlying volume after the PVC is deleted.
In cloud-native operations, dynamic provisioning is preferred because it improves developer self-service, reduces manual admin work, and makes scaling stateful workloads easier and faster. The essence is: PVC + StorageClass → automatic PV creation and binding.
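A minimal sketch of the two objects involved (the class name and CSI driver string are placeholders; real values depend on your storage platform):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd                        # placeholder class name
provisioner: csi.example.vendor.com     # placeholder CSI driver
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim                      # placeholder claim name
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: fast-ssd
  resources:
    requests:
      storage: 10Gi
Creating the PVC is all the user does; the provisioner named in the StorageClass creates and binds the PV.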
=========
What is Serverless computing?
Options:
A computing method of providing backend services on an as-used basis.
A computing method of providing services for AI and ML operating systems.
A computing method of providing services for quantum computing operating systems.
A computing method of providing services for cloud computing operating systems.
Answer:
A
Explanation:
Serverless computing is a cloud execution model where the provider manages infrastructure concerns and you consume compute as a service, typically billed based on actual usage (requests, execution time, memory), which matches A. In other words, you deploy code (functions) or sometimes containers, configure triggers (HTTP events, queues, schedules), and the platform automatically provisions capacity, scales it up/down, and handles much of availability and fault tolerance behind the scenes.
From a cloud-native architecture standpoint, “serverless” doesn’t mean there are no servers; it means developers don’t manage servers. The platform abstracts away node provisioning, OS patching, and much of runtime scaling logic. This aligns with the “as-used basis” phrasing: you pay for what you run rather than maintaining always-on capacity.
It’s also useful to distinguish serverless from Kubernetes. Kubernetes automates orchestration (scheduling, self-healing, scaling), but operating Kubernetes still involves cluster-level capacity decisions, node pools, upgrades, networking baseline, and policy. With serverless, those responsibilities are pushed further toward the provider/platform. Kubernetes can enable serverless experiences (for example, event-driven autoscaling frameworks), but serverless as a model is about a higher level of abstraction than “orchestrate containers yourself.”
Options B, C, and D are incorrect because they describe specialized or vague “operating system” services rather than the commonly accepted definition. Serverless is not specifically about AI/ML OSs or quantum OSs; it’s a general compute delivery model that can host many kinds of workloads.
Therefore, the correct definition in this question is A: providing backend services on an as-used basis.
=========
Which of the following scenarios would benefit the most from a service mesh architecture?
Options:
A few applications with hundreds of Pod replicas running in multiple clusters, each one providing multiple services.
Thousands of distributed applications running in a single cluster, each one providing multiple services.
Tens of distributed applications running in multiple clusters, each one providing multiple services.
Thousands of distributed applications running in multiple clusters, each one providing multiple services.
Answer:
D
Explanation:
A service mesh is most valuable when service-to-service communication becomes complex at large scale—many services, many teams, and often multiple clusters. That’s why D is the best fit: thousands of distributed applications across multiple clusters. In that scenario, the operational burden of securing, observing, and controlling east-west traffic grows dramatically. A service mesh (e.g., Istio, Linkerd) addresses this by introducing a dedicated networking layer (usually sidecar proxies such as Envoy) that standardizes capabilities across services without requiring each application to implement them consistently.
The common “mesh” value-adds are: mTLS for service identity and encryption, fine-grained traffic policy (retries, timeouts, circuit breaking), traffic shifting (canary, mirroring), and consistent telemetry (metrics, traces, access logs). Those features become increasingly beneficial as the number of services and cross-service calls rises, and as you add multi-cluster routing, failover, and policy management across environments. With thousands of applications, inconsistent libraries and configurations become a reliability and security risk; the mesh centralizes and standardizes these behaviors.
In smaller environments (A or C), you can often meet requirements with simpler approaches: Kubernetes Services, Ingress/Gateway, basic mTLS at the edge, and application-level libraries. A single large cluster (B) can still benefit from a mesh, but adding multiple clusters increases complexity: traffic management across clusters, identity trust domains, global observability correlation, and consistent policy enforcement. That’s where mesh architectures typically justify their additional overhead (extra proxies, control plane components, operational complexity).
So, the “most benefit” scenario is the largest, most distributed footprint—D.
=========
What is the API that exposes resource metrics from the metrics-server?
Options:
custom.k8s.io
resources.k8s.io
metrics.k8s.io
cadvisor.k8s.io
Answer:
C
Explanation:
The correct answer is C: metrics.k8s.io. Kubernetes’ metrics-server is the standard component that provides resource metrics (primarily CPU and memory) for nodes and pods. It aggregates this information (sourced from kubelet/cAdvisor) and serves it through the Kubernetes aggregated API under the group metrics.k8s.io. This is what enables commands like kubectl top nodes and kubectl top pods, and it is also a key data source for autoscaling with the Horizontal Pod Autoscaler (HPA) when scaling on CPU/memory utilization.
Why the other options are wrong:
custom.k8s.io is not the standard API group for metrics-server resource metrics. Custom metrics are typically served through the custom metrics API (commonly custom.metrics.k8s.io) via adapters (e.g., Prometheus Adapter), not metrics-server.
resources.k8s.io is not the metrics-server API group.
cadvisor.k8s.io is not exposed as a Kubernetes aggregated metrics API. cAdvisor is a component integrated into kubelet that provides container stats, but metrics-server is the thing that exposes the aggregated Kubernetes metrics API, and the canonical group is metrics.k8s.io.
Operationally, it’s important to understand the boundary: metrics-server provides basic resource metrics suitable for core autoscaling and “top” views, but it is not a full observability system (it does not store long-term metrics history like Prometheus). For richer metrics (SLOs, application metrics, long-term trending), teams typically deploy Prometheus or a managed monitoring backend. Still, when the question asks specifically which API exposes metrics-server data, the answer is definitively metrics.k8s.io.
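As an illustration of a consumer of this API, a HorizontalPodAutoscaler scaling on CPU utilization reads its data from metrics.k8s.io (a sketch; the Deployment name and thresholds are placeholders):
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa                  # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                    # placeholder Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70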
=========
In the DevOps framework and culture, who builds, automates, and offers continuous delivery tools for developer teams?
Options:
Application Users
Application Developers
Platform Engineers
Cluster Operators
Answer:
C
Explanation:
The correct answer is C (Platform Engineers). In modern DevOps and platform operating models, platform engineering teams build and maintain the shared delivery capabilities that product/application teams use to ship software safely and quickly. This includes CI/CD pipeline templates, standardized build and test automation, artifact management (registries), deployment tooling (Helm/Kustomize/GitOps), secrets management patterns, policy guardrails, and paved-road workflows that reduce cognitive load for developers.
While application developers (B) write the application code and often contribute pipeline steps for their service, the “build, automate, and offer tooling for developer teams” responsibility maps directly to platform engineering: they provide the internal platform that turns Kubernetes and cloud services into a consumable product. This is especially common in Kubernetes-based organizations where you want consistent deployment standards, repeatable security checks, and uniform observability.
Cluster operators (D) typically focus on the health and lifecycle of the Kubernetes clusters themselves: upgrades, node pools, networking, storage, cluster security posture, and control plane reliability. They may work closely with platform engineers, but “continuous delivery tools for developer teams” is broader than cluster operations. Application users (A) are consumers of the software, not builders of delivery tooling.
In cloud-native application delivery, this division of labor is important: platform engineers enable higher velocity with safety by automating the software supply chain—builds, tests, scans, deploys, progressive delivery, and rollback. Kubernetes provides the runtime substrate, but the platform team makes it easy and safe for developers to use it repeatedly and consistently across many services.
Therefore, Platform Engineers (C) is the verified correct choice.
=========
Which of the following options includes valid API versions?
Options:
alpha1v1, beta3v3, v2
alpha1, beta3, v2
v1alpha1, v2beta3, v2
v1alpha1, v2beta3, 2.0
Answer:
C
Explanation:
Kubernetes API versions follow a consistent naming pattern that indicates stability level and versioning. The valid forms include stable versions like v1, and pre-release versions such as v1alpha1, v1beta1, etc. Option C contains valid-looking Kubernetes version strings—v1alpha1, v2beta3, v2—so C is correct.
In Kubernetes, the “v” prefix is part of the standard for API versions. A stable API uses v1, v2, etc. Pre-release APIs include a stability marker: alpha (earliest, most changeable) and beta (more stable but still may change). The numeric suffix (e.g., alpha1, beta3) indicates iteration within that stability stage.
Option A is invalid because strings like alpha1v1 and beta3v3 do not match Kubernetes conventions (the v comes first, and alpha/beta are qualifiers after the version: v1alpha1). Option B is invalid because alpha1 and beta3 are missing the leading version prefix; Kubernetes API versions are not just “alpha1.” Option D includes 2.0, which looks like semantic versioning but is not the Kubernetes API version format. Kubernetes uses v2, not 2.0, for API versions.
Understanding this matters because API versions signal compatibility guarantees. Stable APIs are supported for a defined deprecation window, while alpha/beta APIs may change in incompatible ways and can be removed more easily. When authoring manifests, selecting the correct apiVersion ensures the API server accepts your resource and that controllers interpret fields correctly.
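For example, these version strings appear in the apiVersion field of a manifest (a trivial sketch; the ConfigMap is arbitrary):
apiVersion: v1                 # stable version of the core API group
kind: ConfigMap
metadata:
  name: demo-config            # placeholder name
data:
  greeting: hello
Pre-release resources follow the same pattern with a stability suffix, e.g. apiVersion: <group>/v1alpha1 or <group>/v2beta3, depending on what the cluster serves.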
Therefore, among the choices, C is the only option comprised of valid Kubernetes-style API version strings.
=========
In CNCF, who develops specifications for industry standards around container formats and runtimes?
Options:
Open Container Initiative (OCI)
Linux Foundation Certification Group (LFCG)
Container Network Interface (CNI)
Container Runtime Interface (CRI)
Answer:
A
Explanation:
The organization responsible for defining widely adopted standards around container formats and runtime specifications is the Open Container Initiative (OCI), so A is correct. OCI defines the image specification (how container images are structured and stored) and the runtime specification (how to run a container), enabling interoperability across tooling and vendors. This is foundational to the cloud-native ecosystem because it allows different build tools, registries, runtimes, and orchestration platforms to work together reliably.
Within Kubernetes and CNCF-adjacent ecosystems, OCI standards are the reason an image built by one tool can be pushed to a registry and pulled/run by many different runtimes. For example, a Kubernetes node running containerd or CRI-O can run OCI-compliant images consistently. OCI standardization reduces fragmentation and vendor lock-in, which is a core motivation in open source cloud-native architecture.
The other options are not correct for this question. CNI (Container Network Interface) is a standard for configuring container networking, not container image formats and runtimes. CRI (Container Runtime Interface) is a Kubernetes-specific interface between kubelet and container runtimes—it enables pluggable runtimes for Kubernetes, but it is not the industry standard body for container format/runtime specifications. “LFCG” is not the recognized standards body here.
In short: OCI defines the “language” for container images and runtime behavior, which is why the same image can be executed across environments. Kubernetes relies on those standards indirectly through runtimes and tooling, but the specification work is owned by OCI. Therefore, the verified correct answer is A.
=========
What is the purpose of the CRI?
Options:
To provide runtime integration control when multiple runtimes are used.
Support container replication and scaling on nodes.
Provide an interface allowing Kubernetes to support pluggable container runtimes.
Allow the definition of dynamic resource criteria across containers.
Answer:
C
Explanation:
The Container Runtime Interface (CRI) exists so Kubernetes can support pluggable container runtimes behind a stable interface, which makes C correct. In Kubernetes, the kubelet is responsible for managing Pods on a node, but it does not implement container execution itself. Instead, it delegates container lifecycle operations (pull images, create pod sandbox, start/stop containers, fetch logs, exec/attach streaming) to a container runtime through a well-defined API. CRI is that API contract.
Because of CRI, Kubernetes can run with different container runtimes—commonly containerd or CRI-O—without changing kubelet core logic. This improves portability and keeps Kubernetes modular: runtime innovation can happen independently while Kubernetes retains a consistent operational model. CRI is accessed via gRPC and defines the services and message formats kubelet uses to communicate with runtimes.
Option B is incorrect because replication and scaling are handled by controllers (Deployments/ReplicaSets) and schedulers, not by CRI. Option D is incorrect because resource criteria (requests/limits) are expressed in Pod specs and enforced via OS mechanisms (cgroups) and kubelet/runtime behavior, but CRI is not “for defining dynamic resource criteria.” Option A is vague and not the primary statement; while CRI enables runtime integration, its key purpose is explicitly to make runtimes pluggable and interoperable.
This design became even more important as Kubernetes moved away from Docker Engine integration (dockershim removal from kubelet). With CRI, Kubernetes focuses on orchestrating Pods, while runtimes focus on executing containers. That separation of responsibilities is a core container orchestration principle and is exactly what the question is testing.
So the verified answer is C.
=========
What framework does Kubernetes use to authenticate users with JSON Web Tokens?
Options:
OpenID Connect
OpenID Container
OpenID Cluster
OpenID CNCF
Answer:
A
Explanation:
Kubernetes commonly authenticates users using OpenID Connect (OIDC) when JSON Web Tokens (JWTs) are involved, so A is correct. OIDC is an identity layer on top of OAuth 2.0 that standardizes how clients obtain identity information and how JWTs are issued and validated.
In Kubernetes, authentication happens at the API server. When OIDC is configured, the API server validates incoming bearer tokens (JWTs) by checking token signature and claims against the configured OIDC issuer and client settings. Kubernetes can use OIDC claims (such as sub, email, groups) to map the authenticated identity to Kubernetes RBAC subjects. This is how enterprises integrate clusters with identity providers such as Okta, Dex, Azure AD, or other OIDC-compliant IdPs.
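As a sketch, OIDC is enabled on the API server with flags like the following (the issuer URL, client ID, and claim names below are placeholder values for an assumed identity provider):
# Fragment of the kube-apiserver command arguments (illustrative values only)
- --oidc-issuer-url=https://idp.example.com
- --oidc-client-id=kubernetes
- --oidc-username-claim=email
- --oidc-groups-claim=groups
With this in place, a kubectl client presenting a JWT from that issuer is authenticated, and the email/groups claims feed into RBAC bindings.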
Options B, C, and D are fabricated phrases and not real frameworks. Kubernetes documentation explicitly references OIDC as a supported method for token-based user authentication (alongside client certificates, bearer tokens, static token files, and webhook authentication). The key point is that Kubernetes does not “invent” JWT auth; it integrates with standard identity providers through OIDC so clusters can participate in centralized SSO and group-based authorization.
Operationally, OIDC authentication is typically paired with:
RBAC for authorization (“what you can do”)
Audit logging for traceability
Short-lived tokens and rotation practices for security
Group claim mapping to simplify permission management
So, the verified framework Kubernetes uses with JWTs for user authentication is OpenID Connect.
=========
Which API object is the recommended way to run a scalable, stateless application on your cluster?
Options:
ReplicaSet
Deployment
DaemonSet
Pod
Answer:
B
Explanation:
For a scalable, stateless application, Kubernetes recommends using a Deployment because it provides a higher-level, declarative management layer over Pods. A Deployment doesn’t just “run replicas”; it manages the entire lifecycle of rolling out new versions, scaling up/down, and recovering from failures by continuously reconciling the current cluster state to the desired state you define. Under the hood, a Deployment typically creates and manages a ReplicaSet, and that ReplicaSet ensures a specified number of Pod replicas are running at all times. This layering is the key: you get ReplicaSet’s self-healing replica maintenance plus Deployment’s rollout/rollback strategies and revision history.
Why not the other options? A Pod is the smallest deployable unit, but it’s not a scalable controller—if a Pod dies, nothing automatically replaces it unless a controller owns it. A ReplicaSet can maintain N replicas, but it does not provide the full rollout orchestration (rolling updates, pause/resume, rollbacks, and revision tracking) that you typically want for stateless apps that ship frequent releases. A DaemonSet is for node-scoped workloads (one Pod per node or subset of nodes), like log shippers or node agents, not for “scale by replicas.”
For stateless applications, the Deployment model is especially appropriate because individual replicas are interchangeable; the application does not require stable network identities or persistent storage per replica. Kubernetes can freely replace or reschedule Pods to maintain availability. Deployment strategies (like RollingUpdate) allow you to upgrade without downtime by gradually replacing old replicas with new ones while keeping the Service endpoints healthy. That combination—declarative desired state, self-healing, and controlled updates—makes Deployment the recommended object for scalable stateless workloads.
=========
When a Kubernetes Secret is created, how is the data stored by default in etcd?
Options:
As Base64-encoded strings that provide simple encoding but no actual encryption.
As plain text values that are directly stored without any obfuscation or additional encoding.
As compressed binary objects that are optimized for space but not secured against access.
As encrypted records automatically protected using the Kubernetes control plane master key.
Answer:
A
Explanation:
By default, Kubernetes Secrets are stored in etcd as Base64-encoded values, which makes option A the correct answer. This is a common point of confusion because Base64 encoding is often mistaken for encryption, but in reality, it provides no security—only a reversible text encoding.
When a Secret is defined in a Kubernetes manifest or created via kubectl, its data fields are Base64-encoded before being persisted in etcd. This encoding ensures that binary data (such as certificates or keys) can be safely represented in JSON and YAML formats, which require text-based values. However, anyone with access to etcd or the Secret object via the Kubernetes API can easily decode these values.
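A small sketch makes the point (the Secret name is a placeholder; the value shown is just the word “password” Base64-encoded):
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials           # placeholder name
type: Opaque
data:
  password: cGFzc3dvcmQ=         # base64 of "password" -- encoded, not encrypted
Anyone who can read this object (or etcd) can recover the value with a simple base64 decode.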
Option B is incorrect because Secrets are not stored as raw plaintext; they are encoded using Base64 before storage. Option C is incorrect because Kubernetes does not compress Secret data by default. Option D is incorrect because Secrets are not encrypted at rest by default. Encryption at rest must be explicitly configured using an encryption provider configuration in the Kubernetes API server.
Because of this default behavior, Kubernetes strongly recommends additional security measures when handling Secrets. These include enabling encryption at rest for etcd, restricting access to Secrets using RBAC, using short-lived ServiceAccount tokens, and integrating with external secret management systems such as HashiCorp Vault or cloud provider key management services.
Understanding how Secrets are stored is critical for designing secure Kubernetes clusters. While Secrets provide a convenient abstraction for handling sensitive data, they rely on cluster-level security controls to ensure confidentiality. Without encryption at rest and proper access restrictions, Secret data remains vulnerable to unauthorized access.
Therefore, the correct and verified answer is Option A: Kubernetes stores Secrets as Base64-encoded strings in etcd by default, which offers encoding but not encryption.
=========
What Kubernetes control plane component exposes the programmatic interface used to create, manage and interact with the Kubernetes objects?
Options:
kube-controller-manager
kube-proxy
kube-apiserver
etcd
Answer:
C
Explanation:
The kube-apiserver is the front door of the Kubernetes control plane and exposes the programmatic interface used to create, read, update, delete, and watch Kubernetes objects—so C is correct. Every interaction with cluster state ultimately goes through the Kubernetes API. Tools like kubectl, client libraries, GitOps controllers, operators, and core control plane components (scheduler and controllers) all communicate with the API server to submit desired state and to observe current state.
The API server is responsible for handling authentication (who are you?), authorization (what are you allowed to do?), and admission control (should this request be allowed and possibly mutated/validated?). After a request passes these gates, the API server persists the object’s desired state to etcd (the backing datastore) and returns a response. The API server also provides a watch mechanism so controllers can react to changes efficiently, enabling Kubernetes’ reconciliation model.
It’s important to distinguish this from the other options. etcd stores cluster data but does not expose the cluster’s primary user-facing API; it’s an internal datastore. kube-controller-manager runs control loops (controllers) that continuously reconcile resources (like Deployments, Nodes, Jobs) but it consumes the API rather than exposing it. kube-proxy is a node-level component implementing Service networking rules and is unrelated to the control-plane API endpoint.
Because Kubernetes is “API-driven,” the kube-apiserver is central: if it is unavailable, you cannot create workloads, update configurations, or even reliably observe cluster state. This is why high availability architectures prioritize multiple API server instances behind a load balancer, and why securing the API server (RBAC, TLS, audit) is a primary operational concern.
=========
Which Prometheus metric represents a single value that can go up and down?
Options:
Counter
Gauge
Summary
Histogram
Answer:
B
Explanation:
In Prometheus, a Gauge is the metric type used to represent a value that can increase and decrease over time, so B is correct. Gauges are suited for “current state” measurements such as current memory usage, number of active sessions, queue depth, temperature, or CPU usage—anything that can move up and down as the system changes.
This contrasts with a Counter (A), which is monotonically increasing (it only goes up, except when a process restarts and the counter resets to zero). Counters are ideal for totals like total HTTP requests served, total errors, or bytes sent, and you typically use rate()/irate() in PromQL to convert counters into per-second rates.
A Summary (C) and Histogram (D) are used for distributions, commonly request latency. Histograms record observations into buckets and can produce percentiles using functions like histogram_quantile(). Summaries compute quantiles on the client side and expose them directly, along with counts and sums. Neither of these is the simplest “single value that goes up and down” type.
In Kubernetes observability, Prometheus is often used to scrape metrics from cluster components (API server, kubelet) and applications. Choosing the right metric type matters operationally: use gauges for instantaneous measurements, counters for event totals, and histograms/summaries for latency distributions. That’s why Prometheus documentation and best practices emphasize understanding metric semantics—because misusing types leads to incorrect alerts and dashboards.
So for a single numeric value that can go up and down, the correct metric type is Gauge, option B.
=========
Which component of the node is responsible for running workloads?
Options:
The kubelet.
The kube-proxy.
The kube-apiserver.
The container runtime.
Answer:
D
Explanation:
The verified correct answer is D (the container runtime). On a Kubernetes node, the container runtime (such as containerd or CRI-O) is the component that actually executes containers—it creates container processes, manages their lifecycle, pulls images, and interacts with the underlying OS primitives (namespaces, cgroups) through an OCI runtime like runc. In that direct sense, the runtime is what “runs workloads.”
It’s important to distinguish responsibilities. The kubelet (A) is the node agent that orchestrates what should run on the node: it watches the API server for Pods assigned to the node and then asks the runtime to start/stop containers accordingly. Kubelet is essential for node management, but it does not itself execute containers; it delegates execution to the runtime via CRI. kube-proxy (B) handles Service traffic routing rules (or is replaced by other dataplanes) and does not run containers. kube-apiserver (C) is a control plane component that stores and serves cluster state; it is not a node workload runner.
So, in the execution chain: scheduler assigns Pod → kubelet sees Pod assigned → kubelet calls runtime via CRI → runtime launches containers. When troubleshooting “containers won’t start,” you often inspect kubelet logs and runtime logs because the runtime is the component that can fail image pulls, sandbox creation, or container start operations.
Therefore, the best answer to “which node component is responsible for running workloads” is the container runtime, option D.
=========
Which option best represents the Pod Security Standards ordered from most permissive to most restrictive?
Options:
Privileged, Baseline, Restricted
Baseline, Privileged, Restricted
Baseline, Restricted, Privileged
Privileged, Restricted, Baseline
Answer:
A
Explanation:
Pod Security Standards define a set of security profiles for Pods in Kubernetes, establishing clear expectations for how securely workloads should be configured. These standards were introduced to replace the deprecated PodSecurityPolicies (PSP) and are enforced through the Pod Security Admission controller. The standards are intentionally ordered from least restrictive to most restrictive to allow clusters to adopt security controls progressively.
The correct order from most permissive to most restrictive is: Privileged → Baseline → Restricted, which makes option A the correct answer.
The Privileged profile is the least restrictive. It allows Pods to run with elevated permissions, including privileged containers, host networking, host PID/IPC namespaces, and unrestricted access to host resources. This level is intended for trusted system components, infrastructure workloads, or cases where full access to the host is required. It offers maximum flexibility but minimal security enforcement.
The Baseline profile introduces a moderate level of security. It prevents common privilege escalation vectors, such as running privileged containers or using host namespaces, while still allowing typical application workloads to function without significant modification. Baseline is designed to be broadly compatible with most applications and serves as a reasonable default security posture for many clusters.
The Restricted profile is the most secure and restrictive. It enforces strong security best practices, such as requiring containers to run as non-root users, dropping unnecessary Linux capabilities, enforcing read-only root filesystems where possible, and preventing privilege escalation. Restricted is ideal for highly sensitive workloads or environments with strict security requirements, though it may require application changes to comply.
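These profiles are enforced per namespace through Pod Security Admission labels, for example (the namespace name is a placeholder):
apiVersion: v1
kind: Namespace
metadata:
  name: payments                                    # placeholder namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted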
Options B, C, and D are incorrect because they misrepresent the intended progression of security strictness defined in Kubernetes documentation.
According to Kubernetes documentation, the Pod Security Standards are explicitly ordered to support gradual adoption: start permissive where necessary and move toward stronger security over time. Therefore, Privileged, Baseline, Restricted is the accurate and fully verified ordering, making option A the correct answer.
=========
What is a best practice to minimize the container image size?
Options:
Use a Dockerfile.
Use multistage builds.
Build images with different tags.
Add a build.sh script.
Answer:
B
Explanation:
A proven best practice for minimizing container image size is to use multi-stage builds, so B is correct. Multi-stage builds allow you to separate the “build environment” from the “runtime environment.” In the first stage, you can use a full-featured base image (with compilers, package managers, and build tools) to compile your application or assemble artifacts. In the final stage, you copy only the resulting binaries or necessary runtime assets into a much smaller base image (for example, a distroless image or a slim OS image). This dramatically reduces the final image size because it excludes compilers, caches, and build dependencies that are not needed at runtime.
In cloud-native application delivery, smaller images matter for several reasons. They pull faster, which speeds up deployments, rollouts, and scaling events (Pods become Ready sooner). They also reduce attack surface by removing unnecessary packages, which helps security posture and scanning results. Smaller images tend to be simpler and more reproducible, improving reliability across environments.
Option A is not a size-minimization practice: using a Dockerfile is simply the standard way to define how to build an image; it doesn’t inherently reduce size. Option C (different tags) changes image identification but not size. Option D (a build script) may help automation, but it doesn’t guarantee smaller images; the image contents are determined by what ends up in the layers.
Multi-stage builds are commonly paired with other best practices: choosing minimal base images, cleaning package caches, avoiding copying unnecessary files (use .dockerignore), and reducing layer churn. But among the options, the clearest and most directly correct technique is multi-stage builds.
Therefore, the verified answer is B.
=========
Which statement best describes the role of kubelet on a Kubernetes worker node?
Options:
kubelet manages the container runtime and ensures that all Pods scheduled to the node are running as expected.
kubelet configures networking rules on each node to handle traffic routing for Services in the cluster.
kubelet monitors cluster-wide resource usage and assigns Pods to the most suitable nodes for execution.
kubelet acts as the primary API component that stores and manages cluster state information.
Answer:
A
Explanation:
The kubelet is the primary node-level agent in Kubernetes and is responsible for ensuring that workloads assigned to a worker node are executed correctly. Its core function is to manage container execution on the node and ensure that all Pods scheduled to that node are running as expected, which makes option A the correct answer.
Once the Kubernetes scheduler assigns a Pod to a node, the kubelet on that node takes over responsibility for running the Pod. It continuously watches the API server for Pod specifications that target its node and then interacts with the container runtime (such as containerd or CRI-O) through the Container Runtime Interface (CRI). The kubelet starts, stops, and restarts containers to match the desired state defined in the Pod specification.
In addition to lifecycle management, the kubelet performs ongoing health monitoring. It executes liveness, readiness, and startup probes, reports Pod and node status back to the API server, and enforces resource limits defined in the Pod specification. If a container crashes or becomes unhealthy, the kubelet initiates recovery actions such as restarting the container.
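For example, the probes the kubelet executes are declared in the Pod spec (a sketch; the image, paths, and ports are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: probed-app               # placeholder name
spec:
  containers:
  - name: app
    image: nginx
    livenessProbe:               # kubelet restarts the container if this check fails
      httpGet:
        path: /healthz           # placeholder path
        port: 80
      periodSeconds: 10
    readinessProbe:              # kubelet marks the Pod NotReady if this check fails
      httpGet:
        path: /ready             # placeholder path
        port: 80
      periodSeconds: 5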
Option B is incorrect because configuring Service traffic routing is the responsibility of kube-proxy and the cluster’s networking layer, not the kubelet. Option C is incorrect because cluster-wide resource monitoring and Pod placement decisions are handled by the kube-scheduler. Option D is incorrect because cluster state is managed by the API server and stored in etcd, not by the kubelet.
In summary, the kubelet acts as the executor and supervisor of Pods on each worker node. It bridges the Kubernetes control plane and the actual runtime environment, ensuring that containers are running, healthy, and aligned with the declared configuration. Therefore, Option A is the correct and verified answer.
=========
What is a Service?
Options:
A static network mapping from a Pod to a port.
A way to expose an application running on a set of Pods.
The network configuration for a group of Pods.
An NGINX load balancer that gets deployed for an application.
Answer:
B
Explanation:
The correct answer is B: a Kubernetes Service is a stable way to expose an application running on a set of Pods. Pods are ephemeral—IPs can change when Pods are recreated, rescheduled, or scaled. A Service provides a consistent network identity (DNS name and usually a ClusterIP virtual IP) and a policy for routing traffic to the current healthy backends.
Typically, a Service uses a label selector to determine which Pods are part of the backend set. Kubernetes then maintains the corresponding endpoint data (Endpoints/EndpointSlice), and the cluster dataplane (kube-proxy or an eBPF-based implementation) forwards traffic from the Service IP/port to one of the Pod IPs. This enables reliable service discovery and load distribution across replicas, especially during rolling updates where Pods are constantly replaced.
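A minimal ClusterIP Service sketch showing the selector model (names, labels, and ports are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: web                 # placeholder Service name
spec:
  type: ClusterIP
  selector:
    app: web                # matches Pods carrying this label
  ports:
  - port: 80                # port clients use on the Service
    targetPort: 8080        # container port on the backend Pods
Any Pod with the label app: web, whether created today or during tomorrow's rolling update, automatically becomes a backend.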
Option A is incorrect because Service routing is not a “static mapping from a Pod to a port.” It’s dynamic and targets a set of Pods. Option C is too vague and misstates the concept; while Services relate to networking, they are not “the network configuration for a group of Pods” (that’s closer to NetworkPolicy/CNI configuration). Option D is incorrect because Kubernetes does not automatically deploy an NGINX load balancer when you create a Service. NGINX might be used as an Ingress controller or external load balancer in some setups, but a Service is a Kubernetes API abstraction, not a specific NGINX component.
Services come in several types (ClusterIP, NodePort, LoadBalancer, ExternalName), but the core definition remains the same: stable access to a dynamic set of Pods. This is foundational for microservices and for decoupling clients from the churn of Pod lifecycles.
So, the verified correct definition is B.
=========
Which of these events will cause the kube-scheduler to assign a Pod to a node?
Options:
When the Pod crashes because of an error.
When a new node is added to the Kubernetes cluster.
When the CPU load on the node becomes too high.
When a new Pod is created and has no assigned node.
Answer:
D
Explanation:
The kube-scheduler assigns a node to a Pod when the Pod is unscheduled—meaning it exists in the API server but has no spec.nodeName set. The event that triggers scheduling is therefore: a new Pod is created and has no assigned node, which is option D.
Kubernetes scheduling is declarative and event-driven. The scheduler continuously watches for Pods that are in a “Pending” unscheduled state. When it sees one, it runs a scheduling cycle: filtering nodes that cannot run the Pod (insufficient resources based on requests, taints/tolerations, node selectors/affinity rules, topology spread constraints), then scoring the remaining feasible nodes to pick the best candidate. Once selected, the scheduler “binds” the Pod to that node by updating the Pod’s spec.nodeName. After that, kubelet on the chosen node takes over to pull images and start containers.
Option A (Pod crashes) does not directly cause scheduling. If a container crashes, kubelet may restart it on the same node according to restart policy. If the Pod itself is replaced (e.g., by a controller like a Deployment creating a new Pod), that new Pod will be scheduled because it’s unscheduled—but the crash event itself isn’t the scheduler’s trigger. Option B (new node added) might create more capacity and affect future scheduling decisions, but it does not by itself trigger assigning a particular Pod; scheduling still happens because there are unscheduled Pods. Option C (CPU load high) is not a scheduling trigger; scheduling is based on declared requests and constraints, not instantaneous node CPU load (that’s a common misconception).
So the correct, Kubernetes-architecture answer is D: kube-scheduler assigns nodes to Pods that are newly created (or otherwise pending) and have no assigned node.
=========
Which of the following is a valid PromQL query?
Options:
SELECT * from http_requests_total WHERE job=apiserver
http_requests_total WHERE (job="apiserver")
SELECT * from http_requests_total
http_requests_total(job="apiserver")
Answer:
D
Explanation:
Prometheus Query Language (PromQL) uses a function-and-selector syntax, not SQL. A valid query typically starts with a metric name and optionally includes label matchers in curly braces. In the simplified quiz syntax given, the valid PromQL-style selector is best represented by D: http_requests_total(job="apiserver"), so D is correct.
Conceptually, what this query means is “select time series for the metric http_requests_total where the job label equals apiserver.” In standard PromQL formatting you most often see this as: http_requests_total{job="apiserver"}. Many training questions abbreviate braces and focus on the idea of filtering by labels; the key is that PromQL uses label matchers rather than SQL WHERE clauses.
Options A and C are invalid because they use SQL (SELECT * FROM ...) which is not PromQL. Option B is also invalid because PromQL does not use the keyword WHERE. PromQL filtering is done by applying label matchers directly to the metric selector.
In Kubernetes observability, PromQL is central to building dashboards and alerts from cluster metrics. For example, you might compute rates from counters: rate(http_requests_total{job="apiserver"}[5m]), aggregate by labels: sum by (code) (...), or alert on error ratios. Understanding the selector and label-matcher model is foundational because Prometheus metrics are multi-dimensional—labels define the slices you can filter and aggregate on.
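To connect this to practice, such selectors and rate() expressions typically end up in Prometheus rule files (a sketch; the group name, alert name, code label matcher, and threshold are assumptions, while the metric and job label come from the question):
groups:
- name: apiserver-traffic                       # placeholder group name
  rules:
  - record: job:http_requests:rate5m
    expr: sum by (job) (rate(http_requests_total{job="apiserver"}[5m]))
  - alert: ApiserverHighErrorRatio              # placeholder alert
    expr: |
      sum(rate(http_requests_total{job="apiserver",code=~"5.."}[5m]))
        /
      sum(rate(http_requests_total{job="apiserver"}[5m])) > 0.05
    for: 10m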
So, within the provided options, D is the only one that follows PromQL’s metric+label-filter style and therefore is the verified correct answer.
=========
Which of the following is a responsibility of the governance board of an open source project?
Options:
Decide about the marketing strategy of the project.
Review the pull requests in the main branch.
Outline the project's “terms of engagement”.
Define the license to be used in the project.
Answer:
C
Explanation:
A governance board in an open source project typically defines how the community operates—its decision-making rules, roles, conflict resolution, and contribution expectations—so C (“Outline the project's terms of engagement”) is correct. In large cloud-native projects (Kubernetes being a prime example), clear governance is essential to coordinate many contributors, companies, and stakeholders. Governance establishes the “rules of the road” that keep collaboration productive and fair.
“Terms of engagement” commonly includes: how maintainers are selected, how proposals are reviewed (e.g., enhancement processes), how meetings and SIGs operate, what constitutes consensus, how voting works when consensus fails, and what code-of-conduct expectations apply. It also defines escalation and dispute resolution paths so technical disagreements don’t become community-breaking conflicts. In other words, governance is about ensuring the project has durable, transparent processes that outlive any individual contributor and support vendor-neutral decision making.
Option B (reviewing pull requests) is usually the responsibility of maintainers and SIG owners, not a governance board. The governance body may define the structure that empowers maintainers, but it generally does not do day-to-day code review. Option A (marketing strategy) is often handled by foundations, steering committees, or separate outreach groups, not governance boards as their primary responsibility. Option D (defining the license) is usually decided early and may be influenced by a foundation or legal process; while governance can shape legal/policy direction, the core governance responsibility is broader community operating rules rather than selecting a license.
In cloud-native ecosystems, strong governance supports sustainability: it encourages contributions, protects neutrality, and provides predictable processes for evolution. Therefore, the best verified answer is C.
=========
What is the resource type used to package sets of containers for scheduling in a cluster?
Options:
Pod
ContainerSet
ReplicaSet
Deployment
Answer:
A
Explanation:
The Kubernetes resource used to package one or more containers into a schedulable unit is the Pod, so A is correct. Kubernetes schedules Pods onto nodes; it does not schedule individual containers. A Pod represents a single “instance” of an application component and includes one or more containers that share key runtime properties, including the same network namespace (same IP and port space) and the ability to share volumes.
Pods enable common patterns beyond “one container per Pod.” For example, a Pod may include a main application container plus a sidecar container for logging, proxying, or configuration reload. Because these containers share localhost networking and volume mounts, they can coordinate efficiently without requiring external service calls. Kubernetes manages the Pod lifecycle as a unit: the containers in a Pod are started according to container lifecycle rules and are co-located on the same node.
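A sketch of the sidecar pattern with a shared volume (all names, images, and paths are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar          # placeholder name
spec:
  volumes:
  - name: shared-logs
    emptyDir: {}
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/nginx
  - name: log-shipper             # sidecar reading the same files over the shared volume
    image: busybox
    command: ["sh", "-c", "while true; do cat /logs/access.log 2>/dev/null; sleep 30; done"]
    volumeMounts:
    - name: shared-logs
      mountPath: /logs
Both containers share the Pod's IP and the shared-logs volume, and the scheduler places them together as one unit.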
Option B (ContainerSet) is not a standard Kubernetes workload resource. Option C (ReplicaSet) manages a set of Pod replicas, ensuring a desired count is running, but it is not the packaging unit itself. Option D (Deployment) is a higher-level controller that manages ReplicaSets and provides rollout/rollback behavior, again operating on Pods rather than being the container-packaging unit.
From the scheduling perspective, the PodSpec defines container images, commands, resources, volumes, security context, and placement constraints. The scheduler evaluates these constraints and assigns the Pod to a node. This “Pod as the atomic scheduling unit” is fundamental to Kubernetes architecture and explains why Kubernetes-native concepts (Services, selectors, readiness, autoscaling) all revolve around Pods.
=========
What is the default value for authorization-mode in Kubernetes API server?
Options:
--authorization-mode=RBAC
--authorization-mode=AlwaysAllow
--authorization-mode=AlwaysDeny
--authorization-mode=ABAC
Answer:
B
Explanation:
The Kubernetes API server supports multiple authorization modes that determine whether an authenticated request is allowed to perform an action (verb) on a resource. Historically, the API server’s default authorization mode was AlwaysAllow, meaning that once a request was authenticated, it would be authorized without further checks. That is why the correct answer here is B.
However, it’s crucial to distinguish “default flag value” from “recommended configuration.” In production clusters, running with AlwaysAllow is insecure because it effectively removes authorization controls—any authenticated user (or component credential) could do anything the API permits. Modern Kubernetes best practices strongly recommend enabling RBAC (Role-Based Access Control), often alongside Node and Webhook authorization, so that permissions are granted explicitly using Roles/ClusterRoles and RoleBindings/ClusterRoleBindings. Many managed Kubernetes distributions and kubeadm-based setups commonly enable RBAC by default as part of cluster bootstrap profiles, even if the API server’s historical default flag value is AlwaysAllow.
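For contrast with AlwaysAllow, here is a minimal RBAC sketch granting read-only access to Pods in one namespace (names, namespace, and user are placeholders):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader               # placeholder Role name
  namespace: dev                 # placeholder namespace
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
- kind: User
  name: jane                     # placeholder user as reported by the authenticator
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io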
So, the exam-style interpretation of this question is about the API server flag default, not what most real clusters should run. With RBAC enabled, authorization becomes granular: you can control who can read Secrets, who can create Deployments, who can exec into Pods, and so on, scoped to namespaces or cluster-wide. ABAC (Attribute-Based Access Control) exists but is generally discouraged compared to RBAC because it relies on policy files and is less ergonomic and less commonly used. AlwaysDeny is useful for hard lockdown testing but not for normal clusters.
In short: AlwaysAllow is the API server’s default mode (answer B), but RBAC is the secure, recommended choice you should expect to see enabled in almost any serious Kubernetes environment.
=========
Which statement about Ingress is correct?
Options:
Ingress provides a simple way to track network endpoints within a cluster.
Ingress is a Service type like NodePort and ClusterIP.
Ingress is a construct that allows you to specify how a Pod is allowed to communicate.
Ingress exposes routes from outside the cluster to Services in the cluster.
Answer:
D
Explanation:
Ingress is the Kubernetes API resource for defining external HTTP/HTTPS routing into the cluster, so D is correct. An Ingress object specifies rules such as hostnames (e.g., app.example.com), URL paths (e.g., /api), and TLS configuration, mapping those routes to Kubernetes Services. This provides Layer 7 routing capabilities beyond what a basic Service offers.
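A minimal Ingress sketch showing host, path, and TLS routing to a Service (the hostname, Secret, backend Service, and class name are placeholders, and an Ingress controller is assumed to be installed):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress                     # placeholder name
spec:
  ingressClassName: nginx               # assumes an NGINX ingress controller
  tls:
  - hosts:
    - app.example.com
    secretName: app-tls                 # placeholder TLS Secret
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api                   # placeholder backend Service
            port:
              number: 80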
Ingress is not a Service type (so B is wrong). Service types (ClusterIP, NodePort, LoadBalancer, ExternalName) are part of the Service API and operate at Layer 4. Ingress is a separate API object that depends on an Ingress Controller to actually implement routing. The controller watches Ingress resources and configures a reverse proxy/load balancer (like NGINX, HAProxy, or a cloud load balancer integration) to enforce the desired routing. Without an Ingress Controller, creating an Ingress object alone will not route traffic.
Option A describes endpoint tracking (that’s closer to Endpoints/EndpointSlice). Option C describes NetworkPolicy, which controls allowed network flows between Pods/namespaces. Ingress is about exposing and routing incoming application traffic from outside the cluster to internal Services.
So the verified correct statement is D: Ingress exposes routes from outside the cluster to Services in the cluster.
=========
What native runtime is Open Container Initiative (OCI) compliant?
Options:
runC
runV
kata-containers
gvisor
Answer:
AExplanation:
The Open Container Initiative (OCI) publishes open specifications for container images and container runtimes so that tools across the ecosystem remain interoperable. When a runtime is “OCI-compliant,” it means it implements the OCI Runtime Specification (how to run a container from a filesystem bundle and configuration) and/or works cleanly with OCI image formats through the usual layers (image → unpack → runtime). runC is the best-known, widely used reference implementation of the OCI runtime specification and is the low-level runtime underneath many higher-level systems.
In Kubernetes, you typically interact with a higher-level container runtime (such as containerd or CRI-O) through the Container Runtime Interface (CRI). That higher-level runtime then uses a low-level OCI runtime to actually create Linux namespaces/cgroups, set up the container process, and start it. In many default installations, containerd delegates to runC for this low-level “create/start” work.
The other options are related but differ in what they are: Kata Containers uses lightweight VMs to provide stronger isolation while still presenting a container-like workflow; gVisor provides a user-space kernel for sandboxing containers; these can be used with Kubernetes via compatible integrations, but the canonical “native OCI runtime” answer in most curricula is runC. Finally, “runV” is not a common modern Kubernetes runtime choice in typical OCI discussions. So the most correct, standards-based answer here is A (runC) because it directly implements the OCI runtime spec and is commonly used as the default low-level runtime behind CRI implementations.
=========
A platform engineer wants to ensure that a new microservice is automatically deployed to every cluster registered in Argo CD. Which configuration best achieves this goal?
Options:
Set up a Kubernetes CronJob that redeploys the microservice to all registered clusters on a schedule.
Manually configure every registered cluster with the deployment YAML for installing the microservice.
Create an Argo CD ApplicationSet that uses a Git repository containing the microservice manifests.
Use a Helm chart to package the microservice and manage it with a single Application defined in Argo CD.
Answer:
CExplanation:
Argo CD is a declarative GitOps continuous delivery tool designed to manage Kubernetes applications across one or many clusters. When the requirement is to automatically deploy a microservice to every cluster registered in Argo CD, the most appropriate and scalable solution is to use an ApplicationSet.
The ApplicationSet controller extends Argo CD by enabling the dynamic generation of multiple Argo CD Applications from a single template. One of its most powerful features is the cluster generator, which automatically discovers all clusters registered with Argo CD and creates an Application for each of them. By combining this generator with a Git repository containing the microservice manifests, the platform engineer ensures that the microservice is consistently deployed to all existing clusters—and any new clusters added in the future—without manual intervention.
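A minimal sketch of such an ApplicationSet, assuming a hypothetical Git repository; the {{name}} and {{server}} parameters are filled in by the cluster generator for each registered cluster:
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: microservice            # hypothetical name
  namespace: argocd
spec:
  generators:
  - clusters: {}                # discovers every cluster registered in Argo CD
  template:
    metadata:
      name: 'microservice-{{name}}'
    spec:
      project: default
      source:
        repoURL: https://git.example.com/org/microservice.git   # hypothetical repository
        targetRevision: main
        path: manifests
      destination:
        server: '{{server}}'    # each registered cluster's API server
        namespace: microservice
      syncPolicy:
        automated: {}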
This approach aligns perfectly with GitOps principles. The desired state of the microservice is defined once in Git, and Argo CD continuously reconciles that state across all target clusters. Any updates to the microservice manifests are automatically rolled out everywhere in a controlled and auditable manner. This provides strong guarantees around consistency, scalability, and operational simplicity.
Option A is incorrect because a CronJob introduces imperative redeployment logic and does not integrate with Argo CD’s reconciliation model. Option B is not scalable or maintainable, as it requires manual configuration for each cluster and increases the risk of configuration drift. Option D, while useful for packaging applications, still results in a single Application object and does not natively handle multi-cluster fan-out by itself.
Therefore, the correct and verified answer is Option C: creating an Argo CD ApplicationSet backed by a Git repository, which is the recommended and documented solution for multi-cluster application delivery in Argo CD.
=========
Which of the following sentences is true about container runtimes in Kubernetes?
Options:
If you let iptables see bridged traffic, you don't need a container runtime.
If you enable IPv4 forwarding, you don't need a container runtime.
Container runtimes are deprecated, you must install CRI on each node.
You must install a container runtime on each node to run pods on it.
Answer:
DExplanation:
A Kubernetes node must have a container runtime to run Pods, so D is correct. Kubernetes schedules Pods to nodes, but the actual execution of containers is performed by a runtime such as containerd or CRI-O. The kubelet communicates with that runtime via the Container Runtime Interface (CRI) to pull images, create sandboxes, and start/stop containers. Without a runtime, the node cannot launch container processes, so Pods cannot transition into running state.
Options A and B confuse networking kernel settings with runtime requirements. iptables bridged traffic visibility and IPv4 forwarding can be relevant for node networking, but they do not replace the need for a container runtime. Networking and container execution are separate layers: you need networking for connectivity, and you need a runtime for running containers.
Option C is also incorrect and muddled. Container runtimes are not deprecated; rather, Kubernetes removed the built-in Docker shim integration from kubelet in favor of CRI-native runtimes. CRI is an interface, not “something you install instead of a runtime.” In practice you install a CRI-compatible runtime (containerd/CRI-O), which implements CRI endpoints that kubelet talks to.
Operationally, the runtime choice affects node behavior: image management, logging integration, performance characteristics, and compatibility. Kubernetes installation guides explicitly list installing a container runtime as a prerequisite for worker nodes. If a cluster has nodes without a properly configured runtime, workloads scheduled there will fail to start (often stuck in ContainerCreating/ImagePullBackOff/Runtime errors).
Therefore, the only fully correct statement is D: each node needs a container runtime to run Pods.
=========
What is the purpose of the kubelet component within a Kubernetes cluster?
Options:
A dashboard for Kubernetes clusters that allows management and troubleshooting of applications.
A network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.
A component that watches for newly created Pods with no assigned node, and selects a node for them to run on.
An agent that runs on each node in the cluster. It makes sure that containers are running in a Pod.
Answer:
DExplanation:
The kubelet is the primary node agent in Kubernetes. It runs on every worker node (and often on control-plane nodes too if they run workloads) and is responsible for ensuring that containers described by PodSpecs are actually running and healthy on that node. The kubelet continuously watches the Kubernetes API (via the control plane) for Pods that have been scheduled to its node, then it collaborates with the node’s container runtime (through CRI) to pull images, create containers, start them, and manage their lifecycle. It also mounts volumes, configures the Pod’s networking (working with the CNI plugin), and reports Pod and node status back to the API server.
Option D captures the core: “an agent on each node that makes sure containers are running in a Pod.” That includes executing probes (liveness, readiness, startup), restarting containers based on the Pod’s restartPolicy, and enforcing resource constraints in coordination with the runtime and OS.
Why the other options are wrong: A describes the Kubernetes Dashboard (or similar UI tools), not kubelet. B describes kube-proxy, which programs node-level networking rules (iptables or IPVS, depending on the configured proxy mode) to implement Service virtual IP behavior. C describes the kube-scheduler, which selects a node for Pods that do not yet have an assigned node.
A useful way to remember kubelet’s role is: scheduler decides where, kubelet makes it happen there. Once the scheduler binds a Pod to a node, kubelet becomes responsible for reconciling “desired state” (PodSpec) with “observed state” (running containers). If a container crashes, kubelet will restart it according to policy; if an image is missing, it will pull it; if a Pod is deleted, it will stop containers and clean up. This node-local reconciliation loop is fundamental to Kubernetes’ self-healing and declarative operation model.
=========
What can be used to create a job that will run at specified times/dates or on a repeating schedule?
Options:
Job
CalendarJob
BatchJob
CronJob
Answer:
DExplanation:
The correct answer is D: CronJob. A Kubernetes CronJob is specifically designed for creating Jobs on a schedule—either at specified times/dates (expressed via cron syntax) or on a repeating interval (hourly, daily, weekly). When the schedule triggers, the CronJob controller creates a Job, and the Job controller creates the Pods that execute the workload to completion.
Option A (Job) is not inherently scheduled. A Job runs when you create it, and it continues until it completes successfully or fails according to its retry/backoff behavior. If you want it to run periodically, you need something else to create the Job each time. CronJob is the built-in mechanism for that scheduling.
Options B and C are not standard Kubernetes workload objects. Kubernetes does not include “CalendarJob” or “BatchJob” as official API kinds. The scheduling primitive is CronJob.
CronJobs also include important operational controls: concurrency policies prevent overlapping runs, deadlines control missed schedules, and history limits manage old Job retention. This makes CronJobs more robust than ad-hoc scheduling approaches and keeps the workload lifecycle visible in the Kubernetes API (status/events/logs). It also means you can apply standard Kubernetes patterns: use a service account with least privilege, mount Secrets/ConfigMaps, run in specific namespaces, and manage resource requests/limits so that scheduled workloads don’t destabilize the cluster.
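A minimal sketch of a CronJob using these controls (the name, image, and schedule are illustrative):
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-cleanup              # hypothetical name
spec:
  schedule: "0 2 * * *"              # every day at 02:00
  concurrencyPolicy: Forbid          # skip a run if the previous one is still going
  startingDeadlineSeconds: 300       # tolerate up to 5 minutes of missed-schedule delay
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: cleanup
            image: registry.example.com/cleanup:1.0   # hypothetical image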
So the correct Kubernetes resource for scheduled and repeating job execution is CronJob (D).
=========
Which of the following is a correct definition of a Helm chart?
Options:
A Helm chart is a collection of YAML files bundled in a tar.gz file and can be applied without decompressing it.
A Helm chart is a collection of JSON files and contains all the resource definitions to run an application on Kubernetes.
A Helm chart is a collection of YAML files that can be applied on Kubernetes by using the kubectl tool.
A Helm chart is similar to a package and contains all the resource definitions to run an application on Kubernetes.
Answer:
DExplanation:
A Helm chart is best described as a package for Kubernetes applications, containing the resource definitions (as templates) and metadata needed to install and manage an application—so D is correct. Helm is a package manager for Kubernetes; the chart is the packaging format. Charts include a Chart.yaml (metadata), a values.yaml (default configuration values), and a templates/ directory containing Kubernetes manifests written as templates. When you install a chart, Helm renders those templates into concrete Kubernetes YAML manifests by substituting values, then applies them to the cluster.
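For orientation, a minimal sketch of a chart's layout and its Chart.yaml metadata (names and versions here are illustrative):
# Typical chart layout (illustrative):
#   mychart/
#     Chart.yaml       - chart metadata
#     values.yaml      - default configuration values
#     templates/       - templated Kubernetes manifests
#
# Chart.yaml:
apiVersion: v2
name: mychart          # hypothetical chart name
version: 0.1.0         # version of the chart (the package)
appVersion: "1.0.0"    # version of the application being packaged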
Option A is misleading/incomplete. While charts are often distributed as a compressed tarball (.tgz), the defining feature is not “YAML bundled in tar.gz” but the packaging and templating model that supports install/upgrade/rollback. Option B is incorrect because Helm charts are not “collections of JSON files” by definition; Kubernetes resources can be expressed as YAML or JSON, but Helm charts overwhelmingly use templated YAML. Option C is incorrect because charts are not simply YAML applied by kubectl; Helm manages releases, tracks installed resources, and supports upgrades and rollbacks. Helm uses Kubernetes APIs under the hood, but the value of Helm is the lifecycle and packaging system, not “kubectl apply.”
In cloud-native application delivery, Helm helps standardize deployments across environments (dev/stage/prod) by externalizing configuration through values. It reduces copy/paste and supports reuse via dependencies and subcharts. Helm also supports versioning of application packages, allowing teams to upgrade predictably and roll back if needed—critical for production change management.
So, the correct and verified definition is D: a Helm chart is like a package containing the resource definitions needed to run an application on Kubernetes.
=========
In Kubernetes, what is the primary purpose of creating a Service resource for a Deployment?
Options:
To centrally manage and apply runtime configuration values for application components.
To provide a stable endpoint for accessing Pods even when their IP addresses change.
To automatically adjust the number of Pods based on CPU or memory utilization metrics.
To define and attach persistent volumes that store application data across Pod restarts.
Answer:
BExplanation:
In Kubernetes, Pods are inherently ephemeral. They can be created, destroyed, restarted, or rescheduled at any time, and each time this happens, a Pod may receive a new IP address. This dynamic behavior is essential for resilience and scalability, but it also creates a challenge for reliably accessing application workloads. The Service resource addresses this problem by providing a stable network endpoint for a group of Pods, making option B the correct answer.
A Service selects Pods using label selectors—typically the same labels applied by a Deployment—and exposes them through a consistent virtual IP address (ClusterIP) and DNS name. Regardless of how many Pods are running or whether individual Pods are replaced, the Service remains stable and automatically routes traffic to healthy Pods. This abstraction allows clients to communicate with an application without needing to track individual Pod IPs.
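A minimal sketch, assuming the Deployment labels its Pods with app: my-app (all names are illustrative):
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app        # must match the labels on the Deployment's Pods
  ports:
  - port: 80           # stable Service port clients connect to
    targetPort: 8080   # port the containers actually listen on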
Deployments are responsible for managing the lifecycle of Pods, including scaling, rolling updates, and self-healing. However, Deployments do not provide networking or service discovery capabilities. Without a Service, consumers would need to directly reference Pod IPs, which would break as soon as Pods are rescheduled or updated.
Option A is incorrect because centralized configuration management is handled using ConfigMaps and Secrets, not Services. Option C is incorrect because automatic scaling based on CPU or memory is the responsibility of the Horizontal Pod Autoscaler (HPA), not Services. Option D is incorrect because persistent storage is managed using PersistentVolume and PersistentVolumeClaim resources, which are unrelated to Services.
Services can be configured for different access patterns, such as ClusterIP for internal communication, NodePort or LoadBalancer for external access, and headless Services for direct Pod discovery. Despite these variations, their core purpose remains the same: providing a reliable and stable way to access Pods managed by a Deployment.
Therefore, the correct and verified answer is Option B, which aligns with Kubernetes networking fundamentals and official documentation.
=========
Which of the following statements is correct concerning Open Policy Agent (OPA)?
Options:
The policies must be written in Python language.
Kubernetes can use it to validate requests and apply policies.
Policies can only be tested when published.
It cannot be used outside Kubernetes.
Answer:
BExplanation:
Open Policy Agent (OPA) is a general-purpose policy engine used to define and enforce policy across different systems. In Kubernetes, OPA is commonly integrated through admission control (often via Gatekeeper or custom admission webhooks) to validate and/or mutate requests before they are persisted in the cluster. This makes B correct: Kubernetes can use OPA to validate API requests and apply policy decisions.
Kubernetes’ admission chain is where policy enforcement naturally fits. When a user or controller submits a request (for example, to create a Pod), the API server can call external admission webhooks. Those webhooks can evaluate the request against policy—such as “no privileged containers,” “images must come from approved registries,” “labels must include cost-center,” or “Ingress must enforce TLS.” OPA’s policy language (Rego) allows expressing these rules in a declarative form, and the decision (“allow/deny” and sometimes patches) is returned to the API server. This enforces governance consistently and centrally.
Option A is incorrect because OPA policies are written in Rego, not Python. Option C is incorrect because policies can be tested locally and in CI pipelines before deployment; in fact, testability is a key advantage. Option D is incorrect because OPA is designed to be platform-agnostic—it can be used with APIs, microservices, CI/CD pipelines, service meshes, and infrastructure tools, not only Kubernetes.
From a Kubernetes fundamentals view, OPA complements RBAC: RBAC answers “who can do what to which resources,” while OPA-style admission policies answer “even if you can create this resource, does it meet our organizational rules?” Together they help implement defense in depth: authentication + authorization + policy admission + runtime security controls. That is why OPA is widely used to enforce security and compliance requirements in Kubernetes environments.
=========
Why is Cloud-Native Architecture important?
Options:
Cloud Native Architecture revolves around containers, microservices and pipelines.
Cloud Native Architecture removes constraints to rapid innovation.
Cloud Native Architecture is modern for application deployment and pipelines.
Cloud Native Architecture is a bleeding edge technology and service.
Answer:
BExplanation:
Cloud-native architecture is important because it enables organizations to build and run software in a way that supports rapid innovation while maintaining reliability, scalability, and efficient operations. Option B best captures this: cloud native removes constraints to rapid innovation, so B is correct.
In traditional environments, innovation is slowed by heavyweight release processes, tightly coupled systems, manual operations, and limited elasticity. Cloud-native approaches—containers, declarative APIs, automation, and microservices-friendly patterns—reduce those constraints. Kubernetes exemplifies this by offering a consistent deployment model, self-healing, automated rollouts, scaling primitives, and a large ecosystem of delivery and observability tools. This makes it easier to ship changes more frequently and safely: teams can iterate quickly, roll back confidently, and standardize operations across environments.
Option A is partly descriptive (containers/microservices/pipelines are common in cloud native), but it doesn’t explain why it matters; it lists ingredients rather than the benefit. Option C is vague (“modern”) and again doesn’t capture the core value proposition. Option D is incorrect because cloud native is not primarily about being “bleeding edge”—it’s about proven practices that improve time-to-market and operational stability.
A good way to interpret “removes constraints” is: cloud native shifts the bottleneck away from infrastructure friction. With automation (IaC/GitOps), standardized runtime packaging (containers), and platform capabilities (Kubernetes controllers), teams spend less time on repetitive manual work and more time delivering features. Combined with observability and policy automation, this results in faster delivery with better reliability—exactly the reason cloud-native architecture is emphasized across the Kubernetes ecosystem.
=========
What is a sidecar container?
Options:
A Pod that runs next to another container within the same Pod.
A container that runs next to another Pod within the same namespace.
A container that runs next to another container within the same Pod.
A Pod that runs next to another Pod within the same namespace.
Answer:
CExplanation:
A sidecar container is an additional container that runs alongside the main application container within the same Pod, sharing network and storage context. That matches option C, so C is correct. The sidecar pattern is used to add supporting capabilities to an application without modifying the application code. Because both containers are in the same Pod, the sidecar can communicate with the main container over localhost and share volumes for files, sockets, or logs.
Common sidecar examples include: log forwarders that tail application logs and ship them to a logging system, proxies (service mesh sidecars like Envoy) that handle mTLS and routing policy, config reloaders that watch ConfigMaps and signal the main process, and local caching agents. Sidecars are especially powerful in cloud-native systems because they standardize cross-cutting concerns—security, observability, traffic policy—across many workloads.
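A minimal sketch of the log-forwarding pattern, with both containers mounting the same emptyDir volume (image names are hypothetical):
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  volumes:
  - name: app-logs
    emptyDir: {}
  containers:
  - name: app                                       # main application container
    image: registry.example.com/app:1.0             # hypothetical image
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  - name: log-forwarder                             # sidecar: ships the shared logs
    image: registry.example.com/log-forwarder:1.0   # hypothetical image
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
      readOnly: true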
Options A and D incorrectly describe “a Pod running next to …” which is not how sidecars work; sidecars are containers, not separate Pods. Running separate Pods “next to” each other in a namespace does not give the same shared network namespace and tightly coupled lifecycle. Option B is also incorrect for the same reason: a sidecar is not a separate Pod; it is a container in the same Pod.
Operationally, sidecars share the Pod lifecycle: they are scheduled together, scaled together, and generally terminated together. This is both a benefit (co-location guarantees) and a responsibility (resource requests/limits should include the sidecar’s needs, and failure modes should be understood). Kubernetes is increasingly formalizing sidecar behavior (e.g., sidecar containers with ordered startup semantics), but the core definition remains: a helper container in the same Pod.
=========
Can a Kubernetes Service expose multiple ports?
Options:
No, you can only expose one port per each Service.
Yes, but you must specify an unambiguous name for each port.
Yes, the only requirement is to use different port numbers.
No, because the only port you can expose is port number 443.
Answer:
BExplanation:
Yes, a Kubernetes Service can expose multiple ports, and when it does, each port should have a unique, unambiguous name, making B correct. In the Service spec, the ports field is an array, allowing you to define multiple port mappings (e.g., 80 for HTTP and 443 for HTTPS, or grpc and metrics). Each entry can include port (Service port), targetPort (backend Pod port), and protocol.
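A sketch of a Service exposing two named ports (names and numbers are illustrative):
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
  - name: http        # with multiple ports, every port needs a unique name
    port: 80
    targetPort: 8080
  - name: metrics
    port: 9090
    targetPort: 9090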
The naming requirement exists because Kubernetes needs to disambiguate ports, especially when other resources refer to them. For example, an Ingress backend or some proxies/controllers can reference Service ports by name. A name also helps humans and automation reliably select the correct port. The Kubernetes API enforces this: when a Service defines more than one port, every port must have a unique name.
Option A is incorrect because multi-port Services are common and fully supported. Option C is insufficient: different port numbers alone are not enough, because the Service API requires unique port names as soon as more than one port is defined. Option D is incorrect: Services can expose many ports and are not restricted to 443.
Operationally, exposing multiple ports through one Service is useful when a single backend workload provides multiple interfaces (e.g., application traffic and a metrics endpoint). You can keep stable discovery under one DNS name while still differentiating ports. The backend Pods must still listen on the target ports, and selectors determine which Pods are endpoints. The key correctness point for this question is: multi-port Services are allowed, and each port should be uniquely named to avoid confusion and integration issues.
=========
What are the two essential operations that the kube-scheduler normally performs?
Options:
Pod eviction or starting
Resource monitoring and reporting
Filtering and scoring nodes
Starting and terminating containers
Answer:
CExplanation:
The kube-scheduler is a core control plane component in Kubernetes responsible for assigning newly created Pods to appropriate nodes. Its primary responsibility is decision-making, not execution. To make an informed scheduling decision, the kube-scheduler performs two essential operations: filtering and scoring nodes.
The scheduling process begins when a Pod is created without a node assignment. The scheduler first evaluates all available nodes and applies a set of filtering rules. During this phase, nodes that do not meet the Pod’s requirements are eliminated. Filtering criteria include resource availability (CPU and memory requests), node selectors, node affinity rules, taints and tolerations, volume constraints, and other policy-based conditions. Any node that fails one or more of these checks is excluded from consideration.
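For example, a Pod spec like this sketch gives the filter phase concrete requirements to check; nodes without the disktype=ssd label or without enough allocatable CPU and memory are eliminated (the label, image, and values are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  nodeSelector:
    disktype: ssd               # hard requirement: filters out non-matching nodes
  containers:
  - name: app
    image: registry.example.com/app:1.0   # hypothetical image
    resources:
      requests:
        cpu: "500m"             # filters out nodes without this much CPU left to allocate
        memory: 256Mi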
Once filtering is complete, the scheduler moves on to the scoring phase. In this step, each remaining eligible node is assigned a score based on a collection of scoring plugins. These plugins evaluate factors such as resource utilization balance, affinity preferences, topology spread constraints, and custom scheduling policies. The purpose of scoring is to rank nodes according to how well they satisfy the Pod’s placement preferences. The node with the highest total score is selected as the best candidate.
Option A is incorrect because Pod eviction is handled by other components such as the kubelet and controllers, and starting Pods is the responsibility of the kubelet. Option B is incorrect because resource monitoring and reporting are performed by components like metrics-server, not the scheduler. Option D is also incorrect because starting and terminating containers is entirely handled by the kubelet and the container runtime.
By separating filtering (eligibility) from scoring (preference), the kube-scheduler provides a flexible, extensible, and policy-driven scheduling mechanism. This design allows Kubernetes to support diverse workloads and advanced placement strategies while maintaining predictable scheduling behavior.
Therefore, the correct and verified answer is Option C: Filtering and scoring nodes, as documented in Kubernetes scheduling architecture.
=========
What are the initial namespaces that Kubernetes starts with?
Options:
default, kube-system, kube-public, kube-node-lease
default, system, kube-public
kube-default, kube-system, kube-main, kube-node-lease
kube-default, system, kube-main, kube-primary
Answer:
AExplanation:
Kubernetes creates a set of namespaces by default when a cluster is initialized. The standard initial namespaces are default, kube-system, kube-public, and kube-node-lease, making A correct.
default is the namespace where resources are created if you don’t specify another namespace. Many quick-start examples deploy here, though production environments typically use dedicated namespaces per app/team.
kube-system contains objects created and managed by Kubernetes system components (control plane add-ons, system Pods, controllers, DNS components, etc.). It’s a critical namespace, and access is typically restricted.
kube-public is readable by all users (including unauthenticated users in some configurations) and is intended for public cluster information, though it’s used sparingly in many environments.
kube-node-lease holds Lease objects used for node heartbeats. This improves scalability by reducing load on etcd compared to older heartbeat mechanisms and helps the control plane track node liveness efficiently.
The incorrect options contain non-standard namespace names like “system,” “kube-main,” or “kube-primary,” and “kube-default” is not a real default namespace. Kubernetes’ built-in namespace set is well-documented and consistent with typical cluster bootstraps.
Understanding these namespaces matters operationally: system workloads and controllers often live in kube-system, and many troubleshooting steps involve inspecting Pods and events there. Meanwhile, kube-node-lease is key to node health tracking, and default is the catch-all if you forget to specify -n.
So, the verified answer is A: default, kube-system, kube-public, kube-node-lease.
=========
Which authorization-mode allows granular control over the operations that different entities can perform on different objects in a Kubernetes cluster?
Options:
Webhook Mode Authorization Control
Role Based Access Control
Node Authorization Access Control
Attribute Based Access Control
Answer:
BExplanation:
Role Based Access Control (RBAC) is the standard Kubernetes authorization mode that provides granular control over what users and service accounts can do to which resources, so B is correct. RBAC works by defining Roles (namespaced) and ClusterRoles (cluster-wide) that contain sets of rules. Each rule specifies API groups, resource types, resource names (optional), and allowed verbs such as get, list, watch, create, update, patch, and delete. You then attach these roles to identities using RoleBindings or ClusterRoleBindings.
This gives fine-grained, auditable access control. For example, you can allow a CI service account to create and patch Deployments only in a specific namespace, while restricting it from reading Secrets. You can allow developers to view Pods and logs but prevent them from changing cluster-wide networking resources. This is exactly the “granular control over operations on objects” described by the question.
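A minimal sketch of that CI example, assuming a hypothetical namespace dev and service account ci:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: deployment-editor
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "create", "patch"]   # no access to Secrets or other resources
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ci-deployment-editor
  namespace: dev
subjects:
- kind: ServiceAccount
  name: ci
  namespace: dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: deployment-editor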
Why other options are not the best answer: “Webhook mode” is an authorization mechanism where Kubernetes calls an external service to decide authorization. While it can be granular depending on the external system, Kubernetes’ common built-in answer for granular object-level control is RBAC. “Node authorization” is a specialized authorizer for kubelets/nodes to access resources they need; it’s not the general-purpose system for all cluster entities. ABAC (Attribute-Based Access Control) is an older mechanism and is not the primary recommended authorization model; it can be expressive but is less commonly used and not the default best-practice for Kubernetes authorization today.
In Kubernetes security practice, RBAC is typically paired with authentication (certs/OIDC), admission controls, and namespaces to build a defense-in-depth security posture. RBAC policy is also central to least privilege: granting only what is necessary for a workload or user role to function. This reduces blast radius if credentials are compromised.
Therefore, the verified answer is B: Role Based Access Control.
=========
What is the default deployment strategy in Kubernetes?
Options:
Rolling update
Blue/Green deployment
Canary deployment
Recreate deployment
Answer:
AExplanation:
For Kubernetes Deployments, the default update strategy is RollingUpdate, which corresponds to “Rolling update” in option A. Rolling updates replace old Pods with new Pods gradually, aiming to maintain availability during the rollout. Kubernetes does this by creating a new ReplicaSet for the updated Pod template and then scaling the new ReplicaSet up while scaling the old one down.
The pace and safety of a rolling update are controlled by parameters like maxUnavailable and maxSurge. maxUnavailable limits how many replicas can be unavailable during the update, protecting availability. maxSurge controls how many extra replicas can be created temporarily above the desired count, helping speed up rollouts while maintaining capacity. If readiness probes fail, Kubernetes will pause progression because new Pods aren’t becoming Ready, helping prevent a bad version from fully replacing a good one.
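A sketch of how those parameters appear in a Deployment spec (names and values are illustrative; RollingUpdate is what you get even if strategy is omitted):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 4
  strategy:
    type: RollingUpdate          # the default strategy type
    rollingUpdate:
      maxUnavailable: 1          # at most one replica below the desired count during rollout
      maxSurge: 1                # at most one extra replica above the desired count
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: registry.example.com/app:1.1   # hypothetical image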
Options B (Blue/Green) and C (Canary) are popular progressive delivery patterns, but they are not the default built-in Deployment strategy. They are typically implemented using additional tooling (service mesh routing, traffic splitting controllers, or specialized rollout controllers) or by operating multiple Deployments/Services. Option D (Recreate) is a valid strategy but not the default; it terminates all old Pods before creating new ones, causing downtime unless you have external buffering or multi-tier redundancy.
From an application delivery perspective, RollingUpdate aligns with Kubernetes’ declarative model: you update the desired Pod template and let the controller converge safely. kubectl rollout status is commonly used to monitor progress. Rollbacks are also supported because the Deployment tracks history. Therefore, the verified correct answer is A: Rolling update.
=========
If kubectl is failing to retrieve information from the cluster, where can you find Pod logs to troubleshoot?
Options:
/var/log/pods/
~/.kube/config
/var/log/k8s/
/etc/kubernetes/
Answer:
AExplanation:
The correct answer is A: /var/log/pods/. When kubectl logs can’t retrieve logs (for example, API connectivity issues, auth problems, or kubelet/API proxy issues), you can often troubleshoot directly on the node where the Pod ran. Kubernetes nodes typically store container logs on disk, and the standard location is under /var/log/pods/, organized by namespace, Pod name, UID, and container. Depending on the distro/runtime setup, the node usually also has a flat directory of symlinks under /var/log/containers/ that point back into these per-Pod log files, which is what many log collectors tail.
Option B (~/.kube/config) is your local kubeconfig file; it contains cluster endpoints and credentials, not Pod logs. Option D (/etc/kubernetes/) contains Kubernetes component configuration/manifests on some installations (especially control plane), not application logs. Option C (/var/log/k8s/) is not a standard Kubernetes log path.
Operationally, the node-level log locations depend on the container runtime and logging configuration, but the Kubernetes convention is that kubelet writes container logs to a known location and exposes them through the API so kubectl logs works. If the API path is broken, node access becomes your fallback. This is also why secure node access is sensitive: anyone with node root access can potentially read logs (and other data), which is part of the threat model.
So, the best answer for where to look on the node for Pod logs when kubectl can’t retrieve them is /var/log/pods/, option A.
=========
When modifying an existing Helm release to apply new configuration values, which approach is the best practice?
Options:
Use helm upgrade with the --set flag to apply new values while preserving the release history.
Use kubectl edit to modify the live release configuration and apply the updated resource values.
Delete the release and reinstall it with the desired configuration to force an updated deployment.
Edit the Helm chart source files directly and reapply them to push the updated configuration values.
Answer:
AExplanation:
Helm is a package manager for Kubernetes that provides a declarative and versioned approach to application deployment and lifecycle management. When updating configuration values for an existing Helm release, the recommended and best-practice approach is to use helm upgrade, optionally with the --set flag or a values file, to apply the new configuration while preserving the release’s history.
Option A is correct because helm upgrade updates an existing release in a controlled and auditable manner. Helm stores each revision of a release, allowing teams to inspect past configurations and roll back to a previous known-good state if needed. Using --set enables quick overrides of individual values, while using -f values.yaml supports more complex or repeatable configurations. This approach aligns with GitOps and infrastructure-as-code principles, ensuring consistency and traceability.
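As a sketch, a values override file like the one below (the keys are hypothetical and must match what the chart's templates expect) would be applied with helm upgrade my-release ./mychart -f values-prod.yaml, or individual values could be overridden with --set image.tag=1.2.0:
# values-prod.yaml (hypothetical keys; must match the chart's templates)
image:
  tag: "1.2.0"
replicaCount: 3
resources:
  requests:
    cpu: 250m
    memory: 256Mi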
Option B is incorrect because modifying Helm-managed resources directly with kubectl edit breaks Helm’s state tracking. Helm maintains a record of the desired state for each release, and manual edits can cause configuration drift, making future upgrades unpredictable or unsafe. Kubernetes documentation and Helm guidance strongly discourage modifying Helm-managed resources outside of Helm itself.
Option C is incorrect because deleting and reinstalling a release discards the release history and may cause unnecessary downtime or data loss, especially for stateful applications. Helm’s upgrade mechanism is specifically designed to avoid this disruption while still applying configuration changes safely.
Option D is also incorrect because editing chart source files directly and reapplying them bypasses Helm’s release management model. While chart changes are appropriate during development, applying them directly to a running release without helm upgrade undermines versioning, rollback, and repeatability.
According to Helm documentation, helm upgrade is the standard and supported method for modifying deployed applications. It ensures controlled updates, preserves operational history, and enables safe rollbacks, making option A the correct and fully verified best practice.
=========
Which of the following is a recommended security habit in Kubernetes?
Options:
Run the containers as the user with group ID 0 (root) and any user ID.
Disallow privilege escalation from within a container as the default option.
Run the containers as the user with user ID 0 (root) and any group ID.
Allow privilege escalation from within a container as the default option.
Answer:
BExplanation:
The correct answer is B. A widely recommended Kubernetes security best practice is to disallow privilege escalation inside containers by default. In Kubernetes Pod/Container security context, this is represented by allowPrivilegeEscalation: false. This setting prevents a process from gaining more privileges than its parent process—commonly via setuid/setgid binaries or other privilege-escalation mechanisms. Disallowing privilege escalation reduces the blast radius of a compromised container and aligns with least-privilege principles.
Options A and C are explicitly unsafe because they encourage running as root (UID 0 and/or GID 0). Running containers as root increases risk: if an attacker breaks out of the application process or exploits kernel/runtime vulnerabilities, having root inside the container can make privilege escalation and lateral movement easier. Modern Kubernetes security guidance strongly favors running as non-root (runAsNonRoot: true, explicit runAsUser), dropping Linux capabilities, using read-only root filesystems, and applying restrictive seccomp/AppArmor/SELinux profiles where possible.
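A minimal sketch of a container securityContext that applies these habits (the image and UID are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0     # hypothetical image
    securityContext:
      allowPrivilegeEscalation: false       # the recommended default from this question
      runAsNonRoot: true
      runAsUser: 10001                      # illustrative non-root UID
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]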
Option D is the opposite of best practice. Allowing privilege escalation by default increases the attack surface and violates the idea of secure defaults.
Operationally, this habit is often enforced via admission controls and policies (e.g., Pod Security Admission in “restricted” mode, or policy engines like OPA Gatekeeper/Kyverno). It’s also important for compliance: many security baselines require containers to run as non-root and to prevent privilege escalation.
So, the recommended security habit among the choices is clearly B: Disallow privilege escalation.
=========
A site reliability engineer needs to temporarily prevent new Pods from being scheduled on node-2 while keeping the existing workloads running without disruption. Which kubectl command should be used?
Options:
kubectl cordon node-2
kubectl delete node-2
kubectl drain node-2
kubectl pause deployment
Answer:
AExplanation:
In Kubernetes, node maintenance and availability are common operational tasks, and the platform provides specific commands to control how the scheduler places Pods on nodes. When the requirement is to temporarily prevent new Pods from being scheduled on a node without affecting the currently running Pods, the correct approach is to cordon the node.
The command kubectl cordon node-2 marks the node as unschedulable. This means the Kubernetes scheduler will no longer place any new Pods onto that node. Importantly, cordoning a node does not evict, restart, or interrupt existing Pods. All workloads already running on the node continue operating normally. This makes cordoning ideal for scenarios such as diagnostics, monitoring, or preparing for future maintenance while ensuring zero workload disruption.
Option B, kubectl delete node-2, is incorrect because deleting a node removes it entirely from the cluster. This action would cause Pods running on that node to be terminated and rescheduled elsewhere, resulting in disruption—exactly what the question specifies must be avoided.
Option C, kubectl drain node-2, is also incorrect in this context. Draining a node safely evicts Pods (except for certain exclusions like DaemonSets) and reschedules them onto other nodes. While drain is useful for maintenance and upgrades, it does not keep existing workloads running on the node, making it unsuitable here.
Option D, kubectl pause deployment, applies only to Deployments and merely pauses rollout updates. It does not affect node-level scheduling behavior and has no impact on where Pods are placed by the scheduler.
Therefore, the correct and verified answer is Option A: kubectl cordon node-2, which aligns with Kubernetes operational best practices and official documentation for non-disruptive node management.
=========
Which cloud native tool keeps Kubernetes clusters in sync with sources of configuration (like Git repositories), and automates updates to configuration when there is new code to deploy?
Options:
Flux and ArgoCD
GitOps Toolkit
Linkerd and Istio
Helm and Kustomize
Answer:
AExplanation:
Tools that continuously reconcile cluster state to match a Git repository’s desired configuration are GitOps controllers, and the best match here is Flux and ArgoCD, so A is correct. GitOps is the practice where Git is the source of truth for declarative system configuration. A GitOps tool continuously compares the desired state (manifests/Helm/Kustomize outputs stored in Git) with the actual state in the cluster and then applies changes to eliminate drift.
Flux and Argo CD both implement this reconciliation loop. They watch Git repositories, detect updates (new commits/tags), and apply the updated Kubernetes resources. They also surface drift and sync status, enabling auditable, repeatable deployments and easy rollbacks (revert Git). This model improves delivery velocity and security because changes flow through code review, and cluster changes can be restricted to the GitOps controller identity rather than ad-hoc human kubectl access.
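For comparison with the multi-cluster ApplicationSet discussed earlier, a minimal sketch of a single-cluster Argo CD Application with automated sync (the repository URL and paths are hypothetical):
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/org/app-config.git   # hypothetical repository
    targetRevision: main
    path: overlays/prod
  destination:
    server: https://kubernetes.default.svc   # the cluster Argo CD runs in
    namespace: my-app
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual drift back to the Git state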
Option B (“GitOps Toolkit”) is related: Flux is built from the GitOps Toolkit’s controllers and APIs, but the question asks for a “tool” that keeps clusters in sync, and the recognized tools in this list are Flux and Argo CD. Option C lists service meshes (traffic/security/telemetry), not deployment synchronization tools. Option D lists packaging/templating tools; Helm and Kustomize help build manifests, but they do not, by themselves, continuously reconcile cluster state to a Git source.
In Kubernetes application delivery, GitOps tools become the deployment engine: CI builds artifacts, updates references in Git (image tags/digests), and the GitOps controller deploys those changes. This separation strengthens traceability and reduces configuration drift. Therefore, A is the verified correct answer.
=========
The Container Runtime Interface (CRI) defines the protocol for the communication between:
Options:
The kubelet and the container runtime.
The container runtime and etcd.
The kube-apiserver and the kubelet.
The container runtime and the image registry.
Answer:
AExplanation:
The CRI (Container Runtime Interface) defines how the kubelet talks to the container runtime, so A is correct. The kubelet is the node agent responsible for ensuring containers are running in Pods on that node. It needs a standardized way to request operations such as: create a Pod sandbox, pull an image, start/stop containers, execute commands, attach streams, and retrieve logs. CRI provides that contract so kubelet does not need runtime-specific integrations.
This interface is a key part of Kubernetes’ modular design. Different container runtimes implement the CRI, allowing Kubernetes to run with containerd, CRI-O, and other CRI-compliant runtimes. This separation of concerns lets Kubernetes focus on orchestration, while runtimes focus on executing containers according to the OCI runtime spec, managing images, and handling low-level container lifecycle.
Why the other options are incorrect:
etcd is the control plane datastore; container runtimes do not communicate with etcd via CRI.
kube-apiserver and kubelet communicate using Kubernetes APIs, but CRI is not their protocol; CRI is specifically kubelet ↔ runtime.
container runtime and image registry communicate using registry protocols (image pull/push APIs), but that is not CRI. CRI may trigger image pulls via runtime requests, yet the actual registry communication is separate.
Operationally, this distinction matters when debugging node issues. If Pods are stuck in “ContainerCreating” due to image pull failures or runtime errors, you often investigate kubelet logs and the runtime (containerd/CRI-O) logs. Kubernetes administrators also care about CRI streaming (exec/attach/logs streaming), runtime configuration, and compatibility across Kubernetes versions.
So, the verified answer is A: the kubelet and the container runtime.
=========
What is the core metric type in Prometheus used to represent a single numerical value that can go up and down?
Options:
Summary
Counter
Histogram
Gauge
Answer:
DExplanation:
In Prometheus, a Gauge represents a single numerical value that can increase and decrease over time, which makes D the correct answer. Gauges are used for values like current memory usage, number of in-flight requests, queue depth, temperature, or CPU usage—anything that can move up and down.
This contrasts with a Counter, which is strictly monotonically increasing (it only goes up, except for resets when a process restarts). Counters are ideal for cumulative totals like total HTTP requests served, total errors, or bytes transmitted. Histograms and Summaries are used to capture distributions (often latency distributions), providing bucketed counts (histogram) or quantile approximations (summary), and are not the “single value that goes up and down” primitive the question asks for.
In Kubernetes observability, metrics are a primary signal for understanding system health and performance. Prometheus is widely used to scrape metrics from Kubernetes components (kubelet, API server, controller-manager), cluster add-ons, and applications. Gauges are common for resource utilization metrics and for instantaneous states, such as container_memory_working_set_bytes or go_goroutines.
When you build alerting and dashboards, selecting the right metric type matters. For example, if you want to alert on the current memory usage, a gauge is appropriate. If you want to compute request rates, you typically use counters with Prometheus functions like rate() to derive per-second rates. Histograms and summaries are used when you need latency percentiles or distribution analysis.
So, for “a single numerical value that can go up and down,” the correct Prometheus metric type is Gauge (D).
=========
CI/CD stands for:
Options:
Continuous Information / Continuous Development
Continuous Integration / Continuous Development
Cloud Integration / Cloud Development
Continuous Integration / Continuous Deployment
Answer:
DExplanation:
CI/CD is a foundational practice for delivering software rapidly and reliably, and it maps strongly to cloud native delivery workflows commonly used with Kubernetes. CI stands for Continuous Integration: developers merge code changes frequently into a shared repository, and automated systems build and test those changes to detect issues early. CD is commonly used to mean Continuous Delivery or Continuous Deployment depending on how far automation goes. In many certification contexts and simplified definitions like this question, CD is interpreted as Continuous Deployment, meaning every change that passes the automated pipeline is automatically released to production. That matches option D.
In a Kubernetes context, CI typically produces artifacts such as container images (built from Dockerfiles or similar build definitions), runs unit/integration tests, scans dependencies, and pushes images to a registry. CD then promotes those images into environments by updating Kubernetes manifests (Deployments, Helm charts, Kustomize overlays, etc.). Progressive delivery patterns (rolling updates, canary, blue/green) often use Kubernetes-native controllers and Service routing to reduce risk.
Why the other options are incorrect: “Continuous Development” isn’t the standard “D” term; it’s ambiguous and not the established acronym expansion. “Cloud Integration/Cloud Development” is unrelated. Continuous Delivery (in the stricter sense) means changes are always in a deployable state and releases may still require a manual approval step, while Continuous Deployment removes that final manual gate. But because the option set explicitly includes “Continuous Deployment,” and that is one of the accepted canonical expansions for CD, D is the correct selection here.
Practically, CI/CD complements Kubernetes’ declarative model: pipelines update desired state (Git or manifests), and Kubernetes reconciles it. This combination enables frequent releases, repeatability, reduced human error, and faster recovery through automated rollbacks and controlled rollout strategies.
=========
What default level of protection is applied to the data in Secrets in the Kubernetes API?
Options:
The values use AES symmetric encryption
The values are stored in plain text
The values are encoded with SHA256 hashes
The values are base64 encoded
Answer:
DExplanation:
Kubernetes Secrets are designed to store sensitive data such as tokens, passwords, or certificates and make them available to Pods in controlled ways (as environment variables or mounted files). However, the default protection applied to Secret values in the Kubernetes API is base64 encoding, not encryption. That is why D is correct. Base64 is an encoding scheme that converts binary data into ASCII text; it is reversible and does not provide confidentiality.
By default, Secret objects are stored in the cluster’s backing datastore (commonly etcd) as base64-encoded strings inside the Secret manifest. Unless the cluster is configured for encryption at rest, those values are effectively stored unencrypted in etcd and may be visible to anyone who can read etcd directly or who has API permissions to read Secrets. This distinction is critical for security: base64 can prevent accidental issues with special characters in YAML/JSON, but it does not protect against attackers.
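For example, a Secret like this sketch stores its value base64-encoded in the data field; anyone permitted to read the object can decode it trivially (the name and value are illustrative):
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials        # hypothetical name
type: Opaque
data:
  password: cGFzc3dvcmQ=      # base64 of the string "password"; encoded, not encrypted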
Option A is only correct if encryption at rest is explicitly configured on the API server using an EncryptionConfiguration (for example, AES-CBC or AES-GCM providers). Many managed Kubernetes offerings enable encryption at rest for etcd as an option or by default, but that is a deployment choice, not the universal Kubernetes default. Option C is incorrect because hashing is used for verification, not for secret retrieval; you typically need to recover the original value, so hashing isn’t suitable for Secrets. Option B (“plain text”) is misleading: the stored representation is base64-encoded, but because base64 is reversible, the security outcome is close to plain text unless encryption at rest and strict RBAC are in place.
The correct operational stance is: treat Kubernetes Secrets as sensitive; lock down access with RBAC, enable encryption at rest, avoid broad Secret read permissions, and consider external secret managers when appropriate. But strictly for the question’s wording—default level of protection—base64 encoding is the right answer.
=========
In a Kubernetes cluster, what is the primary role of the Kubernetes scheduler?
Options:
To manage the lifecycle of the Pods by restarting them when they fail.
To monitor the health of the nodes and Pods in the cluster.
To handle network traffic between services within the cluster.
To distribute Pods across nodes based on resource availability and constraints.
Answer:
DExplanation:
The Kubernetes scheduler is a core control plane component responsible for deciding where Pods should run within a cluster. Its primary role is to assign newly created Pods that do not yet have a node assigned to an appropriate node based on a variety of factors such as resource availability, scheduling constraints, and policies.
When a Pod is created, it enters a Pending state until the scheduler selects a suitable node. The scheduler evaluates all available nodes and filters out those that do not meet the Pod’s requirements. These requirements may include CPU and memory requests, node selectors, node affinity rules, taints and tolerations, topology spread constraints, and other scheduling policies. After filtering, the scheduler scores the remaining nodes to determine the best placement for the Pod and then binds the Pod to the selected node.
Option A is incorrect because restarting failed Pods is handled by other components such as the kubelet and higher-level controllers like Deployments, ReplicaSets, or StatefulSets—not the scheduler. Option B is incorrect because monitoring node and Pod health is primarily the responsibility of the kubelet and the Kubernetes controller manager, which reacts to node failures and ensures desired state. Option C is incorrect because handling network traffic is managed by Services, kube-proxy, and the cluster’s networking implementation, not the scheduler.
Option D correctly describes the scheduler’s purpose. By distributing Pods across nodes based on resource availability and constraints, the scheduler helps ensure efficient resource utilization, high availability, and workload isolation. This intelligent placement is essential for maintaining cluster stability and performance, especially in large-scale or multi-tenant environments.
According to Kubernetes documentation, the scheduler’s responsibility is strictly focused on Pod placement decisions. Once a Pod is scheduled, the scheduler’s job is complete for that Pod, making option D the accurate and fully verified answer.
=========
Which of these components is part of the Kubernetes Control Plane?
Options:
CoreDNS
cloud-controller-manager
kube-proxy
kubelet
Answer:
BExplanation:
The Kubernetes control plane is the set of components responsible for making cluster-wide decisions (like scheduling) and detecting and responding to cluster events (like starting new Pods when they fail). In upstream Kubernetes architecture, the canonical control plane components include kube-apiserver, etcd, kube-scheduler, and kube-controller-manager, and—when running on a cloud provider—the cloud-controller-manager. That makes option B the correct answer: cloud-controller-manager is explicitly a control plane component that integrates Kubernetes with the underlying cloud.
The cloud-controller-manager runs controllers that talk to cloud APIs for infrastructure concerns such as node lifecycle, routes, and load balancers. For example, when you create a Service of type LoadBalancer, a controller in this component is responsible for provisioning a cloud load balancer and updating the Service status. This is clearly control-plane behavior: reconciling desired state into real infrastructure state.
Why the others are not control plane components (in the classic classification): kubelet is a node component (agent) responsible for running and managing Pods on a specific node. kube-proxy is also a node component that implements Service networking rules on nodes. CoreDNS is usually deployed as a cluster add-on for DNS-based service discovery; it’s critical, but it’s not a control plane component in the strict architectural list.
So, while many clusters run CoreDNS in kube-system, the Kubernetes component that is definitively “part of the control plane” among these choices is cloud-controller-manager (B).
=========
Which Kubernetes Service type exposes a service only within the cluster?
Options:
ClusterIP
NodePort
LoadBalancer
ExternalName
Answer:
AExplanation:
In Kubernetes, a Service provides a stable network endpoint for a set of Pods and abstracts away their dynamic nature. Kubernetes offers several Service types, each designed for different exposure requirements. Among these, ClusterIP is the Service type that exposes an application only within the cluster, making it the correct answer.
When a Service is created with the ClusterIP type, Kubernetes assigns it a virtual IP address that is reachable exclusively from within the cluster’s network. This IP is used by other Pods and internal components to communicate with the Service through cluster DNS or environment variables. External traffic from outside the cluster cannot directly access a ClusterIP Service, which makes it ideal for internal APIs, backend services, and microservices that should not be publicly exposed.
Option B (NodePort) is incorrect because NodePort exposes the Service on a static port on each node’s IP address, allowing access from outside the cluster. Option C (LoadBalancer) is incorrect because it provisions an external load balancer—typically through a cloud provider—to expose the Service publicly. Option D (ExternalName) is incorrect because it does not create a proxy or internal endpoint at all; instead, it maps the Service name to an external DNS name outside the cluster.
ClusterIP is also the default Service type in Kubernetes: if no type is specified in a Service manifest, Kubernetes defaults it to ClusterIP. This default behavior reflects the principle of least exposure, encouraging internal-only access unless external access is explicitly required.
From a cloud native architecture perspective, ClusterIP Services are fundamental to building secure, scalable microservices systems. They enable internal service-to-service communication while reducing the attack surface by preventing unintended external access.
According to Kubernetes documentation, ClusterIP Services are intended for internal communication within the cluster and are not reachable from outside the cluster network. Therefore, ClusterIP is the correct and fully verified answer, making option A the right choice.
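A minimal ClusterIP sketch, with hypothetical names and ports, looks like the following; omitting the type field entirely would produce the same result, since ClusterIP is the default.

apiVersion: v1
kind: Service
metadata:
  name: backend-api          # hypothetical name; reachable in-cluster as backend-api.<namespace>.svc
spec:
  type: ClusterIP            # optional: this is the default when type is omitted
  selector:
    app: backend             # hypothetical Pod label
  ports:
  - port: 80                 # virtual port on the cluster IP
    targetPort: 8080         # container port on the selected Pods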
What is a probe within Kubernetes?
Options:
A monitoring mechanism of the Kubernetes API.
A pre-operational scope issued by the kubectl agent.
A diagnostic performed periodically by the kubelet on a container.
A logging mechanism of the Kubernetes API.
Answer:
C
Explanation:
In Kubernetes, a probe is a health check mechanism that the kubelet executes against containers, so C is correct. Probes are part of how Kubernetes implements self-healing and safe traffic management. The kubelet runs probes periodically according to the configuration in the Pod spec and uses the results to decide whether a container is healthy, ready to receive traffic, or still starting up.
Kubernetes supports three primary probe types:
Liveness probe: determines whether the container should be restarted. If liveness fails repeatedly, kubelet restarts the container (subject to restartPolicy).
Readiness probe: determines whether the Pod should receive traffic via Services. If readiness fails, the Pod is removed from Service endpoints, preventing traffic from being routed to it until it becomes ready again.
Startup probe: used for slow-starting containers. While the startup probe has not yet succeeded, liveness and readiness checks are held off, preventing premature restarts during initialization.
Probe mechanisms can be HTTP GET, TCP socket checks, or exec commands run inside the container. These checks are performed by kubelet on the node where the Pod is running, not by the API server.
Options A and D incorrectly attribute probes to the Kubernetes API. While probe configuration is stored in the API as part of Pod specs, execution is node-local. Option B is not a Kubernetes concept.
So the correct definition is: a probe is a periodic diagnostic run by kubelet to assess container health/readiness, enabling reliable rollouts, traffic gating, and automatic recovery.
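A minimal sketch showing all three probe types on one container (the image, paths, ports, and thresholds are hypothetical and would need tuning for a real application):

apiVersion: v1
kind: Pod
metadata:
  name: web                  # hypothetical name
spec:
  containers:
  - name: app
    image: nginx:1.25        # hypothetical image
    ports:
    - containerPort: 80
    startupProbe:            # gives a slow-starting app up to 30 x 5s before other probes apply
      httpGet:
        path: /healthz       # hypothetical endpoint
        port: 80
      failureThreshold: 30
      periodSeconds: 5
    livenessProbe:           # repeated failures cause the kubelet to restart the container
      httpGet:
        path: /healthz
        port: 80
      periodSeconds: 10
    readinessProbe:          # failures remove the Pod from Service endpoints
      httpGet:
        path: /ready         # hypothetical endpoint
        port: 80
      periodSeconds: 5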
=========
Which type of Service requires manual creation of Endpoints?
Options:
LoadBalancer
Services without selectors
NodePort
ClusterIP with selectors
Answer:
B
Explanation:
A Kubernetes Service without selectors requires you to manage its backend endpoints manually, so B is correct. Normally, a Service uses a selector to match a set of Pods (by labels). Kubernetes then automatically maintains the backend list (historically Endpoints, now commonly EndpointSlice) by tracking which Pods match the selector and are Ready. This automation is one of the key reasons Services provide stable connectivity to dynamic Pods.
When you create a Service without a selector, Kubernetes has no way to know which Pods (or external IPs) should receive traffic. In that pattern, you explicitly create an Endpoints object (or EndpointSlices, depending on your approach and controller support) that maps the Service name to one or more IP:port tuples. This is commonly used to represent external services (e.g., a database running outside the cluster) while still providing a stable Kubernetes Service DNS name for in-cluster clients. Another use case is advanced migration scenarios where endpoints are controlled by custom controllers rather than label selection.
Why the other options are wrong: Service types like ClusterIP, NodePort, and LoadBalancer describe how a Service is exposed, but they do not inherently require manual endpoint management. A ClusterIP Service with selectors (D) is the standard case where endpoints are automatically created and updated. NodePort and LoadBalancer Services also typically use selectors and therefore inherit automatic endpoint management; the difference is in how traffic enters the cluster, not how backends are discovered.
Operationally, when using Services without selectors, you must ensure endpoint IPs remain correct, health is accounted for (often via external tooling), and you update endpoints when backends change. The key concept is: no selector → Kubernetes can’t auto-populate endpoints → you must provide them.
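A minimal sketch of the pattern, using a hypothetical Service name and a documentation IP standing in for an external database: the Service defines no selector, so a manually created Endpoints object with the same name and namespace supplies the backend addresses.

apiVersion: v1
kind: Service
metadata:
  name: external-db          # hypothetical name; in-cluster clients use this DNS name
spec:
  ports:
  - port: 5432               # no selector: Kubernetes will not populate endpoints automatically
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db          # must match the Service name to be associated with it
subsets:
- addresses:
  - ip: 192.0.2.10           # example IP of the external backend; you keep this current yourself
  ports:
  - port: 5432

In-cluster clients then connect to external-db on port 5432 as if it were any other Service, while the traffic actually leaves the cluster.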
=========
In Kubernetes, what is the primary responsibility of the kubelet running on each worker node?
Options:
To allocate persistent storage volumes and manage distributed data replication for Pods.
To manage cluster state information and handle all scheduling decisions for workloads.
To ensure that containers defined in Pod specifications are running and remain healthy on the node.
To provide internal DNS resolution and route service traffic between Pods and nodes.
Answer:
C
Explanation:
The kubelet is a critical Kubernetes component that runs on every worker node and acts as the primary execution agent for Pods. Its core responsibility is to ensure that the containers defined in Pod specifications are running and remain healthy on the node, making option C the correct answer.
Once the Kubernetes scheduler assigns a Pod to a specific node, the kubelet on that node becomes responsible for carrying out the desired state described in the Pod specification. It continuously watches the API server for Pods assigned to its node and communicates with the container runtime (such as containerd or CRI-O) to start, stop, and restart containers as needed. The kubelet does not make scheduling decisions; it simply executes them.
Health management is another key responsibility of the kubelet. It runs liveness, readiness, and startup probes as defined in the Pod specification. If a container fails a liveness probe, the kubelet restarts it. If a readiness probe fails, the kubelet marks the Pod as not ready, preventing traffic from being routed to it. The kubelet also reports detailed Pod and node status information back to the API server, enabling controllers to take corrective actions when necessary.
Option A is incorrect because persistent volume provisioning and data replication are handled by storage systems, CSI drivers, and controllers—not by the kubelet. Option B is incorrect because cluster state management and scheduling are responsibilities of control plane components such as the API server, controller manager, and kube-scheduler. Option D is incorrect because DNS resolution and service traffic routing are handled by components like CoreDNS and kube-proxy.
In summary, the kubelet serves as the node-level guardian of Kubernetes workloads. By ensuring containers are running exactly as specified and continuously reporting their health and status, the kubelet forms the essential bridge between Kubernetes’ declarative control plane and the actual execution of applications on worker nodes.
What is an ephemeral container?
Options:
A specialized container that runs as root for infosec applications.
A specialized container that runs temporarily in an existing Pod.
A specialized container that extends and enhances the main container in a Pod.
A specialized container that runs before the app container in a Pod.
Answer:
B
Explanation:
B is correct: an ephemeral container is a temporary container you can add to an existing Pod for troubleshooting and debugging without restarting the Pod. This capability is especially useful when a running container image is minimal (distroless) and lacks debugging tools like sh, curl, or ps. Instead of rebuilding the workload image or disrupting the Pod, you attach an ephemeral container that includes the tools you need, then inspect processes, networking, filesystem mounts, and runtime behavior.
Ephemeral containers are not part of the original Pod spec the same way normal containers are. They are added via a dedicated subresource and are generally not restarted automatically like regular containers. They are meant for interactive investigation, not for ongoing workload functionality.
Why the other options are incorrect:
D describes init containers, which run before app containers start and are used for setup tasks.
C resembles the “sidecar” concept (a supporting container that runs alongside the main container), but sidecars are normal containers defined in the Pod spec, not ephemeral containers.
A is not a definition; ephemeral containers are not “root by design” (they can run with various security contexts depending on policy), and they aren’t limited to infosec use cases.
In Kubernetes operations, ephemeral containers complement kubectl exec and logs. If the target container is crash-looping or lacks a shell, exec may not help; adding an ephemeral container provides a safe and Kubernetes-native debugging path. So, the accurate definition is B.
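As a sketch (the Pod name, target container name, and debug image are hypothetical), the usual way to attach an ephemeral container is kubectl debug rather than editing the Pod spec directly:

kubectl debug -it my-pod --image=busybox:1.36 --target=app
# -it opens an interactive session in the new ephemeral container;
# --target asks for process-namespace sharing with the "app" container
# (where the runtime supports it), so its processes are visible for inspection.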
=========
Let’s assume that an organization needs to process large amounts of data in bursts, on a cloud-based Kubernetes cluster. For instance: each Monday morning, they need to run a batch of 1000 compute jobs of 1 hour each, and these jobs must be completed by Monday night. What’s going to be the most cost-effective method?
Options:
Run a group of nodes with the exact required size to complete the batch on time, and use a combination of taints, tolerations, and nodeSelectors to reserve these nodes to the batch jobs.
Leverage the Kubernetes Cluster Autoscaler to automatically start and stop nodes as they’re needed.
Commit to a specific level of spending to get discounted prices (with e.g. “reserved instances” or similar mechanisms).
Use PriorityClasses so that the weekly batch job gets priority over other workloads running on the cluster, and can be completed on time.
Answer:
B
Explanation:
Burst workloads are a classic elasticity problem: you need large capacity for a short window, then very little capacity the rest of the week. The most cost-effective approach in a cloud-based Kubernetes environment is to scale infrastructure dynamically, matching node count to current demand. That’s exactly what Cluster Autoscaler is designed for: it adds nodes when Pods cannot be scheduled due to insufficient resources and removes nodes when they become underutilized and can be drained safely. Therefore B is correct.
Option A can work operationally, but it commonly results in paying for a reserved “standing army” of nodes that sit idle most of the week—wasteful for bursty patterns unless the nodes are repurposed for other workloads. Taints/tolerations and nodeSelectors are placement tools; they don’t reduce cost by themselves and may increase waste if they isolate nodes. Option D (PriorityClasses) affects which Pods get scheduled first given available capacity, but it does not create capacity. If the cluster doesn’t have enough nodes, high priority Pods will still remain Pending. Option C (reserved instances or committed-use discounts) can reduce unit price, but it assumes relatively predictable baseline usage. For true bursts, you usually want a smaller baseline plus autoscaling, and optionally combine it with discounted capacity types if your cloud supports them.
In Kubernetes terms, the control loop is: batch Jobs create Pods → scheduler tries to place Pods → if many Pods are Pending due to insufficient CPU/memory, Cluster Autoscaler observes this and increases the node group size → new nodes join and kube-scheduler places Pods → after jobs finish and nodes become empty, Cluster Autoscaler drains and removes nodes. This matches cloud-native principles: elasticity, pay-for-what-you-use, and automation. It minimizes idle capacity while still meeting the completion deadline.
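As a sketch of how such a batch could be expressed (the Job name, image, parallelism, and resource figures are hypothetical and would be sized against the Monday deadline), an Indexed Job creates the Pods whose Pending state is what prompts the Cluster Autoscaler to add nodes:

apiVersion: batch/v1
kind: Job
metadata:
  name: monday-batch         # hypothetical name
spec:
  completions: 1000          # one completion per compute task
  parallelism: 100           # how many Pods may run at once; pick a value that meets the deadline
  completionMode: Indexed    # each Pod receives a distinct completion index
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: registry.example.com/batch-worker:latest   # hypothetical image
        resources:
          requests:
            cpu: "1"         # requests drive both scheduling and autoscaler sizing
            memory: 2Gi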
=========
Which component of the Kubernetes architecture is responsible for integration with the CRI container runtime?
Options:
kubeadm
kubelet
kube-apiserver
kubectl
Answer:
B
Explanation:
The correct answer is B: kubelet. The Container Runtime Interface (CRI) defines how Kubernetes interacts with container runtimes in a consistent, pluggable way. The component that speaks CRI is the kubelet, the node agent responsible for running Pods on each node. When the kube-scheduler assigns a Pod to a node, the kubelet reads the PodSpec and makes the runtime calls needed to realize that desired state—pull images, create a Pod sandbox, start containers, stop containers, and retrieve status and logs. Those calls are made via CRI to a CRI-compliant runtime such as containerd or CRI-O.
Why not the others:
kubeadm bootstraps clusters (init/join/upgrade workflows) but does not run containers or speak CRI for workload execution.
kube-apiserver is the control plane API frontend; it stores and serves cluster state and does not directly integrate with runtimes.
kubectl is just a client tool that sends API requests; it is not involved in runtime integration on nodes.
This distinction matters operationally. If the runtime is misconfigured or the CRI endpoint is unreachable, the kubelet reports errors and Pods can get stuck in ContainerCreating or fail with image pull and runtime errors. Debugging often involves checking kubelet logs and runtime service health, because the kubelet is the integration point bridging Kubernetes scheduling/state with actual container execution.
So, the node-level component responsible for CRI integration is the kubelet—option B.
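A minimal sketch of that wiring, assuming containerd's default socket path (CRI-O typically listens on a different socket, and older setups pass the same value via the kubelet's --container-runtime-endpoint flag): the kubelet configuration names the CRI endpoint it dials for every image and container operation.

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# CRI socket the kubelet uses for pulling images, creating Pod sandboxes,
# and starting/stopping containers; containerd's common default path shown.
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock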
=========
What edge and service proxy tool is designed to be integrated with cloud native applications?
Options:
CoreDNS
CNI
gRPC
Envoy
Answer:
D
Explanation:
The correct answer is D: Envoy. Envoy is a high-performance edge and service proxy designed for cloud-native environments. It is commonly used as the data plane in service meshes and modern API gateways because it provides consistent traffic management, observability, and security features across microservices without requiring every application to implement those capabilities directly.
Envoy operates at Layer 7 (application-aware) and supports protocols like HTTP/1.1, HTTP/2, gRPC, and more. It can handle routing, load balancing, retries, timeouts, circuit breaking, rate limiting, TLS termination, and mutual TLS (mTLS). Envoy also emits rich telemetry (metrics, access logs, tracing) that integrates well with cloud-native observability stacks.
Why the other options are incorrect:
CoreDNS (A) provides DNS-based service discovery within Kubernetes; it is not an edge/service proxy.
CNI (B) is a specification and plugin ecosystem for container networking (Pod networking), not a proxy.
gRPC (C) is an RPC protocol/framework used by applications; it’s not a proxy tool. (Envoy can proxy gRPC traffic, but gRPC itself isn’t the proxy.)
In Kubernetes architectures, Envoy often appears in two places: (1) at the edge as part of an ingress/gateway layer, and (2) sidecar proxies alongside Pods in a service mesh (like Istio) to standardize service-to-service communication controls and telemetry. This is why it is described as “designed to be integrated with cloud native applications”: it’s purpose-built for dynamic service discovery, resilient routing, and operational visibility in distributed systems.
So the verified correct choice is D (Envoy).
=========
What is the primary purpose of a Horizontal Pod Autoscaler (HPA) in Kubernetes?
Options:
To automatically scale the number of Pod replicas based on resource utilization.
To track performance metrics and report health status for nodes and Pods.
To coordinate rolling updates of Pods when deploying new application versions.
To allocate and manage persistent volumes required by stateful applications.
Answer:
A
Explanation:
The Horizontal Pod Autoscaler (HPA) is a core Kubernetes feature designed to automatically scale the number of Pod replicas in a workload based on observed metrics, making option A the correct answer. Its primary goal is to ensure that applications can handle varying levels of demand while maintaining performance and resource efficiency.
HPA works by continuously monitoring metrics such as CPU utilization, memory usage, or custom and external metrics provided through the Kubernetes metrics APIs. Based on target thresholds defined by the user, the HPA increases or decreases the number of replicas in a scalable resource like a Deployment, ReplicaSet, or StatefulSet. When demand increases, HPA adds more Pods to handle the load. When demand decreases, it scales down Pods to free resources and reduce costs.
Option B is incorrect because tracking performance metrics and reporting health status is handled by components such as the metrics-server, monitoring systems, and observability tools—not by the HPA itself. Option C is incorrect because rolling updates are managed by Deployment strategies, not by the HPA. Option D is incorrect because persistent volume management is handled by Kubernetes storage resources and CSI drivers, not by autoscalers.
HPA operates at the Pod replica level, which is why it is called “horizontal” scaling—scaling out or in by changing the number of Pods, rather than adjusting resource limits of individual Pods (which would be vertical scaling). This makes HPA particularly effective for stateless applications that can scale horizontally to meet demand.
In practice, HPA is commonly used in production Kubernetes environments to maintain application responsiveness under load while optimizing cluster resource usage. It integrates seamlessly with Kubernetes’ declarative model and self-healing mechanisms.
Therefore, the correct and verified answer is Option A, as the Horizontal Pod Autoscaler’s primary function is to automatically scale Pod replicas based on resource utilization and defined metrics.
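A minimal autoscaling/v2 sketch, with a hypothetical Deployment name and thresholds: the HPA keeps the target between 2 and 10 replicas while trying to hold average CPU utilization around 60%.

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa              # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 60   # add replicas when average CPU utilization rises above ~60%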
Which mechanism can be used to automatically adjust the amount of resources for an application?
Options:
Horizontal Pod Autoscaler (HPA)
Kubernetes Event-driven Autoscaling (KEDA)
Cluster Autoscaler
Vertical Pod Autoscaler (VPA)
Answer:
A
Explanation:
The answer key gives A (HPA), and that aligns with the common Kubernetes reading of "adjust the amount of resources for an application" as scaling the number of replicas. The Horizontal Pod Autoscaler automatically changes the number of Pod replicas for a workload (typically a Deployment) based on observed metrics such as CPU utilization, memory (in some configurations), or custom/external metrics. By increasing replicas under load, the application gains more total CPU/memory capacity across its Pods; by decreasing replicas when load drops, it reduces resource consumption and cost.
It’s important to distinguish what each mechanism adjusts:
HPA adjusts replica count (horizontal scaling).
VPA adjusts Pod resource requests/limits (vertical scaling), which is literally the "amount of CPU/memory per Pod," but applying its recommendations often requires restarting Pods, depending on the update mode.
Cluster Autoscaler adjusts the number of nodes in the cluster, not application replicas.
KEDA is event-driven autoscaling that often drives HPA behavior using external event sources (queues, streams), but it’s not the primary built-in mechanism referenced in many foundational Kubernetes questions.
Given the wording and the provided answer key, the intended interpretation is: “automatically adjust the resources available to the application” by scaling out/in the number of replicas. That’s exactly HPA’s role. For example, if CPU utilization exceeds a target (say 60%), HPA computes a higher desired replica count and updates the workload. The Deployment then creates more Pods, distributing load and increasing available compute.
So, within this question set, the verified correct choice is A (Horizontal Pod Autoscaler).
=========
What is CRD?
Options:
Custom Resource Definition
Custom Restricted Definition
Customized RUST Definition
Custom RUST Definition
Answer:
A
Explanation:
A CRD is a CustomResourceDefinition, making A correct. Kubernetes is built around an API-driven model: resources like Pods, Services, and Deployments are all objects served by the Kubernetes API. CRDs allow you to extend the Kubernetes API by defining your own resource types. Once a CRD is installed, the API server can store and serve custom objects (Custom Resources) of that new type, and Kubernetes tooling (kubectl, RBAC, admission, watch mechanisms) can interact with them just like built-in resources.
CRDs are a core building block of the Kubernetes ecosystem because they enable operators and platform extensions. A typical pattern is: define a CRD that represents the desired state of some higher-level concept (for example, a database cluster, a certificate request, an application release), and then run a controller (often called an “operator”) that watches those custom resources and reconciles the cluster to match. That controller may create Deployments, StatefulSets, Services, Secrets, or cloud resources to implement the desired state encoded in the custom resource.
The incorrect answers are made-up expansions. CRDs are not related to Rust in Kubernetes terminology, and “custom restricted definition” is not the standard meaning.
So the verified meaning is: CRD = CustomResourceDefinition, used to extend Kubernetes APIs and enable Kubernetes-native automation via controllers/operators.
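A minimal CRD sketch for a hypothetical Database resource in a made-up example.com group: once applied, the API server serves Database objects, kubectl get databases works like any built-in type, and an operator can reconcile them.

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: databases.example.com     # must be <plural>.<group>
spec:
  group: example.com              # hypothetical API group
  scope: Namespaced
  names:
    plural: databases
    singular: database
    kind: Database
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:            # minimal schema; real CRDs validate much more
        type: object
        properties:
          spec:
            type: object
            properties:
              replicas:
                type: integer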
In which framework do the developers no longer have to deal with capacity, deployments, scaling and fault tolerance, and OS?
Options:
Docker Swarm
Kubernetes
Mesos
Serverless
Answer:
D
Explanation:
Serverless is the model where developers most directly avoid managing server capacity, OS operations, and much of the deployment/scaling/fault-tolerance mechanics, which is why D is correct. In serverless computing (commonly Function-as-a-Service, FaaS, and managed serverless container platforms), the provider abstracts away the underlying servers. You typically deploy code (functions) or a container image, define triggers (HTTP events, queues, schedules), and the platform automatically provisions the required compute, scales it based on demand, and handles much of the availability and fault tolerance behind the scenes.
It’s important to compare this to Kubernetes: Kubernetes does automate scheduling, self-healing, rolling updates, and scaling, but it still requires you (or your platform team) to design and operate cluster capacity, node pools, upgrades, runtime configuration, networking, and baseline reliability controls. Even in managed Kubernetes services, you still choose node sizes, scale policies, and operational configuration. Kubernetes reduces toil, but it does not eliminate infrastructure concerns in the same way serverless does.
Docker Swarm and Mesos are orchestration platforms that schedule workloads, but they still require managing the underlying capacity and OS-level concerns; neither frees developers from dealing with capacity and the operating system.
From a cloud native viewpoint, serverless is about consuming compute as an on-demand utility. Kubernetes can be a foundation for a serverless experience (for example, with event-driven autoscaling or serverless frameworks), but the pure framework that removes the most operational burden from developers is serverless.
What is the reference implementation of the OCI runtime specification?
Options:
lxc
CRI-O
runc
Docker
Answer:
C
Explanation:
The verified correct answer is C (runc). The Open Container Initiative (OCI) defines standards for container image format and runtime behavior. The OCI runtime specification describes how to run a container (process execution, namespaces, cgroups, filesystem mounts, capabilities, etc.). runc is widely recognized as the reference implementation of that runtime spec and is used underneath many higher-level container runtimes.
In common container stacks, Kubernetes nodes typically run a CRI-compliant runtime such as containerd or CRI-O. Those runtimes handle image management, container lifecycle coordination, and CRI integration, but they usually invoke an OCI runtime to actually create and start containers. In many deployments, that OCI runtime is runc (or a compatible alternative). This layering helps keep responsibilities separated: CRI runtime manages orchestration-facing operations; OCI runtime performs the low-level container creation according to the standardized spec.
Option A (lxc) is an older Linux containers technology and tooling ecosystem, but it is not the OCI runtime reference implementation. Option B (CRI-O) is a Kubernetes-focused container runtime that implements CRI; it uses OCI runtimes (often runc) underneath, so it’s not the reference implementation itself. Option D (Docker) is a broader platform/tooling suite; while Docker historically used runc under the hood and helped popularize containers, the OCI reference runtime implementation is runc, not Docker.
Understanding this matters in container orchestration contexts because it clarifies what Kubernetes depends on: Kubernetes relies on CRI for runtime integration, and runtimes rely on OCI standards for interoperability. OCI standards ensure that images and runtime behavior are portable across tools and vendors, and runc is the canonical implementation that demonstrates those standards in practice.
Therefore, the correct answer is C: runc.
=========
At which layer would distributed tracing be implemented in a cloud native deployment?
Options:
Network
Application
Database
Infrastructure
Answer:
B
Explanation:
Distributed tracing is implemented primarily at the application layer, so B is correct. The reason is simple: tracing is about capturing the end-to-end path of a request as it traverses services, libraries, queues, and databases. That “request context” (trace ID, span ID, baggage) must be created, propagated, and enriched as code executes. While infrastructure components (proxies, gateways, service meshes) can generate or augment trace spans, the fundamental unit of tracing is still tied to application operations (an HTTP handler, a gRPC call, a database query, a cache lookup).
In Kubernetes-based microservices, distributed tracing typically uses standards like OpenTelemetry for instrumentation and context propagation. Application frameworks emit spans for key operations, attach attributes (route, status code, tenant, retry count), and propagate context via headers (e.g., W3C Trace Context). This is what lets you reconstruct “Service A → Service B → Service C” for one user request and identify the slow or failing hop.
Why other layers are not the best answer:
Network focuses on packets/flows, but tracing is not a packet-capture problem; it’s a causal request-path problem across services.
Database spans are part of traces, but tracing is not “implemented in the database layer” overall; DB spans are one component.
Infrastructure provides the platform and can observe traffic, but without application context it can’t fully represent business operations (and many useful attributes live in app code).
So the correct layer for “where tracing is implemented” is the application layer—even when a mesh or proxy helps, it’s still describing application request execution across components.
=========
What are the two steps performed by the kube-scheduler to select a node to schedule a pod?
Options:
Grouping and placing
Filtering and selecting
Filtering and scoring
Scoring and creating
Answer:
C
Explanation:
The kube-scheduler selects a node in two main phases: filtering and scoring, so C is correct. First, filtering identifies which nodes are feasible for the Pod by applying hard constraints. These include resource availability (CPU/memory requests), node taints/tolerations, node selectors and required affinities, topology constraints, and other scheduling requirements. Nodes that cannot satisfy the Pod’s requirements are removed from consideration.
Second, scoring ranks the remaining feasible nodes using priority functions to choose the “best” placement. Scoring can consider factors like spreading Pods across nodes/zones, packing efficiency, affinity preferences, and other policies configured in the scheduler. The node with the highest score is selected (with tie-breaking), and the scheduler binds the Pod by setting spec.nodeName.
Option B (“filtering and selecting”) is close but misses the explicit scoring step that is central to scheduler design. The scheduler does “select” a node, but the canonical two-step wording in Kubernetes scheduling is filtering then scoring. Options A and D are not how scheduler internals are described.
Operationally, understanding filtering vs scoring helps troubleshoot scheduling failures. If a Pod can’t be scheduled, it failed in filtering—kubectl describe pod often shows “0/… nodes are available” reasons (insufficient CPU, taints, affinity mismatch). If it schedules but lands in unexpected places, it’s often about scoring preferences (affinity weights, topology spread preferences, default scheduler profiles).
So the verified correct answer is C: kube-scheduler uses Filtering and Scoring.
=========
The cloud native architecture centered around microservices provides a strong system that ensures ______________.
Options:
fallback
resiliency
failover
high reachability
Answer:
B
Explanation:
The best answer is B (resiliency). A microservices-centered cloud-native architecture is designed to build systems that continue to operate effectively under change and failure. “Resiliency” is the umbrella concept: the ability to tolerate faults, recover from disruptions, and maintain acceptable service levels through redundancy, isolation, and automated recovery.
Microservices help resiliency by reducing blast radius. Instead of one monolith where a single defect can take down the entire application, microservices separate concerns into independently deployable components. Combined with Kubernetes, you get resiliency mechanisms such as replication (multiple Pod replicas), self-healing (restart and reschedule on failure), rolling updates, health probes, and service discovery/load balancing. These enable the platform to detect and replace failing instances automatically, and to keep traffic flowing to healthy backends.
Options C (failover) and A (fallback) are resiliency techniques but are narrower terms. Failover usually refers to switching to a standby component when a primary fails; fallback often refers to degraded behavior (cached responses, reduced features). Both can exist in microservice systems, but the broader architectural guarantee microservices aim to support is resiliency overall. Option D (“high reachability”) is not the standard term used in cloud-native design and doesn’t capture the intent as precisely as resiliency.
In practice, achieving resiliency also requires good observability and disciplined delivery: monitoring/alerts, tracing across service boundaries, circuit breakers/timeouts/retries, and progressive delivery patterns. Kubernetes provides platform primitives, but resilient microservices also need careful API design and failure-mode thinking.
So the intended and verified completion is resiliency, option B.
=========
In Kubernetes, what is the primary purpose of using annotations?
Options:
To control the access permissions for users and service accounts.
To provide a way to attach metadata to objects.
To specify the deployment strategy for applications.
To define the specifications for resource limits and requests.
Answer:
B
Explanation:
Annotations in Kubernetes are a flexible mechanism for attaching non-identifying metadata to Kubernetes objects. Their primary purpose is to store additional information that is not used for object selection or grouping, which makes Option B the correct answer.
Unlike labels, which are designed to be used for selection, filtering, and grouping of resources (for example, by Services or Deployments), annotations are intended purely for informational or auxiliary purposes. They allow users, tools, and controllers to store arbitrary key–value data on objects without affecting Kubernetes’ core behavior. This makes annotations ideal for storing data such as build information, deployment timestamps, commit hashes, configuration hints, or ownership details.
Annotations are commonly consumed by external tools and controllers rather than by the Kubernetes scheduler or control plane for decision-making. For example, ingress controllers, service meshes, monitoring agents, and CI/CD systems often read annotations to enable or customize specific behaviors. Because annotations are not used for querying or selection, Kubernetes places no strict size or structure requirements on their values beyond general object size limits.
Option A is incorrect because access permissions are managed using Role-Based Access Control (RBAC), which relies on roles, role bindings, and service accounts—not annotations. Option C is incorrect because deployment strategies (such as RollingUpdate or Recreate) are defined in the specification of workload resources like Deployments, not through annotations. Option D is also incorrect because resource limits and requests are specified explicitly in the Pod or container spec under the resources field.
In summary, annotations provide a powerful and extensible way to associate metadata with Kubernetes objects without influencing scheduling, selection, or identity. They support integration, observability, and operational tooling while keeping core Kubernetes behavior predictable and stable. This design intent is clearly documented in Kubernetes metadata concepts, making Option B the correct and verified answer.
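A minimal sketch contrasting the two kinds of metadata (names, keys, and values are hypothetical): labels drive selection and grouping, while annotations carry free-form information for tools and controllers.

apiVersion: v1
kind: Pod
metadata:
  name: web                                  # hypothetical name
  labels:
    app: web                                 # labels: used by selectors (Services, Deployments)
  annotations:
    # annotations: free-form metadata read by tools, never by selectors
    example.com/git-commit: "9f2c1ab"        # hypothetical keys and values
    example.com/deployed-by: "ci-pipeline"
spec:
  containers:
  - name: app
    image: nginx:1.25                        # hypothetical image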