Linux Foundation KCNA Kubernetes and Cloud Native Associate Exam Practice Test
Kubernetes and Cloud Native Associate Questions and Answers
What does “continuous” mean in the context of CI/CD?
Options:
Frequent releases, manual processes, repeatable, fast processing
Periodic releases, manual processes, repeatable, automated processing
Frequent releases, automated processes, repeatable, fast processing
Periodic releases, automated processes, repeatable, automated processing
Answer:
C
Explanation:
The correct answer is C: in CI/CD, “continuous” implies frequent releases, automation, repeatability, and fast feedback/processing. The intent is to reduce batch size and latency between code change and validation/deployment. Instead of integrating or releasing in large, risky chunks, teams integrate changes continually and rely on automation to validate and deliver them safely.
“Continuous” does not mean “periodic” (which eliminates B and D). It also does not mean “manual processes” (which eliminates A and B). Automation is core: build, test, security checks, and deployment steps are consistently executed by pipeline systems, producing reliable outcomes and auditability.
In practice, CI means every merge triggers automated builds and tests so the main branch stays in a healthy state. CD means those validated artifacts are promoted through environments with minimal manual steps, often including progressive delivery controls (canary, blue/green), automated rollbacks on health signal failures, and policy checks. Kubernetes works well with CI/CD because it is declarative and supports rollout primitives: Deployments, readiness probes, and rollback revision history enable safer continuous delivery when paired with pipeline automation.
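To make this concrete, here is a minimal sketch of such a pipeline written as a GitHub Actions workflow; the repository layout, registry name, and Deployment name are assumptions for illustration only, not part of the question:

# Hypothetical CI/CD workflow sketch (GitHub Actions syntax); image name,
# registry, and Deployment name are assumptions.
name: ci-cd
on:
  push:
    branches: [main]          # every merge to main triggers the pipeline
jobs:
  build-test-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t registry.example.com/web:${{ github.sha }} .
      - name: Run tests
        run: make test
      - name: Push image
        run: docker push registry.example.com/web:${{ github.sha }}
      - name: Deploy to Kubernetes
        run: |
          kubectl set image deployment/web web=registry.example.com/web:${{ github.sha }}
          kubectl rollout status deployment/web   # waits for the rollout to finish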
Repeatability is a major part of “continuous.” The same pipeline should run the same way every time, producing consistent artifacts and deployments. This reduces “works on my machine” issues and shortens incident resolution because changes are traceable and reproducible. Fast processing and frequent releases also mean smaller diffs, easier debugging, and quicker customer value delivery.
So, the combination that accurately reflects “continuous” in CI/CD is frequent + automated + repeatable + fast, which is option C.
=========
What is the core metric type in Prometheus used to represent a single numerical value that can go up and down?
Options:
Summary
Counter
Histogram
Gauge
Answer:
D
Explanation:
In Prometheus, a Gauge represents a single numerical value that can increase and decrease over time, which makes D the correct answer. Gauges are used for values like current memory usage, number of in-flight requests, queue depth, temperature, or CPU usage—anything that can move up and down.
This contrasts with a Counter, which is strictly monotonically increasing (it only goes up, except for resets when a process restarts). Counters are ideal for cumulative totals like total HTTP requests served, total errors, or bytes transmitted. Histograms and Summaries are used to capture distributions (often latency distributions), providing bucketed counts (histogram) or quantile approximations (summary), and are not the “single value that goes up and down” primitive the question asks for.
In Kubernetes observability, metrics are a primary signal for understanding system health and performance. Prometheus is widely used to scrape metrics from Kubernetes components (kubelet, API server, controller-manager), cluster add-ons, and applications. Gauges are common for resource utilization metrics and for instantaneous states, such as container_memory_working_set_bytes or go_goroutines.
When you build alerting and dashboards, selecting the right metric type matters. For example, if you want to alert on the current memory usage, a gauge is appropriate. If you want to compute request rates, you typically use counters with Prometheus functions like rate() to derive per-second rates. Histograms and summaries are used when you need latency percentiles or distribution analysis.
So, for “a single numerical value that can go up and down,” the correct Prometheus metric type is Gauge (D).
=========
What is a best practice to minimize the container image size?
Options:
Use a Dockerfile.
Use multistage builds.
Build images with different tags.
Add a build.sh script.
Answer:
B
Explanation:
A proven best practice for minimizing container image size is to use multi-stage builds, so B is correct. Multi-stage builds allow you to separate the “build environment” from the “runtime environment.” In the first stage, you can use a full-featured base image (with compilers, package managers, and build tools) to compile your application or assemble artifacts. In the final stage, you copy only the resulting binaries or necessary runtime assets into a much smaller base image (for example, a distroless image or a slim OS image). This dramatically reduces the final image size because it excludes compilers, caches, and build dependencies that are not needed at runtime.
In cloud-native application delivery, smaller images matter for several reasons. They pull faster, which speeds up deployments, rollouts, and scaling events (Pods become Ready sooner). They also reduce attack surface by removing unnecessary packages, which helps security posture and scanning results. Smaller images tend to be simpler and more reproducible, improving reliability across environments.
Option A is not a size-minimization practice: using a Dockerfile is simply the standard way to define how to build an image; it doesn’t inherently reduce size. Option C (different tags) changes image identification but not size. Option D (a build script) may help automation, but it doesn’t guarantee smaller images; the image contents are determined by what ends up in the layers.
Multi-stage builds are commonly paired with other best practices: choosing minimal base images, cleaning package caches, avoiding copying unnecessary files (use .dockerignore), and reducing layer churn. But among the options, the clearest and most directly correct technique is multi-stage builds.
Therefore, the verified answer is B.
=========
What's the most adopted way of conflict resolution and decision-making for the open-source projects under the CNCF umbrella?
Options:
Financial Analysis
Discussion and Voting
Flipism Technique
Project Founder Say
Answer:
B
Explanation:
B (Discussion and Voting) is correct. CNCF-hosted open-source projects generally operate with open governance practices that emphasize transparency, community participation, and documented decision-making. While each project can have its own governance model (maintainers, technical steering committees, SIGs, TOC interactions, etc.), a very common and widely adopted approach to resolving disagreements and making decisions is to first pursue discussion (often on GitHub issues/PRs, mailing lists, or community meetings) and then use voting/consensus mechanisms when needed.
This approach is important because open-source communities are made up of diverse contributors across companies and geographies. “Project Founder Say” (D) is not a sustainable or typical CNCF governance norm for mature projects; CNCF explicitly encourages neutral, community-led governance rather than single-person control. “Financial Analysis” (A) is not a conflict resolution mechanism for technical decisions, and “Flipism Technique” (C) is not a real governance practice.
In Kubernetes specifically, community decisions are often made within structured groups (e.g., SIGs) using discussion and consensus-building, sometimes followed by formal votes where governance requires it. The goal is to ensure decisions are fair, recorded, and aligned with the project’s mission and contributor expectations. This also reduces risk of vendor capture and builds trust: anyone can review the rationale in meeting notes, issues, or PR threads, and decisions can be revisited with new evidence.
Therefore, the most adopted conflict resolution and decision-making method across CNCF open-source projects is discussion and voting, making B the verified correct answer.
=========
Which of the following is a responsibility of the governance board of an open source project?
Options:
Decide about the marketing strategy of the project.
Review the pull requests in the main branch.
Outline the project's “terms of engagement”.
Define the license to be used in the project.
Answer:
C
Explanation:
A governance board in an open source project typically defines how the community operates—its decision-making rules, roles, conflict resolution, and contribution expectations—so C (“Outline the project's terms of engagement”) is correct. In large cloud-native projects (Kubernetes being a prime example), clear governance is essential to coordinate many contributors, companies, and stakeholders. Governance establishes the “rules of the road” that keep collaboration productive and fair.
“Terms of engagement” commonly includes: how maintainers are selected, how proposals are reviewed (e.g., enhancement processes), how meetings and SIGs operate, what constitutes consensus, how voting works when consensus fails, and what code-of-conduct expectations apply. It also defines escalation and dispute resolution paths so technical disagreements don’t become community-breaking conflicts. In other words, governance is about ensuring the project has durable, transparent processes that outlive any individual contributor and support vendor-neutral decision making.
Option B (reviewing pull requests) is usually the responsibility of maintainers and SIG owners, not a governance board. The governance body may define the structure that empowers maintainers, but it generally does not do day-to-day code review. Option A (marketing strategy) is often handled by foundations, steering committees, or separate outreach groups, not governance boards as their primary responsibility. Option D (defining the license) is usually decided early and may be influenced by a foundation or legal process; while governance can shape legal/policy direction, the core governance responsibility is broader community operating rules rather than selecting a license.
In cloud-native ecosystems, strong governance supports sustainability: it encourages contributions, protects neutrality, and provides predictable processes for evolution. Therefore, the best verified answer is C.
=========
Which of the following sentences is true about container runtimes in Kubernetes?
Options:
If you let iptables see bridged traffic, you don't need a container runtime.
If you enable IPv4 forwarding, you don't need a container runtime.
Container runtimes are deprecated, you must install CRI on each node.
You must install a container runtime on each node to run pods on it.
Answer:
D
Explanation:
A Kubernetes node must have a container runtime to run Pods, so D is correct. Kubernetes schedules Pods to nodes, but the actual execution of containers is performed by a runtime such as containerd or CRI-O. The kubelet communicates with that runtime via the Container Runtime Interface (CRI) to pull images, create sandboxes, and start/stop containers. Without a runtime, the node cannot launch container processes, so Pods cannot transition into running state.
Options A and B confuse networking kernel settings with runtime requirements. iptables bridged traffic visibility and IPv4 forwarding can be relevant for node networking, but they do not replace the need for a container runtime. Networking and container execution are separate layers: you need networking for connectivity, and you need a runtime for running containers.
Option C is also incorrect and muddled. Container runtimes are not deprecated; rather, Kubernetes removed the built-in Docker shim integration from kubelet in favor of CRI-native runtimes. CRI is an interface, not “something you install instead of a runtime.” In practice you install a CRI-compatible runtime (containerd/CRI-O), which implements CRI endpoints that kubelet talks to.
Operationally, the runtime choice affects node behavior: image management, logging integration, performance characteristics, and compatibility. Kubernetes installation guides explicitly list installing a container runtime as a prerequisite for worker nodes. If a cluster has nodes without a properly configured runtime, workloads scheduled there will fail to start (often stuck in ContainerCreating/ImagePullBackOff/Runtime errors).
Therefore, the only fully correct statement is D: each node needs a container runtime to run Pods.
=========
What are the advantages of adopting a GitOps approach for your deployments?
Options:
Reduce failed deployments, operational costs, and fragile release processes.
Reduce failed deployments, configuration drift, and fragile release processes.
Reduce failed deployments, operational costs, and learn git.
Reduce failed deployments, configuration drift and improve your reputation.
Answer:
B
Explanation:
The correct answer is B: GitOps helps reduce failed deployments, reduce configuration drift, and reduce fragile release processes. GitOps is an operating model where Git is the source of truth for declarative configuration (Kubernetes manifests, Helm releases, Kustomize overlays). A GitOps controller (like Flux or Argo CD) continuously reconciles the cluster’s actual state to match what’s declared in Git. This creates a stable, repeatable deployment pipeline and minimizes “snowflake” environments.
Reducing failed deployments: changes go through pull requests, code review, automated checks, and controlled merges. Deployments become predictable because the controller applies known-good, versioned configuration rather than ad-hoc manual commands. Rollbacks are also simpler—reverting a Git commit returns the cluster to the prior desired state.
Reducing configuration drift: without GitOps, clusters often drift because humans apply hotfixes directly in production or because different environments diverge over time. With GitOps, the controller detects drift and either alerts or automatically corrects it, restoring alignment with Git.
Reducing fragile release processes: releases become standardized and auditable. Git history provides an immutable record of who changed what and when. Promotion between environments becomes systematic (merge/branch/tag), and the same declarative artifacts are used consistently.
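To make this concrete, here is a minimal sketch of an Argo CD Application that keeps a namespace reconciled against a Git repository; the repository URL, path, and namespace are assumptions:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: web
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/gitops-config   # assumed config repo
    targetRevision: main
    path: apps/web
  destination:
    server: https://kubernetes.default.svc
    namespace: web
  syncPolicy:
    automated:
      prune: true      # delete cluster resources that were removed from Git
      selfHeal: true   # revert manual changes (drift) back to the Git-declared state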
The other options include items that are either not the primary GitOps promise (like “learn git”) or subjective (“improve your reputation”). Operational cost reduction can happen indirectly through fewer incidents and more automation, but the most canonical and direct GitOps advantages in Kubernetes delivery are reliability and drift control—captured precisely in B.
=========
How many different Kubernetes service types can you define?
Options:
2
3
4
5
Answer:
C
Explanation:
Kubernetes defines four primary Service types, which is why C (4) is correct. The commonly recognized Service spec.type values are:
ClusterIP: The default type. Exposes the Service on an internal virtual IP reachable only within the cluster. This supports typical east-west traffic between workloads.
NodePort: Exposes the Service on a static port on each node. Traffic sent to any node's IP on that port is forwarded to the Service and on to the backend Pods, which is useful for simple external access or bare-metal environments.
LoadBalancer: Integrates with a cloud provider (or load balancer implementation) to provision an external load balancer and route traffic to the Service. This is common in managed Kubernetes.
ExternalName: Maps the Service name to an external DNS name via a CNAME record, allowing in-cluster clients to use a consistent Service DNS name to reach an external dependency.
Some people also talk about “Headless Services,” but headless is not a separate type; it’s a behavior achieved by setting clusterIP: None. Headless Services still use the Service API object but change DNS and virtual-IP behavior to return endpoint IPs directly rather than a ClusterIP. That’s why the canonical count of “Service types” is four.
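For illustration, a Service manifest differs between types mainly in spec.type; the names and ports below are assumptions:

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer        # ClusterIP is the default; NodePort is set the same way
  selector:
    app: web                # backend Pods are chosen by label
  ports:
    - port: 80              # Service port
      targetPort: 8080      # container port on the backend Pods
---
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: db.example.com   # in-cluster DNS for this Service returns a CNAME

A headless Service is simply one of these objects with clusterIP: None set, not a fifth type.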
This question tests understanding of the Service abstraction: Service type controls how a stable service identity is exposed (internal VIP, node port, external LB, or DNS alias), while selectors/endpoints control where traffic goes (the backend Pods). Different environments will favor different types: ClusterIP for internal microservices, LoadBalancer for external exposure in cloud, NodePort for bare-metal or simple access, ExternalName for bridging to outside services.
Therefore, the verified answer is C (4).
=========
Which statement about Secrets is correct?
Options:
A Secret is part of a Pod specification.
Secret data is encrypted with the cluster private key by default.
Secret data is base64 encoded and stored unencrypted by default.
A Secret can only be used for confidential data.
Answer:
C
Explanation:
The correct answer is C. By default, Kubernetes Secrets store their data as base64-encoded values in the API (backed by etcd). Base64 is an encoding mechanism, not encryption, so this does not provide confidentiality. Unless you explicitly configure encryption at rest for etcd (via the API server encryption provider configuration) and secure access controls, Secret contents should be treated as potentially readable by anyone with sufficient API access or access to etcd backups.
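A quick illustration (the values are made up): the data field holds base64-encoded strings that anyone with read access can decode:

apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  username: YWRtaW4=          # "admin" base64-encoded, not encrypted
  password: cGFzc3dvcmQxMjM=  # "password123" base64-encoded, not encrypted

Decoding takes one command (echo cGFzc3dvcmQxMjM= | base64 -d), which is exactly why encryption at rest and strict RBAC matter.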
Option A is misleading: a Secret is its own Kubernetes resource (kind: Secret). While Pods can reference Secrets (as environment variables or mounted volumes), the Secret itself is not “part of the Pod spec” as an embedded object. Option B is incorrect because Kubernetes does not automatically encrypt Secret data with a cluster private key by default; encryption at rest is optional and must be enabled. Option D is incorrect because Secrets can store a range of sensitive or semi-sensitive data (tokens, certs, passwords), but Kubernetes does not enforce “only confidential data” semantics; it’s a storage mechanism with size and format constraints.
Operationally, best practices include: enabling encryption at rest, limiting access via RBAC, avoiding broad “list/get secrets” permissions, using dedicated service accounts, auditing access, and considering external secrets managers (Vault, cloud KMS-backed solutions) for higher assurance. Also, don’t confuse “Secret” with “secure by default.” The default protection is mainly about avoiding accidental plaintext exposure in manifests, not about cryptographic security.
So the only correct statement in the options is C.
=========
The Container Runtime Interface (CRI) defines the protocol for the communication between:
Options:
The kubelet and the container runtime.
The container runtime and etcd.
The kube-apiserver and the kubelet.
The container runtime and the image registry.
Answer:
A
Explanation:
The CRI (Container Runtime Interface) defines how the kubelet talks to the container runtime, so A is correct. The kubelet is the node agent responsible for ensuring containers are running in Pods on that node. It needs a standardized way to request operations such as: create a Pod sandbox, pull an image, start/stop containers, execute commands, attach streams, and retrieve logs. CRI provides that contract so kubelet does not need runtime-specific integrations.
This interface is a key part of Kubernetes’ modular design. Different container runtimes implement the CRI, allowing Kubernetes to run with containerd, CRI-O, and other CRI-compliant runtimes. This separation of concerns lets Kubernetes focus on orchestration, while runtimes focus on executing containers according to the OCI runtime spec, managing images, and handling low-level container lifecycle.
Why the other options are incorrect:
etcd is the control plane datastore; container runtimes do not communicate with etcd via CRI.
kube-apiserver and kubelet communicate using Kubernetes APIs, but CRI is not their protocol; CRI is specifically kubelet ↔ runtime.
container runtime and image registry communicate using registry protocols (image pull/push APIs), but that is not CRI. CRI may trigger image pulls via runtime requests, yet the actual registry communication is separate.
Operationally, this distinction matters when debugging node issues. If Pods are stuck in “ContainerCreating” due to image pull failures or runtime errors, you often investigate kubelet logs and the runtime (containerd/CRI-O) logs. Kubernetes administrators also care about CRI streaming (exec/attach/logs streaming), runtime configuration, and compatibility across Kubernetes versions.
So, the verified answer is A: the kubelet and the container runtime.
=========
How to load and generate data required before the Pod startup?
Options:
Use an init container with shared file storage.
Use a PVC volume.
Use a sidecar container with shared volume.
Use another Pod with a PVC.
Answer:
A
Explanation:
The Kubernetes-native mechanism to run setup steps before the main application containers start is an init container, so A is correct. Init containers run sequentially and must complete successfully before the regular containers in the Pod are started. This makes them ideal for preparing configuration, downloading artifacts, performing migrations, generating files, or waiting for dependencies.
The question specifically asks how to “load and generate data required before Pod startup.” The most common pattern is: an init container writes files into a shared volume (like an emptyDir volume) mounted by both the init container and the app container. When the init container finishes, the app container starts and reads the generated files. This is deterministic and aligns with Kubernetes Pod lifecycle semantics.
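A minimal sketch of that pattern (image names and paths are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-init
spec:
  volumes:
    - name: shared-data
      emptyDir: {}                  # scratch volume shared within the Pod
  initContainers:
    - name: generate-config
      image: busybox:1.36
      command: ["sh", "-c", "echo ready > /work/config.txt"]   # generates the data
      volumeMounts:
        - name: shared-data
          mountPath: /work
  containers:
    - name: app
      image: nginx:1.25             # starts only after the init container succeeds
      volumeMounts:
        - name: shared-data
          mountPath: /etc/app       # reads the files produced by the init container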
A sidecar container (option C) runs concurrently with the main container, so it is not guaranteed to complete work before startup. Sidecars are great for ongoing concerns (log shipping, proxies, config reloaders), but they are not the primary “before startup” mechanism. A PVC volume (option B) is just storage; it doesn’t itself perform generation or ensure ordering. “Another Pod with a PVC” (option D) introduces coordination complexity and still does not guarantee the data is prepared before this Pod starts unless you build additional synchronization.
Init containers are explicitly designed for this kind of pre-flight work, and Kubernetes guarantees ordering: all init containers complete in order, then the app containers begin. That guarantee is why A is the best and verified answer.
=========
How are ReplicaSets and Deployments related?
Options:
Deployments manage ReplicaSets and provide declarative updates to Pods.
ReplicaSets manage stateful applications, Deployments manage stateless applications.
Deployments are runtime instances of ReplicaSets.
ReplicaSets are subsets of Jobs and CronJobs which use imperative Deployments.
Answer:
A
Explanation:
In Kubernetes, a Deployment is a higher-level controller that manages ReplicaSets, and ReplicaSets in turn manage Pods. That is exactly what option A states, making it the correct answer.
A ReplicaSet’s job is straightforward: ensure that a specified number of Pod replicas matching a selector are running. It continuously reconciles actual state to desired state by creating new Pods when replicas are missing or removing Pods when there are too many. However, ReplicaSets alone do not provide the richer application rollout lifecycle features most teams need.
A Deployment adds those features by managing ReplicaSets across versions of your Pod template. When you update a Deployment (for example, change the container image tag), Kubernetes creates a new ReplicaSet with the new Pod template and then gradually scales the new ReplicaSet up and the old one down according to the Deployment strategy (RollingUpdate by default). Deployments also maintain rollout history, support rollback (kubectl rollout undo), and allow pause/resume of rollouts. This is why the common guidance is: you almost always create Deployments rather than ReplicaSets directly for stateless apps.
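A minimal Deployment sketch (image and labels are assumptions); editing spec.template is what triggers a new ReplicaSet and a rolling replacement of Pods:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  strategy:
    type: RollingUpdate          # default: new ReplicaSet scales up, old one scales down
  selector:
    matchLabels:
      app: web
  template:                      # changing this Pod template creates a new ReplicaSet
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25      # bumping this tag starts a new rollout

kubectl rollout undo deployment/web then re-applies the previous template, which Kubernetes implements by scaling the older ReplicaSet back up.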
Option B is incorrect because stateful workloads are typically handled by StatefulSets, not ReplicaSets. Deployments can run stateless apps, but ReplicaSets are also used under Deployments and are not “for stateful only.” Option C is reversed: ReplicaSets are not “instances” of Deployments; Deployments create/manage ReplicaSets. Option D is incorrect because Jobs/CronJobs are separate controllers for run-to-completion workloads and do not define ReplicaSets as subsets.
So the accurate relationship is: Deployment → manages ReplicaSets → which manage Pods, enabling declarative updates and controlled rollouts.
=========
Which two elements are shared between containers in the same pod?
Options:
Network resources and liveness probes.
Storage and container image registry.
Storage and network resources.
Network resources and Dockerfiles.
Answer:
C
Explanation:
The correct answer is C: Storage and network resources. In Kubernetes, a Pod is the smallest schedulable unit and acts like a “logical host” for its containers. Containers inside the same Pod share a number of namespaces and resources, most notably:
Network: all containers in a Pod share the same network namespace, which means they share a single Pod IP address and the same port space. They can talk to each other via localhost and coordinate tightly without exposing separate network endpoints.
Storage: containers in a Pod can share data through Pod volumes. Volumes (like emptyDir, ConfigMap/Secret volumes, or PVC-backed volumes) are defined at the Pod level and can be mounted into multiple containers within the Pod. This enables common patterns like a sidecar shipping logs from a shared volume that the main container writes to, or an init/sidecar container producing configuration or certificates for the main container (see the sketch below).
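As a sketch (images and paths are assumptions), two containers in one Pod sharing an emptyDir volume while also sharing the Pod's single IP and localhost:

apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  volumes:
    - name: logs
      emptyDir: {}                        # shared storage, Pod-scoped lifetime
  containers:
    - name: web
      image: nginx:1.25
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx       # main container writes logs here
    - name: log-shipper
      image: busybox:1.36
      command: ["sh", "-c", "tail -F /logs/access.log"]   # sidecar reads the same files
      volumeMounts:
        - name: logs
          mountPath: /logs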
Why other options are wrong: liveness probes (A) are defined per container (or per Pod template) but are not a “shared” resource between containers. A container image registry (B) is an external system and not a shared in-Pod element. Dockerfiles (D) are build-time artifacts, irrelevant at runtime, and not shared resources.
This question is a classic test of Pod fundamentals: multi-container Pods work precisely because they share networking and volumes. This is also why the sidecar pattern is feasible—sidecars can intercept traffic on localhost, export metrics, or ship logs while sharing the same lifecycle boundary and scheduling placement.
Therefore, the verified correct choice is C.
=========
What is a Pod?
Options:
A networked application within Kubernetes.
A storage volume within Kubernetes.
A single container within Kubernetes.
A group of one or more containers within Kubernetes.
Answer:
D
Explanation:
A Pod is the smallest deployable/schedulable unit in Kubernetes and consists of a group of one or more containers that are deployed together on the same node—so D is correct. The key idea is that Kubernetes schedules Pods, not individual containers. Containers in the same Pod share important runtime context: they share the same network namespace (one Pod IP and port space) and can share storage volumes defined at the Pod level. This is why a Pod is often described as a “logical host” for its containers.
Most Pods run a single container, but multi-container Pods are common for sidecar patterns. For example, an application container might run alongside a service mesh proxy sidecar, a log shipper, or a config reloader. Because these containers share localhost networking, they can communicate efficiently without exposing extra network endpoints. Because they can share volumes, one container can produce files that another consumes (for example, writing logs to a shared volume).
Options A and B are incorrect because a Pod is not “an application” abstraction nor is it a storage volume. Pods can host applications, but they are the execution unit for containers rather than the application concept itself. Option C is incorrect because a Pod is not limited to a single container; “one or more containers” is fundamental to the Pod definition.
Operationally, understanding Pods is essential because many Kubernetes behaviors key off Pods: Services select Pods (typically by labels), autoscalers scale Pods (replica counts), probes determine Pod readiness/liveness, and scheduling constraints place Pods on nodes. When a Pod is replaced (for example during a Deployment rollout), a new Pod is created with a new UID and potentially a new IP—reinforcing why Services exist to provide stable access.
Therefore, the verified correct answer is D: a Pod is a group of one or more containers within Kubernetes.
=========
What is the Kubernetes object used for running a recurring workload?
Options:
Job
Batch
DaemonSet
CronJob
Answer:
D
Explanation:
A recurring workload in Kubernetes is implemented with a CronJob, so the correct choice is D. A CronJob is a controller that creates Jobs on a schedule defined in standard cron format (minute, hour, day of month, month, day of week). This makes CronJobs ideal for periodic tasks like backups, report generation, log rotation, and cleanup tasks.
A Job (option A) is run-to-completion but is typically a one-time execution; it ensures that a specified number of Pods successfully terminate. You can use a Job repeatedly, but something else must create it each time—CronJob is that built-in scheduler. Option B (“Batch”) is not a standard workload resource type (batch is an API group, not the object name used here). Option C (DaemonSet) ensures one Pod runs on every node (or selected nodes), which is not “recurring,” it’s “always present per node.”
CronJobs include operational controls that matter in real clusters. For example, concurrencyPolicy controls what happens if a scheduled run overlaps with a previous run (Allow, Forbid, Replace). startingDeadlineSeconds can handle missed schedules (e.g., if the controller was down). History limits (successfulJobsHistoryLimit, failedJobsHistoryLimit) help manage cleanup and troubleshooting. Each scheduled execution results in a Job with its own Pods, which can be inspected with kubectl get jobs and kubectl logs.
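A minimal CronJob sketch (the schedule and image are assumptions) showing those controls:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-backup
spec:
  schedule: "0 2 * * *"              # every day at 02:00
  concurrencyPolicy: Forbid          # skip a run if the previous Job is still active
  startingDeadlineSeconds: 300       # tolerate missed schedules for up to 5 minutes
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: backup
              image: busybox:1.36
              command: ["sh", "-c", "echo running backup"]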
So the correct Kubernetes object for a recurring workload is CronJob (D): it provides native scheduling and creates Jobs automatically according to the defined cadence.
=========
In Kubernetes, which abstraction defines a logical set of Pods and a policy by which to access them?
Options:
Service Account
NetworkPolicy
Service
Custom Resource Definition
Answer:
C
Explanation:
The correct answer is C: Service. A Kubernetes Service is an abstraction that provides stable access to a logical set of Pods. Pods are ephemeral: they can be rescheduled, recreated, and scaled, which changes their IP addresses over time. A Service solves this by providing a stable identity—typically a virtual IP (ClusterIP) and a DNS name—and a traffic-routing policy that directs requests to the current set of backend Pods.
Services commonly select Pods using labels via a selector (e.g., app=web). Kubernetes then maintains the backend endpoint list (Endpoints/EndpointSlices). The cluster networking layer routes traffic sent to the Service IP/port to one of the Pod endpoints, enabling load distribution across replicas. This is fundamental to microservices architectures: clients call the Service name, not individual Pods.
Why the other options are incorrect:
A ServiceAccount is an identity for Pods to authenticate to the Kubernetes API; it doesn’t define a set of Pods nor traffic access policy.
A NetworkPolicy defines allowed network flows (who can talk to whom) but does not provide stable addressing or load-balanced access to Pods. It is a security policy, not an exposure abstraction.
A CustomResourceDefinition extends the Kubernetes API with new resource types; it’s unrelated to service discovery and traffic routing for a set of Pods.
Understanding Services is core Kubernetes fundamentals: they decouple backend Pod churn from client connectivity. Services also integrate with different exposure patterns via type (ClusterIP, NodePort, LoadBalancer, ExternalName) and can be paired with Ingress/Gateway for HTTP routing. But the essential definition in the question—“logical set of Pods and a policy to access them”—is exactly the textbook description of a Service.
Therefore, the verified correct answer is C.
=========
What is the name of the lightweight Kubernetes distribution built for IoT and edge computing?
Options:
OpenShift
k3s
RKE
k1s
Answer:
B
Explanation:
Edge and IoT environments often have constraints that differ from traditional datacenters: limited CPU/RAM, intermittent connectivity, smaller footprints, and a desire for simpler operations. k3s is a well-known lightweight Kubernetes distribution designed specifically to run in these environments, making B the correct answer.
What makes k3s “lightweight” is that it packages Kubernetes components in a simplified way and reduces operational overhead. It typically uses a single binary distribution and can run with an embedded datastore option for smaller installations (while also supporting external datastores for HA use cases). It streamlines dependencies and is aimed at faster installation and reduced resource consumption, which is ideal for edge nodes, IoT gateways, small servers, labs, and development environments.
By contrast, OpenShift is a Kubernetes distribution focused on enterprise platform capabilities, with additional security defaults, integrated developer tooling, and a larger operational footprint—excellent for many enterprises but not “built for IoT and edge” as the defining characteristic. RKE (Rancher Kubernetes Engine) is a Kubernetes installer/engine used to deploy Kubernetes, but it’s not specifically the lightweight edge-focused distribution in the way k3s is. “k1s” is not a standard, widely recognized Kubernetes distribution name in this context.
From a cloud native architecture perspective, edge Kubernetes distributions extend the same declarative and API-driven model to places where you want consistent operations across cloud, datacenter, and edge. You can apply GitOps patterns, standard manifests, and Kubernetes-native controllers across heterogeneous footprints. k3s provides that familiar Kubernetes experience while optimizing for constrained environments, which is why it has become a common choice for edge/IoT Kubernetes deployments.
=========
What edge and service proxy tool is designed to be integrated with cloud native applications?
Options:
CoreDNS
CNI
gRPC
Envoy
Answer:
D
Explanation:
The correct answer is D: Envoy. Envoy is a high-performance edge and service proxy designed for cloud-native environments. It is commonly used as the data plane in service meshes and modern API gateways because it provides consistent traffic management, observability, and security features across microservices without requiring every application to implement those capabilities directly.
Envoy operates at Layer 7 (application-aware) and supports protocols like HTTP/1.1, HTTP/2, gRPC, and more. It can handle routing, load balancing, retries, timeouts, circuit breaking, rate limiting, TLS termination, and mutual TLS (mTLS). Envoy also emits rich telemetry (metrics, access logs, tracing) that integrates well with cloud-native observability stacks.
Why the other options are incorrect:
CoreDNS (A) provides DNS-based service discovery within Kubernetes; it is not an edge/service proxy.
CNI (B) is a specification and plugin ecosystem for container networking (Pod networking), not a proxy.
gRPC (C) is an RPC protocol/framework used by applications; it’s not a proxy tool. (Envoy can proxy gRPC traffic, but gRPC itself isn’t the proxy.)
In Kubernetes architectures, Envoy often appears in two places: (1) at the edge as part of an ingress/gateway layer, and (2) sidecar proxies alongside Pods in a service mesh (like Istio) to standardize service-to-service communication controls and telemetry. This is why it is described as “designed to be integrated with cloud native applications”: it’s purpose-built for dynamic service discovery, resilient routing, and operational visibility in distributed systems.
So the verified correct choice is D (Envoy).
=========
Kubernetes ___ allows you to automatically manage the number of nodes in your cluster to meet demand.
Options:
Node Autoscaler
Cluster Autoscaler
Horizontal Pod Autoscaler
Vertical Pod Autoscaler
Answer:
B
Explanation:
Kubernetes supports multiple autoscaling mechanisms, but they operate at different layers. The question asks specifically about automatically managing the number of nodes in the cluster, which is the role of the Cluster Autoscaler—therefore B is correct.
Cluster Autoscaler monitors the scheduling state of the cluster. When Pods are pending because there are not enough resources (CPU/memory) available on existing nodes—meaning the scheduler cannot place them—Cluster Autoscaler can request that the underlying infrastructure (typically a cloud provider node group / autoscaling group) add nodes. Conversely, when nodes are underutilized and Pods can be rescheduled elsewhere, Cluster Autoscaler can drain those nodes (respecting disruption constraints like PodDisruptionBudgets) and then remove them to reduce cost. This aligns with cloud-native elasticity: scale infrastructure up and down automatically based on workload needs.
The other options are different: Horizontal Pod Autoscaler (HPA) changes the number of Pod replicas for a workload (like a Deployment) based on metrics (CPU utilization, memory, or custom metrics). It scales the application layer, not the node layer. Vertical Pod Autoscaler (VPA) changes resource requests/limits (CPU/memory) for Pods, effectively “scaling up/down” the size of individual Pods. It also does not directly change node count, though its adjustments can influence scheduling pressure. “Node Autoscaler” is not the canonical Kubernetes component name used in standard terminology; the widely referenced upstream component for node count is Cluster Autoscaler.
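For contrast with the node-level Cluster Autoscaler, a minimal HPA sketch that scales Pod replicas for a Deployment (names and thresholds are assumptions):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                      # the workload whose replica count is adjusted
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # target average CPU utilization across the Pods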
In real systems, these autoscalers often work together: HPA increases replicas when traffic rises; that may cause Pods to go Pending if nodes are full; Cluster Autoscaler then adds nodes; scheduling proceeds; later, traffic drops, HPA reduces replicas and Cluster Autoscaler removes nodes. This layered approach provides both performance and cost efficiency.
=========
Which group of container runtimes provides additional sandboxed isolation and elevated security?
Options:
rune, cgroups
docker, containerd
runsc, kata
crun, cri-o
Answer:
C
Explanation:
The runtimes most associated with sandboxed isolation are gVisor’s runsc and Kata Containers, making C correct. Standard container runtimes (like containerd with runc) rely primarily on Linux namespaces and cgroups for isolation. That isolation is strong for many use cases, but it shares the host kernel, which can be a concern for multi-tenant or high-risk workloads.
gVisor (runsc) provides a user-space kernel-like layer that intercepts and mediates system calls, reducing the container’s direct interaction with the host kernel. Kata Containers takes a different approach: it runs containers inside lightweight virtual machines, providing hardware-virtualization boundaries (or VM-like isolation) while still integrating into container workflows. Both are used to increase isolation compared to traditional containers, and both can be integrated with Kubernetes through compatible CRI/runtime configurations.
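In Kubernetes these runtimes are usually selected per Pod through a RuntimeClass; the handler name below assumes the nodes' container runtime has been configured with a runsc (gVisor) handler:

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc                 # must match a handler configured in containerd/CRI-O
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-app
spec:
  runtimeClassName: gvisor     # run this Pod's containers under the sandboxed runtime
  containers:
    - name: app
      image: nginx:1.25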
The other options are incorrect for the question’s intent. “rune, cgroups” is not a meaningful pairing here (cgroups is a Linux resource mechanism, not a runtime). “docker, containerd” are commonly used container platforms/runtimes but are not specifically the “sandboxed isolation” category (containerd typically uses runc for standard isolation). “crun, cri-o” represents a low-level OCI runtime (crun) and a CRI implementation (CRI-O), again not specifically a sandboxed-isolation grouping.
So, when the question asks for the group that provides additional sandboxing and elevated security, the correct, well-established answer is runsc + Kata.
=========
Which type of Service requires manual creation of Endpoints?
Options:
LoadBalancer
Services without selectors
NodePort
ClusterIP with selectors
Answer:
B
Explanation:
A Kubernetes Service without selectors requires you to manage its backend endpoints manually, so B is correct. Normally, a Service uses a selector to match a set of Pods (by labels). Kubernetes then automatically maintains the backend list (historically Endpoints, now commonly EndpointSlice) by tracking which Pods match the selector and are Ready. This automation is one of the key reasons Services provide stable connectivity to dynamic Pods.
When you create a Service without a selector, Kubernetes has no way to know which Pods (or external IPs) should receive traffic. In that pattern, you explicitly create an Endpoints object (or EndpointSlices, depending on your approach and controller support) that maps the Service name to one or more IP:port tuples. This is commonly used to represent external services (e.g., a database running outside the cluster) while still providing a stable Kubernetes Service DNS name for in-cluster clients. Another use case is advanced migration scenarios where endpoints are controlled by custom controllers rather than label selection.
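A minimal sketch of that pattern (the IP and port are assumptions); note the Endpoints object must have the same name as the Service:

apiVersion: v1
kind: Service
metadata:
  name: external-db        # no selector below, so no endpoints are auto-populated
spec:
  ports:
    - port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db        # must match the Service name
subsets:
  - addresses:
      - ip: 192.0.2.10     # assumed address of a database outside the cluster
    ports:
      - port: 5432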
Why the other options are wrong: Service types like ClusterIP, NodePort, and LoadBalancer describe how a Service is exposed, but they do not inherently require manual endpoint management. A ClusterIP Service with selectors (D) is the standard case where endpoints are automatically created and updated. NodePort and LoadBalancer Services also typically use selectors and therefore inherit automatic endpoint management; the difference is in how traffic enters the cluster, not how backends are discovered.
Operationally, when using Services without selectors, you must ensure endpoint IPs remain correct, health is accounted for (often via external tooling), and you update endpoints when backends change. The key concept is: no selector → Kubernetes can’t auto-populate endpoints → you must provide them.
=========
What is the goal of load balancing?
Options:
Automatically measure request performance across instances of an application.
Automatically distribute requests across different versions of an application.
Automatically distribute instances of an application across the cluster.
Automatically distribute requests across instances of an application.
Answer:
D
Explanation:
The core goal of load balancing is to distribute incoming requests across multiple instances of a service so that no single instance becomes overloaded and so that the overall service is more available and responsive. That matches option D, which is the correct answer.
In Kubernetes, load balancing commonly appears through the Service abstraction. A Service selects a set of Pods using labels and provides stable access via a virtual IP (ClusterIP) and DNS name. Traffic sent to the Service is then forwarded to one of the healthy backend Pods. This spreads load across replicas and provides resilience: if one Pod fails, it is removed from endpoints (or becomes NotReady) and traffic shifts to remaining replicas. The actual traffic distribution mechanism depends on the networking implementation (kube-proxy using iptables/IPVS or an eBPF dataplane), but the intent remains consistent: distribute requests across multiple backends.
Option A describes monitoring/observability, not load balancing. Option B describes progressive delivery patterns like canary or A/B routing; that can be implemented with advanced routing layers (Ingress controllers, service meshes), but it’s not the general definition of load balancing. Option C describes scheduling/placement of instances (Pods) across cluster nodes, which is the role of the scheduler and controllers, not load balancing.
In cloud environments, load balancing may also be implemented by external load balancers (cloud LBs) in front of the cluster, then forwarded to NodePorts or ingress endpoints, and again balanced internally to Pods. At each layer, the objective is the same: spread request traffic across multiple service instances to improve performance and availability.
=========
What is the minimum number of etcd members that are required for a highly available Kubernetes cluster?
Options:
Two etcd members.
Five etcd members.
Six etcd members.
Three etcd members.
Answer:
D
Explanation:
D (three etcd members) is correct. etcd is a distributed key-value store that uses the Raft consensus algorithm. High availability in consensus systems depends on maintaining a quorum (majority) of members to continue serving writes reliably. With 3 members, the cluster can tolerate 1 failure and still have 2/3 available—enough for quorum.
Two members is a common trap: with 2, a single failure leaves 1/2, which is not a majority, so the cluster cannot safely make progress. That means 2-member etcd is not HA; it is fragile and can be taken down by one node loss, network partition, or maintenance event. Five members can tolerate 2 failures and is a valid HA configuration, but it is not the minimum. Six is even-sized and generally discouraged for consensus because it doesn’t improve failure tolerance compared to five (quorum still requires 4), while increasing coordination overhead.
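The general rule behind these numbers: for a cluster of n members, quorum(n) = floor(n/2) + 1, and the number of tolerated failures is n - quorum(n). Hence quorum(3) = 2 (tolerates 1 failure), quorum(5) = 3 (tolerates 2), and quorum(6) = 4 (still tolerates only 2), which is why even member counts add overhead without adding fault tolerance.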
In Kubernetes, etcd reliability directly affects the API server and the entire control plane because etcd stores cluster state: object specs, status, controller state, and more. If etcd loses quorum, the API server will be unable to persist or reliably read/write state, leading to cluster management outages. That’s why the minimum HA baseline is three etcd members, often across distinct failure domains (nodes/AZs), with strong disk performance and consistent low-latency networking.
So, the smallest etcd topology that provides true fault tolerance is 3 members, which corresponds to option D.
=========
Which of the following statements is correct concerning Open Policy Agent (OPA)?
Options:
The policies must be written in Python language.
Kubernetes can use it to validate requests and apply policies.
Policies can only be tested when published.
It cannot be used outside Kubernetes.
Answer:
B
Explanation:
Open Policy Agent (OPA) is a general-purpose policy engine used to define and enforce policy across different systems. In Kubernetes, OPA is commonly integrated through admission control (often via Gatekeeper or custom admission webhooks) to validate and/or mutate requests before they are persisted in the cluster. This makes B correct: Kubernetes can use OPA to validate API requests and apply policy decisions.
Kubernetes’ admission chain is where policy enforcement naturally fits. When a user or controller submits a request (for example, to create a Pod), the API server can call external admission webhooks. Those webhooks can evaluate the request against policy—such as “no privileged containers,” “images must come from approved registries,” “labels must include cost-center,” or “Ingress must enforce TLS.” OPA’s policy language (Rego) allows expressing these rules in a declarative form, and the decision (“allow/deny” and sometimes patches) is returned to the API server. This enforces governance consistently and centrally.
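For example, assuming OPA Gatekeeper is installed along with a K8sRequiredLabels ConstraintTemplate (as in the Gatekeeper getting-started examples), a constraint enforcing the "labels must include cost-center" rule on Namespaces could look roughly like this sketch:

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: require-cost-center
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]     # apply the policy to Namespace objects
  parameters:
    labels: ["cost-center"]      # requests missing this label are rejected at admission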
Option A is incorrect because OPA policies are written in Rego, not Python. Option C is incorrect because policies can be tested locally and in CI pipelines before deployment; in fact, testability is a key advantage. Option D is incorrect because OPA is designed to be platform-agnostic—it can be used with APIs, microservices, CI/CD pipelines, service meshes, and infrastructure tools, not only Kubernetes.
From a Kubernetes fundamentals view, OPA complements RBAC: RBAC answers “who can do what to which resources,” while OPA-style admission policies answer “even if you can create this resource, does it meet our organizational rules?” Together they help implement defense in depth: authentication + authorization + policy admission + runtime security controls. That is why OPA is widely used to enforce security and compliance requirements in Kubernetes environments.
=========
Which API object is the recommended way to run a scalable, stateless application on your cluster?
Options:
ReplicaSet
Deployment
DaemonSet
Pod
Answer:
B
Explanation:
For a scalable, stateless application, Kubernetes recommends using a Deployment because it provides a higher-level, declarative management layer over Pods. A Deployment doesn’t just “run replicas”; it manages the entire lifecycle of rolling out new versions, scaling up/down, and recovering from failures by continuously reconciling the current cluster state to the desired state you define. Under the hood, a Deployment typically creates and manages a ReplicaSet, and that ReplicaSet ensures a specified number of Pod replicas are running at all times. This layering is the key: you get ReplicaSet’s self-healing replica maintenance plus Deployment’s rollout/rollback strategies and revision history.
Why not the other options? A Pod is the smallest deployable unit, but it’s not a scalable controller—if a Pod dies, nothing automatically replaces it unless a controller owns it. A ReplicaSet can maintain N replicas, but it does not provide the full rollout orchestration (rolling updates, pause/resume, rollbacks, and revision tracking) that you typically want for stateless apps that ship frequent releases. A DaemonSet is for node-scoped workloads (one Pod per node or subset of nodes), like log shippers or node agents, not for “scale by replicas.”
For stateless applications, the Deployment model is especially appropriate because individual replicas are interchangeable; the application does not require stable network identities or persistent storage per replica. Kubernetes can freely replace or reschedule Pods to maintain availability. Deployment strategies (like RollingUpdate) allow you to upgrade without downtime by gradually replacing old replicas with new ones while keeping the Service endpoints healthy. That combination—declarative desired state, self-healing, and controlled updates—makes Deployment the recommended object for scalable stateless workloads.
=========
Which of the following is the correct command to run an nginx deployment with 2 replicas?
Options:
kubectl run deploy nginx --image=nginx --replicas=2
kubectl create deploy nginx --image=nginx --replicas=2
kubectl create nginx deployment --image=nginx --replicas=2
kubectl create deploy nginx --image=nginx --count=2
Answer:
B
Explanation:
The correct answer is B: kubectl create deploy nginx --image=nginx --replicas=2. This uses kubectl create deployment (shorthand create deploy) to generate a Deployment resource named nginx with the specified container image. The --replicas=2 flag sets the desired replica count, so Kubernetes will create two Pod replicas (via a ReplicaSet) and keep that number stable.
Option A is incorrect because kubectl run is primarily intended to run a Pod (and in older versions could generate other resources, but it’s not the recommended/consistent way to create a Deployment in modern kubectl usage). Option C is invalid syntax: kubectl subcommand order is incorrect; you don’t say kubectl create nginx deployment. Option D uses a non-existent --count flag for Deployment replicas.
From a Kubernetes fundamentals perspective, this question tests two ideas: (1) Deployments are the standard controller for running stateless workloads with a desired number of replicas, and (2) kubectl create deployment is a common imperative shortcut for generating that resource. After running the command, you can confirm with kubectl get deploy nginx, kubectl get rs, and kubectl get pods -l app=nginx (label may vary depending on kubectl version). You’ll see a ReplicaSet created and two Pods brought up.
In production, teams typically use declarative manifests (kubectl apply -f) or GitOps, but knowing the imperative command is useful for quick labs and validation. The key is that replicas are managed by the controller, not by manually starting containers—Kubernetes reconciles the state continuously.
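For reference, the declarative equivalent of the imperative command is roughly the following manifest, applied with kubectl apply -f:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 2                 # same desired count as --replicas=2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx        # same image as --image=nginx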
Therefore, B is the verified correct command.
=========
What are the characteristics for building every cloud-native application?
Options:
Resiliency, Operability, Observability, Availability
Resiliency, Containerd, Observability, Agility
Kubernetes, Operability, Observability, Availability
Resiliency, Agility, Operability, Observability
Answer:
D
Explanation:
Cloud-native applications are typically designed to thrive in dynamic, distributed environments where infrastructure is elastic and failures are expected. The best set of characteristics listed is Resiliency, Agility, Operability, Observability, making D correct.
Resiliency means the application and its supporting platform can tolerate failures and continue providing service. In Kubernetes terms, resiliency is supported through self-healing controllers, replica management, health probes, and safe rollout mechanisms, but the application must also be designed to handle transient failures, retries, and graceful degradation.
Agility reflects the ability to deliver changes quickly and safely. Cloud-native systems emphasize automation, CI/CD, declarative configuration, and small, frequent releases—often enabled by Kubernetes primitives like Deployments and rollout strategies. Agility is about reducing the friction to ship improvements while maintaining reliability.
Operability is how manageable the system is in production: clear configuration, predictable deployments, safe scaling, and automation-friendly operations. Kubernetes encourages operability through consistent APIs, controllers, and standardized patterns for configuration and lifecycle.
Observability means you can understand what’s happening inside the system using telemetry—metrics, logs, and traces—so you can troubleshoot issues, measure SLOs, and improve performance. Kubernetes provides many integration points for observability, but cloud-native apps must also emit meaningful signals.
Options B and C include items that are not “characteristics” (containerd is a runtime; Kubernetes is a platform). Option A includes “availability,” which is important, but the canonical cloud-native framing in this question emphasizes the four qualities in D as the foundational build characteristics.
=========
What fields must exist in any Kubernetes object (e.g. YAML) file?
Options:
apiVersion, kind, metadata
kind, namespace, data
apiVersion, metadata, namespace
kind, metadata, data
Answer:
A
Explanation:
Any Kubernetes object manifest must include apiVersion, kind, and metadata, which makes A correct. This comes directly from how Kubernetes resources are represented and processed by the API server.
apiVersion tells Kubernetes which API group and version should be used to interpret the object (for example v1, apps/v1, batch/v1). This matters because schemas and available fields can change between versions.
kind specifies the type of object you are creating (for example Pod, Service, Deployment, ConfigMap). Kubernetes uses this to route the request to the correct API endpoint and schema.
metadata contains identifying and organizational information such as name, namespace (when namespaced), labels, and annotations. At minimum, most objects require a name; labels and annotations are optional but extremely common for selection and tooling.
A common point of confusion is spec. Many Kubernetes objects include spec because they define desired state (like a Deployment’s replica count, Pod template, update strategy). However, the question asks what fields must exist in any Kubernetes object file. Not all objects require a spec in the same way (and some objects include other top-level sections like data for ConfigMaps/Secrets or rules for RBAC objects). The truly universal top-level requirements are the trio in option A.
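A minimal example (a ConfigMap is used purely for illustration) showing the universally required top-level fields plus a kind-specific section:

apiVersion: v1               # API group/version that defines this kind
kind: ConfigMap              # the object type
metadata:
  name: app-settings         # identifying information (name, optionally labels, namespace)
data:                        # kind-specific section; a Deployment would use spec instead
  LOG_LEVEL: info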
Options B, C, and D include fields that are not universally required (namespace is not required for cluster-scoped objects, and data only applies to certain kinds like ConfigMaps/Secrets). Therefore, apiVersion + kind + metadata is the correct, general rule and matches Kubernetes object structure.
=========
What does the "nodeSelector" within a PodSpec use to place Pods on the target nodes?
Options:
Annotations
IP Addresses
Hostnames
Labels
Answer:
D
Explanation:
nodeSelector is a simple scheduling constraint that matches node labels, so the correct answer is D (Labels). In Kubernetes, nodes have key/value labels (for example, disktype=ssd, topology.kubernetes.io/zone=us-east-1a, kubernetes.io/os=linux). When you set spec.nodeSelector in a Pod template, you provide a map of required label key/value pairs. The kube-scheduler will then only consider nodes that have all those labels with matching values as eligible placement targets for that Pod.
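A quick sketch (the node name and label are assumptions): label a node, then require that label in the Pod spec:

# kubectl label node worker-1 disktype=ssd   (adds the label; the node name is assumed)
apiVersion: v1
kind: Pod
metadata:
  name: fast-storage-app
spec:
  nodeSelector:
    disktype: ssd            # only nodes carrying exactly this label are eligible
  containers:
    - name: app
      image: nginx:1.25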
This is different from annotations: annotations are also key/value metadata, but they are not intended for selection logic and are not used by the scheduler for nodeSelector. IP addresses and hostnames are not the mechanism used by nodeSelector either. While Kubernetes nodes do have hostnames and IPs, nodeSelector specifically operates on labels because labels are designed for selection, grouping, and placement constraints.
Operationally, nodeSelector is the most basic form of node placement control. It is commonly used to pin workloads to specialized hardware (GPU nodes), compliance zones, or certain OS/architecture pools. However, it has limitations: it only supports exact match on labels and cannot express more complex rules (like “in this set of zones” or “prefer but don’t require”). For that, Kubernetes offers node affinity (requiredDuringSchedulingIgnoredDuringExecution, preferredDuringSchedulingIgnoredDuringExecution) which supports richer expressions.
Still, the underlying mechanism is the same concept: the scheduler evaluates your Pod’s placement requirements against node metadata, and for nodeSelector, that metadata is labels. Therefore, the verified correct answer is D.
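A short sketch (node name, label, and image are hypothetical) of labeling a node and pinning a Pod to it with nodeSelector:

  # kubectl label node worker-1 disktype=ssd
  apiVersion: v1
  kind: Pod
  metadata:
    name: fast-storage-app
  spec:
    nodeSelector:
      disktype: ssd           # only nodes carrying this exact label are eligible
    containers:
    - name: app
      image: nginx:1.25       # hypothetical image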
=========
How can you extend the Kubernetes API?
Options:
Adding a CustomResourceDefinition or implementing an aggregation layer.
Adding a new version of a resource, for instance v4beta3.
With the command kubectl extend api, logged in as an administrator.
Adding the desired API object as a kubelet parameter.
Answer:
AExplanation:
A is correct: Kubernetes’ API can be extended by adding CustomResourceDefinitions (CRDs) and/or by implementing the API Aggregation Layer. These are the two canonical extension mechanisms.
CRDs let you define new resource types (new kinds) that the Kubernetes API server stores in etcd and serves like native objects. You typically pair a CRD with a controller/operator that watches those custom objects and reconciles real resources accordingly. This pattern is foundational to the Kubernetes ecosystem (many popular add-ons install CRDs).
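A minimal CRD sketch (the group, kind, and schema fields are hypothetical) illustrates the pattern; once applied, the API server serves the new resource type much like a built-in kind:

  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: widgets.example.com           # must be <plural>.<group>
  spec:
    group: example.com
    names:
      kind: Widget
      plural: widgets
      singular: widget
    scope: Namespaced
    versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                size:
                  type: integer         # hypothetical field a controller would reconcile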
The aggregation layer allows you to add entire API services (aggregated API servers) that serve additional endpoints under the Kubernetes API. This is used when you want custom API behavior, custom storage, or specialized semantics beyond what CRDs provide (or when implementing APIs like metrics APIs historically).
Why the other answers are wrong:
B is not how API extension works. You don’t “extend the API” by inventing new versions like v4beta3; versions are defined and implemented by API servers/controllers, not by users arbitrarily.
C is fictional; there is no standard kubectl extend api command.
D is also incorrect; kubelet parameters configure node agent behavior, not API server types and discovery.
So, the verified ways to extend Kubernetes’ API surface are CRDs and API aggregation, which is option A.
=========
Which control plane component is responsible for updating the node Ready condition if a node becomes unreachable?
Options:
The kube-proxy
The node controller
The kubectl
The kube-apiserver
Answer:
BExplanation:
The correct answer is B: the node controller. In Kubernetes, node health is monitored and reflected through Node conditions such as Ready. The Node Controller (a controller that runs as part of the control plane, within the controller-manager) is responsible for monitoring node heartbeats and updating node status when a node becomes unreachable or unhealthy.
Nodes periodically report status (including kubelet heartbeats) to the API server. The Node Controller watches these updates. If it detects that a node has stopped reporting within expected time windows, it marks the node condition Ready as Unknown (or otherwise updates conditions) to indicate the control plane can’t confirm node health. This status change then influences higher-level behaviors such as Pod eviction and rescheduling: after grace periods and eviction timeouts, Pods on an unhealthy node may be evicted so the workload can be recreated on healthy nodes (assuming a controller manages replicas).
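For reference, the condition in question lives in the Node’s status; when heartbeats stop, the node controller flips it to Unknown, roughly like the following sketch (timestamps are illustrative and exact wording can vary by version):

  status:
    conditions:
    - type: Ready
      status: "Unknown"
      reason: NodeStatusUnknown
      message: Kubelet stopped posting node status.
      lastHeartbeatTime: "2024-01-01T10:00:00Z"   # illustrative timestamp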
Option A (kube-proxy) is a node component for Service traffic routing and does not manage node health conditions. Option C (kubectl) is a CLI client; it does not participate in control plane health monitoring. Option D (kube-apiserver) stores and serves Node status, but it doesn’t decide when a node is unreachable; it persists what controllers and kubelets report. The “decision logic” for updating the Ready condition in response to missing heartbeats is the Node Controller’s job.
So, the component that updates the Node Ready condition when a node becomes unreachable is the node controller, which is option B.
=========
Which of the following systems is NOT compatible with the CRI runtime interface standard?
Options:
CRI-O
dockershim
systemd
containerd
Answer:
CExplanation:
Kubernetes uses the Container Runtime Interface (CRI) to support pluggable container runtimes. The kubelet talks to a CRI-compatible runtime via gRPC, and that runtime is responsible for pulling images and running containers. In this context, containerd and CRI-O are CRI-compatible container runtimes (or runtime stacks) used widely with Kubernetes, and dockershim historically served as a compatibility layer that allowed kubelet to talk to Docker Engine as if it were CRI (before dockershim was removed from kubelet in newer Kubernetes versions). That leaves systemd as the correct “NOT compatible with CRI” answer, so C is correct.
systemd is an init system and service manager for Linux. While it can be involved in how services (like kubelet) are started and managed on the host, it is not a container runtime implementing CRI. It does not provide CRI gRPC endpoints for kubelet, nor does it manage containers in the CRI sense.
The deeper Kubernetes concept here is separation of responsibilities: kubelet is responsible for Pod lifecycle at the node level, but it delegates “run containers” to a runtime via CRI. Runtimes like containerd and CRI-O implement that contract; Kubernetes can swap them without changing kubelet logic. Historically, dockershim translated kubelet’s CRI calls into Docker Engine calls. Even though dockershim is no longer part of kubelet, it was still “CRI-adjacent” in purpose and often treated as compatible in older curricula.
Therefore, among the provided options, systemd is the only one that is clearly not a CRI-compatible runtime system, making C correct.
=========
Which of these is a valid container restart policy?
Options:
On login
On update
On start
On failure
Answer:
DExplanation:
The correct answer is D: On failure. In Kubernetes, restart behavior is controlled by the Pod-level field spec.restartPolicy, with valid values Always, OnFailure, and Never. The option presented here (“On failure”) maps to Kubernetes’ OnFailure policy. This setting determines what the kubelet should do when containers exit:
Always: restart containers whenever they exit (typical for long-running services)
OnFailure: restart containers only if they exit with a non-zero status (common for batch workloads)
Never: do not restart containers (fail and leave it terminated)
So “On failure” is a valid restart policy concept and the only one in the list that matches Kubernetes semantics.
The other options are not Kubernetes restart policies. “On login,” “On update,” and “On start” are not recognized values and don’t align with how Kubernetes models container lifecycle. Kubernetes is declarative and event-driven: it reacts to container exit codes and controller intent, not user “logins.”
Operationally, choosing the right restart policy is important. For example, Jobs typically use restartPolicy: OnFailure or Never because the goal is completion, not continuous uptime. Deployments usually imply “Always” because the workload should keep serving traffic, and a crashed container should be restarted. Also note that controllers interact with restarts: a Deployment may recreate Pods if they fail readiness, while a Job counts completions and failures based on Pod termination behavior.
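A minimal Job sketch (names and command are hypothetical) shows where restartPolicy sits in the Pod template and why OnFailure suits batch work:

  apiVersion: batch/v1
  kind: Job
  metadata:
    name: report-generator
  spec:
    template:
      spec:
        restartPolicy: OnFailure     # retry the container only on non-zero exit codes
        containers:
        - name: report
          image: busybox:1.36
          command: ["sh", "-c", "echo generating report && exit 0"]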
Therefore, among the options, the only valid (Kubernetes-aligned) restart policy is D.
=========
What feature must a CNI support to control specific traffic flows for workloads running in Kubernetes?
Options:
Border Gateway Protocol
IP Address Management
Pod Security Policy
Network Policies
Answer:
DExplanation:
To control which workloads can communicate with which other workloads in Kubernetes, you use NetworkPolicy resources—but enforcement depends on the cluster’s networking implementation. Therefore, for traffic-flow control, the CNI/plugin must support Network Policies, making D correct.
Kubernetes defines the NetworkPolicy API as a declarative way to specify allowed ingress and egress traffic based on selectors (Pod labels, namespaces, IP blocks) and ports/protocols. However, Kubernetes itself does not enforce NetworkPolicy rules; enforcement is provided by the network plugin (or associated dataplane components). If your CNI does not implement NetworkPolicy, the objects may exist in the API but have no effect—Pods will communicate freely by default.
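As a sketch (labels and namespace are hypothetical), a NetworkPolicy that only allows Pods labeled app=frontend to reach Pods labeled app=backend on port 8080 could look like this; it only takes effect if the CNI enforces NetworkPolicy:

  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: allow-frontend-to-backend
    namespace: prod
  spec:
    podSelector:
      matchLabels:
        app: backend           # the policy applies to these Pods
    policyTypes:
    - Ingress
    ingress:
    - from:
      - podSelector:
          matchLabels:
            app: frontend      # only traffic from frontend Pods is allowed
      ports:
      - protocol: TCP
        port: 8080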
Option B (IP Address Management) is often part of CNI responsibilities, but IPAM is about assigning addresses, not enforcing L3/L4 security policy. Option A (BGP) is used by some CNIs to advertise routes (for example, in certain Calico deployments), but BGP is not the general requirement for policy enforcement. Option C (Pod Security Policy) is a deprecated/removed Kubernetes admission feature related to Pod security settings, not network flow control.
From a Kubernetes security standpoint, NetworkPolicies are a key tool for implementing least privilege at the network layer—limiting lateral movement, reducing blast radius, and segmenting environments. But they only work when the chosen CNI supports them. Thus, the correct answer isD: Network Policies.
=========
Which command provides information about the field replicas within the spec resource of a deployment object?
Options:
kubectl get deployment.spec.replicas
kubectl explain deployment.spec.replicas
kubectl describe deployment.spec.replicas
kubectl explain deployment --spec.replicas
Answer:
BExplanation:
The correct command to get field-level schema information about spec.replicas in a Deployment is kubectl explain deployment.spec.replicas, so B is correct. kubectl explain is designed to retrieve documentation for resource fields directly from Kubernetes API discovery and OpenAPI schemas. When you use kubectl explain deployment.spec.replicas, kubectl shows what the field means, its type, and any relevant notes—exactly what “provides information about the field” implies.
This differs from kubectl get and kubectl describe. kubectl get is for retrieving actual objects or listing resources; it does not accept dot-paths like deployment.spec.replicas as a normal resource argument. You can use JSONPath or custom-columns with kubectl get deployment to read the live value of a field, but that returns object data, not field documentation. Similarly, kubectl describe prints a human-readable summary of an existing object rather than schema information.
Option D is not valid syntax: kubectl explain deployment --spec.replicas is not how kubectl explain accepts nested field references. The correct pattern is positional dot notation, kubectl explain <resource>.<field>.<subfield>, as in kubectl explain deployment.spec.replicas.
Understanding spec.replicas matters operationally: it defines the desired number of Pod replicas for a Deployment. The Deployment controller ensures that the corresponding ReplicaSet maintains that count, supporting self-healing if Pods fail. While autoscalers can adjust replicas automatically, the field remains the primary declarative knob. The question is specifically about finding information (schema docs) for that field, which is why kubectl explain deployment.spec.replicas is the verified correct answer.
=========
Which storage operator in Kubernetes can help the system to self-scale, self-heal, etc?
Options:
Rook
Kubernetes
Helm
Container Storage Interface (CSI)
Answer:
AExplanation:
Rook is a Kubernetes storage operator that helps manage and automate storage systems in a Kubernetes-native way, so A is correct. The key phrase in the question is “storage operator … self-scale, self-heal.” Operators extend Kubernetes by using controllers to reconcile a desired state. Rook applies that model to storage, commonly by managing storage backends like Ceph (and other systems depending on configuration).
With an operator approach, you declare how you want storage to look (cluster size, pools, replication, placement, failure domains), and the operator works continuously to maintain that state. That includes operational behaviors that feel “self-healing” such as reacting to failed storage Pods, rebalancing, or restoring desired replication counts (the exact behavior depends on the backend and configuration). The important KCNA-level idea is that Rook uses Kubernetes controllers to automate day-2 operations for storage in a way consistent with Kubernetes’ reconciliation loops.
The other options do not match the question: “Kubernetes” is the orchestrator itself, not a storage operator. “Helm” is a package manager for Kubernetes apps—it can install storage software, but it is not an operator that continuously reconciles and self-manages. “CSI” (Container Storage Interface) is an interface specification that enables pluggable storage drivers; CSI drivers provision and attach volumes, but CSI itself is not a “storage operator” with the broader self-managing operator semantics described here.
So, for “storage operator that can help with self-* behaviors,” Rook is the correct choice.
=========
What is Helm?
Options:
An open source dashboard for Kubernetes.
A package manager for Kubernetes applications.
A custom scheduler for Kubernetes.
An end-to-end testing project for Kubernetes applications.
Answer:
BExplanation:
Helm is best described as a package manager for Kubernetes applications, making B correct. Helm packages Kubernetes resource manifests (Deployments, Services, ConfigMaps, Ingress, RBAC, etc.) into a unit called a chart. A chart includes templates and default values, allowing teams to parameterize deployments for different environments (dev/stage/prod) without rewriting YAML.
From an application delivery perspective, Helm solves common problems: repeatable installation, upgrade management, versioning, and sharing of standardized application definitions. Instead of copying and editing raw YAML, users install a chart and supply a values.yaml file (or CLI overrides) to configure image tags, replica counts, ingress hosts, resource requests, and other settings. Helm then renders templates into concrete Kubernetes manifests and applies them to the cluster.
Helm also manages releases: it tracks what has been installed and supports upgrades and rollbacks. This aligns with cloud native delivery practices where deployments are automated, reproducible, and auditable. Helm is commonly integrated into CI/CD pipelines and GitOps workflows (sometimes with charts stored in Git or Helm repositories).
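A hedged sketch of that workflow (chart path, release name, and values are hypothetical): you override chart defaults in a values file and let Helm render and apply the manifests, tracking the result as a release:

  # values-prod.yaml (hypothetical overrides)
  replicaCount: 3
  image:
    tag: "1.4.2"
  # Install or upgrade the release from a local chart directory:
  #   helm upgrade --install myapp ./myapp-chart -f values-prod.yaml
  # Roll back to the previous revision if an upgrade misbehaves:
  #   helm rollback myapp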
The other options are incorrect: a dashboard is a UI like Kubernetes Dashboard; a scheduler is kube-scheduler (or custom scheduler implementations, but Helm is not that); end-to-end testing projects exist in the ecosystem, but Helm’s role is packaging and lifecycle management of Kubernetes app definitions.
So the verified, standard definition is: Helm = Kubernetes package manager.
=========
Which of the following is a lightweight tool that manages traffic flows between services, enforces access policies, and aggregates telemetry data, all without requiring changes to application code?
Options:
NetworkPolicy
Linkerd
kube-proxy
Nginx
Answer:
BExplanation:
Linkerd is a lightweight service mesh that manages service-to-service traffic, security policies, and telemetry without requiring application code changes—so B is correct. A service mesh introduces a dedicated layer for east-west traffic (internal service calls) and typically provides features like mutual TLS (mTLS), retries/timeouts, traffic shaping, and consistent metrics/tracing signals. Linkerd is known for being simpler and resource-efficient relative to some alternatives, which aligns with the “lightweight tool” phrasing.
Why this matches the description: in a service mesh, workload traffic is intercepted by a proxy layer (often as a sidecar or node-level/ambient proxy) and managed centrally by mesh control components. This allows security and traffic policy to be applied uniformly without modifying each microservice. Telemetry is also generated consistently because the proxies observe traffic directly and emit metrics and traces about request rates, latency, and errors.
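In Linkerd, the usual way to get that proxy added without touching application code is an annotation; as a sketch (the namespace name is hypothetical), annotating a namespace so new Pods get the sidecar injected automatically:

  apiVersion: v1
  kind: Namespace
  metadata:
    name: shop
    annotations:
      linkerd.io/inject: enabled    # Linkerd's injector adds the proxy sidecar to Pods created here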
The other choices don’t fit. NetworkPolicy is a Kubernetes resource that controls allowed network flows (L3/L4) but does not provide L7 traffic management, retries, identity-based mTLS, or automatic telemetry aggregation. kube-proxy implements Service networking rules (ClusterIP/NodePort forwarding) but does not enforce access policies at the service identity level and is not a telemetry system. Nginx can be used as an ingress controller or reverse proxy, but it is not inherently a full service mesh spanning all service-to-service communication and policy/telemetry across the mesh by default.
In cloud native architecture, service meshes help address cross-cutting concerns—security, observability, and traffic management—without embedding that logic into every application. The question’s combination of “traffic flows,” “access policies,” and “aggregates telemetry” maps directly to a mesh, and the lightweight mesh option provided is Linkerd.
=========
CI/CD stands for:
Options:
Continuous Information / Continuous Development
Continuous Integration / Continuous Development
Cloud Integration / Cloud Development
Continuous Integration / Continuous Deployment
Answer:
DExplanation:
CI/CD is a foundational practice for delivering software rapidly and reliably, and it maps strongly to cloud native delivery workflows commonly used with Kubernetes. CI stands for Continuous Integration: developers merge code changes frequently into a shared repository, and automated systems build and test those changes to detect issues early. CD is commonly used to mean Continuous Delivery or Continuous Deployment depending on how far automation goes. In many certification contexts and simplified definitions like this question, CD is interpreted as Continuous Deployment, meaning every change that passes the automated pipeline is automatically released to production. That matches option D.
In a Kubernetes context, CI typically produces artifacts such as container images (built from Dockerfiles or similar build definitions), runs unit/integration tests, scans dependencies, and pushes images to a registry. CD then promotes those images into environments by updating Kubernetes manifests (Deployments, Helm charts, Kustomize overlays, etc.). Progressive delivery patterns (rolling updates, canary, blue/green) often use Kubernetes-native controllers and Service routing to reduce risk.
Why the other options are incorrect: “Continuous Development” isn’t the standard “D” term; it’s ambiguous and not the established acronym expansion. “Cloud Integration/Cloud Development” is unrelated. Continuous Delivery (in the stricter sense) means changes are always in a deployable state and releases may still require a manual approval step, while Continuous Deployment removes that final manual gate. But because the option set explicitly includes “Continuous Deployment,” and that is one of the accepted canonical expansions for CD,Dis the correct selection here.
Practically, CI/CD complements Kubernetes’ declarative model: pipelines update desired state (Git or manifests), and Kubernetes reconciles it. This combination enables frequent releases, repeatability, reduced human error, and faster recovery through automated rollbacks and controlled rollout strategies.
=========
What is the main role of the Kubernetes DNS within a cluster?
Options:
Acts as a DNS server for virtual machines that are running outside the cluster.
Provides a DNS as a Service, allowing users to create zones and registries for domains that they own.
Allows Pods running in dual stack to convert IPv6 calls into IPv4 calls.
Provides consistent DNS names for Pods and Services for workloads that need to communicate with each other.
Answer:
DExplanation:
Kubernetes DNS (commonly implemented by CoreDNS) provides service discovery inside the cluster by assigning stable, consistent DNS names to Services and (optionally) Pods, which makes D correct. In a Kubernetes environment, Pods are ephemeral—IP addresses can change when Pods restart or move between nodes. DNS-based discovery allows applications to communicate using stable names rather than hardcoded IPs.
For Services, Kubernetes creates DNS records like service-name.namespace.svc.cluster.local, which resolve to the Service’s virtual IP (ClusterIP) or, for headless Services, to the set of Pod endpoints. This supports both load-balanced communication (standard Service) and per-Pod addressing (headless Service, commonly used with StatefulSets). Kubernetes DNS is therefore a core building block that enables microservices to locate each other reliably.
Option A is not Kubernetes DNS’s purpose; it serves cluster workloads rather than external VMs. Option B describes a managed DNS hosting product (creating zones/registries), which is outside the scope of cluster DNS. Option C describes protocol translation, which is not the role of DNS. Dual-stack support relates to IP families and networking configuration, not DNS translating IPv6 to IPv4.
In day-to-day Kubernetes operations, DNS reliability impacts everything: if DNS is unhealthy, Pods may fail to resolve Services, causing cascading outages. That’s why CoreDNS is typically deployed as a highly available add-on in kube-system, and why DNS caching and scaling are important for large clusters.
So the correct statement is D: Kubernetes DNS provides consistent DNS names so workloads can communicate reliably.
=========
Which command will list the resource types that exist within a cluster?
Options:
kubectl api-resources
kubectl get namespaces
kubectl api-versions
curl https://kubectrl/namespaces
Answer:
AExplanation:
To list the resource types available in a Kubernetes cluster, you use kubectl api-resources, so A is correct. This command queries the API server’s discovery endpoints and prints a table of resources (kinds) that the cluster knows about, including their names, shortnames, API group/version, whether they are namespaced, and supported verbs. It’s extremely useful for learning what objects exist in a cluster—especially when CRDs are installed, because those custom resource types will also appear in the output.
Option C (kubectl api-versions) lists available API versions (group/version strings like v1, apps/v1, batch/v1) but does not directly list the resource kinds/types. It’s related discovery information but answers a different question. Option B (kubectl get namespaces) lists namespaces, not resource types. Option D is invalid (typo in URL and conceptually not the Kubernetes discovery mechanism).
Practically, kubectl api-resources is used during troubleshooting and exploration: you might use it to confirm whether a CRD is installed (e.g., certificates.cert-manager.io kinds), to check whether a resource is namespaced, or to find the correct kind name for kubectl get. It also helps understand what your cluster supports at the API layer (including aggregated APIs).
So, the verified correct command to list resource types that exist in the cluster is A: kubectl api-resources.
=========
What is the common standard for Service Meshes?
Options:
Service Mesh Specification (SMS)
Service Mesh Technology (SMT)
Service Mesh Interface (SMI)
Service Mesh Function (SMF)
Answer:
CExplanation:
A widely referenced interoperability standard in the service mesh ecosystem is the Service Mesh Interface (SMI), so C is correct. SMI was created to provide a common set of APIs for basic service mesh capabilities—helping users avoid being locked into a single mesh implementation for core features. While service meshes differ in architecture and implementation (e.g., Istio, Linkerd, Consul), SMI aims to standardize how common behaviors are expressed.
In cloud native architecture, service meshes address cross-cutting concerns for service-to-service communication: traffic policies, observability, and security (mTLS, identity). Rather than baking these concerns into every application, a mesh typically introduces data-plane proxies and a control plane to manage policy and configuration. SMI sits above those implementations as a common API model.
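As one concrete illustration of the SMI APIs, a TrafficSplit sketch (names are hypothetical, and the exact API version and weight format vary across SMI releases) expresses weighted routing between two backend Services in a mesh-agnostic way:

  apiVersion: split.smi-spec.io/v1alpha2
  kind: TrafficSplit
  metadata:
    name: checkout-split
  spec:
    service: checkout          # root Service that clients call
    backends:
    - service: checkout-v1
      weight: 90
    - service: checkout-v2
      weight: 10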
The other options are not commonly used industry standards. You may see other efforts and emerging APIs, but among the listed choices, SMI is the recognized standard name that appears in cloud native discussions and tooling integrations.
Also note a practical nuance: even with SMI, not every mesh implements every SMI spec fully, and many users still adopt mesh-specific CRDs and APIs for advanced features. But for this question’s framing—“common standard”—Service Mesh Interface is the correct answer.
=========
What helps an organization to deliver software more securely at a higher velocity?
Options:
Kubernetes
apt-get
Docker Images
CI/CD Pipeline
Answer:
DExplanation:
A CI/CD pipeline is a core practice/tooling approach that enables organizations to deliver software faster and more securely, so D is correct. CI (Continuous Integration) automates building and testing code changes frequently, reducing integration risk and catching defects early. CD (Continuous Delivery/Deployment) automates releasing validated builds into environments using consistent, repeatable steps—reducing manual errors and enabling rapid iteration.
Security improves because automation enables standardized checks on every change: static analysis, dependency scanning, container image scanning, policy validation, and signing/verification steps can be integrated into the pipeline. Instead of relying on ad-hoc human processes, security controls become repeatable gates. In Kubernetes environments, pipelines commonly build container images, run tests, publish artifacts to registries, and then deploy via manifests, Helm, or GitOps controllers—keeping deployments consistent and auditable.
Option A (Kubernetes) is a platform that helps run and manage workloads, but by itself it doesn’t guarantee secure high-velocity delivery. It provides primitives (rollouts, declarative config, RBAC), yet the delivery workflow still needs automation. Option B (apt-get) is a package manager for Debian-based systems and is not a delivery pipeline. Option C (Docker Images) are artifacts; they improve portability and repeatability, but they don’t provide the end-to-end automation of building, testing, promoting, and deploying across environments.
In cloud-native application delivery, the pipeline is the “engine” that turns code changes into safe production releases. Combined with Kubernetes’ declarative deployment model (Deployments, rolling updates, health probes), a CI/CD pipeline supports frequent releases with controlled rollouts, fast rollback, and strong auditability. That is exactly what the question is targeting. Therefore, the verified answer isD.
=========
How does Horizontal Pod autoscaling work in Kubernetes?
Options:
The Horizontal Pod Autoscaler controller adds more CPU or memory to the pods when the load is above the configured threshold, and reduces CPU or memory when the load is below.
The Horizontal Pod Autoscaler controller adds more pods when the load is above the configured threshold, but does not reduce the number of pods when the load is below.
The Horizontal Pod Autoscaler controller adds more pods to the specified DaemonSet when the load is above the configured threshold, and reduces the number of pods when the load is below.
The Horizontal Pod Autoscaler controller adds more pods when the load is above the configured threshold, and reduces the number of pods when the load is below.
Answer:
DExplanation:
Horizontal Pod Autoscaling (HPA) adjusts the number of Pod replicas for a workload controller (most commonly a Deployment) based on observed metrics, increasing replicas when load is high and decreasing when load drops. That matches D, so D is correct.
HPA does not add CPU or memory to existing Pods—that would be vertical scaling (VPA). Instead, HPA changes spec.replicas on the target resource, and the controller then creates or removes Pods accordingly. HPA commonly scales based on CPU utilization and memory (resource metrics), and it can also scale using custom or external metrics if those are exposed via the appropriate Kubernetes metrics APIs.
Option A is vertical scaling behavior, not HPA. Option B is incorrect because HPA can scale down as well as up (subject to stabilization windows and configuration), so it’s not “scale up only.” Option C is incorrect because HPA does not scale DaemonSets in the usual model; DaemonSets are designed to run one Pod per node (or per selected nodes) rather than a replica count. HPA targets resources like Deployments, ReplicaSets (via Deployment), and StatefulSets in typical usage, where replica count is a meaningful knob.
Operationally, HPA works as a control loop: it periodically reads metrics (for example, via metrics-server for CPU/memory, or via adapters for custom metrics), compares the current value to the desired target, and calculates a desired replica count within min/max bounds. To avoid flapping, HPA includes stabilization behavior and cooldown logic so it doesn’t scale too aggressively in response to short spikes or dips. You can configure minimum and maximum replicas and behavior policies to tune responsiveness.
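A minimal autoscaling/v2 sketch (the target name and numbers are hypothetical) scaling a Deployment between 2 and 10 replicas on average CPU utilization:

  apiVersion: autoscaling/v2
  kind: HorizontalPodAutoscaler
  metadata:
    name: web-hpa
  spec:
    scaleTargetRef:
      apiVersion: apps/v1
      kind: Deployment
      name: web
    minReplicas: 2
    maxReplicas: 10
    metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU exceeds ~70% of requests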
In cloud-native systems, HPA is a key elasticity mechanism: it enables services to handle variable traffic while controlling cost by scaling down during low demand. Therefore, the verified correct answer isD.
=========
What function does kube-proxy provide to a cluster?
Options:
Implementing the Ingress resource type for application traffic.
Forwarding data to the correct endpoints for Services.
Managing data egress from the cluster nodes to the network.
Managing access to the Kubernetes API.
Answer:
BExplanation:
kube-proxy is a node-level networking component that helps implement the Kubernetes Service abstraction. Services provide a stable virtual IP and DNS name that route traffic to a set of Pods (endpoints). kube-proxy watches the API for Service and EndpointSlice/Endpoints changes and then programs the node’s networking rules so that traffic sent to a Service is forwarded (load-balanced) to one of the correct backend Pod IPs. This is why B is correct.
Conceptually, kube-proxy turns the declarative Service configuration into concrete dataplane behavior. Depending on the mode, it may use iptables rules, IPVS, or integrate with eBPF-capable networking stacks (sometimes kube-proxy is replaced or bypassed by CNI implementations, but the classic kube-proxy role remains the canonical answer). In iptables mode, kube-proxy creates NAT rules that rewrite traffic from the Service virtual IP to one of the Pod endpoints. In IPVS mode, it programs kernel load-balancing tables for more scalable service routing. In all cases, the job is to connect “Service IP/port” to “Pod IP/port endpoints.”
Option A is incorrect because Ingress is a separate API resource and requires an Ingress Controller (like NGINX Ingress, HAProxy, Traefik, etc.) to implement HTTP routing, TLS termination, and host/path rules. kube-proxy is not an Ingress controller. Option C is incorrect because general node egress management is not kube-proxy’s responsibility; egress behavior typically depends on the CNI plugin, NAT configuration, and network policies. Option D is incorrect because API access control is handled by the API server’s authentication/authorization layers (RBAC, webhooks, etc.), not kube-proxy.
So kube-proxy’s essential function is: keep node networking rules in sync so that Service traffic reaches the right Pods. It is one of the key components that makes Services “just work” across nodes without clients needing to know individual Pod IPs.
=========
Which of the following options is true about considerations for large Kubernetes clusters?
Options:
Kubernetes supports up to 1000 nodes and recommends no more than 1000 containers per node.
Kubernetes supports up to 5000 nodes and recommends no more than 500 Pods per node.
Kubernetes supports up to 5000 nodes and recommends no more than 110 Pods per node.
Kubernetes supports up to 50 nodes and recommends no more than 1000 containers per node.
Answer:
CExplanation:
The correct answer is C: Kubernetes scalability guidance commonly cites support up to 5000 nodes and recommends no more than 110 Pods per node. The “110 Pods per node” recommendation is a practical limit based on kubelet, networking, and IP addressing constraints, as well as performance characteristics for scheduling, service routing, and node-level resource management. It is also historically aligned with common CNI/IPAM defaults where node Pod CIDRs are sized for ~110 usable Pod IPs.
Why the other options are incorrect: A and D reference “containers per node,” which is not the standard sizing guidance (Kubernetes typically discusses Pods per node). B’s “500 Pods per node” is far above typical recommended limits for many environments and would stress IPAM, kubelet, and node resources significantly.
In large clusters, several considerations matter beyond the headline limits: API server and etcd performance, watch/list traffic, controller reconciliation load, CoreDNS scaling, and metrics/observability overhead. You must also plan for IP addressing (cluster CIDR sizing), node sizes (CPU/memory), and autoscaling behavior. On each node, kubelet and the container runtime must handle churn (starts/stops), logging, and volume operations. Networking implementations (kube-proxy, eBPF dataplanes) also have scaling characteristics.
Kubernetes provides patterns to keep systems stable at scale: request/limit discipline, Pod disruption budgets, topology spread constraints, namespaces and quotas, and careful observability sampling. But the exam-style fact this question targets is the published scalability figure and per-node Pod recommendation.
Therefore, the verified true statement among the options is C.
=========
Can a Kubernetes Service expose multiple ports?
Options:
No, you can only expose one port per each Service.
Yes, but you must specify an unambiguous name for each port.
Yes, the only requirement is to use different port numbers.
No, because the only port you can expose is port number 443.
Answer:
BExplanation:
Yes, a Kubernetes Service can expose multiple ports, and when it does, each port should have a unique, unambiguous name, making B correct. In the Service spec, the ports field is an array, allowing you to define multiple port mappings (e.g., 80 for HTTP and 443 for HTTPS, or grpc and metrics). Each entry can include port (Service port), targetPort (backend Pod port), and protocol.
The naming requirement becomes important because Kubernetes needs to disambiguate ports, especially when other resources refer to them. For example, an Ingress backend or some proxies/controllers can reference Service ports by name. Also, when multiple ports exist, a name helps humans and automation reliably select the correct port. Kubernetes documentation and common practice recommend naming ports whenever there is more than one, and in several scenarios it’s effectively required to avoid ambiguity.
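A multi-port Service sketch (names and ports are hypothetical), with each port named so that Ingress objects and humans can refer to them unambiguously:

  apiVersion: v1
  kind: Service
  metadata:
    name: web
  spec:
    selector:
      app: web
    ports:
    - name: http              # port names must be unique within the Service
      port: 80
      targetPort: 8080
      protocol: TCP
    - name: metrics
      port: 9090
      targetPort: 9090
      protocol: TCP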
Option A is incorrect because multi-port Services are common and fully supported. Option C is insufficient: while different port numbers are necessary, naming is the correct distinguishing rule emphasized by Kubernetes patterns and required by some integrations. Option D is incorrect and nonsensical—Services can expose many ports and are not restricted to 443.
Operationally, exposing multiple ports through one Service is useful when a single backend workload provides multiple interfaces (e.g., application traffic and a metrics endpoint). You can keep stable discovery under one DNS name while still differentiating ports. The backend Pods must still listen on the target ports, and selectors determine which Pods are endpoints. The key correctness point for this question is:multi-port Services are allowed, and each port should be uniquely named to avoid confusion and integration issues.
=========
In the Kubernetes platform, which component is responsible for running containers?
Options:
etcd
CRI-O
cloud-controller-manager
kube-controller-manager
Answer:
BExplanation:
In Kubernetes, the actual act of running containers on a node is performed by the container runtime. The kubelet instructs the runtime via CRI, and the runtime pulls images, creates containers, and manages their lifecycle. Among the options provided, CRI-O is the only container runtime, so B is correct.
It’s important to be precise: the component that “runs containers” is not the control plane and not etcd. etcd (option A) stores cluster state (API objects) as the backing datastore. It never runs containers. cloud-controller-manager (option C) integrates with cloud APIs for infrastructure like load balancers and nodes. kube-controller-manager (option D) runs controllers that reconcile Kubernetes objects (Deployments, Jobs, Nodes, etc.) but does not execute containers on worker nodes.
CRI-O is a CRI implementation that is optimized for Kubernetes and typically uses an OCI runtime (like runc) under the hood to start containers. Another widely used runtime is containerd. The runtime is installed on nodes and is a prerequisite for kubelet to start Pods. When a Pod is scheduled to a node, kubelet reads the PodSpec and asks the runtime to create a “pod sandbox” and then start the container processes. Runtime behavior also includes pulling images, setting up namespaces/cgroups, and exposing logs/stdout streams back to Kubernetes tooling.
So while “the container runtime” is the most general answer, the question’s option list makes CRI-O the correct selection because it is a container runtime responsible for running containers in Kubernetes.
=========
What is ephemeral storage?
Options:
Storage space that need not persist across restarts.
Storage that may grow dynamically.
Storage used by multiple consumers (e.g., multiple Pods).
Storage that is always provisioned locally.
Answer:
AExplanation:
The correct answer is A: ephemeral storage is non-persistent storage whose data does not need to survive Pod restarts or rescheduling. In Kubernetes, ephemeral storage typically refers to storage tied to the Pod’s lifetime—such as the container writable layer, emptyDir volumes, and other temporary storage types. When a Pod is deleted or moved to a different node, that data is generally lost.
This is different from persistent storage, which is backed by PersistentVolumes and PersistentVolumeClaims and is designed to outlive individual Pod instances. Ephemeral storage is commonly used for caches, scratch space, temporary files, and intermediate build artifacts—data that can be recreated and is not the authoritative system of record.
Option B is incorrect because “may grow dynamically” describes an allocation behavior, not the defining characteristic of ephemeral storage. Option C is incorrect because multiple consumers is about access semantics (ReadWriteMany etc.) and shared volumes, not ephemerality. Option D is incorrect because ephemeral storage is not “always provisioned locally” in a strict sense; while many ephemeral forms are local to the node, the definition is about lifecycle and persistence guarantees, not necessarily physical locality.
Operationally, ephemeral storage is an important scheduling and reliability consideration. Pods can request/limit ephemeral storage similarly to CPU/memory, and nodes can evict Pods under disk pressure. Mismanaged ephemeral storage (logs written to the container filesystem, runaway temp files) can cause node disk exhaustion and cascading failures. Best practices include shipping logs off-node, using emptyDir intentionally with size limits where supported, and using persistent volumes for state that must survive restarts.
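A sketch of intentional ephemeral storage (names and sizes are hypothetical): an emptyDir scratch volume with a size limit, plus an ephemeral-storage request/limit so the scheduler and kubelet can account for it:

  apiVersion: v1
  kind: Pod
  metadata:
    name: scratch-worker
  spec:
    containers:
    - name: worker
      image: busybox:1.36
      command: ["sh", "-c", "sleep 3600"]
      resources:
        requests:
          ephemeral-storage: "1Gi"
        limits:
          ephemeral-storage: "2Gi"   # exceeding this can lead to eviction
      volumeMounts:
      - name: scratch
        mountPath: /tmp/scratch
    volumes:
    - name: scratch
      emptyDir:
        sizeLimit: 500Mi             # data disappears when the Pod goes away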
So, ephemeral storage is best defined as storage that does not need to persist across restarts/rescheduling, matching option A.
=========
The IPv4/IPv6 dual stack in Kubernetes:
Options:
Translates an IPv4 request from a Service to an IPv6 Service.
Allows you to access the IPv4 address by using the IPv6 address.
Requires NetworkPolicies to prevent Services from mixing requests.
Allows you to create IPv4 and IPv6 dual stack Services.
Answer:
DExplanation:
The correct answer is D: Kubernetes dual-stack support allows you to create Services (and Pods, depending on configuration) that use both IPv4 and IPv6 addressing. Dual-stack means the cluster is configured to allocate and route traffic for both IP families. For Services, this can mean assigning both an IPv4 ClusterIP and an IPv6 ClusterIP so clients can connect using either family, depending on their network stack and DNS resolution.
Option A is incorrect because dual-stack is not about protocol translation (that would be NAT64/other gateway mechanisms, not the core Kubernetes dual-stack feature). Option B is also a form of translation/aliasing that isn’t what Kubernetes dual-stack implies; having both addresses available is different from “access IPv4 via IPv6.” Option C is incorrect: dual-stack does not inherently require NetworkPolicies to “prevent mixing requests.” NetworkPolicies are about traffic control, not IP family separation.
In Kubernetes, dual-stack requires support across components: the network plugin (CNI) must support IPv4/IPv6, the cluster must be configured with both Pod CIDRs and Service CIDRs, and DNS should return appropriate A and AAAA records for Service names. Once configured, you can specify preferences such as ipFamilyPolicy (e.g., PreferDualStack) and ipFamilies (IPv4, IPv6 order) for Services to influence allocation behavior.
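A dual-stack Service sketch (the name and ports are hypothetical), assuming the cluster and CNI are already configured for both IP families:

  apiVersion: v1
  kind: Service
  metadata:
    name: api
  spec:
    selector:
      app: api
    ipFamilyPolicy: PreferDualStack   # request both families when the cluster supports them
    ipFamilies:
    - IPv4
    - IPv6
    ports:
    - port: 443
      targetPort: 8443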
Operationally, dual-stack is useful for environments transitioning to IPv6, supporting IPv6-only clients, or running in mixed networks. But it adds complexity: address planning, firewalling, and troubleshooting need to consider two IP families. Still, the definition in the question is straightforward: Kubernetes dual-stack enablesdual-stack Services, which is optionD.
=========
Which Kubernetes resource workload ensures that all (or some) nodes run a copy of a Pod?
Options:
DaemonSet
StatefulSet
kubectl
Deployment
Answer:
AExplanation:
A DaemonSet is the workload controller that ensures a Pod runs on all nodes or on a selected subset of nodes, so A is correct. DaemonSets are used for node-level agents and infrastructure components that must be present everywhere—examples include log collectors, monitoring agents, storage daemons, CNI components, and node security tools.
The DaemonSet controller watches for node additions/removals. When a new node joins the cluster, Kubernetes automatically schedules a new DaemonSet Pod onto that node (subject to constraints such as node selectors, affinities, and taints/tolerations). When a node is removed, its DaemonSet Pod naturally disappears with it. This creates the “one per node” behavior that differentiates DaemonSets from other workload types.
A Deployment manages a replica count across the cluster, not “one per node.” A StatefulSet manages stable identity and ordered operations for stateful replicas; it does not inherently map one Pod to every node. kubectl is a CLI tool and not a workload resource.
DaemonSets can also be scoped: by using node selectors, node affinity, and tolerations, you can ensure Pods run only on GPU nodes, only on Linux nodes, only in certain zones, or only on nodes with a particular label. That’s why the question says “all (or some) nodes.”
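A DaemonSet sketch (names, label, and agent image are hypothetical) for a node-level log agent, scoped to Linux nodes and tolerating control-plane taints so it lands on every such node:

  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: log-agent
  spec:
    selector:
      matchLabels:
        app: log-agent
    template:
      metadata:
        labels:
          app: log-agent
      spec:
        nodeSelector:
          kubernetes.io/os: linux
        tolerations:
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule
        containers:
        - name: agent
          image: fluent/fluent-bit:2.2   # hypothetical agent image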
Therefore, the correct and verified answer is DaemonSet (A).
=========
Imagine you're releasing open-source software for the first time. Which of the following is a valid semantic version?
Options:
1.0
2021-10-11
0.1.0-rc
v1beta1
Answer:
CExplanation:
Semantic Versioning (SemVer) follows the pattern MAJOR.MINOR.PATCH with optional pre-release identifiers (e.g., -rc, -alpha.1) and build metadata. Among the options, 0.1.0-rc matches SemVer rules, so C is correct.
0.1.0-rc breaks down as: MAJOR=0, MINOR=1, PATCH=0, and -rc indicates a pre-release (“release candidate”). Pre-release versions are valid SemVer and are explicitly allowed to denote versions that are not yet considered stable. For a first-time open-source release, 0.x.y is common because it signals the API may still change in backward-incompatible ways before reaching 1.0.0.
Why the other options are not correct SemVer as written:
1.0 is missing the PATCH segment; SemVer requires three numeric components (e.g., 1.0.0).
2021-10-11 is a date string, not MAJOR.MINOR.PATCH.
v1beta1 resembles Kubernetes API versioning conventions, not SemVer.
In cloud-native delivery and Kubernetes ecosystems, SemVer matters because it communicates compatibility. Incrementing MAJOR indicates breaking changes, MINOR indicates backward-compatible feature additions, and PATCH indicates backward-compatible bug fixes. Pre-release tags allow releasing candidates for testing without claiming full stability. This is especially useful for open-source consumers and automation systems that need consistent version comparison and upgrade planning.
So, the only valid semantic version in the choices is 0.1.0-rc, option C.
=========
How many hosts are required to set up a highly available Kubernetes cluster when using an external etcd topology?
Options:
Four hosts. Two for control plane nodes and two for etcd nodes.
Four hosts. One for a control plane node and three for etcd nodes.
Three hosts. The control plane nodes and etcd nodes share the same host.
Six hosts. Three for control plane nodes and three for etcd nodes.
Answer:
DExplanation:
In a highly available (HA) Kubernetes control plane using an external etcd topology, you typically run three control plane nodes and three separate etcd nodes, totaling six hosts, making D correct. HA design relies on quorum-based consensus: etcd uses Raft and requires a majority of members available to make progress. Running three etcd members is the common minimum for HA because it tolerates one member failure while maintaining quorum (2/3).
In the external etcd topology, etcd is decoupled from the control plane nodes. This separation improves fault isolation: if a control plane node fails or is replaced, etcd remains stable and independent; likewise, etcd maintenance can be handled separately. Kubernetes API servers (often multiple instances behind a load balancer) talk to the external etcd cluster for storage of cluster state.
Options A and B propose four hosts, but they break common HA/quorum best practices. Two etcd nodes do not form a robust quorum configuration (a two-member etcd cluster cannot tolerate a single failure without losing quorum). One control plane node is not HA for the API server/scheduler/controller-manager components. Option C describes a stacked etcd topology (control plane + etcd on the same hosts), which can be HA with three hosts, but the question explicitly says external etcd, not stacked. In stacked topology, you often use three control plane nodes each running an etcd member. In external topology, you use three control plane + three etcd.
Operationally, external etcd topology is often used when you want dedicated resources, separate lifecycle management, or stronger isolation for the datastore. It can reduce blast radius but increases infrastructure footprint and operational complexity (TLS, backup/restore, networking). Still, for the canonical HA external-etcd pattern, the expected answer is six hosts: 3 control plane + 3 etcd.
=========
What happens with a regular Pod running in Kubernetes when a node fails?
Options:
A new Pod with the same UID is scheduled to another node after a while.
A new, near-identical Pod but with different UID is scheduled to another node.
By default, a Pod can only be scheduled to the same node when the node fails.
A new Pod is scheduled on a different node only if it is configured explicitly.
Answer:
BExplanation:
B is correct: when a node fails, Kubernetes does not “move” the same Pod instance; instead, a new Pod object (new UID) is created to replace it—assuming the Pod is managed by a controller (Deployment/ReplicaSet, StatefulSet, etc.). A Pod is an API object with a unique identifier (UID) and is tightly associated with the node it’s scheduled to via spec.nodeName. If the node becomes unreachable, that original Pod cannot be restarted elsewhere because it was bound to that node.
Kubernetes’ high availability comes from controllers maintaining desired state. For example, a Deployment desires N replicas. If a node fails and the replicas on that node are lost, the controller will create replacement Pods, and the scheduler will place them onto healthy nodes. These replacement Pods will be “near-identical” in spec (same template), but they are still new instances with new UIDs and typically new IPs.
Why the other options are wrong:
A is incorrect because the UID does not remain the same—Kubernetes creates a new Pod object rather than reusing the old identity.
C is incorrect; Pods are not restricted to the same node after failure. The whole point of orchestration is to reschedule elsewhere.
D is incorrect; rescheduling does not require special explicit configuration for typical controller-managed workloads. The controller behavior is standard. (If it’s a bare Pod without a controller, it will not be recreated automatically.)
This also ties to the difference between “regular Pod” vs controller-managed workloads: a standalone Pod is not self-healing by itself, while a Deployment/ReplicaSet provides that resilience. In typical production design, you run workloads under controllers specifically so node failure triggers replacement and restores replica count.
Therefore, the correct outcome is B.
=========
At which layer would distributed tracing be implemented in a cloud native deployment?
Options:
Network
Application
Database
Infrastructure
Answer:
BExplanation:
Distributed tracing is implemented primarily at the application layer, so B is correct. The reason is simple: tracing is about capturing the end-to-end path of a request as it traverses services, libraries, queues, and databases. That “request context” (trace ID, span ID, baggage) must be created, propagated, and enriched as code executes. While infrastructure components (proxies, gateways, service meshes) can generate or augment trace spans, the fundamental unit of tracing is still tied to application operations (an HTTP handler, a gRPC call, a database query, a cache lookup).
In Kubernetes-based microservices, distributed tracing typically uses standards like OpenTelemetry for instrumentation and context propagation. Application frameworks emit spans for key operations, attach attributes (route, status code, tenant, retry count), and propagate context via headers (e.g., W3C Trace Context). This is what lets you reconstruct “Service A → Service B → Service C” for one user request and identify the slow or failing hop.
Why other layers are not the best answer:
Network focuses on packets/flows, but tracing is not a packet-capture problem; it’s a causal request-path problem across services.
Database spans are part of traces, but tracing is not “implemented in the database layer” overall; DB spans are one component.
Infrastructure provides the platform and can observe traffic, but without application context it can’t fully represent business operations (and many useful attributes live in app code).
So the correct layer for “where tracing is implemented” is the application layer—even when a mesh or proxy helps, it’s still describing application request execution across components.
=========
How can you monitor the progress for an updated Deployment/DaemonSets/StatefulSets?
Options:
kubectl rollout watch
kubectl rollout progress
kubectl rollout state
kubectl rollout status
Answer:
DExplanation:
To monitor rollout progress for Kubernetes workload updates (most commonly Deployments, and also StatefulSets and DaemonSets where applicable), the standard kubectl command is kubectl rollout status, which makes D correct.
Kubernetes manages updates declaratively through controllers. For a Deployment, an update typically creates a new ReplicaSet and gradually shifts replicas from the old to the new according to the strategy (e.g., RollingUpdate with maxUnavailable and maxSurge). For StatefulSets, updates may be ordered and respect stable identities, and for DaemonSets, an update replaces node-level Pods according to update strategy. In all cases, you often want a single command that tells you whether the controller has completed the update and whether the new replicas are available. kubectl rollout status queries the resource status and prints a progress view until completion or timeout.
The other commands listed are not the canonical kubectl subcommands. kubectl rollout watch, kubectl rollout progress, and kubectl rollout state are not standard rollout verbs in kubectl. The supported rollout verbs typically include status, history, undo, pause, and resume (depending on kubectl version and resource type).
Operationally, kubectl rollout status deployment/<name> is typically run right after applying an update: it reports progress and blocks until the rollout succeeds, fails, or times out, which makes it useful in CI/CD pipelines to gate later steps on a successful rollout.
=========
Let’s assume that an organization needs to process large amounts of data in bursts, on a cloud-based Kubernetes cluster. For instance: each Monday morning, they need to run a batch of 1000 compute jobs of 1 hour each, and these jobs must be completed by Monday night. What’s going to be the most cost-effective method?
Options:
Run a group of nodes with the exact required size to complete the batch on time, and use a combination of taints, tolerations, and nodeSelectors to reserve these nodes to the batch jobs.
Leverage the Kubernetes Cluster Autoscaler to automatically start and stop nodes as they’re needed.
Commit to a specific level of spending to get discounted prices (with e.g. “reserved instances” or similar mechanisms).
Use PriorityClasses so that the weekly batch job gets priority over other workloads running on the cluster, and can be completed on time.
Answer:
BExplanation:
Burst workloads are a classic elasticity problem: you need large capacity for a short window, then very little capacity the rest of the week. The most cost-effective approach in a cloud-based Kubernetes environment is to scale infrastructure dynamically, matching node count to current demand. That’s exactly what the Cluster Autoscaler is designed for: it adds nodes when Pods cannot be scheduled due to insufficient resources and removes nodes when they become underutilized and can be drained safely. Therefore B is correct.
Option A can work operationally, but it commonly results in paying for a reserved “standing army” of nodes that sit idle most of the week—wasteful for bursty patterns unless the nodes are repurposed for other workloads. Taints/tolerations and nodeSelectors are placement tools; they don’t reduce cost by themselves and may increase waste if they isolate nodes. Option D (PriorityClasses) affects which Pods get scheduled first given available capacity, but it does not create capacity. If the cluster doesn’t have enough nodes, high priority Pods will still remain Pending. Option C (reserved instances or committed-use discounts) can reduce unit price, but it assumes relatively predictable baseline usage. For true bursts, you usually want a smaller baseline plus autoscaling, and optionally combine it with discounted capacity types if your cloud supports them.
In Kubernetes terms, the control loop is: batch Jobs create Pods → scheduler tries to place Pods → if many Pods are Pending due to insufficient CPU/memory, Cluster Autoscaler observes this and increases the node group size → new nodes join and kube-scheduler places Pods → after jobs finish and nodes become empty, Cluster Autoscaler drains and removes nodes. This matches cloud-native principles: elasticity, pay-for-what-you-use, and automation. It minimizes idle capacity while still meeting the completion deadline.
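A hedged sketch of the batch side (the numbers and image are hypothetical): a Job that declares resource requests lets the Cluster Autoscaler see exactly how much capacity the Pending Pods need and size the node group accordingly:

  apiVersion: batch/v1
  kind: Job
  metadata:
    name: monday-batch
  spec:
    completions: 1000          # total Pods that must run to completion
    parallelism: 100           # how many run at once; drives how many nodes are needed
    template:
      spec:
        restartPolicy: OnFailure
        containers:
        - name: worker
          image: registry.example.com/batch-worker:1.0   # hypothetical image
          resources:
            requests:
              cpu: "1"
              memory: 2Gi      # requests are what the scheduler and autoscaler reason about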
=========
What is the name of the Kubernetes resource used to expose an application?
Options:
Port
Service
DNS
Deployment
Answer:
BExplanation:
To expose an application running on Pods so that other components can reliably reach it, Kubernetes uses a Service, making B the correct answer. Pods are ephemeral: they can be recreated, rescheduled, and scaled, which means Pod IPs change. A Service provides a stable endpoint (virtual IP and DNS name) and load-balances traffic across the set of Pods selected by its label selector.
Services come in multiple forms. The default is ClusterIP, which exposes the application inside the cluster. NodePort exposes the Service on a static port on each node, and LoadBalancer (in supported clouds) provisions an external load balancer that routes traffic to the Service. ExternalName maps a Service name to an external DNS name. But across these variants, the abstraction is consistent: a Service defines how to access a logical group of Pods.
Option A (Port) is not a Kubernetes resource type; ports are fields within resources. Option C (DNS) is a supporting mechanism (CoreDNS creates DNS entries for Services), but DNS is not the resource you create to expose the app. Option D (Deployment) manages Pod replicas and rollouts, but it does not directly provide stable networking access; you typically pair a Deployment with a Service to expose it.
This is a core cloud-native pattern: controllers manage compute, Services manage stable connectivity, and higher-level gateways like Ingress provide L7 routing for HTTP/HTTPS. So, the Kubernetes resource used to expose an application is Service (B).
=========
Which of these events will cause the kube-scheduler to assign a Pod to a node?
Options:
When the Pod crashes because of an error.
When a new node is added to the Kubernetes cluster.
When the CPU load on the node becomes too high.
When a new Pod is created and has no assigned node.
Answer:
DExplanation:
The kube-scheduler assigns a node to a Pod when the Pod is unscheduled—meaning it exists in the API server but has no spec.nodeName set. The event that triggers scheduling is therefore: a new Pod is created and has no assigned node, which is option D.
Kubernetes scheduling is declarative and event-driven. The scheduler continuously watches for Pods that are in a “Pending” unscheduled state. When it sees one, it runs a scheduling cycle: filtering nodes that cannot run the Pod (insufficient resources based on requests, taints/tolerations, node selectors/affinity rules, topology spread constraints), then scoring the remaining feasible nodes to pick the best candidate. Once selected, the scheduler “binds” the Pod to that node by updating the Pod’s spec.nodeName. After that, kubelet on the chosen node takes over to pull images and start containers.
Option A (Pod crashes) does not directly cause scheduling. If a container crashes, kubelet may restart it on the same node according to restart policy. If the Pod itself is replaced (e.g., by a controller like a Deployment creating a new Pod), thatnewPod will be scheduled because it’s unscheduled—but the crash event itself isn’t the scheduler’s trigger. Option B (new node added) might create more capacity and affect future scheduling decisions, but it does not by itself trigger assigning a particular Pod; scheduling still happens because there are unscheduled Pods. Option C (CPU load high) is not a scheduling trigger; scheduling is based on declared requests and constraints, not instantaneous node CPU load (that’s a common misconception).
So the correct, Kubernetes-architecture answer is D: kube-scheduler assigns nodes to Pods that are newly created (or otherwise pending) and have no assigned node.
=========