
Google Professional-Cloud-DevOps-Engineer: Google Cloud Certified - Professional Cloud DevOps Engineer Exam Practice Test

Page: 1 / 16
Total 162 questions

Google Cloud Certified - Professional Cloud DevOps Engineer Exam Questions and Answers

Question 1

You are leading a DevOps project for your organization. The DevOps team is responsible for managing the service infrastructure and being on-call for incidents. The Software Development team is responsible for writing, submitting, and reviewing code. Neither team has any published SLOs. You want to design a new joint-ownership model for a service between the DevOps team and the Software Development team. Which responsibilities should be assigned to each team in the new joint-ownership model?

Options:

(The answer choices for this question were presented as an image in the original and are not recoverable as text.)

Question 2

You are responsible for the reliability of a high-volume enterprise application. A large number of users report that an important subset of the application's functionality – a data-intensive reporting feature – is consistently failing with an HTTP 500 error. When you investigate your application's dashboards, you notice a strong correlation between the failures and a metric that represents the size of an internal queue used for generating reports. You trace the failures to a reporting backend that is experiencing high I/O wait times. You quickly fix the issue by resizing the backend's persistent disk (PD). You now need to create an availability Service Level Indicator (SLI) for the report generation feature. How would you define it?

Options:

A.

As the I/O wait times aggregated across all report generation backends

B.

As the proportion of report generation requests that result in a successful response

C.

As the application’s report generation queue size compared to a known-good threshold

D.

As the reporting backend PD throughput capacity compared to a known-good threshold

Question 3

You are responsible for creating and modifying the Terraform templates that define your Infrastructure. Because two new engineers will also be working on the same code, you need to define a process and adopt a tool that will prevent you from overwriting each other's code. You also want to ensure that you capture all updates in the latest version. What should you do?

Options:

A.

• Store your code in a Git-based version control system.

• Establish a process that allows developers to merge their own changes at the end of each day.

• Package and upload code to a versioned Cloud Storage bucket as the latest master version.

B.

• Store your code in a Git-based version control system.

• Establish a process that includes code reviews by peers and unit testing to ensure integrity and functionality before integration of code.

• Establish a process where the fully integrated code in the repository becomes the latest master version.

C.

• Store your code as text files in Google Drive in a defined folder structure that organizes the files.

• At the end of each day, confirm that all changes have been captured in the files within the folder structure.

• Rename the folder structure with a predefined naming convention that increments the version.

D.

• Store your code as text files in Google Drive in a defined folder structure that organizes the files.

• At the end of each day, confirm that all changes have been captured in the files within the folder structure and create a new .zip archive with a predefined naming convention.

• Upload the .zip archive to a versioned Cloud Storage bucket and accept it as the latest version.
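
For reference, a minimal sketch of the Git-based, peer-reviewed workflow described in the options above (the repository URL, branch, and module names are hypothetical):

    # Clone the shared Terraform repository and work on an isolated feature branch
    git clone https://example.com/org/infra-terraform.git && cd infra-terraform
    git checkout -b feature/add-network-module
    git add modules/network/ && git commit -m "Add network module"
    # Push the branch and open a pull request so peers can review and test before integration
    git push origin feature/add-network-module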

Question 4

You have migrated an e-commerce application to Google Cloud Platform (GCP). You want to prepare the application for the upcoming busy season. What should you do first to prepare for the busy season?

Options:

A.

Load test the application to profile its performance for scaling.

B.

Enable AutoScaling on the production clusters, in case there is growth.

C.

Pre-provision double the compute power used last season, expecting growth.

D.

Create a runbook on inflating the disaster recovery (DR) environment if there is growth.

Question 5

You have an application running in Google Kubernetes Engine. The application invokes multiple services per request but responds too slowly. You need to identify which downstream service or services are causing the delay. What should you do?

Options:

A.

Analyze VPC flow logs along the path of the request.

B.

Investigate the Liveness and Readiness probes for each service.

C.

Create a Dataflow pipeline to analyze service metrics in real time.

D.

Use a distributed tracing framework such as OpenTelemetry or Stackdriver Trace.

Question 6

Your team is writing a postmortem after an incident on your external-facing application. Your team wants to improve the postmortem policy to include triggers that indicate whether an incident requires a postmortem. Based on Site Reliability Engineering (SRE) practices, what triggers should be defined in the postmortem policy?

Choose 2 answers

Options:

A.

An external stakeholder asks for a postmortem

B.

Data is lost due to an incident

C.

An internal stakeholder requests a postmortem

D.

The monitoring system detects that one of the instances for your application has failed

E.

The CD pipeline detects an issue and rolls back a problematic release.

Question 7

You are working with a government agency that requires you to archive application logs for seven years. You need to configure Stackdriver to export and store the logs while minimizing costs of storage. What should you do?

Options:

A.

Create a Cloud Storage bucket and develop your application to send logs directly to the bucket.

B.

Develop an App Engine application that pulls the logs from Stackdriver and saves them in BigQuery.

C.

Create an export in Stackdriver and configure Cloud Pub/Sub to store logs in permanent storage for seven years.

D.

Create a sink in Stackdriver, name it, create a bucket on Cloud Storage for storing archived logs, and then select the bucket as the log export destination.
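
For context, a minimal sketch of exporting logs to a Cloud Storage bucket through a log sink (the sink, bucket, and filter are hypothetical; the sink's writer identity must also be granted write access on the bucket):

    # Create the archive bucket, then a sink that exports matching log entries to it
    gsutil mb -l us-central1 gs://example-log-archive-bucket
    gcloud logging sinks create archive-sink \
        storage.googleapis.com/example-log-archive-bucket \
        --log-filter='severity>=INFO'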

Question 8

You support a high-traffic web application that runs on Google Cloud Platform (GCP). You need to measure application reliability from a user perspective without making any engineering changes to it. What should you do?

Choose 2 answers

Options:

A.

Review current application metrics and add new ones as needed.

B.

Modify the code to capture additional information for user interaction.

C.

Analyze the web proxy logs only and capture response time of each request.

D.

Create new synthetic clients to simulate a user journey using the application.

E.

Use current and historic Request Logs to trace customer interaction with the application.

Question 9

You support a multi-region web service running on Google Kubernetes Engine (GKE) behind a Global HTTP(S) Cloud Load Balancer (CLB). For legacy reasons, user requests first go through a third-party Content Delivery Network (CDN), which then routes traffic to the CLB. You have already implemented an availability Service Level Indicator (SLI) at the CLB level. However, you want to increase coverage in case of a potential load balancer misconfiguration, CDN failure, or other global networking catastrophe. Where should you measure this new SLI?

Choose 2 answers

Options:

A.

Your application servers' logs

B.

Instrumentation coded directly in the client

C.

Metrics exported from the application servers

D.

GKE health checks for your application servers

E.

A synthetic client that periodically sends simulated user requests

Question 10

Your product is currently deployed in three Google Cloud Platform (GCP) zones with your users divided between the zones. You can fail over from one zone to another, but it causes a 10-minute service disruption for the affected users. You typically experience a database failure once per quarter and can detect it within five minutes. You are cataloging the reliability risks of a new real-time chat feature for your product. You catalog the following information for each risk:

• Mean Time to Detect (MTTD) in minutes

• Mean Time to Repair (MTTR) in minutes

• Mean Time Between Failure (MTBF) in days

• User Impact Percentage

The chat feature requires a new database system that takes twice as long to successfully fail over between zones. You want to account for the risk of the new database failing in one zone. What would be the values for the risk of database failover with the new system?

Options:

A.

MTTD: 5

MTTR: 10

MTBF: 90

Impact: 33%

B.

MTTD: 5

MTTR: 20

MTBF: 90

Impact: 33%

C.

MTTD: 5

MTTR: 10

MTBF: 90

Impact: 50%

D.

MTTD: 5

MTTR: 20

MTBF: 90

Impact: 50%

Question 11

You need to run a business-critical workload on a fixed set of Compute Engine instances for several months. The workload is stable with the exact amount of resources allocated to it. You want to lower the costs for this workload without any performance implications. What should you do?

Options:

A.

Purchase Committed Use Discounts.

B.

Migrate the instances to a Managed Instance Group.

C.

Convert the instances to preemptible virtual machines.

D.

Create an Unmanaged Instance Group for the instances used to run the workload.
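
For reference, a rough sketch of purchasing a committed use discount with the gcloud CLI (the commitment name, region, plan, and resource amounts are hypothetical):

    # Purchase a 12-month resource-based commitment for a stable amount of vCPU and memory
    gcloud compute commitments create workload-commitment \
        --region=us-central1 \
        --plan=12-month \
        --resources=vcpu=16,memory=64GB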

Question 12

You built a serverless application by using Cloud Run and deployed the application to your production environment. You want to identify the resource utilization of the application for cost optimization. What should you do?

Options:

A.

Use Cloud Trace with distributed tracing to monitor the resource utilization of the application

B.

Use Cloud Profiler with Ops Agent to monitor the CPU and memory utilization of the application

C.

Use Cloud Monitoring to monitor the container CPU and memory utilization of the application

D.

Use Cloud Ops to create logs-based metrics to monitor the resource utilization of the application

Question 13

Your application images are built using Cloud Build and pushed to Google Container Registry (GCR). You want to be able to specify a particular version of your application for deployment based on the release version tagged in source control. What should you do when you push the image?

Options:

A.

Reference the image digest in the source control tag.

B.

Supply the source control tag as a parameter within the image name.

C.

Use Cloud Build to include the release version tag in the application image.

D.

Use GCR digest versioning to match the image to the tag in source control.
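
As an illustration, one hedged way to carry a source-control release tag into the image name when building with Cloud Build (the project ID, app name, and tag derivation are hypothetical):

    # Derive the release tag from source control and use it as the image tag
    # PROJECT_ID is assumed to already be set in the environment
    TAG=$(git describe --tags --abbrev=0)
    gcloud builds submit --tag "gcr.io/${PROJECT_ID}/my-app:${TAG}"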

Question 14

You support a popular mobile game application deployed on Google Kubernetes Engine (GKE) across several Google Cloud regions. Each region has multiple Kubernetes clusters. You receive a report that none of the users in a specific region can connect to the application. You want to resolve the incident while following Site Reliability Engineering practices. What should you do first?

Options:

A.

Reroute the user traffic from the affected region to other regions that don’t report issues.

B.

Use Stackdriver Monitoring to check for a spike in CPU or memory usage for the affected region.

C.

Add an extra node pool that consists of high memory and high CPU machine type instances to the cluster.

D.

Use Stackdriver Logging to filter on the clusters in the affected region, and inspect error messages in the logs.

Question 15

You are configuring your CI/CD pipeline natively on Google Cloud. You want builds in a pre-production Google Kubernetes Engine (GKE) environment to be automatically load-tested before being promoted to the production GKE environment. You need to ensure that only builds that have passed this test are deployed to production. You want to follow Google-recommended practices. How should you configure this pipeline with Binary Authorization?

Options:

A.

Create an attestation for the builds that pass the load test by requiring the lead quality assurance engineer to sign the attestation by using a key stored in Cloud Key Management Service (Cloud KMS).

B.

Create an attestation for the builds that pass the load test by using a private key stored in Cloud Key Management Service (Cloud KMS) authenticated through Workload Identity.

C.

Create an attestation for the builds that pass the load test by using a private key stored in Cloud Key Management Service (Cloud KMS) with a service account JSON key stored as a Kubernetes Secret.

D.

Create an attestation for the builds that pass the load test by requiring the lead quality assurance engineer to sign the attestation by using their personal private key.
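
For context, a hedged sketch of creating an attestation for an image that passed the load test by using a Cloud KMS key version (the attestor, project, key names, and image digest are hypothetical):

    # Sign and create a Binary Authorization attestation for the tested image digest
    gcloud container binauthz attestations sign-and-create \
        --artifact-url="gcr.io/example-project/my-app@sha256:0123abcd..." \
        --attestor="load-test-attestor" --attestor-project="example-project" \
        --keyversion-project="example-project" --keyversion-location="global" \
        --keyversion-keyring="binauthz-ring" --keyversion-key="load-test-key" \
        --keyversion="1"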

Question 16

You work for a global organization and are running a monolithic application on Compute Engine. You need to select the machine type for the application to use that optimizes CPU utilization by using the fewest number of steps. You want to use historical system metrics to identify the machine type for the application to use. You want to follow Google-recommended practices. What should you do?

Options:

A.

Use the Recommender API and apply the suggested recommendations

B.

Create an Agent Policy to automatically install Ops Agent in all VMs

C.

Install the Ops Agent in a fleet of VMs by using the gcloud CLI

D.

Review the Cloud Monitoring dashboard for the VM and choose the machine type with the lowest CPU utilization

Question 17

You are running a real-time gaming application on Compute Engine that has a production and testing environment. Each environment has their own Virtual Private Cloud (VPC) network. The application frontend and backend servers are located on different subnets in the environment's VPC. You suspect there is a malicious process communicating intermittently in your production frontend servers. You want to ensure that network traffic is captured for analysis. What should you do?

Options:

A.

Enable VPC Flow Logs on the production VPC network frontend and backend subnets only with a sample volume scale of 0.5.

B.

Enable VPC Flow Logs on the production VPC network frontend and backend subnets only with a sample volume scale of 1.0.

C.

Enable VPC Flow Logs on the testing and production VPC network frontend and backend subnets with a volume scale of 0.5. Apply changes in testing before production.

D.

Enable VPC Flow Logs on the testing and production VPC network frontend and backend subnets with a volume scale of 1.0. Apply changes in testing before production.
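
For reference, a minimal sketch of enabling VPC Flow Logs on a subnet with full sampling (the subnet name and region are hypothetical):

    # Enable flow logs on the production frontend subnet with a 1.0 sampling rate
    gcloud compute networks subnets update prod-frontend-subnet \
        --region=europe-west2 \
        --enable-flow-logs \
        --logging-flow-sampling=1.0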

Question 18

You are managing an application that runs in Compute Engine. The application uses a custom HTTP server to expose an API that is accessed by other applications through an internal TCP/UDP load balancer. A firewall rule allows access to the API port from 0.0.0.0/0. You need to configure Cloud Logging to log each IP address that accesses the API by using the fewest number of steps. What should you do first?

Options:

A.

Enable Packet Mirroring on the VPC

B.

Install the Ops Agent on the Compute Engine instances.

C.

Enable logging on the firewall rule

D.

Enable VPC Flow Logs on the subnet
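
For context, a one-line sketch of turning on logging for an existing firewall rule (the rule name is hypothetical):

    # Enable Firewall Rules Logging so each allowed connection to the API port is recorded
    gcloud compute firewall-rules update allow-api-port --enable-logging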

Question 19

You are creating and assigning action items in a postmortem for an outage. The outage is over, but you need to address the root causes. You want to ensure that your team handles the action items quickly and efficiently. How should you assign owners and collaborators to action items?

Options:

A.

Assign one owner for each action item and any necessary collaborators.

B.

Assign multiple owners for each item to guarantee that the team addresses items quickly

C.

Assign collaborators but no individual owners to the items to keep the postmortem blameless.

D.

Assign the team lead as the owner for all action items because they are in charge of the SRE team.

Question 20

Your company has a Google Cloud resource hierarchy with folders for production, test, and development. Your cyber security team needs to review your company's Google Cloud security posture to accelerate security issue identification and resolution. You need to centralize the logs generated by Google Cloud services from all projects only inside your production folder to allow for alerting and near-real-time analysis. What should you do?

Options:

A.

Enable the Workflows API and route all the logs to Cloud Logging

B.

Create a central Cloud Monitoring workspace and attach all related projects

C.

Create an aggregated log sink associated with the production folder that uses a Pub/Sub topic as the destination

D.

Create an aggregated log sink associated with the production folder that uses a Cloud Logging bucket as the destination
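
For reference, a hedged sketch of an aggregated sink at the folder level (the sink name, folder ID, destination project, and bucket are hypothetical):

    # Create a folder-level aggregated sink that routes logs from all child projects
    # to a central Cloud Logging bucket
    gcloud logging sinks create prod-central-sink \
        logging.googleapis.com/projects/central-logging-project/locations/global/buckets/prod-logs \
        --folder=123456789012 \
        --include-children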

Question 21

You are designing a system with three different environments: development, quality assurance (QA), and production.

Each environment will be deployed with Terraform and has a Google Kubernetes Engine (GKE) cluster created so that application teams can deploy their applications. Anthos Config Management will be used and templated to deploy infrastructure-level resources in each GKE cluster. All users (for example, infrastructure operators and application owners) will use GitOps. How should you structure your source control repositories for both Infrastructure as Code (IaC) and application code?

Options:

A.

Cloud Infrastructure (Terraform) repository is shared: different directories are different environments. GKE Infrastructure (Anthos Config Management Kustomize manifests) repository is shared: different overlay directories are different environments. Application (app source code) repositories are separated: different branches are different features.

B.

Cloud Infrastructure (Terraform) repository is shared: different directories are different environments. GKE Infrastructure (Anthos Config Management Kustomize manifests) repositories are separated: different branches are different environments. Application (app source code) repositories are separated: different branches are different features.

C.

Cloud Infrastructure (Terraform) repository is shared: different branches are different environments. GKE Infrastructure (Anthos Config Management Kustomize manifests) repository is shared: different overlay directories are different environments. Application (app source code) repository is shared: different directories are different features.

D.

Cloud Infrastructure (Terraform) repositories are separated: different branches are different environments. GKE Infrastructure (Anthos Config Management Kustomize manifests) repositories are separated: different overlay directories are different environments. Application (app source code) repositories are separated: different branches are different features.

Question 22

Your company runs services by using Google Kubernetes Engine (GKE). The GKE clusters in the development environment run applications with verbose logging enabled. Developers view logs by using the kubectl logs command and do not use Cloud Logging. Applications do not have a uniform logging structure defined. You need to minimize the costs associated with application logging while still collecting GKE operational logs. What should you do?

Options:

A.

Run the gcloud container clusters update --logging=SYSTEM command for the development cluster.

B.

Run the gcloud container clusters update --logging=WORKLOAD command for the development cluster.

C.

Run the gcloud logging sinks update _Default --disabled command in the project associated with the development environment.

D.

Add the severity>=DEBUG resource.type="k8s_container" exclusion filter to the _Default logging sink in the project associated with the development environment.
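
As an illustration, a minimal sketch of restricting a development cluster's log collection to system components only (the cluster name and location are hypothetical):

    # Collect only GKE system/operational logs and stop ingesting workload logs
    gcloud container clusters update dev-cluster \
        --region=us-central1 \
        --logging=SYSTEM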

Question 23

Your application images are built and pushed to Google Container Registry (GCR). You want to build an automated pipeline that deploys the application when the image is updated while minimizing the development effort. What should you do?

Options:

A.

Use Cloud Build to trigger a Spinnaker pipeline.

B.

Use Cloud Pub/Sub to trigger a Spinnaker pipeline.

C.

Use a custom builder in Cloud Build to trigger a Jenkins pipeline.

D.

Use Cloud Pub/Sub to trigger a custom deployment service running in Google Kubernetes Engine (GKE).

Question 24

You support the backend of a mobile phone game that runs on a Google Kubernetes Engine (GKE) cluster. The application is serving HTTP requests from users. You need to implement a solution that will reduce the network cost. What should you do?

Options:

A.

Configure the VPC as a Shared VPC Host project.

B.

Configure your network services on the Standard Tier.

C.

Configure your Kubernetes cluster as a Private Cluster.

D.

Configure a Google Cloud HTTP Load Balancer as Ingress.

Question 25

You work for a global organization and run a service with an availability target of 99% with limited engineering resources. For the current calendar month, you noticed that the service has 99.5% availability. You must ensure that your service meets the defined availability goals and can react to business changes, including the upcoming launch of new features. You also need to reduce technical debt while minimizing operational costs. You want to follow Google-recommended practices. What should you do?

Options:

A.

Add N+1 redundancy to your service by adding additional compute resources to the service

B.

Identify, measure and eliminate toil by automating repetitive tasks

C.

Define an error budget for your service level availability and minimize the remaining error budget

D.

Allocate available engineers to the feature backlog while you ensure that the service remains within the availability target

Question 26

You are running an application on Compute Engine and collecting logs through Stackdriver. You discover that some personally identifiable information (PII) is leaking into certain log entry fields. You want to prevent these fields from being written in new log entries as quickly as possible. What should you do?

Options:

A.

Use the filter-record-transformer Fluentd filter plugin to remove the fields from the log entries in flight.

B.

Use the fluent-plugin-record-reformer Fluentd output plugin to remove the fields from the log entries in flight.

C.

Wait for the application developers to patch the application, and then verify that the log entries are no longer exposing PII.

D.

Stage log entries to Cloud Storage, and then trigger a Cloud Function to remove the fields and write the entries to Stackdriver via the Stackdriver Logging API.

Question 27

You are deploying a Cloud Build job that deploys Terraform code when a Git branch is updated. While testing, you noticed that the job fails. You see the following error in the build logs:

Initializing the backend...

Error: Failed to get existing workspaces: querying Cloud Storage failed: googleapi: Error 403

You need to resolve the issue by following Google-recommended practices. What should you do?

Options:

A.

Change the Terraform code to use local state.

B.

Create a storage bucket with the name specified in the Terraform configuration.

C.

Grant the roles/owner Identity and Access Management (IAM) role to the Cloud Build service account on the project.

D.

Grant the roles/storage.objectAdmin Identity and Access Management (IAM) role to the Cloud Build service account on the state file bucket.
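
For context, a hedged sketch of granting a Cloud Build service account object-level access on a Terraform state bucket (the bucket name and project number are hypothetical):

    # Allow the Cloud Build service account to read and write the remote state objects
    gcloud storage buckets add-iam-policy-binding gs://example-tf-state \
        --member="serviceAccount:123456789012@cloudbuild.gserviceaccount.com" \
        --role="roles/storage.objectAdmin"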

Question 28

You recently migrated an ecommerce application to Google Cloud. You now need to prepare the application for the upcoming peak traffic season. You want to follow Google-recommended practices. What should you do first to prepare for the busy season?

Options:

A.

Migrate the application to Cloud Run, and use autoscaling.

B.

Load test the application to profile its performance for scaling.

C.

Create a Terraform configuration for the application's underlying infrastructure to quickly deploy to additional regions.

D.

Pre-provision the additional compute power that was used last season, and expect growth.

Question 29

Your team is running microservices in Google Kubernetes Engine (GKE). You want to detect consumption of an error budget to protect customers and define release policies. What should you do?

Options:

A.

Create SLIs from metrics. Enable Alert Policies if the services do not pass

B.

Use the metrics from Anthos Service Mesh to measure the health of the microservices

C.

Create an SLO. Create an Alert Policy on select_slo_burn_rate

D.

Create an SLO and configure uptime checks for your services. Enable Alert Policies if the services do not pass

Question 30

Your organization is using Helm to package containerized applications. Your applications reference both public and private charts. Your security team flagged that using a public Helm repository as a dependency is a risk. You want to manage all charts uniformly, with native access control and VPC Service Controls. What should you do?

Options:

A.

Store public and private charts in OCI format by using Artifact Registry

B.

Store public and private charts by using GitHub Enterprise with Google Workspace as the identity provider

C.

Store public and private charts by using a Git repository. Configure Cloud Build to synchronize the contents of the repository into a Cloud Storage bucket. Connect Helm to the bucket by using https://[bucket].storage.googleapis.com/[helmchart] as the Helm repository

D.

Configure a Helm chart repository server to run in Google Kubernetes Engine (GKE) with Cloud Storage bucket as the storage backend
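
For reference, a hedged sketch of storing a chart in OCI format in Artifact Registry with Helm 3 (the region, project, repository, and chart names are hypothetical):

    # Authenticate Helm against Artifact Registry, then push a packaged chart in OCI format
    gcloud auth print-access-token | helm registry login -u oauth2accesstoken \
        --password-stdin https://us-central1-docker.pkg.dev
    helm push mychart-0.1.0.tgz oci://us-central1-docker.pkg.dev/example-project/helm-charts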

Question 31

You are building the CI/CD pipeline for an application deployed to Google Kubernetes Engine (GKE). The application is deployed by using a Kubernetes Deployment, Service, and Ingress. The application team asked you to deploy the application by using the blue/green deployment methodology. You need to implement the rollback actions. What should you do?

Options:

A.

Run the kubectl rollout undo command

B.

Delete the new container image, and delete the running Pods

C.

Update the Kubernetes Service to point to the previous Kubernetes Deployment

D.

Scale the new Kubernetes Deployment to zero
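
As an illustration, one hedged way to roll back a blue/green release by repointing the Service selector at the previous Deployment (the service name and label values are hypothetical):

    # Switch the Service selector back to the "blue" (previous) Deployment's Pods
    kubectl patch service my-app \
        -p '{"spec":{"selector":{"app":"my-app","version":"blue"}}}'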

Question 32

A third-party application needs to have a service account key to work properly. When you try to export the key from your cloud project, you receive the error "The organization policy constraint iam.disableServiceAccountKeyCreation is enforced." You need to make the third-party application work while following Google-recommended security practices. What should you do?

Options:

A.

Enable the default service account key, and download the key

B.

Remove the iam.disableServiceAccountKeyCreation policy at the organization level, and create a key.

C.

Disable the service account key creation policy at the project's folder, and download the default key

D.

Add a rule to set the iam.disableServiceAccountKeyCreation policy to off in your project and create a key.
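
For context, a hedged sketch of overriding the constraint at the project level and then creating a key (the project and service account names are hypothetical):

    # Turn off enforcement of the constraint for this project only, then create the key
    gcloud resource-manager org-policies disable-enforce \
        iam.disableServiceAccountKeyCreation --project=example-project
    gcloud iam service-accounts keys create key.json \
        --iam-account=third-party-sa@example-project.iam.gserviceaccount.com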

Question 33

You are configuring the frontend tier of an application deployed in Google Cloud. The frontend tier is hosted in nginx and deployed using a managed instance group with an Envoy-based external HTTP(S) load balancer in front. The application is deployed entirely within the europe-west2 region and only serves users based in the United Kingdom. You need to choose the most cost-effective network tier and load balancing configuration. What should you use?

Options:

A.

Premium Tier with a global load balancer

B.

Premium Tier with a regional load balancer

C.

Standard Tier with a global load balancer

D.

Standard Tier with a regional load balancer

Question 34

You recently deployed your application in Google Kubernetes Engine (GKE) and now need to release a new version of the application. You need the ability to instantly roll back to the previous version of the application in case there are issues with the new version. Which deployment model should you use?

Options:

A.

Perform a rolling deployment and test your new application after the deployment is complete

B.

Perform A/B testing, and test your application periodically after the deployment is complete

C.

Perform a canary deployment, and test your new application periodically after the new version is deployed

D.

Perform a blue/green deployment and test your new application after the deployment is complete

Question 35

You manage several production systems that run on Compute Engine in the same Google Cloud Platform (GCP) project. Each system has its own set of dedicated Compute Engine instances. You want to know how much it costs to run each of the systems. What should you do?

Options:

A.

In the Google Cloud Platform Console, use the Cost Breakdown section to visualize the costs per system.

B.

Assign all instances a label specific to the system they run. Configure BigQuery billing export and query costs per label.

C.

Enrich all instances with metadata specific to the system they run. Configure Stackdriver Logging to export to BigQuery, and query costs based on the metadata.

D.

Name each virtual machine (VM) after the system it runs. Set up a usage report export to a Cloud Storage bucket. Configure the bucket as a source in BigQuery to query costs based on VM name.
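
For reference, a hedged sketch of labeling instances by system and then querying exported billing data in BigQuery (the instance, label, dataset, and table names are hypothetical placeholders):

    # Label each instance with the system it belongs to
    gcloud compute instances add-labels system-a-vm-1 \
        --zone=us-central1-a --labels=system=system-a
    # Query the billing export table, grouping cost by the "system" label
    bq query --use_legacy_sql=false '
      SELECT (SELECT value FROM UNNEST(labels) WHERE key = "system") AS system,
             SUM(cost) AS total_cost
      FROM `example-project.billing_export.gcp_billing_export_v1_XXXXXX`
      GROUP BY system'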

Question 36

You are configuring Cloud Logging for a new application that runs on a Compute Engine instance with a public IP address. A user-managed service account is attached to the instance. You confirmed that the necessary agents are running on the instance but you cannot see any log entries from the instance in Cloud Logging. You want to resolve the issue by following Google-recommended practices. What should you do?

Options:

A.

Add the Logs Writer role to the service account.

B.

Enable Private Google Access on the subnet that the instance is in.

C.

Update the instance to use the default Compute Engine service account.

D.

Export the service account key and configure the agents to use the key.
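
For context, a minimal sketch of granting the Logs Writer role to the attached user-managed service account (the project and service account names are hypothetical):

    # Allow the instance's service account to write log entries to Cloud Logging
    gcloud projects add-iam-policy-binding example-project \
        --member="serviceAccount:app-sa@example-project.iam.gserviceaccount.com" \
        --role="roles/logging.logWriter"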

Question 37

Your team deploys applications to three Google Kubernetes Engine (GKE) environments: development, staging, and production. You use GitHub repositories as your source of truth. You need to ensure that the three environments are consistent. You want to follow Google-recommended practices to enforce and install network policies and a logging DaemonSet on all the GKE clusters in those environments. What should you do?

Options:

A.

Use Google Cloud Deploy to deploy the network policies and the DaemonSet. Use Cloud Monitoring to trigger an alert if the network policies and DaemonSet drift from your source in the repository.

B.

Use Google Cloud Deploy to deploy the DaemonSet, and use Policy Controller to configure the network policies. Use Cloud Monitoring to detect drifts from the source in the repository and Cloud Functions to correct the drifts.

C.

Use Cloud Build to render and deploy the network policies and the DaemonSet. Set up Config Sync to sync the configurations for the three environments.

D.

Use Cloud Build to render and deploy the network policies and the DaemonSet. Set up a Policy Controller to enforce the configurations for the three environments.

Question 38

You are part of an organization that follows SRE practices and principles. You are taking over the management of a new service from the Development Team, and you conduct a Production Readiness Review (PRR). After the PRR analysis phase, you determine that the service cannot currently meet its Service Level Objectives (SLOs). You want to ensure that the service can meet its SLOs in production. What should you do next?

Options:

A.

Adjust the SLO targets to be achievable by the service so you can bring it into production.

B.

Notify the development team that they will have to provide production support for the service.

C.

Identify recommended reliability improvements to the service to be completed before handover.

D.

Bring the service into production with no SLOs and build them when you have collected operational data.

Question 39

You need to define Service Level Objectives (SLOs) for a high-traffic multi-region web application. Customers expect the application to always be available and have fast response times. Customers are currently happy with the application performance and availability. Based on current measurement, you observe that the 90th percentile of latency is 120ms and the 95th percentile of latency is 275ms over a 28-day window. What latency SLO would you recommend to the team to publish?

Options:

A.

90th percentile – 100ms

95th percentile – 250ms

B.

90th percentile – 120ms

95th percentile – 275ms

C.

90th percentile – 150ms

95th percentile – 300ms

D.

90th percentile – 250ms

95th percentile – 400ms

Question 40

You are running a web application deployed to a Compute Engine managed instance group. Ops Agent is installed on all instances. You recently noticed suspicious activity from a specific IP address. You need to configure Cloud Monitoring to view the number of requests from that specific IP address with minimal operational overhead. What should you do?

Options:

A.

Configure the Ops Agent with a logging receiver. Create a logs-based metric

B.

Create a script to scrape the web server log. Export the IP address request metrics to the Cloud Monitoring API

C.

Update the application to export the IP address request metrics to the Cloud Monitoring API

D.

Configure the Ops Agent with a metrics receiver
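
As an illustration, a hedged sketch of a logs-based metric counting requests from one IP address (the metric name, IP address, and log field are hypothetical and depend on how the Ops Agent logging receiver parses the web server logs):

    # Count log entries whose parsed client IP matches the suspicious address
    gcloud logging metrics create requests_from_suspicious_ip \
        --description="Requests from 203.0.113.7" \
        --log-filter='resource.type="gce_instance" AND httpRequest.remoteIp="203.0.113.7"'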

Question 41

You use a multi-step Cloud Build pipeline to build and deploy your application to Google Kubernetes Engine (GKE). You want to integrate with a third-party monitoring platform by performing a HTTP POST of the build information to a webhook. You want to minimize the development effort. What should you do?

Options:

A.

Add logic to each Cloud Build step to HTTP POST the build information to a webhook.

B.

Add a new step at the end of the pipeline in Cloud Build to HTTP POST the build information to a webhook.

C.

Use Stackdriver Logging to create a logs-based metric from the Cloud Build logs. Create an Alert with a Webhook notification type.

D.

Create a Cloud Pub/Sub push subscription to the Cloud Build cloud-builds PubSub topic to HTTP POST the build information to a webhook.
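
For reference, a hedged sketch of a push subscription on the cloud-builds topic that forwards build messages to a webhook (the subscription name and endpoint URL are hypothetical):

    # Cloud Build publishes build status messages to the cloud-builds topic;
    # a push subscription can POST them to the monitoring platform's webhook
    gcloud pubsub subscriptions create build-status-webhook \
        --topic=cloud-builds \
        --push-endpoint=https://monitoring.example.com/build-webhook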

Question 42

Your team is designing a new application for deployment into Google Kubernetes Engine (GKE). You need to set up monitoring to collect and aggregate various application-level metrics in a centralized location. You want to use Google Cloud Platform services while minimizing the amount of work required to set up monitoring. What should you do?

Options:

A.

Publish various metrics from the application directly to the Stackdriver Monitoring API, and then observe these custom metrics in Stackdriver.

B.

Install the Cloud Pub/Sub client libraries, push various metrics from the application to various topics, and then observe the aggregated metrics in Stackdriver.

C.

Install the OpenTelemetry client libraries in the application, configure Stackdriver as the export destination for the metrics, and then observe the application's metrics in Stackdriver.

D.

Emit all metrics in the form of application-specific log messages, pass these messages from the containers to the Stackdriver logging collector, and then observe metrics in Stackdriver.

Question 43

You support an application running on App Engine. The application is used globally and accessed from various device types. You want to know the number of connections. You are using Stackdriver Monitoring for App Engine. What metric should you use?

Options:

A.

flex/connections/current

B.

tcp_ssl_proxy/new_connections

C.

tcp_ssl_proxy/open_connections

D.

flex/instance/connections/current

Question 44

Your company is using HTTPS requests to trigger a public Cloud Run-hosted service accessible at the https://booking-engine-abcdef.a.run.app URL. You need to give developers the ability to test the latest revisions of the service before the service is exposed to customers. What should you do?

Options:

A.

Run the gcloud run deploy booking-engine --no-traffic --tag dev command. Use the https://dev---booking-engine-abcdef.a.run.app URL for testing

B.

Run the gcloud run services update-traffic booking-engine --to-revisions LATEST=1 command. Use the https://booking-engine-abcdef.a.run.app URL for testing

C.

Pass the curl -K "Authorization: Hearer S(gclcud auth print-identity-token)" auth token Use the https: / /booking-engine-abcdef. a. run. app URL to test privately

D.

Grant the roles/run.invoker role to the developers testing the booking-engine service. Use the https://booking-engine-abcdef.private.run.app URL for testing
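
For context, a hedged sketch of deploying a revision with no production traffic and a named tag, which exposes a separate test URL (the image, region, and tag are hypothetical):

    # Deploy the latest revision without shifting traffic; the "dev" tag gets its own URL
    gcloud run deploy booking-engine \
        --image=us-docker.pkg.dev/example-project/app/booking-engine:latest \
        --region=us-central1 --no-traffic --tag=dev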

Question 45

Your application services run in Google Kubernetes Engine (GKE). You want to make sure that only images from your centrally-managed Google Container Registry (GCR) image registry in the altostrat-images project can be deployed to the cluster while minimizing development time. What should you do?

Options:

A.

Create a custom builder for Cloud Build that will only push images to gcr.io/altostrat-images.

B.

Use a Binary Authorization policy that includes the whitelist name pattern gcr.io/altostrat-images/.

C.

Add logic to the deployment pipeline to check that all manifests contain only images from gcr.io/altostrat-images.

D.

Add a tag to each image in gcr.io/altostrat-images and check that this tag is present when the image is deployed.

Question 46

You support a production service that runs on a single Compute Engine instance. You regularly need to spend time on recreating the service by deleting the crashing instance and creating a new instance based on the relevant image. You want to reduce the time spent performing manual operations while following Site Reliability Engineering principles. What should you do?

Options:

A.

File a bug with the development team so they can find the root cause of the crashing instance.

B.

Create a Managed Instance Group with a single instance and use health checks to determine the system status.

C.

Add a Load Balancer in front of the Compute Engine instance and use health checks to determine the system status.

D.

Create a Stackdriver Monitoring dashboard with SMS alerts to be able to start recreating the crashed instance promptly after it has crashed.
