Professional-Cloud-Architect Exam Questions

Total 251 Questions

Last Updated: 22-Oct-2024

Topic 8, Mountkirk Games Case 2

Company Overview
Mountkirk Games makes online, session-based, multiplayer games for mobile platforms. They build all of their games using some server-side integration. Historically, they have used cloud providers to lease physical servers.
Due to the unexpected popularity of some of their games, they have had problems scaling their global audience, application servers, MySQL databases, and analytics tools.
Their current model is to write game statistics to files and send them through an ETL tool that loads them into a centralized MySQL database for reporting.

Solution Concept
Mountkirk Games is building a new game, which they expect to be very popular. They plan to deploy the game's backend on Google Compute Engine so they can capture streaming metrics, run intensive analytics, take advantage of its autoscaling server environment, and integrate with a managed NoSQL database.

Business Requirements
- Increase to a global footprint.
- Improve uptime – downtime is loss of players.
- Increase efficiency of the cloud resources we use.
- Reduce latency to all customers.

Technical Requirements
Requirements for Game Backend Platform
- Dynamically scale up or down based on game activity.
- Connect to a transactional database service to manage user profiles and game state.
- Store game activity in a timeseries database service for future analysis.
- As the system scales, ensure that data is not lost due to processing backlogs.
- Run hardened Linux distro.

Requirements for Game Analytics Platform
- Dynamically scale up or down based on game activity.
- Process incoming data on the fly directly from the game servers.
- Process data that arrives late because of slow mobile networks.
- Allow queries to access at least 10 TB of historical data.
- Process files that are regularly uploaded by users' mobile devices.

Executive Statement
Our last successful game did not scale well with our previous cloud provider, resulting in lower user adoption and affecting the game's reputation. Our investors want more key performance indicators (KPIs) to evaluate the speed and stability of the game, as well as other metrics that provide deeper insight into usage patterns so we can adapt the game to target users. Additionally, our current technology stack cannot provide the scale we need, so we want to replace MySQL and move to an environment that provides autoscaling, low latency load balancing, and frees us up from managing physical servers.

For this question, refer to the Mountkirk Games case study. Mountkirk Games wants to migrate from their current analytics and statistics reporting model to one that meets their technical requirements on Google Cloud Platform. Which two steps should be part of their migration plan? (Choose two.)


A.

Evaluate the impact of migrating their current batch ETL code to Cloud Dataflow.


B.

Write a schema migration plan to denormalize data for better performance in BigQuery.


C.

Draw an architecture diagram that shows how to move from a single MySQL database to a MySQL
cluster.


D.

Load 10 TB of analytics data from a previous game into a Cloud SQL instance, and run test queries
against the full dataset to confirm that they complete successfully.


E.

Integrate Cloud Armor to defend against possible SQL injection attacks in analytics files uploaded to
Cloud Storage.





A.
  

Evaluate the impact of migrating their current batch ETL code to Cloud Dataflow.



B.
  

Write a schema migration plan to denormalize data for better performance in BigQuery.
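As a rough illustration of the schema-migration step, the sketch below (project, dataset, and field names are all hypothetical) defines a denormalized BigQuery table that nests per-session statistics rather than reproducing the normalized MySQL layout:

# Hypothetical denormalized game-events schema with a nested session record.
cat > game_events_schema.json <<'EOF'
[
  {"name": "player_id", "type": "STRING", "mode": "REQUIRED"},
  {"name": "event_time", "type": "TIMESTAMP", "mode": "REQUIRED"},
  {"name": "session", "type": "RECORD", "mode": "NULLABLE", "fields": [
    {"name": "session_id", "type": "STRING"},
    {"name": "score", "type": "INTEGER"}
  ]}
]
EOF

# Create the dataset and table in the analytics project (names are illustrative).
bq mk --dataset my-analytics-project:game_analytics
bq mk --table my-analytics-project:game_analytics.game_events game_events_schema.json

Nested and repeated fields avoid the join-heavy layout the MySQL reporting database required, which is why a schema migration plan is part of the answer.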



You need to upload files from your on-premises environment to Cloud Storage. You want the files to be
encrypted on Cloud Storage using customer-supplied encryption keys. What should you do?


A.

Supply the encryption key in a .boto configuration file. Use gsutil to upload the files.


B.

Supply the encryption key using gcloud config. Use gsutil to upload the files to that bucket.


C.

Use gsutil to upload the files, and use the flag --encryption-key to supply the encryption key.


D.

Use gsutil to create a bucket, and use the flag --encryption-key to supply the encryption key. Use gsutil
to upload the files to that bucket





A.
  

Supply the encryption key in a .boto configuration file. Use gsutil to upload the files.
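A minimal sketch of the .boto approach, assuming gsutil is already authenticated (the bucket name and file are illustrative):

# Generate a 256-bit AES key; gsutil expects it base64-encoded.
KEY=$(openssl rand -base64 32)

# Either add it to the [GSUtil] section of your .boto configuration file:
#   [GSUtil]
#   encryption_key = <base64-encoded key>
# or pass it for a single invocation:
gsutil -o "GSUtil:encryption_key=${KEY}" cp ./game-stats.csv gs://example-bucket/

Objects uploaded this way are encrypted with the customer-supplied key, and the same key must be supplied again to read them back.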



Google Cloud Platform resources are managed hierarchically using organization, folders, and projects. When Cloud Identity and Access Management (IAM) policies exist at these different levels, what is the effective policy at a particular node of the hierarchy?


A.

The effective policy is determined only by the policy set at the node


B.

The effective policy is the policy set at the node and restricted by the policies of its ancestors


C.

The effective policy is the union of the policy set at the node and policies inherited from its ancestors


D.

The effective policy is the intersection of the policy set at the node and policies inherited from its
ancestors





C.
  

The effective policy is the union of the policy set at the node and policies inherited from its ancestors



Reference: https://cloud.google.com/resource-manager/docs/cloud-platform-resource-hierarchy
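To make the union behavior concrete, here is a hedged sketch (the group, folder ID, and project ID are hypothetical):

# A grant at the folder level...
gcloud resource-manager folders add-iam-policy-binding 123456789012 \
    --member="group:game-devs@example.com" --role="roles/viewer"

# ...and a grant directly on a project inside that folder:
gcloud projects add-iam-policy-binding example-game-project \
    --member="group:game-devs@example.com" --role="roles/logging.viewer"

# Effective access on example-game-project is the union: the group holds both
# roles/viewer (inherited from the folder) and roles/logging.viewer (set on the
# node). A policy lower in the hierarchy cannot remove access granted higher up.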

An application development team believes their current logging tool will not meet their needs for their new cloud-based product. They want a better tool to capture errors and help them analyze their historical log data. You want to help them find a solution that meets their needs. What should you do?



A.

Direct them to download and install the Google StackDriver logging agent.


B.

Send them a list of online resources about logging best practices.


C.

Help them define their requirements and assess viable logging tools.


D.

Help them upgrade their current tool to take advantage of any new features.





A.
  

Direct them to download and install the Google StackDriver logging agent.



The Stackdriver Logging agent streams logs from your VM instances and from selected third-party software packages to Stackdriver Logging. Using the agent is optional, but recommended. The agent runs on both Linux and Microsoft Windows.

Note: Stackdriver Logging lets you store, search, analyze, monitor, and alert on log data and events from Google Cloud Platform and Amazon Web Services (AWS). The API also allows ingestion of custom log data from any source. Stackdriver Logging is a fully managed service that performs at scale and can ingest application and system log data from thousands of VMs, and you can analyze all that log data in real time.

Reference: https://cloud.google.com/logging/docs/agent/installation
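For reference, a sketch of installing the agent on a Linux VM, using the download URL from the documentation of that era (it may have changed since):

curl -sSO https://dl.google.com/cloudagents/install-logging-agent.sh
sudo bash install-logging-agent.sh

# The agent runs as the google-fluentd service; confirm it is streaming logs:
sudo service google-fluentd status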

Your company is migrating its on-premises data center into the cloud. As part of the migration, you want to integrate Kubernetes Engine for workload orchestration. Parts of your architecture must also be PCI DSS compliant. Which of the following is most accurate?


A.

App Engine is the only compute platform on GCP that is certified for PCI DSS hosting.


B.

Kubernetes Engine cannot be used under PCI DSS because it is considered shared hosting.


C.

Kubernetes Engine and GCP provide the tools you need to build a PCI DSS-compliant environment.


D.

All Google Cloud services are usable because Google Cloud Platform is certified PCI-compliant





C.
  

Kubernetes Engine and GCP provide the tools you need to build a PCI DSS-compliant environment.



Your customer support tool logs all email and chat conversations to Cloud Bigtable for retention and analysis. What is the recommended approach for sanitizing this data of personally identifiable information or payment card information before initial storage?


A.

Hash all data using SHA256


B.

Encrypt all data using elliptic curve cryptography


C.

De-identify the data with the Cloud Data Loss Prevention API


D.

Use regular expressions to find and redact phone numbers, email addresses, and credit card numbers





C.
  

De-identify the data with the Cloud Data Loss Prevention API



Reference: https://cloud.google.com/solutions/pci-dss-compliance-in-gcp#using_data_loss_prevention_api_to_sanitize_data
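A hedged sketch of calling the DLP API from the ingestion path before a row is written to Bigtable (the project ID and sample text are illustrative):

curl -s -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://dlp.googleapis.com/v2/projects/example-project/content:deidentify" \
  -d '{
    "item": {"value": "Customer jane@example.com paid with card 4111-1111-1111-1111"},
    "inspectConfig": {"infoTypes": [
      {"name": "EMAIL_ADDRESS"}, {"name": "PHONE_NUMBER"}, {"name": "CREDIT_CARD_NUMBER"}
    ]},
    "deidentifyConfig": {"infoTypeTransformations": {"transformations": [
      {"primitiveTransformation": {"replaceWithInfoTypeConfig": {}}}
    ]}}
  }'

# The response contains the text with findings replaced by their infoType names,
# e.g. "Customer [EMAIL_ADDRESS] paid with card [CREDIT_CARD_NUMBER]".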

Your company is using BigQuery as its enterprise data warehouse. Data is distributed over several Google Cloud projects. All queries on BigQuery need to be billed on a single project. You want to make sure that no query costs are incurred on the projects that contain the data. Users should be able to query the datasets, but not edit them. How should you configure users’ access roles?



A.

Add all users to a group. Grant the group the role of BigQuery user on the billing project and BigQuery
dataViewer on the projects that contain the data


B.

Add all users to a group. Grant the group the roles of BigQuery dataViewer on the billing project and
BigQuery user on the projects that contain the data.


C.

Add all users to a group. Grant the group the roles of BigQuery jobUser on the billing project and
BigQuery dataViewer on the projects that contain the data.


D.

Add all users to a group. Grant the group the roles of BigQuery dataViewer on the billing project and
BigQuery jobUser on the projects that contain the data





A.
  

Add all users to a group. Grant the group the role of BigQuery user on the billing project and BigQuery
dataViewer on the projects that contain the data



Reference: https://cloud.google.com/bigquery/docs/running-queries
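A sketch of the bindings described in the answer, with hypothetical project IDs and group name:

gcloud projects add-iam-policy-binding billing-project \
    --member="group:bq-analysts@example.com" --role="roles/bigquery.user"

gcloud projects add-iam-policy-binding data-project \
    --member="group:bq-analysts@example.com" --role="roles/bigquery.dataViewer"

# Queries are run with the billing project set, so job (query) costs accrue there,
# while the data projects grant only read access, e.g.:
bq --project_id=billing-project query --use_legacy_sql=false \
    'SELECT COUNT(*) FROM `data-project.game_analytics.game_events`'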

You want to create a private connection between your instances on Compute Engine and your on-premises data center. You require a connection of at least 20 Gbps. You want to follow Google-recommended practices. How should you set up the connection?


A.

Create a VPC and connect it to your on-premises data center using Dedicated Interconnect.


B.

Create a VPC and connect it to your on-premises data center using a single Cloud VPN.


C.

Create a Cloud Content Delivery Network (Cloud CDN) and connect it to your on-premises data center
using Dedicated Interconnect.


D.

Create a Cloud Content Delivery Network (Cloud CDN) and connect it to your on-premises datacenter
using a single Cloud VPN.





A.
  

Create a VPC and connect it to your on-premises data center using Dedicated Interconnect.



Reference: https://cloud.google.com/network-connectivity/docs/interconnect
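As a rough sketch of the Dedicated Interconnect setup (resource names, region, and ASN are hypothetical): once the physical circuits are provisioned, you attach them to the VPC through a Cloud Router and VLAN attachments, and two 10 Gbps circuits in the link bundle would satisfy the 20 Gbps requirement.

gcloud compute routers create onprem-router \
    --network=game-vpc --region=us-central1 --asn=65001

gcloud compute interconnects attachments dedicated create onprem-attachment-1 \
    --interconnect=mountkirk-interconnect --router=onprem-router --region=us-central1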

You have found an error in your App Engine application caused by missing Cloud Datastore indexes. You
have created a YAML file with the required indexes and want to deploy these new indexes to Cloud Datastore. What should you do?


A.

Point gcloud datastore create-indexes to your configuration file


B.

Upload the configuration file the App Engine’s default Cloud Storage bucket, and have App Engine
detect the new indexes


C.

In the GCP Console, use Datastore Admin to delete the current indexes and upload the new configuration file


D.

Create an HTTP request to the built-in python module to send the index configuration file to your
application





A.
  

Point gcloud datastore create-indexes to your configuration file
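A minimal sketch, assuming the index definitions live in index.yaml (newer gcloud releases expose the same operation as gcloud datastore indexes create):

gcloud datastore create-indexes index.yaml

# Once the new indexes are serving, indexes no longer listed in the file can be removed:
gcloud datastore cleanup-indexes index.yaml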



