Professional-Cloud-Architect Exam Questions

Total 251 Questions

Last Updated: 22-Nov-2024

Topic 6, Dress4Win Case 2

   

Company Overview
Dress4win is a web-based company that helps their users organize and manage their personal wardrobe using a
website and mobile application. The company also cultivates an active social network that connects their users
with designers and retailers. They monetize their services through advertising, e-commerce, referrals, and a
freemium app model. The application has grown from a few servers in the founder’s garage to several hundred
servers and appliances in a colocated data center. However, the capacity of their infrastructure is now
insufficient for the application's rapid growth. Because of this growth and the company's desire to innovate
faster, Dress4Win is committing to a full migration to a public cloud.
Solution Concept
For the first phase of their migration to the cloud, Dress4win is moving their development and test
environments. They are also building a disaster recovery site, because their current infrastructure is at a single
location. They are not sure which components of their architecture they can migrate as is and which
components they need to change before migrating them.
Existing Technical Environment
The Dress4win application is served out of a single data center location. All servers run Ubuntu LTS v16.04.
Databases:
MySQL: 1 server for user data, inventory, and static data:
- MySQL 5.7
- 8 core CPUs
- 128 GB of RAM
- 2x 5 TB HDD (RAID 1)
Redis: 3-server cluster for metadata, social graph, and caching. Each server is:
- Redis 3.2
- 4 core CPUs
- 32GB of RAM
Compute:
40 Web Application servers providing micro-services based APIs and static content.
- Tomcat - Java

- Nginx
- 4 core CPUs
- 32 GB of RAM
20 Apache Hadoop/Spark servers:
- Data analysis
- Real-time trending calculations
- 8 core CPUs
- 128 GB of RAM
- 4x 5 TB HDD (RAID 1)
3 RabbitMQ servers for messaging, social notifications, and events:
- 8 core CPUs
- 32GB of RAM
Miscellaneous servers:
- Jenkins, monitoring, bastion hosts, security scanners
- 8 core CPUs
- 32GB of RAM
Storage appliances:
iSCSI for VM hosts
Fiber channel SAN – MySQL databases
- 1 PB total storage; 400 TB available
NAS – image storage, logs, backups
- 100 TB total storage; 35 TB available
Business Requirements
Build a reliable and reproducible environment with scaled parity of production.
Improve security by defining and adhering to a set of security and Identity and Access Management (IAM)
best practices for cloud.
Improve business agility and speed of innovation through rapid provisioning of new resources.
Analyze and optimize architecture for performance in the cloud.
Technical Requirements
Easily create non-production environments in the cloud.
Implement an automation framework for provisioning resources in the cloud.
Implement a continuous deployment process for deploying applications to the on-premises data center or cloud.
Support failover of the production environment to cloud during an emergency.
Encrypt data on the wire and at rest.
Support multiple private connections between the production data center and cloud environment.
Executive Statement
Our investors are concerned about our ability to scale and contain costs with our current infrastructure. They
are also concerned that a competitor could use a public cloud platform to offset their up-front investment and
free them to focus on developing better features. Our traffic patterns are highest in the mornings and weekend
evenings; during other times, 80% of our capacity is sitting idle.
Our capital expenditure is now exceeding our quarterly projections. Migrating to the cloud will likely cause an
initial increase in spending, but we expect to fully transition before our next hardware refresh cycle. Our total
cost of ownership (TCO) analysis over the next 5 years for a public cloud strategy achieves a cost reduction
between 30% and 50% over our current model.

 

For this question, refer to the Dress4Win case study. Dress4Win is expected to grow to 10 times its size in 1 year with a corresponding growth in data and traffic that mirrors the existing patterns of usage. The CIO has set the target of migrating production infrastructure to the cloud within the next 6 months. How will you
configure the solution to scale for this growth without making major application changes and still maximize
the ROI?


A.

Migrate the web application layer to App Engine, MySQL to Cloud Datastore, and NAS to Cloud
Storage. Deploy RabbitMQ, and deploy Hadoop servers using Deployment Manager.


B.

Migrate RabbitMQ to Cloud Pub/Sub, Hadoop to BigQuery, and NAS to Compute Engine with
Persistent Disk storage. Deploy Tomcat, and deploy Nginx using Deployment Manager.


C.

Implement managed instance groups for Tomcat and Nginx. Migrate MySQL to Cloud SQL, RabbitMQ
to Cloud Pub/Sub, Hadoop to Cloud Dataproc, and NAS to Compute Engine with Persistent Disk
storage.


D.

Implement managed instance groups for Tomcat and Nginx. Migrate MySQL to Cloud SQL,
RabbitMQ to Cloud Pub/Sub, Hadoop to Cloud Dataproc, and NAS to Cloud Storage.





Answer: C.
  

Implement managed instance groups for Tomcat and Nginx. Migrate MySQL to Cloud SQL, RabbitMQ
to Cloud Pub/Sub, Hadoop to Cloud Dataproc, and NAS to Compute Engine with Persistent Disk
storage.



For this question, refer to the Dress4Win case study. Considering the given business requirements, how would you automate the deployment of web and transactional data layers?


A.

Deploy Nginx and Tomcat using Cloud Deployment Manager to Compute Engine. Deploy a Cloud SQL
server to replace MySQL. Deploy Jenkins using Cloud Deployment Manager.


B.

Deploy Nginx and Tomcat using Cloud Launcher. Deploy a MySQL server using Cloud Launcher.
Deploy Jenkins to Compute Engine using Cloud Deployment Manager scripts.


C.

Migrate Nginx and Tomcat to App Engine. Deploy a Cloud Datastore server to replace the MySQL
server in a high-availability configuration. Deploy Jenkins to Compute Engine using Cloud Launcher.


D.

Migrate Nginx and Tomcat to App Engine. Deploy a MySQL server using Cloud Launcher. Deploy
Jenkins to Compute Engine using Cloud Launcher





Answer: C.
  

Migrate Nginx and Tomcat to App Engine. Deploy a Cloud Datastore server to replace the MySQL
server in a high-availability configuration. Deploy Jenkins to Compute Engine using Cloud Launcher.



For this question, refer to the TerramEarth case study. You need to implement a reliable, scalable GCP
solution for the data warehouse for your company, TerramEarth. Considering the TerramEarth business and technical requirements, what should you do?


A.

Replace the existing data warehouse with BigQuery. Use table partitioning.


B.

Replace the existing data warehouse with a Compute Engine instance with 96 CPUs.


C.

Replace the existing data warehouse with BigQuery. Use federated data sources.


D.

Replace the existing data warehouse with a Compute Engine instance with 96 CPUs. Add an additional
Compute Engine pre-emptible instance with 32 CPUs.





Answer: C.
  

Replace the existing data warehouse with BigQuery. Use federated data sources.
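The "federated data sources" in the selected answer refer to BigQuery external tables, which let BigQuery query files in place (for example in Cloud Storage) without loading them first. A minimal sketch of an external table definition of the kind `bq mk --external_table_definition` can consume; the bucket, paths, and schema below are hypothetical, not from the case study:

```python
import json

# Hypothetical external (federated) table definition: BigQuery reads the
# CSV files directly from Cloud Storage at query time instead of storing
# them. Bucket name, file layout, and schema are illustrative assumptions.
external_def = {
    "sourceFormat": "CSV",
    "sourceUris": ["gs://terramearth-telemetry/daily/*.csv"],
    "csvOptions": {"skipLeadingRows": 1},
    "schema": {
        "fields": [
            {"name": "vehicle_id", "type": "STRING"},
            {"name": "reported_at", "type": "TIMESTAMP"},
            {"name": "fuel_level", "type": "FLOAT"},
        ]
    },
}

print(json.dumps(external_def, indent=2))
```

Saved to a file, a definition like this would be passed on table creation so analysts can query the raw files with standard SQL while the warehouse itself stays serverless.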



For this question, refer to the TerramEarth case study. TerramEarth has decided to store data files in Cloud Storage. You need to configure a Cloud Storage lifecycle rule to store 1 year of data and minimize file storage cost. Which two actions should you take?


A.

Create a Cloud Storage lifecycle rule with Age: “30”, Storage Class: “Standard”, and Action: “Set to
Coldline”, and create a second GCS life-cycle rule with Age: “365”, Storage Class: “Coldline”, and
Action: “Delete”.


B.

Create a Cloud Storage lifecycle rule with Age: “30”, Storage Class: “Coldline”, and Action: “Set to
Nearline”, and create a second GCS life-cycle rule with Age: “91”, Storage Class: “Coldline”, and
Action: “Set to Nearline”.


C.

Create a Cloud Storage lifecycle rule with Age: “90”, Storage Class: “Standard”, and Action: “Set to
Nearline”, and create a second GCS life-cycle rule with Age: “91”, Storage Class: “Nearline”, and
Action: “Set to Coldline”.


D.

Create a Cloud Storage lifecycle rule with Age: “30”, Storage Class: “Standard”, and Action: “Set to
Coldline”, and create a second GCS life-cycle rule with Age: “365”, Storage Class: “Nearline”, and
Action: “Delete”.





Answer: D.
  

Create a Cloud Storage lifecycle rule with Age: “30”, Storage Class: “Standard”, and Action: “Set to
Coldline”, and create a second GCS life-cycle rule with Age: “365”, Storage Class: “Nearline”, and
Action: “Delete”.
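The two rules in the selected answer map directly onto the JSON configuration that `gsutil lifecycle set` consumes: one rule transitions 30-day-old Standard objects to Coldline, a second deletes objects at 365 days. A minimal sketch, assuming a hypothetical bucket name in the usage comment:

```python
import json

# Sketch of a gsutil lifecycle configuration for the selected answer.
# Note the second rule is written as a plain age-based delete: by day 365
# the objects have already been transitioned to Coldline by the first rule.
lifecycle_config = {
    "rule": [
        {
            "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
            "condition": {"age": 30, "matchesStorageClass": ["STANDARD"]},
        },
        {
            "action": {"type": "Delete"},
            "condition": {"age": 365},
        },
    ]
}

# Written to lifecycle.json, this could be applied with (bucket name assumed):
#   gsutil lifecycle set lifecycle.json gs://terramearth-data
print(json.dumps(lifecycle_config, indent=2))
```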



For this question, refer to the TerramEarth case study. To be compliant with European GDPR regulation,
TerramEarth is required to delete data generated from its European customers after a period of 36 months when it contains personal data. In the new architecture, this data will be stored in both Cloud Storage and BigQuery. What should you do?


A.

Create a BigQuery table for the European data, and set the table retention period to 36 months. For
Cloud Storage, use gsutil to enable lifecycle management using a DELETE action with an Age
condition of 36 months.


B.

Create a BigQuery table for the European data, and set the table retention period to 36 months. For
Cloud Storage, use gsutil to create a SetStorageClass to NONE action with an Age condition of 36
months.


C.

Create a BigQuery time-partitioned table for the European data, and set the partition expiration period to 36 months. For Cloud Storage, use gsutil to enable lifecycle management using a DELETE action with
an Age condition of 36 months.


D.

Create a BigQuery time-partitioned table for the European data, and set the partition period to 36
months. For Cloud Storage, use gsutil to create a SetStorageClass to NONE action with an Age
condition of 36 months.





Answer: B.
  

Create a BigQuery table for the European data, and set the table retention period to 36 months. For
Cloud Storage, use gsutil to create a SetStorageClass to NONE action with an Age condition of 36
months.
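Whichever option is chosen, both services need the 36-month period converted into their own units: Cloud Storage lifecycle `Age` conditions count days, while BigQuery table and partition expirations are set in seconds on the command line. A rough conversion sketch, treating 36 months as three 365-day years; all resource names are hypothetical:

```python
# Convert the 36-month GDPR retention period into the units each service uses.
RETENTION_DAYS = 3 * 365                       # Cloud Storage lifecycle Age (days)
RETENTION_SECONDS = RETENTION_DAYS * 24 * 60 * 60  # BigQuery expiration (seconds)

# Cloud Storage side: delete objects once they reach the retention age.
lifecycle_config = {
    "rule": [
        {"action": {"type": "Delete"}, "condition": {"age": RETENTION_DAYS}}
    ]
}

# Applied with (bucket name is an assumption):
#   gsutil lifecycle set lifecycle.json gs://terramearth-eu-data
# On the BigQuery side, the expiration could be set at table creation with
# something like (dataset/table names assumed):
#   bq mk --time_partitioning_type=DAY \
#         --time_partitioning_expiration=94608000 eu_dataset.telemetry
print(RETENTION_DAYS, RETENTION_SECONDS)
```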



For this question, refer to the Mountkirk Games case study. You need to analyze and define the technical
architecture for the database workloads for your company, Mountkirk Games. Considering the business and technical requirements, what should you do?


A.

Use Cloud SQL for time series data, and use Cloud Bigtable for historical data queries.


B.

Use Cloud SQL to replace MySQL, and use Cloud Spanner for historical data queries.


C.

Use Cloud Bigtable to replace MySQL, and use BigQuery for historical data queries.


D.

Use Cloud Bigtable for time series data, use Cloud Spanner for transactional data, and use BigQuery
for historical data queries.





Answer: D.
  

Use Cloud Bigtable for time series data, use Cloud Spanner for transactional data, and use BigQuery
for historical data queries.
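Bigtable suits the time-series workload because rows are stored in lexicographic key order, so row-key design determines scan performance. A common pattern is to combine the entity ID with a zero-padded reversed timestamp so a player's most recent events sort first; a sketch using hypothetical names, with no client library involved:

```python
# Sketch of a Bigtable-style row key for time-series game events.
# Prefixing with the player ID keeps each player's events contiguous; a
# zero-padded reversed timestamp makes newer events sort earlier in a scan.
# MAX_TS and all identifiers here are illustrative assumptions.
MAX_TS = 10**13  # arbitrary ceiling in milliseconds, above any real timestamp

def event_row_key(player_id: str, event_ts_ms: int) -> str:
    reversed_ts = MAX_TS - event_ts_ms
    return f"{player_id}#{reversed_ts:013d}"

older = event_row_key("player42", 1_700_000_000_000)
newer = event_row_key("player42", 1_700_000_100_000)
assert newer < older  # newer events sort first within a player's key range
```

Sequential timestamps alone would hotspot a single tablet, which is why the player ID leads the key here; that trade-off is the usual reason time-series schemas on Bigtable salt or prefix their keys.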



For this question, refer to the Mountkirk Games case study. You need to analyze and define the technical
architecture for the compute workloads for your company, Mountkirk Games. Considering the Mountkirk Games business and technical requirements, what should you do?


A.

Create network load balancers. Use preemptible Compute Engine instances.


B.

Create network load balancers. Use non-preemptible Compute Engine instances.


C.

Create a global load balancer with managed instance groups and autoscaling policies. Use preemptible
Compute Engine instances.


D.

Create a global load balancer with managed instance groups and autoscaling policies. Use
non-preemptible Compute Engine instances.





Answer: C.
  

Create a global load balancer with managed instance groups and autoscaling policies. Use preemptible
Compute Engine instances.
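The autoscaling policies named in the answer attach to the managed instance group and scale it on a signal such as average CPU utilization. A hypothetical sketch of such a policy as the Compute Engine API represents it; every threshold and name below is an assumption, not from the case study:

```python
# Illustrative autoscaling policy for a managed instance group: keep between
# 2 and 40 VMs, targeting 60% average CPU, with a 90s cooldown after scaling.
autoscaling_policy = {
    "minNumReplicas": 2,
    "maxNumReplicas": 40,
    "coolDownPeriodSec": 90,
    "cpuUtilization": {"utilizationTarget": 0.6},
}

# Roughly equivalent gcloud invocation (group and zone names assumed):
#   gcloud compute instance-groups managed set-autoscaling web-mig \
#       --zone us-central1-a --min-num-replicas 2 --max-num-replicas 40 \
#       --target-cpu-utilization 0.6
print(autoscaling_policy)
```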



For this question, refer to the Mountkirk Games case study. Mountkirk Games wants you to design a way to test the analytics platform’s resilience to changes in mobile network latency. What should you do?


A.

Deploy failure injection software to the game analytics platform that can inject additional latency to
mobile client analytics traffic.


B.

Build a test client that can be run from a mobile phone emulator on a Compute Engine virtual machine,
and run multiple copies in Google Cloud Platform regions all over the world to generate realistic traffic.


C.

Add the ability to introduce a random amount of delay before beginning to process analytics files
uploaded from mobile devices.


D.

Create an opt-in beta of the game that runs on players' mobile devices and collects response times from
analytics endpoints running in Google Cloud Platform regions all over the world.





Answer: C.
  

Add the ability to introduce a random amount of delay before beginning to process analytics files
uploaded from mobile devices.



For this question, refer to the Mountkirk Games case study. Which managed storage option meets Mountkirk’s
technical requirement for storing game activity in a time series database service?


A.

Cloud Bigtable


B.

Cloud Spanner


C.

BigQuery


D.

Cloud Datastore





Answer: A.
  

Cloud Bigtable



For this question, refer to the Mountkirk Games case study. Mountkirk Games wants to design their solution for the future in order to take advantage of cloud and technology improvements as they become available. Which two steps should they take? (Choose two.)


A.

Store as much analytics and game activity data as financially feasible today so it can be used to train
machine learning models to predict user behavior in the future.


B.

Begin packaging their game backend artifacts in container images and running them on Kubernetes
Engine to improve the ability to scale up or down based on game activity.


C.

Set up a CI/CD pipeline using Jenkins and Spinnaker to automate canary deployments and improve
development velocity.


D.

Adopt a schema versioning tool to reduce downtime when adding new game features that require storing
additional player data in the database.


E.

Implement a weekly rolling maintenance process for the Linux virtual machines so they can apply
critical kernel patches and package updates and reduce the risk of 0-day vulnerabilities.





Answer: C.
  

Set up a CI/CD pipeline using Jenkins and Spinnaker to automate canary deployments and improve
development velocity.



Answer: E.
  

Implement a weekly rolling maintenance process for the Linux virtual machines so they can apply
critical kernel patches and package updates and reduce the risk of 0-day vulnerabilities.



