Topic 5, Misc Questions
You want to make a copy of a production Linux virtual machine in the US-Central region. You want to
manage and replace the copy easily if there are changes on the production virtual machine. You will deploy the copy as a new instance in a different project in the US-East region. What steps must you take?
A.
Use the Linux dd and netcat command to copy and stream the root disk contents to a new virtual
machine instance in the US-East region.
B.
Create a snapshot of the root disk and select the snapshot as the root disk when you create a new virtual
machine instance in the US-East region.
C.
Create an image file from the root disk with Linux dd command, create a new disk from the image file,
and use it to create a new virtual machine instance in the US-East region
D.
Create a snapshot of the root disk, create an image file in Google Cloud Storage from the snapshot, and
create a new virtual machine instance in the US-East region using the image file for the root disk
Answer: D. Create a snapshot of the root disk, create an image file in Google Cloud Storage from the snapshot, and create a new virtual machine instance in the US-East region using the image file for the root disk.
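In gcloud terms, the chosen flow looks roughly like the sketch below. The disk, snapshot, image, zone, and project names are placeholders, and it assumes the destination project has been granted permission to use images from the production project.

# In the production project: snapshot the root disk of the US-Central VM.
gcloud compute disks snapshot prod-vm \
    --snapshot-names=prod-root-snap \
    --zone=us-central1-a --project=prod-project

# Turn the snapshot into a reusable image.
gcloud compute images create prod-root-image \
    --source-snapshot=prod-root-snap --project=prod-project

# In the other project: launch the copy in US-East from that image.
gcloud compute instances create prod-vm-copy \
    --image=prod-root-image --image-project=prod-project \
    --zone=us-east1-b --project=copy-project

When the production VM changes, re-running the snapshot and image steps and recreating the copy replaces it with little manual work.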
The application reliability team at your company has added a debug feature to their backend service to send all
server events to Google Cloud Storage for eventual analysis. The event records are at least 50 KB and at most
15 MB and are expected to peak at 3,000 events per second. You want to minimize data loss.
Which process should you implement?
A.
• Append metadata to file body.
• Compress individual files.
• Name files with serverName-Timestamp.
• Create a new bucket if bucket is older than 1 hour and save individual files to the new bucket. Otherwise, save files to existing bucket.
B.
• Batch every 10,000 events with a single manifest file for metadata.
• Compress event files and manifest file into a single archive file.
• Name files using serverName-EventSequence.
• Create a new bucket if bucket is older than 1 day and save the single archive file to the new bucket. Otherwise, save the single archive file to the existing bucket.
C.
• Compress individual files.
• Name files with serverName-EventSequence.
• Save files to one bucket.
• Set custom metadata headers for each object after saving.
D.
• Append metadata to file body.
• Compress individual files.
• Name files with a random prefix pattern.
• Save files to one bucket.
Answer: A.
• Append metadata to file body.
• Compress individual files.
• Name files with serverName-Timestamp.
• Create a new bucket if bucket is older than 1 hour and save individual files to the new bucket. Otherwise, save files to existing bucket.
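As a rough illustration of option A's naming and hourly-bucket scheme, the snippet below compresses one event record and uploads it with a serverName-Timestamp object name. The file name, bucket prefix, and timestamp format are placeholders, and it assumes the hourly bucket already exists (for example, created by a scheduled gsutil mb job).

# Compress one event record and upload it with a serverName-Timestamp name.
EVENT_FILE="event.json"                               # placeholder input record
OBJECT_NAME="$(hostname)-$(date -u +%Y%m%dT%H%M%S).json.gz"
BUCKET="gs://example-events-$(date -u +%Y%m%d%H)"     # one bucket per hour

gzip -c "${EVENT_FILE}" > "/tmp/${OBJECT_NAME}"
gsutil cp "/tmp/${OBJECT_NAME}" "${BUCKET}/${OBJECT_NAME}"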
Your solution is producing performance bugs in production that you did not see in staging and test
environments. You want to adjust your test and deployment procedures to avoid this problem in the future. What should you do?
A.
Deploy fewer changes to production.
B.
Deploy smaller changes to production.
C.
Increase the load on your test and staging environments.
D.
Deploy changes to a small subset of users before rolling out to production.
Answer: D. Deploy changes to a small subset of users before rolling out to production.
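One way to implement such a canary rollout, shown here only as a hedged example on App Engine (the service and version names are placeholders), is traffic splitting:

# Send 5% of traffic to the new version and keep 95% on the current one.
gcloud app services set-traffic default \
    --splits=current-version=0.95,new-version=0.05 --split-by=random

Once the canary version looks healthy under real production load, the split can be moved to 100%.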
A news feed web service has the following code running on Google App Engine. During peak load, users
report that they can see news articles they already viewed. What is the most likely cause of this problem?
A.
The session variable is local to just a single instance.
B.
The session variable is being overwritten in Cloud Datastore.
C.
The URL of the API needs to be modified to prevent caching.
D.
The HTTP Expires header needs to be set to -1 to stop caching.
Answer: A. The session variable is local to just a single instance.
Under peak load App Engine scales out to multiple instances, and a session variable kept in instance memory is not shared between them, so requests served by different instances disagree about which articles the user has already viewed.
https://stackoverflow.com/questions/3164280/google-app-engine-cache-list-in-session-variable?rq=1
Your organization wants to control IAM policies for different departments independently, but centrally.
Which approach should you take?
A.
Multiple Organizations with multiple Folders
B.
Multiple Organizations, one for each department
C.
A single Organization with a Folder for each department
D.
A single Organization with multiple projects, each with a central owner
Answer: C. A single Organization with a Folder for each department.
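As an illustrative sketch of the folder-per-department pattern (the organization ID, folder ID, group address, and role are placeholders):

# Create a folder for each department under the single organization.
gcloud resource-manager folders create \
    --display-name="Finance" --organization=123456789012

# Let the department's own admins manage IAM within their folder only.
gcloud resource-manager folders add-iam-policy-binding 111111111111 \
    --member="group:finance-admins@example.com" \
    --role="roles/resourcemanager.folderIamAdmin"

Policies set at the Organization level still apply everywhere, which keeps central control while each department manages its own folder.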
Your development team has installed a new Linux kernel module on the batch servers in Google Compute
Engine (GCE) virtual machines (VMs) to speed up the nightly batch process. Two days after the installation,
50% of the batch servers failed the nightly batch run. You want to collect details on the failure to pass back to
the development team. Which three actions should you take? Choose 3 answers
A.
Use Stackdriver Logging to search for the module log entries.
B.
Read the debug GCE Activity log using the API or Cloud Console.
C.
Use gcloud or Cloud Console to connect to the serial console and observe the logs.
D.
Identify whether a live migration event of the failed server occurred, using the activity log.
E.
Adjust the Google Stackdriver timeline to match the failure time, and observe the batch server metrics.
F.
Export a debug VM into an image, and run the image on a local server where kernel log messages will
be displayed on the native screen.
Answer: A, C, E.
Use Stackdriver Logging to search for the module log entries.
Use gcloud or Cloud Console to connect to the serial console and observe the logs.
Adjust the Google Stackdriver timeline to match the failure time, and observe the batch server metrics.
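For the serial console and logging steps, commands like the following work; the instance name, zone, and filter text are placeholders.

# Dump what the kernel wrote to the serial console on a failed batch server.
gcloud compute instances get-serial-port-output batch-server-01 \
    --zone=us-central1-a

# Or attach to the serial console interactively.
gcloud compute connect-to-serial-port batch-server-01 --zone=us-central1-a

# Search Stackdriver Logging for entries mentioning the new kernel module.
gcloud logging read \
    'resource.type="gce_instance" AND textPayload:"my_kernel_module"' --limit=50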
You set up an autoscaling instance group to serve web traffic for an upcoming launch. After configuring the instance group as a backend service to an HTTP(S) load balancer, you notice that virtual machine (VM) instances are being terminated and re-launched every minute. The instances do not have a public IP address. You have verified the appropriate web response is coming from each instance using the curl command. You want to ensure the backend is configured correctly. What should you do?
A.
Ensure that a firewall rule exists to allow source traffic on HTTP/HTTPS to reach the load balancer.
B.
Assign a public IP to each instance and configure a firewall rule to allow the load balancer to reach the
instance public IP.
C.
Ensure that a firewall rule exists to allow load balancer health checks to reach the instances in the
instance group.
D.
Create a tag on each instance with the name of the load balancer. Configure a firewall rule with the
name of the load balancer as the source and the instance tag as the destination.
Answer: C. Ensure that a firewall rule exists to allow load balancer health checks to reach the instances in the instance group.
https://cloud.google.com/vpc/docs/using-firewalls
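A hedged example of such a rule, using the documented health-check source ranges (the rule name, network, port, and target tag are placeholders):

# Allow Google Cloud health-check probes to reach the backend instances.
gcloud compute firewall-rules create allow-lb-health-checks \
    --network=default --action=ALLOW --rules=tcp:80 \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --target-tags=web-backend

Without this rule the health checks fail, the instances are marked unhealthy, and the autoscaler keeps recreating them.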
Your company wants to try out the cloud with low risk. They want to archive approximately 100 TB of their log data to the cloud and test the analytics features available to them there, while also retaining that data as a long-term disaster recovery backup. Which two steps should they take? Choose 2 answers
A.
Load logs into Google BigQuery.
B.
Load logs into Google Cloud SQL.
C.
Import logs into Google Stackdriver.
D.
Insert logs into Google Cloud Bigtable.
E.
Upload log files into Google Cloud Storage.
Answer: A, E. Load logs into Google BigQuery, and upload the log files into Google Cloud Storage.
Cloud Storage provides durable, low-cost storage for the 100 TB archive and serves as the long-term disaster recovery copy, while BigQuery provides the analytics they want to test; Stackdriver is not designed to retain 100 TB of historical log files long term.
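A rough sketch of both steps follows; the bucket, dataset, table, file paths, schema file, and storage class are placeholders, and the right storage class and load format depend on the actual log data.

# Archive the raw log files in Cloud Storage for long-term retention / DR.
gsutil mb -c NEARLINE -l us-central1 gs://example-log-archive
gsutil -m cp -r ./logs gs://example-log-archive/

# Load the same logs into BigQuery to try out the analytics features.
bq mk log_analysis
bq load --source_format=CSV log_analysis.events \
    "gs://example-log-archive/logs/*.csv" ./events_schema.json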
You are designing a mobile chat application. You want to ensure people cannot spoof chat messages by proving that a message was sent by a specific user.
What should you do?
A.
Tag messages client side with the originating user identifier and the destination user.
B.
Encrypt the message client side using block-based encryption with a shared key.
C.
Use public key infrastructure (PKI) to encrypt the message client side using the originating user's private
key.
D.
Use a trusted certificate authority to enable SSL connectivity between the client application and the
server.
Answer: C. Use public key infrastructure (PKI) to encrypt the message client side using the originating user's private key.
Encrypting (in effect, signing) a message with the sender's private key lets any recipient verify it with the sender's public key, which proves which user sent the message. SSL/TLS between the client and the server only protects the transport; it does not establish which user authored a particular message, so it cannot prevent spoofing on its own.
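As a minimal illustration of the idea using OpenSSL (the key and file names are placeholders; a real chat client would do the equivalent in its own code):

# Sender: sign the message with their private key.
openssl dgst -sha256 -sign sender_private.pem -out message.sig message.txt

# Recipient: verify the signature with the sender's public key.
openssl dgst -sha256 -verify sender_public.pem \
    -signature message.sig message.txt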
Your company has successfully migrated to the cloud and wants to analyze their data stream to optimize
operations. They do not have any existing code for this analysis, so they are exploring all their options. These
options include a mix of batch and stream processing, as they are running some hourly jobs and
live-processing some data as it comes in. Which technology should they use for this?
A.
Google Cloud Dataproc
B.
Google Cloud Dataflow
C.
Google Container Engine with Bigtable
D.
Google Compute Engine with Google BigQuery
Answer: B. Google Cloud Dataflow.
Cloud Dataflow handles both batch and stream processing under a single programming model, which fits the mix of hourly jobs and live processing of incoming data.
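As a small, hedged example of launching a Dataflow job from one of Google's public templates (the job name, output bucket, and region are placeholders; custom batch and streaming pipelines would normally be written with Apache Beam and run on Dataflow):

# Run the public Word Count template as a quick Dataflow test job.
gcloud dataflow jobs run wordcount-test \
    --gcs-location=gs://dataflow-templates/latest/Word_Count \
    --region=us-central1 \
    --parameters=inputFile=gs://dataflow-samples/shakespeare/kinglear.txt,output=gs://example-bucket/wordcount/out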