Exam Questions & Answers Fully Explained
What information is required to connect to an on-premises network router over VPN
using Cloud Router for dynamic routing?
Choose 3 correct answers:
[ ] A) Remote Router DNS Name
[ ] B) Remote Router (Peer) IP Address
[ ] C) Shared Secret
[ ] D) Border Gateway Protocol (BGP) Address
Correct answers: B, C, and D
Using Cloud Router for dynamic routing requires a BGP address along with the peer
address and the shared secret for secure access.
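To make the three pieces concrete, here is a minimal sketch driving gcloud from Python (the router, network, gateway, and interface names, the ASNs, and the addresses are hypothetical placeholders, and the router interface for the BGP peer is assumed to exist already):
```python
import subprocess

# Hypothetical placeholder values -- substitute your own environment's details.
REGION = "us-central1"
PEER_IP = "203.0.113.10"          # B) Remote router (peer) IP address
SHARED_SECRET = "my-ike-secret"   # C) Shared secret for the IKE handshake
PEER_BGP_ASN = "65002"            # On-premises router's BGP ASN

def run(cmd):
    """Run a gcloud command, raising if it fails."""
    subprocess.run(cmd, check=True)

# A Cloud Router that speaks BGP on the Google side.
run(["gcloud", "compute", "routers", "create", "my-router",
     "--network", "my-network", "--asn", "65001", "--region", REGION])

# The VPN tunnel needs the peer IP address and the shared secret.
run(["gcloud", "compute", "vpn-tunnels", "create", "my-tunnel",
     "--peer-address", PEER_IP,
     "--shared-secret", SHARED_SECRET,
     "--router", "my-router",
     "--target-vpn-gateway", "my-gateway",
     "--region", REGION])

# D) The BGP peering ties the tunnel to dynamic route exchange.
run(["gcloud", "compute", "routers", "add-bgp-peer", "my-router",
     "--peer-name", "on-prem-peer",
     "--interface", "my-interface",
     "--peer-ip-address", "169.254.1.2",  # BGP session (link-local) address
     "--peer-asn", PEER_BGP_ASN,
     "--region", REGION])
```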
You want to ensure Dress4Win's sales and tax records remain available for infrequent
viewing by auditors for at least 10 years. Cost optimization is your top priority. Which
cloud services should you choose?
[ ] A) Google Bigtable with US or EU as location to store the data, and gcloud to access
the data.
[ ] B) BigQuery to store the data, and a web server cluster in a managed instance group
to access the data.
[ ] C) Google Cloud SQL mirrored across two distinct regions to store the data, and a
Redis cluster in a managed instance group to access the data.
[ ] D) Google Cloud Storage Nearline to store the data, and gsutil to access the data.
[ ] E) Google Cloud Storage Coldline to store the data, and gsutil to access the data.
Correct Answer: E
Feedback:
A, B, and C are not suitable for this type of task ("infrequent viewing by auditors for at
least 10 years"), and they are not cost-effective, either.
E (Correct answer) - "infrequent viewing by auditors" and "for at least 10 years" fit the
usage pattern for Coldline, and its lowest storage cost meets the requirement "Cost
optimization is your top priority".
Explanation:
This is about choosing a storage solution for backup or archiving. Depending on the
required access frequency, which in turn impacts the cost, you can choose between
Nearline and Coldline (see the code sketch below the linked chart).
https://cloud.google.com/images/storage/storage-classes-desktop.svg
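As a hedged illustration of the winning option, a minimal sketch using the google-cloud-storage Python client (the bucket name is hypothetical, and the ten-year lifecycle rule is one optional way to expire the data once the retention requirement is met, not part of the question):
```python
from google.cloud import storage  # pip install google-cloud-storage

client = storage.Client()

# Hypothetical bucket for the audit records; Coldline minimizes storage
# cost for data read at most about once a year.
bucket = client.bucket("dress4win-audit-records")
bucket.storage_class = "COLDLINE"
new_bucket = client.create_bucket(bucket, location="US")

# Optionally delete objects after ~10 years (3,650 days), once the
# "at least 10 years" requirement has been satisfied.
new_bucket.add_lifecycle_delete_rule(age=3650)
new_bucket.patch()

print(f"Created {new_bucket.name} with class {new_bucket.storage_class}")
```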
Mountkirk Games has deployed their new backend on Google Cloud Platform (GCP).
You want to create a thorough testing process for new versions of the backend before
they are released to the public. You want the testing environment to scale in an
economical way.
How should you design the process?
[ ] A) Create a scalable environment in GCP for simulating production load.
[ ] B) Build stress tests into each component of your application using resources internal
to GCP to simulate load.
[ ] C) Use the existing infrastructure to test the GCP-based backend at scale.
[ ] D) Create a set of static environments in GCP to test different levels of load - for
example, high, medium, and low.
Correct Answer: A
Feedback
A) (Correct Answer) With disposable and repeatable testing resources, you can run a
load test whenever needed, then shut down, stop, or simply delete and recreate the
environment based on your test plans to keep costs low (see the sketch after this
feedback).
It meets the requirements to "create a thorough testing process for new versions of the
backend before they are released to the public" and for a "testing environment to scale
in an economical way". Doing thorough testing on production infrastructure is risky to
other running applications, is not feasible, and does not scale economically.
B) This is neither scalable nor economical, and it is too complicated to implement.
C) At first glance, reusing existing environments seems scalable, economical, and
realistic. But reading the case study again, we know Mountkirk Games is a popular
gaming platform targeting global users with very high traffic and heavy load. Load
testing on production is therefore not an option, nor does mixing production and testing
load scale in an economical way. Compared with creating a disposable, repeatable
testing environment that simulates production load and executes test plans on demand,
Answer A is the winner.
D) This is not scalable or economical.
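A minimal sketch of such a disposable environment, driving gcloud from Python (the zone, instance-template and group names, and the size are hypothetical, and the template is assumed to exist already):
```python
import subprocess

ZONE = "us-central1-a"           # hypothetical zone
TEMPLATE = "backend-loadtest"    # hypothetical instance template
GROUP = "loadtest-mig"           # hypothetical managed instance group name

def gcloud(*args):
    """Run a gcloud compute command, raising if it fails."""
    subprocess.run(["gcloud", "compute", *args], check=True)

# Spin up a managed instance group sized for the load to simulate.
gcloud("instance-groups", "managed", "create", GROUP,
       "--template", TEMPLATE, "--size", "10", "--zone", ZONE)

# ... run the load-test plan against the group here ...

# Tear everything down afterwards so you only pay while tests run.
gcloud("instance-groups", "managed", "delete", GROUP,
       "--zone", ZONE, "--quiet")
```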
You have been asked to select the storage system for the click-data of your company's
large portfolio of websites. This data is streamed in from a custom website analytics
package at a typical rate of 6,000 clicks per minute, with bursts of up to 8,500 clicks per
second. It must be stored for future analysis by your data science and user experience
teams.
Which storage infrastructure should you choose?
[ ] A) Google Cloud Datastore
[ ] B) Google Cloud SQL
[ ] C) Google Cloud Bigtable
[ ] D) Google Cloud Storage
Correct Answer: C
Feedback
A) Does not meet the requirement "It must be stored for future analysis by your data
science and user experience teams." Google Cloud Datastore is a NoSQL document
database built for automatic scaling, high performance, and ease of application
development, and it integrates well with App Engine.
Datastore: A scalable, fully-managed NoSQL document database for your web and
mobile applications.
Good for:
- Semi-structured application data
- Hierarchical data
- Durable key-value data
Workloads:
- User profiles
- Product catalogs
- Game state
B) Cloud SQL is mainly for OLTP (transactional, CRUD) workloads, not for ingesting
and storing streaming data. It does not have the scalability and elasticity to absorb this
amount of data in real time.
C) (Correct Answer) The data is IoT-like in nature and it will be used for analytics (see
the sketch after this feedback).
Bigtable: A scalable, fully-managed NoSQL wide-column database that is suitable for
both real-time access and analytics workloads. Bigtable is ideal for very large NoSQL
datasets and is useful for high-speed transactions and analysis. It integrates well with
ML, Dataproc, and analytics tools.
Good for:
- Low-latency read/write access
- High-throughput analytics
- Native time series support
Workloads:
- IoT, finance, adtech
- Personalization, recommendations
- Monitoring
- Geospatial datasets
- Graphs
Although both Datastore and Bigtable are NoSQL databases, only Bigtable can support
over a petabyte of data and is useful for high-speed analytics as well, whereas
Datastore is not.
D) GCS is ideal for object storage and has pretty good scalability, but it is not suitable
for spiky, IoT-style streaming data. Its buckets initially support roughly 1,000 writes per
second and then scale as needed: as the request rate for a given bucket grows, Cloud
Storage gradually increases that bucket's I/O capacity.
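As a hedged sketch of the winning option, writing click events to Bigtable with the google-cloud-bigtable Python client (the project, instance, table, and column-family names are hypothetical, and the row-key layout is just one common time-series design):
```python
import datetime
from google.cloud import bigtable  # pip install google-cloud-bigtable

# Hypothetical project/instance/table; the table and its "click" column
# family are assumed to have been created already.
client = bigtable.Client(project="my-project", admin=False)
table = client.instance("click-analytics").table("clicks")

def write_click(site: str, user_id: str, url: str) -> None:
    """Write one click event. Leading the row key with the site (not the
    timestamp) spreads writes across nodes while keeping each site's
    clicks sorted by time."""
    now = datetime.datetime.utcnow()
    row_key = f"{site}#{now.isoformat()}#{user_id}".encode()
    row = table.direct_row(row_key)
    row.set_cell("click", "url", url.encode(), timestamp=now)
    row.set_cell("click", "user", user_id.encode(), timestamp=now)
    row.commit()

write_click("example.com", "user-42", "/pricing")
```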
Over time, you've created 5 snapshots of a single instance. To save space, you delete
snapshots number 3 and 4. What has happened to the fifth snapshot?
[ ] A) The data from both snapshots 3 and 4 necessary for continuance are transferred
to snapshot 5.
[ ] B) It is no longer usable and cannot restore data.
[ ] C) All later snapshots, including 5, are automatically deleted as well.
[ ] D) The data from snapshot 4 necessary for continuance was transferred to snapshot
5; however, snapshot 3's contents were transferred to snapshot 2.
Correct Answer: A
Explanation
Deleting a snapshot:
https://cloud.google.com/compute/docs/disks/restore-and-delete-snapshots
When you delete a snapshot, Compute Engine immediately marks the snapshot as
DELETED in the system. If the snapshot has no dependent snapshots, it is deleted
outright. However, if the snapshot does have dependent snapshots:
1) Any data that is required for restoring other snapshots is moved into the next
snapshot, increasing its size.
2) Any data that is not required for restoring other snapshots is deleted. This lowers the
total size of all your snapshots.
3) The next snapshot no longer references the snapshot marked for deletion, and
instead references the snapshot before it.
Because subsequent snapshots might require information stored in a previous
snapshot, keep in mind that deleting a snapshot does not necessarily delete all the data
on the snapshot. As mentioned in the first item above, if any data on a snapshot that is
marked for deletion is needed for restoring subsequent snapshots, that data is moved
into the next corresponding snapshot. To definitively delete data from your snapshots,
you should delete all snapshots.
The linked diagram below illustrates the process described above:
https://cloud.google.com/compute/images/deleting-snapshot.png
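To make this concrete, a minimal sketch (the snapshot names are hypothetical) of deleting the middle snapshots with gcloud driven from Python; Compute Engine performs the data movement described above automatically:
```python
import subprocess

def gcloud_snapshots(*args):
    """Run a gcloud compute snapshots command, raising if it fails."""
    subprocess.run(["gcloud", "compute", "snapshots", *args], check=True)

# Delete snapshots 3 and 4 (hypothetical names); any blocks that
# snapshot 5 still needs are moved forward into snapshot 5 (increasing
# its size), so it stays fully restorable.
gcloud_snapshots("delete", "my-disk-snap-3", "my-disk-snap-4", "--quiet")

# Confirm which snapshots remain.
gcloud_snapshots("list", "--filter=name~my-disk-snap")
```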
A small number of API requests to your microservices-based application take a very
long time. You know that each request to the API can traverse many services. You want
to know which service takes the longest in those cases. What should you do?
[ ] A) Set timeouts on your application so that you can fail requests faster.
[ ] B) Instrument your application with Stackdriver Trace to break down the request
latencies at each microservice.
[ ] C) Send custom metrics for each of your requests to Stackdriver Monitoring.