Google Associate Data Practitioner Practice Exams
Last updated: 27.04.2025
- Exam code: Associate Data Practitioner
- Exam name: Google Cloud Associate Data Practitioner (ADP Exam)
- Certification provider: Google
Your company has developed a website that allows users to upload and share video files. These files are most frequently accessed and shared when they are initially uploaded. Over time, the files are accessed and shared less frequently, although some old video files may remain very popular.
You need to design a storage system that is simple and cost-effective.
What should you do?
- A . Create a single-region bucket with Autoclass enabled.
- B . Create a single-region bucket. Configure a Cloud Scheduler job that runs every 24 hours and changes the storage class based on upload date.
- C . Create a single-region bucket with custom Object Lifecycle Management policies based on upload date.
- D . Create a single-region bucket with Archive as the default storage class.
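If Autoclass (option A) is the intended design, a minimal sketch with the google-cloud-storage Python client might look like the following; the project and bucket names are placeholders, and the `autoclass_enabled` property assumes a recent client library version:

```python
from google.cloud import storage

# Hypothetical project and bucket names for illustration.
client = storage.Client(project="my-project")
bucket = client.bucket("video-uploads-example")
bucket.autoclass_enabled = True  # Autoclass transitions objects between classes based on access
client.create_bucket(bucket, location="us-central1")  # single-region bucket
```

With Autoclass, Cloud Storage moves each object to colder or warmer storage classes automatically as its access pattern changes, so popular old videos stay in a hot class without any scheduled jobs or custom lifecycle rules.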
Your company uses Looker as its primary business intelligence platform. You want to use LookML to visualize the profit margin for each of your company’s products in your Looker Explores and dashboards. You need to implement a solution quickly and efficiently.
What should you do?
- A . Create a derived table that pre-calculates the profit margin for each product, and include it in the Looker model.
- B . Define a new measure that calculates the profit margin by using the existing revenue and cost fields.
- C . Create a new dimension that categorizes products based on their profit margin ranges (e.g., high, medium, low).
- D . Apply a filter to only show products with a positive profit margin.
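For reference, option B amounts to a single LookML measure. A minimal sketch, assuming existing measures named `total_revenue` and `total_cost` (both names are assumptions):

```lookml
measure: profit_margin {
  type: number
  value_format_name: percent_2
  sql: (${total_revenue} - ${total_cost}) / NULLIF(${total_revenue}, 0) ;;
}
```

A measure defined this way is immediately available in every Explore and dashboard built on the model, with no derived table to maintain.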
You manage a large amount of data in Cloud Storage, including raw data, processed data, and backups. Your organization is subject to strict compliance regulations that mandate data immutability for specific data types. You want to use an efficient process to reduce storage costs while ensuring that your storage strategy meets retention requirements.
What should you do?
- A . Configure lifecycle management rules to transition objects to appropriate storage classes based on access patterns. Set up Object Versioning for all objects to meet immutability requirements.
- B . Move objects to different storage classes based on their age and access patterns. Use Cloud Key Management Service (Cloud KMS) to encrypt specific objects with customer-managed encryption keys (CMEK) to meet immutability requirements.
- C . Create a Cloud Run function to periodically check object metadata, and move objects to the appropriate storage class based on age and access patterns. Use object holds to enforce immutability for specific objects.
- D . Use object holds to enforce immutability for specific objects, and configure lifecycle management rules to transition objects to appropriate storage classes based on age and access patterns.
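A sketch of how option D could be implemented with the google-cloud-storage client; the bucket name, object path, ages, and target storage classes are illustrative assumptions:

```python
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("compliance-data-example")  # hypothetical bucket

# Lifecycle rules: transition objects to colder storage classes as they age.
bucket.add_lifecycle_set_storage_class_rule("NEARLINE", age=30)
bucket.add_lifecycle_set_storage_class_rule("COLDLINE", age=365)
bucket.patch()

# Immutability: an event-based hold blocks deletion or overwrite of this object
# (and pauses retention-period countdowns) until the hold is released.
blob = bucket.blob("raw/2024/records.avro")  # hypothetical object
blob.event_based_hold = True
blob.patch()
```

Lifecycle management handles the cost side declaratively, while object holds satisfy the immutability mandate for only the data types that need it.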
Another team in your organization is requesting access to a BigQuery dataset. You need to share the dataset with the team while minimizing the risk of unauthorized copying of data. You also want to create a reusable framework in case you need to share this data with other teams in the future.
What should you do?
- A . Create authorized views in the team’s Google Cloud project that is only accessible by the team.
- B . Create a private exchange using Analytics Hub with data egress restriction, and grant access to the team members.
- C . Enable domain restricted sharing on the project. Grant the team members the BigQuery Data Viewer IAM role on the dataset.
- D . Export the dataset to a Cloud Storage bucket in the team’s Google Cloud project that is only accessible by the team.
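For context on option B, a private exchange can be created programmatically. A rough sketch assuming the google-cloud-bigquery-analyticshub client library; the project, location, exchange ID, and display name are all placeholders:

```python
from google.cloud import bigquery_analyticshub_v1 as ah

client = ah.AnalyticsHubServiceClient()
request = ah.CreateDataExchangeRequest(
    parent="projects/my-project/locations/us",  # hypothetical project and location
    data_exchange_id="internal_sales",
    data_exchange=ah.DataExchange(display_name="Internal sales data"),
)
exchange = client.create_data_exchange(request=request)
print(exchange.name)
# Listings published to this exchange can enable restricted_export_config
# to block subscribers from copying or exporting the shared data.
```

Once the exchange exists, publishing additional datasets to other teams is a matter of adding listings and granting subscriber access, which is what makes this approach reusable.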
You have a BigQuery dataset containing sales data. This data is actively queried for the first 6 months. After that, the data is not queried but needs to be retained for 3 years for compliance reasons. You need to implement a data management strategy that meets access and compliance requirements, while keeping cost and administrative overhead to a minimum.
What should you do?
- A . Use BigQuery long-term storage for the entire dataset. Set up a Cloud Run function to delete the data from BigQuery after 3 years.
- B . Partition a BigQuery table by month. After 6 months, export the data to Coldline storage. Implement a lifecycle policy to delete the data from Cloud Storage after 3 years.
- C . Set up a scheduled query to export the data to Cloud Storage after 6 months. Write a stored procedure to delete the data from BigQuery after 3 years.
- D . Store all data in a single BigQuery table without partitioning or lifecycle policies.
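A sketch of the moving parts in option B; the project, dataset, table, bucket, and partition are placeholders, and the bucket is assumed to have Coldline as its default storage class:

```python
from google.cloud import bigquery, storage

bq = bigquery.Client()

# Export a month partition that has aged past 6 months, using the
# partition decorator syntax (table$YYYYMM for monthly partitioning).
extract_job = bq.extract_table(
    "my-project.sales.transactions$202401",
    "gs://sales-archive-example/202401/*.avro",
    job_config=bigquery.ExtractJobConfig(destination_format="AVRO"),
)
extract_job.result()

# Lifecycle policy: delete archived files roughly 3 years after creation.
bucket = storage.Client().get_bucket("sales-archive-example")
bucket.add_lifecycle_delete_rule(age=1095)
bucket.patch()
```

The lifecycle delete rule means the 3-year retention cleanup requires no ongoing jobs or stored procedures once the export is in place.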
You have millions of customer feedback records stored in BigQuery. You want to summarize the data by using the large language model (LLM) Gemini. You need to plan and execute this analysis using the most efficient approach.
What should you do?
- A . Query the BigQuery table from within a Python notebook, use the Gemini API to summarize the data within the notebook, and store the summaries in BigQuery.
- B . Use a BigQuery ML model to pre-process the text data, export the results to Cloud Storage, and use the Gemini API to summarize the pre-processed data.
- C . Create a BigQuery Cloud resource connection to a remote model in Vertex AI, and use Gemini to summarize the data.
- D . Export the raw BigQuery data to a CSV file, upload it to Cloud Storage, and use the Gemini API to summarize the data.
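Option C keeps the data in place and lets BigQuery ML call Gemini through a remote model. A minimal sketch, assuming an existing Cloud resource connection; the dataset, connection, model, and column names are placeholders, and the endpoint name may vary by release:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Register a remote model backed by a Gemini endpoint in Vertex AI.
client.query("""
    CREATE OR REPLACE MODEL `my_dataset.gemini_model`
    REMOTE WITH CONNECTION `us.my_gemini_connection`
    OPTIONS (ENDPOINT = 'gemini-1.5-flash')
""").result()

# Summarize feedback rows in place; no data leaves BigQuery.
summaries = client.query("""
    SELECT ml_generate_text_llm_result AS summary
    FROM ML.GENERATE_TEXT(
      MODEL `my_dataset.gemini_model`,
      (SELECT CONCAT('Summarize this customer feedback: ', feedback_text) AS prompt
       FROM `my_dataset.customer_feedback`),
      STRUCT(TRUE AS flatten_json_output))
""").result()
for row in summaries:
    print(row.summary)
```

Because the millions of records never leave BigQuery, this avoids the export, upload, and orchestration overhead of the notebook and CSV approaches.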