Amazon MLA-C01 Practice Exams
Last updated: 26.04.2025
- Exam code: MLA-C01
- Exam name: AWS Certified Machine Learning Engineer - Associate
- Certification provider: Amazon
What is the first step in building a machine learning application?
- A . Preparing the data
- B . Training the model
- C . Formulating the problem
- D . Testing the model
You are a data scientist at a marketing agency tasked with creating a sentiment analysis model to analyze customer reviews for a new product. The company wants to quickly deploy a solution with minimal training time and development effort. You decide to leverage a pre-trained natural language processing (NLP) model and fine-tune it using a custom dataset of labeled customer reviews. Your team has access to both Amazon Bedrock and SageMaker JumpStart.
Which approach is the MOST APPROPRIATE for fine-tuning the pre-trained model with your custom dataset?
- A . Use SageMaker JumpStart to create a custom container for your pre-trained model and manually implement fine-tuning with TensorFlow
- B . Use SageMaker JumpStart to deploy a pre-trained NLP model and use the built-in fine-tuning functionality with your custom dataset to create a customized sentiment analysis model
- C . Use Amazon Bedrock to train a model from scratch using your custom dataset, as Bedrock is optimized for training large models efficiently
- D . Use Amazon Bedrock to select a foundation model from a third-party provider, then fine-tune the model directly in the Bedrock interface using your custom dataset
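The core idea this question tests is fine-tuning: a pre-trained model's representation is kept frozen while a small task-specific head is trained on the custom labeled data. The sketch below illustrates that idea locally with a hypothetical frozen "featurizer" and a logistic-regression head; it is a conceptual stand-in only, not SageMaker JumpStart or Bedrock code.

```python
# Toy illustration of fine-tuning: a fixed, "pre-trained" feature
# extractor stays frozen while a task-specific head is trained on the
# custom labeled reviews. (Conceptual sketch; real fine-tuning here
# would go through SageMaker JumpStart's or Bedrock's built-in flows.)
import math

# Hypothetical frozen featurizer: crude lexical scores standing in for
# a real NLP embedding.
POSITIVE = {"great", "love", "excellent"}
NEGATIVE = {"bad", "terrible", "hate"}

def featurize(review):
    tokens = [t.strip(".,!") for t in review.lower().split()]
    return [float(sum(t in POSITIVE for t in tokens)),
            float(sum(t in NEGATIVE for t in tokens)),
            1.0]  # bias term

# Custom labeled dataset (1 = positive sentiment).
data = [("I love this product, it is great", 1),
        ("terrible quality, I hate it", 0),
        ("excellent value, love it", 1),
        ("bad experience, terrible support", 0)]

# Train only the head (logistic regression) by gradient ascent on the
# log-likelihood; the featurizer above is never updated.
w = [0.0, 0.0, 0.0]
for _ in range(500):
    for review, label in data:
        x = featurize(review)
        z = sum(wi * xi for wi, xi in zip(w, x))
        p = 1.0 / (1.0 + math.exp(-z))
        for i in range(3):
            w[i] += 0.1 * (label - p) * x[i]

def predict(review):
    z = sum(wi * xi for wi, xi in zip(w, featurize(review)))
    return 1 if z > 0 else 0
```

Because the frozen featurizer already "knows" the domain, only a handful of labeled examples are needed to adapt the head, which is exactly why fine-tuning beats training from scratch for this scenario.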
How do financial institutions use machine learning with mobile check deposits?
- A . To automate logistics
- B . To improve ride-sharing apps
- C . To reduce wait times
- D . To recognize content
You are a machine learning engineer working for an e-commerce company. You have developed a recommendation model that predicts products customers are likely to buy based on their browsing
history and past purchases. The model initially performs well, but after deploying it in production, you notice two issues: the model’s performance degrades over time as new data is added (catastrophic forgetting) and the model shows signs of overfitting during retraining on updated datasets.
Given these challenges, which of the following strategies is the MOST LIKELY to help prevent overfitting and catastrophic forgetting while maintaining model accuracy?
- A . Reduce the model complexity by decreasing the number of features, apply data augmentation to handle underfitting, and leverage L1 regularization to address catastrophic forgetting
- B . Apply L2 regularization to reduce overfitting, use dropout to prevent underfitting, and retrain the model on the entire dataset periodically to avoid catastrophic forgetting
- C . Use early stopping during training to prevent overfitting, incorporate new data incrementally through transfer learning to mitigate catastrophic forgetting, and apply L1 regularization to ensure feature selection
- D . Regularly update the training dataset with new data, apply L2 regularization to manage overfitting, and use an ensemble of models to prevent catastrophic forgetting
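The L2 regularization mentioned in several options can be demonstrated with a minimal sketch: adding a penalty on the weight's magnitude shrinks the fitted coefficient, trading a little bias for reduced variance (less overfitting). The closed-form one-dimensional ridge fit below is illustrative only; the data values are made up.

```python
# Minimal sketch of L2 (ridge) regularization in one dimension.
# OLS minimizes sum((y - w*x)^2); ridge adds lam*w^2, which shrinks
# the learned weight toward zero and reduces overfitting variance.
def fit(xs, ys, lam=0.0):
    """Closed-form 1-D linear fit: w = sum(x*y) / (sum(x^2) + lam)."""
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

# Tiny noisy dataset where y is roughly 2*x plus noise.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 4.3, 5.8, 8.4]

w_ols   = fit(xs, ys)           # unregularized weight
w_ridge = fit(xs, ys, lam=5.0)  # L2-regularized weight

# The ridge weight is strictly smaller in magnitude.
print(round(w_ols, 3), round(w_ridge, 3))  # → 2.057 1.763
```

The same shrinkage logic generalizes to many weights, which is why L2 regularization is the standard first tool against overfitting during retraining.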
You are a Data Scientist working for a retail company and you have been tasked with developing a demand forecasting model using Amazon SageMaker. The company requires the model to be highly accurate to optimize inventory levels, but you also need to consider constraints on training time and cost due to budget limitations. You have access to multiple SageMaker instance types and options like spot instances to reduce costs, but you must balance these factors against the need for a performant model. Your goal is to choose a configuration that provides an acceptable tradeoff between model performance, training time, and cost.
Which of the following strategies should you consider when balancing model performance, training time, and cost in this scenario using Amazon SageMaker? (Select two)
- A . Implement distributed training across multiple smaller instances to balance training time and cost while maintaining model performance
- B . Deploy multiple models with different instance types simultaneously, choosing the one that completes training first, regardless of performance or cost
- C . Optimize hyperparameters using Amazon SageMaker’s automatic model tuning (hyperparameter optimization) to improve performance, while using spot instances to reduce cost
- D . Use a smaller instance type to save on costs, and accept longer training times, as model accuracy is not the highest priority
- E . Use the largest available instance type to minimize training time, regardless of cost, ensuring that the model is trained as quickly as possible
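The tuning idea behind this question, searching a hyperparameter space under a fixed trial budget and keeping the best configuration, can be sketched locally. The toy validation loss below stands in for a real SageMaker training job, and random search stands in for the tuner (SageMaker's automatic model tuning also supports Bayesian optimization); all names and ranges are illustrative.

```python
# Sketch of budgeted hyperparameter search: evaluate random
# configurations, keep the best. Each call to validation_loss stands
# in for one (possibly spot-instance) training job.
import random

def validation_loss(learning_rate, depth):
    # Hypothetical objective with a known optimum near lr=0.1, depth=6.
    return (learning_rate - 0.1) ** 2 + 0.01 * (depth - 6) ** 2

def random_search(budget, seed=0):
    rng = random.Random(seed)
    best_cfg, best_loss = None, float("inf")
    for _ in range(budget):
        cfg = {"learning_rate": rng.uniform(0.001, 1.0),
               "depth": rng.randint(2, 12)}
        loss = validation_loss(**cfg)
        if loss < best_loss:
            best_cfg, best_loss = cfg, loss
    return best_cfg, best_loss

cfg, loss = random_search(budget=50)
print(cfg, round(loss, 4))
```

The budget parameter is where the cost/performance tradeoff lives: more trials generally find better configurations but cost more, which is why pairing tuning with spot instances is attractive.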
You are a DevOps engineer at a tech company that is building a scalable microservices-based application. The application is composed of several containerized services, each responsible for different parts of the application, such as user authentication, data processing, and recommendation systems. The company wants to standardize and automate the deployment and management of its infrastructure using Infrastructure as Code (IaC). You need to choose between AWS CloudFormation and AWS Cloud Development Kit (CDK) for defining the infrastructure. Additionally, you must decide on the appropriate AWS container service to manage and deploy these microservices efficiently.
Given the requirements, which combination of IaC option and container service is MOST SUITABLE for this scenario, and why?
- A . Use AWS CloudFormation with YAML templates for infrastructure automation and deploy the containerized microservices using Amazon Lightsail Containers to simplify management and reduce costs
- B . Use AWS CloudFormation to define and deploy the infrastructure as code, and Amazon ECR (Elastic Container Registry) with Fargate for running the containerized microservices without needing to manage the underlying servers
- C . Use AWS CDK for infrastructure as code, allowing you to define the infrastructure in a high-level programming language, and deploy the containerized microservices using Amazon EKS (Elastic Kubernetes Service) for advanced orchestration and scalability
- D . Use AWS CDK with Amazon ECS on EC2 instances to combine the flexibility of programming languages with direct control over the underlying server infrastructure for the microservices
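One of the combinations contrasted above, CloudFormation defining an ECS service on Fargate, can be sketched as a template fragment. This is an abridged, illustrative sketch: the names, image URI, and subnet are placeholders, and a complete template would also need IAM execution roles and security groups.

```yaml
# Abridged CloudFormation sketch: an ECS cluster running one
# containerized microservice on Fargate, pulling its image from ECR.
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  AppCluster:
    Type: AWS::ECS::Cluster

  AuthTaskDef:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: auth-service
      Cpu: "256"
      Memory: "512"
      NetworkMode: awsvpc
      RequiresCompatibilities: [FARGATE]
      ContainerDefinitions:
        - Name: auth
          Image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/auth-service:latest
          PortMappings:
            - ContainerPort: 8080

  AuthService:
    Type: AWS::ECS::Service
    Properties:
      Cluster: !Ref AppCluster
      TaskDefinition: !Ref AuthTaskDef
      DesiredCount: 2
      LaunchType: FARGATE
      NetworkConfiguration:
        AwsvpcConfiguration:
          Subnets:
            - subnet-placeholder
```

Note there is no EC2 instance anywhere in the template: with Fargate the capacity is managed by AWS, which is the "without managing the underlying servers" property the question highlights.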
You are a machine learning engineer responsible for deploying a customer churn prediction model using Amazon SageMaker into production at a telecommunications company. The model is critical for proactive customer retention efforts, so maintaining high availability and reliability is essential. Given the model’s importance, you must implement best practices for deployment, including versioning and rollback strategies, to ensure that any issues with the new model version can be quickly addressed without impacting the business.
Which of the following approaches BEST exemplifies deployment best practices in this scenario?
- A . Version the model by tagging the new version, deploy it to production, and use weighted traffic splitting to send a small percentage of traffic to the new model. If no issues are detected, gradually increase traffic to the new version
- B . Use a blue/green deployment strategy to deploy the new model version in parallel with the existing version, allowing you to switch traffic to the new model gradually and roll back if necessary
- C . Deploy the new model version to a staging environment, test it thoroughly, and then manually replace the existing production model. If any issues occur, update the model in staging and redeploy
- D . Deploy the new model version directly to production, replacing the existing model, and monitor its performance closely. If issues arise, retrain the model immediately and redeploy
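The traffic-shifting mechanic behind the canary and blue/green options can be sketched locally: requests are split between the current ("blue") and new ("green") variant by a weight that is raised gradually and dropped back to zero for an instant rollback. This is a hand-rolled stand-in for illustration; on SageMaker the same effect is achieved with endpoint production variants and their weights, not custom routing code.

```python
# Sketch of weighted blue/green traffic splitting with rollback.
import random

class WeightedRouter:
    def __init__(self, green_weight=0.0, seed=0):
        self.green_weight = green_weight  # fraction sent to the new model
        self._rng = random.Random(seed)

    def route(self):
        """Pick a variant for one request according to the weight."""
        return "green" if self._rng.random() < self.green_weight else "blue"

    def shift(self, green_weight):
        """Canary step forward, or rollback by shifting back to 0.0."""
        self.green_weight = green_weight

router = WeightedRouter(green_weight=0.1)  # 10% canary on the new version
sample = [router.route() for _ in range(1000)]
print(sample.count("green"))               # roughly 100 of the 1000 requests

router.shift(0.0)                          # rollback: everything back to blue
```

Because the old variant keeps serving throughout, a bad new version is caught on a small traffic slice and rolled back by a weight change, with no redeployment needed.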
You are working on a machine learning project for a financial services company, developing a model to predict credit risk. After deploying the initial version of the model using Amazon SageMaker, you find that its performance, measured by the AUC (Area Under the Curve), is not meeting the company’s accuracy
requirements. Your team has gathered more data and believes that the model can be further optimized. You are considering various methods to improve the model’s performance, including feature engineering, hyperparameter tuning, and trying different algorithms. However, given the limited time and computational resources, you need to prioritize the most impactful strategies.
Which of the following approaches are the MOST LIKELY to lead to a significant improvement in model performance? (Select two)
- A . Increase the size of the training dataset by incorporating synthetic data and then retrain the existing model
- B . Perform hyperparameter tuning using Bayesian optimization and increase the number of trials to explore a broader search space
- C . Switch to a more complex algorithm, such as deep learning, and use transfer learning to leverage pre-trained models
- D . Use Amazon SageMaker Debugger to debug and improve model performance by addressing underlying problems such as overfitting, saturated activation functions, and vanishing gradients
- E . Focus on feature engineering by creating