Options: A:Use AWS DataSync to move the historical records from on-premises to AWS. Choose Amazon S3 Glacier Deep Archive to be the destination for the data.
B:Use AWS Storage Gateway to move the historical records from on-premises to AWS. Choose Amazon S3 Glacier Deep Archive to be the destination for the data.
C:Use AWS DataSync to move the historical records from on-premises to AWS. Choose Amazon S3 Standard to be the destination for the data. Modify the S3 lifecycle configuration to move the data from the Standard tier to Amazon S3 Glacier Deep Archive after 30 days.
D:Use AWS Storage Gateway to move the historical records from on-premises to AWS. Choose Amazon S3 Glacier to be the destination for the data. Modify the S3 lifecycle configuration to move the data from the Standard tier to Amazon S3 Glacier Deep Archive after 30 days.
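Note: options C and D both depend on an S3 lifecycle rule that transitions objects from the Standard tier to S3 Glacier Deep Archive after 30 days. A minimal sketch of such a rule using boto3, where the bucket name is a hypothetical placeholder:

    import boto3

    s3 = boto3.client("s3")

    # Transition every object to Glacier Deep Archive 30 days after creation.
    # "example-records-bucket" is a placeholder for illustration only.
    s3.put_bucket_lifecycle_configuration(
        Bucket="example-records-bucket",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "archive-after-30-days",
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},  # apply to the whole bucket
                    "Transitions": [
                        {"Days": 30, "StorageClass": "DEEP_ARCHIVE"}
                    ],
                }
            ]
        },
    )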
Options: A:Run an AWS Glue crawler that connects to one or more data stores, determines the data structures, and writes tables in the Data Catalog.
B:Use the AWS Glue console to manually create a table in the Data Catalog and schedule an AWS Lambda function to update the table partitions hourly.
C:Use the AWS Glue API CreateTable operation to create a table in the Data Catalog. Create an AWS Glue crawler and specify the table as the source.
D:Create an Apache Hive catalog in Amazon EMR with the table schema definition in Amazon S3, and update the table partition with a scheduled job. Migrate the Hive catalog to the Data Catalog.
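Note: option A describes the standard pattern: a crawler connects to a data store, infers the schema, and writes the tables into the Data Catalog. A minimal boto3 sketch, in which the role ARN, database name, and S3 path are hypothetical placeholders:

    import boto3

    glue = boto3.client("glue")

    # Create a crawler that scans an S3 prefix and populates the Data Catalog.
    # Role ARN, database name, and S3 path are placeholders for illustration.
    glue.create_crawler(
        Name="records-crawler",
        Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",
        DatabaseName="records_db",
        Targets={"S3Targets": [{"Path": "s3://example-records-bucket/data/"}]},
    )
    glue.start_crawler(Name="records-crawler")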
Which AWS Service would be very helpful in processing all this data?
Options: A:Amazon S3
B:AWS Data Pipeline
C:AWS Direct Connect
D:Amazon EMR
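Note: Amazon EMR (option D) is the managed big-data processing service among these choices. A minimal sketch of launching a transient Spark cluster with boto3; the cluster name and instance sizing are illustrative assumptions, while EMR_DefaultRole and EMR_EC2_DefaultRole are the standard default roles:

    import boto3

    emr = boto3.client("emr")

    # Launch a small transient cluster that terminates when its steps finish.
    response = emr.run_job_flow(
        Name="data-processing-cluster",  # placeholder name
        ReleaseLabel="emr-6.10.0",
        Applications=[{"Name": "Spark"}],
        Instances={
            "InstanceGroups": [
                {"Name": "Primary", "InstanceRole": "MASTER",
                 "InstanceType": "m5.xlarge", "InstanceCount": 1},
                {"Name": "Core", "InstanceRole": "CORE",
                 "InstanceType": "m5.xlarge", "InstanceCount": 2},
            ],
            "KeepJobFlowAliveWhenNoSteps": False,
        },
        ServiceRole="EMR_DefaultRole",
        JobFlowRole="EMR_EC2_DefaultRole",
    )
    print(response["JobFlowId"])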
The second part of the process takes longer than the first, so the company has decided to rewrite the application as two microservices running on Amazon ECS that can scale independently.
How should a solutions architect integrate the microservices?
Options: A:Implement code in microservice 1 to send data to an Amazon S3 bucket.
Use S3 event notifications to invoke microservice 2.
B:Implement code in microservice 1 to publish data to an Amazon SNS topic.
Implement code in microservice 2 to subscribe to this topic.
C:Implement code in microservice 1 to send data to Amazon Kinesis Data Firehose.
Implement code in microservice 2 to read from Kinesis Data Firehose.
D:Implement code in microservice 1 to send data to an Amazon SQS queue.
Implement code in microservice 2 to process messages from the queue.
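Note: option D decouples the two microservices with a queue, which lets each side scale independently and buffers work for the slower second stage. A minimal boto3 sketch of both sides, assuming a hypothetical queue name:

    import json

    import boto3

    sqs = boto3.client("sqs")
    # "stage-two-queue" is a placeholder queue name for illustration.
    queue_url = sqs.get_queue_url(QueueName="stage-two-queue")["QueueUrl"]

    # Microservice 1: hand off work as soon as the first stage finishes.
    sqs.send_message(QueueUrl=queue_url,
                     MessageBody=json.dumps({"record_id": "abc-123"}))

    # Microservice 2: long-poll for work, process it, then delete the message.
    response = sqs.receive_message(QueueUrl=queue_url,
                                   MaxNumberOfMessages=10,
                                   WaitTimeSeconds=20)
    for message in response.get("Messages", []):
        payload = json.loads(message["Body"])
        # ... run the slower second processing stage on payload ...
        sqs.delete_message(QueueUrl=queue_url,
                           ReceiptHandle=message["ReceiptHandle"])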
Options: A:Store the last 24 months of data in Amazon Redshift. Configure Amazon QuickSight with Amazon Redshift as the data source.
B:Store the last 2 months of data in Amazon Redshift and the rest of the months in Amazon S3. Set up an external schema and table for Amazon Redshift Spectrum. Configure Amazon QuickSight with Amazon Redshift as the data source.
C:Store the last 24 months of data in Amazon S3 and query it using Amazon Redshift Spectrum. Configure Amazon QuickSight with Amazon Redshift Spectrum as the data source.
D:Store the last 2 months of data in Amazon Redshift and the rest of the months in Amazon S3. Use a long-running Amazon EMR cluster with Apache Spark to query the data as needed. Configure Amazon QuickSight with Amazon EMR as the data source.
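Note: option B keeps the hot months in Redshift while querying the colder months in S3 through Redshift Spectrum, which requires an external schema. A minimal sketch of issuing that DDL through the Redshift Data API; the cluster identifier, database, user, role ARN, and schema names are all hypothetical placeholders:

    import boto3

    client = boto3.client("redshift-data")

    # External schema backed by the AWS Glue Data Catalog; the identifiers
    # below are placeholders for illustration only.
    ddl = """
    CREATE EXTERNAL SCHEMA IF NOT EXISTS spectrum_history
    FROM DATA CATALOG
    DATABASE 'history_db'
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftSpectrumRole'
    CREATE EXTERNAL DATABASE IF NOT EXISTS;
    """

    client.execute_statement(
        ClusterIdentifier="analytics-cluster",
        Database="dev",
        DbUser="admin",
        Sql=ddl,
    )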