Last updated: 2020-10-17
S3
Download protocol
The default download protocol is HTTP/HTTPS
Authentication
Authentication mechanisms are provided to ensure that data is kept secure from unauthorized access.
Objects can be made private or public, and rights can be granted to specific users.
Bucket Region
A bucket can be stored in one of several Regions.
You can choose a Region to optimize for latency, minimize costs, or address regulatory requirements.
Table of Contents
- Amazon S3 regular endpoints
- S3 Storage Classes
- Access S3 object
- Private or Public Bucket
- List Buckets
- S3 Folder
- Bucket Versioning
- Encryption
- S3 bucket set CORS
- S3 CLI
- Permission
- Folder
Amazon S3 regular endpoints
Asia Pacific (Hong Kong) # ap-east-1
URI: s3.ap-east-1.amazonaws.com
S3 Storage Classes
Class
- S3 Standard
- ----
- S3 Intelligent-Tiering*
- S3 Standard-IA (Infrequent Access)(AZ >= 3)
- S3 One Zone-IA (1 AZ)
- ----
- S3 Glacier Instant Retrieval
- S3 Glacier Flexible Retrieval (Configurable retrieval times, from minutes to hours)
- S3 Glacier Deep Archive(retrieved in 12 hours or less)
Charge
* Minimum storage duration charge
- Infrequent Access 30 days
- Glacier 90 days
- Deep Archive 180 days
* Retrieval charges apply
IA vs One Zone-IA
S3 One Zone-IA costs 20% less than S3 Standard-IA.
(Both are designed for 99.999999999% (11 9's) durability, but One Zone-IA stores data in a single AZ.)
Access S3 object
Virtual-hosted style access
https://bucket-name.s3.Region.amazonaws.com/key-name
i.e.
https://my-bucket.s3.ap-east-1.amazonaws.com/puppy.png
s3:// URI
s3://bucket-name/key-name
i.e.
s3://mybucket/puppy.jpg
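For example, the same object can be fetched over its virtual-hosted-style HTTPS URL or copied with the AWS CLI (bucket and key names here are placeholders):

```shell
# Fetch via the virtual-hosted-style HTTPS URL (object must be readable)
curl -O https://my-bucket.s3.ap-east-1.amazonaws.com/puppy.png

# Copy the same object via its s3:// URI with the AWS CLI
aws s3 cp s3://my-bucket/puppy.png ./puppy.png
```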
Private or Public Bucket
Bucket level
By default, block public access settings are set to True on new S3 buckets.
Allow access per folder
After setting the bucket level to public:
Select the folder -> choose Actions -> choose Make public.
List Buckets
aws s3api list-buckets
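To print only the bucket names, a `--query` filter can be added (a sketch; the output depends on your account):

```shell
# List all buckets, printing only their names as plain text
aws s3api list-buckets --query "Buckets[].Name" --output text
```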
Bucket Versioning
After enabling Bucket Versioning,
you might need to update your lifecycle rules to manage previous versions of objects.
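Versioning can be enabled from the CLI as well (bucket name is a placeholder):

```shell
# Enable versioning on an existing bucket
aws s3api put-bucket-versioning \
  --bucket my-s3-bucket \
  --versioning-configuration Status=Enabled

# Confirm the current status
aws s3api get-bucket-versioning --bucket my-s3-bucket
```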
Encryption
S3 with KMS
- Client-side encryption
- Server-side encryption
Server-side encryption
S3 encryption process (Put)
S3 -requests(data-key)-> KMS
# plaintext & encrypted copy of the data key
S3 <-keys- KMS
S3: encrypt(data, plaintext-key) > encrypted data
S3: encrypted data-key as metadata with the encrypted data
S3 decryption process (Get)
S3 -encrypted data-key-> KMS
S3 <-plaintext key- KMS
S3: decrypt(encrypted data, plaintext-key) > data
server-side encryption: SSE-S3, SSE-C, or SSE-KMS
# SSE = Server-Side Encryption
# KMS = Key Management Service
# CMKs = Customer Master Keys
SSE with Amazon S3-Managed Keys (SSE-S3)
An encryption key that Amazon S3 creates, manages, and uses for you.
If you fully trust AWS, use this S3 encryption method.
SSE with CMKs Stored in KMS (SSE-KMS)
A slightly different method from SSE-S3;
it supports user control and an audit trail.
SSE with Customer-Provided Keys (SSE-C)
Keys are provided by the customer, and AWS doesn't store the encryption keys.
S3 data encryption & decryption are still performed on the AWS server side.
KMS CMK
When you use an AWS KMS CMK for server-side encryption in Amazon S3,
you must choose a symmetric CMK. Amazon S3 only supports symmetric CMKs and not asymmetric CMKs.
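An upload using SSE-KMS with a specific symmetric CMK looks like this (the key ID is a placeholder):

```shell
# Upload an object encrypted server-side with a symmetric KMS CMK
aws s3 cp ./report.pdf s3://my-bucket/report.pdf \
  --sse aws:kms \
  --sse-kms-key-id 1234abcd-12ab-34cd-56ef-1234567890ab
```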
S3 Bucket Keys feature
Purpose of this feature:
designed to reduce calls to AWS KMS when objects in an encrypted bucket are accessed.
How it works:
S3 requests a data key from AWS KMS and uses it as a bucket key.
S3 then creates unique data keys outside of AWS KMS for objects in the bucket and encrypts those data keys under the bucket key.
S3 uses each bucket key for a time-limited period.
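The Bucket Key can be turned on together with SSE-KMS default encryption (bucket name and key ID are placeholders):

```shell
# Enable SSE-KMS default encryption with the S3 Bucket Key feature on
aws s3api put-bucket-encryption \
  --bucket my-s3-bucket \
  --server-side-encryption-configuration '{
    "Rules": [{
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "aws:kms",
        "KMSMasterKeyID": "1234abcd-12ab-34cd-56ef-1234567890ab"
      },
      "BucketKeyEnabled": true
    }]
  }'
```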
Note
After you set the encryption settings for the entire bucket,
objects that were uploaded before encryption was enabled remain unencrypted.
S3 bucket set CORS
Step
1. https://console.aws.amazon.com/s3
2. choose the name of the bucket
3. Permissions tab > CORS section > Edit button > paste the JSON into the text box
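The same configuration can be applied from the CLI; below is a minimal sketch (origin and bucket name are placeholders):

```shell
# cors.json -- allow GETs from one placeholder origin
cat > cors.json <<'EOF'
{
  "CORSRules": [
    {
      "AllowedOrigins": ["https://www.example.com"],
      "AllowedMethods": ["GET"],
      "AllowedHeaders": ["*"],
      "MaxAgeSeconds": 3000
    }
  ]
}
EOF

# Apply it to the bucket instead of pasting into the console
aws s3api put-bucket-cors --bucket my-s3-bucket --cors-configuration file://cors.json
```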
Permission
To allow a user to add, update, and delete objects, grant:
- s3:PutObject
- s3:GetObject
- s3:DeleteObject
In addition, these are the additional permissions required by the CLI:
- s3:ListAllMyBuckets
- s3:GetBucketLocation
- s3:ListBucket
Also, these actions are required to be able to copy, cut, and paste objects in the console:
- s3:PutObjectAcl
- s3:GetObjectAcl
Overall policy (JSON)
{
"Version":"2012-10-17",
"Statement":[
{
"Effect":"Allow",
"Action": "s3:ListAllMyBuckets",
"Resource":"*"
},
{
"Effect":"Allow",
"Action":["s3:ListBucket","s3:GetBucketLocation"],
"Resource":"arn:aws:s3:::awsexamplebucket1"
},
{
"Effect":"Allow",
"Action":[
"s3:PutObject",
"s3:PutObjectAcl",
"s3:GetObject",
"s3:GetObjectAcl",
"s3:DeleteObject"
],
"Resource":"arn:aws:s3:::awsexamplebucket1/*"
}
]
}
By setting Block Public Access to "on", nothing will be accessible via bucket policies or ACLs.
Access will only be possible via IAM permissions.
For a bucket to be "public", it must have a bucket policy that grants some permissions to everybody (*).
For "Objects can be public", the bucket must permit ACLs that allow individual objects to be made public (but not the whole bucket).
This requires the "ACL" options of Block Public Access to be "off".
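All four Block Public Access settings can be set from the CLI (bucket name is a placeholder):

```shell
# Turn all four Block Public Access settings on (the default for new buckets)
aws s3api put-public-access-block \
  --bucket my-s3-bucket \
  --public-access-block-configuration \
    BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
```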
Web GUI
Buckets name > Permissions Tab
Access column
- Public – Everyone has access to one or more of the following: List objects, Write objects, Read and write permissions.
- Objects can be public – The bucket is not public, but anyone with the appropriate permissions can grant public access to objects.
Folder
S3 Folder = zero-length object ~ Prefix
# Create Folder
aws s3api put-object --bucket my-s3-bucket --key MyFolder/
# Remove
aws s3 rm s3://bucket_name/folder_name/
# Checking
# PRE = Prefix = Folder
aws s3 ls --recursive s3://my-s3-bucket/
S3 with CloudFront origin access identity (OAI)
Create a CloudFront origin access identity (OAI) (from your distribution in the CloudFront console)
1. Choose the Origins tab.
2. Select the S3 origin, and then choose Edit.
3. For S3 bucket access, select Yes use OAI (bucket can restrict access to only CloudFront).
4. For Origin access identity, select an existing identity from the dropdown list or choose Create new OAI.
5. For Bucket policy, select Yes, update the bucket policy.
Note: This step updates the bucket policy of your S3 origin to grant the OAI access for s3:GetObject.
6. Choose Save Changes.
Review the bucket policy
S3 console > buckets name > Permissions > Bucket Policy
Bucket size
CLI
aws s3 ls --summarize --human-readable --recursive s3://bucket-name/
AWS Console:
Go to S3 and select the bucket > Click on "Metrics" tab > Total bucket size
Lifecycle rule
Prefix format: my-test-bucket/
* The trailing slash is important, because without it, your rule would also match other key prefixes
Rule
- Move current versions of objects between storage classes
- Move noncurrent versions of objects between storage classes
- Expire current versions of objects
- Permanently delete noncurrent versions of objects
- Delete expired object delete markers or incomplete multipart uploads
Expire current versions of objects
* Expire it asynchronously
(Changes in billing are applied when the lifecycle rule is satisfied, even if the action hasn’t been completed.)
* Object expiration Lifecycle policies do not remove incomplete multipart uploads.
non-versioning bucket:
Expiration = permanently removing the object
versioning bucket:
* The Expiration action applies only to the current version
* Amazon S3 doesn't take any action if there are one or more object versions and
the delete marker is the current version.
* Instead of deleting the current object version,
Amazon S3 retains the current version as a noncurrent version by adding a delete marker
Checking
To find when an object is scheduled to expire, use the HEAD Object or the GET Object API operations.
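With the CLI, the scheduled expiration shows up as an `Expiration` field in the `head-object` response when a lifecycle rule matches the object (bucket and key are placeholders):

```shell
# HeadObject; look for the "Expiration" field in the JSON response
aws s3api head-object --bucket my-s3-bucket --key MyFolder/report.pdf
```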
Permanently delete noncurrent versions of objects
- Days after objects become noncurrent
- Number of newer versions to retain (Optional)
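Both options map onto a lifecycle configuration like the sketch below (bucket name and prefix are placeholders):

```shell
# lifecycle.json -- keep 3 newer noncurrent versions,
# permanently delete older ones 30 days after they become noncurrent
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "expire-old-versions",
      "Status": "Enabled",
      "Filter": { "Prefix": "MyFolder/" },
      "NoncurrentVersionExpiration": {
        "NoncurrentDays": 30,
        "NewerNoncurrentVersions": 3
      }
    }
  ]
}
EOF

aws s3api put-bucket-lifecycle-configuration \
  --bucket my-s3-bucket --lifecycle-configuration file://lifecycle.json
```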
AWS Backup S3
To start using AWS Backup support for S3, you must perform the following one-time setup.
Create an "inline policy" named s3-backup-policy on the AWSBackupDefaultServiceRole
https://console.aws.amazon.com/iam
NOTE:
If AWSBackupDefaultServiceRole does not exist,
you might be using AWS Backup for the first time with a new account.
{
  "Version":"2012-10-17",
  "Statement":[
    {
      "Sid":"S3BucketBackupPermissions",
      "Action":[
        "s3:GetInventoryConfiguration",
        "s3:PutInventoryConfiguration",
        "s3:ListBucketVersions",
        "s3:ListBucket",
        "s3:GetBucketVersioning",
        "s3:GetBucketNotification",
        "s3:PutBucketNotification",
        "s3:GetBucketLocation",
        "s3:GetBucketTagging"
      ],
      "Effect":"Allow",
      "Resource":["arn:aws:s3:::*"]
    },
    {
      "Sid":"S3ObjectBackupPermissions",
      "Action":[
        "s3:GetObjectAcl",
        "s3:GetObject",
        "s3:GetObjectVersionTagging",
        "s3:GetObjectVersionAcl",
        "s3:GetObjectTagging",
        "s3:GetObjectVersion"
      ],
      "Effect":"Allow",
      "Resource":["arn:aws:s3:::*/*"]
    },
    {
      "Sid":"S3GlobalPermissions",
      "Action":["s3:ListAllMyBuckets"],
      "Effect":"Allow",
      "Resource":["*"]
    },
    {
      "Sid":"KMSBackupPermissions",
      "Action":["kms:Decrypt","kms:DescribeKey"],
      "Effect":"Allow",
      "Resource":"*",
      "Condition":{
        "StringLike":{"kms:ViaService":"s3.*.amazonaws.com"}
      }
    },
    {
      "Sid":"EventsPermissions",
      "Action":[
        "events:DescribeRule",
        "events:EnableRule",
        "events:PutRule",
        "events:DeleteRule",
        "events:PutTargets",
        "events:RemoveTargets",
        "events:ListTargetsByRule",
        "events:DisableRule"
      ],
      "Effect":"Allow",
      "Resource":"arn:aws:events:*:*:rule/AwsBackupManagedRule*"
    },
    {
      "Sid":"EventsMetricsGlobalPermissions",
      "Action":["cloudwatch:GetMetricData","events:ListRules"],
      "Effect":"Allow",
      "Resource":"*"
    }
  ]
}
Restore
You can restore your S3 data to an existing bucket, including the original bucket.
During restore, you can also create a new S3 bucket as the restore target.
You can restore S3 backups only to the same AWS Region where your backup is located.
inline policy - s3-backup-policy
{
  "Version":"2012-10-17",
  "Statement":[
    {
      "Sid":"S3BucketRestorePermissions",
      "Action":[
        "s3:CreateBucket",
        "s3:ListBucketVersions",
        "s3:ListBucket",
        "s3:GetBucketVersioning",
        "s3:GetBucketLocation",
        "s3:PutBucketVersioning"
      ],
      "Effect":"Allow",
      "Resource":["arn:aws:s3:::*"]
    },
    {
      "Sid":"S3ObjectRestorePermissions",
      "Action":[
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:DeleteObject",
        "s3:PutObjectVersionAcl",
        "s3:GetObjectVersionAcl",
        "s3:GetObjectTagging",
        "s3:PutObjectTagging",
        "s3:GetObjectAcl",
        "s3:PutObjectAcl",
        "s3:PutObject",
        "s3:ListMultipartUploadParts"
      ],
      "Effect":"Allow",
      "Resource":["arn:aws:s3:::*/*"]
    },
    {
      "Sid":"S3KMSPermissions",
      "Action":["kms:Decrypt","kms:DescribeKey","kms:GenerateDataKey"],
      "Effect":"Allow",
      "Resource":"*",
      "Condition":{
        "StringLike":{"kms:ViaService":"s3.*.amazonaws.com"}
      }
    }
  ]
}
Multipart upload
Characteristics
You can upload these object parts independently and in any order.
If transmission of any part fails, you can retransmit that part without affecting other parts.
After all parts of your object are uploaded, Amazon S3 assembles these parts and creates the object.
Process
Your complete multipart upload request must include the upload ID and
a list of both part numbers and corresponding ETag values.
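The low-level flow maps onto three `s3api` calls; the upload ID and ETag below are placeholders, and in practice each `upload-part` response supplies the real ETag:

```shell
# 1. Start the upload and note the UploadId in the response
aws s3api create-multipart-upload --bucket my-bucket --key big-file.bin

# 2. Upload each part (independently, in any order); each response
#    returns an ETag for that part number
aws s3api upload-part --bucket my-bucket --key big-file.bin \
  --part-number 1 --body part1.bin --upload-id "EXAMPLE_UPLOAD_ID"

# 3. Complete with the upload ID plus the part-number/ETag list;
#    S3 then assembles the parts into one object
aws s3api complete-multipart-upload --bucket my-bucket --key big-file.bin \
  --upload-id "EXAMPLE_UPLOAD_ID" \
  --multipart-upload '{"Parts":[{"PartNumber":1,"ETag":"\"exampleetag\""}]}'
```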
Checksums
After the upload is complete, Amazon S3 calculates a checksum of the checksums of the individual parts and uses it as the checksum of the entire multipart object.