Last updated: 2024-10-17
Table of Contents
- Amazon S3 regular endpoints
- S3 Storage Classes
- Access S3 object
- Private or Public Bucket
- List Buckets
- S3 Folder
- Bucket Versioning
- Encryption
- S3 bucket set CORS
- S3 CLI
- Permission
- Folder
- S3 with CloudFront origin access identity (OAI)
- Bucket Size
- Lifecycle Rule
- AWS Backup S3
- Multipart Upload
S3
Download protocol
The default download protocol is HTTP/HTTPS
Authentication
Authentication mechanisms are provided to ensure that data is kept secure from unauthorized access.
Objects can be made private or public, and rights can be granted to specific users.
Bucket Region
A bucket can be stored in one of several Regions.
You can choose a Region to optimize for latency, minimize costs, or address regulatory requirements.
Amazon S3 regular endpoints
Asia Pacific (Hong Kong) # ap-east-1
URI: s3.ap-east-1.amazonaws.com
S3 Storage Classes
Class
- S3 Standard
- ----
- S3 Intelligent-Tiering*
- S3 Standard-IA (Infrequent Access)(AZ >= 3)
- S3 One Zone-IA (1 AZ)
- ----
- S3 Glacier Instant Retrieval
- S3 Glacier Flexible Retrieval (Configurable retrieval times, from minutes to hours)
- S3 Glacier Deep Archive (retrieved in 12 hours or less)
Charge
* Minimum storage duration charge
- Infrequent Access 30 days
- Glacier 90 days
- Deep Archive 180 days
* Retrieval charges apply
IA vs One Zone-IA
S3 One Zone-IA costs 20% less than S3 Standard-IA.
(Still designed for 99.999999999% (11 9's) durability, but data is stored in a single AZ)
Access S3 object
Virtual-hosted style access
https://bucket-name.s3.Region.amazonaws.com/key-name
e.g.
https://my-bucket.s3.ap-east-1.amazonaws.com/puppy.png
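The virtual-hosted-style URL can be assembled mechanically from the bucket, Region, and key. A minimal sketch (the helper name is my own, not an AWS API):

```python
def virtual_hosted_url(bucket: str, region: str, key: str) -> str:
    """Build a virtual-hosted-style S3 URL (bucket name in the hostname)."""
    return f"https://{bucket}.s3.{region}.amazonaws.com/{key}"

print(virtual_hosted_url("my-bucket", "ap-east-1", "puppy.png"))
# → https://my-bucket.s3.ap-east-1.amazonaws.com/puppy.png
```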
s3://
s3://bucket-name/key-name
e.g.
s3://mybucket/puppy.jpg
Private or Public Bucket
Bucket level
By default, block public access settings are set to True on new S3 buckets.
Allow access per folder
After setting the bucket level to public:
Select the folder -> choose Actions -> choose Make public.
List Buckets
aws s3api list-buckets
Bucket Versioning
Versioning protects your data from accidental overwrites or deletions.
Enabling Bucket Versioning
Each time you upload an object, the current version is retained as the noncurrent version and
the newly added version, the successor, becomes the current version.
After enabling Bucket Versioning
You might need to update your lifecycle rules to manage previous versions of objects.
Delete Marker
A "delete marker" in Amazon S3 is a placeholder
for a versioned object that was specified in a simple DELETE request.
(DELETE request is a request that doesn't specify a version ID)
When an object is deleted, if the current version is not already a "delete marker",
Amazon S3 adds a delete marker with a unique version ID.
The object specified in the DELETE request is not actually deleted.
Instead, the delete marker becomes the current version of the object.
The object's key name (or key) becomes the key of the delete marker.
The original object becomes noncurrent, and the "delete marker" becomes the current version.
A "delete marker" doesn't have data associated with it.
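The behaviour above can be simulated with a toy version stack. Illustrative only, not the S3 API; class and method names are my own:

```python
import uuid

class ToyVersionedBucket:
    """Illustrative model of S3 versioning: a simple DELETE adds a marker."""
    def __init__(self):
        self.versions = {}  # key -> list of (version_id, data_or_None)

    def put(self, key, data):
        self.versions.setdefault(key, []).append((str(uuid.uuid4()), data))

    def delete(self, key):
        # A simple DELETE (no version ID) does not remove any data;
        # it pushes a delete marker (a version with no data) on top.
        self.versions.setdefault(key, []).append((str(uuid.uuid4()), None))

    def current(self, key):
        return self.versions[key][-1]

b = ToyVersionedBucket()
b.put("puppy.jpg", b"bytes-v1")
b.delete("puppy.jpg")
vid, data = b.current("puppy.jpg")
print(data is None)                  # True: current version is a delete marker
print(len(b.versions["puppy.jpg"]))  # 2: the noncurrent object + the marker
```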
Charges
Delete markers accrue a minimal charge for storage in Amazon S3.
The storage size of a delete marker is equal to the size of the key name of the delete marker.
A key name is a sequence of Unicode characters.
The UTF-8 encoding of the key name adds 1-4 bytes of storage to your bucket for each character in the name.
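So the storage billed for a delete marker is just the UTF-8 byte length of its key name, which is easy to compute (helper name is my own):

```python
def delete_marker_size_bytes(key_name: str) -> int:
    """A delete marker stores no data; only the UTF-8 key name counts."""
    return len(key_name.encode("utf-8"))

print(delete_marker_size_bytes("puppy.jpg"))    # 9  (ASCII: 1 byte per char)
print(delete_marker_size_bytes("圖片/狗.jpg"))  # 14 (CJK chars take 3 bytes each)
```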
Encryption
S3 with KMS
- Client-side encryption
- Server-side encryption
Server-side encryption
S3 encryption flow (Put)
S3 -requests(data-key)-> KMS
# plaintext & encrypted copy of the data key
S3 <-keys- KMS
S3: encrypt(data, plaintext-key) > encrypted data
S3: encrypted data-key as metadata with the encrypted data
S3 decryption flow (Get)
S3 -encrypted data-key-> KMS
S3 <-plaintext key- KMS
S3: decrypt(encrypted data, plaintext-key) > data
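The envelope-encryption flow above can be sketched with a toy in-process "KMS". Everything here is illustrative: a real system uses AWS KMS and AES-GCM, not this SHA-256 XOR keystream, and the function names are my own:

```python
import os
import hashlib

# Toy "KMS": one master key held only by the key service, never by "S3".
MASTER_KEY = os.urandom(32)

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Keystream from SHA-256 in counter mode -- illustration only, NOT secure.
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out.extend(b ^ k for b, k in zip(data[i:i + 32], block))
    return bytes(out)

def kms_generate_data_key():
    # Returns a plaintext key and an encrypted copy (like KMS GenerateDataKey).
    plaintext_key = os.urandom(32)
    return plaintext_key, xor_stream(MASTER_KEY, plaintext_key)

def kms_decrypt_data_key(encrypted_key: bytes) -> bytes:
    return xor_stream(MASTER_KEY, encrypted_key)  # XOR is its own inverse

# Put: encrypt the data with the plaintext key; store ciphertext plus the
# encrypted data key as metadata. The plaintext key is then discarded.
plain_key, enc_key = kms_generate_data_key()
stored = {"data": xor_stream(plain_key, b"hello s3"), "meta": enc_key}

# Get: send the encrypted key to "KMS", decrypt the data with the result.
recovered = xor_stream(kms_decrypt_data_key(stored["meta"]), stored["data"])
print(recovered)  # b'hello s3'
```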
server-side encryption: SSE-S3, SSE-C, or SSE-KMS
# SSE = Server-Side Encryption
# KMS = Key Management Service
# CMKs = Customer Master Keys
- SSE with Amazon S3-Managed Keys (SSE-S3)
  An encryption key that Amazon S3 creates, manages, and uses for you.
  If you fully trust AWS, use this S3 encryption method.
- SSE with CMKs Stored in KMS (SSE-KMS)
  A slightly different method from SSE-S3; it supports user control and an audit trail.
- SSE with Customer-Provided Keys (SSE-C)
  Keys are provided by the customer, and AWS doesn't store the encryption keys.
  S3 data encryption and decryption are still performed on the AWS server side.
KMS CMK
When you use an AWS KMS CMK for server-side encryption in Amazon S3,
you must choose a symmetric CMK. Amazon S3 only supports symmetric CMKs and not asymmetric CMKs.
S3 Bucket Keys feature
Purpose:
Designed to reduce calls to AWS KMS when objects in an encrypted bucket are accessed.
How it works:
S3 requests a short-lived data key from AWS KMS and uses it as the bucket key.
S3 creates unique data keys outside of AWS KMS for objects in the bucket and encrypts those data keys under the bucket key.
S3 uses each bucket key for a time-limited period.
Note
After you set the encryption settings for the entire bucket,
files that were uploaded before encryption was enabled remain unencrypted.
S3 bucket set CORS
Step
1. https://console.aws.amazon.com/s3
2. choose the name of the bucket
3. Permissions tab > CORS section > Edit button > paste the JSON into the text box
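A minimal CORS configuration to paste into that text box might look like the following; the origin, methods, and header values here are placeholders to adjust for your site:

```json
[
  {
    "AllowedOrigins": ["https://www.example.com"],
    "AllowedMethods": ["GET", "HEAD"],
    "AllowedHeaders": ["*"],
    "ExposeHeaders": ["ETag"],
    "MaxAgeSeconds": 3000
  }
]
```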
Permission
To allow the user to add, update, and delete objects, grant the following actions:
- s3:PutObject
- s3:GetObject
- s3:DeleteObject
In addition, these are the additional permissions required by the CLI:
- s3:ListAllMyBuckets
- s3:GetBucketLocation
- s3:ListBucket
The following actions are also required to copy, cut, and paste objects in the console:
- s3:PutObjectAcl
- s3:GetObjectAcl
Overall policy (JSON)
{
"Version":"2012-10-17",
"Statement":[
{
"Effect":"Allow",
"Action": "s3:ListAllMyBuckets",
"Resource":"*"
},
{
"Effect":"Allow",
"Action":["s3:ListBucket","s3:GetBucketLocation"],
"Resource":"arn:aws:s3:::awsexamplebucket1"
},
{
"Effect":"Allow",
"Action":[
"s3:PutObject",
"s3:PutObjectAcl",
"s3:GetObject",
"s3:GetObjectAcl",
"s3:DeleteObject"
],
"Resource":"arn:aws:s3:::awsexamplebucket1/*"
}
]
}
By setting Block Public Access to "on", nothing will be accessible via bucket policies or ACLs.
Access will only be possible via IAM permissions.
For a bucket to be "public", it must have a Bucket Policy that grants some permissions to everybody (*).
For "Objects can be public", the bucket must permit ACLs that allow some objects to be made public (but not the whole bucket).
This requires the "ACL" options of Block Public Access to be "off".
Web GUI
Buckets name > Permissions Tab
Access column
- Public – Everyone has access to one or more of the following: List objects, Write objects, Read and write permissions.
- Objects can be public – The bucket is not public, but anyone with the appropriate permissions can grant public access to objects.
Folder
S3 Folder = zero-length object ~ Prefix
# Create Folder
aws s3api put-object --bucket my-s3-bucket --key MyFolder/
# Remove (add --recursive to also delete the folder's contents)
aws s3 rm s3://bucket_name/folder_name/
# Checking
# PRE = Prefix = Folder
aws s3 ls --recursive s3://my-s3-bucket/
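Since folders are just key prefixes, a delimiter-based listing simply groups keys by their prefix. A toy sketch of that grouping, not the actual ListObjects implementation (function name is my own):

```python
def list_with_delimiter(keys, prefix="", delimiter="/"):
    """Group keys the way S3 ListObjects does: objects vs. common prefixes."""
    contents, common = [], set()
    for key in keys:
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        if delimiter in rest:
            # Everything up to the first delimiter becomes a "folder" (PRE).
            common.add(prefix + rest.split(delimiter, 1)[0] + delimiter)
        else:
            contents.append(key)
    return sorted(contents), sorted(common)

keys = ["MyFolder/", "MyFolder/a.txt", "root.txt"]
print(list_with_delimiter(keys))
# → (['root.txt'], ['MyFolder/'])
```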
S3 with CloudFront origin access identity (OAI)
Create a CloudFront origin access identity (OAI)
1. Choose the Origins tab.
2. Select the S3 origin, and then choose Edit.
3. For S3 bucket access, select Yes use OAI (bucket can restrict access to only CloudFront).
4. For Origin access identity, select an existing identity from the dropdown list or choose Create new OAI.
5. For Bucket policy, select Yes, update the bucket policy.
Note: This step updates the bucket policy of your S3 origin to grant the OAI access for s3:GetObject.
6. Choose Save Changes.
Review the bucket policy
S3 console > buckets name > Permissions > Bucket Policy
Bucket Size
CLI
aws s3 ls --summarize --human-readable --recursive s3://bucket-name/
AWS Console:
Go to S3 and select the bucket > Click on "Metrics" tab > Total bucket size
Lifecycle rule
Notes
* Permanent deletion takes precedence over transition.
* Transition takes precedence over creation of delete markers.
* Lifecycle rules run once a day at midnight Coordinated Universal Time (UTC)
* You can't create date-based Lifecycle rules by using the Amazon S3 console
S3 has two kinds of Lifecycle Actions:
- Transition
- Expiration
Existing objects
The rules apply to both existing objects and objects that you add later.
Amazon S3 queues existing objects for processing.
Changes in billing
Expiration & Transition
Billing changes are applied as soon as the object becomes eligible for the lifecycle action,
even if the action hasn't been completed yet.
Exception
Billing changes don't occur until the object has actually transitioned to S3 Intelligent-Tiering.
Prefix
Format: my-test-bucket/
* The trailing slash is important, because without it, your rule would also match other key prefixes
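The trailing-slash caveat is easy to see with plain string matching (toy keys of my own):

```python
keys = ["my-test-bucket/a.log", "my-test-bucket-archive/b.log"]

# Without the trailing slash, the rule also catches the sibling prefix:
print([k for k in keys if k.startswith("my-test-bucket")])   # matches both keys
print([k for k in keys if k.startswith("my-test-bucket/")])  # matches only the first
```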
Rule (multiple selections allowed)
- Move current versions of objects between storage classes
- Move noncurrent versions of objects between storage classes
- ----
- Expire current versions of objects
- Permanently delete noncurrent versions of objects
- Delete expired object delete markers or incomplete multipart uploads
Expiration Actions
Expire current versions of objects
* Expire it asynchronously
* Do not remove incomplete multipart uploads.
For non-versioning bucket:
Expiration = permanently removing the object
For versioning bucket:
* The "Expiration action" applies only to the current version
* Instead of deleting the current object version, S3 retains it as a
  noncurrent version and adds a delete marker, which becomes the new current version
Checking
To find when an object is scheduled to expire, use the HEAD Object or the GET Object API operations.
Permanently delete noncurrent versions of objects # operations on noncurrent versions
- Days after objects become noncurrent [minimum 1]
- Number of newer versions to retain (Optional) [1 ~ 100]
  (These versions are retained regardless of how many days they have been noncurrent)
Delete expired object delete markers or incomplete multipart uploads
Expired object delete markers
Incomplete multipart uploads
- Number of days must be greater than 0
Checking
aws s3api head-object --bucket bucketname --key file.txt
AWS Backup S3
To start using AWS Backup support for S3, you must perform the following one-time setup.
Create an inline policy named s3-backup-policy on the AWSBackupDefaultServiceRole role:
https://console.aws.amazon.com/iam
NOTE:
If AWSBackupDefaultServiceRole does not exist,
you might be using AWS Backup for the first time with a new account.
{
  "Version":"2012-10-17",
  "Statement":[
    {
      "Sid":"S3BucketBackupPermissions",
      "Action":[
        "s3:GetInventoryConfiguration",
        "s3:PutInventoryConfiguration",
        "s3:ListBucketVersions",
        "s3:ListBucket",
        "s3:GetBucketVersioning",
        "s3:GetBucketNotification",
        "s3:PutBucketNotification",
        "s3:GetBucketLocation",
        "s3:GetBucketTagging"
      ],
      "Effect":"Allow",
      "Resource":["arn:aws:s3:::*"]
    },
    {
      "Sid":"S3ObjectBackupPermissions",
      "Action":[
        "s3:GetObjectAcl",
        "s3:GetObject",
        "s3:GetObjectVersionTagging",
        "s3:GetObjectVersionAcl",
        "s3:GetObjectTagging",
        "s3:GetObjectVersion"
      ],
      "Effect":"Allow",
      "Resource":["arn:aws:s3:::*/*"]
    },
    {
      "Sid":"S3GlobalPermissions",
      "Action":["s3:ListAllMyBuckets"],
      "Effect":"Allow",
      "Resource":["*"]
    },
    {
      "Sid":"KMSBackupPermissions",
      "Action":["kms:Decrypt","kms:DescribeKey"],
      "Effect":"Allow",
      "Resource":"*",
      "Condition":{
        "StringLike":{"kms:ViaService":"s3.*.amazonaws.com"}
      }
    },
    {
      "Sid":"EventsPermissions",
      "Action":[
        "events:DescribeRule",
        "events:EnableRule",
        "events:PutRule",
        "events:DeleteRule",
        "events:PutTargets",
        "events:RemoveTargets",
        "events:ListTargetsByRule",
        "events:DisableRule"
      ],
      "Effect":"Allow",
      "Resource":"arn:aws:events:*:*:rule/AwsBackupManagedRule*"
    },
    {
      "Sid":"EventsMetricsGlobalPermissions",
      "Action":["cloudwatch:GetMetricData","events:ListRules"],
      "Effect":"Allow",
      "Resource":"*"
    }
  ]
}
Restore
You can restore your S3 data to an existing bucket, including the original bucket.
During restore, you can also create a new S3 bucket as the restore target.
You can restore S3 backups only to the same AWS Region where your backup is located.
inline policy - s3-backup-policy
{
  "Version":"2012-10-17",
  "Statement":[
    {
      "Sid":"S3BucketRestorePermissions",
      "Action":[
        "s3:CreateBucket",
        "s3:ListBucketVersions",
        "s3:ListBucket",
        "s3:GetBucketVersioning",
        "s3:GetBucketLocation",
        "s3:PutBucketVersioning"
      ],
      "Effect":"Allow",
      "Resource":["arn:aws:s3:::*"]
    },
    {
      "Sid":"S3ObjectRestorePermissions",
      "Action":[
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:DeleteObject",
        "s3:PutObjectVersionAcl",
        "s3:GetObjectVersionAcl",
        "s3:GetObjectTagging",
        "s3:PutObjectTagging",
        "s3:GetObjectAcl",
        "s3:PutObjectAcl",
        "s3:PutObject",
        "s3:ListMultipartUploadParts"
      ],
      "Effect":"Allow",
      "Resource":["arn:aws:s3:::*/*"]
    },
    {
      "Sid":"S3KMSPermissions",
      "Action":["kms:Decrypt","kms:DescribeKey","kms:GenerateDataKey"],
      "Effect":"Allow",
      "Resource":"*",
      "Condition":{
        "StringLike":{"kms:ViaService":"s3.*.amazonaws.com"}
      }
    }
  ]
}
Multipart Upload
Advantages
- Improved throughput
- Quick recovery from any network issues
- Pause and resume object uploads
Characteristics
You can upload these object parts independently and in any order.
If transmission of any part fails, you can retransmit that part without affecting other parts.
After all parts of your object are uploaded, Amazon S3 assembles these parts and creates the object.
API calls
- CreateMultipartUpload
- UploadPart
- CompleteMultipartUpload
Process
When you send a request to initiate a multipart upload, Amazon S3 returns a response with an upload ID
When uploading a part, in addition to the upload ID, you must specify a part number (1 ~ 10,000).
Amazon S3 returns an entity tag (ETag) for the part as a header in the response.
The complete-multipart-upload request must include the upload ID and
a list of part numbers with their corresponding ETag values.
Checksums
After the upload is complete, S3 calculates a checksum over the checksums of the individual parts (a "checksum of checksums") for the entire multipart object.
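For the default MD5-based ETags, that checksum-of-checksums can be reproduced locally: MD5 each part, MD5 the concatenated digests, and append the part count. This is a well-known pattern for verifying multipart ETags; a sketch (function name is my own):

```python
import hashlib

def multipart_etag(data: bytes, part_size: int) -> str:
    """Compute an S3-style multipart ETag: MD5 of the part MD5s, plus part count."""
    digests = [
        hashlib.md5(data[i:i + part_size]).digest()
        for i in range(0, len(data), part_size)
    ]
    combined = hashlib.md5(b"".join(digests)).hexdigest()
    return f"{combined}-{len(digests)}"

etag = multipart_etag(b"x" * 10_000_000, part_size=5_000_000)
print(etag)  # "<32-char md5 hex>-2" -- two 5 MB parts
```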