[AWS] S3Bucket - MFT + Terraform Project 05

Manage file transfer to/from an S3 Bucket (IAM User Policy)


Inception

Hello everyone. This article is part of the Terraform + AWS series. The examples in this series are built in sequence; I use this series to publish projects and knowledge.


Overview

💡
This article's example is very similar to the previous one (i.e. using a Bucket Policy). However, here we will use an IAM User Policy instead.

Hello Gurus, Manage File Transfer: streamlining data flows with security and reliability. In today's data-driven world, businesses and organizations frequently need to move files reliably between systems, partners, and organizations. Managed file transfer (MFT) solutions offer a way to streamline these processes. However, we are going to build our own solution here using an S3 Bucket, an IAM User Policy, the s3api command, and Python.

💡
MFT (Managed File Transfer) is all about managing file transfers between systems automatically and in a secure way.

In the previous article, we discovered how to send and receive files between S3 and EC2 machines using a Bucket Policy.

However, today's example will use an S3 Bucket as centralized object storage between EC2 instances, to send and receive files automatically using an IAM User Policy.

💡
The value of using an IAM User Policy is that it grants access to an IAM user (i.e. the same outcome as a Bucket Policy). Therefore, you have the ability to work with the S3 Bucket from anywhere using that user's credentials and the permissions the policy grants.

Building-up Steps

Today we will build up together the same resources as before (e.g. transit gateway, VPC, EC2, etc.). In addition, we will build an S3 Bucket, an IAM User, and an IAM User Policy, in order to allow the IAM user to send/receive files between the S3 Bucket and the EC2 machines. The infrastructure will be built using Terraform. ✨

💡
The full architecture design has two private VPCs that include EC2 machines, a Transit Gateway that manages all VPC routes, and one VPC with a NAT Gateway and an Internet Gateway to route the EC2 machines' traffic to the Internet. Check the full series here.

The Architecture Design Diagram:

Building-up steps details:

  • Create IAM user "Dave"

  • Create an S3 Bucket.

  • Create an IAM User policy for "Dave"

Enough talking, let's go forward... 😉


Clone The Project Code

Clone the project to your local device as follows:

💡
You can fork the project to your own GitHub repo first; it depends on the way you prefer.
pushd ~  # Change Directory
git clone https://github.com/Mohamed-Eleraki/terraform.git

pushd ~/terraform/AWS_Demo/08-S3BucketPolicy02
  • Open in VS Code, or any editor you like
code .  # open the current path into VS Code.

Terraform Resources + Code Steps

Once you open the code in your editor, you will notice that the resources have already been created. However, we will discover together how to create them step by step.

Create an S3 Bucket

  • Create a new file called s3.tf
💡
Separating resources into .tf files is a good way to reduce code complexity.
  • Create the S3 Bucket resources as below
resource "aws_s3_bucket" "s3_01" {
  bucket = "eraki_s3_dev_01"
  force_destroy = true
  object_lock_enabled = false

  tags = {
    Name        = "eraki_s3_dev_01-Tag"
    Environment = "Dev"
  }
}

resource "aws_s3_bucket_public_access_block" "s3_01_dis_pubAcc" {
  bucket = aws_s3_bucket.s3_01.id
  block_public_acls = true
  block_public_policy = true
  ignore_public_acls = true
  restrict_public_buckets = true
}

Create IAM User

  • Create a new file called iam.tf

  • Create the IAM user resource as below

resource "aws_iam_user" "iam_user_dave" {
  name = "Dave"
  # force_destroy = true # not needed for now, as we are not creating the access key via Terraform
}

# Create IAM User Policy - Put & Get S3
resource "aws_iam_user_policy" "iam_user_dave_policy" {
  name = "AllowDaveToPutGetS3"
  user = aws_iam_user.iam_user_dave.name

  # Terraform's "jsonencode" function converts a
  # Terraform expression result to valid JSON syntax.
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = [
          "s3:PutObject",
          "s3:GetObject"
        ]
        Effect   = "Allow"
        Resource = "${aws_s3_bucket.s3_01.arn}/*"
      },
    ]
  })
}
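
💡
Optional sanity check: after the apply below, you can view the inline policy exactly as IAM stored it. This assumes your default AWS CLI profile has IAM read permissions; it is not required for the build.
aws iam get-user-policy --user-name Dave --policy-name AllowDaveToPutGetS3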

Apply Terraform Code

After configuring your Terraform code, it's the exciting time to apply it and watch it become real. 😍

  • First things first, let's make our code cleaner:
terraform fmt
  • Running a plan is always good practice (or you could just apply 😁)
terraform plan -var-file="terraform-dev.tfvars"
  • Let's apply. If no errors appear and you agree with the resources to be built:
terraform apply -var-file="terraform-dev.tfvars" -auto-approve
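  • Optionally, confirm from the CLI that the main resources now exist. This is just a quick sanity check and assumes your default AWS CLI profile (the one running Terraform) can read S3 and IAM:
# head-bucket returns silently (HTTP 200) if the bucket exists and is reachable
aws s3api head-bucket --bucket eraki-s3-dev-01

# confirm the public access block settings applied by Terraform
aws s3api get-public-access-block --bucket eraki-s3-dev-01

# confirm the IAM user was created
aws iam get-user --user-name Dave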

Check S3 File Streaming

Export the Access Key

  • Open up the AWS IAM console, then select the Dave user

  • Under the Security credentials tab, create an access key for CLI use, and download the CSV file.

Log in to the EC2 machines

Log in to the EC2 machines via the AWS console using the VPC endpoint, as mentioned here.

Configure AWS CLI Credentials

Configure the AWS CLI credentials on both EC2 machines, as follows:

aws configure --profile Dave
# paste the values from the CSV file
# region us-east-1
# output json
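
To confirm the profile works, you can ask STS who you are; the returned ARN should end with user/Dave (a quick, optional check):
aws sts get-caller-identity --profile Dave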

Using the s3api command

By using s3api commands, you can manage your S3 storage programmatically through the AWS CLI. This offers greater automation and control compared to using the S3 console interface.
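
💡
Keep in mind that Dave's inline policy only grants s3:PutObject and s3:GetObject, so listing the bucket contents with the Dave profile will return AccessDenied. If you want to list objects from the CLI, use a profile that has list permissions (for example, the admin profile you used to run Terraform):
aws s3api list-objects-v2 --bucket eraki-s3-dev-01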

Let's check our configuration end to end.

We will upload a dummy file from ec2-1 to the S3 Bucket, then download the file to ec2-2.

Uploading from ec2-1

  • Log in to ec2-1 via the AWS console as in the previous step.

  • Create a dummy file

touch upload.file
  • Use the command below to upload the upload.file file.
aws s3api put-object --bucket eraki-s3-dev-01 --key upload.file --body upload.file --profile Dave

The --body parameter in the command identifies the source file to upload. For example, if the file is in the root of the C: drive on a Windows machine, you would specify c:\upload.file. The --key parameter provides the key name for the object in the S3 Bucket.

  • Check the S3 Bucket in the AWS console; the file should be uploaded. Alternatively, verify from the CLI as shown below.
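
You can also verify from ec2-1 itself using only the permissions Dave already has (HeadObject is authorized by the s3:GetObject permission). This prints the object's metadata (size, ETag, last modified) if the upload succeeded:
aws s3api head-object --bucket eraki-s3-dev-01 --key upload.file --profile Dave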

Downloading to ec2-2

  • Log in to the ec2-2 machine as in the previous step.

  • Use the following command to download the file

💡
Ensure you have configured the AWS CLI first.
aws s3api get-object --bucket eraki-s3-dev-01 --key upload.file upload.file --profile Dave
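
If you want to be sure the downloaded copy is intact, a simple manual check is to compare its checksum with the one computed for the original file on ec2-1 (assuming both copies are named upload.file in the current directory):
# run on both machines and compare the output
md5sum upload.file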

Using a Python script

Uploading from ec2-1

  • Log in to ec2-1 using the endpoint, as we discovered before here.

  • Create a dummy file called upload.file

  • Create a Python script as follows

vim upload.py
  • Paste the content below:
import boto3

session = boto3.Session(profile_name="Dave")   # Define the profile session
s3_client = session.client('s3')     # use the profile session
object_name = "testUpload.File"  # change the name on the S3 Bucket

bucket_name = "eraki-s3-dev-01"
file_path = "/home/ec2-user/upload.file"

try:
    s3_client.upload_file(file_path, bucket_name, object_name)  # upload_file returns None and raises an exception on failure

    print(f"File '{file_path}' uploaded successfully to S3 bucket '{bucket_name}' / '{object_name}' !")
except Exception as e:
    print(f"Error uploading file: {e}")
  • Adjust the script permissions
chmod 775 upload.py
  • Install prerequisites:
# check python version
python3 --version
Python 3.9.16

# install pip3 
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py && python3 get-pip.py

# install boto3 library
pip3 install boto3
  • Run the script
python3 upload.py
  • Check the S3 console, or verify from the CLI as shown below.
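As with the s3api example earlier, you can also confirm the object exists straight from ec2-1; note the key is testUpload.File, the object name set in the script:
aws s3api head-object --bucket eraki-s3-dev-01 --key testUpload.File --profile Dave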

Downloading to ec2-2

  • Log in to ec2-2 as in the previous step

  • Create a Python script as follows

vim download.py
  • Paste the following content
import boto3

session = boto3.Session(profile_name="Dave")   # Define the profile session
s3_client = session.client('s3')
object_name = "testUpload.File"  # The name of the file in the S3 bucket you want to download

bucket_name = "eraki-s3-dev-01"
download_path = "/home/ec2-user/testUpload.File"  # Local path to save the downloaded file

try:
    s3_client.download_file(bucket_name, object_name, download_path)

    print(f"File '{object_name}' downloaded successfully from S3 bucket '{bucket_name}' to '{download_path}' !")
except Exception as e:
    print(f"Error downloading file: {e}")
  • Adjust the script permissions
chmod 775 download.py
  • Install the prerequisites
# check python version
python3 --version
Python 3.9.16

# install pip3 
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py && python3 get-pip.py

# install boto3 library
pip3 install boto3
  • Run the script as follows
python3 download.py
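  • Confirm the file landed where the script expects it (the path set in download_path):
ls -l /home/ec2-user/testUpload.File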


Destroy environment

Destroying the environment with Terraform is very simple. However, we should first delete the resources that were created manually. Follow the steps below.

Delete the VPC endpoint (Manual resource)

  • Open up the VPC console and locate Endpoints under the Virtual private cloud section

  • Delete the EC2 Instance Connect Endpoints

Delete the Access key

  • Open up the IAM console, locate the Dave user, go to the Security credentials tab, then delete the access key.
💡
Note that you should deactivate the access key before deleting it. You can also do this from the CLI, as sketched below.
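
A rough CLI equivalent, if you prefer not to use the console (it assumes your default profile has IAM permissions; the key ID below is a placeholder, use the one listed for Dave):
# list Dave's access keys to get the AccessKeyId
aws iam list-access-keys --user-name Dave

# deactivate first, then delete
aws iam update-access-key --user-name Dave --access-key-id AKIAXXXXXXXXXXXXXXXX --status Inactive
aws iam delete-access-key --user-name Dave --access-key-id AKIAXXXXXXXXXXXXXXXX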

Destroy environment using Terraform

Once you've ensured the EC2 Instance Connect Endpoints have been deleted, submit the following command:

terraform destroy -var-file="terraform-dev.tfvars"

Conclusion

By integrating with S3, organizations can gain a number of benefits, including improved data security, simplified data management, and increased operational efficiency. S3 provides a secure and scalable storage solution for MFT files, alongside the AWS CLI s3api command and the boto3 AWS library for managing file streaming.


That's it, very straightforward, very fast 🚀. I hope this article inspired you, and I would appreciate your feedback. Thank you.
