Introduction
In today's highly interconnected digital landscape, secure file transfer solutions are essential for businesses that need to exchange sensitive data with various parties. AWS Transfer Family provides a reliable and secure option for transferring files, but when its endpoint is exposed to the public internet, users are authenticated with only a single factor: their credentials. To address this gap, this blog post introduces a solution that adds an additional layer of security to AWS Transfer Family by integrating it with FreeRADIUS for multi-factor authentication (MFA).
Key Themes
- The need for enhanced security in file transfer solutions
- AWS Transfer Family as a secure file transfer solution
- Integrating AWS Transfer Family with FreeRADIUS for MFA
Building the Infrastructure
In this post, we will demonstrate how to create a secure file transfer solution using AWS Transfer Family, integrated with FreeRADIUS for multi-factor authentication. Readers will learn how to set up a CloudFormation stack that builds the necessary infrastructure to enable MFA when using AWS Transfer Family for file transfers.
Before diving into the details of the secure file transfer process, it helps to have a visual overview. The diagram below shows how the AWS services and components work together, step by step: from the initial user access through the AWS API Gateway to the final secure file transfer protected by multi-factor authentication. Refer to it as you read the detailed explanation that follows.
Solution Implementation Stages
Stage 1: ECR Repository Deployment
In the first stage, a CloudFormation script is deployed to create an Amazon Elastic Container Registry (ECR) repository for storing Lambda container images. This repository will be used to store and manage the Docker images that will be deployed to the Lambda function in later stages.
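If you prefer the command line to the console, the Stage 1 stack can be deployed with the AWS CLI. This is a minimal sketch: the template file name ecr-repository.yaml and the stack name are illustrative placeholders, so substitute the actual file from the repository you clone in Stage 2.
# Deploy the Stage 1 template that creates the ECR repository
# (template file name and stack name are illustrative placeholders)
aws cloudformation deploy \
  --template-file ecr-repository.yaml \
  --stack-name transfer-family-ecr \
  --region <region> --profile <your_profile>
# Retrieve the repository URI for use in Stage 2
aws ecr describe-repositories \
  --query "repositories[].repositoryUri" --output text \
  --region <region> --profile <your_profile>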
Stage 2: Building and Pushing the Lambda Docker Image
During the second stage, the Lambda Docker image is built and pushed to the ECR repository created in the previous stage. This process ensures that the latest version of the containerized Lambda function is available in the ECR repository, ready to be deployed in the final stage of the solution implementation.
Clone the repository below:
git clone https://github.ibm.com/Reza-Beykzadeh/aws-sftp.git
Make sure you have Podman installed and running on your machine. Take note of the URI of the ECR repository created in Stage 1, then run the commands below:
podman build -t transfer-family:latest --platform=linux/x86_64 .
aws ecr get-login-password --region <region> --profile <your_profile> | podman login --username AWS --password-stdin <ECR-URI>
podman tag transfer-family:latest <ECR-URI>:latest
podman push <ECR-URI>:latest
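To confirm that the push succeeded, you can list the images in the repository. This assumes the repository is named transfer-family, matching the image tag used above; adjust if your Stage 1 template used a different name.
# Verify the pushed image is visible in the ECR repository
aws ecr describe-images --repository-name transfer-family --region <region> --profile <your_profile>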
Stage 3: Transfer Family Server, API Gateway, Lambda, and CI/CD Pipeline Deployment
The third stage involves deploying a second CloudFormation script that builds the AWS Transfer Family server, API Gateway, Lambda function, and the CI/CD pipeline. This comprehensive setup enables the seamless integration of all the necessary components to provide a secure file transfer solution with multi-factor authentication.
Upon completion of these stages, users can push the Lambda container image source code to the AWS CodeCommit repository for source version control and continuous deployment. Make sure your code branch is named master. This approach ensures that the latest updates to the Lambda function are automatically integrated into the secure file transfer solution, further enhancing its security and reliability.
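As a sketch of that push, assuming the AWS CodeCommit credential helper is available through the AWS CLI and that <CodeCommitRepoName> matches the CodeCommitRepoName parameter from the stack:
# Configure Git to authenticate to CodeCommit through the AWS CLI
git config --global credential.helper '!aws codecommit credential-helper $@'
git config --global credential.UseHttpPath true
# Add the CodeCommit repository as a remote and push the master branch,
# which triggers the CI/CD pipeline
git remote add codecommit https://git-codecommit.<region>.amazonaws.com/v1/repos/<CodeCommitRepoName>
git push codecommit master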
Parameters to be filled out are listed below:
Stack Name: The name of the stack, all in lower case, because CloudFormation creates an S3 bucket from this name.
ADPassword: Password of a service account that can read the LDAP tree. The username will be passed to the Lambda function through environment variables later.
AWSTransferBucket: The S3 bucket to which the files will be transferred. This must be an existing bucket.
ArtifactBucket: Name of a bucket that stores source code artifacts.
CodeBuildImage: Leave this set to its default value.
CodeCommitRepoName: Name of the CodeCommit repository to be created.
CreateServer: Yes or No.
ImageURI: URI of the container image pushed in Stage 2 to the ECR repository created in Stage 1.
LambdaFunctionName: Name of the Lambda function to be created.
RadiusSharedSecret: The shared secret used to authenticate with the FreeRADIUS server.
SubnetId1 and 2: Subnets into which the Lambda function will be placed. Select private subnets for this purpose.
TransferSubnets1 and 2: Subnets into which the Transfer Family server will be placed. Select public subnets for this purpose.
VpcCidrBlock: The VPC CIDR Block (e.g., 10.10.10.0/24)
VPCId: Select the VPC ID
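With these parameters gathered, the Stage 3 stack can also be launched from the CLI instead of the console. This is an illustrative sketch: the template file name and the values shown are placeholders, and the remaining parameters from the list above would be supplied the same way.
# Launch the Stage 3 stack (file name and values are illustrative placeholders)
aws cloudformation create-stack \
  --stack-name transferfamilymfa \
  --template-body file://transfer-family-mfa.yaml \
  --capabilities CAPABILITY_NAMED_IAM \
  --parameters \
    ParameterKey=AWSTransferBucket,ParameterValue=<existing-bucket> \
    ParameterKey=ImageURI,ParameterValue=<ECR-URI>:latest \
    ParameterKey=CreateServer,ParameterValue=Yes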
Updating Lambda Environment Variables
After deploying the secure file transfer solution, it is important to configure the Lambda environment variables with the necessary values. These values are crucial for the proper functioning of the Lambda function: they allow it to authenticate users against Active Directory, reach the FreeRADIUS server, and verify membership in the required Active Directory group.
To update the Lambda environment variables, follow these steps:
- In the navigation pane of the Lambda console, click on "Functions".
- Locate and select the Lambda function created by the CloudFormation stack in Stage 3.
- In the "Function code" section, click on the "Configuration" tab.
- Scroll down to the "Environment variables" section and click on the "Edit" button.
- Update the environment variables with the appropriate values specific to your Active Directory, FreeRADIUS server, and Active Directory group. These variables may include, but are not limited to, connection strings, credentials, and group information.
- Click on "Save" to apply the changes.
Testing the Solution with FileZilla
Once the secure file transfer solution is deployed, it is essential to verify its functionality.
To test the solution using FileZilla, follow these steps:
- Launch FileZilla and click on the "Site Manager" icon in the top-left corner or go to "File" > "Site Manager" in the menu.
- In the "Site Manager" window, click on "New Site" to create a new connection profile.
- Enter the AWS Transfer Family server's endpoint in the "Host" field and select "SFTP - SSH File Transfer Protocol" as the protocol.
- Set the "Logon Type" to "Normal" and enter your username in the "User" field.
- In the "Password" field, input your password, followed by the 6-digit token code (e.g., StrongPassword123456, where 123456 is the token code).
- Click on "Connect" to initiate the connection.
If the multi-factor authentication process is successful, FileZilla will establish a connection to the AWS Transfer Family server, and you will have access to the Amazon S3 bucket for secure file transfers. This test confirms the solution's effectiveness and ensures that the integration of AWS Transfer Family with FreeRADIUS for multi-factor authentication is functioning correctly.
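As a quick alternative to FileZilla, you can run the same check from a terminal with the OpenSSH sftp client. The hostname below is a placeholder; use the endpoint of your Transfer Family server.
# Connect to the Transfer Family endpoint; at the password prompt, enter your
# password immediately followed by the 6-digit token (e.g., StrongPassword123456)
sftp <username>@<server-id>.server.transfer.<region>.amazonaws.com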
Cleaning Up the Solution
After testing and validating the secure file transfer solution, you might want to clean up the resources to avoid incurring unnecessary costs. To remove the resources created by the CloudFormation stacks, empty your ECR repository and S3 buckets, then delete the CloudFormation stacks in reverse order.
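For example, assuming the stack and resource names used earlier in this post (adjust them to match your deployment), the cleanup could look like this:
# Empty the buckets used by the solution (repeat for each bucket)
aws s3 rm s3://<ArtifactBucket> --recursive
# Delete the ECR repository along with any images it contains
aws ecr delete-repository --repository-name transfer-family --force
# Delete the stacks in reverse order: Stage 3 first, then Stage 1
aws cloudformation delete-stack --stack-name <stage-3-stack-name>
aws cloudformation delete-stack --stack-name <stage-1-stack-name>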
Summary
In this blog post, we have explored an innovative solution to enhance the security of AWS Transfer Family by integrating it with FreeRADIUS for multi-factor authentication. The use of a CloudFormation stack, AWS API Gateway, Lambda function, CI/CD pipeline, and Amazon ECR streamlines the process and ensures a more secure file transfer experience for users. By adopting this solution, organizations can significantly improve the security of their file transfers and protect sensitive data from unauthorized access.
Consider reviewing the official AWS documentation on AWS Transfer Family and FreeRADIUS to get started with implementing this solution in your own organization.
Reza Beykzadeh is an IBM Cloud Enterprise Solutions Architect responsible for defining the overall structure of AWS technical programs for Federal clients. Mr. Beykzadeh primarily designs functional technology solutions, oversees development and implementation of programs, and provides technical leadership as well as support to software development teams. He holds 13 AWS certifications, including SAP on AWS and Machine Learning. Mr. Beykzadeh has a strong focus on generative AI, leveraging it to innovate and optimize legacy code conversion solutions. IBM has recognized Mr. Beykzadeh as a ‘rockstar’ for his dedication to client success and his contributions to integrating GenAI with AWS solutions. He was recently named a Federal Market Circle Golden Circle Honoree, an elite group of top performing IBMers who delivered outstanding business results in 2024. Mr. Beykzadeh holds a B.A. in Information Technology from George Mason University.
Gary Zasman is a seasoned cloud architect at IBM with decades of experience in designing and implementing scalable, secure, and cost-effective solutions on AWS. With a passion for innovation and a knack for solving complex technical challenges, Gary has helped numerous organizations transform their applications and IT infrastructure to achieve their business goals.
Gary holds multiple AWS certifications, including AWS Certified Solutions Architect – Professional and AWS Certified DevOps Engineer – Professional. He is known for his deep understanding of cloud-native technologies, DevOps practices, and his ability to translate business requirements into robust technical solutions.