Infrastructure

Turn S3 Into a File Transfer Service With One EC2 Instance

Amazon just launched S3 Files, which lets you mount S3 buckets as NFS file systems. Combine that with Handrive running in headless mode on an EC2 instance, and you have a fully functional file transfer service backed by S3 storage—no new code required.

What S3 Files Changes

Amazon S3 Files, launched in April 2026, adds native NFS v4.1/4.2 support to S3 buckets. Instead of using the S3 object API to PUT and GET files, your EC2 instances can mount an S3 bucket as a regular file system. Files show up as files. Directories show up as directories. Standard file operations—read, write, rename, delete—just work.

Under the hood, S3 Files caches actively used data for low-latency access (around 1ms for hot data) while keeping everything on S3 storage underneath. Thousands of compute instances can mount the same bucket simultaneously.

This matters because it removes the biggest barrier to using S3 as a general-purpose file store: applications that expect a file system interface can now use S3 directly, without custom code to translate between file operations and S3 API calls.

The Architecture

The setup is straightforward. You need three things: an S3 bucket with S3 Files enabled, an EC2 instance with the bucket mounted via NFS, and Handrive running in headless mode on that instance. The data flow looks like this:

S3 bucket (storage) → S3 Files NFS mount → EC2 instance running Handrive → P2P transfer to any device

Handrive sees the mounted S3 bucket as a local directory. It indexes the files, makes them available for transfer, and handles the P2P connection to whoever needs to download them. The person on the other end uses Handrive on their own machine—desktop app, CLI, or through the REST API. Files transfer directly between the EC2 instance and the recipient, end-to-end encrypted.

What This Gives You

This combination turns S3 into a file transfer service without writing any new code. Your S3 bucket becomes the storage backend, Handrive becomes the transfer layer, and the EC2 instance ties them together.

Cheap, scalable storage. S3 Standard costs $0.023 per GB per month. S3 Standard-Infrequent Access drops to $0.0125, and S3 Glacier Instant Retrieval to $0.004. You can store a petabyte of archived media for around $4,000 per month and serve active files from the same bucket through lifecycle policies.
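As a quick sanity check on those rates, here is a small shell sketch that computes the monthly storage cost per tier. The rates are the figures quoted above; actual AWS pricing varies by region and changes over time.

```shell
# Monthly storage cost at a given per-GB rate.
# usage: tier_cost <size_in_GB> <rate_per_GB_month>
tier_cost () {
  awk -v gb="$1" -v rate="$2" 'BEGIN { printf "%.2f\n", gb * rate }'
}

tier_cost 10000   0.023    # 10 TB on S3 Standard            -> 230.00
tier_cost 10000   0.0125   # 10 TB on Standard-IA            -> 125.00
tier_cost 1000000 0.004    # 1 PB on Glacier Instant Retrieval -> 4000.00
```

Sizes here use decimal units (1 TB as 1,000 GB), which is how the article's own figures round.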

No SaaS subscription. Enterprise file transfer services charge per-user or per-GB fees that add up fast at scale. This architecture costs you S3 storage, a small EC2 instance, and AWS bandwidth. No monthly seat licenses, no vendor lock-in to a transfer platform.

End-to-end encryption. Handrive encrypts transfers between the EC2 instance and the recipient. S3 handles encryption at rest. The file never passes through a third-party server—it goes directly from your EC2 instance to the recipient's device.

AI agent automation. Handrive's 40+ MCP tools are available on the headless instance. AI agents can upload, organize, rename, move, and transfer files in automated workflows. An agent could watch for new files in a specific S3 prefix, process them, and push the results to a collaborator—all without human intervention.

Four interfaces. The headless instance exposes Handrive's CLI, REST API, MCP stdio, and MCP HTTP SSE interfaces. External systems, scripts, or agents can interact with your S3-backed file server programmatically.

Who This Is For

This architecture makes the most sense for teams that already use AWS and need to share large files with people outside their AWS environment—clients, contractors, collaborators, or other offices.

A post-production studio could keep their media library on S3, mount it on an EC2 instance, and let editors or clients pull specific files via Handrive. No need to upload to WeTransfer, generate expiring links, or worry about file size limits. The media stays in S3 where it's versioned and backed up. Handrive handles the last mile.

A data team could stage datasets on S3, then let researchers or partner organizations pull what they need via P2P. Faster than generating presigned URLs, more secure than making the bucket public, and the MCP tools mean AI pipelines can automate the whole process.

An agency could maintain a shared asset library on S3—brand files, templates, deliverables—and give clients a way to access exactly what they need without granting them AWS console access.

Setup

The full setup takes about 15 minutes. Here's the outline:

1. Create or choose an S3 bucket and enable S3 Files on it through the AWS console. This creates an NFS endpoint for the bucket.

2. Launch an EC2 instance in the same region as your bucket. A t3.medium or similar is enough for most workloads—Handrive is lightweight and S3 Files handles the caching. Make sure the instance's security group allows inbound traffic on Handrive's port. (S3 Files also supports NFS mounts on ECS and EKS, so if your team already runs containers, you can run Handrive there instead of managing a standalone instance.)

3. Mount the S3 bucket on the EC2 instance using the NFS endpoint from step 1. Add it to /etc/fstab so it persists across reboots.

4. Install and run Handrive in headless mode. Point it at the mounted directory. Handrive indexes the files and starts listening for transfer requests.
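On the instance itself, steps 3 and 4 might look like the following. The NFS endpoint hostname and the Handrive flags are placeholders, not documented values; substitute the endpoint shown in the S3 Files console and the flags from Handrive's own documentation.

```shell
# Step 3: mount the S3 Files NFS endpoint (hostname is a placeholder)
sudo mkdir -p /mnt/bucket
sudo mount -t nfs -o nfsvers=4.1 \
  my-bucket.example-s3files-endpoint.amazonaws.com:/ /mnt/bucket

# Persist the mount across reboots; _netdev waits for the network
echo 'my-bucket.example-s3files-endpoint.amazonaws.com:/ /mnt/bucket nfs nfsvers=4.1,_netdev 0 0' \
  | sudo tee -a /etc/fstab

# Step 4: start Handrive in headless mode, serving the mounted bucket
# (flag names are hypothetical; check Handrive's docs)
handrive --headless --root /mnt/bucket
```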

That's it. Anyone with Handrive on their machine can now connect and transfer files from your S3-backed server. The REST API and MCP interfaces are also live for programmatic access.

Cost Comparison

For a team storing 10 TB on S3 and transferring 2 TB per month externally, the cost breakdown looks roughly like this: S3 storage at $230/month, a t3.medium EC2 instance at ~$30/month, and AWS bandwidth at $0.09/GB for the transferred data ($184/month). Total: around $444/month.
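Those figures can be reproduced with a one-liner. Note the rounding the text uses: storage treats 10 TB as 10,000 GB, while the $184 egress figure corresponds to 2,048 GB.

```shell
# Reproduce the monthly estimate above using the article's figures
monthly=$(awk 'BEGIN {
  storage = 10000 * 0.023    # S3 Standard, 10 TB
  ec2     = 30               # t3.medium, approximate on-demand cost
  egress  = 2048 * 0.09      # data transfer out, 2 TB
  printf "%.0f", storage + ec2 + egress
}')
echo "$monthly"   # 444
```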

A managed file transfer service at the same scale typically runs $500–2,000/month depending on the provider and plan, plus per-GB transfer fees on some platforms. At higher volumes the gap widens further because S3 storage gets cheaper per GB at scale while SaaS pricing tends to stay flat or increase.

The real savings come from flexibility. You're not locked into a transfer platform's pricing model. Storage scales independently from transfer. And if your transfer volume drops one month, your bill drops with it.

Limitations

This isn't a managed service. You're responsible for the EC2 instance, security updates, and monitoring. If the instance goes down, transfers stop until it's back up (though your files are safe on S3).

S3 Files is new. NFS performance for very high-throughput workloads (multiple gigabytes per second sustained) may differ from native EBS or EFS. For most file transfer use cases this won't matter, but it's worth benchmarking if you're planning to saturate a 10 Gbps connection.

AWS bandwidth costs still apply. Data leaving AWS is metered at standard egress rates. This architecture eliminates SaaS transfer fees but not AWS bandwidth fees. For very high transfer volumes, look into AWS Direct Connect or CloudFront as a CDN layer.

The Bigger Picture

S3 Files makes S3 usable as a general-purpose file system for the first time. Handrive makes any directory transferable over P2P with encryption and AI automation. Together, they turn commodity cloud infrastructure into a file transfer service that works like the enterprise solutions—without the enterprise price tag or vendor lock-in.

If your team already uses AWS, the infrastructure is already there. The only new piece is Handrive running on an EC2 instance, and that's free.


Try It

Download Handrive and run it in headless mode on your EC2 instance. Point it at a mounted S3 bucket and you have a file transfer service in minutes.

Download Free