Why Syncthing (and TCP) Slows Down Over Distance
Syncthing is an excellent sync tool—open source, private, well-engineered. But if you've ever tried to push a 50 GB folder to a device across the country, you've probably noticed the speed doesn't match your bandwidth. The bottleneck isn't Syncthing's code. It's TCP itself.
How TCP Becomes the Bottleneck
TCP (Transmission Control Protocol) is the foundation of almost everything on the internet. Web browsing, email, file downloads, streaming—all TCP. It's reliable, well-understood, and universally supported. But it was designed in the 1980s for a very different network environment, and its core mechanism creates a speed ceiling for large file transfers over distance.
The issue is TCP's acknowledgment model. TCP requires the receiver to acknowledge each batch of data before the sender sends more. On a local network with sub-millisecond latency, this round trip happens so fast it's invisible. But over the internet—especially across long distances—each round trip takes time. A transfer from New York to London has roughly 70ms of round-trip latency. New York to Sydney is 150ms or more.
That latency directly limits throughput. TCP can only keep a certain amount of data "in flight" at once, bounded by the TCP window size—so a single connection can never move data faster than window size divided by round-trip time. With a 64 KB window and 150ms round-trip time, your maximum throughput is about 3.5 Mbps—on a connection that can physically handle 1,000 Mbps. Modern TCP implementations use larger windows and window scaling, but even with aggressive tuning, a single TCP connection on a high-latency link will dramatically underperform the available bandwidth.
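The ceiling is just the window divided by the round-trip time. A few lines of Python (the function name is mine, for illustration) make the arithmetic concrete:

```python
def max_tcp_throughput_mbps(window_bytes: float, rtt_seconds: float) -> float:
    """Upper bound on single-connection TCP throughput: window / RTT."""
    return window_bytes * 8 / rtt_seconds / 1e6

# 64 KB window, 150ms round trip (roughly New York -> Sydney)
print(round(max_tcp_throughput_mbps(64 * 1024, 0.150), 1))  # 3.5 (Mbps)

# Bandwidth-delay product: the window needed to fill 1 Gbps at the same RTT
bdp_bytes = 1e9 / 8 * 0.150
print(round(bdp_bytes / 1024 / 1024, 1))  # 17.9 (MB)
```

The second figure is the bandwidth-delay product: the window a single connection would need to keep a 1 Gbps, 150ms path full—around 18 MB, nearly 300 times the classic 64 KB default.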
The Single Connection Problem
Many file sync and transfer tools open a single TCP connection between two devices. This is simple, reliable, and works well on local networks. But over the internet, that single connection becomes the bottleneck for everything.
Tools like Syncthing, rsync, and SCP all use this model. They're excellent at what they do—Syncthing in particular is a well-built, privacy-respecting sync tool that works great for keeping folders aligned across devices. But because they rely on standard TCP connections, their transfer speed falls as latency grows. The further apart your devices are, the longer each round trip and the slower the transfer, regardless of how much bandwidth you have.
You can sometimes work around this by configuring multiple parallel connections, but most tools don't make this easy or automatic. And even with multiple TCP streams, each stream individually still faces the same latency penalty.
TCP's Congestion Control Works Against You
Beyond the acknowledgment bottleneck, TCP's congestion control algorithm actively reduces your speed when it detects packet loss. On the modern internet, some packet loss is normal—it doesn't mean the network is overloaded. But TCP treats any packet loss as a congestion signal and cuts its sending rate, sometimes drastically.
This is the right behavior for shared web traffic where being a good neighbor matters. But for a dedicated file transfer where you want to use the bandwidth you're paying for, TCP's congestion response is overly conservative. A single lost packet can cause the transfer to slow down temporarily, and on links with even modest loss rates (common on international routes, satellite connections, and VPNs), the constant speed adjustments prevent the transfer from ever reaching full speed.
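One way to see the cost is the classic Mathis et al. approximation for loss-limited, Reno-style TCP: throughput ≈ (MSS / RTT) · (1.22 / √p), where p is the packet loss rate. It's a simplified model—modern CUBIC and BBR stacks fare better—but it shows how sharply even small loss rates cap a single connection:

```python
from math import sqrt

def mathis_throughput_mbps(mss_bytes: int, rtt_s: float, loss: float) -> float:
    """Mathis et al. steady-state model for loss-limited, Reno-style TCP."""
    return (mss_bytes * 8 / rtt_s) * (1.22 / sqrt(loss)) / 1e6

# 1460-byte segments on a 70ms round trip (roughly New York -> London)
for p in (0.0001, 0.001, 0.01):
    print(f"loss {p:.2%}: ~{mathis_throughput_mbps(1460, 0.070, p):.1f} Mbps")
# loss 0.01%: ~20.4 Mbps
# loss 0.10%: ~6.4 Mbps
# loss 1.00%: ~2.0 Mbps
```

Under this model, even one lost packet in ten thousand holds a Reno-style connection to ~20 Mbps on a path that could carry a gigabit.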
What Accelerated Protocols Do Differently
File transfer protocols designed for speed over distance take a fundamentally different approach. Instead of waiting for acknowledgments before sending more data, they send as fast as the network allows and handle packet loss at the application level.
UDP-based transport. Instead of TCP, accelerated protocols typically use UDP (User Datagram Protocol) as the transport layer and build their own reliability and ordering on top. UDP doesn't have TCP's acknowledgment overhead, so the sender can push data continuously without waiting. Lost packets are detected and retransmitted selectively, without slowing down the overall transfer.
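A toy sketch of that idea in Python (everything here is hypothetical and heavily simplified; two "lost" datagrams are simulated on loopback): the sender numbers each datagram and blasts them all without waiting for acknowledgments, and the receiver reports the gaps so only those get resent:

```python
import socket
import struct

def send_numbered(sock, addr, pairs, drop=frozenset()):
    """Send (seq, payload) datagrams; 'drop' simulates packets lost in transit."""
    for seq, payload in pairs:
        if seq not in drop:
            sock.sendto(struct.pack("!I", seq) + payload, addr)

def drain(sock, count, into):
    """Receive 'count' datagrams and record their payloads by sequence number."""
    for _ in range(count):
        data, _ = sock.recvfrom(2048)
        seq, = struct.unpack("!I", data[:4])
        into[seq] = data[4:]

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
receiver.settimeout(2.0)
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
addr = receiver.getsockname()

pairs = [(i, f"chunk-{i}".encode()) for i in range(8)]
got = {}

send_numbered(sender, addr, pairs, drop={2, 5})  # pretend 2 and 5 were lost
drain(receiver, 6, got)

missing = sorted(set(range(8)) - got.keys())     # the receiver's gap report
send_numbered(sender, addr, [pairs[i] for i in missing])
drain(receiver, len(missing), got)

reassembled = b"".join(got[i] for i in range(8))
sender.close()
receiver.close()
print(missing)  # [2, 5]
```

The key property: the first pass never pauses for a round trip, and the retransmission pass touches only the two missing datagrams instead of stalling the whole stream.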
Parallel chunking. Rather than sending a file as a single stream, accelerated protocols break files into many small chunks and transfer them simultaneously across multiple streams. This multiplies effective throughput and means a single slow or lost packet doesn't stall the entire transfer.
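In miniature, the chunking idea looks like this (a hypothetical sketch; the transfer step is a placeholder for a real network send): split by fixed size, move chunks on independent workers, and reassemble by index so completion order doesn't matter:

```python
from concurrent.futures import ThreadPoolExecutor

CHUNK = 4  # bytes per chunk; real protocols use tens or hundreds of KB

def split(data: bytes, size: int) -> list[bytes]:
    """Break the payload into fixed-size chunks."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def transfer(indexed):
    """Stand-in for sending one chunk over its own stream."""
    idx, chunk = indexed
    return idx, chunk

data = b"abcdefghijklmnopqrstuvwxyz"
chunks = split(data, CHUNK)

# Each chunk travels independently; results arrive keyed by index.
with ThreadPoolExecutor(max_workers=4) as pool:
    received = dict(pool.map(transfer, enumerate(chunks)))

reassembled = b"".join(received[i] for i in range(len(chunks)))
print(len(chunks), reassembled == data)  # 7 True
```

Because reassembly is by index rather than arrival order, a slow or retransmitted chunk delays only itself, not everything behind it.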
Latency-independent throughput. Because the sender doesn't wait for round-trip acknowledgments, transfer speed is determined by available bandwidth, not by distance. A transfer from New York to Sydney runs at the same speed as a transfer across town—limited only by the slowest link in the path, not by round-trip time.
Smart congestion control. Instead of TCP's blanket slowdown on any packet loss, accelerated protocols use rate-based congestion control that distinguishes between random packet loss (normal on the internet) and actual congestion. They maintain full speed through normal loss and only back off when the network is genuinely overloaded.
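A minimal sketch of rate-based control, with invented names and thresholds: the controller tolerates a background loss floor attributed to the path itself, and backs off only when measured loss climbs well above that floor—the signature of genuine congestion:

```python
class RateController:
    """Hypothetical rate-based congestion controller (illustrative only)."""

    def __init__(self, rate_mbps, loss_floor=0.02, backoff=0.85, probe=1.02):
        self.rate = rate_mbps
        self.loss_floor = loss_floor  # loss we attribute to the path itself
        self.backoff = backoff        # multiplicative decrease on congestion
        self.probe = probe            # gentle upward probing otherwise

    def on_interval(self, sent, lost, max_rate):
        loss = lost / sent if sent else 0.0
        if loss > self.loss_floor * 2:   # well above the floor: congestion
            self.rate *= self.backoff
        else:                            # normal background loss: keep going
            self.rate = min(self.rate * self.probe, max_rate)
        return self.rate

ctl = RateController(rate_mbps=100.0)

# 2% background loss for ten intervals: the controller holds full rate
for _ in range(10):
    ctl.on_interval(sent=1000, lost=20, max_rate=100.0)
print(round(ctl.rate, 1))  # 100.0

# loss spikes to 10%: the controller finally backs off
ctl.on_interval(sent=1000, lost=100, max_rate=100.0)
print(round(ctl.rate, 1))  # 85.0
```

Contrast this with standard TCP, which would have cut its rate on every one of those lossy intervals.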
The Numbers
To put this in perspective, consider transferring a 100 GB file from New York to London (70ms round trip) over a 1 Gbps connection. With a single optimally tuned TCP connection, you might sustain 200–400 Mbps. An accelerated protocol on the same link can push 800–950 Mbps—close to the physical limit. That's the difference between a transfer that takes over half an hour and one that finishes in about 14 minutes, for the same file on the same connection.
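The timings follow directly from the rates (decimal gigabytes assumed; the function name is mine):

```python
def transfer_minutes(gigabytes: float, mbps: float) -> float:
    """Wall-clock minutes to move `gigabytes` (10^9 bytes) at `mbps`."""
    return gigabytes * 8000 / mbps / 60

# 100 GB over the New York -> London link
print(round(transfer_minutes(100, 400)))  # 33  (TCP's optimistic estimate)
print(round(transfer_minutes(100, 950)))  # 14  (near line rate)
```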
The gap gets worse on harder links. Over a satellite connection with 600ms round-trip time and 2% packet loss, TCP might sustain 5–10 Mbps on a 100 Mbps link. An accelerated protocol can push 80–90 Mbps—roughly an order-of-magnitude difference.
Sync vs. Transfer: Different Jobs
It's worth being clear about what problem each tool solves. Sync tools like Syncthing are designed to keep folders identical across devices continuously. They watch for changes, transfer deltas, and resolve conflicts. They're optimized for many small, incremental updates—not for bulk transfer of large files. And they do that job very well.
Bulk file transfer is a different problem. You have 50 GB of video footage, or a machine learning dataset, or a project folder, and you need to get it from A to B as fast as possible. Here, protocol speed matters more than anything else. The tool should saturate your bandwidth, handle packet loss without slowing down, and finish the job in minutes instead of hours.
Handrive is built for the second case. Its protocol is designed from the ground up for bulk file transfer over any distance—UDP-based, parallel, latency-independent, and tolerant of packet loss. It uses the bandwidth you have, not the bandwidth TCP thinks you deserve. Combined with P2P architecture (no cloud server in the middle), end-to-end encryption, and 40+ MCP tools for AI automation, it's purpose-built for moving large files between devices.
When TCP Is Fine
Not every transfer needs an accelerated protocol. If your devices are on the same local network, TCP is effectively instant—latency is under 1ms and there's no meaningful speed penalty. If you're syncing small files or incremental changes, the protocol overhead is negligible compared to the actual work of detecting and transferring changes.
The speed difference shows up when you're moving large amounts of data over the internet, especially over long distances or on connections with packet loss. If you've ever wondered why your 1 Gbps connection only pushes 100 Mbps during a transfer, TCP is almost certainly the reason.
Use Your Full Bandwidth
Handrive's protocol is built for speed over any distance. P2P, end-to-end encrypted, free, and no file size limits.
Download Free