id,summary,reporter,owner,description,type,status,priority,component,resolution,keywords,cc,component_version,os,os_version
5526,Maximizing Transfers,Jim Robinson Jr.,,"I frequently find myself transferring very large data sets across wide-open network pipes... and unable to fully utilize any component in the path, including the client, server, or network. With 1G networks considered standard, and 10G becoming common, these systems have far more capacity than FileZilla - and virtually every other FTP client I've found - can handle. The issue is two-fold. 1. Concurrent Transfers 2. Segmented Transfers. About 3 years ago someone opened a ticket (#2762) asking for segmented transfers, and the response was that it wasted too much bandwidth. True. There is a fairly high overhead. However, when neither the client, server, nor network is even breaking a sweat... why do we care if we waste more bandwidth but get better performance? I believe that capacity has outstripped the natural (and reasonable) limitations of the software. Regarding feature 1, please consider either completely removing the concurrency restriction or making it significantly larger. I suspect that 100 is now a very reasonable MINIMUM for a hard-coded threshold, but I will rely on the opinions of folks much more knowledgeable than I am about the intricate workings of FTP. Regarding feature 2, the ability to break up large files into multiple segments could significantly improve the overall performance of FTP. Even when FileZilla is configured to transfer up to 10 files at one time, if I am transferring a single 10GB file... I only get one thread. Breaking this up into 10 (or 20... or more) segments would indeed make a difference. I would, however, also suggest a need to balance these two potentially competing features. Due to the overhead, smaller files would be slowed down by a segmented approach. A low-end cap that forces files smaller than X to be single-threaded could help. This too should be user-adjustable. Even better might be a multi-level approach. Thus, files smaller than (for example) 10MB could be set to ""always single-segment"", files between 10 and 100MB could be set to ""max 2 segments"", files between 100MB and 500MB could be set to ""max 5 segments"", and files larger than 500MB could be set to ""use as many segments as available."" Between these two features, the concurrency cap should be the sum of segments rather than files. Following this logic, if concurrency were set to 100, then 100 small files - at 1 segment each - would all be transferred simultaneously, or a single 10GB file could be broken into 100 segments... but not both. I believe that these three changes, particularly implemented together, would be a huge win for FileZilla. Thanks for listening, Jim ",Feature request,new,normal,FileZilla Client,,Concurrency Segments Capacity,,,Windows,
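
A minimal sketch of the policy the ticket describes: size-tiered segment limits plus a shared concurrency budget counted in segments rather than files. None of this is FileZilla code; the type and function names (SegmentTier, SegmentsForFile, kMaxConcurrentSegments) are hypothetical, and the size thresholds simply mirror the example values given in the request.

#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

// One tier per row: files up to max_size bytes get at most max_segments.
struct SegmentTier {
    std::uint64_t max_size;
    int max_segments;
};

// Example tiers from the ticket: <10MB single segment, 10-100MB up to 2,
// 100-500MB up to 5, larger files use as many segments as available.
const std::vector<SegmentTier> kTiers = {
    {  10ull * 1024 * 1024, 1 },
    { 100ull * 1024 * 1024, 2 },
    { 500ull * 1024 * 1024, 5 },
};

// Concurrency cap counted in segments (not files), e.g. 100.
constexpr int kMaxConcurrentSegments = 100;

// How many segments a file of the given size may open, limited by the
// segments still free in the shared budget.
int SegmentsForFile(std::uint64_t file_size, int segments_in_use)
{
    int available = kMaxConcurrentSegments - segments_in_use;
    if (available <= 0)
        return 0; // budget exhausted: the file has to wait

    for (const auto& tier : kTiers) {
        if (file_size < tier.max_size)
            return std::min(tier.max_segments, available);
    }
    // Above the largest tier: use whatever the shared budget still allows.
    return available;
}

int main()
{
    // A single 10GB transfer with an otherwise idle queue can claim the
    // whole 100-segment budget...
    std::printf("10GB file, idle queue: %d segments\n",
                SegmentsForFile(10ull * 1024 * 1024 * 1024, 0));
    // ...while a 1MB file queued when 99 segments are busy gets only one,
    // so many small files and one huge file never exceed the same cap.
    std::printf("1MB file, 99 busy:     %d segments\n",
                SegmentsForFile(1024 * 1024, 99));
}

Under these assumptions a queue of 100 small files saturates the cap at one segment each, while a lone 10GB file can be split 100 ways, which is exactly the "sum of segments, not files" accounting the ticket asks for.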