Custom Query (7937 matches)
Results (151 - 153 of 7937)

Ticket Resolution Summary Owner Reporter
#11620 rejected Cached/displayed file size wrong after reboot - causes resume to start at wrong place with larger files Michael Shepard

When downloading 100 GB backup files, the overwrite/resume prompt window shows old (cached?) size information for the local files after an unexpected reboot interrupts a long download (e.g. a forced Windows Update reboot).

The Local site file listing also shows old data. Pressing F5 repeatedly sometimes refreshes the window to the current file size. Without knowing the listing has to be refreshed, FileZilla will redownload large parts of the file.

Event sequence: start downloads at 9 am after the backup finishes. The file download progresses to 80 GB. An unexpected reboot happens at night. After restarting the machine and reloading FileZilla, the Local site listing and the overwrite/resume window show the file progress at 50 GB (the last server reconnect, perhaps?) and the transfer resumes from the 50 GB point. Windows Explorer shows the file was 70 GB before the resume.
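One way to avoid resuming from a stale offset is to stat() the local file immediately before building the resume request, instead of trusting a cached directory listing. A minimal sketch, assuming a hypothetical `resume_offset` helper (this is not FileZilla's actual code):

```python
import os

def resume_offset(local_path):
    """Return a safe resume offset by stat()ing the file on disk,
    rather than trusting a cached directory listing that may be
    stale after an unexpected reboot.

    Illustrative sketch only -- not FileZilla's implementation.
    """
    try:
        return os.path.getsize(local_path)  # fresh size from the filesystem
    except OSError:
        return 0  # file missing or unreadable: restart from the beginning
```

With this approach, a file that is 70 GB on disk would resume at the 70 GB mark even if the cached listing still says 50 GB.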

Thanks for the great tool!

From About:

FileZilla Client

Version: 3.33.0

Build information:

Compiled for: x86_64-w64-mingw32
Compiled on: x86_64-pc-linux-gnu
Build date: 2018-05-07
Compiled with: x86_64-w64-mingw32-gcc (GCC) 6.3.0 20170516
Compiler flags: -g -O2 -Wall

Linked against:

wxWidgets: 3.0.5
SQLite: 3.22.0
GnuTLS: 3.5.18

Operating system:

Name: Windows 10 (build 16299), 64-bit edition
Version: 10.0
Platform: 64-bit system
CPU features: sse sse2 sse3 ssse3 sse4.1 sse4.2 avx aes pclmulqdq rdrnd
Settings dir: C:\Users\media\AppData\Roaming\FileZilla\

#11621 rejected Faster downloads online with simultaneous download connections of large files divided into chunks Michael Shepard

When downloading over the internet (not from a local network server), users face various limiters that cap the speed of each individual connection: the host website/server, ISPs, company routers, traffic shapers, shared Wi-Fi access points.

Typically, when downloading backups, we find FileZilla quickly works through all the small files using up to 10 connections (Settings > Transfers > Maximum simultaneous transfers), then sits for hours using only one connection per file on the last (say, two) large files, because each single connection is rate-limited externally.

Suggestion: divide files larger than x (a new setting) into chunks. Divide the file size by the Maximum simultaneous transfers value into even chunks and place each chunk in the transfer queue as a separate item, processed like a regular file download. Use the existing preallocate-disk-space feature to set up the file for writing, or stitch the chunks together afterwards.

Multiple connections would then download different parts of the same file. If other files in the queue have a higher priority, a connection starts on the next chunk of the large file only once it runs out of regular files and becomes idle. Alternatively, the user can use the existing priority feature to go after the large file first.

Example: if the file is 100 GB and there are 10 maximum simultaneous connections, connection #2 downloads from the 10 GB mark to the 20 GB mark, and so on.
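The chunk split described above can be sketched as follows. This is only an illustration of the suggestion: `chunk_ranges`, `max_connections`, and `min_chunk_size` are hypothetical names, not existing FileZilla settings:

```python
def chunk_ranges(file_size, max_connections, min_chunk_size):
    """Split a file into [start, end) byte ranges, one per connection.

    Files smaller than min_chunk_size (the proposed "larger than x"
    setting) are left as a single range. Illustrative sketch only.
    """
    if file_size < min_chunk_size:
        return [(0, file_size)]
    base = file_size // max_connections
    ranges = []
    start = 0
    for i in range(max_connections):
        # The last chunk absorbs any remainder from uneven division.
        end = file_size if i == max_connections - 1 else start + base
        ranges.append((start, end))
        start = end
    return ranges
```

For a 100 GB file with 10 connections, the second range is (10 GB, 20 GB), matching the example above; each range could then be queued as a separate transfer item.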

External to FileZilla, the user would need a remote server that allows multiple simultaneous connections, which is typically already the case. Most sites we use let us download 10 files at the same time.

Please ask me if you have any questions. Thanks for the great program!

#4971 fixed Control-Tab between tabs doesn't loop back to the start after last tab Callan

When moving between tabs using Ctrl-Tab on the keyboard, focus stops at the last tab instead of wrapping back to the first, as Firefox does.
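The requested wrap-around is essentially a modulo over the tab index. A minimal sketch, with `next_tab` a hypothetical helper rather than FileZilla code:

```python
def next_tab(current, tab_count, backwards=False):
    """Return the index of the tab to focus after Ctrl-Tab
    (or Ctrl-Shift-Tab when backwards=True), wrapping around
    at either end, Firefox-style. Illustrative sketch only.
    """
    step = -1 if backwards else 1
    return (current + step) % tab_count
```

Stepping forward from the last tab returns index 0, and stepping backwards from the first tab returns the last index.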
