Opened 21 years ago
Last modified 18 years ago
#160 closed Bug report
Lockups On Large File Structures
Reported by: overcast
Owned by:
Keywords:
Cc: overcast, renhoek, booyeah451, codeine, Tim Kosse
Component version:
Operating system type:
Operating system version:
If I attempt to transfer an entire web site directory with
many directories/files nested inside, FileZilla
attempts to queue up every single file and appears to
overflow whatever buffer it has for the queue listing. Why
bother queueing up every single filename? It basically has to
go and search for every single file before even
transferring anything. So even if it didn't freeze up, I'd
have to sit there for 10 minutes while it queues.
Change History (4)
comment:1 by , 21 years ago
comment:2 by , 20 years ago
I ran into the same problem. This is a big deal if FileZilla is
going to be useful for mirroring large web sites.
comment:3 by , 20 years ago
I have a similar problem: if I have a file/dir list with about 15,000
entries it displays correctly, but causes a lockup when saving
(nothing is saved).
comment:4 by , 18 years ago
This bug report has been closed due to inactivity and has possibly
already been solved.
You can reopen this report if the issue still exists in the
latest version of FileZilla (Server).
I have to agree with this 'bug'. I tried to download a
LARGE structure with hundreds of directories, and even after
an eternity it _still_ wasn't done indexing the directories.
In my opinion, a large waste of time and bandwidth. I'd
rather have an implementation that just starts downloading
and descends into a directory when it encounters one. (This is
also CuteFTP Pro's behaviour.)
At least make it a switchable option :)
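The two strategies discussed in this ticket can be sketched in a few lines. This is a minimal illustration over a local filesystem (not FileZilla's actual code): `enumerate_then_transfer` models the reported behaviour of queueing every file before any transfer starts, while `transfer_as_encountered` models the suggested depth-first approach, where files are handled as soon as they are found and the pending list stays bounded by directory depth rather than total file count. The `transfer` function is a hypothetical placeholder for a real upload/download.

```python
import os
from collections import deque

def transfer(path):
    # Placeholder for an actual upload/download of one file.
    return path

def enumerate_then_transfer(root):
    # "Queue everything first": walk the whole tree before any work starts.
    # Memory use grows with the total number of files, which is what
    # appears to overflow on very large site structures.
    queue = deque()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            queue.append(os.path.join(dirpath, name))
    # Only now does transfer begin.
    return [transfer(path) for path in queue]

def transfer_as_encountered(root):
    # Depth-first "start immediately" style: transfer each file as it is
    # discovered, descending into directories when encountered. The stack
    # holds at most one directory listing per level of nesting.
    done = []
    stack = [root]
    while stack:
        path = stack.pop()
        if os.path.isdir(path):
            entries = sorted(os.listdir(path), reverse=True)
            stack.extend(os.path.join(path, e) for e in entries)
        else:
            done.append(transfer(path))
    return done
```

Both variants visit the same set of files; the difference is when transfers start and how much bookkeeping accumulates up front, which is the trade-off the comments above are asking for as an option.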