# ls slow to respond on a folder with 50,000 files

## njcwotx

I have a folder (it's a mounted volume) with approximately 50,000 files. When I do an ls, it takes a while for it to respond and start pumping out the file list. It's no big deal; I'm just curious whether a folder with this many files is behaving normally, or whether there is some tweaking that would make it a little more responsive.

----------

## krinn

 :Smile: 

```
time ls /usr -Rl | wc -l
583266

real    0m58.201s
user    0m2.838s
sys     0m6.661s

time ls /sbin -Rl | wc -l
185

real    0m0.021s
user    0m0.000s
sys     0m0.004s

time ls /etc -Rl | wc -l
2555

real    0m0.019s
user    0m0.002s
sys     0m0.018s
```

That must be why directories exist. Even without testing, it seems logical that a large number of files takes more time to handle than a directory with only a few.
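The effect is easy to reproduce on a scratch directory, without touching a real mount. A minimal sketch (the file count of 2000 is an arbitrary illustration value):

```shell
# Create a scratch directory with many empty files, then count
# the lines of a long listing, the same way as above.
dir=$(mktemp -d)
for i in $(seq 1 2000); do : > "$dir/file$i"; done

# Note: -l prints a leading "total ..." header line, so wc -l
# reports one more line than there are files (2001 here).
ls -l "$dir" | wc -l

rm -rf "$dir"
```

Wrapping either command in `time` shows how the real time grows with the entry count, as in the `/usr` vs. `/etc` comparison above.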

----------

## njcwotx

Does it take a long time to even print the first file to the screen?

What I see is:

0.00 seconds: ls

30.00 seconds later: the first file appears, and then they scroll by normally.

Obviously having the screen show 2,000 files will take less time; I'm more wondering about the 30-second pause before it even begins showing files.

Once I do it one time, I can ls again without the pause; it's only the first time that is a bear. It's probably having to load the file table and put it in memory. It's not a huge thing, and it makes sense. I just wondered if it was typical of all installs.

----------

## eccerr0r

Yes, probably due to reading everything into memory before sorting, etc. Other things to try:

Try ls without sorting: use the -U option in coreutils ls. It takes the same amount of time overall, but it should start dumping files to the screen earlier.

Try defragmentation: create a new partition on another disk and copy everything there; a fresh copy should have less fragmentation.

Try a different filesystem. Some filesystems handle large numbers of files in the same directory better than others.
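A quick way to see what -U changes is to pipe into head: with sorting disabled, ls does not have to hold and sort the full name list before printing anything. A sketch on a scratch directory (the path and count are illustration values only):

```shell
# Scratch directory with a few hundred files, for illustration.
dir=$(mktemp -d)
for i in $(seq 1 500); do : > "$dir/f$i"; done

# Default ls sorts all names first; with -U, entries come out in
# directory order, so the first few lines appear without waiting
# for the whole directory to be read and sorted.
ls -U "$dir" | head -n 3

rm -rf "$dir"
```

On a small scratch directory both variants feel instant; the difference only becomes visible at the tens-of-thousands-of-entries scale described in this thread.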

----------

## njcwotx

ls -U did the trick: no pause. Thanks!
