Changes between Version 1 and Version 2 of Ticket #1073
Timestamp: Nov 18, 2015, 4:48:22 PM (9 years ago)
Ticket #1073 – Description
v1  v2
26  26    * xfs : tested to millions and performance is not impacted
27  27    * btrfs: similar to xfs
28        * ntfs : 2^32-1theoretically (same limit as number of files in a directory)
    28    * ntfs : `2^32-1` theoretically (same limit as number of files in a directory)
29  29
30  30    Between 10,000 and 100,000 files per directory seems like a good number well supported across filesystems. If we take 100,000 on ext3 that gives us a lower limit of 3 billion tiles.
…   …
51  51    }}}
52  52
53        The subdirectory index in TILES is dir_index = tile_index / 100,000. The 100,000 number can be a compile time constant that can be adjusted as necessary. By default it is maybe better if it is 2^16 or 2^17so that the dir_index can be computed with a fast bit shift.
    53    The subdirectory index in TILES is dir_index = tile_index / 100,000. The 100,000 number can be a compile time constant that can be adjusted as necessary. By default it is maybe better if it is `2^16` or `2^17` so that the dir_index can be computed with a fast bit shift.
54  54
55  55    I would like to stay away from creating complicated tree-like schemes nesting multiple subdirectories. It's the job of the filesystem to handle this load, if we ever reach some limits with this scheme on a particular filesystem it seems very unlikely that we'll be able to work around it ourselves, without actually adapting the filesystem underneat.
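A minimal sketch in C of the power-of-two variant described in line 53. The constant name `TILES_PER_DIR_SHIFT` and the layout of the file's slot inside its subdirectory (taken here as `tile_index` modulo the per-directory count) are assumptions for illustration; the ticket itself only defines `dir_index = tile_index / 100,000` and suggests making the divisor a compile-time power of two so the division becomes a bit shift.

{{{
#include <stdint.h>
#include <stdio.h>

/* Hypothetical compile-time constant: 2^16 = 65,536 tiles per subdirectory,
 * adjustable as discussed in the ticket. */
#define TILES_PER_DIR_SHIFT 16
#define TILES_PER_DIR (1u << TILES_PER_DIR_SHIFT)

/* dir_index = tile_index / TILES_PER_DIR, computed with a bit shift
 * because TILES_PER_DIR is a power of two. */
static uint64_t dir_index(uint64_t tile_index)
{
    return tile_index >> TILES_PER_DIR_SHIFT;
}

int main(void)
{
    uint64_t tile = 3000000000ULL;  /* e.g. the 3-billionth tile */

    /* Assumed path layout: TILES/<dir_index>/<slot within that directory>. */
    printf("TILES/%llu/%llu\n",
           (unsigned long long)dir_index(tile),
           (unsigned long long)(tile % TILES_PER_DIR));
    return 0;
}
}}}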