Mirror of https://github.com/rclone/rclone.git (synced 2025-12-11 22:14:05 +01:00)
docs: move documentation for options from docs/content into backends
In the following commit, the documentation will be autogenerated.
Params:
- **remote** = path to remote **(required)**
- **withData** = true/false to delete cached data (chunks) as well _(optional, false by default)_
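As an illustration, assuming a running rclone instance with the remote control enabled and a path of `path/to/dir` on the cached remote (both placeholders), the call could look like:

```shell
# Expire cached metadata for a directory on the cache remote,
# deleting its downloaded chunks as well (withData=true).
# Requires an rclone instance already running with --rc.
rclone rc cache/expire remote=path/to/dir withData=true
```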
### Specific options ###

Here are the command line options specific to this cloud storage
system.

#### --cache-db-path=PATH ####

Path to where the file structure metadata (DB) is stored locally. The remote
name is used as the DB file name.

**Default**: <rclone default cache path>/cache-backend/<remote name>
**Example**: /.cache/cache-backend/test-cache
#### --cache-chunk-path=PATH ####

Path to where partial file data (chunks) is stored locally. The remote
name is appended to the final path.

This config follows `--cache-db-path`. If you specify a custom
location for `--cache-db-path` and don't specify one for `--cache-chunk-path`
then `--cache-chunk-path` will use the same path as `--cache-db-path`.

**Default**: <rclone default cache path>/cache-backend/<remote name>
**Example**: /.cache/cache-backend/test-cache
#### --cache-db-purge ####

Flag to clear all the cached data for this remote on start.

**Default**: not set
#### --cache-chunk-size=SIZE ####

The size of a chunk (partial file data). Use lower numbers for slower
connections. If the chunk size is changed, any downloaded chunks will be
invalid and `--cache-chunk-path` will need to be cleared, or unexpected
EOF errors will occur.

**Default**: 5M
#### --cache-chunk-total-size=SIZE ####

The total size that the chunks can take up on the local disk. If `cache`
exceeds this value then it will start to delete the oldest chunks until
it goes under this value.

**Default**: 10G
#### --cache-chunk-clean-interval=DURATION ####

How often `cache` should perform cleanups of the chunk storage. The default
value should be fine for most people. If you find that `cache` goes over
`--cache-chunk-total-size` too often then try lowering this value to force
it to perform cleanups more often.

**Default**: 1m
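To illustrate how the three chunk settings above interact, here is a hypothetical mount (the remote name and mount point are placeholders):

```shell
# Larger 10M chunks for a fast connection, at most 20G of chunks kept
# on disk, with cleanups every 30 seconds to stay under that cap.
# Remember: changing --cache-chunk-size invalidates existing chunks,
# so clear --cache-chunk-path first.
rclone mount test-cache: /mnt/media \
  --cache-chunk-size 10M \
  --cache-chunk-total-size 20G \
  --cache-chunk-clean-interval 30s
```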
#### --cache-info-age=DURATION ####

How long to keep file structure information (directory listings, file size,
mod times etc) locally.

If all write operations are done through `cache` then you can safely make
this value very large as the cache store will also be updated in real time.

**Default**: 6h
#### --cache-read-retries=RETRIES ####

How many times to retry a read from the cache storage.

Since reading from a `cache` stream is independent from downloading file data,
readers can get to a point where there's no more data in the cache.
Most of the time this indicates a connectivity issue, where `cache` isn't
able to provide file data any more.

For really slow connections, increase this to a point where the stream is
able to provide data, but expect the experience to be very stuttery.

**Default**: 10
#### --cache-workers=WORKERS ####

How many workers should run in parallel to download chunks.

Higher values mean more parallel processing (more CPU needed) and
more concurrent requests on the cloud provider. This impacts several
aspects, like the cloud provider API limits and the stress on the hardware
that rclone runs on, but it also means that streams will be more fluid and
data will be available to readers much faster.

**Note**: If the optional Plex integration is enabled then this setting
will adapt to the type of reading performed and the value specified here
will be used as a maximum number of workers to use.

**Default**: 4
#### --cache-chunk-no-memory ####

By default, `cache` will also keep file data in RAM during streaming
to provide it to readers as fast as possible.

This transient data is evicted as soon as it is read and the number of
chunks stored doesn't exceed the number of workers. However, depending
on other settings like `--cache-chunk-size` and `--cache-workers` this
footprint can increase if there are parallel streams too (multiple files
being read at the same time).

If the hardware permits it, leave this flag unset for an overall better
performance during streaming; set it to disable the RAM cache if memory
is scarce on the local machine.

**Default**: not set
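For example, on a fast connection with limited RAM, the two settings above could be combined like this (remote name and mount point are placeholders):

```shell
# More parallel chunk downloads for a fast connection, while keeping
# memory usage down by not buffering chunks in RAM.
rclone mount test-cache: /mnt/media \
  --cache-workers 8 \
  --cache-chunk-no-memory
```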
#### --cache-rps=NUMBER ####

This setting places a hard limit on the number of requests per second that
`cache` will make to the cloud provider remote, and tries to respect that
value by inserting waits between reads.

If you find that you're getting banned or limited on the cloud provider
through `cache`, and know that a smaller number of requests per second will
allow you to work with it, then you can use this setting for that.

A good balance of all the other settings should make this setting
unnecessary, but it is available for more special cases.

**NOTE**: This will limit the number of requests during streams, but other
API calls to the cloud provider like directory listings will still pass.

**Default**: disabled
#### --cache-writes ####

If you need to read files immediately after you upload them through `cache`
you can enable this flag to have their data stored in the cache store at the
same time during upload.

**Default**: not set
#### --cache-tmp-upload-path=PATH ####

This is the path that `cache` will use as temporary storage for new files
that need to be uploaded to the cloud provider.

Specifying a value will enable this feature. Without it, it is completely
disabled and files will be uploaded directly to the cloud provider.

**Default**: empty
#### --cache-tmp-wait-time=DURATION ####

This is the duration that a file must wait in the temporary location
_cache-tmp-upload-path_ before it is selected for upload.

Note that only one file is uploaded at a time, and it can take longer to
start the upload if a queue has formed for this purpose.

**Default**: 15m
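The two temporary-upload settings above could be combined as follows; the paths are purely illustrative:

```shell
# Queue writes in a local temporary directory and trickle them to the
# provider, waiting 30 minutes before each file is picked up for upload.
rclone mount test-cache: /mnt/media \
  --cache-tmp-upload-path /tmp/rclone-upload \
  --cache-tmp-wait-time 30m
```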
#### --cache-db-wait-time=DURATION ####

Only one process can have the DB open at any one time, so rclone waits
for this duration for the DB to become available before it gives an
error.

If you set it to 0 then it will wait forever.

**Default**: 1s
<!--- autogenerated options start - edit in backend/backend.go options -->
<!--- autogenerated options stop -->