gcs: low level retry all operations if necessary

Google Cloud Storage doesn't normally need retries; however, certain
operations (e.g. bucket creation and removal) are rate limited and do
generate 429 errors.

Before this change the integration tests would regularly blow up with
errors from GCS rate limiting bucket creation and removal.

After this change we low-level retry all operations using the same
exponential backoff strategy as used in the Google Drive backend.
Nick Craig-Wood
2018-05-09 14:27:21 +01:00
parent 5eecbd83ee
commit 9698a2babb
2 changed files with 97 additions and 18 deletions


@@ -277,7 +277,6 @@ func (t *test) cleanFs() error {
 		remote := dir.Remote()
 		if fstest.MatchTestRemote.MatchString(remote) {
 			log.Printf("Purging %s%s", t.remote, remote)
-			time.Sleep(2500 * time.Millisecond) // sleep to rate limit bucket deletes for gcs
 			dir, err := fs.NewFs(t.remote + remote)
 			if err != nil {
 				return err