ARChive V1.7.4

- use the purge() function as part of error handling during a volume duplication. if we receive error code 33, indicating that the duplication chain is broken, we use purge() to remove all traces of the now-invalid replicant and tell ARChive to start the duplication over again.

- new event 'VOLUME DIFFERENTIAL DUPLICATE FAILED OUT-OF-SYNCH REPLICANT' to indicate the above condition. the retry will generate a new event for the same transaction_id and volume_uuid.

- limit volume duplication retries to 3; after that, we give up.
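The error-33 flow above (purge the bad replicant, emit the event, retry up to 3 times) can be sketched roughly as follows. The callback names `duplicate`, `purge`, and `emit_event` are hypothetical stand-ins, not ARChive's actual internals:

```python
MAX_RETRIES = 3
CHAIN_BROKEN = 33  # error code indicating a broken duplication chain

def duplicate_with_retry(volume_uuid, transaction_id, duplicate, purge, emit_event):
    """Attempt a volume duplication, purging and retrying on error 33.

    duplicate(volume_uuid) -> error code (0 on success)
    purge(volume_uuid)     -> remove all traces of the now-invalid replicant
    emit_event(name, ...)  -> record an event for this transaction
    """
    for attempt in range(1, MAX_RETRIES + 1):
        code = duplicate(volume_uuid)
        if code == 0:
            return True
        if code != CHAIN_BROKEN:
            raise RuntimeError(f"duplication failed with code {code}")
        # chain is broken: log the condition, purge the replicant, retry
        emit_event("VOLUME DIFFERENTIAL DUPLICATE FAILED OUT-OF-SYNCH REPLICANT",
                   transaction_id=transaction_id, volume_uuid=volume_uuid)
        purge(volume_uuid)
    return False  # gave up after MAX_RETRIES attempts
```

Each retry emits a fresh event under the same transaction_id and volume_uuid, matching the behavior described above.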

- now using the purge() function when deleting the last snapshot for a given dr_binding_uuid from the database.

- adds "--translate" to 'roan events` to translate TENANT and TARGET UUIDs from the UUID to the objects human-name. This will add time (about 3 seconds per tenant ) to the completion time for the 'roan events' command as it collects the name bindings.

- fixes the retry logic after an error 33 is caught: we now try any other snapshots found in the database, in case a long-lived snapshot can still be used as a base.
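The fallback search might look roughly like this sketch; `snapshots_for` and `try_base` are hypothetical helpers standing in for the database query and the base check:

```python
def find_usable_base(volume_uuid, snapshots_for, try_base):
    """After error 33, try each remaining snapshot in the database as a
    duplication base, newest first, in case a long-lived snapshot survives.

    snapshots_for(volume_uuid) -> list of (created_at, snapshot_uuid)
    try_base(snapshot_uuid)    -> True if the snapshot can serve as a base
    """
    for _, snap in sorted(snapshots_for(volume_uuid), reverse=True):
        if try_base(snap):
            return snap
    return None  # no usable base; a full duplication is required
```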

ARChive V1.7.3

- if Cinder cannot delete a snapshot we asked it to delete, we update the description for that volume. additionally, we delete the record for that snapshot from the database.
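That failure path can be sketched as below; `cinder_delete`, `update_description`, and `db_delete` are hypothetical stand-ins for the Cinder call and ARChive's internals:

```python
def delete_snapshot_record(snapshot_uuid, volume_uuid, cinder_delete,
                           update_description, db_delete):
    """Ask Cinder to delete a snapshot; on failure, flag the volume's
    description. Either way, drop the snapshot's database record so it
    is not reused as a duplication base.
    """
    try:
        cinder_delete(snapshot_uuid)
    except Exception as exc:
        update_description(volume_uuid,
                           f"snapshot {snapshot_uuid} delete failed: {exc}")
    db_delete(snapshot_uuid)
```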

- added a "wrapper" to the replication module to assist in adding future functionality.

- moved purge() function from replication module into the volume module.

- added site selection ("local" or "remote") to the snaptrim_wait() function.

- moved remote ceph snapshot deletion command into its own function so that it can be called from more than one location.

- added "replication purge" support for a given target: tenant, server or volume. if the given volume has a replicant that is not "in-use", it will collect all of the snapshots for that replicant, delete them, then delete the volume, and finally unlink the volume and replicant from the database.

ARChive V1.7.2

- removed "wait on creating" logic from cinder.create_volume due a chance that a volume can become stuck creating. set hard sleep of 10 seconds.

- increased delete hard sleep from 10 to 20 seconds.

- new logic to create the tables and populate replication state for a server when an instance goes for backup and there are no tables or records for that server.
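The bootstrap step could look roughly like this sketch; the table and column names here are illustrative assumptions, not ARChive's actual schema (SQLite is used only to make the example self-contained):

```python
import sqlite3

def ensure_replication_state(conn, server_uuid):
    """Create the replication-state table if it does not exist, and seed
    a record for the given server if none is present. Safe to call on
    every backup; it is a no-op once the table and record exist.
    """
    conn.execute(
        "CREATE TABLE IF NOT EXISTS replication_state ("
        "  server_uuid TEXT PRIMARY KEY,"
        "  state TEXT NOT NULL DEFAULT 'unknown')"
    )
    cur = conn.execute(
        "SELECT 1 FROM replication_state WHERE server_uuid = ?", (server_uuid,))
    if cur.fetchone() is None:
        conn.execute(
            "INSERT INTO replication_state (server_uuid, state) "
            "VALUES (?, 'unknown')", (server_uuid,))
    conn.commit()
```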