New Upstream Snapshot - golang-github-minio-dsync

Ready changes

Summary

Merged new upstream version: 3.0.1+git20191112.1.b58a578 (was: 0.0~git20170209.0.b9f7da7).

Resulting package

Built on 2022-12-18T14:42 (took 5m7s)

The resulting binary package can be installed (if you have the apt repository enabled) by running:

apt install -t fresh-snapshots golang-github-minio-dsync-dev

Lintian Result

Diff

diff --git a/.travis.yml b/.travis.yml
index 854b3db..ac3fc80 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -6,18 +6,16 @@ os:
 
 env:
 - ARCH=x86_64
-- ARCH=i686
 
 go:
-- 1.6
-- 1.7.4
+- 1.12.x
 
 script:
 - diff -au <(gofmt -d .) <(printf "")
 - go vet ./...
-- go get -u github.com/client9/misspell/cmd/misspell
+- go get github.com/client9/misspell/cmd/misspell
+- go get github.com/gordonklaus/ineffassign
 - misspell -error .
-- go get -u github.com/gordonklaus/ineffassign
 - ineffassign .
 - go test -v
 - go test -v -race
diff --git a/README.md b/README.md
index 6bc3a8f..bcca4ab 100644
--- a/README.md
+++ b/README.md
@@ -1,3 +1,5 @@
+> This project has moved to https://github.com/minio/minio/tree/master/pkg/dsync
+
 dsync [![Slack](https://slack.minio.io/slack?type=svg)](https://slack.minio.io) [![Go Report Card](https://goreportcard.com/badge/minio/minio)](https://goreportcard.com/report/minio/minio) [![codecov](https://codecov.io/gh/minio/dsync/branch/master/graph/badge.svg)](https://codecov.io/gh/minio/dsync)
 =====
 
@@ -5,20 +7,20 @@ A distributed locking and syncing package for Go.
 
 Introduction
 ------------
- 
-`dsync` is a package for doing distributed locks over a network of `n` nodes. It is designed with simplicity in mind and hence offers limited scalability (`n <= 16`). Each node will be connected to all other nodes and lock requests from any node will be broadcast to all connected nodes. A node will succeed in getting the lock if `n/2 + 1` nodes (whether or not including itself) respond positively. If the lock is acquired it can be held for as long as the client desires and needs to be released afterwards. This will cause the release to be broadcast to all nodes after which the lock becomes available again.
+
+`dsync` is a package for doing distributed locks over a network of `n` nodes. It is designed with simplicity in mind and hence offers limited scalability (`n <= 32`). Each node will be connected to all other nodes, and lock requests from any node will be broadcast to all connected nodes. A node will succeed in getting the lock if `n/2 + 1` nodes (which may or may not include itself) respond positively. Once acquired, the lock can be held for as long as the client desires and must be released afterwards; the release is then broadcast to all nodes, after which the lock becomes available again.
 
 Motivation
 ----------
 
-This package was developed for the distributed server version of [Minio Object Storage](https://minio.io/). For this we needed a distributed locking mechanism for up to 16 servers that each would be running `minio server`. The locking mechanism itself should be a reader/writer mutual exclusion lock meaning that it can be held by a single writer or an arbitrary number of readers.
+This package was developed for the distributed server version of [Minio Object Storage](https://minio.io/). For this we needed a distributed locking mechanism for up to 32 servers that each would be running `minio server`. The locking mechanism itself should be a reader/writer mutual exclusion lock meaning that it can be held by a single writer or an arbitrary number of readers.
 
 For [minio](https://minio.io/) the distributed version is started as follows (for a 6-server system):
 
 ```
-$ minio server http://server1/disk http://server2/disk http://server3/disk http://server4/disk http://server5/disk http://server6/disk 
+$ minio server http://server1/disk http://server2/disk http://server3/disk http://server4/disk http://server5/disk http://server6/disk
 ```
- 
+
 _(note that the identical command should be run on servers `server1` through to `server6`)_
 
 Design goals
@@ -33,7 +35,7 @@ Design goals
 Restrictions
 ------------
 
-* Limited scalability: up to 16 nodes.
+* Limited scalability: up to 32 nodes.
 * Fixed configuration: changes in the number and/or network names/IP addresses need a restart of all nodes in order to take effect.
 * If a down node comes up, it will not try to (re)acquire any locks that it may have held.
 * Not designed for high performance applications such as key/value stores.
@@ -41,10 +43,10 @@ Restrictions
 Performance
 -----------
 
-* Support up to a total of 7500 locks/second for maximum size of 16 nodes (consuming 10% CPU usage per server) on moderately powerful server hardware.
+* Support up to a total of 7500 locks/second for a size of 16 nodes (consuming 10% CPU usage per server) on moderately powerful server hardware.
 * Lock requests (successful) should not take longer than 1ms (provided decent network connection of 1 Gbit or more between the nodes).
 
-The tables below show detailed performance numbers. 
+The tables below show detailed performance numbers.
 
 ### Performance with varying number of nodes
 
@@ -87,22 +89,26 @@ The system can be pushed to 75K locks/sec at 50% CPU load.
 Usage
 -----
 
-### Exclusive lock 
+> NOTE: Previously `dsync.Init([]NetLocker, nodeIndex)` was used to initialize dsync; this has
+been changed to `dsync.New([]NetLocker, nodeIndex)`, which returns a `*Dsync` object to be used in
+every instance of `NewDRWMutex("test", *Dsync)`.
+
+### Exclusive lock
 
 Here is a simple example showing how to protect a single resource (drop-in replacement for `sync.Mutex`):
 
-```
+```go
 import (
-    "github.com/minio/dsync"
+	"github.com/minio/dsync/v3"
 )
 
 func lockSameResource() {
 
-    // Create distributed mutex to protect resource 'test'
-	dm := dsync.NewDRWMutex("test")
+	// Create distributed mutex to protect resource 'test'
+	dm := dsync.NewDRWMutex(context.Background(), "test", ds)
 
-	dm.Lock()
-    log.Println("first lock granted")
+	dm.Lock("lock-1", "example.go:505:lockSameResource()")
+	log.Println("first lock granted")
 
 	// Release 1st lock after 5 seconds
 	go func() {
@@ -111,10 +117,10 @@ func lockSameResource() {
 		dm.Unlock()
 	}()
 
-    // Try to acquire lock again, will block until initial lock is released
-    log.Println("about to lock same resource again...")
-	dm.Lock()
-    log.Println("second lock granted")
+	// Try to acquire lock again, will block until initial lock is released
+	log.Println("about to lock same resource again...")
+	dm.Lock("lock-1", "example.go:515:lockSameResource()")
+	log.Println("second lock granted")
 
 	time.Sleep(2 * time.Second)
 	dm.Unlock()
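
The usage example above assumes an already-initialized `ds` handle (a `*dsync.Dsync`). Below is a minimal sketch of how such a handle could be created with the v3 API shown elsewhere in this diff; `newLockClients` is a hypothetical helper returning one `dsync.NetLocker` per node, so the snippet is illustrative rather than runnable as-is:

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/minio/dsync/v3"
)

// newLockClients is assumed to return one dsync.NetLocker per node
// (for example clients similar to chaos/net-rpc-client.go).
func newLockClients() []dsync.NetLocker { /* ... */ return nil }

func main() {
	// dsync.New replaces the old dsync.Init and returns a *Dsync handle.
	ds, err := dsync.New(newLockClients())
	if err != nil {
		log.Fatalf("dsync.New failed: %v", err)
	}

	// NewDRWMutex now takes a context and the *Dsync handle.
	dm := dsync.NewDRWMutex(context.Background(), "test", ds)

	// Lock now takes a lock id and a caller-source string.
	dm.Lock("lock-1", "example.go:25:main()")
	log.Println("write lock granted")
	dm.Unlock()

	// GetLock is the timed variant added in this version: give up after 5s.
	if dm.GetLock("lock-2", "example.go:31:main()", 5*time.Second) {
		log.Println("write lock granted within timeout")
		dm.Unlock()
	}
}
```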
@@ -134,15 +140,15 @@ which gives the following output:
 
 DRWMutex also supports multiple simultaneous read locks as shown below (analogous to `sync.RWMutex`)
 
-```
+```go
 func twoReadLocksAndSingleWriteLock() {
 
-	drwm := dsync.NewDRWMutex("resource")
+	drwm := dsync.NewDRWMutex(context.Background(), "resource", ds)
 
-	drwm.RLock()
+	drwm.RLock("RLock-1", "example.go:416:twoReadLocksAndSingleWriteLock()")
 	log.Println("1st read lock acquired, waiting...")
 
-	drwm.RLock()
+	drwm.RLock("RLock-2", "example.go:420:twoReadLocksAndSingleWriteLock()")
 	log.Println("2nd read lock acquired, waiting...")
 
 	go func() {
@@ -158,9 +164,9 @@ func twoReadLocksAndSingleWriteLock() {
 	}()
 
 	log.Println("Trying to acquire write lock, waiting...")
-	drwm.Lock()
+	drwm.Lock("Lock-1", "example.go:445:twoReadLocksAndSingleWriteLock()")
 	log.Println("Write lock acquired, waiting...")
-	
+
 	time.Sleep(3 * time.Second)
 
 	drwm.Unlock()
@@ -186,7 +192,7 @@ Basic architecture
 The basic steps in the lock process are as follows:
 - broadcast lock message to all `n` nodes
 - collect all responses within certain time-out window
-  - if quorum met (minimally `n/2 + 1` responded positively) then grant lock 
+  - if quorum met (minimally `n/2 + 1` responded positively) then grant lock
   - otherwise release all underlying locks and try again after a (semi-)random delay
 - release any locks that (still) came in after the time-out window
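
A minimal sketch of the write-quorum rule from the steps above (a strict majority of the `n` nodes must respond positively):

```go
// writeQuorumMet reports whether enough nodes granted the lock:
// a strict majority of the n nodes, i.e. n/2 + 1 positive responses.
func writeQuorumMet(positive, n int) bool {
	return positive >= n/2+1
}
```

For example, with `n = 8` nodes at least 5 positive responses are required, and with `n = 16` at least 9.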
 
@@ -232,7 +238,7 @@ This table summarizes the conditions for different configurations during which t
 |    16 |          7 |             2 |           9 |
 
 (for more info see `testMultipleServersOverQuorumDownDuringLockKnownError` in [chaos.go](https://github.com/minio/dsync/blob/master/chaos/chaos.go))
- 
+
 ### Lock not available anymore
 
 This would be due to too many stale locks and/or too many servers down (total over `n/2 - 1`). The following table shows the maximum tolerable number for different node sizes:
@@ -256,7 +262,7 @@ Server side logic
 
 On the server side just the following logic needs to be added (barring some extra error checking):
 
-```
+```go
 const WriteLock = -1
 
 type lockServer struct {
@@ -280,7 +286,7 @@ func (l *lockServer) Unlock(args *LockArgs, reply *bool) error {
 	defer l.mutex.Unlock()
 	var locksHeld int64
 	if locksHeld, *reply = l.lockMap[args.Name]; !*reply { // No lock is held on the given name
-		return fmt.Errorf("Unlock attempted on an unlocked entity: %s", args.Name) 
+		return fmt.Errorf("Unlock attempted on an unlocked entity: %s", args.Name)
 	}
 	if *reply = locksHeld == WriteLock; !*reply { // Unless it is a write lock
 		return fmt.Errorf("Unlock attempted on a read locked entity: %s (%d read locks active)", args.Name, locksHeld)
@@ -292,7 +298,7 @@ func (l *lockServer) Unlock(args *LockArgs, reply *bool) error {
 
 If you also want RLock()/RUnlock() functionality, then add this as well:
 
-```
+```go
 const ReadLock = 1
 
 func (l *lockServer) RLock(args *LockArgs, reply *bool) error {
@@ -353,11 +359,11 @@ For this case it is possible to reduce the number of nodes to be contacted to fo
 
 You do however want to make sure that you have some sort of 'random' selection of which 12 out of the 16 nodes will participate in every lock. See [here](https://gist.github.com/fwessels/dbbafd537c13ec8f88b360b3a0091ac0) for some sample code that could help with this.
 
-### Scale beyond 16 nodes?
+### Scale beyond 32 nodes?
 
 Building on the previous example and depending on how resilient you want to be for outages of nodes, you can also go the other way, namely to increase the total number of nodes while keeping the number of nodes contacted per lock the same.
 
-For instance you could imagine a system of 32 nodes where only a quorom majority of `9` would be needed out of `12` nodes. Again this requires some sort of pseudo-random 'deterministic' selection of 12 nodes out of the total of 32 servers (same [example](https://gist.github.com/fwessels/dbbafd537c13ec8f88b360b3a0091ac0) as above). 
+For instance you could imagine a system of 64 nodes where only a quorum majority of `17` would be needed out of `28` nodes. Again this requires some sort of pseudo-random 'deterministic' selection of 28 nodes out of the total of 64 servers (same [example](https://gist.github.com/harshavardhana/44614a69650c9111defe3470941cdd16) as above).
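
One way such a pseudo-random but deterministic selection could be implemented (a sketch, not the code from the linked gist): hash the resource name to seed a PRNG and take the first `k` entries of a permutation of the `n` nodes, so every client independently arrives at the same subset:

```go
import (
	"hash/fnv"
	"math/rand"
)

// selectNodes deterministically picks k of n node indices for a resource,
// so all clients hashing the same name contact the same subset of nodes.
func selectNodes(resource string, n, k int) []int {
	h := fnv.New64a()
	h.Write([]byte(resource))
	r := rand.New(rand.NewSource(int64(h.Sum64())))
	return r.Perm(n)[:k]
}
```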
 
 Other techniques
 ----------------
diff --git a/chaos/.gitignore b/chaos/.gitignore
new file mode 100644
index 0000000..90bc8ec
--- /dev/null
+++ b/chaos/.gitignore
@@ -0,0 +1 @@
+chaos
\ No newline at end of file
diff --git a/chaos/chaos-server.go b/chaos/chaos-server.go
index ab64899..155a7b8 100644
--- a/chaos/chaos-server.go
+++ b/chaos/chaos-server.go
@@ -19,7 +19,6 @@ package main
 import (
 	"fmt"
 	"log"
-	"math/rand"
 	"net"
 	"net/http"
 	"net/rpc"
@@ -47,14 +46,6 @@ func startRPCServer(port int) {
 		lockMap: make(map[string][]lockRequesterInfo),
 		// timestamp: leave uninitialized for testing (set to real timestamp for actual usage)
 	}
-	go func() {
-		// Start with random sleep time, so as to avoid "synchronous checks" between servers
-		time.Sleep(time.Duration(rand.Float64() * float64(LockMaintenanceLoop)))
-		for {
-			time.Sleep(LockMaintenanceLoop)
-			locker.lockMaintenance(LockCheckValidityInterval)
-		}
-	}()
 	server.RegisterName("Dsync", locker)
 	// For some reason the registration paths need to be different (even for different server objs)
 	rpcPath := rpcPathPrefix + "-" + strconv.Itoa(port)
diff --git a/chaos/chaos.go b/chaos/chaos.go
index c93a11d..845e4a4 100644
--- a/chaos/chaos.go
+++ b/chaos/chaos.go
@@ -17,18 +17,21 @@
 package main
 
 import (
+	"context"
 	"flag"
 	"fmt"
 	"log"
 	"math/rand"
 	"os"
 	"os/exec"
+	"path"
+	"runtime"
 	"strconv"
 	"strings"
 	"sync"
 	"time"
 
-	"github.com/minio/dsync"
+	"github.com/minio/dsync/v3"
 )
 
 var (
@@ -43,9 +46,23 @@ const n = 4
 const portStart = 12345
 const rpcPathPrefix = "/dsync"
 
+func getSource() string {
+	var funcName string
+	pc, filename, lineNum, ok := runtime.Caller(2)
+	if ok {
+		filename = path.Base(filename)
+		funcName = runtime.FuncForPC(pc).Name()
+	} else {
+		filename = "<unknown>"
+		lineNum = 0
+	}
+
+	return fmt.Sprintf("[%s:%d:%s()]", filename, lineNum, funcName)
+}
+
 // testNotEnoughServersForQuorum verifies that when quorum cannot be achieved that locking will block.
 // Once another server comes up and quorum becomes possible, the lock will be granted
-func testNotEnoughServersForQuorum(wg *sync.WaitGroup) {
+func testNotEnoughServersForQuorum(wg *sync.WaitGroup, ds *dsync.Dsync) {
 
 	defer wg.Done()
 
@@ -66,10 +83,10 @@ func testNotEnoughServersForQuorum(wg *sync.WaitGroup) {
 		servers = append(servers, launchTestServers(n/2, 1)...)
 	}()
 
-	dm := dsync.NewDRWMutex("test")
+	dm := dsync.NewDRWMutex(context.Background(), "test", ds)
 
 	log.Println("Trying to acquire lock but too few servers active...")
-	dm.Lock()
+	dm.Lock("test", getSource())
 	log.Println("Acquired lock")
 
 	time.Sleep(2 * time.Second)
@@ -91,7 +108,7 @@ func testNotEnoughServersForQuorum(wg *sync.WaitGroup) {
 	}()
 
 	log.Println("Trying to acquire lock again but too few servers active...")
-	dm.Lock()
+	dm.Lock("test", getSource())
 	log.Println("Acquired lock again")
 
 	dm.Unlock()
@@ -107,16 +124,16 @@ func testNotEnoughServersForQuorum(wg *sync.WaitGroup) {
 
 // testServerGoingDown tests that a lock is granted when all servers are up, after too
 // many servers die that a new lock will block and once servers are up again, the lock is granted.
-func testServerGoingDown(wg *sync.WaitGroup) {
+func testServerGoingDown(wg *sync.WaitGroup, ds *dsync.Dsync) {
 
 	defer wg.Done()
 
 	log.Println("")
 	log.Println("**STARTING** testServerGoingDown")
 
-	dm := dsync.NewDRWMutex("test")
+	dm := dsync.NewDRWMutex(context.Background(), "test", ds)
 
-	dm.Lock()
+	dm.Lock("test", getSource())
 	log.Println("Acquired lock")
 
 	time.Sleep(100 * time.Millisecond)
@@ -142,7 +159,7 @@ func testServerGoingDown(wg *sync.WaitGroup) {
 	}()
 
 	log.Println("Trying to acquire lock...")
-	dm.Lock()
+	dm.Lock("test", getSource())
 	log.Println("Acquired lock again")
 
 	dm.Unlock()
@@ -153,7 +170,7 @@ func testServerGoingDown(wg *sync.WaitGroup) {
 
 // testServerDownDuringLock verifies that if a server goes down while a lock is held, and comes back later
 // another lock on the same name is not granted too early
-func testSingleServerOverQuorumDownDuringLock(wg *sync.WaitGroup) {
+func testSingleServerOverQuorumDownDuringLock(wg *sync.WaitGroup, ds *dsync.Dsync) {
 
 	defer wg.Done()
 
@@ -169,10 +186,10 @@ func testSingleServerOverQuorumDownDuringLock(wg *sync.WaitGroup) {
 	}
 	log.Println("Killed just enough servers to keep quorum")
 
-	dm := dsync.NewDRWMutex("test")
+	dm := dsync.NewDRWMutex(context.Background(), "test", ds)
 
 	// acquire lock
-	dm.Lock()
+	dm.Lock("test", getSource())
 	log.Println("Acquired lock")
 
 	// kill one server which will lose one active lock
@@ -196,11 +213,11 @@ func testSingleServerOverQuorumDownDuringLock(wg *sync.WaitGroup) {
 		dm.Unlock()
 	}()
 
-	dm2 := dsync.NewDRWMutex("test")
+	dm2 := dsync.NewDRWMutex(context.Background(), "test", ds)
 
 	// try to acquire same lock -- only granted after first lock released
 	log.Println("Trying to acquire new lock on same resource...")
-	dm2.Lock()
+	dm2.Lock("test", getSource())
 	log.Println("New lock granted")
 
 	// release lock
@@ -214,17 +231,17 @@ func testSingleServerOverQuorumDownDuringLock(wg *sync.WaitGroup) {
 // another lock on the same name is granted too early
 //
 // Specific deficiency: more than one lock is granted on the same (exclusive) resource
-func testMultipleServersOverQuorumDownDuringLockKnownError(wg *sync.WaitGroup) {
+func testMultipleServersOverQuorumDownDuringLockKnownError(wg *sync.WaitGroup, ds *dsync.Dsync) {
 
 	defer wg.Done()
 
 	log.Println("")
 	log.Println("**STARTING** testMultipleServersOverQuorumDownDuringLockKnownError")
 
-	dm := dsync.NewDRWMutex("test")
+	dm := dsync.NewDRWMutex(context.Background(), "test", ds)
 
 	// acquire lock
-	dm.Lock()
+	dm.Lock("test", getSource())
 	log.Println("Acquired lock")
 
 	// kill enough servers to free up enough servers to allow new quorum once restarted
@@ -250,11 +267,11 @@ func testMultipleServersOverQuorumDownDuringLockKnownError(wg *sync.WaitGroup) {
 		dm.Unlock()
 	}()
 
-	dm2 := dsync.NewDRWMutex("test")
+	dm2 := dsync.NewDRWMutex(context.Background(), "test", ds)
 
 	// try to acquire same lock -- granted once killed servers are up again
 	log.Println("Trying to acquire new lock on same resource...")
-	dm2.Lock()
+	dm2.Lock("test", getSource())
 	log.Println("New lock granted (too soon)")
 
 	time.Sleep(6 * time.Second)
@@ -266,7 +283,7 @@ func testMultipleServersOverQuorumDownDuringLockKnownError(wg *sync.WaitGroup) {
 }
 
 // testSingleStaleLock verifies that, despite a single stale lock, a new lock can still be acquired on same resource
-func testSingleStaleLock(wg *sync.WaitGroup, beforeMaintenanceKicksIn bool) {
+func testSingleStaleLock(wg *sync.WaitGroup, beforeMaintenanceKicksIn bool, ds *dsync.Dsync) {
 
 	defer wg.Done()
 
@@ -296,14 +313,14 @@ func testSingleStaleLock(wg *sync.WaitGroup, beforeMaintenanceKicksIn bool) {
 	time.Sleep(500 * time.Millisecond)
 
 	// lock on same resource can be acquired despite single server having a stale lock
-	dm := dsync.NewDRWMutex(lockName)
+	dm := dsync.NewDRWMutex(context.Background(), lockName, ds)
 
 	ch := make(chan struct{})
 
 	// try to acquire lock in separate routine (will not succeed)
 	go func() {
 		log.Println("Trying to get the lock")
-		dm.Lock()
+		dm.Lock("test", getSource())
 		ch <- struct{}{}
 	}()
 
@@ -336,7 +353,7 @@ func testSingleStaleLock(wg *sync.WaitGroup, beforeMaintenanceKicksIn bool) {
 // testMultipleStaleLocks verifies that
 // (before maintenance) multiple stale locks will prevent a new lock from being granted
 // ( after maintenance) multiple stale locks will not prevent a new lock from being granted
-func testMultipleStaleLocks(wg *sync.WaitGroup, beforeMaintenanceKicksIn bool) {
+func testMultipleStaleLocks(wg *sync.WaitGroup, beforeMaintenanceKicksIn bool, ds *dsync.Dsync) {
 
 	defer wg.Done()
 
@@ -365,14 +382,14 @@ func testMultipleStaleLocks(wg *sync.WaitGroup, beforeMaintenanceKicksIn bool) {
 	time.Sleep(500 * time.Millisecond)
 
 	// lock on same resource can not be acquired due to too many servers having a stale lock
-	dm := dsync.NewDRWMutex(lockName)
+	dm := dsync.NewDRWMutex(context.Background(), lockName, ds)
 
 	ch := make(chan struct{})
 
 	// try to acquire lock in separate routine (will not succeed)
 	go func() {
 		log.Println("Trying to get the lock")
-		dm.Lock()
+		dm.Lock("test", getSource())
 		ch <- struct{}{}
 	}()
 
@@ -404,7 +421,7 @@ func testMultipleStaleLocks(wg *sync.WaitGroup, beforeMaintenanceKicksIn bool) {
 
 // testClientThatHasLockCrashes verifies that (after a lock maintenance loop)
 // multiple stale locks will not prevent a new lock on same resource
-func testClientThatHasLockCrashes(wg *sync.WaitGroup) {
+func testClientThatHasLockCrashes(wg *sync.WaitGroup, ds *dsync.Dsync) {
 
 	defer wg.Done()
 
@@ -430,14 +447,14 @@ func testClientThatHasLockCrashes(wg *sync.WaitGroup) {
 	servers = append(servers, launchTestServers(len(servers), 1)...)
 	log.Println("Crashed server restarted")
 
-	dm := dsync.NewDRWMutex("test-stale")
+	dm := dsync.NewDRWMutex(context.Background(), "test-stale", ds)
 
 	ch := make(chan struct{})
 
 	// try to acquire lock in separate routine (will not succeed)
 	go func() {
 		log.Println("Trying to get the lock again")
-		dm.Lock()
+		dm.Lock("test", getSource())
 		ch <- struct{}{}
 	}()
 
@@ -455,7 +472,7 @@ func testClientThatHasLockCrashes(wg *sync.WaitGroup) {
 }
 
 // Same as testClientThatHasLockCrashes but with two clients having read locks
-func testTwoClientsThatHaveReadLocksCrash(wg *sync.WaitGroup) {
+func testTwoClientsThatHaveReadLocksCrash(wg *sync.WaitGroup, ds *dsync.Dsync) {
 
 	defer wg.Done()
 
@@ -483,14 +500,14 @@ func testTwoClientsThatHaveReadLocksCrash(wg *sync.WaitGroup) {
 	servers = append(servers, launchTestServers(len(servers), 2)...)
 	log.Println("Crashed servers restarted")
 
-	dm := dsync.NewDRWMutex("test-stale")
+	dm := dsync.NewDRWMutex(context.Background(), "test-stale", ds)
 
 	ch := make(chan struct{})
 
 	// try to acquire lock in separate routine (will not succeed)
 	go func() {
 		log.Println("Trying to get the lock again")
-		dm.Lock()
+		dm.Lock("test", getSource())
 		ch <- struct{}{}
 	}()
 
@@ -508,8 +525,8 @@ func testTwoClientsThatHaveReadLocksCrash(wg *sync.WaitGroup) {
 }
 
 type RWLocker interface {
-	Lock()
-	RLock()
+	Lock(id, source string)
+	RLock(id, source string)
 	Unlock()
 	RUnlock()
 }
@@ -519,29 +536,29 @@ type DRWMutexNoWriterStarvation struct {
 	rw   *dsync.DRWMutex
 }
 
-func NewDRWMutexNoWriterStarvation(name string) *DRWMutexNoWriterStarvation {
+func NewDRWMutexNoWriterStarvation(name string, ds *dsync.Dsync) *DRWMutexNoWriterStarvation {
 	return &DRWMutexNoWriterStarvation{
-		excl: dsync.NewDRWMutex(name + "-excl-no-writer-starvation"),
-		rw:   dsync.NewDRWMutex(name),
+		excl: dsync.NewDRWMutex(context.Background(), name+"-excl-no-writer-starvation", ds),
+		rw:   dsync.NewDRWMutex(context.Background(), name, ds),
 	}
 }
 
-func (d *DRWMutexNoWriterStarvation) Lock() {
-	d.excl.Lock()
+func (d *DRWMutexNoWriterStarvation) Lock(id, source string) {
+	d.excl.Lock(id+"-excl-no-writer-starvation", source)
 	defer d.excl.Unlock()
 
-	d.rw.Lock()
+	d.rw.Lock(id, source)
 }
 
 func (d *DRWMutexNoWriterStarvation) Unlock() {
 	d.rw.Unlock()
 }
 
-func (d *DRWMutexNoWriterStarvation) RLock() {
-	d.excl.Lock()
+func (d *DRWMutexNoWriterStarvation) RLock(id, source string) {
+	d.excl.Lock(id+"-excl-no-writer-starvation", source)
 	defer d.excl.Unlock()
 
-	d.rw.RLock()
+	d.rw.RLock(id, source)
 }
 
 func (d *DRWMutexNoWriterStarvation) RUnlock() {
@@ -550,7 +567,7 @@ func (d *DRWMutexNoWriterStarvation) RUnlock() {
 
 // testWriterStarvation tests that a separate implementation using a pair
 // of two DRWMutexes can prevent writer starvation (due to too many read locks)
-func testWriterStarvation(wg *sync.WaitGroup, noWriterStarvation bool) {
+func testWriterStarvation(wg *sync.WaitGroup, noWriterStarvation bool, ds *dsync.Dsync) {
 
 	defer wg.Done()
 
@@ -561,12 +578,12 @@ func testWriterStarvation(wg *sync.WaitGroup, noWriterStarvation bool) {
 
 	var m RWLocker
 	if noWriterStarvation {
-		m = NewDRWMutexNoWriterStarvation("test") // sync.RWMutex{} behaves identical
+		m = NewDRWMutexNoWriterStarvation("test", ds) // sync.RWMutex{} behaves identical
 	} else {
-		m = dsync.NewDRWMutex("test")
+		m = dsync.NewDRWMutex(context.Background(), "test", ds)
 	}
 
-	m.RLock()
+	m.RLock("test", getSource())
 	log.Println("Acquired (1st) read lock")
 
 	wgReadLocks := sync.WaitGroup{}
@@ -584,7 +601,7 @@ func testWriterStarvation(wg *sync.WaitGroup, noWriterStarvation bool) {
 		defer wgReadLocks.Done()
 		time.Sleep(10 * time.Millisecond)
 		log.Println("About to acquire (second) read lock")
-		m.RLock()
+		m.RLock("RLock", getSource())
 		log.Println("Acquired (2nd) read lock")
 		time.Sleep(2 * time.Second)
 		m.RUnlock()
@@ -592,7 +609,7 @@ func testWriterStarvation(wg *sync.WaitGroup, noWriterStarvation bool) {
 	}()
 
 	log.Println("About to acquire write lock")
-	m.Lock()
+	m.Lock("Lock", getSource())
 	log.Println("Acquired write lock")
 	time.Sleep(2 * time.Second)
 	m.Unlock()
@@ -617,22 +634,6 @@ func testWriterStarvation(wg *sync.WaitGroup, noWriterStarvation bool) {
 	}
 }
 
-func getSelfNode(rpcClnts []dsync.NetLocker, port int) int {
-
-	index := -1
-	for i, c := range rpcClnts {
-		p, _ := strconv.Atoi(strings.Split(c.ServerAddr(), ":")[1])
-		if port == p {
-			if index == -1 {
-				index = i
-			} else {
-				panic("More than one port found")
-			}
-		}
-	}
-	return index
-}
-
 func main() {
 
 	rand.Seed(time.Now().UTC().UnixNano())
@@ -649,7 +650,8 @@ func main() {
 					clnts = append(clnts, newClient(fmt.Sprintf("127.0.0.1:%d", portStart+i), rpcPathPrefix+"-"+strconv.Itoa(portStart+i)))
 				}
 
-				if err := dsync.Init(clnts, getSelfNode(clnts, *portFlag)); err != nil {
+				ds, err := dsync.New(clnts)
+				if err != nil {
 					log.Fatalf("set nodes failed with %v", err)
 				}
 
@@ -657,13 +659,13 @@ func main() {
 				time.Sleep(100 * time.Millisecond)
 
 				if *writeLockFlag != "" {
-					lock := dsync.NewDRWMutex(*writeLockFlag)
-					lock.Lock()
+					lock := dsync.NewDRWMutex(context.Background(), *writeLockFlag, ds)
+					lock.Lock(*writeLockFlag, getSource())
 					log.Println("Acquired write lock:", *writeLockFlag, "(never to be released)")
 				}
 				if *readLockFlag != "" {
-					lock := dsync.NewDRWMutex(*readLockFlag)
-					lock.RLock()
+					lock := dsync.NewDRWMutex(context.Background(), *readLockFlag, ds)
+					lock.RLock(*readLockFlag, getSource())
 					log.Println("Acquired read lock:", *readLockFlag, "(never to be released)")
 				}
 
@@ -697,7 +699,8 @@ func main() {
 	}
 
 	// This process serves as the first server
-	if err := dsync.Init(clnts, getSelfNode(clnts, *portFlag)); err != nil {
+	ds, err := dsync.New(clnts)
+	if err != nil {
 		log.Fatalf("set nodes failed with %v", err)
 	}
 
@@ -706,57 +709,57 @@ func main() {
 	wg := sync.WaitGroup{}
 
 	wg.Add(1)
-	go testNotEnoughServersForQuorum(&wg)
+	go testNotEnoughServersForQuorum(&wg, ds)
 	wg.Wait()
 
 	wg.Add(1)
-	go testServerGoingDown(&wg)
+	go testServerGoingDown(&wg, ds)
 	wg.Wait()
 
 	wg.Add(1)
-	testSingleServerOverQuorumDownDuringLock(&wg)
+	testSingleServerOverQuorumDownDuringLock(&wg, ds)
 	wg.Wait()
 
 	wg.Add(1)
-	testMultipleServersOverQuorumDownDuringLockKnownError(&wg)
+	testMultipleServersOverQuorumDownDuringLockKnownError(&wg, ds)
 	wg.Wait()
 
 	wg.Add(1)
-	testClientThatHasLockCrashes(&wg)
+	testClientThatHasLockCrashes(&wg, ds)
 	wg.Wait()
 
 	wg.Add(1)
-	testTwoClientsThatHaveReadLocksCrash(&wg)
+	testTwoClientsThatHaveReadLocksCrash(&wg, ds)
 	wg.Wait()
 
 	wg.Add(1)
 	beforeMaintenanceKicksIn := true
-	testSingleStaleLock(&wg, beforeMaintenanceKicksIn)
+	testSingleStaleLock(&wg, beforeMaintenanceKicksIn, ds)
 	wg.Wait()
 
 	wg.Add(1)
 	beforeMaintenanceKicksIn = false
-	testSingleStaleLock(&wg, beforeMaintenanceKicksIn)
+	testSingleStaleLock(&wg, beforeMaintenanceKicksIn, ds)
 	wg.Wait()
 
 	wg.Add(1)
 	beforeMaintenanceKicksIn = true
-	testMultipleStaleLocks(&wg, beforeMaintenanceKicksIn)
+	testMultipleStaleLocks(&wg, beforeMaintenanceKicksIn, ds)
 	wg.Wait()
 
 	wg.Add(1)
 	beforeMaintenanceKicksIn = false
-	testMultipleStaleLocks(&wg, beforeMaintenanceKicksIn)
+	testMultipleStaleLocks(&wg, beforeMaintenanceKicksIn, ds)
 	wg.Wait()
 
 	wg.Add(1)
 	noWriterStarvation := true
-	testWriterStarvation(&wg, noWriterStarvation)
+	testWriterStarvation(&wg, noWriterStarvation, ds)
 	wg.Wait()
 
 	wg.Add(1)
 	noWriterStarvation = false
-	testWriterStarvation(&wg, noWriterStarvation)
+	testWriterStarvation(&wg, noWriterStarvation, ds)
 	wg.Wait()
 
 	// Kill any launched processes
diff --git a/chaos/lock-rpc-server.go b/chaos/lock-rpc-server.go
index 5e444cb..3a6333b 100644
--- a/chaos/lock-rpc-server.go
+++ b/chaos/lock-rpc-server.go
@@ -22,13 +22,13 @@ import (
 	"sync"
 	"time"
 
-	"github.com/minio/dsync"
+	"github.com/minio/dsync/v3"
 )
 
 type lockRequesterInfo struct {
 	writer        bool      // Bool whether write or read lock
 	serverAddr    string    // Network address of client claiming lock
-	resource      string    // RPC path of client claiming lock
+	rpcPath       string    // RPC path of client claiming lock
 	uid           string    // Uid to uniquely identify request of client
 	timestamp     time.Time // Timestamp set at the time of initialization
 	timeLastCheck time.Time // Timestamp for last check of validity of lock
@@ -53,8 +53,6 @@ func (l *lockServer) Lock(args *dsync.LockArgs, reply *bool) error {
 		l.lockMap[args.Resource] = []lockRequesterInfo{
 			{
 				writer:        true,
-				serverAddr:    args.ServerAddr,
-				resource:      args.Resource,
 				uid:           args.UID,
 				timestamp:     time.Now().UTC(),
 				timeLastCheck: time.Now().UTC(),
@@ -88,8 +86,6 @@ func (l *lockServer) RLock(args *dsync.LockArgs, reply *bool) error {
 	defer l.mutex.Unlock()
 	lrInfo := lockRequesterInfo{
 		writer:        false,
-		serverAddr:    args.ServerAddr,
-		resource:      args.Resource,
 		uid:           args.UID,
 		timestamp:     time.Now().UTC(),
 		timeLastCheck: time.Now().UTC(),
@@ -122,39 +118,6 @@ func (l *lockServer) RUnlock(args *dsync.LockArgs, reply *bool) error {
 	return nil
 }
 
-// ForceUnlock - rpc handler for force unlock operation.
-func (l *lockServer) ForceUnlock(args *dsync.LockArgs, reply *bool) error {
-	l.mutex.Lock()
-	defer l.mutex.Unlock()
-	if len(args.UID) != 0 {
-		return fmt.Errorf("ForceUnlock called with non-empty UID: %s", args.UID)
-	}
-	if _, ok := l.lockMap[args.Resource]; ok { // Only clear lock when set
-		delete(l.lockMap, args.Resource) // Remove the lock (irrespective of write or read lock)
-	}
-	*reply = true
-	return nil
-}
-
-// Expired - rpc handler for expired lock status.
-func (l *lockServer) Expired(args *dsync.LockArgs, reply *bool) error {
-	l.mutex.Lock()
-	defer l.mutex.Unlock()
-	if lri, ok := l.lockMap[args.Resource]; ok {
-		// Check whether uid is still active for this name
-		for _, entry := range lri {
-			if entry.uid == args.UID {
-				*reply = false // When uid found, lock is still active so return not expired
-				return nil
-			}
-		}
-	}
-	// When we get here, lock is no longer active due to either args.Resource being absent from map
-	// or uid not found for given args.Resource
-	*reply = true
-	return nil
-}
-
 // removeEntry either, based on the uid of the lock message, removes a single entry from the
 // lockRequesterInfo array or the whole array from the map (in case of a write lock or last read lock)
 func (l *lockServer) removeEntry(name, uid string, lri *[]lockRequesterInfo) bool {
@@ -213,42 +176,3 @@ func getLongLivedLocks(m map[string][]lockRequesterInfo, interval time.Duration)
 
 	return rslt
 }
-
-// lockMaintenance loops over locks that have been active for some time and checks back
-// with the original server whether it is still alive or not
-//
-// Following logic inside ignores the errors generated for Dsync.Active operation.
-// - server at client down
-// - some network error (and server is up normally)
-//
-// We will ignore the error, and we will retry later to get a resolve on this lock
-func (l *lockServer) lockMaintenance(interval time.Duration) {
-	l.mutex.Lock()
-	// Get list of long lived locks to check for staleness.
-	nlripLongLived := getLongLivedLocks(l.lockMap, interval)
-	l.mutex.Unlock()
-
-	// Validate if long lived locks are indeed clean.
-	for _, nlrip := range nlripLongLived {
-		// Initialize client based on the long live locks.
-		c := newClient(nlrip.lri.serverAddr, nlrip.lri.resource)
-
-		var expired bool
-
-		// Call back to original server to verify whether the lock is still active (based on name & uid)
-		// We will ignore any errors (see above for reasons), such locks will be retried later to get resolved
-		c.Call("Dsync.Expired", &dsync.LockArgs{
-			Resource: nlrip.name,
-			UID:      nlrip.lri.uid,
-		}, &expired)
-		c.Close()
-
-		if expired {
-			// The lock is no longer active at server that originated the lock
-			// So remove the lock from the map.
-			l.mutex.Lock()
-			l.removeEntryIfExists(nlrip) // Purge the stale entry if it exists.
-			l.mutex.Unlock()
-		}
-	}
-}
diff --git a/chaos/net-rpc-client.go b/chaos/net-rpc-client.go
index 486df5b..bef0ff0 100644
--- a/chaos/net-rpc-client.go
+++ b/chaos/net-rpc-client.go
@@ -20,7 +20,7 @@ import (
 	"net/rpc"
 	"sync"
 
-	"github.com/minio/dsync"
+	"github.com/minio/dsync/v3"
 )
 
 // ReconnectRPCClient is a wrapper type for rpc.Client which provides reconnect on first failure.
@@ -101,15 +101,6 @@ func (rpcClient *ReconnectRPCClient) Unlock(args dsync.LockArgs) (status bool, e
 	return status, err
 }
 
-func (rpcClient *ReconnectRPCClient) ForceUnlock(args dsync.LockArgs) (status bool, err error) {
-	err = rpcClient.Call("Dsync.ForceUnlock", &args, &status)
-	return status, err
-}
-
-func (rpcClient *ReconnectRPCClient) ServerAddr() string {
-	return rpcClient.addr
-}
-
-func (rpcClient *ReconnectRPCClient) ServiceEndpoint() string {
-	return rpcClient.endpoint
+func (rpcClient *ReconnectRPCClient) String() string {
+	return "http://" + rpcClient.addr + "/" + rpcClient.endpoint
 }
diff --git a/debian/changelog b/debian/changelog
index c6c857d..515c45a 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,9 @@
+golang-github-minio-dsync (3.0.1+git20191112.1.b58a578-1) UNRELEASED; urgency=low
+
+  * New upstream snapshot.
+
+ -- Debian Janitor <janitor@jelmer.uk>  Sun, 18 Dec 2022 14:39:04 -0000
+
 golang-github-minio-dsync (0.0~git20170209.0.b9f7da7-3) unstable; urgency=medium
 
   [ Debian Janitor ]
diff --git a/drwmutex.go b/drwmutex.go
index 84a082b..64e9632 100644
--- a/drwmutex.go
+++ b/drwmutex.go
@@ -17,13 +17,13 @@
 package dsync
 
 import (
-	cryptorand "crypto/rand"
+	"context"
 	"fmt"
 	golog "log"
-	"math"
 	"math/rand"
-	"net"
 	"os"
+	"path"
+	"runtime"
 	"sync"
 	"time"
 )
@@ -32,8 +32,9 @@ import (
 var dsyncLog bool
 
 func init() {
-	// Check for DSYNC_LOG env variable, if set logging will be enabled for failed RPC operations.
+	// Check for DSYNC_LOG env variable, if set logging will be enabled for failed REST operations.
 	dsyncLog = os.Getenv("DSYNC_LOG") == "1"
+	rand.Seed(time.Now().UnixNano())
 }
 
 func log(msg ...interface{}) {
@@ -43,7 +44,8 @@ func log(msg ...interface{}) {
 }
 
 // DRWMutexAcquireTimeout - tolerance limit to wait for lock acquisition before.
-const DRWMutexAcquireTimeout = 25 * time.Millisecond // 25ms.
+const DRWMutexAcquireTimeout = 1 * time.Second // 1 second.
+const drwMutexInfinite = time.Duration(1<<63 - 1)
 
 // A DRWMutex is a distributed mutual exclusion lock.
 type DRWMutex struct {
@@ -51,25 +53,31 @@ type DRWMutex struct {
 	writeLocks   []string   // Array of nodes that granted a write lock
 	readersLocks [][]string // Array of array of nodes that granted reader locks
 	m            sync.Mutex // Mutex to prevent multiple simultaneous locks from this node
+	clnt         *Dsync
+	ctx          context.Context
 }
 
+// Granted - represents a structure of a granted lock.
 type Granted struct {
 	index   int
-	lockUid string // Locked if set with UID string, unlocked if empty
+	lockUID string // Locked if set with UID string, unlocked if empty
 }
 
 func (g *Granted) isLocked() bool {
-	return isLocked(g.lockUid)
+	return isLocked(g.lockUID)
 }
 
 func isLocked(uid string) bool {
 	return len(uid) > 0
 }
 
-func NewDRWMutex(name string) *DRWMutex {
+// NewDRWMutex - initializes a new dsync RW mutex.
+func NewDRWMutex(ctx context.Context, name string, clnt *Dsync) *DRWMutex {
 	return &DRWMutex{
 		Name:       name,
-		writeLocks: make([]string, dnodeCount),
+		writeLocks: make([]string, clnt.dNodeCount),
+		clnt:       clnt,
+		ctx:        ctx,
 	}
 }
 
@@ -77,90 +85,111 @@ func NewDRWMutex(name string) *DRWMutex {
 //
 // If the lock is already in use, the calling go routine
 // blocks until the mutex is available.
-func (dm *DRWMutex) Lock() {
+func (dm *DRWMutex) Lock(id, source string) {
 
 	isReadLock := false
-	dm.lockBlocking(isReadLock)
+	dm.lockBlocking(drwMutexInfinite, id, source, isReadLock)
+}
+
+// GetLock tries to get a write lock on dm before the timeout elapses.
+//
+// If the lock is already in use, the calling go routine
+// blocks until either the mutex becomes available and return success or
+// more time has passed than the timeout value and return false.
+func (dm *DRWMutex) GetLock(id, source string, timeout time.Duration) (locked bool) {
+
+	isReadLock := false
+	return dm.lockBlocking(timeout, id, source, isReadLock)
 }
 
 // RLock holds a read lock on dm.
 //
-// If one or more read lock are already in use, it will grant another lock.
+// If one or more read locks are already in use, it will grant another lock.
 // Otherwise the calling go routine blocks until the mutex is available.
-func (dm *DRWMutex) RLock() {
+func (dm *DRWMutex) RLock(id, source string) {
 
 	isReadLock := true
-	dm.lockBlocking(isReadLock)
+	dm.lockBlocking(drwMutexInfinite, id, source, isReadLock)
 }
 
-// lockBlocking will acquire either a read or a write lock
+// GetRLock tries to get a read lock on dm before the timeout elapses.
 //
-// The call will block until the lock is granted using a built-in
-// timing randomized back-off algorithm to try again until successful
-func (dm *DRWMutex) lockBlocking(isReadLock bool) {
+// If one or more read locks are already in use, it will grant another lock.
+// Otherwise the calling go routine blocks until either the mutex becomes
+// available and return success or more time has passed than the timeout
+// value and return false.
+func (dm *DRWMutex) GetRLock(id, source string, timeout time.Duration) (locked bool) {
+
+	isReadLock := true
+	return dm.lockBlocking(timeout, id, source, isReadLock)
+}
 
-	runs, backOff := 1, 1
+// lockBlocking will try to acquire either a read or a write lock
+//
+// The function will loop using a built-in timing randomized back-off
+// algorithm until either the lock is acquired successfully or more
+// time has elapsed than the timeout value.
+func (dm *DRWMutex) lockBlocking(timeout time.Duration, id, source string, isReadLock bool) (locked bool) {
+	doneCh, start := make(chan struct{}), time.Now().UTC()
+	defer close(doneCh)
+
+	// Use incremental back-off algorithm for repeated attempts to acquire the lock
+	for range newRetryTimerSimple(doneCh) {
+		select {
+		case <-dm.ctx.Done():
+			break
+		default:
+		}
 
-	for {
-		// create temp array on stack
-		locks := make([]string, dnodeCount)
+		// Create temp array on stack.
+		locks := make([]string, dm.clnt.dNodeCount)
 
-		// try to acquire the lock
-		success := lock(clnts, &locks, dm.Name, isReadLock)
+		// Try to acquire the lock.
+		success := lock(dm.clnt, &locks, dm.Name, id, source, isReadLock)
 		if success {
 			dm.m.Lock()
 			defer dm.m.Unlock()
 
-			// if success, copy array to object
+			// If success, copy array to object
 			if isReadLock {
-				// append new array of strings at the end
-				dm.readersLocks = append(dm.readersLocks, make([]string, dnodeCount))
+				// Append new array of strings at the end
+				dm.readersLocks = append(dm.readersLocks, make([]string, dm.clnt.dNodeCount))
 				// and copy stack array into last spot
 				copy(dm.readersLocks[len(dm.readersLocks)-1], locks[:])
 			} else {
 				copy(dm.writeLocks, locks[:])
 			}
 
-			return
+			return true
 		}
-
-		// We timed out on the previous lock, incrementally wait for a longer back-off time,
-		// and try again afterwards
-		time.Sleep(time.Duration(backOff) * time.Millisecond)
-
-		backOff += int(rand.Float64() * math.Pow(2, float64(runs)))
-		if backOff > 1024 {
-			backOff = backOff % 64
-
-			runs = 1 // reset runs
-		} else if runs < 10 {
-			runs++
+		if time.Now().UTC().Sub(start) >= timeout { // Are we past the timeout?
+			break
 		}
+		// Failed to acquire the lock on this attempt, incrementally wait
+		// for a longer back-off time and try again afterwards.
 	}
+	return false
 }
 
-// lock tries to acquire the distributed lock, returning true or false
-//
-func lock(clnts []NetLocker, locks *[]string, lockName string, isReadLock bool) bool {
+// lock tries to acquire the distributed lock, returning true or false.
+func lock(ds *Dsync, locks *[]string, lockName, id, source string, isReadLock bool) bool {
 
 	// Create buffered channel of size equal to total number of nodes.
-	ch := make(chan Granted, dnodeCount)
+	ch := make(chan Granted, ds.dNodeCount)
+	defer close(ch)
 
-	for index, c := range clnts {
+	var wg sync.WaitGroup
+	for index, c := range ds.restClnts {
 
+		wg.Add(1)
 		// broadcast lock request to all nodes
 		go func(index int, isReadLock bool, c NetLocker) {
-			// All client methods issuing RPCs are thread-safe and goroutine-safe,
-			// i.e. it is safe to call them from multiple concurrently running go routines.
-			bytesUid := [16]byte{}
-			cryptorand.Read(bytesUid[:])
-			uid := fmt.Sprintf("%X", bytesUid[:])
+			defer wg.Done()
 
 			args := LockArgs{
-				UID:             uid,
-				Resource:        lockName,
-				ServerAddr:      clnts[ownNode].ServerAddr(),
-				ServiceEndpoint: clnts[ownNode].ServiceEndpoint(),
+				UID:      id,
+				Resource: lockName,
+				Source:   source,
 			}
 
 			var locked bool
@@ -177,8 +206,9 @@ func lock(clnts []NetLocker, locks *[]string, lockName string, isReadLock bool)
 
 			g := Granted{index: index}
 			if locked {
-				g.lockUid = args.UID
+				g.lockUID = args.UID
 			}
+
 			ch <- g
 
 		}(index, isReadLock, c)
@@ -186,41 +216,44 @@ func lock(clnts []NetLocker, locks *[]string, lockName string, isReadLock bool)
 
 	quorum := false
 
-	var wg sync.WaitGroup
 	wg.Add(1)
 	go func(isReadLock bool) {
 
-		// Wait until we have either a) received all lock responses, b) received too many 'non-'locks for quorum to be or c) time out
+		// Wait until we have either
+		//
+		// a) received all lock responses
+		// b) received too many 'non-'locks for quorum to be still possible
+		// c) time out
+		//
 		i, locksFailed := 0, 0
 		done := false
 		timeout := time.After(DRWMutexAcquireTimeout)
 
-		for ; i < dnodeCount; i++ { // Loop until we acquired all locks
+		for ; i < ds.dNodeCount; i++ { // Loop until we acquired all locks
 
 			select {
 			case grant := <-ch:
 				if grant.isLocked() {
 					// Mark that this node has acquired the lock
-					(*locks)[grant.index] = grant.lockUid
+					(*locks)[grant.index] = grant.lockUID
 				} else {
 					locksFailed++
-					if !isReadLock && locksFailed > dnodeCount-dquorum ||
-						isReadLock && locksFailed > dnodeCount-dquorumReads {
-						// We know that we are not going to get the lock anymore, so exit out
-						// and release any locks that did get acquired
+					if !isReadLock && locksFailed > ds.dNodeCount-ds.dquorum ||
+						isReadLock && locksFailed > ds.dNodeCount-ds.dquorumReads {
+						// We know that we are not going to get the lock anymore,
+						// so exit out and release any locks that did get acquired
 						done = true
 						// Increment the number of grants received from the buffered channel.
 						i++
-						releaseAll(clnts, locks, lockName, isReadLock)
+						releaseAll(ds, locks, lockName, isReadLock)
 					}
 				}
-
 			case <-timeout:
 				done = true
 				// timeout happened, maybe one of the nodes is slow, count
 				// number of locks to check whether we have quorum or not
-				if !quorumMet(locks, isReadLock) {
-					releaseAll(clnts, locks, lockName, isReadLock)
+				if !quorumMet(locks, isReadLock, ds.dquorum, ds.dquorumReads) {
+					releaseAll(ds, locks, lockName, isReadLock)
 				}
 			}
 
@@ -229,8 +262,8 @@ func lock(clnts []NetLocker, locks *[]string, lockName string, isReadLock bool)
 			}
 		}
 
-		// Count locks in order to determine whterh we have quorum or not
-		quorum = quorumMet(locks, isReadLock)
+		// Count locks in order to determine whether we have quorum or not
+		quorum = quorumMet(locks, isReadLock, ds.dquorum, ds.dquorumReads)
 
 		// Signal that we have the quorum
 		wg.Done()
@@ -238,29 +271,22 @@ func lock(clnts []NetLocker, locks *[]string, lockName string, isReadLock bool)
 		// Wait for the other responses and immediately release the locks
 		// (do not add them to the locks array because the DRWMutex could
 		//  already have been unlocked again by the original calling thread)
-		for ; i < dnodeCount; i++ {
+		for ; i < ds.dNodeCount; i++ {
 			grantToBeReleased := <-ch
 			if grantToBeReleased.isLocked() {
 				// release lock
-				sendRelease(clnts[grantToBeReleased.index], lockName, grantToBeReleased.lockUid, isReadLock)
+				sendRelease(ds, ds.restClnts[grantToBeReleased.index], lockName, grantToBeReleased.lockUID, isReadLock)
 			}
 		}
 	}(isReadLock)
 
 	wg.Wait()
 
-	// Verify that localhost server is actively participating in the lock (the lock maintenance relies on this fact)
-	if quorum && !isLocked((*locks)[ownNode]) {
-		// If not, release lock (and try again later)
-		releaseAll(clnts, locks, lockName, isReadLock)
-		quorum = false
-	}
-
 	return quorum
 }
 
 // quorumMet determines whether we have acquired the required quorum of underlying locks or not
-func quorumMet(locks *[]string, isReadLock bool) bool {
+func quorumMet(locks *[]string, isReadLock bool, quorum, quorumReads int) bool {
 
 	count := 0
 	for _, uid := range *locks {
@@ -269,18 +295,21 @@ func quorumMet(locks *[]string, isReadLock bool) bool {
 		}
 	}
 
+	var metQuorum bool
 	if isReadLock {
-		return count >= dquorumReads
+		metQuorum = count >= quorumReads
 	} else {
-		return count >= dquorum
+		metQuorum = count >= quorum
 	}
+
+	return metQuorum
 }
 
 // releaseAll releases all locks that are marked as locked
-func releaseAll(clnts []NetLocker, locks *[]string, lockName string, isReadLock bool) {
-	for lock := 0; lock < dnodeCount; lock++ {
+func releaseAll(ds *Dsync, locks *[]string, lockName string, isReadLock bool) {
+	for lock := 0; lock < ds.dNodeCount; lock++ {
 		if isLocked((*locks)[lock]) {
-			sendRelease(clnts[lock], lockName, (*locks)[lock], isReadLock)
+			sendRelease(ds, ds.restClnts[lock], lockName, (*locks)[lock], isReadLock)
 			(*locks)[lock] = ""
 		}
 	}
@@ -292,7 +321,7 @@ func releaseAll(clnts []NetLocker, locks *[]string, lockName string, isReadLock
 func (dm *DRWMutex) Unlock() {
 
 	// create temp array on stack
-	locks := make([]string, dnodeCount)
+	locks := make([]string, dm.clnt.dNodeCount)
 
 	{
 		dm.m.Lock()
@@ -313,11 +342,11 @@ func (dm *DRWMutex) Unlock() {
 		// Copy write locks to stack array
 		copy(locks, dm.writeLocks[:])
 		// Clear write locks array
-		dm.writeLocks = make([]string, dnodeCount)
+		dm.writeLocks = make([]string, dm.clnt.dNodeCount)
 	}
 
 	isReadLock := false
-	unlock(locks, dm.Name, isReadLock)
+	unlock(dm.clnt, locks, dm.Name, isReadLock)
 }
 
 // RUnlock releases a read lock held on dm.
@@ -326,7 +355,7 @@ func (dm *DRWMutex) Unlock() {
 func (dm *DRWMutex) RUnlock() {
 
 	// create temp array on stack
-	locks := make([]string, dnodeCount)
+	locks := make([]string, dm.clnt.dNodeCount)
 
 	{
 		dm.m.Lock()
@@ -341,99 +370,38 @@ func (dm *DRWMutex) RUnlock() {
 	}
 
 	isReadLock := true
-	unlock(locks, dm.Name, isReadLock)
+	unlock(dm.clnt, locks, dm.Name, isReadLock)
 }
 
-func unlock(locks []string, name string, isReadLock bool) {
+func unlock(ds *Dsync, locks []string, name string, isReadLock bool) {
 
 	// We don't need to synchronously wait until we have released all the locks (or the quorum)
 	// (a subsequent lock will retry automatically in case it would fail to get quorum)
 
-	for index, c := range clnts {
+	for index, c := range ds.restClnts {
 
 		if isLocked(locks[index]) {
 			// broadcast lock release to all nodes that granted the lock
-			sendRelease(c, name, locks[index], isReadLock)
+			sendRelease(ds, c, name, locks[index], isReadLock)
 		}
 	}
 }
 
-// ForceUnlock will forcefully clear a write or read lock.
-func (dm *DRWMutex) ForceUnlock() {
-
-	{
-		dm.m.Lock()
-		defer dm.m.Unlock()
-
-		// Clear write locks array
-		dm.writeLocks = make([]string, dnodeCount)
-		// Clear read locks array
-		dm.readersLocks = nil
-	}
-
-	for _, c := range clnts {
-		// broadcast lock release to all nodes that granted the lock
-		sendRelease(c, dm.Name, "", false)
-	}
-}
-
 // sendRelease sends a release message to a node that previously granted a lock
-func sendRelease(c NetLocker, name, uid string, isReadLock bool) {
-
-	backOffArray := []time.Duration{
-		30 * time.Second, // 30 secs
-		1 * time.Minute,  // 1 min
-		3 * time.Minute,  // 3 min
-		10 * time.Minute, // 10 min
-		30 * time.Minute, // 30 min
-		1 * time.Hour,    // 1 hour
+func sendRelease(ds *Dsync, c NetLocker, name, uid string, isReadLock bool) {
+	args := LockArgs{
+		UID:      uid,
+		Resource: name,
 	}
-
-	go func(c NetLocker, name string) {
-
-		for _, backOff := range backOffArray {
-
-			// All client methods issuing RPCs are thread-safe and goroutine-safe,
-			// i.e. it is safe to call them from multiple concurrently running goroutines.
-			args := LockArgs{
-				UID:             uid,
-				Resource:        name,
-				ServerAddr:      clnts[ownNode].ServerAddr(),
-				ServiceEndpoint: clnts[ownNode].ServiceEndpoint(),
-			}
-
-			var err error
-			if len(uid) == 0 {
-				if _, err = c.ForceUnlock(args); err != nil {
-					log("Unable to call ForceUnlock", err)
-				}
-			} else if isReadLock {
-				if _, err = c.RUnlock(args); err != nil {
-					log("Unable to call RUnlock", err)
-				}
-			} else {
-				if _, err = c.Unlock(args); err != nil {
-					log("Unable to call Unlock", err)
-				}
-			}
-
-			if err != nil {
-				// Ignore if err is net.Error and it is occurred due to timeout.
-				// The cause could have been server timestamp mismatch or server may have restarted.
-				// FIXME: This is minio specific behaviour and we would need a way to make it generically.
-				if nErr, ok := err.(net.Error); ok && nErr.Timeout() {
-					err = nil
-				}
-			}
-
-			if err == nil {
-				return
-			}
-
-			// Wait..
-			time.Sleep(backOff)
+	if isReadLock {
+		if _, err := c.RUnlock(args); err != nil {
+			log("Unable to call RUnlock", err)
 		}
-	}(c, name)
+	} else {
+		if _, err := c.Unlock(args); err != nil {
+			log("Unable to call Unlock", err)
+		}
+	}
 }
 
 // DRLocker returns a sync.Locker interface that implements
@@ -444,5 +412,29 @@ func (dm *DRWMutex) DRLocker() sync.Locker {
 
 type drlocker DRWMutex
 
-func (dr *drlocker) Lock()   { (*DRWMutex)(dr).RLock() }
+var letterRunes = []rune("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ")
+
+func randString(n int) string {
+	b := make([]rune, n)
+	for i := range b {
+		b[i] = letterRunes[rand.Intn(len(letterRunes))]
+	}
+	return string(b)
+}
+
+func getSource() string {
+	var funcName string
+	pc, filename, lineNum, ok := runtime.Caller(2)
+	if ok {
+		filename = path.Base(filename)
+		funcName = runtime.FuncForPC(pc).Name()
+	} else {
+		filename = "<unknown>"
+		lineNum = 0
+	}
+
+	return fmt.Sprintf("[%s:%d:%s()]", filename, lineNum, funcName)
+}
+
+func (dr *drlocker) Lock()   { (*DRWMutex)(dr).RLock(randString(16), getSource()) }
 func (dr *drlocker) Unlock() { (*DRWMutex)(dr).RUnlock() }
diff --git a/drwmutex_test.go b/drwmutex_test.go
index 8bd07e4..d4c910e 100644
--- a/drwmutex_test.go
+++ b/drwmutex_test.go
@@ -19,6 +19,7 @@
 package dsync_test
 
 import (
+	"context"
 	"fmt"
 	"runtime"
 	"sync"
@@ -26,55 +27,135 @@ import (
 	"testing"
 	"time"
 
-	. "github.com/minio/dsync"
+	. "github.com/minio/dsync/v3"
 )
 
-func TestSimpleWriteLock(t *testing.T) {
+const (
+	id     = "1234-5678"
+	source = "main.go"
+)
+
+func testSimpleWriteLock(t *testing.T, duration time.Duration) (locked bool) {
 
-	drwm := NewDRWMutex("resource")
+	drwm := NewDRWMutex(context.Background(), "simplelock", ds)
 
-	drwm.RLock()
+	if !drwm.GetRLock(id, source, time.Second) {
+		panic("Failed to acquire read lock")
+	}
 	// fmt.Println("1st read lock acquired, waiting...")
 
-	drwm.RLock()
+	if !drwm.GetRLock(id, source, time.Second) {
+		panic("Failed to acquire read lock")
+	}
 	// fmt.Println("2nd read lock acquired, waiting...")
 
 	go func() {
-		time.Sleep(1000 * time.Millisecond)
+		time.Sleep(2 * time.Second)
 		drwm.RUnlock()
 		// fmt.Println("1st read lock released, waiting...")
 	}()
 
 	go func() {
-		time.Sleep(2000 * time.Millisecond)
+		time.Sleep(3 * time.Second)
 		drwm.RUnlock()
 		// fmt.Println("2nd read lock released, waiting...")
 	}()
 
 	// fmt.Println("Trying to acquire write lock, waiting...")
-	drwm.Lock()
+	locked = drwm.GetLock(id, source, duration)
+	if locked {
+		// fmt.Println("Write lock acquired, waiting...")
+		time.Sleep(time.Second)
+
+		drwm.Unlock()
+	} else {
+		// fmt.Println("Write lock failed due to timeout")
+	}
+	return
+}
+
+func TestSimpleWriteLockAcquired(t *testing.T) {
+	locked := testSimpleWriteLock(t, 5*time.Second)
+
+	expected := true
+	if locked != expected {
+		t.Errorf("TestSimpleWriteLockAcquired(): \nexpected %#v\ngot      %#v", expected, locked)
+	}
+}
+
+func TestSimpleWriteLockTimedOut(t *testing.T) {
+	locked := testSimpleWriteLock(t, time.Second)
+
+	expected := false
+	if locked != expected {
+		t.Errorf("TestSimpleWriteLockTimedOut(): \nexpected %#v\ngot      %#v", expected, locked)
+	}
+}
+
+func testDualWriteLock(t *testing.T, duration time.Duration) (locked bool) {
+
+	drwm := NewDRWMutex(context.Background(), "duallock", ds)
+
+	// fmt.Println("Getting initial write lock")
+	if !drwm.GetLock(id, source, time.Second) {
+		panic("Failed to acquire initial write lock")
+	}
+
+	go func() {
+		time.Sleep(2 * time.Second)
+		drwm.Unlock()
+		// fmt.Println("Initial write lock released, waiting...")
+	}()
+
+	// fmt.Println("Trying to acquire 2nd write lock, waiting...")
+	locked = drwm.GetLock(id, source, duration)
+	if locked {
+		// fmt.Println("2nd write lock acquired, waiting...")
+		time.Sleep(time.Second)
+
+		drwm.Unlock()
+	} else {
+		// fmt.Println("2nd write lock failed due to timeout")
+	}
+	return
+}
+
+func TestDualWriteLockAcquired(t *testing.T) {
+	locked := testDualWriteLock(t, 5*time.Second)
+
+	expected := true
+	if locked != expected {
+		t.Errorf("TestDualWriteLockAcquired(): \nexpected %#v\ngot      %#v", expected, locked)
+	}
+
+}
 
-	// fmt.Println("Write lock acquired, waiting...")
-	time.Sleep(2500 * time.Millisecond)
+func TestDualWriteLockTimedOut(t *testing.T) {
+	locked := testDualWriteLock(t, time.Second)
+
+	expected := false
+	if locked != expected {
+		t.Errorf("TestDualWriteLockTimedOut(): \nexpected %#v\ngot      %#v", expected, locked)
+	}
 
-	drwm.Unlock()
 }
 
 // Test cases below are copied 1 to 1 from sync/rwmutex_test.go (adapted to use DRWMutex)
 
 // Borrowed from rwmutex_test.go
 func parallelReader(m *DRWMutex, clocked, cunlock, cdone chan bool) {
-	m.RLock()
-	clocked <- true
-	<-cunlock
-	m.RUnlock()
-	cdone <- true
+	if m.GetRLock(id, source, time.Second) {
+		clocked <- true
+		<-cunlock
+		m.RUnlock()
+		cdone <- true
+	}
 }
 
 // Borrowed from rwmutex_test.go
 func doTestParallelReaders(numReaders, gomaxprocs int) {
 	runtime.GOMAXPROCS(gomaxprocs)
-	m := NewDRWMutex("test-parallel")
+	m := NewDRWMutex(context.Background(), "test-parallel", ds)
 
 	clocked := make(chan bool)
 	cunlock := make(chan bool)
@@ -104,52 +185,54 @@ func TestParallelReaders(t *testing.T) {
 }
 
 // Borrowed from rwmutex_test.go
-func reader(rwm *DRWMutex, num_iterations int, activity *int32, cdone chan bool) {
-	for i := 0; i < num_iterations; i++ {
-		rwm.RLock()
-		n := atomic.AddInt32(activity, 1)
-		if n < 1 || n >= 10000 {
-			panic(fmt.Sprintf("wlock(%d)\n", n))
-		}
-		for i := 0; i < 100; i++ {
+func reader(rwm *DRWMutex, numIterations int, activity *int32, cdone chan bool) {
+	for i := 0; i < numIterations; i++ {
+		if rwm.GetRLock(id, source, time.Second) {
+			n := atomic.AddInt32(activity, 1)
+			if n < 1 || n >= 10000 {
+				panic(fmt.Sprintf("wlock(%d)\n", n))
+			}
+			for i := 0; i < 100; i++ {
+			}
+			atomic.AddInt32(activity, -1)
+			rwm.RUnlock()
 		}
-		atomic.AddInt32(activity, -1)
-		rwm.RUnlock()
 	}
 	cdone <- true
 }
 
 // Borrowed from rwmutex_test.go
-func writer(rwm *DRWMutex, num_iterations int, activity *int32, cdone chan bool) {
-	for i := 0; i < num_iterations; i++ {
-		rwm.Lock()
-		n := atomic.AddInt32(activity, 10000)
-		if n != 10000 {
-			panic(fmt.Sprintf("wlock(%d)\n", n))
-		}
-		for i := 0; i < 100; i++ {
+func writer(rwm *DRWMutex, numIterations int, activity *int32, cdone chan bool) {
+	for i := 0; i < numIterations; i++ {
+		if rwm.GetLock(id, source, time.Second) {
+			n := atomic.AddInt32(activity, 10000)
+			if n != 10000 {
+				panic(fmt.Sprintf("wlock(%d)\n", n))
+			}
+			for i := 0; i < 100; i++ {
+			}
+			atomic.AddInt32(activity, -10000)
+			rwm.Unlock()
 		}
-		atomic.AddInt32(activity, -10000)
-		rwm.Unlock()
 	}
 	cdone <- true
 }
 
 // Borrowed from rwmutex_test.go
-func HammerRWMutex(gomaxprocs, numReaders, num_iterations int) {
+func HammerRWMutex(gomaxprocs, numReaders, numIterations int) {
 	runtime.GOMAXPROCS(gomaxprocs)
 	// Number of active readers + 10000 * number of active writers.
 	var activity int32
-	rwm := NewDRWMutex("test")
+	rwm := NewDRWMutex(context.Background(), "test", ds)
 	cdone := make(chan bool)
-	go writer(rwm, num_iterations, &activity, cdone)
+	go writer(rwm, numIterations, &activity, cdone)
 	var i int
 	for i = 0; i < numReaders/2; i++ {
-		go reader(rwm, num_iterations, &activity, cdone)
+		go reader(rwm, numIterations, &activity, cdone)
 	}
-	go writer(rwm, num_iterations, &activity, cdone)
+	go writer(rwm, numIterations, &activity, cdone)
 	for ; i < numReaders; i++ {
-		go reader(rwm, num_iterations, &activity, cdone)
+		go reader(rwm, numIterations, &activity, cdone)
 	}
 	// Wait for the 2 writers and all readers to finish.
 	for i := 0; i < 2+numReaders; i++ {
@@ -178,7 +261,7 @@ func TestRWMutex(t *testing.T) {
 
 // Borrowed from rwmutex_test.go
 func TestDRLocker(t *testing.T) {
-	wl := NewDRWMutex("test")
+	wl := NewDRWMutex(context.Background(), "test", ds)
 	var rl sync.Locker
 	wlocked := make(chan bool, 1)
 	rlocked := make(chan bool, 1)
@@ -189,7 +272,7 @@ func TestDRLocker(t *testing.T) {
 			rl.Lock()
 			rl.Lock()
 			rlocked <- true
-			wl.Lock()
+			wl.Lock(id, source)
 			wlocked <- true
 		}
 	}()
@@ -219,7 +302,7 @@ func TestUnlockPanic(t *testing.T) {
 			t.Fatalf("unlock of unlocked RWMutex did not panic")
 		}
 	}()
-	mu := NewDRWMutex("test")
+	mu := NewDRWMutex(context.Background(), "test", ds)
 	mu.Unlock()
 }
 
@@ -230,8 +313,8 @@ func TestUnlockPanic2(t *testing.T) {
 			t.Fatalf("unlock of unlocked RWMutex did not panic")
 		}
 	}()
-	mu := NewDRWMutex("test-unlock-panic-2")
-	mu.RLock()
+	mu := NewDRWMutex(context.Background(), "test-unlock-panic-2", ds)
+	mu.RLock(id, source)
 	mu.Unlock()
 }
 
@@ -242,7 +325,7 @@ func TestRUnlockPanic(t *testing.T) {
 			t.Fatalf("read unlock of unlocked RWMutex did not panic")
 		}
 	}()
-	mu := NewDRWMutex("test")
+	mu := NewDRWMutex(context.Background(), "test", ds)
 	mu.RUnlock()
 }
 
@@ -253,23 +336,23 @@ func TestRUnlockPanic2(t *testing.T) {
 			t.Fatalf("read unlock of unlocked RWMutex did not panic")
 		}
 	}()
-	mu := NewDRWMutex("test-runlock-panic-2")
-	mu.Lock()
+	mu := NewDRWMutex(context.Background(), "test-runlock-panic-2", ds)
+	mu.Lock(id, source)
 	mu.RUnlock()
 }
 
 // Borrowed from rwmutex_test.go
 func benchmarkRWMutex(b *testing.B, localWork, writeRatio int) {
-	rwm := NewDRWMutex("test")
+	rwm := NewDRWMutex(context.Background(), "test", ds)
 	b.RunParallel(func(pb *testing.PB) {
 		foo := 0
 		for pb.Next() {
 			foo++
 			if foo%writeRatio == 0 {
-				rwm.Lock()
+				rwm.Lock(id, source)
 				rwm.Unlock()
 			} else {
-				rwm.RLock()
+				rwm.RLock(id, source)
 				for i := 0; i != localWork; i += 1 {
 					foo *= 2
 					foo /= 2
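
The rewritten tests in this file acquire locks through the timed GetLock/GetRLock calls, which return false when the lock cannot be obtained within the given duration, instead of the old blocking Lock/RLock. A minimal sketch of the same pattern outside the test suite, assuming a *dsync.Dsync initialized as in TestMain below and caller-supplied id/source strings (the resource name is an arbitrary example):

// Sketch only, not part of the diff. Assumes:
//   import (
//       "context"
//       "time"
//       "github.com/minio/dsync/v3"
//   )
func withWriteLock(ds *dsync.Dsync, id, source string, critical func()) bool {
	drwm := dsync.NewDRWMutex(context.Background(), "my-resource", ds)
	if !drwm.GetLock(id, source, 5*time.Second) {
		return false // lock not acquired within the timeout
	}
	defer drwm.Unlock()
	critical()
	return true
}
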
diff --git a/dsync-server_test.go b/dsync-server_test.go
index 355d542..7f49c05 100644
--- a/dsync-server_test.go
+++ b/dsync-server_test.go
@@ -20,7 +20,7 @@ import (
 	"fmt"
 	"sync"
 
-	. "github.com/minio/dsync"
+	. "github.com/minio/dsync/v3"
 )
 
 const WriteLock = -1
diff --git a/dsync.go b/dsync.go
index ba027d5..f370f56 100644
--- a/dsync.go
+++ b/dsync.go
@@ -16,51 +16,45 @@
 
 package dsync
 
-import "errors"
+import (
+	"errors"
+	"math"
+)
 
-// Number of nodes participating in the distributed locking.
-var dnodeCount int
+// Dsync represents dsync client object which is initialized with
+// authenticated clients, used to initiate lock REST calls.
+type Dsync struct {
+	// Number of nodes participating in the distributed locking.
+	dNodeCount int
 
-// List of rpc client objects, one per lock server.
-var clnts []NetLocker
+	// List of rest client objects, one per lock server.
+	restClnts []NetLocker
 
-// Index into rpc client array for server running on localhost
-var ownNode int
+	// Simple majority based quorum, set to dNodeCount/2+1
+	dquorum int
 
-// Simple majority based quorum, set to dNodeCount/2+1
-var dquorum int
-
-// Simple quorum for read operations, set to dNodeCount/2
-var dquorumReads int
-
-// Init - initializes package-level global state variables such as clnts.
-// N B - This function should be called only once inside any program
-// that uses dsync.
-func Init(rpcClnts []NetLocker, rpcOwnNode int) (err error) {
+	// Simple quorum for read operations, set to dNodeCount/2
+	dquorumReads int
+}
 
-	// Validate if number of nodes is within allowable range.
-	if dnodeCount != 0 {
-		return errors.New("Cannot reinitialize dsync package")
-	}
-	if len(rpcClnts) < 4 {
-		return errors.New("Dsync is not designed for less than 4 nodes")
-	} else if len(rpcClnts) > 16 {
-		return errors.New("Dsync is not designed for more than 16 nodes")
-	} else if len(rpcClnts)%2 != 0 {
-		return errors.New("Dsync is not designed for an uneven number of nodes")
+// New - initializes a new dsync object with input restClnts.
+func New(restClnts []NetLocker) (*Dsync, error) {
+	if len(restClnts) < 2 {
+		return nil, errors.New("Dsync is not designed for less than 2 nodes")
+	} else if len(restClnts) > 32 {
+		return nil, errors.New("Dsync is not designed for more than 32 nodes")
 	}
 
-	if rpcOwnNode > len(rpcClnts) {
-		return errors.New("Index for own node is too large")
-	}
+	ds := &Dsync{}
+	ds.dNodeCount = len(restClnts)
+
+	// With odd number of nodes, write and read quorum is basically the same
+	ds.dquorum = int(ds.dNodeCount/2) + 1
+	ds.dquorumReads = int(math.Ceil(float64(ds.dNodeCount) / 2.0))
 
-	dnodeCount = len(rpcClnts)
-	dquorum = dnodeCount/2 + 1
-	dquorumReads = dnodeCount / 2
-	// Initialize node name and rpc path for each NetLocker object.
-	clnts = make([]NetLocker, dnodeCount)
-	copy(clnts, rpcClnts)
+	// Initialize node name and rest path for each NetLocker object.
+	ds.restClnts = make([]NetLocker, ds.dNodeCount)
+	copy(ds.restClnts, restClnts)
 
-	ownNode = rpcOwnNode
-	return nil
+	return ds, nil
 }
diff --git a/dsync_private_test.go b/dsync_private_test.go
new file mode 100644
index 0000000..c50bfcd
--- /dev/null
+++ b/dsync_private_test.go
@@ -0,0 +1,58 @@
+/*
+ * Minio Cloud Storage, (C) 2018 Minio, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+// GOMAXPROCS=10 go test
+
+package dsync
+
+import "testing"
+
+// Tests dsync.New
+func TestNew(t *testing.T) {
+	nclnts := make([]NetLocker, 33)
+	if _, err := New(nclnts); err == nil {
+		t.Fatal("Should have failed")
+	}
+
+	nclnts = make([]NetLocker, 1)
+	if _, err := New(nclnts); err == nil {
+		t.Fatal("Should have failed")
+	}
+
+	nclnts = make([]NetLocker, 2)
+	nds, err := New(nclnts)
+	if err != nil {
+		t.Fatal("Should pass", err)
+	}
+
+	if nds.dquorumReads != 1 {
+		t.Fatalf("Unexpected read quorum values expected 1, got %d", nds.dquorumReads)
+	}
+
+	if nds.dquorum != 2 {
+		t.Fatalf("Unexpected quorum values expected 2, got %d", nds.dquorum)
+	}
+
+	nclnts = make([]NetLocker, 3)
+	nds, err = New(nclnts)
+	if err != nil {
+		t.Fatal("Should pass", err)
+	}
+
+	if nds.dquorumReads != nds.dquorum {
+		t.Fatalf("Unexpected quorum values for odd nodes we expect read %d and write %d quorum to be same", nds.dquorumReads, nds.dquorum)
+	}
+}
diff --git a/dsync_test.go b/dsync_test.go
index 147bafd..fd0b3ab 100644
--- a/dsync_test.go
+++ b/dsync_test.go
@@ -19,6 +19,7 @@
 package dsync_test
 
 import (
+	"context"
 	"fmt"
 	"log"
 	"math/rand"
@@ -31,16 +32,13 @@ import (
 	"testing"
 	"time"
 
-	. "github.com/minio/dsync"
+	. "github.com/minio/dsync/v3"
 )
 
-const RpcPath = "/dsync"
-const N = 4           // number of lock servers for tests.
-var nodes []string    // list of node IP addrs or hostname with ports.
+var ds *Dsync
 var rpcPaths []string // list of rpc paths where lock server is serving.
 
 func startRPCServers(nodes []string) {
-
 	for i := range nodes {
 		server := rpc.NewServer()
 		server.RegisterName("Dsync", &lockServer{
@@ -62,15 +60,17 @@ func startRPCServers(nodes []string) {
 
 // TestMain initializes the testing framework
 func TestMain(m *testing.M) {
+	const rpcPath = "/dsync"
 
 	rand.Seed(time.Now().UTC().UnixNano())
 
+	var nodes []string // list of node IP addrs or hostname with ports.
 	nodes = make([]string, 4)
 	for i := range nodes {
 		nodes[i] = fmt.Sprintf("127.0.0.1:%d", i+12345)
 	}
 	for i := range nodes {
-		rpcPaths = append(rpcPaths, RpcPath+"-"+strconv.Itoa(i))
+		rpcPaths = append(rpcPaths, rpcPath+"-"+strconv.Itoa(i))
 	}
 
 	// Initialize net/rpc clients for dsync.
@@ -79,31 +79,12 @@ func TestMain(m *testing.M) {
 		clnts = append(clnts, newClient(nodes[i], rpcPaths[i]))
 	}
 
-	if err := Init(clnts, 5); err == nil {
-		log.Fatalf("Should have failed")
-	}
-
-	if err := Init(clnts[0:1], 0); err == nil {
-		log.Fatalf("Should have failed")
-	}
-
-	nclnts := make([]NetLocker, 17)
-	if err := Init(nclnts, 0); err == nil {
-		log.Fatalf("Should have failed")
-	}
-
-	nclnts = make([]NetLocker, 15)
-	if err := Init(nclnts, 0); err == nil {
-		log.Fatalf("Should have failed")
-	}
-
-	rpcOwnNodeFakeForTest := 0
-	if err := Init(clnts, rpcOwnNodeFakeForTest); err != nil {
+	var err error
+	ds, err = New(clnts)
+	if err != nil {
 		log.Fatalf("set nodes failed with %v", err)
 	}
-	if err := Init(clnts, rpcOwnNodeFakeForTest); err == nil {
-		log.Fatalf("Should have failed")
-	}
+
 	startRPCServers(nodes)
 
 	os.Exit(m.Run())
@@ -111,9 +92,9 @@ func TestMain(m *testing.M) {
 
 func TestSimpleLock(t *testing.T) {
 
-	dm := NewDRWMutex("test")
+	dm := NewDRWMutex(context.Background(), "test", ds)
 
-	dm.Lock()
+	dm.Lock(id, source)
 
 	// fmt.Println("Lock acquired, waiting...")
 	time.Sleep(2500 * time.Millisecond)
@@ -123,25 +104,25 @@ func TestSimpleLock(t *testing.T) {
 
 func TestSimpleLockUnlockMultipleTimes(t *testing.T) {
 
-	dm := NewDRWMutex("test")
+	dm := NewDRWMutex(context.Background(), "test", ds)
 
-	dm.Lock()
+	dm.Lock(id, source)
 	time.Sleep(time.Duration(10+(rand.Float32()*50)) * time.Millisecond)
 	dm.Unlock()
 
-	dm.Lock()
+	dm.Lock(id, source)
 	time.Sleep(time.Duration(10+(rand.Float32()*50)) * time.Millisecond)
 	dm.Unlock()
 
-	dm.Lock()
+	dm.Lock(id, source)
 	time.Sleep(time.Duration(10+(rand.Float32()*50)) * time.Millisecond)
 	dm.Unlock()
 
-	dm.Lock()
+	dm.Lock(id, source)
 	time.Sleep(time.Duration(10+(rand.Float32()*50)) * time.Millisecond)
 	dm.Unlock()
 
-	dm.Lock()
+	dm.Lock(id, source)
 	time.Sleep(time.Duration(10+(rand.Float32()*50)) * time.Millisecond)
 	dm.Unlock()
 }
@@ -149,10 +130,10 @@ func TestSimpleLockUnlockMultipleTimes(t *testing.T) {
 // Test two locks for same resource, one succeeds, one fails (after timeout)
 func TestTwoSimultaneousLocksForSameResource(t *testing.T) {
 
-	dm1st := NewDRWMutex("aap")
-	dm2nd := NewDRWMutex("aap")
+	dm1st := NewDRWMutex(context.Background(), "aap", ds)
+	dm2nd := NewDRWMutex(context.Background(), "aap", ds)
 
-	dm1st.Lock()
+	dm1st.Lock(id, source)
 
 	// Release lock after 10 seconds
 	go func() {
@@ -162,7 +143,7 @@ func TestTwoSimultaneousLocksForSameResource(t *testing.T) {
 		dm1st.Unlock()
 	}()
 
-	dm2nd.Lock()
+	dm2nd.Lock(id, source)
 
 	// fmt.Printf("2nd lock obtained after 1st lock is released\n")
 	time.Sleep(2500 * time.Millisecond)
@@ -173,11 +154,11 @@ func TestTwoSimultaneousLocksForSameResource(t *testing.T) {
 // Test three locks for same resource, one succeeds, one fails (after timeout)
 func TestThreeSimultaneousLocksForSameResource(t *testing.T) {
 
-	dm1st := NewDRWMutex("aap")
-	dm2nd := NewDRWMutex("aap")
-	dm3rd := NewDRWMutex("aap")
+	dm1st := NewDRWMutex(context.Background(), "aap", ds)
+	dm2nd := NewDRWMutex(context.Background(), "aap", ds)
+	dm3rd := NewDRWMutex(context.Background(), "aap", ds)
 
-	dm1st.Lock()
+	dm1st.Lock(id, source)
 
 	// Release lock after 10 seconds
 	go func() {
@@ -193,7 +174,7 @@ func TestThreeSimultaneousLocksForSameResource(t *testing.T) {
 	go func() {
 		defer wg.Done()
 
-		dm2nd.Lock()
+		dm2nd.Lock(id, source)
 
 		// Release lock after 10 seconds
 		go func() {
@@ -203,7 +184,7 @@ func TestThreeSimultaneousLocksForSameResource(t *testing.T) {
 			dm2nd.Unlock()
 		}()
 
-		dm3rd.Lock()
+		dm3rd.Lock(id, source)
 
 		// fmt.Printf("3rd lock obtained after 1st & 2nd locks are released\n")
 		time.Sleep(2500 * time.Millisecond)
@@ -214,7 +195,7 @@ func TestThreeSimultaneousLocksForSameResource(t *testing.T) {
 	go func() {
 		defer wg.Done()
 
-		dm3rd.Lock()
+		dm3rd.Lock(id, source)
 
 		// Release lock after 10 seconds
 		go func() {
@@ -224,7 +205,7 @@ func TestThreeSimultaneousLocksForSameResource(t *testing.T) {
 			dm3rd.Unlock()
 		}()
 
-		dm2nd.Lock()
+		dm2nd.Lock(id, source)
 
 		// fmt.Printf("2nd lock obtained after 1st & 3rd locks are released\n")
 		time.Sleep(2500 * time.Millisecond)
@@ -238,11 +219,11 @@ func TestThreeSimultaneousLocksForSameResource(t *testing.T) {
 // Test two locks for different resources, both succeed
 func TestTwoSimultaneousLocksForDifferentResources(t *testing.T) {
 
-	dm1 := NewDRWMutex("aap")
-	dm2 := NewDRWMutex("noot")
+	dm1 := NewDRWMutex(context.Background(), "aap", ds)
+	dm2 := NewDRWMutex(context.Background(), "noot", ds)
 
-	dm1.Lock()
-	dm2.Lock()
+	dm1.Lock(id, source)
+	dm2.Lock(id, source)
 
 	// fmt.Println("Both locks acquired, waiting...")
 	time.Sleep(2500 * time.Millisecond)
@@ -256,7 +237,7 @@ func TestTwoSimultaneousLocksForDifferentResources(t *testing.T) {
 // Borrowed from mutex_test.go
 func HammerMutex(m *DRWMutex, loops int, cdone chan bool) {
 	for i := 0; i < loops; i++ {
-		m.Lock()
+		m.Lock(id, source)
 		m.Unlock()
 	}
 	cdone <- true
@@ -265,7 +246,7 @@ func HammerMutex(m *DRWMutex, loops int, cdone chan bool) {
 // Borrowed from mutex_test.go
 func TestMutex(t *testing.T) {
 	c := make(chan bool)
-	m := NewDRWMutex("test")
+	m := NewDRWMutex(context.Background(), "test", ds)
 	for i := 0; i < 10; i++ {
 		go HammerMutex(m, 1000, c)
 	}
@@ -282,21 +263,21 @@ func BenchmarkMutexUncontended(b *testing.B) {
 	b.RunParallel(func(pb *testing.PB) {
 		var mu PaddedMutex
 		for pb.Next() {
-			mu.Lock()
+			mu.Lock(id, source)
 			mu.Unlock()
 		}
 	})
 }
 
 func benchmarkMutex(b *testing.B, slack, work bool) {
-	mu := NewDRWMutex("")
+	mu := NewDRWMutex(context.Background(), "", ds)
 	if slack {
 		b.SetParallelism(10)
 	}
 	b.RunParallel(func(pb *testing.PB) {
 		foo := 0
 		for pb.Next() {
-			mu.Lock()
+			mu.Lock(id, source)
 			mu.Unlock()
 			if work {
 				for i := 0; i < 100; i++ {
@@ -332,7 +313,7 @@ func BenchmarkMutexNoSpin(b *testing.B) {
 	// These goroutines yield during local work, so that switching from
 	// a blocked goroutine to other goroutines is profitable.
 	// As a matter of fact, this benchmark still triggers some spinning in the mutex.
-	m := NewDRWMutex("")
+	m := NewDRWMutex(context.Background(), "", ds)
 	var acc0, acc1 uint64
 	b.SetParallelism(4)
 	b.RunParallel(func(pb *testing.PB) {
@@ -340,7 +321,7 @@ func BenchmarkMutexNoSpin(b *testing.B) {
 		var data [4 << 10]uint64
 		for i := 0; pb.Next(); i++ {
 			if i%4 == 0 {
-				m.Lock()
+				m.Lock(id, source)
 				acc0 -= 100
 				acc1 += 100
 				m.Unlock()
@@ -364,12 +345,12 @@ func BenchmarkMutexSpin(b *testing.B) {
 	// profitable. To achieve this we create a goroutine per-proc.
 	// These goroutines access considerable amount of local data so that
 	// unnecessary rescheduling is penalized by cache misses.
-	m := NewDRWMutex("")
+	m := NewDRWMutex(context.Background(), "", ds)
 	var acc0, acc1 uint64
 	b.RunParallel(func(pb *testing.PB) {
 		var data [16 << 10]uint64
 		for i := 0; pb.Next(); i++ {
-			m.Lock()
+			m.Lock(id, source)
 			acc0 -= 100
 			acc1 += 100
 			m.Unlock()
diff --git a/examples/auth-locker/auth-lock-client.go b/examples/auth-locker/auth-lock-client.go
index b71019d..3313064 100644
--- a/examples/auth-locker/auth-lock-client.go
+++ b/examples/auth-locker/auth-lock-client.go
@@ -29,7 +29,7 @@ import (
 	"sync"
 	"time"
 
-	"github.com/minio/dsync"
+	"github.com/minio/dsync/v3"
 )
 
 // defaultDialTimeout is used for non-secure connection.
@@ -209,10 +209,6 @@ func (rpcClient *ReconnectRPCClient) Close() (err error) {
 	return nil
 }
 
-func (rpcClient *ReconnectRPCClient) ServerAddr() string {
-	return rpcClient.serverAddr
-}
-
-func (rpcClient *ReconnectRPCClient) ServiceEndpoint() string {
-	return rpcClient.serviceEndpoint
+func (rpcClient *ReconnectRPCClient) String() string {
+	return "http://" + rpcClient.serverAddr + "/" + rpcClient.serviceEndpoint
 }
diff --git a/examples/auth-locker/common.go b/examples/auth-locker/common.go
index bd4232a..915a45c 100644
--- a/examples/auth-locker/common.go
+++ b/examples/auth-locker/common.go
@@ -16,7 +16,7 @@
 
 package main
 
-import "github.com/minio/dsync"
+import "github.com/minio/dsync/v3"
 
 const serviceEndpointPrefix = "/lockserver-"
 
diff --git a/examples/auth-locker/main.go b/examples/auth-locker/main.go
index 358c66c..9072c58 100644
--- a/examples/auth-locker/main.go
+++ b/examples/auth-locker/main.go
@@ -17,10 +17,11 @@
 package main
 
 import (
+	"context"
 	"log"
 	"strconv"
 
-	"github.com/minio/dsync"
+	"github.com/minio/dsync/v3"
 )
 
 const basePort = 50001
@@ -38,15 +39,16 @@ func main() {
 	}
 
 	// Initialize dsync and treat 0th index on lockClients as self node.
-	if err := dsync.Init(lockClients, 0); err != nil {
+	ds, err := dsync.New(lockClients)
+	if err != nil {
 		log.Fatal("Fail to initialize dsync.", err)
 	}
 
 	// Get new distributed RWMutex on resource "Music"
-	drwMutex := dsync.NewDRWMutex("Music")
+	drwMutex := dsync.NewDRWMutex(context.Background(), "Music", ds)
 
 	// Lock "music" resource.
-	drwMutex.Lock()
+	drwMutex.Lock("Music", "main.go:50:main()")
 
 	// As we got writable lock on Music, do some crazy things.
 
diff --git a/examples/auth-locker/server.go b/examples/auth-locker/server.go
index d41ac03..9a4e9e6 100644
--- a/examples/auth-locker/server.go
+++ b/examples/auth-locker/server.go
@@ -21,6 +21,7 @@ import (
 	"net"
 	"net/http"
 	"net/rpc"
+	"path"
 	"strconv"
 	"sync"
 )
@@ -38,7 +39,7 @@ func StartLockServer(port int) {
 
 	rpcServer := rpc.NewServer()
 	rpcServer.RegisterName("LockServer", lockServer)
-	rpcServer.HandleHTTP(rpcPath, rpcPath)
+	rpcServer.HandleHTTP(rpcPath, path.Join(rpcPath, "_authlocker"))
 
 	listener, err := net.Listen("tcp", ":"+portString)
 	if err == nil {
diff --git a/go.mod b/go.mod
new file mode 100644
index 0000000..613ab6a
--- /dev/null
+++ b/go.mod
@@ -0,0 +1,3 @@
+module github.com/minio/dsync/v3
+
+go 1.12
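
Because the new go.mod declares the module as github.com/minio/dsync/v3, module-aware consumers have to use the /v3 suffix in their import paths while the package identifier stays dsync, as the updated imports throughout this diff do. For example:

import (
	"github.com/minio/dsync/v3" // imported package is still referred to as "dsync"
)
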
diff --git a/performance/.gitignore b/performance/.gitignore
new file mode 100644
index 0000000..3c804e8
--- /dev/null
+++ b/performance/.gitignore
@@ -0,0 +1 @@
+performance
\ No newline at end of file
diff --git a/performance/net-rpc-client.go b/performance/net-rpc-client.go
index 486df5b..bef0ff0 100644
--- a/performance/net-rpc-client.go
+++ b/performance/net-rpc-client.go
@@ -20,7 +20,7 @@ import (
 	"net/rpc"
 	"sync"
 
-	"github.com/minio/dsync"
+	"github.com/minio/dsync/v3"
 )
 
 // ReconnectRPCClient is a wrapper type for rpc.Client which provides reconnect on first failure.
@@ -101,15 +101,6 @@ func (rpcClient *ReconnectRPCClient) Unlock(args dsync.LockArgs) (status bool, e
 	return status, err
 }
 
-func (rpcClient *ReconnectRPCClient) ForceUnlock(args dsync.LockArgs) (status bool, err error) {
-	err = rpcClient.Call("Dsync.ForceUnlock", &args, &status)
-	return status, err
-}
-
-func (rpcClient *ReconnectRPCClient) ServerAddr() string {
-	return rpcClient.addr
-}
-
-func (rpcClient *ReconnectRPCClient) ServiceEndpoint() string {
-	return rpcClient.endpoint
+func (rpcClient *ReconnectRPCClient) String() string {
+	return "http://" + rpcClient.addr + "/" + rpcClient.endpoint
 }
diff --git a/performance/performance-server.go b/performance/performance-server.go
index 9054f6c..85e83db 100644
--- a/performance/performance-server.go
+++ b/performance/performance-server.go
@@ -20,7 +20,7 @@ import (
 	"fmt"
 	"sync"
 
-	"github.com/minio/dsync"
+	"github.com/minio/dsync/v3"
 )
 
 const WriteLock = -1
diff --git a/performance/performance.go b/performance/performance.go
index 8e4a91a..024a2dd 100644
--- a/performance/performance.go
+++ b/performance/performance.go
@@ -17,6 +17,7 @@
 package main
 
 import (
+	"context"
 	"flag"
 	"fmt"
 	"log"
@@ -26,12 +27,13 @@ import (
 	"net/rpc"
 	"os"
 	"os/signal"
+	"path"
+	"runtime"
 	"strconv"
-	"strings"
 	"sync"
 	"time"
 
-	"github.com/minio/dsync"
+	"github.com/minio/dsync/v3"
 )
 
 const rpcPath = "/dsync"
@@ -51,15 +53,29 @@ var (
 	resources []string
 )
 
-func lockLoop(w *sync.WaitGroup, timeStart *time.Time, runs int, done *bool, nr int, ch chan<- float64) {
+func getSource() string {
+	var funcName string
+	pc, filename, lineNum, ok := runtime.Caller(2)
+	if ok {
+		filename = path.Base(filename)
+		funcName = runtime.FuncForPC(pc).Name()
+	} else {
+		filename = "<unknown>"
+		lineNum = 0
+	}
+
+	return fmt.Sprintf("[%s:%d:%s()]", filename, lineNum, funcName)
+}
+
+func lockLoop(ds *dsync.Dsync, w *sync.WaitGroup, timeStart *time.Time, runs int, done *bool, nr int, ch chan<- float64) {
 	defer w.Done()
-	dm := dsync.NewDRWMutex(fmt.Sprintf("chaos-%d-%d", *portFlag, nr))
+	dm := dsync.NewDRWMutex(context.Background(), fmt.Sprintf("chaos-%d-%d", *portFlag, nr), ds)
 
 	delayMax := float64(0.0)
 	timeLast := time.Now()
 	var run int
 	for run = 1; !*done && run <= runs; run++ {
-		dm.Lock()
+		dm.Lock("test", getSource())
 
 		if run == 1 { // re-initialize timing info to account for initial delay to start all servers
 			*timeStart = time.Now()
@@ -115,7 +131,8 @@ func main() {
 		clnts = append(clnts, newClient(servers[i], resources[i]))
 	}
 
-	if err := dsync.Init(clnts, getSelfNode(clnts, *portFlag)); err != nil {
+	ds, err := dsync.New(clnts)
+	if err != nil {
 		log.Fatalf("set nodes failed with %v", err)
 	}
 
@@ -147,7 +164,7 @@ func main() {
 	fmt.Println("Test starting...")
 
 	for i := 0; i < parallel; i++ {
-		go lockLoop(&wait, &timeStart, runs, &done, i, ch)
+		go lockLoop(ds, &wait, &timeStart, runs, &done, i, ch)
 	}
 	totalRuns := runs * parallel
 
@@ -172,19 +189,3 @@ func main() {
 		time.Sleep(10000 * time.Millisecond)
 	}
 }
-
-func getSelfNode(rpcClnts []dsync.NetLocker, port int) int {
-
-	index := -1
-	for i, c := range rpcClnts {
-		p, _ := strconv.Atoi(strings.Split(c.ServerAddr(), ":")[1])
-		if port == p {
-			if index == -1 {
-				index = i
-			} else {
-				panic("More than one port found")
-			}
-		}
-	}
-	return index
-}
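
The new getSource helper annotates every lock request with the location of the code that asked for it. Note that runtime.Caller counts stack frames from the function that calls it, so skip 0 is getSource itself and the hard-coded skip of 2 reports the caller of whatever function invoked getSource. A standalone sketch (standard library only, not part of the diff) that makes the skip levels explicit:

// Sketch only: demonstrates runtime.Caller skip levels.
package main

import (
	"fmt"
	"path"
	"runtime"
)

// where mirrors getSource above but takes the skip level as a parameter.
func where(skip int) string {
	pc, file, line, ok := runtime.Caller(skip)
	if !ok {
		return "[<unknown>:0]"
	}
	return fmt.Sprintf("[%s:%d:%s()]", path.Base(file), line, runtime.FuncForPC(pc).Name())
}

func inner() {
	fmt.Println(where(1)) // reports inner, the function that called where
	fmt.Println(where(2)) // reports outer, one frame higher
	fmt.Println(where(3)) // reports main, another frame higher
}

func outer() { inner() }

func main() { outer() }
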
diff --git a/retry.go b/retry.go
new file mode 100644
index 0000000..7d72a1f
--- /dev/null
+++ b/retry.go
@@ -0,0 +1,142 @@
+/*
+ * Minio Cloud Storage, (C) 2017 Minio, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package dsync
+
+import (
+	"math/rand"
+	"sync"
+	"time"
+)
+
+// lockedRandSource provides protected rand source, implements rand.Source interface.
+type lockedRandSource struct {
+	lk  sync.Mutex
+	src rand.Source
+}
+
+// Int63 returns a non-negative pseudo-random 63-bit integer as an
+// int64.
+func (r *lockedRandSource) Int63() (n int64) {
+	r.lk.Lock()
+	n = r.src.Int63()
+	r.lk.Unlock()
+	return
+}
+
+// Seed uses the provided seed value to initialize the generator to a
+// deterministic state.
+func (r *lockedRandSource) Seed(seed int64) {
+	r.lk.Lock()
+	r.src.Seed(seed)
+	r.lk.Unlock()
+}
+
+// MaxJitter will randomize over the full exponential backoff time
+const MaxJitter = 1.0
+
+// NoJitter disables the use of jitter for randomizing the
+// exponential backoff time
+const NoJitter = 0.0
+
+// Global random source for fetching random values.
+var globalRandomSource = rand.New(&lockedRandSource{
+	src: rand.NewSource(time.Now().UTC().UnixNano()),
+})
+
+// newRetryTimerJitter creates a timer with exponentially increasing delays
+// until the maximum retry attempts are reached. - this function is a fully
+// configurable version, meant for only advanced use cases. For the most part
+// one should use newRetryTimerSimple and newRetryTimer.
+func newRetryTimerWithJitter(unit time.Duration, cap time.Duration, jitter float64, doneCh <-chan struct{}) <-chan int {
+	attemptCh := make(chan int)
+
+	// normalize jitter to the range [0, 1.0]
+	if jitter < NoJitter {
+		jitter = NoJitter
+	}
+	if jitter > MaxJitter {
+		jitter = MaxJitter
+	}
+
+	// computes the exponential backoff duration according to
+	// https://www.awsarchitectureblog.com/2015/03/backoff.html
+	exponentialBackoffWait := func(attempt int) time.Duration {
+		// 1<<uint(attempt) below could overflow, so limit the value of attempt
+		maxAttempt := 30
+		if attempt > maxAttempt {
+			attempt = maxAttempt
+		}
+		//sleep = random_between(0, min(cap, base * 2 ** attempt))
+		sleep := unit * time.Duration(1<<uint(attempt))
+		if sleep > cap {
+			sleep = cap
+		}
+		if jitter != NoJitter {
+			sleep -= time.Duration(globalRandomSource.Float64() * float64(sleep) * jitter)
+		}
+		return sleep
+	}
+
+	go func() {
+		defer close(attemptCh)
+		nextBackoff := 0
+		// Channel used to signal after the expiry of backoff wait seconds.
+		var timer *time.Timer
+		for {
+			select { // Attempts starts.
+			case attemptCh <- nextBackoff:
+				nextBackoff++
+			case <-doneCh:
+				// Stop the routine.
+				return
+			}
+			timer = time.NewTimer(exponentialBackoffWait(nextBackoff))
+			// wait till next backoff time or till doneCh gets a message.
+			select {
+			case <-timer.C:
+			case <-doneCh:
+				// stop the timer and return.
+				timer.Stop()
+				return
+			}
+
+		}
+	}()
+
+	// Start reading..
+	return attemptCh
+}
+
+// Default retry constants.
+const (
+	defaultRetryUnit = time.Second     // 1 second.
+	defaultRetryCap  = 1 * time.Second // 1 second.
+)
+
+// newRetryTimer creates a timer with exponentially increasing delays
+// until the maximum retry attempts are reached. - this function provides
+// resulting retry values to be of maximum jitter.
+func newRetryTimer(unit time.Duration, cap time.Duration, doneCh <-chan struct{}) <-chan int {
+	return newRetryTimerWithJitter(unit, cap, MaxJitter, doneCh)
+}
+
+// newRetryTimerSimple creates a timer with exponentially increasing delays
+// until the maximum retry attempts are reached. - this function is a
+// simpler version with all default values.
+func newRetryTimerSimple(doneCh <-chan struct{}) <-chan int {
+	return newRetryTimerWithJitter(defaultRetryUnit, defaultRetryCap, MaxJitter, doneCh)
+}
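
The retry timer above delivers a monotonically increasing attempt counter over a channel, sleeps an exponentially growing (and optionally jittered) interval between sends, and shuts down once doneCh is closed. A hedged sketch of a consumer loop follows; since the helpers are unexported it would have to live inside the dsync package, and try/giveUp are hypothetical names, not part of the API:

// Sketch only. Assumes: import "time"
func retryUntil(try func() bool, giveUp time.Duration) bool {
	doneCh := make(chan struct{})
	defer close(doneCh) // stops the retry goroutine and closes the attempt channel

	deadline := time.Now().Add(giveUp)
	for range newRetryTimerSimple(doneCh) {
		if try() {
			return true // succeeded on this attempt
		}
		if time.Now().After(deadline) {
			return false // overall deadline exceeded, stop retrying
		}
	}
	return false
}
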
diff --git a/retry_test.go b/retry_test.go
new file mode 100644
index 0000000..bd8e14d
--- /dev/null
+++ b/retry_test.go
@@ -0,0 +1,82 @@
+/*
+ * Minio Cloud Storage, (C) 2017 Minio, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package dsync
+
+import (
+	"testing"
+	"time"
+)
+
+// Tests for retry timer.
+func TestRetryTimerSimple(t *testing.T) {
+	doneCh := make(chan struct{})
+	attemptCh := newRetryTimerSimple(doneCh)
+	i := <-attemptCh
+	if i != 0 {
+		close(doneCh)
+		t.Fatalf("Invalid attempt counter returned should be 0, found %d instead", i)
+	}
+	i = <-attemptCh
+	if i <= 0 {
+		close(doneCh)
+		t.Fatalf("Invalid attempt counter returned should be greater than 0, found %d instead", i)
+	}
+	close(doneCh)
+	_, ok := <-attemptCh
+	if ok {
+		t.Fatal("Attempt counter should be closed")
+	}
+}
+
+// Test retry time with no jitter.
+func TestRetryTimerWithNoJitter(t *testing.T) {
+	doneCh := make(chan struct{})
+	// No jitter
+	attemptCh := newRetryTimerWithJitter(time.Millisecond, 5*time.Millisecond, NoJitter, doneCh)
+	i := <-attemptCh
+	if i != 0 {
+		close(doneCh)
+		t.Fatalf("Invalid attempt counter returned should be 0, found %d instead", i)
+	}
+	// Loop through the maximum possible attempt.
+	for i = range attemptCh {
+		if i == 30 {
+			close(doneCh)
+		}
+	}
+	_, ok := <-attemptCh
+	if ok {
+		t.Fatal("Attempt counter should be closed")
+	}
+}
+
+// Test retry time with Jitter greater than MaxJitter.
+func TestRetryTimerWithJitter(t *testing.T) {
+	doneCh := make(chan struct{})
+	// Jitter will be set back to 1.0
+	attemptCh := newRetryTimerWithJitter(time.Second, 30*time.Second, 2.0, doneCh)
+	i := <-attemptCh
+	if i != 0 {
+		close(doneCh)
+		t.Fatalf("Invalid attempt counter returned should be 0, found %d instead", i)
+	}
+	close(doneCh)
+	_, ok := <-attemptCh
+	if ok {
+		t.Fatal("Attempt counter should be closed")
+	}
+}
diff --git a/rpc-client-impl_test.go b/rpc-client-impl_test.go
index 201a0d7..6ad39e1 100644
--- a/rpc-client-impl_test.go
+++ b/rpc-client-impl_test.go
@@ -20,7 +20,7 @@ import (
 	"net/rpc"
 	"sync"
 
-	. "github.com/minio/dsync"
+	. "github.com/minio/dsync/v3"
 )
 
 // ReconnectRPCClient is a wrapper type for rpc.Client which provides reconnect on first failure.
@@ -106,10 +106,6 @@ func (rpcClient *ReconnectRPCClient) ForceUnlock(args LockArgs) (status bool, er
 	return status, err
 }
 
-func (rpcClient *ReconnectRPCClient) ServerAddr() string {
-	return rpcClient.addr
-}
-
-func (rpcClient *ReconnectRPCClient) ServiceEndpoint() string {
-	return rpcClient.endpoint
+func (rpcClient *ReconnectRPCClient) String() string {
+	return "http://" + rpcClient.addr + "/" + rpcClient.endpoint
 }
diff --git a/rpc-client-interface.go b/rpc-client-interface.go
index 09d4c61..b613731 100644
--- a/rpc-client-interface.go
+++ b/rpc-client-interface.go
@@ -24,11 +24,9 @@ type LockArgs struct {
 	// Resource contains a entity to be locked/unlocked.
 	Resource string
 
-	// ServerAddr contains the address of the server who requested lock/unlock of the above resource.
-	ServerAddr string
-
-	// ServiceEndpoint contains the network path of above server to do lock/unlock.
-	ServiceEndpoint string
+	// Source contains the line number, function and file name of the code
+	// on the client node that requested the lock.
+	Source string
 }
 
 // NetLocker is dsync compatible locker interface.
@@ -53,14 +51,9 @@ type NetLocker interface {
 	// * an error on failure of unlock request operation.
 	Unlock(args LockArgs) (bool, error)
 
-	// Unlock (read/write) forcefully for given LockArgs. It should return
-	// * a boolean to indicate success/failure of the operation
-	// * an error on failure of unlock request operation.
-	ForceUnlock(args LockArgs) (bool, error)
-
-	// Return this lock server address.
-	ServerAddr() string
+	// Returns underlying endpoint of this lock client instance.
+	String() string
 
-	// Return this lock server service endpoint on which the server runs.
-	ServiceEndpoint() string
+	// Close closes any underlying connection to the service endpoint
+	Close() error
 }

Debdiff

[The following lists of changes regard files as different if they have different names, permissions or owners.]

Files in second set of .debs but not in first

-rw-r--r--  root/root   /usr/share/gocode/src/github.com/minio/dsync/dsync_private_test.go
-rw-r--r--  root/root   /usr/share/gocode/src/github.com/minio/dsync/go.mod
-rw-r--r--  root/root   /usr/share/gocode/src/github.com/minio/dsync/retry.go
-rw-r--r--  root/root   /usr/share/gocode/src/github.com/minio/dsync/retry_test.go

No differences were encountered in the control files
