Tokens for accessing external services via OAuth are cached. When a token changes, whether through automatic refreshing or manual re-authorization, the cache must be invalidated properly. Situations can arise in which two clients perform operations on two different nodes, each holding a different cached instance of the same OAuth account. If the token is refreshed on node X while node Y performs an operation against the external service before its local cache has been invalidated, that operation fails due to an invalid access token. The handled exception might in turn trigger another refresh operation. That is where we could end up in a cycle of two OAuth account instances mutually invalidating each other's tokens, until at least one operation is delayed long enough for the remote cache invalidation to take effect.

This is a theoretical and highly unlikely scenario. Nonetheless, it is solved by acquiring a cluster-wide lock for the specific task.
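The following is a minimal sketch of the idea, not the actual implementation. It assumes the Hazelcast 4+/5 CP subsystem lock API; the lock naming scheme, the timeout, and the `loadAccount`/`refreshToken` helpers are hypothetical stand-ins for the real storage and HTTP layers. The node that wants to refresh a token first acquires the distributed lock, re-checks whether the token is still expired, and only then performs the refresh; a competing node that lost the race sees the already refreshed token and skips its own refresh.

```java
import com.hazelcast.core.HazelcastInstance;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;

/** Minimal stand-in for an OAuth account with a cached access token. */
class OAuthAccount {
    volatile String accessToken;
    volatile long expiresAtMillis;
}

public class OAuthTokenRefresher {

    private final HazelcastInstance hazelcast;

    public OAuthTokenRefresher(HazelcastInstance hazelcast) {
        this.hazelcast = hazelcast;
    }

    /**
     * Refreshes the access token of the given account under a cluster-wide
     * lock, so that concurrent refreshes on different nodes cannot mutually
     * invalidate each other's tokens.
     */
    public void refreshWithClusterLock(int contextId, int accountId) throws InterruptedException {
        // One distributed lock per OAuth account; the naming scheme is an assumption.
        Lock lock = hazelcast.getCPSubsystem().getLock("oauth-refresh/" + contextId + "/" + accountId);
        if (!lock.tryLock(10, TimeUnit.SECONDS)) {
            // Another node is refreshing right now; it will publish the new token.
            return;
        }
        try {
            // Re-load the account: another node may have refreshed the token
            // while we were waiting for the lock.
            OAuthAccount account = loadAccount(contextId, accountId);
            if (account.expiresAtMillis > System.currentTimeMillis()) {
                return; // token already refreshed by a competing node
            }
            refreshToken(account); // the actual call to the external OAuth provider
        } finally {
            lock.unlock();
        }
    }

    // Hypothetical helpers standing in for the real storage and HTTP layers.
    private OAuthAccount loadAccount(int contextId, int accountId) {
        return new OAuthAccount();
    }

    private void refreshToken(OAuthAccount account) {
        account.accessToken = "new-token";
        account.expiresAtMillis = System.currentTimeMillis() + 3_600_000L;
    }
}
```

The double check after acquiring the lock is what breaks the cycle: only the first node to win the lock actually talks to the provider, every other node simply reuses the freshly stored token.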

If Hazelcast is enabled, it is used as the cluster lock storage. Otherwise, the fallback solution kicks in: the lock is maintained in the database, for which a new table is created.
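The database fallback can be pictured roughly as follows; the table name, columns, and stale-lock timeout are assumptions, not the actual schema. A lock is taken by inserting a row with a unique key for the task, so a duplicate-key error means another node holds the lock, and rows older than a timeout are cleaned up so a crashed node cannot block refreshes forever.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

/**
 * Database-backed fallback for the cluster lock. Table and column names are
 * illustrative only; the real schema may differ.
 *
 * CREATE TABLE oauth_cluster_lock (
 *     lock_key VARCHAR(191) NOT NULL PRIMARY KEY,
 *     acquired BIGINT       NOT NULL
 * );
 */
public class DatabaseClusterLock {

    private static final long STALE_AFTER_MILLIS = 60_000L; // assumed timeout

    /** Tries to take the lock; returns false if another node holds it. */
    public boolean tryLock(Connection con, String lockKey) throws SQLException {
        // Remove stale locks left behind by crashed nodes.
        try (PreparedStatement stmt = con.prepareStatement(
                "DELETE FROM oauth_cluster_lock WHERE lock_key = ? AND acquired < ?")) {
            stmt.setString(1, lockKey);
            stmt.setLong(2, System.currentTimeMillis() - STALE_AFTER_MILLIS);
            stmt.executeUpdate();
        }
        // The primary key makes the insert fail if the lock is already held.
        try (PreparedStatement stmt = con.prepareStatement(
                "INSERT INTO oauth_cluster_lock (lock_key, acquired) VALUES (?, ?)")) {
            stmt.setString(1, lockKey);
            stmt.setLong(2, System.currentTimeMillis());
            return stmt.executeUpdate() == 1;
        } catch (SQLException e) {
            return false; // duplicate key: another node holds the lock
        }
    }

    /** Releases the lock by deleting the row. */
    public void unlock(Connection con, String lockKey) throws SQLException {
        try (PreparedStatement stmt = con.prepareStatement(
                "DELETE FROM oauth_cluster_lock WHERE lock_key = ?")) {
            stmt.setString(1, lockKey);
            stmt.executeUpdate();
        }
    }
}
```

Either backend provides the same guarantee: at most one node in the cluster refreshes a given OAuth account's token at a time.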