patroni.ha module

class patroni.ha.Failsafe(dcs: AbstractDCS)

Bases: object

__init__(dcs: AbstractDCS) None
_reset_state() None
is_active() bool

Used to report via the REST API whether failsafe mode was activated.

On the primary, self._last_update is set by the set_is_active() method, so this method always returns the correct value.

On replicas, self._last_update is set at the moment the primary performs POST /failsafe REST API calls. As a side effect, replicas may show failsafe_is_active values that differ from the primary.
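
As a minimal sketch, assuming _last_update holds an absolute deadline (the real class keeps additional state and the details may differ), the check could look like this:

    import time

    class FailsafeSketch:
        """Illustrative only: tracks the last failsafe update as a deadline."""

        def __init__(self) -> None:
            self._last_update = 0.0  # set by set_is_active()/update()

        def set_is_active(self, value: float) -> None:
            # value is an absolute deadline (e.g. time.time() + ttl); 0 clears it
            self._last_update = value

        def is_active(self) -> bool:
            # failsafe mode counts as active while the deadline lies in the future
            return self._last_update > time.time()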

property leader: Optional[Leader]
set_is_active(value: float) None
update(data: Dict[str, Any]) None
update_cluster(cluster: Cluster) Cluster
class patroni.ha.Ha(patroni: Patroni)

Bases: object

__init__(patroni: Patroni)
_delete_leader(last_lsn: Optional[int] = None) None
_do_reinitialize(cluster: Cluster) Optional[bool]
_failsafe_config() Optional[Dict[str, str]]
_get_failover_action_name() str

Return the currently requested manual failover action name or the default failover.

Returns:

str representing the manually requested action: manual failover if no leader is specified in the /failover key in DCS, switchover otherwise, or failover if the /failover key is empty.
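
As a rough sketch of that selection logic (the arguments stand in for the contents of the /failover key; this is not the actual method body):

    from typing import Optional

    def failover_action_name(failover_key_exists: bool, failover_leader: Optional[str]) -> str:
        # Illustrative only: mirrors the description above.
        if not failover_key_exists:   # /failover is empty
            return 'failover'
        if failover_leader:           # a leader is named, so the user asked for a switchover
            return 'switchover'
        return 'manual failover'      # /failover exists but names no leader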

_get_node_to_follow(cluster: Cluster) Optional[Union[Leader, Member]]

Determine the node to follow.

Parameters:

cluster – the currently known cluster state from DCS.

Returns:

the node which we should be replicating from.

_handle_crash_recovery() Optional[str]
_handle_dcs_error() str
_handle_rewind_or_reinitialize() Optional[str]
_is_healthiest_node(members: Collection[Member], check_replication_lag: bool = True) bool

This method tries to determine whether the current node is healthy enough to become a new leader candidate.

_run_cycle() str
_sync_replication_slots(dcs_failed: bool) List[str]

Handles replication slots.

Parameters:

dcs_failed – bool, indicates that communication with DCS failed (get_cluster() or update_leader())

Returns:

list[str], names of replication slots that should be copied from the primary.

acquire_lock() bool
bootstrap() str
bootstrap_standby_leader() Optional[bool]

If the ‘standby’ key is found in the configuration, we need to bootstrap not a real primary, but a ‘standby leader’ that will take a base backup from a remote member and start following it.

call_failsafe_member(data: Dict[str, Any], member: Member) bool
cancel_initialization() None
check_failsafe_topology() bool

Check whether we could continue to run as a primary by calling all members from the failsafe topology.

Note

If the /failsafe key contains invalid data or if the name of our node is missing in the /failsafe key, we immediately give up and return False.

We send the JSON document in the POST request with the following fields:

  • name - the name of our node;

  • conn_url - connection URL to postgres that is reachable from other nodes;

  • api_url - connection URL to Patroni REST API on this node reachable from other nodes;

  • slots - a dict with replication slots that exist on the leader node, including a slot for the primary itself with the last known LSN, because standby nodes could have a permanent physical slot for the leader.

Standby nodes use the information from the slots dict to advance the position of permanent replication slots while DCS is not accessible, in order to avoid indefinite growth of pg_wal.

Returns:

True if all members from the /failsafe topology agree that this node could continue to run as a primary, or False if some of the standby nodes are not accessible or don’t agree.
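
For illustration, a POST /failsafe request body with the fields listed above might look roughly like this (all names and LSNs are invented for the example):

    # Illustrative payload only; not produced by the code verbatim.
    failsafe_request_body = {
        'name': 'node1',
        'conn_url': 'postgres://10.0.0.1:5432/postgres',
        'api_url': 'http://10.0.0.1:8008/patroni',
        'slots': {
            'node1': 67108864,  # the primary's own slot with the last known LSN
            'node2': 67100000,
            'node3': 67099000,
        },
    }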

check_timeline() bool
Returns:

True if we should check whether the timeline is the latest during the leader race.

clone(clone_member: Optional[Union[Leader, Member]] = None, msg: str = '(without leader)') Optional[bool]
delete_future_restart() bool
demote(mode: str) Optional[bool]

Demote PostgreSQL running as primary.

Parameters:

mode – One of offline, graceful, immediate or immediate-nolock.

  • offline is used when the connection to DCS is not available.

  • graceful is used when failing over to another node due to a user request. May only be called running async.

  • immediate is used when we determine that we are not suitable for primary and want to fail over quickly without regard for data durability. May only be called synchronously.

  • immediate-nolock is used when we find out that we have lost the lock to be primary and need to bring down PostgreSQL as quickly as possible without regard for data durability. May only be called synchronously.
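
A compact restatement of the four modes as a sketch; the flag names are invented for this example and are not Patroni internals:

    # Illustrative only: summary of the demote modes described above.
    DEMOTE_MODES = {
        'offline':          {'trigger': 'DCS is unavailable',         'durability': True,  'call': 'n/a'},
        'graceful':         {'trigger': 'user-requested failover',    'durability': True,  'call': 'async only'},
        'immediate':        {'trigger': 'node unsuitable as primary', 'durability': False, 'call': 'sync only'},
        'immediate-nolock': {'trigger': 'leader lock already lost',   'durability': False, 'call': 'sync only'},
    }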

enforce_follow_remote_member(message: str) str
enforce_primary_role(message: str, promote_message: str) str

Ensure the node that has won the race for the leader key meets criteria for promoting its PG server to the ‘primary’ role.

evaluate_scheduled_restart() Optional[str]
failsafe_is_active() bool
fetch_node_status(member: Member) _MemberStatus

Perform an HTTP GET request on member.api_url and fetch the node’s status.

Returns:

_MemberStatus object

fetch_nodes_statuses(members: List[Member]) List[_MemberStatus]
follow(demote_reason: str, follow_reason: str, refresh: bool = True) str
future_restart_scheduled() Dict[str, Any]
get_effective_tags() Dict[str, Any]

Return configuration tags merged with dynamically applied tags.

get_failover_candidates(exclude_failover_candidate: bool) List[Member]

Return a list of candidates for either manual or automatic failover.

Exclude non-sync members when in synchronous mode, the current node (its checks are always performed earlier), and the candidate if required. If failover candidate exclusion is not requested and a candidate is specified in the /failover key, return only that candidate. The result is further evaluated in the caller Ha.is_failover_possible() to check whether any member is actually healthy enough and is allowed to promote.

Parameters:

exclude_failover_candidate – if True, exclude failover.candidate from the candidates.

Returns:

a list of Member objects, or an empty list if no candidate is available.
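
A simplified sketch of these filtering rules; MemberStub is a stand-in with only the attributes the example needs, not the real Member class:

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class MemberStub:
        """Stand-in for a cluster member with only the fields this sketch uses."""
        name: str
        is_sync: bool = True

    def failover_candidates(members: List[MemberStub], me: str,
                            requested_candidate: Optional[str],
                            synchronous_mode: bool,
                            exclude_failover_candidate: bool) -> List[MemberStub]:
        # Illustrative only: mirrors the filtering rules described above.
        result = []
        for m in members:
            if m.name == me:                        # our own checks run earlier
                continue
            if synchronous_mode and not m.is_sync:  # only sync members in synchronous mode
                continue
            if exclude_failover_candidate and m.name == requested_candidate:
                continue
            result.append(m)
        if requested_candidate and not exclude_failover_candidate:
            return [m for m in result if m.name == requested_candidate]
        return result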

get_remote_member(member: Optional[Union[Leader, Member]] = None) RemoteMember

Get remote member node to stream from.

In the case of a standby cluster this tells us from which remote member to stream. The config can be either the patroni config or cluster.config.data.

handle_long_action_in_progress() str

Figure out what to do with the task AsyncExecutor is performing.

handle_starting_instance() Optional[str]

Starting up PostgreSQL may take a long time. If we are the leader, we may want to fail over to another node in the meantime.

has_lock(info: bool = True) bool
is_failover_possible(*, cluster_lsn: int = 0, exclude_failover_candidate: bool = False) bool

Checks whether any of the cluster members is allowed to promote and is healthy enough for that.

Parameters:
  • cluster_lsn – used to calculate replication lag and exclude a member if it is lagging.

  • exclude_failover_candidate – if True, exclude failover.candidate from the members list against which the failover possibility checks are run.

Returns:

True if there are members eligible to become the new leader.

is_failsafe_mode() bool
Returns:

True if failsafe_mode is enabled in global configuration.

is_healthiest_node() bool

Performs a series of checks to determine whether the current node is the best candidate.

If a manual failover/switchover is requested, it calls the manual_failover_process_no_leader() method.

Returns:

True if the current node is among the best candidates to become the new leader.

is_lagging(wal_position: int) bool

Returns whether an instance with the given WAL position should consider itself unhealthy to be promoted due to replication lag.

Parameters:

wal_position – Current wal position.

Returns:

True when the node is lagging.
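
A minimal sketch of such a lag check, assuming the last known leader LSN and the maximum_lag_on_failover setting are available (the real method obtains them from the cluster state and global configuration):

    def lagging(wal_position: int, last_leader_lsn: int, maximum_lag_on_failover: int) -> bool:
        # Illustrative only: the node counts as lagging when it is more than
        # maximum_lag_on_failover bytes behind the last known leader position.
        return last_leader_lsn - wal_position > maximum_lag_on_failover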

is_leader() bool
Returns:

True if the current node is the leader, based on expiration set when it last held the key.

is_paused() bool
Returns:

True if in maintenance mode.

is_standby_cluster() bool
Returns:

True if global configuration has a valid “standby_cluster” section.

is_sync_standby(cluster: Cluster) bool
Returns:

True if the current node is a synchronous standby.

is_synchronous_mode() bool
Returns:

True if synchronous replication is requested.

load_cluster_from_dcs() None
manual_failover_process_no_leader() Optional[bool]

Handles manual failover/switchover when the old leader already stepped down.

Returns:

  • True if the current node is the best candidate to become the new leader

  • None if the current node is running as a primary and requested candidate doesn’t exist

notify_citus_coordinator(event: str) None
post_bootstrap() str
post_recover() Optional[str]
primary_stop_timeout() Optional[int]
Returns:

“primary_stop_timeout” from the global configuration or None when not in synchronous mode.

process_healthy_cluster() str
process_manual_failover_from_leader() Optional[str]

Checks if manual failover is requested and takes action if appropriate.

Cleans up failover key if failover conditions are not matched.

Returns:

action message if demote was initiated, None if no action was taken

process_sync_replication() None

Process synchronous standby behavior.

Synchronous standbys are registered in two places: postgresql.conf and DCS. The order of updating them must be right. The invariant that should be kept is that if a node is the primary and sync_standby is set in DCS, then that node must have synchronous_standby set to that value. Put more simply: first set it in postgresql.conf, then in DCS. When removing, first remove it in DCS, then in postgresql.conf. This way we only consider promoting standbys that were guaranteed to be replicating synchronously.
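
A sketch of that ordering, with print statements standing in for the real postgresql.conf and DCS updates (the helper names are invented for the example):

    # Illustrative only: the ordering invariant described above.

    def update_postgresql_conf(names: str) -> None:
        print(f'synchronous standby setting -> {names!r}')  # stand-in for a config change

    def update_dcs_sync_key(names: str) -> None:
        print(f'/sync key -> {names!r}')                    # stand-in for a DCS write

    def add_sync_standby(name: str) -> None:
        update_postgresql_conf(name)  # 1. set it in postgresql.conf first ...
        update_dcs_sync_key(name)     # 2. ... then publish it in DCS

    def remove_sync_standby() -> None:
        update_dcs_sync_key('')       # 1. remove it from DCS first ...
        update_postgresql_conf('')    # 2. ... then from postgresql.conf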

process_unhealthy_cluster() str

Cluster has no leader key

recover() str

Handle the case when postgres isn’t running.

Depending on the state of Patroni, DCS cluster view, and pg_controldata the following could happen:

  • if primary_start_timeout is 0 and this node owns the leader lock, the lock will be voluntarily released if there are healthy replicas to take it over.

  • if postgres was running as a primary and this node owns the leader lock, postgres is started as primary.

  • crash recovery in single-user mode is executed in the following cases:

    • postgres was running as a primary, wasn’t shut down cleanly, and there is no leader in DCS

    • postgres was running as a replica, wasn’t shut down in recovery (cleanly), and we need to run pg_rewind to join back to the cluster.

  • pg_rewind is executed if it is necessary, or optionally, the data directory could be removed if it is allowed by configuration.

  • after crash recovery and/or pg_rewind are executed, postgres is started in recovery.

Returns:

action message, describing what was performed.
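
A heavily simplified sketch that condenses these cases into a single decision function; the boolean arguments stand in for the real Patroni, DCS, and pg_controldata checks:

    def recover_action(primary_start_timeout: int, has_leader_lock: bool,
                       healthy_replicas_available: bool, was_primary: bool,
                       clean_shutdown: bool, leader_in_dcs: bool,
                       rewind_needed: bool) -> str:
        # Illustrative only: a condensed view of the bullet points above.
        if primary_start_timeout == 0 and has_leader_lock and healthy_replicas_available:
            return 'release the leader lock voluntarily'
        if was_primary and has_leader_lock:
            return 'start postgres as a primary'
        if was_primary and not clean_shutdown and not leader_in_dcs:
            return 'run crash recovery in single-user mode, then start in recovery'
        if not was_primary and not clean_shutdown and rewind_needed:
            return 'run crash recovery in single-user mode, run pg_rewind, then start in recovery'
        return 'start postgres in recovery (running pg_rewind or reinitializing if needed)'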

reinitialize(force: bool = False) Optional[str]
release_leader_key_voluntarily(last_lsn: Optional[int] = None) None
restart(restart_data: Dict[str, Any], run_async: bool = False) Tuple[bool, str]

Conditional and unconditional restart.

restart_matches(role: Optional[str], postgres_version: Optional[str], pending_restart: bool) bool
restart_scheduled() bool
run_cycle() str
schedule_future_restart(restart_data: Dict[str, Any]) bool
set_is_leader(value: bool) None

Update the current node’s view of its own leadership status.

Updates the expiry timestamp to match the DCS ttl if setting leadership to true; otherwise sets the expiry to the past to invalidate it immediately.

Parameters:

value – whether the current node is the leader.
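
A minimal sketch of that expiry-based bookkeeping, assuming a DCS ttl in seconds (the real method keeps additional state):

    import time

    class LeadershipView:
        """Illustrative only: the node's own view of leadership as a deadline."""

        def __init__(self, ttl: int) -> None:
            self._ttl = ttl
            self._leader_expiry = 0.0

        def set_is_leader(self, value: bool) -> None:
            # extend the expiry by the DCS ttl when we hold the key,
            # otherwise push it into the past to invalidate immediately
            self._leader_expiry = time.time() + self._ttl if value else 0.0

        def is_leader(self) -> bool:
            return self._leader_expiry > time.time()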

set_start_timeout(value: Optional[int]) None

Sets timeout for starting as primary before eligible for failover.

Must be called when async_executor is busy or in the main thread.

should_run_scheduled_action(action_name: str, scheduled_at: Optional[datetime], cleanup_fn: Callable[[...], Any]) bool
shutdown() None
sync_mode_is_active() bool

Check whether synchronous replication is requested and already active.

Returns:

True if the primary already put its name into the /sync key in DCS.

static sysid_valid(sysid: Optional[str]) bool
touch_member() bool
update_cluster_history() None
update_failsafe(data: Dict[str, Any]) Optional[str]
update_lock(update_status: bool = False) bool

Update the leader lock in DCS.

Note

After successful update of the leader key the AbstractDCS.update_leader() method could also optionally update the /status and /failsafe keys.

The /status key contains the last known LSN on the leader node and the last known state of permanent replication slots including permanent physical replication slot for the leader.

Last but not least, this method calls the Watchdog.keepalive() method after the leader key was successfully updated.

Parameters:

update_status – True if we also need to update the /status key in DCS, otherwise False.

Returns:

True if the leader key was successfully updated and we can continue to run postgres as a primary or as a standby_leader, otherwise False.

wakeup() None

Trigger the next run of HA loop if there is no “active” leader watch request in progress.

This usually happens on the leader or if the node is running an async action.

watch(timeout: float) bool
while_not_sync_standby(func: Callable[[...], Any]) Any

Runs the specified action while trying to make sure that the node is not assigned synchronous standby status.

Tags us as not allowed to be a sync standby, as we are going to go away. If we currently are one, wait for the leader to notice and pick an alternative; if the leader changes or goes away, we are also free.

If the connection to DCS fails we run the action anyway, as this is only a hint.

There is a small race window where this function runs between the primary picking us as the sync standby and publishing it to the DCS. As the window is rather tiny and the consequence is merely holding up commits for one cycle period, we don’t worry about it here.
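
A rough sketch of that pattern, with callables standing in for the tag update and the DCS check (this is not the real method body):

    import time
    from typing import Any, Callable

    def run_while_not_sync_standby(set_nosync_tag: Callable[[bool], None],
                                   still_listed_as_sync: Callable[[], bool],
                                   func: Callable[[], Any],
                                   timeout: float = 10.0) -> Any:
        # Illustrative only: mark ourselves as not eligible for sync duty, give the
        # leader a moment to pick a replacement, run the action, then restore the tag.
        set_nosync_tag(True)
        deadline = time.time() + timeout
        try:
            while still_listed_as_sync() and time.time() < deadline:
                time.sleep(1)  # wait for the leader to notice and pick an alternative
            return func()
        finally:
            set_nosync_tag(False)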

class patroni.ha._MemberStatus(member: Member, reachable: bool, in_recovery: Optional[bool], wal_position: int, data: Dict[str, Any])

Bases: Tags, _MemberStatus

Node status distilled from API response.

Consists of the following fields:

Variables:
  • member – Member object of the node.

  • reachable – False if the node is not reachable or is not responding with correct JSON.

  • in_recovery – True if pg_is_in_recovery() == true; False if the node is running as a primary.

  • wal_position – maximum value of replayed_location or received_location from JSON.

  • data – the whole JSON response for future usage.

_abc_impl = <_abc._abc_data object>
failover_limitation() Optional[str]

Returns reason why this node can’t promote or None if everything is ok.

classmethod from_api_response(member: Member, json: Dict[str, Any]) _MemberStatus
Parameters:
  • member – dcs.Member object

  • json – RestApiHandler.get_postgresql_status() result

Returns:

_MemberStatus object
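
A simplified sketch of how the documented fields could be distilled from a /patroni JSON response; the key names follow Patroni’s REST API conventions, but the real classmethod may differ in detail:

    from typing import Any, Dict

    def member_status_fields(json: Dict[str, Any]) -> Dict[str, Any]:
        # Illustrative only: distill the documented fields from an API response.
        wal = json.get('xlog') or {}
        return {
            'reachable': True,  # we received valid JSON back
            'in_recovery': json.get('role') != 'primary',
            'wal_position': max(wal.get('replayed_location') or 0,
                                wal.get('received_location') or 0),
            'data': json,
        }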

property tags: Dict[str, Any]

Dictionary with values of different tags (e.g., nofailover).

property timeline: int

Timeline value from JSON.

classmethod unknown(member: Member) _MemberStatus

Create a new class instance with empty or null values.

property watchdog_failed: bool

Indicates that watchdog is required by configuration but not available or failed.