Timestamp:
Nov 25, 2016, 8:04:54 PM (9 years ago)
Author:
Silvan Scherrer
Message:

Samba Server: update vendor to version 4.4.7

Location:
vendor/current/ctdb/doc
Files:
3 edited

  • vendor/current/ctdb/doc/ctdb-tunables.7

    r988 r989  
    33.\"    Author:
    44.\" Generator: DocBook XSL Stylesheets v1.78.1 <http://docbook.sf.net/>
    5 .\"      Date: 01/27/2016
     5.\"      Date: 09/22/2016
    66.\"    Manual: CTDB - clustered TDB database
    77.\"    Source: ctdb
    88.\"  Language: English
    99.\"
    10 .TH "CTDB\-TUNABLES" "7" "01/27/2016" "ctdb" "CTDB \- clustered TDB database"
     10.TH "CTDB\-TUNABLES" "7" "09/22/2016" "ctdb" "CTDB \- clustered TDB database"
    1111.\" -----------------------------------------------------------------
    1212.\" * Define some portability stuff
     
    3838\fBgetvar\fR
    3939commands for more details\&.
    40 .SS "MaxRedirectCount"
     40.PP
     41The tunable variables are listed alphabetically\&.
     42.SS "AllowClientDBAttach"
     43.PP
     44Default: 1
     45.PP
     46When set to 0, clients are not allowed to attach to any databases\&. This can be used to temporarily block any new processes from attaching to and accessing the databases\&. This is mainly used for detaching a volatile database using \*(Aqctdb detach\*(Aq\&.
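As an illustration only (not part of the upstream man page), the sketch below shows how an administrator script might toggle AllowClientDBAttach around a \*(Aqctdb detach\*(Aq by shelling out to the ctdb CLI; the helper names and the exact argument order of \*(Aqctdb setvar\*(Aq and \*(Aqctdb detach\*(Aq are assumptions here.

    # Hypothetical helpers, assuming 'ctdb setvar NAME VALUE' and 'ctdb detach DBNAME'.
    import subprocess

    def set_tunable(name: str, value: int) -> None:
        # 'ctdb setvar' is the documented way to change a tunable at runtime
        subprocess.run(["ctdb", "setvar", name, str(value)], check=True)

    def detach_volatile_db(db_name: str) -> None:
        # Temporarily block new client attaches, detach the database, then re-allow attaches
        set_tunable("AllowClientDBAttach", 0)
        try:
            subprocess.run(["ctdb", "detach", db_name], check=True)
        finally:
            set_tunable("AllowClientDBAttach", 1)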
     47.SS "AllowUnhealthyDBRead"
     48.PP
     49Default: 0
     50.PP
     51When set to 1, ctdb allows database traverses to read unhealthy databases\&. By default, ctdb does not allow reading records from unhealthy databases\&.
     52.SS "ControlTimeout"
     53.PP
     54Default: 60
     55.PP
     56This is the default setting for timeout for when sending a control message to either the local or a remote ctdb daemon\&.
     57.SS "DatabaseHashSize"
     58.PP
     59Default: 100001
     60.PP
     61Number of the hash chains for the local store of the tdbs that ctdb manages\&.
     62.SS "DatabaseMaxDead"
     63.PP
     64Default: 5
     65.PP
     66Maximum number of dead records per hash chain for the tdb databases managed by ctdb\&.
     67.SS "DBRecordCountWarn"
     68.PP
     69Default: 100000
     70.PP
     71When set to non\-zero, ctdb will log a warning during recovery if a database has more than this many records\&. This will produce a warning if a database grows uncontrollably with orphaned records\&.
     72.SS "DBRecordSizeWarn"
     73.PP
     74Default: 10000000
     75.PP
     76When set to non\-zero, ctdb will log a warning during recovery if a single record is bigger than this size\&. This will produce a warning if a database record grows uncontrollably\&.
     77.SS "DBSizeWarn"
     78.PP
     79Default: 1000000000
     80.PP
     81When set to non\-zero, ctdb will log a warning during recovery if a database size is bigger than this\&. This will produce a warning if a database grows uncontrollably\&.
     82.SS "DeferredAttachTO"
     83.PP
     84Default: 120
     85.PP
     86When databases are frozen we do not allow clients to attach to the databases\&. Instead of returning an error immediately to the client, the attach request from the client is deferred until the database becomes available again at which stage we respond to the client\&.
     87.PP
     88This timeout controls how long we will defer the request from the client before timing it out and returning an error to the client\&.
     89.SS "DeterministicIPs"
     90.PP
     91Default: 0
     92.PP
     93When set to 1, ctdb will try to keep public IP addresses locked to specific nodes as far as possible\&. This makes it easier for debugging since you can know that as long as all nodes are healthy public IP X will always be hosted by node Y\&.
     94.PP
     95The cost of using deterministic IP address assignment is that it disables part of the logic where ctdb tries to reduce the number of public IP assignment changes in the cluster\&. This tunable may increase the number of IP failover/failbacks that are performed on the cluster by a small margin\&.
     96.SS "DisableIPFailover"
     97.PP
     98Default: 0
     99.PP
     100When set to non\-zero, ctdb will not perform failover or failback\&. Even if a node fails while holding public IPs, ctdb will not recover the IPs or assign them to another node\&.
     101.PP
     102When this tunable is enabled, ctdb will no longer attempt to recover the cluster by failing IP addresses over to other nodes\&. This leads to a service outage until the administrator has manually performed IP failover to replacement nodes using the \*(Aqctdb moveip\*(Aq command\&.
     103.SS "ElectionTimeout"
    41104.PP
    42105Default: 3
    43106.PP
    44 If we are not the DMASTER and need to fetch a record across the network we first send the request to the LMASTER after which the record is passed onto the current DMASTER\&. If the DMASTER changes before the request has reached that node, the request will be passed onto the "next" DMASTER\&. For very hot records that migrate rapidly across the cluster this can cause a request to "chase" the record for many hops before it catches up with the record\&. This is how many hops we allow trying to chase the DMASTER before we switch back to the LMASTER again to ask for new directions\&.
    45 .PP
    46 When chasing a record, this is how many hops we will chase the record for before going back to the LMASTER to ask for new guidance\&.
    47 .SS "SeqnumInterval"
     107The number of seconds to wait for the election of recovery master to complete\&. If the election is not completed during this interval, then that round of election fails and ctdb starts a new election\&.
     108.SS "EnableBans"
     109.PP
     110Default: 1
     111.PP
     112This parameter allows ctdb to ban a node if the node is misbehaving\&.
     113.PP
     114When set to 0, this disables banning completely in the cluster and thus nodes can not get banned, even if they break\&. Don\*(Aqt set to 0 unless you know what you are doing\&. You should set this to the same value on all nodes to avoid unexpected behaviour\&.
     115.SS "EventScriptTimeout"
     116.PP
     117Default: 30
     118.PP
     119Maximum time in seconds to allow an event to run before timing out\&. This is the total time for all enabled scripts that are run for an event, not just a single event script\&.
     120.PP
     121Note that timeouts are ignored for some events ("takeip", "releaseip", "startrecovery", "recovered") and converted to success\&. The logic here is that the callers of these events implement their own additional timeout\&.
     122.SS "FetchCollapse"
     123.PP
     124Default: 1
     125.PP
     126This parameter is used to avoid multiple migration requests for the same record from a single node\&. All the record requests for the same record are queued up and processed when the record is migrated to the current node\&.
     127.PP
     128When many clients across many nodes try to access the same record at the same time this can lead to a fetch storm where the record becomes very active and bounces between nodes very fast\&. This leads to high CPU utilization of the ctdbd daemon, trying to bounce that record around very fast, and poor performance\&. This can improve performance and reduce CPU utilization for certain workloads\&.
     129.SS "HopcountMakeSticky"
     130.PP
     131Default: 50
     132.PP
     133For database(s) marked STICKY (using \*(Aqctdb setdbsticky\*(Aq), any record that is migrating so fast that its hopcount exceeds this limit is marked as a STICKY record for
     134\fIStickyDuration\fR
     135seconds\&. This means that after each migration the sticky record will be kept on the node for
     136\fIStickyPindown\fR milliseconds and prevented from being migrated off the node\&.
     137.PP
     138This will improve performance for certain workloads, such as locking\&.tdb if many clients are opening/closing the same file concurrently\&.
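A minimal Python sketch of the decision described above, assuming a database already marked STICKY with \*(Aqctdb setdbsticky\*(Aq; the names and structure are illustrative only, not ctdb's implementation:

    # Defaults quoted on this page: HopcountMakeSticky=50, StickyDuration=600, StickyPindown=200
    HOPCOUNT_MAKE_STICKY = 50
    STICKY_DURATION_SECS = 600
    STICKY_PINDOWN_MS = 200

    def on_record_migration(hopcount: int, db_is_sticky: bool) -> dict:
        """Return how a migrating record would be treated (illustrative model only)."""
        if db_is_sticky and hopcount > HOPCOUNT_MAKE_STICKY:
            return {"flag_sticky_for_secs": STICKY_DURATION_SECS,
                    "pin_on_node_for_ms": STICKY_PINDOWN_MS}
        return {}

    print(on_record_migration(hopcount=73, db_is_sticky=True))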
     139.SS "KeepaliveInterval"
     140.PP
     141Default: 5
     142.PP
     143How often in seconds should the nodes send keep\-alive packets to each other\&.
     144.SS "KeepaliveLimit"
     145.PP
     146Default: 5
     147.PP
     148After how many keepalive intervals without any traffic should a node wait until marking the peer as DISCONNECTED\&.
     149.PP
     150If a node has hung, it can take
     151\fIKeepaliveInterval\fR
     152* (\fIKeepaliveLimit\fR
     153+ 1) seconds before ctdb determines that the node is DISCONNECTED and performs a recovery\&. This limit should not be set too high, to enable early detection and to keep application timeouts (e\&.g\&. SMB1) from kicking in before the failover is completed\&.
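A worked example of the detection time stated above, using the defaults on this page (KeepaliveInterval=5, KeepaliveLimit=5):

    keepalive_interval = 5   # seconds between keep-alive packets
    keepalive_limit = 5      # intervals without traffic before DISCONNECTED

    # KeepaliveInterval * (KeepaliveLimit + 1), as stated in the text
    print(keepalive_interval * (keepalive_limit + 1))  # 30 seconds before recovery starts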
     154.SS "LCP2PublicIPs"
     155.PP
     156Default: 1
     157.PP
     158When set to 1, ctdb uses the LCP2 ip allocation algorithm\&.
     159.SS "LockProcessesPerDB"
     160.PP
     161Default: 200
     162.PP
     163This is the maximum number of lock helper processes ctdb will create for obtaining record locks\&. When ctdb cannot get a record lock without blocking, it creates a helper process that waits for the lock to be obtained\&.
     164.SS "LogLatencyMs"
     165.PP
     166Default: 0
     167.PP
     168When set to non\-zero, ctdb will log if certain operations take longer than this value, in milliseconds, to complete\&. These operations include "process a record request from client", "take a record or database lock", "update a persistent database record" and "vacuum a database"\&.
     169.SS "MaxQueueDropMsg"
     170.PP
     171Default: 1000000
     172.PP
     173This is the maximum number of messages to be queued up for a client before ctdb will treat the client as hung and will terminate the client connection\&.
     174.SS "MonitorInterval"
     175.PP
     176Default: 15
     177.PP
     178How often should ctdb run the \*(Aqmonitor\*(Aq event in seconds to check for a node\*(Aqs health\&.
     179.SS "MonitorTimeoutCount"
     180.PP
     181Default: 20
     182.PP
     183How many \*(Aqmonitor\*(Aq events in a row need to timeout before a node is flagged as UNHEALTHY\&. This setting is useful if scripts can not be written so that they do not hang for benign reasons\&.
     184.SS "NoIPFailback"
     185.PP
     186Default: 0
     187.PP
     188When set to 1, ctdb will not perform failback of IP addresses when a node becomes healthy\&. When a node becomes UNHEALTHY, ctdb WILL perform failover of public IP addresses, but when the node becomes HEALTHY again, ctdb will not fail the addresses back\&.
     189.PP
     190Use with caution! Normally when a node becomes available to the cluster ctdb will try to reassign public IP addresses onto the new node as a way to distribute the workload evenly across the cluster nodes\&. Ctdb tries to make sure that all running nodes host approximately the same number of public addresses\&.
     191.PP
     192When you enable this tunable, ctdb will no longer attempt to rebalance the cluster by failing IP addresses back to the new nodes\&. An unbalanced cluster will therefore remain unbalanced until there is manual intervention from the administrator\&. When this parameter is set, you can manually fail public IP addresses over to the new node(s) using the \*(Aqctdb moveip\*(Aq command\&.
     193.SS "NoIPHostOnAllDisabled"
     194.PP
     195Default: 0
     196.PP
     197If no nodes are HEALTHY then by default ctdb will happily host public IPs on disabled (unhealthy or administratively disabled) nodes\&. This can cause problems, for example if the underlying cluster filesystem is not mounted\&. When set to 1 on a node and that node is disabled, any IPs hosted by this node will be released and the node will not takeover any IPs until it is no longer disabled\&.
     198.SS "NoIPTakeover"
     199.PP
     200Default: 0
     201.PP
     202When set to 1, ctdb will not allow IP addresses to be failed over onto this node\&. Any IP addresses that the node currently hosts will remain on the node but no new IP addresses can be failed over to the node\&.
     203.SS "PullDBPreallocation"
     204.PP
     205Default: 10*1024*1024
     206.PP
     207This is the size of a record buffer to pre\-allocate for sending a reply to the PULLDB control\&. Usually the record buffer starts with the size of the first record and gets reallocated every time a new record is added to the record buffer\&. For a large number of records, growing the record buffer one record at a time can be very inefficient\&.
     208.SS "RecBufferSizeLimit"
     209.PP
     210Default: 1000000
     211.PP
     212This is the limit on the size of the record buffer to be sent in various controls\&. This limit is used by new controls used for recovery and controls used in vacuuming\&.
     213.SS "RecdFailCount"
     214.PP
     215Default: 10
     216.PP
     217If the recovery daemon has failed to ping the main daemon for this many consecutive intervals, the main daemon will consider the recovery daemon as hung and will try to restart it to recover\&.
     218.SS "RecdPingTimeout"
     219.PP
     220Default: 60
     221.PP
     222If the main daemon has not heard a "ping" from the recovery daemon for this many seconds, the main daemon will log a message that the recovery daemon is potentially hung\&. This also increments a counter which is checked against
     223\fIRecdFailCount\fR
     224for detection of a hung recovery daemon\&.
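A rough worked example of the relationship between RecdPingTimeout and RecdFailCount using the defaults above; ctdb's exact accounting may differ, this only illustrates the scale:

    recd_ping_timeout = 60   # seconds of silence before the counter is incremented
    recd_fail_count = 10     # consecutive missed intervals before a restart attempt

    print(recd_ping_timeout * recd_fail_count)  # ~600 seconds of silence before a restart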
     225.SS "RecLockLatencyMs"
    48226.PP
    49227Default: 1000
    50228.PP
    51 Some databases have seqnum tracking enabled, so that samba will be able to detect asynchronously when there have been updates to the database\&. Every time a database is updated its sequence number is increased\&.
    52 .PP
    53 This tunable is used to specify in \*(Aqms\*(Aq how frequently ctdb will send out updates to remote nodes to inform them that the sequence number is increased\&.
    54 .SS "ControlTimeout"
    55 .PP
    56 Default: 60
    57 .PP
    58 This is the default setting for timeout for when sending a control message to either the local or a remote ctdb daemon\&.
    59 .SS "TraverseTimeout"
    60 .PP
    61 Default: 20
    62 .PP
    63 This setting controls how long we allow a traverse process to run\&. After this timeout triggers, the main ctdb daemon will abort the traverse if it has not yet finished\&.
    64 .SS "KeepaliveInterval"
    65 .PP
    66 Default: 5
    67 .PP
    68 How often in seconds should the nodes send keepalives to each other\&.
    69 .SS "KeepaliveLimit"
    70 .PP
    71 Default: 5
    72 .PP
    73 After how many keepalive intervals without any traffic should a node wait until marking the peer as DISCONNECTED\&.
    74 .PP
    75 If a node has hung, it can thus take KeepaliveInterval*(KeepaliveLimit+1) seconds before we determine that the node is DISCONNECTED and that we require a recovery\&. This limit should not be set too high since we want a hung node to be detected, and expunged from the cluster, well before common CIFS timeouts (45\-90 seconds) kick in\&.
     229When using a reclock file for split brain prevention, if set to non\-zero this tunable will make the recovery daemon log a message if the fcntl() call to lock/testlock the recovery file takes longer than this number of milliseconds\&.
     230.SS "RecoverInterval"
     231.PP
     232Default: 1
     233.PP
     234How frequently in seconds should the recovery daemon perform the consistency checks to determine if it should perform a recovery\&.
     235.SS "RecoverPDBBySeqNum"
     236.PP
     237Default: 1
     238.PP
     239When set to zero, database recovery for persistent databases is record\-by\-record and the recovery process simply collects the most recent version of every individual record\&.
     240.PP
     241When set to non\-zero, persistent databases will instead be recovered as a whole database and not by individual records\&. The node that contains the highest value stored in the record "__db_sequence_number__" is selected and the copy of that node\*(Aqs database is used as the recovered database\&.
     242.PP
     243By default, recovery of persistent databases is done using the __db_sequence_number__ record\&.
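A minimal Python sketch of the selection rule described above for persistent databases when RecoverPDBBySeqNum is non-zero; the node data is made up for illustration:

    # The node whose copy holds the highest __db_sequence_number__ supplies the recovered database.
    copies = {
        "node0": {"__db_sequence_number__": 41},
        "node1": {"__db_sequence_number__": 57},
        "node2": {"__db_sequence_number__": 55},
    }

    chosen = max(copies, key=lambda n: copies[n]["__db_sequence_number__"])
    print(chosen)  # node1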
    76244.SS "RecoverTimeout"
    77245.PP
    78 Default: 20
     246Default: 120
    79247.PP
    80248This is the default setting for timeouts for controls when sent from the recovery daemon\&. We allow longer control timeouts from the recovery daemon than from normal use since the recovery daemon often uses controls that can take a lot longer than normal controls\&.
    81 .SS "RecoverInterval"
    82 .PP
    83 Default: 1
    84 .PP
    85 How frequently in seconds should the recovery daemon perform the consistency checks that determine if we need to perform a recovery or not\&.
    86 .SS "ElectionTimeout"
    87 .PP
    88 Default: 3
    89 .PP
    90 When electing a new recovery master, this is how many seconds we allow the election to take before we either deem the election finished or we fail the election and start a new one\&.
    91 .SS "TakeoverTimeout"
    92 .PP
    93 Default: 9
    94 .PP
    95 This is how many seconds we allow controls to take for IP failover events\&.
    96 .SS "MonitorInterval"
    97 .PP
    98 Default: 15
    99 .PP
    100 How often should ctdb run the event scripts to check for a node\*(Aqs health\&.
    101 .SS "TickleUpdateInterval"
    102 .PP
    103 Default: 20
    104 .PP
    105 How often will ctdb record and store the "tickle" information used to kickstart stalled tcp connections after a recovery\&.
    106 .SS "EventScriptTimeout"
    107 .PP
    108 Default: 30
    109 .PP
    110 Maximum time in seconds to allow an event to run before timing out\&. This is the total time for all enabled scripts that are run for an event, not just a single event script\&.
    111 .PP
    112 Note that timeouts are ignored for some events ("takeip", "releaseip", "startrecovery", "recovered") and converted to success\&. The logic here is that the callers of these events implement their own additional timeout\&.
    113 .SS "MonitorTimeoutCount"
    114 .PP
    115 Default: 20
    116 .PP
    117 How many monitor events in a row need to timeout before a node is flagged as UNHEALTHY\&. This setting is useful if scripts can not be written so that they do not hang for benign reasons\&.
     249.SS "RecoveryBanPeriod"
     250.PP
     251Default: 300
     252.PP
     253The duration in seconds for which a node is banned if the node fails during recovery\&. After this time has elapsed the node will automatically get unbanned and will attempt to rejoin the cluster\&.
     254.PP
     255A node usually gets banned due to real problems with the node\&. Don\*(Aqt set this value too small\&. Otherwise, a problematic node will try to re\-join the cluster too soon, causing unnecessary recoveries\&.
     256.SS "RecoveryDropAllIPs"
     257.PP
     258Default: 120
     259.PP
     260If a node is stuck in recovery, or stopped, or banned, for this many seconds, then ctdb will release all public addresses on that node\&.
    118261.SS "RecoveryGracePeriod"
    119262.PP
    120263Default: 120
    121264.PP
    122 During recoveries, if a node has not caused recovery failures during the last grace period, any record of the recovery failures that the node previously caused will be forgiven\&. This resets the ban\-counter back to zero for that node\&.
    123 .SS "RecoveryBanPeriod"
    124 .PP
    125 Default: 300
    126 .PP
    127 If a node causes repetitive recovery failures, it will eventually become banned from the cluster\&. This controls how long the culprit node will be banned from the cluster before it is allowed to try to join the cluster again\&. Don\*(Aqt set it too small\&. A node gets banned for a reason and it is usually due to real problems with the node\&.
    128 .SS "DatabaseHashSize"
    129 .PP
    130 Default: 100001
    131 .PP
    132 Size of the hash chains for the local store of the tdbs that ctdb manages\&.
    133 .SS "DatabaseMaxDead"
    134 .PP
    135 Default: 5
    136 .PP
    137 How many dead records per hash chain in the TDB database do we allow before the freelist needs to be processed\&.
    138 .SS "RerecoveryTimeout"
    139 .PP
    140 Default: 10
    141 .PP
    142 Once a recovery has completed, no additional recoveries are permitted until this timeout has expired\&.
    143 .SS "EnableBans"
    144 .PP
    145 Default: 1
    146 .PP
    147 When set to 0, this disables BANNING completely in the cluster and thus nodes can not get banned, even if they break\&. Don\*(Aqt set to 0 unless you know what you are doing\&. You should set this to the same value on all nodes to avoid unexpected behaviour\&.
    148 .SS "DeterministicIPs"
    149 .PP
    150 Default: 0
    151 .PP
    152 When enabled, this tunable makes ctdb try to keep public IP addresses locked to specific nodes as far as possible\&. This makes it easier for debugging since you can know that as long as all nodes are healthy public IP X will always be hosted by node Y\&.
    153 .PP
    154 The cost of using deterministic IP address assignment is that it disables part of the logic where ctdb tries to reduce the number of public IP assignment changes in the cluster\&. This tunable may increase the number of IP failover/failbacks that are performed on the cluster by a small margin\&.
    155 .SS "LCP2PublicIPs"
    156 .PP
    157 Default: 1
    158 .PP
    159 When enabled this switches ctdb to use the LCP2 ip allocation algorithm\&.
    160 .SS "ReclockPingPeriod"
    161 .PP
    162 Default: x
    163 .PP
    164 Obsolete
    165 .SS "NoIPFailback"
    166 .PP
    167 Default: 0
    168 .PP
    169 When set to 1, ctdb will not perform failback of IP addresses when a node becomes healthy\&. Ctdb WILL perform failover of public IP addresses when a node becomes UNHEALTHY, but when the node becomes HEALTHY again, ctdb will not fail the addresses back\&.
    170 .PP
    171 Use with caution! Normally when a node becomes available to the cluster ctdb will try to reassign public IP addresses onto the new node as a way to distribute the workload evenly across the clusternode\&. Ctdb tries to make sure that all running nodes have approximately the same number of public addresses it hosts\&.
    172 .PP
    173 When you enable this tunable, CTDB will no longer attempt to rebalance the cluster by failing IP addresses back to the new nodes\&. An unbalanced cluster will therefore remain unbalanced until there is manual intervention from the administrator\&. When this parameter is set, you can manually fail public IP addresses over to the new node(s) using the \*(Aqctdb moveip\*(Aq command\&.
    174 .SS "DisableIPFailover"
    175 .PP
    176 Default: 0
    177 .PP
    178 When enabled, ctdb will not perform failover or failback\&. Even if a node fails while holding public IPs, ctdb will not recover the IPs or assign them to another node\&.
    179 .PP
    180 When you enable this tunable, CTDB will no longer attempt to recover the cluster by failing IP addresses over to other nodes\&. This leads to a service outage until the administrator has manually performed failover to replacement nodes using the \*(Aqctdb moveip\*(Aq command\&.
    181 .SS "NoIPTakeover"
    182 .PP
    183 Default: 0
    184 .PP
    185 When set to 1, ctdb will not allow IP addresses to be failed over onto this node\&. Any IP addresses that the node currently hosts will remain on the node but no new IP addresses can be failed over to the node\&.
    186 .SS "NoIPHostOnAllDisabled"
    187 .PP
    188 Default: 0
    189 .PP
    190 If no nodes are healthy then by default ctdb will happily host public IPs on disabled (unhealthy or administratively disabled) nodes\&. This can cause problems, for example if the underlying cluster filesystem is not mounted\&. When set to 1 on a node and that node is disabled, any IPs hosted by this node will be released and the node will not takeover any IPs until it is no longer disabled\&.
    191 .SS "DBRecordCountWarn"
    192 .PP
    193 Default: 100000
    194 .PP
    195 When set to non\-zero, ctdb will log a warning when we try to recover a database with more than this many records\&. This will produce a warning if a database grows uncontrollably with orphaned records\&.
    196 .SS "DBRecordSizeWarn"
    197 .PP
    198 Default: 10000000
    199 .PP
    200 When set to non\-zero, ctdb will log a warning when we try to recover a database where a single record is bigger than this\&. This will produce a warning if a database record grows uncontrollably with orphaned sub\-records\&.
    201 .SS "DBSizeWarn"
    202 .PP
    203 Default: 1000000000
    204 .PP
    205 When set to non\-zero, ctdb will log a warning when we try to recover a database bigger than this\&. This will produce a warning if a database grows uncontrollably\&.
    206 .SS "VerboseMemoryNames"
    207 .PP
    208 Default: 0
    209 .PP
    210 This feature consumes additional memory\&. When used, the talloc library will create more verbose names for all talloc allocated objects\&.
    211 .SS "RecdPingTimeout"
    212 .PP
    213 Default: 60
    214 .PP
    215 If the main daemon has not heard a "ping" from the recovery daemon for this many seconds, the main daemon will log a message that the recovery daemon is potentially hung\&.
    216 .SS "RecdFailCount"
    217 .PP
    218 Default: 10
    219 .PP
    220 If the recovery daemon has failed to ping the main daemon for this many consecutive intervals, the main daemon will consider the recovery daemon as hung and will try to restart it to recover\&.
    221 .SS "LogLatencyMs"
    222 .PP
    223 Default: 0
    224 .PP
    225 When set to non\-zero, this will make the main daemon log any operation that took longer than this value, in \*(Aqms\*(Aq, to complete\&. These include "how long a lockwait child process needed", "how long it took to write to a persistent database" but also "how long did it take to get a response to a CALL from a remote node"\&.
    226 .SS "RecLockLatencyMs"
    227 .PP
    228 Default: 1000
    229 .PP
    230 When using a reclock file for split brain prevention, if set to non\-zero this tunable will make the recovery daemon log a message if the fcntl() call to lock/testlock the recovery file takes longer than this number of ms\&.
    231 .SS "RecoveryDropAllIPs"
    232 .PP
    233 Default: 120
    234 .PP
    235 If a node has been stuck in recovery, or stopped, or banned, for this many seconds, we will force a release of all held public addresses\&.
    236 .SS "VacuumInterval"
    237 .PP
    238 Default: 10
    239 .PP
    240 Periodic interval in seconds when vacuuming is triggered for volatile databases\&.
    241 .SS "VacuumMaxRunTime"
    242 .PP
    243 Default: 120
    244 .PP
    245 The maximum time in seconds for which the vacuuming process is allowed to run\&. If vacuuming process takes longer than this value, then the vacuuming process is terminated\&.
     265During recoveries, if a node has not caused recovery failures during the last grace period (in seconds), any record of the recovery failures that the node previously caused will be forgiven\&. This resets the ban\-counter back to zero for that node\&.
    246266.SS "RepackLimit"
    247267.PP
     
    249269.PP
    250270During vacuuming, if the number of freelist records are more than
    251 \fIRepackLimit\fR, then databases are repacked to get rid of the freelist records to avoid fragmentation\&.
     271\fIRepackLimit\fR, then the database is repacked to get rid of the freelist records to avoid fragmentation\&.
    252272.PP
    253273Databases are repacked only if both
     
    256276\fIVacuumLimit\fR
    257277are exceeded\&.
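A small sketch of the combined condition stated above; the quantity compared against VacuumLimit is not shown in this diff, so 'vacuum_count' below is an assumed placeholder name:

    def should_repack(freelist_records: int, vacuum_count: int,
                      repack_limit: int, vacuum_limit: int) -> bool:
        # A database is repacked during vacuuming only when both limits are exceeded
        return freelist_records > repack_limit and vacuum_count > vacuum_limit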
     278.SS "RerecoveryTimeout"
     279.PP
     280Default: 10
     281.PP
     282Once a recovery has completed, no additional recoveries are permitted until this timeout in seconds has expired\&.
     283.SS "Samba3AvoidDeadlocks"
     284.PP
     285Default: 0
     286.PP
     287If set to non\-zero, enable code that prevents deadlocks with Samba (only for Samba 3\&.x)\&.
     288.PP
     289This should be set to 1 only when using Samba version 3\&.x to enable special code in ctdb to avoid deadlock with Samba version 3\&.x\&. This code is not required for Samba version 4\&.x and must not be enabled for Samba 4\&.x\&.
     290.SS "SeqnumInterval"
     291.PP
     292Default: 1000
     293.PP
     294Some databases have seqnum tracking enabled, so that samba will be able to detect asynchronously when there have been updates to the database\&. Every time a database is updated its sequence number is increased\&.
     295.PP
     296This tunable is used to specify in milliseconds how frequently ctdb will send out updates to remote nodes to inform them that the sequence number is increased\&.
     297.SS "StatHistoryInterval"
     298.PP
     299Default: 1
     300.PP
     301Granularity of the statistics collected in the statistics history\&. This is reported by \*(Aqctdb stats\*(Aq command\&.
     302.SS "StickyDuration"
     303.PP
     304Default: 600
     305.PP
     306Once a record has been marked STICKY, this is the duration in seconds for which the record will be flagged as a STICKY record\&.
     307.SS "StickyPindown"
     308.PP
     309Default: 200
     310.PP
     311Once a STICKY record has been migrated onto a node, it will be pinned down on that node for this number of milliseconds\&. Any request from other nodes to migrate the record off the node will be deferred\&.
     312.SS "TakeoverTimeout"
     313.PP
     314Default: 9
     315.PP
     316This is the duration in seconds within which ctdb tries to complete IP failover\&.
     317.SS "TDBMutexEnabled"
     318.PP
     319Default: 0
     320.PP
     321This parameter enables the TDB_MUTEX_LOCKING feature on volatile databases if robust mutexes are supported\&. This optimizes record locking using robust mutexes and is much more efficient than using posix locks\&.
     322.SS "TickleUpdateInterval"
     323.PP
     324Default: 20
     325.PP
     326Every
     327\fITickleUpdateInterval\fR
     328seconds, ctdb synchronizes the client connection information across nodes\&.
     329.SS "TraverseTimeout"
     330.PP
     331Default: 20
     332.PP
     333This is the duration in seconds for which a database traverse is allowed to run\&. If the traverse does not complete during this interval, ctdb will abort the traverse\&.
     334.SS "VacuumFastPathCount"
     335.PP
     336Default: 60
     337.PP
     338During a vacuuming run, ctdb usually processes only the records marked for deletion; this is called fast path vacuuming\&. After finishing
     339\fIVacuumFastPathCount\fR
     340number of fast path vacuuming runs, ctdb will trigger a scan of the complete database for any empty records that need to be deleted\&.
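A rough illustration of the cadence implied by the defaults (VacuumFastPathCount=60 above, VacuumInterval=10 below); this is not ctdb's exact scheduling:

    vacuum_interval = 10         # seconds between vacuuming runs (VacuumInterval)
    vacuum_fast_path_count = 60  # fast path runs before a full database scan

    print(vacuum_interval * vacuum_fast_path_count)  # ~600 seconds between full scans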
     341.SS "VacuumInterval"
     342.PP
     343Default: 10
     344.PP
     345Periodic interval in seconds when vacuuming is triggered for volatile databases\&.
    258346.SS "VacuumLimit"
    259347.PP
     
    268356\fIVacuumLimit\fR
    269357are exceeded\&.
    270 .SS "VacuumFastPathCount"
    271 .PP
    272 Default: 60
    273 .PP
    274 When a record is deleted, it is marked for deletion during vacuuming\&. The vacuuming process usually processes this list to purge the records from the database\&. If the number of records marked for deletion is more than VacuumFastPathCount, then the vacuuming process will scan the complete database for empty records instead of using the list of records marked for deletion\&.
    275 .SS "DeferredAttachTO"
    276 .PP
    277 Default: 120
    278 .PP
    279 When databases are frozen we do not allow clients to attach to the databases\&. Instead of returning an error immediately to the application the attach request from the client is deferred until the database becomes available again at which stage we respond to the client\&.
    280 .PP
    281 This timeout controls how long we will defer the request from the client before timing it out and returning an error to the client\&.
    282 .SS "HopcountMakeSticky"
    283 .PP
    284 Default: 50
    285 .PP
    286 If the database is set to \*(AqSTICKY\*(Aq mode, using the \*(Aqctdb setdbsticky\*(Aq command, any record that is seen as very hot and migrating so fast that hopcount surpasses 50 is set to become a STICKY record for StickyDuration seconds\&. This means that after each migration the record will be kept on the node and prevented from being migrated off the node\&.
    287 .PP
    288 This setting allows one to try to identify such records and stop them from migrating across the cluster so fast\&. This will improve performance for certain workloads, such as locking\&.tdb if many clients are opening/closing the same file concurrently\&.
    289 .SS "StickyDuration"
    290 .PP
    291 Default: 600
    292 .PP
    293 Once a record has been found to be fetch\-lock hot and has been flagged to become STICKY, this is for how long, in seconds, the record will be flagged as a STICKY record\&.
    294 .SS "StickyPindown"
    295 .PP
    296 Default: 200
    297 .PP
    298 Once a STICKY record has been migrated onto a node, it will be pinned down on that node for this number of ms\&. Any request from other nodes to migrate the record off the node will be deferred until the pindown timer expires\&.
    299 .SS "StatHistoryInterval"
    300 .PP
    301 Default: 1
    302 .PP
    303 Granularity of the statistics collected in the statistics history\&.
    304 .SS "AllowClientDBAttach"
    305 .PP
    306 Default: 1
    307 .PP
    308 When set to 0, clients are not allowed to attach to any databases\&. This can be used to temporarily block any new processes from attaching to and accessing the databases\&.
    309 .SS "RecoverPDBBySeqNum"
    310 .PP
    311 Default: 1
    312 .PP
    313 When set to zero, database recovery for persistent databases is record\-by\-record and the recovery process simply collects the most recent version of every individual record\&.
    314 .PP
    315 When set to non\-zero, persistent databases will instead be recovered as a whole database and not by individual records\&. The node that contains the highest value stored in the record "__db_sequence_number__" is selected and the copy of that node\*(Aqs database is used as the recovered database\&.
    316 .PP
    317 By default, recovery of persistent databases is done using the __db_sequence_number__ record\&.
    318 .SS "FetchCollapse"
    319 .PP
    320 Default: 1
    321 .PP
    322 When many clients across many nodes try to access the same record at the same time this can lead to a fetch storm where the record becomes very active and bounces between nodes very fast\&. This leads to high CPU utilization of the ctdbd daemon, trying to bounce that record around very fast, and poor performance\&.
    323 .PP
    324 This parameter is used to activate a fetch\-collapse\&. A fetch\-collapse is when we track which records have requests in flight so that we only keep one request in flight from a certain node, even if multiple smbd processes are attempting to fetch the record at the same time\&. This can improve performance and reduce CPU utilization for certain workloads\&.
    325 .PP
    326 This timeout controls if we should collapse multiple fetch operations of the same record into a single request and defer all duplicates or not\&.
    327 .SS "Samba3AvoidDeadlocks"
    328 .PP
    329 Default: 0
    330 .PP
    331 Enable code that prevents deadlocks with Samba (only for Samba 3\&.x)\&.
    332 .PP
    333 This should be set to 1 when using Samba version 3\&.x to enable special code in CTDB to avoid deadlock with Samba version 3\&.x\&. This code is not required for Samba version 4\&.x and must not be enabled for Samba 4\&.x\&.
     358.SS "VacuumMaxRunTime"
     359.PP
     360Default: 120
     361.PP
     362The maximum time in seconds for which the vacuuming process is allowed to run\&. If vacuuming process takes longer than this value, then the vacuuming process is terminated\&.
     363.SS "VerboseMemoryNames"
     364.PP
     365Default: 0
     366.PP
     367When set to non\-zero, ctdb assigns verbose names for some of the talloc allocated memory objects\&. These names are visible in the talloc memory report generated by \*(Aqctdb dumpmemory\*(Aq\&.
    334368.SH "SEE ALSO"
    335369.PP
  • vendor/current/ctdb/doc/ctdb-tunables.7.html

    r988 r989  
    1 <html><head><meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"><title>ctdb-tunables</title><meta name="generator" content="DocBook XSL Stylesheets V1.78.1"></head><body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF"><div class="refentry"><a name="ctdb-tunables.7"></a><div class="titlepage"></div><div class="refnamediv"><h2>Name</h2><p>ctdb-tunables &#8212; CTDB tunable configuration variables</p></div><div class="refsect1"><a name="idp52032112"></a><h2>DESCRIPTION</h2><p>
     1<html><head><meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"><title>ctdb-tunables</title><meta name="generator" content="DocBook XSL Stylesheets V1.78.1"></head><body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF"><div class="refentry"><a name="ctdb-tunables.7"></a><div class="titlepage"></div><div class="refnamediv"><h2>Name</h2><p>ctdb-tunables &#8212; CTDB tunable configuration variables</p></div><div class="refsect1"><a name="idp51068080"></a><h2>DESCRIPTION</h2><p>
    22      CTDB's behaviour can be configured by setting run-time tunable
    33      variables.  This lists and describes all tunables.  See the
     
    55      <span class="command"><strong>listvars</strong></span>, <span class="command"><strong>setvar</strong></span> and
    66      <span class="command"><strong>getvar</strong></span> commands for more details.
    7     </p><div class="refsect2"><a name="idp52844128"></a><h3>MaxRedirectCount</h3><p>Default: 3</p><p>
    8         If we are not the DMASTER and need to fetch a record across the network
    9         we first send the request to the LMASTER after which the record
    10         is passed onto the current DMASTER. If the DMASTER changes before
    11         the request has reached that node, the request will be passed onto the
    12         "next" DMASTER. For very hot records that migrate rapidly across the
    13         cluster this can cause a request to "chase" the record for many hops
    14         before it catches up with the record.
    15 
     16         This is how many hops we allow trying to chase the DMASTER before we
    17         switch back to the LMASTER again to ask for new directions.
    18       </p><p>
    19         When chasing a record, this is how many hops we will chase the record
    20         for before going back to the LMASTER to ask for new guidance.
    21       </p></div><div class="refsect2"><a name="idp52639696"></a><h3>SeqnumInterval</h3><p>Default: 1000</p><p>
    22         Some databases have seqnum tracking enabled, so that samba will be able
     23         to detect asynchronously when there have been updates to the database.
     24         Every time a database is updated its sequence number is increased.
    25       </p><p>
    26         This tunable is used to specify in 'ms' how frequently ctdb will
    27         send out updates to remote nodes to inform them that the sequence
    28         number is increased.
    29       </p></div><div class="refsect2"><a name="idp52023488"></a><h3>ControlTimeout</h3><p>Default: 60</p><p>
    30         This is the default
    31         setting for timeout for when sending a control message to either the
    32         local or a remote ctdb daemon.
    33       </p></div><div class="refsect2"><a name="idp51243376"></a><h3>TraverseTimeout</h3><p>Default: 20</p><p>
    34         This setting controls how long we allow a traverse process to run.
    35         After this timeout triggers, the main ctdb daemon will abort the
    36         traverse if it has not yet finished.
    37       </p></div><div class="refsect2"><a name="idp50157008"></a><h3>KeepaliveInterval</h3><p>Default: 5</p><p>
     38         How often in seconds should the nodes send keepalives to each other.
    39       </p></div><div class="refsect2"><a name="idp49234000"></a><h3>KeepaliveLimit</h3><p>Default: 5</p><p>
    40         After how many keepalive intervals without any traffic should a node
    41         wait until marking the peer as DISCONNECTED.
    42       </p><p>
    43         If a node has hung, it can thus take KeepaliveInterval*(KeepaliveLimit+1)
    44         seconds before we determine that the node is DISCONNECTED and that we
     45         require a recovery. This limit should not be set too high since we want
     46         a hung node to be detected, and expunged from the cluster well before
    47         common CIFS timeouts (45-90 seconds) kick in.
    48       </p></div><div class="refsect2"><a name="idp53887184"></a><h3>RecoverTimeout</h3><p>Default: 20</p><p>
    49         This is the default setting for timeouts for controls when sent from the
    50         recovery daemon. We allow longer control timeouts from the recovery daemon
     51         than from normal use since the recovery daemon often uses controls that
    52         can take a lot longer than normal controls.
    53       </p></div><div class="refsect2"><a name="idp53889072"></a><h3>RecoverInterval</h3><p>Default: 1</p><p>
    54         How frequently in seconds should the recovery daemon perform the
    55         consistency checks that determine if we need to perform a recovery or not.
    56       </p></div><div class="refsect2"><a name="idp53890832"></a><h3>ElectionTimeout</h3><p>Default: 3</p><p>
    57         When electing a new recovery master, this is how many seconds we allow
    58         the election to take before we either deem the election finished
    59         or we fail the election and start a new one.
    60       </p></div><div class="refsect2"><a name="idp53892640"></a><h3>TakeoverTimeout</h3><p>Default: 9</p><p>
    61         This is how many seconds we allow controls to take for IP failover events.
    62       </p></div><div class="refsect2"><a name="idp53894240"></a><h3>MonitorInterval</h3><p>Default: 15</p><p>
     63         How often should ctdb run the event scripts to check for a node's health.
    64       </p></div><div class="refsect2"><a name="idp53895840"></a><h3>TickleUpdateInterval</h3><p>Default: 20</p><p>
    65         How often will ctdb record and store the "tickle" information used to
    66         kickstart stalled tcp connections after a recovery.
    67       </p></div><div class="refsect2"><a name="idp53897584"></a><h3>EventScriptTimeout</h3><p>Default: 30</p><p>
     7    </p><p>
     8      The tunable variables are listed alphabetically.
     9    </p><div class="refsect2"><a name="idp51120048"></a><h3>AllowClientDBAttach</h3><p>Default: 1</p><p>
     10        When set to 0, clients are not allowed to attach to any databases.
     11        This can be used to temporarily block any new processes from
     12        attaching to and accessing the databases.  This is mainly used
     13        for detaching a volatile database using 'ctdb detach'.
     14      </p></div><div class="refsect2"><a name="idp53889776"></a><h3>AllowUnhealthyDBRead</h3><p>Default: 0</p><p>
     15        When set to 1, ctdb allows database traverses to read unhealthy
     16        databases.  By default, ctdb does not allow reading records from
     17        unhealthy databases.
     18      </p></div><div class="refsect2"><a name="idp54131312"></a><h3>ControlTimeout</h3><p>Default: 60</p><p>
     19        This is the default setting for timeout for when sending a
     20        control message to either the local or a remote ctdb daemon.
     21      </p></div><div class="refsect2"><a name="idp51364816"></a><h3>DatabaseHashSize</h3><p>Default: 100001</p><p>
     22        Number of the hash chains for the local store of the tdbs that
     23        ctdb manages.
     24      </p></div><div class="refsect2"><a name="idp53157488"></a><h3>DatabaseMaxDead</h3><p>Default: 5</p><p>
      25        Maximum number of dead records per hash chain for the tdb databases
     26        managed by ctdb.
     27      </p></div><div class="refsect2"><a name="idp50010288"></a><h3>DBRecordCountWarn</h3><p>Default: 100000</p><p>
     28        When set to non-zero, ctdb will log a warning during recovery if
     29        a database has more than this many records. This will produce a
     30        warning if a database grows uncontrollably with orphaned records.
     31      </p></div><div class="refsect2"><a name="idp49085760"></a><h3>DBRecordSizeWarn</h3><p>Default: 10000000</p><p>
     32        When set to non-zero, ctdb will log a warning during recovery
     33        if a single record is bigger than this size. This will produce
     34        a warning if a database record grows uncontrollably.
     35      </p></div><div class="refsect2"><a name="idp49087568"></a><h3>DBSizeWarn</h3><p>Default: 1000000000</p><p>
     36        When set to non-zero, ctdb will log a warning during recovery if
     37        a database size is bigger than this. This will produce a warning
     38        if a database grows uncontrollably.
     39      </p></div><div class="refsect2"><a name="idp49089360"></a><h3>DeferredAttachTO</h3><p>Default: 120</p><p>
     40        When databases are frozen we do not allow clients to attach to
     41        the databases. Instead of returning an error immediately to the
     42        client, the attach request from the client is deferred until
     43        the database becomes available again at which stage we respond
     44        to the client.
     45      </p><p>
     46        This timeout controls how long we will defer the request from the
     47        client before timing it out and returning an error to the client.
     48      </p></div><div class="refsect2"><a name="idp54043296"></a><h3>DeterministicIPs</h3><p>Default: 0</p><p>
     49        When set to 1, ctdb will try to keep public IP addresses locked
     50        to specific nodes as far as possible. This makes it easier
     51        for debugging since you can know that as long as all nodes are
     52        healthy public IP X will always be hosted by node Y.
     53      </p><p>
     54        The cost of using deterministic IP address assignment is that it
     55        disables part of the logic where ctdb tries to reduce the number
     56        of public IP assignment changes in the cluster. This tunable may
     57        increase the number of IP failover/failbacks that are performed
     58        on the cluster by a small margin.
     59      </p></div><div class="refsect2"><a name="idp54045872"></a><h3>DisableIPFailover</h3><p>Default: 0</p><p>
     60        When set to non-zero, ctdb will not perform failover or
     61        failback. Even if a node fails while holding public IPs, ctdb
     62        will not recover the IPs or assign them to another node.
     63      </p><p>
     64        When this tunable is enabled, ctdb will no longer attempt
     65        to recover the cluster by failing IP addresses over to other
     66        nodes. This leads to a service outage until the administrator
     67        has manually performed IP failover to replacement nodes using the
     68        'ctdb moveip' command.
     69      </p></div><div class="refsect2"><a name="idp54048368"></a><h3>ElectionTimeout</h3><p>Default: 3</p><p>
     70        The number of seconds to wait for the election of recovery
     71        master to complete. If the election is not completed during this
     72        interval, then that round of election fails and ctdb starts a
     73        new election.
     74      </p></div><div class="refsect2"><a name="idp54050192"></a><h3>EnableBans</h3><p>Default: 1</p><p>
     75        This parameter allows ctdb to ban a node if the node is misbehaving.
     76      </p><p>
     77        When set to 0, this disables banning completely in the cluster
      78        and thus nodes can not get banned, even if they break. Don't
     79        set to 0 unless you know what you are doing.  You should set
     80        this to the same value on all nodes to avoid unexpected behaviour.
     81      </p></div><div class="refsect2"><a name="idp54052448"></a><h3>EventScriptTimeout</h3><p>Default: 30</p><p>
    6882        Maximum time in seconds to allow an event to run before timing
    6983        out.  This is the total time for all enabled scripts that are
     
    7488        success.  The logic here is that the callers of these events
    7589        implement their own additional timeout.
    76       </p></div><div class="refsect2"><a name="idp53900064"></a><h3>MonitorTimeoutCount</h3><p>Default: 20</p><p>
    77         How many monitor events in a row need to timeout before a node
    78         is flagged as UNHEALTHY.  This setting is useful if scripts
    79         can not be written so that they do not hang for benign
    80         reasons.
    81       </p></div><div class="refsect2"><a name="idp53901872"></a><h3>RecoveryGracePeriod</h3><p>Default: 120</p><p>
    82         During recoveries, if a node has not caused recovery failures during the
     83         last grace period, any record of the recovery failures that the node has
     84         previously caused will be forgiven. This resets the ban-counter back to
    85         zero for that node.
    86       </p></div><div class="refsect2"><a name="idp49113200"></a><h3>RecoveryBanPeriod</h3><p>Default: 300</p><p>
     87         If a node causes repetitive recovery failures, it will
     88         eventually become banned from the cluster.
     89         This controls how long the culprit node will be banned from the cluster
     90         before it is allowed to try to join the cluster again.
     91         Don't set it too small. A node gets banned for a reason and it is usually due
    92         to real problems with the node.
    93       </p></div><div class="refsect2"><a name="idp49115184"></a><h3>DatabaseHashSize</h3><p>Default: 100001</p><p>
    94         Size of the hash chains for the local store of the tdbs that ctdb manages.
    95       </p></div><div class="refsect2"><a name="idp49116784"></a><h3>DatabaseMaxDead</h3><p>Default: 5</p><p>
     96         How many dead records per hash chain in the TDB database do we allow before
    97         the freelist needs to be processed.
    98       </p></div><div class="refsect2"><a name="idp49118528"></a><h3>RerecoveryTimeout</h3><p>Default: 10</p><p>
    99         Once a recovery has completed, no additional recoveries are permitted
    100         until this timeout has expired.
    101       </p></div><div class="refsect2"><a name="idp49120256"></a><h3>EnableBans</h3><p>Default: 1</p><p>
    102         When set to 0, this disables BANNING completely in the cluster and thus
     103         nodes can not get banned, even if they break. Don't set to 0 unless you
    104         know what you are doing.  You should set this to the same value on
    105         all nodes to avoid unexpected behaviour.
    106       </p></div><div class="refsect2"><a name="idp49122128"></a><h3>DeterministicIPs</h3><p>Default: 0</p><p>
    107         When enabled, this tunable makes ctdb try to keep public IP addresses
    108         locked to specific nodes as far as possible. This makes it easier for
    109         debugging since you can know that as long as all nodes are healthy
    110         public IP X will always be hosted by node Y.
    111       </p><p>
    112         The cost of using deterministic IP address assignment is that it
    113         disables part of the logic where ctdb tries to reduce the number of
    114         public IP assignment changes in the cluster. This tunable may increase
    115         the number of IP failover/failbacks that are performed on the cluster
    116         by a small margin.
    117       </p></div><div class="refsect2"><a name="idp49124720"></a><h3>LCP2PublicIPs</h3><p>Default: 1</p><p>
    118         When enabled this switches ctdb to use the LCP2 ip allocation
    119         algorithm.
    120       </p></div><div class="refsect2"><a name="idp49126320"></a><h3>ReclockPingPeriod</h3><p>Default: x</p><p>
    121         Obsolete
    122       </p></div><div class="refsect2"><a name="idp49127952"></a><h3>NoIPFailback</h3><p>Default: 0</p><p>
    123         When set to 1, ctdb will not perform failback of IP addresses when a node
    124         becomes healthy. Ctdb WILL perform failover of public IP addresses when a
    125         node becomes UNHEALTHY, but when the node becomes HEALTHY again, ctdb
    126         will not fail the addresses back.
    127       </p><p>
    128         Use with caution! Normally when a node becomes available to the cluster
    129         ctdb will try to reassign public IP addresses onto the new node as a way
    130         to distribute the workload evenly across the clusternode. Ctdb tries to
    131         make sure that all running nodes have approximately the same number of
    132         public addresses it hosts.
    133       </p><p>
    134         When you enable this tunable, CTDB will no longer attempt to rebalance
    135         the cluster by failing IP addresses back to the new nodes. An unbalanced
    136         cluster will therefore remain unbalanced until there is manual
    137         intervention from the administrator. When this parameter is set, you can
    138         manually fail public IP addresses over to the new node(s) using the
    139         'ctdb moveip' command.
    140       </p></div><div class="refsect2"><a name="idp49136144"></a><h3>DisableIPFailover</h3><p>Default: 0</p><p>
    141         When enabled, ctdb will not perform failover or failback. Even if a
    142         node fails while holding public IPs, ctdb will not recover the IPs or
    143         assign them to another node.
    144       </p><p>
    145         When you enable this tunable, CTDB will no longer attempt to recover
    146         the cluster by failing IP addresses over to other nodes. This leads to
    147         a service outage until the administrator has manually performed failover
    148         to replacement nodes using the 'ctdb moveip' command.
    149       </p></div><div class="refsect2"><a name="idp49138608"></a><h3>NoIPTakeover</h3><p>Default: 0</p><p>
    150         When set to 1, ctdb will not allow IP addresses to be failed over
    151         onto this node. Any IP addresses that the node currently hosts
    152         will remain on the node but no new IP addresses can be failed over
    153         to the node.
    154       </p></div><div class="refsect2"><a name="idp49140448"></a><h3>NoIPHostOnAllDisabled</h3><p>Default: 0</p><p>
    155         If no nodes are healthy then by default ctdb will happily host
     90      </p></div><div class="refsect2"><a name="idp54054880"></a><h3>FetchCollapse</h3><p>Default: 1</p><p>
     91       This parameter is used to avoid multiple migration requests for
     92       the same record from a single node. All the record requests for
     93       the same record are queued up and processed when the record is
     94       migrated to the current node.
     95      </p><p>
     96        When many clients across many nodes try to access the same record
     97        at the same time this can lead to a fetch storm where the record
     98        becomes very active and bounces between nodes very fast. This
     99        leads to high CPU utilization of the ctdbd daemon, trying to
     100        bounce that record around very fast, and poor performance.
     101        This can improve performance and reduce CPU utilization for
     102        certain workloads.
     103      </p></div><div class="refsect2"><a name="idp48966640"></a><h3>HopcountMakeSticky</h3><p>Default: 50</p><p>
     104        For database(s) marked STICKY (using 'ctdb setdbsticky'),
     105        any record that is migrating so fast that hopcount
      106        exceeds this limit is marked as a STICKY record for
      107        <code class="varname">StickyDuration</code> seconds. This means that
      108        after each migration the sticky record will be kept on the node for
      109        <code class="varname">StickyPindown</code> milliseconds and prevented from
     110        being migrated off the node.
     111       </p><p>
     112        This will improve performance for certain workloads, such as
     113        locking.tdb if many clients are opening/closing the same file
     114        concurrently.
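
   A minimal sketch of how the three sticky tunables interact, using the
   defaults quoted in this page (names and data structures are hypothetical,
   not the ctdbd implementation):

    import time

    HopcountMakeSticky = 50                 # defaults quoted in this manual page
    StickyDuration     = 600                # seconds
    StickyPindown      = 200                # milliseconds

    def on_migration(record, hopcount, now=None):
        # Hypothetical: mark a hot record sticky and pin it after each migration.
        now = time.time() if now is None else now
        if hopcount > HopcountMakeSticky:
            record['sticky_until'] = now + StickyDuration
        if record.get('sticky_until', 0) > now:
            record['pinned_until'] = now + StickyPindown / 1000.0
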
     115      </p></div><div class="refsect2"><a name="idp48969952"></a><h3>KeepaliveInterval</h3><p>Default: 5</p><p>
     116        How often in seconds should the nodes send keep-alive packets to
     117        each other.
     118      </p></div><div class="refsect2"><a name="idp48971552"></a><h3>KeepaliveLimit</h3><p>Default: 5</p><p>
     119        After how many keepalive intervals without any traffic should
     120        a node wait until marking the peer as DISCONNECTED.
     121       </p><p>
     122        If a node has hung, it can take
     123        <code class="varname">KeepaliveInterval</code> *
     124        (<code class="varname">KeepaliveLimit</code> + 1) seconds before
     125        ctdb determines that the node is DISCONNECTED and performs
      126        a recovery. This limit should not be set too high, so that a hung
      127        node is detected early and application timeouts (e.g. SMB1) do not
      128        kick in before the failover is completed.
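
   A worked example of that arithmetic with the defaults above (illustrative
   Python, not CTDB code):

    KeepaliveInterval = 5                   # seconds, default
    KeepaliveLimit    = 5                   # default

    # Worst-case time before a hung peer is marked DISCONNECTED.
    print(KeepaliveInterval * (KeepaliveLimit + 1))   # 30 seconds
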
     129      </p></div><div class="refsect2"><a name="idp48974864"></a><h3>LCP2PublicIPs</h3><p>Default: 1</p><p>
     130        When set to 1, ctdb uses the LCP2 ip allocation algorithm.
     131      </p></div><div class="refsect2"><a name="idp48976464"></a><h3>LockProcessesPerDB</h3><p>Default: 200</p><p>
     132        This is the maximum number of lock helper processes ctdb will
     133        create for obtaining record locks.  When ctdb cannot get a record
     134        lock without blocking, it creates a helper process that waits
     135        for the lock to be obtained.
     136      </p></div><div class="refsect2"><a name="idp48978304"></a><h3>LogLatencyMs</h3><p>Default: 0</p><p>
      137        When set to non-zero, ctdb will log if certain operations
     138        take longer than this value, in milliseconds, to complete.
     139        These operations include "process a record request from client",
     140        "take a record or database lock", "update a persistent database
      141        record" and "vacuum a database".
     142      </p></div><div class="refsect2"><a name="idp48980208"></a><h3>MaxQueueDropMsg</h3><p>Default: 1000000</p><p>
     143        This is the maximum number of messages to be queued up for
     144        a client before ctdb will treat the client as hung and will
     145        terminate the client connection.
     146      </p></div><div class="refsect2"><a name="idp48981984"></a><h3>MonitorInterval</h3><p>Default: 15</p><p>
     147        How often should ctdb run the 'monitor' event in seconds to check
     148        for a node's health.
     149      </p></div><div class="refsect2"><a name="idp48988480"></a><h3>MonitorTimeoutCount</h3><p>Default: 20</p><p>
     150        How many 'monitor' events in a row need to timeout before a node
     151        is flagged as UNHEALTHY.  This setting is useful if scripts can
     152        not be written so that they do not hang for benign reasons.
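
   Illustrative sketch of the counting behaviour described above (hypothetical
   helper, not the ctdbd monitor code): a single slow 'monitor' event does
   nothing; only a run of consecutive timeouts flags the node.

    MonitorTimeoutCount = 20                # default

    timeouts_in_a_row = 0

    def monitor_event_finished(timed_out):
        # Returns True when the node should be flagged UNHEALTHY.
        global timeouts_in_a_row
        timeouts_in_a_row = timeouts_in_a_row + 1 if timed_out else 0
        return timeouts_in_a_row >= MonitorTimeoutCount
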
     153      </p></div><div class="refsect2"><a name="idp48990288"></a><h3>NoIPFailback</h3><p>Default: 0</p><p>
     154        When set to 1, ctdb will not perform failback of IP addresses
     155        when a node becomes healthy. When a node becomes UNHEALTHY,
     156        ctdb WILL perform failover of public IP addresses, but when the
     157        node becomes HEALTHY again, ctdb will not fail the addresses back.
     158      </p><p>
     159        Use with caution! Normally when a node becomes available to the
     160        cluster ctdb will try to reassign public IP addresses onto the
     161        new node as a way to distribute the workload evenly across the
      162        cluster nodes. Ctdb tries to make sure that all running nodes host
      163        approximately the same number of public addresses.
     164      </p><p>
     165        When you enable this tunable, ctdb will no longer attempt to
     166        rebalance the cluster by failing IP addresses back to the new
     167        nodes. An unbalanced cluster will therefore remain unbalanced
     168        until there is manual intervention from the administrator. When
     169        this parameter is set, you can manually fail public IP addresses
     170        over to the new node(s) using the 'ctdb moveip' command.
     171      </p></div><div class="refsect2"><a name="idp48993680"></a><h3>NoIPHostOnAllDisabled</h3><p>Default: 0</p><p>
     172        If no nodes are HEALTHY then by default ctdb will happily host
    156173        public IPs on disabled (unhealthy or administratively disabled)
    157         nodes.  This can cause problems, for example if the underlying
     174        nodes.  This can cause problems, for example if the underlying
    158175        cluster filesystem is not mounted.  When set to 1 on a node and
    159         that node is disabled it, any IPs hosted by this node will be
     176        that node is disabled, any IPs hosted by this node will be
    160177        released and the node will not takeover any IPs until it is no
    161178        longer disabled.
    162       </p></div><div class="refsect2"><a name="idp49142480"></a><h3>DBRecordCountWarn</h3><p>Default: 100000</p><p>
    163         When set to non-zero, ctdb will log a warning when we try to recover a
    164         database with more than this many records. This will produce a warning
    165         if a database grows uncontrollably with orphaned records.
    166       </p></div><div class="refsect2"><a name="idp49144304"></a><h3>DBRecordSizeWarn</h3><p>Default: 10000000</p><p>
    167         When set to non-zero, ctdb will log a warning when we try to recover a
    168         database where a single record is bigger than this. This will produce
    169         a warning if a database record grows uncontrollably with orphaned
    170         sub-records.
    171       </p></div><div class="refsect2"><a name="idp49146144"></a><h3>DBSizeWarn</h3><p>Default: 1000000000</p><p>
    172         When set to non-zero, ctdb will log a warning when we try to recover a
    173         database bigger than this. This will produce
    174         a warning if a database grows uncontrollably.
    175       </p></div><div class="refsect2"><a name="idp49147936"></a><h3>VerboseMemoryNames</h3><p>Default: 0</p><p>
     176         This feature consumes additional memory. When used, the talloc library
    177         will create more verbose names for all talloc allocated objects.
    178       </p></div><div class="refsect2"><a name="idp49149696"></a><h3>RecdPingTimeout</h3><p>Default: 60</p><p>
     179         If the main daemon has not heard a "ping" from the recovery daemon for
     180         this many seconds, the main daemon will log a message that the recovery
    181         daemon is potentially hung.
    182       </p></div><div class="refsect2"><a name="idp49151488"></a><h3>RecdFailCount</h3><p>Default: 10</p><p>
     183         If the recovery daemon has failed to ping the main daemon for this many
    184         consecutive intervals, the main daemon will consider the recovery daemon
    185         as hung and will try to restart it to recover.
    186       </p></div><div class="refsect2"><a name="idp49153312"></a><h3>LogLatencyMs</h3><p>Default: 0</p><p>
    187         When set to non-zero, this will make the main daemon log any operation that
    188         took longer than this value, in 'ms', to complete.
    189         These include "how long time a lockwait child process needed",
    190         "how long time to write to a persistent database" but also
    191         "how long did it take to get a response to a CALL from a remote node".
    192       </p></div><div class="refsect2"><a name="idp49155264"></a><h3>RecLockLatencyMs</h3><p>Default: 1000</p><p>
    193         When using a reclock file for split brain prevention, if set to non-zero
     194         this tunable will make the recovery daemon log a message if the fcntl()
    195         call to lock/testlock the recovery file takes longer than this number of
    196         ms.
    197       </p></div><div class="refsect2"><a name="idp49157120"></a><h3>RecoveryDropAllIPs</h3><p>Default: 120</p><p>
     198         If we have been stuck in recovery, or stopped, or banned, for
     199         this many seconds we will force drop all held public addresses.
    200       </p></div><div class="refsect2"><a name="idp55021168"></a><h3>VacuumInterval</h3><p>Default: 10</p><p>
     179      </p></div><div class="refsect2"><a name="idp48995696"></a><h3>NoIPTakeover</h3><p>Default: 0</p><p>
     180        When set to 1, ctdb will not allow IP addresses to be failed
     181        over onto this node. Any IP addresses that the node currently
     182        hosts will remain on the node but no new IP addresses can be
     183        failed over to the node.
     184      </p></div><div class="refsect2"><a name="idp48997536"></a><h3>PullDBPreallocation</h3><p>Default: 10*1024*1024</p><p>
     185        This is the size of a record buffer to pre-allocate for sending
     186        reply to PULLDB control. Usually record buffer starts with size
     187        of the first record and gets reallocated every time a new record
     188        is added to the record buffer. For a large number of records,
     189        this can be very inefficient to grow the record buffer one record
      190        it can be very inefficient to grow the record buffer one record
     191      </p></div><div class="refsect2"><a name="idp48999504"></a><h3>RecBufferSizeLimit</h3><p>Default: 1000000</p><p>
     192        This is the limit on the size of the record buffer to be sent
     193        in various controls.  This limit is used by new controls used
     194        for recovery and controls used in vacuuming.
     195      </p></div><div class="refsect2"><a name="idp49001328"></a><h3>RecdFailCount</h3><p>Default: 10</p><p>
      196        If the recovery daemon has failed to ping the main daemon for
     197        this many consecutive intervals, the main daemon will consider
     198        the recovery daemon as hung and will try to restart it to recover.
     199      </p></div><div class="refsect2"><a name="idp49003152"></a><h3>RecdPingTimeout</h3><p>Default: 60</p><p>
      200        If the main daemon has not heard a "ping" from the recovery daemon
      201        for this many seconds, the main daemon will log a message that
     202        the recovery daemon is potentially hung.  This also increments a
     203        counter which is checked against <code class="varname">RecdFailCount</code>
     204        for detection of hung recovery daemon.
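
   A rough sketch of how the two tunables work together (hypothetical names,
   not the actual main-daemon code): every missed ping interval is logged,
   and only RecdFailCount consecutive misses trigger a restart.

    RecdPingTimeout = 60                    # seconds without a ping per interval
    RecdFailCount   = 10                    # consecutive missed intervals

    missed_intervals = 0

    def ping_received():
        global missed_intervals
        missed_intervals = 0                # any ping resets the counter

    def ping_interval_expired(log, restart_recovery_daemon):
        global missed_intervals
        missed_intervals += 1
        log("recovery daemon potentially hung")
        if missed_intervals >= RecdFailCount:
            restart_recovery_daemon()
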
     205      </p></div><div class="refsect2"><a name="idp49005424"></a><h3>RecLockLatencyMs</h3><p>Default: 1000</p><p>
     206        When using a reclock file for split brain prevention, if set
      207        to non-zero this tunable will make the recovery daemon log a
     208        message if the fcntl() call to lock/testlock the recovery file
     209        takes longer than this number of milliseconds.
     210      </p></div><div class="refsect2"><a name="idp49007280"></a><h3>RecoverInterval</h3><p>Default: 1</p><p>
     211        How frequently in seconds should the recovery daemon perform the
     212        consistency checks to determine if it should perform a recovery.
     213      </p></div><div class="refsect2"><a name="idp49009040"></a><h3>RecoverPDBBySeqNum</h3><p>Default: 1</p><p>
     214        When set to zero, database recovery for persistent databases is
      215        record-by-record and the recovery process simply collects the most
     216        recent version of every individual record.
     217      </p><p>
     218        When set to non-zero, persistent databases will instead be
     219        recovered as a whole db and not by individual records. The
     220        node that contains the highest value stored in the record
      221        "__db_sequence_number__" is selected and the copy of that node's
     222        database is used as the recovered database.
     223      </p><p>
      224        By default, recovery of persistent databases is done using
     225        __db_sequence_number__ record.
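
   Illustrative sketch of the whole-database selection (hypothetical data
   layout, not the recovery daemon's structures): the copy from the node with
   the highest __db_sequence_number__ wins.

    def pick_recovery_source(copies):
        # copies: {node: {"__db_sequence_number__": int, ...}}
        return max(copies, key=lambda n: copies[n]["__db_sequence_number__"])

    print(pick_recovery_source({0: {"__db_sequence_number__": 7},
                                1: {"__db_sequence_number__": 9},
                                2: {"__db_sequence_number__": 12}}))   # -> 2
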
     226      </p></div><div class="refsect2"><a name="idp54874960"></a><h3>RecoverTimeout</h3><p>Default: 120</p><p>
     227        This is the default setting for timeouts for controls when sent
     228        from the recovery daemon. We allow longer control timeouts from
     229        the recovery daemon than from normal use since the recovery
      230        daemon often uses controls that can take a lot longer than normal
     231        controls.
     232      </p></div><div class="refsect2"><a name="idp54876784"></a><h3>RecoveryBanPeriod</h3><p>Default: 300</p><p>
     233       The duration in seconds for which a node is banned if the node
     234       fails during recovery.  After this time has elapsed the node will
     235       automatically get unbanned and will attempt to rejoin the cluster.
     236      </p><p>
     237       A node usually gets banned due to real problems with the node.
     238       Don't set this value too small.  Otherwise, a problematic node
      239       will try to re-join the cluster too soon, causing unnecessary recoveries.
     240      </p></div><div class="refsect2"><a name="idp54879184"></a><h3>RecoveryDropAllIPs</h3><p>Default: 120</p><p>
     241        If a node is stuck in recovery, or stopped, or banned, for this
     242        many seconds, then ctdb will release all public addresses on
     243        that node.
     244      </p></div><div class="refsect2"><a name="idp54880880"></a><h3>RecoveryGracePeriod</h3><p>Default: 120</p><p>
     245       During recoveries, if a node has not caused recovery failures
      246       during the last grace period (in seconds), any record of the
      247       recovery failures that the node previously caused is forgiven.
      248       This resets the ban-counter back to zero for that node.
     249      </p></div><div class="refsect2"><a name="idp54882720"></a><h3>RepackLimit</h3><p>Default: 10000</p><p>
      250        During vacuuming, if the number of freelist records is more than
     251        <code class="varname">RepackLimit</code>, then the database is repacked
     252        to get rid of the freelist records to avoid fragmentation.
     253      </p><p>
     254        Databases are repacked only if both <code class="varname">RepackLimit</code>
     255        and <code class="varname">VacuumLimit</code> are exceeded.
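
   The repack condition spelled out as a tiny sketch (illustrative Python,
   using the defaults quoted in this page):

    RepackLimit = 10000                     # freelist records
    VacuumLimit = 5000                      # deleted records

    def should_repack(freelist_records, deleted_records):
        # Repack only when BOTH thresholds are exceeded.
        return freelist_records > RepackLimit and deleted_records > VacuumLimit
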
     256      </p></div><div class="refsect2"><a name="idp54885920"></a><h3>RerecoveryTimeout</h3><p>Default: 10</p><p>
     257        Once a recovery has completed, no additional recoveries are
     258        permitted until this timeout in seconds has expired.
     259      </p></div><div class="refsect2"><a name="idp54887600"></a><h3>Samba3AvoidDeadlocks</h3><p>Default: 0</p><p>
     260        If set to non-zero, enable code that prevents deadlocks with Samba
     261        (only for Samba 3.x).
     262      </p><p>
     263        This should be set to 1 only when using Samba version 3.x
     264        to enable special code in ctdb to avoid deadlock with Samba
     265        version 3.x.  This code is not required for Samba version 4.x
     266        and must not be enabled for Samba 4.x.
     267      </p></div><div class="refsect2"><a name="idp54889888"></a><h3>SeqnumInterval</h3><p>Default: 1000</p><p>
     268        Some databases have seqnum tracking enabled, so that samba will
      269        be able to detect asynchronously when there have been updates
      270        to the database.  Every time a database is updated, its sequence
      271        number is increased.
     272      </p><p>
     273        This tunable is used to specify in milliseconds how frequently
     274        ctdb will send out updates to remote nodes to inform them that
     275        the sequence number is increased.
     276      </p></div><div class="refsect2"><a name="idp54892240"></a><h3>StatHistoryInterval</h3><p>Default: 1</p><p>
     277        Granularity of the statistics collected in the statistics
      278        history. This is reported by the 'ctdb stats' command.
     279      </p></div><div class="refsect2"><a name="idp54893904"></a><h3>StickyDuration</h3><p>Default: 600</p><p>
     280        Once a record has been marked STICKY, this is the duration in
      281        seconds for which the record will be flagged as a STICKY record.
     282      </p></div><div class="refsect2"><a name="idp54895584"></a><h3>StickyPindown</h3><p>Default: 200</p><p>
     283        Once a STICKY record has been migrated onto a node, it will be
     284        pinned down on that node for this number of milliseconds. Any
     285        request from other nodes to migrate the record off the node will
     286        be deferred.
     287      </p></div><div class="refsect2"><a name="idp54897344"></a><h3>TakeoverTimeout</h3><p>Default: 9</p><p>
     288        This is the duration in seconds in which ctdb tries to complete IP
     289        failover.
     290      </p></div><div class="refsect2"><a name="idp54898880"></a><h3>TDBMutexEnabled</h3><p>Default: 0</p><p>
      291        This parameter enables the TDB_MUTEX_LOCKING feature on volatile
      292        databases if robust mutexes are supported. This optimizes the
      293        record locking using robust mutexes and is much more efficient
      294        than using POSIX locks.
     295      </p></div><div class="refsect2"><a name="idp54900656"></a><h3>TickleUpdateInterval</h3><p>Default: 20</p><p>
     296        Every <code class="varname">TickleUpdateInterval</code> seconds, ctdb
     297        synchronizes the client connection information across nodes.
     298      </p></div><div class="refsect2"><a name="idp54902576"></a><h3>TraverseTimeout</h3><p>Default: 20</p><p>
     299        This is the duration in seconds for which a database traverse
     300        is allowed to run.  If the traverse does not complete during
     301        this interval, ctdb will abort the traverse.
     302      </p></div><div class="refsect2"><a name="idp54904304"></a><h3>VacuumFastPathCount</h3><p>Default: 60</p><p>
     303       During a vacuuming run, ctdb usually processes only the records
      304       marked for deletion; this is also called fast path vacuuming. After
      305       finishing <code class="varname">VacuumFastPathCount</code> number of fast
      306       path vacuuming runs, ctdb will trigger a scan of the complete database
     307       for any empty records that need to be deleted.
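
   A sketch of the fast-path/full-scan alternation (hypothetical callbacks,
   not the vacuuming code itself):

    VacuumFastPathCount = 60                # default

    fast_path_runs = 0

    def vacuum_run(fast_path_vacuum, full_database_scan):
        global fast_path_runs
        fast_path_runs += 1
        fast_path_vacuum()                  # delete records already marked
        if fast_path_runs >= VacuumFastPathCount:
            full_database_scan()            # periodic scan for leftover empties
            fast_path_runs = 0
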
     308      </p></div><div class="refsect2"><a name="idp54906560"></a><h3>VacuumInterval</h3><p>Default: 10</p><p>
    201309        Periodic interval in seconds when vacuuming is triggered for
    202310        volatile databases.
    203       </p></div><div class="refsect2"><a name="idp55022832"></a><h3>VacuumMaxRunTime</h3><p>Default: 120</p><p>
     311      </p></div><div class="refsect2"><a name="idp54908224"></a><h3>VacuumLimit</h3><p>Default: 5000</p><p>
      312        During vacuuming, if the number of deleted records is more than
     313        <code class="varname">VacuumLimit</code>, then databases are repacked to
     314        avoid fragmentation.
     315      </p><p>
     316        Databases are repacked only if both <code class="varname">RepackLimit</code>
     317        and <code class="varname">VacuumLimit</code> are exceeded.
     318      </p></div><div class="refsect2"><a name="idp54911392"></a><h3>VacuumMaxRunTime</h3><p>Default: 120</p><p>
    204319        The maximum time in seconds for which the vacuuming process is
    205320        allowed to run.  If vacuuming process takes longer than this
    206321        value, then the vacuuming process is terminated.
    207       </p></div><div class="refsect2"><a name="idp55024592"></a><h3>RepackLimit</h3><p>Default: 10000</p><p>
    208         During vacuuming, if the number of freelist records are more
    209         than <code class="varname">RepackLimit</code>, then databases are
    210         repacked to get rid of the freelist records to avoid
    211         fragmentation.
    212       </p><p>
    213         Databases are repacked only if both
    214         <code class="varname">RepackLimit</code> and
    215         <code class="varname">VacuumLimit</code> are exceeded.
    216       </p></div><div class="refsect2"><a name="idp55027792"></a><h3>VacuumLimit</h3><p>Default: 5000</p><p>
    217         During vacuuming, if the number of deleted records are more
    218         than <code class="varname">VacuumLimit</code>, then databases are
    219         repacked to avoid fragmentation.
    220       </p><p>
    221         Databases are repacked only if both
    222         <code class="varname">RepackLimit</code> and
    223         <code class="varname">VacuumLimit</code> are exceeded.
    224       </p></div><div class="refsect2"><a name="idp55030864"></a><h3>VacuumFastPathCount</h3><p>Default: 60</p><p>
    225         When a record is deleted, it is marked for deletion during
    226         vacuuming.  Vacuuming process usually processes this list to purge
    227         the records from the database.  If the number of records marked
    228         for deletion are more than VacuumFastPathCount, then vacuuming
    229         process will scan the complete database for empty records instead
    230         of using the list of records marked for deletion.
    231       </p></div><div class="refsect2"><a name="idp55032832"></a><h3>DeferredAttachTO</h3><p>Default: 120</p><p>
    232         When databases are frozen we do not allow clients to attach to the
    233         databases. Instead of returning an error immediately to the application
    234         the attach request from the client is deferred until the database
    235         becomes available again at which stage we respond to the client.
    236       </p><p>
    237         This timeout controls how long we will defer the request from the client
    238         before timing it out and returning an error to the client.
    239       </p></div><div class="refsect2"><a name="idp55035216"></a><h3>HopcountMakeSticky</h3><p>Default: 50</p><p>
    240         If the database is set to 'STICKY' mode, using the 'ctdb setdbsticky'
    241         command, any record that is seen as very hot and migrating so fast that
    242         hopcount surpasses 50 is set to become a STICKY record for StickyDuration
    243         seconds. This means that after each migration the record will be kept on
    244         the node and prevented from being migrated off the node.
    245       </p><p>
    246         This setting allows one to try to identify such records and stop them from
    247         migrating across the cluster so fast. This will improve performance for
    248         certain workloads, such as locking.tdb if many clients are opening/closing
    249         the same file concurrently.
    250       </p></div><div class="refsect2"><a name="idp55037776"></a><h3>StickyDuration</h3><p>Default: 600</p><p>
    251         Once a record has been found to be fetch-lock hot and has been flagged to
    252         become STICKY, this is for how long, in seconds, the record will be
    253         flagged as a STICKY record.
    254       </p></div><div class="refsect2"><a name="idp55039504"></a><h3>StickyPindown</h3><p>Default: 200</p><p>
    255         Once a STICKY record has been migrated onto a node, it will be pinned down
    256         on that node for this number of ms. Any request from other nodes to migrate
    257         the record off the node will be deferred until the pindown timer expires.
    258       </p></div><div class="refsect2"><a name="idp55041296"></a><h3>StatHistoryInterval</h3><p>Default: 1</p><p>
    259         Granularity of the statistics collected in the statistics history.
    260       </p></div><div class="refsect2"><a name="idp55042928"></a><h3>AllowClientDBAttach</h3><p>Default: 1</p><p>
    261         When set to 0, clients are not allowed to attach to any databases.
    262         This can be used to temporarily block any new processes from attaching
    263         to and accessing the databases.
    264       </p></div><div class="refsect2"><a name="idp55044656"></a><h3>RecoverPDBBySeqNum</h3><p>Default: 1</p><p>
    265         When set to zero, database recovery for persistent databases
    266         is record-by-record and recovery process simply collects the
    267         most recent version of every individual record.
    268       </p><p>
    269         When set to non-zero, persistent databases will instead be
    270         recovered as a whole db and not by individual records. The
    271         node that contains the highest value stored in the record
    272         "__db_sequence_number__" is selected and the copy of that
    273         nodes database is used as the recovered database.
    274       </p><p>
     275         By default, recovery of persistent databases is done using
    276         __db_sequence_number__ record.
    277       </p></div><div class="refsect2"><a name="idp55047584"></a><h3>FetchCollapse</h3><p>Default: 1</p><p>
    278         When many clients across many nodes try to access the same record at the
    279         same time this can lead to a fetch storm where the record becomes very
    280         active and bounces between nodes very fast. This leads to high CPU
    281         utilization of the ctdbd daemon, trying to bounce that record around
    282         very fast, and poor performance.
    283       </p><p>
    284         This parameter is used to activate a fetch-collapse. A fetch-collapse
     285         is when we track which records have requests in flight so that we only
     286         keep one request in flight from a certain node, even if multiple smbd
     287         processes are attempting to fetch the record at the same time. This
    288         can improve performance and reduce CPU utilization for certain
    289         workloads.
    290       </p><p>
     291         This parameter controls whether we should collapse multiple fetch operations
    292         of the same record into a single request and defer all duplicates or not.
    293       </p></div><div class="refsect2"><a name="idp55050784"></a><h3>Samba3AvoidDeadlocks</h3><p>Default: 0</p><p>
    294         Enable code that prevents deadlocks with Samba (only for Samba 3.x).
    295       </p><p>
    296         This should be set to 1 when using Samba version 3.x to enable special
    297         code in CTDB to avoid deadlock with Samba version 3.x.  This code
    298         is not required for Samba version 4.x and must not be enabled for
    299         Samba 4.x.
    300       </p></div></div><div class="refsect1"><a name="idp55053168"></a><h2>SEE ALSO</h2><p>
     322      </p></div><div class="refsect2"><a name="idp54913152"></a><h3>VerboseMemoryNames</h3><p>Default: 0</p><p>
     323        When set to non-zero, ctdb assigns verbose names for some of
     324        the talloc allocated memory objects.  These names are visible
     325        in the talloc memory report generated by 'ctdb dumpmemory'.
     326      </p></div></div><div class="refsect1"><a name="idp54915024"></a><h2>SEE ALSO</h2><p>
    301327      <span class="citerefentry"><span class="refentrytitle">ctdb</span>(1)</span>,
    302328
  • vendor/current/ctdb/doc/ctdb-tunables.7.xml

    r988 r989  
    3030    </para>
    3131
    32     <refsect2>
    33       <title>MaxRedirectCount</title>
    34       <para>Default: 3</para>
    35       <para>
    36         If we are not the DMASTER and need to fetch a record across the network
    37         we first send the request to the LMASTER after which the record
    38         is passed onto the current DMASTER. If the DMASTER changes before
    39         the request has reached that node, the request will be passed onto the
    40         "next" DMASTER. For very hot records that migrate rapidly across the
    41         cluster this can cause a request to "chase" the record for many hops
    42         before it catches up with the record.
    43 
     44         This is how many hops we allow trying to chase the DMASTER before we
    45         switch back to the LMASTER again to ask for new directions.
    46       </para>
    47       <para>
    48         When chasing a record, this is how many hops we will chase the record
    49         for before going back to the LMASTER to ask for new guidance.
    50       </para>
    51     </refsect2>
    52 
    53     <refsect2>
    54       <title>SeqnumInterval</title>
    55       <para>Default: 1000</para>
    56       <para>
    57         Some databases have seqnum tracking enabled, so that samba will be able
     58         to detect asynchronously when there have been updates to the database.
     59         Every time a database is updated, its sequence number is increased.
    60       </para>
    61       <para>
    62         This tunable is used to specify in 'ms' how frequently ctdb will
    63         send out updates to remote nodes to inform them that the sequence
    64         number is increased.
     32    <para>
     33      The tunable variables are listed alphabetically.
     34    </para>
     35
     36    <refsect2>
     37      <title>AllowClientDBAttach</title>
     38      <para>Default: 1</para>
     39      <para>
     40        When set to 0, clients are not allowed to attach to any databases.
     41        This can be used to temporarily block any new processes from
     42        attaching to and accessing the databases.  This is mainly used
     43        for detaching a volatile database using 'ctdb detach'.
     44      </para>
     45    </refsect2>
     46
     47    <refsect2>
     48      <title>AllowUnhealthyDBRead</title>
     49      <para>Default: 0</para>
     50      <para>
     51        When set to 1, ctdb allows database traverses to read unhealthy
     52        databases.  By default, ctdb does not allow reading records from
     53        unhealthy databases.
    6554      </para>
    6655    </refsect2>
     
    7059      <para>Default: 60</para>
    7160      <para>
    72         This is the default
    73         setting for timeout for when sending a control message to either the
    74         local or a remote ctdb daemon.
    75       </para>
    76     </refsect2>
    77 
    78     <refsect2>
    79       <title>TraverseTimeout</title>
    80       <para>Default: 20</para>
    81       <para>
    82         This setting controls how long we allow a traverse process to run.
    83         After this timeout triggers, the main ctdb daemon will abort the
    84         traverse if it has not yet finished.
    85       </para>
    86     </refsect2>
    87 
    88     <refsect2>
    89       <title>KeepaliveInterval</title>
     61        This is the default setting for timeout for when sending a
     62        control message to either the local or a remote ctdb daemon.
     63      </para>
     64    </refsect2>
     65
     66    <refsect2>
     67      <title>DatabaseHashSize</title>
     68      <para>Default: 100001</para>
     69      <para>
     70        Number of the hash chains for the local store of the tdbs that
     71        ctdb manages.
     72      </para>
     73    </refsect2>
     74
     75    <refsect2>
     76      <title>DatabaseMaxDead</title>
    9077      <para>Default: 5</para>
    9178      <para>
     92         How often in seconds should the nodes send keepalives to each other.
    93       </para>
    94     </refsect2>
    95 
    96     <refsect2>
    97       <title>KeepaliveLimit</title>
    98       <para>Default: 5</para>
    99       <para>
    100         After how many keepalive intervals without any traffic should a node
    101         wait until marking the peer as DISCONNECTED.
    102       </para>
    103       <para>
    104         If a node has hung, it can thus take KeepaliveInterval*(KeepaliveLimit+1)
    105         seconds before we determine that the node is DISCONNECTED and that we
     106         require a recovery. This limit should not be set too high since we want
     107         a hung node to be detected, and expunged from the cluster well before
    108         common CIFS timeouts (45-90 seconds) kick in.
    109       </para>
    110     </refsect2>
    111 
    112     <refsect2>
    113       <title>RecoverTimeout</title>
    114       <para>Default: 20</para>
    115       <para>
    116         This is the default setting for timeouts for controls when sent from the
    117         recovery daemon. We allow longer control timeouts from the recovery daemon
     118         than from normal use since the recovery daemon often uses controls that
    119         can take a lot longer than normal controls.
    120       </para>
    121     </refsect2>
    122 
    123     <refsect2>
    124       <title>RecoverInterval</title>
    125       <para>Default: 1</para>
    126       <para>
    127         How frequently in seconds should the recovery daemon perform the
    128         consistency checks that determine if we need to perform a recovery or not.
      79        Maximum number of dead records per hash chain for the tdb databases
     80        managed by ctdb.
     81      </para>
     82    </refsect2>
     83
     84    <refsect2>
     85      <title>DBRecordCountWarn</title>
     86      <para>Default: 100000</para>
     87      <para>
     88        When set to non-zero, ctdb will log a warning during recovery if
     89        a database has more than this many records. This will produce a
     90        warning if a database grows uncontrollably with orphaned records.
     91      </para>
     92    </refsect2>
     93
     94    <refsect2>
     95      <title>DBRecordSizeWarn</title>
     96      <para>Default: 10000000</para>
     97      <para>
     98        When set to non-zero, ctdb will log a warning during recovery
     99        if a single record is bigger than this size. This will produce
     100        a warning if a database record grows uncontrollably.
     101      </para>
     102    </refsect2>
     103
     104    <refsect2>
     105      <title>DBSizeWarn</title>
     106      <para>Default: 1000000000</para>
     107      <para>
     108        When set to non-zero, ctdb will log a warning during recovery if
     109        a database size is bigger than this. This will produce a warning
     110        if a database grows uncontrollably.
     111      </para>
     112    </refsect2>
     113
     114    <refsect2>
     115      <title>DeferredAttachTO</title>
     116      <para>Default: 120</para>
     117      <para>
     118        When databases are frozen we do not allow clients to attach to
     119        the databases. Instead of returning an error immediately to the
     120        client, the attach request from the client is deferred until
     121        the database becomes available again at which stage we respond
     122        to the client.
     123      </para>
     124      <para>
     125        This timeout controls how long we will defer the request from the
     126        client before timing it out and returning an error to the client.
     127      </para>
     128    </refsect2>
     129
     130    <refsect2>
     131      <title>DeterministicIPs</title>
     132      <para>Default: 0</para>
     133      <para>
     134        When set to 1, ctdb will try to keep public IP addresses locked
     135        to specific nodes as far as possible. This makes it easier
     136        for debugging since you can know that as long as all nodes are
     137        healthy public IP X will always be hosted by node Y.
     138      </para>
     139      <para>
     140        The cost of using deterministic IP address assignment is that it
     141        disables part of the logic where ctdb tries to reduce the number
     142        of public IP assignment changes in the cluster. This tunable may
     143        increase the number of IP failover/failbacks that are performed
     144        on the cluster by a small margin.
     145      </para>
     146    </refsect2>
     147
     148    <refsect2>
     149      <title>DisableIPFailover</title>
     150      <para>Default: 0</para>
     151      <para>
     152        When set to non-zero, ctdb will not perform failover or
     153        failback. Even if a node fails while holding public IPs, ctdb
     154        will not recover the IPs or assign them to another node.
     155      </para>
     156      <para>
     157        When this tunable is enabled, ctdb will no longer attempt
     158        to recover the cluster by failing IP addresses over to other
     159        nodes. This leads to a service outage until the administrator
     160        has manually performed IP failover to replacement nodes using the
     161        'ctdb moveip' command.
    129162      </para>
    130163    </refsect2>
     
    134167      <para>Default: 3</para>
    135168      <para>
    136         When electing a new recovery master, this is how many seconds we allow
    137         the election to take before we either deem the election finished
    138         or we fail the election and start a new one.
    139       </para>
    140     </refsect2>
    141 
    142     <refsect2>
    143       <title>TakeoverTimeout</title>
    144       <para>Default: 9</para>
    145       <para>
    146         This is how many seconds we allow controls to take for IP failover events.
    147       </para>
    148     </refsect2>
    149 
    150     <refsect2>
    151       <title>MonitorInterval</title>
    152       <para>Default: 15</para>
    153       <para>
     154         How often should ctdb run the event scripts to check for a node's health.
    155       </para>
    156     </refsect2>
    157 
    158     <refsect2>
    159       <title>TickleUpdateInterval</title>
    160       <para>Default: 20</para>
    161       <para>
    162         How often will ctdb record and store the "tickle" information used to
    163         kickstart stalled tcp connections after a recovery.
     169        The number of seconds to wait for the election of recovery
     170        master to complete. If the election is not completed during this
     171        interval, then that round of election fails and ctdb starts a
     172        new election.
     173      </para>
     174    </refsect2>
     175
     176    <refsect2>
     177      <title>EnableBans</title>
     178      <para>Default: 1</para>
     179      <para>
     180        This parameter allows ctdb to ban a node if the node is misbehaving.
     181      </para>
     182      <para>
     183        When set to 0, this disables banning completely in the cluster
      184        and thus nodes cannot get banned, even if they break. Don't
     185        set to 0 unless you know what you are doing.  You should set
     186        this to the same value on all nodes to avoid unexpected behaviour.
    164187      </para>
    165188    </refsect2>
     
    173196        run for an event, not just a single event script.
    174197      </para>
    175 
    176198      <para>
    177199        Note that timeouts are ignored for some events ("takeip",
     
    183205
    184206    <refsect2>
     207      <title>FetchCollapse</title>
     208      <para>Default: 1</para>
     209      <para>
     210       This parameter is used to avoid multiple migration requests for
     211       the same record from a single node. All the record requests for
     212       the same record are queued up and processed when the record is
     213       migrated to the current node.
     214      </para>
     215      <para>
     216        When many clients across many nodes try to access the same record
     217        at the same time this can lead to a fetch storm where the record
     218        becomes very active and bounces between nodes very fast. This
     219        leads to high CPU utilization of the ctdbd daemon, trying to
     220        bounce that record around very fast, and poor performance.
      221        Collapsing these requests can improve performance and reduce
      222        CPU utilization for certain workloads.
     223      </para>
     224    </refsect2>
     225
     226    <refsect2>
     227      <title>HopcountMakeSticky</title>
     228      <para>Default: 50</para>
     229      <para>
     230        For database(s) marked STICKY (using 'ctdb setdbsticky'),
     231        any record that is migrating so fast that hopcount
     232        exceeds this limit is marked as STICKY record for
     233        <varname>StickyDuration</varname> seconds. This means that
     234        after each migration the sticky record will be kept on the node
      235        after each migration the sticky record will be kept on the node for
      236        <varname>StickyPindown</varname> milliseconds and prevented from
     237       </para>
     238       <para>
     239        This will improve performance for certain workloads, such as
     240        locking.tdb if many clients are opening/closing the same file
     241        concurrently.
     242      </para>
     243    </refsect2>
     244
     245    <refsect2>
     246      <title>KeepaliveInterval</title>
     247      <para>Default: 5</para>
     248      <para>
     249        How often in seconds should the nodes send keep-alive packets to
     250        each other.
     251      </para>
     252    </refsect2>
     253
     254    <refsect2>
     255      <title>KeepaliveLimit</title>
     256      <para>Default: 5</para>
     257      <para>
     258        After how many keepalive intervals without any traffic should
     259        a node wait until marking the peer as DISCONNECTED.
     260       </para>
     261       <para>
     262        If a node has hung, it can take
     263        <varname>KeepaliveInterval</varname> *
     264        (<varname>KeepaliveLimit</varname> + 1) seconds before
     265        ctdb determines that the node is DISCONNECTED and performs
     266        a recovery. This limit should not be set too high to enable
     267        early detection and avoid any application timeouts (e.g. SMB1)
     268        to kick in before the fail over is completed.
     269      </para>
     270    </refsect2>
     271
     272    <refsect2>
     273      <title>LCP2PublicIPs</title>
     274      <para>Default: 1</para>
     275      <para>
     276        When set to 1, ctdb uses the LCP2 ip allocation algorithm.
     277      </para>
     278    </refsect2>
     279
     280    <refsect2>
     281      <title>LockProcessesPerDB</title>
     282      <para>Default: 200</para>
     283      <para>
     284        This is the maximum number of lock helper processes ctdb will
     285        create for obtaining record locks.  When ctdb cannot get a record
     286        lock without blocking, it creates a helper process that waits
     287        for the lock to be obtained.
     288      </para>
     289    </refsect2>
     290
     291    <refsect2>
     292      <title>LogLatencyMs</title>
     293      <para>Default: 0</para>
     294      <para>
      295        When set to non-zero, ctdb will log if certain operations
     296        take longer than this value, in milliseconds, to complete.
     297        These operations include "process a record request from client",
     298        "take a record or database lock", "update a persistent database
      299        record" and "vacuum a database".
     300      </para>
     301    </refsect2>
     302
     303    <refsect2>
     304      <title>MaxQueueDropMsg</title>
     305      <para>Default: 1000000</para>
     306      <para>
     307        This is the maximum number of messages to be queued up for
     308        a client before ctdb will treat the client as hung and will
     309        terminate the client connection.
     310      </para>
     311    </refsect2>
     312
     313    <refsect2>
     314      <title>MonitorInterval</title>
     315      <para>Default: 15</para>
     316      <para>
     317        How often should ctdb run the 'monitor' event in seconds to check
     318        for a node's health.
     319      </para>
     320    </refsect2>
     321
     322    <refsect2>
    185323      <title>MonitorTimeoutCount</title>
    186324      <para>Default: 20</para>
    187325      <para>
    188         How many monitor events in a row need to timeout before a node
    189         is flagged as UNHEALTHY.  This setting is useful if scripts
    190         can not be written so that they do not hang for benign
    191         reasons.
     326        How many 'monitor' events in a row need to timeout before a node
     327        is flagged as UNHEALTHY.  This setting is useful if scripts can
     328        not be written so that they do not hang for benign reasons.
     329      </para>
     330    </refsect2>
     331
     332    <refsect2>
     333      <title>NoIPFailback</title>
     334      <para>Default: 0</para>
     335      <para>
     336        When set to 1, ctdb will not perform failback of IP addresses
     337        when a node becomes healthy. When a node becomes UNHEALTHY,
     338        ctdb WILL perform failover of public IP addresses, but when the
     339        node becomes HEALTHY again, ctdb will not fail the addresses back.
     340      </para>
     341      <para>
     342        Use with caution! Normally when a node becomes available to the
     343        cluster ctdb will try to reassign public IP addresses onto the
     344        new node as a way to distribute the workload evenly across the
      345        cluster nodes. Ctdb tries to make sure that all running nodes host
      346        approximately the same number of public addresses.
     347      </para>
     348      <para>
     349        When you enable this tunable, ctdb will no longer attempt to
     350        rebalance the cluster by failing IP addresses back to the new
     351        nodes. An unbalanced cluster will therefore remain unbalanced
     352        until there is manual intervention from the administrator. When
     353        this parameter is set, you can manually fail public IP addresses
     354        over to the new node(s) using the 'ctdb moveip' command.
     355      </para>
     356    </refsect2>
     357
     358    <refsect2>
     359      <title>NoIPHostOnAllDisabled</title>
     360      <para>Default: 0</para>
     361      <para>
     362        If no nodes are HEALTHY then by default ctdb will happily host
     363        public IPs on disabled (unhealthy or administratively disabled)
     364        nodes.  This can cause problems, for example if the underlying
     365        cluster filesystem is not mounted.  When set to 1 on a node and
     366        that node is disabled, any IPs hosted by this node will be
     367        released and the node will not takeover any IPs until it is no
     368        longer disabled.
     369      </para>
     370    </refsect2>
     371
     372    <refsect2>
     373      <title>NoIPTakeover</title>
     374      <para>Default: 0</para>
     375      <para>
     376        When set to 1, ctdb will not allow IP addresses to be failed
     377        over onto this node. Any IP addresses that the node currently
     378        hosts will remain on the node but no new IP addresses can be
     379        failed over to the node.
     380      </para>
     381    </refsect2>
     382
     383    <refsect2>
     384      <title>PullDBPreallocation</title>
     385      <para>Default: 10*1024*1024</para>
     386      <para>
     387        This is the size of a record buffer to pre-allocate for sending
     388        reply to PULLDB control. Usually record buffer starts with size
     389        of the first record and gets reallocated every time a new record
     390        is added to the record buffer. For a large number of records,
      391        it can be very inefficient to grow the record buffer one record
     392        at a time.
     393      </para>
     394    </refsect2>
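
   Illustrative only (hypothetical buffer handling, not the ctdbd allocator):
   pre-allocating the reply buffer avoids a reallocation per appended record.

    PullDBPreallocation = 10 * 1024 * 1024  # default, in bytes

    buf, used = bytearray(PullDBPreallocation), 0

    def append_record(record_bytes):
        global used
        while used + len(record_bytes) > len(buf):
            buf.extend(bytes(len(buf)))     # grow in large steps, not per record
        buf[used:used + len(record_bytes)] = record_bytes
        used += len(record_bytes)
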
     395
     396    <refsect2>
     397      <title>RecBufferSizeLimit</title>
     398      <para>Default: 1000000</para>
     399      <para>
     400        This is the limit on the size of the record buffer to be sent
     401        in various controls.  This limit is used by new controls used
     402        for recovery and controls used in vacuuming.
     403      </para>
     404    </refsect2>
     405
     406    <refsect2>
     407      <title>RecdFailCount</title>
     408      <para>Default: 10</para>
     409      <para>
      410        If the recovery daemon has failed to ping the main daemon for
     411        this many consecutive intervals, the main daemon will consider
     412        the recovery daemon as hung and will try to restart it to recover.
     413      </para>
     414    </refsect2>
     415
     416    <refsect2>
     417      <title>RecdPingTimeout</title>
     418      <para>Default: 60</para>
     419      <para>
      420        If the main daemon has not heard a "ping" from the recovery daemon
      421        for this many seconds, the main daemon will log a message that
     422        the recovery daemon is potentially hung.  This also increments a
     423        counter which is checked against <varname>RecdFailCount</varname>
     424        for detection of hung recovery daemon.
     425      </para>
     426    </refsect2>
     427
     428    <refsect2>
     429      <title>RecLockLatencyMs</title>
     430      <para>Default: 1000</para>
     431      <para>
     432        When using a reclock file for split brain prevention, if set
      433        to non-zero this tunable will make the recovery daemon log a
     434        message if the fcntl() call to lock/testlock the recovery file
     435        takes longer than this number of milliseconds.
     436      </para>
     437    </refsect2>
     438
     439    <refsect2>
     440      <title>RecoverInterval</title>
     441      <para>Default: 1</para>
     442      <para>
     443        How frequently in seconds should the recovery daemon perform the
     444        consistency checks to determine if it should perform a recovery.
     445      </para>
     446    </refsect2>
     447
     448    <refsect2>
     449      <title>RecoverPDBBySeqNum</title>
     450      <para>Default: 1</para>
     451      <para>
     452        When set to zero, database recovery for persistent databases is
      453        record-by-record and the recovery process simply collects the most
     454        recent version of every individual record.
     455      </para>
     456      <para>
     457        When set to non-zero, persistent databases will instead be
     458        recovered as a whole db and not by individual records. The
     459        node that contains the highest value stored in the record
      460        "__db_sequence_number__" is selected and the copy of that node's
     461        database is used as the recovered database.
     462      </para>
     463      <para>
      464        By default, recovery of persistent databases is done using
     465        __db_sequence_number__ record.
     466      </para>
     467    </refsect2>
     468
     469    <refsect2>
     470      <title>RecoverTimeout</title>
     471      <para>Default: 120</para>
     472      <para>
     473        This is the default setting for timeouts for controls when sent
     474        from the recovery daemon. We allow longer control timeouts from
     475        the recovery daemon than from normal use since the recovery
      476        daemon often uses controls that can take a lot longer than normal
     477        controls.
     478      </para>
     479    </refsect2>
     480
     481    <refsect2>
     482      <title>RecoveryBanPeriod</title>
     483      <para>Default: 300</para>
     484      <para>
     485       The duration in seconds for which a node is banned if the node
     486       fails during recovery.  After this time has elapsed the node will
     487       automatically get unbanned and will attempt to rejoin the cluster.
     488      </para>
     489      <para>
     490       A node usually gets banned due to real problems with the node.
     491       Don't set this value too small.  Otherwise, a problematic node
      492       will try to re-join the cluster too soon, causing unnecessary recoveries.
     493      </para>
     494    </refsect2>
     495
     496    <refsect2>
     497      <title>RecoveryDropAllIPs</title>
     498      <para>Default: 120</para>
     499      <para>
     500        If a node is stuck in recovery, or stopped, or banned, for this
     501        many seconds, then ctdb will release all public addresses on
     502        that node.
    192503      </para>
    193504    </refsect2>
     
    197508      <para>Default: 120</para>
    198509      <para>
    199         During recoveries, if a node has not caused recovery failures during the
    200         last grace period, any records of transgressions that the node has caused
    201         recovery failures will be forgiven. This resets the ban-counter back to
    202         zero for that node.
    203       </para>
    204     </refsect2>
    205 
    206     <refsect2>
    207       <title>RecoveryBanPeriod</title>
    208       <para>Default: 300</para>
    209       <para>
     210         If a node keeps causing repetitive recovery failures, the node will
     211         eventually become banned from the cluster.
     212         This controls how long the culprit node will be banned from the cluster
     213         before it is allowed to try to join the cluster again.
     214         Don't set this too small. A node gets banned for a reason and it is usually due
    215         to real problems with the node.
    216       </para>
    217     </refsect2>
    218 
    219     <refsect2>
    220       <title>DatabaseHashSize</title>
    221       <para>Default: 100001</para>
    222       <para>
    223         Size of the hash chains for the local store of the tdbs that ctdb manages.
    224       </para>
    225     </refsect2>
    226 
    227     <refsect2>
    228       <title>DatabaseMaxDead</title>
    229       <para>Default: 5</para>
    230       <para>
    231         How many dead records per hashchain in the TDB database do we allow before
    232         the freelist needs to be processed.
     510       During recoveries, if a node has not caused recovery failures
      511       during the last grace period (in seconds), any record of the
      512       recovery failures that the node previously caused is forgiven.
      513       This resets the ban-counter back to zero for that node.
     514      </para>
     515    </refsect2>
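
   A minimal sketch of the forgiveness rule, assuming a hypothetical per-node
   record of the last failure time (not the recovery daemon's real
   bookkeeping):

    import time

    RecoveryGracePeriod = 120               # seconds, default

    def update_ban_counter(node, now=None):
        # Reset the ban counter once a full grace period has passed
        # without this node causing a recovery failure.
        now = time.time() if now is None else now
        if now - node.get("last_failure", now) > RecoveryGracePeriod:
            node["ban_counter"] = 0
        return node.get("ban_counter", 0)
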
     516
     517    <refsect2>
     518      <title>RepackLimit</title>
     519      <para>Default: 10000</para>
     520      <para>
      521        During vacuuming, if the number of freelist records is more than
     522        <varname>RepackLimit</varname>, then the database is repacked
     523        to get rid of the freelist records to avoid fragmentation.
     524      </para>
     525      <para>
     526        Databases are repacked only if both <varname>RepackLimit</varname>
     527        and <varname>VacuumLimit</varname> are exceeded.
    233528      </para>
    234529    </refsect2>
     
    238533      <para>Default: 10</para>
    239534      <para>
    240         Once a recovery has completed, no additional recoveries are permitted
    241         until this timeout has expired.
    242       </para>
    243     </refsect2>
    244 
    245     <refsect2>
    246       <title>EnableBans</title>
     535        Once a recovery has completed, no additional recoveries are
     536        permitted until this timeout in seconds has expired.
     537      </para>
     538    </refsect2>
     539
     540    <refsect2>
     541      <title>Samba3AvoidDeadlocks</title>
     542      <para>Default: 0</para>
     543      <para>
     544        If set to non-zero, enables code that prevents deadlocks with Samba
     545        (only for Samba 3.x).
     546      </para> <para>
     547        This should be set to 1 only when using Samba version 3.x,
     548        to enable special code in ctdb that avoids deadlocks with Samba
     549        version 3.x.  This code is not required for Samba version 4.x
     550        and must not be enabled for Samba 4.x.
     551      </para>
     552    </refsect2>
     553
     554    <refsect2>
     555      <title>SeqnumInterval</title>
     556      <para>Default: 1000</para>
     557      <para>
     558        Some databases have seqnum tracking enabled, so that Samba will
     559        be able to detect asynchronously when there have been updates
     560        to the database.  Every time a database is updated, its sequence
     561        number is increased.
     562      </para>
     563      <para>
     564        This tunable is used to specify in milliseconds how frequently
     565        ctdb will send out updates to remote nodes to inform them that
     566        the sequence number has increased.
     567      </para>
     568    </refsect2>
     569
     570    <refsect2>
     571      <title>StatHistoryInterval</title>
    247572      <para>Default: 1</para>
    248573      <para>
    249         When set to 0, this disables BANNING completely in the cluster and thus
    250         nodes cannot get banned, even if they break. Don't set to 0 unless you
    251         know what you are doing.  You should set this to the same value on
    252         all nodes to avoid unexpected behaviour.
    253       </para>
    254     </refsect2>
    255 
    256     <refsect2>
    257       <title>DeterministicIPs</title>
    258       <para>Default: 0</para>
    259       <para>
    260         When enabled, this tunable makes ctdb try to keep public IP addresses
    261         locked to specific nodes as far as possible. This makes it easier for
    262         debugging since you can know that as long as all nodes are healthy
    263         public IP X will always be hosted by node Y.
    264       </para>
    265       <para>
    266         The cost of using deterministic IP address assignment is that it
    267         disables part of the logic where ctdb tries to reduce the number of
    268         public IP assignment changes in the cluster. This tunable may increase
    269         the number of IP failover/failbacks that are performed on the cluster
    270         by a small margin.
    271       </para>
    272 
    273     </refsect2>
    274     <refsect2>
    275       <title>LCP2PublicIPs</title>
    276       <para>Default: 1</para>
    277       <para>
    278         When enabled, this switches ctdb to the LCP2 IP allocation
    279         algorithm.
    280       </para>
    281     </refsect2>
    282 
    283     <refsect2>
    284       <title>ReclockPingPeriod</title>
    285       <para>Default: x</para>
    286       <para>
    287         Obsolete
    288       </para>
    289     </refsect2>
    290 
    291     <refsect2>
    292       <title>NoIPFailback</title>
    293       <para>Default: 0</para>
    294       <para>
    295         When set to 1, ctdb will not perform failback of IP addresses when a node
    296         becomes healthy. Ctdb WILL perform failover of public IP addresses when a
    297         node becomes UNHEALTHY, but when the node becomes HEALTHY again, ctdb
    298         will not fail the addresses back.
    299       </para>
    300       <para>
    301         Use with caution! Normally, when a node becomes available to the cluster,
    302         ctdb will try to reassign public IP addresses onto the new node as a way
    303         to distribute the workload evenly across the cluster nodes. Ctdb tries to
    304         make sure that all running nodes host approximately the same number of
    305         public addresses.
    306       </para>
    307       <para>
    308         When you enable this tunable, CTDB will no longer attempt to rebalance
    309         the cluster by failing IP addresses back to the new nodes. An unbalanced
    310         cluster will therefore remain unbalanced until there is manual
    311         intervention from the administrator. When this parameter is set, you can
    312         manually fail public IP addresses over to the new node(s) using the
    313         'ctdb moveip' command.
    314       </para>
    315     </refsect2>
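When NoIPFailback is enabled, rebalancing becomes a manual step using the 'ctdb moveip' command mentioned above. A hypothetical sketch; the address 10.1.1.1 and the node number 1 are placeholders for your own layout:

    # See which node currently hosts each public address
    ctdb ip

    # Manually fail a public address back to node 1
    ctdb moveip 10.1.1.1 1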
    316 
    317     <refsect2>
    318       <title>DisableIPFailover</title>
    319       <para>Default: 0</para>
    320       <para>
    321         When enabled, ctdb will not perform failover or failback. Even if a
    322         node fails while holding public IPs, ctdb will not recover the IPs or
    323         assign them to another node.
    324       </para>
    325       <para>
    326         When you enable this tunable, CTDB will no longer attempt to recover
    327         the cluster by failing IP addresses over to other nodes. This leads to
    328         a service outage until the administrator has manually performed failover
    329         to replacement nodes using the 'ctdb moveip' command.
    330       </para>
    331     </refsect2>
    332 
    333     <refsect2>
    334       <title>NoIPTakeover</title>
    335       <para>Default: 0</para>
    336       <para>
    337         When set to 1, ctdb will not allow IP addresses to be failed over
    338         onto this node. Any IP addresses that the node currently hosts
    339         will remain on the node but no new IP addresses can be failed over
    340         to the node.
    341       </para>
    342     </refsect2>
    343 
    344     <refsect2>
    345       <title>NoIPHostOnAllDisabled</title>
    346       <para>Default: 0</para>
    347       <para>
    348         If no nodes are healthy then by default ctdb will happily host
    349         public IPs on disabled (unhealthy or administratively disabled)
    350         nodes.  This can cause problems, for example if the underlying
    351         cluster filesystem is not mounted.  When set to 1 on a node and
    352         that node is disabled, any IPs hosted by this node will be
    353         released and the node will not take over any IPs until it is no
    354         longer disabled.
    355       </para>
    356     </refsect2>
    357 
    358     <refsect2>
    359       <title>DBRecordCountWarn</title>
    360       <para>Default: 100000</para>
    361       <para>
    362         When set to non-zero, ctdb will log a warning when we try to recover a
    363         database with more than this many records. This will produce a warning
    364         if a database grows uncontrollably with orphaned records.
    365       </para>
    366     </refsect2>
    367 
    368     <refsect2>
    369       <title>DBRecordSizeWarn</title>
    370       <para>Default: 10000000</para>
    371       <para>
    372         When set to non-zero, ctdb will log a warning when we try to recover a
    373         database where a single record is bigger than this. This will produce
    374         a warning if a database record grows uncontrollably with orphaned
    375         sub-records.
    376       </para>
    377     </refsect2>
    378 
    379     <refsect2>
    380       <title>DBSizeWarn</title>
    381       <para>Default: 1000000000</para>
    382       <para>
    383         When set to non-zero, ctdb will log a warning when we try to recover a
    384         database bigger than this. This will produce
    385         a warning if a database grows uncontrollably.
    386       </para>
    387     </refsect2>
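As a hypothetical illustration of adjusting these recovery-time warning thresholds at runtime (the numbers are arbitrary examples; the wording above implies that a value of 0 disables the corresponding warning):

    # Example only: warn earlier about large databases and records
    ctdb setvar DBRecordCountWarn 50000
    ctdb setvar DBRecordSizeWarn 1000000
    ctdb setvar DBSizeWarn 100000000

    # Setting a threshold to 0 should turn that warning off
    ctdb setvar DBRecordCountWarn 0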
    388 
    389     <refsect2>
    390       <title>VerboseMemoryNames</title>
    391       <para>Default: 0</para>
    392       <para>
    393         This feature consumes additional memory. When used, the talloc library
    394         will create more verbose names for all talloc allocated objects.
    395       </para>
    396     </refsect2>
    397 
    398     <refsect2>
    399       <title>RecdPingTimeout</title>
     574        Granularity of the statistics collected in the statistics
     575        history. This is reported by the 'ctdb stats' command.
     576      </para>
     577    </refsect2>
     578
     579    <refsect2>
     580      <title>StickyDuration</title>
     581      <para>Default: 600</para>
     582      <para>
     583        Once a record has been marked STICKY, this is the duration, in
     584        seconds, for which the record will be flagged as a STICKY record.
     585      </para>
     586    </refsect2>
     587
     588    <refsect2>
     589      <title>StickyPindown</title>
     590      <para>Default: 200</para>
     591      <para>
     592        Once a STICKY record has been migrated onto a node, it will be
     593        pinned down on that node for this number of milliseconds. Any
     594        request from other nodes to migrate the record off the node will
     595        be deferred.
     596      </para>
     597    </refsect2>
     598
     599    <refsect2>
     600      <title>TakeoverTimeout</title>
     601      <para>Default: 9</para>
     602      <para>
     603        This is the duration in seconds within which ctdb tries to complete IP
     604        failover.
     605      </para>
     606    </refsect2>
     607
     608    <refsect2>
     609      <title>TDBMutexEnabled</title>
     610      <para>Default: 0</para>
     611      <para>
     612        This parameter enables the TDB_MUTEX_LOCKING feature on volatile
     613        databases if robust mutexes are supported. This optimizes the
     614        record locking using robust mutexes and is much more efficient
     615        than using POSIX locks.
     616      </para>
     617    </refsect2>
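Because the mutex option only affects databases as they are attached, in practice it is seeded in the CTDB configuration before ctdbd starts rather than flipped at runtime. A sketch assuming the ctdbd.conf-style configuration of this CTDB generation, where tunables can be preset with CTDB_SET_ lines (the file location varies between distributions):

    # /etc/ctdb/ctdbd.conf (location is distribution-dependent)
    CTDB_SET_TDBMutexEnabled=1

The effective value can then be checked with 'ctdb getvar TDBMutexEnabled' once the daemon is running.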
     618
     619    <refsect2>
     620      <title>TickleUpdateInterval</title>
     621      <para>Default: 20</para>
     622      <para>
     623        Every <varname>TickleUpdateInterval</varname> seconds, ctdb
     624        synchronizes the client connection information across nodes.
     625      </para>
     626    </refsect2>
     627
     628    <refsect2>
     629      <title>TraverseTimeout</title>
     630      <para>Default: 20</para>
     631      <para>
     632        This is the duration in seconds for which a database traverse
     633        is allowed to run.  If the traverse does not complete during
     634        this interval, ctdb will abort the traverse.
     635      </para>
     636    </refsect2>
     637
     638    <refsect2>
     639      <title>VacuumFastPathCount</title>
    400640      <para>Default: 60</para>
    401641      <para>
    402         If the main daemon has not heard a "ping" from the recovery daemon for
    403         this many seconds, the main daemon will log a message that the recovery
    404         daemon is potentially hung.
    405       </para>
    406     </refsect2>
    407 
    408     <refsect2>
    409       <title>RecdFailCount</title>
    410       <para>Default: 10</para>
    411       <para>
    412         If the recovery daemon has failed to ping the main daemon for this many
    413         consecutive intervals, the main daemon will consider the recovery daemon
    414         as hung and will try to restart it to recover.
    415       </para>
    416     </refsect2>
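To make the interaction of these two tunables concrete: with the defaults above, and assuming each missed interval is RecdPingTimeout seconds long, the recovery daemon is only restarted after roughly 60 x 10 = 600 seconds without a ping. The effective values can be checked with:

    ctdb getvar RecdPingTimeout
    ctdb getvar RecdFailCount
    # roughly RecdPingTimeout * RecdFailCount seconds of silence
    # before the recovery daemon is restarted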
    417 
    418     <refsect2>
    419       <title>LogLatencyMs</title>
    420       <para>Default: 0</para>
    421       <para>
    422         When set to non-zero, this will make the main daemon log any operation that
    423         took longer than this value, in milliseconds, to complete.
    424         Examples include how long a lockwait child process needed,
    425         how long it took to write to a persistent database, and
    426         how long it took to get a response to a CALL from a remote node.
    427       </para>
    428     </refsect2>
    429 
    430     <refsect2>
    431       <title>RecLockLatencyMs</title>
    432       <para>Default: 1000</para>
    433       <para>
    434         When using a reclock file for split brain prevention, if set to non-zero
    435         this tunable will make the recovery daemon log a message if the fcntl()
    436         call to lock/testlock the recovery file takes longer than this number of
    437         ms.
    438       </para>
    439     </refsect2>
    440 
    441     <refsect2>
    442       <title>RecoveryDropAllIPs</title>
    443       <para>Default: 120</para>
    444       <para>
    445         If we have been stuck in recovery, stopped or banned mode for
    446         this many seconds, we will force-drop all held public addresses.
     642       During a vacuuming run, ctdb usually processes only the records
     643       marked for deletion; this is called fast path vacuuming. After
     644       finishing <varname>VacuumFastPathCount</varname> fast
     645       path vacuuming runs, ctdb will trigger a scan of the complete database
     646       for any empty records that need to be deleted.
    447647      </para>
    448648    </refsect2>
     
    454654        Periodic interval, in seconds, at which vacuuming is triggered for
    455655        volatile databases.
     656      </para>
     657    </refsect2>
     658
     659    <refsect2>
     660      <title>VacuumLimit</title>
     661      <para>Default: 5000</para>
     662      <para>
     663        During vacuuming, if the number of deleted records is more than
     664        <varname>VacuumLimit</varname>, then databases are repacked to
     665        avoid fragmentation.
     666      </para>
     667      <para>
     668        Databases are repacked only if both <varname>RepackLimit</varname>
     669        and <varname>VacuumLimit</varname> are exceeded.
    456670      </para>
    457671    </refsect2>
     
    468682
    469683    <refsect2>
    470       <title>RepackLimit</title>
    471       <para>Default: 10000</para>
    472         During vacuuming, if the number of freelist records is more
    473         During vacuuming, if the number of freelist records are more
    474         than <varname>RepackLimit</varname>, then databases are
    475         repacked to get rid of the freelist records to avoid
    476         fragmentation.
    477       </para>
    478       <para>
    479         Databases are repacked only if both
    480         <varname>RepackLimit</varname> and
    481         <varname>VacuumLimit</varname> are exceeded.
    482       </para>
    483     </refsect2>
    484 
    485     <refsect2>
    486       <title>VacuumLimit</title>
    487       <para>Default: 5000</para>
    488       <para>
    489         During vacuuming, if the number of deleted records are more
    490         than <varname>VacuumLimit</varname>, then databases are
    491         repacked to avoid fragmentation.
    492       </para>
    493       <para>
    494         Databases are repacked only if both
    495         <varname>RepackLimit</varname> and
    496         <varname>VacuumLimit</varname> are exceeded.
    497       </para>
    498     </refsect2>
    499 
    500     <refsect2>
    501       <title>VacuumFastPathCount</title>
    502       <para>Default: 60</para>
    503       <para>
    504         When a record is deleted, it is marked for deletion during
    505         vacuuming.  The vacuuming process usually processes this list to purge
    506         the records from the database.  If the number of records marked
    507         for deletion is more than VacuumFastPathCount, then the vacuuming
    508         process will scan the complete database for empty records instead
    509         of using the list of records marked for deletion.
    510       </para>
    511     </refsect2>
    512 
    513     <refsect2>
    514       <title>DeferredAttachTO</title>
    515       <para>Default: 120</para>
    516       <para>
    517         When databases are frozen we do not allow clients to attach to the
    518         databases. Instead of returning an error immediately to the application,
    519         the attach request from the client is deferred until the database
    520         becomes available again, at which stage we respond to the client.
    521       </para>
    522       <para>
    523         This timeout controls how long we will defer the request from the client
    524         before timing it out and returning an error to the client.
    525       </para>
    526     </refsect2>
    527 
    528     <refsect2>
    529       <title>HopcountMakeSticky</title>
    530       <para>Default: 50</para>
    531       <para>
    532         If the database is set to 'STICKY' mode, using the 'ctdb setdbsticky'
    533         command, any record that is seen as very hot and migrating so fast that
    534         the hopcount surpasses 50 is set to become a STICKY record for StickyDuration
    535         seconds. This means that after each migration the record will be kept on
    536         the node and prevented from being migrated off the node.
    537       </para>
    538       <para>
    539         This setting allows one to try to identify such records and stop them from
    540         migrating across the cluster so fast. This will improve performance for
    541         certain workloads, such as locking.tdb if many clients are opening/closing
    542         the same file concurrently.
    543       </para>
    544     </refsect2>
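As a concrete, hypothetical walk-through of the workflow described above, using locking.tdb (the database named in the text) and an arbitrary threshold:

    # Enable STICKY handling for the database, as described above
    ctdb setdbsticky locking.tdb

    # Example only: make hot records sticky sooner, then review the
    # related timers
    ctdb setvar HopcountMakeSticky 20
    ctdb getvar StickyDuration
    ctdb getvar StickyPindown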
    545 
    546     <refsect2>
    547       <title>StickyDuration</title>
    548       <para>Default: 600</para>
    549       <para>
    550         Once a record has been found to be fetch-lock hot and has been flagged to
    551         become STICKY, this is for how long, in seconds, the record will be
    552         flagged as a STICKY record.
    553       </para>
    554     </refsect2>
    555 
    556     <refsect2>
    557       <title>StickyPindown</title>
    558       <para>Default: 200</para>
    559       <para>
    560         Once a STICKY record has been migrated onto a node, it will be pinned down
    561         on that node for this number of ms. Any request from other nodes to migrate
    562         the record off the node will be deferred until the pindown timer expires.
    563       </para>
    564     </refsect2>
    565 
    566     <refsect2>
    567       <title>StatHistoryInterval</title>
    568       <para>Default: 1</para>
    569       <para>
    570         Granularity of the statistics collected in the statistics history.
    571       </para>
    572     </refsect2>
    573 
    574     <refsect2>
    575       <title>AllowClientDBAttach</title>
    576       <para>Default: 1</para>
    577       <para>
    578         When set to 0, clients are not allowed to attach to any databases.
    579         This can be used to temporarily block any new processes from attaching
    580         to and accessing the databases.
    581       </para>
    582     </refsect2>
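A hypothetical maintenance sequence built on this switch; remember to re-enable attaches afterwards:

    # Temporarily block new clients from attaching to the databases
    ctdb setvar AllowClientDBAttach 0

    # ... perform the maintenance work ...

    # Allow client attaches again
    ctdb setvar AllowClientDBAttach 1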
    583 
    584     <refsect2>
    585       <title>RecoverPDBBySeqNum</title>
    586       <para>Default: 1</para>
    587       <para>
    588         When set to zero, database recovery for persistent databases
    589         is record-by-record and the recovery process simply collects the
    590         most recent version of every individual record.
    591       </para>
    592       <para>
    593         When set to non-zero, persistent databases will instead be
    594         recovered as a whole db and not by individual records. The
    595         node that contains the highest value stored in the record
    596         "__db_sequence_number__" is selected and the copy of that
    597         node's database is used as the recovered database.
    598       </para>
    599       <para>
    600         By default, recovery of persistent databases is done using the
    601         __db_sequence_number__ record.
    602       </para>
    603     </refsect2>
    604 
    605     <refsect2>
    606       <title>FetchCollapse</title>
    607       <para>Default: 1</para>
    608       <para>
    609         When many clients across many nodes try to access the same record at the
    610         same time this can lead to a fetch storm where the record becomes very
    611         active and bounces between nodes very fast. This leads to high CPU
    612         utilization of the ctdbd daemon, trying to bounce that record around
    613         very fast, and poor performance.
    614       </para>
    615       <para>
    616         This parameter is used to activate a fetch-collapse. A fetch-collapse
    617         is when we track for which records we have requests in flight so that we only
    618         keep one request in flight from a certain node, even if multiple smbd
    619         processes are attempting to fetch the record at the same time. This
    620         can improve performance and reduce CPU utilization for certain
    621         workloads.
    622       </para>
    623       <para>
    624         This parameter controls whether we should collapse multiple fetch operations
    625         of the same record into a single request and defer all duplicates or not.
    626       </para>
    627     </refsect2>
    628 
    629     <refsect2>
    630       <title>Samba3AvoidDeadlocks</title>
    631       <para>Default: 0</para>
    632       <para>
    633         Enable code that prevents deadlocks with Samba (only for Samba 3.x).
    634       </para>
    635       <para>
    636         This should be set to 1 when using Samba version 3.x to enable special
    637         code in CTDB to avoid deadlock with Samba version 3.x.  This code
    638         is not required for Samba version 4.x and must not be enabled for
    639         Samba 4.x.
    640       </para>
    641     </refsect2>
     684      <title>VerboseMemoryNames</title>
     685      <para>Default: 0</para>
     686      <para>
     687        When set to non-zero, ctdb assigns verbose names to some of
     688        the talloc-allocated memory objects.  These names are visible
     689        in the talloc memory report generated by 'ctdb dumpmemory'.
     690      </para>
     691    </refsect2>
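A short illustration built on the 'ctdb dumpmemory' command mentioned above; since the extra verbosity costs memory, it is usually only enabled while debugging:

    # Enable verbose talloc names while chasing a memory problem
    ctdb setvar VerboseMemoryNames 1

    # Inspect the talloc memory report
    ctdb dumpmemory

    # Turn it off again when done
    ctdb setvar VerboseMemoryNames 0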
     692
    642693  </refsect1>
    643694