Timestamp:
Nov 25, 2016, 8:04:54 PM (9 years ago)
Author:
Silvan Scherrer
Message:

Samba Server: update vendor to version 4.4.7

File:
1 edited

  • vendor/current/ctdb/doc/ctdb-tunables.7

    r988 r989  
    33.\"    Author:
    44.\" Generator: DocBook XSL Stylesheets v1.78.1 <http://docbook.sf.net/>
    5 .\"      Date: 01/27/2016
     5.\"      Date: 09/22/2016
    66.\"    Manual: CTDB - clustered TDB database
    77.\"    Source: ctdb
    88.\"  Language: English
    99.\"
    10 .TH "CTDB\-TUNABLES" "7" "01/27/2016" "ctdb" "CTDB \- clustered TDB database"
     10.TH "CTDB\-TUNABLES" "7" "09/22/2016" "ctdb" "CTDB \- clustered TDB database"
    1111.\" -----------------------------------------------------------------
    1212.\" * Define some portability stuff
     
    3838\fBgetvar\fR
    3939commands for more details\&.
    40 .SS "MaxRedirectCount"
     40.PP
     41The tunable variables are listed alphabetically\&.
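Illustrative aside (an editor's sketch, not part of the upstream page): the tunables listed below are read and written at runtime with the getvar/setvar commands mentioned above. A minimal example, assuming a running ctdb daemon, using MonitorInterval (documented below) as the tunable:

    # read the current value of a tunable
    ctdb getvar MonitorInterval
    # set a new value on the local node (a runtime change; it does not persist across restarts)
    ctdb setvar MonitorInterval 20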
     42.SS "AllowClientDBAttach"
     43.PP
     44Default: 1
     45.PP
     46When set to 0, clients are not allowed to attach to any databases\&. This can be used to temporarily block any new processes from attaching to and accessing the databases\&. This is mainly used for detaching a volatile database using \*(Aqctdb detach\*(Aq\&.
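A minimal sketch of the detach sequence described above, combining 'ctdb setvar' with the 'ctdb detach' command named in the text (the database name test_volatile.tdb is a made-up example):

    # temporarily block new clients from attaching to any database
    ctdb setvar AllowClientDBAttach 0
    # detach the volatile database
    ctdb detach test_volatile.tdb
    # allow attachments again
    ctdb setvar AllowClientDBAttach 1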
     47.SS "AllowUnhealthyDBRead"
     48.PP
     49Default: 0
     50.PP
     51When set to 1, ctdb allows database traverses to read unhealthy databases\&. By default, ctdb does not allow reading records from unhealthy databases\&.
     52.SS "ControlTimeout"
     53.PP
     54Default: 60
     55.PP
     56This is the default timeout for sending a control message to either the local or a remote ctdb daemon\&.
     57.SS "DatabaseHashSize"
     58.PP
     59Default: 100001
     60.PP
     61Number of hash chains for the local store of the tdbs that ctdb manages\&.
     62.SS "DatabaseMaxDead"
     63.PP
     64Default: 5
     65.PP
     66Maximum number of dead records per hash chain for the tdb databases managed by ctdb\&.
     67.SS "DBRecordCountWarn"
     68.PP
     69Default: 100000
     70.PP
     71When set to non\-zero, ctdb will log a warning during recovery if a database has more than this many records\&. This will produce a warning if a database grows uncontrollably with orphaned records\&.
     72.SS "DBRecordSizeWarn"
     73.PP
     74Default: 10000000
     75.PP
     76When set to non\-zero, ctdb will log a warning during recovery if a single record is bigger than this size\&. This will produce a warning if a database record grows uncontrollably\&.
     77.SS "DBSizeWarn"
     78.PP
     79Default: 1000000000
     80.PP
     81When set to non\-zero, ctdb will log a warning during recovery if a database size is bigger than this\&. This will produce a warning if a database grows uncontrollably\&.
     82.SS "DeferredAttachTO"
     83.PP
     84Default: 120
     85.PP
     86When databases are frozen we do not allow clients to attach to the databases\&. Instead of returning an error immediately to the client, the attach request from the client is deferred until the database becomes available again at which stage we respond to the client\&.
     87.PP
     88This timeout controls how long we will defer the request from the client before timing it out and returning an error to the client\&.
     89.SS "DeterministicIPs"
     90.PP
     91Default: 0
     92.PP
     93When set to 1, ctdb will try to keep public IP addresses locked to specific nodes as far as possible\&. This makes it easier for debugging since you can know that as long as all nodes are healthy public IP X will always be hosted by node Y\&.
     94.PP
     95The cost of using deterministic IP address assignment is that it disables part of the logic where ctdb tries to reduce the number of public IP assignment changes in the cluster\&. This tunable may increase the number of IP failover/failbacks that are performed on the cluster by a small margin\&.
     96.SS "DisableIPFailover"
     97.PP
     98Default: 0
     99.PP
     100When set to non\-zero, ctdb will not perform failover or failback\&. Even if a node fails while holding public IPs, ctdb will not recover the IPs or assign them to another node\&.
     101.PP
     102When this tunable is enabled, ctdb will no longer attempt to recover the cluster by failing IP addresses over to other nodes\&. This leads to a service outage until the administrator has manually performed IP failover to replacement nodes using the \*(Aqctdb moveip\*(Aq command\&.
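To illustrate the manual intervention described above, a hedged sketch of the 'ctdb moveip' command named in the text (the public IP address 10.0.0.31 and destination node number 1 are made-up examples):

    # manually move a public IP address to node 1
    ctdb moveip 10.0.0.31 1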
     103.SS "ElectionTimeout"
    41104.PP
    42105Default: 3
    43106.PP
    44 If we are not the DMASTER and need to fetch a record across the network we first send the request to the LMASTER after which the record is passed onto the current DMASTER\&. If the DMASTER changes before the request has reached that node, the request will be passed onto the "next" DMASTER\&. For very hot records that migrate rapidly across the cluster this can cause a request to "chase" the record for many hops before it catches up with the record\&. This is how many hops we allow trying to chase the DMASTER before we switch back to the LMASTER again to ask for new directions\&.
    45 .PP
    46 When chasing a record, this is how many hops we will chase the record for before going back to the LMASTER to ask for new guidance\&.
    47 .SS "SeqnumInterval"
     107The number of seconds to wait for the election of recovery master to complete\&. If the election is not completed during this interval, then that round of election fails and ctdb starts a new election\&.
     108.SS "EnableBans"
     109.PP
     110Default: 1
     111.PP
     112This parameter allows ctdb to ban a node if the node is misbehaving\&.
     113.PP
     114When set to 0, this disables banning completely in the cluster and thus nodes cannot get banned, even if they break\&. Don\*(Aqt set to 0 unless you know what you are doing\&. You should set this to the same value on all nodes to avoid unexpected behaviour\&.
     115.SS "EventScriptTimeout"
     116.PP
     117Default: 30
     118.PP
     119Maximum time in seconds to allow an event to run before timing out\&. This is the total time for all enabled scripts that are run for an event, not just a single event script\&.
     120.PP
     121Note that timeouts are ignored for some events ("takeip", "releaseip", "startrecovery", "recovered") and converted to success\&. The logic here is that the callers of these events implement their own additional timeout\&.
     122.SS "FetchCollapse"
     123.PP
     124Default: 1
     125.PP
     126This parameter is used to avoid multiple migration requests for the same record from a single node\&. All the record requests for the same record are queued up and processed when the record is migrated to the current node\&.
     127.PP
     128When many clients across many nodes try to access the same record at the same time this can lead to a fetch storm where the record becomes very active and bounces between nodes very fast\&. This leads to high CPU utilization of the ctdbd daemon, trying to bounce that record around very fast, and poor performance\&. Collapsing these fetches can improve performance and reduce CPU utilization for certain workloads\&.
     129.SS "HopcountMakeSticky"
     130.PP
     131Default: 50
     132.PP
     133For database(s) marked STICKY (using \*(Aqctdb setdbsticky\*(Aq), any record that is migrating so fast that hopcount exceeds this limit is marked as STICKY record for
     134\fIStickyDuration\fR
     135seconds\&. This means that after each migration the sticky record will be kept on the node
     136for \fIStickyPindown\fR milliseconds and prevented from being migrated off the node\&.
     137.PP
     138This will improve performance for certain workloads, such as locking\&.tdb if many clients are opening/closing the same file concurrently\&.
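A brief illustrative sketch of marking a database sticky with the 'ctdb setdbsticky' command referenced above (locking.tdb is the example workload the text itself mentions):

    # mark locking.tdb STICKY so that hot records get pinned per HopcountMakeSticky
    ctdb setdbsticky locking.tdb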
     139.SS "KeepaliveInterval"
     140.PP
     141Default: 5
     142.PP
     143How often in seconds should the nodes send keep\-alive packets to each other\&.
     144.SS "KeepaliveLimit"
     145.PP
     146Default: 5
     147.PP
     148After this many keepalive intervals without any traffic, a node marks the peer as DISCONNECTED\&.
     149.PP
     150If a node has hung, it can take
     151\fIKeepaliveInterval\fR
     152* (\fIKeepaliveLimit\fR
     153+ 1) seconds before ctdb determines that the node is DISCONNECTED and performs a recovery\&. This limit should not be set too high, so that a hung node is detected early and application timeouts (e\&.g\&. SMB1) do not kick in before the failover is completed\&.
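For example, with the defaults given above (KeepaliveInterval = 5, KeepaliveLimit = 5), detecting a hung node takes at most:

    KeepaliveInterval * (KeepaliveLimit + 1) = 5 * (5 + 1) = 30 seconds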
     154.SS "LCP2PublicIPs"
     155.PP
     156Default: 1
     157.PP
     158When set to 1, ctdb uses the LCP2 IP allocation algorithm\&.
     159.SS "LockProcessesPerDB"
     160.PP
     161Default: 200
     162.PP
     163This is the maximum number of lock helper processes ctdb will create for obtaining record locks\&. When ctdb cannot get a record lock without blocking, it creates a helper process that waits for the lock to be obtained\&.
     164.SS "LogLatencyMs"
     165.PP
     166Default: 0
     167.PP
     168When set to non\-zero, ctdb will log if certain operations take longer than this value, in milliseconds, to complete\&. These operations include "process a record request from a client", "take a record or database lock", "update a persistent database record" and "vacuum a database"\&.
     169.SS "MaxQueueDropMsg"
     170.PP
     171Default: 1000000
     172.PP
     173This is the maximum number of messages to be queued up for a client before ctdb will treat the client as hung and will terminate the client connection\&.
     174.SS "MonitorInterval"
     175.PP
     176Default: 15
     177.PP
     178How often should ctdb run the \*(Aqmonitor\*(Aq event in seconds to check for a node\*(Aqs health\&.
     179.SS "MonitorTimeoutCount"
     180.PP
     181Default: 20
     182.PP
     183How many \*(Aqmonitor\*(Aq events in a row need to time out before a node is flagged as UNHEALTHY\&. This setting is useful if scripts cannot be written so that they do not hang for benign reasons\&.
     184.SS "NoIPFailback"
     185.PP
     186Default: 0
     187.PP
     188When set to 1, ctdb will not perform failback of IP addresses when a node becomes healthy\&. When a node becomes UNHEALTHY, ctdb WILL perform failover of public IP addresses, but when the node becomes HEALTHY again, ctdb will not fail the addresses back\&.
     189.PP
     190Use with caution! Normally when a node becomes available to the cluster ctdb will try to reassign public IP addresses onto the new node as a way to distribute the workload evenly across the cluster nodes\&. Ctdb tries to make sure that all running nodes host approximately the same number of public addresses\&.
     191.PP
     192When you enable this tunable, ctdb will no longer attempt to rebalance the cluster by failing IP addresses back to the new nodes\&. An unbalanced cluster will therefore remain unbalanced until there is manual intervention from the administrator\&. When this parameter is set, you can manually fail public IP addresses over to the new node(s) using the \*(Aqctdb moveip\*(Aq command\&.
     193.SS "NoIPHostOnAllDisabled"
     194.PP
     195Default: 0
     196.PP
     197If no nodes are HEALTHY then by default ctdb will happily host public IPs on disabled (unhealthy or administratively disabled) nodes\&. This can cause problems, for example if the underlying cluster filesystem is not mounted\&. When set to 1 on a node and that node is disabled, any IPs hosted by this node will be released and the node will not take over any IPs until it is no longer disabled\&.
     198.SS "NoIPTakeover"
     199.PP
     200Default: 0
     201.PP
     202When set to 1, ctdb will not allow IP addresses to be failed over onto this node\&. Any IP addresses that the node currently hosts will remain on the node but no new IP addresses can be failed over to the node\&.
     203.SS "PullDBPreallocation"
     204.PP
     205Default: 10*1024*1024
     206.PP
     207This is the size of the record buffer to pre\-allocate for sending a reply to the PULLDB control\&. Usually the record buffer starts with the size of the first record and gets reallocated every time a new record is added\&. For a large number of records, growing the record buffer one record at a time is very inefficient\&.
     208.SS "RecBufferSizeLimit"
     209.PP
     210Default: 1000000
     211.PP
     212This is the limit on the size of the record buffer to be sent in various controls\&. This limit is used by new controls used for recovery and controls used in vacuuming\&.
     213.SS "RecdFailCount"
     214.PP
     215Default: 10
     216.PP
     217If the recovery daemon has failed to ping the main daemon for this many consecutive intervals, the main daemon will consider the recovery daemon as hung and will try to restart it to recover\&.
     218.SS "RecdPingTimeout"
     219.PP
     220Default: 60
     221.PP
     222If the main daemon has not heard a "ping" from the recovery daemon for this many seconds, the main daemon will log a message that the recovery daemon is potentially hung\&. This also increments a counter which is checked against
     223\fIRecdFailCount\fR
     224for detection of hung recovery daemon\&.
     225.SS "RecLockLatencyMs"
    48226.PP
    49227Default: 1000
    50228.PP
    51 Some databases have seqnum tracking enabled, so that samba will be able to detect asynchronously when there have been updates to the database\&. Every time a database is updated its sequence number is increased\&.
    52 .PP
    53 This tunable is used to specify in \*(Aqms\*(Aq how frequently ctdb will send out updates to remote nodes to inform them that the sequence number is increased\&.
    54 .SS "ControlTimeout"
    55 .PP
    56 Default: 60
    57 .PP
    58 This is the default setting for timeout for when sending a control message to either the local or a remote ctdb daemon\&.
    59 .SS "TraverseTimeout"
    60 .PP
    61 Default: 20
    62 .PP
    63 This setting controls how long we allow a traverse process to run\&. After this timeout triggers, the main ctdb daemon will abort the traverse if it has not yet finished\&.
    64 .SS "KeepaliveInterval"
    65 .PP
    66 Default: 5
    67 .PP
    68 How often in seconds should the nodes send keepalives to each other\&.
    69 .SS "KeepaliveLimit"
    70 .PP
    71 Default: 5
    72 .PP
    73 After how many keepalive intervals without any traffic should a node wait until marking the peer as DISCONNECTED\&.
    74 .PP
    75 If a node has hung, it can thus take KeepaliveInterval*(KeepaliveLimit+1) seconds before we determine that the node is DISCONNECTED and that we require a recovery\&. This limit should not be set too high since we want a hung node to be detected, and expunged from the cluster well before common CIFS timeouts (45\-90 seconds) kick in\&.
     229When using a reclock file for split brain prevention, if set to non\-zero this tunable will make the recovery daemon log a message if the fcntl() call to lock/testlock the recovery file takes longer than this number of milliseconds\&.
     230.SS "RecoverInterval"
     231.PP
     232Default: 1
     233.PP
     234How frequently in seconds should the recovery daemon perform the consistency checks to determine if it should perform a recovery\&.
     235.SS "RecoverPDBBySeqNum"
     236.PP
     237Default: 1
     238.PP
     239When set to zero, database recovery for persistent databases is record\-by\-record and the recovery process simply collects the most recent version of every individual record\&.
     240.PP
     241When set to non\-zero, persistent databases will instead be recovered as a whole db and not by individual records\&. The node that contains the highest value stored in the record "__db_sequence_number__" is selected and the copy of that node\*(Aqs database is used as the recovered database\&.
     242.PP
     243By default, recovery of persistent databases is done using the __db_sequence_number__ record\&.
    76244.SS "RecoverTimeout"
    77245.PP
    78 Default: 20
     246Default: 120
    79247.PP
     80248This is the default setting for timeouts for controls when sent from the recovery daemon\&. We allow longer control timeouts from the recovery daemon than from normal use since the recovery daemon often uses controls that can take a lot longer than normal controls\&.
    81 .SS "RecoverInterval"
    82 .PP
    83 Default: 1
    84 .PP
    85 How frequently in seconds should the recovery daemon perform the consistency checks that determine if we need to perform a recovery or not\&.
    86 .SS "ElectionTimeout"
    87 .PP
    88 Default: 3
    89 .PP
    90 When electing a new recovery master, this is how many seconds we allow the election to take before we either deem the election finished or we fail the election and start a new one\&.
    91 .SS "TakeoverTimeout"
    92 .PP
    93 Default: 9
    94 .PP
    95 This is how many seconds we allow controls to take for IP failover events\&.
    96 .SS "MonitorInterval"
    97 .PP
    98 Default: 15
    99 .PP
    100 How often should ctdb run the event scripts to check for a node\*(Aqs health\&.
    101 .SS "TickleUpdateInterval"
    102 .PP
    103 Default: 20
    104 .PP
    105 How often will ctdb record and store the "tickle" information used to kickstart stalled tcp connections after a recovery\&.
    106 .SS "EventScriptTimeout"
    107 .PP
    108 Default: 30
    109 .PP
    110 Maximum time in seconds to allow an event to run before timing out\&. This is the total time for all enabled scripts that are run for an event, not just a single event script\&.
    111 .PP
    112 Note that timeouts are ignored for some events ("takeip", "releaseip", "startrecovery", "recovered") and converted to success\&. The logic here is that the callers of these events implement their own additional timeout\&.
    113 .SS "MonitorTimeoutCount"
    114 .PP
    115 Default: 20
    116 .PP
    117 How many monitor events in a row need to time out before a node is flagged as UNHEALTHY\&. This setting is useful if scripts cannot be written so that they do not hang for benign reasons\&.
     249.SS "RecoveryBanPeriod"
     250.PP
     251Default: 300
     252.PP
     253The duration in seconds for which a node is banned if the node fails during recovery\&. After this time has elapsed the node will automatically get unbanned and will attempt to rejoin the cluster\&.
     254.PP
     255A node usually gets banned due to real problems with the node\&. Don\*(Aqt set this value too small\&. Otherwise, a problematic node will try to re\-join the cluster too soon, causing unnecessary recoveries\&.
     256.SS "RecoveryDropAllIPs"
     257.PP
     258Default: 120
     259.PP
     260If a node is stuck in recovery, or stopped, or banned, for this many seconds, then ctdb will release all public addresses on that node\&.
    118261.SS "RecoveryGracePeriod"
    119262.PP
    120263Default: 120
    121264.PP
    122 During recoveries, if a node has not caused recovery failures during the last grace period, any records of transgressions that the node has caused recovery failures will be forgiven\&. This resets the ban\-counter back to zero for that node\&.
    123 .SS "RecoveryBanPeriod"
    124 .PP
    125 Default: 300
    126 .PP
    127 If a node keeps causing repetitive recovery failures, it will eventually become banned from the cluster\&. This controls how long the culprit node will be banned from the cluster before it is allowed to try to join the cluster again\&. Don\*(Aqt set this too small\&. A node gets banned for a reason and it is usually due to real problems with the node\&.
    128 .SS "DatabaseHashSize"
    129 .PP
    130 Default: 100001
    131 .PP
    132 Size of the hash chains for the local store of the tdbs that ctdb manages\&.
    133 .SS "DatabaseMaxDead"
    134 .PP
    135 Default: 5
    136 .PP
    137 How many dead records per hash chain in the TDB database do we allow before the freelist needs to be processed\&.
    138 .SS "RerecoveryTimeout"
    139 .PP
    140 Default: 10
    141 .PP
    142 Once a recovery has completed, no additional recoveries are permitted until this timeout has expired\&.
    143 .SS "EnableBans"
    144 .PP
    145 Default: 1
    146 .PP
    147 When set to 0, this disables BANNING completely in the cluster and thus nodes cannot get banned, even if they break\&. Don\*(Aqt set to 0 unless you know what you are doing\&. You should set this to the same value on all nodes to avoid unexpected behaviour\&.
    148 .SS "DeterministicIPs"
    149 .PP
    150 Default: 0
    151 .PP
    152 When enabled, this tunable makes ctdb try to keep public IP addresses locked to specific nodes as far as possible\&. This makes it easier for debugging since you can know that as long as all nodes are healthy public IP X will always be hosted by node Y\&.
    153 .PP
    154 The cost of using deterministic IP address assignment is that it disables part of the logic where ctdb tries to reduce the number of public IP assignment changes in the cluster\&. This tunable may increase the number of IP failover/failbacks that are performed on the cluster by a small margin\&.
    155 .SS "LCP2PublicIPs"
    156 .PP
    157 Default: 1
    158 .PP
    159 When enabled this switches ctdb to use the LCP2 ip allocation algorithm\&.
    160 .SS "ReclockPingPeriod"
    161 .PP
    162 Default: x
    163 .PP
    164 Obsolete
    165 .SS "NoIPFailback"
    166 .PP
    167 Default: 0
    168 .PP
    169 When set to 1, ctdb will not perform failback of IP addresses when a node becomes healthy\&. Ctdb WILL perform failover of public IP addresses when a node becomes UNHEALTHY, but when the node becomes HEALTHY again, ctdb will not fail the addresses back\&.
    170 .PP
    171 Use with caution! Normally when a node becomes available to the cluster ctdb will try to reassign public IP addresses onto the new node as a way to distribute the workload evenly across the cluster nodes\&. Ctdb tries to make sure that all running nodes host approximately the same number of public addresses\&.
    172 .PP
    173 When you enable this tunable, CTDB will no longer attempt to rebalance the cluster by failing IP addresses back to the new nodes\&. An unbalanced cluster will therefore remain unbalanced until there is manual intervention from the administrator\&. When this parameter is set, you can manually fail public IP addresses over to the new node(s) using the \*(Aqctdb moveip\*(Aq command\&.
    174 .SS "DisableIPFailover"
    175 .PP
    176 Default: 0
    177 .PP
    178 When enabled, ctdb will not perform failover or failback\&. Even if a node fails while holding public IPs, ctdb will not recover the IPs or assign them to another node\&.
    179 .PP
    180 When you enable this tunable, CTDB will no longer attempt to recover the cluster by failing IP addresses over to other nodes\&. This leads to a service outage until the administrator has manually performed failover to replacement nodes using the \*(Aqctdb moveip\*(Aq command\&.
    181 .SS "NoIPTakeover"
    182 .PP
    183 Default: 0
    184 .PP
    185 When set to 1, ctdb will not allow IP addresses to be failed over onto this node\&. Any IP addresses that the node currently hosts will remain on the node but no new IP addresses can be failed over to the node\&.
    186 .SS "NoIPHostOnAllDisabled"
    187 .PP
    188 Default: 0
    189 .PP
    190 If no nodes are healthy then by default ctdb will happily host public IPs on disabled (unhealthy or administratively disabled) nodes\&. This can cause problems, for example if the underlying cluster filesystem is not mounted\&. When set to 1 on a node and that node is disabled, any IPs hosted by this node will be released and the node will not take over any IPs until it is no longer disabled\&.
    191 .SS "DBRecordCountWarn"
    192 .PP
    193 Default: 100000
    194 .PP
    195 When set to non\-zero, ctdb will log a warning when we try to recover a database with more than this many records\&. This will produce a warning if a database grows uncontrollably with orphaned records\&.
    196 .SS "DBRecordSizeWarn"
    197 .PP
    198 Default: 10000000
    199 .PP
    200 When set to non\-zero, ctdb will log a warning when we try to recover a database where a single record is bigger than this\&. This will produce a warning if a database record grows uncontrollably with orphaned sub\-records\&.
    201 .SS "DBSizeWarn"
    202 .PP
    203 Default: 1000000000
    204 .PP
    205 When set to non\-zero, ctdb will log a warning when we try to recover a database bigger than this\&. This will produce a warning if a database grows uncontrollably\&.
    206 .SS "VerboseMemoryNames"
    207 .PP
    208 Default: 0
    209 .PP
    210 This feature consumes additional memory\&. When used, the talloc library will create more verbose names for all talloc allocated objects\&.
    211 .SS "RecdPingTimeout"
    212 .PP
    213 Default: 60
    214 .PP
    215 If the main daemon has not heard a "ping" from the recovery daemon for this many seconds, the main daemon will log a message that the recovery daemon is potentially hung\&.
    216 .SS "RecdFailCount"
    217 .PP
    218 Default: 10
    219 .PP
    220 If the recovery daemon has failed to ping the main daemon for this many consecutive intervals, the main daemon will consider the recovery daemon as hung and will try to restart it to recover\&.
    221 .SS "LogLatencyMs"
    222 .PP
    223 Default: 0
    224 .PP
    225 When set to non\-zero, this will make the main daemon log any operation that took longer than this value, in \*(Aqms\*(Aq, to complete\&. These include "how long a lockwait child process needed", "how long it took to write to a persistent database" but also "how long it took to get a response to a CALL from a remote node"\&.
    226 .SS "RecLockLatencyMs"
    227 .PP
    228 Default: 1000
    229 .PP
    230 When using a reclock file for split brain prevention, if set to non\-zero this tunable will make the recovery daemon log a message if the fcntl() call to lock/testlock the recovery file takes longer than this number of ms\&.
    231 .SS "RecoveryDropAllIPs"
    232 .PP
    233 Default: 120
    234 .PP
    235 If we have been stuck in recovery, stopped, or banned mode for this many seconds, we will force drop all held public addresses\&.
    236 .SS "VacuumInterval"
    237 .PP
    238 Default: 10
    239 .PP
    240 Periodic interval in seconds when vacuuming is triggered for volatile databases\&.
    241 .SS "VacuumMaxRunTime"
    242 .PP
    243 Default: 120
    244 .PP
    245 The maximum time in seconds for which the vacuuming process is allowed to run\&. If the vacuuming process takes longer than this value, then the vacuuming process is terminated\&.
     265During recoveries, if a node has not caused any recovery failures during the last grace period (in seconds), all records of the recovery failures that the node previously caused are forgiven\&. This resets the ban\-counter back to zero for that node\&.
    246266.SS "RepackLimit"
    247267.PP
     
    249269.PP
    250270During vacuuming, if the number of freelist records are more than
    251 \fIRepackLimit\fR, then databases are repacked to get rid of the freelist records to avoid fragmentation\&.
     271\fIRepackLimit\fR, then the database is repacked to get rid of the freelist records to avoid fragmentation\&.
    252272.PP
    253273Databases are repacked only if both
     
    256276\fIVacuumLimit\fR
    257277are exceeded\&.
     278.SS "RerecoveryTimeout"
     279.PP
     280Default: 10
     281.PP
     282Once a recovery has completed, no additional recoveries are permitted until this timeout in seconds has expired\&.
     283.SS "Samba3AvoidDeadlocks"
     284.PP
     285Default: 0
     286.PP
     287If set to non\-zero, enable code that prevents deadlocks with Samba (only for Samba 3\&.x)\&.
     288.PP
     289This should be set to 1 only when using Samba version 3\&.x to enable special code in ctdb to avoid deadlock with Samba version 3\&.x\&. This code is not required for Samba version 4\&.x and must not be enabled for Samba 4\&.x\&.
     290.SS "SeqnumInterval"
     291.PP
     292Default: 1000
     293.PP
     294Some databases have seqnum tracking enabled, so that samba will be able to detect asynchronously when there have been updates to the database\&. Every time a database is updated its sequence number is increased\&.
     295.PP
     296This tunable is used to specify in milliseconds how frequently ctdb will send out updates to remote nodes to inform them that the sequence number is increased\&.
     297.SS "StatHistoryInterval"
     298.PP
     299Default: 1
     300.PP
     301Granularity of the statistics collected in the statistics history\&. This is reported by the \*(Aqctdb stats\*(Aq command\&.
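As an illustration, the collected history mentioned above is viewed with the command the text itself names (assuming a running cluster):

    # report the statistics history whose granularity StatHistoryInterval controls
    ctdb stats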
     302.SS "StickyDuration"
     303.PP
     304Default: 600
     305.PP
     306Once a record has been marked STICKY, this is the duration in seconds for which the record will be flagged as a STICKY record\&.
     307.SS "StickyPindown"
     308.PP
     309Default: 200
     310.PP
     311Once a STICKY record has been migrated onto a node, it will be pinned down on that node for this number of milliseconds\&. Any request from other nodes to migrate the record off the node will be deferred\&.
     312.SS "TakeoverTimeout"
     313.PP
     314Default: 9
     315.PP
     316This is the duration in seconds in which ctdb tries to complete IP failover\&.
     317.SS "TDBMutexEnabled"
     318.PP
     319Default: 0
     320.PP
     321This parameter enables the TDB_MUTEX_LOCKING feature on volatile databases if robust mutexes are supported\&. This optimizes record locking using robust mutexes and is much more efficient than using posix locks\&.
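A hedged sketch of turning this optimization on at runtime, assuming the platform's robust mutex support is present:

    # enable mutex-based locking for volatile databases on this node
    ctdb setvar TDBMutexEnabled 1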
     322.SS "TickleUpdateInterval"
     323.PP
     324Default: 20
     325.PP
     326Every
     327\fITickleUpdateInterval\fR
     328seconds, ctdb synchronizes the client connection information across nodes\&.
     329.SS "TraverseTimeout"
     330.PP
     331Default: 20
     332.PP
     333This is the duration in seconds for which a database traverse is allowed to run\&. If the traverse does not complete during this interval, ctdb will abort the traverse\&.
     334.SS "VacuumFastPathCount"
     335.PP
     336Default: 60
     337.PP
     338During a vacuuming run, ctdb usually processes only the records marked for deletion; this is also called fast path vacuuming\&. After finishing
     339\fIVacuumFastPathCount\fR
     340fast path vacuuming runs, ctdb will trigger a scan of the complete database for any empty records that need to be deleted\&.
     341.SS "VacuumInterval"
     342.PP
     343Default: 10
     344.PP
     345Periodic interval in seconds when vacuuming is triggered for volatile databases\&.
    258346.SS "VacuumLimit"
    259347.PP
     
    268356\fIVacuumLimit\fR
    269357are exceeded\&.
    270 .SS "VacuumFastPathCount"
    271 .PP
    272 Default: 60
    273 .PP
    274 When a record is deleted, it is marked for deletion during vacuuming\&. The vacuuming process usually processes this list to purge the records from the database\&. If the number of records marked for deletion is more than VacuumFastPathCount, then the vacuuming process will scan the complete database for empty records instead of using the list of records marked for deletion\&.
    275 .SS "DeferredAttachTO"
    276 .PP
    277 Default: 120
    278 .PP
    279 When databases are frozen we do not allow clients to attach to the databases\&. Instead of returning an error immediately to the application, the attach request from the client is deferred until the database becomes available again, at which stage we respond to the client\&.
    280 .PP
    281 This timeout controls how long we will defer the request from the client before timing it out and returning an error to the client\&.
    282 .SS "HopcountMakeSticky"
    283 .PP
    284 Default: 50
    285 .PP
    286 If the database is set to \*(AqSTICKY\*(Aq mode, using the \*(Aqctdb setdbsticky\*(Aq command, any record that is seen as very hot and migrating so fast that hopcount surpasses 50 is set to become a STICKY record for StickyDuration seconds\&. This means that after each migration the record will be kept on the node and prevented from being migrated off the node\&.
    287 .PP
    288 This setting allows one to try to identify such records and stop them from migrating across the cluster so fast\&. This will improve performance for certain workloads, such as locking\&.tdb if many clients are opening/closing the same file concurrently\&.
    289 .SS "StickyDuration"
    290 .PP
    291 Default: 600
    292 .PP
    293 Once a record has been found to be fetch\-lock hot and has been flagged to become STICKY, this is for how long, in seconds, the record will be flagged as a STICKY record\&.
    294 .SS "StickyPindown"
    295 .PP
    296 Default: 200
    297 .PP
    298 Once a STICKY record has been migrated onto a node, it will be pinned down on that node for this number of ms\&. Any request from other nodes to migrate the record off the node will be deferred until the pindown timer expires\&.
    299 .SS "StatHistoryInterval"
    300 .PP
    301 Default: 1
    302 .PP
    303 Granularity of the statistics collected in the statistics history\&.
    304 .SS "AllowClientDBAttach"
    305 .PP
    306 Default: 1
    307 .PP
    308 When set to 0, clients are not allowed to attach to any databases\&. This can be used to temporarily block any new processes from attaching to and accessing the databases\&.
    309 .SS "RecoverPDBBySeqNum"
    310 .PP
    311 Default: 1
    312 .PP
    313 When set to zero, database recovery for persistent databases is record\-by\-record and the recovery process simply collects the most recent version of every individual record\&.
    314 .PP
    315 When set to non\-zero, persistent databases will instead be recovered as a whole db and not by individual records\&. The node that contains the highest value stored in the record "__db_sequence_number__" is selected and the copy of that node\*(Aqs database is used as the recovered database\&.
    316 .PP
    317 By default, recovery of persistent databases is done using the __db_sequence_number__ record\&.
    318 .SS "FetchCollapse"
    319 .PP
    320 Default: 1
    321 .PP
    322 When many clients across many nodes try to access the same record at the same time this can lead to a fetch storm where the record becomes very active and bounces between nodes very fast\&. This leads to high CPU utilization of the ctdbd daemon, trying to bounce that record around very fast, and poor performance\&.
    323 .PP
    324 This parameter is used to activate a fetch\-collapse\&. A fetch\-collapse is when we track which records have requests in flight so that we only keep one request in flight from a certain node, even if multiple smbd processes are attempting to fetch the record at the same time\&. This can improve performance and reduce CPU utilization for certain workloads\&.
    325 .PP
    326 This timeout controls if we should collapse multiple fetch operations of the same record into a single request and defer all duplicates or not\&.
    327 .SS "Samba3AvoidDeadlocks"
    328 .PP
    329 Default: 0
    330 .PP
    331 Enable code that prevents deadlocks with Samba (only for Samba 3\&.x)\&.
    332 .PP
    333 This should be set to 1 when using Samba version 3\&.x to enable special code in CTDB to avoid deadlock with Samba version 3\&.x\&. This code is not required for Samba version 4\&.x and must not be enabled for Samba 4\&.x\&.
     358.SS "VacuumMaxRunTime"
     359.PP
     360Default: 120
     361.PP
     362The maximum time in seconds for which the vacuuming process is allowed to run\&. If the vacuuming process takes longer than this value, then the vacuuming process is terminated\&.
     363.SS "VerboseMemoryNames"
     364.PP
     365Default: 0
     366.PP
     367When set to non\-zero, ctdb assigns verbose names for some of the talloc allocated memory objects\&. These names are visible in the talloc memory report generated by \*(Aqctdb dumpmemory\*(Aq\&.
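To illustrate the pair of commands this entry references, a minimal sketch (both commands appear in the text above):

    # enable verbose talloc names, then dump the memory report they show up in
    ctdb setvar VerboseMemoryNames 1
    ctdb dumpmemory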
    334368.SH "SEE ALSO"
    335369.PP