<html><head><meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"><title>ctdb-tunables</title><meta name="generator" content="DocBook XSL Stylesheets V1.78.1"></head><body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF"><div class="refentry"><a name="ctdb-tunables.7"></a><div class="titlepage"></div><div class="refnamediv"><h2>Name</h2><p>ctdb-tunables &#8212; CTDB tunable configuration variables</p></div><div class="refsect1"><a name="idp51068080"></a><h2>DESCRIPTION</h2><p>
      CTDB's behaviour can be configured by setting run-time tunable
      variables. This lists and describes all tunables. See the
      <span class="citerefentry"><span class="refentrytitle">ctdb</span>(1)</span>
      <span class="command"><strong>listvars</strong></span>, <span class="command"><strong>setvar</strong></span> and
      <span class="command"><strong>getvar</strong></span> commands for more details.
    </p><p>
      The tunable variables are listed alphabetically.
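    </p><p>
      For example, tunables can be inspected and changed at run time
      with the <span class="command"><strong>ctdb</strong></span> tool;
      the values shown here are illustrative:
    </p><pre class="screen">
# List all tunable variables and their current values
ctdb listvars

# Read a single tunable
ctdb getvar MonitorInterval

# Change a tunable at run time on the local node
ctdb setvar MonitorInterval 20
</pre>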
<div class="refsect2"><a name="idp51120048"></a><h3>AllowClientDBAttach</h3><p>Default: 1</p><p>
      When set to 0, clients are not allowed to attach to any databases.
      This can be used to temporarily block any new processes from
      attaching to and accessing the databases. This is mainly used
      for detaching a volatile database using 'ctdb detach'.
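    </p><p>
      A possible sequence for detaching a volatile database, assuming a
      hypothetical database name <code class="filename">test.tdb</code>:
    </p><pre class="screen">
# Block new client attachments on every node
onnode all ctdb setvar AllowClientDBAttach 0

# Detach the volatile database
ctdb detach test.tdb

# Allow clients to attach again
onnode all ctdb setvar AllowClientDBAttach 1
</pre>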
</div><div class="refsect2"><a name="idp53889776"></a><h3>AllowUnhealthyDBRead</h3><p>Default: 0</p><p>
      When set to 1, ctdb allows database traverses to read unhealthy
      databases. By default, ctdb does not allow reading records from
      unhealthy databases.
    </p></div><div class="refsect2"><a name="idp54131312"></a><h3>ControlTimeout</h3><p>Default: 60</p><p>
      This is the default timeout in seconds when sending a
      control message to either the local or a remote ctdb daemon.
    </p></div><div class="refsect2"><a name="idp51364816"></a><h3>DatabaseHashSize</h3><p>Default: 100001</p><p>
      The number of hash chains for the local store of the tdbs that
      ctdb manages.
    </p></div><div class="refsect2"><a name="idp53157488"></a><h3>DatabaseMaxDead</h3><p>Default: 5</p><p>
      Maximum number of dead records per hash chain for the tdb databases
      managed by ctdb.
    </p></div><div class="refsect2"><a name="idp50010288"></a><h3>DBRecordCountWarn</h3><p>Default: 100000</p><p>
      When set to non-zero, ctdb will log a warning during recovery if
      a database has more than this many records. This will produce a
      warning if a database grows uncontrollably with orphaned records.
    </p></div><div class="refsect2"><a name="idp49085760"></a><h3>DBRecordSizeWarn</h3><p>Default: 10000000</p><p>
      When set to non-zero, ctdb will log a warning during recovery
      if a single record is bigger than this size. This will produce
      a warning if a database record grows uncontrollably.
    </p></div><div class="refsect2"><a name="idp49087568"></a><h3>DBSizeWarn</h3><p>Default: 1000000000</p><p>
      When set to non-zero, ctdb will log a warning during recovery if
      a database size is bigger than this. This will produce a warning
      if a database grows uncontrollably.
    </p></div><div class="refsect2"><a name="idp49089360"></a><h3>DeferredAttachTO</h3><p>Default: 120</p><p>
      When databases are frozen we do not allow clients to attach to
      the databases. Instead of returning an error immediately to the
      client, the attach request from the client is deferred until
      the database becomes available again, at which stage we respond
      to the client.
    </p><p>
      This timeout controls how long we will defer the request from the
      client before timing it out and returning an error to the client.
    </p></div><div class="refsect2"><a name="idp54043296"></a><h3>DeterministicIPs</h3><p>Default: 0</p><p>
      When set to 1, ctdb will try to keep public IP addresses locked
      to specific nodes as far as possible. This makes it easier
      for debugging since you can know that as long as all nodes are
      healthy public IP X will always be hosted by node Y.
    </p><p>
      The cost of using deterministic IP address assignment is that it
      disables part of the logic where ctdb tries to reduce the number
      of public IP assignment changes in the cluster. This tunable may
      increase the number of IP failover/failbacks that are performed
      on the cluster by a small margin.
    </p></div><div class="refsect2"><a name="idp54045872"></a><h3>DisableIPFailover</h3><p>Default: 0</p><p>
      When set to non-zero, ctdb will not perform failover or
      failback. Even if a node fails while holding public IPs, ctdb
      will not recover the IPs or assign them to another node.
    </p><p>
      When this tunable is enabled, ctdb will no longer attempt
      to recover the cluster by failing IP addresses over to other
      nodes. This leads to a service outage until the administrator
      has manually performed IP failover to replacement nodes using the
      'ctdb moveip' command.
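    </p><p>
      For example, to manually move a public address to another node
      while failover is disabled (the address and node number are
      illustrative):
    </p><pre class="screen">
# Move public IP 10.1.1.1 to the node with PNN 1
ctdb moveip 10.1.1.1 1
</pre>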
</div><div class="refsect2"><a name="idp54048368"></a><h3>ElectionTimeout</h3><p>Default: 3</p><p>
      The number of seconds to wait for the election of the recovery
      master to complete. If the election is not completed during this
      interval, then that round of election fails and ctdb starts a
      new election.
    </p></div><div class="refsect2"><a name="idp54050192"></a><h3>EnableBans</h3><p>Default: 1</p><p>
      This parameter allows ctdb to ban a node if the node is misbehaving.
    </p><p>
      When set to 0, this disables banning completely in the cluster
      and thus nodes cannot get banned, even if they break. Don't
      set to 0 unless you know what you are doing. You should set
      this to the same value on all nodes to avoid unexpected behaviour.
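    </p><p>
      A consistent cluster-wide setting can be applied with the
      <span class="command"><strong>onnode</strong></span> utility; this
      sketch assumes the default nodes file:
    </p><pre class="screen">
# Set EnableBans to the same value on all nodes
onnode all ctdb setvar EnableBans 1
</pre>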
</div><div class="refsect2"><a name="idp54052448"></a><h3>EventScriptTimeout</h3><p>Default: 30</p><p>
      Maximum time in seconds to allow an event to run before timing
      out. This is the total time for all enabled scripts that are
      run for an event, not just a single event script.
    </p><p>
      Note that timeouts are ignored for some events ("takeip",
      "releaseip", "startrecovery", "recovered") and converted to
      success. The logic here is that the callers of these events
      implement their own additional timeout.
    </p></div><div class="refsect2"><a name="idp54054880"></a><h3>FetchCollapse</h3><p>Default: 1</p><p>
      This parameter is used to avoid multiple migration requests for
      the same record from a single node. All the requests for the
      same record are queued up and processed when the record is
      migrated to the current node.
    </p><p>
      When many clients across many nodes try to access the same record
      at the same time this can lead to a fetch storm where the record
      becomes very active and bounces between nodes very fast. This
      leads to high CPU utilization of the ctdbd daemon, trying to
      bounce that record around very fast, and poor performance.
      Collapsing these requests can improve performance and reduce CPU
      utilization for certain workloads.
</p></div><div class="refsect2"><a name="idp48966640"></a><h3>HopcountMakeSticky</h3><p>Default: 50</p><p>
      For database(s) marked STICKY (using 'ctdb setdbsticky'),
      any record that is migrating so fast that its hopcount
      exceeds this limit is marked as a STICKY record for
      <code class="varname">StickyDuration</code> seconds. This means that
      after each migration the sticky record will be kept on the node
      for <code class="varname">StickyPindown</code> milliseconds and
      prevented from being migrated off the node.
    </p><p>
      This will improve performance for certain workloads, such as
      locking.tdb if many clients are opening/closing the same file
      concurrently.
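    </p><p>
      For example, to make a database sticky (the database name is
      illustrative):
    </p><pre class="screen">
# Mark locking.tdb as a STICKY database
ctdb setdbsticky locking.tdb
</pre>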
</div><div class="refsect2"><a name="idp48969952"></a><h3>KeepaliveInterval</h3><p>Default: 5</p><p>
      How often in seconds the nodes should send keep-alive packets to
      each other.
    </p></div><div class="refsect2"><a name="idp48971552"></a><h3>KeepaliveLimit</h3><p>Default: 5</p><p>
      The number of keepalive intervals without any traffic after which
      a node marks its peer as DISCONNECTED.
    </p><p>
      If a node has hung, it can take
      <code class="varname">KeepaliveInterval</code> *
      (<code class="varname">KeepaliveLimit</code> + 1) seconds before
      ctdb determines that the node is DISCONNECTED and performs
      a recovery. With the default values this is 5 * (5 + 1) = 30
      seconds. This limit should not be set too high, to enable early
      detection and to avoid application timeouts (e.g. SMB1) kicking
      in before the failover is completed.
</p></div><div class="refsect2"><a name="idp48974864"></a><h3>LCP2PublicIPs</h3><p>Default: 1</p><p>
      When set to 1, ctdb uses the LCP2 IP allocation algorithm.
    </p></div><div class="refsect2"><a name="idp48976464"></a><h3>LockProcessesPerDB</h3><p>Default: 200</p><p>
      This is the maximum number of lock helper processes ctdb will
      create for obtaining record locks. When ctdb cannot get a record
      lock without blocking, it creates a helper process that waits
      for the lock to be obtained.
    </p></div><div class="refsect2"><a name="idp48978304"></a><h3>LogLatencyMs</h3><p>Default: 0</p><p>
      When set to non-zero, ctdb will log if certain operations
      take longer than this value, in milliseconds, to complete.
      These operations include "process a record request from client",
      "take a record or database lock", "update a persistent database
      record" and "vacuum a database".
    </p></div><div class="refsect2"><a name="idp48980208"></a><h3>MaxQueueDropMsg</h3><p>Default: 1000000</p><p>
      This is the maximum number of messages to be queued up for
      a client before ctdb will treat the client as hung and will
      terminate the client connection.
    </p></div><div class="refsect2"><a name="idp48981984"></a><h3>MonitorInterval</h3><p>Default: 15</p><p>
      How often in seconds ctdb should run the 'monitor' event to check
      a node's health.
    </p></div><div class="refsect2"><a name="idp48988480"></a><h3>MonitorTimeoutCount</h3><p>Default: 20</p><p>
      How many 'monitor' events in a row need to time out before a node
      is flagged as UNHEALTHY. This setting is useful if scripts cannot
      be written so that they do not hang for benign reasons.
    </p></div><div class="refsect2"><a name="idp48990288"></a><h3>NoIPFailback</h3><p>Default: 0</p><p>
      When set to 1, ctdb will not perform failback of IP addresses
      when a node becomes healthy. When a node becomes UNHEALTHY,
      ctdb WILL perform failover of public IP addresses, but when the
      node becomes HEALTHY again, ctdb will not fail the addresses back.
    </p><p>
      Use with caution! Normally when a node becomes available to the
      cluster ctdb will try to reassign public IP addresses onto the
      new node as a way to distribute the workload evenly across the
      cluster. Ctdb tries to make sure that all running nodes host
      approximately the same number of public addresses.
    </p><p>
      When you enable this tunable, ctdb will no longer attempt to
      rebalance the cluster by failing IP addresses back to the new
      nodes. An unbalanced cluster will therefore remain unbalanced
      until there is manual intervention from the administrator. When
      this parameter is set, you can manually fail public IP addresses
      over to the new node(s) using the 'ctdb moveip' command.
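    </p><p>
      For example, to rebalance manually after a node rejoins (the
      address and node number are illustrative):
    </p><pre class="screen">
# Fail public IP 10.1.1.2 back to the recovered node with PNN 2
ctdb moveip 10.1.1.2 2
</pre>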
</div><div class="refsect2"><a name="idp48993680"></a><h3>NoIPHostOnAllDisabled</h3><p>Default: 0</p><p>
      If no nodes are HEALTHY then by default ctdb will happily host
      public IPs on disabled (unhealthy or administratively disabled)
      nodes. This can cause problems, for example if the underlying
      cluster filesystem is not mounted. When set to 1 on a node and
      that node is disabled, any IPs hosted by this node will be
      released and the node will not take over any IPs until it is no
      longer disabled.
    </p></div><div class="refsect2"><a name="idp48995696"></a><h3>NoIPTakeover</h3><p>Default: 0</p><p>
      When set to 1, ctdb will not allow IP addresses to be failed
      over onto this node. Any IP addresses that the node currently
      hosts will remain on the node but no new IP addresses can be
      failed over to the node.
    </p></div><div class="refsect2"><a name="idp48997536"></a><h3>PullDBPreallocation</h3><p>Default: 10*1024*1024</p><p>
      This is the size of the record buffer to pre-allocate when sending
      a reply to the PULLDB control. Usually the record buffer starts
      with the size of the first record and gets reallocated every time
      a new record is added to the record buffer. For a large number of
      records, growing the record buffer one record at a time can be
      very inefficient.
    </p></div><div class="refsect2"><a name="idp48999504"></a><h3>RecBufferSizeLimit</h3><p>Default: 1000000</p><p>
      This is the limit on the size of the record buffer to be sent
      in various controls. This limit is used by new controls used
      for recovery and by controls used in vacuuming.
    </p></div><div class="refsect2"><a name="idp49001328"></a><h3>RecdFailCount</h3><p>Default: 10</p><p>
      If the recovery daemon has failed to ping the main daemon for
      this many consecutive intervals, the main daemon will consider
      the recovery daemon as hung and will try to restart it to recover.
    </p></div><div class="refsect2"><a name="idp49003152"></a><h3>RecdPingTimeout</h3><p>Default: 60</p><p>
      If the main daemon has not heard a "ping" from the recovery daemon
      for this many seconds, the main daemon will log a message that
      the recovery daemon is potentially hung. This also increments a
      counter which is checked against <code class="varname">RecdFailCount</code>
      for detection of a hung recovery daemon. With the default values,
      a hung recovery daemon is restarted after roughly 10 * 60 = 600
      seconds.
</p></div><div class="refsect2"><a name="idp49005424"></a><h3>RecLockLatencyMs</h3><p>Default: 1000</p><p>
      When using a reclock file for split brain prevention, if set
      to non-zero this tunable will make the recovery daemon log a
      message if the fcntl() call to lock/testlock the recovery file
      takes longer than this number of milliseconds.
    </p></div><div class="refsect2"><a name="idp49007280"></a><h3>RecoverInterval</h3><p>Default: 1</p><p>
      How frequently in seconds the recovery daemon should perform the
      consistency checks to determine if it should perform a recovery.
    </p></div><div class="refsect2"><a name="idp49009040"></a><h3>RecoverPDBBySeqNum</h3><p>Default: 1</p><p>
      When set to zero, database recovery for persistent databases is
      record-by-record and the recovery process simply collects the most
      recent version of every individual record.
    </p><p>
      When set to non-zero, persistent databases will instead be
      recovered as a whole db and not by individual records. The
      node that contains the highest value stored in the record
      "__db_sequence_number__" is selected and the copy of that node's
      database is used as the recovered database.
    </p><p>
      By default, recovery of persistent databases is done using the
      __db_sequence_number__ record.
    </p></div><div class="refsect2"><a name="idp54874960"></a><h3>RecoverTimeout</h3><p>Default: 120</p><p>
      This is the default setting for timeouts for controls when sent
      from the recovery daemon. We allow longer control timeouts from
      the recovery daemon than from normal use since the recovery
      daemon often uses controls that can take a lot longer than normal
      controls.
    </p></div><div class="refsect2"><a name="idp54876784"></a><h3>RecoveryBanPeriod</h3><p>Default: 300</p><p>
      The duration in seconds for which a node is banned if the node
      fails during recovery. After this time has elapsed the node will
      automatically get unbanned and will attempt to rejoin the cluster.
    </p><p>
      A node usually gets banned due to real problems with the node.
      Don't set this value too small. Otherwise, a problematic node
      will try to re-join the cluster too soon, causing unnecessary
      recoveries.
    </p></div><div class="refsect2"><a name="idp54879184"></a><h3>RecoveryDropAllIPs</h3><p>Default: 120</p><p>
      If a node is stuck in recovery, or stopped, or banned, for this
      many seconds, then ctdb will release all public addresses on
      that node.
    </p></div><div class="refsect2"><a name="idp54880880"></a><h3>RecoveryGracePeriod</h3><p>Default: 120</p><p>
      During recoveries, if a node has not caused recovery failures
      during the last grace period in seconds, any records of recovery
      failures that the node has caused are forgiven. This resets the
      ban-counter back to zero for that node.
    </p></div><div class="refsect2"><a name="idp54882720"></a><h3>RepackLimit</h3><p>Default: 10000</p><p>
      During vacuuming, if the number of freelist records is more than
      <code class="varname">RepackLimit</code>, then the database is repacked
      to get rid of the freelist records and avoid fragmentation.
    </p><p>
      Databases are repacked only if both <code class="varname">RepackLimit</code>
      and <code class="varname">VacuumLimit</code> are exceeded.
</p></div><div class="refsect2"><a name="idp54885920"></a><h3>RerecoveryTimeout</h3><p>Default: 10</p><p>
      Once a recovery has completed, no additional recoveries are
      permitted until this timeout in seconds has expired.
    </p></div><div class="refsect2"><a name="idp54887600"></a><h3>Samba3AvoidDeadlocks</h3><p>Default: 0</p><p>
      If set to non-zero, enable code that prevents deadlocks with Samba
      (only for Samba 3.x).
    </p><p>
      This should be set to 1 only when using Samba version 3.x
      to enable special code in ctdb to avoid deadlock with Samba
      version 3.x. This code is not required for Samba version 4.x
      and must not be enabled for Samba 4.x.
    </p></div><div class="refsect2"><a name="idp54889888"></a><h3>SeqnumInterval</h3><p>Default: 1000</p><p>
      Some databases have seqnum tracking enabled, so that samba will
      be able to detect asynchronously when there have been updates
      to the database. Every time a database is updated its sequence
      number is increased.
    </p><p>
      This tunable is used to specify in milliseconds how frequently
      ctdb will send out updates to remote nodes to inform them that
      the sequence number has increased.
    </p></div><div class="refsect2"><a name="idp54892240"></a><h3>StatHistoryInterval</h3><p>Default: 1</p><p>
      Granularity of the statistics collected in the statistics
      history. This is reported by the 'ctdb stats' command.
    </p></div><div class="refsect2"><a name="idp54893904"></a><h3>StickyDuration</h3><p>Default: 600</p><p>
      Once a record has been marked STICKY, this is the duration in
      seconds for which the record will be flagged as a STICKY record.
    </p></div><div class="refsect2"><a name="idp54895584"></a><h3>StickyPindown</h3><p>Default: 200</p><p>
      Once a STICKY record has been migrated onto a node, it will be
      pinned down on that node for this number of milliseconds. Any
      request from other nodes to migrate the record off the node will
      be deferred.
    </p></div><div class="refsect2"><a name="idp54897344"></a><h3>TakeoverTimeout</h3><p>Default: 9</p><p>
      This is the duration in seconds in which ctdb tries to complete IP
      failover.
    </p></div><div class="refsect2"><a name="idp54898880"></a><h3>TDBMutexEnabled</h3><p>Default: 0</p><p>
      This parameter enables the TDB_MUTEX_LOCKING feature on volatile
      databases if robust mutexes are supported. This optimizes the
      record locking using robust mutexes and is much more efficient
      than using POSIX locks.
</p></div><div class="refsect2"><a name="idp54900656"></a><h3>TickleUpdateInterval</h3><p>Default: 20</p><p>
      Every <code class="varname">TickleUpdateInterval</code> seconds, ctdb
      synchronizes the client connection information across nodes.
    </p></div><div class="refsect2"><a name="idp54902576"></a><h3>TraverseTimeout</h3><p>Default: 20</p><p>
      This is the duration in seconds for which a database traverse
      is allowed to run. If the traverse does not complete during
      this interval, ctdb will abort the traverse.
    </p></div><div class="refsect2"><a name="idp54904304"></a><h3>VacuumFastPathCount</h3><p>Default: 60</p><p>
      During a vacuuming run, ctdb usually processes only the records
      marked for deletion; this is called fast path vacuuming. After
      finishing <code class="varname">VacuumFastPathCount</code> number of fast
      path vacuuming runs, ctdb will trigger a scan of the complete
      database for any empty records that need to be deleted.
    </p></div><div class="refsect2"><a name="idp54906560"></a><h3>VacuumInterval</h3><p>Default: 10</p><p>
      Periodic interval in seconds when vacuuming is triggered for
      volatile databases.
    </p></div><div class="refsect2"><a name="idp54908224"></a><h3>VacuumLimit</h3><p>Default: 5000</p><p>
      During vacuuming, if the number of deleted records is more than
      <code class="varname">VacuumLimit</code>, then databases are repacked to
      avoid fragmentation.
    </p><p>
      Databases are repacked only if both <code class="varname">RepackLimit</code>
      and <code class="varname">VacuumLimit</code> are exceeded.
    </p></div><div class="refsect2"><a name="idp54911392"></a><h3>VacuumMaxRunTime</h3><p>Default: 120</p><p>
      The maximum time in seconds for which the vacuuming process is
      allowed to run. If the vacuuming process takes longer than this
      value, then the vacuuming process is terminated.
    </p></div><div class="refsect2"><a name="idp54913152"></a><h3>VerboseMemoryNames</h3><p>Default: 0</p><p>
      When set to non-zero, ctdb assigns verbose names for some of
      the talloc allocated memory objects. These names are visible
      in the talloc memory report generated by 'ctdb dumpmemory'.
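    </p><p>
      For example, to get a more detailed memory report:
    </p><pre class="screen">
# Enable verbose talloc names, then dump the memory report
ctdb setvar VerboseMemoryNames 1
ctdb dumpmemory
</pre>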
</div></div><div class="refsect1"><a name="idp54915024"></a><h2>SEE ALSO</h2><p>
      <span class="citerefentry"><span class="refentrytitle">ctdb</span>(1)</span>,

      <span class="citerefentry"><span class="refentrytitle">ctdbd</span>(1)</span>,

      <span class="citerefentry"><span class="refentrytitle">ctdbd.conf</span>(5)</span>,

      <span class="citerefentry"><span class="refentrytitle">ctdb</span>(7)</span>,

      <a class="ulink" href="http://ctdb.samba.org/" target="_top">http://ctdb.samba.org/</a>
    </p></div></div></body></html>