<!--#set var="TITLE" value="CTDB and ClamAV Daemon" -->
<!--#include virtual="header.html" -->

<h1>Setting up ClamAV with CTDB</h1>

<h2>Prereqs</h2>
Configure CTDB as described above and set it up to use public IP addresses.<br>
Verify that the CTDB cluster works.
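<br><br>
One quick way to verify this (a sketch; it assumes the ctdb tool is installed and the cluster is already running) is to check that every node is listed as OK, with no DISCONNECTED, UNHEALTHY or DISABLED flags:
<pre>
  # run on any node in the cluster
  ctdb status
</pre>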

<h2>Configuration</h2>

Configure clamd on each node in the cluster.<br><br>
For details on how to configure clamd, see its documentation.
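<br><br>
As a starting point, a minimal clamd configuration could look something like the sketch below. The file location varies between distributions (for example /etc/clamd.conf or /etc/clamd.d/*.conf), and the socket path here is only an example; whatever path you choose must match the CTDB_CLAMD_SOCKET setting described in the next section.
<pre>
  # minimal clamd configuration sketch -- adjust paths for your distribution
  LogSyslog yes
  LocalSocket /path/to/clamd.sock
</pre>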

<h2>/etc/sysconfig/ctdb</h2>

Add the following lines to the /etc/sysconfig/ctdb configuration file:
<pre>
  CTDB_MANAGES_CLAMD=yes
  CTDB_CLAMD_SOCKET="/path/to/clamd.sock"
</pre>

Disable clamd in chkconfig so that it does not start by default. Instead, CTDB will start/stop clamd as required.
<pre>
  chkconfig clamd off
</pre>
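On distributions that use systemd instead of SysV init scripts, the equivalent step (assuming the service is named clamd on your system) would be:
<pre>
  systemctl disable clamd
</pre>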

<h2>Events script</h2>

The CTDB distribution already comes with an events script for clamd in the file /etc/ctdb/events.d/31.clamd<br><br>
There should not be any need to edit this file.
You only need to make it executable, with a command like this:
<pre>
  chmod +x /etc/ctdb/events.d/31.clamd
</pre>
To check that ctdb is monitoring and handling clamd, check the output of this command:
<pre>
  ctdb scriptstatus
</pre>
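To confirm that clamd itself is answering on the configured socket, you can also send it a PING command by hand. This is just a sketch: it assumes the socat utility is available and uses the example socket path from above; clamd should reply with PONG.
<pre>
  echo PING | socat - UNIX-CONNECT:/path/to/clamd.sock
</pre>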

<h2>Restart your cluster</h2>
Next time your cluster restarts, CTDB will start managing the clamd service.<br><br>
If the cluster is already in production you may not want to restart the entire cluster, since this would disrupt services.<br>

Instead you can just disable/enable the nodes one by one. Once a node becomes enabled again it will start the clamd service.<br><br>

Follow the procedure below for each node, one node at a time:
<h3>1 Disable the node</h3>
Use the ctdb command to disable the node:
<pre>
  ctdb -n NODE disable
</pre>

<h3>2 Wait until the cluster has recovered</h3>

Use the ctdb tool to monitor the cluster until it has recovered, i.e. until the recovery mode is NORMAL. This should happen within seconds of disabling the node.
<pre>
  ctdb status
</pre>
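If you only want to watch the recovery state, something like the following can be used (it assumes the "Recovery mode" line printed by ctdb status on your version):
<pre>
  watch -n 1 'ctdb status | grep "Recovery mode"'
</pre>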

<h3>3 Enable the node again</h3>

Re-enable the node, which will start the newly configured clamd service.
<pre>
  ctdb -n NODE enable
</pre>
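Once the node is enabled again, you can verify that CTDB has started clamd there by checking the event script status on that node, for example via the -n option of the ctdb tool:
<pre>
  ctdb -n NODE scriptstatus
</pre>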

<h2>See also</h2>

The CLAMAV section in the ctdbd manpage.

<pre>
  man ctdbd
</pre>

<!--#include virtual="footer.html" -->