
25 June 2025

Patching Io Addons

This project is getting out of hand. I just wanted to use Regular Expressions in Io. This is the fourth part of me trying different extensions of Io: first I installed addons without native code, then I compiled the native code of addons, and in the third part I:
  • Checked package.json and build.io for hints about needed third party libraries and headers.
  • Found ported libraries for Windows in GnuWin32, e.g. ReadLine.
  • Compiled Io addons with dependencies.
  • Fixed the undefined reference to 'IoState_registerProtoWithFunc_' error, which occurs using addons created for older versions of Io.
  • Worked around conflicting headers included by IoObject.h or IoState.h.
  • Finally compiled with dependencies on the native code of another addon.
Along the way I added minor fixes to several addons, see my (forked) Io repositories. Today I cover addons which needed more extensive modifications.

JSON Parsing
Io has "half" JSON support: most objects provide an asJson() method to represent the contained data as JSON. Some addons (like Docio) require Sequence parseJson() but omit a specific dependency for it, and my Io (iobin-win32-current.zip) misses this method. These addons are for a newer version of Io, which has not been released (yet?). Writing a JSON parser is a nice exercise and there is already a JSON parser for Io. It is neither an addon nor an Eerie package. Preparing arbitrary Io code as an addon means moving the existing files into the proper folder structure and adding the files needed to run as an addon (i.e. protos and depends files). I even added a test. My Io JSON addon can be cloned directly into the addons folder %IO_HOME%\lib\io\addons.
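A minimal usage sketch; that parseJson returns nested Map and List objects mirroring asJson is my assumption, not documented behaviour:
data := "{\"name\": \"Io\", \"year\": 2013}" parseJson
data at("name") println // --> Io
data at("year") println // --> 2013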

Missing Io Core Functions
I started changing addon source code to include the JSON parser addon; at the same time I wanted to keep my changes as small as possible. The Io Guide says that if you have a .iorc file in your home folder, it will be eval'ed before the interactive prompt starts. In my %HOMEDRIVE%%HOMEPATH%\.iorc I added definitions of all missing Io core functions:
false isFalse := method( true )
true isFalse := method( false )

// load parseJson
Sequence hasSlot("parseJson") ifFalse(
    Json
)

false asJson := "false"
true asJson := "true"
nil asJson := "null"
This made me ask myself: what else was added to Io since my Windows binary was built in 2013? I cloned Io and compared the proper tag with master. There were many changes, and I filtered out formatting and comments. The final result was a short list of additions and modifications like
  • Addon and AddonLoader extensions to use Eerie.
  • Core tildeExpandsTo() using UserProfile instead of HOME on Windows.
  • Sequence power() and powerEquals().
  • TestRunner run() returning the number of errors.
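The comparison itself boiled down to a few git commands; the tag name 2013.11.04 is my assumption derived from the date of the binary:
> git clone https://github.com/IoLanguage/io.git
> cd io
> git diff 2013.11.04 master -- libs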
Docio
With JSON support and Markdown working, I can run Docio. Docio is the documentation generator for Eerie packages. It extracts tagged comments from C and Io code and generates Markdown and HTML documentation like this. To make Docio work, I had to work around several issues:
  1. Docio's repository name is lowercase. To install it as addon the folder name has to be uppercase, as for Kano:
    > cd "%IO_HOME%\lib\io\addons"
    > git clone git@github.com:IoLanguage/docio.git Docio
  2. Docio does not depend on native code, and it has Markdown as its sole dependency.
    > echo Markdown > Docio\depends
    > echo Docio > Docio\protos
  3. Docio loads Eerie eagerly in the first line of Docio.io. It uses Eerie to query its template path. I lack Eerie and the eager loading fails, so I remove that. With Eerie out of the picture, I always have to provide the full path to the template (%IO_HOME%\lib\io\addons\Docio\template). Now I can initialise Docio with io -e "Docio println". Success.
  4. Docio has a starter script in bin/docio which needs a Windows version, i.e. a bin/docio.bat which calls the original starter script,
    io "%IO_HOME%\lib\io\addons\Docio\bin\docio" %*
  5. With the starter script Docio is available as a command line tool. One thing to know is that its help is wrong. It says
    docio package=/path/to/package [template=/path/to/template]
    while it really needs two dashes for each option,
    docio --package=/path/to/package [--template=/path/to/template]
    Both paths need to be absolute, otherwise documentation files appear in strange places.
  6. There is a syntax error in DocsExtractor.io line 65 and following.
  7. Copying of binary resources, i.e. fonts, corrupts the files. You will have to copy them manually. This is because Io's low level C code fails to set the binary flag when opening files. On Windows the C library functions consider text files to be finished at the first byte 0x1A, see the sketch below. I am unable to fix that right now.
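The underlying problem, sketched in C; the file name is just an example:
#include <stdio.h>

int main(void) {
    /* text mode: on Windows reading stops at the first 0x1A (Ctrl-Z) byte */
    FILE *truncated = fopen("font.ttf", "r");
    /* binary mode: the whole file is read - this is what copying needs */
    FILE *complete = fopen("font.ttf", "rb");
    if (truncated) fclose(truncated);
    if (complete) fclose(complete);
    return 0;
}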
After fixing all that, Docio provides help, even inside the Io REPL, which comes in handy:
Io> Docio printDocFor("ReadLine")
Binding to GNU readline.
Docio will generate the documentation on the fly for any package which has a package.json with a name field. Bundled addons miss that file, so I create minimal ones containing just the addon's name for those addons where I want to generate documentation. Here is my own Docio with all the fixes.
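For example, a minimal manifest for a bundled addon like ReadLine contains nothing but its name:
{
    "name": "ReadLine"
}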

Socket
Finally I am going for the Socket addon. I was scared of Socket; some sources stated that Io socket support has always been tricky. Socket depends on libevent 2.0.x, an event notification library. Surprisingly autogen.sh, configure and make install compile and link libevent without any problems, see below. Now I have some libevent_*.dll files. This is way too easy.
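Roughly, in an MSYS shell, the libevent build boils down to:
./autogen.sh
./configure
make
make install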

But of course Socket's C code does not compile. The header sys/queue.h is not available on Windows. Fortunately libevent comes with its own compat/sys/queue.h. Some header files, e.g. IoEvConnection.h, IoEvHttpServer.h and several others, need the conditional include:
#include "Socket.h"
#if !defined(_WIN32) || defined(__CYGWIN__)
#include <sys/queue.h>
#else
#include <compat/sys/queue.h>
#endif
#include <event.h>
Furthermore UnixPath.c fails to compile, so I drop it from the list of files to compile. Even if it compiled, it would not work, as indicated by the error IoUnixPath returns for defined(_WIN32) || defined(__CYGWIN__): Sorry, no Unix Domain sockets on Windows MSCRT.

Linking the compiled files produces an undefined reference to 'GetNetworkParams@8'. GetNetworkParams is a Windows function from iphlpapi.dll, which is listed in build.io together with ws2_32. It seems that after my exploration of building addons with native dependencies like ReadLine or Markdown, compiling Socket is no more difficult than that. Here are the important steps:
> ...
> cd source
> gcc -fPIC -D _NO_OLDNAMES -D IO_WINSOCK_COMPAT -c ...
> gcc -liovmall -lbasekit -levent -liphlpapi -lws2_32 ^
      -shared -Wl,--out-implib,libIoSocket.dll.a ^
      ... -o libIoSocket.dll
> ...
My fork of Socket contains the required fixes for Windows.

Using the Socket addon on current Ubuntu
In my product development Coderetreat template, I have GitHub actions to verify the sample code. Based on Sandro's installation instructions I first install the required dependencies:
sudo apt-get install libpcre3-dev libevent-dev
(This might be unnecessary as GitHub's Ubuntu image has both of them installed.) The current branch of libevent is 2.1.x and none of the older 2.0.x versions compile due to wrong dependencies. At the same time, the Linux version of Io contains Socket but needs libevent 2.0.5 specifically. I have no idea if that is the proper way to fix this kind of issue, but it works: I link the installed version (2.1.7) as the required (2.0.5) one.
ls -la /usr/lib/x86_64-linux-gnu/libevent*.so.*
sudo ln -s /usr/lib/x86_64-linux-gnu/libevent-2.1.so.7.0.1 /usr/lib/x86_64-linux-gnu/libevent-2.0.so.5
(I leave the ls in the action to show the current version when the GitHub runner changes and there is a newer version of libevent.) After that I download the "latest" version of Io and install it:
wget -q http://iobin.suspended-chord.info/linux/iobin-linux-x64-deb-current.zip
unzip -qq iobin-linux-x64-deb-current.zip
sudo dpkg -i IoLanguage-2013.11.04-Linux-x64.deb
sudo ldconfig
io --version
The Linux version of Io comes with almost all addons pre-compiled, so there is no need for any compilation. Success. Because this version is without Eerie, custom addons have to be installed manually into the System installPrefix folder, e.g. adding Docio:
git clone https://github.com/codecop/Docio.git
io -e "System installPrefix println"
sudo mv Docio /usr/local/lib/io/addons/
io -e "Docio println"
Now the runner is ready to execute some tests which is usually done with io ./tests/correctness/run.io. The full GitHub action for Io is here.

Random Facts about Addons
During my exploration I learned more things about Io's addon structure:
  • The protos file does not need to contain the name of the "main" prototype, which is also the name of the addon, and often it does not. I put the names of all (exported) prototypes there to simplify my scripts. package.json, the Eerie manifest, does contain all prototypes.
  • In the beginning I thought depends was some kind of manifest listing dependencies. But it is only needed for dependencies of native code, so AddonLoader loads dependent addons before they are used in native code. For non-native addons, Io will load whatever is needed when parsing an unknown type name. Until now the only addon I have seen which needs that feature is Regex.
  • When an addon is loaded, all its files are evaluated. Only then is it registered as active. This can lead to addon initialisation loops. The initialisation order seems to be by file name. Some addons with complex interdependencies - like Socket - prefix Io files with A_0, A_1 and so on to ensure ordered initialisation. (This is a bit annoying for tooling as the prototype name usually equals the file name in Io.) A typical addon layout is sketched after this list.
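Putting these pieces together, a typical addon Foo (a made-up name) in %IO_HOME%\lib\io\addons looks roughly like this; the exact set of files varies per addon:
Foo\
    protos          list of exported prototype names
    depends         addons required by the native code
    package.json    Eerie manifest with at least a name field
    build.io        build description for the native code
    io\Foo.io       Io sources, evaluated in file name order
    source\         optional native C code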
Summary: My Feelings Towards Io
While Io has been dead for more than ten years, working with it feels bleeding edge; even more, it is beyond the frontier. You are on your own, there is no help. Nobody is coming to save you. The latest article I found was written in 2019. There are fewer than ten (!) active blog posts: Blame it on Io (2006), Io language on Windows (2014), Io Basics (2015) and Io Programming Language (2019) - to list the best ones. There are a handful of Stack Overflow questions and a few repositories on GitHub - which are sometimes incompatible with the "latest" Io. ChatGPT understands the language but fantasises the libraries, so no help from AI either. I am used to modern languages with a rich ecosystem, e.g. Java, C# or Python, and this is an unfamiliar feeling. At the same time it is a refreshing puzzle. Maybe I will come back for a vacation in uncharted territory.

25 July 2018

Only modified files in Jenkins

For a custom Jenkins build I needed to know all changed files since the last green build. I searched a lot and found a solution as a combination of several StackOverflow answers. It took me some experimenting to get it working: I installed the Groovy plugin, configured the Groovy language and created a script which is executed as a system Groovy script. Here is the complete step by step guide for Jenkins 2.63, SVN and Groovy 2.4.11.

Jenkins only provides the current revision in the environment variable $SVN_REVISION. Of course Jenkins knows the changed files of each build, as they are shown on the build status page. I guess a plugin would be able to access the model, but that is too much work. Fortunately the Jenkins Groovy plugin allows scripts to run in the system context, having access to hudson.model.Build and other classes.

Groovy Programming Language
The Groovy programming language is a dynamic language which runs on the JVM. It integrates smoothly with any Java program and is the first choice for scripting Java applications. While not strictly necessary, I recommend downloading the SDK's zip and unpacking it on the host where you run Jenkins, usually into the folder where you keep your development tools. For testing and debugging I also install it on my local workstation (in the same location).

Groovy in Jenkins
Next comes the Jenkins Groovy plugin. Open Jenkins in the browser and navigate the menus:
  • Manage Jenkins
  • Manage Plugins
  • select tab Available
  • filter "groovy"
  • select Groovy
  • Install
(You will need Jenkins admin rights to do so.) Then tell Jenkins which Groovy to use and where to find it. To configure the Groovy language go to
  • Manage Jenkins
  • Global Tool Configuration
  • go to section Groovy
  • Add Groovy: give it a name and set GROOVY_HOME to the folder you unpacked it into, e.g. /tools/groovy-2.4.11.
  • deselect Install automatically
  • Save
Now Jenkins supports Groovy scripts.

Run a Groovy script in the build
Now let's use a Groovy script in the project. On the project page,
  • Configure
  • go to section Build
  • Add build step
  • select Execute system Groovy script
  • paste Groovy code into the script console
  • Save
Now when you trigger the build, the script will be executed.

Debugging the Script
Of course it does not work. How can I debug this? Can I print something to the console? Groovy's println "Hello" does not show up in the build log. Searching again, I finally found the gist by lyuboraykov which shows how to print to the console in system scripts: Jenkins provides the build console as out variable,
out = getBinding().getVariables()['out']
which can be used like out.println "Hello". Much better, now I can debug. Let's wrap out.println in a def log(msg) method for later.
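A minimal sketch of the complete debugging setup; the message is arbitrary:
out = getBinding().getVariables()['out']

// out is kept in the script's binding (no def), so methods can see it
def log(msg) {
  out.println msg
}

log 'Hello from the system Groovy script'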

Getting the changed files of the current build
The StackOverflow answer by ChrLipp shows how to get the changed files of the current build:
def changedFilesIn(Build build) {
  build.getChangeSet().
    getItems().
    collect { logEntry -> logEntry.paths }.
    flatten().
    collect { path -> path.path }
}
This gets the change set hudson.scm.ChangeLogSet<LogEntry> from the build, gets the SubversionChangeLogSet.LogEntrys from it and collects all the paths in these entries - this is the list of all file paths of all changed items in all commits (LogEntrys). I guess when another SCM provider is used, another type of ChangeLogSet.LogEntry will be returned, but I did not test that. To better understand what is going on, I added explicit types in the final Groovy script, which will only work for Subversion projects.

Getting all builds since the last successful one
I want all changed files from all builds since the last green one because they might not have been processed in previous, failed builds. Again StackOverflow, this time the answer by CaptRespect, comes to the rescue:
def changedFileSinceLastSuccessfull(Build build) {
  if (build == null || build.result == Result.SUCCESS) {
    []
  } else {
    changedFilesIn(build) +
      changedFileSinceLastSuccessfull(build.getPreviousBuild())
  }
}
In case there is no previous build or it was successful, the recursion stops; otherwise we collect the changed files of this build and recurse into the past.

All Together
Let's put it all together,
def changedFiles() {
  def Build build = Thread.currentThread()?.executable
  changedFileSinceLastSuccessfull(build).
    unique().
    sort()
}
After collecting, all duplicates are removed, as I do not care whether a file was changed once or several times, and the list is sorted. In the end the list of changed files is saved as text file changed_files.log into the workspace, as sketched below. (The complete jenkins_list_changed_files.groovy script is inside the zipped source.)
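The saving step might look like this sketch; turning the workspace into a local file path is simplified here and assumes the job runs on the master node:
def build = Thread.currentThread()?.executable
// write one file path per line into the workspace
new File(build.getWorkspace().toString(), 'changed_files.log').withWriter { writer ->
  changedFiles().each { path -> writer.writeLine(path) }
}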

While developing the script, the Jenkins script console was very handy. As soon as the script worked, I created the file jenkins_list_changed_files.groovy, put it under version control and changed the build definition step to use the script's file name. Next time the build ran, the script file would be executed, or at least so I thought.

Script Approvals
Unfortunately system Groovy script files do not work as expected because Jenkins runs them in a sandbox. Scripts need certain approvals, see the StackOverflow answer by Maarten Kieft. To approve a script's access to sensitive fields or methods, navigate to
  • Manage Jenkins
  • In-process Script Approval (This is the last but one item in the list.)
  • Approve
The sandbox is very restrictive; the full jenkins_list_changed_files needs a lot of approvals:
field hudson.model.Executor executable
method groovy.lang.Binding getVariables
method hudson.model.AbstractBuild getChangeSet
method hudson.model.AbstractBuild getWorkspace
method hudson.model.Run getNumber
method hudson.model.Run getPreviousBuild
method hudson.model.Run getResult
method hudson.scm.SubversionChangeLogSet$LogEntry getPaths
method java.io.PrintStream println java.lang.String
new java.io.File java.lang.String
staticMethod java.lang.Thread currentThread
staticMethod org.codehaus.groovy.runtime.DefaultGroovyMethods flatten java.util.List
staticMethod org.codehaus.groovy.runtime.DefaultGroovyMethods println java.lang.Object java.lang.Object
staticMethod org.codehaus.groovy.runtime.DefaultGroovyMethods sort java.util.Collection
staticMethod org.codehaus.groovy.runtime.DefaultGroovyMethods withWriter java.io.File groovy.lang.Closure
Creating a new java.io.File might be a security risk, but even println is not allowed. Adding all these approvals is a boring process: the build breaks on each missing one until everything is well. As soon as you have all the approvals, you can copy Jenkins' scriptApproval.xml found in JENKINS_HOME (e.g. ~/.jenkins) and store it for later installations. The full scriptApproval.xml is inside the zipped source.

Conclusion
Jenkins' Groovy integration is very powerful. System scripts have access to Jenkins' internal model, which allows them to query information about builds, status, changed files etc. On the other hand, development and debugging is cumbersome and time consuming. IDE support helps a lot. Fortunately StackOverflow knows all the answers! ;-)

3 December 2017

PMD Check and Report in same build

I am working together with senior developer and (coding) architect Elisabeth Blümelhuber to set up a full featured continuous delivery process for the team. The team's projects use Java and are built with Maven.

Using PMD for Static Code Analysis
After using Jenkins for some time to run the tests, package and deploy the products, it was time to make it even more useful: Add static code analysis. As a first step Elisabeth added a PMD report of a small set of important rules to the Maven parent of all projects. PMD creates a pmd.xml in the target folder which is picked up by Jenkins' PMD Plugin. Jenkins displays the found violations and tracks changes over time, showing a basic trend graph. (While SonarQube would be more powerful, we decided to stay with Jenkins because the team was already "listening" to it.)

Breaking the Build on Critical Violations
I like breaking the build on critical violations to ensure the developers' attention. It is vital, though, to achieve the acceptance of the team members when changing their development process. We thus started with a custom, minimal set of rules (in src/config/pmd_mandatory.xml) that would break the build. The smaller the initial rule set is, the better. In the beginning of adding static code analysis to the build process, it is not about the code but about getting the team aboard - we can always add more rules later. The first rule set might even contain a single rule, e.g. EmptyCatchBlock, as sketched below. Empty catch blocks are a well known problem when analysing defects, and usually developers agree with the severity of having them in the code and accept breaking the build for that. On the other hand, breaking the build on minor or formatting issues is not recommended in the beginning.
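A sketch of such a minimal rule set, assuming PMD 5's rule reference syntax:
<?xml version="1.0"?>
<ruleset name="mandatory"
         xmlns="http://pmd.sourceforge.net/ruleset/2.0.0">
  <description>Critical rules which break the build.</description>
  <!-- the single rule to start with -->
  <rule ref="rulesets/java/empty.xml/EmptyCatchBlock" />
</ruleset>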

Here is the snippet of our pom.xml that breaks the build:
<build>
  ...
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-pmd-plugin</artifactId>
      <configuration>
        <failOnViolation>true</failOnViolation>
        <printFailingErrors>true</printFailingErrors>
        <rulesets>
          <ruleset>.../pmd_mandatory.xml</ruleset>
        </rulesets>
        ... other settings
      </configuration>
      <executions>
        <execution>
          <id>pmd-break</id>
          <phase>prepare-package</phase>
          <goals>
            <goal>check</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
This is more or less taken directly from the PMD Plugin documentation. After running the tests, PMD checks the code.

Keeping a Report of Major Violations
We wanted to keep the report Elisabeth had established previously. We tried to add another <execution> element for that. As executions can have their own <configuration> we thought that this would work, but it did not. PMD just ignored the second configuration. (Maybe this is a general Maven issue. For example the Maven Failsafe Plugin is a copy of the Surefire plugin to allow both plugins to have different configurations.)

The PMD plugin offers a report for the Maven site which is configured independently. As a workaround for the above problem, we used the site report to check the rules listed in src/config/pmd_report.xml. The PMD report invocation created the needed target/pmd.xml as well as a readable target/site/pmd.html.
<reporting>
  <plugins>
    ... other plugins
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-pmd-plugin</artifactId>
      <configuration>
        <rulesets>
          <ruleset>.../pmd_report.xml</ruleset>
        </rulesets>
        ... other settings
      </configuration>
    </plugin>
  </plugins>
</reporting>
Skipping Maven Standard Reports
Unfortunately mvn site also created other reports which we did not need and which slowed down the build. Maven standard reports can be selected using the Maven Project Info Reports Plugin. It is possible to set its <reportSet> empty, not creating any reports:
<reporting>
  <plugins>
    ... other plugins
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-project-info-reports-plugin</artifactId>
      <version>2.9</version>
      <reportSets>
        <reportSet>
          <reports>
            <!-- empty - no reports -->
          </reports>
        </reportSet>
      </reportSets>
    </plugin>
  </plugins>
</reporting>
Now it did not create the standard reports. It only generated target/site/project-reports.html with a link to the pmd.html and no other HTML reports. Win.

Skipping CPD Report
By default, the PMD plugin invokes PMD and CPD. CPD checks for duplicate code - and is very useful - but we did not want to use it right now. As I said before, we wanted to start small. All plugins have goals which are explained in the documentation. Obviously the Maven report invokes the PMD plugin's goals pmd:pmd and pmd:cpd. How do we tell a report which goals to invoke? That was the hardest problem to solve because we could not find any documentation on that. It turned out that each reporting plugin can be configured with <reportSets>, similar to the Maven Project Info Reports Plugin:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-pmd-plugin</artifactId>
  <configuration>
    ... same as above
  </configuration>
  <reportSets>
    <reportSet>
      <reports>
        <report>pmd</report>
      </reports>
    </reportSet>
  </reportSets>
</plugin>
Putting Everything Together
We execute the build with
mvn clean verify site
If there is a violation of the mandatory rules, the build breaks and Maven stops. Otherwise site generates the PMD report. If there are no violations at all, Maven does not create a pmd.html. There is always a pmd.xml, so Jenkins is always happy.

(The complete project (compressed as zip) is here.)

8 August 2011

Maven Plugin Harness Woes

Last year I figured out how to use the Maven Plugin Harness and started using it in my Maven plugin projects. Recently I started using DEV@cloud, a new service which contains Jenkins and Maven source repositories. CloudBees, the company behind DEV@cloud, offers a free subscription with reduced capabilities, which is more than enough for small projects. I set up all my projects there in no time, but had serious problems with the integration tests.

Using a local Maven repository other than ~/.m2/repository
You don't have to use the default repository location. It's possible to define your own in the user's settings.xml or even in the global settings, but I guess most people just use the default. On the other hand, in an environment like DEV@cloud all the different builds from different users must be separated. So CloudBees decided that each Jenkins job has its own Maven repository inside the job's workspace. That is good because the repository is deleted together with the project.

Problem
The Testing Harness embeds Maven, i.e. forks a new Maven instance, but fails to relay the modified settings to this new process. During the execution of the integration test a new local repository is created and the original local one is used as a remote one (called "local-as-remote"). Without any hints, Maven uses ~/.m2/repository, so the true local repository is not found and all needed artefacts are downloaded again. This takes a lot of time (and wastes bandwidth). Dependencies that exist only in the local repository, e.g. snapshots of dependent projects, are not found at all and the integration test fails.

Solution
RepositoryTool.findLocalRepositoryDirectory() uses an instance of MavenSettingsBuilder to get the settings. Its only implementing class is DefaultMavenSettingsBuilder, which tries to determine the repository location from the value of the system property maven.repo.local. Then it reads the user settings and in the end it falls back to ~/.m2/repository. The solution is to set the maven.repo.local system property whenever the local repository is not under ~/.m2/repository: add -Dmaven.repo.local=.repository into the field for Goals and Options of the Jenkins job configuration.
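In essence, the lookup order is as in this simplified sketch; the settings-reading helper is a placeholder of mine, not the real implementation:
String findLocalRepository() {
    String fromProperty = System.getProperty("maven.repo.local");
    if (fromProperty != null) {
        return fromProperty;
    }
    // <localRepository> element in ~/.m2/settings.xml, if present
    String fromSettings = readLocalRepositoryFromUserSettings();
    if (fromSettings != null) {
        return fromSettings;
    }
    return System.getProperty("user.home") + "/.m2/repository"; // default
}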

Using additional dependencies while building integration test projects
After the plugin under test is built and installed into the new local repository, the Maven Plugin Harness runs Maven against the integration test projects inside the src/test/resources/it folder. The approach which I described last year forks a Maven with POM, properties and goals defined by the JUnit test method.

Problem
The integration tests know the location of the new local repository (because it is set explicitly) and are able to access the plugin under test. But they know nothing about the local-as-remote repository. They can only access artefacts which have been "downloaded" from the local-as-remote repository during the build of the plugin under test. So the problem is similar to the previous one but occurs only when an integration test project needs additional artefacts. For example a global ruleset Maven module might consist of XML ruleset configuration files. The test module depends on the Checkstyle plugin and executes it using the newly built rulesets. So the object under test (the rules XML) is tested indirectly through the invocation of Checkstyle, but the ruleset module itself does not depend on Checkstyle.

Solution
All POMs used during the integration test have to be "mangled", not just the POM of the plugin under test. The method manglePomForTestModule(pom) is defined in the ProjectTool but it's protected and not accessible. So I copied it to AbstractPluginITCase and applied it to the integration test POMs.

Using settings other than ~/.m2/settings.xml
If you need artefacts from repositories other than Maven Central, you usually add them to your settings.xml. Then you refer to them in the Jenkins job configuration. Behind the scenes Jenkins calls Maven with the parameter -s custom_settings.xml.

Problem
Similar to the repository location, the custom settings' path is not propagated to the embedded Maven, so it uses the default settings. This causes no problems if all needed artefacts are either in the local-as-remote repository or can be downloaded from Maven Central. For example a Global Ruleset might contain some Macker Architecture Rules. The snapshot of the Macker Maven Plugin is deployed by another build job into the CloudBees snapshot repository. The test module depends on this Macker plugin and runs it using the newly built rulesets.

Solution
AbstractPluginITCase calls BuildTool's createBasicInvocationRequest() to get an InvocationRequest and subsequently executes this request. Using a system property, the InvocationRequest can be customised:
if (System.getProperty(ALT_USER_SETTINGS_XML_LOCATION) != null) {
   File settings =
      new File(System.getProperty(ALT_USER_SETTINGS_XML_LOCATION));
   if (settings.exists()) {
      request.setUserSettingsFile(settings);
   }
}
Then the value of the used system property is added into the field for Jenkins' Goals and Options: -s custom_settings.xml -Dorg.apache.maven.user-settings=custom_settings.xml.

Alas, I'm not a Maven expert and it took me quite some time to solve these problems. They are not specific to CloudBees but result from using non-default settings. Other plugins that fork Maven have similar problems.

13 February 2010

Testing For All One's Worth

At the end of last year, after a long break, the third part of the 'Code Cop' series was published in the well known German magazine iX:

Tägliche Builds mit automatisierten Tests (Daily Builds with Automated Testing) (iX 1/2010). [... Automated testing is vital for quality assurance. Unit tests are applied easily using JUnit. The same is true for functional testing thanks to a number of existing tools. By adding testing capabilities to the build, developers are more willing to write tests. In the end the analysis of the code coverage achieved by the tests reveals possible weak points. ...]

(Download source code of Ant/JUnit/HttpUnit and EMMA integration.)

References
Some other test and code coverage tools mentioned in the article are AgitarOne, dbUnit, Clover, Cobertura, HtmlUnit, JCoverage, Jester, Jtest, JUnitPerf, Selenium and XMLUnit (incomplete list).

(List of all my publications with abstracts.)

20 January 2009

Generic Build Server Notification Tray

I like to be notified immediately when our build fails (or at least before someone notices and tells me that it's my fault :-). I really have to know. In fact I am kind of paranoid about it. Ideal would be some kind of system tray notifier, like the one Team City has.

Some years ago at Herold we were using Anthill OS, which is nice but minimalistic and did not offer any notifications. So I made a tray notifier myself using the System Tray functionality of Java SE 6. The notifier polled the configured build server status page and used regular expressions from a property file to parse it. If a build changed to red, a little popup was shown. Later I was using Cruise Control and the only thing I was able to find was the CruiseControl-Eclipse-Plugin on Google Code. That's quite cool stuff, but I needed something that popped up in my face when the build was red.

So here is my Generic Build Server System Tray Notifier. After unpacking the zip you have to create a startup script or link executing the jar, e.g. <path to java 6>\bin\javaw.exe -jar BuildServerSystemTray.jar <path to config>. (Source is included in the zip.)

Configuration
The notifier is generic and needs a properties file. The zip contains sample configurations for Anthill OS 1.7, Cruise Control 2.3 and Hudson 1.2. You will have to customise the configuration; at least the build server URL (server.url) has to be set accordingly. It should be easy to create configurations for other build servers, just set the proper value for the status.pattern property. This property defines a regular expression matching the whole information about a build: the project name, success or failure, and the build time. The regex grouping indices status.name.group, status.value.group and status.date.group must be set accordingly.
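For example, a hypothetical configuration for a status page which renders one table row per project might look like this; the URL and the expression are made up:
server.url=http://buildserver:8080/status.html
# one regular expression matching project name, status and build time
status.pattern=<tr><td>(\\w+)</td><td>(passed|failed)</td><td>([^<]+)</td></tr>
status.name.group=1
status.value.group=2
status.date.group=3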

FAQ
Q: I was checking out the source; why is the package at.kugel used as top level namespace instead of org.codecop? A: Kugel was my coder pseudonym in the eighties when I started coding on the Commodore 64. I use it from time to time when I'm feeling retro. Kugel is German for ball or sphere. The name was coined by my first coding buddy because I was quite overweight.

19 January 2008

Daily Build Articles

In 2007 I started writing articles about daily builds for Java applications, which I called the 'Code Cop' series. Unfortunately I have only managed to finish two articles so far, shown below. I have a lot of material for further articles about adding automated testing and enforcing architecture in the daily build, I just have to squeeze in the time to write them ;-)

Part 1 - Daily Build with Anthill OS
This article describes my experiences introducing a daily build in 2004, when I used the Anthill tool. The first steps were to create JavaDoc pages daily and to compile the Java sources. It turned out that the initial set-up of these build routines did not cost much and was supported by the team. Obviously this is only the start of better quality code. Read more in the full article Täglicher Build mit Anthill OS published in JavaSPEKTRUM, 5/2007.

References
Other build tools are Anthill Pro, Continuum, Cruise Control, Maven and Luntbuild (incomplete list).

Part 2 - Daily Code Analysis with PMD
This article introduces static code analysis with PMD. The existing daily build was extended easily. A daily report of the code quality metrics awakened the management and was used as a basis to check for a small set of errors. The most serious of them were fixed, and part of the coding conventions have been checked automatically since then. Read more in the full article Tägliche Code-Analyse mit PMD published in JavaSPEKTRUM, 1/2008.

References
Other code analysis tools are Checkstyle, Enerjy CQ2, Findbugs, JavaNCSS, JLint, Jtest, Optimal Advisor and Sotograph (incomplete list).

(Download source code of Ant-PMD integration and BuildStatistics.java.)

FAQ
Q: Why did you favour Anthill OS over all the other build tools listed? A: There was no evaluation or decision process; Anthill OS was just the first tool I got my hands on. Everything worked fine, so I did not look further. For a new project I would use Cruise Control because it is actively maintained and has a strong community. Paul Duvall has written a nice comparison of open source CI servers. If your build process is "heavy", you might want to have a look at commercial build servers. Some of them offer build farms, e.g. Anthill Pro, Parabuild or Team City.

Q: You divided all rules of PMD into four groups: error, warning, information and formatting. How can I get this categorisation? A: PMD comes with built-in severities: each rule definition contains a <priority> element, see the sketch below. There are also some commercial tools that use PMD under the hood and have their own severity levels. Some of them even have references for each rule explaining why it's bad.
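Abbreviated, a rule definition with its priority looks like this sketch, where 1 is the most severe level:
<rule name="EmptyCatchBlock"
      message="Avoid empty catch blocks">
  <priority>1</priority>
  ...
</rule>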

Q: You use a program called BuildRuleDoc to document used rules. Is it free or home-grown? A: I wrote it myself, but you can use it if you want to. The BuildRuleDoc.zip contains the code, the template to create the rule document, an Ant fragment and some test scripts. You will have to adapt the scripts in order to run them. Finally you need the XML rule set file of active PMD rules to generate the report.

(List of all my publications with abstracts.)