<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Pelure</title>
  <link href="http://hugoduncan.org/index.xml" rel="self"/>
  <link href="http://hugoduncan.org/"/>
  <updated>2021-11-23T03:21:18+00:00</updated>
  <id>http://hugoduncan.org/</id>
  <author>
    <name>Hugo Duncan</name>
  </author>
  <entry>
    <id>http://hugoduncan.org/post/versions_in_the_time_of_git_dependencies.html</id>
    <link href="http://hugoduncan.org/post/versions_in_the_time_of_git_dependencies.html"/>
    <title>Versions in the Time of Git Dependencies</title>
    <summary>An idea for adding release data to a git repository, to allow tooling to update git dependencies</summary>
    <updated>2021-11-21T23:59:59+00:00</updated>
    <content type="html">
      <![CDATA[<blockquote><p> “He allowed himself to be swayed by his conviction that human beings are not born once and for all on the day their mothers give birth to them, but that life obliges them over and over again to give birth to themselves.”  ― Gabriel García Márquez, Love in the Time of Cholera </p></blockquote><p><em>Edit</em></p><p>This is a re-write - the original is below.  It now aims to provide some recommendations on how to cut releases intended to be consumed as git dependencies. It all seems very obvious in retrospect.</p><p>Firstly, only publish a single artifact from a single git repository, or several artifacts with identical versions if it is a monorepo.  You can imagine schemes that would work with multiple artifacts with independent versions, but tooling is going to have a hard time with any such scheme. This was the piece of the puzzle I was missing in my original post.</p><p>Once we agree to the above, it becomes simple - versions become monotonically increasing, and easy for tooling to deal with.</p><p>Just put a git tag on a release in the same way you would a version published to maven.  For a good scheme see <a href='https://golang.org/doc/modules/version-numbers'>Golang Modules</a>.  Make sure the scheme you choose sorts well with <a href='https://github.com/xsc/version-clj'>version-clj</a> (h/t <a href='https://twitter.com/borkdude'>@borkdude</a> for both).</p><p>Remember you can have multiple tags for a given SHA, so you could tag it <code>v1.0.0-alpha</code> to start with and promote it to <code>v1.0.0</code>, if that is your cup of tea.</p><p>Many thanks to the collective wisdom of Michiel Borkent <a href='https://twitter.com/borkdude'>@borkdude</a>, Alex Miller <a href='https://twitter.com/puredanger'>@puredanger</a>, <a href='https://github.com/seancorfield'>Sean
Corfield</a> and Erik Assum <a href='https://twitter.com/slipset'>@slipset</a> on the clojurians slack.</p><p><em>Original post below</em></p><p><hr></p><p>When you want to consume a library using git dependencies, you go to the project's GitHub page, look up the SHA from the <code>README</code>, put it in your <code>deps.edn</code>, and you're done, right?  But what happens when you want to upgrade? Rinse and repeat?  How do you even know that a new release is available?</p><h2>On Git</h2><p>A git repository is an append-only log of changes to a project, and together with the repository URL, the SHA forms a content-based addressing scheme for a particular state of the project.  This is a natural identifier for that project state.</p><p>As consumers of a library, we aren't concerned with every single commit made to the repository - we want to know the SHA that the project's maintainers consider to be a release.  We might not want the main branch HEAD commit, depending on the branching model used by the project developers.</p><h2>A CI Pipeline</h2><p>A good CI pipeline takes an immutable project artifact, and puts it through increasingly rigorous testing.  It might start off as an alpha, or a release candidate, and as confidence increases through testing, it can be promoted to a full release.</p><p>An artifact built from the contents of a single git SHA fits this model nicely.
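</p><p>In git terms, promoting an artifact like this can be as simple as adding a second tag to the same commit - a sketch with plain git (tag names illustrative):</p><pre><code class="shell"># cut an alpha release from the current commit
git tag -a v1.0.0-alpha -m "v1.0.0-alpha"
git push origin v1.0.0-alpha

# after testing, promote the same commit to a full release
git tag -a v1.0.0 v1.0.0-alpha^{} -m "v1.0.0"
git push origin v1.0.0</code></pre><p>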
In an open source world, we can think of a SHA as an alpha release that gets tested by a small number of people, and then gets published as a release - tools.build, for example, seems to follow this model, with alphas announced on #tools.build, followed by release announcements, for the same SHA, on #announce after a few people have tried it.</p><h2>A new SHA, a new Version?</h2><p>So which SHA do we want to put in our <code>deps.edn</code>?</p><p>In the maven world, versions are ordered, so when a new version is published, it is a signal that tooling can use to determine whether an update is available.</p><p>In the git world, SHAs are not ordered, so how do we know when a new version is available?  Should we check slack, or a blog, or the project's home page? Or could we, as project authors, provide data to allow automation of this process?</p><h2>release.edn</h2><p>To provide release information, we could put the version information into the repository itself.</p><p>There are many release schemes, but we can model a project's releases as falling into release streams.  Examples of these are "stable" or "alpha" or "v4.x".
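</p><p>On the consumer side, a release from one of these streams is what eventually gets pinned in <code>deps.edn</code> as a git dependency, something like this (coordinates hypothetical):</p><pre><code class="clojure">{:deps
 {io.github.someone/somelib
  {:git/tag "v1.0.0" :git/sha "4c4a34d"}}}</code></pre><p>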
A release for each stream is then just a SHA associated with the stream.</p><p>A natural way to present this would be as a map in an EDN file (or JSON, or YAML - this doesn't need to be Clojure specific).</p><p><pre><code class="clojure hljs"><span class="forms"><span class="map">{<span class="keyword">:stable</span> <span class="map">{<span class="keyword">:git/tag</span> <span class="string">&quot;v1.0.0&quot;</span> <span class="keyword">:git/sha</span> <span class="string">&quot;abcdef&quot;</span>}</span><br> <span class="keyword">:alpha</span> <span class="map">{<span class="keyword">:git/tag</span> <span class="string">&quot;v1.1.0&quot;</span> <span class="keyword">:git/sha</span> <span class="string">&quot;abcdef&quot;</span>}</span><br> <span class="keyword">:head</span> <span class="map">{<span class="keyword">:git/tag</span> <span class="string">&quot;master&quot;</span> <span class="keyword">:git/sha</span> <span class="keyword">:latest</span>}</span>}</span></span></code></pre></p><p>A Polylith or other monorepo could have different streams for the various artifacts it publishes.</p><p>If we decide on this format, we then need to make the file discoverable. One way would be to take it from the git repository's default branch, which seems like a good default.</p><p>The final piece of the puzzle would be for tooling like <a href='https://github.com/liquidz/antq'>antq</a> and <a href='https://github.com/nnichols/clojure-dependency-update-action'>clojure-dependency-update-action</a> to use this information.</p><h2>Good idea?</h2><p>I'm sure I can't be the first to have thought of this.</p><p>What do you think - is this useful? How could the idea be improved?</p>]]>
    </content>
  </entry>
  <entry>
    <id>http://hugoduncan.org/post/back_to_blogging.html</id>
    <link href="http://hugoduncan.org/post/back_to_blogging.html"/>
    <title>Back to blogging</title>
    <summary>My road back to a working blogging environment, thoughts on comments and blogging frameworks</summary>
    <updated>2021-11-14T23:59:59+00:00</updated>
    <content type="html">
      <![CDATA[<blockquote><p> “My mind turned by anxiety, or other cause, from its scrutiny of blank  paper, is like a lost child–wandering the house, sitting on the bottom  step to cry.”  — Virginia Woolf </p></blockquote><p>I was inspired to write some blog posts, which led me to realise my current blog and blogging setup were completely broken.</p><p>Michiel Borkent (<a href='https://twitter.com/borkdude'>@borkdude</a>) recently wrote about his <a href='https://blog.michielborkent.nl/migrating-octopress-to-babashka.html'>migration from Octopress</a>. His requirements were very similar to mine, so I copied and modified. Thank you Michiel!</p><p>His blog, <a href='https://blog.michielborkent.nl/'>REPL adventures</a>, is well worth the read.</p><h2>Blog Post Discussions</h2><p>With a static web site, the perennial problem is how to enable discussions.  Some people just punt on this, and point to reddit, but Michiel's solution is to use github discussions.  As a way of owning the discussion, this has a lot of appeal.</p><p>I think it could be taken further.  It would be great if we could automate the creation of a blog post topic when creating a blog post. Unfortunately the <a href='https://cli.github.com/'><code>gh</code></a> command line client doesn't support discussions yet, so that would require using Github's GraphQL API - more work than I wanted to do for now.</p><p>One downside, though, is that the discussions are not visible on the blog pages, where discussion could easily engender more discussion. 
I wonder if a Github Action could be triggered by conversation activity, and automatically republish the post with the discussion to date at the end of the post.</p><h2>Blogging Frameworks vs Tasks</h2><p>There are many blog site generators (I used <a href='https://gohugo.io/'>Hugo</a>, of course), even if we limit ourselves to Clojure: <a href='https://github.com/retrogradeorbit/bootleg'>bootleg</a>, <a href='https://github.com/rafaeldelboni/nota'>nota</a>, <a href='https://github.com/cryogen-project/cryogen'>cryogen</a>, and <a href='https://github.com/nakkaya/static'>static</a> to name a few.</p><p>These are usually feature rich.  The price for those features, though, is extra complexity.</p><p>Michiel's blog uses <a href='https://book.babashka.org/'>babashka</a> tasks to add a post, render posts, etc. These are extremely quick to run and make maintaining the blog simple.  It does just what he needs, and no more.</p><p>This reminds me of project automation, and the <a href='https://clojure.org/guides/tools_build'><code>tools.build</code></a> approach of using composable code tasks to build just what is needed.</p><p>Maybe there is an opportunity to take the same approach for building a blog or static site. If we could pick from a selection of configurable tasks, maybe we wouldn't need to write our own.</p><p>Speaking of which, I have tried writing my own before, in Common Lisp: <a href='http://github.com/hugoduncan/cl-blog-generator'>cl-blog-generator</a>.</p><h2>And…</h2><p>So I have a blogging setup.  Now I just need to write something.</p>]]>
    </content>
  </entry>
  <entry>
    <id>http://hugoduncan.org/post/generating_source_with_leiningen.html</id>
    <link href="http://hugoduncan.org/post/generating_source_with_leiningen.html"/>
    <title>Generating Source Files with Leiningen</title>
    <summary>Generating source files with the leiningen run task.  Adds project specific source generation to prep-tasks.</summary>
    <updated>2013-10-28T23:59:59+00:00</updated>
    <content type="html">
      <![CDATA[<p>Recently, we needed to include some generated source files in a project.  The source code generation was project specific, so we didn't want to have to create a leiningen plugin specifically for it. Getting this to work required using quite a few of <a href='https://github.com/technomancy/leiningen#leiningen'>leiningen's</a> features.</p><p>This post will explain how to use lein to customise your build to generate a source file, but many of the steps are useful for implementing any form of lein build customisation.</p><h3>The Generator</h3><p>The source code generator is going to live in the <code>my.src-generator</code> namespace.  Here's an example that just generates a namespace declaration for the <code>my.gen</code> namespace under <code>target/generated/my/gen.clj</code>.</p><p><pre><code class="clojure hljs">(ns my.src-generator
  (:require [clojure.java.io :refer [file]]))

(defn generate []
  (let [f (file "target" "generated" "my" "gen.clj")]
    (.mkdirs (.getParentFile f))
    (spit f "(ns my.gen)")))</code></pre></p><h3>Development only code</h3><p>The source generation code should not be packaged in the jar, so we place it in <code>dev-src/my/src&#95;generator.clj</code>, and add <code>dev-src</code> and the generated source directories to the <code>:dev</code> profile's <code>:source-paths</code>. The <code>:dev</code> profile is automatically used by leiningen unless it is producing a jar file.  When producing the jar, the <code>:dev</code> profile will not be used, so <code>dev-src</code> will not be on the <code>:source-paths</code> (we add the generated directory to the base <code>:source-paths</code> below).</p><p><pre><code class="clojure hljs"><span class="forms"><span class="keyword">:profiles</span> <span class="map">{<span class="keyword">:dev</span> <span class="map">{<span class="keyword">:source-paths</span> <span class="vector">&#91;<span class="string">&quot;src&quot;</span> <span class="string">&quot;dev-src&quot;</span> <span class="string">&quot;target/generated&quot;</span>&#93;</span>}</span>}</span></span></code></pre></p><h3>Running project specific code with leiningen</h3><p>The <code>run</code> task can be used to invoke code in your project.  To use lein's <code>run</code> task we need to add a <code>-main</code> function to the <code>my.src-generator</code> namespace.</p><p><pre><code class="clojure hljs"><span class="forms"><span class="list">(<span class="symbol">defn</span> <span class="def">-main</span> <span class="vector">&#91;<span class="symbol">&amp;</span> <span class="local">args</span>&#93;</span><br>  <span class="list">(<span class="symbol">generate</span>)</span>)</span></span></code></pre></p><p>In the <code>project.clj</code> file we also tell lein about the main namespace. 
In order to avoid AOT compilation of the main namespace, we mark it with <code>:skip-aot</code> metadata.</p><p><pre><code class="clojure hljs"><span class="forms"><span class="keyword">:main</span> <span class="meta">&#94;<span class="keyword">:skip-aot</span> <span class="symbol">my.src-generator</span></span></span></code></pre></p><h3>Customising the jar contents</h3><p>The generated files need to end up in the jar (and possibly be compiled), so we put them on the <code>:source-paths</code> in the project.  If we had wanted to include the sources without further processing, we could have added the generated directory to <code>:resource-paths</code> instead.</p><p><pre><code class="clojure hljs"><span class="forms"><span class="keyword">:source-paths</span> <span class="vector">&#91;<span class="string">&quot;src&quot;</span> <span class="string">&quot;target/generated&quot;</span>&#93;</span></span></code></pre></p><h3>Extending the build process</h3><p>Now we can tell lein to generate the source files whenever we use the project.  We do this by adding the <code>run</code> task to the <code>:prep-tasks</code> key.  Leiningen runs all the tasks in <code>:prep-tasks</code> before any task invoked by the lein command line.</p><p>The tricky bit here is that the <code>run</code> task will itself invoke the <code>:prep-tasks</code>, so we want to make sure we don't end up calling the task recursively and generating a stack overflow.  To solve this, add a <code>gen</code> profile, and disable the prep tasks in it.  We use the <code>:replace</code> metadata to ensure this definition takes precedence.  
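</p><p>To see why this guard is needed, here is a sketch of the recursion without it (a hypothetical trace, not real lein output):</p><pre><code class="shell">$ lein compile
# :prep-tasks includes the run task
#   run itself triggers :prep-tasks before executing
#     which triggers run again ... and so on, until the stack overflows</code></pre><p>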
See the <a href='https://github.com/technomancy/leiningen/blob/master/doc/PROFILES.md#merging'>leiningen profile documentation</a> for more information on <code>:replace</code> and its sibling <code>:displace</code>.</p><p><pre><code class="clojure hljs"><span class="forms"><span class="keyword">:gen</span> <span class="map">{<span class="keyword">:prep-tasks</span> <span class="meta">&#94;<span class="keyword">:replace</span> <span class="vector">&#91;&#93;</span></span>}</span></span></code></pre></p><p>Then use this profile when setting the <code>:prep-tasks</code> key in the project.</p><p><pre><code class="clojure hljs"><span class="forms"><span class="keyword">:prep-tasks</span> <span class="vector">&#91;<span class="vector">&#91;<span class="string">&quot;with-profile&quot;</span> <span class="string">&quot;+gen,+dev&quot;</span> <span class="string">&quot;run&quot;</span>&#93;</span>  <span class="string">&quot;compile&quot;</span>&#93;</span></span></code></pre></p><p>Now when we run any command, the sources are generated.</p><h3>Adding an alias</h3><p>Finally, we may want to invoke just the source generation, so let's create an alias to make <code>lein gen</code> run the generator.  
We need the <code>gen</code> profile for this, or otherwise the generator will run twice.</p><p><pre><code class="clojure hljs"><span class="forms"><span class="keyword">:aliases</span> <span class="map">{<span class="string">&quot;gen&quot;</span> <span class="vector">&#91;<span class="string">&quot;with-profile&quot;</span> <span class="string">&quot;+gen,+dev&quot;</span> <span class="string">&quot;run&quot;</span>&#93;</span>}</span></span></code></pre></p><h3>The final project.clj</h3><p>For reference, the final project.clj looks like this:</p><p><pre><code class="clojure hljs"><span class="forms"><span class="list">(<span class="symbol">defproject</span> <span class="symbol">my-proj</span> <span class="string">&quot;0.1.0-SNAPSHOT&quot;</span><br>  <span class="keyword">:dependencies</span> <span class="vector">&#91;<span class="vector">&#91;<span class="symbol">org.clojure/clojure</span> <span class="string">&quot;1.4.0&quot;</span>&#93;</span>&#93;</span><br>  <span class="keyword">:source-paths</span> <span class="vector">&#91;<span class="string">&quot;src&quot;</span> <span class="string">&quot;target/generated&quot;</span>&#93;</span><br>  <span class="keyword">:main</span> <span class="meta">&#94;<span class="keyword">:skip-aot</span> <span class="symbol">my.src-generator</span></span><br>  <span class="keyword">:prep-tasks</span> <span class="vector">&#91;<span class="vector">&#91;<span class="string">&quot;with-profile&quot;</span> <span class="string">&quot;+gen,+dev&quot;</span> <span class="string">&quot;run&quot;</span>&#93;</span>  <span class="string">&quot;compile&quot;</span>&#93;</span><br>  <span class="keyword">:profiles</span> <span class="map">{<span class="keyword">:dev</span> <span class="map">{<span class="keyword">:source-paths</span> <span class="vector">&#91;<span class="string">&quot;src&quot;</span> <span class="string">&quot;dev-src&quot;</span> <span class="string">&quot;target/generated&quot;</span>&#93;</span>}</span><br>     
        <span class="keyword">:gen</span> <span class="map">{<span class="keyword">:prep-tasks</span> <span class="meta">&#94;<span class="keyword">:replace</span> <span class="vector">&#91;&#93;</span></span>}</span>}</span><br>  <span class="keyword">:aliases</span> <span class="map">{<span class="string">&quot;gen&quot;</span> <span class="vector">&#91;<span class="string">&quot;with-profile&quot;</span> <span class="string">&quot;+gen,+dev&quot;</span> <span class="string">&quot;run&quot;</span>&#93;</span>}</span>)</span></span></code></pre></p><h3>Conclusion</h3><p>This required using many of lein's features to get working - hopefully you'll find a use for some of them.</p>]]>
    </content>
  </entry>
  <entry>
    <id>http://hugoduncan.org/post/alembic_reloads_your_project_clj_dependencies.html</id>
    <link href="http://hugoduncan.org/post/alembic_reloads_your_project_clj_dependencies.html"/>
    <title>Alembic Reloads your Leiningen project.clj Dependencies</title>
    <summary>When working on a project, you sometimes need to add a dependency.  Using Alembic you can add the dependency in your project.clj file, and then call alembic.still/load-project to load the dependency into a running repl, without losing your repl state.</summary>
    <updated>2013-08-29T23:59:59+00:00</updated>
    <content type="html">
      <![CDATA[<p>You're working away in a Clojure REPL, when you realise you need to add a dependency.  You add the dependency to your <a href='http://leiningen.org' title='Leiningen'>leiningen</a> <code>project.clj</code> file and then?  Instead of shutting down your REPL, losing whatever state you have built up, you can use <a href='https://github.com/pallet/alembic#alembic' title='Alembic'>Alembic</a> to load the new dependencies.  Simply call <code>&#40;alembic.still/load-project&#41;</code>.</p><p>Of course, it still has to work within the confines of the JVM's classloaders, so you can only add dependencies, and not modify versions or remove dependencies, but this should still cover a lot of use cases.</p><p>To use alembic on a single project, simply add it as a dependency in your <code>:dev</code> profile in <code>project.clj</code>:</p><pre><code class="clj">:profiles {:dev {:dependencies &#91;&#91;alembic &quot;0.2.0&quot;&#93;&#93;}}
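
;; then, in the running REPL, load the newly added dependency
;; without restarting (as described above, just call load-project):
;;
;;   (require 'alembic.still)
;;   (alembic.still/load-project)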
</code></pre><p>To make alembic available in all your projects, add it to the <code>:user</code> profile in <code>&#126;/.lein/profiles.clj</code> instead:</p><pre><code class="clj">{:user {:dependencies &#91;&#91;alembic &quot;0.2.0&quot;&#93;&#93;}}
</code></pre><p>Alembic also allows you to directly add dependencies without editing your <code>project.clj</code> file, using the <code>distill</code> function.  Use this if you are just exploring libraries, for example.</p><p>Finally, a big thank you to <a href='http://blog.raynes.me/' title='Raynes'>Anthony Grimes</a> and the other <a href='https://github.com/flatland/' title='flatland'>flatland</a> developers for removing classlojure's dependency on <code>useful</code>, which should make this all much more robust.</p>]]>
    </content>
  </entry>
  <entry>
    <id>http://hugoduncan.org/post/evaluate_clojure_in_emacs_markdown_buffers.html</id>
    <link href="http://hugoduncan.org/post/evaluate_clojure_in_emacs_markdown_buffers.html"/>
    <title>Evaluate and Format Clojure in Emacs Markdown Buffers</title>
    <summary>When editing Clojure blocks in markdown or asciidoc documents, allow formatting and evaluation of code blocks with clojure-mode.  Using mmm-mode, you can mix whichever major modes you want.</summary>
    <updated>2013-08-26T23:59:59+00:00</updated>
    <content type="html">
      <![CDATA[<p>When writing documentation or blog posts about Clojure code, it is very useful to be able to format Clojure code blocks using <a href='https://github.com/clojure-emacs/clojure-mode' title='clojure-mode'><code>clojure-mode</code></a> and evaluate code with <a href='https://github.com/clojure-emacs/nrepl.el' title='nrepl.el'><code>nrepl.el</code></a>.</p><p>This can be enabled using <a href='https://github.com/purcell/mmm-mode/' title='mmm-mode'><code>mmm-mode</code></a>, which allows a single buffer to use different major modes for different sections of the buffer (and is not limited to just web modes). Install <code>mmm-mode</code> using <code>M-x package-install mmm-mode</code>, or using <code>M-x el-get-install mmm-mode</code> from the excellent <a href='http://tapoueh.org/emacs/el-get.html' title='el-get'><code>el-get</code></a>, or by checking out the project from github and installing manually.</p><p>To configure this for clojure and markdown, add this to your <code>init.el</code> or <code>.emacs</code> file.</p><pre><code class="lisp">&#40;require 'mmm-auto&#41;
&#40;mmm-add-classes
 '&#40;&#40;markdown-clojure
    :submode clojure-mode
    :face mmm-declaration-submode-face
    :front &quot;&#94;```clj&#91;\n\r&#93;+&quot;
    :back &quot;&#94;```$&quot;&#41;&#41;&#41;

&#40;setq mmm-global-mode 'maybe&#41;
&#40;mmm-add-mode-ext-class 'markdown-mode nil 'markdown-clojure&#41;
</code></pre><p>After evaluating the above, or restarting emacs, you can test multi-mode support by opening a markdown document, or creating a new one, and adding a clojure source block, e.g.:</p><p><pre><code class="clj">&#40;defn my-fn &#91;x&#93;
  &#40;inc x&#41;&#41;

&#40;my-fn 1&#41;
</code></pre></p><p>Inside the code block you can format and evaluate your code as in any <code>clojure-mode</code> buffer, and the code will display exactly as in a <code>.clj</code> file.  By default, evaluation uses a running inferior lisp process, which you must start yourself.  To use a running <a href='https://github.com/clojure-emacs/nrepl.el' title='nrepl.el'>nrepl</a> session instead, use <code>M-x nrepl-interaction-mode</code> inside the code block.</p><h2>Using with AsciiDoc</h2><p>This technique is not limited to clojure and markdown; it can be made to work whenever you would like differing major modes in distinct parts of your Emacs buffers.  You can add classes to <code>mmm-mode</code> for as many major mode combinations as you need.  The regions for each major mode are detected using regular expressions (or by some function).</p><p>For example, if you're writing asciidoc, you might use:</p><pre><code class="lisp">&#40;mmm-add-classes
 '&#40;&#40;asciidoc-clojure
    :submode clojure-mode
    :face mmm-declaration-submode-face
    :front &quot;\\&#91;source, clojure\\&#93;&#91;\n\r&#93;+----&#91;\n\r&#93;+&quot;
    :back &quot;&#94;----$&quot;&#41;&#41;&#41;
&#40;mmm-add-mode-ext-class 'adoc-mode nil 'asciidoc-clojure&#41;
&#40;mmm-add-mode-ext-class 'doc-mode nil 'asciidoc-clojure&#41;
</code></pre><h2>Summary</h2><p><code>mmm-mode</code> allows you to flexibly use multiple major modes in different parts of a single emacs buffer.  Here we have shown how to use it for <code>clojure-mode</code> code blocks in markdown or asciidoc, but it is in no way limited to this, and it allows fine-grained customisation of the appearance and behaviour of each major mode block. I'm sure you'll find your own uses for <code>mmm-mode</code>.</p>]]>
    </content>
  </entry>
  <entry>
    <id>http://hugoduncan.org/post/snarf-pgp-keys-in-emacs-mu4e.html</id>
    <link href="http://hugoduncan.org/post/snarf-pgp-keys-in-emacs-mu4e.html"/>
    <title>Snarf PGP Keys from Signed Messages in Emacs mu4e</title>
    <summary>Snarf PGP Keys from Signed Messages in the mu4e message view.</summary>
    <updated>2013-08-25T23:59:59+00:00</updated>
    <content type="html">
      <![CDATA[<p>I just moved to <a href='http://www.djcbsoftware.nl/code/mu/' title='mu mail reader'>mu</a> for reading my email.  One feature I was missing was the ability to retrieve <a href='http://en.wikipedia.org/wiki/Pretty_Good_Privacy' title='Pretty Good Privacy'>PGP</a> keys for signed messages.</p><p>When you receive a signed message, <code>mu</code> shows the verification status in the <code>Signature</code> field in the message view (see <a href='http://www.djcbsoftware.nl/code/mu/mu4e/MSGV-Crypto.html#MSGV-Crypto' title='mu message cryptography'>MSGV-Crypto</a>).  If you don't have the sender's PGP key on your keyring, this will show <code>unverified</code>.  Clicking on the <code>Details</code> link within the field will show the sender's key id.  To manually import the key you can use <a href='http://www.gnupg.org/' title='GNU Privacy Guard'><code>gpg</code></a>:</p><pre><code>$ gpg --recv &lt;the-key-id&gt;
</code></pre><p>This seemed a little laborious, so some automation was in order. <code>mu4e</code> allows you to define actions that can be run on messages (or attachments), so I just wrote an action to do this.</p><pre><code class="lisp">&#40;defun mu4e-view-snarf-pgp-key &#40;&amp;optional msg&#41;
  &quot;Snarf the pgp key for the specified message.&quot;
  &#40;interactive&#41;
  &#40;let&#42; &#40;&#40;msg &#40;or msg &#40;mu4e-message-at-point&#41;&#41;&#41;
          &#40;path &#40;mu4e-message-field msg :path&#41;&#41;
          &#40;cmd &#40;format &quot;%s verify --verbose %s&quot;
                 mu4e-mu-binary
                 &#40;shell-quote-argument path&#41;&#41;&#41;
          &#40;output &#40;shell-command-to-string cmd&#41;&#41;&#41;
    &#40;let &#40;&#40;case-fold-search nil&#41;&#41;
      &#40;when &#40;string-match &quot;key:\\&#40;&#91;A-F0-9&#93;+\\&#41;&quot; output&#41;
        &#40;let&#42; &#40;&#40;cmd &#40;format &quot;%s --recv %s&quot;
                            epg-gpg-program &#40;match-string 1 output&#41;&#41;&#41;
               &#40;output &#40;shell-command-to-string cmd&#41;&#41;&#41;
          &#40;message output&#41;&#41;&#41;&#41;&#41;&#41;

</code></pre><p>This works by parsing the output of the <code>mu</code> program itself, as displayed in the <code>Details</code> window, to obtain the PGP key id.  It then executes the <code>gpg --recv</code> command, passing in the parsed key id.</p><p>To install the action, we simply add it to <code>mu4e-view-actions</code>:</p><pre><code class="lisp">&#40;add-to-list 'mu4e-view-actions
             '&#40;&quot;Snarf PGP keys&quot; . mu4e-view-snarf-pgp-key&#41; t&#41;
</code></pre>]]>
    </content>
  </entry>
  <entry>
    <id>http://hugoduncan.org/post/clojurescript-libs-with-js-dependencies.html</id>
    <link href="http://hugoduncan.org/post/clojurescript-libs-with-js-dependencies.html"/>
    <title>How to Build Clojurescript Libs with JavaScript Dependencies</title>
    <summary>A summary of different strategies for packaging JavaScript dependencies in a Clojurescript library</summary>
    <updated>2013-08-16T23:59:59+00:00</updated>
    <content type="html">
      <![CDATA[<p>Using JavaScript dependencies in a Clojurescript library seems to be hard.  It took me many hours to understand how it should work.  A big thanks to <a href='http://cemerick.com' title='Chas Emerick'>Chas Emerick</a> for setting me straight on most of this.</p><p>Luke Vanderhart <a href='http://lukevanderhart.com/2011/09/30/using-javascript-and-clojurescript.html' title='Luke Vanderhart&#39;s post on JavaScript libs'>posted</a> a general introduction to using JavaScript libraries in Clojurescript.  Go read it if you haven't already - this post assumes you have.</p><p>While that post is an excellent description of using JavaScript in a Clojurescript application, it doesn't really address JavaScript in Clojurescript libraries, which has the additional problem of how to ensure the JavaScript dependency is available in the consumer of the library.  A Clojurescript library should definitely be capable of providing its dependencies, but should also allow the consumer to override the version of these dependencies.</p><h2>Don't package the JavaScript</h2><p>The first approach is to simply not provide the JavaScript at all.  This is the approach taken by <a href='https://github.com/ibdknox/jayq' title='jayq'>jayq</a> for example.  The consumer of jayq, or any library that uses jayq, is required to provide jQuery through the JavaScript runtime.  This can take the form of a <code>&lt;script&gt;</code> link in the browser page, or a call to <code>webPage#injectJs</code> in phantomJS.  
The compiler <code>:libs</code> or <code>:foreign-libs</code> options cannot be used to provide the dependency, as there is no way for the compiler to know that jayq depends on the namespace provided by these options.</p><p>For the consumer of the library to use compiler <code>:optimizations</code> other than <code>:whitespace</code>, they will need to provide an <code>:externs</code> file.</p><h2>Package the JavaScript</h2><p>The second approach is to package the JavaScript via a Clojurescript namespace.  This involves adding a <code>require</code> on a namespace to the code that directly depends on the JavaScript, and arranging for that Clojurescript namespace to load the JavaScript, using either the compiler <code>:libs</code> or <code>:foreign-libs</code> options.</p><p>The Clojurescript library can make the JavaScript library available in its resources.  The library consumer can then use that resource via the <code>:libs</code> or <code>:foreign-libs</code> options, depending on whether or not the JavaScript contains a <code>goog.provides</code> call.</p><p>If the JavaScript is packaged with a <code>goog.provides</code> call, the consumer cannot replace its version when using <code>:libs &#91;&quot;&quot;&#93;</code>; either an explicit prefix in <code>:libs</code>, or a <code>:foreign-libs</code> entry that maps the namespace explicitly, is needed to prevent more than one JavaScript implementation trying to provide the same Clojure namespace.</p><p>For example, the <a href='https://github.com/cemerick/pprng' title='pprng'>pprng</a> library packages its dependency with a <code>goog.provides</code> call, allowing the use of <code>:libs &#91;&quot;&quot;&#93;</code> to pull in the dependency.  
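</p><p>As an illustrative sketch (the file names and namespace here are hypothetical), the <code>:foreign-libs</code> route described above might look like this in a consumer's compiler options:</p><pre><code class="clj">;; hypothetical compiler options for a consumer project
{:optimizations :advanced
 ;; map the vanilla JavaScript file to the namespace
 ;; that the Clojurescript library requires
 :foreign-libs &#91;{:file &quot;js/jquery.js&quot;
                 :provides &#91;&quot;vendor.jquery&quot;&#93;}&#93;
 ;; externs are needed once :optimizations goes beyond :whitespace
 :externs &#91;&quot;externs/jquery.js&quot;&#93;}
</code></pre><p>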
The <a href='https://github.com/hugoduncan/papadom' title='papadom'>papadom</a> library, on the other hand, provides vanilla JavaScript dependencies, and requires the use of the more verbose <code>:foreign-libs</code> option.</p><p>If the JavaScript is to be provided in the runtime, then the consumer will have to provide an empty namespace definition to satisfy the require in the Clojurescript library, and the <code>:externs</code> file as in the first case.</p><h2>Postscript</h2><p>There are several assumptions in much of the documentation that I didn't see explicitly explained.  I'll record these here for posterity.</p><p>A Clojurescript library is always a source code library.  There is no such thing as linking of compiled Clojurescript artifacts.</p><p>Neither <code>:libs</code> nor <code>:foreign-libs</code> actually changes how the JavaScript is accessed within the Clojurescript code.  If you include jQuery via <code>:libs</code> and a <code>require</code>, you still access it through <code>js/jQuery</code>.  The <code>require</code> of the namespace specified by <code>goog.provide</code>, or of the namespace specified in the <code>:foreign-libs</code> <code>:provides</code> key, simply ensures the JavaScript is loaded.</p><p>The choice of compiler <code>:optimizations</code> affects what information you need to provide, and this differs depending on whether you are providing JavaScript libraries through the runtime (e.g. <code>&lt;script&gt;</code> tags in the browser), or through the <code>:libs</code> or <code>:foreign-libs</code> compiler options.  The simplest route is to use the compiler options.  When providing the JavaScript via the runtime, everything should just work if you are using no optimisation, or just <code>:whitespace</code>, but as soon as you try anything else, you will need to provide an <code>:externs</code> definition for the JavaScript libraries.</p>]]>
    </content>
  </entry>
  <entry>
    <id>http://hugoduncan.org/post/webapp_with_core.async.html</id>
    <link href="http://hugoduncan.org/post/webapp_with_core.async.html"/>
    <title>Exploring a todo app with core.async</title>
    <summary>Builds the equivalent of the angularJS TODO example with core.async</summary>
    <updated>2013-08-15T23:59:59+00:00</updated>
    <content type="html">
      <![CDATA[<p>We're going to build an equivalent of the <a href='http://angularjs.org/#add-some-control' title='Angular TODO example'>AngularJS TODO example</a> using core.async and a templating library, <a href='https://github.com/hugoduncan/papadom' title='Papadom templating library'>papadom</a>, that I've written to help with this.</p><p><a href='https://github.com/clojure/clojurescript' title='Clojurescript'>Clojurescript</a> recently gained a <a href='http://en.wikipedia.org/wiki/Communicating_sequential_processes' title='Communicating Sequential Processes'>CSP</a> implementation via <a href='http://clojure.com/blog/2013/06/28/clojure-core-async-channels.html' title='Clojure core.async Channels'>core.async</a>, similar to <a href='http://golang.org/ref/spec#Channel_types' title='Go Channels'>Go's channels</a>, or <a href='http://www.cs.cornell.edu/Courses/cs312/2006fa/recitations/rec24.html' title='CML Channels'>CML's channels</a> (CML also has a nice select).  Bruce Hauman started exploring this with <a href='http://rigsomelight.com/2013/07/18/clojurescript-core-async-todos.html' title='Bruce Hauman&#39;s ClojureScript Core.Async Todos'>ClojureScript Core.Async Todos</a>, and David Nolen has been looking at how to use core.async for <a href='http://swannodette.github.io/2013/07/31/extracting-processes/' title='David Nolen&#39;s CSP is Responsive Design'>responsive design</a>.  
In this post, we'll build the TODO example, and take it a little further.</p><h2>Basic Display</h2><p>We'll start with just displaying a list of todo items.  For this we'll need a template, so we'll just write this in HTML, and add a <code>t-template</code> attribute, which enables us to use mustache-style templating of values to display.  This doesn't use mustache sections for looping, in order to preserve valid HTML markup.</p><pre><code class="html">&lt;h1&gt;TODOS&lt;/h1&gt;
&lt;ul class=&quot;unstyled&quot;&gt;
  &lt;li t-template=&quot;todos&quot;&gt;{{text}}&lt;/li&gt;
&lt;/ul&gt;
</code></pre><p>To get this to show something we'll need some code:</p><pre><code class="clj">&#40;ns app
  &#40;:require
   &#91;papadom.template :refer &#91;compile-templates render&#93;&#93;&#41;&#41;

&#40;defn start &#91;&#93;
  &#40;compile-templates&#41;
  &#40;render {:todos &#91;{:text &quot;learn papadom&quot; :done false}
                   {:text &quot;write a papadom app&quot; :done false}&#93;}&#41;&#41;
</code></pre><p>When you call <code>app.start&#40;&#41;</code> from the page containing the above template, you'll see a list of two todo entries.</p><h2>Adding an event</h2><p>Now that we have something displayed, let's add a checkbox to mark todo items as done:</p><pre><code class="html">&lt;ul class=&quot;unstyled&quot;&gt;
  &lt;li t-template=&quot;todos&quot;&gt;
    &lt;input type=&quot;checkbox&quot; t-prop=&quot;done&quot; t-event=&quot;done&quot;
           t-id=&quot;index&quot; index=&quot;{{@index}}&quot;&gt;
    &lt;span&gt;{{text}}&lt;/span&gt;
  &lt;/li&gt;
&lt;/ul&gt;
</code></pre><p>The <code>t-prop</code> attribute tells the template which data value the checkbox should display.</p><p>The <code>t-event</code> attribute specifies that we want an event.  When the checkbox is clicked, we will get a core.async message with a <code>:done</code> event type.  We need to know which todo was clicked, so we use the <code>t-id</code> attribute to list the attributes whose values should be sent as the event data &ndash; in this case the index, which takes its value from the handlebars-style <code>@index</code> property.</p><p>Now we need some code to process the events.  To do this we'll define an <code>app</code> function that will be passed a state atom containing a map with our todos state, and a core.async channel from which to read events.  The function will loop over events, and dispatch them as required.</p><pre><code class="clj">&#40;defn app
  &#91;state event-chan&#93;
  &#40;go
   &#40;loop &#91;&#91;event event-data&#93; &#40;&lt;! event-chan&#41;&#93;
     &#40;case event
       :done
       &#40;let &#91;nv &#40;boolean &#40;:checked event-data&#41;&#41;&#93;
         &#40;swap! state assoc-in &#91;:todos &#40;:index event-data&#41; :done&#93; nv&#41;&#41;&#41;
     &#40;recur &#40;&lt;! event-chan&#41;&#41;&#41;&#41;&#41;
</code></pre><p>When the <code>app</code> function receives a <code>:done</code> event, it will update the state atom appropriately.  Now we have our state updating, we'll need to display it, which we can again do with the <code>render</code> function.</p><pre><code class="clj">&#40;defn show-state &#91;state&#93;
  &#40;render &quot;state&quot; state&#41;&#41;
</code></pre><p>We still need to get <code>show-state</code> called, and we'll arrange this in a modified <code>start</code> function.  This will create an atom for the state, and add a watch on the atom that will call <code>show-state</code>.</p><pre><code class="clj">&#40;defn start
  &#91;&#93;
  &#40;let &#91;event-chan &#40;chan&#41;
        state &#40;atom nil&#41;&#93;
    &#40;compile-templates&#41;
    &#40;template-events event-chan&#41;
    &#40;add-watch state :state &#40;fn &#91;key ref old new&#93; &#40;show-state new&#41;&#41;&#41;
    &#40;reset! state {:todos &#91;{:text &quot;Learn papadom&quot; :done false}
                           {:text &quot;Build a papadom app&quot; :done false}&#93;}&#41;
    &#40;app state event-chan&#41;&#41;&#41;&#41;
</code></pre><p>We've also added a core.async channel, <code>event-chan</code>, which we've passed to <code>template-events</code> to arrange delivery of the events defined in our template.  We pass this channel to the <code>app</code> function to start processing the events.</p><p>This shows the basic structure of the application.</p><h2>Adding New Todo Elements</h2><p>To allow you to add new todo items, we'll add a form to our template, specifying a <code>t-event</code> attribute, which will cause an event to be sent when the form is submitted, with the form's input values as the event data.</p><pre><code class="html">&lt;form t-event=&quot;add-todo&quot;&gt;
  &lt;input type=&quot;text&quot; t-prop=&quot;text&quot; size=&quot;30&quot; placeholder=&quot;add new todo here&quot;&gt;
  &lt;input class=&quot;btn btn-primary&quot; type=&quot;submit&quot; value=&quot;add&quot;&gt;
&lt;/form&gt;
</code></pre><p>To process this new event, we'll add a clause to the <code>case</code> form in the <code>app</code> function's loop.</p><pre><code class="clj">:add-todo
&#40;swap! state update-in &#91;:todos&#93;
       conj {:text &#40;:text &#40;input-seq-&gt;map event-data&#41;&#41;
             :done false}&#41;
</code></pre><p>This uses the <code>input-seq-&gt;map</code> helper to convert the data from the form into a map, from which we extract the <code>:text</code> value (defined by <code>t-prop</code> in the <code>input</code> element).</p><p>And we're done.  To see a full working example, have a look at the <a href='https://github.com/hugoduncan/papadom/blob/master/examples/todo/resources/public/index.html' title='TODO HTML Template'>template</a> and <a href='https://github.com/hugoduncan/papadom/blob/master/examples/todo/src/papadom/example/todo.cljs' title='TODO Clojurescript Code'>code</a> in the todo example of papadom. To run the example:</p><pre><code class="shell">git clone https://github.com/hugoduncan/papadom.git
cd papadom/examples/todo
lein ring server
</code></pre>]]>
    </content>
  </entry>
  <entry>
    <id>http://hugoduncan.org/post/configuration_in_templates_is_not_configuration_as_code.html</id>
    <link href="http://hugoduncan.org/post/configuration_in_templates_is_not_configuration_as_code.html"/>
    <title>Configuration in Templates is not Configuration as Code</title>
    <summary>If you have configuration that uses template configuration files, you are practising neither configuration as code nor configuration as data.  Having configuration locked away in template files reduces its visibility, and makes it hard for you to query it. It might be easier to write configuration code to use templates, but it will come back to bite you.</summary>
    <updated>2010-10-04T23:59:59+00:00</updated>
    <content type="html">
      <![CDATA[<p><p>If you have configuration that uses template configuration files, you are practising neither configuration as code nor configuration as data.  Having configuration locked away in template files reduces its visibility, and makes it hard for you to query it. It might be easier to write configuration code to use templates, but it will come back to bite you.</p></p><p><p>One of the first things I implemented in <a href="http://github.com/hugoduncan/pallet">pallet</a> was a templating mechanism, because configuration management tools use templates, right?  I even built a template selection mechanism, just like <a href="http://wiki.opscode.com/display/chef/Templates">Chef's</a>.</p></p><p><p>I have come to realise, however, that having configuration in template files is not particularly useful. There are three major problems you are likely to encounter.  Firstly, template files are not visible; secondly, you cannot query the data in the template files; and lastly, you will need to touch multiple files to add or modify parameters.</p></p><p><p>Visibility at the point of usage is important, especially in a team environment.  If you have to find the template file and look at its content when reading your configuration code, then the chances are you assume it hasn't changed, and skip the contents. Making an analogy to the world of software development, templates are like global variables in one sense. You can change the operation of a program with a global variable modified in some obscure place, and in the same way, you can change your system configuration by changing a template file, tucked away in some folder, and not visible from where you are actually calling your configuration crate/recipe.</p></p><p><p>The ability to query configuration settings allows not just finding out, for example, which directory a log file is in, but also enables you to put tools on top of your configuration data.  
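</p></p><p><p>As a hypothetical sketch (the names here are purely illustrative), configuration held as plain data can be queried directly:</p></p><p><pre class="clojure">
;; hypothetical example - config as a plain map
(def config {:tomcat {:log-dir "/var/log/tomcat6"
                      :port 8080}})

;; which directory is the log file in?
(get-in config [:tomcat :log-dir]) ; => "/var/log/tomcat6"
</pre></p><p><p>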
Template configuration files suffer on two counts here - they are separate text files that require parsing to be read, and the format of each configuration file is different.</p></p><p><p>The last point concerns the flexibility of your configuration. If you have used template files, with hard-coded parameter values, and you then want to modify your configuration to dynamically set one of those hard-coded values, you have to modify all the specialised versions of the existing templates, and specify the value in code. You have to touch multiple files - lots of room for making typos.</p></p><p><p>My goal for pallet, then, is to have all configuration supplied as arguments to crates.  For most packages a hash map is a sufficient abstraction for providing the data, but when this gets too cumbersome, we'll use a DSL that mirrors the original configuration file language.</p></p><p><p>Goodbye hidden configuration!</p></p>]]>
    </content>
  </entry>
  <entry>
    <id>http://hugoduncan.org/post/configure_nagios_using_pallet.html</id>
    <link href="http://hugoduncan.org/post/configure_nagios_using_pallet.html"/>
    <title>Configure Nagios using Pallet</title>
    <summary>Basic Nagios support was recently added to pallet, and while very simple to use, this blog post should make it even simpler. The overall philosophy is to configure the nagios service monitoring definitions along with the service itself, rather than have monolithic nagios configuration, divorced from the configuration of the various nodes.</summary>
    <updated>2010-08-18T23:59:59+00:00</updated>
    <content type="html">
      <![CDATA[<p><p>Basic Nagios support was recently added to <a href="http://github.com/hugoduncan/pallet">pallet</a>, and while very simple to use, this blog post should make it even simpler. The overall philosophy is to configure the nagios service monitoring definitions along with the service itself, rather than have a monolithic nagios configuration, divorced from the configuration of the various nodes.</p></p><p><p>As an example, we can configure a machine to have its SSH service, CPU load, number of processes and number of users monitored. Obviously, you would normally be monitoring several different types of nodes, but there is no difference as far as pallet is concerned.</p></p><p><p>We start by requiring various pallet components.  These would normally be part of a <code>ns</code> declaration, but are provided here for ease of use at the REPL.</p></p><p><pre class="clojure">
(require
 '[pallet.crate.automated-admin-user :as admin-user]
 '[pallet.crate.iptables :as iptables]
 '[pallet.crate.ssh :as ssh]
 '[pallet.crate.nagios-config :as nagios-config]
 '[pallet.crate.nagios :as nagios]
 '[pallet.crate.postfix :as postfix]
 '[pallet.resource.service :as service])
</pre></p><p><h2>Node to be Monitored by Nagios</h2></p><p><p>Now we define the node to be monitored. We set up a machine that has <abbr>SSH</abbr> running, and configure <code>iptables</code> to allow access to <abbr>SSH</abbr>, with a throttled connection rate (six connections/minute by default).</p></p><p><pre class="clojure">
(pallet.core/defnode monitored
  []
  :bootstrap [(admin-user/automated-admin-user)]
  :configure [;; set iptables for restricted access
              (iptables/iptables-accept-icmp)
              (iptables/iptables-accept-established)
              ;; allow connections to ssh
              ;; but throttle connection requests
              (ssh/iptables-throttle)
              (ssh/iptables-accept)])
</pre></p><p><p>Monitoring of the <abbr>SSH</abbr> service is configured by simply adding <code>(ssh/nagios-monitor)</code>.</p></p><p><p>Remote monitoring is implemented using nagios' <code>nrpe</code> plugin, which we add with <code>(nagios-config/nrpe-client)</code>.  To make nrpe accessible to the nagios server, we open the port that the nrpe agent runs on using <code>(nagios-config/nrpe-client-port)</code>, which restricts access to the nagios server node. We also add a phase, <code>:restart-nagios</code>, that can be used to restart the nrpe agent.</p></p><p><p>Pallet comes with some preconfigured nrpe checks, and we add <code>nrpe-check-load</code>, <code>nrpe-check-total-procs</code> and <code>nrpe-check-users</code>. The final configuration looks like this:</p></p><p><pre class="clojure">
(pallet.core/defnode monitored
  []
  :bootstrap [(admin-user/automated-admin-user)]
  :configure [;; set iptables for restricted access
              (iptables/iptables-accept-icmp)
              (iptables/iptables-accept-established)
              ;; allow connections to ssh
              ;; but throttle connection requests
              (ssh/iptables-throttle)
              (ssh/iptables-accept)
              ;; monitor ssh
              (ssh/nagios-monitor)
              ;; add nrpe agent, and only allow
              ;; connections from nagios server
              (nagios-config/nrpe-client)
              (nagios-config/nrpe-client-port)
              ;; add some remote checks
              (nagios-config/nrpe-check-load)
              (nagios-config/nrpe-check-total-procs)
              (nagios-config/nrpe-check-users)]
  :restart-nagios [(service/service
                    "nagios-nrpe-server"
                    :action :restart)])
</pre></p><p><h2>Nagios Server</h2> <p>We now configure the nagios server node. The nagios server is installed with <code>(nagios/nagios "nagiospwd")</code>, specifying the password for the nagios web interface, and we add a phase, <code>:restart-nagios</code>, that can be used to restart nagios.</p></p><p><p>Nagios also requires an <abbr>MTA</abbr> for notifications, and here we install postfix.  We add a contact, which we make a member of the "admins" contact group, which is notified as part of the default host and service templates.</p></p><p><pre class="clojure">
(pallet.core/defnode nagios
  []
  :bootstrap [(admin-user/automated-admin-user)]
  :configure [;; restrict access
              (iptables/iptables-accept-icmp)
              (iptables/iptables-accept-established)
              (ssh/iptables-throttle)
              (ssh/iptables-accept)
              ;; configure MTA
              (postfix/postfix
               "pallet.org" :internet-site)
              ;; install nagios
              (nagios/nagios "nagiospwd")
              ;; allow access to nagios web site
              (iptables/iptables-accept-port 80)
              ;; configure notifications
               (nagios/contact
                {:contact_name "hugo"
                 :service_notification_period "24x7"
                 :host_notification_period "24x7"
                 :service_notification_options "w,u,c,r"
                 :host_notification_options "d,r"
                 :service_notification_commands "notify-service-by-email"
                 :host_notification_commands "notify-host-by-email"
                 :email "my.email@my.domain"
                 :contactgroups [:admins]})]
  :restart-nagios [(service/service "nagios3"
                                    :action :restart)])
</pre></p><p><h2>Trying it out</h2> <p>That's it. To fire up both machines, we use pallet's <code>converge</code> command.</p></p><p><pre class="clojure">
(pallet.core/converge
 {monitored 1 nagios 1}
 service
 :configure :restart-nagios)
</pre></p><p><p>The nagios web interface is then accessible on the <code>nagios</code> node with the <code>nagiosadmin</code> user and specified password.  Real world usage would probably have several different monitored configurations, and restricted access to the <code>nagios</code> node.</p></p><p><h2>Still to do...</h2> <p>Support for nagios is not complete (e.g. remote command configuration still needs to be added, and it has only been tested on Ubuntu), but I would appreciate any feedback on the general approach.</p></p>]]>
    </content>
  </entry>
  <entry>
    <id>http://hugoduncan.org/post/mocking_clojure_functions_with_atticus.html</id>
    <link href="http://hugoduncan.org/post/mocking_clojure_functions_with_atticus.html"/>
    <title>Mocking Clojure Functions with Atticus</title>
    <summary>I dislike most mocking, and try and avoid it as much as possible. Sometimes it is however the only realistic way of testing.  I did a quick survey of mocking tools in clojure, and found them very much reflecting the Java mocking libraries. Clojure has a few more dynamic capabilities than Java, so I thought a little about how these could be used to make a simple mocking facility, and atticus is what I came up with.</summary>
    <updated>2010-05-18T23:59:59+00:00</updated>
    <content type="html">
      <![CDATA[<p><p>I dislike most mocking, and try and avoid it as much as possible. Sometimes it is however the only realistic way of testing.  I did a quick survey of mocking tools in clojure, and found them very much reflecting the Java mocking libraries. Clojure has a few more dynamic capabilities than Java, so I thought a little about how these could be used to make a simple mocking facility, and <a href="http://github.com/hugoduncan/atticus">atticus</a> is what I came up with.</p> <p>There is a consensus that mocking should be implemented by binding a function's var to a new function for the duration of a test, and atticus does this too. Atticus' premise is that we can do simple mocking by declaring the mock function as a local function definition, and have the local function do the argument checking, return value setting, etc.  The simplest case would be something like below, which checks the value of its argument and specifies a return value:</p> <pre class="clojure">
;; pull in namespaces
(use 'clojure.test)
(require 'atticus.mock)

;; define test which mocks f
(deftest mock-test
  (atticus.mock/expects
    [(f [arg]
       (is (= arg 1) "Check argument")
       arg)] ; set the return value
    ;; in a real test case this would be called
    ;; indirectly by some other function
    (is (= 1 (f 1)) "Call mocked function")))
</pre> <p>At the moment, I have added two macros to this.  <code>once</code>, which checks that a function is called once, and <code>times</code>, which checks that a function is called a specific number of times. The macros are used to wrap the body of the mock function, which keeps the function's expected behaviour in one place.</p> <pre class="clojure">
;; define test; f should be called just once
(deftest mock-test
  (atticus.mock/expects
    [(f [arg]
       (atticus.mock/once
         (is (= arg 1) "Check argument")
         arg))]
    (is (= 1 (f 1)) "Call mocked function")))
</pre> <pre class="clojure">
;; define test; f should be called exactly twice
(deftest mock-test
  (atticus.mock/expects
    [(f [arg]
       (atticus.mock/times 2
         (is (= arg 1) "Check argument")
         arg))]
    (is (= 1 (f 1)) "Call mocked function")
    (is (= 1 (f 1)) "Call mocked function")))
</pre> <p>So what do you think, is this a reasonable approach? Not having the explicit calls to <code>returns</code>, etc., might be seen as a loss of declarative clarity, but I for one prefer this, as it gives you the full power of the language to test the arguments and set the return value.</p> <h3>References</h3> <ul> <li><a href="http://s-expressions.com/2010/01/24/conjure-simple-mocking-and-stubbing-for-clojure-unit-tests/">conjure – simple mocking and stubbing for Clojure unit-tests</a></li> <li><a href="http://richhickey.github.com/clojure-contrib/mock-api.html">clojure.contrib.mock</a></li> <li><a href="http://code.google.com/p/test-expect/">test-expect</a></li> <li><a href="http://blog.n01se.net/?p=134">Using binding to mock out even “direct linked” functions in Clojure</a></li> </ul></p>]]>
    </content>
  </entry>
  <entry>
    <id>http://hugoduncan.org/post/provisioning_cloud_nodes_with_pallet.html</id>
    <link href="http://hugoduncan.org/post/provisioning_cloud_nodes_with_pallet.html"/>
    <title>Provisioning Cloud Nodes with Pallet</title>
    <summary>I recently needed to move a server from dedicated hosting to a cloud server. The existing server had been configured over time by several people, with little documentation.  I wanted to make sure that this time everything was documented, and what better way of doing that than using an automated configuration tool.</summary>
    <updated>2010-05-12T23:59:59+00:00</updated>
    <content type="html">
      <![CDATA[<p><p>I recently needed to move a server from dedicated hosting to a cloud server. The existing server had been configured over time by several people, with little documentation.  I wanted to make sure that this time everything was documented, and what better way of doing that than using an automated configuration tool.</p> <p>Looking around at the configuration tools, I couldn't find one I really liked, so I started <a href="http://github.com/hugoduncan/pallet">Pallet</a>. I'll explain why I didn't use an existing tool below, but first I wanted to show how to manage nodes in Pallet.</p></p><p><pre class="clojure">
;; Pull in the pallet namespaces
(require 'pallet.repl)
(pallet.repl/use-pallet)

;; Define a default node
(defnode mynode [])

;; Define the cloud account to use
(def service
  (compute-service "provider" "user" "password"
                   :log4j :ssh))

;; Create 2 nodes
(converge {mynode 2} service)
</pre></p><p><p>This example would create two nodes (cloud vm instances) with the tag "mynode" in your cloud account, as specified in the <code>service</code>.  This would give you the smallest size, ubuntu image on most clouds.  Of course, to do anything serious, you would want to specify the image you would like, and you would probably like some configuration of the nodes.  So carrying on the above example: </p> <pre class="clojure">
;; Pull in the needed crates
(use 'pallet.crate.automated-admin-user)
(use 'pallet.crate.java)

;; Define a new node that will use the Java JDK
(defnode javanode
  [:ubuntu :X86_64 :smallest
   :os-description-matches "[^J]+9.10[^32]+"]
  :bootstrap [(automated-admin-user)]
  :configure [(java :openjdk :jdk)])

;; Create a new node, and remove the previous ones
(converge {javanode 1 mynode 0} service)
</pre></p><p><p>This would stop the two nodes that were created before, and create a new one, with the specified ubuntu version.  On first boot, it would create a user account with your current username, authorize your id_rsa key on that account, and give it sudo permissions.  Every time converge is run, it also ensures that the openjdk JDK is installed.</p></p><p><p>The configuration to be applied is specified as a call to a crate - automated-admin-user and java in the example above. Crates are just clojure functions that specify some configuration or other action on the nodes (they're equivalent to Chef's recipes, which Pallet can also execute using chef-solo). Pallet can be extended with your own crates, and crates can specify general actions, not just configuration.  <code>lift</code> is a companion to <code>converge</code>, and can be used to apply crates to existing nodes (including local VM's).  The hypothetical example below would execute <code>my-backup-crate</code> on all the "mynode" nodes.</p></p><p><pre class="clojure">
(defnode mynode [] :backup [(my-backup-crate)])
(lift mynode service :backup)
</pre></p><p><p>This was just a quick overview of Pallet, to give you an idea of what it is. One big area of Pallet not demonstrated here is its command line tool. But that is a topic for another post.</p></p><p><h2>Why Write another Tool?</h2></p><p><p>Now you've seen some examples, I'll try and explain the features that make Pallet distinct from other configuration tools out there.</p></p><p><h3>No Dependencies</h3></p><p><p>The machines being managed require no special dependencies to be installed. As long as they have bash and ssh running, they can be used with pallet.  
For me this was important - it means that you can use pretty much any image out there, which is great for ad-hoc testing and development.</p></p><p><h3>No Server</h3></p><p><p>Pallet has no central server to set up and maintain - it simply runs on demand. You can run it from anywhere, even over a remote REPL connection.</p></p><p><h3>Everything in Version Control</h3></p><p><p>In pallet, all your configuration is handled in SCM controlled files - there is no database involved.  This means that your configuration can always be kept in step with the development of your crates, and the versions of the external crates that you use.</p></p><p><h3>Jar File Distribution of Crates</h3></p><p><p>Custom crates can be distributed as jar files, and so can be published in maven repositories, and be consumed in a version controlled manner.  Hopefully this will promote shared crates.</p></p><p><h3>Provisioning, Configuration and Administration</h3></p><p><p>Pallet aims quite wide. You can use it for starting and stopping nodes, for configuring nodes, deploying projects and also for running administration tasks.  To be honest, this wasn't an initial design goal, but it has come out of the wash that way.</p></p><p><h2>Interested?</h2></p><p><p>Hopefully this has whetted your appetite, and you'll give pallet a try.  You can get support via <a href="http://groups.google.com/group/pallet-clj">the Google Group</a>, or #pallet on freenode irc.</p></p>]]>
    </content>
  </entry>
  <entry>
    <id>http://hugoduncan.org/post/shell_scripting_in_clojure_with_pallet.html</id>
    <link href="http://hugoduncan.org/post/shell_scripting_in_clojure_with_pallet.html"/>
    <title>Shell Scripting in Clojure with Pallet</title>
    <summary>Let's face it, many of us hate writing shell scripts, and with good reason. Personally, it's not so much the shell language itself that puts me off, but organising everything around it; how do you deploy your scripts, how do you arrange to call other scripts, how do you manage the dependencies between your scripts?  Pallet aims to solve these problems by embedding shell script in clojure.</summary>
    <updated>2010-05-03T23:59:59+00:00</updated>
    <content type="html">
      <![CDATA[<p><p>Let's face it, many of us hate writing shell scripts, and with good reason. Personally, it's not so much the shell language itself that puts me off, but organising everything around it; how do you deploy your scripts, how do you arrange to call other scripts, how do you manage the dependencies between your scripts?  <a href="http://github.com/hugoduncan/pallet">Pallet</a> aims to solve these problems by embedding shell script in <a href="http://clojure.org/">clojure</a>.</p></p><p><h2>Embedding in Clojure</h2></p><p><p>Embedding other languages in lisp is not a new idea; <a href="http://common-lisp.net/project/parenscript/">parenscript</a>, <a href="http://github.com/arohner/scriptjure">scriptjure</a> (which Pallet's embedding is based on), and <a href="http://www.gitorious.org/clojureql/">ClojureQL</a> all do this.</p></p><p><p>So what does shell script in clojure look like? Some examples:</p> <pre class="clojure">(script   (ls "/some/path")   (defvar x 1)   (println @x)   (defn foo [x] (ls @x))   (foo 1)   (if (= a a)     (println "Reassuring")     (println "Ooops"))   (println "I am" @(whoami)))</pre></p><p><p>which generates:</p></p><p><pre>ls /some/path x=1 echo ${x} function foo(x) { ls ${x}  } foo 1 if [ &#92;( \"a\" == \"a\" &#92;) ]; then echo Reassuring;else echo Ooops;fi echo I am $(whoami) </pre></p><p><p>The aim is to make writing shell script similar to writing Clojure, but there are obvious differences in the language that limit how far that can be taken. 
To run the code above at the REPL, you'll have to use the <code>pallet.stevedore</code> package.</p></p><p><h2>Escaping back to Clojure</h2></p><p><p>Escaping allows us to embed Clojure values and expressions inside our scripts, in much the same way as symbols can be unquoted when writing macros.</p></p><p><pre class="clojure">(let [path "/some/path"]   (script     (ls ~path)     (ls ~(.replace path "some" "other"))))</pre></p><p><p>We can now write Clojure functions that produce shell scripts.  Writing scripts as clojure functions allows you to use the Clojure namespace facilities, and allows you to distribute your scripts in jar files (which can be deployed in a versioned manner with maven).</p></p><p><pre class="clojure">(defn list-path [path]   (script     (ls ~path)     (ls ~(.replace path "some" "other"))))</pre></p><p><h2>Composing scripts</h2></p><p><p>Pallet allows the scripts to be combined. <code>do-script</code> concatenates the code pieces together.</p></p><p><pre class="clojure">(do-script   (list-path "path1")   (list-path "path2")) </pre></p><p><p><code>chain-script</code> chains the scripts together with '&amp;&amp;'.</p></p><p><pre class="clojure">(chain-script   (list-path "path1")   (list-path "path2")) </pre></p><p><p>Finally, <code>checked-script</code> chains the scripts and calls <code>exit</code> if the chain fails.</p></p><p><pre class="clojure">(checked-script "Message"   (list-path "path1")   (list-path "path2")) </pre></p><p><h2>Conclusion</h2></p><p><p>Writing shell script in Clojure gives access to Clojure's namespace facility allowing modularised shell script, and to Clojure's packaging as jar files, which allows reuse and distribution.  
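</p></p><p><p>As an illustrative sketch (reusing the <code>list-path</code> function defined above), composing and chaining read as ordinary function application:</p></p><p><pre class="clojure">(checked-script "list the paths"
  (list-path "/var/log")
  (list-path "/tmp"))</pre></p><p><p>Each <code>list-path</code> call expands to its shell fragment, the fragments are chained with '&amp;&amp;', and <code>exit</code> is called should the chain fail.</p></p><p><p>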
The ability to compose script fragments leads to being able to write macro-like functions, such as <code>checked-script</code>, and you could even use Clojure macros to generate script (but I haven't thought of a use for that, yet).</p></p><p><p>The syntax for the embedding has arisen out of practical usage, so is far from complete, and can definitely be improved. I look forward to hearing your feedback!</p></p><p><p>UPDATE: stevedore now requires a binding for <code><em>template</em></code>, to specify the target for the script generation.  This should be a vector containing one of :ubuntu, :centos, or :darwin, and one of :aptitude, :yum, or :brew.</p></p>]]>
    </content>
  </entry>
  <entry>
    <id>http://hugoduncan.org/post/swank_clojure_gets_a_break_with_the_local_environment.html</id>
    <link href="http://hugoduncan.org/post/swank_clojure_gets_a_break_with_the_local_environment.html"/>
    <title>Swank Clojure gets a Break with the Local Environment</title>
    <summary>Recently I got fed up with a couple of warts in swank-clojure, so I made a couple of small fixes, and that led to a couple of new features.  Using SLIME with Clojure has never been as smooth as using it with Common Lisp, and the lack of debug functionality beyond the display of stack traces is particularly onerous.  Recently, George Jahad's debug-repl showed the possibility of adding a break macro to enter the debugger with the call stack intact and local variables visible.  This functionality is now in swank-clojure.</summary>
    <updated>2010-03-31T23:59:59+00:00</updated>
    <content type="html">
      <![CDATA[<p>Recently I got fed up with a couple of warts in swank-clojure, so I made a couple of small fixes, and that led to a couple of new features.  Using SLIME with Clojure has never been as smooth as using it with Common Lisp, and the lack of debug functionality beyond the display of stack traces is particularly onerous.  Recently, George Jahad and Alex Osborne's <a href="http://github.com/GeorgeJahad/debug-repl">debug-repl</a> showed the possibility of adding a break macro to enter the debugger with the call stack intact and local variables visible.  This functionality is now in swank-clojure.</p><p>Consider the following example, adapted from debug-repl:</p><p><pre>   (let [c 1
        d 2]     (defn a [b c]       (swank.core/break)       d))   (a "foo" "bar") </pre></p><p>Running this now brings up the following SLDB debug frame:</p><p><pre> BREAK:   [Thrown class java.lang.Exception]</p><p>Restarts:  0: [QUIT] Quit to the SLIME top level  1: [CONTINUE] Continue from breakpoint</p><p>Backtrace:   0: user$eval__1666$a__1667.invoke(NO_SOURCE_FILE:1)   1: user$eval__1670.invoke(NO_SOURCE_FILE:1)   2: clojure.lang.Compiler.eval(Compiler.java:5358)   3: clojure.lang.Compiler.eval(Compiler.java:5326)   4: clojure.core$eval__4157.invoke(core.clj:2139)   &ndash;more&ndash; </pre></p><p><p>As you can see, the stack trace reflects the location of the breakpoint, and there is a CONTINUE restart. Pressing "1", or Enter on the CONTINUE line, or clicking the CONTINUE line should all cause the debugger frame to close, and the result of the function call, 2, to be displayed in the REPL frame.</p></p><p><p>Enter, or "t", on the first line of the stacktrace causes the local variables to be displayed:</p> <pre> BREAK:   [Thrown class java.lang.Exception]</p><p>Restarts:  0: [QUIT] Quit to the SLIME top level  1: [CONTINUE] Continue from breakpoint</p><p>Backtrace:   0: user$eval__1666$a__1667.invoke(NO_SOURCE_FILE:1)       Locals:         b = foo         d = 2         c = bar   1: user$eval__1670.invoke(NO_SOURCE_FILE:1)   2: clojure.lang.Compiler.eval(Compiler.java:5358)   3: clojure.lang.Compiler.eval(Compiler.java:5326)   4: clojure.core$eval__4157.invoke(core.clj:2139)   &ndash;more&ndash; </pre></p><p><p>Pressing enter on one of the local variable lines will pull up the SLIME inspector with that value. If you go back to the REPL without closing the SLDB frame, there will be no prompt, but pressing enter should give you one.  
The local variables are then all available for evaluation from the REPL.</p></p><p><p>Should an error occur while you are using the REPL, you will be placed in a nested debug session, with an "ABORT" restart to return to the previous debug level.</p></p><p><p>Finally, restarts are now displayed for each of the exceptions in the exception cause chain.</p></p><p><pre>   (let [c 1
        d 2]     (defn a [b c]       (throw (Exception. "top" (Exception. "nested" (Exception. "bottom"))))       d))   (a "foo" "bar") </pre></p><p><p>This will bring up the debugger with two cause restarts, which can be used to examine the related stack traces.</p></p><p><pre> top    [Thrown class java.lang.Exception]</p><p>Restarts:   0: [QUIT] Quit to the SLIME top level   1: [CAUSE1] Invoke debugger on cause  nested [Thrown class java.lang.Exception]   2: [CAUSE2] Invoke debugger on cause   bottom [Thrown class java.lang.Exception]</p><p>Backtrace:    0: user$eval__1752$a__1753.invoke(NO_SOURCE_FILE:1)    1: user$eval__1756.invoke(NO_SOURCE_FILE:1)    2: clojure.lang.Compiler.eval(Compiler.java:5358)    3: clojure.lang.Compiler.eval(Compiler.java:5326)    4: clojure.core$eval__4157.invoke(core.clj:2139)   &ndash;more&ndash; </pre></p><p><p>At the moment, the break functionality is only known to work from the REPL thread.  With that small proviso, I hope you enjoy the new functionality - at least it provides basic debug functionality until full JPDA/JDI integration is tackled.</p></p>]]>
    </content>
  </entry>
  <entry>
    <id>http://hugoduncan.org/post/benchmarking_clojure_code_with_criterium.html</id>
    <link href="http://hugoduncan.org/post/benchmarking_clojure_code_with_criterium.html"/>
    <title>Benchmarking Clojure Code with Criterium</title>
    <summary>I have released Criterium, a new project for benchmarking code in Clojure.  I found Brent Broyer's article on Java benchmarking which explains many of the pitfalls of benchmarking on the JVM, and Criterion, a benchmarking library in Haskell.</summary>
    <updated>2010-02-19T23:59:59+00:00</updated>
    <content type="html">
      <![CDATA[<p><p>I have released <a href="http://github.com/hugoduncan/criterium">Criterium</a>, a new project for benchmarking code in <a href="http://clojure.org">Clojure</a>.  I found Brent Broyer's <a href="http://www.ibm.com/developerworks/java/library/j-benchmark1.html">article on Java benchmarking</a> which explains many of the pitfalls of benchmarking on the JVM, and <a href="http://www.serpentine.com/blog/2009/09/29/criterion-a-new-benchmarking-library-for-haskell">Criterion</a>, a benchmarking library in Haskell.</p></p><p><p>The main issues with benchmarking on the JVM are associated with garbage collection, and with JIT compilation.  It seems from Broyer's articles that we can mitigate the effects but not completely eliminate them, and Criterium follows his advice.  Both of the above libraries use the <a href="http://en.wikipedia.org/wiki/Bootstrapping_(statistics)">bootstrap</a> technique to estimate mean execution time and provide a confidence interval, and Criterium does likewise.  At the moment the confidence intervals are biased and I still need to implement BCa or ABC to improve these.</p></p><p><p>One of the functions that I wanted to benchmark originally involved reading a file.  Criterium does not yet address clearing the I/O buffer cache, and I am not sure of the best way forward on this.  On Mac OS X, the <code>purge</code> command can be used to clear the caches, and on Linux this can be achieved by writing to /proc/sys/vm/drop_caches.  On the Mac at least, this causes everything to grind to a halt for about five seconds, and there are then some file reads as whatever processes are running read things in again. This sort of behaviour doesn't lend itself to inclusion in a timing loop... Any suggestions?</p></p>]]>
    </content>
  </entry>
  <entry>
    <id>http://hugoduncan.org/post/a_clojure_library_for_fluiddb.html</id>
    <link href="http://hugoduncan.org/post/a_clojure_library_for_fluiddb.html"/>
    <title>A Clojure library for FluidDB</title>
    <summary>FluidDB, a "cloud" based triple-store, where the objects are immutable and can be tagged by anyone, launched about a month ago. As another step to getting up to speed with Clojure, I decided to write a client library, and clj-fluiddb was born.</summary>
    <updated>2009-09-13T23:59:59+00:00</updated>
    <content type="html">
      <![CDATA[<p><p><a href="http://fluidinfo.com/">FluidDB</a>, a "cloud" based triple-store, where the objects are immutable and can be tagged by anyone, launched about a month ago. As another step to getting up to speed with <a href="http://clojure.org">Clojure</a>, I decided to write a client library, and <a href="http://github.com/hugoduncan/clj-fluiddb">clj-fluiddb</a> was born.  The code was very simple, especially as I could base the library on <a href="http://github.com/hdurer/cl-fluiddb">cl-fluiddb</a>, a Common-Lisp library.</p> <p>I have some ideas I want to try out using FluidDB.  Its permission system is one of its <a href="http://abouttag.blogspot.com/2009/09/permissions-worth-getting-excited-about.html">best features</a>, and together with the ability to <a href="http://www.xavierllora.net/2009/08/25/liquid-rdf-meandering-in-fluiddb/">use it for RDF like triples</a> it could provide a usable basis for growing the semantic web.  My ideas are less grandiose, but might take as long to develop, we'll see...</p></p>]]>
    </content>
  </entry>
  <entry>
    <id>http://hugoduncan.org/post/setting_up_clojure_and_compojure_with_maven.html</id>
    <link href="http://hugoduncan.org/post/setting_up_clojure_and_compojure_with_maven.html"/>
    <title>Setting up clojure and compojure with maven</title>
    <summary>I wanted to experiment with building a webapp using Clojure, so I tried setting up the Compojure web framework.  I am new to clojure, so I am not sure if this is the preferred way of doing things, but here goes anyway.</summary>
    <updated>2009-09-06T23:59:59+00:00</updated>
    <content type="html">
      <![CDATA[<p><p>I wanted to experiment with building a webapp using <a href="http://clojure.org">Clojure</a>, so I tried setting up the <a href="http://en.wikibooks.org/wiki/Compojure">Compojure</a> web framework.  I am new to clojure, so I am not sure if this is the preferred way of doing things, but here goes anyway.</p> <p>There seem to be several ways to set up clojure in emacs.  I ended up following <a href="http://bc.tech.coop/blog/081205.html">Bill Clementson's instructions</a>. A couple of years ago I had some experience using maven, so I decided to use it to manage my classpath.  Installing maven on my mac was simple with macports (<code>sudo port install maven</code>).</p> <p>Setting up a POM for maven took longer than expected.  <a href="http://stuartsierra.com/2009/09/04/cutting-edge-clojure-development-with-maven">Stuart Sierra's post</a> pointed me to the formos maven repository containing the clojure snapshots.  With some help from google, I also found the <a href="http://github.com/talios/clojure-maven-plugin/tree/master">maven-clojure-plugin</a>, which is a maven plugin for compiling clojure, and the <a href="http://github.com/fred-o/clojureshell-maven-plugin/tree/master">clojureshell-maven-plugin</a> which will start a swank session (or bare REPL) using the pom information.</p> <p>With the basic clojure and maven setup in place, it was time to move on to compojure. I added the <a href="http://github.com/weavejester/compojure/tree/master">Compojure git repository</a> into Bill Clementson's clj-build script, ran it to clone the repository, and then built it using ant (<code>ant deps; ant</code>).  <a href="http://jimdowning.wordpress.com/2009/07/30/compojure-maven/">Jim Downing</a>'s instructions for installing compojure into your local maven repository (<code>mvn install:install-file -DgroupId=org.clojure -DartifactId=compojure -Dversion=1.0-SNAPSHOT -Dfile=compojure.jar -Dpackaging=jar</code>) worked smoothly.</p></p>]]>
    </content>
  </entry>
  <entry>
    <id>http://hugoduncan.org/post/product_development_flow.html</id>
    <link href="http://hugoduncan.org/post/product_development_flow.html"/>
    <title>Product Development Flow</title>
    <summary>I have spent the last few months with my latest start-up, Artfox, where I have been trying to push home some of the lean start-up advice expounded by Eric Ries and Steve Blank.  I was hoping that 'The Principles of Product Development Flow', by Donald Reinertsen, might help me in making a persuasive argument for some of the more troublesome concepts around minimum viable product and ensuring that feedback loops are in place with your customers as soon as possible. Unfortunately, I don't think that this is the book if you are looking for immediate, practical prescription, but it is a thought provoking, rigorous view of the product development process, that pulls together ideas from manufacturing, telecommunications and the Marines.</summary>
    <updated>2009-08-30T23:59:59+00:00</updated>
    <content type="html">
      <![CDATA[<p><p>I have spent the last few months with my latest start-up, <a href="http://artfox.com">Artfox</a>, where I have been trying to push home some of the lean start-up advice expounded by <a href="http://startuplessonslearned.blogspot.com">Eric Ries</a> and <a href="http://steveblank.com/">Steve Blank</a>.  I was hoping that "The Principles of Product Development Flow", by <a href="http://www.reinertsenassociates.com/">Donald Reinertsen</a>, might help me in making a persuasive argument for some of the more troublesome concepts around minimum viable product and ensuring that feedback loops are in place with your customers as soon as possible. Unfortunately, I don't think that this is the book if you are looking for immediate, practical prescription, but it is a thought provoking, rigorous view of the product development process, that pulls together ideas from manufacturing, telecommunications and the Marines.</p></p><p><p>Perhaps Reinertsen's most accessible advice is that decisions in product development should be based on a strong economic foundation, pulled together by a concept of the "Cost of Delay".  
Rather than relying on prescriptions for each of several interconnected metrics, such as efficiency and utilisation, Reinertsen suggests that economics will provide different targets for each of these metrics depending on the costs of the project at hand.</p></p><p><p>His proposition that product development organisations should measure "Design in Process", similar to the idea of "Intellectual Working In Process" proposed by Thomas Stewart in his book "Intellectual Capital", is what allows him to make the parallels to manufacturing and queueing theory and enables the application of the wide body of work in these fields to product development.</p></p><p><p>His practical advice, such as working in small batches and using a cadence for activities that require coordination, will come as no surprise to practitioners of agile development, and Reinertsen provides clear reasoning about why these practices work.</p></p><p><p>During my time at Alcan, and later Novelis, I gave a lot of thought to scheduling, queues and cycle times in a transformation based manufacturing environment, and I found that this had many parallels to his view of the product development process, and little in common with what Reinertsen describes as manufacturing, which seems to be limited to high volume assembly type operations.  I found many ideas that could be usefully taken back to a manufacturing context.</p></p><p><p>If you look at this book as an introduction to scheduling, queueing theory and the reasons behind some of the agile development practices, then you will not be disappointed.</p></p>]]>
    </content>
  </entry>
  <entry>
    <id>http://hugoduncan.org/post/rails_environments_for_lisp.html</id>
    <link href="http://hugoduncan.org/post/rails_environments_for_lisp.html"/>
    <title>Rails Environments For Lisp</title>
    <summary>The facility of Ruby on Rails' test, development and production environments is one of those features that goes almost unremarked, but which makes using rails more pleasant.  No doubt everyone has their own solution for this in other environments, and while I am sure Common Lisp is not lacking in examples, I have not seen an idiomatic implementation.  In developing cl-blog-generator I came up with the following solution.</summary>
    <updated>2009-04-07T23:59:59+00:00</updated>
    <content type="html">
      <![CDATA[<p><p>The facility of Ruby on Rails' test, development and production environments is one of those features that goes almost unremarked, but which makes using rails more pleasant.  No doubt everyone has their own solution for this in other environments, and while I am sure Common Lisp is not lacking in examples, I have not seen an idiomatic implementation.  In developing <a href="http://github.com/hugoduncan/cl-blog-generator">cl-blog-generator</a> I came up with the following solution.</p> <p>Configuration in Common Lisp usually depends on using special variables, which can be rebound across any block of code.  I started by putting the configuration of my blog into s-expressions in files, but got tired of specifying the file names for different blogs.  Instead, I created an association list for each configuration, and registered each using a symbol as key.  I can now switch to a given environment by specifying the symbol for the environment. </p> <p>The implementation (in <code>src/configure.lisp</code> under the <a href="http://github.com/hugoduncan/cl-blog-generator">GitHub repository</a>) consists of two functions and a special variable.  <code>SET-ENVIRONMENT</code> is used to register an environment, and <code>CONFIGURE</code> is used to make an environment active.  The environments are stored in the <code><em>ENVIRONMENTS</em></code> special variable as an association list.  An example of setting up the configurations can be seen in the <code>config.lisp</code> file.  In creating the configurations I drop the '*' from the special names.</p> <p>I'm relatively new to CL, so let me know if I have overlooked anything.  Writing this post makes me think I am missing a <code>WITH-ENVIRONMENT</code> macro ...</p></p>]]>
    </content>
  </entry>
  <entry>
    <id>http://hugoduncan.org/post/twitter_is_more_secure_than_my_credit_card.html</id>
    <link href="http://hugoduncan.org/post/twitter_is_more_secure_than_my_credit_card.html"/>
    <title>Twitter is More Secure than My Credit Card</title>
    <summary>Twitter now lets developers build applications that take actions on your behalf without you ever having to divulge your password. Instead of asking you for your password, these applications ask Twitter to ask you for permission, and you give permission to the application while logged in to Twitter. What's even better is that you can revoke the application's permissions, from within Twitter, at any time, without having to change your password. The OAuth protocol makes this possible, and does so in a very secure manner.</summary>
    <updated>2009-04-02T23:59:59+00:00</updated>
    <content type="html">
      <![CDATA[<p><p>Twitter now <a href="http://apiwiki.twitter.com/OAuth-FAQ">lets developers build applications</a> that take actions on your behalf without you ever having to divulge your password. Instead of asking you for your password, these applications ask Twitter to ask you for permission, and you give permission to the application while logged in to Twitter. What's even better is that you can revoke the application's permissions, from within Twitter, at any time, without having to change your password. The <a href="http://oauth.net/">OAuth</a> protocol makes this possible, and does so in a very secure manner. </p> <p>Compare this to on-line transactions involving your credit card; you have to divulge your user-name (the name on your credit card) and your password (your credit card number) for every transaction. Some sites even store your card details, and you rely on trust, and the vendor's good standing with the credit card company, that they will not make further transactions using your card. What happens should you find your card being abused? You have to go through the hassle of cancelling your card and obtaining a new card number, which you then have to divulge to all the companies that make regular charges to your card, instantly creating the opportunity for further abuse.  To make matters worse, the credit card companies even give us these cards with our passwords on them (the card number), violating the "never write down your password (or PIN)" rule.</p></p><p><p><a href="http://oauth.net/">OAuth</a> is an open protocol developed by some of the major web companies.  The protocol is gaining traction rapidly, and the OAuth site lists some of the <a href="http://wiki.oauth.net/ServiceProviders">OAuth service providers</a> (that is, the sites that allow OAuth authorisation).  Part of the reason for the rapid adoption is that it is not really new technology.  
As mentioned in the <a href="http://oauth.net/about/design-goals">OAuth design goals</a>, the protocol is essentially a standardisation of the proprietary protocols used by Google’s <a href="http://code.google.com/apis/gdata/authsub.html">AuthSub</a>, <span class="caps">AOL</span>’s <a href="http://dev.aol.com/openauth">OpenAuth</a>, Yahoo’s <a href="http://developer.yahoo.com/auth/">BBAuth</a> and <a href="http://www.flickr.com/services/api/auth.howto.web.html">FlickrAuth</a> and Facebook’s <a href="http://developers.facebook.com/documentation.php?doc=auth">FacebookAuth</a>.</p></p><p><p>Eran Hammer-Lahav provides a <a href="http://oauth.net/documentation/getting-started">getting started with OAuth</a> guide,  Google provide <a href="http://sites.google.com/site/oauthgoog/oauth-practices">a good overview</a> of uses for OAuth, and the <a href="http://wiki.oauth.net/FrontPage">OAuth wiki</a> provides links to all the details.</p></p>]]>
    </content>
  </entry>
  <entry>
    <id>http://hugoduncan.org/post/cl_blog_generator_gets_comments.html</id>
    <link href="http://hugoduncan.org/post/cl_blog_generator_gets_comments.html"/>
    <title>cl-blog-generator Gets Comments</title>
    <summary>I have now added a comment system to cl-blog-generator.  My requirements were for a simple, low overhead, commenting system, preferably one that could be fully automated.</summary>
    <updated>2009-03-31T23:59:59+00:00</updated>
    <content type="html">
      <![CDATA[<p><p>I have now added a comment system to <a href="http://github.com/hugoduncan/cl-blog-generator">cl-blog-generator</a>.  My requirements were for a simple, low overhead, commenting system, preferably one that could be fully automated.</p></p><p><p>The comment system was inspired by <a href="http://www.steve.org.uk/Software/chronicle/">Chronicle</a>'s, with a slight modification in approach - the comments are never saved on the web server, and are just sent by email to a dedicated email address.  Spam filtering is delegated to whatever spam filtering is implemented on the mail server, or in your email client.  The comment emails are then processed in CL using <a href="http://common-lisp.net/project/mel-base/">mel-base</a> and written to the local filesystem.  Moderation can optionally occur on the CL side, if that is preferable to using the email client.</p></p><p><p>There is still some work left to do - I would like to be able to switch off comments on individual posts, either on demand or after a default time period - but I thought I would let real world usage drive my development.</p></p>]]>
    </content>
  </entry>
  <entry>
    <id>http://hugoduncan.org/post/search_across_open_browser_tabs.html</id>
    <link href="http://hugoduncan.org/post/search_across_open_browser_tabs.html"/>
    <title>Search Across Open Browser Tabs</title>
    <summary>I am an Opera user, these days mainly because it gives me integrated mail, feed and news reading, so that everything that comes from the web appears in one place.  The last significant innovation I remember was the introduction of tabs, and that was some time ago (long before it made its way into IE, for example).  I am a heavy user of tabs - it is not unusual for me to have over fifty pages open, as I tend to just open pages and rarely close them again.  This means that the tab icons are unreadable, and Alt+Tab (I'm on a mac) produces three or four columns to scroll through to select the tab I'm after.  I dream of a better tab navigation model, and would love to be able to search across all the open tabs.  Surely it wouldn't be that hard to implement.</summary>
    <updated>2009-03-28T23:59:59+00:00</updated>
    <content type="html">
      <![CDATA[<p><p>I am an <a href="http://www.opera.com/">Opera</a> user, these days mainly because it gives me integrated mail, feed and news reading, so that everything that comes from the web appears in one place.  The last significant innovation I remember was the introduction of tabs, and that was some time ago (long before it made its way into IE, for example).  I am a heavy user of tabs - it is not unusual for me to have over fifty pages open, as I tend to just open pages and rarely close them again.  This means that the tab icons are unreadable, and Alt+Tab (I'm on a mac) produces three or four columns to scroll through to select the tab I'm after.  I dream of a better tab navigation model, and would love to be able to search across all the open tabs.  Surely it wouldn't be that hard to implement.</p> <p>Maybe I'm being a little harsh as there have been some other innovations.  <a href="http://www.mozilla.com/firefox/">Firefox's</a> add-ons work seamlessly, we can use arbitrary search engines thanks to <a href="http://www.opensearch.org/">OpenSearch</a>, and <a href="http://conkeror.org/">Conkeror</a> gives full screen browsing with emacs key bindings, but for the most part it has been better support for standards that has dominated the release notes.</p> <p>In preparing this post, I found that Google's <a href="http://www.google.com/chrome">Chrome</a> has search over your page history.  Sounds like that might fulfill my wish, though I wonder how its process per tab model will scale to lots of tabs.  I just have to wait for its release on mac OS X.</p></p>]]>
    </content>
  </entry>
  <entry>
    <id>http://hugoduncan.org/post/blog_site_generators.html</id>
    <link href="http://hugoduncan.org/post/blog_site_generators.html"/>
    <title>Blog Site Generators</title>
    <summary>I recently uploaded some links to my cl-blog-generator project, and have been getting some feedback with comparisons to other blog site generators, or compilers, such as Steve Kemp's Chronicle, or Jekyll as used on GitHub Pages.  Compared to these, cl-blog-generator is immature, but takes a different approach in several areas that Charles Stewart suggested might be worth exploring.  I look forward to any comments you might have. </summary>
    <updated>2009-03-27T23:59:59+00:00</updated>
    <content type="html">
      <![CDATA[<p><p> I recently uploaded some links to my <a href="http://github.com/hugoduncan/cl-blog-generator">cl-blog-generator</a> project, and have been getting some feedback with comparisons to other blog site generators, or compilers, such as <a href="http://www.advogato.org/person/Stevey/">Steve Kemp</a>'s <a href="http://www.steve.org.uk/Software/chronicle/">Chronicle</a>, or <a href="http://github.com/mojombo/jekyll">Jekyll</a> as used on <a href="http://github.com/blog/272-github-pages">GitHub Pages</a>.  Compared to these, cl-blog-generator is immature, but takes a different approach in several areas that <a href="http://advogato.org/person/chalst/">Charles Stewart</a> suggested might be worth exploring.  I look forward to any comments you might have. </p> <h3>Formatting</h3> <p> All the blog generators seem to use a file based approach for writing content, but they differ in the choice of input formats supported, and in the approach to templating. <code>cl-blog-generator</code> is the least flexible, requiring input in XHTML, while <code>Chronicle</code> allows HTML, Textile or Markdown, and <code>Jekyll</code> Textile or Markdown.  For templates, <code>Chronicle</code> uses Perl's <a href="http://search.cpan.org/~samtregar/HTML-Template-2.9/Template.pm">HTML::Template</a>, and <code>Jekyll</code> uses <a href="http://www.liquidmarkup.org/">Liquid</a>. <code>cl-blog-generator</code> uses an approach which substitutes content into elements identified with specific id's or classes, similar to transforming the templates with XSLT. </p> <p> <code>cl-blog-generator</code>'s choice of XHTML input was driven by a requirement to enable the validation of post content in the editor, which is not possible using <code>Chronicle</code>'s HTML input because of the headers and lack of a <code>body</code> or <code>head</code> element, and a desire to be able to use any CSS tricks I wanted, which ruled out Textile and Markdown, or any other markup language.  
The lack of an external templating engine in <code>cl-blog-generator</code> was driven by simplicity; I couldn't see a use for conditionals or loops given the fixed structure of the content, and this choice leads to templates that validate, unlike <code>Jekyll</code>, and which are not full of HTML comments.  The current id and class naming scheme in <code>cl-blog-generator</code> could certainly use some refinement to improve the flexibility of the output content format, and I would definitely welcome requests for enhancements should the scheme not fit your requirements. </p></p><p><h3>Database and Two Phase Publishing</h3> <p> Perhaps the most significant difference in approach for <code>cl-blog-generator</code> is its use of a database and an explicit publish step.  With <code>cl-blog-generator</code> a draft can exist anywhere in the filesystem, and must be "published" to be recognised by the blog site generator.  The publishing process fills in some default metadata, such as post date, if this is not originally specified, copies the modified draft to a configurable location, and enters the metadata into the database.  This ensures that the post is completely specified by its representation in the filesystem, and that the database is recreatable. </p> <p> The database enables the partial regeneration of the site, without having to parse the whole site, and makes the linking of content much simpler. However, having <a href="http://common-lisp.net/project/elephant/">Elephant</a> as a dependency is probably the largest impediment to installation at present. </p></p><p><h3>On Titles, Dates, Tags and Filenames</h3></p><p><p><code>cl-blog-generator</code>'s input XHTML has been augmented to add elements for specifying post title, date, update date (which I believe is missing from the other systems), slug, description, and tags.  
On publishing (see the previous section), any of these elements that is missing, except the mandatory title, is filled in with defaults.</p></p><p><p>Both <code>Chronicle</code> and <code>Jekyll</code> use a preamble to specify metadata, with the filename being used to generate the post's slug. <code>Jekyll</code> also uses the filename and its path for specifying the post date and tags. </p></p><p><h3>Bells and Whistles</h3></p><p><p>Finally, here is a grab bag of features.</p> <ul> <li> <code>Chronicle</code> comes with a commenting system. </li> <li> <code>cl-blog-generator</code> generates a <code>meta</code> description element, which is used by search engines to generate link text.  It also generates <code>meta</code> elements with links to the previous and next posts. </li> <li> <code>Jekyll</code> has a "Related posts" feature for generating links to similar posts. </li> <li> <code>Chronicle</code> and <code>Jekyll</code> both have migration scripts for importing content. </li> <li> <code>Chronicle</code> has a spooler for posting pre-written content at specific times.</li> </ul></p>]]>
    </content>
  </entry>
  <entry>
    <id>http://hugoduncan.org/post/frameworks_and_productivity.html</id>
    <link href="http://hugoduncan.org/post/frameworks_and_productivity.html"/>
    <title>Frameworks and Productivity</title>
    <summary>Yesterday was frustrating; I spent far too long trying to debug some problems in a Rails application I am writing.  Rails, and frameworks in general, are supposed to give us improved productivity by hiding the complexity and mechanics of the task at hand.  This is great as long as the framework behaves as expected, but invariably causes problems when things go wrong.</summary>
    <updated>2009-03-20T23:59:59+00:00</updated>
    <content type="html">
      <![CDATA[<p><p>Yesterday was frustrating; I spent far too long trying to debug some problems in a Rails application I am writing.  Rails, and frameworks in general, are supposed to give us improved productivity by hiding the complexity and mechanics of the task at hand.  This is great as long as the framework behaves as expected, but invariably causes problems when things go wrong.</p> <h2>The missing field</h2> My application uses associations, and I had a <code>belongs_to</code> association that was supposed to be populated in a <code>before_validation_on_create</code> callback.  In my tests I noticed that the linked model was not being instantiated.  After much searching, it turns out I had forgotten to create the foreign key field.  Unfortunately Rails was silent on this issue and the <code>belongs_to</code> association code seemed to execute quite happily without the field. <h2>Can't dup NilClass</h2> My models also use <code>has_many</code> associations, which I could populate with no problems.  When I tried to access the association though, I kept getting <code>Can't dup NilClass</code> errors.  This one turned out to be an issue with the generated <code>collection.build</code> method.  As noted in the documentation by the somewhat cryptic <cite>Note: This only works if an associated object already exists, not if it's nil!</cite>, the method fails if the collection is empty (at least that's what I think it means). Explicitly instantiating the associated model and then adding it to the collection fixed the problem. <h2>Bad choice of name</h2> <p>In my application I had a model named <code>Target</code>, which meant that models that associate with <code>Target</code>, such as my <code>TargetProfile</code> model, have a <code>target</code> attribute.</p>  Unfortunately the <code>target</code> attribute in <code>TargetProfile</code> always returned the instance of <code>TargetProfile</code> - not quite what is expected.  
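This kind of name clash can be reproduced in plain Ruby, without Rails at all (the classes below are hypothetical stand-ins for illustration, not ActiveRecord's actual internals):

```ruby
# A proxy that forwards unknown methods to a wrapped record, but which
# also has its own bookkeeping method named `target` -- so an
# association named `target` gets shadowed by the proxy's method.
class Proxy
  def initialize(owner, record)
    @owner  = owner
    @record = record
  end

  # The proxy's internal accessor: this wins any name clash.
  def target
    @owner
  end

  # Everything else is forwarded to the wrapped record.
  def method_missing(name, *args, &block)
    @record.send(name, *args, &block)
  end

  def respond_to_missing?(name, include_private = false)
    @record.respond_to?(name, include_private) || super
  end
end

TargetRecord  = Struct.new(:name)
profile       = Object.new
target_record = TargetRecord.new("the real target")

proxy = Proxy.new(profile, target_record)
proxy.name    # forwarded to the record: "the real target"
proxy.target  # shadowed: returns the owner, not the associated record
```

Any method the proxy defines for its own use is invisible to `method_missing` forwarding, which is why the clash produces a wrong answer rather than an error.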
The problem was caused by the fact that <code>ActiveRecord</code>'s <code>AssociationProxy</code>, used to implement associations between models, has a <code>target</code> attribute.  The documentation contains another warning <cite>Don't create associations that have the same name as instance methods of ActiveRecord::Base</cite>, but mentions nothing of <code>AssociationProxy</code>, which isn't even part of the documented API.  I call this broken encapsulation. <h2>The Rails Way</h2> Rails heavily promotes testing and Test Driven Development (TDD), which is linked to the "fail early, fail fast" paradigm.  It seems strange that known issues (as partially documented) are allowed to persist and cause silent failures.  All the issues above could have been flagged by Rails. <h2>Conclusion</h2> I am not picking on Rails, however, as these types of issues seem to occur in many frameworks.  No software is bug free, and a framework has to work hard to hide the complexity and mechanics of its domain.  When things go wrong, we are always left questioning whether the issue is in the framework or in our application, and we invariably end up getting to know the implementation details of the framework, which does little for our productivity.</p>]]>
    </content>
  </entry>
  <entry>
    <id>http://hugoduncan.org/post/create_a_catalog_for_xhtml_on_os_x.html</id>
    <link href="http://hugoduncan.org/post/create_a_catalog_for_xhtml_on_os_x.html"/>
    <title>Create a Catalog for XHTML on OS X</title>
    <summary>While trying to validate the output of cl-blog-generator I needed a local DTD for XHTML.  The textproc/xmlcatmgr package in Darwin Ports creates a catalog at /opt/local/etc/xml/catalog, but it does not include XHTML.  A flattened XHTML DTD can be found in the w3 validator library; I installed the DTDs under /opt/local/share/xml/, but couldn't find a catalog file for them.  It turns out it is pretty simple to write the catalog file; the Wikipedia XML Catalog entry has an example that contains what is needed.  Save the example next to the XHTML DTDs as catalog.xml and adjust the paths, then add a 'nextCatalog' entry in /opt/local/etc/xml/catalog pointing at the catalog.xml file.</summary>
    <updated>2009-03-11T23:59:59+00:00</updated>
    <content type="html">
      <![CDATA[<p><p>While trying to validate the output of <a href="http://hugoduncan.github.com/cl-blog-generator/content/site/index.xhtml">cl-blog-generator</a> I needed a local DTD for XHTML.  The textproc/xmlcatmgr package in Darwin Ports creates a catalog at <code>/opt/local/etc/xml/catalog</code>, but it does not include XHTML.  A flattened XHTML DTD can be found in <a href="http://validator.w3.org/sgml-lib.tar.gz">the w3 validator library</a>; I installed the DTDs under <code>/opt/local/share/xml/</code>, but couldn't find a catalog file for them.  It turns out it is pretty simple to write the catalog file; the Wikipedia <a href="http://en.wikipedia.org/wiki/XML_Catalog">XML Catalog</a> entry has an example that contains what is needed.  Save the example next to the XHTML DTDs as <code>catalog.xml</code> and adjust the paths, then add a "nextCatalog" entry in <code>/opt/local/etc/xml/catalog</code> pointing at the <code>catalog.xml</code> file.</p> <p>Now I can use <code>(setf cxml:*catalog* (cxml:make-catalog '("/opt/local/etc/xml/catalog")))</code> and <a href="http://common-lisp.net/project/cxml/">CXML</a> will use the local DTD specified in the catalog.</p></p>]]>
    </content>
  </entry>
  <entry>
    <id>http://hugoduncan.org/post/parsing_yaml_dates_in_rails_gives_surprising_results.html</id>
    <link href="http://hugoduncan.org/post/parsing_yaml_dates_in_rails_gives_surprising_results.html"/>
    <title>Parsing YAML Dates in Rails Gives Surprising Results</title>
    <summary>Today's surprise was that "2009-01-01" and "2009-1-1" get parsed differently by the YAML parser in Rails.  The former gets converted to a Date, while the latter becomes a String.  It confused me for a while, as the problem only showed up when I wanted to send the dates to a Flot chart.  Looking at the standard, it's conforming behaviour.   Must be me that is non-conforming then...</summary>
    <updated>2009-03-07T23:59:59+00:00</updated>
    <content type="html">
      <![CDATA[<p><p>Today's surprise was that "2009-01-01" and "2009-1-1" get parsed differently by the <a href="http://www.yaml.org/">YAML</a> parser in <a href="http://rubyonrails.org/">Rails</a>.  The former gets converted to a <code>Date</code>, while the latter becomes a <code>String</code>.  It confused me for a while, as the problem only showed up when I wanted to send the dates to a <a href="http://code.google.com/p/flot/">Flot</a> chart.  Looking at the standard, it's conforming behaviour.   Must be me that is non-conforming then...</p></p>]]>
    </content>
  </entry>
</feed>
