So, what if you want to run jenkins locally on a mac? And what if you want it to build from a private github repository, using its own private key?

Get Jenkins

First, you need to get jenkins. On my homebrew-enabled mac, I ran brew install jenkins. Without homebrew, you can just download the latest war file. (You will need java 6 to run jenkins.) Homebrew provides a plist for launching jenkins when you log in. If that’s adequate, then you can stop reading.

Create a service account

Next you need to create a user to run jenkins. This is the most challenging part of the process. (I used a script from a pastebin as reference.) Here’s what I did:

sudo mkdir /var/jenkins
sudo /usr/sbin/dseditgroup -o create -r 'Jenkins CI Group' -i 600 _jenkins
sudo dscl . -append /Groups/_jenkins passwd "*"
sudo dscl . -create /Users/_jenkins
sudo dscl . -append /Users/_jenkins RecordName jenkins
sudo dscl . -append /Users/_jenkins RealName "Jenkins CI Server"
sudo dscl . -append /Users/_jenkins uid 600
sudo dscl . -append /Users/_jenkins gid 600
sudo dscl . -append /Users/_jenkins shell /usr/bin/false
sudo dscl . -append /Users/_jenkins home /var/jenkins
sudo dscl . -append /Users/_jenkins passwd "*"
sudo dscl . -append /Groups/_jenkins GroupMembership _jenkins
sudo chown -R jenkins /var/jenkins

Before I ran through that, I did check that the ids were available. You can search for users with dscl . -search /Users uid 600 and groups with dscl . -search /Groups gid 600.
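If 600 happens to be taken, a small loop can find the first free pair. This is only a sketch, not part of the original setup: it assumes dscl . -search prints nothing when no record matches, and that you want the uid and gid to be the same number.

```shell
# Sketch: find the first id at or above 600 that is free as both a uid
# and a gid. Assumes `dscl . -search` prints nothing when nothing matches.
id_free() {
  [ -z "$(dscl . -search /Users uid "$1" 2>/dev/null)" ] &&
  [ -z "$(dscl . -search /Groups gid "$1" 2>/dev/null)" ]
}

candidate=600
until id_free "$candidate"; do
  candidate=$((candidate + 1))
done
echo "first free id: $candidate"
```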

Also, I don’t think the uid and gid need to be the same, but most of the built-in service accounts (e.g. jabber) are, so I just went along with it.

Create the daemon

Mac OS uses launchd to control daemons and agents. It’s pretty easy to create a launch daemon. Create the file /Library/LaunchDaemons/org.jenkins-ci.plist with the following content, based on the plist from the homebrew jenkins formula. You may need to update the version number in the ProgramArguments.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>Jenkins</string>
  <key>ProgramArguments</key>
  <array>
    <string>/usr/bin/java</string>
    <string>-jar</string>
    <string>/usr/local/Cellar/jenkins/1.414/lib/jenkins.war</string>
  </array>
  <key>OnDemand</key>
  <false/>
  <key>RunAtLoad</key>
  <true/>
  <key>UserName</key>
  <string>jenkins</string>
</dict>
</plist>

I think I had to load the daemon (sudo launchctl load /Library/LaunchDaemons/org.jenkins-ci.plist). Rebooting should work, too.

Create an ssh key

Like I said, I wanted jenkins to have its own ssh identity. This is fairly easy: sudo -u jenkins ssh-keygen

The new key is in /var/jenkins/.ssh/id_rsa.pub and can be copied to github, or wherever you have your source code.
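If your code is on github, a couple of commands make this less fiddly. These are an assumption about your setup (a github deploy key and a remote that speaks ssh), not part of the original steps:

```shell
# Copy the new public key to the clipboard, ready to paste into github's
# deploy key (or account SSH key) page:
sudo cat /var/jenkins/.ssh/id_rsa.pub | pbcopy

# Verify that the jenkins user can authenticate; github should answer with
# a "successfully authenticated" banner and then close the connection:
sudo -u jenkins ssh -T git@github.com
```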

Set up a build

Now you need to configure jenkins. Open http://localhost:8080/, click “Manage Jenkins”, “Manage Plugins”, then “Available”. I installed the git and github plugins. The git plugin gives you basic git functionality. The github plugin gives you links from the build info pages to github commit pages. After the install is complete, click the “Schedule a restart” button.

After jenkins restarts, configure git. From the “Manage Jenkins” page, click “Configure System” and make sure the path for git is right. (It wasn’t for me: /usr/bin/git is the default, but homebrew put it in /usr/local/bin/git.)

Now, you can create your project. The rest of this post is a description of how I set up the build for my rails 3.0 project.

Click “New Job” in the menu on the left. Choose the “free-style” option. On the configuration page, set the github url prefix (e.g. https://github.com/user/project). Set the source code management to git, with the repo url (e.g. git@github.com:user/project.git; if your url starts with git:// or http://, you probably don’t need the ssh key setup from earlier). I set the branches to build to “master”, though leaving it blank is sometimes useful. I chose to poll SCM with a schedule of “* * * * *” (i.e. every minute), though I didn’t set that up until I had the build mechanics working.

The only build step is to execute a shell. Here’s what I’m using. There are a few things to note here. I’m using a system-installed rvm, so all the ruby commands are run through it. Jenkins can collect JUnit test results and produce some trend graphs, so I use the ci_reporter gem’s ci:setup:testunit to format the results JUnit-style.

/usr/local/rvm/bin/rvm 1.8.7 exec bundle --path .bundle/gems

cat > config/database.yml <<END_CONFIG
development:
  adapter: sqlite3
  database: db/development.sqlite3
test:
  adapter: sqlite3
  database: db/test.sqlite3
END_CONFIG

/usr/local/rvm/bin/rvm 1.8.7 exec bundle exec rake db:bootstrap db:migrate ci:setup:testunit test --trace

I just released git-tfs v0.11.0.

There are two new commands in this release: bootstrap and verify. bootstrap will find TFS commits and configure git-tfs remotes for you. verify checks that the latest TFS-related commit has the same content as the corresponding version in TFS.

There are a number of bug fixes:

  • Correctly handle the “no newer checkins on the server” case for VS2010. (commit)
  • Work on x86 and x64 more often. (commit)
  • Allow checkin policy overrides. (commit)
  • Generate a default checkin comment. (commit, commit)
  • Ensure consistent casing in the new repository. (commit)

You can see the full diff on github.

Today I pushed two new git-tfs goodies: bootstrap, and checkin policy override.

checkin policy override

The checkin policy override support is pretty straightforward. If you try to check in to TFS…

git tfs checkin

…and it fails with some messages about checkin policies (e.g. no associated work items, or the code analysis policy can’t run), you can now override the policy failures. Of course, the best fix is to comply with the policy. For example, if you need to specify an associated work item and provide a checkin comment:

git tfs checkin -w 12345 -m "My awesome code has an awesome checkin comment."

But, if you really need to override the policy failures, you can now do it:

git tfs checkin -f "Policy override because of X" -m "Normal checkin comment."

Of course, you can use checkintool to do all this in a GUI.

bootstrap

The other change was the addition of a bootstrap command. This is useful if you create a TFS clone and share it with a colleague who then needs to interact with TFS. While two identical invocations of git tfs clone will produce identical repositories, git clone is always going to be faster than git tfs clone. So, I would guess that most people who want to collaborate on a TFS project using git will benefit from this command.

The old workflow for this was:


[user1] git tfs clone http://blah/blah/blah $/blah
[user1] cd blah
[user1] git remote add shared git@someplace:shared/repo.git
[user1] git push shared master
[user2] git clone git@someplace:shared/repo.git
[user2] cd repo
[user2] git tfs init http://blah/blah/blah $/blah

At this point, the users can collaborate with each other using git, and they can both do TFS checkin or fetch. For the best workflow, both users need to type in the exact same path. An extra ‘/’ or a capitalization change will keep git-tfs from matching up the TFS remotes, and it will refetch things it doesn’t need to.

So, the bootstrap command replaces the last ‘git tfs init’:

[user2] git tfs bootstrap

This will scan HEAD’s history for checkins with TFS metadata, and configure one or more TFS remotes to match. If you already have the remotes configured, it will just tell you what it found.

The other day, I added experimental support for checkin directly from git-tfs to tfs. (Nate added an interactive version of checkin, too.) It doesn’t feel quite complete yet, and I haven’t decided which way to take it.

The main thing that’s missing is a way to tie the TFS checkin to the git branch. There are a few options that I’ve come up with for how to do this: dcommit, merge in TFS, or merge back to the git branch.

Dcommit would be similar to dcommit in git-svn, where the git commits are checked in to TFS one at a time, effectively rebasing the git branch onto the end of the TFS branch.

Merging in TFS doesn’t mean letting TFS do merges, but rather it means that git tfs checkin would fetch up to the new TFS commit, and give it two parents.

T1 --- T2 --- T3 --- X
   \                /
     G1 --- G2 -- G3

T1 is the base TFS changeset. G* are commits in git. T2 and T3 are commits made in TFS before the git branch is checked in to TFS. X is the TFS changeset created by git-tfs, with parents T3 and G3.

Merging back to the git branch is similar to merging in TFS, but with the merge commit in a different place:

T1 --- T2 --- T3 --- T4
   \                    \
     G1 --- G2 --- G3 -- X

Here, T4 is the new changeset created by git-tfs, and a merge commit is created with G3’s tree and parents G3 and T4.

The thing I like about dcommit is that it captures everything, if you want. It seems like it would be potentially problematic, in that it would be pretty slow and more error prone. Like rebase, it removes commits from history, which I’m a little wary of. Merging on the TFS branch is conceptually very nice, but it breaks the ability to refetch the exact same TFS history (given the same clone configuration). It also might not be as convenient for managing a git master that parallels the TFS mainline, because the merge won’t be available on the git branch. I’m not very well-versed in the mechanics of git’s merging awesomeness, so this might be a moot point, but it seems like the “merge in git” option would provide a better workflow.

If you have any thoughts, please leave a comment. I’m open to suggestions.

Super quick-start, based on another getting started with puppet guide:

Install Ubuntu 10.04 LTS server on two servers.
[server1] sudo apt-get install puppetmaster
[server2] sudo apt-get install puppet
[server2] sudo vi /etc/puppet/puppet.conf
add this line:
server=<fqdn of [server1]>
[server2] sudo puppetd --test
[server1] sudo puppetca --list
[server1] sudo puppetca --sign <fqdn of [server2]>
[server2] sudo puppetd --no-daemonize --verbose
or
[server2] sudo /etc/init.d/puppet start

If server1 is named “puppet”, the config change shouldn’t be necessary.
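To see the agent actually do something, give the master a minimal manifest. This is a sketch under a couple of assumptions: that the Ubuntu 10.04 puppetmaster package reads /etc/puppet/manifests/site.pp by default, and that a throwaway file resource is a reasonable smoke test. On server1, create site.pp with something like:

```puppet
# /etc/puppet/manifests/site.pp -- hypothetical smoke-test manifest
node default {
  file { '/tmp/hello-from-puppet':
    ensure  => file,
    content => "managed by puppet\n",
  }
}
```

On the next agent run (sudo puppetd --test on server2), /tmp/hello-from-puppet should appear.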

I just pushed out noodle, a new gem that we’re using to manage our .NET dependencies with ruby’s bundler.

Because .NET projects usually have to reference dependencies at a specific path, simple rubygems don’t quite cut it. With noodle, you use bundler to do the dependency analysis, and noodle copies the resolved dependencies into a local directory in the project. This way, .NET projects can reference the assemblies at a predictable path without having to check them all in.

Install with

gem install noodle

For example, say you have a project that uses StructureMap. Your Gemfile might look like this:

source :rubygems
gem 'structuremap'

If you create a Rakefile like this:

require 'noodle'
Noodle::Rake::NoodleTask.new

and then run

bundle install
rake noodle

Then you’ll have a copy of StructureMap.dll in lib/structuremap-<version>.

(Noodle 0.1.0 had an error, which added an extra ‘lib’ in the destination path.)

This post is about how to run your favorite rack application on IIS 7 using IronRuby. I’ve been unsatisfied with most other windows ruby app hosting I’ve tried, and IronRuby-Rack looks like it will fix that. (I haven’t tried deploying to JRuby on Windows, but I assume that experience would be pretty good.)

Surely I’m not the first to the punch on this, but there were some things I had to figure out that I thought I’d share.

I’m doing this in the context of a sinatra application I’m writing. More on the specific app later, but it wasn’t worth writing if it wasn’t going to run on IIS, or at least on Windows.

Also, I tried the ironruby-rack gem, but it’s pretty rough at this point. The best thing about it is that it included IronRuby.Rack.dll. My major complaint is that it put web.config in the root of the app, which meant that all the .rb files were in the web root. It seemed much classier to make the public directory the web root, with web.config in there.

It wasn’t too hard to get the app running.

A rackup file seemed like a sensible first step, and it was. You can’t get very far these days without a rackup file.

I snagged IronRuby.Rack.dll from the ironruby gem, and checked it in public/bin. This was done because I’m lazy and didn’t want to build it myself. It’d be really nice if IronRuby.Rack was a stand-alone github project so I could fork it and patch it. Cloning all of ironruby just for a version of IronRuby.Rack that probably isn’t current wasn’t very interesting to me.

My rake tasks build the rest of the aspnet application. The tasks are aspnet:copybin, aspnet:logdir, aspnet:webconfig, and aspnet. The last just invokes the others.

aspnet:copybin finds IronRuby.Rack’s dependencies in the current ironruby environment and copies them into public/bin.

aspnet:logdir creates a directory for IronRuby.Rack to put its logs into. IronRuby.Rack is fussy about this directory existing, and about its ability to write to said directory.

aspnet:webconfig is more interesting. The web.config file it generates sets up the ASP.NET handler for ironruby.rack and tells it where everything is. I do bindingRedirects so that IronRuby.Rack can find the IronRuby version that I grabbed in aspnet:copybin. I started with the templates in the ironruby-rack gem and trimmed it down to what my app needed.
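For reference, the bindingRedirect portion of a generated web.config looks something like the fragment below. This is a hypothetical sketch, not the template from the gem: the version numbers are placeholders for whatever IronRuby build you copied into public/bin, and the publicKeyToken is elided.

```xml
<!-- Hypothetical fragment: redirect whatever IronRuby version
     IronRuby.Rack was built against to the copy in public/bin.
     The versions below are placeholders. -->
<runtime>
  <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
    <dependentAssembly>
      <assemblyIdentity name="IronRuby" publicKeyToken="..." />
      <bindingRedirect oldVersion="0.0.0.0-1.1.0.0" newVersion="1.1.0.0" />
    </dependentAssembly>
  </assemblyBinding>
</runtime>
```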

Here’s what I learned while crafting the web.config file:

IronRuby.Rack includes two hooks for ASP.NET: a module and a handler. The module seemed like the way to go, so I tried it first. I was a bit disappointed that it grabbed each request at the beginning of the application pipeline, and called EndRequest. It would have been fine if I didn’t care about anything that IIS was doing for me, but I did. I needed other modules to run (particularly the WindowsAuthentication module), and having IronRuby short-circuit the process broke that. I switched to the handler, and was much happier.

Also, IronRuby.Rack doesn’t mess with Environment.CurrentDirectory at all, so if your app needs to know about the directory it lives in, you need to tell it about that. Rails is pretty tolerant about this, with its Rails.root stuff, but bundler isn’t. Bundler was looking in c:\windows for my Gemfile. My first impulse was to set environment variables in web.config, but IronRuby.Rack doesn’t have hooks for that. So my app.rb has another bit of bundler bootstrapping that most apps can leave out: ENV['BUNDLE_GEMFILE'] ||= File.expand_path(__FILE__ + '/../Gemfile')

As a nice side-effect of using ASP.NET, to restart the application I just need to “rake aspnet:webconfig”. ASP.NET reloads the application whenever web.config changes.

Github is where to go to see the complete Rakefile.