#DNDwKids: Back to HeroLab after DndInsider #dnd

Last spring and fall I ran a Pathfinder game with 4th, 5th, and 6th graders at my sons’ school. Each term I had to turn away kids ‘cause I had to cap participation at six players, and even that was too many for the attention span of some of the kids.

Now that I’ve gotten two kids (6th grade and 7th grade) interested in DM’ing, I’m going to have them run two games while I provide support, feedback, and guidance.

But we’re switching to DnD 4e since Jesse and Harry each have stacks of 4e books and know the rules much better than for Pathfinder.

At our first session we just worked on converting our existing characters and rolling up new ones for the kids who had just joined. I know there are a lot of errors, but I figured I’d resolve them all by moving from paper to… well, from paper to the DnD Insider Character Builder (CB), is what I thought. But jeez, what a beast that is.

I’d used HeroLab last fall and thought it pretty good, but got the sense that the WOTC online tools would be better suited for 4e. So I thought I’d be ready to roll once I signed up for 3 months. But the tools require Silverlight, burn through my CPU, and are really, really laggy, turning a data-entry task into what felt like 15 seconds of waiting between fields. Creating just one character took about an hour.

Also, it’s clear we’ll be using 4e for at least a couple of years, and there are rumors that 4e support will be coming to an end.

So, I’m going to give 4e a whirl on HeroLab. The starting process is a bit arcane. First, since I already had a fully licensed HeroLab with Pathfinder, I didn’t seem to be able to use the 4e support in demo mode (I may be mistaken about that) without purchasing the 4e license. Since LoneWolf offers a money-back option within 60 days, I figured I’d lay down my $20 and hope for the best.

The 4e package for HeroLab ships with no content, since LoneWolf doesn’t have a license from WOTC; instead it has a downloader tool. Since I have a DnDInsider subscription, it uses those credentials to download the data files.

I’m hoping once this completes I’ll have a faster, more usable character management system. Let’s see…

Chef Shell Attribute example

I find the dearth of chef-shell examples on the web really frustrating, so I’m starting my own.

Getting started

root@logstash-i-ab57c6d1:~# export PATH=/opt/chef/embedded/bin:$PATH
root@logstash-i-ab57c6d1:~# chef-shell -z

Querying node attributes

Get the logstash server outputs:

chef > attributes_mode
chef:attributes > node['logstash']
...
chef:attributes > node[:logstash][:server][:outputs].length
 => 5
chef:attributes > node[:logstash][:server][:outputs][0]
 => {"elasticsearch_http"=>{"host"=>"logstash-elasticsearch.infra.example.in"}}

Pretty excited about jumbo dice for new season of #rpgkids. Giving them each a set last fall was a mess #fb


Fixing #sensuapp OpenSSL peer cert validation issues

Today I used Chef to configure a test sensu-server, but my Hipchat notifications were failing with this snippet in the logs:

/opt/sensu/embedded/lib/ruby/2.0.0/net/http.rb:917:in `connect': SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed (OpenSSL::SSL::SSLError)\n"

I soon determined that the httparty gem was at 0.11.0 on the prod sensu servers and at 0.12.0 on the new one, and further that httparty had (wisely) been changed to verify peer certs. No problem, but where to put the CA (Certificate Authority) bundle?

Tracking this down took more of the afternoon than ideal, but eventually I determined that the default SSL cert path can be determined with:

# irb
irb(main):001:0> require 'openssl'
=> true
irb(main):002:0> File.dirname OpenSSL::Config::DEFAULT_CONFIG_FILE
=> "/opt/sensu/embedded/ssl"
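As a related check (my addition, not in the original post), the stdlib openssl bindings also expose the default CA file and directory as constants, which saves deriving them from the config file path:

```ruby
# Where this Ruby's OpenSSL looks for CA certs by default.
# The paths vary per build; under sensu's embedded Ruby they would
# land under /opt/sensu/embedded/ssl.
require 'openssl'

puts OpenSSL::X509::DEFAULT_CERT_FILE
puts OpenSSL::X509::DEFAULT_CERT_DIR
```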

To get the CA certs into the embedded Ruby, we can update the default sensu install with a bit of Chefery:

cookbook_file '/opt/sensu/embedded/ssl/cert.pem' do
  source 'cert.pem'
  mode 0755
end

Where the cert.pem contents are pulled from ‘http://curl.haxx.se/ca/cacert.pem’ so we have a complete list of acceptable Certificate Authorities.
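A hypothetical alternative sketch (not what we actually ran): rather than vendoring the bundle in the cookbook, Chef’s remote_file resource could fetch it at converge time. The path and URL match the post; the mode and action are my choices:

```ruby
# Hypothetical alternative: fetch the Mozilla CA bundle at converge time.
# :create_if_missing avoids re-downloading the bundle on every Chef run.
remote_file '/opt/sensu/embedded/ssl/cert.pem' do
  source 'http://curl.haxx.se/ca/cacert.pem'
  mode '0644'
  action :create_if_missing
end
```

Either way, once a bundle sits at OpenSSL’s default path, httparty 0.12’s peer verification has something to verify against.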

Ideally, I would submit a PR to https://github.com/sensu/sensu-build/pulls, but for now I’ll have to content myself with an issue report.


Update: https://github.com/sensu/sensu-build/pull/79 is a PR to the sensu Omnibus build that fixes this.

JMX - collectd - graphite

I finally started sending some key JMX stats into Graphite via our collectd setup. A few notes since I’ll probably forget all about this until I next need to configure this.


JMX listens on a random port by default, so I ended up adding several flags to JAVA_OPTS to pin it down.
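The exact flags from my setup didn’t survive here, but a typical set for pinning the JMX port (no auth or SSL, so only for a trusted network; port 9010 is an arbitrary example) looks roughly like:

```shell
# Typical JMX remote flags; jmxremote.rmi.port pins the otherwise-random
# RMI data port to the same value as the registry port.
JAVA_OPTS="$JAVA_OPTS \
  -Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9010 \
  -Dcom.sun.management.jmxremote.rmi.port=9010 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false"
```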


What objects and attributes are available to monitor? Enable jmxproxy in Tomcat with the following in /etc/tomcat7/tomcat-users.xml:

  <user username="tomcat" password="tomcat" roles="tomcat,manager-gui,manager-jmx"/>

and then peruse http://localhost:8080/manager/jmxproxy/
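For a scriptable peek (my addition; this assumes the tomcat/tomcat credentials above and Tomcat on port 8080), the jmxproxy servlet also takes a qry parameter to narrow to one MBean:

```shell
# Query a single MBean through Tomcat's jmxproxy servlet.
curl -u tomcat:tomcat 'http://localhost:8080/manager/jmxproxy/?qry=java.lang:type=Memory'
```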


I configured the plugin with Miah’s chef-collectd cookbook. See my recipe and the template at:


The main changes to the plugin configuration are a change to the prefix for the thread_pools and to the ‘Type’ for class loading.
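For flavor, here is a rough sketch of what a collectd GenericJMX configuration looks like (my reconstruction, not the exact config from our repo; the jar paths, port, and MBean selection are illustrative):

```
LoadPlugin java
<Plugin java>
  JVMArg "-Djava.class.path=/usr/share/collectd/java/collectd-api.jar:/usr/share/collectd/java/generic-jmx.jar"
  LoadPlugin "org.collectd.java.GenericJMX"
  <Plugin "GenericJMX">
    <MBean "class_loading">
      ObjectName "java.lang:type=ClassLoading"
      <Value>
        Type "gauge"
        Table false
        Attribute "LoadedClassCount"
        InstancePrefix "loaded_classes"
      </Value>
    </MBean>
    <Connection>
      ServiceURL "service:jmx:rmi:///jndi/rmi://localhost:9010/jmxrmi"
      Collect "class_loading"
    </Connection>
  </Plugin>
</Plugin>
```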


We use the carbon-writer plugin from Gregory Szorc. The plugin didn’t sanitize out double-quotes, which pretty much horked the Graphite browser. This pull request fixes that.

debugging collectd

The Ubuntu build doesn’t include debugging, so turning up the log level to ‘debug’ does nothing, and the ‘info’ level gives you almost nothing. The most useful steps for tracking down my issue (which came down to the aforementioned double quotes) were a) running in the foreground:

/usr/sbin/collectd -f -C /etc/collectd/collectd.conf

and b) enabling the CSV plugin to see what was getting written before it went to carbon/graphite:

LoadPlugin csv
<Plugin csv>
  DataDir "/var/lib/collectd/csv"
  StoreRates false
</Plugin>

It’s been real, Tumblr

So I tried Tumblr as a blogging platform. Since Twitter has worked out so well, I thought Tumblr might have some appeal that wouldn’t be apparent until I dove in and tried it.

But I have a hard time taking myself seriously here, so I’m moving to Jekyll (at GitHub, but I can take it anywhere). The preview is at http://blog.pburkholder.com. I need to get a Disqus account set up and clean up the old posts. I hope it doesn’t take long, as I have some real content (sensu + chef, Puppet/Chef lessons) that deserve my real attention.


Er, back on Tumblr again. Why? Well, as cool as Jekyll is, I can’t quite justify the time to get it ‘just so’ when I can come here and just write.

Meanwhile, I can use http://import.jekyllrb.com/docs/tumblr/ to export/backup my content here, just in case.

Create chef client via api with validation key
knife exec -s $CHEF \
  -E 'client_desc = { "name" => "peterb-hack", "admin" => false }; n = api.post "/clients", client_desc; puts n["private_key"]' \
  -u chef-validator -k /tmp/chef-validator.pem > client.pem

LifeOps sounds like just what the Doctor ordered for me. I started on some goals 6 months ago, and have had varying degrees of success in sticking with them.

Sounding out some ideas, getting feedback, and having some structure around it via regular meetings all sounds good for sustaining the motivation.

Anchoring with DevOpsDC meetups and augmenting with hangouts fits nicely for me as well. So count me in!

Chef pain point 1: multiple repos

At $WORK we’ve been migrating from Puppet to Chef. I was in the minority in voting to stay with Puppet, since we were already halfway through refactoring our initial Puppet implementation.

I have nothing against Chef as such, but there are some pain points that others considering a Puppet to Chef migration should consider. I need to write a full analysis of this migration, but with time short, I’ll start by just sharing some pain points which I’ll later pull into that magnum opus.

Multiple repos

Our Puppet code was in one repository. Our Chef code is in 24 repos so we can use Berkshelf, with each cookbook tied to versions and branches. When a Puppet module failed due to some odd dependency, I could ‘ack’ through the repo to find a clue for what I was missing. Try that across two dozen repos. Ugh.
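For context, here’s a hypothetical Berksfile showing the kind of per-cookbook version and branch pinning Berkshelf encourages (the cookbook names and repo URLs are made up):

```ruby
# Hypothetical Berksfile: every cookbook lives in its own repo,
# pinned to a released version, a branch, or a tag.
source 'https://supermarket.chef.io'

cookbook 'base',     '~> 1.2.0'
cookbook 'logstash', git: 'git@github.com:example/chef-logstash.git', branch: 'prod'
cookbook 'sensu',    git: 'git@github.com:example/chef-sensu.git',    tag: 'v0.4.1'
```

The pinning is exactly what makes releases reproducible, and exactly what makes a cross-cookbook grep painful.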