The original words of Phanes, tirelessly carved into a slab of "No'".

Episode IV: A New Sleepy Hope

I can already tell I won’t finish this tonight.  I’m sleepy and twitchy from coffee.  I’m fixing my sleep schedule after a late-night work project that recurrently eats up a week (night and day) on a several-weeks-long rotation.  I just finished that week and am recovering.

A little life wisdom, here:  You can only fix a lack of sleep with sleep.  If you try to power through it, or medicate it away, or tease it with bits at a time, sleep will eventually tell you how the master/slave relationship works in cruel, cruel ways that neither you nor anyone you know will ever forget.  Respect sleep.  Sleep is the master.  You are not.  That project deadline is not.  If you stay up and meet the deadline, it is because Master Sleep allowed it.

Anyway, where was this PuppetDB mystery last at?

There were two sets of files we needed: one that tells PuppetDB where its configuration files live, and another that is those actual configuration files.

Location spec:


Where we specified the “default” path, whatever that means to puppetlabs:

root@oldhorse:~# cat /etc/default/puppetdb
# Init settings for puppetdb

# Location of your Java binary (version 7 or higher)
JAVA_BIN="/usr/bin/java"

# Modify this if you'd like to change the memory allocation, enable JMX, etc
JAVA_ARGS="-Xmx192m"

# These normally shouldn't need to be edited if using OS packages
USER="puppetdb"
GROUP="puppetdb"
INSTALL_DIR="/opt/puppetlabs/server/apps/puppetdb"
CONFIG="/etc/puppetlabs/puppetdb/conf.d"

# Bootstrap path
BOOTSTRAP_CONFIG="/etc/puppetlabs/puppetdb/bootstrap.cfg"

# START_TIMEOUT can be set here to alter the default startup timeout in
# seconds. This is used in System-V style init scripts only, and will have no
# effect in systemd.

So you can see that the default file references two paths, neither of which actually exists yet:

root@oldhorse:~# stat /opt/puppetlabs/server/apps/puppetdb
stat: cannot stat '/opt/puppetlabs/server/apps/puppetdb': No such file or directory
root@oldhorse:~# stat /etc/puppetlabs/puppetdb/conf.d
stat: cannot stat '/etc/puppetlabs/puppetdb/conf.d': No such file or directory

So, it looks like we’re creating those, although I’ve not seen that indicated for sure yet in our document.  It could be that when we initialize that’s where it will generate a skeleton configuration filesystem structure.
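A quick way to keep checking for those paths as we go (a throwaway sketch of mine, not something from the docs):

```shell
# Loop over the two paths from /etc/default/puppetdb and report which exist yet.
for p in /opt/puppetlabs/server/apps/puppetdb /etc/puppetlabs/puppetdb/conf.d; do
  if [ -d "$p" ]; then
    echo "present: $p"
  else
    echo "missing: $p"
  fi
done
```

Right now, both should come back as missing.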

At this point I’m not entirely sure if Puppet really did install puppetDB at all.

I know we already did this when a user on IRC suggested it, but just in case something failed and I did not know to look for it, I ran:

puppet resource package puppetdb ensure=latest

Which is the command to run to install puppetdb via module, as per the “install from packages” document (I know).
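If you want an OS-level sanity check that is independent of puppet's own reporting, something like this works on Debian-family systems (the dpkg-query invocation is my own, not from the walkthrough):

```shell
# Ask the OS package manager directly whether puppetdb is installed.
if dpkg-query -W -f='${Status}\n' puppetdb 2>/dev/null | grep -q "install ok installed"; then
  echo "puppetdb package installed"
else
  echo "puppetdb package not installed"
fi
```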

My result:

root@oldhorse:~# puppet resource package puppetdb ensure=latest
Warning: Facter: Could not process routing table entry: Expected a destination followed by key/value pairs, got ' dev virbr0 proto kernel scope link src linkdown'
Notice: /Package[puppetdb]/ensure: created
package { 'puppetdb':
  ensure => '4.2.2-1puppetlabs1',
}

Now I check these paths again in case something magic happened.

And, it did:

root@oldhorse:~# stat /opt/puppetlabs/server/apps/puppetdb
 File: '/opt/puppetlabs/server/apps/puppetdb'
 Size: 4096 Blocks: 8 IO Block: 4096 directory
Device: fc00h/64512d Inode: 10491984 Links: 5
Access: (0755/drwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2016-09-29 01:53:16.000000000 -0400
Modify: 2016-09-29 01:53:19.973792517 -0400
Change: 2016-09-29 01:53:19.973792517 -0400
 Birth: -
root@oldhorse:~# stat /etc/puppetlabs/puppetdb/conf.d
 File: '/etc/puppetlabs/puppetdb/conf.d'
 Size: 4096 Blocks: 8 IO Block: 4096 directory
Device: fc00h/64512d Inode: 5114153 Links: 2
Access: (0755/drwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2016-09-29 01:53:16.000000000 -0400
Modify: 2016-09-29 01:53:20.029792659 -0400
Change: 2016-09-29 01:53:20.029792659 -0400
 Birth: -

Both directories were created.

So, since I’m documenting every step obsessively to generate content for a simple walkthrough guide to be used by others after this, I need to find out why this happened.

I need to go back to my round 2 article here to see:

Puppet, Round II. Mistakes shape Successes.

Initially, we were installing from packages.  A user on IRC advised following the ‘install via module’ installation workflow instead, under the premise that it will set up most of the puppetdb module for you and save a lot of work.  So, in good faith, I uninstalled the package for puppetDB and removed the configuration directory /etc/puppetlabs/puppetdb at his suggestion, then began following the ‘install via module’ link.  From there, that page does not actually have instructions for installing via module, but does have a link to the official forge puppetdb documentation:

Here, that document explicitly tells you to run the following to install via module:

puppet module install puppetlabs-puppetdb

Which apparently did nothing but populate a code directory that hasn’t been covered in the docs yet and does not seem to be congruent with anything else happening.

Just now, I ran the package resource command instead:

puppet resource package puppetdb ensure=latest

And it seems to have fixed almost all of the confusion about stuff being missing.

A moment to shamelessly throw someone under the bus.  I’m talking about you, uber-smart puppetlabs documentation writer person.

Anyway, save yourself some headache: use the “installation from packages” route if you’re on Ubuntu Xenial.  In the final, condensed guide that will result from this, it will be the only route discussed until I see a release of puppet that has been documented by someone new.

Database backends are easy for most systems.  You install a database service, configure it.  Install a module if you have to.  Point your system at it.   Done.  So, by almost universal convention, from here I’m expecting it to go smoothly.

Let’s take a look at the new directories we were previously missing from the bad advice we were given:

root@oldhorse:/etc/puppetlabs/puppetdb# tree
.
├── conf.d
└── ssl
    ├── ca.pem
    ├── private.pem
    └── public.pem

2 directories, 3 files

Nothing really going on there yet.

root@oldhorse:/opt/puppetlabs/server/apps/puppetdb# tree
.
├── bin
│   └── puppetdb
├── cli
│   └── apps
│       ├── anonymize
│       ├── config-migration
│       ├── export
│       ├── foreground
│       ├── import
│       └── ssl-setup
├── ezbake.manifest
├── puppetdb.jar
└── scripts

4 directories, 11 files

New installation directory seems to be in order.

From here, both workflows dive right into talking about creating classes, or creating a section, without saying where: a database table?  A file?  They never say.  Not once that I can find.

Since I’m at about the point where I’d rather put my fist through my monitor than dig around this doc set any further to find these paths, I’m going to start browsing youtube to see if someone’s actually used puppetDB, and consequently installed it, and see if any of these videos have clues to glean.

Hey, cool, they have conventions.  As I’d expect, full of heavy drinking.  This one claims to cover installation.  Cool.

Notes.  While we know resource is a subcommand at this point from running it a few times over the last few posts on this journey, a resource is also the term for an object, represented in JSON format, that puppet uses as a structure to describe a system resource.  Resources are contained in catalogs, which are probably files that might be somewhere.  They use a concept called Facts, a term they’ve hijacked for puppet’s purposes.  There’s a command called Facter which displays these Facts.  I ran facter on the system and found some interesting system statistics dumped in object notation.

Catalogs are what we tell [a] puppet [agent] about a node and contain resources.

Facts are what a node tells puppet about itself.  I think they mean they’re data that facter generates from various points on the host and are not differentiating between the puppet component, the facter component and the puppet framework.

rambling…i tuned out…

While he’s talking, I put together that the facter command dumped what we’re calling resources, and that facter is essentially returning a catalog.

More detail revealed, and phrased in language more purposeful to me: using facter, the puppet framework as a whole is able to apply the development concept of reflection to the host configuration management process performed by puppet.  This basically means you can apply settings to a machine which aren’t necessarily known yet, using another node as a reference: “Make this machine like that machine”.

This can be done by something called storeconfigs.  These let you configure a node using resources from another node.

This might be relevant later in the context of storeconfigs, with special attention to the spaceship operator used to collect the exported resource.

class classname {
    @@sometype { 'some_title':
        member_key => member_value,
    }
}

# collected elsewhere with the spaceship operator:
Sometype <<| |>>

Note:  Find out what the spaceship operator is actually called.  The names of things are important.

The spaceship operator pulls a resource defined on the nodes from all the nodes.

Note: What are the logistics of that interhost communication?  How is it pulling these resources from the other nodes?  Is it sending a request to the other host’s agent service to return a catalog using facter?

Someone on IRC has pointed out that the [sections] destination file I was looking for is the file specified in the init config script, which is set to default.

Well, it’s not, that points to an empty conf.d directory.  I’m missing something here.  I have documented proof that at least one person uses this now.

A couple more links to read were given for creating a new module:

And this, for no reason given:

I’m pretty frustrated with the docs here.  We’re completely derailed from the guide at this point until puppetdb is installed, then I’m going to streamline that into a guide, and then just supplant whatever acid trip this shit is with that in my puppet master install and configuration steps later.

Note:  One of the new projects in the new pipeline needs to be an orchestration layer framework to transition into, away from puppet.

Back to the video, a monitoring system that has nothing to do with puppetdb, a data storage throughput lecture, also unrelated.  Basically says be careful with storeconfigs, which haven’t actually been defined yet.  He says their APIs are buggy.  Good to know.

Ooh.  PuppetDB!  Skip to 21:17 of this 40 minute video if you just want to see the part about PuppetDB in this video about PuppetDB (I know, it’s even in their videos).

Ok.  PuppetDB != storeconfigs.  Don’t use storeconfigs if you’re using PuppetDB.  PuppetDB is the intended replacement for storeconfigs.

PuppetDB provides an API accessible from curl.
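For example, assuming a PuppetDB 4.x instance listening on its default plaintext port 8080 on localhost (host and port are assumptions here, not from the video), the v4 query endpoints can be hit directly:

```shell
# List all known nodes (PuppetDB 4.x /pdb/query/v4 endpoints).
curl -s http://localhost:8080/pdb/query/v4/nodes

# Query a single fact across nodes; the query argument is a JSON expression.
curl -s -G http://localhost:8080/pdb/query/v4/facts \
  --data-urlencode 'query=["=","name","osfamily"]'
```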

22:38 shows the first architectural diagram I’ve come across in any of the docs I’ve seen so far.  VERY HELPFUL.

PuppetDB Server is composed of 5 components:

  • DLO, the dead letter office for MQ messages containing facts submitted to the HTTP API via the agent (which gets them using a resource query?).  The DLO is used as a diagnostics point for failures.
  • DB
  • Workers
  • HTTP
  • MQ

The PuppetDB usage model also involves the agent component and a Master.

The Master is composed of the following components:

  • Facts
  • Catalog
  • Resource Query

All data is entered into puppetdb via an internal HTTP server that runs the previously mentioned API.

All operations in the puppet master are non-blocking.

More irrelevant rambling about an unrelated benchmarking feature used by puppet.


Oh good, their dashboards use their API.

At this point, the clock’s caught up to me.  The learning curve of Puppet wins another battle.  Lost at 34:04.  Covered in Episode V.


© 2024 Phanes' Canon

The Personal Blog of Chris Punches