Chris's Blog

Devops Shokunin

Sync Puppet Certs between EC2 regions

In the past I have used nginx to route all cert requests to a single cert server. This worked fine while my puppet infrastructure was limited to a single EC2 region, but I recently decided to run puppet masters on separate coasts.

Keeping the certs in sync requires a two-way sync, so I ruled out just rsyncing files around. I tried deploying various drbd+clustered_file_system solutions, and while the tests worked within a region, I could not get them working well through the NAT between the two regions.

A helpful IRC regular (semiosis, thanks!) suggested unison. My worry was that a plain cron job might sync too slowly and that I might run into issues with the unison sync itself. There is a very useful program called incron that monitors file systems and performs actions based on inode-level changes. So the obvious solution was to monitor the filesystem for changes and then force a unison sync.

The final solution looks like this:

Install Unison

apt-get install unison

Make sure SSH works

unison -batch -auto /etc/puppet/ssl/ca/signed \
ssh://puppet@OTHERPUPPETHOST//etc/puppet/ssl/ca/signed
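
If passwordless SSH is not already set up, a minimal sketch (this assumes a puppet user with a home directory runs the sync, and that OTHERPUPPETHOST is reachable from each master):

sudo -u puppet mkdir -p ~puppet/.ssh
# generate a key with no passphrase so unison can log in unattended
sudo -u puppet ssh-keygen -t rsa -N "" -f ~puppet/.ssh/id_rsa
# push the public key to the other master's authorized_keys
sudo -u puppet -H ssh-copy-id puppet@OTHERPUPPETHOST

Repeat in the other direction so either master can initiate a sync.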

Write a simple script on each host as /bin/puppet_cert_sync

#!/bin/bash

# two-way sync of signed certs with the other puppet master
/usr/bin/unison -batch -auto /etc/puppet/ssl/ca/signed \
ssh://puppet@OTHERPUPPETHOST//etc/puppet/ssl/ca/signed > /tmp/sync.log

Set the right mode

chmod +x /bin/puppet_cert_sync

Add a crontab entry as a safety net, so everything stays kosher even if an incron event is missed

31 * * * * /bin/puppet_cert_sync

Install incron

sudo apt-get install incron

Configure it to allow the puppet user

echo "puppet" >> /etc/incron.allow

Add the incrontab entry (as the puppet user, since that is who is allowed)

export EDITOR=vi
incrontab -e
 
/etc/puppet/ssl/ca/signed IN_CLOSE_WRITE,IN_CREATE,IN_DELETE /bin/puppet_cert_sync
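
The three fields are the watched path, a comma-separated list of inotify events, and the command to run when one fires. To confirm the entry took:

# list the current user's incron table
incrontab -l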

Then test on one host by running

watch -n 1 ls /etc/puppet/ssl/ca/signed/testhost.pem

And on the other host run

sudo puppetca --clean testhost

If everything is wired up correctly, testhost.pem should disappear from the watch output on the first host almost immediately.

Caveats: This may not work in an environment where many new certs are created very close together in both regions. It is also not as performant as a clustered file system, but it works well in my use case. In addition, the default puppet SSL directory differs from the paths above, so adjust as necessary.
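
One way to soften the concurrent-syncs caveat, not part of the setup above but a sketch worth trying, is to serialize runs with flock(1) so overlapping incron events queue up instead of colliding (the lock file path is an arbitrary choice):

#!/bin/bash

# wait up to 60s for the lock, then run the same unison sync as before
flock -w 60 /tmp/puppet_cert_sync.lock \
  /usr/bin/unison -batch -auto /etc/puppet/ssl/ca/signed \
  ssh://puppet@OTHERPUPPETHOST//etc/puppet/ssl/ca/signed > /tmp/sync.log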

HTTP Troubleshooting with tcpdump/tcptrace

Operations people are often called upon to do low-level HTTP troubleshooting, and I often end up using tcpdump and tcptrace to break out HTTP sessions and track down problems.

Install tcptrace on your local machine

apt-get install tcptrace

or for you Mac people

brew install tcptrace

Run tcpdump on your server

tcpdump -s 1500 -w /tmp/DUMP.pcap -c 5000 -i eth0 port 80 and host www.mague.com
switch      reason
-s          Sets the snaplength (bytes captured per packet); the default is often too small and you lose data that you want for analysis
-w          Write a pcap file to this location; I usually prefer to perform analysis on another host
-c          Capture this many packets; not necessary, but useful if you forget to stop the capture
-i          Interface to capture on; lo is the loopback, and you can find interfaces by running ifconfig -a
expression  Limits the capture to specific hosts, ports, or protocols (see the pcap-filter man page for more on filtering)
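
Filter expressions can be combined with and/or/not. For example (the addresses here are hypothetical), to capture port 80 traffic for two backends while excluding your own SSH session:

tcpdump -s 1500 -w /tmp/DUMP.pcap -i eth0 \
  'tcp port 80 and (host 10.0.0.5 or host 10.0.0.6) and not port 22'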

Copy the dumpfile down to your local machine

mkdir -p ~/tmp/analysis
cd ~/tmp/analysis
scp remotehost:/tmp/DUMP.pcap .
tcptrace -n -xhttp DUMP.pcap

This will create a bunch of files in your directory like so:

172.16.0.20_http.xpl contains information that you can plot using xplot
http.times contains timestamps for when each piece of data was first fetched and when it completed
For troubleshooting, however, we are interested in the *.dat files.

The request and response are in separate files with the names reversed: for example, a2b_contents.dat is the request and b2a_contents.dat is the response.
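
To skim a whole session, page through the pair; the files are plain text plus whatever body bytes were on the wire:

less a2b_contents.dat    # the request the client sent
less b2a_contents.dat    # the status line, headers, and body that came back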

Now you can go about finding errors with grep

chris@gorilla ~/tmp/analysis
$ grep --binary-files=text 404 *.dat
o2p_contents.dat:GET /throw/me/a/404/please HTTP/1.1
p2o_contents.dat:HTTP/1.1 404 Not Found

This is also super useful if you want to reproduce an issue with curl later, because now you can replay all of the headers that were originally sent:

curl -v -H "Host: www.mague.com" -H \
    "Accept-Encoding: gzip,deflate,sdch" http://www.mague.com