Chris's Blog

Devops Shokunin

Things you need to know about giving tech talks


I recently sat through one of the most painful technical talks I have ever attended, so I would like to offer some advice on giving a technical presentation, mostly stolen from good speakers.  At the end of the talk, I was left with the impression that the speaker may have known what they were talking about, but had given very little consideration to the audience's experience.  My tips for pulling the audience along with you are as follows.

 

Frame the Talk

Tell the audience what you are going to tell them, tell them, and then go over what you told them. This advice works for all talks because it helps the audience prepare to pay attention and reinforces the content, which increases retention. Our earliest learning is based on repetition; we need less of it later in life, but repeating the key points a few times still helps.

 

State the Problem or Use Case

One of the first questions your audience will ask is “Why would you go to all the trouble to roll out a rewrite of your application in Elixir?”. Answer that question first: it establishes that you are addressing actual real-world problems and that the rest of your talk is not simply an exercise in hand waving. “Our goal was to rewrite an application that was simply not performant enough at scale due to the limitations of the Ruby language.”

 

Explain the Why of the Tool Choice

“We decided to use Redis”. While Redis is certainly one of my favorite hammers, not everything is a nail. “We chose Redis over Memcached because our application uses keys longer than the 250 bytes that Memcached can handle, and we needed Redis's configurable cache-eviction policies”. The why of the choice lets the audience determine whether the same choice makes sense for them. The speaker's reasoning may also give the audience useful ideas they had not considered before.

 

Explain Some of the Drawbacks of the Tool Choice

“Redis is single threaded, and this creates some issues where a single instance can get overwhelmed”. This demonstrates that the presenter fully understands the trade-offs involved in a selection. Generally, it is a good idea to follow up with either a workaround or a mitigation of that drawback. “To avoid this issue, we sharded the cache across several instances using Twemproxy”. After I gave a presentation to the CTO of a large company, he later cornered me and said that he was very impressed by the team's forethought and appreciated the honesty.

 

Show How Everything Fits together

A simple diagram of how the running parts fit together really cements the flow in the mind of the audience. Interesting ideas and presentations often prove useful in situations that may be radically different, but the flow of data will remain. Effective talks have an overarching theme; it is good to go into details, but be sure to tie those details back to the problem or use case.

 

The “Silver Bullet Slide”

I often encourage people giving tech talks to include a slide that sets realistic expectations. “Docker and Kubernetes are a great solution for stateless applications, but not necessarily the best choice for data stores, given the extra management overhead”. There are no silver bullets in life, so be realistic.

 

Discuss the Past

After completing the project, what was learned? “After completion, it turned out that the cost and operational overhead of running Kafka were too high; coupled with the fact that I now need non-Java clients, I probably should have chosen a different technology”. The audience is often looking for insight, so discussing what could have gone better is invaluable.

 

Discuss the Future

Where are we going with this? “In the future we are looking to use Terraform to build a complete testing environment from scratch nightly, both to test our builds and to ensure we can use it for disaster recovery scenarios”. No project is ever really complete, and discussing possible future improvements ends the talk on a positive note.

 

Informative and enjoyable talks are those that put the audience first and inspire them to seek creative solutions to their unique problems.

Public speaking is an excellent career multiplier, so get over your nervousness and get to it.

 

Thanks to Chris Sessions and Zane Williamson for providing feedback.

Using JMX on Vagrant


Getting remote JMX working from my desktop to a Vagrant machine took a few tries.

Vagrantfile configuration to add more memory and forward the HTTP and JMX ports:

VAGRANTFILE_API_VERSION = "2"  # Vagrantfile format version

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "ubuntu/xenial64"
  config.vm.network "forwarded_port", guest: 8080, host: 8080
  config.vm.network "forwarded_port", guest: 9010, host: 9010
  config.vm.provider "virtualbox" do |vb|
    vb.customize ["modifyvm", :id, "--memory", "2048"]
  end
end
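The memory setting and port forwards only take effect when the VM boots, so after editing the Vagrantfile bring the machine up, or reload it if it is already running:

vagrant up
# if the VM was already running, apply the new settings with
vagrant reload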

The application start script needs to be modified to set up remote access:

 java \
 -Dcom.sun.management.jmxremote \
 -Dcom.sun.management.jmxremote.port=9010 \
 -Dcom.sun.management.jmxremote.rmi.port=9010 \
 -Djava.rmi.server.hostname=127.0.0.1 \
 -Dcom.sun.management.jmxremote.authenticate=false \
 -Dcom.sun.management.jmxremote.ssl=false \
 -jar "${SOURCE_DIR}/bin/myjar.jar"
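Before launching JConsole, it is worth confirming that the forwarded ports actually answer on the host. A quick check, assuming netcat is installed locally:

# both the HTTP and JMX forwards should accept connections
nc -vz 127.0.0.1 8080
nc -vz 127.0.0.1 9010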

Connection is now possible by running:

jconsole 127.0.0.1:9010

When JConsole starts up, select “Insecure Connection”.

Guide to running redis in production


On my company blog, I wrote a guide to running Redis in production environments.

Blogspam Analysis with R Part 1


This morning, while checking the comments on this blog, I was surprised by the number of spam comments caught by the Akismet plugin, so I decided to dive into some log file analysis using R to see if I could lessen the scourge.

Grab the data from my nginx logs. Since I get very few legitimate comments, we can assume that everything posted to the comment endpoint is spam.

echo '"IP", "DATE"' > ~/tmp/data_analysis/blogspam.csv
zgrep '/wp-comments-post.php' /var/log/nginx/acc* |
perl -ne 'if (m/.*:(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}) \- \- \[(\d+)\/(\w+)\/2014/)
{print "\",$1, "\",\"", $3, "/", $2, "\"\n"}' >>
~/tmp/data_analysis/blogspam.csv

Install R and start it

$ sudo apt-get install -y r-base-core
$ R

Load the data into R

spammers <- read.csv(file="blogspam.csv", header=TRUE, sep=",")

Let’s find the busiest IPs and heaviest days:

 > summary(spammers)
               IP             Date      
 1.1.1.1        : 2135   Sep/07 : 1364  
 2.2.2.2        : 2069   Oct/02 : 1353  
 3.3.3.3        : 1971   Oct/03 : 1348  
 4.4.4.4        : 1864   Sep/09 : 1344  
 5.5.5.5        : 1819   Oct/01 : 1333  
 6.6.6.6        : 1712   Sep/30 : 1328  
 (Other)        :50435   (Other):53935

Histogram by IP Frequency

iplist <- as.data.frame(table(spammers$IP))
hist(iplist$Freq, breaks=100, xlab="ip distribution",
      main="Spammer IPs",  col="darkblue")

(Figure: histogram of spam comment counts per IP)

This shows that there is no single IP causing all of the trouble, so there is no simple fix such as blocking a single IP.
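As a quick cross-check outside of R, the same per-IP counts can be pulled straight from the logs with standard shell tools. A sketch, assuming the client IP is the first field of each log line, as in the nginx logs above:

# top 10 IPs posting to the comment endpoint
zgrep -h '/wp-comments-post.php' /var/log/nginx/acc* |
  awk '{print $1}' | sort | uniq -c | sort -rn | head -10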

Graph the number of spam comments per day.
Note: you need to sort the data by date, or the lines will be all over the place and the graph unreadable.

dates <- as.data.frame(table(spammers$Date))
datessorted <- dates[order(as.Date(dates$Var1,format = "%b/%d")),]
plot(as.POSIXct(datessorted$Var1,format = "%b/%d"),
  datessorted$Freq, main="spam comments", xlab="date", ylab="count", type="l")

(Figure: spam comments per day)

This gives me a basic idea of the problem; further analysis will be available in Part 2.

Note: since I sat down to write this post after clearing out the spam comments, I now have 101 new spam comments.

Getting started with VimWiki


VimWiki is an excellent tool for creating both a personal knowledge base and a journal.  I use it every day to keep track of new techniques that I learn, as well as to keep notes on how I perform routine tasks.  Because it is completely text based with minimal formatting, it is easy to share and to keep in git version control for a full history of how and what I learn.  In addition, the search feature allows me to quickly recall how to perform a task or find the information I need.
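Since the wiki is just text files, putting it under version control is a one-time setup. A minimal sketch, assuming the wiki lives in the directory created in the installation steps below:

mkdir -p ~/Documents/VimWiki
cd ~/Documents/VimWiki
git init
# after adding or editing notes, snapshot them
git add -A
git commit -m "update wiki"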

Installation

1) Install Vundle or download my dotfiles to get started with managing vim plugins.

2) Install the VimWiki and Calendar plugins for vim by adding the following to your Vundle configuration:

Bundle 'mattn/calendar-vim'
Bundle 'vimwiki'

3) Update all of your plugins and create the wiki directory:

vim +BundleInstall +qall
mkdir -p ~/Documents/VimWiki/

4) Remap your leader key in your .vimrc. I set mine to ,

:let mapleader=","

To find your current leader key run

:echo mapleader

and it will appear in the bottom left

5) Add the following to your .vimrc to get started:

" vimwiki stuff "
" Run multiple wikis "
let g:vimwiki_list = [
                        \{'path': '~/Documents/VimWiki/personal.wiki'},
                        \{'path': '~/Documents/VimWiki/tech.wiki'}
                \]
au BufRead,BufNewFile *.wiki set filetype=vimwiki
:autocmd FileType vimwiki map <leader>d :VimwikiMakeDiaryNote<CR>
function! ToggleCalendar()
  execute ":Calendar"
  if exists("g:calendar_open")
    if g:calendar_open == 1
      execute "q"
      unlet g:calendar_open
    else
      let g:calendar_open = 1
    end
  else
    let g:calendar_open = 1
  end
endfunction
:autocmd FileType vimwiki map <leader>c :call ToggleCalendar()<CR>

 

Using the Wiki Features

 

1)  Start up Vim


 

2)  The following command will take you to the top page of your wiki

,ww


3) Select yes to create the new directory, and it will take you to your new wiki index.

 

4) Enter information.


5) To make the text into actual links, surround it with [[ ]].


6) Move up and down; when the cursor is on the link you want (Company Information in this case), hit return and it will generate a new wiki page.


7) Add data to the page.


8) Go back to the main page by pressing the backspace key.


Searching the Wiki

1) Search for the term “blog” by running the following command

:VWS /blog/


Vim shows (1 of 2) matches.


2) Show all of the matches:

:lopen


3) Navigate up and down and hit return to open the selected file.

Using the Diary

1) Create a new diary entry and add data. If you add a == Title == header at the top, it will be visible in the diary index (next step).

,w,w


2) Go to the diary index:

,wi


3) Build the diary index:

:VimwikiDiaryGenerateLinks

4) Use the calendar plugin, toggling it on and off with:

,c


5) Navigate to a date and hit return, and the diary entry for that date will open!


Resources

VimWiki Quick Reference Guide  

VimWiki on Github

Opensource Infrastructure Revisited


In a previous article, I detailed the open source projects that I used to implement a PaaS infrastructure.

Since that time, the number of instances in the infrastructure has grown by 2.5x and several of the components needed to be rethought.

Capacity/Performance Management

Previous: Collectd/Visage
Replacement: Collectd/Graphite
Reasons: The collectd backend was too slow and I/O heavy
Graphite graphs are easy to embed in dashboard applications
Graphite also makes it easy to transform metrics, such as averaging CPU across a cluster of servers
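For example, Graphite's render API turns that kind of transformation into a single HTTP call. A sketch only; the hostname and metric path are placeholders and depend on how collectd writes metrics into Graphite:

# average CPU idle across every server in the cluster, last hour, as JSON
curl 'http://graphite.example.com/render?target=averageSeries(collectd.*.cpu-0.cpu-idle)&from=-1h&format=json'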

Continuous Integration

Previous: Selenium
Replacement: Custom Tests
Reasons: Selenium tests failed too often for indiscernible reasons
The resulting false positives slowed development

Log Collection

Previous: Rsyslog/Graylog2
Replacement: Logstash/ElasticSearch/Kibana
Reasons: MongoDB was too slow in EC2 for storing and searching logs

Logstash offers better parsing and indexing of logs with powerful filters.
ElasticSearch is super fast and scales horizontally on EC2.

Kibana is simple to use and allows Developers to quickly find the relevant information

All of these components are easily integrated into our dashboard application
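As an illustration of that API access, pulling matching log events out of ElasticSearch is a single HTTP request. A sketch; the host, index pattern, and field name are assumptions and depend on your Logstash configuration:

# documents matching status:500 from the Logstash indices
curl 'http://localhost:9200/logstash-*/_search?q=status:500&size=5&pretty'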

These changes not only allow the infrastructure to scale, but provide APIs that allow easy integration with custom dashboards.

Upgrading to Puppet 3.0


After spending a few days at PuppetConf and talking with Eric S. and Jeff McC. of PuppetLabs, I felt compelled to upgrade our infrastructure from Puppet 2.7 to 3.0, mainly to fix plugin sync issues and for the increased performance.

Here is my list of gotchas:

0) You cannot run both an ENC and stored configs at the same time. I filed a bug (#16698) on this.

1) On my puppet master, I run as user puppet under unicorn+nginx. When you don’t run as root, puppet only looks in ~/.puppet for the configuration file (Bug #16637); there is also a note about this in the release notes. I temporarily got around this by:

mv ~/.puppet ~/.puppet.old
ln -s /etc/puppet ~/.puppet
#also grab the new rack config
cp /puppet-3.0.0-rc8/ext/rack/files/config.ru .

and restarting my unicorn processes. I am much happier than I was on Passenger+Apache.

2) Ignoring deprecation warnings. The following will fail

  file { '/etc/nginx/nginx.conf':
    ensure  => present,
    source  => [ "puppet:///nginx/${::fqdn}.nginx.conf",
                "puppet:///nginx/${::role}.nginx.conf",
                'puppet:///nginx/default.nginx.conf'],
  }

with this message:

err: /Stage[pre]/Nginx::Config/File[/etc/nginx/nginx.conf]: 
Could not evaluate: Error 400 on SERVER: Not authorized to call find on
/file_metadata/nginx/foo.bar.com.nginx.conf Could not retrieve file metadata for
puppet:///file_metadata/nginx/foo.bar.com.nginx.conf: Error 400 on SERVER: 
Not authorized to call find on /file_metadata/nginx/foo.bar.com.nginx.conf at 
/etc/puppet/modules/nginx/config.pp:86

You need to be sure to include the modules path in the source URL.

  file { '/etc/nginx/nginx.conf':
    ensure  => present,
    source  => [ "puppet:///modules/nginx/${::fqdn}.nginx.conf",
                "puppet:///modules/nginx/${::role}.nginx.conf",
                'puppet:///modules/nginx/default.nginx.conf'],
  }

Jeff even helped out, agreeing that the error message was somewhat cryptic, and filed Bug #16667.

3) When writing the facts.yml file for mcollective, I needed to remove the map call so that I did not get errors after moving to Ruby 1.9:

--- a/modules/mcollective/manifests/config.pp
+++ b/modules/mcollective/manifests/config.pp
@@ -34,7 +34,7 @@ class mcollective::config inherits mcollective {
     group    => root,
     mode     => '0400',
     loglevel => debug,
-    content  => inline_template('<%= Hash[scope.to_hash.reject { |k,v| k.to_s =~ /(uptime_seconds|uptime|timestamp|free|path|ec2_metrics_vhostmd|servername|ec2_public_keys_0_openssh_key|sshrsakey|sshdsakey|serverip)/ }.sort.map].to_yaml -%>'),
+    content  => inline_template('<%= Hash[scope.to_hash.reject { |k,v| k.to_s =~ /(uptime_seconds|uptime|timestamp|free|path|ec2_metrics_vhostmd|servername|ec2_public_keys_0_openssh_key|sshrsakey|sshdsakey|serverip)/ }.sort].to_yaml %>'),
     require  => Class['mcollective::package'],
   }

Thanks to Eric and Jeff for helping me roll this out.
So far, I’m super happy with the significantly faster catalog compile times and the awesome support that the PuppetLabs Community team has provided.

Sync Puppet Certs between EC2 regions


In the past, I have used nginx to route all cert requests to a single cert server. This worked fine while my puppet infrastructure was limited to a single EC2 region. However, I recently decided to run puppet masters on separate coasts.

Keeping the certs in sync requires a two-way sync, so I ruled out just rsyncing files around. I tried deploying various drbd+clustered_file_system solutions, and while the tests worked within a region, I could not get them working well through the NAT between the two regions.

A helpful IRC regular (semiosis, thanks!) suggested unison. The concern was that a cron job might be too slow and that I might run into issues performing the unison sync. There is a very useful program called incron that monitors file systems and performs actions based on inode-level changes. So the obvious solution was to monitor the filesystem for changes and then force a unison sync.

The final solution looks like this:

Install Unison

apt-get install unison

make sure ssh works

unison -batch -auto /etc/puppet/ssl/ca/signed \
ssh://puppet@OTHERPUPPETHOST//etc/puppet/ssl/ca/signed

write a simple script on each host and save it as /bin/puppet_cert_sync

#!/bin/bash
 
/usr/bin/unison -batch -auto /etc/puppet/ssl/ca/signed \
ssh://puppet@OTHERPUPPETHOST//etc/puppet/ssl/ca/signed > /tmp/sync.log

Set the right mode

 chmod +x /bin/puppet_cert_sync

add a crontab entry to make sure it stays kosher

31 * * * * /bin/puppet_cert_sync

install incron

sudo apt-get install incron

configure it to allow user puppet

echo "puppet" >> /etc/incron.allow

add the incrontab entry

export EDITOR=vi
incrontab -e
 
/etc/puppet/ssl/ca/signed IN_CLOSE_WRITE,IN_CREATE,IN_DELETE /bin/puppet_cert_sync
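Before testing with real certificates, it can help to confirm that incrond is running and has loaded the table. A quick sanity check, assuming a stock Ubuntu install where incrond logs to syslog:

pgrep -l incrond
grep incrond /var/log/syslog | tail -5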

Then test on one host by running

watch -n 1 ls /etc/puppet/ssl/ca/signed/testhost.pem

And on the other host run

sudo puppetca --clean testhost

Caveats: This may not work in an environment where many new certs are created very close together in both regions. It is also not as performant as a clustered file system, but it seems to work well in my use case. In addition, the default puppet ssl directory is different, so adjust the paths as necessary.

HTTP Troubleshooting with tcpdump/tcptrace


Operations people are often called upon to do low-level HTTP troubleshooting, and I often end up using tcpdump and tcptrace to break out HTTP sessions and troubleshoot.

Install tcptrace on your local machine

apt-get install tcptrace

or for you Mac people

brew install tcptrace

Run tcpdump on your server

tcpdump -s 1500 -w /tmp/DUMP.pcap -c 5000 -i eth0 port 80 and host www.mague.com
switch      reason
-s          Sets the snap length (how much of each packet to capture); the default is often too small, and you lose data that you want for analysis
-w          Write a pcap file to this location; I usually prefer to perform the analysis on another host
-c          Capture this many packets; not necessary, but useful if you forget to stop the capture
-i          Interface to capture on; lo is the loopback, and you can find interfaces by running ifconfig -a
expression  Limit the capture to particular hosts, ports, or protocols (see the tcpdump man page for more on filtering)

Copy the dumpfile down to your local machine

mkdir -p ~/tmp/analysis
cd ~/tmp/analysis
scp remotehost:/tmp/DUMP.pcap .
tcptrace -n -xhttp DUMP.pcap

This will create a bunch of files in your directory like so:

172.16.0.20_http.xpl contains information that you can plot using xplot
http.times contains the timestamps at which data was first fetched and when it completed
For troubleshooting, however, we are interested in the *.dat files.

The request and response are in separate files with the names reversed.

For example a2b_contents.dat is the request

and b2a_contents.dat is the response

Now you can go about finding errors with grep

chris@gorilla:o ] ~/tmp/analysis 
$ grep --binary-files=text 404 *.dat
o2p_contents.dat:GET /throw/me/a/404/please HTTP/1.1
p2o_contents.dat:HTTP/1.1 404 Not Found

This is also super useful if you want to use curl later to reproduce an issue, because now you can just add all of the headers that were previously sent:

curl -v -H "Host: www.mague.com" -H \
    "Accept-Encoding: gzip,deflate,sdch" http://www.mague.com

Puppet ENCs and Automating Monitoring Setups with Puppet


Patrick Buckley came and spoke about how he uses Puppet Node Classifiers – Slideshare

After him, I spoke about using Puppet to configure monitoring. – Slideshare