Another Approach To Ansible With Vagrant

To speed up the Ansible development process it is common to use virtual machines, and Vagrant is a pretty good option for that, especially if, like me, you come from a Ruby background.

I had one problem with Vagrant, though: it sets up the vagrant user for you, but there is no direct root ssh access, which is a common way of accessing a real server.

Of course I could just use the vagrant user with a become: yes instruction, but that wouldn't really replicate the real-world scenario.
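
With the default vagrant user, an inventory entry would look roughly like this (just a sketch of the approach I wanted to avoid; the host name is illustrative):

my-machine ansible_host=127.0.0.1 ansible_port=2222 ansible_user=vagrant ansible_become=yes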

So I came up with this configuration in order to address that problem:

Vagrant.configure("2") do |config|
  # Base
  config.vm.box = "debian/stretch64"

  # Copy your ssh public key to /home/vagrant/.ssh/me.pub
  config.vm.provision "file", source: "~/.ssh/id_rsa.pub", destination: "~/.ssh/me.pub"

  # Sets up root ssh access with the copied key:
  config.vm.provision "shell", inline: <<-SHELL
    mkdir -p /root/.ssh
    chmod -R 700 /root/.ssh
    cat /home/vagrant/.ssh/me.pub > /root/.ssh/authorized_keys
    chmod 644 /root/.ssh/authorized_keys
  SHELL
end
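
After adding these provisioners, bring the box up (or re-provision one that is already running) so the key gets copied over:

$ vagrant up
$ vagrant provision   # if the box was already created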

With this, we can access the machine directly as root via port 2222 (the host port Vagrant forwards to the guest's SSH port by default):

ssh -p 2222 root@localhost

And add the machine to your inventory:

my-machine ansible_host=127.0.0.1 ansible_port=2222 ansible_user=root
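
To confirm that Ansible can actually reach the box as root, a quick ping works (assuming the inventory file is named hosts and Python is available on the guest):

$ ansible my-machine -i hosts -m ping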

Improvements

Even though that already works, it is hard to keep track of which SSH port Vagrant assigns to each machine.

In order to make it better, we could add a private network to the box and access it via the assigned IP address:

Vagrant.configure("2") do |config|
  # other config
  config.vm.network :private_network, ip: "10.0.1.10", netmask: "255.255.255.0"
  # some more config
end

This way we can simply access the machine via ssh:

ssh root@10.0.1.10

Or add it to the ansible inventory:

my-machine ansible_host=10.0.1.10 ansible_user=root
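
With fixed IP addresses it also becomes easy to keep several boxes in one inventory; a hypothetical group could look like this (the names and the second IP are made up):

[vagrant]
web ansible_host=10.0.1.10 ansible_user=root
db  ansible_host=10.0.1.11 ansible_user=root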

Splitting ssh config in multiple files

I have an extensive ~/.ssh/config file due to having multiple credentials on different servers, both for work and for personal use. This means different public/private SSH key pairs, different users, different servers, etc.

My file looks something like this:

# General configuration
Host *
  Port 22
  IdentityFile ~/.ssh/id_rsa
  ServerAliveInterval 60
  ServerAliveCountMax 5

# Personal config
Host github
  HostName github.com
  IdentityFile ~/.ssh/github_rsa

Host lcguida
  HostName lcguida.com
  User admin

# Work servers
# ...

# Another company servers
# ...

This was OK at the beginning, but as time passed and the file grew (Raspberry Pi access, another server at work, etc.), it became a huge mess.

But since the 7.3p1 release in 2016, OpenSSH allows us to use an Include directive to import other config files. It supports the ~ shortcut as well as wildcard notation (*).

Inspired by the .d directory pattern present in many Linux programs, I configured my system as follows:

.ssh
├── config
├── config.d
│   ├── work.config
│   ├── home.config
│   └── code.config
└── known_hosts

So each <name>.config file holds the credential information for a specific topic (work, personal, home, etc.), and the main config file is configured as follows:

Host *
  Port 22
  IdentityFile ~/.ssh/id_rsa
  ServerAliveInterval 60
  ServerAliveCountMax 5

Include ~/.ssh/config.d/*.config
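
Each included file then only needs its own Host blocks. For example, the GitHub entry from before could move into code.config (how entries are grouped per file is entirely up to you):

# ~/.ssh/config.d/code.config
Host github
  HostName github.com
  IdentityFile ~/.ssh/github_rsa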

Voilà. Sanity is back to ssh config files.

td: Todo list the geek way

Today I was looking for a simple todo list. I did find Todoist, but since I don't use the App Store at work and my home computer runs Ubuntu, I kept searching.

I ended up finding td.

td is a simple command-line todo list with some interesting functionality.

You can either have a .todos file in a directory or have a global database configured.

Installation

For Mac, you can find td on brew, so simply do:

$ brew install td

For Linux, you can either install it from source with go get github.com/Swatto/td or download the executable and put it somewhere in your PATH (/usr/local/bin, for example).

Configuration

The thing I found interesting in td was the ability to have a “per project” to-do list.

Simply create a .todos file in a folder and that to-do list becomes accessible whenever you're in this folder or any of its sub-folders (yes, it is recursive).
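
For example, setting up a per-project list from the shell might look like this (assuming td is already on your PATH):

$ cd ~/projects/my-project
$ td init                    # creates the .todos file in the current folder
$ td add "Write the README"  # adds a todo to this project's list
$ td                         # lists the pending todos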

On top of that, you can define a global database using the TODO_DB_PATH environment variable.

Just add something like this to your .bashrc or .zshrc:

export TODO_DB_PATH=$HOME/.config/todo.json

If td finds a local .todos file it will use that list; otherwise it will fall back to the global database.

For a multi-computer solution, you can point this global database to a Dropbox folder so you can have it synchronized:

export TODO_DB_PATH=$HOME/myDropboxFolder/todo.json

Usage

td usage is very simple. Just take a look at its help:

COMMANDS:
   init, i  Initialize a collection of todos
   add, a   Add a new todo
   modify, m   Modify the text of an existing todo
   toggle, t   Toggle the status of a todo by giving his id
   clean Remove finished todos from the list
   reorder, r  Reset ids of todo or swap the position of two todo
   search, s   Search a string in all todos
   help, h  Shows a list of commands or help for one command

GLOBAL OPTIONS:
   --done, -d     print done todos
   --all, -a      print all todos
   --help, -h     show help
   --version, -v  print the version

Using db:structure dump and load instead of db:schema

Today I faced the following problem: I had a migration creating an index in SQL, like this:

CREATE UNIQUE INDEX some_table_single_row ON some_table((1))

which ensures the table can only ever contain a single row.
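
In the migration this was done with a raw execute call, something along these lines (the class name and migration version are illustrative):

class AddSingleRowGuardToSomeTable < ActiveRecord::Migration[5.1]
  def up
    execute "CREATE UNIQUE INDEX some_table_single_row ON some_table ((1))"
  end

  def down
    execute "DROP INDEX some_table_single_row"
  end
end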

Problem is, Rails won't dump this index to db/schema.rb and, because of that, my test database didn't create it and I had a failing test.

db:structure

Rails ships rake tasks under db:structure which do pretty much what the db:schema ones do, but using the database's own tooling, in this case pg_dump.

So, I created a structure dump from the dev database:

[$] rake db:structure:dump

which creates a db/structure.sql file. I then dropped my test database and re-created it from this new dump:

[$] RAILS_ENV=test rake db:structure:load

Et voilà, tests were passing.
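
To keep Rails dumping and loading db/structure.sql from then on (instead of db/schema.rb), the schema format can also be switched to :sql in the application configuration:

# config/application.rb
config.active_record.schema_format = :sql

With that set, rake db:migrate regenerates db/structure.sql and the test database is prepared from it automatically.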

Deploying jekyll with capistrano

Assumptions:

  • Server uses RVM
  • The Jekyll site uses Bundler

Install Capistrano

Add Capistrano and its dependencies to the Gemfile:

group :deployment do
  gem 'capistrano'
  gem 'capistrano-rvm'
  gem 'capistrano-bundler'
end

Run bundle install and then let Capistrano generate its configuration files:

$ bundle install
$ bundle exec cap install

mkdir -p config/deploy
create config/deploy.rb
create config/deploy/staging.rb
create config/deploy/production.rb
mkdir -p lib/capistrano/tasks
create Capfile
Capified
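
For the rvm and bundler integrations to actually hook into the deploy, they also need to be required in the generated Capfile; the relevant lines look like this:

# Capfile
require "capistrano/setup"
require "capistrano/deploy"
require "capistrano/rvm"
require "capistrano/bundler"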

Configuring capistrano

deploy.rb:

# config valid only for current version of Capistrano
lock "3.8.1"

set :application, "my_app"
set :repo_url, "git@<git-url>.git"

# Default deploy_to directory is /var/www/my_app_name
set :deploy_to, '/home/deploy/mysite.com'

set :rvm_type, :user
set :rvm_ruby_version, '2.4.0@mysite'

namespace :deploy do

  task :jekyll_build do
    on roles(:app), in: :groups, limit: 3, wait: 10 do
      within current_path do
        execute :bundle, 'exec jekyll build'
      end
    end
  end

  # Run the jekyll build command after the release folder is created
  after "symlink:release", :jekyll_build
end

And production.rb:

server "mysite.com", user: "deploy", roles: %w{app web}

That’s it! Now just cap production deploy to update the site.