Local Gemfile

Sometimes I like to use gems that aren't in the project Gemfile. If I'm not allowed to add to it, or the gem only makes sense for me, I use a strategy based on a local Gemfile.

Let’s suppose the project uses pry but I’m a byebug fan. I will create a new file in the project root called Gemfile.local:

# Reads and evaluates the original Gemfile
eval File.read('Gemfile')

group :development do
  gem 'byebug'
end

This file evaluates the whole content of the original Gemfile and adds the gems declared below that line. In this case, we get all the project’s gems plus byebug in the development group.
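To see why the eval line works, here is a minimal sketch (a toy DSL of my own, not Bundler’s actual implementation): a Gemfile is plain Ruby evaluated against a DSL object, so eval'ing another file’s text runs its gem calls against the same object.

```ruby
require 'tempfile'

# Toy stand-in for Bundler's Gemfile DSL: `gem` just records names.
class FakeGemfileDSL
  attr_reader :gems

  def initialize
    @gems = []
  end

  def gem(name)
    @gems << name
  end

  # Evaluate a Gemfile's text with `self` as the DSL object,
  # the way Bundler evaluates the real Gemfile.
  def evaluate(path)
    instance_eval(File.read(path), path)
  end
end

# A "project" Gemfile...
base = Tempfile.new('Gemfile')
base.write("gem 'rails'\ngem 'pry'\n")
base.close

# ...and a local one that evals it, then adds byebug.
local = Tempfile.new('Gemfile.local')
local.write("eval File.read('#{base.path}')\ngem 'byebug'\n")
local.close

dsl = FakeGemfileDSL.new
dsl.evaluate(local.path)
p dsl.gems # => ["rails", "pry", "byebug"]
```

The inner `eval` runs in the same binding, so the original file’s `gem` calls land on the same DSL object — exactly what lets Gemfile.local extend the project Gemfile without copying it.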

Now let’s copy the original Gemfile.lock to a Gemfile.local.lock to make sure we don’t change any gem version:

cp Gemfile.lock Gemfile.local.lock

And then we use the --gemfile bundle option to install the gems:

bundle install --gemfile Gemfile.local

And now we can use it:

# my_code.rb
require 'byebug'

class SomeClass
  def some_method
    byebug
    p 'Hello'
  end
end

SomeClass.new.some_method

BUNDLE_GEMFILE=Gemfile.local bundle exec ruby my_code.rb

[1, 10] in my_code.rb
    1: require 'byebug'
    3: class SomeClass
    4:   def some_method
    5:     byebug
=>  6:     p 'Hello'
    7:   end
    8: end
   10: SomeClass.new.some_method

Nice, it works. But do I need to point at Gemfile.local in every command?

Not really. We can set an environment variable once:

export BUNDLE_GEMFILE='Gemfile.local'

Another Approach To Ansible With Vagrant

To speed up the Ansible development process it is common to use virtual machines, and Vagrant is a pretty good option for that, especially if, like me, you come from a Ruby background.

I had a problem using Vagrant, though: it sets up the vagrant user for you, but there is no direct root ssh access, which is a common way of accessing a real server.

Of course I could just use the vagrant user with a become: yes instruction, but that wouldn’t really replicate the real-world scenario.

So I came up with this configuration in order to address that problem:

Vagrant.configure("2") do |config|
  # Base
  config.vm.box = "debian/stretch64"

  # Copy your ssh public key to /home/vagrant/.ssh/me.pub
  config.vm.provision "file", source: "~/.ssh/id_rsa.pub", destination: "~/.ssh/me.pub"

  # Sets up root ssh access with the copied key
  config.vm.provision "shell", inline: <<-SHELL
    mkdir -p /root/.ssh
    chmod -R 700 /root/.ssh
    cat /home/vagrant/.ssh/me.pub > /root/.ssh/authorized_keys
    chmod 644 /root/.ssh/authorized_keys
  SHELL
end

With this, we can simply access the machine via port 2222, the port Vagrant forwards ssh to by default:

ssh -p 2222 root@localhost

And add the machine to your inventory:

my-machine ansible_host=localhost ansible_port=2222 ansible_user=root


Even though that already works, it is difficult to keep track of which forwarded port Vagrant is granting ssh access through for each machine.

To make it better, we can add a private network to the box and access it via the assigned IP address:

Vagrant.configure("2") do |config|
  # other config
  config.vm.network :private_network, ip: "192.168.50.4" # example address
  # some more config
end

This way we can simply access the machine via ssh:

ssh root@192.168.50.4

Or add it to the ansible inventory:

my-machine ansible_host=192.168.50.4

Splitting ssh config in multiple files

I have an extensive ~/.ssh/config file due to multiple credentials on different servers, for work and for personal purposes. This means different public/private ssh key pairs, different users, different servers, etc.

My file looks something like this:

# General configuration
Host *
  Port 22
  IdentityFile ~/.ssh/id_rsa
  ServerAliveInterval 60
  ServerAliveCountMax 5

# Personal config
Host github
  HostName github.com
  IdentityFile ~/.ssh/github_rsa

Host lcguida
  HostName lcguida.com
  User admin

# Work servers
# ...

# Another company servers
# ...

This was OK at the beginning, but as time passed and the file grew (Raspberry Pi access, another server at work, etc.) it became a huge mess.

But since the 7.3p1 release in 2016, ssh allows us to use an Include directive to import other config files. It supports the ~ shortcut as well as wildcard notation (*).

Inspired by the .d directory pattern present in many Linux programs, I configured my system as follows:

├── config
├── config.d
│   ├── work.config
│   ├── home.config
│   └── code.config
└── known_hosts

So each <name>.config file holds the credential information for a specific topic (work, personal, home, etc.), and the main config file is set up as follows:

# Host-specific files first: ssh uses the first value it finds for
# each option, so per-host settings must come before general defaults.
Include ~/.ssh/config.d/*.config

Host *
  Port 22
  IdentityFile ~/.ssh/id_rsa
  ServerAliveInterval 60
  ServerAliveCountMax 5
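Each included file then holds only the hosts for one context. A hypothetical work.config (the host names below are made up for illustration) might look like this:

```
# ~/.ssh/config.d/work.config
Host staging
  HostName staging.example.com
  User deploy
  IdentityFile ~/.ssh/work_rsa

Host ci
  HostName ci.example.com
  User deploy
  IdentityFile ~/.ssh/work_rsa
```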

Voilà. Sanity is back to ssh config files.

td: Todo list the geek way

Today I was looking for a simple todo list. I found Todoist, but since I don’t use the App Store at work and my home computer runs Ubuntu, I kept searching.

I ended up finding td.

td is a simple command-line todo list with some interesting functionality.

You can either have a .todos file in a directory or have a global database configured.


On a Mac, you can find td on Homebrew, so simply do:

$ brew install td

On Linux, you can either install it from source with go get github.com/Swatto/td or download the executable and put it somewhere in your PATH (/usr/local/bin, for example).


The thing I found interesting in td was the ability to have a “per project” to-do list.

Simply create a .todos file in a folder and a to-do list is created, accessible whenever you’re in that folder or any of its sub-folders (yes, it is recursive).

On top of it, you can define a global database using the TODO_DB_PATH environment variable.

Just add something like this to your .bashrc or .zshrc:

export TODO_DB_PATH=$HOME/.config/todo.json

If td finds a local .todos file it will use that list; otherwise it will use the global database.
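The lookup order can be sketched like this (an assumption based on the behavior described above, not td’s actual source code): walk upwards from the current directory looking for a .todos file, and fall back to TODO_DB_PATH when none is found.

```ruby
require 'tmpdir'
require 'fileutils'

# Hypothetical reconstruction of td's database lookup:
# nearest .todos up the directory tree wins, else the global path.
def todo_db(dir, env = ENV)
  dir = File.expand_path(dir)
  loop do
    candidate = File.join(dir, '.todos')
    return candidate if File.exist?(candidate)
    parent = File.dirname(dir)
    break if parent == dir # reached the filesystem root
    dir = parent
  end
  env['TODO_DB_PATH']
end

Dir.mktmpdir do |root|
  FileUtils.mkdir_p(File.join(root, 'project/sub'))
  File.write(File.join(root, 'project', '.todos'), '[]')

  p todo_db(File.join(root, 'project/sub')) # finds project/.todos (recursive)
  p todo_db(root, { 'TODO_DB_PATH' => '/global/todo.json' }) # falls back
end
```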

For a multi-computer setup, you can point this global database to a Dropbox folder to have it synchronized:

export TODO_DB_PATH=$HOME/myDropboxFolder/todo.json


td usage is very simple. Just take a look at its help:

   init, i     Initialize a collection of todos
   add, a      Add a new todo
   modify, m   Modify the text of an existing todo
   toggle, t   Toggle the status of a todo by giving his id
   clean       Remove finished todos from the list
   reorder, r  Reset ids of todo or swap the position of two todo
   search, s   Search a string in all todos
   help, h     Shows a list of commands or help for one command

   --done, -d     print done todos
   --all, -a      print all todos
   --help, -h     show help
   --version, -v  print the version

Using db:structure dump and load instead of db:schema

Today I faced the following problem: I had a migration creating an index in SQL, like this:

CREATE UNIQUE INDEX some_table_single_row ON some_table((1))

which ensures the table can hold at most one row.

The problem is that Rails won’t dump this index to db/schema.rb; because of that, my test database didn’t create it and I had a failing test.


Rails comes with db:structure rake tasks which do pretty much what the db:schema ones do, but using the database’s own dump tool, in this case pg_dump.

So, I created a structure dump from the dev database:

[$] rake db:structure:dump

which creates a db/structure.sql file. I then dropped my test database and re-created it from this new dump:

[$] RAILS_ENV=test rake db:structure:load

Et voilà, tests were passing.
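As a follow-up, if you want Rails to use the SQL format for every schema dump from now on (so tasks like db:test:prepare read structure.sql instead of schema.rb), there is a setting for that in config/application.rb:

```
# config/application.rb
# Dump the schema with the database's own tool (db/structure.sql)
# instead of the Ruby db/schema.rb format.
config.active_record.schema_format = :sql
```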