Stephen's Thoughts, and Other Nonsense
A collection of random thoughts, reviews, code, and other such things.
https://stephen.rees-carter.net/feed

2016-08-05T09:59:56+00:00 · Stephen

Byobu for Terminal Management on Ubuntu
https://stephen.rees-carter.net/thought/byobu-for-terminal-management-on-ubuntu

Note: I originally wrote and published this article as a DigitalOcean Community Tutorial.

Introduction

Byobu is an easy-to-use wrapper around the tmux (or screen) terminal multiplexer. This means that it makes it easy for you to open multiple windows and run multiple commands within a single terminal connection.

Byobu's primary features include multiple console windows, split panes within each window, notifications and status badges to display the status of the host, and persistent sessions across multiple connections. These provide you with a lot of different options and possibilities, and it is flexible enough to get out of your way and let you get things done.

This tutorial will cover how to install and configure Byobu as well as how to use its most common features.

Prerequisites

For this tutorial, you will need:

Step 1 — Installing Byobu

Ubuntu should come with Byobu installed by default, so here, we'll check that it's installed and then configure some of its settings.

To check that Byobu is installed, try running this command to output its version.

byobu --version
byobu version 5.106
tmux 2.1

If that does not display the current version number, you can manually install Byobu using sudo apt-get install byobu.
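If you'd rather script the check, here's a small sketch (the message text is my own suggestion, not Byobu's output):

```shell
# Print the path to byobu if it is installed; otherwise print a hint
# with the manual install command.
command -v byobu || echo "byobu not found; install it with: sudo apt-get install byobu"
```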

Now that Byobu is installed, we can configure some options.

Step 2 — Starting Byobu on Login

Byobu is disabled by default after installation. There are two main ways you can enable Byobu: you can manually start it with the byobu command every time you want to use it, or you can set it to start automatically when you log in to your account.

To add Byobu to your login profile, run the following command. This means that every time you log in to your account, it will be launched.

byobu-enable
The Byobu window manager will be launched automatically at each text login.

If you change your mind later on and want to disable Byobu on login, run byobu-disable.

Because Byobu sessions are maintained across multiple login sessions, if you don't specifically close a Byobu session, it will be loaded again the next time you log in. This means you can leave scripts running and files open between connections with no problems. You can also have multiple active logins connected to the same session.

Once you've decided whether Byobu should start on login, you can customize which multiplexer it uses.

Step 3 — Setting the Backend Multiplexer

By default, Byobu will use tmux as the backend multiplexer. However, if you prefer to use screen, you can easily change the enabled backend by running:

byobu-select-backend

This will give you a prompt to choose the backend multiplexer. Enter the number for whichever you prefer, and then press ENTER.

Select the byobu backend:
  1. tmux
  2. screen

Choose 1-2 [1]:

This tutorial assumes you have the tmux backend enabled; however, the default keybindings should be the same with screen as well.

Step 4 — Enabling the Colorful Prompt

Byobu also includes a colorful prompt which includes the return code of the last executed command. It is enabled by default in some environments. You can manually enable it (or check that it's already enabled) by running:

byobu-enable-prompt

After this, you'll need to reload your shell configuration.

. ~/.bashrc

Byobu's colorful prompt looks like this:

Byobu enabled prompt

If you change your mind later on and want to disable Byobu's colorful prompt, you can run byobu-disable-prompt.

Byobu is fully configured now, so let's go over how to use it.

Step 5 — Using Sessions

Byobu uses the function keys (F1 through F12, the top row of your keyboard) for the default keybindings which provide access to all of the available functions. In the next few steps, we'll talk about the keybindings for sessions, windows, and panes.

A session is simply a running instance of Byobu. A session consists of a collection of windows, which are basically shell sessions, and panes, which are windows split into multiple sections.

The first time you start Byobu, it starts a fresh session in which you create windows and panes. On subsequent connections, if you have only one session open, Byobu will automatically open that session when you connect; if you have more than one session open, Byobu will ask you which session you want to use with a prompt like this:

Byobu sessions...

  1. tmux: 1: 1 windows (created Wed Aug  3 16:34:26 2016) [80x23]
  2. tmux: 2: 1 windows (created Wed Aug  3 16:34:38 2016) [80x23]
  3. Create a new Byobu session (tmux)
  4. Run a shell without Byobu (/bin/bash)

Choose 1-4 [1]:

One reason to use sessions is because each session can have its own window size, which is useful if you're connecting with multiple devices with different screen sizes (say, a phone and a desktop computer). Another reason to use sessions is simply to have a clean workspace without closing your existing windows.

First, SSH into your server and enable Byobu, if it isn't already enabled from the previous steps. Start a new session by pressing CTRL+SHIFT+F2, then use ALT+UP and ALT+DOWN to move backwards and forwards through your open sessions.

You can press CTRL+D to exit Byobu and close all of your sessions. If you instead want to detach your session, there are three useful ways to do this.

Pressing F6 cleanly detaches your current session and logs you out of the SSH connection. It does not close Byobu, so the next time you connect to the server, the current session will be restored. This is one of the most useful features of Byobu; you can leave commands running and documents open while disconnecting safely.

If you wish to detach the current session but maintain an SSH connection to the server, you can use SHIFT+F6. This will detach Byobu (but not close it) and leave you in an active SSH connection to the server. You can relaunch Byobu at any time using the byobu command.

Next, consider a scenario where you are logged into Byobu from multiple locations. While this is often quite a useful feature to take advantage of, it can cause problems if, for example, one of the connections has a much smaller window size (because Byobu will resize itself to match the smallest window). In this case, you can use ALT+F6, which will detach all other connections and leave the current one active. This ensures only the current connection is active in Byobu, and will resize the window if required.

To recap:

  • CTRL+SHIFT+F2 will create a new session.

  • ALT+UP and ALT+DOWN will scroll through your sessions.

  • F6 will detach your current Byobu session.

  • SHIFT+F6 will detach (but not close) Byobu, and will maintain your SSH connection to the server. You can get back to Byobu with the byobu command.

  • ALT+F6 will detach all connections to Byobu except for the current one.

Next, let's explore one of Byobu's features: windows.

Step 6 — Using Windows

Byobu provides the ability to switch between different windows within a single session. This allows you to easily multi-task within a single connection.

To demonstrate how to manipulate windows, let us consider a scenario where we want to SSH into a server and watch a system log file while editing a file in another window. In a Byobu session, use tail to watch a system log file.

sudo tail -n100 -f /var/log/syslog

While that is running, open a new window by pressing F2, which will provide us with a new command prompt. We'll use this new window to edit a new text file in our home directory using editor:

editor ~/random.file

We now have two windows open: one tailing /var/log/syslog and the other in an editor session. You can scroll left and right through your windows using F3 and F4, respectively. You can also give these windows names so it's easier to organize and find them. To name your current window, press F8, type in a useful name (like "tail syslog"), and press ENTER. Scroll through each window and name them. If you want to reorder them, use CTRL+SHIFT+F3/F4 to move the current window left or right through the list, respectively.

At this point, there should be some log entries in syslog. In order to look through some of the older messages that are no longer being displayed on the screen, scroll to the log window and press F7 to enter the scrollback history. You can use Up/Down and PageUp/PageDown to move through the scrollback history. When you are finished, press ENTER.

Now, if you need to disconnect from the server for a moment, you can press F6. This will cleanly end the SSH connection and detach from Byobu. You can then use SSH to reconnect, and when Byobu comes back, all of your existing windows will be there.

To recap:

  • F2 creates new windows within the current session.

  • F3 and F4 scroll left and right through the windows list.

  • CTRL+SHIFT+F3/F4 moves a window left and right through the windows list.

  • F8 renames the current open window in the list.

  • F7 lets you view scrollback history in the current window.

Using just a few options, you have performed a number of useful actions that would be hard to easily replicate with a single standard SSH connection. This is what makes Byobu so powerful. Next, let's extend this example by learning how to use panes.

Step 7 — Using Panes

Byobu provides the ability to split windows into multiple panes with both horizontal and vertical splits. These allow you to multi-task within the same window, as opposed to across multiple windows.

Create horizontal splits in the current window by pressing SHIFT+F2, and vertical ones with CTRL+F2. The focused pane will be split evenly, allowing you to keep splitting panes as required to create quite complex layouts. Note that you cannot split a pane if there is not enough space for it to split into two.

Once you have split a window into at least two panes, navigate between them using SHIFT+LEFT/RIGHT/UP/DOWN or SHIFT+F3/F4. This allows you to leave a command running in one pane, and then move to another pane to run a different command. You can reorder panes by using CTRL+F3/F4 to move the current pane up or down, respectively.

SHIFT+ALT+LEFT/RIGHT/UP/DOWN allows you to adjust the width and height of the currently selected pane. This automatically resizes the surrounding panes within the window as the split is moved, making it easy to enlarge the pane you are working in, and then enlarge a different pane when your focus shifts.

You can also zoom into a pane with SHIFT+F11, which makes it fill the entire window; pressing SHIFT+F11 again switches it back to its original size. Finally, if you want to split a pane into a completely new window, use ALT+F11.

To recap:

  • SHIFT+F2 creates a horizontal pane; CTRL+F2 creates a vertical one.

  • SHIFT+LEFT/RIGHT/UP/DOWN or SHIFT+F3/F4 switches between panes.

  • CTRL+F3/F4 moves the current pane up or down, respectively.

  • SHIFT+ALT+LEFT/RIGHT/UP/DOWN resizes the current pane.

  • SHIFT+F11 toggles a pane to fill the whole window temporarily.

  • ALT+F11 splits a pane into its own new window permanently.

In the example from Step 6, it would have been easy to use splits instead of windows to allow us to have the syslog tail, the editor, and a new command prompt all open in the same window. Here's what that would have looked like with one window split into three panes:

Windows and panes example

Now that you know how to use sessions, windows, and panes, we'll cover another one of Byobu's features: status notifications.

Step 8 — Using Status Notifications

Status notifications are notifications in the status bar at the bottom of a Byobu screen. These are a great way to customize your Byobu experience.

Press F9 to enter the Byobu configuration menu. The options available are to view the help guide, toggle status notifications, change the escape sequence, and toggle Byobu on or off at login. Navigate to the Toggle status notification option and press ENTER. The list of all available status notifications will be displayed; you can select the ones you wish to enable or disable.

Status notifications

When status notifications are enabled, they will appear in the bottom status bar, alongside the window indicators. By default, a couple are enabled, usually including the date, load average, and memory. Some notifications have options that can be configured through config files, which we will cover in the next tutorial.

There are a lot of different notifications to choose from; some of the commonly used ones are:

  • arch shows the system architecture, e.g. x86_64.
  • battery shows the current battery level (for laptops).
  • date shows the current system date.
  • disk shows the current disk space usage.
  • hostname shows the current system hostname.
  • ip_address shows the current system IP address.
  • load_average shows the current system load average.
  • memory shows the current memory usage.
  • network shows the current network usage, sending and receiving.
  • reboot_required shows an indicator when a system reboot is required.
  • release shows the current distribution version (e.g. 14.04).
  • time shows the current system time.
  • updates_available shows an indicator when there are updates available.
  • uptime shows the current system uptime.
  • whoami shows the currently logged in user.

After selecting the status notifications you wish to enable, select Apply. You may need to press F5 to refresh the status bar; an indicator will appear in the status bar if a refresh is required.

Status notifications are a great way to see the information you care about in your system at a glance.
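As mentioned above, some notification options live in config files. On my systems, the enabled notifications are stored in ~/.byobu/status as shell-style variable assignments; the sketch below shows the format only (the entries shown are illustrative, and your defaults will differ). An entry prefixed with # is disabled.

```shell
# Sketch of ~/.byobu/status for the tmux backend (illustrative values only).
# Entries are space-separated; a leading "#" disables a notification.
tmux_left="logo #hostname ip_address #release"
tmux_right="reboot_required updates_available load_average memory date time"
```

After editing the file, press F5 inside Byobu to refresh the status bar.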

Conclusion

There's a lot more that Byobu is capable of. You can read Byobu's man pages for more detail, but here are a few more useful keybindings:

  • SHIFT+F1 displays the full list of keybindings. If you forget every other keybinding, just remember this one! Press q to exit.

  • SHIFT+F12 toggles whether Byobu's keybindings are enabled or disabled. This comes in handy if you are trying to use another terminal application within Byobu that has conflicting keybindings.

  • CTRL+F9 opens a prompt that lets you send the same input to every window; SHIFT+F9 does the same for every pane.

As you can see from the wide range of functions that we have covered, there are a lot of things that Byobu can do and there is a good chance that it will fit into your workflow to make getting things done easier.

2016-08-05T09:59:56+00:00 · Stephen

Introducing My Habits for Pebble
https://stephen.rees-carter.net/thought/introducing-my-habits-for-pebble

I am really excited to announce the release of the much-awaited paid upgrade to the Habits app: My Habits. This upgrade brings the most requested features, including fully customisable Habit frequencies and times, the ability to select any day combination for your Habit, and hidden Reminder pins to keep your Timeline neat.

This great new upgrade to Habits is available through KiezelPay for just $2.99.

MY HABITS (paid upgrade)

  • Includes a wide range of fully customisable Habit modes.
  • Allows you to track Streak and Count Habits at any frequency, with presets for Hourly, Daily, Weekly, Fortnightly and Monthly.
  • Enables you to choose custom days of the week for your Habits - no more losing the Streak on weekends! (Most requested feature!)
  • Special Advanced modes: Hidden pins, Random reminders, and Read-only reminders.
  • Companion Website with statistics and graphs for each Habit, as well as full habit configuration. Manage all of your Habits in one place!
  • +++ All of the existing great features of the free Habits app.

DEVELOPER API

My Habits also comes with a third-party API that lets you integrate Habits into your own apps! The first app to take advantage of the new API is due to be released this week (it's a very cool collaboration!), and I can't wait to see what other developers come up with. More information about the API can be found at: https://api.my-habits.net

For more information about My Habits, please visit: https://my-habits.net

My Habits is available for download through the Pebble App Store. Don't forget, if you like My Habits, please give it a ❤️!

Thanks!

~Stephen

2016-04-19T11:39:37+00:00 · Stephen

How To Use the DigitalOcean API v2 with Ansible
https://stephen.rees-carter.net/thought/how-to-use-the-digitalocean-api-v2-with-ansible

Note: I originally wrote and published this article as a DigitalOcean Community Tutorial.

Introduction

Ansible 2.0 has recently been released, and with it comes support for version 2 of the DigitalOcean API. This means that you can use Ansible to not only provision your web applications, but also to provision and manage your Droplets automatically.

While DigitalOcean provides a simple web interface for setting up SSH keys and creating Droplets, it is a manual process that you need to go through each time you want to provision a new server. When your application expands to include a larger number of servers and needs the ability to grow and shrink on demand, you don't want to have to deal with creating and configuring the application deployment scripts for each server by hand.

The benefit of using a provisioning tool like Ansible is that it allows you to completely automate this process, and initiating it is as simple as running a single command. This tutorial will show by example how to use Ansible's support of the DigitalOcean API v2.

In particular, this tutorial will cover the process of setting up a new SSH key on a DO account and provisioning two different Droplets so they are ready to use for deploying your web applications. After following this tutorial, you'll be able to modify and integrate these tasks into your existing application deployment scripts.

Prerequisites

This tutorial builds on basic Ansible knowledge, so if you are new to Ansible, you can read this section of the Ansible installation tutorial first.

To follow this tutorial, you will need:

Step 1 — Configuring Ansible

In this step, we will configure Ansible to communicate with the DigitalOcean API.

Typically, Ansible just uses SSH to connect to different servers and run commands. This means that the configuration necessary to start using Ansible is generally standard for all modules. However, because communicating with the DigitalOcean API is not simply an SSH shell command, we'll need to do a little additional setup. The dopy (DigitalOcean API Python Wrapper) Python module is what will allow Ansible to communicate with the API.

In order to install dopy, first install the Python package manager pip.

sudo apt-get install python-pip

Then, install dopy using pip.

sudo pip install 'dopy>=0.3.5,<=0.3.5'

Note that we're pinning dopy to version 0.3.5. At the time of writing, newer versions of dopy are broken and do not work with Ansible.

Next, we'll create a new directory to work in to keep things neat, and we'll set up a basic Ansible configuration file.

By default, Ansible uses a hosts file located at /etc/ansible/hosts, which contains all of the servers it is managing. That file is a global configuration, which is fine in some use cases, but we'll use a local hosts file in this tutorial. This way, we won't accidentally break any existing configurations you might have while learning about and testing Ansible's DO API support.

Create and move into a new directory, which we will use for the rest of this tutorial.

mkdir ~/ansible-do-api
cd ~/ansible-do-api/

When you run Ansible, it will look for an ansible.cfg file in the directory where it is run, and if it finds one, it'll apply those configuration settings. This means we can easily override options, such as the hostfile option, for each individual use case.

Create a new file called ansible.cfg and open it for editing using nano or your favorite text editor.

nano ansible.cfg

Paste the following into ansible.cfg, then save and close the file.

[defaults]
hostfile = hosts

Setting the hostfile option in the [defaults] group tells Ansible to use a particular hosts file instead of the global one. This ansible.cfg tells Ansible to look for a hostfile called hosts in the same directory.

Next, we'll create the hosts file.

nano hosts

Because we will only be dealing with the DigitalOcean API in this tutorial, we can tell Ansible to run on localhost, which keeps things simple and will remove the need to connect to a remote host. This can be done by telling Ansible to use localhost, and specifying the ansible_connection as local. Paste the code below into hosts, then save and close the file.

[digitalocean]
localhost ansible_connection=local

Finally, we will use the API token created in the prerequisites to allow Ansible to communicate with the DigitalOcean API. There are three ways we can tell Ansible about the API token:

  1. Provide it directly on each DigitalOcean task, using the api_token parameter.
  2. Define it as a variable in the playbook or hosts file, and use that variable for the api_token parameter.
  3. Export it as an environment variable, as either DO_API_TOKEN or DO_API_KEY.

Option 1 is the most direct approach and may sound appealing if you do not wish to create variables. However, it means that the API token will need to be copied into each task it is being used for. More importantly, this means that if it ever changes, you'll need to find all instances of it and replace them all.

Option 2 allows us to set the API token directly within our playbook, like option 1. Unlike option 1, we only define it in a single place by using a variable, which is more convenient and easier to update. We will be using option 2 for this tutorial because it is the simplest approach.

However, it's worth noting that option 3 is the best method to use to protect your API token, because it makes it a lot harder to accidentally commit the API token into a repository (which might be shared with anyone). It allows the token to be configured at the system level and to work across different playbooks without having to include the token in each.
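For reference, option 3 would look something like this (a sketch; the token value is a placeholder, not a real token):

```shell
# Export the API token for the current shell session; Ansible's DigitalOcean
# support reads DO_API_TOKEN (or DO_API_KEY) from the environment.
export DO_API_TOKEN=your_API_token

# To persist it across logins, you could append the export line to ~/.bashrc instead.
```

With the token in the environment, the api_token parameter could then be left off each DigitalOcean task.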

Create a basic playbook called digitalocean.yml.

nano digitalocean.yml

Paste the following code into the file, making sure to substitute in your API token.

---
- hosts: digitalocean

  vars:
    do_token: your_API_token

  tasks:

You can leave this file open in your editor as we'll continue working with it in the next step.

Step 2 — Setting Up an SSH key

In this step, we will create a new SSH key on your server and add it to your DigitalOcean account using Ansible.

The first thing we need to do is ensure the user has an SSH key pair, which we can push to DigitalOcean so it can be installed by default on your new Droplets. Although this is easy to do via the command line, we can do it just as easily with the user module in Ansible. Using Ansible also has the benefit of ensuring the key exists before it is used, which can avoid issues when running the playbook on different hosts.

In your playbook, add in the user task below, which we can use to ensure an SSH key exists, then save and close the file.

---
- hosts: digitalocean

  vars:
    do_token: your_API_token

  tasks:

  - name: ensure ssh key exists
    user: >
      name={{ ansible_user_id }}
      generate_ssh_key=yes
      ssh_key_file=.ssh/id_rsa

You can change the name of the key if you would like to use something other than ~/.ssh/id_rsa.

Run your playbook.

ansible-playbook digitalocean.yml

The output should look like this:

PLAY ***************************************************************************

TASK [setup] *******************************************************************
ok: [localhost]

TASK [ensure ssh key exists] ***************************************************
changed: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=2    changed=1    unreachable=0    failed=0   

When that has finished, you can manually verify the key exists by running:

ls -la ~/.ssh/id_rsa*

It will list all files that match id_rsa*. You should see id_rsa and id_rsa.pub listed, indicating that your SSH key exists.

Next, we'll push the key into your DigitalOcean account, so open your playbook for editing again.

nano digitalocean.yml

We'll be using the digital_ocean Ansible module to upload your SSH key. We will also register the output of the task as the my_ssh_key variable because we'll need it in a later step.

Add the task to the bottom of the file, then save and close the file.

---
. . .
  - name: ensure ssh key exists
    user: >
      name={{ ansible_user_id }}
      generate_ssh_key=yes
      ssh_key_file=.ssh/id_rsa

  - name: ensure key exists at DigitalOcean
    digital_ocean: >
      state=present
      command=ssh
      name=my_ssh_key
      ssh_pub_key={{ lookup('file', '~/.ssh/id_rsa.pub') }}
      api_token={{ do_token }}
    register: my_ssh_key

If you named your key something other than id_rsa, make sure to update the path in the ssh_pub_key line in this task.

We're using a number of different options from the digital_ocean module here:

  • state — This can be present, active, absent, or deleted. In this case, we want present, because we want the SSH key to be present in the account.
  • command — This is either droplet or ssh. We want ssh, which allows us to manage the state of SSH keys within the account.
  • name — This is the name to save the SSH key under. This must be unique and will be used to identify your key via the API and the web interface.
  • ssh_pub_key — This is your SSH public key, which will be the key whose existence we assured using the user module.
  • api_token — This is your DigitalOcean API token, which we have accessible as a variable (do_token, defined in the vars section).

Now, run your playbook.

ansible-playbook digitalocean.yml

The output should look like this:

. . .

TASK [ensure key exists at DigitalOcean] ***************************************
changed: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=3    changed=1    unreachable=0    failed=0   

When that has finished, you can manually check that your SSH key exists in your DigitalOcean account by going to the control panel, clicking Settings (from the gear menu), then Security (in the User category on the left sidebar). You should see your new key listed under SSH Keys.

Step 3 — Creating a New Droplet

In this step, we will create a new Droplet.

We briefly touched on the digital_ocean module in Step 2. We will be using a different set of options for this module in this step:

  • command — We used this option in the previous step with ssh; this time, we'll use it with droplet to manage Droplets via this module.
  • state — We used this in the previous step, too; here, it represents the state of the Droplet, which we want to be present.
  • image_id — This is the image to use for the new Droplet, like ubuntu-14-04-x64.
  • name — This is the hostname to use when creating the Droplet.
  • region_id — This is the region to create the Droplet in, like NYC3.
  • size_id — This is the size of the Droplet we want to create, like 512mb.
  • ssh_key_ids — This is the SSH key ID (or IDs) to be set on the server when it is created.

There are many more options than just the ones covered in this tutorial (all of which can be found on the Ansible documentation page), but using these options as a guide, we can write our new task.

Open your playbook for editing.

nano digitalocean.yml

Update your playbook with the new task below, then save and close the file. You can change options like size, region, and image to suit your application. The options below will create a 512MB Ubuntu 14.04 server named droplet-one using the SSH key we created in the previous step.

. . .
      api_token={{ do_token }}
    register: my_ssh_key

  - name: ensure droplet one exists
    digital_ocean: >
      state=present
      command=droplet
      name=droplet-one
      size_id=512mb
      region_id=sgp1
      image_id=ubuntu-14-04-x64
      ssh_key_ids={{ my_ssh_key.ssh_key.id }}
      api_token={{ do_token }}
    register: droplet_one

  - debug: msg="IP is {{ droplet_one.droplet.ip_address }}"

Note that we are using {{ my_ssh_key.ssh_key.id }} to retrieve the ID of the previously set up SSH key and pass it into your new Droplet. This works if the SSH key is newly created or if it already exists.

Now, run your playbook. This will take a little longer to execute than it did previously because it will be creating a Droplet.

ansible-playbook digitalocean.yml

The output should look like this:

. . .

TASK [ensure key exists at DigitalOcean] **************************************
ok: [localhost]

TASK [ensure droplet one exists] ******************************************************
changed: [localhost]

TASK [debug] *******************************************************************
ok: [localhost] => {
"msg": "IP is 111.111.111.111"
}

PLAY RECAP *********************************************************************
localhost                  : ok=5    changed=1    unreachable=0    failed=0   

Ansible has provided us with the IP address of the new Droplet in the return message. To verify that it's running, you can log into it directly using SSH.

ssh root@111.111.111.111

This should connect you to your new server (using the SSH key we created on your Ansible server in step 2). You can then exit back to your Ansible server by pressing CTRL+D.

Step 4 — Ensuring a Droplet Exists

In this step, we will discuss the concept of idempotence and how it relates to provisioning Droplets with Ansible.

Ansible aims to operate using the concept of idempotence. This means that you can run the same tasks multiple times, and changes should only be made when they are needed — which is usually the first time it's run. This idea maps well to provisioning servers, installing packages, and other server administration.

If you run your playbook again (don't do it yet!), given the current configuration, it will go ahead and provision a second Droplet also called droplet-one. Run it again, and it will make a third Droplet. This is due to the fact that DigitalOcean allows multiple Droplets with the same name. To avoid this, we can use the unique_name parameter.

The unique_name parameter tells Ansible and DigitalOcean that you want unique hostnames for your servers. This means that when you run your playbook again, it will honor idempotence and consider the Droplet already provisioned, and therefore won't create a second server with the same name.

Open your playbook for editing:

nano digitalocean.yml

Add in the unique_name parameter:

. . .
  - name: ensure droplet one exists
    digital_ocean: >
      state=present
      command=droplet
      name=droplet-one
      unique_name=yes
      size_id=512mb
. . .

Save and run your playbook:

ansible-playbook digitalocean.yml

The output should result in no changed tasks, but you will notice the debug output with the IP address is still displayed. If you check your DigitalOcean account, you will notice only a single droplet-one Droplet was provisioned.

Step 5 — Creating a Second Droplet

In this step, we will replicate our existing configuration to provision a separate Droplet.

In order to provision a separate Droplet, all we need to do is replicate the Ansible task from our first Droplet. To make our playbook a little more robust, however, we will convert it to using a list of Droplets to provision, which allows us to easily scale out our fleet as required.

First, we need to define our list of Droplets.

Open your playbook for editing:

nano digitalocean.yml

Add in a list of Droplet names to be provisioned in the vars section.

---
- hosts: digitalocean

  vars:
    do_token: <digitalocean_token>
    droplets:
    - droplet-one
    - droplet-two

  tasks:
. . .

Next, we need to update our task to loop through the list of Droplets, check whether each exists, and save the results into a variable. Following that, we also need to modify our debug task to output the information stored in the variable for each item.

To do this, update the ensure droplet one exists task in your playbook as below:

---
. . .
  - name: ensure droplets exist
    digital_ocean: >
      state=present
      command=droplet
      name={{ item }}
      unique_name=yes
      size_id=512mb
      region_id=sgp1
      image_id=ubuntu-14-04-x64
      ssh_key_ids={{ my_ssh_key.ssh_key.id }}
      api_token={{ do_token }}
    with_items: droplets
    register: droplet_details

  - debug: msg="IP is {{ item.droplet.ip_address }}"
    with_items: droplet_details.results

Save and run your playbook.

ansible-playbook digitalocean.yml

The results should look like this:

. . .

TASK [ensure droplets exist] ***************************************************
ok: [localhost] => (item=droplet-one)
changed: [localhost] => (item=droplet-two)

TASK [debug] *******************************************************************

. . .

"msg": "IP is 111.111.111.111"

. . .

"msg": "IP is 222.222.222.222"
}

PLAY RECAP *********************************************************************
localhost                  : ok=5    changed=1    unreachable=0    failed=0   

You might notice that the debug output has a lot more information in it than it did the first time. This is because the debug module prints additional information to help with debugging; this is a small downside of using registered variables with this module.

Apart from that, you will see that our second Droplet has been provisioned, while our first was already running. You have now provisioned two DigitalOcean Droplets using only Ansible!

Deleting your Droplets is just as simple. The state parameter in the task tells Ansible what state the Droplet should be in. Setting it to present ensures that the Droplet exists, creating it if it doesn't already exist; setting it to absent ensures that the Droplet with the specified name does not exist, deleting any Droplets matching that name (as long as unique_name is set).

If you want to delete the two example Droplets you created in this tutorial, just change the state in the creation task to absent and rerun your playbook.

---
. . .
  - name: ensure droplets exist
    digital_ocean: >
      state=absent
      command=droplet
. . .

You may also want to remove the debug line before you rerun your playbook. If you don't, your Droplets will still be deleted, but you'll see an error from the debug command (because there are no IP addresses to return).

ansible-playbook digitalocean.yml

Now your two example Droplets will be deleted.

Conclusion

Ansible is an incredibly powerful and very flexible provisioning tool. You have seen how easy it is to provision (and deprovision) Droplets via the DigitalOcean API using only standard Ansible concepts and built-in modules.

The state parameter, which was set to present, tells Ansible what state the Droplet should be in. Setting it to present ensures that the Droplet exists, creating it if it doesn't already exist; setting it to absent ensures that the Droplet with the specified name does not exist, deleting any Droplets matching that name (as long as unique_name is set).

As the number of Droplets you manage increases, the ability to automate the process will save you time creating, setting up, and destroying Droplets as part of an automated workflow. You can adapt and expand the examples in this tutorial to tailor your provisioning scripts to your setup.

]]>
2016-03-15T07:29:22+00:00
Stephen <![CDATA[Getting Started with PHPUnit in Laravel]]> https://stephen.rees-carter.net/thought/getting-started-with-phpunit-in-laravel Getting Started with PHPUnit in Laravel

Note: I originally wrote and published this article as a Semaphore CI Community Tutorial.

Introduction

PHPUnit is one of the oldest and most well known unit testing packages for PHP. It is primarily designed for unit testing, which means testing your code in the smallest components possible, but it is also incredibly flexible and can be used for a lot more than just unit testing.

PHPUnit includes a lot of simple and flexible assertions that allow you to easily test your code, which works really well when you are testing specific components. It does mean, however, that testing more advanced code such as controllers and form submission validation can be a lot more complicated.

To help make things easier for developers, the Laravel PHP framework includes a collection of application test helpers, which allow you to write very simple PHPUnit tests to test complex parts of your application.

The purpose of this tutorial is to introduce you to the basics of PHPUnit testing, using both the default PHPUnit assertions, and the Laravel test helpers. The aim is for you to be confident writing basic tests for your applications by the end of the tutorial.

Prerequisites

This tutorial assumes that you are already familiar with Laravel and know how to run commands within the application directory (such as php artisan commands). We will be creating a couple of basic example classes to learn how the different testing tools work, and as such it is recommended that you create a fresh application for this tutorial.

If you have the Laravel installer set up, you can create a new test application by running:

laravel new phpunit-tests

Alternatively, you can create a new application by using Composer directly:

composer create-project laravel/laravel --prefer-dist

Other installation options can also be found in the Laravel documentation.

Creating a New Test

The first step when using PHPUnit is to create a new test class. The convention for test classes is that they are stored within ./tests/ in your application directory. Inside this folder, each test class is named as <name>Test.php. This format allows PHPUnit to find each test class — it will ignore anything that does not end in Test.php.

In a new Laravel application, you will notice two files in the ./tests/ directory: ExampleTest.php and TestCase.php. The TestCase.php file is a bootstrap file for setting up the Laravel environment within our tests. This allows us to use Laravel facades in tests, and provides the framework for the testing helpers, which we will look at shortly. The ExampleTest.php is an example test class that includes a basic test case using the application testing helpers - ignore it for now.

To create a new test class, we can either create a new file manually, or run the helpful make:test Artisan command provided by Laravel.

In order to create a test class called BasicTest, we just need to run this artisan command:

php artisan make:test BasicTest

Laravel will create a basic test class that looks like this:

<?php
class BasicTest extends TestCase
{
    /**
     * A basic test example.
     *
     * @return void
     */
    public function testExample()
    {
        $this->assertTrue(true);
    }
}

The most important thing to notice here is the test prefix on the method name. Like the Test suffix for class names, this test prefix tells PHPUnit what methods to run when testing. If you forget the test prefix, then PHPUnit will ignore the method.
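As a quick illustration (the class and method names here are made up for this example, not part of the tutorial's test suite), PHPUnit would run only the first method below:

```php
<?php
class PrefixExampleTest extends TestCase
{
    // Runs: the method name starts with "test".
    public function testAddition()
    {
        $this->assertEquals(4, 2 + 2);
    }

    // Ignored: without the "test" prefix, PHPUnit never calls this method.
    public function checkSubtraction()
    {
        $this->assertEquals(0, 2 - 2);
    }
}
```

(PHPUnit also supports marking methods with an @test docblock annotation, but the prefix convention is by far the most common.)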

Before we run our test suite for the first time, it is worth pointing out the default phpunit.xml file that Laravel provides. PHPUnit will automatically look for a file named phpunit.xml or phpunit.xml.dist in the current directory when it is run. This is where you configure the specific options for your tests.

There is a lot of information within this file; however, the most important section for now is the testsuite directory definition:

<?xml version="1.0" encoding="UTF-8"?>
<phpunit ... >

    <testsuites>
        <testsuite name="Application Test Suite">
            <directory>./tests/</directory>
        </testsuite>
    </testsuites>

    ...
</phpunit>

This tells PHPUnit to run the tests it finds in the ./tests/ directory, which, as we have previously learned, is the convention for storing tests.

Now that we have created a base test, and are aware of the PHPUnit configuration, it is time to run our tests for the first time.

You can run your PHPUnit tests by running the phpunit command:

./vendor/bin/phpunit

You should see something similar to this as the output:

PHPUnit 4.8.19 by Sebastian Bergmann and contributors.

..

Time: 103 ms, Memory: 12.75Mb

OK (2 tests, 3 assertions)

Note: PHPUnit counts 2 tests and 3 assertions because the ExampleTest.php file includes a test with two assertions, and our new basic test contributes a single assertion, which passed.

Now that we have a working PHPUnit setup, it is time to move on to writing a basic test.

Writing a Basic Test

To help cover the basic assertions that PHPUnit provides, we will first create a basic class that provides some simple functionality.

Create a new file in your ./app/ directory called Box.php, and copy this example class:

<?php
namespace App;

class Box
{
    /**
     * @var array
     */
    protected $items = [];

    /**
     * Construct the box with the given items.
     *
     * @param array $items
     */
    public function __construct($items = [])
    {
        $this->items = $items;
    }

    /**
     * Check if the specified item is in the box.
     *
     * @param string $item
     * @return bool
     */
    public function has($item)
    {
        return in_array($item, $this->items);
    }

    /**
     * Remove and return an item from the box, or null if the box is empty.
     *
     * @return string|null
     */
    public function takeOne()
    {
        return array_shift($this->items);
    }

    /**
     * Retrieve all items from the box that start with the specified letter.
     *
     * @param string $letter
     * @return array
     */
    public function startsWith($letter)
    {
        return array_filter($this->items, function ($item) use ($letter) {
            return stripos($item, $letter) === 0;
        });
    }
}

Next, open your ./tests/BasicTest.php class (that we created earlier), and remove the testExample method that was created by default. You should be left with an empty class.

We will now use seven of the basic PHPUnit assertions to write tests for our Box class. These assertions are:

  • assertTrue()
  • assertFalse()
  • assertEquals()
  • assertNull()
  • assertContains()
  • assertCount()
  • assertEmpty()

assertTrue() and assertFalse()

assertTrue() and assertFalse() allow you to assert that a value evaluates to either true or false. This makes them perfect for testing methods that return boolean values. In our Box class, we have a method called has($item), which returns true or false depending on whether the specified item is in the box.

To write a test for this in PHPUnit, we can do the following:

<?php
use App\Box;

class BasicTest extends TestCase
{
    public function testHasItemInBox()
    {
        $box = new Box(['cat', 'toy', 'torch']);

        $this->assertTrue($box->has('toy'));
        $this->assertFalse($box->has('ball'));
    }
}

Note how we pass only a single parameter into the assertTrue() and assertFalse() methods: the output of the has($item) method.

If you run the ./vendor/bin/phpunit command now, you will notice the output includes:

OK (2 tests, 4 assertions)

This means our tests have passed.

If you swap the assertFalse() for assertTrue() and run the phpunit command again, the output will look like this:

PHPUnit 4.8.19 by Sebastian Bergmann and contributors.

F.

Time: 93 ms, Memory: 13.00Mb

There was 1 failure:

1) BasicTest::testHasItemInBox
Failed asserting that false is true.

./tests/BasicTest.php:12

FAILURES!
Tests: 2, Assertions: 4, Failures: 1.

This tells us that the assertion on line 12 failed to assert that a false value was true - as we switched the assertFalse() for assertTrue().

Swap it back, and re-run PHPUnit. The tests should again pass, as we have fixed the broken test.

assertEquals() and assertNull()

Next, we will look at assertEquals(), and assertNull().

assertEquals() is used to compare the actual value of a variable to its expected value. We will use it to check that takeOne() returns an item that is currently in the box. Since takeOne() returns null when the box is empty, we can use assertNull() to check for that too.

Unlike assertTrue(), assertFalse(), and assertNull(), assertEquals() takes two parameters: the first is the expected value, and the second is the actual value.

We can implement these assertions in our class as follows:

<?php
use App\Box;

class BasicTest extends TestCase
{
    public function testHasItemInBox()
    {
        $box = new Box(['cat', 'toy', 'torch']);

        $this->assertTrue($box->has('toy'));
        $this->assertFalse($box->has('ball'));
    }

    public function testTakeOneFromTheBox()
    {
        $box = new Box(['torch']);

        $this->assertEquals('torch', $box->takeOne());

        // Null, now the box is empty
        $this->assertNull($box->takeOne());
    }
}

Run the phpunit command, and you should see:

OK (3 tests, 6 assertions)

assertContains(), assertCount(), and assertEmpty()

Finally, we have three assertions that work with arrays, which we can use to check the startsWith($item) method in our Box class. assertContains() asserts that an expected value exists within the provided array, assertCount() asserts the number of items in the array matches the specified amount, and assertEmpty() asserts that the provided array is empty.

We can implement tests for these like this:

<?php
use App\Box;

class BasicTest extends TestCase
{
    public function testHasItemInBox()
    {
        $box = new Box(['cat', 'toy', 'torch']);

        $this->assertTrue($box->has('toy'));
        $this->assertFalse($box->has('ball'));
    }

    public function testTakeOneFromTheBox()
    {
        $box = new Box(['torch']);

        $this->assertEquals('torch', $box->takeOne());

        // Null, now the box is empty
        $this->assertNull($box->takeOne());
    }

    public function testStartsWithALetter()
    {
        $box = new Box(['toy', 'torch', 'ball', 'cat', 'tissue']);

        $results = $box->startsWith('t');

        $this->assertCount(3, $results);
        $this->assertContains('toy', $results);
        $this->assertContains('torch', $results);
        $this->assertContains('tissue', $results);

        // Empty array when no items match the letter
        $this->assertEmpty($box->startsWith('s'));
    }
}

Save and run your tests again:

OK (4 tests, 9 assertions)

Congratulations, you have just fully tested the Box class using seven of the basic PHPUnit assertions. You can do a lot with these simple assertions, and most of the other, more complex, assertions that are available still follow the same usage pattern.

Testing Your Application

Unit testing each component in your application works in a lot of situations and should definitely be part of your development process; however, it isn't the only testing you need to do. When you are building an application that includes complex views, navigation, and forms, you will want to test these components too. This is where Laravel's test helpers make things just as easy as unit testing simple components.

We previously looked at the default files within the ./tests/ directory, and we skipped the ./tests/ExampleTest.php file. Open it now, and it should look something like this:

<?php
class ExampleTest extends TestCase
{
    /**
     * A basic functional test example.
     *
     * @return void
     */
    public function testBasicExample()
    {
        $this->visit('/')
             ->see('Laravel 5');
    }
}

We can see the test in this case is very simple. Without any prior knowledge of how the test helpers work, we can assume it means something like this:

  1. when I visit / (webroot)
  2. I should see 'Laravel 5'

If you open our application in your web browser (you can run php artisan serve if you don't have a web server set up), you should see a splash screen with "Laravel 5" at the web root. Given that this test has been passing in PHPUnit, it is safe to say that our translation of the example test is correct.

This test is ensuring that the web page rendered at the / path returns the text 'Laravel 5'. A simple check like this may not seem like much, but if there is critical information your website needs to display, a simple test like this may prevent you from deploying a broken application if a change somewhere else causes the page to no longer display the right information.

visit(), see(), and dontSee()

Let's write our own test now, and take it one step further.

First, edit the ./app/Http/routes.php file, to add in a new route. For the sake of this tutorial, we will go for a Greek alphabet themed route:

<?php
Route::get('/', function () {
    return view('welcome');
});

Route::get('/alpha', function () {
    return view('alpha');
});

Next, create the view template at ./resources/views/alpha.blade.php, and save some basic HTML with the Alpha keyword:

<!DOCTYPE html>
<html>
    <head>
        <title>Alpha</title>
    </head>
    <body>
        <p>This is the Alpha page.</p>
    </body>
</html>

Now open it in your browser to make sure it is working as expected: http://localhost:8000/alpha. It should display a friendly "This is the Alpha page." message.

Now that we have the template, we will create a new test. Run the make:test command:

php artisan make:test AlphaTest

Then edit the test, using the example test as a guide. We also want to ensure that our "alpha" page does not mention "beta", which we can do with the dontSee() assertion, the opposite of see().

This means we can do a simple test like this:

<?php
class AlphaTest extends TestCase
{
    public function testDisplaysAlpha()
    {
        $this->visit('/alpha')
             ->see('Alpha')
             ->dontSee('Beta');
    }
}

Save it and run PHPUnit (./vendor/bin/phpunit), and it should all pass, with the status line looking something like this:

OK (5 tests, 12 assertions)

Writing Tests First

A great thing about tests is that you can use the Test Driven Development (TDD) approach, and write your tests first. After writing your tests, you run them and see that they fail, then you write the code that satisfies the tests to make everything pass again. So, let's do that for the next page.

First, make a BetaTest class using the make:test artisan command:

php artisan make:test BetaTest

Next, update the test case so it is checking the /beta route for "Beta":

<?php
class BetaTest extends TestCase
{
    public function testDisplaysBeta()
    {
        $this->visit('/beta')
             ->see('Beta')
             ->dontSee('Alpha');
    }
}

Now, run the test using ./vendor/bin/phpunit. The result should be a slightly unfriendly error message that looks like this:

> ./vendor/bin/phpunit
PHPUnit 4.8.19 by Sebastian Bergmann and contributors.

....F.

Time: 144 ms, Memory: 14.25Mb

There was 1 failure:

1) BetaTest::testDisplaysBeta
A request to [http://localhost/beta] failed. Received status code [404].

...

FAILURES!
Tests: 6, Assertions: 13, Failures: 1.

We now have a failing test for the missing route. Let's create it.

First, edit the ./app/Http/routes.php file to create the new /beta route:

<?php
Route::get('/', function () {
    return view('welcome');
});

Route::get('/alpha', function () {
    return view('alpha');
});

Route::get('/beta', function () {
    return view('beta');
});

Next, create the view template at ./resources/views/beta.blade.php:

<!DOCTYPE html>
<html>
    <head>
        <title>Beta</title>
    </head>
    <body>
        <p>This is the Beta page.</p>
    </body>
</html>

Now, run PHPUnit again, and the result should be back to green.

> ./vendor/bin/phpunit
PHPUnit 4.8.19 by Sebastian Bergmann and contributors.

......

Time: 142 ms, Memory: 14.00Mb

OK (6 tests, 15 assertions)

We have now implemented our new page using Test Driven Development by writing the test first.

click() and seePageIs()

Laravel also provides a helper to allow the test to click a link that exists on the page (click()), as well as a way to check what the resulting page is (seePageIs()).

Let's use these two helpers to implement links between the Alpha and Beta pages.

First, let's update our tests. Open the AlphaTest class, and we will add in a new test method, which will click the 'Next' link found on the "alpha" page to go to the "beta" page.

The new test should look like this:

<?php
class AlphaTest extends TestCase
{
    public function testDisplaysAlpha()
    {
        $this->visit('/alpha')
             ->see('Alpha')
             ->dontSee('Beta');
    }

    public function testClickNextForBeta()
    {
        $this->visit('/alpha')
             ->click('Next')
             ->seePageIs('/beta');
    }
}

Notice that we aren't checking the content of either page in our new testClickNextForBeta() test method. Other tests are successfully checking the content of both pages, so all we care about is that clicking the "Next" link will send us to /beta.

You can run the test suite now, but as expected it will fail as we haven't updated our HTML yet.

Next, we will update the BetaTest to do similar:

<?php
class BetaTest extends TestCase
{
    public function testDisplaysBeta()
    {
        $this->visit('/beta')
             ->see('Beta')
             ->dontSee('Alpha');
    }

    public function testClickNextForAlpha()
    {
        $this->visit('/beta')
             ->click('Previous')
             ->seePageIs('/alpha');
    }
}

Next, let's update our HTML templates.

./resources/views/alpha.blade.php:

<!DOCTYPE html>
<html>
    <head>
        <title>Alpha</title>
    </head>
    <body>
        <p>This is the Alpha page.</p>
        <p><a href="/beta">Next</a></p>
    </body>
</html>

./resources/views/beta.blade.php:

<!DOCTYPE html>
<html>
    <head>
        <title>Beta</title>
    </head>
    <body>
        <p>This is the Beta page.</p>
        <p><a href="/alpha">Previous</a></p>
    </body>
</html>

Save the files and run PHPUnit again:

> ./vendor/bin/phpunit
PHPUnit 4.8.19 by Sebastian Bergmann and contributors.

F....F..

Time: 175 ms, Memory: 14.00Mb

There were 2 failures:

1) AlphaTest::testDisplaysAlpha
Failed asserting that '<!DOCTYPE html>
<html>
    <head>
        <title>Alpha</title>
    </head>
    <body>
        <p>This is the Alpha page.</p>
        <p><a href="/beta">Next</a></p>
    </body>
</html>
' does not match PCRE pattern "/Beta/i".

2) BetaTest::testDisplaysBeta
Failed asserting that '<!DOCTYPE html>
<html>
    <head>
        <title>Beta</title>
    </head>
    <body>
        <p>This is the Beta page.</p>
        <p><a href="/alpha">Previous</a></p>
    </body>
</html>
' does not match PCRE pattern "/Alpha/i".

FAILURES!
Tests: 8, Assertions: 23, Failures: 2.

We have broken our tests somehow. If you look at our new HTML carefully, you will notice that the terms beta and alpha now appear (in the link URLs) on the /alpha and /beta pages, respectively. This means we need to slightly change our tests so they don't match false positives.
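To see why, note that see() and dontSee() match the given text anywhere in the response HTML, case-insensitively; the failure output above shows the underlying PCRE pattern /Beta/i. A standalone sketch of that matching, using a trimmed copy of the alpha page's HTML:

```php
<?php
// dontSee('Beta') effectively fails if /Beta/i matches anywhere in the HTML,
// so the lowercase "beta" inside href="/beta" is enough to trip it up.
$alphaHtml = '<p>This is the Alpha page.</p><p><a href="/beta">Next</a></p>';

var_dump(preg_match('/Beta/i', $alphaHtml));      // int(1) - matches the link URL
var_dump(preg_match('/Beta page/i', $alphaHtml)); // int(0) - the longer phrase is absent
```

This is why asserting on the longer "Beta page" / "Alpha page" phrases, rather than the bare words, avoids the false positive.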

Within each of the AlphaTest and BetaTest classes, update the testDisplays* methods to use dontSee('<page> page'). This way, each test will only match the full phrase, not the single word.

The two tests should look like this:

./tests/AlphaTest.php:

<?php
class AlphaTest extends TestCase
{
    public function testDisplaysAlpha()
    {
        $this->visit('/alpha')
             ->see('Alpha')
             ->dontSee('Beta page');
    }

    public function testClickNextForBeta()
    {
        $this->visit('/alpha')
             ->click('Next')
             ->seePageIs('/beta');
    }
}

./tests/BetaTest.php:

<?php
class BetaTest extends TestCase
{
    public function testDisplaysBeta()
    {
        $this->visit('/beta')
             ->see('Beta')
             ->dontSee('Alpha page');
    }

    public function testClickNextForAlpha()
    {
        $this->visit('/beta')
             ->click('Previous')
             ->seePageIs('/alpha');
    }
}

Run your tests again, and everything should pass again. We have now tested our new pages, including the Next/Previous links between them.

Conclusion

You should notice a common theme across all of the tests that we have written in this tutorial: they are all incredibly simple. This is one of the benefits of learning how to use the basic test assertions and helpers, and trying to use them as much as possible. The simpler you can write your tests, the easier your tests will be to understand and maintain.

Once you have mastered the PHPUnit assertions we have covered in this tutorial, you can find a lot more in the PHPUnit documentation. They all follow the same basic pattern, but you will find that you keep coming back to the basic assertions for most of your tests.

Laravel's test helpers are a fantastic complement to the PHPUnit assertions, and they make testing your application templates easy. That said, it is important to recognize that, as part of our tests, we only checked the critical information, not the entire page. This keeps the tests simple and allows the page content to change as the application changes. If the critical content still exists, the test still passes, and everyone is happy.

]]>
2016-01-06T20:28:08+00:00
Stephen <![CDATA[Habits - Pebble Timeline app]]> https://stephen.rees-carter.net/thought/pebble-habits Habits - Pebble Timeline app

I am excited to announce that Habits, my latest Pebble Timeline app has been released!

Habits is the evolution of 8-A-Day, designed to help you start and maintain (good) habits through the Pebble Timeline. It supports three different habit frequencies (weekly, daily, and hourly) which you can fully customize to suit your habits and schedule.

As you complete your weekly and daily habits, your success streaks will increase, encouraging you to keep working on your habits. Likewise, your hourly habits will provide a summary of how many times you completed the habit throughout your day. If you often miss habit notifications, you can enable extra reminders for a bit more encouragement to help you get into your habit.

You can find it in the Pebble appstore.

The idea for Habits was originally proposed to me by Katherine and Aaron from Pebble, as a way to build on what I'd started with 8-A-Day, to allow you to track more than just drinking water throughout the day in the Timeline. I thought it was a great idea, so I dove headlong into it, and now, a few months later, Habits exists!

There were a lot of challenges that I had to figure out when building Habits - mainly around handling timezones and time manipulation. Luckily for me, Laravel's Lumen made building a lot of the backend app a piece of cake, and Carbon was a real help with time manipulation - not to mention my PinPusher library made sending pins simple.

Habits has turned out better than I expected - but don't just take my word for it:

I owe a huge thank you to Katherine McAuliffe and Aaron Cannon from Pebble, for encouraging me to build Habits - and for testing it. Especially Kat, who put up with a lot of back-and-forth at the Pebble Developer Retreat as I fixed the various iOS bugs. Also to Harel on the Pebble design team, who made the awesome banner image above!

One of these days I'll actually post those "building a Pebble app" posts I keep talking about... so stay tuned! I promise to write them! One day... :-)

]]>
2015-11-12T10:19:03+00:00
Stephen <![CDATA[Using Laravel 5 Middleware for Parameter Persistence]]> https://stephen.rees-carter.net/thought/using-laravel-5-middleware-for-parameter-persistence Using Laravel 5 Middleware for Parameter Persistence

I am lucky enough to get to work on a large Laravel 5 application full time in my day job, with some awesome developers. I recently came up with what is, in my humble opinion, an elegant and simple solution for what we all initially thought was quite a complex problem. If you've used this trick before, or know how to improve on it, please let me know - I'd love to hear from you!

The problem

We are working on the latest iteration of our product, which is a re-write in Laravel 5 using PJAX for the UI interaction. The old version of the product used Laravel 3 and AngularJS. This means that we are using very different technologies for all parts of the stack, and as you can probably imagine, some things that were easy in the old version now have added complexity. (That said, the new stack also makes some things a lot easier!)

We needed to implement persistence in our record list filtering behaviour. The filter system is being powered using URL parameters that are appended to / updated when you change the filter options - however this does not persist when you leave the page and come back to it. We all agreed it was going to be tricky, given our design, and I was the unlucky developer whose queue it ended up at the top of.

Possible options

I thought for a while about possible solutions - maybe hijacking the events in PJAX, remembering the URL parameters in the browser, and manually injecting them - but I really don't like JavaScript, so I tried to find a solution that would approach the problem on the PHP side. I contemplated moving the filters out of URL parameters and persisting them in the session, but that gets messy: do they need to be POST'ed to the server? How do you share links with specific filter options? How should they be stored? Should the common options work across different lists?

No, I had to stick with URL parameters.

My solution

After a bit of thought, I remembered the middleware design that Laravel 5 uses. I realised that this is the perfect place to inject logic relating to a request before the request is actually processed.

The solution is elegant and simple:

<?php
namespace App\Http\Middleware;

use Closure;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Session;

class ParameterPersistence
{
    public function handle(Request $request, Closure $next)
    {
        // Unique session key based on class name and route name
        $key = __CLASS__.'|'.$request->route()->getName();
        
        // Retrieve all request parameters
        $parameters = $request->all();

        // IF  no request parameters found
        // AND the session key exists (i.e. a previous URL has been saved)
        if (!count($parameters) && Session::has($key)) {
            // THEN redirect to the saved URL
            return redirect(Session::get($key));
        }

        // IF there are request parameters
        if (count($parameters)) {
            // THEN save them in the session
            Session::put($key, $request->fullUrl());
        }

        // Process and return the request
        return $next($request);
    }
}

The way it works is simple.

First, if you access the page initially without any parameters before a URL has been saved (http://example.com/users), then it will do nothing. The request will be processed and nothing will be stored in the session.

Next, if you access the page with URL parameters, for example by clicking on a sort order link (http://example.com/users?sort=name&order=asc), then it will save the requested URL in the session (under a unique key for that specific page) and then the request will be processed as per normal.

Finally, if you access the page without any URL parameters but after a URL has been saved (http://example.com/users), then it will redirect you to the last saved URL that contained parameters (http://example.com/users?sort=name&order=asc).

This process is completely transparent to the application itself, as it all happens in the middleware using redirects, before the application logic is initiated. This makes the redirects very fast, and removes any complexity around persistence in the filter handling logic.

Things to note

  1. It will only redirect when the request has no parameters (i.e. http://example.com/users) and has a previously saved URL. This means that any time a parameter is added or removed (without clearing all parameters), it will save the new request URL and render the page.
  2. It will only save requests that have parameters. If there are no parameters and no saved URL, it will do nothing. This avoids needlessly storing data and prevents potential redirect loops.
  3. It should only be included on pages that need it - it will cause issues with other parameters that should not be automatically remembered. (Something like ?delete=true would be very bad!)
  4. If you are using a load balancer, or for some reason cannot use absolute URLs, you will need to replace $request->fullUrl() with something more appropriate. For example, this will give you a relative URL: $request->path().'?'.$request->getQueryString().
  5. We use named routes, so the session key uses the route name. If you don't use named routes, you will need to change the key generation to use whatever is unique about the request for that record list page.
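Since the middleware should only run on the routes that need it, you would register it as route middleware and attach it per-route rather than globally. Here is a minimal sketch for a Laravel 5 application; the persist-parameters key and the users.index route name are assumptions for illustration, not part of the original code:

<?php
// app/Http/Kernel.php — register the middleware under a route key
// (the 'persist-parameters' key name is hypothetical)
protected $routeMiddleware = [
    'persist-parameters' => \App\Http\Middleware\ParameterPersistence::class,
];

// app/Http/routes.php — apply it only to the named list route,
// since the session key is derived from the route name
Route::get('users', [
    'as'         => 'users.index',
    'middleware' => 'persist-parameters',
    'uses'       => 'UserController@index',
]);

Routes without the persist-parameters key are untouched, which keeps the middleware away from pages where remembering parameters would be harmful.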

I hope you found that useful! :-)

]]>
2015-07-16T22:37:47+00:00
Stephen <![CDATA[Pebble Timeline Challenge Week 10 Winner - 8-A-Day!]]> https://stephen.rees-carter.net/thought/pebble-timeline-challenge-week-10-winner-8-a-day Pebble Timeline Challenge Week 10 Winner - 8-A-Day!

I was very excited for my Pebble Timeline app 8-A-Day to be chosen as the week 10 winner of the Pebble Timeline Challenge.

Doctors have been telling us this for years: drink 8 glasses of water a day, preferably more. Now, with the 10th winner of the Pebble Timeline Challenge, 8-A-Day, you can have your Pebble remind you instead! Get pinged hourly, and use your timeline to stay healthy and hydrated.

At 9 AM in your local time zone, you'll receive a notification reminding you to drink your first glass of water. You can input whether or not you've finished a glass, or choose to stop the notifications. If you continue receiving notifications, a pin will appear in an hour reminding you to continue drinking water. This pattern will continue throughout the day, and at the end of the day you'll receive a summary of your daily water intake.

An Australian developer, Stephen Rees-Carter, developed this app after conversations with his wife about the idea. A web developer by training, he was excited to use his PHP skills to create an app for the timeline.

Here are some of the features he used in his application.

  • Timeline for reminders, notifications, and updates
  • PebbleKit JS for external API calls
  • PHP API Wrapper
  • An external PHP Application for logging and reminding the user

You can find the official announcement here.

Most of the logic is within a small Laravel Lumen API application that I built for it, with minimal wrapper and interaction code in the actual Pebble app. It was a great chance to play with Lumen and see what it could do, and it was very well suited to building an API. I also built a simple open source PHP API wrapper called PinPusher to make working with the Timeline API easy.

That was actually my first ever Pebble app, and I'm really proud of how it turned out. The Timeline API made it a piece of cake to get timeline pins working, and Lumen was a joy to use for getting my API server working. Since releasing 8-A-Day, I've written and released another app, Timeline Tag, which is a multi-player app built within the timeline. It's pretty cool, and I'm very proud of it too.

I am planning on writing a couple of blog posts about the process of building these Pebble apps when I have time. Until then, you'll just have to use your imagination!

Oh btw, if you're looking for other cool Pebble apps, check out: Pokedex Challenge.

]]>
2015-07-13T09:50:00+00:00
Stephen <![CDATA[How To Deploy Multiple PHP Applications using Ansible]]> https://stephen.rees-carter.net/thought/how-to-deploy-multiple-php-applications-using-ansible How To Deploy Multiple PHP Applications using Ansible

Note: I originally wrote and published this article as part of the Automating Your PHP Application Deployment Process with Ansible tutorial series for the Digital Ocean Community.

Introduction

This tutorial is the third in a series about deploying PHP applications using Ansible on Ubuntu 14.04. The first tutorial covers the basic steps for deploying an application; the second tutorial covers more advanced topics such as databases, queue daemons, and task schedulers (crons).

In this tutorial, we will build on what we learned in the previous tutorials by transforming our single-application Ansible playbook into a playbook that supports deploying multiple PHP applications on one or multiple servers. This is the final piece of the puzzle when it comes to using Ansible to deploy your applications with minimal effort.

We will be using a couple of simple Lumen applications as part of our examples. However, these instructions can be easily modified to support other frameworks and applications if you already have your own. It is recommended that you use the example applications until you are comfortable making changes to the playbook.

Prerequisites

To follow this tutorial, you will need:

  • Two Droplets set up by following the first and second tutorials in this series.

  • A new (third) Ubuntu 14.04 Droplet set up like the original PHP Droplet in the first tutorial, with a sudo non-root user and SSH keys. This Droplet will be used to show how to deploy multiple applications to multiple servers using one Ansible playbook. We'll refer to the IP addresses of the original PHP Droplet and this new PHP Droplet as your_first_server_ip and your_second_server_ip respectively.

  • An updated /etc/hosts file on your local computer with the following lines added. You can learn more about this file in step 6 of this tutorial.

your_first_server_ip laravel.example.com one.example.com two.example.com
your_second_server_ip laravel.example2.com two.example2.com

The example websites we'll use in this tutorial are laravel.example.com, one.example.com, and two.example.com. If you want to use your own domain, you'll need to update your active DNS records instead.

Step 1 — Setting Playbook Variables

In this step, we will set up playbook variables to define our new applications.

In the previous tutorials, we hard-coded all of the configuration specifics, which is normal for many playbooks that perform specific tasks for a specific application. However, when you wish to support multiple applications or broaden the scope of your playbook, it no longer makes sense to hard code everything.

As we have seen before, Ansible provides variables which you can use in both your task definitions and file templates. What we haven't seen yet is how to manually set variables. In the top of your playbook, alongside the hosts and tasks parameters, you can define a vars parameter, and set your variables there.

If you haven't done so already, change directories into ansible-php from the previous tutorials.

cd ~/ansible-php/

Open up our existing playbook for editing.

nano php.yml

The top of the file should look like this:

---
- hosts: php
  sudo: yes

  tasks:
. . .

To define variables, we can simply add a vars section alongside hosts, sudo, and tasks. To keep things simple, we will start with a very basic variable for the www-data user name, like so:

---
- hosts: php
  sudo: yes

  vars:
    wwwuser: www-data

  tasks:
. . .

Next, go through and update all occurrences of the www-data user with the new variable {{ wwwuser }}. This format should be familiar, as we have used it within loops and for lookups.

To find and replace using nano, press CTRL+\. You'll see a prompt which says Search (to replace):. Type www-data, then press ENTER. The prompt will change to Replace with:. Here, type {{ wwwuser }} and press ENTER again. Nano will take you through each instance of www-data and ask Replace this instance?. You can press y to replace each one individually, or a to replace all of them.

Note: Make sure the variable declaration that we just added at the top isn't changed too. There should be 11 instances of www-data that need to be replaced.

Before we go any further, there is something we need to be careful of when it comes to variables. Normally we can just add them in like this, when they are within a longer line:

- name: create /var/www/ directory
  file: dest=/var/www/ state=directory owner={{ wwwuser }} group={{ wwwuser }} mode=0700

However, if the variable is the only value in the string, we need to wrap it in quotes, because YAML would otherwise interpret the leading { as the start of an inline dictionary:

- name: Run artisan migrate
  shell: php /var/www/laravel/artisan migrate --force
  sudo: yes
  sudo_user: "{{ wwwuser }}"
  when: dbpwd.changed

In your playbook, this needs to happen any time you have sudo_user: {{ wwwuser }}. You can use a global find and replace the same way, replacing sudo_user: {{ wwwuser }} with sudo_user: "{{ wwwuser }}". There should be four lines that need this change.

Once you have changed all occurrences, save and run the playbook:

ansible-playbook php.yml --ask-sudo-pass

There should be no changed tasks, which means that our wwwuser variable is working correctly.

Step 2 — Defining Nested Variables for Complex Configuration

In this section, we will look at nesting variables for complex configuration options.

In the previous step, we set up a basic variable. However, it is also possible to nest variables and define lists of variables. This provides the functionality we need to define the list of sites we wish to set up on our server.

First, let us consider the existing git repository that we have set up in our playbook:

- name: Clone git repository
  git: >
    dest=/var/www/laravel
    repo=https://github.com/do-community/do-ansible-adv-php.git
    update=yes
    version=example

We can extract the following useful pieces of information: name (directory), repository, branch, and domain. Because we are setting up multiple applications, we will also need a domain name for it to respond to. Here, we'll use laravel.example.com, but if you have your own domain, you can substitute it.

This results in the following four variables that we can define for this application:

name: laravel
repository: https://github.com/do-community/do-ansible-adv-php.git
branch: example
domain: laravel.example.com

Now, open up your playbook for editing:

nano php.yml

In the top vars section, we can add in our application into a new application list:

---
- hosts: php
  sudo: yes

  vars:
    wwwuser: www-data

    applications:
      - name: laravel
        domain: laravel.example.com
        repository: https://github.com/do-community/do-ansible-adv-php.git
        branch: example

...

If you run your playbook now (using ansible-playbook php.yml --ask-sudo-pass), nothing will change, because we haven't yet set up our tasks to use the new applications variable. However, if you go to http://laravel.example.com/ in your browser, it should show our original application.

Step 3 — Looping Variables in Tasks

In this section we will learn how to loop through variable lists in tasks.

As mentioned previously, variable lists need to be looped over in each task in which we wish to use them. As we saw with the install packages task, we define a loop of items, and the task is then applied to each item in the list.
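As an illustration (this task is not part of our playbook), a loop over the applications list looks like the following — each item exposes the properties we defined, such as item.name:

- name: Create application directories (hypothetical example)
  file: dest=/var/www/{{ item.name }} state=directory owner={{ wwwuser }} group={{ wwwuser }} mode=0755
  with_items: applications

With three applications defined, this single task would run three times, once per entry in the list.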

Open up your playbook for editing:

nano php.yml

We will start with some easy tasks first. Around the middle of your playbook, you should find these two env tasks:

- name: set APP_DEBUG=false
  lineinfile: dest=/var/www/laravel/.env regexp='^APP_DEBUG=' line=APP_DEBUG=false

- name: set APP_ENV=production
  lineinfile: dest=/var/www/laravel/.env regexp='^APP_ENV=' line=APP_ENV=production

You will notice that they are currently hard-coded with the laravel directory. We want to update them to use the name property for each application. To do this, we add the with_items option to loop over our applications list. Within the task itself, we swap out the laravel reference for the variable {{ item.name }}, which should be familiar from the formats we've used before.

It should look like this:

- name: set APP_DEBUG=false
  lineinfile: dest=/var/www/{{ item.name }}/.env regexp='^APP_DEBUG=' line=APP_DEBUG=false
  with_items: applications

- name: set APP_ENV=production
  lineinfile: dest=/var/www/{{ item.name }}/.env regexp='^APP_ENV=' line=APP_ENV=production
  with_items: applications

Next, move down to the two Laravel artisan cron tasks. They can be updated in exactly the same way as the env tasks. We will also add item.name into the name parameter for the cron entries, as Ansible uses this field to uniquely identify each cron entry. If we left them as-is, we would not be able to have multiple sites on the same server, because the entries would constantly overwrite each other and only the last one would be saved.

The tasks should look like this:

- name: Laravel Scheduler
  cron: >
    job="run-one php /var/www/{{ item.name }}/artisan schedule:run 1>> /dev/null 2>&1"
    state=present
    user={{ wwwuser }}
    name="{{ item.name }} php artisan schedule:run"
  with_items: applications

- name: Laravel Queue Worker
  cron: >
    job="run-one php /var/www/{{ item.name }}/artisan queue:work --daemon --sleep=30 --delay=60 --tries=3 1>> /dev/null 2>&1"
    state=present
    user={{ wwwuser }}
    name="{{ item.name }} Laravel Queue Worker"
  with_items: applications

If you save and run the playbook now (using ansible-playbook php.yml --ask-sudo-pass), you should see only the two cron tasks marked as changed. This is due to the change in the name parameter. Apart from that, there are no changes, which means that our applications list is working as expected, and refactoring our playbook has not yet altered our server.

Step 4 — Applying Looped Variables in Templates

In this section we will cover how to use looped variables in templates.

Looping variables in templates is very easy. They can be used in exactly the same way as in tasks, like all other variables. The complexity comes when variables affect file paths: in some cases we need to factor the variable into the file name itself, and even run additional commands because of the new file.

In the case of Nginx, we need to create a new configuration file for each application, and tell Nginx that it should be enabled. We also want to remove our original /etc/nginx/sites-available/default configuration file in the process.

First, open up your playbook for editing:

nano php.yml

Find the Configure Nginx task (near the middle of the playbook), and update it as we have done with the other tasks:

- name: Configure nginx
  template: src=nginx.conf dest=/etc/nginx/sites-available/{{ item.name }}
  with_items: applications
  notify:
    - restart php5-fpm
    - restart nginx

While we are here, we will also add the two other tasks mentioned above. First, we will tell Nginx about our new site configuration file. This is done with a symlink between the sites-available and sites-enabled directories in /etc/nginx/.

Add this task after the Configure nginx task:

- name: Configure nginx symlink
  file: src=/etc/nginx/sites-available/{{ item.name }} dest=/etc/nginx/sites-enabled/{{ item.name }} state=link
  with_items: applications
  notify:
    - restart php5-fpm
    - restart nginx

Next, we want to remove the default enabled site configuration file so it doesn't cause problems with our new site configuration files. This is done easily with the file module:

- name: Remove default nginx site
  file: path=/etc/nginx/sites-enabled/default state=absent
  notify:
    - restart php5-fpm
    - restart nginx

Note that we didn't need to loop applications, as we were looking for a single file.

The Nginx block in your playbook should now look like this:

- name: Configure nginx
  template: src=nginx.conf dest=/etc/nginx/sites-available/{{ item.name }}
  with_items: applications
  notify:
    - restart php5-fpm
    - restart nginx

- name: Configure nginx symlink
  file: src=/etc/nginx/sites-available/{{ item.name }} dest=/etc/nginx/sites-enabled/{{ item.name }} state=link
  with_items: applications
  notify:
    - restart php5-fpm
    - restart nginx

- name: Remove default nginx site
  file: path=/etc/nginx/sites-enabled/default state=absent
  notify:
    - restart php5-fpm
    - restart nginx

Save your playbook and open the nginx.conf file for editing:

nano nginx.conf

Update the configuration file so it uses our variables:

server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    root /var/www/{{ item.name }}/public;
    index index.php index.html index.htm;

    server_name {{ item.domain }};

    location / {
        try_files $uri $uri/ =404;
    }

    error_page 404 /404.html;
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /var/www/{{ item.name }}/public;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}

However, we haven't finished yet. Notice the default_server at the top? We want to include that only for the laravel application, to make it the default site. To do this, we can use a basic IF statement to check whether item.name equals laravel and, if so, output default_server.

It will look like this:

server {
    listen 80{% if item.name == "laravel" %} default_server{% endif %};
    listen [::]:80{% if item.name == "laravel" %} default_server ipv6only=on{% endif %};

Update your nginx.conf accordingly and save it.

Now it is time to run our playbook:

ansible-playbook php.yml --ask-sudo-pass

You should notice the Nginx tasks have been marked as changed. When it finishes running, refresh the site in your browser and it should be displaying the same as it did at the end of the last tutorial:

Queue: YES
Cron: YES

Step 5 — Looping Multiple Variables Together

In this step we will loop multiple variables together in tasks.

Now it is time to tackle a more complex loop example: registered variables. You will remember that, to support different states and prevent tasks from running needlessly, we used register: cloned in our Clone git repository task to record the state of the task in the variable cloned. We then used when: cloned|changed in the following tasks to trigger them conditionally. Now we need to update these references to support the applications loop.

First, open up your playbook for editing:

nano php.yml

Look down for the Clone git repository task:

- name: Clone git repository
  git: >
    dest=/var/www/laravel
    repo=https://github.com/do-community/do-ansible-adv-php.git
    update=yes
    version=example
  sudo: yes
  sudo_user: "{{ wwwuser }}"
  register: cloned

As we're registering the variable in this task, we don't need to do anything that we haven't already done:

- name: Clone git repository
  git: >
    dest=/var/www/{{ item.name }}
    repo={{ item.repository }}
    update=yes
    version={{ item.branch }}
  sudo: yes
  sudo_user: "{{ wwwuser }}"
  with_items: applications
  register: cloned

Now, move down your playbook until you find the composer create-project task:

- name: composer create-project
  composer: command=create-project working_dir=/var/www/laravel optimize_autoloader=no
  sudo: yes
  sudo_user: "{{ wwwuser }}"
  when: cloned|changed

Now we need to update it to loop through both applications and cloned. This is done using the with_together option, and passing in both applications and cloned. As with_together loops through two variables, accessing items is done with item.#, where # is the index of the variable as it is defined. So for example:

with_together:
- list_one
- list_two

item.0 will refer to list_one, and item.1 will refer to list_two.
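To make the pairing concrete, here is a small standalone illustration (not part of our playbook, using hypothetical lists and Ansible's debug module) that prints one message per paired item:

- name: Show paired items (hypothetical example)
  debug: msg="{{ item.0 }} pairs with {{ item.1 }}"
  with_together:
    - [ 'a', 'b', 'c' ]
    - [ 'x', 'y', 'z' ]

This task runs three times, pairing a with x, b with y, and c with z — the same zipping behaviour we rely on to line each application up with its corresponding task result.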

This means that for applications we can access the properties via item.0.name. For cloned, we need to pass in the results from the task, which are accessed via cloned.results; we can then check whether each run changed via item.1.changed.

This means the task becomes:

- name: composer create-project
  composer: command=create-project working_dir=/var/www/{{ item.0.name }} optimize_autoloader=no
  sudo: yes
  sudo_user: "{{ wwwuser }}"
  when: item.1.changed
  with_together:
    - applications
    - cloned.results

Now save and run your playbook:

ansible-playbook php.yml --ask-sudo-pass

There should be no changes from this run. However, we now have a registered variable working nicely within a loop.

Step 6 — Complex Registered Variables and Loops

In this section we will learn about more complicated registered variables and loops.

The most complicated part of the conversion is handling the registered variable we use for generating the password for our MySQL database. That said, there isn't much in this step that we haven't already covered; we just need to update a number of tasks at once.

Open your playbook for editing:

nano php.yml

Find the MySQL tasks, and in our initial pass we will just add in the basic variables like we have done in previous tasks:

- name: Create MySQL DB
  mysql_db: name={{ item.name }} state=present
  with_items: applications

- name: Generate DB password
  shell: makepasswd --chars=32
  args:
    creates: /var/www/{{ item.name }}/.dbpw
  with_items: applications
  register: dbpwd

- name: Create MySQL User
  mysql_user: name={{ item.name }} password={{ dbpwd.stdout }} priv={{ item.name }}.*:ALL state=present
  when: dbpwd.changed

- name: set DB_DATABASE
  lineinfile: dest=/var/www/{{ item.name }}/.env regexp='^DB_DATABASE=' line=DB_DATABASE={{ item.name }}
  with_items: applications

- name: set DB_USERNAME
  lineinfile: dest=/var/www/{{ item.name }}/.env regexp='^DB_USERNAME=' line=DB_USERNAME={{ item.name }}
  with_items: applications

- name: set DB_PASSWORD
  lineinfile: dest=/var/www/{{ item.name }}/.env regexp='^DB_PASSWORD=' line=DB_PASSWORD={{ dbpwd.stdout }}
  when: dbpwd.changed

- name: Save dbpw file
  lineinfile: dest=/var/www/{{ item.name }}/.dbpw line="{{ dbpwd.stdout }}" create=yes state=present
  sudo: yes
  sudo_user: "{{ wwwuser }}"
  when: dbpwd.changed

- name: Run artisan migrate
  shell: php /var/www/{{ item.name }}/artisan migrate --force
  sudo: yes
  sudo_user: "{{ wwwuser }}"
  when: dbpwd.changed

Next we will add in with_together so we can use our database password. For our password generation, we need to loop over dbpwd.results, and will be able to access the password from item.1.stdout, since applications will be accessed via item.0.

We can update our playbook accordingly:

- name: Create MySQL DB
  mysql_db: name={{ item.name }} state=present
  with_items: applications

- name: Generate DB password
  shell: makepasswd --chars=32
  args:
    creates: /var/www/{{ item.name }}/.dbpw
  with_items: applications
  register: dbpwd

- name: Create MySQL User
  mysql_user: name={{ item.0.name }} password={{ item.1.stdout }} priv={{ item.0.name }}.*:ALL state=present
  when: item.1.changed
  with_together:
  - applications
  - dbpwd.results

- name: set DB_DATABASE
  lineinfile: dest=/var/www/{{ item.name }}/.env regexp='^DB_DATABASE=' line=DB_DATABASE={{ item.name }}
  with_items: applications

- name: set DB_USERNAME
  lineinfile: dest=/var/www/{{ item.name }}/.env regexp='^DB_USERNAME=' line=DB_USERNAME={{ item.name }}
  with_items: applications

- name: set DB_PASSWORD
  lineinfile: dest=/var/www/{{ item.0.name }}/.env regexp='^DB_PASSWORD=' line=DB_PASSWORD={{ item.1.stdout }}
  when: item.1.changed
  with_together:
  - applications
  - dbpwd.results

- name: Save dbpw file
  lineinfile: dest=/var/www/{{ item.0.name }}/.dbpw line="{{ item.1.stdout }}" create=yes state=present
  sudo: yes
  sudo_user: "{{ wwwuser }}"
  when: item.1.changed
  with_together:
  - applications
  - dbpwd.results

- name: Run artisan migrate
  shell: php /var/www/{{ item.0.name }}/artisan migrate --force
  sudo: yes
  sudo_user: "{{ wwwuser }}"
  when: item.1.changed
  with_together:
  - applications
  - dbpwd.results

Once you have updated your playbook, save it and run it:

ansible-playbook php.yml --ask-sudo-pass

Despite all of the changes we've made to our playbook, there should be no changes to the database tasks. With the changes in this step, we should have finished our conversion from a single application playbook to a multiple application playbook.

Step 7 — Adding More Applications

In this step we will configure two more applications in our playbook.

Now that we have refactored our playbook to use variables to define the applications, the process for adding new applications to our server is very easy. Simply add them to the applications variable list. This is where the power of Ansible variables will really shine.

Open your playbook for editing:

nano php.yml

At the top, in the vars section, find the applications block:

applications:
  - name: laravel
    domain: laravel.example.com
    repository: https://github.com/do-community/do-ansible-adv-php.git
    branch: example

Now add in two more applications:

applications:
  - name: laravel
    domain: laravel.example.com
    repository: https://github.com/do-community/do-ansible-adv-php.git
    branch: example

  - name: one
    domain: one.example.com
    repository: https://github.com/do-community/do-ansible-php-example-one.git
    branch: master

  - name: two
    domain: two.example.com
    repository: https://github.com/do-community/do-ansible-php-example-two.git
    branch: master

Save your playbook.

Now it is time to run your playbook:

ansible-playbook php.yml --ask-sudo-pass

This step may take a while as Composer sets up the new applications. When it finishes, you will notice that a number of tasks are changed; if you look carefully, you'll see each of the looped items listed. Our original application should say ok or skipped, while the two new applications should say changed.

More importantly, if you visit all three of the domains for your configured sites in your web browser you should notice three different websites.

The first one should look familiar. The other two should display:

This is example app one!

and

This is example app two!

With that, we have just deployed two new web applications by simply updating our applications list.

Step 8 — Using Host Variables

In this step we will extract our variables to host variables.

Taking a step back, playbook variables are good, but what if we want to deploy different applications onto different servers using the same playbook? We could do conditional checks on each task to work out which server is running the task, or we can use host variables. Host variables are just what they sound like: variables that apply to a specific host, rather than all hosts across a playbook.

Host variables can be defined inline within the hosts file, as we've done with the ansible_ssh_user variable, or they can be defined in a dedicated file for each host within the host_vars directory.

First, create a new directory alongside our hosts file and our playbook. Call the directory host_vars:

mkdir host_vars

Next we need to create a file for our host. The convention Ansible uses is for the filename to match the host name in the hosts file. So, for example, if your hosts file looks like this:

[php]
your_first_server_ip ansible_ssh_user=sammy

Then you should create a file called host_vars/your_first_server_ip. Let's create that now:

nano host_vars/your_first_server_ip

Like our playbooks, host files use YAML for their formatting. This means we can copy our applications list into our new host file, so it looks like this:

---
applications:
  - name: laravel
    domain: laravel.example.com
    repository: https://github.com/do-community/do-ansible-adv-php.git
    branch: example

  - name: one
    domain: one.example.com
    repository: https://github.com/do-community/do-ansible-php-example-one.git
    branch: master

  - name: two
    domain: two.example.com
    repository: https://github.com/do-community/do-ansible-php-example-two.git
    branch: master

Save your new host variables file, and open your playbook for editing:

nano php.yml

Update the top to remove the entire applications section:

---
- hosts: php
  sudo: yes

  vars:
    wwwuser: www-data

  tasks:
. . .

Save the playbook, and run it:

ansible-playbook php.yml --ask-sudo-pass

Even though we have moved our variables from our playbook to our host file, the output should look exactly the same, and there should be no changes reported by Ansible. As you can see, host_vars work in the exact same way that vars in playbooks do; they are just specific to the host.

Variables defined in host_vars files will also be accessible across all playbooks that manage the server, which is useful for common options and settings. However, be careful not to use a common name that might mean different things across different playbooks.

Step 9 — Deploying Applications on Another Server

In this step we will utilize our new host files and deploy applications on a second server.

First, we need to update our hosts file with our new host. Open it for editing:

nano hosts

And add in your new host:

[php]
your_first_server_ip ansible_ssh_user=sammy
your_second_server_ip ansible_ssh_user=sammy

Save and close the file.

Next, we need to create a new hosts file, like we did with the first.

nano host_vars/your_second_server_ip

You can pick one or more of our example applications and add them into your host file. For example, if you wanted to deploy our original example and example two to the new server, you could use:

---
applications:
  - name: laravel
    domain: laravel.example2.com
    repository: https://github.com/do-community/do-ansible-adv-php.git
    branch: example

  - name: two
    domain: two.example2.com
    repository: https://github.com/do-community/do-ansible-php-example-two.git
    branch: master

Save your host file.

Finally we can run our playbook:

ansible-playbook php.yml --ask-sudo-pass

Ansible will take a while to run because it is setting everything up on your second server. When it has finished, open your chosen applications in your browser (in this example, laravel.example2.com and two.example2.com) to confirm they have been set up correctly. You should see the specific applications you picked for your host file, and your original server should have no changes.

Conclusion

This tutorial took a fully functioning single-application playbook and converted it to support multiple applications across multiple servers. Combined with the topics covered in the previous tutorials, you should have everything you need to write a full playbook for deploying your applications. As per the previous tutorials, we still have not logged directly into the servers using SSH.

You will have noticed how simple it was to add in more applications and another server, once we had the structure of the playbook worked out. This is the power of Ansible, and is what makes it so flexible and easy to use.

]]>
2015-07-08T23:03:03+00:00
Stephen <![CDATA[How to Deploy an Advanced PHP Application Using Ansible]]> https://stephen.rees-carter.net/thought/how-to-deploy-an-advanced-php-application-using-ansible How to Deploy an Advanced PHP Application Using Ansible

Note: I originally wrote and published this article as part of the Automating Your PHP Application Deployment Process with Ansible tutorial series for the Digital Ocean Community.

Introduction

This tutorial is the second in a series about deploying PHP applications using Ansible on Ubuntu 14.04. The first tutorial covers the basic steps for deploying an application, and is a starting point for the steps outlined in this tutorial.

In this tutorial we will cover setting up SSH keys to support code deployment/publishing tools, configuring the system firewall, provisioning and configuring the database (including the password!), and setting up task schedulers (crons) and queue daemons. The goal at the end of this tutorial is for you to have a fully working PHP application server with the aforementioned advanced configuration.

Like the last tutorial, we will be using the Laravel framework as our example PHP application. However, these instructions can be easily modified to support other frameworks and applications if you already have your own.

Prerequisites

This tutorial follows on directly from the end of the first tutorial in the series, and all of the configuration and files generated for that tutorial are required. If you haven't completed that tutorial yet, please do so first before continuing with this tutorial.

Step 1 — Switching the Application Repository

In this step, we will update the Git repository to a slightly customized example repository.

Because the default Laravel installation doesn't require the advanced features that we will be setting up in this tutorial, we will be switching the existing repository from the standard repository to an example repository with some debugging code added, just to show when things are working. The repository we will use is located at https://github.com/do-community/do-ansible-adv-php.

If you haven't done so already, change directories into ansible-php from the previous tutorial.

cd ~/ansible-php/

Open up our existing playbook for editing.

nano php.yml

Find and update the "Clone git repository" task, so it looks like this.

- name: Clone git repository
  git: >
    dest=/var/www/laravel
    repo=https://github.com/do-community/do-ansible-adv-php
    update=yes
    version=example
  sudo: yes
  sudo_user: www-data
  register: cloned

Save and run the playbook.

ansible-playbook php.yml --ask-sudo-pass

When it has finished running, visit your Droplet in your web browser (i.e. http://your_server_ip/). You should see a message that says "could not find driver".

This means we have successfully swapped out the default repository for our example repository, but the application cannot connect to the database. This is what we expect to see here, and we will install and set up the database later in the tutorial.

Step 2 — Setting up SSH Keys for Deployment

In this step, we will set up SSH keys that can be used for application code deployment scripts.

While Ansible is great for maintaining configuration and setting up servers and applications, tools like Envoy and Rocketeer are often used to push code changes onto your server and run application commands remotely. Most of these tools require an SSH connection that can access the application installation directly. In our case, this means we need to configure SSH keys for the www-data user.

We will need the public key file for the user you wish to push your code from. This file is typically found at ~/.ssh/id_rsa.pub. Copy that file into the ansible-php directory.

cp ~/.ssh/id_rsa.pub ~/ansible-php/deploykey.pub

We can use the Ansible authorized_key module to install our public key within /var/www/.ssh/authorized_keys, which will allow the deployment tools to connect and access our application. The configuration only needs to know where the key is, using a lookup, and the user the key needs to be installed for (www-data in our case).

- name: Copy public key into /var/www
  authorized_key: user=www-data key="{{ lookup('file', 'deploykey.pub') }}"

We also need to set the www-data user's shell, so we can actually log in. Otherwise, SSH will allow the connection, but there will be no shell presented to the user. This can be done using the user module, and setting the shell to /bin/bash (or your preferred shell).

- name: Set www-data user shell
  user: name=www-data shell=/bin/bash

Now, open up the playbook for editing to add in the new tasks.

nano php.yml

Add the above tasks to your php.yml playbook; the end of the file should match the following. The additions are highlighted in red.

. . .

  - name: Configure nginx
    template: src=nginx.conf dest=/etc/nginx/sites-available/default
    notify:
      - restart php5-fpm
      - restart nginx

  - name: Copy public key into /var/www
    authorized_key: user=www-data key="{{ lookup('file', 'deploykey.pub') }}"

  - name: Set www-data user shell
    user: name=www-data shell=/bin/bash

  handlers:
  
. . .

Save and run the playbook.

ansible-playbook php.yml --ask-sudo-pass

When Ansible finishes, you should be able to SSH in using the www-data user.

ssh www-data@your_server_ip

If you successfully log in, it's working! You can now log back out by entering logout or pressing CTRL+D.

We won't need to use that connection for any other steps in this tutorial, but it will be useful if you are setting up other tools, as mentioned above, or for general debugging and application maintenance as required.

Step 3 — Configuring the Firewall

In this step, we will configure the firewall on the Droplet to allow only HTTP and SSH connections.

Ubuntu 14.04 comes with UFW (Uncomplicated Firewall) installed by default, and Ansible supports it with the ufw module. It has a number of powerful features and has been designed to be as simple as possible. It's perfectly suited for self-contained web servers that only need a couple of ports open. In our case, we want port 80 (HTTP) and port 22 (SSH) open. You may also want port 443 for HTTPS.
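If you do serve HTTPS, one more task in the same style would open port 443. This is a hedged sketch, not part of this tutorial's playbook; only add it if your server actually serves HTTPS traffic:

```yaml
# Optional: only needed if this server also serves HTTPS.
- name: UFW open HTTPS
  ufw: rule=allow port=https
```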

The ufw module has a number of options that perform different tasks. The tasks we need to perform are:

  1. Enable UFW and deny all incoming traffic by default.

  2. Open the SSH port but rate limit it to prevent brute force attacks.

  3. Open the HTTP port.

This can be done with the following tasks, respectively.

- name: Enable UFW
  ufw: direction=incoming policy=deny state=enabled

- name: UFW limit SSH
  ufw: rule=limit port=ssh

- name: UFW open HTTP
  ufw: rule=allow port=http

As before, open the php.yml file for editing.

nano php.yml

Add the above tasks to the playbook; the end of the file should match the following.

. . .

  - name: Copy public key into /var/www
    authorized_key: user=www-data key="{{ lookup('file', 'deploykey.pub') }}"

  - name: Set www-data user shell
    user: name=www-data shell=/bin/bash

  - name: Enable UFW
    ufw: direction=incoming policy=deny state=enabled

  - name: UFW limit SSH
    ufw: rule=limit port=ssh

  - name: UFW open HTTP
    ufw: rule=allow port=http

  handlers:
  
. . .

Save and run the playbook.

ansible-playbook php.yml --ask-sudo-pass

When that has successfully completed, you should still be able to connect via SSH (using Ansible) or HTTP to your server; other ports will now be blocked.

You can verify the status of UFW at any time by running this command:

ansible php --sudo --ask-sudo-pass -m shell -a "ufw status verbose"

Breaking down the Ansible command above:

  • ansible: Run a raw Ansible task, without a playbook.
  • php: Run the task against the hosts in this group.
  • --sudo: Run the command as sudo.
  • --ask-sudo-pass: Prompt for the sudo password.
  • -m shell: Run the shell module.
  • -a "ufw status verbose": The options to be passed into the module. Because it is a shell command, we pass the raw command (i.e. ufw status verbose) straight in without any key=value options.

It should return something like this.

your_server_ip | success | rc=0 >>
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
22                         LIMIT IN    Anywhere
80                         ALLOW IN    Anywhere
22 (v6)                    LIMIT IN    Anywhere (v6)
80 (v6)                    ALLOW IN    Anywhere (v6)

Step 4 — Installing the MySQL Packages

In this step we will set up a MySQL database for our application to use.

The first step is to ensure that MySQL is installed on our server. This can be easily achieved by adding the required packages to the install packages task at the top of our playbook. The packages we need are mysql-server, mysql-client, and php5-mysql. We will also need python-mysqldb so Ansible can communicate with MySQL.

As we are adding packages, we need to restart nginx and php5-fpm to ensure the new packages are usable by the application. In this case, we need MySQL to be available to PHP, so it can connect to the database.

One of the fantastic things about Ansible is that you can modify any of the tasks and re-run your playbook and the changes will be applied. This includes lists of options, like we have with the apt task.

As before, open the php.yml file for editing.

nano php.yml

Find the install packages task, and update it to include the packages above:

. . .

- name: install packages
  apt: name={{ item }} update_cache=yes state=latest
  with_items:
    - git
    - mcrypt
    - nginx
    - php5-cli
    - php5-curl
    - php5-fpm
    - php5-intl
    - php5-json
    - php5-mcrypt
    - php5-sqlite
    - sqlite3
    - mysql-server
    - mysql-client
    - php5-mysql
    - python-mysqldb
  notify:
    - restart php5-fpm
    - restart nginx

. . .

Save and run the playbook:

ansible-playbook php.yml --ask-sudo-pass

Step 5 — Setting up the MySQL Database

In this step we will create a MySQL database for our application.

Ansible can talk directly to MySQL using the mysql_-prefaced modules (e.g. mysql_db, mysql_user). The mysql_db module provides a way to ensure a database with a specific name exists.

We can use a task that looks like this:

- name: Create MySQL DB
  mysql_db: name=laravel state=present
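On a fresh install like ours, the MySQL root account has no password for local connections, so the task above works as-is. If your root account is password-protected, the mysql_db module accepts login credentials. A hedged sketch, where mysql_root_pw is a hypothetical variable you would define yourself (it is not part of this tutorial):

```yaml
# Only needed when the MySQL root account has a password.
# mysql_root_pw is a hypothetical variable, not defined in this tutorial.
- name: Create MySQL DB
  mysql_db: name=laravel state=present login_user=root login_password={{ mysql_root_pw }}
```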

We also need a valid user account, with a known password, to allow our application to connect to the database successfully. One approach to this is to generate a password locally and save it in our Ansible playbook, but that is insecure and there is a better way. We will generate the password, using Ansible, on the server itself and use it directly where it is needed.

To generate a password, we will use the makepasswd command line tool, and ask for a 32-character password. Because makepasswd isn't installed by default on Ubuntu, we will need to add it to the packages list too.

We will also tell Ansible to remember the output of the command (i.e. the password), so we can use it later in our playbook. However, because Ansible has no way of knowing whether it has already run a shell command, we also need to configure a file to check for; if the file exists, Ansible assumes the command has already been run and won't run it again.

The task looks like this:

- name: Generate DB password
  shell: makepasswd --chars=32
  args:
    creates: /var/www/laravel/.dbpw
  register: dbpwd

Next, we need to create the actual MySQL database user with the password we generated. This is done using the mysql_user module, and we can use the stdout option on the variable we defined during the password generation task to get the raw output of the shell command, like this: dbpwd.stdout.

The mysql_user module accepts the name of the user and the privileges required. In our case, we want to create a user called laravel and give them full privileges on the laravel database. We also need to tell the task to only run when the dbpwd variable has changed, which will only be when the password generation task is run.

The task should look like this:

- name: Create MySQL User
  mysql_user: name=laravel password={{ dbpwd.stdout }} priv=laravel.*:ALL state=present
  when: dbpwd.changed

Putting this together, open the php.yml file for editing, so we can add in the above tasks.

nano php.yml

Firstly, find the install packages task, and update it to include the makepasswd package.

. . .

- name: install packages
  apt: name={{ item }} update_cache=yes state=latest
  with_items:
    - git
    - mcrypt
    - nginx
    - php5-cli
    - php5-curl
    - php5-fpm
    - php5-intl
    - php5-json
    - php5-mcrypt
    - php5-sqlite
    - sqlite3
    - mysql-server
    - mysql-client
    - php5-mysql
    - python-mysqldb
    - makepasswd
  notify:
    - restart php5-fpm
    - restart nginx

. . .

Then, add the password generation, MySQL database creation, and user creation tasks at the bottom.

. . .

  - name: UFW limit SSH
    ufw: rule=limit port=ssh

  - name: UFW open HTTP
    ufw: rule=allow port=http

  - name: Create MySQL DB
    mysql_db: name=laravel state=present

  - name: Generate DB password
    shell: makepasswd --chars=32
    args:
      creates: /var/www/laravel/.dbpw
    register: dbpwd

  - name: Create MySQL User
    mysql_user: name=laravel password={{ dbpwd.stdout }} priv=laravel.*:ALL state=present
    when: dbpwd.changed

  handlers:

. . .

Do not run the playbook yet! You may have noticed that although we have created the MySQL user and database, we haven't done anything with the password. We will cover that in the next step. When using shell tasks within Ansible, it is important to complete the entire workflow that deals with the output of the task before running the playbook; otherwise, you may have to log in manually and reset the state.
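If you do run the playbook too early by mistake, the state to reset is the generated MySQL user (and, on later runs, the password marker file). A hedged sketch of one-off reset tasks you could run once and then remove again; they are not part of the tutorial's playbook:

```yaml
# One-off reset tasks: remove the marker file and the MySQL user so the
# password generation and user creation tasks run fresh on the next playbook run.
- name: Remove generated password file
  file: path=/var/www/laravel/.dbpw state=absent

- name: Remove MySQL laravel user
  mysql_user: name=laravel state=absent
```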

Step 6 — Configuring the PHP Application for the Database

In this step we will save the MySQL database password into the .env file for the application.

Like we did in the last tutorial, we will update the .env file to include our newly created database credentials. By default Laravel's .env file contains these lines:

DB_HOST=localhost
DB_DATABASE=homestead
DB_USERNAME=homestead
DB_PASSWORD=secret

We can leave the DB_HOST line as-is, but we will update the other three using the following tasks, which are very similar to the tasks we used in the previous tutorial to set APP_ENV and APP_DEBUG.

- name: set DB_DATABASE
  lineinfile: dest=/var/www/laravel/.env regexp='^DB_DATABASE=' line=DB_DATABASE=laravel

- name: set DB_USERNAME
  lineinfile: dest=/var/www/laravel/.env regexp='^DB_USERNAME=' line=DB_USERNAME=laravel

- name: set DB_PASSWORD
  lineinfile: dest=/var/www/laravel/.env regexp='^DB_PASSWORD=' line=DB_PASSWORD={{ dbpwd.stdout }}
  when: dbpwd.changed

As we did with the MySQL user creation task, we have used the generated password variable (dbpwd.stdout) to populate the file with the password, and have added the when option to ensure it is only run when dbpwd has changed.

Now, because the .env file already existed before we added our password generation task, we will need to save the password to another file so the generation task can check for its existence (which we already set up within the task). We will also use the sudo and sudo_user options to tell Ansible to create the file as the www-data user.

- name: Save dbpw file
  lineinfile: dest=/var/www/laravel/.dbpw line="{{ dbpwd.stdout }}" create=yes state=present
  sudo: yes
  sudo_user: www-data
  when: dbpwd.changed
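Note that dbpwd.stdout is only populated on the run where the password is actually generated; on later runs the creates guard skips the shell task, which is why every task that consumes the password carries when: dbpwd.changed. If a later play ever needs the stored password, it can be read back from the .dbpw file. A hedged sketch using Ansible's slurp module, not part of this tutorial:

```yaml
# Read the previously saved password back into a variable.
# slurp returns the file content base64-encoded, hence the b64decode filter.
- name: Read stored DB password
  slurp: src=/var/www/laravel/.dbpw
  register: dbpw_file

- name: Show stored DB password (debugging only)
  debug: msg="{{ dbpw_file.content | b64decode }}"
```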

Open the php.yml file for editing.

nano php.yml

Add the above tasks to the playbook; the end of the file should match the following.

. . .

  - name: Create MySQL User
    mysql_user: name=laravel password={{ dbpwd.stdout }} priv=laravel.*:ALL state=present
    when: dbpwd.changed

  - name: set DB_DATABASE
    lineinfile: dest=/var/www/laravel/.env regexp='^DB_DATABASE=' line=DB_DATABASE=laravel

  - name: set DB_USERNAME
    lineinfile: dest=/var/www/laravel/.env regexp='^DB_USERNAME=' line=DB_USERNAME=laravel

  - name: set DB_PASSWORD
    lineinfile: dest=/var/www/laravel/.env regexp='^DB_PASSWORD=' line=DB_PASSWORD={{ dbpwd.stdout }}
    when: dbpwd.changed

  - name: Save dbpw file
    lineinfile: dest=/var/www/laravel/.dbpw line="{{ dbpwd.stdout }}" create=yes state=present
    sudo: yes
    sudo_user: www-data
    when: dbpwd.changed

  handlers:

. . .

Again, do not run the playbook yet! We have one more step to complete before we can run the playbook.

Step 7 — Migrating the Database

In this step, we will run the database migrations to set up the database tables.

In Laravel, this is done by running the migrate command (i.e. php artisan migrate --force) within the Laravel directory. Note that we have added the --force flag because the production environment requires it.

The Ansible task to perform this looks like this.

- name: Run artisan migrate
  shell: php /var/www/laravel/artisan migrate --force
  sudo: yes
  sudo_user: www-data
  when: dbpwd.changed

Now it is time to update our playbook. Open the php.yml file for editing.

nano php.yml

Add the above task to the playbook; the end of the file should match the following.

. . .

  - name: Save dbpw file
    lineinfile: dest=/var/www/laravel/.dbpw line="{{ dbpwd.stdout }}" create=yes state=present
    sudo: yes
    sudo_user: www-data
    when: dbpwd.changed

  - name: Run artisan migrate
    shell: php /var/www/laravel/artisan migrate --force
    sudo: yes
    sudo_user: www-data
    when: dbpwd.changed

  handlers:

. . .

Finally, we can save and run the playbook.

ansible-playbook php.yml --ask-sudo-pass

When that finishes executing, refresh the page in your browser and you should see a message that says:

Queue: NO
Cron: NO

This means the database is set up correctly and working as expected.

Step 8 — Configuring cron Tasks

In this step, we will set up any cron tasks that need to be configured.

Cron tasks are commands that run on a set schedule and can be used to perform any number of tasks for your application. They are often used for performing maintenance tasks or sending out email activity updates — essentially anything that needs to be done periodically without a user starting it manually. Cron schedules can run as frequently as every minute, or as infrequently as you require.

Laravel comes by default with an Artisan command called schedule:run, which is designed to be run every minute and executes the defined scheduled tasks within the application. This means we only need to add a single cron task if our application takes advantage of this feature.

Ansible has a cron module, which allows you to add cron tasks easily. It has a number of different options that translate directly into the different options you can configure via cron:

  • job: The command to execute. Required if state=present.
  • minute, hour, day, month, and weekday: The minute, hour, day, month, or day of the week when the job should run, respectively.
  • special_time (reboot, yearly, annually, monthly, weekly, daily, hourly): Special time specification nickname.
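To illustrate the scheduling options above: a job that runs once a day can use special_time, while an explicit schedule sets the time fields directly. Both of the following are hedged examples with hypothetical script paths, not part of this tutorial's playbook:

```yaml
# Hypothetical examples of the cron module's scheduling options.
- name: Nightly cleanup (special_time nickname)
  cron: name="nightly cleanup" special_time=daily user=www-data job="/usr/local/bin/cleanup.sh"

- name: Weekly report at 2:30am on Mondays (explicit fields)
  cron: name="weekly report" minute=30 hour=2 weekday=1 user=www-data job="/usr/local/bin/report.sh"
```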

By default, it will create a task that runs every minute, which is what we want. This means the task we want looks like this:

- name: Laravel Scheduler
  cron: >
    job="run-one php /var/www/laravel/artisan schedule:run 1>> /dev/null 2>&1"
    state=present
    user=www-data
    name="php artisan schedule:run"

The run-one command is a small helper in Ubuntu that ensures the command is only being run once. This means that if a previous schedule:run command is still running, it won't be run again. This is helpful to avoid situations where a cron task becomes locked in a loop, and over time more and more instances of the same task are started until the server runs out of resources.

As before, open the php.yml file for editing.

nano php.yml

Add the above task to the playbook; the end of the file should match the following.

. . .

  - name: Run artisan migrate
    shell: php /var/www/laravel/artisan migrate --force
    sudo: yes
    sudo_user: www-data
    when: dbpwd.changed

  - name: Laravel Scheduler
    cron: >
      job="run-one php /var/www/laravel/artisan schedule:run 1>> /dev/null 2>&1"
      state=present
      user=www-data
      name="php artisan schedule:run"

  handlers:

. . .

Save and run the playbook:

ansible-playbook php.yml --ask-sudo-pass

Now, refresh the page in your browser. In a minute, it will update to look like this.

Queue: NO
Cron: YES

This means that the cron is working in the background correctly. As part of the example application, there is a cron job that runs every minute and updates a status entry in the database so the application knows it is running.

Step 9 — Configuring the Queue Daemon

In this step we will configure the queue daemon worker for Laravel.

Like the schedule:run Artisan command from step 8, Laravel also comes with a queue worker that can be started with the queue:work --daemon Artisan command. We will set that up now, so you can take advantage of it in your application.

Queue workers are similar to cron jobs in that they run tasks in the background. The difference is that the application pushes jobs into the queue, either via actions performed by the user, or from tasks scheduled through a cron job. Queue tasks are executed by the worker one at a time, and will be processed on-demand when they are found in the queue. They are commonly used for tasks that take time to execute, such as sending emails or making API calls to external services.

Unlike the schedule:run command, this isn't a command that needs to be run every minute. Instead it needs to be running as a daemon in the background constantly. A common way to do this is to use a third party package, like supervisord, but that method requires understanding how to configure and manage said system. There is a much simpler way to do it using cron and the run-one command.

We will create a cron entry to start the queue worker daemon, and use run-one to run it. This means that cron will start the process the first time it runs, and any subsequent cron runs will be ignored by run-one while the worker is running. As soon as the worker stops, run-one will allow the command to run again, and the queue worker will start again. It is an incredibly simple, easy-to-use method that saves you from needing to learn how to configure and use another tool.

With all of that in mind, we will create another cron task to run our queue worker.

- name: Laravel Queue Worker
  cron: >
    job="run-one php /var/www/laravel/artisan queue:work --daemon --sleep=30 --delay=60 --tries=3 1>> /dev/null 2>&1"
    state=present
    user=www-data
    name="Laravel Queue Worker"

As before, open the php.yml file for editing.

nano php.yml

Add the above task to the playbook; the end of the file should match the following:

. . .

  - name: Laravel Scheduler
    cron: >
      job="run-one php /var/www/laravel/artisan schedule:run 1>> /dev/null 2>&1"
      state=present
      user=www-data
      name="php artisan schedule:run"

  - name: Laravel Queue Worker
    cron: >
      job="run-one php /var/www/laravel/artisan queue:work --daemon --sleep=30 --delay=60 --tries=3 1>> /dev/null 2>&1"
      state=present
      user=www-data
      name="Laravel Queue Worker"

  handlers:
. . .

Save and run the playbook:

ansible-playbook php.yml --ask-sudo-pass

Like before, refresh the page in your browser. After a minute, it will update to look like this:

Queue: YES
Cron: YES

This means that the queue worker is working in the background correctly. The cron job that we started in the last step is pushing a job onto the queue. This job updates the database when it is run, to show that it is working.

We now have a working example Laravel application which includes functioning cron jobs and queue workers.

Conclusion

This tutorial covered some of the more advanced topics involved in using Ansible to deploy PHP applications. All of the tasks used can be easily modified to suit most PHP applications (depending on their specific requirements), and this should give you a good starting point for setting up your own playbooks for your applications.

You will notice that we have not used a single SSH command as part of this tutorial (apart from checking the www-data user login), and everything — including the MySQL user password — has been set up automatically. After following this tutorial, your application is ready to go and supports tools to push code updates.

]]>
2015-07-05T21:20:00+00:00
Stephen <![CDATA[How to Deploy a Basic PHP Application using Ansible]]> https://stephen.rees-carter.net/thought/how-to-deploy-a-basic-php-application-using-ansible How to Deploy a Basic PHP Application using Ansible

Note: I originally wrote and published this article as part of the Automating Your PHP Application Deployment Process with Ansible tutorial series for the DigitalOcean Community.

Introduction

This tutorial covers the process of provisioning a basic PHP application using Ansible. The goal at the end of this tutorial is to have your new web server serving a basic PHP application without a single SSH connection or manual command run on the target Droplet.

We will be using the Laravel framework as an example PHP application, but these instructions can be easily modified to support other frameworks and applications if you already have your own.

Prerequisites

For this tutorial, we will be using Ansible to install and configure Nginx, PHP, and other services on an Ubuntu 14.04 Droplet. This tutorial builds on basic Ansible knowledge, so if you are new to Ansible, you can read through this basic Ansible tutorial first.

To follow this tutorial, you will need:

  • One Ubuntu 14.04 Droplet of any size, which we will use to configure and deploy our PHP application onto. The IP address of this machine will be referred to as your_server_ip throughout the tutorial.

  • One Ubuntu 14.04 Droplet which will be used for Ansible. This is the Droplet you will be logged into for the entirety of this tutorial.

  • Sudo non-root users configured for both Droplets.

  • SSH keys for the Ansible Droplet to authorize login on the PHP deployment Droplet, which you can set up by following this tutorial on your Ansible Droplet.

Step 1 — Installing Ansible

The first step is to install Ansible. This is easily accomplished by adding the Ansible PPA (Personal Package Archive) and then installing the Ansible package with apt.

First, add the PPA using the apt-add-repository command.

sudo apt-add-repository ppa:ansible/ansible

Once that has finished, update the apt cache.

sudo apt-get update

Finally, install Ansible.

sudo apt-get install ansible

Once Ansible is installed, we'll create a new directory to work in and set up a basic configuration. By default, Ansible uses a hosts file located at /etc/ansible/hosts, which contains all of the servers it is managing. While that file is fine for some use cases, it's global, which isn't what we want here.

For this tutorial, we will create a local hosts file and use that instead. We can do this by creating a new Ansible configuration file within our working directory, which we can use to tell Ansible to look for a hosts file within the same directory.

Create a new directory (which we will use for the rest of this tutorial).

mkdir ~/ansible-php

Move into the new directory.

cd ~/ansible-php/

Create a new file called ansible.cfg and open it for editing using nano or your favorite text editor.

nano ansible.cfg

Add in the hostfile configuration option with the value of hosts in the [defaults] group by copying the following into the ansible.cfg file.

[defaults]
hostfile = hosts

Save and close the ansible.cfg file. Next, we'll create the hosts file, which will contain the IP address of the PHP Droplet where we will deploy our application.

nano hosts

Copy the below to add in a section for php, replacing your_server_ip with your server IP address and sammy with the sudo non-root user you created in the prerequisites on your PHP Droplet.

[php]
your_server_ip ansible_ssh_user=sammy

Save and close the hosts file. Let's run a simple check to make sure Ansible is able to connect to the host as expected by calling the ping module on the new php group.

ansible php -m ping

Depending on whether you have logged into this host before, you may see an SSH host authentication prompt. The ping should come back with a successful response, which looks something like this:

111.111.111.111 | success >> {
    "changed": false,
    "ping": "pong"
}

Ansible is now installed and configured; we can move on to setting up our web server.

Step 2 — Installing Required Packages

In this step we will install some required system packages using Ansible and apt. In particular, we will install git, nginx, sqlite3, mcrypt, and a couple of php5-* packages.

Before we add in the apt module to install the packages we want, we need to create a basic playbook. We'll build on this playbook as we go through the tutorial. Create a new playbook called php.yml.

nano php.yml

Paste in the following configuration. The first two lines specify the hosts group we wish to use (php) and make sure Ansible runs commands with sudo by default. The rest adds a module with the packages that we need. You can customize this for your own application, or use the configuration below if you're following along with the example Laravel application.

---
- hosts: php
  sudo: yes

  tasks:

  - name: install packages
    apt: name={{ item }} update_cache=yes state=latest
    with_items:
      - git
      - mcrypt
      - nginx
      - php5-cli
      - php5-curl
      - php5-fpm
      - php5-intl
      - php5-json
      - php5-mcrypt
      - php5-sqlite
      - sqlite3

Save the php.yml file. Finally, run ansible-playbook to install the packages on the Droplet. Don't forget to use the --ask-sudo-pass option if your sudo user on your PHP Droplet requires a password.

ansible-playbook php.yml --ask-sudo-pass

Step 3 — Modifying System Configuration Files

In this section we will modify some of the system configuration files on the PHP Droplet. The most important configuration option to change (aside from Nginx's files, which will be covered in a later step) is the cgi.fix_pathinfo option in php5-fpm, because the default value is a security risk.

We'll first explain all the sections we're going to add to this file, then include the entire php.yml file for you to copy and paste in.

The lineinfile module can be used to ensure the configuration value within the file is exactly as we expect it. This can be done using a generic regular expression so Ansible can understand most forms the parameter is likely to be in. We'll also need to restart php5-fpm and nginx to ensure the change takes effect, so we need to add in two handlers as well, in a new handlers section. Handlers are perfect for this, as they are only fired when the task changes. They also run at the end of the playbook, so multiple tasks can call the same handler and it will only run once.

The section to do the above will look like this:

  - name: ensure php5-fpm cgi.fix_pathinfo=0
    lineinfile: dest=/etc/php5/fpm/php.ini regexp='^(.*)cgi.fix_pathinfo=' line=cgi.fix_pathinfo=0
    notify:
      - restart php5-fpm
      - restart nginx

  handlers:
    - name: restart php5-fpm
      service: name=php5-fpm state=restarted

    - name: restart nginx
      service: name=nginx state=restarted

Next, we also need to ensure the php5-mcrypt module is enabled. This is done by running the php5enmod script with the shell task, and checking that the 20-mcrypt.ini file is in the right place once it's enabled. Note that we are telling Ansible that the task creates a specific file. If that file exists, the task won't be run.

  - name: enable php5 mcrypt module
    shell: php5enmod mcrypt
    args:
      creates: /etc/php5/cli/conf.d/20-mcrypt.ini

Now, open php.yml for editing again.

nano php.yml

Add the above tasks and handlers, so the file matches the below:

---
- hosts: php
  sudo: yes

  tasks:

  - name: install packages
    apt: name={{ item }} update_cache=yes state=latest
    with_items:
      - git
      - mcrypt
      - nginx
      - php5-cli
      - php5-curl
      - php5-fpm
      - php5-intl
      - php5-json
      - php5-mcrypt
      - php5-sqlite
      - sqlite3

  - name: ensure php5-fpm cgi.fix_pathinfo=0
    lineinfile: dest=/etc/php5/fpm/php.ini regexp='^(.*)cgi.fix_pathinfo=' line=cgi.fix_pathinfo=0
    notify:
      - restart php5-fpm
      - restart nginx

  - name: enable php5 mcrypt module
    shell: php5enmod mcrypt
    args:
      creates: /etc/php5/cli/conf.d/20-mcrypt.ini

  handlers:
    - name: restart php5-fpm
      service: name=php5-fpm state=restarted

    - name: restart nginx
      service: name=nginx state=restarted

Finally, run the playbook.

ansible-playbook php.yml --ask-sudo-pass

The Droplet now has all the required packages installed and the basic configuration set up and ready to go.

Step 4 — Cloning the Git Repository

In this section we will clone the Laravel framework repository onto our Droplet using Git. Like in Step 3, we'll explain all the sections we're going to add to the playbook, then include the entire php.yml file for you to copy and paste in.

Before we clone our Git repository, we need to make sure /var/www exists. We can do this by creating a task with the file module.

- name: create /var/www/ directory
  file: dest=/var/www/ state=directory owner=www-data group=www-data mode=0700

As mentioned above, we need to use the git module to clone the repository onto our Droplet. The process is simple because all we normally require for a git clone command is the source repository. In this case, we will also define the destination, and tell Ansible not to update the repository if it already exists by setting update=no. Because we are using Laravel, the git repository URL we will use is https://github.com/laravel/laravel.git.

However, we need to run the task as the www-data user to ensure that the permissions are correct. To do this, we can tell Ansible to run the command as a specific user using sudo. The final task will look like this:

- name: Clone git repository
  git: >
    dest=/var/www/laravel
    repo=https://github.com/laravel/laravel.git
    update=no
  sudo: yes
  sudo_user: www-data

Note: For SSH-based repositories you can add accept_hostkey=yes to prevent SSH host verification from hanging the task.

As before, open the php.yml file for editing.

nano php.yml

Add the above tasks to the playbook; the end of the file should match the following:

---
...

  - name: enable php5 mcrypt module
    shell: php5enmod mcrypt
    args:
      creates: /etc/php5/cli/conf.d/20-mcrypt.ini

  - name: create /var/www/ directory
    file: dest=/var/www/ state=directory owner=www-data group=www-data mode=0700

  - name: Clone git repository
    git: >
      dest=/var/www/laravel
      repo=https://github.com/laravel/laravel.git
      update=no
    sudo: yes
    sudo_user: www-data

  handlers:
    - name: restart php5-fpm
      service: name=php5-fpm state=restarted

    - name: restart nginx
      service: name=nginx state=restarted

Save and close the playbook, then run it.

ansible-playbook php.yml --ask-sudo-pass

Step 5 — Creating an Application with Composer

In this step, we will use Composer to install the PHP application and its dependencies.

Composer has a create-project command that installs all of the required dependencies and then runs the project creation steps defined in the post-create-project-cmd section of the composer.json file. This is the best way to ensure the application is set up correctly for its first use.
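For reference, the relevant section of Laravel's composer.json looks roughly like the sketch below; the exact contents vary between Laravel versions, so treat this as illustrative rather than exact:

```json
{
    "scripts": {
        "post-create-project-cmd": [
            "php artisan key:generate"
        ]
    }
}
```

It is this script hook that generates the application's unique APP_KEY on project creation, which is why we only want create-project to run once.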

We can use the following Ansible task to download and install Composer globally as /usr/local/bin/composer. It will then be accessible by anyone using the Droplet, including Ansible.

- name: install composer
  shell: curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
  args:
    creates: /usr/local/bin/composer

With Composer installed, there is a Composer module that we can use. In our case, we want to tell Composer where our project is (using the working_dir parameter), and to run the create-project command. We also need to add the optimize_autoloader=no parameter, as this flag isn't supported by the create-project command. Like the git task, we also want to run this as the www-data user to ensure permissions are valid. Putting it all together, we get this task:

- name: composer create-project
  composer: command=create-project working_dir=/var/www/laravel optimize_autoloader=no
  sudo: yes
  sudo_user: www-data

Note: The create-project task may take a significant amount of time on a fresh Droplet, as Composer will have an empty cache and will need to download everything from scratch.

Now, open the php.yml file for editing.

nano php.yml

Add the tasks above at the end of the tasks section, above handlers, so that the end of the playbook matches the following:

...

  - name: Clone git repository
    git: >
      dest=/var/www/laravel
      repo=https://github.com/laravel/laravel.git
      update=no
    sudo: yes
    sudo_user: www-data

  - name: install composer
    shell: curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
    args:
      creates: /usr/local/bin/composer

  - name: composer create-project
    composer: command=create-project working_dir=/var/www/laravel optimize_autoloader=no
    sudo: yes
    sudo_user: www-data

  handlers:
    - name: restart php5-fpm
      service: name=php5-fpm state=restarted

    - name: restart nginx
      service: name=nginx state=restarted

Finally, run the playbook.

ansible-playbook php.yml --ask-sudo-pass

What would happen if we ran Ansible again now? The composer create-project task would run again, and in the case of Laravel, that means a new APP_KEY. Instead, we want that task to run only after a fresh clone. We can ensure it runs only once by registering a variable with the results of the git clone task, and then checking those results within the composer create-project task: if the git clone task reports a change, we run composer create-project; if not, it is skipped.

Note: There appears to be a bug in some versions of the Ansible composer module, which may report OK instead of changed because it ignores that scripts were executed when no dependencies needed installing.
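If you run into that bug, one hypothetical workaround is to guard on a file that Composer creates rather than on the task status. This sketch uses the stat module, and assumes vendor/autoload.php is a reliable marker of an installed project:

```yaml
# Hypothetical alternative guard, not part of this tutorial's playbook.
- name: check whether composer dependencies are installed
  stat: path=/var/www/laravel/vendor/autoload.php
  register: autoloader

- name: composer create-project
  composer: command=create-project working_dir=/var/www/laravel optimize_autoloader=no
  sudo: yes
  sudo_user: www-data
  when: not autoloader.stat.exists
```

The registered-variable approach below works the same way, but keys off the git clone result instead of a file on disk.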

Open the php.yml file for editing.

nano php.yml

Find the git clone task. Add the register option to save the results of the task into the cloned variable, like this:

- name: Clone git repository
  git: >
    dest=/var/www/laravel
    repo=https://github.com/laravel/laravel.git
    update=no
  sudo: yes
  sudo_user: www-data
  register: cloned

Next, find the composer create-project task. Add the when option to check the cloned variable to see if it has changed or not.

- name: composer create-project
  composer: command=create-project working_dir=/var/www/laravel optimize_autoloader=no
  sudo: yes
  sudo_user: www-data
  when: cloned|changed

Save the playbook, and run it:

ansible-playbook php.yml --ask-sudo-pass

Now Composer will stop changing the APP_KEY each time it is run.

Step 6 — Updating Environment Variables

In this step, we will update the environment variables for our application.

Laravel comes with a default .env file which sets the APP_ENV to local and APP_DEBUG to true. We want to swap them for production and false, respectively. This can be done simply using the lineinfile module with the following tasks.

- name: set APP_DEBUG=false
  lineinfile: dest=/var/www/laravel/.env regexp='^APP_DEBUG=' line=APP_DEBUG=false

- name: set APP_ENV=production
  lineinfile: dest=/var/www/laravel/.env regexp='^APP_ENV=' line=APP_ENV=production

Open the php.yml file for editing.

nano php.yml

Add these tasks to the playbook; the end of the file should match the following:

...

  - name: composer create-project
    composer: command=create-project working_dir=/var/www/laravel optimize_autoloader=no
    sudo: yes
    sudo_user: www-data
    when: cloned|changed

  - name: set APP_DEBUG=false
    lineinfile: dest=/var/www/laravel/.env regexp='^APP_DEBUG=' line=APP_DEBUG=false

  - name: set APP_ENV=production
    lineinfile: dest=/var/www/laravel/.env regexp='^APP_ENV=' line=APP_ENV=production

  handlers:
    - name: restart php5-fpm
      service: name=php5-fpm state=restarted

    - name: restart nginx
      service: name=nginx state=restarted

Save and run the playbook:

ansible-playbook php.yml --ask-sudo-pass

The lineinfile module is very useful for quick tweaks of any text file, and it's great for ensuring environment variables like this are set correctly.
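As a further sketch, lineinfile also combines well with Ansible variables. This hypothetical task (APP_URL is not part of this tutorial's setup) would point the application at the host defined in the inventory:

```yaml
# Hypothetical example: sets APP_URL from the inventory hostname.
- name: set APP_URL
  lineinfile: dest=/var/www/laravel/.env regexp='^APP_URL=' line=APP_URL=http://{{ inventory_hostname }}
```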

Step 7 — Configuring Nginx

In this section we will configure Nginx to serve the PHP application.

If you visit your Droplet in your web browser now (i.e. http://your_server_ip/), you will see the Nginx default page instead of the Laravel new project page. This is because we still need to configure our Nginx web server to serve the application from the /var/www/laravel/public directory. To do this we need to update our Nginx default configuration with that directory, and add in support for php-fpm, so it can handle PHP scripts.

Create a new file called nginx.conf:

nano nginx.conf

Save this server block within that file. You can check out Step 4 of this tutorial for more details about this Nginx configuration; the modifications below specify where the Laravel public directory is and make sure Nginx uses the hostname we've defined in the hosts file as the server_name, via the inventory_hostname variable.

server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    root /var/www/laravel/public;
    index index.php index.html index.htm;

    server_name {{ inventory_hostname }};

    location / {
        try_files $uri $uri/ =404;
    }

    error_page 404 /404.html;
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /var/www/laravel/public;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}

Save and close the nginx.conf file.

Now, we can use the template module to push our new configuration file across. The template module may look and sound very similar to the copy module, but there is a big difference: copy transfers one or more files across without making any changes, while template copies a single file and resolves all variables within it. Because we have used {{ inventory_hostname }} within our config file, we use the template module so it is resolved into the IP address that we used in the hosts file. This way, we don't need to hard-code values in the configuration files that Ansible uses.

However, as is usual when writing tasks, we need to consider what will happen on the Droplet. Because we are changing the Nginx configuration, we need to restart Nginx and php5-fpm. This is done using the notify option.

- name: Configure nginx
  template: src=nginx.conf dest=/etc/nginx/sites-available/default
  notify:
    - restart php5-fpm
    - restart nginx

Open your php.yml file:

nano php.yml

Add in this nginx task at the end of the tasks section. The entire php.yml file should now look like this:

---
- hosts: php
  sudo: yes

  tasks:

  - name: install packages
    apt: name={{ item }} update_cache=yes state=latest
    with_items:
      - git
      - mcrypt
      - nginx
      - php5-cli
      - php5-curl
      - php5-fpm
      - php5-intl
      - php5-json
      - php5-mcrypt
      - php5-sqlite
      - sqlite3

  - name: ensure php5-fpm cgi.fix_pathinfo=0
    lineinfile: dest=/etc/php5/fpm/php.ini regexp='^(.*)cgi.fix_pathinfo=' line=cgi.fix_pathinfo=0
    notify:
      - restart php5-fpm
      - restart nginx

  - name: enable php5 mcrypt module
    shell: php5enmod mcrypt
    args:
      creates: /etc/php5/cli/conf.d/20-mcrypt.ini

  - name: create /var/www/ directory
    file: dest=/var/www/ state=directory owner=www-data group=www-data mode=0700

  - name: Clone git repository
    git: >
      dest=/var/www/laravel
      repo=https://github.com/laravel/laravel.git
      update=no
    sudo: yes
    sudo_user: www-data
    register: cloned

  - name: install composer
    shell: curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
    args:
      creates: /usr/local/bin/composer

  - name: composer create-project
    composer: command=create-project working_dir=/var/www/laravel optimize_autoloader=no
    sudo: yes
    sudo_user: www-data
    when: cloned|changed

  - name: set APP_DEBUG=false
    lineinfile: dest=/var/www/laravel/.env regexp='^APP_DEBUG=' line=APP_DEBUG=false

  - name: set APP_ENV=production
    lineinfile: dest=/var/www/laravel/.env regexp='^APP_ENV=' line=APP_ENV=production

  - name: Configure nginx
    template: src=nginx.conf dest=/etc/nginx/sites-available/default
    notify:
      - restart php5-fpm
      - restart nginx

  handlers:
    - name: restart php5-fpm
      service: name=php5-fpm state=restarted

    - name: restart nginx
      service: name=nginx state=restarted

Save and run the playbook again:

ansible-playbook php.yml --ask-sudo-pass

Once it completes, go back to your browser and refresh. You should now see the Laravel new project page!
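If you'd rather verify this from the playbook itself, a hypothetical final task using Ansible's uri module could check that the site responds; this is an optional addition, not part of the playbook above:

```yaml
# Hypothetical check: fails the play if the site doesn't return HTTP 200.
- name: check the site responds
  uri: url=http://{{ inventory_hostname }}/ status_code=200
```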

Conclusion

This tutorial covers deploying a PHP application with a public repository. While this is perfect for learning how Ansible works, you won't always be working on fully open source projects with open repositories. This means you will need to authenticate the git clone in Step 4 against your private repository. This can be done very easily using SSH keys.

For example, once you have your SSH deploy keys created and set on your repository, you can use Ansible to copy and configure them on your server before the git clone task:

- name: create /var/www/.ssh/ directory
  file: dest=/var/www/.ssh/ state=directory owner=www-data group=www-data mode=0700

- name: copy private ssh key
  copy: src=deploykey_rsa dest=/var/www/.ssh/id_rsa owner=www-data group=www-data mode=0600

That should allow the server to correctly authenticate and deploy your application.
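With the key in place, the git clone task can then point at the private repository over SSH. The repository URL below is a placeholder for your own; accept_hostkey=yes, as noted in Step 4, stops host key verification from hanging the task:

```yaml
# Sketch only: replace the repo URL with your private repository's SSH address.
- name: Clone private git repository
  git: >
    dest=/var/www/laravel
    repo=git@github.com:your_user/your_private_app.git
    update=no
    accept_hostkey=yes
  sudo: yes
  sudo_user: www-data
  register: cloned
```

Because the clone runs as www-data, whose home directory is /var/www, SSH will pick up the deploy key from /var/www/.ssh/id_rsa automatically.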

However, you have just deployed a basic PHP application to an Ubuntu-based Nginx web server, using Composer to manage its dependencies, all without needing to log in directly to your PHP Droplet and run a single manual command.

]]>
2015-07-03T03:08:00+00:00