
Node on RHEL7

In August 2017 we implemented Swarthmore College’s PDF accessibility tool for Moodle. This required us to stand up a Node.js application, which was a new experience for us. Our environment was RHEL7, and our preferred web server Apache.

Application deployment

We followed our usual Capistrano principles for deploying the application. We created a simple project with all the Capistrano configuration and then mounted the Swarthmore project as a git submodule in a top-level directory named public. We configured the Capistrano npm module to use public as its working directory to ensure that the various node modules are installed on deployment.
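For reference, the wiring looks something like this, assuming the capistrano-npm gem (the gem and paths here are a sketch of our setup, not a drop-in config):

# Capfile
require 'capistrano/npm'

# config/deploy.rb: run npm install inside the submodule directory
set :npm_target_path, -> { release_path.join('public') }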

PM2

PM2 is a Node.js process manager; its role here is to keep the application running and to start it at boot. To use it, we first install it globally:

sudo npm install -g pm2

Next, we create an ecosystem.json file. This needs to be in the root of the project repository; since we’re using Capistrano we define it in shared/public and symlink it on deploy. This is what ours looked like:

{
    "apps": [{
        "name": "{NAME}",
        "script": "./index.js",
        "cwd": "/var/www/{NAME}/current/public",
        "error_file": "/var/www/{NAME}/current/logs/{NAME}.err.log",
        "out_file": "/var/www/{NAME}/current/logs/app.out.log",
        "exec_mode": "fork_mode"
    }]
}
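The symlink itself rides on Capistrano’s linked files mechanism. Assuming the file lives at shared/public/ecosystem.json on the server, a line like this in config/deploy.rb handles it:

# link shared/public/ecosystem.json into each release
append :linked_files, 'public/ecosystem.json'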

All straightforward. We create a new user on the Unix host to own this job and have it start the process:

sudo -u {USER} pm2 start ecosystem.json

Next, we run a second command, which generates the systemd configuration needed to relaunch the process at boot:

sudo -u {USER} pm2 startup
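That command doesn’t change anything by itself; it prints a one-liner to run as root, and pm2 save then records the process list so PM2 can resurrect it at boot. The exact output varies by platform, but the sequence is roughly:

# pm2 startup prints a command similar to this, which you run as root:
sudo env PATH=$PATH:/usr/bin pm2 startup systemd -u {USER} --hp /home/{USER}

# then persist the current process list so it's restored after a reboot
sudo -u {USER} pm2 save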

Apache

Having done all that, the Node application is happily running on port 8080. We’re not interested in exposing that port in our environment, so we add a proxy pass to our standard Apache configuration for that virtual host:

        ProxyRequests Off
        ProxyPass / http://localhost:8080/
        ProxyPassReverse / http://localhost:8080/

We’ll have to revisit this if we ever want to have a second node application on the system, but for now it works.
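If we do, one plausible approach (sketched here, not something we’ve battle-tested) is to give each application its own port and route by path; the paths below are illustrative:

        ProxyPass /pdf/ http://localhost:8080/
        ProxyPass /other-app/ http://localhost:8081/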

Featured image by Hermann Luyken [CC BY-SA 3.0 or GFDL], from Wikimedia Commons.

Quick note on agent forwarding with Docker

I’ve been building a CI/deployment stack with GitLab CI, Docker, and Capistrano. I’m hoping to give a talk on this in the near future, but I wanted to add a brief note on a problem I solved using SSH agent forwarding in Docker in case anyone else runs into it.

In brief, I have code flowing like this:

  1. Push from local development environment to GitLab
  2. GitLab CI spins up a Docker container running Capistrano
  3. Capistrano deploys code to my staging environment via SSH

Doing this elegantly requires a deploy user on the staging environment whose SSH key has read access to the repository on GitLab. I don’t want to deploy that private key to the remote staging server; since the deploy tasks fire from the Docker container, we bake the private key into the image instead:

# setup deploy key
RUN mkdir /root/.ssh
ADD repo-key /root/.ssh/id_rsa
RUN chmod 600 /root/.ssh/id_rsa
ADD config /root/.ssh/config
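Building and publishing the image is the usual Docker flow; the registry and tag below are placeholders for your own:

# run from the directory containing the Dockerfile, repo-key, and config
docker build -t registry.example.com/deploy/capistrano .
docker push registry.example.com/deploy/capistrano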

That part is relatively straightforward. Copy the private key into the directory where you’re building the Docker image (do not commit it to version control) and you’re good to go. Forwarding the key is trickier. First, you’ll have to tell Capistrano to use agent forwarding in each deploy step. In Capistrano 3.4.0 that looks like this:

set :ssh_options, {
  forward_agent: true
}

Next, you’ll have to bootstrap agent forwarding in your CI task. In a CI environment the Docker container starts fresh every time and has even less state than usual, so you need to start the agent and add your identity on every run. See StackOverflow for a long discussion of agents and identities. TL;DR: add this step:

before_script:
 - eval `ssh-agent -s` && ssh-add

I experimented with adding that command to my Dockerfile, but it didn’t work; the most common error was “Could not open a connection to your authentication agent.” The command has to be executed in the running container, which in this case means the CI task configuration, or the key won’t be forwarded and clones on the staging environment will fail with a publickey denied error.
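For completeness, the whole deploy job in .gitlab-ci.yml ends up looking roughly like this; the image name, stage, and branch are placeholders for whatever your pipeline uses:

deploy_staging:
  stage: deploy
  image: registry.example.com/deploy/capistrano
  before_script:
    - eval `ssh-agent -s` && ssh-add
  script:
    - bundle exec cap staging deploy
  only:
    - master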