Script to remove all Asterisk call agents from all phone queues

At the newspaper we make heavy use of FreePBX and Asterisk to power our phone system. That includes the use of the call queue feature, where a caller interested in subscriptions or advertising or placing an obituary can be routed to the right place via a phone menu, hear an appropriate message, and then ring through to one of the staff members trained to help with that particular topic, or leave a voicemail in the right place if no one is available. We’re a small paper and our phone system is mostly quiet, but I have seen days where multiple calls are being handled simultaneously, and queues are very helpful.

One aspect of our setup that has taken some figuring out is having staff log in and out of the phone system so that they can be available to answer those queue calls at the right time.

Remembering to log in at the start of the day is fairly straightforward, though it’s still a habit all of us are developing. Remembering to log out at the end of the day is, for some reason, a bit more hit and miss; when my brain has decided it’s time to leave the office or stop working, logging out of the phone is frequently not top of mind, and apparently that’s often true for my coworkers as well.

It may not seem like a big deal to just let folks stay logged in all the time, but it can mean the difference between a caller sitting on hold for an extra minute or two while the phone system rings the phones of folks who have left for the day, and the caller more quickly getting a useful message and the option to leave a voicemail. We could address this through more complex conditional logic in our phone system setup, but for now I’m trying to address it in a way that mostly maintains user-level control.

So, based on some bits and pieces of scripts found on Stack Overflow and elsewhere, I put together this Bash script, which can log everyone out of all queues or remove a single extension from every queue:

#!/usr/bin/bash

# Remove an Asterisk agent from all queues, or all agents from all queues

Help()
{
   # Display Help
   echo "Remove Asterisk dynamic agents from queues."
   echo
   echo "Syntax: remove_from_queue.sh [-a] [-e <extension>] [-h]"
   echo "options:"
   echo "a         Remove all members from all queues."
   echo "e <123>   Specify an extension to be removed."
   echo "h         Print this help."
   echo
}

while getopts ":ahe:" option; do
   case $option in
      h) # display Help
         Help
         exit;;
      e) # Enter a specific extension to be removed
         member=$OPTARG;;
      a) # set all queue members to be removed
         all=true;;
      \?) # Invalid option
         echo "Error: Invalid option"
         exit 1;;
   esac
done

## all queues
declare -a queues=( $(asterisk -rx "queue show" | cut -d " " -f1) )

for q in "${queues[@]}"
do
    ## all dynamic agents in this queue
    declare -a members=( $(asterisk -rx "queue show $q" | grep "/" | cut -d "(" -f2 | cut -d " " -f1) )

    for m in "${members[@]}"
    do
       if [ -n "${member:-}" ]; then
          if [[ $m == *"$member"* ]]; then
              echo "Removing member Local/$member@from-queue/n from $q"
              cmd="queue remove member Local/$member@from-queue/n from $q"
              asterisk -rx "$cmd"
          fi
       else
          if [ "${all:-}" = "true" ]; then
              echo "Removing member $m from $q"
              cmd="queue remove member $m from $q"
              asterisk -rx "$cmd"
          fi
       fi
    done
done
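For example, to log everyone out of everything at once, or to remove a single extension (101 here is just a made-up example) from every queue it belongs to:

/usr/local/bin/remove_from_queue.sh -a
/usr/local/bin/remove_from_queue.sh -e 101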

I run it via cron like so:

# Remove all asterisk queue agents from all queues at the end of the day
0 18  *  *  *  root       /usr/local/bin/remove_from_queue.sh -a

Cloudflare helper scripts for nginx and ufw

I’ve been making more use of Cloudflare as a DNS and proxy/DoS protection service. I tend to use it by default now when I set up any kind of public-facing web application that might attract non-trivial traffic, store sensitive information, or where performance is a factor.

When spinning up a new server on an infrastructure platform like Digital Ocean, it’s almost certain that the IP address assigned to it is going to immediately see traffic from bad actors using automated attempts to find a web-based exploit. Even if you put Cloudflare in front of a web service and lock things down, connection attempts directly to the IP address will bypass their proxy and get through.

That’s where this first script comes in handy. Instead of allowing all TCP traffic to ports 80 and 443, I want to allow only traffic from known Cloudflare IP addresses. We can do this using the Ubuntu ufw firewall and Cloudflare’s published IP address blocks.

GadElKareem shared a script that I’ve adapted a bit to make use of ufw’s application profiles:

#!/usr/bin/env bash

set -euo pipefail

# Make sure only one copy of this script runs at a time
PIDFILE="/tmp/$(basename "${BASH_SOURCE[0]%.*}.pid")"
exec 200>"${PIDFILE}"
flock -n 200 || { echo "${BASH_SOURCE[0]} script is already running. Aborting."; exit 1; }
PID=$$
echo "${PID}" >&200

cd "$(dirname "$(readlink -f "${BASH_SOURCE[0]}")")"
CUR_DIR="$(pwd)"

# Fetch Cloudflare's published IPv4 and IPv6 ranges
wget -q https://www.cloudflare.com/ips-v4 -O /tmp/cloudflare-ips-v4
wget -q https://www.cloudflare.com/ips-v6 -O /tmp/cloudflare-ips-v6

# Allow web traffic from each Cloudflare range, tagged with a comment
for cfip in $(cat /tmp/cloudflare-ips-v4); do /usr/sbin/ufw allow from "$cfip" to any app "Nginx Full" comment "cloudflare"; done
for cfip in $(cat /tmp/cloudflare-ips-v6); do /usr/sbin/ufw allow from "$cfip" to any app "Nginx Full" comment "cloudflare"; done

It downloads Cloudflare’s published IPv4 and IPv6 address ranges and then adds ufw rules to allow traffic from those ranges. I run this on a cron job like so:

# Refresh cloudflare IPs in ufw
0 7 * * * /root/bin/cloudflare-only-web-ports-ufw.sh >/dev/null 2>&1

The final step is to remove any ufw rules that allow traffic through to ports 80 and 443 for any source IP. This kind of rule may or may not be in place as a part of your existing server configuration.
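If you’re not sure whether such a rule exists, ufw can list its rules by number so you can spot and delete any generic allow rule for the web ports; the rule number below is just an example and will differ on your system:

# List all rules with numbers, then delete the overly broad one
ufw status numbered
ufw delete 3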

The end result is that any web connection attempts not from Cloudflare will not be allowed through.

A second challenge with Cloudflare is making sure anything you do with visitor IP addresses found in access logs is using the actual original visitor IP instead of the Cloudflare proxy IP. An example would be using fail2ban to block accesses from a host that has tried too many times to gain unauthorized access to a server.

Fortunately, Cloudflare passes through the original visitor IP as a header, so we just have to make use of that in our logs. This script, originally shared by ergin on GitHub, will download the published Cloudflare IP addresses and generate an nginx-friendly configuration file that adjusts the “real IP” of the visitor for logging and other purposes:

#!/bin/bash

CLOUDFLARE_FILE_PATH=/etc/nginx/conf.d/cloudflare-realips.conf

echo "#Cloudflare" > $CLOUDFLARE_FILE_PATH;
echo "" >> $CLOUDFLARE_FILE_PATH;

echo "# - IPv4" >> $CLOUDFLARE_FILE_PATH;
for i in $(curl -s https://www.cloudflare.com/ips-v4); do
    echo "set_real_ip_from $i;" >> $CLOUDFLARE_FILE_PATH;
done

echo "" >> $CLOUDFLARE_FILE_PATH;
echo "# - IPv6" >> $CLOUDFLARE_FILE_PATH;
for i in $(curl -s https://www.cloudflare.com/ips-v6); do
    echo "set_real_ip_from $i;" >> $CLOUDFLARE_FILE_PATH;
done

echo "" >> $CLOUDFLARE_FILE_PATH;
echo "real_ip_header CF-Connecting-IP;" >> $CLOUDFLARE_FILE_PATH;

# Test the configuration and reload nginx
nginx -t && systemctl reload nginx

I run this on cron like so:

# Refresh cloudflare IPs and reload nginx
30 4 * * * /root/bin/cloudflare-ip-whitelist-sync.sh >/dev/null 2>&1

Because the config file is written to the conf.d/ directory under nginx’s main configuration directory, the default nginx configuration will pick up its contents without further action.
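For reference, the generated file ends up looking something like this (abbreviated here; the full set of ranges comes from Cloudflare’s published lists):

#Cloudflare

# - IPv4
set_real_ip_from 173.245.48.0/20;
set_real_ip_from 103.21.244.0/22;

# - IPv6
set_real_ip_from 2400:cb00::/32;

real_ip_header CF-Connecting-IP;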

Continuing with the example of using fail2ban to block unwanted traffic from abusive hosts, you can set up a set of custom Cloudflare ban/unban actions in a file like /etc/fail2ban/action.d/cloudflare.conf, using this version adapted from the original by Mike Rushton:

[Definition]
actionstart =
actionstop =
actioncheck =

actionban = curl -s -X POST -H 'X-Auth-Email: <cfuser>' -H 'X-Auth-Key: <cftoken>' \
            -H 'Content-Type: application/json' -d '{ "mode": "block", "configuration": { "target": "ip", "value": "<ip>" } }' \
            https://api.cloudflare.com/client/v4/user/firewall/access_rules/rules

actionunban = curl -s -o /dev/null -X DELETE -H 'X-Auth-Email: <cfuser>' -H 'X-Auth-Key: <cftoken>' \
              https://api.cloudflare.com/client/v4/user/firewall/access_rules/rules/$(curl -s -X GET -H 'X-Auth-Email: <cfuser>' -H 'X-Auth-Key: <cftoken>' \
              'https://api.cloudflare.com/client/v4/user/firewall/access_rules/rules?mode=block&configuration_target=ip&configuration_value=<ip>&page=1&per_page=1' | tr -d '\n' | cut -d'"' -f6)

[Init]
cftoken =
cfuser =

and then in the jail.local file you can set this up as the default action:

cfemail=<your cloudflare@email here.com>
cfapikey=<your cloudflare API key here>

# Define CF ban action without mailing
action_cf = cloudflare[cfuser="%(cfemail)s", cftoken="%(cfapikey)s"]

# Set default action to Cloudflare ban/unban
action = %(action_cf)s
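To confirm the action is wired up correctly, you can trigger a manual ban and unban from the command line; the jail name below is a placeholder for one of your own jails, and 192.0.2.1 is an address reserved for documentation:

# Manually ban a test IP and confirm a block rule appears in Cloudflare
fail2ban-client set nginx-http-auth banip 192.0.2.1

# Then remove it again
fail2ban-client set nginx-http-auth unbanip 192.0.2.1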

The end result is that bad actors never get past Cloudflare’s proxy protection to attempt additional foolishness directly on your server.

Thanks to all of the folks who created the original version of these scripts. If you have suggestions for improvement or your own fun Cloudflare tooling, please share!

Adobe InDesign script to pull newspaper stories from Airtable API

Over on my newspaper publisher blog, I recently shared some of the things we’ve been doing to automate pieces of the newspaper’s print layout process.

One of them that was in progress at the time was a script to fetch articles and story content from our Airtable-managed story database for faster placement on the page, instead of copying and pasting.

That script is now in production and, while it still has some rough edges, it’s also open sourced on GitHub. I thought I’d go into a little more detail here about how it works:

  1. Layout editor opens a page file they want to do layout on
  2. Layout editor runs InDesign script
  3. Script determines what page is being worked on based on the filename
  4. Script makes a call to a remote API with the page number as a query parameter, to see what stories are available and ready to be placed on that page, and gets them as a JSON data structure (see the example request after this list)
  5. Using a base story layer that exists in our page template, script creates a new story layer with the headline, subhead, byline, story content, etc. filled in from the JSON data
  6. Script finds and replaces Markdown syntax in the content with established InDesign styles for bold, italics, body subheaders, bullet points, etc.
  7. Layout editor drags the new layer(s) into place, adjusts dimensions, and marks the stories as placed
  8. Rinse and repeat
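To sketch what step 4 looks like: the script issues a simple GET request with the page number as a query parameter. The hostname and parameter name here are placeholders; the actual request and response contract is documented in the repository’s README:

# Hypothetical request for the stories ready to be placed on page 3
curl -s "https://example.test/api/stories?page=3"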

We’re also working toward adding support for images and captions/cutlines.

This turns what could be a process of tens or hundreds of clicks for a given story into just a few clicks. It saves more time on some stories than others, but especially for the ones that involved applying a lot of inline character styles, which were being lost or mangled during copy/paste, I think it’s a clear win.

A screen capture of an InDesign layout window running the discussed script and importing story layers onto the page.

I referred to a “remote API” above because even though our stories are managed in Airtable right now, I chose to introduce an intermediate API for the InDesign script to call for simplicity and so that we weren’t locked in to Airtable’s way of doing things.

In our case, it’s implemented as a single action controller in Laravel, which essentially proxies the query on to the Airtable API and maps out a new, simpler data structure from the result with some content cleanup thrown in along the way:

// Fetch the stories slated for this page of the next issue from Airtable
$airtableResults = $this->getAirtableStories($pageNumber, $nextIssueDate);

// Map each Airtable record into the simpler structure the InDesign script expects
$stories = $airtableResults->map(function (array $item) {
	$content = $this->processContent($item['fields']['Story Content']);
	return [
		'id' => $item['id'],
		'headline' => $this->extractHeadlineFromContent($content, $item['fields']['Story']),
		'subhead' => $this->extractSubtitleFromContent($content),
		'byline' => $this->getByline($item),
		'body' => $content,
	];
});

return response()->json([
	'status' => 'success',
	'count' => $stories->count(),
	'data' => $stories->toArray(),
]);

You can tell that there are some hacky things going on with how we represent headlines, subheads and bylines in the content of our stories, but that’s a blog post for another time.

Here’s what a response might look like:

{
  "status": "success",
  "count": 2,
  "data": [
    {
      "id": "record_id_1",
      "headline": "My first headline",
      "subhead": null,
      "byline": "From staff reports",
      "body": "Lectus viverra cubilia.",
      "image_url": "https://placehold.co/600x400",
      "cutline": "My image caption 1 here"
    },
    {
      "id": "record_id_2",
      "headline": "A second story",
      "subhead": "You should really read this",
      "byline": "By David Carr",
      "body": "Lectus viverra cubilia.",
      "image_url": null,
      "cutline": null
    }
  ]
}

The README at the root of the repository explains further how the API query and response should work and includes some basic installation instructions. (If you use this, I recommend including some light caching so that you don’t over-query the Airtable API, or wherever your stories are stored.)

I realize everyone’s layout and story management tools are different and that the likelihood that our workflow matches up with yours is slim…but just in case this is helpful to anyone else, I wanted to get it out there! Let us know what you think.

Use the WordPress REST API to create a custom taxonomy term

It’s unreasonably difficult to find information about working with custom taxonomies and their terms in the WordPress REST API. It would be easy to conclude that it’s not possible or that one has to create a custom endpoint to achieve that functionality.

It ends up being fairly straightforward, so I’m sharing what I learned here in case it saves someone else time.

The developer REST API reference endpoint list doesn’t make any obvious reference to custom taxonomies or terms. If you end up on the tags endpoint reference page, you’ll see that there’s a way to work with these terms in the standard post_tag taxonomy, but no way to specify a custom taxonomy during write operations. There’s actually a taxonomies endpoint reference page that doesn’t seem to be linked to from the main navigation anywhere, but can be found in a search; still, nothing on it about working with terms in those taxonomies.

But eventually you’ll find your way to the page Adding REST API Support For Custom Content Types, which has a section Registering A Custom Taxonomy With REST API Support that finally makes it clear that in order to work with custom taxonomies in the REST API, you have to enable/create an endpoint for that purpose.

In your custom taxonomy definition, it ends up being pretty simple with these additional arguments to the register_taxonomy call:

    'show_in_rest'          => true,
    'rest_base'             => 'my-taxonomy-api-slug',
    'rest_controller_class' => 'WP_REST_Terms_Controller',

The second two are optional: they let you define the REST API URL slug and the controller you want to use. If you leave them out, WordPress uses the default terms controller and a slug derived from your taxonomy definition.

With this code in place, you’ll now have two new REST API endpoints available:

 /wp/v2/my-taxonomy-api-slug
 /wp/v2/my-taxonomy-api-slug/(?P<id>[\d]+)

You can then essentially treat those the same way as the tags endpoints, getting/posting to create, update and retrieve terms in that custom taxonomy. For example, to create a new term:

POST https://example.test/wp-json/wp/v2/my-taxonomy-api-slug

slug:my-first-term
name:My First Term
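The same request as a curl command might look like the following; the credentials are placeholders, and I’m assuming here that you’re authenticating with a WordPress application password:

curl -X POST "https://example.test/wp-json/wp/v2/my-taxonomy-api-slug" \
  -u "admin:your-application-password" \
  -d "slug=my-first-term" \
  -d "name=My First Term"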

I’ve submitted a pull request to the WP API docs to add at least one pointer in the documentation to help others on a similar quest get there faster.

Updated May 3 @ 11:49 AM Eastern to note that the second two taxonomy registration options rest_base and rest_controller_class are optional to define, thanks to Marcus Kober for noting this.

WooCommerce Subscriptions API start_date bug workaround

The WooCommerce Subscriptions plugin has a bug where if you use its REST API endpoints to make updates to a subscription, it will almost always reset the start date of the subscription you are updating to the current date and time.

I reported the bug to the WooCommerce folks in November 2022 and I believe it has a GitHub issue filed. When I checked in about it recently I was told it’s a low priority to fix. I consider it somewhat serious for my purposes — unexpectedly overwriting/losing key data about a record that’s used in calculating user-facing financial transactions — so I’m documenting it here for others who might encounter it, along with a possible workaround.

The bug is in the plugin file includes/api/class-wc-rest-subscriptions-controller.php that processes the incoming API request. In particular, in the function prepare_object_for_database() there’s this code:

// If the start date is not set in the request, set its default to now.
if ( ! isset( $request['start_date'] ) ) {
	$request['start_date'] = gmdate( 'Y-m-d H:i:s' );
}

All of the valid dates contained in the $request array are subsequently copied into an array called $dates. Later in the same function, there’s this code:

if ( ! empty( $dates ) ) {
	...
	try {
		$subscription->update_dates( $dates );
	} catch ( Exception $e ) {
		...
	}
}

The implication is that even a completely unrelated API request, say one changing a meta field value or the subscription’s status, will run the date validation and update logic. And because the start date value is overridden to be the current date, any API request that updates an unrelated field will unintentionally reset the start date of the subscription.

Nothing about the documentation at https://woocommerce.github.io/subscriptions-rest-api-docs/?php#update-a-subscription indicates that a start_date value is required in the API requests. In fact, the example given in those docs where the status of a subscription is updated would trigger this bug.
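For instance, a hypothetical status-only update like the following (assuming the wc/v3 REST namespace and standard WooCommerce API credentials; the subscription ID and keys are placeholders) would silently reset the subscription’s start date:

curl -X PUT "https://example.test/wp-json/wc/v3/subscriptions/123" \
  -u "ck_your_key:cs_your_secret" \
  -H "Content-Type: application/json" \
  -d '{"status": "on-hold"}'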

I’ve noticed this bug surfacing even in regular wp-admin operations involving WooCommerce Subscriptions, as I think some of the logic used to put a subscription on hold or cancel it from within the admin interface calls the same internal functions.

My workaround for this bug, at least on the API client side, introduces an extra API request, and so is less than ideal for any kind of production or long-term use.

For every API request I make to the Subscriptions API update endpoint, if I’m not explicitly setting/changing the start_date field in my request, I first fetch the existing subscription record and then set the start_date field in my request to the current value.

// If we're not already setting start_date, preserve the subscription's existing value
if ('subscription' === $recordType && empty($updateValues['start_date'])) {
    $current_values = $wooApiFacade::find($recordId);
    $updateValues['start_date'] = Carbon::parse($current_values['start_date_gmt'])->format('Y-m-d H:i:s');
}

Hopefully they’ll fix this issue sooner rather than later so that users of the plugin don’t unexpectedly see subscription start dates overwritten.

Tools and tech we’re using to publish a print, online newspaper

Wow, it’s been over a month since I took ownership of a print and online newspaper here in my community. There’s a lot to say about that experience and what I’ve learned so far. In this post, I’ll focus on the tools and technology we’re using to operate this business. Some of these were in place before I came in; some are new in the last month.

I’m sharing this because (a) I generally enjoy the topic of if/how tools can make life and business easier, and (b) I hope it could be useful to someone else publishing a newspaper or building a media organization.

Print Layout and Design

It’s Adobe Creative Cloud all the way, for better or worse: InDesign for newspaper layout, Photoshop for image editing. Given the way our staff is set up and how our weekly newspaper production process works, almost everyone touches the newspaper pages at some point or another, so the monthly license costs to cover all of that are somewhat ouch. If there were a viable alternative to InDesign, we’d probably switch to it.

Issue and Story Budget Planning

We’re using an Airtable base that helps us record story ideas and plan our upcoming issues by tracking which articles are going to go where, what state they’re in, and all the associated data that goes with them such as photos, source info, internal notes, etc. It’s pretty great, and the real-time collaboration it makes possible is hard to beat. I think down the road we may move toward a custom Laravel-powered solution that allows for tighter integration of all of our business operations, but that’s a ways off.

Phone System

We’re using a self-hosted FreePBX (Asterisk) installation with the Sysadmin Pro and EndPoint Manager paid add-on modules. Digital Ocean had a 1-click installer on their marketplace that made it super fast to get going. We’re using VOIP.ms for our trunk lines, and they made porting in our DIDs very easy.

Having used Asterisk in a previous business I was already familiar with its architecture and features, but FreePBX meant I could configure everything via web interface instead of editing dialplan files – amazing. We have extensions, queues, interactive voice menus, voicemail speech to text transcription (using this tool) and more, and it sets up a nice foundation for future integration with other tools like our CRM data.

We’re using Yealink T31P and T33G VOIP phones, and so far CounterPath’s Bria Mobile has been the most compatible, feature-complete softphone for iOS that I’ve found.

Continue reading Tools and tech we’re using to publish a print, online newspaper

Define, fire and listen for custom Laravel model events within a trait

It took me some time to figure out the right way to define, fire and listen for custom events for a Laravel model using a trait, so I’m writing down how I did it in case it’s helpful to others.

Let’s say you have a model Post that you set up as having a “draft” status by default, but eventually will have a status of “publish”. Let’s also say you want to make the act of publishing a post a custom model event that can be listened for in addition to the standard model events like “created” or “updated”. And let’s say you want to do all of this using a trait so that you can apply the same logic to another model in the future, such as comments on the post, without repeating yourself.

Here’s what my Post model might look like:

<?php

namespace App\Models;

use Illuminate\Database\Eloquent\Model;

class Post extends Model
{
    //
}

Let’s create a Publishable trait that can be applied to this model:

<?php

namespace App\Models\Traits;

trait Publishable
{
    // Add to the list of observable events on any publishable model
    public function initializePublishable()
    {
        $this->addObservableEvents([
            'publishing',
            'published',
        ]);
    }

    // Create a publish method that we'll use to transition the 
    // status of any publishable model, and fire off the before/after events
    public function publish()
    {
        if (false === $this->fireModelEvent('publishing')) {
            return false;
        }
        $this->forceFill(['status' => 'publish'])->save();
        $this->fireModelEvent('published');
    }

    // Register the existence of the publishing model event
    public static function publishing($callback)
    {
        static::registerModelEvent('publishing', $callback);
    }

    // Register the existence of the published model event
    public static function published($callback)
    {
        static::registerModelEvent('published', $callback);
    }
}

This new trait can now be applied to the Post model:

Continue reading Define, fire and listen for custom Laravel model events within a trait

CrowdTangle API SDK in PHP

After not finding anyone else who had done so, I created a minimal PHP implementation of the CrowdTangle API, which I needed anyway for a project I’m working on.

Example usage syntax:

$client = new ChrisHardie\CrowdtangleApi\Client($accessToken);

// get lists
$client->getLists();

// get accounts in a list
$client->getAccountsForList($listId);

// get posts
$client->getPosts([
    'accounts' => '12345678',
    'startDate' => '2022-03-01',
]);

// get a single post
$client->getPost($postId);

I’m sure there’s plenty to improve but I hope it’s helpful to anyone working with CrowdTangle in PHP.

Locking Adobe InDesign files for editing in shared network or cloud folders

Today I had a chance to help a print publication solve a workflow challenge that is apparently very common.

If you open an Adobe InDesign layout file from a local folder on your computer, the software creates an Adobe InDesign Lock File (.idlk) in that folder, which prevents the same file from being opened by another copy of InDesign. But if the file exists in a folder that is shared via network or cloud service, InDesign does not create a lock file when the InDesign file is opened for editing. This includes Adobe’s own “Creative Cloud” file sharing option.

There may be good technical reasons for not creating or syncing lock files across network folders, but the end result is that multiple users can decide to open the same file at the same time, and whoever saves their changes last will “win,” with the other user’s changes being lost.

In researching this, I found it was not some edge case. There seem to be many newsrooms and other organizations struggling with this every day. They work around it by doing things like copying the InDesign files to a local folder, making their changes, and then uploading back to the shared folder, hoping that the internal communication about such things is sufficient along the way. This Adobe Community Support forum thread illustrates the pain points involved.

Thankfully, Adobe InDesign is scriptable, and so Max Schmidt and the folks at t3n created a script that implements a locking system for network-shared InDesign files. And there was much rejoicing!

After installing the script on all the devices that will be accessing shared InDesign files, anyone trying to open a file that someone is already editing gets an error message and then the file is closed. (I submitted a Pull Request to expand the README documentation so the installation process is a bit clearer.)

In the long run, Adobe needs to solve this problem in a more standard way for its users. But this script is a great alternative, and I hope bringing some additional attention to it helps out other publications or newsrooms that might be struggling with the same challenge.

Updated April 26, 2023: the current version of the script we are using is here:

https://github.com/CivicSparkMedia/indesign-scripts/blob/main/Startup%20Scripts/prevent_multiple_opens.jsx

We found that we had to place this version in the Startup Scripts directory for it to keep working.

An example of why RSS is useful and important

Let’s say I want to be an informed, engaged local citizen. Let’s say my town has a website set up where they post local updates and alerts that I might care about.

Specifically, let’s say there was a water main break a few days ago that led to everyone in the town being asked to boil water, and that I wanted to know when that boil advisory was over.

Sure, the information is right there on their website, so one option is to just reload the website all day and wait for an alert to pop up:

Screenshot of an alert on a city website advising that a boil advisory has been lifted

But what if my goal and wish is to have access to that information in a programmatic way? Such as, say, RSS, the technology that has long allowed websites to share information with each other in a format that other websites and software can read?

Once I have that, I could read the alert in a news reader, pipe it to a Slack or Discord channel, pop up a notification on my computer desktop, flash the lights using IFTTT or Zapier, or do whatever else is most useful to me. The steps would be:

  1. Get RSS feed link.
  2. Do whatever I want to with the information in the feed.

But this town does not offer an RSS feed on its website. Not for alerts, not for regular community news, nothing. (To make matters worse, it’s powered by web software that appears aimed at helping smaller towns and cities have a web presence, so we know this missing feature is now being repeated in communities across the world!)

What are my other options?

Continue reading An example of why RSS is useful and important