I wanted a sign I could mount in the window at the front entrance of our newspaper’s office to display customizable information about our office’s current status and other useful info.
If you’ve ever researched this category of products, you know there are many vendors out there who will happily sell you very expensive devices and supporting accessories to make this happen. This is “call us for pricing” level product sales meant for hotels, convention centers, malls and large office buildings, not for a small community newspaper on a budget.
Even the few standalone products I was able to find for sale on Amazon and elsewhere seemed pretty janky. I’d have to download an app or install a hub with a proprietary communication technology or be constrained to a certain number of characters and layout, or some other limitation I wasn’t quite happy with.
I decided to ask the Fediverse:
(I learned while writing this that Mastodon posts do not seem to embed in WordPress seamlessly – tragic!)
At first, the responses were reinforcing how difficult this might be to find, but then my colleague Scott Evans made a comment that sparked the eventual solution:
I feel like an e-ink reader of some kind might be your best bet (one that can run Android apps). Then you could install a kiosk style browser and update the page it’s pointing at. Something from Onyx? https://toot.scott.ee/@scott/statuses/01JKBE3QZWEZCMN2QZZH75R6BY
After some searching I found this wonderful blog post by Jan Miksovsky, MomBoard: E-ink display for a parent with amnesia. Jan described using a BOOX Note Air2 Series e-ink device to create a dynamic sign display for Jan’s mom. They used a web browser to load a simple website that would display the latest customized messages based on details configured in a web interface elsewhere, source code here. I love it!
I emailed Jan with some questions about the setup and Jan was kind enough to write back with a clarification that the original BOOX ability to launch a web browser on device restart appears to have been removed, but a request is in to restore this functionality.
Jan also had a helpful thought about how to launch the browser into full-screen mode, which I haven’t tried yet: “That’s controlled by whether a website has a web app manifest defined for it. If it does, then the browser ‘Add to Home Screen’ command should allow you to add the site to your home screen. When launched, it should open in full screen mode.”
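For reference, a bare-bones manifest for a sign page might look something like this (the name and start_url are placeholders, and the file gets referenced from the page’s HTML head with a <link rel="manifest" href="/manifest.json"> tag):
{
  "name": "Office Status Sign",
  "start_url": "/",
  "display": "fullscreen"
}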
I found a used Note Air2 device on eBay for $189 plus shipping and tax. A non-trivial expense, but far less expensive than any of the other options I was finding.
When it arrived I turned off all of the “auto-sleep” type settings so that the device would basically stay on all the time. I tested out having the web browser stay open on one page for days on end, and it worked! Now I just needed to figure out my own web interface to manage the sign message.
So you’ve decided to publish timely content on the Internet but your website does not have an RSS feed.
Or even worse, you have played some role in designing or building a tool or service that other people use to publish timely content on the Internet, and you unforgivably allowed it to ship without support for RSS feeds.
And you probably just assume that a few remaining RSS feed nerds out there are just quietly backing away in defeat. “They’ll just have to manually visit the site every day if they want updates,” you rationalize. “They can download our app and get push notifications instead,” you somehow say with a straight face.
No.
I don’t know who you are. I don’t know what you want. If you are looking for clicks, I can tell you I don’t have any clicks left to give.
But what I do have are a very particular set of skills, skills I have acquired over a very long career. Skills that make me a nightmare for people like you.
If you publish your content in an RSS feed now, that’ll be the end of it. I will not look for you, I will not pursue you, I will simply subscribe to your feed.
But if you don’t, I will look for you, I will find you, and I will make an RSS feed of your website.
And here’s how I’ll do it.
(Psst, this last bit is a playful reference to the movie Taken, I am a nice person IRL and don’t actually tend to threaten or intimidate people around their technology choices. But my impersonation of a militant RSS advocate continues herein…)
Okay, heh, simple misunderstanding, you actually DO have an RSS feed but you do not mention or link to it anywhere obvious on your home page or content index. You know what, you’re using a <link rel="alternate" type="application/rss+xml" href="..." /> tag, so good for you.
I found it, I’m subscribing, we’re moving on, but for crying out loud please also throw a link in the footer or something to help the next person out.
You’re hiding your RSS feed URL structure in documentation or forum responses
Oh so you don’t have an advertised RSS feed or <link> tag but you do still support RSS feeds after all, it’s just that you tucked that fact away in some kind of obscure knowledgebase article or a customer support rep’s response in a community forum?
Fine. I would like you to feel a bit bad about this and I briefly considered writing a strongly worded email to that end, but instead I’m using the documented syntax to grab the link, subscribe and move on. Just…do better.
You decided to put your RSS feed behind a Cloudflare access check. Or you aggressively cached it. Or your web application firewall is blocking repeated accesses to the URL from strange network locations and user agents, which is exactly the profile of feed readers.
I understand, we can’t always remember to give every URL special attention.
But I will email your site administrator or tech support team about it, and you will open a low-priority ticket or send my request to your junk folder, and I will keep following up until you treat your RSS feed URLs with the respect they deserve.
You somehow ended up with some terrible, custom, non-standard implementation of RSS feed publishing and you forgot to handle character encoding or media formats or timestamps or weird HTML tags or some other thing that every other established XML parsing and publishing library has long since dealt with and now you have an RSS feed that does not validate.
Guess what, RSS feeds that don’t validate might as well be links to a Rick Astley video on YouTube given how picky most RSS feed readers are. It’s just not going to work.
But I will work with that. I will write a cron job that downloads your feed every hour and republishes it with fixes needed to make it valid.
#!/bin/sh
# Define variables for URLs, file paths, and the fix to apply
RSS_FEED_URL="https://example.com/rss.xml"
TEMP_FILE="/tmp/rss_feed.broken"
OUTPUT_FILE="/path/to/feeds/rss_feed.xml"
SEARCH_STRING="Parks & Recreation"
REPLACEMENT_STRING="Parks &amp; Recreation"
# Download the RSS feed
curl -s -L -o "$TEMP_FILE" "$RSS_FEED_URL"
# Fix invalid characters or strings in the RSS feed
perl -pi -e "s/$SEARCH_STRING/$REPLACEMENT_STRING/gi;" "$TEMP_FILE"
# Move the fixed file to the desired output location
cp "$TEMP_FILE" "$OUTPUT_FILE"
# Clean up the temporary file
rm "$TEMP_FILE"
If I want to embarrass you in front of your friends I will announce the new, valid feed URL publicly and let others use it too. But really, you should just fix your feed and use a standard library.
You programmatically publish your timely content in a structured format from a structured source but offer no RSS feed? What a terrible decision you’ve made for the distribution of your information, for your readers, for your stakeholders, and for the whole world.
But ’tis no matter, I’ll have an RSS feed of your site up and running shortly using a DOM crawler (e.g. Symfony’s DomCrawler component in PHP) that can navigate HTML and XML documents using XPath, HTML tags or CSS selectors to get me what I want.
Your stuff’s generated using an HTML/CSS template with a loop? Great. Or it’s even just in a plain HTML table? Let’s do this.
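As a sketch of what that looks like with DomCrawler (the URL and CSS selectors here are invented; the real ones come from inspecting the target page’s markup):
<?php
require 'vendor/autoload.php';

use Symfony\Component\DomCrawler\Crawler;

// Fetch the HTML index page and hand it to the crawler, with the base URL
// so relative links can be resolved.
$html = file_get_contents('https://example.com/news');
$crawler = new Crawler($html, 'https://example.com/news');

$items = '';
$crawler->filter('article.story')->each(function (Crawler $node) use (&$items) {
    $title = $node->filter('h2 a')->text();
    $link  = $node->filter('h2 a')->link()->getUri(); // resolves relative URLs

    $items .= '<item><title>' . htmlspecialchars($title) . '</title>'
        . '<link>' . htmlspecialchars($link) . '</link></item>';
});

// Emit a minimal RSS 2.0 document built from the scraped items
echo '<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"><channel>'
    . '<title>Example News (unofficial feed)</title>'
    . '<link>https://example.com/news</link>'
    . '<description>Feed generated by scraping the site</description>'
    . $items
    . '</channel></rss>';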
You use asynchronous requests to fetch JSON and render your content
You were right there at the finish line, buddy, and you choked.
You had everything you needed — structured data storage, knowledge of how to convey data from one place to another in a structured format, probably some API endpoints that already serve up exactly what you need — all so you can dynamically render your webpage with a fancy pagination system or a super fast search/filter feature.
And yet, still, you somehow left out publishing your content in one of the most commonly used structured data formats on the web. (Psst, that’s RSS that I’m talking about. Pay attention, this whole thing is an RSS thing!)
So I will come for your asynchronous requests. I will use my browser’s dev tools to inspect the network requests made when you render your search results, and I will pick apart the GET or POST query issued and the headers and data that are passed with it, and I will strip out the unnecessary stuff so that we have the simplest query possible.
If it’s GraphQL, I will still wade through that nonsense. If it uses a referring URL check as some kind of weak authentication, I will emulate that. If you set an obscurely named cookie on first page visit that is required to get the content on page visit two, be assured that I will find and eat and regurgitate that cookie.
And then I will set up a regularly running script to ingest the JSON response and create a corresponding RSS feed that you could have been providing all along. I can do this all day long.
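The end result is usually something like this sketch, where the endpoint URL, header values and JSON field names are all stand-ins for whatever the dev tools inspection turned up:
<?php
// Replay the site's own JSON search request, including the headers it expects,
// and emit RSS <item> elements from the response.
$context = stream_context_create(['http' => [
    'method' => 'GET',
    'header' => "Referer: https://example.com/search\r\n"
              . "Cookie: obscure_session=abc123\r\n"
              . "User-Agent: Mozilla/5.0\r\n",
]]);

$response = json_decode(file_get_contents(
    'https://example.com/api/search?sort=newest', false, $context
), true);

foreach ($response['results'] ?? [] as $result) {
    printf(
        "<item><title>%s</title><link>%s</link><pubDate>%s</pubDate></item>\n",
        htmlspecialchars($result['title']),
        htmlspecialchars($result['url']),
        date(DATE_RSS, strtotime($result['published_at']))
    );
}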
Even though I build my content fetching tools to be good citizens on the web — at most checking once per hour, exponential backoff if there are problems, etc. — you may still decide to block random web requests for random reasons. Maybe it’s the number of queries to a certain URL, maybe it’s the user agent, or maybe it’s because they’re coming from the IP address of a VPS in a data center that you assume is attacking you.
Fine. Whatever. I used to spend time trying to work around individual fingerprinting and blocking mechanisms, but I got tired of that. So now I pay about $25/year for access to rotating residential IP addresses to serve as proxies for my requests.
I don’t like supporting these providers but when I come to make an RSS feed of your content, it will look like it’s from random Jane Doe’s home computer in some Virginia suburb instead of a VPS in a data center.
You don’t put timely new content on your website, but you sure do love your fancy email newsletters with the timely updates your audience is looking for. You like them so much that you favor them over every other possible content distribution channel, including a website with an RSS feed.
That’s okay. Fortunately, many email newsletter platforms have figured out that offering a web-based archive of past newsletter campaigns is a valuable service they can provide to, um, people like you, and many of those have figured out that including an RSS feed of said campaigns is worthwhile too.
Mailchimp puts a link to one right there in the header of their web-based view of an individual campaign send, unless (boo) a list owner has turned off public archives, which you should seriously NOT do unless your list really needs to be private/subscribers-only.
You’re using an email newsletter platform that doesn’t have RSS feeds built in?
Please, allow me to make one for you. I will subscribe to your email newsletter with a special address that routes messages to a WordPress site I set up just for the purpose of creating public posts with your newsletter content. Once that’s done, the RSS part is easy given that WordPress has that support built right in.
I’ve detailed the technical pieces of that particular puzzle in a separate post:
Your mobile app is where you put all the good stuff
No RSS feed. No email newsletter. The timely content updates are locked up in your mobile app where the developers involved have absolutely no incentive to provide alternative channels you didn’t ask for.
No hope? This is probably where a reasonable person would say “nah, I’m out.” Me? I’m still here.
The nice thing is that if a mobile app is reasonably well designed and supports any kind of information feed, push notifications, etc. then it almost has to have some kind of API driving that. Sometimes the APIs are gloriously rich in the information they make available, but of course usually also completely undocumented and private.
I’ve found Proxyman to be an indispensable tool for what I need to do next:
Intercept all web traffic from my mobile device and route it through my laptop.
Install a new certificate authority SSL certificate on my mobile device that tells it to trust my laptop as a source of truth when verifying the validity of SSL certificates for certain domains.
Start making requests from your mobile app that load the information I want to be in an RSS feed.
Review the HTTP traffic being proxied through my laptop and look for a request that has a good chance of being what I want.
Tell my laptop to intercept not only the traffic metadata but also the SSL-encrypted traffic to the URL I care about.
Inspect the request and response payload of the HTTPS request and learn what I need to know about how the API powering that mobile app feed operates.
It’s an amazing and beautiful process, and if it’s wrong I don’t want to be right.
If too many mobile app developers get wind of this kind of thing happening, they’ll start using SSL certificate pinning more, which builds into the app the list of certificates/certificate authorities that can be trusted, removing the ability to intercept traffic by installing a new one. So, you know, shhh. (Joking aside, secure all the things, use certificate pinning…but then also publish an RSS feed!)
Once I have your mobile app’s API information, it’s back to the mode of parsing a JSON response noted above, and generating my own RSS feed from it.
There are other escalations possible, of course. Taking unstructured data and feeding it to an LLM to generate structured data. Recording and replaying all the keystrokes, mouse moves and clicks of a web browsing session. Convincing an unsuspecting IT department worker to build a special export process without really mentioning it to anyone. Scraping data from social media platforms.
I’ve done it all, none of it is fun, and none of it should be necessary.
Publish your info in a structured, standard way.
What do I do with all of these feeds?
Thanks for asking. I own a newspaper. We publish in print and online. Our job is to be aware of what’s happening in the community so we can distill what’s most important and useful into reporting that benefits our readers. We have a very small staff tasked with keeping up with a lot of information. Aggregating at least some of it through RSS feeds has made a huge difference in our work.
Oh, you meant at the technical level? I have one Laravel app that is just a bunch of custom classes doing the stuff above and outputting standard RSS feeds. I have another Laravel app that aggregates all of the sources of community information I can find, including my custom feeds, into a useful, free website and a corresponding useful, free daily email summary. I also read a lot of it in my own feed reader as a part of my personal news and information consumption habits.
Related writings
I’ve written a lot of stuff elsewhere about how to unlock information from one online source or another, and the general concepts of working on a more open, interoperable, standards-based web. If you enjoyed the above, you might like these:
At the newspaper we make heavy use of FreePBX and Asterisk to power our phone system. That includes the use of the call queue feature, where a caller interested in subscriptions or advertising or placing an obituary can be routed to the right place via a phone menu, hear an appropriate message, and then ring through to one of the staff members trained to help with that particular topic, or leave a voicemail in the right place if no one is available. We’re a small paper and our phone system is mostly quiet, but I have seen days where multiple calls are being handled simultaneously, and queues are very helpful.
One aspect of our setup that has taken some figuring out is having staff log in and out of the phone system so that they can be available to answer those queue calls at the right time.
Remembering to log in at the start of the day is fairly straightforward, though it’s still a habit all of us are developing. Remembering to log out at the end of the day is, for some reason, more hit and miss; when my brain has decided it’s time to leave the office or stop working, logging out of the phone is frequently not top of mind, and apparently that’s often true for my coworkers as well.
It may not seem like a big deal to just let folks stay logged in all the time, but it can mean the difference between a caller sitting on hold for an extra minute or two while the phone system rings the phones of folks who have left for the day, and the caller more quickly getting a useful message and the option to leave a voicemail. We could address this through more complex conditional logic in our phone system setup, but for now I’m trying to address it in a way that mostly maintains user-level control.
So, based on some other bits and pieces of scripts found on Stack Overflow and elsewhere, I put together this Bash script that logs everyone out of all queues:
#!/usr/bin/bash
# Remove an Asterisk agent from queues, or all agents from all queues

Help()
{
    # Display Help
    echo "Remove Asterisk dynamic agents from queues."
    echo
    echo "Syntax: remove_from_queue.sh [-a] [-e <extension>] [-h]"
    echo "options:"
    echo "a     Remove all members from all queues."
    echo "e     Specify an extension to be removed, e.g. -e 123."
    echo "h     Print this help."
    echo
}

while getopts ":ahe:" option; do
    case $option in
        h) # Display help
            Help
            exit;;
        e) # A specific extension to be removed
            member=$OPTARG;;
        a) # Set all queue members to be removed
            all=true;;
        \?) # Invalid option
            echo "Error: Invalid option"
            exit;;
    esac
done

## All queues
declare -a queues=( $(asterisk -rx "queue show" | cut -d " " -f1) )

for q in "${queues[@]}"
do
    ## All agents in this queue
    declare -a members=( $(asterisk -rx "queue show $q" | grep "/" | cut -d"(" -f2 | cut -d" " -f1) )
    for m in "${members[@]}"
    do
        if [ ! -z "$member" ]; then
            # Remove only the specified extension
            if [[ $m == *"$member"* ]]; then
                echo "Removing member Local/$member@from-queue/n from $q"
                cmd="queue remove member Local/$member@from-queue/n from $q"
                asterisk -rx "$cmd"
            fi
        else
            # Remove every member found
            if [ "$all" = "true" ]; then
                echo "Removing member $m from $q"
                cmd="queue remove member $m from $q"
                asterisk -rx "$cmd"
            fi
        fi
    done
done
I run it via cron like so:
# Remove all asterisk queue agents from all queues at the end of the day
0 18 * * * root /usr/local/bin/remove_from_queue.sh -a
I’ve been making more use of Cloudflare as a DNS and proxy/DoS protection service. I tend to use it by default now when I set up any kind of public-facing web application that might attract non-trivial traffic, store sensitive information, or where performance is a factor.
When spinning up a new server on an infrastructure platform like Digital Ocean, it’s almost certain that the IP address assigned to it is going to immediately see traffic from bad actors using automated attempts to find a web-based exploit. Even if you put Cloudflare in front of a web service and lock things down, connection attempts directly to the IP address will bypass their proxy and get through.
That’s where this first script comes in handy. Instead of allowing all tcp network activity to ports 80 and 443 to get through, I want to only allow traffic from known Cloudflare IP addresses. We can do this using the Ubuntu ufw firewall and Cloudflare’s published IP address blocks.
GadElKareem shared a script that I’ve adapted a bit to make use of ufw’s application profiles:
#!/usr/bin/env bash
set -euo pipefail
# lock it
PIDFILE="/tmp/$(basename "${BASH_SOURCE[0]%.*}.pid")"
exec 200>${PIDFILE}
flock -n 200 || { echo "${BASH_SOURCE[0]} script is already running. Aborting." ; exit 1 ; }
PID=$$
echo ${PID} 1>&200
cd "$(dirname $(readlink -f "${BASH_SOURCE[0]}"))"
CUR_DIR="$(pwd)"
wget -q https://www.cloudflare.com/ips-v4 -O /tmp/cloudflare-ips-v4
wget -q https://www.cloudflare.com/ips-v6 -O /tmp/cloudflare-ips-v6
for cfip in `cat /tmp/cloudflare-ips-v4`; do /usr/sbin/ufw allow from $cfip to any app "Nginx Full" comment "cloudflare"; done
for cfip in `cat /tmp/cloudflare-ips-v6`; do /usr/sbin/ufw allow from $cfip to any app "Nginx Full" comment "cloudflare"; done
It basically downloads known Cloudflare IPv4 and IPv6 addresses and then adds ufw rules to allow traffic from those addresses. I run this on a cron job like so:
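Cloudflare’s published ranges change infrequently, so the schedule isn’t critical; a weekly system crontab entry along these lines works (the script path and timing here are illustrative):
# Refresh ufw allow rules for Cloudflare's published IP ranges
0 4 * * 1 root /usr/local/bin/cloudflare-ufw.sh > /dev/null 2>&1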
The final step is to remove any ufw rules that allow traffic through to ports 80 and 443 for any source IP. This kind of rule may or may not be in place as a part of your existing server configuration.
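For example, if the server was originally set up with a blanket allow rule for the Nginx application profile, something like this would list and remove it (rule names and numbers will vary by configuration):
# See existing rules with their numbers
ufw status numbered
# Remove a blanket allow rule by its app profile name...
ufw delete allow "Nginx Full"
# ...or by its number from the list above
ufw delete 3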
The end result is that any web connection attempts not from Cloudflare will not be allowed through.
A second challenge with Cloudflare is making sure anything you do with visitor IP addresses found in access logs is using the actual original visitor IP instead of the Cloudflare proxy IP. An example would be using fail2ban to block accesses from a host that has tried too many times to gain unauthorized access to a server.
Fortunately, Cloudflare passes through the original visitor IP as a header, so we just have to make use of that in our logs. This script, originally shared by ergin on GitHub, will download the published Cloudflare IP addresses and generate an nginx-friendly configuration file that adjusts the “real IP” of the visitor for logging and other purposes:
#!/bin/bash
CLOUDFLARE_FILE_PATH=/etc/nginx/conf.d/cloudflare-realips.conf
echo "#Cloudflare" > $CLOUDFLARE_FILE_PATH;
echo "" >> $CLOUDFLARE_FILE_PATH;
echo "# - IPv4" >> $CLOUDFLARE_FILE_PATH;
for i in `curl -s https://www.cloudflare.com/ips-v4/`; do
echo "set_real_ip_from $i;" >> $CLOUDFLARE_FILE_PATH;
done
echo "" >> $CLOUDFLARE_FILE_PATH;
echo "# - IPv6" >> $CLOUDFLARE_FILE_PATH;
for i in `curl -s https://www.cloudflare.com/ips-v6/`; do
echo "set_real_ip_from $i;" >> $CLOUDFLARE_FILE_PATH;
done
echo "" >> $CLOUDFLARE_FILE_PATH;
echo "real_ip_header CF-Connecting-IP;" >> $CLOUDFLARE_FILE_PATH;
#test configuration and reload nginx
nginx -t && systemctl reload nginx
Because the config file is written to the conf.d directory inside the main nginx configuration directory, the default nginx configuration will pick up its contents without further action.
Continuing with the example of using fail2ban to block unwanted traffic from abusive hosts, you can set up a set of custom Cloudflare ban/unban actions in a file like /etc/fail2ban/action.d/cloudflare.conf, using this version adapted from the original by Mike Rushton:
and then in the jail.local file you can set this up as the default action:
cfemail=<your cloudflare@email here.com>
cfapikey=<your cloudflare API key here>
# Define CF ban action without mailing
action_cf = cloudflare[cfuser="%(cfemail)s", cftoken="%(cfapikey)s"]
# Set default action to Cloudflare ban/unban
action = %(action_cf)s
The end result is that bad actors never get past Cloudflare’s proxy protection to attempt additional foolishness directly on your server.
Thanks to all of the folks who created the original version of these scripts. If you have suggestions for improvement or your own fun Cloudflare tooling, please share!
One project in progress at the time was a script to fetch articles and story content from our Airtable-managed story database for faster placement on the page, instead of copying and pasting.
That script is now in production and, while it still has some rough edges, is also open sourced on GitHub. I thought I’d go into a little more detail here about how it works:
Layout editor opens a page file they want to do layout on
Layout editor runs InDesign script
Script determines what page is being worked on based on the filename
Script makes a call to a remote API with the page number as a query parameter, to see what stories are available and ready to be placed on that page, and gets them as a JSON data structure
Using a base story layer that exists in our page template, script creates a new story layer with the headline, subhead, byline, story content, etc. filled in from the JSON data
Script finds and replaces Markdown syntax in the content with established InDesign styles for bold, italics, body subheaders, bullet points, etc.
Layout editor drags the new layer(s) into place, adjusts dimensions, and marks the stories as placed
Rinse and repeat
We’re also working toward adding support for images and captions/cutlines.
This turns what could be a process of tens or hundreds of clicks for a given story into just a few clicks. It saves more time on some stories than others, but especially for the ones that involved applying a lot of inline character styles that were being lost or mangled during copy/paste, I think it’s a clear win.
I referred to a “remote API” above because even though our stories are managed in Airtable right now, I chose to introduce an intermediate API for the InDesign script to call, both for simplicity and so that we aren’t locked into Airtable’s way of doing things.
In our case, it’s implemented as a single action controller in Laravel, which essentially proxies the query on to the Airtable API and maps out a new, simpler data structure from the result with some content cleanup thrown in along the way:
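Here’s a simplified sketch of that kind of controller; the query parameter, Airtable base, table and field names, and the config keys are all made up for illustration:
<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use Illuminate\Support\Facades\Http;

class PageStoriesController extends Controller
{
    public function __invoke(Request $request)
    {
        $page = (int) $request->query('page_number');

        // Proxy the query on to the Airtable API
        $records = Http::withToken(config('services.airtable.token'))
            ->get(
                'https://api.airtable.com/v0/' . config('services.airtable.base') . '/Stories',
                ['filterByFormula' => '{Page Number} = ' . $page]
            )
            ->json('records', []);

        // Map Airtable records into a simpler structure for the InDesign script
        $data = collect($records)->map(fn ($record) => [
            'id'        => $record['id'],
            'headline'  => $record['fields']['Headline'] ?? null,
            'subhead'   => $record['fields']['Subhead'] ?? null,
            'byline'    => $record['fields']['Byline'] ?? null,
            'body'      => trim($record['fields']['Body'] ?? ''),
            'image_url' => $record['fields']['Image URL'] ?? null,
            'cutline'   => $record['fields']['Cutline'] ?? null,
        ])->values();

        return response()->json([
            'status' => 'success',
            'count'  => $data->count(),
            'data'   => $data,
        ]);
    }
}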
There are some hacky things going on with how we represent headlines, subheads and bylines in the content of our stories, but that’s a blog post for another time.
Here’s what a response might look like:
{
    "status": "success",
    "count": 2,
    "data": [
        {
            "id": "record_id_1",
            "headline": "My first headline",
            "subhead": null,
            "byline": "From staff reports",
            "body": "Lectus viverra cubilia.",
            "image_url": "https://placehold.co/600x400",
            "cutline": "My image caption 1 here"
        },
        {
            "id": "record_id_2",
            "headline": "A second story",
            "subhead": "You should really read this",
            "byline": "By David Carr",
            "body": "Lectus viverra cubilia.",
            "image_url": null,
            "cutline": null
        }
    ]
}
The README at the root of the repository explains further how the API query and response should work and includes some basic installation instructions. (If you use this, I recommend including some light caching so that you don’t over-query the Airtable API, or wherever your stories are stored.)
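In Laravel, that caching can be as simple as wrapping the Airtable request in Cache::remember; the cache key and five-minute lifetime here are arbitrary:
<?php

use Illuminate\Support\Facades\Cache;

// Reuse the Airtable response for this page for up to five minutes so
// repeated InDesign script runs don't re-query the Airtable API.
$records = Cache::remember('stories-page-' . $page, 300, function () use ($page) {
    // ... the Airtable API request from the controller sketch above ...
});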
I realize everyone’s layout and story management tools are different and that the likelihood that our workflow matches up with yours is slim…but just in case this is helpful to anyone else, I wanted to get it out there! Let us know what you think.
It’s unreasonably difficult to find information about working with custom taxonomies and their terms in the WordPress REST API. It would be easy to conclude that it’s not possible or that one has to create a custom endpoint to achieve that functionality.
It ends up being fairly straightforward, so I’m sharing what I learned here in case it saves someone else time.
The developer REST API reference endpoint list doesn’t make any obvious reference to custom taxonomies or terms. If you end up on the tags endpoint reference page, you’ll see that there’s a way to work with these terms in the standard post_tag taxonomy, but no way to specify a custom taxonomy during write operations. There’s actually a taxonomies endpoint reference page that doesn’t seem to be linked to from the main navigation anywhere, but can be found in a search; still, nothing on it about working with terms in those taxonomies.
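It turns out the key is in how the taxonomy is registered: passing show_in_rest (and optionally rest_base and rest_controller_class) to register_taxonomy() is all it takes. A sketch, with the taxonomy name, post type and label as placeholders:
<?php
// Register a custom taxonomy with REST API support enabled.
register_taxonomy( 'my_custom_taxonomy', 'post', array(
    'label'                 => 'My Custom Taxonomy',
    'public'                => true,
    'show_in_rest'          => true,
    'rest_base'             => 'my-taxonomy-api-slug',
    'rest_controller_class' => 'WP_REST_Terms_Controller',
) );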
Of those three REST-related arguments, the second two (rest_base and rest_controller_class) are optional: they let you define the REST API URL slug and controller you want to use, and if you leave them out WordPress will use the default terms controller and a slug derived from your taxonomy definition.
With this code in place, you’ll now have two new REST API endpoints available: /wp-json/wp/v2/my-taxonomy-api-slug for the collection of terms, and /wp-json/wp/v2/my-taxonomy-api-slug/<term_id> for individual terms.
You can then essentially treat those the same way as the tags endpoints, getting/posting to create, update and retrieve terms in that custom taxonomy. For example, to create a new term:
POST https://example.test/wp-json/wp/v2/my-taxonomy-api-slug
slug:my-first-term
name:My First Term
I’ve submitted a pull request to the WP API docs to add at least one pointer in the documentation to help others on a similar quest get there faster.
Updated May 3 @ 11:49 AM Eastern to note that the second two taxonomy registration options rest_base and rest_controller_class are optional to define, thanks to Marcus Kober for noting this.
The WooCommerce Subscriptions plugin has a bug where if you use its REST API endpoints to make updates to a subscription, it will almost always reset the start date of the subscription you are updating to the current date and time.
I reported the bug to the WooCommerce folks in November 2022 and I believe it has a GitHub issue filed. When I checked in about it recently I was told it’s a low priority to fix. I consider it somewhat serious for my purposes — unexpectedly overwriting/losing key data about a record that’s used in calculating user-facing financial transactions — so I’m documenting it here for others who might encounter it, along with a possible workaround.
The bug is in the plugin file includes/api/class-wc-rest-subscriptions-controller.php that processes the incoming API request. In particular, in the function prepare_object_for_database() there’s this code:
// If the start date is not set in the request, set its default to now.
if ( ! isset( $request['start_date'] ) ) {
    $request['start_date'] = gmdate( 'Y-m-d H:i:s' );
}
All of the valid dates contained in the $request array are subsequently copied into an array called $dates, and later in the same function that $dates array is validated and used to update the subscription’s dates.
The implication is that even for API requests that change something completely unrelated, say a meta field value or the subscription’s status, the date validation and update logic will still run. And because the start date has been overridden to the current date, any such request unintentionally resets the start date of the subscription.
I’ve noticed this bug surfacing even in regular wp-admin operations involving WooCommerce Subscriptions, as I think some of the logic used to put a subscription on hold or cancel it from within the admin interface calls the same internal functions.
My workaround for this bug, at least on the API client side, introduces an extra API request, and so is less than ideal for any kind of production or long-term use.
For every API request I make to the Subscriptions API update endpoint, if I’m not explicitly setting/changing the start_date field in my request, I first fetch the existing subscription record and then set the start_date field in my request to the current value.
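Here’s a rough sketch of that pattern using Laravel’s HTTP client; the endpoint URL, credential config keys and function wrapper are illustrative, not part of the plugin’s own API:
<?php

use Illuminate\Support\Facades\Http;

function updateSubscription(int $subscriptionId, array $changes): array
{
    // Assumes the Subscriptions REST endpoint lives under wc/v3 on your install
    $url  = 'https://example.com/wp-json/wc/v3/subscriptions/' . $subscriptionId;
    $auth = [config('services.woocommerce.key'), config('services.woocommerce.secret')];

    // If we're not explicitly changing start_date, fetch the current value first
    // and send it back unchanged so the plugin doesn't reset it to "now".
    if (! array_key_exists('start_date', $changes)) {
        $existing = Http::withBasicAuth(...$auth)->get($url)->json();
        $changes['start_date'] = $existing['start_date'];
    }

    return Http::withBasicAuth(...$auth)->put($url, $changes)->json();
}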
Wow, it’s been over a month since I took ownership of a print and online newspaper here in my community. There’s a lot to say about that experience and what I’ve learned so far. In this post, I’ll be focused on the tools and technology we’re using to operate this business. Some of these were in place before I came in, some are new in the last month.
I’m sharing this because (a) I generally enjoy the topic of if/how tools can make life and business easier, and (b) I hope it could be useful to someone else publishing a newspaper or building a media organization.
Print Layout and Design
It’s Adobe Creative Cloud all the way, for better or worse. InDesign for newspaper layout, Photoshop for image editing. Given the way our staff is set up and our weekly newspaper production process works, almost everyone touches the newspaper pages at some point or another, so the monthly license costs to cover all of that are somewhat ouch. If there were a viable alternative to InDesign, we’d probably switch to it.
Issue and Story Budget Planning
We’re using an Airtable base that helps us record story ideas and plan for our upcoming issues by tracking what articles are going to go where, what state they’re in, and all the associated data that goes with them such as photos, source info, internal notes, etc. It’s pretty great and the real-time collaboration that it makes possible is hard to beat. I think down the road we may move toward a custom Laravel-powered solution that allows for tighter integration of all of our business operations, but that’s a ways off.
Having used Asterisk in a previous business I was already familiar with its architecture and features, but FreePBX meant I could configure everything via web interface instead of editing dialplan files – amazing. We have extensions, queues, interactive voice menus, voicemail speech to text transcription (using this tool) and more, and it sets up a nice foundation for future integration with other tools like our CRM data.
We’re using Yealink T31P and T33G VOIP phones and so far Counterpath’s Bria Mobile has been the most compatible/feature complete softphone for iOS that I’ve found.
It took me some time to figure out the right way to define, fire and listen for custom events for a Laravel model using a trait, so I’m writing down how I did it in case it’s helpful to others.
Let’s say you have a model Post that you set up as having a “draft” status by default, but eventually will have a status of “publish”. Let’s also say you want to make the act of publishing a post a custom model event that can be listened for in addition to the standard model events like “created” or “updated”. And let’s say you want to do all of this using a trait so that you can apply the same logic to another model in the future, such as comments on the post, without repeating yourself.
Here’s what my Post model might look like:
<?php

namespace App\Models;

use Illuminate\Database\Eloquent\Model;

class Post extends Model
{
    //
}
Let’s create a Publishable trait that can be applied to this model:
<?php

namespace App\Models\Traits;

trait Publishable
{
    // Add to the list of observable events on any publishable model
    public function initializePublishable()
    {
        $this->addObservableEvents([
            'publishing',
            'published',
        ]);
    }

    // Create a publish method that we'll use to transition the
    // status of any publishable model, and fire off the before/after events
    public function publish()
    {
        if (false === $this->fireModelEvent('publishing')) {
            return false;
        }

        $this->forceFill(['status' => 'publish'])->save();

        $this->fireModelEvent('published');
    }

    // Register the existence of the publishing model event
    public static function publishing($callback)
    {
        static::registerModelEvent('publishing', $callback);
    }

    // Register the existence of the published model event
    public static function published($callback)
    {
        static::registerModelEvent('published', $callback);
    }
}
This new trait can now be applied to the Post model:
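<?php

namespace App\Models;

use App\Models\Traits\Publishable;
use Illuminate\Database\Eloquent\Model;

class Post extends Model
{
    use Publishable;
}
With that in place, the new events can be listened for like any other model event, for example from a service provider's boot() method (the closure bodies here are just placeholders):
<?php

use App\Models\Post;

// Runs before the status change is saved; returning false cancels the publish
Post::publishing(function ($post) {
    // ...
});

// Runs after the post's status has been set to "publish"
Post::published(function ($post) {
    // ...
});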
After not finding an existing one, I created a minimal PHP client for the CrowdTangle API, which I needed anyway for a project I’m working on:
$client = new ChrisHardie\CrowdtangleApi\Client($accessToken);
// get lists
$client->getLists();
// get accounts in a list
$client->getAccountsForList($listId);
// get posts
$client->getPosts([
    'accounts'  => '12345678',
    'startDate' => '2022-03-01',
]);
// get a single post
$client->getPost($postId);
I’m sure there’s plenty to improve but I hope it’s helpful to anyone working with CrowdTangle in PHP.