Earlier this week, I got an email saying one of my social media accounts had an unusual login. It was nearby, though, and that sometimes happens normally: one of my tools uses a different server, or a bot runs from another setup. Not necessarily “me” accessing, but things I authorized showing up on a server in another city nearby. It usually doesn’t go any further, and often it isn’t even successful. I have a few accounts that didn’t quite have my latest passwords on them, but they were decent enough.
Tuesday, I went to a coffee shop in Nepean, and the VPN on my laptop wasn’t working. However, I also played an online game on Tuesday night that has a lot of moving parts to run it. One or the other could have compromised my access, I suppose.
I’ve spent a lot of time tonight rebooting accounts, changing passwords, logging out any other access, and generally being paranoid AF. Of the 8 things I use regularly, the attacker compromised the four easiest. The second tier wasn’t hacked, and the third tier doesn’t look like it was even viewed. I still upgraded a ton of passwords tonight using a combination of tools and double-checking 2FA. Most were fine, and are the reason I was able to recover what I did. One remains outstanding, but it’s not that critical.
Still, quite annoying and tiring. My protections around the castle engaged when the Trojan Horse was opened, and mostly worked as intended. I’d like to write in more detail, but that would be incredibly stupid.
The bigger question is the vector of attack. One of the vectors may have started before Tuesday, except I can’t think of where or how really. The wifi network would be the obvious idea, or if I had used wifi while I was in New Orleans. But the whole time I was in New Orleans, I only used my phone as a hotspot, no wifi. Hmmm…
The fun that remains is when I go to access an app on my phone or load something on another computer and it says, “NO! You shall not pass!” because I haven’t logged into it with the new passwords.
Jacob has a higher-end gaming PC. Not top of the line, but certainly higher than the mid-range. Great graphics card, decent memory and speed, and a nice curved large monitor.
He comes down to see me yesterday afternoon and says, “Umm…my monitor stopped working.” Huh? Yep, he rebooted, did all the basic stuff, nothing. No signal to the monitor.
At the time, I was working my real job, so no time for much in the way of tech support. I gave him three possibilities:
1. Full shut down, see if the PC has somehow lost its setup info;
2. Try the monitor with a different source;
3. Try a different monitor.
He comes back later to say that the existing monitor works with another laptop, no problem, and the PC itself doesn’t work on other monitors. Excellent, we’ve narrowed it down to the PC. Right???? RIGHT???? Monitor works, PC doesn’t.
Now, there were some confounding variables to add to the mix. He’d been running a new game, and the refresh rate was dead slow. He had tried playing with graphics settings, downloaded a tool from AMD, and after that, nope.
I was initially worried he had fallen for some sort of scam pop-up, but it was indeed all legit. And nothing sounded like it should have screwed up too much, but maybe he lost his graphics drivers. My brain couldn’t decide if the PC would still send a proper video signal if the drivers weren’t on it, but I was wondering if maybe the graphics card went pffft.
I popped over to Canada Computers, where we bought it, and they weren’t busy so I said, “Hey, I might have the easiest fix ever. I think he just blew off the drivers.” Which the guy told me wouldn’t matter. It would still send the basic signal, even if only BIOS info. Huh.
He reached over to the computer he uses for checking in repairs, unplugged its video feed and plugged it into the PC, added power, and voila, Jacob’s login came up. So, the PC **was** working. Just not with Jacob’s monitor. Or any monitor at the house. Huh?
We chatted about a few other things, but nothing that would give me a lead anywhere. But it was working.
So I brought it all back home, plugged it in again, nada. No signal to his monitor. We did have a small problem with Windows claiming it was no longer registered, but that was apparently unrelated. Huh.
Jacob went off to have a bath, and I started noodling. I literally couldn’t think of anything. Then it occurred to me that while we had shut the PC down to “nothing”, we had NOT reset the monitor, and it IS a smarter-than-average monitor. It has some internal memory, auto-config stuff, etc. And since it plugs directly into the power bar, not the PC, it is always “on” at least somewhat.
What if I shut the monitor down too, to fully off? I turned off the power bar and let everything go to zero. Nothing on, nothing running, and I let it stay off for about 5 minutes.
Then, I turned it all back on, started the PC…and got Jacob’s login on the screen, no problem. After his bath, Jacob reenabled the proper graphics drivers, tested all his normal games, and they all work. The “problem” one still didn’t, but we’ll deal with that on the weekend. The rest is running fine.
I’d love to say I’m a god for figuring out how to reset it, but well, all I really did was turn it off completely before turning it back on. Exactly what we tried multiple times, but as I said, the monitor was staying on and remembering that it didn’t like the previous signal from the PC and thus continuing to block it.
I can’t say I was looking to solve a hardware problem last night. But all’s well that fixes itself.
For those who engage in any sort of IT world, or anything that comes with a manual, the general joke is that 90% of all problems would be eliminated if you just “read the f***ing manual” (RTFM).
A few weeks ago, I noted that I had solved a cron problem with my website. Well, to be candid, the cron problem apparently solved itself, upgraded itself beyond its own obsolescence, or just evaporated as a problem. Whereas previously things I told it to post at, say, noon wouldn’t post at noon, or even at all until someone refreshed the website and thus triggered an update, cron suddenly started working. If I scheduled something for 12:30 p.m., it would publish at 12:30 p.m. If I said 4:30 p.m., it would go at 4:30 p.m. More importantly, if I scheduled things so one would publish at 8:30 a.m., one at 12:30 p.m., and one at 4:30 p.m., AND had it go to my Buffer app to share things to social media at 9:00, 1:00 and 5:00, then lo and behold (!), it would indeed publish on time and Buffer would post it to media as per its separate schedule.
So I went back to looking at quotes and humour, and figured out a way to post them as pictures / images, with the text in the ALT area so it would still get indexed (i.e., if a joke refers to nuns, but the joke was a picture not text, a search for the word nuns would NOT pull up the page; if instead, I also add the text to the ALT text for the image, it will indeed find the word and show me the relevant post).
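That ALT-text trick can be sketched in a few lines. This is just an illustration in Python (the function name and file paths are made up, not anything from my actual setup): the words of the joke ride along in the image markup, so a text search can still find them.

```python
# Hypothetical sketch: wrap a quote/joke image in HTML with ALT text so
# the words remain searchable even though the joke itself is only pixels.
def image_with_alt(src, alt_text):
    """Return an <img> tag whose ALT text carries the searchable words."""
    # Escape the few characters that would break the HTML attribute.
    safe_alt = (alt_text.replace("&", "&amp;")
                        .replace('"', "&quot;")
                        .replace("<", "&lt;")
                        .replace(">", "&gt;"))
    return f'<img src="{src}" alt="{safe_alt}" />'

tag = image_with_alt("/images/quote-042.png",
                     "Two nuns walk into a bar...")
# A search for "nuns" now matches the page markup even though
# the joke is an image.
```
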
So why does all that matter?
Because before cron was working and before I had images to share, I basically had to do things manually. If I wanted to post at 9:00, I had to go online and post at 9:00. If I wanted to post at 1:00 p.m., I had to go online and post. Otherwise, the people who subscribe to my website (rather than following on social media) would get all the posts at once. Soooo, if I pre-posted a month’s worth of quotes? They would all go out at once by email. Not the best plan. Instead, I generally held myself to one post per day, let it go out by email at whatever time it went live, and tweaked the settings in Buffer so it would look right. One day ahead. If I was too tired the night before, nothing went out.
Yet I’ve always had a bit of a challenge with sharing images with my posts to social media. From a WordPress site, there are essentially three possible images to choose from:
The featured image (FI) aka usually a smallish image next to your post … because of the way I designed my layout, the FI image sits outside of the main text to the left above the date;
An image embedded somewhere in the text; or,
An image embedded in the text and tagged with OPENGRAPH settings as an image to share.
Some social media sites have a priority of 1, 3, 2 for which image it will choose; others will do 3, 1, 2; and still others will do 2, 3, 1. There has been some challenge at times sharing what I want to share. I also don’t have some big IT department to figure this out for me. Because my FI photos sometimes are the ONLY photos in a post, I want it to use that image if there aren’t any others. When I do book reviews, I also include a copy of the book cover — usually category 2 above. On occasion, I insert other images and depending on how they are embedded, sometimes they use OPENGRAPH tags and sometimes not. Sigh.
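Those differing priorities amount to a tiny selection loop. Here’s a purely illustrative Python sketch (no platform publishes its real algorithm, so the function and the slot numbering, matching the list above, are mine):

```python
# Illustrative sketch (not any platform's real algorithm): how a social
# site might pick a preview image given its own priority order.
# Slots: 1 = featured image, 2 = embedded image, 3 = OpenGraph-tagged image.
def pick_preview(images, priority):
    """images: dict mapping slot number -> URL (or None if absent).
    priority: list of slot numbers in the order this site prefers."""
    for slot in priority:
        if images.get(slot):
            return images[slot]
    return None

post = {1: "featured.jpg", 2: None, 3: "og-tagged.jpg"}
pick_preview(post, [1, 3, 2])  # -> "featured.jpg"
pick_preview(post, [3, 1, 2])  # -> "og-tagged.jpg"
pick_preview(post, [2, 3, 1])  # -> "og-tagged.jpg" (slot 2 is empty)
```

The annoyance is visible right there: the same post yields different preview images on different sites, depending on which slots are filled.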
Now, things get a bit more interesting. I mentioned above that I use Buffer as an online app that takes my posts from the WordPress site and acts as an intermediary with various social media sites — Facebook/Meta; Twitter/X; BlueSky; and Threads. The plugin that connects WordPress to Buffer also gives me a fourth place to store photos for a post, ideally for sharing with social media.
I was struggling to get the right image to show up, and I frequently would still have to go in and edit the Buffer queue so that the right image would show. I would put in the photo I wanted into the #4 slot, but it wouldn’t show in the post. I read the text that went with it multiple times, and it didn’t seem to apply to what I wanted.
Ultimately, with this plugin, I now had FOUR places to put a photo and then four possible settings for the actual sharing to social media, so 16 combinations in all. I wasn’t looking forward to trying them all, to be honest. I had tweaked something last week to make my ThePolyBlog site look like what was posting from PolyWogg.ca, but it made things worse, not better. The quotes and humour “images” were no longer showing as previews in Facebook — it was just the featured image (FI), so if anyone wanted to see the joke or quote, they had to click through. Exactly what I was trying to avoid — I wanted the image versions to be more easily shared than forcing a click-through.
I decided to do a bit of work on it on Sunday night, and I figured the best option had to be somewhere in Option 4. It’s a separate area, as I said, where you can tell WP that these are the photos you want to share. And in fact, it is designed to share multiple photos, maybe even a little mini-gallery if you want. It wasn’t limited to one photo.
There was some generic text about adding images, but it recently updated the language on the screen. It now says:
The first image only replaces the Featured Image in a status where a status’ option is not set to “Use OpenGraph Settings”. Additional images only work where a status’ option is set to “Use Featured Image, not Linked to Post”.
I read that text a couple of dozen times over recent weeks and it was not really jibing. The second sentence talks about “additional” images, but I didn’t have any additional images; I was just putting in one. Sometimes it took it, sometimes not. As for the first sentence, you may notice that it is rather badly worded.
It says the first image replaces the FI when the status is NOT set to “Use OpenGraph Settings”. So what? I don’t really care if it replaces the FI when it isn’t set to that; I want it set to that. Don’t I? But then it hit me that the two sentences sort of work together.
My general posts have a Featured Image. The Open Graph settings are designed to add OG codes/tags to the Featured Image. So it should use FI and OG. Great. Except you have to change the settings from using OG to something where it is NOT using OG. Wait, what? Oh yeah, that’s the second phrase. If you set it to use an FI that is NOT the main FI linked to the post, then it will use the first image saved here in this box.
So while all the standard advice about sharing images says to use the regular FI and add the OG settings, this plugin tells me to NOT use the FI, NOT use the OG settings, and to use a different setting that doesn’t sound right at all. It goes against everything obvious in the approach. But, ultimately, it totally overrides letting the social media sites decide between #1-3 and says: regardless of what else is going on, just use whatever image I put in this setting (#4).
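The way I understand the override, it boils down to something like this hypothetical Python sketch (slot 4 is the plugin’s image box from above; the function name is mine, not the plugin’s):

```python
# Hypothetical sketch of what the plugin setting effectively does: if an
# override image is set in the plugin's box (slot 4), use it no matter
# what; otherwise fall back to the platform's own priority among 1-3.
def image_to_share(images, site_priority):
    """images: dict slot -> URL; slot 4 is the plugin's override box."""
    if images.get(4):
        return images[4]          # the override wins, full stop
    for slot in site_priority:    # otherwise the platform decides
        if images.get(slot):
            return images[slot]
    return None

post = {1: "featured.jpg", 3: "og.jpg", 4: "quote-card.png"}
image_to_share(post, [3, 1, 2])  # -> "quote-card.png"
```

That override is why the unintuitive setting works: it takes the platforms’ differing priority orders out of the picture entirely.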
Bam! All four of my social media posts shared perfectly the next time. A book review with a cover, a quote as an image, and a joke as an image. All three “images” shared as the main pic with their posts x 4 different sites.
Because I RTFM, apparently. It’s not just a meme. The manual makes no SENSE, but that’s beside the point. It’s fixed.
Last week, I introduced TT — Tadpole Tuesday — where I blog about a current project that I’m working on. I started with some website stuff I was doing, namely putting quotes into shareable images and uploading those to the website. I had them in a different form, and I wanted to redo them to make them shareable. All good. But something odd happened on the way to the market, so to speak.
For a bit of background context, you need to know that my website is not a commercial enterprise, so it doesn’t have the complete set of bells and whistles that a whole business website might have. I have a small personal site, at a smaller price point, and a lot of manual management by me. It isn’t super sophisticated, I don’t have e-commerce options running on it, and it isn’t integrated with a warehouse for shipping products. It is a WordPress site running some plugins, and if I run TOO MANY plugins, the site stops loading everything. It runs out of memory, basically, while I’m working on it.
To load a webpage, one of two general things happens on hosted sites (full commercial or personal site).
On a personal site like mine, the site sits somewhat dormant. It isn’t really DOING anything until someone asks it to do something. And technically, the visitor doesn’t ask my site directly. Their browser sends a request like “load ThePolyBlog.ca’s home page”; the internet routes that request to the hosting company I have my site with; the hosting company’s servers recognize that it matches my site; and they send a command to my personal sub-server area to “wake up and process this command”. Until that happens, my site is almost completely asleep.
By contrast, a full site runs constantly, like the hosting company’s servers do, looking to see if anyone is sending it a request, kind of like a dog jumping up and down saying, “Is it me? Is it me? Is it me? Is it me?”.
Think of it much like your own PC at home in hibernation mode vs. being “awake” and active. In hibernation, your PC is running much like my rented server space: it doesn’t do anything unless a timer goes off or someone taps the keyboard / loads a page.
Except for one little niggly detail. Timers would not work on my server.
I don’t have cron for this
The scheduler on servers is generically called cron. It’s a timed traffic cop: you give it a list of jobs and when to run them, and it triggers each one when its time comes up. In theory, on just about every server known to exist, you can tell it to check mail, for instance, every 10 minutes. And every 10 minutes, it goes out and checks mail. Or once a day, it runs a backup. Or once every two days, it refreshes the cache. Or a whole host of regular things that require the timer to trigger them. Most run on their own, at least on most servers.
On my websites, the maintenance stuff for the servers doesn’t actually reside on my server; it sits on the hosting company’s servers, and their cron works just fine. For my website, however, WordPress has to interface with the cron timer, and well, they didn’t like each other very much. If the server maintenance didn’t run, that could be a problem, but the hosting company’s servers took care of that, leaving my server to just run whatever I wanted within WordPress. Except I didn’t have anything time-based / schedule-based within WordPress. As I said, I don’t run anything commercial in it, so there’s no newsletter release, or content being pushed for sales notices, or carousels being updated with new ads.
Between 2005 and 2017, I was on various servers, and none of the crons would run reliably. In effect, my cron went to sleep. Once something happened, like someone loading a page anywhere on the site, the server would wake up after the hosting server pinged it, sit up in bed, scramble to show whatever page was being requested, and then run cron. So, if I had scheduled a post to go live at 9:00 a.m., it wouldn’t go live at 9:00 a.m. Cron was sleeping. However, if someone tried to load a page at 9:37 a.m., the server would wake up, show the page, and then run cron, which would finally put live the post that SHOULD have gone out while cron was sleeping.
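That sleepy behaviour is easy to sketch. Here’s a toy Python version of the idea (times as HHMM integers for simplicity; the function name is mine): scheduled jobs only fire when something, like a page load, wakes the server, at which point any overdue jobs run late rather than on time.

```python
# Toy sketch of "lazy" cron: jobs fire only when something wakes the
# server, so overdue jobs run late instead of at their scheduled time.
def run_due_jobs(jobs, now):
    """jobs: list of (due_time, name) pairs. Returns the names of jobs
    overdue at 'now' and removes them from the queue (in place)."""
    fired = [name for due, name in jobs if due <= now]
    jobs[:] = [(due, name) for due, name in jobs if due > now]
    return fired

queue = [(900, "publish 9:00 a.m. post")]
run_due_jobs(queue, 830)  # 8:30 a.m. page load: nothing due yet -> []
run_due_jobs(queue, 937)  # 9:37 a.m. page load: the 9:00 post finally fires
```
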
I worked with multiple support groups at different hosts. For whatever reason, and it is not comforting to know this, some sites run cron just fine and others are sleepy butts. Mine has always been a sleepy butt. I fought with it about 4 times, I think, over a 10-12 year period, but it was never a “must-have” for me. If it was, I could have upgraded my server package.
The only benefit that cron could have given me was to allow scheduling of posts. So, again, if I was a larger enterprise with multiple posts per day and/or week, I could write them in advance (or other contributors could write them), and we could schedule them throughout the week. Jane’s post about corruption at City Hall could go live immediately while updating carousels and ads to go with it. Mike’s post about a cat family at the local park could go on Tuesday near the commute time. Blah blah blah. The comic strips for the week could be pre-loaded to post #272 on Monday at 8:00 a.m., 273 on Tuesday at 8:00 a.m., 274 on Wednesday at 8:00 a.m., and so on.
I don’t normally have that much content that I have to handle scheduling. Search engine optimization and blogging experts advise that if you want to grow your blog, you should post at a set interval and monitor your take-up. If you know that people click through more when you publish in the afternoon, set your posts to go live in the afternoon; if they like mornings better, post in the morning; if Thursday and Friday are better than the weekend, then post by noon on Friday or wait until Monday for anything else. Have a schedule and stick to it.
Great advice, very logical. And my response has almost always been “meh”. While I would like to boost interaction so that I know SOMEONE is reading my stuff, I’m going to blog regardless. My past research shows that people click when they like, not when I want them to, and because all my sharing is through social media, it depends more heavily on when they read and what their algorithms do with my posts, than what time of day it got posted. Obviously, I don’t want to dump 100 posts in a single day. But since they are always delayed viewing anyway, I feel it is more about the type of posts I do in a single day than the content. For example, I feel like I can post at most one good medium to long post per day. Like this one. But a quote is a single image, and a joke will be a single image. I can add that to my daily feed without overwhelming the recipients. The question is how to queue that up properly.
Buffer is like the cron I never had
I use Buffer as my social media manager. The way it works is you add a “channel” to Buffer, say your Facebook page; you edit all the settings for that channel and how you want it to post to Facebook with extra words, the order of various fields, which image to use (a default image, a featured image, the first image in a post, etc.), and a number of other features and formats; AND you tell it the schedule to use. This goes back to the advice from blogging experts. For me, I said, “Okay, publish to the Facebook channel at 9:00 a.m. every morning”. That was the only time in the queue.

If I wrote a post at midnight and pressed publish, WordPress would send that post to Buffer, Buffer would put it in the next slot (9:00 a.m. the next morning), and when the slot came up, Buffer’s cron would post it to Facebook. In theory, I could write 20 posts over a weekend, tell it to post them, and they would be live on the website immediately, but Buffer would add them to the queue and send them out one per day for the next 20 days. It was the only time I really wanted cron. Alternatively, I could write all the posts and save them as pending. Then, each day, I could open the next post and say PUBLISH. It would go out in the next slot, likely 9:00 a.m. the next morning. If I missed a day, it didn’t go.
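Buffer’s queue logic, as described above, is basically first-come, first-slotted. A rough Python sketch of the idea (one 9:00 a.m. slot per day in this example; the names are illustrative, not Buffer’s actual internals):

```python
# Simplified sketch of a Buffer-style queue: each new post takes the
# next open daily slot, in the order the posts arrived.
from collections import deque

def fill_slots(posts, slots):
    """Assign each queued post to the next available slot, in order."""
    queue = deque(posts)
    schedule = {}
    for slot in slots:
        if not queue:
            break
        schedule[slot] = queue.popleft()
    return schedule

posts = ["review 1", "review 2", "news item"]
fill_slots(posts, ["Mon 9:00", "Tue 9:00", "Wed 9:00", "Thu 9:00"])
# -> {"Mon 9:00": "review 1", "Tue 9:00": "review 2", "Wed 9:00": "news item"}
```

This also shows the pain point below: a late-arriving urgent post lands in the last slot, not the first, unless you shuffle the queue by hand.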
Except I hate having to be the cron and run things manually. And Buffer is okay, but not exceptional. If, for example, I wrote 20 book reviews and had them all queued up, they would occupy the slots for the next 20 days. If I then wrote a great post about some news item, Buffer would add it to the queue in the 21st slot. If, instead, I wanted it to go out “immediately”, or the next morning at least, it was not easy to bump it up in the queue. Oftentimes, I’d be manually moving stuff around, or adding an extra push one day.
Now, don’t get me wrong, cron on a website isn’t a lot better. It can be challenging to move things around relative to each other. You can easily say, “Hey, send this one at 9:00 a.m. tomorrow”, but the previous one would also be scheduled to go at 9:00 the next morning, so it would go too. People get around this by embedding schedulers into their website, essentially adding Buffer’s functionality to the site itself. Except that requires cron to run.
And the circle is complete. Buffer and WordPress work better together if both crons work; if only Buffer’s cron works, it can handle most issues, but changes on the fly are sometimes more painful than I would like. I know, however, there ARE ways to go into the complex settings of Buffer and say, “Okay, here’s the schedule for the week…there’s a slot at 9:00 a.m., 1:00 p.m. and 5:00 p.m. The 1:00 p.m. slot should use this complex FILTER and send a post IF and ONLY IF it has the category of QUOTES”, for example. It’s a bit more complicated than that, and it has never been worth my time to figure it all out. I don’t usually have that many posts in a queue.
Part of that queue avoidance is tied to the issue of people who subscribe directly to the website. At first, I didn’t realize this was happening, although I should have. When I did a bunch of movement of posts between websites, I essentially imported 100 posts from PolyWogg and posted them to ThePolyBlog. These were posts that had already been shared, were all old, and so I didn’t think of them as “new”. But, of course, ThePolyBlog site saw them as new and TREATED them as newly published content. It sent all 100 posts to Buffer, overloading my queue. And it sent 100 posts out to those who were subscribed. So their inboxes got flooded. With old content. Whoops.
So, when I work on my site, I have to remember to turn off the post to subscribers function. And to remember to turn it BACK on afterwards. If I queue, say, 20 quotes to go out Monday to Friday over the next month, it will work fine for Buffer. But the people who are subscribed directly will get 20 posts immediately. WordPress makes them live immediately. Grrr…
Website, heal thyself
The funny thing that happened last week is that cron seems to work now. WTF?
I don’t know how, I don’t know why, I don’t know when. But cron is working on my site.
As I mentioned, I’m redoing 93 Quotes posts. That basically means I copy each one to a new post, add in the image version, and repost it as a new post before deleting the old one. If I process all 93 at once, my subscribers will get 93 emails in a day. If I want to avoid that, I can turn off the newsletter / share-post option until I get them all updated, but then my subscribers don’t get that content at all. It’s efficient to do all 93 at once: a lot of repetitive steps, but in about two hours I could blast through all of it. Or maybe an hour here and there, over a couple of days. But again, I have to avoid them going out to the subscribers all at once. Buffer can handle my social media; I just wish it had a newsletter option (something to check on for the future; it would be great to move my subscribers OFF my website overhead).
Last weekend, I set up a quote post. I didn’t want to add it to the queue and publish immediately, so I saved it as pending. And then, just for fun, I told WordPress to publish it at 8:30 a.m. I wanted to see how it would handle the subscription feeds to individuals. I figured it would eventually get “flagged” when someone loaded the website, and even if it didn’t go at 8:30, well, it would go sometime that day. I would check at lunch, and if it hadn’t gone, I’d load the page and trigger it myself.
On Monday morning, my phone buzzed at 8:30 and then buzzed repeatedly at 9:00. I was getting dressed, and it buzzed twice, which I ignored, and then four times in under a minute. I thought someone was calling me, which is exceedingly rare, and I’m not even a huge texter. At 8:30, my cron had run all on its own. Like it would on a bigger server. Somehow, WordPress and my server decided to get along and cron ran on time. Maybe it’s been able to run for five years, which is the last time I tried to fix it. Maybe it just fixed itself last week. I have no idea. But it ran.
Which then meant it sent a full copy of my post to my main email address. Then it sent a short copy of the post to my secondary email address (sending it twice in two different forms lets me see what my subscribers are seeing, and I save it in email as a backup). Then Buffer entered the chat 30 minutes later and told me it had shared my post with a) Facebook/Meta, b) Twitter/X, c) BlueSky and d) Threads. Exactly on time. Because cron ran on its own.
Holy snicker doodles.
Do I need this functionality? No. Do I want this functionality? Sure. Will I use this functionality? Absolutely.
The first case will be my quotes. I can set up all 93, queued to go live over various days, likely Monday, Wednesday and Friday at lunchtime. I’ll also queue up some jokes to go on Tuesdays and Thursdays at the same time. I’ve already modified Buffer to give me three send times a day: 9:00 a.m. that I’ll use for blog posts; 1:00 p.m. that I’ll use for shorter posts; and 5:00 p.m. that I’ll use for reviews. All I have to do is set my cron / WordPress to push the content live about 30 minutes before the slot. Then, the slot will accept the post and share it on the schedule set with Buffer. It probably needs only about 2 minutes’ lead time, but I’ll give Buffer more like an hour.
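With cron working, queuing the 93 quotes is really just generating the datetimes. A quick Python sketch, assuming a Monday start and Mon/Wed/Fri at 12:30 p.m. (the start date here is arbitrary, and the function name is mine):

```python
# Sketch: generate publish datetimes for N posts on given weekdays,
# e.g. 93 quotes on Mon/Wed/Fri at 12:30 p.m. (30 min before a 1:00 slot).
from datetime import date, datetime, time, timedelta

def publish_schedule(start, count, weekdays=(0, 2, 4), at=time(12, 30)):
    """Yield 'count' datetimes on the given weekdays (Mon=0) from 'start'."""
    slots, day = [], start
    while len(slots) < count:
        if day.weekday() in weekdays:
            slots.append(datetime.combine(day, at))
        day += timedelta(days=1)
    return slots

slots = publish_schedule(date(2025, 3, 3), 93)  # 2025-03-03 is a Monday
# slots[0] -> Mon 2025-03-03 12:30, slots[1] -> Wed 2025-03-05 12:30, ...
```
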
After all the times the website seemed to break almost on its own, I’ll take the win from it fixing itself on its own. Now I just have to post the content. And as a further test, I’m writing this post on Sunday to go out Tuesday for my Tadpole Tuesday series. Fingers crossed.
When people think of piracy, they often immediately think of movies or software. Rewind to the ’90s, and your thoughts would have been about music, with sites like Napster. Almost all of the big affected industries have since moved to alternate business models that put a huge crimp in piracy, in some ways at least. Music was the first: they created online platforms with unlimited streaming for a fee, aka the all-you-can-eat buffet. They also created distribution models where most major stars are available on all platforms, so you CAN still pirate music, but it’s a lot of work that is easily waived with a simple tap of your payment card, and with way more convenience than you’d ever get managing pirated files. You don’t OWN the music, but if you have access to it generally whenever you want, why care?
Software has gone all-in on subscription models. Even if you can hack the current model or version, it won’t connect to a bunch of the online validation tools, and it’s only good for a certain amount of time. Game systems have moved to online platforms where the software does little more than give you access; without the subscription, there’s no point. So the software companies give free downloads of the access software, decreasing the benefit of hacking anything. It’s not zero benefit, but the traffic is way down for major packages. Meanwhile, Microsoft gave away new copies of Windows 10 and 11 for free to get people off the old versions and into the new monetized subscription versions.
Movies have a huge market overseas for pirated first-run movies that are still in theatres, although the early copies are often terrible, handheld copies taken on phones. If you’re desperate to see something that won’t reach your region for another 3 months, well, you can find it online within 2 days of the release. Movie company executives act terrified, but they also know the reality…the people who pirate are not those who pay, and those who pay, probably are only pirating to see extra stuff they wouldn’t pay for or will pay again when it finally hits their area.
There are even studies, kept heavily under wraps, that test whether releasing a poor-quality version actually increases or decreases box office revenue. The results are mixed, and those who have seen them say they’re hard to interpret since there’s no way to segregate the market. If you release a movie in LA on a Tuesday morning, people are downloading it from Chinese servers by Tuesday afternoon. If you assume that “similar” movies should have “similar” revenue patterns after staged releases around the world, you can point to abnormalities: the profit from Movie 1 had a slow drop over time, while Movie 2, which was pirated, had a sharper decline. Which then makes no sense when Movie 3 shows a sharp decline with no piracy (aka the movie just sucked) and Movie 4 had a spike AFTER the piracy (suggesting any publicity is good publicity). In short? There’s no way to know.
But pundits and moviemakers argue it’s a huge business loss, with zero evidence of actual loss tied to piracy. The more detailed analyses point to another cause: streaming movies in high definition. Current movie releases no longer compete just against other active releases; they now compete against every blockbuster of the last 40 years, available at home with zero friction. Those same analysts also point to another drag on movie revenues: the insanely high cost at the snack bar. A high ticket price plus high snack bar prices combine to keep families out of the running. Sure, theatres make tons AT the snack bar, but in the same way that cable audiences have died, so have movie audiences.
If you take revenue out of the equation and only look at attendance, the change in business models translates into LOWER piracy in the aftermarket. DVD sales, and digital copies of any kind, are following the dinosaurs. The new subscription-based platforms are the real killers of piracy, just like Apple Music and Spotify did for music. All-you-can-eat buffets are popular for a reason.
Wait, weren’t you going to talk about books?
Yes, I am going to talk about books. Books have some similarities with music and movies. First, people generally have always liked OWNING copies, not just borrowing, renting or viewing. They want something tangible that they can go back to later and watch / listen to / read again. But there is no real Netflix for books.
Oh, sure, services like to ADVERTISE that they are the Netflix of books. Take Amazon’s Kindle Unlimited, for example. For a book to be in Kindle Unlimited, you first need to realize that it is for DIGITAL books only. Yet ebooks are only 17% of the market. In addition, the author has to guarantee that the book is exclusive to Amazon. You can’t sell a Kobo version, or put it on any other ebook platform. While they advertise it as all-you-can-eat, it’s like going to a buffet restaurant and finding out that the all-you-can-eat part is just the salad bar; no meat or desserts are included in that option. It’s not Netflix. It’s Olive Garden.
Do you want to consider another platform? Well, you CAN, but Amazon makes up 80% or more of the ebook market (with variations by country and continent). Kind of like Apple TV compared to the other big platforms, the others are a rounding error on the market.
Amazon has had a piracy problem with digital ebooks for quite some time, without a viable solution. The issue is that unlike movies, which are huge files, ebooks are relatively small and uncomplicated. And if one person has a copy, in most cases it isn’t too difficult for them to share that file with someone else. Amazon has some mechanisms in place to try to reduce piracy, but they’ve mainly been ineffective. The main tool is called Digital Rights Management, or DRM.
So let’s focus on file types for a moment. The main file types currently are EPUB (no security built into it), MOBI (also no real security), Adobe Digital Editions PDF (decent security built in), and AZW, or AZW3 for the latest version (Amazon’s formats, which come with DRM as the security).
The Adobe Digital Editions PDF is tied to your account, in theory at least. You buy the file (or borrow it from a library), it ties the file to your account, and when you open it, the software uses your account info to verify you, open the file, and let you see it unencrypted. If you send the same file to your friend, and they try to open it, Adobe Digital Editions will balk. They are not you, so the software will not open it. If you give them your account information, all good. They can open the file on their computer as if they were you. Works fine with family members or friends you trust, but you’re not going to give your account info to random people on the internet so they can open your file.
Amazon’s AZW/AZW3 format does almost the same thing, but instead of using your account information, it issues a specific serial number for your device (a Kindle, a PC running the Kindle for PC software, or a phone running the app). When you try to open the file, it uses your Amazon account to transfer the file to your device, tied to whatever serial number you have on your account. If I download a file for my Kindle, I can’t simply copy it over to my son’s Kindle and have it work. I would have to redownload it on his Kindle with his serial number embedded. The device opens the file, compares the serial numbers, and if they match, it will let you see the unencrypted file.
As a side note, some security experts have argued that both Adobe Digital Editions and Amazon are not technically using encryption in the normal sense. Encryption would normally translate every word, or group of words or letters, into gobbledygook so the whole thing looks like gibberish. These files, however, sometimes have entire sections and subsections where small snippets are still readable. As such, some experts argue that it is more like half-encryption and half-jigsaw puzzle. To their mind, it is more like slicing and dicing the file, jumbling most of it up, using the serial number or account info to encrypt the algorithm that created the jumble so it can’t be undone, and voila, one file.
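That jigsaw analogy can be sketched in a few lines of Python. To be clear, this is a toy illustration only, not Amazon’s or Adobe’s actual scheme: the device serial number seeds a shuffle of still-readable chunks, and only the same serial can put them back in order.

```python
import random

def jumble(text: str, serial: str, size: int = 12) -> list[str]:
    """Slice the book into chunks and shuffle them using the device
    serial as the seed. Each chunk is still readable on its own, but
    the overall order is gibberish: half-encryption, half-jigsaw."""
    pieces = [text[i:i + size] for i in range(0, len(text), size)]
    order = list(range(len(pieces)))
    random.Random(serial).shuffle(order)
    return [pieces[i] for i in order]

def unjumble(pieces: list[str], serial: str) -> str:
    """Recreate the same shuffle from the same serial, then invert it."""
    order = list(range(len(pieces)))
    random.Random(serial).shuffle(order)
    restored = [""] * len(pieces)
    for position, original_index in enumerate(order):
        restored[original_index] = pieces[position]
    return "".join(restored)

book = ("It is a truth universally acknowledged, that a single man in "
        "possession of a good fortune must be in want of a wife.")
scrambled = jumble(book, serial="KINDLE-SN-0042")

# Only the matching serial reassembles the original text.
assert unjumble(scrambled, "KINDLE-SN-0042") == book
```

The point the experts make is visible here: any individual piece of `scrambled` is readable text, even though the file as a whole is useless without the serial-derived ordering.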
Up until recently, people have needed three things to get around the semi-encryption easily. I’ll deal with them in reverse order of obviousness.
First and foremost, there is a collection of software tools bundled together called DeDRM. Yep, that’s sort of what it does — it removes the DRM from files. But it isn’t a “cracking” tool. The way it works is you tell it the serial number for your Kindle, it uses that serial number to “unlock” the file, and then it creates a new version without the DRM included. It’s really no different than removing password protection from a locked Word document that you created. You open the file, enter your password to unlock it, tell it to remove the password protection, re-enter your password, and then save a version without the password protection enabled. Voila, one password-free file. No hacking or cracking involved.
For Adobe files, it’s essentially the same process; it just uses your account info to unlock the file and then creates a version without a password enabled. I’ll come back to this tool in a paragraph or two. While Amazon and Adobe constantly iterate their anti-piracy attempts, they are limited in what they can do — if you can enter a password to open the file, then once it’s open, it’s vulnerable to being copied and used in other ways. There’s no way to prevent that. And DeDRM software is constantly iterating to combat the latest tweaks. It might take a week or two, but it catches up relatively fast. Up until this latest issue, it seems to have generally adapted within three weeks or so at the outside.
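The unlock-then-resave flow can be sketched the same way. Again, a toy stand-in, not the real DeDRM code or the real DRM: the tool takes YOUR key, opens YOUR file, and writes out bytes with no lock on them at all — the digital equivalent of removing the password from your own Word document.

```python
import hashlib

def apply_lock(data: bytes, serial: str) -> bytes:
    """Toy 'DRM': XOR the bytes with a keystream derived from the
    owner's serial number. Applying it twice with the same serial
    undoes it, because XOR is its own inverse."""
    key = hashlib.sha256(serial.encode()).digest()
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def strip_protection(locked: bytes, my_serial: str) -> bytes:
    """What a DeDRM-style tool does conceptually: unlock the file with
    YOUR key, then return plain bytes that can be saved anywhere and
    opened forever, with no serial number needed again."""
    return apply_lock(locked, my_serial)

purchased = apply_lock(b"Chapter 1. The file you paid for.", "SN-0042")

# Your serial opens your file...
assert strip_protection(purchased, "SN-0042") == b"Chapter 1. The file you paid for."
# ...someone else's serial just produces different gibberish.
assert strip_protection(purchased, "SN-9999") != b"Chapter 1. The file you paid for."
```

Note what this sketch does NOT do: there is no brute-forcing and no key recovery. Without the right serial, the output stays scrambled, which is why “removing DRM from your own purchases” and “cracking” are different things.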
Second, you need a way to actually manage the ebook file. Of course, Amazon gives you their Kindle software, but we’re talking something else. You need to be able to edit elements, move things around, add metadata, etc. The biggest non-proprietary tool for managing ebooks is called Calibre. It has a somewhat dated interface, looking like something from the ’90s, but the thing has power to spare. Most people who are beyond “click and read” have some sort of file manager for ebooks, and Calibre probably has about 90% of that market. Why? Because it works, has been around forever, and mainly because it is free. It not only manages ebooks, including direct transfer to and from ebook readers, it lets you CONVERT books that have had their DRM removed from one format to another. Have a reader that only handles MOBI while your file is in EPUB? There are lots of little one-off tools that will do that conversion for you, but Calibre has it built in. Every time a format is tweaked, Calibre is tweaked. It’s a bit annoying to constantly update to the latest version, a more manual process than it should be, but lots of people leave it un-updated for months, even years, with little loss of functionality. But here’s the fun part. Calibre allows third-party plugins.
I know, I know, you’re like, ummm, what? It means that Calibre itself does NOTHING wrong. It is a vast program, and all it does is give people a way to manage files. It’s like a file manager with extra interface options, similar to a lot of music programs out there — it adds buttons to help you automate certain things, will let you open the file in the program, and enables you to move it around, create groups, etc. If you tell Calibre what your serial number is for your files, it will open them up and let you read them. But DeDRM can also be added as a plugin, and if you tell DeDRM what your serial numbers are, Calibre will open the file AND give you a chance to convert to another format, removing the password/encryption. That’s relevant because while everyone uses Calibre to remove DRM, the program isn’t about DRM. If you happen to add a separate plugin created by someone else to his program, well, that’s not his business. That’s yours. Calibre is a file manager, that’s all. Adding DeDRM as a plugin makes it easier, but you could use it directly as a separate tool (like in Linux).
So, what does Calibre do, and why do you need it? Calibre will read EPUB, MOBI, AZW, and AZW3 books and allow conversion between them once the password has been removed by DeDRM. AZW was Amazon’s attempt to take a ubiquitous file format and add password protection to it as basic DRM. Over the years, Amazon has upgraded from AZW to AZW3 and, more recently, KFX and KPF. Each time, Calibre has added the ability to read those new formats IF/WHEN the protection aka DRM is removed. Once removed, you can do whatever you want with it.
Third, you need the file itself. That sounds easy, right? I buy on Amazon, I download the file (*), and I can read it, right? The asterisk is what makes the previous sentence end with a question mark instead of a period.
How/where do you get the ebook file?
Up until recently, you generally had two ways to get the Amazon or Adobe Digital Editions file of the book you just bought.
First and foremost, you downloaded the book directly into the Amazon or Adobe tool you would use to read it. If you were on a Kindle, you opened up the Kindle, it looked at your Amazon library, saw that you had a new book, and downloaded it to your Kindle (note that the file is IN YOUR LIBRARY and now ON YOUR KINDLE, nowhere else). If you were reading on a PC or a phone, you would open your Amazon app (Kindle for PC, or the Kindle app on iPhone or Android), and it would do the same as the Kindle: access your library, see that you have a new book, and download it to your app for reading. Some would do it automatically, some would do it when requested, but it would be available in your app. It might be on YOUR PHONE or hidden in the FILES ON YOUR COMPUTER, but either way, you would have the file in your Amazon library and available within your app. If you delete it from your app or the Amazon library? Gone. It would delete it. Adobe works basically the same way; you access the file through the app.
Now, here’s the kicker. If YOU delete it? Gone. Understandable. If Amazon or Adobe deletes it, or your access expires? Also gone. Wait, what? Yeah, you don’t OWN the ebook. You own a license to the ebook, and Amazon or Adobe can change your license at any time. This is what drives a TON of people nuts. Not the reality of it in most cases but the potential risk. There are lots of anecdotes available online where “owners” of books got bitten. Potentially myself included, at least hypothetically.
Way back in 2007 when the Kindle stuff got going, they used to have huge promotions on. Free books were available EVERY SINGLE DAY, and like many people, I said, “Sure, I’ll take your free book.” Having no idea if I would ever read it or if it was any good, I swiped right on free books. Not for long, maybe six months to a year or so, but literally 100s of books. Some people went all-in and downloaded thousands of books. So think of it as you believing you have 1000s of books in your personal library. And then one day, some people tried to go into THEIR library, and it was closed. For whatever reason, they had been locked out of their account. Maybe they said something rude on an Amazon forum, maybe somebody thought they were pirating books and complained; it didn’t matter. Amazon has a history of blocking accounts with almost zero recourse for getting back in. Now, you might think, “Well, you’re out free books.” Nooooo. They’re out ALL their books. All gone, inaccessible.
In any other digital endeavour, the first rule is to have backups. But you couldn’t back up your account easily. If the software wouldn’t recognize your account, or you couldn’t get into your account, you were dead in the water. And if your device did a sync, you could literally lose any downloads you did have.
There are also anecdotes about people who went in to read a book only to find that Amazon had had some sort of legal skirmish with an author, the author’s books were removed from Amazon, and guess what, yes, the books were now GONE from your library. You paid for them, Amazon got their money, and 2-3 years later, they revoked your license and it was gone. No warning, nothing. Gone. It was the first time many users realized that they didn’t own their ebooks, they only had a license. It was also news to some US states and foreign countries, which did not recognize that universal licensing arrangement. More skirmishes happened, often around forcing Amazon to return the purchase price they had collected. Most of which got buried in administration, while Amazon discussion boards went nuts, from users who had lost books to authors who had been banned with no appeal process evident.
Enter the attraction of a second way to get your file. You could go into your Amazon library, click on options next to the book, and there was an option designed to let you transfer your ebook files to your Kindle without internet. Since the beginning, Kindles have had an option to use wifi, or even free wireless if they could connect, to transfer your file. Amazon ate the cost of the free wireless connections, as it literally had to — you had to be able to get the file to your Kindle. But for people without good cell reception or wifi, there was an option to download the file manually and then transfer it to your Kindle. Or potentially to any ereader that could read an Amazon file.
For those people who were beyond the “click and read” crowd, this was a godsend. They could download and backup their files. For Adobe, they would copy the files out of an Adobe Digital Editions folder and back those up too. All good, right?
Well, not quite. If all you had was the original file, you were still going to get caught with password-protected versions that might or might not work when you went to open them.
But let’s reverse the order. You have the file. You have a regularly-updated program like Calibre that lets you open the files. You have a regularly-updated decryption plugin like DeDRM. If you take the file, use Calibre, and use DeDRM, you now have the potential to create a protection-free file that you can backup, read forever, and if Amazon deletes files from your account or kills your whole account entirely, you’re still golden.
Do you see the lynchpin for that system? Amazon did.
Amazon removed the ability to download the file
I said above that there were two ways to get the file. Direct transfer between devices using the Amazon apps OR download to your PC and manually transfer the file. Amazon announced in early February an upcoming change, and as of the end of February, you could no longer use the second option. They removed the option to download the file manually.
So, as of March 1st, if you buy an ebook, the ONLY way to read it is directly in your Amazon app on your phone, through the Amazon app for PC/Mac, or directly on your Kindle. You have no file to work with, at least not directly. And if you have another type of ereader that is not linkable directly to your Amazon account? Well, good luck with other sources for ebooks; Amazon will no longer work for you. If you ask Amazon, they’ll tell you to buy a Kindle. Nice.
Yet at first glance, for those wanting to do something manual with the file, this removal of an easy download doesn’t seem to change anything really — the other apps still have a “file” to work with, right? Yes, but not the SAME file. A few years ago, Amazon introduced what they call KFX. Instead of a single ebook file, it is now more like a set of interlinked HTML files. Quite complex, actually. Almost all of the apps use a form of the AZW/AZW3 format, but it comes as a download in KFX-ZIP format, for the most part. Previously, when you downloaded to your PC manually from your library, it came as a SINGLE file. Now, if / when you can find something, it’s a bunch of files.
To put it bluntly? The “click and read” people use the apps, never realizing any of the risk they have in their account until there is a problem. It works, they’ve never had a problem with their account, don’t ever expect to, they don’t care. They got over their Luddite phase enough to use ebooks, or at least 17% of the market did, while the rest do audio or paper. Audio is growing, but the stats vary from 10% to 20% for market share, and then there are ludicrous studies in some areas saying audio is now 80% (mostly due to methodological issues with calculating the use of all-you-can-eat subscriptions).
The next tier of users were the digitally-enabled users who could download things well enough, and use a file manager. This group of people are screwed. If they were doing downloads before, they have NOTHING now for doing either easy backups or DeDRM+backup.
The third tier are those who are mostly concerned with using the tools for their own backups. While industry lobbyists want to argue it’s people wanting to pirate, their methodologies confuse the issue. They twist the term piracy to include anyone who removes a DRM protection option from a file, even if they own it. The law isn’t as clear as the lobbyists want to make it, but that is not the same “piracy” as the sense they then use when they refer to people trading files.
Unlike the image of massive numbers of people hacking and cracking encryption, I can only unlock MY files, the ones that I have legally purchased and have access to, aka the ones that I have the password to be able to open. I can’t download 5000 files and crack them; that’s not what this software does. I can only open MY files and save them in a format that doesn’t have protection. That doesn’t immediately mean that I am going to share it with someone else, upload it to the web, or spread it around to the masses. The vast majority of the people who use DeDRM do not use it to upload files to other people. They do it to be able to back up their own files. Most of them are rightfully scared of uploading files to other sites. A huge portion of them have no idea how a VPN even works, let alone creating fake email accounts, hiding their IP, etc. I’m pretty tech-savvy, in the top 10% of average users, and it is at the limit of my abilities to think I could do it safely, if I were so inclined. I’m not. Nor are most of the users. They don’t mind removing password protection to make a copy, but they aren’t going to pay $10 for an ebook and then upload it to the web with the potential to be sued later. That’s not their risk level.
The people they need to worry about who ARE uploading books available for the masses have 1000s of ways to get to the files without the download button on Amazon. Removing it hurts the average consumer, while doing virtually NOTHING to stop the active pirate.
Reactions after the removal
There have been four fully expected reactions in the community.
The first by Amazon is absolute silence. They are not commenting on it, and they will not comment on it. They’re not stupid when it comes to Communications. They also have a really strong track record of NOT commenting on DRM nor listening to anyone but internal people who say, “Hey, let’s lock it down,” even though their own staff know it will do nothing for anti-piracy efforts.
The second, from the Kindle readers, is generally a mix of “I never used that feature, who cares?” (aka the “click and read” crew) and “This is an absolute outrage, I will never buy another book from Amazon!” People who knew about it and used it for backups are not homogenous in make-up. Some care, some don’t. But there was a strong reaction in forums, with many people arguing for digital boycotts of Amazon. Yeah, right, let me look at that market share again? Oh, yeah, 80%. They don’t care if you order books from Kobo instead. It’s a rounding error. Except, it hit at the exact same time the Orange Noodle in charge of the US started ramping up anti-US sentiment around the world. Dozens of countries have pushed for “buy local” initiatives in response to tariffs, and guess what? The two together seem to have had an impact on Amazon digital sales. A large number of tier 2 and tier 3 authors who publish on Amazon reported huge sales drops in March. JK Rowling, Lee Child, and John Grisham won’t be affected, but everybody else? Buckle up, buttercups. It’ll be interesting to see quarterly earnings reports and sales figures for ebooks, but the book market is always in chaos, so who knows if it will show anything resembling a trend.
But another group responded too.
The hackers have entered the chat
Now, as I said, every time that someone changes their DRM methodology, the real actual hackers figure out what they did and create a response that undoes it. While it might look like magic to the casual users, any software that can encrypt and decrypt something can be copied to see what it does as it works. It’s not like an Enigma machine, where it was hard to get a copy of one in WWII. Amazon’s and Adobe’s Enigma machines are software that you download for free AND you know the keys you have to enter to unlock and use it. That’s a pretty big head start for the hackers.
Except in this case, it wasn’t even a real change in the software. It was more like Amazon trying to hide their lips while they talked, so you couldn’t steal their plays on a football field. Amazon apps still have to download the files, though; they still need something to open. They just made it harder for you to get to the file.
The DeDRM and Calibre people separately looked at the problem and tweaked the existing methodologies for the file. Right now, the file (AZW3) or files (KFX-ZIP) are downloaded automatically to three possible places:
A storage area on your phone for the Apps to open and read;
A file area on your PC (or MAC) for the Kindle for PC (or MAC) to open and read; and,
Directly to your Kindle.
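The second of those locations is the one most workarounds target, because the app keeps its downloads in an ordinary folder on disk. As a rough sketch of the idea (the folder name is an assumption — Kindle for PC commonly uses a “My Kindle Content” folder under Documents on Windows, but layouts vary by version — so this runs over a stand-in directory to stay self-contained):

```python
import tempfile
from pathlib import Path

# Extensions Amazon apps may leave on disk: AZW/AZW3 single files,
# or the newer KFX pieces. Treat this list as illustrative, not exhaustive.
BOOK_SUFFIXES = {".azw", ".azw3", ".kfx", ".kfx-zip"}

def find_book_files(content_dir: Path) -> list[Path]:
    """Walk the app's content folder and list anything that looks like book data."""
    return sorted(p for p in content_dir.rglob("*")
                  if p.suffix.lower() in BOOK_SUFFIXES)

# Stand-in for e.g. Documents/My Kindle Content (real path varies by setup).
with tempfile.TemporaryDirectory() as tmp:
    content = Path(tmp)
    (content / "B00EXAMPLE_EBOK").mkdir()
    (content / "B00EXAMPLE_EBOK" / "B00EXAMPLE_EBOK.azw").write_bytes(b"...")
    (content / "B00EXAMPLE_EBOK" / "metadata.json").write_text("{}")
    found = find_book_files(content)

assert [p.name for p in found] == ["B00EXAMPLE_EBOK.azw"]
```

The scan itself is trivial; the hard part, as described above, is that what lands in that folder is no longer one tidy file but a pile of interlinked pieces that change format as Amazon iterates.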
Phones are often a pain to work with and move files around, particularly if they contain multiple files. There are options available to try and do something on your phone, but most tools and users don’t bother. There is too much friction and variation.
Apps on your PC are a viable input source, but to be honest, they tend to have more complicated options than they did about 4 years ago. Most of the advice on this method has started with an approach that had you turn off the updates to the app, use a version from 2017 that would only download a format with a single file (same as what you would download manually), and so it would work to get the file. But it was a lot more painful than simply just saying DOWNLOAD manually, so few people seemed to bother unless they had a reason to regularly use the Kindle App. And about a year ago or so, the old version of the app that would give you that simple file format stopped reading new DRM titles. The methodology was tweaked, but it seemed to be hit or miss if people could get it working.
However, in the last six weeks, people have revisited the methodology and added extra steps that work with more recent versions of the software. Amazon added friction; the tweakers for DeDRM and Calibre found ways to reduce the friction. Most people choosing this method to get a new file or files seemed to feel that it was closest to the old method — either way, you were downloading directly to your PC. It was a very different methodology, though, and many of the tier 2 users bowed out fast, with tier 3 users struggling to make it work reliably across various configurations.
While I’m primarily talking about Kindle, this method with apps on the PC is exactly what people do with Adobe Digital Editions files, like the ones they get from libraries. They put themselves on a waiting list, they eventually get to the head of the queue, they go to their library website, log in, check out their ebook, and it downloads to their PC. When they open it, it opens in Adobe Digital Editions. This essentially “unlocks” it. If they copy it over to Calibre, they can then transfer it to their Kindle. There are a lot of people who use this method to get the file from their library to their Kindle ereader (and other ereaders) because it simply ticks them off that their library has ebooks in formats that aren’t easy to use. There are a lot of librarians who agree with them. They think if they could find a way to get books directly into the Kindle, they’d be able to boost ebook usage dramatically.
Except here’s the kicker. The books from libraries through Adobe Digital Editions or that are tied to textbook editions (another popular market for Adobe) all come with strict licensing. For libraries, there is usually a very clear time limit for their use. While the book is checked out to you, nobody else can sign it out (just like a physical book). The library bought, say, ten copies, and therefore, ten people can use them at once. When your loan period is up, it automatically expires in your library. It won’t open any longer. At least, more accurately, it won’t open in your APP anymore.
If you removed the DRM to get it to your Kindle, you ALSO removed the licensing controls. This means you now have a DRM-free copy of the file sitting in your Calibre library or on your Kindle, and when the next person goes to read it at the library, they can do so. Your copy stays with you. Because the system didn’t give you an easy way to get it to your device, your transfer looks like piracy. Even if you delete the file from your device and your library when you’re done, so that it seems more like the original intended usage, it’s a hard sell to say it wasn’t piracy.
Hard-core techies came up with an alternative solution that was a bit more radical, which has come up in the recent reactions to the Amazonian change: they hacked their Kindle operating systems so they can side-load software that will read other books like Adobe books with their DRM intact. It’s sort of like they installed the other apps directly on the Kindle, which it wasn’t really designed to do (at least most Kindles; I’m not talking about Amazon tablets). The only two downsides? It’s for hard-core users only in terms of their comfort levels, and if you do it wrong, you brick your Kindle. Oops.
Many people reported a much higher success rate using a physical Kindle. The way it works is that, like the app version, the Kindle downloads the file directly from Amazon. Then, when you plug your Kindle into your PC and load Calibre, you can use your file manager (NOT Calibre, apparently) to copy the DRM-protected file from your Kindle to Calibre (drag and drop rather than importing). I recently took a test file from an ebook creator that was properly password-protected and ran the tests to make sure it looked like a full Kindle file, but “normal processing” failed. I redid it, dragging and dropping it into Calibre. It found the KFX-ZIP file it should have, copied it over, removed the DRM, and left me a KFX file. It opened fine in Calibre. I converted to EPUB, opened it in another app, and it worked fine.
A bunch of the metadata was lost in the process, and a colour image was converted to black and white, but those are relatively minor details. Most Calibre users already know how to update metadata on a file… you basically right-click the title, tell it to edit metadata individually, it opens an “info screen” about the title, and it has an option to go out and get metadata about the title. Mainly this is for the correct title wording, author’s name and order, whether it’s part of a series, ISBN numbers, year published, publisher’s name, genre if available, etc. There are a dozen+ sites from which it pulls info, including WorldCat, Amazon, Google, and GoodReads. And it will even look for covers… if yours was only in black and white previously, and it finds the book online, it will show you other cover images you could import (like from Amazon, Google or GoodReads) in varying resolutions and in colour.
So, where does that leave people?
The “click and read” crew are still in the same place. They have no idea what people care about or why, and won’t until the day they find out that their thousands of dollars in book purchases are gone from their account with zero recourse from Amazon. The only response Amazon gives people is to create a new account — which doesn’t retrieve all their previous purchases. Content creators who rely on their book purchases can literally go out of business with a stroke of a digital pen by an Amazon employee. Some have had to have lawyers contact Amazon on their behalf, and the only “correction” is access to their old account. Until they get another complaint about something else the next time, and their account is locked again.
For the tier 2 types, they seem to have split pretty evenly. About 40% joined the “click and read” crew uneasily. Another 40% figured out how to use the new method. And 20% have permanently moved to other vendors if they can (some authors are only on Amazon).
Overall, the larger digital ebook community responded to Amazon’s disruption of casual piracy and came up with a solution within 4-6 weeks of the change. They even found ways to automate it so more people could do it easily. That’s pretty significant timing.
But it’ll be interesting to see if sales remain down: a potentially significant revenue hit for a change that does nothing to combat intentional, active piracy.
All it did was make it harder for some basic users to make backups.