Looped Network


Lately I've been lamenting the death of the functional scrollbar. Remember back in the day when scrollbars looked like this?

Folders in macOS 9

The scrollbars were prominent because they were meant to be used. You were expected to grab one with your mouse and move it up and down. Today, scrollbars have a much different appearance and expectation of functionality. This is what they look like in Firefox on Gnome 43 running under Fedora 37:

Super narrow scrollbar

That's just terrible. The idea here is that the scrollbar serves as more of a visual indicator of how much content is on the page rather than as something to be used directly. The problem is that I frequently find myself wanting to use scrollbars, even in 2023. For example, as a trackball user it should be faster for me to grab a scrollbar and drag it to the top of a large body of text than it is for me to make a dozen revolutions of the scroll wheel. Trying to actually click on a bar that narrow is a challenge, though. The scrollbar will become a little wider when my mouse is over it, but 1.) not by much and 2.) it defeats the purpose if my mouse has to be over the super narrow bar in order to make it... the slightest bit less narrow.

Here are two terrible images I captured with my phone showcasing the problem, since getting the scrollbars to even appear and then capturing a screenshot that also contains my cursor is basically impossible. I think this also does a nice job of illustrating just how narrow the default scrollbars are relative to my mouse cursor:

Mouse cursor next to an extremely narrow scrollbar

Once my cursor is already over it, then it looks like this... which is still not good, even if it was helpful... which it isn't:

Scrollbar with a cursor already over it

Firefox is the most egregious offender among the applications I regularly use. Gnome Terminal, by comparison, offers a slightly wider scrollbar that is actually possible to use, and the scrollbars in GIMP are virtually identical to the macOS 9 ones and are quite nice. I honestly don't even think about the ones in Firefox if I'm using my laptop in an undocked fashion since kinetic scrolling on the trackpad makes it trivial to quickly move between the top and bottom of a long page. When using a mouse that doesn't offer kinetic scrolling in the scroll wheel, though, slightly more usable scrollbars would go a long way. They don't have to be just a visual indicator.
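
For what it's worth, GNOME does expose a couple of knobs here. The snippet below is only a sketch of the usual approach, and the pixel values are arbitrary: disabling overlay scrollbars via gsettings and bumping the slider size in a user GTK CSS file helps GTK apps like Gnome Terminal, though Firefox draws its own scrollbars and may or may not pick these up depending on the version.

# Keep GTK scrollbars visible instead of auto-hiding overlays
gsettings set org.gnome.desktop.interface overlay-scrolling false

# Widen the slider for GTK 3 apps (values are arbitrary; restart apps to apply)
mkdir -p ~/.config/gtk-3.0
cat >> ~/.config/gtk-3.0/gtk.css << 'EOF'
scrollbar slider {
    min-width: 15px;
    min-height: 15px;
}
EOF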

For the majority of my life, I've only had one Internet service provider available where I lived. That ranged from a single dial-up provider when I was growing up to a single coaxial cable provider across 4 different apartments in the span of 11 years. That recently changed when I went from having a single provider (coax) where I currently live to having 3 providers (coax, 5G, and fiber.)

While the additional options on top of coax became available at the tail end of 2022, I stuck with my coax provider because I was getting a fairly steep discount from them. When that discount was about to end in February of 2023, however, I decided it was time to check out some other options. While my coax service had been (largely) reliable, the non-discounted price was a bit steep for the service offered (200 Mbps down.) This was also the same provider I had for nearly a decade in the previous town I lived in, and an absolutely terrible customer service experience with them left me with a permanent negative attitude toward them.

5G

I first decided to try T-Mobile's 5G home service. I use T-Mobile as my cellular provider (I think that's a prerequisite to sign up for their home service), and in my area 5G absolutely screams, regularly pulling down 500 Mbps or more. When I came home and my phone switched from 5G to my home WiFi, it was a pretty sizable speed downgrade.

While I understand that a lot of people are still a bit leery of using a cellular network for their home Internet, I had some previous experience with this in my last job. I worked for a network company, and one of the fun toys for my home lab was a Cradlepoint IBR900. I basically double-NAT'd my home network to have all of my work-from-home gear sitting behind the Cradlepoint. In the event my terrestrial WAN link was unavailable, the Cradlepoint would fail over to AT&T's LTE network (5G wasn't really available when I got the device in 2019.) I spent a not-insignificant amount of time working from the Cradlepoint during outages, and even on LTE it was a smooth experience. As a result, I had high hopes for 5G.

Setup

One of the reasons I opted to try T-Mobile first is that there was really no setup involved from an installation standpoint, so if things didn't work out it would be easy to just bail on the service. They sent me a 5G gateway the next day. All I had to do was turn it on, install their mobile app (not exactly my favorite thing), and walk through a few steps. Since I viewed this as “kicking the tires” and still had my coax service, my plan was to simply turn off the router connected to my coax modem and make the T-Mobile SSID and PSK the same as what I was currently using so that everything I had would just automatically reconnect to the new gateway. That was a good idea, but for some absurd reason T-Mobile decided to disallow spaces in the SSID. I can't fathom why this would be the case since having spaces in an SSID is completely valid, but there was no means by which to work around it... meaning I had to reconnect all of my devices to a different wireless network to effectively test it. While that's not a huge deal for computers, tablets, and phones, it's truly painful for IoT devices and streaming sticks.

Performance

Performance was quite good, though admittedly not as good as I was expecting. As mentioned previously, my phone (an iPhone 13) regularly pulls down 500 Mbps or more. My 5G home gateway would usually get around 200 Mbps. I could quite literally set my phone right next to the gateway; on WiFi I would get 200 Mbps down, while swapping the phone to 5G would result in 500 Mbps down.

Here's a speed test taken from my phone while connected to WiFi:

T-Mobile Home speeds

And this is an example of relatively slow 5G speeds on my phone itself... which are still significantly better than the home service:

T-Mobile 5G from iPhone 13

I'm guessing there must be some kind of traffic shaping that's prioritizing mobile throughput over home throughput, but that's just wild speculation on my part. Living by myself, 200 Mbps is more than enough, though, and was comparable to my coax service that I was paying more for.

I had been more concerned about latency and jitter, but those proved to be non-issues. Conference calls on the web, including watching screenshares and sharing my own screen, worked without any issues. RDP and SSH sessions across the Internet were similarly as performant as I would expect. Even streaming video, streaming music, and playing online games posed no problems.

Problems

The problem I eventually ran into, though, was that I was struggling to maintain a consistent connection to a VPN I use to access a development environment for work. How frequently I need to be connected to this VPN varies based upon what I'm working on at any given moment, but when I need to access the VPN, I need to access the VPN. The behavior I saw is that it would work for some variable amount of time after connecting; sometimes it would be an hour, sometimes it would be an entire day. But occasionally it would suddenly begin to drop 50% or more of the packets, and response times would be measured in tens of seconds rather than milliseconds. This would persist for anywhere from 10 minutes to an hour, at which point it would usually clear up. From there it would again continue to work for a variable amount of time before causing havoc again.

When the issue transpired, only the VPN was impacted. Connectivity to the public web was fine, as was throughput, latency, and jitter. Troubleshooting this was fairly easy since when the issue occurred, just trying to ping the public-facing gateway for the VPN (not even being connected to it) would show dropped packets and latency. I saw the same behavior when I connected my laptop to my phone's WiFi hotspot, which was naturally also using T-Mobile's network. Then I connected to a VPS I have via SSH and attempted to ping the same VPN gateway. This would show no packet loss and 40 ms response times. Likewise, Slack wasn't blowing up with people reporting VPN issues, so clearly I was the only one experiencing a problem.
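
If anyone wants to replicate that sort of check, it boils down to something like the following; the VPN gateway hostname here is made up, and my-vps is just a stand-in for whatever box you have sitting on a different network.

# From my desktop on T-Mobile: heavy packet loss and multi-second response times
ping -c 20 vpn-gw.example.com

# From a VPS on another provider's network: clean responses around 40 ms
ssh my-vps 'ping -c 20 vpn-gw.example.com'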

Support

After this, I called up T-Mobile to ask them what was going on. The short of it, which I'm honestly a bit impressed that they were very upfront about, was that corporate VPNs often don't work reliably on their network. The customer service representative I spoke to basically said there wasn't anything they could do from their end, and that if this was going to be a problem it would be best for me to use a different service.

On one hand, I really appreciate the candor. On the other hand, if this is such a known issue then I think some sort of disclaimer when signing up for the service would be warranted, such as noting that corporate VPNs may not be usable. If nothing else, I would've tested this facet much more rigorously out of the gate. Even the CSR said that when she works from home for T-Mobile she can't use their home Internet service due to VPN-related issues. Regardless, she offered to make notes on my account so that if I needed to cancel after procuring another service they wouldn't try to hassle me into staying subscribed to a service that wouldn't work for me.

Aftermath

Almost as soon as that call was over I reached out to the local provider which had recently lit up fiber services in my area and got an appointment for later that week. There was already a drop in my apartment, but I would still need a strand run from that to wherever I would actually egress. Pricing was fairly aggressive, especially because—as a new service in the area—they were offering some special deals that I was able to take advantage of. Once the service was connected and I verified that everything worked, I called T-Mobile back.

I have to give the original CSR credit, as when I called back and explained why I needed to cancel my service, the new CSR didn't try to talk me into staying. She promptly processed the cancellation and provided me with a shipping label to send the 5G gateway back.

Conclusion

On the whole, I'm honestly a little bit bummed that T-Mobile's service didn't work out for me as I really liked the idea of it. The price was good and—VPN connectivity aside—the performance was great. Humorously enough, just a few weeks after I cancelled my service, I found out that my work is decommissioning the VPN I had problems with as we move our development lab to a new environment. That's not to say T-Mobile's service wouldn't have the same problems with the VPN for the new environment, but it's just amusing timing to me.

In the future, I'd absolutely consider T-Mobile's home service again if I find myself shopping around. I'd probably just try to spend some time working from my phone's hotspot—at least a few days—to see if there are any obvious issues prior to opting in, though.

While going down a rabbit hole on Mastodon the other day, I stumbled across a post from SimpleLogin saying that they were now a part of the Proton family. I've been a fan of ProtonMail for a long time, and now also use both Proton Drive and Proton Calendar regularly (especially now that they have iOS apps.) While I've not used a lot of email alias services like this before, I have used the one Apple offers through iCloud. I don't really use my iCloud email address for anything other than apps which want to authenticate me through iCloud, which I usually accept out of convenience but opt to use a throwaway email address to avoid being spammed by them months down the line when I've stopped using their app. This is especially big for games that I know I'll play for a few weeks or months and then never think about again.

Today, I happened to be feeling like blogging again—hence why this site is once again live—and thought I would check out a few options in the blogging space to see if I wanted to swap to something different. While trying out Ghost, I remembered SimpleLogin and decided to give it a shot. While it lives outside of the Proton umbrella (using it involves going to their own site rather than having it available in the Proton app switcher), I was still able to use SSO to log in via my Proton account. This also showed that I had a “Premium” account. Given that this is normally $30 USD per year billed annually, I assume this must be due to my existing Proton subscription. It's always nice to get more services and features without having to pay anything additional!

I quickly generated a new alias and used that to sign up for Ghost, which worked smoothly. I get the option to either create my own alias or have an alias automatically generated via either a UUID or random words. There are additional settings for opting to suffix every alias with random numbers and which of the dozen or so domains offered by the service will be used as the default, several of which are denoted as being “premium” domains that can only be used with paid accounts. I'm curious if a service like this runs into issues where sign-ups with particular domains are blocked. I've seen this behavior with many of the domains used by Guerrilla Mail, for example, when I wanted to sign up for a throwaway account for something that I had zero interest in maintaining long term. I know I was concerned about the same when Apple introduced their alias service, though I figured most providers wouldn't be able to turn away millions of Apple users. I'll see over time if this becomes problematic or not.

Along with managing the aliases I've created, the dashboard page also provides metrics from the past 2 weeks on things like the number of messages forwarded, blocked, replied to, etc. I do find it nice that replying is an option, though I doubt it's one that I'll use. With my personal email account, at least, I very rarely actually send email. In the instance that I am sending email, I want the recipient to see my actual address the overwhelming majority of the time.

I also found it nice that SimpleLogin offers an iOS app that I quickly installed. At a quick glance it seems to offer essentially all of the same functionality as the web frontend.

I'm really not a fan of basically anything from Google these days. While I was once an Android and Chromebook user with a Google Home Mini and Google WiFi, I've since done a 180 and cut almost every Google product out of my life simply because I don't enjoy how invasive their products tend to be. Google Maps is terrific, for example, but I don't think that's worth giving what amounts to an advertising company detailed tracking information about everywhere that I go.

That being said, there are a handful of Google products I still use, albeit with slight modifications from my previous behavior. One example is Google Voice. One of the things I find most unpleasant in my life is having to speak on the phone; I really dislike having phone conversations and ultimately strive to treat my phone as a miniature tablet in my pocket which is incapable of making or receiving phone calls. In order to facilitate that, I try to avoid giving out my actual phone number where possible. If my real phone number rings, I want it to be a call that I actually need to answer. For example, it should be from work, family, friends, etc. I don't want spam calls coming to my actual number.

Use

To help me with this endeavor, I use the Google Voice number I created more than a decade ago when I moved to a new state. I initially created the number because I was looking for a new job, and I wanted a number in the same area code. This was well before almost anyone was okay with the idea of working remotely, and I found that I had much better success in applying for positions if my number appeared local. Times have changed quite a bit since then, but I've still always kept the same number active as something that I can give out for times when I have to provide a phone number but don't necessarily want to give my real one.

I keep my Google Voice number completely segmented from my actual number in that I don't even have what it receives forward anywhere; Google Voice is the first and last stop for it. I also don't install the Google Voice app on my phone; I typically log in to the web app once or twice a week to check what junk that number received and see if anything is legitimate. A solid 99% of the time it's not.

Are You Still Watching?

In what I can only describe as Google's nod to Netflix, I receive this message about once a year:

Screenshot of an email from Google Voice stating that my number will expire in 30 days and be available for other users.

To keep things as complicated as possible, this doesn't even occur on a predictable yearly schedule. The last time I received it was January of 2022. I'm now receiving it again in October of 2022. While I check the number regularly, I don't actively send anything from it. To Google, however, this doesn't count as “using” the account. To me, this is pure madness for a few reasons.

Usage

My biggest issue is that Google defines usage as making calls or sending messages:

If you’d like to keep your Google Voice number [number removed], you will need to make calls or send text messages by November 7, 2022 by logging in to your account or using the Google Voice app on Android or iOS.

I very intentionally don't have the Google Voice app installed on any of my devices because I'm not interested in feeding data to Google, but I log in to the web app once a week to check for any missed calls or messages. For anything legitimate, I'll call or text back from my actual number. However, that isn't enough to Google; Google needs me to generate activity in order to count as active. So I have an SMS thread with a friend of mine that is literally just me saying something to the effect that I'm only sending a message to keep my Google Voice account active. To highlight how ridiculous this is, my friend replied to today's message with:

You gonna turn off my calendar feature cause I haven't created an event on it in a while?

Seems like a bit of a jump to assume that because I haven't created a new event in a while, I'm not using the product... especially when I know a company like Google is keeping tabs on exactly how frequently I access the service. The big difference between Google Voice and Google Calendar, of course, is that the calendar is associated with my email account while Google Voice requires a phone number. But is it so different? They wouldn't allow someone else to make a new calendar with my address, right?

The Yahoo Scenario

Readers old enough may recall a shit-storm Yahoo caused back in 2013 when they decided they were going to take idle email addresses and put those back into the available pool of addresses which could be used for new accounts. The backlash was immediate and intense, to the point where Yahoo decided to scrap what was a supremely terrible decision.

The big issue at hand was the fact that anyone who was able to lay claim to a previously leveraged email address would be able to receive any emails which had been destined for it. If you think about the context of old newsletters, it's not a particularly big deal. However, if you think about anything sensitive, things can quickly start to become more dire. Within the context of a phone number, that seems even worse. Phone numbers are often used as a fairly shitty form of 2FA, and criminals go to great lengths to perform SIM swapping attacks in order to reset accounts. If Google is going to allow for numbers to be re-used, someone just has to get lucky for them to be able to access a treasure trove of data from someone else.

The natural counter-argument is that if something is important, the user will be getting messages from their Google Voice account... so it won't be considered idle. However, remember that for Google, only sending messages means you're using the account. Receiving them doesn't matter. A great example of where this may come into play is something like Namecheap, a domain registrar which I would 100% recommend avoiding. Their only 2FA option is via SMS. So if you were to set up Google Voice for it, you may get a few messages when buying a domain, logging in to make your initial DNS changes, etc. Then you may not use it for another year or more depending on how long your renewal is. In that time, Google could expire the number, meaning that not only could you not log in to your account, but someone else may be able to do so instead if they get access to the number.

Conclusion

Ultimately, having a phone number that isn't my real number can be a pretty big boon for keeping the amount of spam blowing up my phone to a more manageable level. However, I feel like using Google Voice for anything actually intended to be important is a bit like taking your digital life into your hands. While it's easy to think that you'll definitely log in to the account and be able to send something in order to keep the account from being treated as idle, the right set of circumstances could easily alter that. While I'll continue to keep my Google Voice number active as a bit of a trash receptacle for phone calls, it's definitely not something to treat as reliable. I'm just curious how long it will be before I receive yet another message informing me that my account will expire in 30 days.

I'm excited that I've hit a bit of a milestone with my WriteFreely client. I basically have the package to a point where I'm content with it and feel like I can actually start using it on the regular. The code is available on GitHub; I know I referenced the repo on GitLab in my original blog post, but that changed a few weeks ago for reasons that will be another post for another time.

This has been a few months in the making, as I spent a not insignificant amount of time working on it each weekend since July. I also feel like I'm starting to put a bow on it at a good time since I've had some other personal project ideas crop up that I'd like to start working on that also offer the benefit of helping me learn a new language, C#, which I'm looking to learn for work.

What's also kind of special to me is that this is really the first “bigger” personal project that I've actually completed. I've written plenty of code in my free time, but nothing that really amounted to more than a basic script for something. I can also see why projects like this die so frequently; there were plenty of times where, after not looking at the code for a week, I just really didn't want to spend any time diving back into it in order to figure out where I left off, what I needed to change, or what design decisions I needed to fix.

The Client

The client is accessible in 2 different ways:

  1. A CLI client executed from the command line with a plethora of sub-commands, similar to something like kubectl.
  2. An interactive TUI client, made significantly less bland-looking via rich.

I struggled initially to think of a good reason why the TUI client would exist, especially since I was never going to create a text editor in my client better than what people would already be using. As is so often the case, though, I was really overthinking the situation, and I ultimately realized I could just use whatever was already set as the $EDITOR for the content itself and just have my code act as a wrapper to manage that content. Win-win. I honestly now use the TUI version more than anything else!
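
The wrapper pattern itself is dead simple. In shell terms it's roughly the following sketch (the actual client does this in Python with more error handling, so treat the details as illustrative):

# Create a scratch file, open it in whatever editor the user already prefers,
# then read the result back to hand off to the WriteFreely API.
tmpfile=$(mktemp --suffix=.md)
${EDITOR:-vi} "$tmpfile"
post_body=$(cat "$tmpfile")
rm -f "$tmpfile"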

Design

Probably the best thing that I learned from this experience, along with updating my Python skills, was around design. I'm often very guilty of immediately diving into writing code without really thinking through the bigger picture of what I'd like to accomplish and how my decisions will impact that. A great example is the fact that I wanted to have both a CLI and TUI option for the application, but I initially worked on only the CLI side. While that's probably for the best to save me from juggling too many items at once, I made decisions that often only worked for the CLI version of the application, such as assuming that if I ran into an error I could just exit the application with a non-zero code. Obviously that then didn't work for the TUI version where I wouldn't want the user to be kicked out due to an error; I'd just want to note what went wrong and offer some options for what should be done while keeping the application active. This resulted in my having to change the behavior for a lot of my classes. While it wasn't a big deal in the end, I still could've saved myself from a lot of work with better planning.

Wrap Up

I'm really happy with where the client is now, though if I happen to think of anything which I'd like to add I'll certainly continue working on it in the future. By the same token, if anyone other than me actually ends up using this and has a feature request, it's certainly something I'll consider. I'm just excited now to have what I feel is a solid option for posting to WriteFreely from the CLI, something I'm actually doing right now since this post is being made via my client. 🙂

It just dawned on me that I've finally made good progress in decreasing the number of domains that I own. I've historically purchased domains on a whim; when I'm bored or sitting on a bar stool somewhere, I'll grab my phone and check if random domains that pop into my head are available. While the overwhelming majority of them would be taken, I'd occasionally strike upon something that hadn't been snatched up. I'd typically always buy them... and then do nothing with them the vast majority of the time. They were basically like ICANN Pokémon.

A little over a year ago, I decided that I would start to let some of my domains lapse through attrition. I turned off automatic renewal and figured that if I didn't come up with a use for a particular domain by the time I started getting alerts about the fact that it was expiring, then I didn't really need it in the first place. At the time I owned 9 domains. To date, I've let 5 of them expire. Another was something I used for a skunk works project at my job that ended up becoming fairly critical to their workflow, so I transferred that domain to the company when I left that job. (Humorously, I never expensed this domain — even after it became “production” — because it renewed on the same date as laifu.moe, and I didn't want to submit the receipt showing both domains. 😅) That leaves me with:

  1. This domain.
  2. The aforementioned laifu.moe, the domain I've actually owned the longest, and the only domain I've ever purchased as a “joke” and actually done something with.
  3. A random domain I bought prior to deciding that I'd rather just use looped.network as my primary domain. I'm letting this one expire, though it has until next summer.

It's typically been easy for me to justify the expense of domains because I tend to use (relatively) inexpensive TLDs. I believe the most expensive domain I've ever purchased was a .io that ran around $40 USD for a year. While TLDs which are $10 to $15 a year are a bit more palatable, they still add up when I've got a large number of them... and that cost is doing absolutely nothing if I do nothing with the domain. Today I'd only consider purchasing a domain if I have an immediate use for it; I no longer buy any just because it's a fun name that I want to hold on to. I've actually had a handful of scenarios where I thought of decent domain names and discovered they were available, but so far I've been walking the straight and narrow without buying them.

While it's a little silly to keep a domain for a single web page, I don't see myself ditching laifu.moe any time soon since my inner weeb likes it too much. looped.network hosts several websites (like this one) and a few servers/services that I run, so it's also pretty locked in. At this point, a new idea would have to be something pretty outstanding for me not to just host it on another subdomain of looped.network.

As the title alludes to, this morning I tried updating my Pinebook Pro running Manjaro Linux through my normal method:

sudo pacman -Syu

Today, this resulted in an error message about the libibus package:

warning: could not fully load metadata for package libibus-1.5.26-2
error: failed to prepare transaction (invalid or corrupted package)

Fun. I first wanted to see if it was really just this package that was causing the problem or if there were other issues. Being a pacman noob, I just used the UI to mark updates to the libibus package as ignored. Once I did that, all of the other packages installed successfully. The update then prompted for a reboot, which I gladly did since I figured I'd see if that made any difference. Once my laptop was back up and running, though, executing pacman -Syu again still gave the same error related to libibus.
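
(As an aside, I believe the CLI equivalent of ignoring a package through the UI is pacman's --ignore flag, along the lines of the following.)

sudo pacman -Syu --ignore libibus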

Some searches online showed that a mirror with a bad package could be the problem, so I updated my mirrors via:

sudo pacman-mirrors -f5

This didn't solve the problem, but the new mirror gave me a different error message:

error: could not open file /var/lib/pacman/local/libibus-1.5.26-2/desc

With some more searches online, I saw a few people on the Manjaro forums say that simply creating those files was enough to fix similar errors they had with other packages. Creating the file above just resulted in an error about a second file being missing, so I ultimately ended up running:

sudo touch /var/lib/pacman/local/libibus-1.5.26-2/desc
sudo touch /var/lib/pacman/local/libibus-1.5.26-2/file

Now running an update allowed things to progress a little further, but I got a slew of errors complaining about libibus header files (.h) existing on the filesystem. My next less-than-well-thought-out idea was to just remove the package and try installing it fresh. I tried running:

sudo pacman -R libibus

Fortunately, Manjaro didn't let me do this, instead telling me that it was a dependency for gnome-shell. Yeah, removing that would've been bad. It was back to searching online. The next tip I stumbled across was to try clearing the pacman cache and then install updates with:

sudo pacman -Scc
sudo pacman -Syyu

This unfortunately gave me the same error about the header files. However, the same forum thread had another recommendation to run:

sudo pacman -Syyu --overwrite '*'

Curious about exactly what this would do prior to running it, I checked out the man page for pacman:

Bypass file conflict checks and overwrite conflicting files. If the package that is about to be installed contains files that are already installed and match glob, this option will cause all those files to be overwritten. Using --overwrite will not allow overwriting a directory with a file or installing packages with conflicting files and directories. Multiple patterns can be specified by separating them with a comma. May be specified multiple times. Patterns can be negated, such that files matching them will not be overwritten, by prefixing them with an exclamation mark. Subsequent matches will override previous ones. A leading literal exclamation mark or backslash needs to be escaped.

I took this to mean that instead of complaining about the header files that already existed on the filesystem, it would simply overwrite them since my glob was just * to match anything. I ran this, and sure enough everything was fine.

I mainly run Manjaro on my Pinebook Pro just because it's such a first class citizen there with tons of support. It's now the default when new Pinebook devices ship; back when I got mine it was still coming with Debian, though I quickly moved it over after seeing how in love the community was with Manjaro. I do find that I run into more random issues like this on Manjaro than I do with Fedora on my other laptop or Debian on my servers, for example, and at times it can be a little frustrating. I didn't really want to spend a chunk of my Saturday morning troubleshooting this, for example. But while there seem to be more issues with Manjaro, the documentation and community are so good that usually after a little time digging in, the solution can always be found. I've yet to run into any issue where the current installation was a lost cause forcing me to reinstall the operating system.

Just a few moments ago I needed to extract the audio component out of a video file into some type of standalone audio file, like .mp3. Since I've been working with Audacity to record audio, I figured maybe it had some capability for ripping it out of video.

My initial searches gave me results like this which quickly made it clear that while this is technically possible, it requires some add-ins that I didn't really want to mess around with. However, since the add-in mentioned in that video was for FFmpeg, I realized I could just use that directly.

I didn't have ffmpeg installed, but that was easy enough to rectify on Fedora 36.

sudo dnf install ffmpeg-free

Then I needed to extract the audio. I first checked how it was encoded in the video with:

ffprobe my_video.mp4

After sifting through the output, I saw that it was encoded as aac:

Stream #0:1[0x2]: Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, s16, 317 kb/s (default)
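
Since the audio is already sitting in the container as a discrete aac stream, one option would have been to simply copy it out untouched, something along the lines of:

# -vn drops the video, -c:a copy keeps the existing aac stream without re-encoding
ffmpeg -i my_video.mp4 -vn -c:a copy audio.m4a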

Rather than that, I wanted to simultaneously re-encode the audio as MP3. Another quick search showed me some great resources. Ultimately, I ended up doing:

ffmpeg -i my_video.mp4 -q:a 0 -map a bourbon.mp3

As mentioned in the Stack Overflow post, the -q:a 0 parameter allows for a variable bitrate while -map a says to ignore everything else except the audio.

Just a few moments later, and my MP3 was successfully encoded.

I recently ran across an interesting error with my development Kubernetes cluster, and while I still have no idea what I may have done to cause it, I at least figured out how to rectify it. As is commonly the case, most of the things I end up deploying to Kubernetes simply log to standard out so that I can view logs with the kubectl logs command. While running this against a particular deployment, though, I received an error:

failed to try resolving symlinks

Looking at the details of the error message, it seemed that running a command like:

kubectl logs -f -n {namespace} {podname}

looks for a symbolic link at the following path:

/var/log/pods/{namespace}_{pod-uuid}/{namespace}

The end file itself seems to be something extremely simple, like a number followed by a .log suffix. In my case, it was 4.log. That symbolic link then points to a file at:

/var/lib/docker/containers/{uuid}/{uuid}-json.log

Where the uuid is the UUID of the container in question.

Note: The directory above isn’t even viewable without being root, so depending on your setup you may need to use sudo ls to be able to look at what’s there.
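
Before creating anything by hand, it's also worth confirming which links are actually dangling. Something like the following should do it, since -xtype l matches symlinks whose target can't be resolved (again, sudo is needed to traverse these directories):

sudo find /var/log/pods -xtype l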

I was able to open the -json.log file and validate that it had the information I needed, so I just had to create the missing symlink. I did that with:

sudo ln -s /var/lib/docker/containers/{uuid}/{uuid}-json.log 4.log

Since my shell was already in the /var/log/pods/{namespace}_{pod-uuid}/{namespace} directory, I didn’t need to give the full path to the actual link location, just specify the relative file of 4.log.

Sure enough, after creating this I was able to successfully run kubectl logs against the previously broken pod.

Lately I've been working through getting WinRM connectivity working between a Linux container and a bunch of Windows servers. I'm using the venerable pywinrm library. It works great, but there was a decent bit of setup for the underlying host to make it work that I had been unfamiliar with; you can't just create a client object, plug in some credentials, and go. A big part of this for my setup was configuring krb5 to be able to speak to Active Directory appropriately.
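
For reference, the krb5 piece of that setup boils down to a fairly small /etc/krb5.conf. Below is roughly the shape of what I mean, written as the entrypoint script might write it; the KDC hostname is made up, and with working DNS you can often lean on dns_lookup_kdc instead of listing KDCs explicitly.

# Hypothetical /etc/krb5.conf laid down by the entrypoint script; hostnames are made up
cat > /etc/krb5.conf << 'EOF'
[libdefaults]
    default_realm = SUB.DOMAIN.COM
    dns_lookup_kdc = true
    dns_lookup_realm = false

[realms]
    SUB.DOMAIN.COM = {
        kdc = dc01.sub.domain.com
    }

[domain_realm]
    .sub.domain.com = SUB.DOMAIN.COM
    sub.domain.com = SUB.DOMAIN.COM
EOF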

My setup involves a container that runs an SSH server which another, external service actually SSHs into in order to execute various pieces of code. So my idea was to take the entrypoint script that configures the SSH server and have it also:

  1. Create a keytab file.
  2. Use it to get a TGT.
  3. Create a cron job to keep it refreshed.

Let's pretend the AD account I had been given to use was:

Username@sub.domain.com

In my manual testing, this worked fine after I was prompted for the password:

kinit Username@SUB.DOMAIN.COM

If you're completely new to this, note that it's actually critical that the domain (more appropriately called the “realm” in this case) is in all capital letters. If I run this manually by execing my way into a container, I get a TGT just like I'd expect. I can view it via:

klist -e

Unfortunately, things didn't go smoothly when I tried to use a keytab file. I created one in my entrypoint shell script via a function that runs:

{
    echo "addent -password -p Username@SUB.DOMAIN.COM -k 1 -e aes256-cts-hmac-sha1-96"
    sleep 1
    echo <password>
    sleep 1
    echo "wkt /file.keytab"
} | ktutil &> /dev/null

The keytab file is created successfully, but as soon as I try to leverage it with...

kinit Username@SUB.DOMAIN.COM -kt /file.keytab

...I receive a Kerberos preauthentication error. After much confusion and searching around online, I finally found an article that got me on the right track.

The article discusses the fact that an assumption is being made under the hood that the salt being used to encrypt the contents of the keytab file is the realm concatenated together with the user's samAccountName (aka “shortname”). So for my sample account, the salt value would be:

SUB.DOMAIN.COMUsername

The problem highlighted by the article is that when you authenticate via the UserPrincipalName format (e.g.: username@domain.com) rather than the shortname format (e.g.: domain\username), another assumption is made that the prefix of the UPN is the same as the shortname. This is very commonly not the case; in a previous life where I actually was the AD sysadmin, I had shortnames of first initial and last name while the UPNs were actually firstname dot lastname. So for example, my UPN was:

looped.network@domain.com

While my samAccountName was:

lnetwork

If this type of mismatch happens, you can use -s when running addent to specify the salt. After checking AD, I verified in my current case that the username was the same for both properties... but that in both places it was completely lowercase. I can't say why it was given to me with the first character capitalized, but after re-trying with username@SUB.DOMAIN.COM, everything was successful. This made sense to me: AD doesn't care about the username's capitalization when it authenticates (hence why manually running kinit and typing the password worked), but the salt is case-sensitive, so my keytab had been created with keys derived from the wrong salt.
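
As for step 3 from the list earlier, the refresh itself is nothing fancy. A crontab entry along these lines is the general idea (the schedule is arbitrary, and the keytab path matches the one created above):

# Renew the TGT every 8 hours using the keytab created by the entrypoint script
0 */8 * * * kinit -kt /file.keytab username@SUB.DOMAIN.COM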