Looped Network


There’s nothing quite like being on a live call to make you realize that you’re not as savvy with Vim as you thought. I’ll probably be shifting back to Sublime for my main workflow for the foreseeable future.

I had written not very long ago about my progress on the little Write Freely Python client I've been working on to let me create posts from an SSH session to a VPS. I actually had a bit of an “Oh no!” moment just the other day when I realized that I might be able to accomplish what I'm looking to do just by visiting the write.as website from a TUI browser like w3m, but a quick test let me know that JavaScript was required.

This weekend I felt like I didn't have a ton left to work out on the CLI version of the application, at least for a first build. I wanted to round out some of the functionality for pulling back post information so I could get IDs for deleting posts. Based on that, I then needed to update some of the help documentation. With that implemented, though, I wanted to test it from my VPS. Out of the gate, that was a bit of a pain since I don't feel like things are ready to push to something like PyPI yet. So instead, I just cloned my repo, manually created the virtual environment, installed the dependencies, and then created a shell script in my $PATH named writepyly that just contained:

#!/usr/bin/env bash
/home/{username}/code/writepyly/.venv/bin/python /home/{username}/code/writepyly/src/__main__.py "$@"

In this case, {username} holds my actual username on the system. This works great and allowed me to put some of the functionality through its paces. I got to fix a few bugs with things like trying to push posts when I didn't have any configuration files, for example. I apparently like to catch errors and then not actually stop the execution flow. This post, however, is being made from my client on the VPS.
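
For example, that configuration-file bug boiled down to catching the error and then letting execution continue anyway. Here's a minimal sketch of the fix, with a hypothetical load_config helper standing in for the real client code:

```python
import sys

def load_config(path):
    """Load the cached configuration, exiting cleanly when it's missing.

    This is an illustrative stand-in, not the actual writepyly code: the
    point is that the except block terminates instead of falling through.
    """
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        print(f"No configuration found at {path}; run the auth command first.")
        sys.exit(1)  # stop here rather than continuing with no config
```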

After getting the VPS side of things sorted, I went back to start building out the TUI version of the application, which I want to launch when writepyly is executed without any commands provided. In the original branch, that would simply print the help documentation. In this new version, only writepyly help will trigger that while writepyly by itself will cause the TUI to load up.

This will be an interesting learning experience for me since I have zero experience building something like this. I'm using rich as the framework for the TUI, and it honestly seems very easy to work with. I think building out everything except for creating new posts will be super easy. Creating new posts is going to involve basically having a text editor in my application, so I currently have no idea what the hell that will look like. Maybe instead of having a text editor for post creation, I'll just prompt the user from the TUI for the location of the file they want to use. I don't see a ton of value in trying to recreate something like Vim, Emacs, Micro, etc. given that they'll all be better solutions for writing content than anything I would put together. 🤔
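
As a rough sketch of that prompt-for-a-path idea, using rich's built-in Prompt helper (the function and wording here are hypothetical, not actual client code):

```python
# Assumes the `rich` package is installed; Prompt.ask and Console.print
# are rich's own helpers, while ask_for_post_file is a made-up name.
from pathlib import Path

from rich.console import Console
from rich.prompt import Prompt

def ask_for_post_file(console: Console) -> Path:
    """Keep asking until the user provides a path to an existing file."""
    while True:
        raw = Prompt.ask("Path to the file you want to post")
        path = Path(raw).expanduser()
        if path.is_file():
            return path
        console.print(f"[red]{path} doesn't exist; try again.[/red]")
```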

I feel dumb right now, especially after my post about what I've been doing with Neovim. While working on a personal project, I kept having complaints from Neovim about my file having mixed indentation, indents and unindents not aligning, etc. This project has now been worked on in VS Code, Sublime, and Neovim. After struggling to manually rectify things one line at a time in Neovim, I eventually did the smart thing and took to the Internet, where I learned that:

I can easily issue the command:

:set syntax=whitespace

to see which whitespace is made up of tabs and which of spaces. If I've got Neovim set the way I want as far as tabs and spaces are concerned, I can then just issue:

:retab

to make everything match. I guess it's another “better late than never” scenario.

I had written a few months ago on Medium that I was trying to switch from using VS Code as my main editor to Vim. As I mentioned in that post, I've used Vim for years now, but never as my “main” editor for when I need to get serious work done, such as with my job. I also swapped from vanilla Vim to Neovim, which I found to have a few quality of life improvements that I enjoyed. I just couldn't stick with it, though, because I missed how frequently VS Code saved me from myself when I did things like making stupid mistakes that I'd then need to debug manually because my editor wasn't telling me about the problems in advance. Likewise, I got irritated when I had to manually check things like what parameters I needed to pass to a method or where I defined a particular class because I couldn't easily peek at them like I can in VS Code.

That being said, I knew this functionality was possible in Neovim (and Vim), but I just never bothered to check exactly how. During some initial homework on the matter, it seemed like parts of it were fairly simple while other parts were complicated. Ultimately, it turned out that how difficult the process is to set everything up really depends on how difficult you want to make it and how much you want to customize things. I just reproduced the steps I originally followed on my work laptop with my personal laptop to validate my notes prior to making this post, and it probably took me less than 5 minutes.

Plugins and init.vim

When I first started with Neovim, I quite literally told it to just use what I had already set up with Vim as far as configuration and plugins were concerned. I had used Pathogen for my Vim plugins and had my configuration done in ~/.vimrc. Neovim looks for configuration files in ~/.config/nvim, and they can be written in Vimscript, Lua, or a combination of the two. I initially just had my init.vim file with:

set runtimepath^=~/.vim runtimepath+=~/.vim/after
let &packpath = &runtimepath
source ~/.vimrc

This was taken straight from the documentation. It worked fine, but I wanted to keep my configs separate in this case. I started by just copying the content of my existing .vimrc file to ~/.config/nvim/init.vim.

Note: If you're curious, my full Neovim configuration is on GitLab.

Next I wanted a plugin manager. vim-plug seems to be extremely popular and was simple enough to install with the command they provide:

sh -c 'curl -fLo "${XDG_DATA_HOME:-$HOME/.local/share}"/nvim/site/autoload/plug.vim --create-dirs \
       https://raw.githubusercontent.com/junegunn/vim-plug/master/plug.vim'

Then I just updated my init.vim with the plugins I wanted to install:

call plug#begin('~/.config/plugged')
Plug 'https://github.com/joshdick/onedark.vim.git'
Plug 'https://github.com/vim-airline/vim-airline.git'
Plug 'https://github.com/tpope/vim-fugitive.git'
Plug 'https://github.com/PProvost/vim-ps1.git'
Plug 'https://github.com/wakatime/vim-wakatime.git'
Plug 'neovim/nvim-lspconfig'
Plug 'neoclide/coc.nvim', {'branch': 'release'}
call plug#end()

call plug#begin('~/.config/plugged') and call plug#end() indicate what configuration pertains to vim-plug. The path inside of call plug#begin is where plugins get installed to; I could pick whatever arbitrary location I wanted. Plugins can be installed with any valid git link. You can see above that there's a mix of full URLs and a shorthand method. I started off by just copying the links for plugins I already used with Vim (all of the full GitHub links) and then adding the others as I looked up how to do some additional configuration. More on those later.

With init.vim updated, I just needed to close and re-open Neovim for everything to apply, followed by running:

:PlugInstall

This opens a new pane and shows the progress as the indicated plugins are all installed. What's really cool about this is that I can also use :PlugUpdate to update my plugins, rather than going to my plugin folder and using git commands to check for them.

Note On Configuration

I ultimately ended up doing all of my configuration in Vimscript. I would actually prefer to use Lua, but most of the examples I found were using Vimscript. I also have a fairly lengthy function in my original Vim configuration for adding numbers to my tabs that I didn't want to have to rewrite, especially since I wholesale copied it from somewhere online. Depending on what you want to do, however, you may end up with a mix of both, especially if you find some examples in Vimscript and some in Lua. This is entirely possible. Just note there can be only one init file, either init.vim or init.lua. If you create both, which is what I initially did, you'll get a warning each time you open Neovim and only one of them will be loaded.

To use init.vim as a base and then also import some Lua configuration(s), I created a folder for Lua at:

~/.config/nvim/lua

In there, I created a file called basic.lua where I had some configuration. Then, back in init.vim, I just added the following line to tell it to check this file as well:

lua require('basic')

Error Checking

Note: I ended up not using the steps below, so if you want to follow along with exactly what I ended up using, there's no need to actually do any of the steps in this section.

This is where some options come into play. Astute readers may have noticed that the second-to-last plugin in my vim-plug config was:

Plug 'neovim/nvim-lspconfig'

This is for the LSP, or Language Server Protocol. This allows Neovim to talk to various language servers and implement whatever functionality they offer. However, it doesn't actually come with any language servers included, so I needed to get those and configure them as needed. For example, I could install pyright from some other source, like NPM:

npm i -g pyright

And then I needed additional configuration to tell Neovim about this LSP. The samples were in Lua, which is why I initially needed to use Lua configuration alongside Vimscript. The minimal nvim-lspconfig setup for pyright looks like:

require('lspconfig').pyright.setup{}

This actually worked for me with respect to error checking. Opening up a Python file would give me warnings and errors on the fly. However, I didn't get any code completion. I started looking at options for this, but frankly a lot of them seemed pretty involved to set up, and I wanted something relatively simple rather than having to take significant amounts of time configuring my editor any time I use a new machine or want to try out a different language.

Code Completion

Ultimately, I stumbled onto Conquer of Completion, or coc. I don't know why it took me so long to find as it seems to be insanely popular, but better late than never. One of coc's goals is to be as easy to use as doing the same thing in VS Code, and I honestly think they've nailed it. I first installed it via vim-plug in init.vim:

Plug 'neoclide/coc.nvim', {'branch': 'release'}

After restarting Neovim and running :PlugInstall, I could now install language servers straight from Neovim by running :CocInstall commands:

:CocInstall coc-json coc-css coc-html coc-htmldjango coc-pyright

After this, I fired up a Python file and saw that I had both error checking and code completion. There was just one final step.

Key Mapping

Given the wide array of key mapping options and customizations that people do, coc doesn't want to make any assumptions about what key mappings are available and which may already be in use. As a result, there are NO custom mappings by default. Instead, they need to be added to your Neovim configuration just like any other mapping changes. However, the project shares a terrific example configuration with some recommended mappings in their documentation. I legitimately just copied the sample into my existing init.vim file. This adds some extremely useful mappings like:

  • gd to take me to the declaration for what I'm hovering.
  • K to show the documentation for what I'm hovering (based on the docstring for Python, for example).
  • ]g to go to the next error/warning and [g to go to the previous one.
  • Tab and Shift + Tab to move through the options in the code completion floating window.
  • Enter to select the first item in the code completion floating window.
  • A function to remap Ctrl + f and Ctrl + b, which are normally page down and page up, to scroll up and down in floating windows but only if one is present.

And tons of other great stuff. I initially spent about 30 minutes just playing around with some throwaway code to test all of the different options and key mappings. It honestly feels super natural and now gives me the same benefits of VS Code while allowing me to use a much leaner and more productive editor in Neovim.

In my opinion, there's nothing quite like actual projects to really help me learn how to do something. Case in point, I posted last weekend about working on a Python WriteFreely client. I mainly work on it on weekends since, after I finish a day of coding for actual work during the week, I usually don't have the motivation to work on a side project of my own.

While working on it today, I realized that a method in my Post class was actually needed outside of that: check_collection

This method takes the collection passed by the user (think of it like an individual blog, if you're unfamiliar with the API) and validates that it is legitimate. While I initially included this in my Post class, as I added functionality to retrieve a list of posts I realized I needed it in areas where I wouldn't have all of the information to instantiate the Post class.

One immediate option was to just make my Post class more generic so that it could be instantiated and used with less up-front information. However, I didn't particularly like that setup. Instead, I realized that the solution was to simply make a new class, which I called WriteFreely, to serve as a superclass. Then I made my Post class a subclass of it via:

from client import WriteFreely
class Post(WriteFreely):

In this way, my only change to the Post class was to delete the check_collection method, which it now naturally inherits from the WriteFreely parent class. I've honestly never done anything with class inheritance in a real-world scenario before, so to me it's just further proof that I'll never get better experience with something than by simply doing it.
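
A minimal sketch of that refactor (the class and method names come from the project; the bodies are illustrative stand-ins rather than the real client code):

```python
class WriteFreely:
    """Superclass holding functionality shared across commands."""

    def __init__(self, instance_url: str, collection: str):
        self.instance_url = instance_url
        self.collection = collection

    def check_collection(self) -> bool:
        # The real method validates the collection against the API; this
        # stand-in just shows where the shared logic now lives.
        return bool(self.collection)

class Post(WriteFreely):
    """Post-specific operations; check_collection is inherited."""

    def __init__(self, instance_url: str, collection: str, content: str):
        super().__init__(instance_url, collection)
        self.content = content

post = Post("https://write.as", "my-blog", "Hello, world.")
print(post.check_collection())  # inherited from WriteFreely
```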

I recently found myself working on a project which required the GSS-NTLMSSP library. The application is going to be delivered via Kubernetes, so I needed to build a Docker image with this library. The project's build instructions are pretty clear about what the dependencies are, but given that this was going to be a Docker image, I wanted to use Alpine Linux as the base image in order to keep the image size as small as possible. The problem with this is that Alpine is just different enough to require different dependencies, publish their packages under different names, etc.

I started off by just doing things manually: I fired up a local Docker container running the base image and manually installed each package via apk add {package_name} to make troubleshooting easier. Once I had all of the packages from the aforementioned build documentation, I ran ./configure, looked at the errors, figured out which package was missing, installed it, and tried again. After several iterations of this process, ./configure executed successfully and it was time to attempt running make.

make ran for a minute but then would error out with:

undefined reference to 'libintl_dgettext'

This seemed odd to me because while running ./configure I had received an error that msgfmt couldn't be found, and I had installed gettext-dev in order to accommodate that. After some additional package searches, I discovered that the musl-libintl library is also available. I attempted to install that but received an error that it was attempting to modify a file controlled by the gettext-dev package. I uninstalled that via apk del gettext-dev and then ran into another error that—duh—msgfmt was now missing again. I handled that by just installing the vanilla gettext package, not the -dev version, and then finally everything compiled successfully.

The following is the full list of packages that I needed in order to get the build to succeed:

  • autoconf
  • automake
  • build-base
  • docbook-xsl
  • doxygen
  • findutils
  • gettext
  • git
  • krb5-dev
  • libwbclient
  • libtool
  • libxml2
  • libxslt
  • libunistring-dev
  • m4
  • musl-libintl
  • pkgconfig
  • openssl-dev
  • samba-dev
  • zlib-dev

Note that git is included just to clone the repo, and build-base is the meta package I used for compiling C software since just installing something like gcc will not include everything needed.

I'm potentially going to be working on a side project at work which would have me using C#... which I'm not particularly familiar with outside of any similarities it shares with PowerShell, which I use frequently. I did some tutorials on C# about a decade ago of which I remember absolutely nothing. As a result, I've been looking for some tutorials or guides that I can use to get up to speed at least a little bit. Maybe I'm terrible at searching, but C# tutorials are... not good.

Microsoft offers a ton of material, but it mostly seems to assume that you either:

  1. Are completely new to programming.
  2. Are already experienced with C# and just need to know either advanced topics or what's new in the language.

I really need something that shows me the syntax, how code is organized, etc. Then I can start working on some personal projects and familiarizing myself with more of the specifics from there. This gist is very helpful, especially because I do a lot of work in Python right now. However, I don't think it quite had the depth I wanted.

I'm going to try reading Microsoft's slightly longer Tour of C# and see if that provides enough to get me started.

I had previously posted that I was going to work on a CLI client for posting content to Write Freely. I ended up having some distractions on that front as I got busy with things at work and briefly toyed with the idea of writing my client in Go instead of Python. While I may still circle back at some point on the language (I'm debating between using it as an excuse to learn Go or learn Rust at this point), I ultimately decided that—considering I use Python for the majority of my projects at work and I'm not exactly a master with it—it made the most sense to stick with Python for this project. I need to get good with one thing before I start trying to learn the next thing.

I actually made a decent bit of progress tonight, as is visible by the 2 commits I pushed. The big items I got nailed down were:

  • Authentication to provide credentials and get an API key which is locally cached for future use.
  • Logging out to invalidate the aforementioned API key and clear the locally cached content.
  • Pushing posts via a path to a file.
  • Pushing posts via content piped to STDIN.
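
The two push paths can share a single code path by reading from a file when a path is given and falling back to STDIN otherwise. A sketch of that idea (the function name is hypothetical):

```python
import sys
from pathlib import Path
from typing import Optional

def read_post_content(path: Optional[str]) -> str:
    """Return post content from a file path, or from piped STDIN."""
    if path is not None:
        return Path(path).read_text()
    if sys.stdin.isatty():
        # Interactive terminal with no file argument: nothing to post.
        raise SystemExit("No file given and nothing piped to STDIN.")
    return sys.stdin.read()
```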

All of this is subject to change as I'm currently just trying to get something working, and I'll clean things up later. Right now, testing is a little weird because I call __main__ directly, given that I eventually intend to package this.

Regardless, it's cool to start seeing progress come together. I've worked on—and subsequently abandoned—similar projects for things like Mastodon and Tumblr, and I always end up running into hurdles with authentication. I could, and arguably should, skip the complexities of that initially and circle back to it later, but I always end up feeling like I should work in order. Luckily, the API for Write Freely is very straightforward with respect to authentication.

In fact, it's very straightforward with respect to basically everything, and it's very well documented on top of that. Huge kudos for that.
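
For reference, the login flow maps onto Write Freely's documented POST /api/auth/login endpoint roughly like this (a sketch assuming the requests package, with error handling trimmed):

```python
import requests

def login(instance_url: str, alias: str, password: str) -> str:
    """Exchange credentials for an access token to cache locally."""
    resp = requests.post(
        f"{instance_url}/api/auth/login",
        json={"alias": alias, "pass": password},
        timeout=10,
    )
    resp.raise_for_status()
    # Per the API docs, the token lives at data.access_token.
    return resp.json()["data"]["access_token"]
```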

Immediate next steps are going to include:

  • Refining the output when posts are made. I'm not sure if it's possible from a quick glance at the documentation, but if it is I'd like to include a URL to the final post for ease of access to see the finished product.
  • Allowing for post deletion by running a command to query for post titles and IDs followed by a command to remove a post by ID.
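
That query-then-delete flow maps onto Write Freely's documented endpoints roughly like this (a sketch assuming the requests package and a token from a prior login):

```python
import requests

def list_posts(instance_url: str, collection: str, token: str) -> list:
    """Return (id, title) pairs so the user can pick a post to delete."""
    resp = requests.get(
        f"{instance_url}/api/collections/{collection}/posts",
        headers={"Authorization": f"Token {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return [(p["id"], p.get("title", "")) for p in resp.json()["data"]["posts"]]

def delete_post(instance_url: str, post_id: str, token: str) -> None:
    """Remove a post by the ID returned from list_posts."""
    resp = requests.delete(
        f"{instance_url}/api/posts/{post_id}",
        headers={"Authorization": f"Token {token}"},
        timeout=10,
    )
    resp.raise_for_status()
```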

Longer term next steps will be:

  • Adding an interactive mode so that the writepyly command drops the user into a TUI, preferably created with something like Rich so that it looks swanky.
  • Allowing for post management from the TUI.
  • Allowing for post creation from the TUI.

In my next commit I'll probably try to update the project's README to reflect this as well. In the meantime, look for more garbage posts on my test blog that I set up so that I don't pollute this one with nonsense content in order to test my client.

I've recently been working on a project that requires me to use PowerShell. I actually feel relatively fluent in PowerShell since it was my main scripting language for a little over a decade while I worked as a sysadmin in highly Windows-centric environments that involved me automating as much of my job as possible in order to be able to do things like sleep on occasion. However, my PowerShell work rarely (read: never) went beyond single file scripts.

With this project being a decent bit more complicated and with the potential for some of the code to be useful in future projects, I wanted to figure out how to actually break things up into useful chunks, just like I would do when writing something in Python. Fortunately, it wasn't terribly difficult to figure out, though, as is typically the case with PowerShell, the documentation was a bit wanting. I had to put information together from a few different resources and go through a little trial and error to actually figure it out, since Microsoft can never seem to give clear, concise examples of anything PowerShell-related.

The first thing I realized was that I wanted to create my files not with a normal .ps1 extension but with a .psm1 extension to indicate that they were PowerShell modules. Only the file that will be executed directly has a .ps1 extension. I kind of hate this since it makes things more difficult to test individually; in Python, for example, I could just create a main function that executes when:

if __name__ == "__main__":
    main()

Then I can add things in main while building it out that are later ignored when the code is called from elsewhere. PowerShell doesn't offer anything like this, though it's not a huge ordeal. A .psm1 file can contain functions, classes, and/or methods. For example, here's a sample file called helloFunc.psm1 with just a function:

function Write-Hello {
    param([String]$Name)

    Write-Output "Hello, $Name."
}

And here's a file called personClass.psm1 with both a class and a couple of methods:

class Person {
    [String]$Name
    [int]$Age

    # Constructor.
    Person([String]$Name, [int]$Age) {
        $this.Name = $Name
        $this.Age = $Age
    }

    [String]GreetPerson([String]$PreferredGreeting) {
        if($PreferredGreeting -eq "" -or $null -eq $PreferredGreeting) {
            $PreferredGreeting = "Hello"
        }
        return "$PreferredGreeting, $($this.Name). I can't believe you're $($this.Age) years old."
    }

    [void]HaveBirthday() {
        $this.Age++
    }
}

Neither file has an entrypoint, though that's expected since they're designed to be called from somewhere else. Here's the main.ps1 file which ties them all together:

#!/usr/bin/env pwsh
using module ./personClass.psm1
using module ./helloFunc.psm1

Write-Hello -Name "Garrett"

$me = [Person]::new("Garrett", 9000)
Write-Output $me.GreetPerson("Salutations")

The most important things here are the two using statements, which specify that I'm going to import the two aforementioned files. Once I do this, I can then call classes, methods, and functions in those files directly.

I had to laugh when I received the following after attempting to run some Go code I've been working on:

The last error in particular was funny:

too many errors

Don't I know it.