<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://technicallyshane.com/feed.xml" rel="self" type="application/atom+xml" /><link href="https://technicallyshane.com/" rel="alternate" type="text/html" /><updated>2026-03-20T13:45:45+00:00</updated><id>https://technicallyshane.com/feed.xml</id><title type="html">Shane’s blog</title><subtitle>I wouldn&apos;t worry about it.</subtitle><entry><title type="html">Morning Briefing</title><link href="https://technicallyshane.com/2026/03/20/morning-briefing.html" rel="alternate" type="text/html" title="Morning Briefing" /><published>2026-03-20T13:44:00+00:00</published><updated>2026-03-20T13:44:00+00:00</updated><id>https://technicallyshane.com/2026/03/20/morning-briefing</id><content type="html" xml:base="https://technicallyshane.com/2026/03/20/morning-briefing.html"><![CDATA[<p>I started my sabbatical with the hope of writing some software that I could set up as a passive income. That didn’t quite happen.</p>

<p>I did write some software which I use every day: a Docker registry wrapper with e-commerce features (I’m not sure I highly recommend using it, as I won’t find time to maintain it), a video game which needs much more time spent on it, and continued work on my radio software for my art project, 24 Hours of Radio.</p>

<p>I started my sabbatical in June, and whilst I was away from work a massive AI-aided software development revolution took place. The registry wrapper that took me months could be done in hours when vibe coded. My return to work has been fun, but having to pick up this tech quite quickly has left me with a bit of shock. With senior leadership and Tech Leads remarking, “in a couple years no one will be writing code”. And weirdly, I believe them. I needed a weather widget for another project, and instead of it taking a weekend to put together, Cursor did it for me in two hours. I feel some guilt about that, but I don’t wash my clothes by hand anymore either.</p>

<p>This post isn’t about AI, by the way.</p>

<h2 id="journalism">Journalism</h2>

<p>Filling the silence on <a href="https://radio.shane.computer">24 Hours of Radio</a> isn’t going at any particular rate. In fact, on some days it will be less than the day before because I’ve ended up doing a daily show that I replace each day.</p>

<p>The Morning Briefing started at the end of December as a Radio exclusive, where I spoke about the weather for cycling, an event in Nottingham, and some tech news. The tech news would later turn into Nottingham news, as I realised investigating Bitcoin and hacked social networks wasn’t all that interesting to me.</p>

<p>Since then, it felt a bit of a shame to not have that content so ethemeral, so I’ve been working on www.morningbriefing.org.</p>

<p>Over the past week or so, I’ve tried to merge the event and the news item so I end up with more traditional articles with one topic. Still an event but the “news” for the Briefing is trying to find an angle for the event that’s highly relevant to Nottingham. Even if you’re not going, you’ve still learnt something about your home.</p>

<p>I started that project at the end of my sabbatical, over Christmas. If I had started it in June, I imagine my life would be quite different right now.</p>

<p>Journalism - even the kind of event and civic news I’m doing - takes a lot of work. Take today’s Briefing as an example: <a href="https://www.morningbriefing.org/from-riawakening-to-riabellion-ria-lina-returns-to-nottingham-stage/">a comedy event in Nottingham</a>. Yesterday was an especially busy day which found me still writing that Briefing at midnight. (For context, I’m usually in bed by 11.) Due to that, I didn’t have enough time to find a strong enough hook. I did mention the <em>Nottingham Comedy Festival</em>, but it would have been nice to call them and ask more about them.</p>

<p>The investigation is very, very fun. There’s the same kind of problem solving that you find in software development. I’ve had to dig around masses of data to find part of the story I need, where the data isn’t code by a freedom of information request about bicycle thefts. Digging for a hook is quite fun - that’s very much “what does the end user need from this?”.</p>

<p>I’ve been able to speak to lots of people that I’d never have the opportunity to in a normal day-to-day routine. People who run <a href="https://www.morningbriefing.org/base-51s-outburst-supported-by-panthers-at-pride-night/">charities</a>, <a href="https://www.morningbriefing.org/nottingham-frontrunners-is-more-than-running/">community clubs</a>, <a href="https://www.morningbriefing.org/the-dice-box-sneinton-hopes-to-open-this-spring/">business owners</a>, and <a href="https://www.morningbriefing.org/afternoon-at-the-proms-brings-big-band-classics-to-beeston-today/">musicians</a>. Turns out, given an excuse, people love talking about their thing. And it’s fun hearing it.</p>

<p>There are proper, civic, news stories around Nottingham that need covering. Nottinghamshire County Council recently signed off on a report deciding how the council should use AI, but skipped over all the “risks” the report mentioned. Nottingham Police recently told me that <a href="https://www.morningbriefing.org/police-seize-five-more-unlawful-e-bikes/">“they prefer education over action”</a> when it comes to unlawful e-scooters on the roads (and pavements), despite there being three quite serious incidents involving them including a death recently. Nottingham City are making life very difficult for the people of Victoria Market, in an attempt to force them out of their lease. The War Memorial cemetery is literally falling apart, with subsided graves and toppled headstones. Just today, someone emailed me with concerns that the City Council is reluctant to give a statement on a particular matter - a story of genuine civic importance.</p>

<p>None of these stories are covered well enough by Reach PLC, the only mass market journalistic organisation around.</p>

<p>They all require time to investigate, on top of the daily Briefing. I can cram them in at weekends, but not enough to do them justice (literally, in some cases).</p>

<p>It’s just frustrating because I’m quite enjoying it, I genuinely think it’s a worthwhile project for Nottingham, but I have not enough time.</p>

<p>There are solutions I’m thinking on, but none which aren’t seismic in nature. At this point, I feel it would but stupid, rather than brave, to go all in on this. Maybe I’m wrong on that.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[I started my sabbatical with the hope of writing some software that I could set up as a passive income. That didn’t quite happen.]]></summary></entry><entry><title type="html">Stop Staring at the Code</title><link href="https://technicallyshane.com/2025/12/13/stop-staring-at-the-code.html" rel="alternate" type="text/html" title="Stop Staring at the Code" /><published>2025-12-13T22:35:00+00:00</published><updated>2025-12-13T22:35:00+00:00</updated><id>https://technicallyshane.com/2025/12/13/stop-staring-at-the-code</id><content type="html" xml:base="https://technicallyshane.com/2025/12/13/stop-staring-at-the-code.html"><![CDATA[<p>I have this feature in my Docker registry tool that lists the tags that belong to a blob. You can change the tag in the select box to update the copy-and-pasteable commands in the templates below. That seems to have stopped working. Lets figure out why.</p>

<p><img src="/assets/tollport-tags-only-listing-default.png" alt="A dropdown with only 'latest' shown" /></p>

<p>Double checking there actually is a bug is the first thing to do here. Are there actually multiple tags for that image? Lets force some to make sure.</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>shane@macbook tollport/ <span class="o">(</span>main<span class="o">)</span> % docker build <span class="nb">.</span> <span class="nt">--tag</span> localhost:5500/tollport:latest <span class="nt">--tag</span> localhost:5500/tollport:foobar
<span class="o">[</span>... succesful build]
shane@macbook tollport/ <span class="o">(</span>main<span class="o">)</span> % docker image <span class="nb">ls</span> | <span class="nb">grep</span> <span class="s2">"localhost:5500/tollport"</span>
localhost:5500/tollport                         foobar            e56d8f086145   41 seconds ago   996MB
localhost:5500/tollport                         latest            e56d8f086145   41 seconds ago   996MB
shane@macbook tollport/ <span class="o">(</span>main<span class="o">)</span> % docker push localhost:5500/tollport <span class="nt">--all-tags</span>
The push refers to repository <span class="o">[</span>localhost:5500/tollport]
4ce9944a219f: Pushed
<span class="o">[</span>...]
foobar: digest: sha256:5ad2945213f1dbfea3d478bf7e27704c5e7e71a4779c3675ce280a6e71878e03 size: 2422
4ce9944a219f: Pushed
<span class="o">[</span>...]
latest: digest: sha256:5ad2945213f1dbfea3d478bf7e27704c5e7e71a4779c3675ce280a6e71878e03 size: 2422
</code></pre></div></div>

<p>Definitely two tags there now. How’s that dropdown looking? → Still just the default ‘latest’ value in there. That confirms our bug.</p>

<p>How am I expecting that dropdown list to get populated?</p>

<div class="language-ruby highlighter-rouge"><div class="highlight"><pre class="highlight"><code>  <span class="c1"># app/controllers/repositories_controller.rb</span>
  <span class="k">def</span> <span class="nf">show</span>
    <span class="n">tags</span> <span class="o">=</span> <span class="no">Distribution</span><span class="o">::</span><span class="no">RepositoryTags</span><span class="p">.</span><span class="nf">new</span><span class="p">(</span><span class="n">repository</span><span class="p">,</span> <span class="n">current_user</span><span class="p">).</span><span class="nf">tags</span>
    <span class="n">tags</span> <span class="o">=</span> <span class="p">[</span> <span class="s2">"latest"</span> <span class="p">]</span> <span class="k">if</span> <span class="n">tags</span><span class="p">.</span><span class="nf">empty?</span>

    <span class="n">render</span> <span class="ss">:show</span><span class="p">,</span> <span class="ss">locals: </span><span class="p">{</span>
      <span class="n">repository</span><span class="p">:,</span>
      <span class="ss">tags:
    </span><span class="p">}</span>
  <span class="k">end</span>
  
  <span class="c1"># app/lib/distribution/repository_tags.rb</span>
  <span class="no">ENDPOINT</span> <span class="o">=</span> <span class="s2">"/v2/&lt;name&gt;/tags/list"</span><span class="p">.</span><span class="nf">freeze</span>

  <span class="k">def</span> <span class="nf">tags</span>
    <span class="n">tags_response</span> <span class="o">=</span> <span class="n">client</span><span class="p">.</span><span class="nf">get</span><span class="p">(</span><span class="n">endpoint</span><span class="p">)</span>

    <span class="n">tags_response</span><span class="p">.</span><span class="nf">fetch</span><span class="p">(</span><span class="s2">"tags"</span><span class="p">,</span> <span class="p">[])</span>
  <span class="k">end</span>
</code></pre></div></div>

<p>Since we’re seeing the default “latest” value only, we can fairly assume that we’re hitting the second argument of that <code class="language-plaintext highlighter-rouge">fetch</code>: whatever the client is returning, it doesn’t have any tags in it. What is it actually returning? Some jiggery-pokery is required to see the request response, so excuse this.</p>

<div class="language-ruby highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">shane</span><span class="vi">@macbook</span> <span class="n">tollport</span><span class="o">/</span> <span class="p">(</span><span class="n">main</span><span class="p">)</span> <span class="o">%</span> <span class="n">docker</span> <span class="n">compose</span> <span class="nb">exec</span> <span class="o">-</span><span class="n">it</span> <span class="n">web</span> <span class="n">bin</span><span class="o">/</span><span class="n">rails</span> <span class="n">c</span>
<span class="no">Loading</span> <span class="n">development</span> <span class="n">environment</span> <span class="p">(</span><span class="no">Rails</span> <span class="mf">8.1</span><span class="o">.</span><span class="mi">1</span><span class="p">)</span>
<span class="n">tollport</span><span class="p">(</span><span class="n">dev</span><span class="p">):</span><span class="mo">001</span><span class="o">&gt;</span> <span class="n">repository</span> <span class="o">=</span> <span class="no">Repository</span><span class="p">.</span><span class="nf">where</span><span class="p">(</span><span class="ss">slug: </span><span class="s1">'tollport'</span><span class="p">).</span><span class="nf">first</span>
<span class="n">tollport</span><span class="p">(</span><span class="n">dev</span><span class="p">):</span><span class="mo">002</span><span class="o">&gt;</span> <span class="no">Distribution</span><span class="o">::</span><span class="no">RepositoryTags</span><span class="p">.</span><span class="nf">new</span><span class="p">(</span><span class="n">repository</span><span class="p">,</span> <span class="n">repository</span><span class="p">.</span><span class="nf">maintainer</span><span class="p">).</span><span class="nf">tags</span> <span class="c1"># whilst we're here, lets confirm...</span>
<span class="p">[</span><span class="o">...</span><span class="p">]</span>
<span class="o">=&gt;</span> <span class="p">[]</span> <span class="c1"># yep!</span>
<span class="n">tollport</span><span class="p">(</span><span class="n">dev</span><span class="p">):</span><span class="mo">003</span><span class="o">&gt;</span> <span class="no">Distribution</span><span class="o">::</span><span class="no">RepositoryTags</span><span class="p">.</span><span class="nf">new</span><span class="p">(</span><span class="n">repository</span><span class="p">,</span> <span class="n">repository</span><span class="p">.</span><span class="nf">maintainer</span><span class="p">).</span><span class="nf">send</span><span class="p">(</span><span class="ss">:client</span><span class="p">).</span><span class="nf">get</span><span class="p">(</span><span class="s2">"/v2/tollport/tags/list"</span><span class="p">)</span>
  <span class="no">PersonalAccessToken</span> <span class="no">Create</span> <span class="p">(</span><span class="mf">1.7</span><span class="n">ms</span><span class="p">)</span>  <span class="no">INSERT</span> <span class="no">INTO</span> <span class="s2">"personal_access_tokens"</span> <span class="p">(</span><span class="s2">"user_id"</span><span class="p">,</span> <span class="s2">"token"</span><span class="p">,</span> <span class="s2">"created_at"</span><span class="p">,</span> <span class="s2">"updated_at"</span><span class="p">,</span> <span class="s2">"name"</span><span class="p">,</span> <span class="s2">"blanket"</span><span class="p">,</span> <span class="s2">"pull_access"</span><span class="p">,</span> <span class="s2">"push_access"</span><span class="p">)</span> <span class="no">VALUES</span> <span class="p">(</span><span class="vg">$1</span><span class="p">,</span> <span class="vg">$2</span><span class="p">,</span> <span class="vg">$3</span><span class="p">,</span> <span class="vg">$4</span><span class="p">,</span> <span class="vg">$5</span><span class="p">,</span> <span class="vg">$6</span><span class="p">,</span> <span class="vg">$7</span><span class="p">,</span> <span class="vg">$8</span><span class="p">)</span> <span class="no">RETURNING</span> <span class="s2">"id"</span>  <span class="p">[[</span><span class="s2">"user_id"</span><span class="p">,</span> <span class="mi">1</span><span class="p">],</span> <span class="p">[</span><span class="s2">"token"</span><span class="p">,</span> <span class="s2">"[FILTERED]"</span><span class="p">],</span> <span class="p">[</span><span class="s2">"created_at"</span><span class="p">,</span> <span class="s2">"2025-12-13 20:58:37.615714"</span><span class="p">],</span> <span class="p">[</span><span class="s2">"updated_at"</span><span class="p">,</span> <span class="s2">"2025-12-13 20:58:37.615714"</span><span class="p">],</span> <span class="p">[</span><span class="s2">"name"</span><span class="p">,</span> <span class="s2">"Service token"</span><span class="p">],</span> <span class="p">[</span><span class="s2">"blanket"</span><span class="p">,</span> <span class="kp">true</span><span class="p">],</span> <span class="p">[</span><span class="s2">"pull_access"</span><span class="p">,</span> <span class="s2">"{}"</span><span class="p">],</span> <span class="p">[</span><span class="s2">"push_access"</span><span class="p">,</span> <span class="s2">"{}"</span><span class="p">]]</span>
  <span class="p">[</span><span class="o">...</span> <span class="n">some</span> <span class="n">loading</span> <span class="n">of</span> <span class="n">users</span> <span class="n">and</span> <span class="n">repositories</span><span class="p">]</span>
<span class="err">🛳️</span> <span class="no">Authenticating</span> <span class="n">with</span> <span class="p">{</span><span class="ss">:iss</span><span class="o">=&gt;</span><span class="s2">"tollport"</span><span class="p">,</span> <span class="ss">:sub</span><span class="o">=&gt;</span><span class="s2">"1"</span><span class="p">,</span> <span class="ss">:aud</span><span class="o">=&gt;</span><span class="s2">"tollport-registry"</span><span class="p">,</span> <span class="ss">:exp</span><span class="o">=&gt;</span><span class="mi">1765663117</span><span class="p">,</span> <span class="ss">:nbf</span><span class="o">=&gt;</span><span class="mi">1765659517</span><span class="p">,</span> <span class="ss">:iat</span><span class="o">=&gt;</span><span class="mi">1765659517</span><span class="p">,</span> <span class="ss">:jti</span><span class="o">=&gt;</span><span class="s2">"4222bd92-41d8-4f9e-91ab-8dc1275f46bd"</span><span class="p">,</span> <span class="ss">:access</span><span class="o">=&gt;</span><span class="p">[{</span><span class="ss">:type</span><span class="o">=&gt;</span><span class="s2">"repository"</span><span class="p">,</span> <span class="ss">:name</span><span class="o">=&gt;</span><span class="s2">"tollport"</span><span class="p">,</span> <span class="ss">:actions</span><span class="o">=&gt;</span><span class="p">[</span><span class="s2">"pull"</span><span class="p">,</span> <span class="s2">"push"</span><span class="p">]}]}</span>
  <span class="no">PersonalAccessToken</span> <span class="no">Destroy</span> <span class="p">(</span><span class="mf">1.6</span><span class="n">ms</span><span class="p">)</span>  <span class="no">DELETE</span> <span class="no">FROM</span> <span class="s2">"personal_access_tokens"</span> <span class="no">WHERE</span> <span class="s2">"personal_access_tokens"</span><span class="o">.</span><span class="s2">"id"</span> <span class="o">=</span> <span class="vg">$1</span>  <span class="p">[[</span><span class="s2">"id"</span><span class="p">,</span> <span class="mi">82</span><span class="p">]]</span>
<span class="o">=&gt;</span> <span class="p">{</span><span class="s2">"errors"</span><span class="o">=&gt;</span><span class="p">[{</span><span class="s2">"code"</span><span class="o">=&gt;</span><span class="s2">"UNAUTHORIZED"</span><span class="p">,</span> <span class="s2">"message"</span><span class="o">=&gt;</span><span class="s2">"authentication required"</span><span class="p">,</span> <span class="s2">"detail"</span><span class="o">=&gt;</span><span class="p">[{</span><span class="s2">"Type"</span><span class="o">=&gt;</span><span class="s2">"repository"</span><span class="p">,</span> <span class="s2">"Class"</span><span class="o">=&gt;</span><span class="s2">""</span><span class="p">,</span> <span class="s2">"Name"</span><span class="o">=&gt;</span><span class="s2">"tollport"</span><span class="p">,</span> <span class="s2">"Action"</span><span class="o">=&gt;</span><span class="s2">"pull"</span><span class="p">}]}]}</span>
</code></pre></div></div>

<p>Well, there we go. That answers a few questions. There certainly <em>are no <code class="language-plaintext highlighter-rouge">tags</code> returned</em>. Plus, we know that the auth has gone wrong somewhere.</p>

<p>This is a real heart sinking feeling because getting the auth working in the first place was tonnes of trying to understand how to implement JWTs. I guess at this point we have to recap how all that works. From memory for the moment. (A note from a few minutes in the future: writing this up was a struggle, which might be indicative of the bug and is certainly indicative that I’m not on solid ground here.)</p>

<ul>
  <li><strong>PersonalAccessTokens</strong> are a Tollport concept, much like an API key. Users create these keys and allow them to have push and/or pull access to specific repositories.</li>
  <li>The user logs into their docker client using their username and an API key.</li>
  <li>When <code class="language-plaintext highlighter-rouge">docker</code> CLI attempts an action, Tollport’s registry will send the PAT to Tollport, who returns a JWT with permissions detailed.</li>
</ul>

<p>That flow (using docker CLI) is all working, as we saw I’m able to push images just fine. This flow, where the server is using the Distribution API rather than going via the CLI like a user would, is a little different.</p>

<ol>
  <li>A request is made to <code class="language-plaintext highlighter-rouge">"/v2/&lt;name&gt;/tags/list"</code>, with an Authorization header.
    <ul>
      <li>The auth header is a JWT token, listing the permissions required. Specifically, the permission required is “type: registry, name: catalog”, with pull access to each of the relevant repositories.</li>
      <li>As this is a special, internal call I do something sneaky: I create a “blanket” PAT token. I call these service tokens.</li>
      <li>That’s a token where the user (or the system in this case) doesn’t care to define granular access and will just allow access to everything.</li>
      <li>After it’s used, it gets deleted.
    - The registry then decodes the … (narrator: it is at this moment that Shane realised the bug) JWT and sends the PAT token across to the normal auth endpoint, just to double check it’s all kosher.</li>
      <li>So, the token is generated by the Railsy bit of Tollport, and then sent to the registry, and then the registry confirms it with Tollport.</li>
    </ul>
  </li>
</ol>

<p>Here’s what I think the issue is: I delete the special service token before that cycle is finished using it.</p>

<p>Indeed take a look at this:</p>

<div class="language-ruby highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">module</span> <span class="nn">Distribution</span>
  <span class="k">class</span> <span class="nc">RepositoryTags</span>
    <span class="p">[</span><span class="o">...</span><span class="p">]</span>

    <span class="k">def</span> <span class="nf">tags</span>
      <span class="n">tags_response</span> <span class="o">=</span> <span class="n">client</span><span class="p">.</span><span class="nf">get</span><span class="p">(</span><span class="n">endpoint</span><span class="p">)</span>

      <span class="n">tags_response</span><span class="p">.</span><span class="nf">fetch</span><span class="p">(</span><span class="s2">"tags"</span><span class="p">,</span> <span class="p">[])</span>
    <span class="k">end</span>

    <span class="kp">private</span>

    <span class="k">def</span> <span class="nf">client</span>
      <span class="vi">@client</span> <span class="o">||=</span> <span class="no">Client</span><span class="p">.</span><span class="nf">new</span><span class="p">(</span><span class="no">ENV</span><span class="p">.</span><span class="nf">fetch</span><span class="p">(</span><span class="s2">"TOLLPORT_REGISTRY_URI"</span><span class="p">),</span> <span class="n">auth_token</span><span class="p">:)</span>
    <span class="k">end</span>

	<span class="p">[</span><span class="o">...</span><span class="p">]</span>

    <span class="k">def</span> <span class="nf">auth_token</span>
      <span class="n">user</span><span class="p">.</span><span class="nf">with_service_token</span> <span class="k">do</span> <span class="o">|</span><span class="n">personal_access_token</span><span class="o">|</span>
        <span class="no">AuthToken</span><span class="p">.</span><span class="nf">new</span><span class="p">(</span><span class="n">personal_access_token</span><span class="p">:).</span><span class="nf">token</span><span class="p">(</span><span class="n">repository</span><span class="p">.</span><span class="nf">name</span><span class="p">)</span>
      <span class="k">end</span>
    <span class="k">end</span>
  <span class="k">end</span>
<span class="k">end</span>
</code></pre></div></div>

<p><code class="language-plaintext highlighter-rouge">auth_token</code> gets the service token from the User, and generates the JWT with it. Lets see what that does.</p>

<div class="language-ruby highlighter-rouge"><div class="highlight"><pre class="highlight"><code>  <span class="k">def</span> <span class="nf">with_service_token</span>
    <span class="n">token</span> <span class="o">=</span> <span class="n">personal_access_tokens</span><span class="p">.</span><span class="nf">create</span><span class="p">(</span><span class="ss">name: </span><span class="s2">"Service token"</span><span class="p">,</span> <span class="ss">blanket: </span><span class="kp">true</span><span class="p">)</span>

    <span class="k">yield</span> <span class="n">token</span>

    <span class="n">token</span><span class="p">.</span><span class="nf">destroy</span>
  <span class="k">end</span>
</code></pre></div></div>

<p>Yikes! So before I get a chance to send it to the registry, the PAT required to authenticate the JWT has been deleted.</p>

<p>Lets confirm that by… not deleting the token for a moment.</p>

<p><img src="/assets/tollport-multiple-tags-available.png" alt="A dropdown with foobar and latest shown" /></p>

<p>Okay - that’s good news! Lets add back the destroy and move around the scope of the key.</p>

<div class="language-ruby highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">shane</span><span class="vi">@macbook</span> <span class="n">tollport</span><span class="o">/</span> <span class="p">(</span><span class="n">main</span><span class="p">)</span> <span class="o">%</span> <span class="n">git</span> <span class="n">diff</span>
<span class="n">diff</span> <span class="o">--</span><span class="n">git</span> <span class="n">a</span><span class="o">/</span><span class="n">app</span><span class="o">/</span><span class="n">lib</span><span class="o">/</span><span class="n">distribution</span><span class="o">/</span><span class="n">repository_tags</span><span class="p">.</span><span class="nf">rb</span> <span class="n">b</span><span class="o">/</span><span class="n">app</span><span class="o">/</span><span class="n">lib</span><span class="o">/</span><span class="n">distribution</span><span class="o">/</span><span class="n">repository_tags</span><span class="p">.</span><span class="nf">rb</span>
<span class="n">index</span> <span class="n">fd2fa65</span><span class="o">..</span><span class="mi">4</span><span class="n">ff20af</span> <span class="mi">100644</span>
<span class="o">---</span> <span class="n">a</span><span class="o">/</span><span class="n">app</span><span class="o">/</span><span class="n">lib</span><span class="o">/</span><span class="n">distribution</span><span class="o">/</span><span class="n">repository_tags</span><span class="p">.</span><span class="nf">rb</span>
<span class="o">+++</span> <span class="n">b</span><span class="o">/</span><span class="n">app</span><span class="o">/</span><span class="n">lib</span><span class="o">/</span><span class="n">distribution</span><span class="o">/</span><span class="n">repository_tags</span><span class="p">.</span><span class="nf">rb</span>
<span class="err">@@</span> <span class="o">-</span><span class="mi">18</span><span class="p">,</span><span class="mi">17</span> <span class="o">+</span><span class="mi">18</span><span class="p">,</span><span class="mi">20</span> <span class="err">@@</span> <span class="k">module</span> <span class="nn">Distribution</span>
     <span class="kp">private</span>

     <span class="k">def</span> <span class="nf">client</span>
<span class="o">-</span>      <span class="vi">@client</span> <span class="o">||=</span> <span class="no">Client</span><span class="p">.</span><span class="nf">new</span><span class="p">(</span><span class="no">ENV</span><span class="p">.</span><span class="nf">fetch</span><span class="p">(</span><span class="s2">"TOLLPORT_REGISTRY_URI"</span><span class="p">),</span> <span class="n">auth_token</span><span class="p">:)</span>
<span class="o">+</span>      <span class="n">user</span><span class="p">.</span><span class="nf">with_service_token</span> <span class="k">do</span> <span class="o">|</span><span class="n">personal_access_token</span><span class="o">|</span>
<span class="o">+</span>        <span class="no">Client</span><span class="p">.</span><span class="nf">new</span><span class="p">(</span>
<span class="o">+</span>          <span class="no">ENV</span><span class="p">.</span><span class="nf">fetch</span><span class="p">(</span><span class="s2">"TOLLPORT_REGISTRY_URI"</span><span class="p">),</span>
<span class="o">+</span>          <span class="ss">auth_token: </span><span class="n">auth_token</span><span class="p">(</span><span class="n">personal_access_token</span><span class="p">:)</span>
<span class="o">+</span>        <span class="p">)</span>
<span class="o">+</span>      <span class="k">end</span>
     <span class="k">end</span>

     <span class="k">def</span> <span class="nf">endpoint</span>
       <span class="no">ENDPOINT</span><span class="p">.</span><span class="nf">gsub</span><span class="p">(</span><span class="s2">"&lt;name&gt;"</span><span class="p">,</span> <span class="n">repository</span><span class="p">.</span><span class="nf">name</span><span class="p">)</span>
     <span class="k">end</span>

<span class="o">-</span>    <span class="k">def</span> <span class="nf">auth_token</span>
<span class="o">-</span>      <span class="n">user</span><span class="p">.</span><span class="nf">with_service_token</span> <span class="k">do</span> <span class="o">|</span><span class="n">personal_access_token</span><span class="o">|</span>
<span class="o">-</span>        <span class="no">AuthToken</span><span class="p">.</span><span class="nf">new</span><span class="p">(</span><span class="n">personal_access_token</span><span class="p">:).</span><span class="nf">token</span><span class="p">(</span><span class="n">repository</span><span class="p">.</span><span class="nf">name</span><span class="p">)</span>
<span class="o">-</span>      <span class="k">end</span>
<span class="o">+</span>    <span class="k">def</span> <span class="nf">auth_token</span><span class="p">(</span><span class="n">personal_access_token</span><span class="p">:)</span>
<span class="o">+</span>      <span class="no">AuthToken</span><span class="p">.</span><span class="nf">new</span><span class="p">(</span><span class="n">personal_access_token</span><span class="p">:).</span><span class="nf">token</span><span class="p">(</span><span class="n">repository</span><span class="p">.</span><span class="nf">name</span><span class="p">)</span>
     <span class="k">end</span>
   <span class="k">end</span>
 <span class="k">end</span>
<span class="n">diff</span> <span class="o">--</span><span class="n">git</span> <span class="n">a</span><span class="o">/</span><span class="n">app</span><span class="o">/</span><span class="n">models</span><span class="o">/</span><span class="n">user</span><span class="p">.</span><span class="nf">rb</span> <span class="n">b</span><span class="o">/</span><span class="n">app</span><span class="o">/</span><span class="n">models</span><span class="o">/</span><span class="n">user</span><span class="p">.</span><span class="nf">rb</span>
<span class="n">index</span> <span class="mi">66</span><span class="n">db028</span><span class="o">..</span><span class="n">c0e3304</span> <span class="mi">100644</span>
<span class="o">---</span> <span class="n">a</span><span class="o">/</span><span class="n">app</span><span class="o">/</span><span class="n">models</span><span class="o">/</span><span class="n">user</span><span class="p">.</span><span class="nf">rb</span>
<span class="o">+++</span> <span class="n">b</span><span class="o">/</span><span class="n">app</span><span class="o">/</span><span class="n">models</span><span class="o">/</span><span class="n">user</span><span class="p">.</span><span class="nf">rb</span>
<span class="err">@@</span> <span class="o">-</span><span class="mi">27</span><span class="p">,</span><span class="mi">9</span> <span class="o">+</span><span class="mi">27</span><span class="p">,</span><span class="mi">11</span> <span class="err">@@</span> <span class="k">class</span> <span class="nc">User</span> <span class="o">&lt;</span> <span class="no">ApplicationRecord</span>
   <span class="k">def</span> <span class="nf">with_service_token</span>
     <span class="n">token</span> <span class="o">=</span> <span class="n">personal_access_tokens</span><span class="p">.</span><span class="nf">create</span><span class="p">(</span><span class="ss">name: </span><span class="s2">"Service token"</span><span class="p">,</span> <span class="ss">blanket: </span><span class="kp">true</span><span class="p">)</span>

<span class="o">-</span>    <span class="k">yield</span> <span class="n">token</span>
<span class="o">+</span>    <span class="n">result</span> <span class="o">=</span> <span class="k">yield</span> <span class="n">token</span>

     <span class="n">token</span><span class="p">.</span><span class="nf">destroy</span>
<span class="o">+</span>
<span class="o">+</span>    <span class="n">result</span>
   <span class="k">end</span>
</code></pre></div></div>

<p>Nice nice. That works.</p>

<h2 id="okay">Okay?</h2>

<p>There’s no way that this particular bug will occur for someone else (or at least, no way someone searching for a solution will find this page). So the question of “why did I bother writing this?” is a good one. The lesson I’m hoping to spread here is that debugging is just about asking questions of your code.</p>

<p>Here’s my flow:</p>

<ol>
  <li>Prove to yourself the bug is real and that it hasn’t been misrepresented.
    <ul>
      <li>It feels dreadful for spend an hour debugging an issue that was reported only to realise… there’s no bug at all.</li>
      <li>If you cannot prove to yourself that the bug is real, you also cannot be sure you’ve resolved the bug.
    - Ask a question of the code.
    - Find evidence to answer the question.</li>
      <li>If you make an assumption, and skip finding evidence, there’s a Sod’s Law chance that that is exactly where your bug is.
    - Lead that answer to the next question.
    - Keep going until it’s clear what happened.</li>
    </ul>
  </li>
</ol>

<p>Sometimes, you’ll find yourself staring at the code. In these times (and you may not realise this at the time) you’re simply hoping that the bug will jump out at you. Unguided code reading is a waste of time. Debugging is an <em>active</em> activity, not a passive one.</p>

<p>Ask a question and answer that question.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[I have this feature in my Docker registry tool that lists the tags that belong to a blob. You can change the tag in the select box to update the copy-and-pasteable commands in the templates below. That seems to have stopped working. Lets figure out why.]]></summary></entry><entry><title type="html">What went wrong with my SolidQueue installation?</title><link href="https://technicallyshane.com/2025/12/12/what-went-wrong-with-my-solidqueue-installation.html" rel="alternate" type="text/html" title="What went wrong with my SolidQueue installation?" /><published>2025-12-12T00:45:00+00:00</published><updated>2025-12-12T00:45:00+00:00</updated><id>https://technicallyshane.com/2025/12/12/what-went-wrong-with-my-solidqueue-installation</id><content type="html" xml:base="https://technicallyshane.com/2025/12/12/what-went-wrong-with-my-solidqueue-installation.html"><![CDATA[<p>You’re joining me <em>in medias res</em> as I’ve completely broken my application
whilst trying to add SolidQueue. I’m writing this <em>now</em> for two reasons. First,
I keep intending to write up debugging sessions and then, when the debugging
has culminated in a working software, I move on quickly to <em>using</em> the working
software rather than lingering on the past. Second, we all know that this kind
of writing is good <a href="https://blog.codinghorror.com/rubber-duck-problem-solving/">rubber
ducking</a>.</p>

<p>I don’t expect this bug to end up being something world changing - it may well
be a typo. It might also be fun for you to spot the issue before I do, like a
murder mystery.</p>

<p>I’m trying to set up SolidQueue, adding it to my already-set-up Rails
application. (ie. I don’t <em>think</em> there was any ActiveJob configuration before
my work today.)</p>

<p>So let me show you exactly where we are right now, and I’ll fill in some
details as we go both forwards and backwards in time:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>shane@macbook tollport/ (activejob) % bundle exec rails db:prepare
Created database 'tollport_queues_development'
Created database 'tollport_queues_test'


shane@macbook tollport/ (activejob) % bin/jobs
/Users/shane/git/tollport/vendor/bundle/ruby/3.2.0/gems/activerecord-8.1.1/lib/active_record/connection_adapters/postgresql/database_statements.rb:167:in `exec': PG::UndefinedTable: ERROR:  relation "solid_queue_processes" does not exist (ActiveRecord::StatementInvalid)
LINE 10:  WHERE a.attrelid = '"solid_queue_processes"'::regclass
                             ^

        from /Users/shane/git/tollport/vendor/bundle/ruby/3.2.0/gems/activerecord-8.1.1/lib/active_record/connection_adapters/postgresql/database_statements.rb:167:in `perform_query'
        from /Users/shane/git/tollport/vendor/bundle/ruby/3.2.0/gems/activerecord-8.1.1/lib/active_record/connection_adapters/abstract/database_statements.rb:571:in `block (2 levels) in raw_execute'
</code></pre></div></div>

<p>We can see here that <code class="language-plaintext highlighter-rouge">db:prepare</code> creates a couple of new databases for us.
It’s done this because when you run <code class="language-plaintext highlighter-rouge">bin/rails solid_queue:install</code> it’ll add
some configuration as well as adding <code class="language-plaintext highlighter-rouge">db/queue_schema.rb</code>. That file is
<em>supposed</em> to contain the database schema for SolidQueue. But…</p>

<p>It does not. In fact, it is just a copy of my normal schema.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>shane@macbook tollport/ (activejob) % diff db/schema.rb db/queue_schema.rb | wc
       0       0       0
</code></pre></div></div>

<p>Well, that explains <em>exactly</em> why <code class="language-plaintext highlighter-rouge">solid_queue_processes</code> isn’t being created -
it’s not defined anywhere.</p>

<p>How the heck did that happen though? I’m very willing to accept that I’ve done
something funky to make this happen. I have a suspicion, and I’ll try to
recreate that issue again in a second (because I think it might be an
unfortunate Rails bug).</p>

<p>First, lets see if we can fix this issue by deleting the incorrect
<code class="language-plaintext highlighter-rouge">db/queue_schema.rb</code> and running <code class="language-plaintext highlighter-rouge">bin/rails solid_queue:install</code> again.</p>

<p>Good news:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>shane@macbook tollport/ <span class="o">(</span>activejob<span class="o">)</span> % <span class="nb">grep</span> <span class="s1">'create_table'</span> db/queue_schema.rb
  create_table <span class="s2">"solid_queue_blocked_executions"</span>, force: :cascade <span class="k">do</span> |t|
  create_table <span class="s2">"solid_queue_claimed_executions"</span>, force: :cascade <span class="k">do</span> |t|
  create_table <span class="s2">"solid_queue_failed_executions"</span>, force: :cascade <span class="k">do</span> |t|
  create_table <span class="s2">"solid_queue_jobs"</span>, force: :cascade <span class="k">do</span> |t|
  create_table <span class="s2">"solid_queue_pauses"</span>, force: :cascade <span class="k">do</span> |t|
  create_table <span class="s2">"solid_queue_processes"</span>, force: :cascade <span class="k">do</span> |t|
  create_table <span class="s2">"solid_queue_ready_executions"</span>, force: :cascade <span class="k">do</span> |t|
  create_table <span class="s2">"solid_queue_recurring_executions"</span>, force: :cascade <span class="k">do</span> |t|
  create_table <span class="s2">"solid_queue_recurring_tasks"</span>, force: :cascade <span class="k">do</span> |t|
  create_table <span class="s2">"solid_queue_scheduled_executions"</span>, force: :cascade <span class="k">do</span> |t|
  create_table <span class="s2">"solid_queue_semaphores"</span>, force: :cascade <span class="k">do</span> |t|
</code></pre></div></div>

<p>Way better. Bad news:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>shane@macbook tollport/ <span class="o">(</span>activejob<span class="o">)</span> % bin/jobs
/Users/shane/git/tollport/vendor/bundle/ruby/3.2.0/gems/activerecord-8.1.1/lib/active_record/connection_adapters/postgresql/database_statements.rb:167:in <span class="sb">`</span><span class="nb">exec</span><span class="s1">': PG::UndefinedTable: ERROR:  relation "solid_queue_processes" does not exist (ActiveRecord::StatementInvalid)
LINE 10:  WHERE a.attrelid = '</span><span class="s2">"solid_queue_processes"</span><span class="s1">'::regclass
                             ^
</span></code></pre></div></div>

<p>Checking the table shows that it definitely has created the correct schema. And
our “undefined” table is there.</p>

<div class="language-sql highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">shane</span><span class="o">@</span><span class="n">macbook</span> <span class="n">tollport</span><span class="o">/</span> <span class="p">(</span><span class="n">activejob</span><span class="p">)</span> <span class="o">%</span> <span class="n">rails</span> <span class="n">dbconsole</span> <span class="c1">--database queue</span>
<span class="n">psql</span> <span class="p">(</span><span class="mi">13</span><span class="p">.</span><span class="mi">1</span><span class="p">)</span>
<span class="k">Type</span> <span class="nv">"help"</span> <span class="k">for</span> <span class="n">help</span><span class="p">.</span>

<span class="n">tollport_queues_development</span><span class="o">=#</span> <span class="err">\</span><span class="n">dt</span>
                     <span class="n">List</span> <span class="k">of</span> <span class="n">relations</span>
 <span class="k">Schema</span> <span class="o">|</span>               <span class="n">Name</span>               <span class="o">|</span> <span class="k">Type</span>  <span class="o">|</span> <span class="k">Owner</span>
<span class="c1">--------+----------------------------------+-------+-------</span>
 <span class="k">public</span> <span class="o">|</span> <span class="n">ar_internal_metadata</span>             <span class="o">|</span> <span class="k">table</span> <span class="o">|</span> <span class="n">shane</span>
 <span class="k">public</span> <span class="o">|</span> <span class="n">schema_migrations</span>                <span class="o">|</span> <span class="k">table</span> <span class="o">|</span> <span class="n">shane</span>
 <span class="k">public</span> <span class="o">|</span> <span class="n">solid_queue_blocked_executions</span>   <span class="o">|</span> <span class="k">table</span> <span class="o">|</span> <span class="n">shane</span>
 <span class="k">public</span> <span class="o">|</span> <span class="n">solid_queue_claimed_executions</span>   <span class="o">|</span> <span class="k">table</span> <span class="o">|</span> <span class="n">shane</span>
 <span class="k">public</span> <span class="o">|</span> <span class="n">solid_queue_failed_executions</span>    <span class="o">|</span> <span class="k">table</span> <span class="o">|</span> <span class="n">shane</span>
 <span class="k">public</span> <span class="o">|</span> <span class="n">solid_queue_jobs</span>                 <span class="o">|</span> <span class="k">table</span> <span class="o">|</span> <span class="n">shane</span>
<span class="p">[...]</span>
 <span class="k">public</span> <span class="o">|</span> <span class="n">solid_queue_processes</span>            <span class="o">|</span> <span class="k">table</span> <span class="o">|</span> <span class="n">shane</span>
<span class="p">[...]</span>
<span class="p">(</span><span class="mi">13</span> <span class="k">rows</span><span class="p">)</span>
</code></pre></div></div>

<p>Is SolidQueue using the write database?</p>

<p>Ah - well, this is telling of something. And not just about my potential bug.
Running <code class="language-plaintext highlighter-rouge">solid_queue:install</code> only bothers to tell SolidQueue to run on this
special database for ‘production’.</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>shane@macbook tollport/ <span class="o">(</span>activejob<span class="o">)</span> % <span class="nb">grep</span> <span class="nt">-r</span> <span class="s1">'solid_queue.connects_to'</span> config/
config/environments/production.rb:  config.solid_queue.connects_to <span class="o">=</span> <span class="o">{</span> database: <span class="o">{</span> writing: :queue <span class="o">}</span> <span class="o">}</span>
</code></pre></div></div>

<p>Whereas I have this new database set up for all environments, at least in
<code class="language-plaintext highlighter-rouge">config/database.yml</code>. A lot of config is passed via the DATABASE_URL env var.
More on that very shortly.</p>

<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">default</span><span class="pi">:</span> <span class="nl">&amp;default</span>
  <span class="na">adapter</span><span class="pi">:</span> <span class="s">postgresql</span>
  <span class="pi">[</span><span class="nv">...</span><span class="pi">]</span>

<span class="na">development</span><span class="pi">:</span>
  <span class="na">primary</span><span class="pi">:</span>
    <span class="na">&lt;&lt;</span><span class="pi">:</span> <span class="nv">*default</span>
    <span class="na">database</span><span class="pi">:</span> <span class="s">tollport_development</span>
  <span class="na">queue</span><span class="pi">:</span>
    <span class="na">&lt;&lt;</span><span class="pi">:</span> <span class="nv">*default</span>
    <span class="na">database</span><span class="pi">:</span> <span class="s">tollport_queues_development</span>
    <span class="na">migrations_paths</span><span class="pi">:</span> <span class="s">db/queue_migrate</span>

<span class="pi">[</span><span class="nv">...</span><span class="pi">]</span>

<span class="na">production</span><span class="pi">:</span>
  <span class="na">primary</span><span class="pi">:</span>
    <span class="na">&lt;&lt;</span><span class="pi">:</span> <span class="nv">*default</span>
    <span class="na">database</span><span class="pi">:</span> <span class="s">tollport_production</span>
  <span class="na">queue</span><span class="pi">:</span>
    <span class="na">&lt;&lt;</span><span class="pi">:</span> <span class="nv">*default</span>
    <span class="na">url</span><span class="pi">:</span> <span class="s">&lt;%= ENV.fetch('QUEUE_DATABASE_URL', nil) %&gt;</span>
    <span class="na">database</span><span class="pi">:</span> <span class="s">tollport_queues_production</span> <span class="c1"># this does nothing as `url:` tasks precedence, leaving here as an example of what's in QUEUE_DATABASE_URL</span>
    <span class="na">migrations_paths</span><span class="pi">:</span> <span class="s">db/queue_migrate</span>
</code></pre></div></div>

<p>First, I want to quickly confirm my suspicion and answer the question at hand:
is SolidQueue using the write database? Lets make all environments use this
queue database. Moving the above SolidQueue config from
<code class="language-plaintext highlighter-rouge">config/environments/production.rb</code> to <code class="language-plaintext highlighter-rouge">config/application.rb</code> gets us going
again. <code class="language-plaintext highlighter-rouge">bin/jobs</code> works!</p>

<p>The “telling” aspect I mentioned earlier leads me to this: since
<code class="language-plaintext highlighter-rouge">solid_queue:install</code> didn’t add that configuration to <code class="language-plaintext highlighter-rouge">config/application.yml</code>
in the first place, the developers probably don’t intend you to have the two
databases for dev and test. But, as it’s working, and I don’t really mind, I’ll
let that particular issue be. When following the installation instructions, I
assumed it wanted me to add the new ‘queue’ to each of the environments. The
docs only mentions <code class="language-plaintext highlighter-rouge">production</code> though.</p>

<p>So, I had two issues:</p>

<ol>
  <li>SolidQueue’s schema, which <code class="language-plaintext highlighter-rouge">db:prepare</code> uses, was straight up wrong.</li>
  <li>SolidQueue was not aware of the <code class="language-plaintext highlighter-rouge">queue</code> database whilst in dev.</li>
</ol>

<p>By deleting the queue_schema.rb and re-installing Solid Queue, I fixed the
first issue. But how did it happen in the first place?</p>

<p>Well, I’m guessing now because I no longer have the commit, but I believe I
messed up my database.yml.</p>

<p>Before this <code class="language-plaintext highlighter-rouge">activejob</code> branch, my config looked more like this:</p>

<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">default</span><span class="pi">:</span> <span class="nl">&amp;default</span>
  <span class="na">adapter</span><span class="pi">:</span> <span class="s">postgresql</span>
  <span class="na">encoding</span><span class="pi">:</span> <span class="s">unicode</span>

<span class="na">development</span><span class="pi">:</span>
  <span class="na">&lt;&lt;</span><span class="pi">:</span> <span class="nv">*default</span>
  <span class="na">database</span><span class="pi">:</span> <span class="s">tollport_development</span>
</code></pre></div></div>

<p>This is a single database set up. The Rails magic here is that this single
database is called <code class="language-plaintext highlighter-rouge">primary</code>. To make it a multi-database set up, you add in
the database names:</p>

<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">development</span><span class="pi">:</span>
  <span class="na">primary</span><span class="pi">:</span>
    <span class="na">&lt;&lt;</span><span class="pi">:</span> <span class="nv">*default</span>
    <span class="na">database</span><span class="pi">:</span> <span class="s">tollport_development</span>
  <span class="na">queue</span><span class="pi">:</span>
    <span class="na">&lt;&lt;</span><span class="pi">:</span> <span class="nv">*default</span>
    <span class="na">database</span><span class="pi">:</span> <span class="s">tollport_queues_development</span>
    <span class="na">migrations_paths</span><span class="pi">:</span> <span class="s">db/queue_migrate</span>
</code></pre></div></div>

<p>However, I believe I originally did something like this <strong>which is wrong</strong>:</p>

<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">development</span><span class="pi">:</span>
  <span class="na">&lt;&lt;</span><span class="pi">:</span> <span class="nv">*default</span>
  <span class="na">primary</span><span class="pi">:</span>
    <span class="na">database</span><span class="pi">:</span> <span class="s">tollport_development</span>
  <span class="na">queue</span><span class="pi">:</span>
    <span class="na">database</span><span class="pi">:</span> <span class="s">tollport_queues_development</span>
    <span class="na">migrations_paths</span><span class="pi">:</span> <span class="s">db/queue_migrate</span>
</code></pre></div></div>

<p>And I can only imagine that that confused the heck out of Rails so much that my
issue happened.</p>

<p>ah well, all working now.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[You’re joining me in medias res as I’ve completely broken my application whilst trying to add SolidQueue. I’m writing this now for two reasons. First, I keep intending to write up debugging sessions and then, when the debugging has culminated in a working software, I move on quickly to using the working software rather than lingering on the past. Second, we all know that this kind of writing is good rubber ducking.]]></summary></entry><entry><title type="html">ice ice ice</title><link href="https://technicallyshane.com/2025/11/09/ice-ice-ice.html" rel="alternate" type="text/html" title="ice ice ice" /><published>2025-11-09T21:50:00+00:00</published><updated>2025-11-09T21:50:00+00:00</updated><id>https://technicallyshane.com/2025/11/09/ice-ice-ice</id><content type="html" xml:base="https://technicallyshane.com/2025/11/09/ice-ice-ice.html"><![CDATA[<p>I’ve really been enjoying ice skating this past month.</p>

<p>Actually, I just checked my transactions to see when I first went: the first week of October, and since then I’ve been to at least a dozen sessions. It’s difficult for me to tell because two weeks I signed up for lessons, and after that point you get free public ice sessions, so I don’t have a record of those.</p>

<p>The Hackspace is definitely a place where hobbies accrue. You can be working on the laser cutter, and then someone mentions how similar it is to the embroidery machine, and then suddenly you’re combining the two. In this case, a bunch of people in unison (I can’t remember if it was me who instigated it) decided to try out a skating session. Five of us went along, including one very good skater. That was instrumental as he was able to give me starting advice which got me over the initial struggle that might have dissuaded me from going again.</p>

<p>After that, I went to a number of sessions on my own trying to get used to the forward momentum and movement. If there’s ever a sport where “practice makes perfect” is very clear, it’s this one. Each hour made me feel more confident.</p>

<p>Go more often than a normal person would, you notice who else is turning up an unusual amount, and you eventually get talking to them. I remember at many points in my life thinking “how on earth do adults make new friends?” Then after thinking that, I found a wonderful group of friends from Dungeons and Dragons, a brilliant group of friends in the Hackspace, and now a burgeoning group of “ice friends”, as I’ve been calling them. Turns out, all you have to do to make friends is <em>go outside regularly</em>.</p>

<p>Early on, there was a time where a three year old was almost certainly going to collide with me. At this point, I had no idea of how to stop or turn, so I began thinking of what I would say to his parents as I handed them his fingers back. But the little nipper just toe looped right around me.</p>

<p>Since then, I’ve learnt forward reasonably confidently, forward and backwards “lemons”, and I recently learnt backwards mostly. (I have not yet learnt how to stop.)</p>

<p>I’ve been going a bit more than my partner and friends, so I’ve been trying to offer advice to help them catch up. It’s incredibly hard to teach a physical skill. Take “lemons”, for instance. This is a beginner’s movement where you draw a lemon with your skates, a pointy ellipse. It’s essentially teaching the importance of the angle of your feet, and how you can use that angle to push yourself forwards. You can explain all that, but I’ve no idea how to explain which muscles to use to do it.</p>

<p>It’s just lots of practice, I guess.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[I’ve really been enjoying ice skating this past month.]]></summary></entry><entry><title type="html">Git workflows</title><link href="https://technicallyshane.com/2025/10/22/git-workflows.html" rel="alternate" type="text/html" title="Git workflows" /><published>2025-10-22T14:55:00+00:00</published><updated>2025-10-22T14:55:00+00:00</updated><id>https://technicallyshane.com/2025/10/22/git-workflows</id><content type="html" xml:base="https://technicallyshane.com/2025/10/22/git-workflows.html"><![CDATA[<p>The most simple method of working with a Git repo is doing everything in the default branch. If you’re working in your own project, without having to think about other work conflicting with yours or having to review work before it can be deployed, then this is a perfectly good solution. Most of my hobby projects largely have work done in them on the default branch.</p>

<p><em>I can’t remember why I started writing this. I had it in my drafts. I’m setting it free.</em></p>

<blockquote>
  <p><strong>“Default branch”</strong></p>

  <p>Many are used to having <code class="language-plaintext highlighter-rouge">master</code> as the default branch in a repository, but it’s fallen out of favour recently. In around 2020, the community decided that staring at a word all day with so much loaded connotations probably wasn’t good for mental health, some more so than others. Some have noted that the term was never meant in a master-slave way, but rather a “master copy” way. However, either way, it’s not a technically correct word for it. We’ll see in this article that it’s often not the master copy, and is often lagging behind the state of the art. So, <code class="language-plaintext highlighter-rouge">main</code> has been taken on as a less inaccurate (and mercifully shorter) name. It doesn’t have to be your default branch though. Change it to whatever you like. <code class="language-plaintext highlighter-rouge">git config --global init.defaultBranch main</code> So, we’ll just go with calling it the default branch for now.</p>
</blockquote>

<h2 id="working-branches-or-the-github-flow">Working branches or the ‘GitHub flow’</h2>

<p>I’d put money on this being the most common method of using a version control system. If you’re working on a team, or just working in parallel on different parts of the codebase, then it makes sense to have an area just for yourself: your own branch.</p>

<p>For each chunk of work, I’ll start off with <code class="language-plaintext highlighter-rouge">git checkout -b banjo-12393-fix-issue-with-signup</code>. Then in there, I’ll work away on fixing the issue.</p>

<p>Naming these branches is something to pay attention to. <em>Finding</em> the branch you’re working on can become difficult if you’re interrupted and have to look at something else for a while. So, many developers come up with their own system for these. My system is my team name, like a namespace to avoid bumping into anyone else, then the ticket number I’m working on, then a legible description of what of the problem being solved. It’s nice to do <code class="language-plaintext highlighter-rouge">git branch</code> and know what each of the branches are for. It’s also nice to be able to grep through <code class="language-plaintext highlighter-rouge">git branch | grep 12393</code> to find the branch I’m working on.</p>

<p>In my experience, these branches aren’t often shared between developers. The main reason for that is that there comes a point where you step on their toes. Say you add another commit to Alice’s branch, and now she has to <em>realise</em> that somehow, hopefully before pushing their work and overriding yours or having to handle a rebase. Leave these kinds of branches to the developer that created them.</p>

<p>Once the work is complete, this workflow usually leads to a pull request.</p>

<blockquote>
  <p><strong>“Pull request”</strong></p>

  <p>There’s not really any such thing in the core of Git itself that specifies pull requests. Initially, the intended method of sharing code was to send a diff (or a patch) to someone and have them apply it on their end. Git is designed to be distributed - easily clonable elsewhere - but not intended as a team collaboration tool. When Linus Torvolds was putting Git together, there was no plan for everyone to be able to make a branch and push to the Linux kernal as they wanted. He expected patches to be sent around. The idea of a “pull request” was created by the community. In fact, much of the community (like Gitlab), prefer the term “merge request” which shows how non-standard the whole idea is. We’ll see in a bit that Git is not designed to handle dozens of developers working in the same repository synchronously.</p>
</blockquote>

<p>That is then reviewed. I imagine most of us are lucky enough that we have a team around us who review our code before we merge it into the default branch. That kind of accountability is not the only reason to use these branches though; even working solo it’s good to create a pull request and review the code yourself. Often, it’s the first time you’ll see the whole changeset. It’s a good habit to self-review your code before you ask anyone else to.</p>

<p>Once reviewed, it gets merged into the default branch. You can then clear away your local branch <em>and</em> the branch on the remote. Stay tidy.</p>

<p>You may have read this section thinking “this is what I call a feature branch”, and I’d fully agree with you. I’ve gone with “working branch” here for two reasons: 1) often it’s not features that we’re working on, day to day, and 2) to distinguish it from what we all agree is called a feature branch. (Not until writing this post did I realise that I’d been calling the two things the same name, without issue.)</p>

<h2 id="feature-branches">Feature branches</h2>

<p>In a sophisticated workflow that most engineers have come to expect, once your merge to the main branch, continuous integration will kick in and deploy to production automatically. At least, that’s the case in the web dev world. Outside of that, (say an iPhone app), I imagine that’s not always the case. However, when something lands in the default branch, it’s good behaviour for it to be shippable.</p>

<p>In working branches, discussed above, your work is reviewed and then deployed. That’s handy for releases of work that can be done in one ticket. Small features, bug fixes, and other discrete chunks of work.</p>

<p>There are times when you’re given a large amount of work, which is all expected to be moved to the main branch (and become shippable) at the same time.</p>

<p>It may not be feasible for you to do all the work yourself on this kind of ticket. Even if you are taking on the work yourself, raising a pull request with multiple ideas in it is bad practise. Work of this size can often be (and should be!) split up into small units of work. Usually, you’ll add some sort of subtasks. The work can also be parallelisable and worked on by multiple people.</p>

<p>But how do we overcome our twin problems of a) not stepping on the toes of another developer by pushing to a “shared” branch and b) releasing the work at the same time whilst also keeping changes discrete.</p>

<p>This is where feature branches shine.</p>

<p>It’s nothing sophisticated. Simply, you make a new branch for you’ll treat like your default branch for a while. Take a copy of the default branch and name it after your new feature. Then, you can raise pull requests against this new branch, rather than default.</p>

<p>Now, PRs can be small, single units of work that can be reviewed easily.</p>

<p>Once that feature branch is ready and safe to deploy, raise a PR from it to the default branch. Marvel a bit at all the changes you’ve made, though you may not need to review it strenuously as each piece has already been reviewed. Then, merge that and move your ticket to done. (Or “User test” or whatever.)</p>

<p>This workflow is not without it frustrating disadvantages.</p>

<p>The biggest of these is that the default branch will (all too quickly) deviate from the feature branch. You’ll need to stay on top of this by rebasing the feature branch frequently. These conflicts can get gnarly, and only get worse with time, so a feature branch should have a short life span.</p>

<p>This workflow has all but vanished from my workplace because of this issue. We end up spending more time handling conflicts than is worth it. Instead, a more correct solution is <em>feature flags</em>. In that case, you can stick to normal working branches.</p>

<blockquote>
  <p><strong>Feature flags</strong></p>

  <p>Feature flags aren’t a git workflow thing, but since I already brought them up I shall explain them quickly. Instead of hiding your completed work in a feature branch, you want to merge it as soon as it’s ready. The way you can do that, without releasing it to be shipped, is by flagging the feature as unavailable. One way to do this is with a simple <code class="language-plaintext highlighter-rouge">if param[:enable_payment_system]</code> condition in your view, or controller, which skips over the whole feature by default. This may take some thinking about how to do cleanly, but I often find that this kind of thinking leads to cleaner code anyway even after you remove the feature flag.</p>
</blockquote>

<h2 id="gitflow">”gitflow”</h2>

<p>I’m unsure if anyone still bothers with this. I expect some do, likely more corporate places that like to follow operating procedures that show up in books from the 90s. It’s also very possible that this just isn’t popular in continuously delivered web development, but is still very useful in software houses that produce versioned software.</p>

<p>This is a whole system that consists of feature branches (like those above), develop branches (where code has been finished but isn’t ready for a release yet), release branches (for all the changes which will go into version 1.1 or version 1.2, which are both expected to be released at some point), hotfix branches (which jump over the previous kinds of branches), and then master once the code is ready to go into the latest branch to build on top of.</p>

<p>It’s all a bit much. I’ve never used this, and really hope not to.</p>

<h2 id="merging-methodologies">Merging methodologies</h2>

<p>Alongside the ways of using a git repo to work on your code, there are also different ways to getting your code merged.</p>
<h3 id="pushing-straight-to-the-default-branch">Pushing straight to the default branch</h3>

<p>Commit your work, then <code class="language-plaintext highlighter-rouge">git push</code>. Done.</p>

<p>This is a common way of working on personal projects, where you’re alone and not expecting a review of a team mate. It is essentially never done in any professional environment though.</p>

<p>There are two times that come to mind where I’ve seen this done at work.</p>

<p>First, when working on a “spike” that in a new repository. A spike is very quickly throwing together to come to validate an idea. This might not be done solo and can be done with others on the team too. It is useful when speed is important, and the code quality is not. Usually, the ultimate goal of these spikes to to throw the repo away and start again with the insights of the spike on a fresh repo, where normal processes are restarted.</p>

<p>The second is rarer still, and I think it’s not a point of pride, but rather a process that hasn’t fully been solidified yet. In Ruby packages, I’ve seen a new feature get added via a normal Pull Request methodology, but then the version bump of the library happening afterwards (often by the maintainer) who then just pushes the single commit. These commits are often automated, and a review wouldn’t be very helpful.</p>
<h3 id="opening-a-pull-request-straight-to-the-default-branch-and-other-branches-whilst-were-at-it">Opening a Pull Request straight to the default branch (and other branches, whilst we’re at it)</h3>

<p>The vast majority of the time, pull requests are raised against the default branch to be reviewed by other developers. In every professional place I’ve worked code on the default branch gets automatically deployed to production. It’s a good, often hygienic, method of finishing off your work to make sure that code is never hanging around, going stale, and then surprising someone when it fails a few days later when someone gets around to manually deploying.</p>

<p>When working with feature branches, you’d of course be merging into other branches, but <em>those</em> would be pointing at the default branch. (If they’re not, the situation you’ve found yourself is a complicated one!)</p>

<p>I wrote earlier about how it’s often not considered polite to jump into someone else’s working branch and start making changes. There are occasions when you’d like to do that though. For instance, if you’re reviewing a PR and want to suggest a change to it. It might complicate things to do that by simply changing the code on the branch, but you <em>could</em> checkout their branch, make the change, and then push to a different branch. Raising your own pull requests against theirs, so they can see the changes you’re thinking of.</p>

<p>I have worked in one place where we had a ‘staging’ branch, which the gitflow people would call the ‘develop’ branch. The staging branch would get deployed to a staging environment so user acceptance testing could be done before merging everything from that branch into the default branch. Just like with gitflow, my feeling that this style is falling out of fashion. (Instead, see: feature flags.)</p>

<p>It’s your CI process that is protecting you from merging buggy code, so long as your code is well tested. A common practice with CI is halting the entire pipeline if something fails along the way. When this happens, most git hosts will stop you (or strongly persuade you away from) merging your pull request.</p>

<blockquote>
  <p><strong>Continuous Integration</strong></p>

  <p>“CI” servers watch for changes to branches and do some build steps. These build steps might be</p>
  <ul>
    <li>Running the test suite</li>
    <li>Running linters which check the quality of the code changed</li>
    <li>Compile assets</li>
    <li>Generate and push Docker images</li>
    <li>Deploying the code to the server</li>
    <li>Sending notifications or kicking off other systems to begin their work</li>
    <li>Building a package and pushing to the package repository
The CI server might do a better job at running all of the tests than you would locally - maybe faster or more reliably. The output of all of these are often visible on pull requests, which give reviewers confidence that the changes meet certain standards.</li>
  </ul>

  <p>Usually, you can set up systems like GitHub and GitLab to now allow merging of a PR whilst the build is failing (one of the tasks has finished with an unexpected output).</p>

  <p>When I started my career, we were using Jenkins, a self-hosted and open source CI server. The world has changed a lot since then, and CI has become Big Business. A lot of my experience is using CircleCI, which has been getting more and more expensive (and more and more fancy). GitHub also have a very good CI system now, which is nice to have it hooked straight into where your code is anyway. My expectation for the future is that these things will get prohibitively more expensive and we’ll end up going back to self-hosted solutions. Many of these third party providers already let you run your builds on your own servers (and they handle queueing and whatnot).</p>
</blockquote>

<p>The key thing to note about this method is that once your CI build is green and you’re code has been signed off by a peer, when you hit that ‘Merge’ button, it goes straight into the default branch and is ready for everyone to <code class="language-plaintext highlighter-rouge">git fetch</code> and start working on top of it - for better or worse!</p>

<h2 id="merging-strategies">Merging strategies</h2>
<p>Actually, the button you press is likely more complicated than just “Merge”. There are a few different ways of merging.</p>
<h3 id="fast-forward-merging">Fast forward merging</h3>

<p>Whilst this is considered the ‘default’ method of merging by git, it is one I’ve rarely seen in the wild. Likely because it is not Github’s idea of the default.</p>

<p>A fast forward merge is the simplest of merges. They happen where there is no conflict between the two branches, and the commits from the new branch can be placed on top of the main branch, as simple moving <code class="language-plaintext highlighter-rouge">main</code> to the pointer of the merged branch.</p>

<p>The commits are all kept the same. There’s no tweaking the hash because the parent is different: the commits are always the same.</p>

<h3 id="non-fast-forward-merging">Non-fast forward merging</h3>

<p>This is a very similar kind of merge as the above, except that it <em>always</em> adds a merge commit. This is what you’ll see by default in Github. It’s handy for a few reasons:</p>

<p><strong>A bit more provenance.</strong> You can tell exactly where the commits came from: another branch. Probably a <em>named</em> branch that will give you a bit more context. The merge commit will group the new commits together forever.</p>

<p><strong>Optionally, a lot more context for the group of commits.</strong> Github will add the PR description to the merge commit message (and of course you can do that yourself if manually merging). Whilst the individual commits should explain what change they’ve made, a merge commit is a good place for a proper description of why the change needs to be made.</p>

<p><strong>Easy to revert the whole thing.</strong> With a fast forward merge, you’ll need to select commit-by-commit to revert the whole idea. That is a bit more faff than just reverting the merge commit.</p>

<h3 id="squash-and-merge">“Squash and merge”</h3>

<p>This is a destructive method of merging your code. Ultimately, it still does a non-fast forward merge, but Github/lab will force all of your code changes into the merge commit.</p>

<p>This way of working considers the individual commits as an artefact that’s only useful during development. Once merged, who cares. This is nice in some ways: you’ll end up with one commit per ticket, maybe. That’s kinda nice. On merge, there are fewer chances of multiple merge conflicts, as it’ll just be one commit that needs to be checked for conflicts rather than multiple (which might all hit the same conflict).</p>

<p>I dislike this way of working, even though it is how the majority of places I’ve worked at like to do their merging.</p>

<p>You don’t technically lose any commit messages - they’re all added into the merge commit. However, you do end up with that whole, bulky message being displayed in your <code class="language-plaintext highlighter-rouge">git blame</code>, rather than just the single commit that cleanly explained the change to that particular line of code.</p>

<h3 id="rebase-and-merge">“Rebase and merge”</h3>

<p>This merging methodology does a <code class="language-plaintext highlighter-rouge">git rebase original/base_branch</code> and then pushes the base_branch. This is essentially doing a fast forward push, except if you were doing it on the command line you’d be able to handle conflicts interactively. On GitHub, it won’t let you use this option if it involves a conflict.</p>

<p>GitHub will rewrite all of your commits even if there’s no need (say, if the rebase is clean). It will change the author of the commit from your locally configured author, to your GitHub ‘verified’ user.</p>

<p>This is a critical issue for me with this merge method. There are some features, like <code class="language-plaintext highlighter-rouge">git branch -d</code> which will check if the branch you’re deleting has <em>all</em> of its commits somewhere else. Otherwise, it’ll warn you that you’re about to lose some work. That will always happen when the author has been changed (because all of the commits have changed!). So you’re forced to use the more dangerous <code class="language-plaintext highlighter-rouge">-D</code> because you’re not expecting <code class="language-plaintext highlighter-rouge">-d</code> to ever work.</p>
<h3 id="merge-trains">Merge trains</h3>

<p>Merge trains aim to avoid issues with merges in close proximity to each other. Say you have your work in <code class="language-plaintext highlighter-rouge">cool-feature</code> and a colleague has their work in <code class="language-plaintext highlighter-rouge">their-feature</code>. Both branches are pushed, they pass code review at roughly the same time, they have CI run against them and they’re both green builds.</p>

<p>So, hit merge on <code class="language-plaintext highlighter-rouge">cool-feature</code> and your colleague hits merge on <code class="language-plaintext highlighter-rouge">their-feature</code>. Then, something frustrating happens: despite there being no conflicts between the two, behaviour has changed enough now that some tests are failing.</p>

<p>You can revert one of the PRs but the whole thing is a frustrating mess.</p>

<p>The manual way to get around this safely is to rebase your code before merging it, always. In fact, there are Github settings where you can enforce that all PRs be sitting on top of the main branch. This is <em>very</em> annoying though - what if someone beats you to another merge? Then you have to rebase again and hope to catch it as the stars align when that build completes.</p>

<p>Instead, if merge trains were used, both merged PRs would go into a queue. CI would get <code class="language-plaintext highlighter-rouge">cool-feature</code> merged and start building it for deploy (or whatever). Then though, instead of merging <code class="language-plaintext highlighter-rouge">their-feature</code> straight away, it will automatically rebase that code and then run the tests again. If they pass this time, it merges it without any other intervention.</p>

<p>This magic automation has never sat right with me though. I don’t like my code going onto a production server At Some Point In The Future. So, as trendy as merge trains are, I think I’ll stick with more simpler methods of merging.</p>

<p>Though - this is almost always a team decision, and rarely be people align with me on this! It seems everyone has their opinions on git workflows.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[The most simple method of working with a Git repo is doing everything in the default branch. If you’re working in your own project, without having to think about other work conflicting with yours or having to review work before it can be deployed, then this is a perfectly good solution. Most of my hobby projects largely have work done in them on the default branch.]]></summary></entry><entry><title type="html">3 years of Mastodon</title><link href="https://technicallyshane.com/2025/10/22/3-years-of-mastodon.html" rel="alternate" type="text/html" title="3 years of Mastodon" /><published>2025-10-22T11:30:00+00:00</published><updated>2025-10-22T11:30:00+00:00</updated><id>https://technicallyshane.com/2025/10/22/3-years-of-mastodon</id><content type="html" xml:base="https://technicallyshane.com/2025/10/22/3-years-of-mastodon.html"><![CDATA[<p>I started my Mastodon server, d20.social, three years ago (November 6, 2022). It’s been quite a rewarding project to be hosting.</p>

<p>Twitter had been going down hill for quite some time and Elon’s acquisition of it made it quite clear that it has become a source of World Suck. Staying on it, and seeing ads occasionally, was directly funding Trump’s eventual ascension and I wanted no part of that.</p>

<p>I did know of Mastodon before then, but the mass exodus from twitter and everyone redirecting their accounts to fediverse instances was what made me decide it was legit. I didn’t fancy joining one of the big servers though. So I ended up looking for a fun domain, and found d20.social.</p>

<p>My original intent was to make a TTRPG community on that server, but quickly changed my mind when I realised how un-fun moderation would be. There are only three accounts on d20.social: me, my partner, and some random person who signed up and made me realise I didn’t want the stress of moderating even just one person. (That person never actually posted, and I suspended them some time after.)</p>

<h2 id="bringing-a-small-instance-to-life">Bringing a small instance to life</h2>

<p>Tiny instances is how I feel the fediverse should be maintained. The software isn’t entirely designed to be run with so few people though. The ‘Trending’ section is sorely lacking as it seems to only consider items from the local server. It’d be nice for that to be more useful, but it’s the only negative for me about running the server myself.</p>

<p>In the early days, my motive was to follow anyone and everyone that I came across who was posting about something I had an interest in. This works well!</p>

<p>Additionally, Mastodon supports “Relays”. These are endpoints on other servers you tell your own instance about. If the admin approves your relay request, it’ll send all the content it has on its server to your server. So, if I’m interested in everything on ruby.social, all messages will also get received by my server without me having to interact with them. They don’t appear in my timeline - they just end up in my Postgres database.</p>

<p>The beauty of that is that you can now benefit from following hashtags. I can follow the #ruby hashtag and they do all appear in my timeline. Great! I follow #ttrpg and #mychemicalromance.</p>

<p>The issue here is that it puts a bit more load on ruby.social, and @james@ruby.social probably doesn’t want to be paying for that. Their resources are best spent on their own users and not people off-instance. That’s reasonable. Good news though: someone else has taken on that burden. You can visit [https://relay.fedi.buzz/] to use them as a relay. Adding <code class="language-plaintext highlighter-rouge">https://relay.fedi.buzz/instance/ruby.social</code> as a relay (which is auto-accepted by relay.fedi.buzz) will then pipe all of ruby.social’s stuff to my server without the Tragedy of the Commons overloading ruby.social systems.</p>

<p>That’s also the place you can get relays for specific hashtags which will come from anywhere in the fediverse.</p>

<h2 id="social-media-is-a-dangerous-drug">Social media is a dangerous drug</h2>

<p>A constant stream of new and shiny messages is bad for your brain. Doom scrolling starts with a hunt for a dopamine hit, and ends with spiralling around a far-right pit of sadness.</p>

<p>Once I had my instance populated with lots of messages, I had to start pruning weeds.</p>

<p>I heavily use Filters in Mastodon for better mental health. Even when people are shouting opinions I agree with, I have to mute them out for fear of being overwhelmed. ‘Nazi’, ‘Elon’, ‘TERF’, ‘blockchain’, ‘covid’ and dozens more are all words I’ve restricted from posts that appear in my feed. “Titanic” is on the list for some reason.</p>

<p>Certain people on Mastodon love to compare it to Twitter. Or talking about Twitter’s demise. Add “twitter” and “xitter” to your filter list and you’ll have a better life.</p>

<h2 id="running-mastodon">Running mastodon</h2>

<p>I originally ran Mastodon pretty well on a small instance on Digital Ocean. It wasn’t the $5/m one, but it was close to that.</p>

<p>For whatever reason, I was running it ‘by hand’. I had the git repo in a directory and was running it like a locally run Rails application. It doesn’t seem even a bit logical these days but I had a number of tabs in byobu running the webserver, the streaming server, and all the other bits.</p>

<p>Upgrading it was dreadful, because I would have to precompile the assets. That process used a lot more system resources than the small instance had, so I had to go up to $20/m for a few minutes whilst I did that.</p>

<p>It was dreadful. I really don’t know why I put up with it.</p>

<p>After some time I found that Hetzner did very cheap deals on quite good dedicated machines. So now I pay £30~/m for a very good machine. I moved Mastodon over to that, this time using the docker-compose set up. <strong>Definitely do this.</strong></p>

<p>Upgrading Mastodon is simply now:</p>

<ul>
  <li>Run a backup: <code class="language-plaintext highlighter-rouge">docker exec mastodon-db-1 pg_dumpall -U postgres | gzip &gt; postgres_backup_PREUPGRADE_</code>date +’%Y%m%d’<code class="language-plaintext highlighter-rouge">.sql.gz</code></li>
  <li>Edit the docker-compose.yml to point to the tagged version you want.</li>
  <li><code class="language-plaintext highlighter-rouge">docker compose up -d</code></li>
</ul>

<p>Easy.</p>

<p>I will also occasionally run this slew of scripts from inside one the containers:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>RAILS_ENV=production bin/tootctl accounts prune;
RAILS_ENV=production bin/tootctl statuses remove --days 4;
RAILS_ENV=production bin/tootctl media remove --days 4;
RAILS_ENV=production bin/tootctl media remove --remove-headers --include-follows --days 4;
RAILS_ENV=production bin/tootctl preview_cards remove --days 4;
RAILS_ENV=production bin/tootctl media remove-orphans;
</code></pre></div></div>

<p>These trim down the bulk of the saved assets and bloat from the database.</p>

<p>It’s quite an easy piece of software to be running.</p>

<h2 id="today">Today</h2>

<p>I find Mastodon to be one of the most exciting open source projects happening today. I’m still very eager to see what new features come with each patch and I’ll spend more time than I need to looking through the changelog each time.</p>

<p>It’s a very stable system that I find remarkable. It’s all basic Internet Stuff, but I love that I can message a friend on a different server and they can pipe back. Both of us using own infrastructure.</p>

<p>The fediverse movement is also a fun place to be at the moment. I was recently a part of the <a href="https://fediforum.org/">fediforum unconference</a> and that was full of discussions and hope for the future. There’s a good mix of tech and non-techy people, so ideas aren’t constricted by what’s technically feasible, which is a hurdle devs often get bogged down by - we can worry about that later on after we’ve decided what we want.</p>

<p>It’s a movement that aligns itself very well with the indieweb. Mastodon feels like the old web to me, in a way that Facebook and Twitter do not. It feels like real communities who are invested in their environments, and not just content creators helping their overlords sell ad space.</p>

<p>Three years on, I’m still glad I started d20.social. At a time where the Internet seems to be in decay, I’m genuinely optimistic about the future of fediverse.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[I started my Mastodon server, d20.social, three years ago (November 6, 2022). It’s been quite a rewarding project to be hosting.]]></summary></entry><entry><title type="html">Deploying just one configuration file with Docker</title><link href="https://technicallyshane.com/2025/09/13/deploying-just-one-configuration-file-with-docker.html" rel="alternate" type="text/html" title="Deploying just one configuration file with Docker" /><published>2025-09-13T08:13:00+00:00</published><updated>2025-09-13T08:13:00+00:00</updated><id>https://technicallyshane.com/2025/09/13/deploying-just-one-configuration-file-with-docker</id><content type="html" xml:base="https://technicallyshane.com/2025/09/13/deploying-just-one-configuration-file-with-docker.html"><![CDATA[<p>There are two parts of my project. The ./webserver directory is a Rails app and that bundles up easily into an image. I can build that image and then deploy it to my repo, to be pulled down and run by my server. The second part of my project is a 40 line configuration script… how do I deploy that?</p>

<p><code class="language-plaintext highlighter-rouge">just-in-time.liq</code> is a script which populates my Liquidsoap container with what tracks to be playing on <a href="https://radio.shane.computer/">my 24 Hours of Radio project</a>. That’s all I really need for this part of the project.</p>

<p>The just-in-time.liq needs to be accessed by the liquidsoap instance, which I just pull straight down from <code class="language-plaintext highlighter-rouge">image: savonet/liquidsoap:v2.3.3</code>. The just-in-time.liq script is loaded as a volume.</p>

<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">services</span><span class="pi">:</span>
  <span class="na">liquidsoap</span><span class="pi">:</span>
    <span class="na">image</span><span class="pi">:</span> <span class="s">savonet/liquidsoap:v2.3.3</span>
    <span class="na">command</span><span class="pi">:</span> <span class="s">liquidsoap /config/just-in-time.liq</span>
    <span class="na">env_file</span><span class="pi">:</span> <span class="s">.env</span>
    <span class="na">volumes</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="s">./liquidsoap/config:/config</span>
      <span class="pi">-</span> <span class="s">media:/media</span>
</code></pre></div></div>

<p>However, to get it onto the server, I’m having to copy and paste it. An easily forgotten deployment task.</p>

<p>There’s a strong argument to say that this should be handled by continuous integration, but that’s not really the root of my issue: that script shouldn’t be hanging around my server’s project-setup directory. That directory ideally contains a docker-compose.yml, a .env, and an nginx.conf. That’s all stored in a git repo. The liquidsoap script doesn’t feel like it should be in amongst that.</p>

<h2 id="tiny-container">Tiny container?</h2>

<p>How about if I build my own version of <code class="language-plaintext highlighter-rouge">savonet/liquidsoap:v2.3.3</code> with that configuration script baked in? This felt very odd to me when I first thought of it, but it’s actually strikingly similar to what we do with Rails projects.</p>

<p>I was worried I’d end up with a bloated image, but in reality it’s just the size of the <code class="language-plaintext highlighter-rouge">savonet/liquidsoap:v2.3.3</code> image I’m using anyway plus the size of the script. No big deal. And I <a href="https://technicallyshane.com/2025/08/29/the-non-facy-way-of-deploying-a-rails-app-and-a-step-toward-the-future.html#distributiondistribution">already have <code class="language-plaintext highlighter-rouge">distribution</code> set up</a>!</p>

<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="s">FROM savonet/liquidsoap:v2.3.3</span>

<span class="s">COPY ./liquidsoap/just-in-time.liq /tmp/just-in-time.liq</span>

<span class="s">CMD ["liquidsoap", "/tmp/just-in-time.liq"]</span>
</code></pre></div></div>

<p>(I’m using the tmp folder here as it’s the only folder that the liquidsoap user has access to. The <code class="language-plaintext highlighter-rouge">/config</code> only worked when docker-compose pulled rank to create it with the right user. I don’t think there’s any auto-cleanup of tmp directories that I should be worried about.)</p>

<p>That’s it! I build this alongside my <code class="language-plaintext highlighter-rouge">radio</code> project and push both of them to the registry.</p>

<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">services</span><span class="pi">:</span>
  <span class="na">liquidsoap</span><span class="pi">:</span>
    <span class="na">build</span><span class="pi">:</span> <span class="s">.</span>
    <span class="na">container_name</span><span class="pi">:</span> <span class="s">radio-liquidsoap</span>
    <span class="na">env_file</span><span class="pi">:</span> <span class="s">.env</span>
    <span class="na">volumes</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="s">media:/media</span>
</code></pre></div></div>

<p>The server’s docker-compose.yml changes just a little to support this: point the <code class="language-plaintext highlighter-rouge">image</code> to my own image rather than the official one.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[There are two parts of my project. The ./webserver directory is a Rails app and that bundles up easily into an image. I can build that image and then deploy it to my repo, to be pulled down and run by my server. The second part of my project is a 40 line configuration script… how do I deploy that?]]></summary></entry><entry><title type="html">Blue-Green deployment of a docker compose setup</title><link href="https://technicallyshane.com/2025/08/30/blue-green-deployment-of-a-docker-compose-setup.html" rel="alternate" type="text/html" title="Blue-Green deployment of a docker compose setup" /><published>2025-08-30T12:23:00+00:00</published><updated>2025-08-30T12:23:00+00:00</updated><id>https://technicallyshane.com/2025/08/30/blue-green-deployment-of-a-docker-compose-setup</id><content type="html" xml:base="https://technicallyshane.com/2025/08/30/blue-green-deployment-of-a-docker-compose-setup.html"><![CDATA[<p>Hot on the heels of yesterday’s <a href="https://technicallyshane.com/2025/08/29/the-non-facy-way-of-deploying-a-rails-app-and-a-step-toward-the-future.html">deploying the application</a>, I realised that deploying a new version causes some downtime. Even more downtime if the deploy fails or the application isn’t working for some reason.</p>

<p>I was rolling out a new version like this:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker pull my-app
docker compose up -d
</code></pre></div></div>

<p>Docker compose then notices that my-app has changed. It turns off my-app and then restarts it, recreating with the new image. Even if this is very quick, it still leads to nginx not being able to route requests for that period.</p>

<p>This calls for blue-green deployments; we run two copies of the application (a BLUE and a GREEN), we attempt to roll out the new update to BLUE and check if it’s healthy, then tell nginx to send traffic to there for a bit, and then do the same roll out to GREEN, and send nginx back to GREEN. Whilst GREEN is down and being deployed, BLUE handles all the connections and requests.</p>

<p><a href="https://abdullahob.medium.com/zero-downtime-deployments-implementing-blue-green-with-docker-compose-on-aws-ec2-79cad234c65e">Abdullah Obaid has a great tutorial</a> that I followed, and might be suitable for you. However, my needs were a little different:</p>

<ol>
  <li>I have other services in my docker-compose which I needed to share. (No need to run two Postgres containers.)</li>
  <li>I didn’t want to use <code class="language-plaintext highlighter-rouge">systemctl</code> to inject an environment variable. I want all my configuration to be in my ~/my-app directory, not hidden away elsewhere.</li>
  <li>I have a different idea about how the health check and “switch” script should work.</li>
</ol>

<h2 id="splitting-services">Splitting services</h2>

<p>My docker-compose.yml originally contained <code class="language-plaintext highlighter-rouge">registry</code>, <code class="language-plaintext highlighter-rouge">web</code>, <code class="language-plaintext highlighter-rouge">database</code> services.</p>

<p>I kind of want the registry and database to stick around and be shared between the two. It feels like I’ve wrangled this into working and I’d be interested to hear further thoughts. Here’s what I did.</p>

<ol>
  <li>Keep <code class="language-plaintext highlighter-rouge">docker-compose.yml</code>
    <ol>
      <li>But remove <code class="language-plaintext highlighter-rouge">web</code> service</li>
      <li>Specify a project name to avoid relying on Docker’s autogenerated names: <code class="language-plaintext highlighter-rouge">name: my_app</code>. This will be used to scope our services, which we want to reference later on.</li>
    </ol>
  </li>
  <li>Create a <code class="language-plaintext highlighter-rouge">docker-compose.blue.yml</code>
    <ol>
      <li><code class="language-plaintext highlighter-rouge">name: my_app_blue</code></li>
    </ol>
  </li>
  <li>Create a <code class="language-plaintext highlighter-rouge">docker-compose.green.yml</code>
    <ol>
      <li><code class="language-plaintext highlighter-rouge">name: my_app_green</code></li>
      <li>Give <code class="language-plaintext highlighter-rouge">web</code> a different exposed port</li>
    </ol>
  </li>
</ol>

<p><code class="language-plaintext highlighter-rouge">docker-compose.yml</code> defines a volume which I need access to in BLUE and GREEN. Same with the network too.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># in docker-compose.yml

volumes:
  registry_data:
  
networks:
  external_network:
  internal_network:
    internal: true
  
# in docker-compose.blue.yml

volumes:
  my_app_registry_data:
    external: true
    
networks:
  my_app_external_network:
    external: true
  my_app_internal_network:
    external: true
    internal: true
</code></pre></div></div>

<p>This is a little brittle for a couple of reasons. First, docker-compose.yml has to be loaded first to create those resources. Second, I’m relying on magically getting the name of the resources correctly. It’s the main reason why setting the <code class="language-plaintext highlighter-rouge">name</code> was so important.</p>

<p>This does work, however!</p>

<p>Then we can start up each in order (though, the order only matters once).</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker compose up --file docker-compose.yml -d
docker compose up --file docker-compose.blue.yml -d
docker compose up --file docker-compose.green.yml -d
</code></pre></div></div>

<p><code class="language-plaintext highlighter-rouge">docker ps</code> will show you all of your services working. You can even <code class="language-plaintext highlighter-rouge">rails c</code> into your BLUE to change something in the database, then pop over to GREEN to see it correctly reflected.</p>

<h2 id="directing-traffic-with-nginx">Directing traffic with nginx</h2>

<p>You’ll have an <code class="language-plaintext highlighter-rouge">upstream</code> defined in your nginx.conf somewhere. You’ll remove the existing upstream and define two new ones: one for BLUE and one for GREEN.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>upstream my-app-blue {
  server 127.0.0.1:3845 fail_timeout=0;
}

upstream my-app-green {
  server 127.0.0.1:3844 fail_timeout=0;
}
</code></pre></div></div>

<p>We need to tell nginx which one to use though, and to do that we’ll make a new file, <code class="language-plaintext highlighter-rouge">nginx-my-app-upstream.conf</code>. The full contents of which should be:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>set $my_app_upstream "my-app-green";
</code></pre></div></div>

<p>And then we include that file in our server directive:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>server {
  include /root/my-app/nginx-my-app-upstream.conf;
  server_name my-app.shane.computer;

  location / {
    proxy_pass http://$my_app_upstream;

    # Preserve client headers for Rails
    proxy_set_header Host              $host;
    proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
  }
}
</code></pre></div></div>

<p>Now, whenever you run <code class="language-plaintext highlighter-rouge">nginx -s reload</code>, it’ll check our extra config file and set the variable to the upstream we want to serve.</p>

<p>So now, all we need is to smartly write to <code class="language-plaintext highlighter-rouge">nginx-my-app-upstream.conf</code>.</p>

<h2 id="toggling-between-upstreams">Toggling between upstreams</h2>

<p>I’ve written a <code class="language-plaintext highlighter-rouge">deploy-latest</code> script (and <code class="language-plaintext highlighter-rouge">chmod +x deploy-latest</code>‘d it) whose job it is to:</p>

<ol>
  <li>Pull the latest tag again. Note that this doesn’t change GREEN or BLUE.</li>
  <li>Start BLUE with that new tag</li>
  <li>Check if it’s working, using our already working healthcheck (comes for free with Rails)</li>
  <li>If yes, toggle the upstream</li>
  <li>Start GREEN with the new tag</li>
  <li>If it’s working, toggle nginx back</li>
  <li>Optionally, you can shut down BLUE at this point</li>
</ol>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c">#!/usr/bin/env bash</span>
<span class="nb">set</span> <span class="nt">-euo</span> pipefail

docker pull my_app.shane.computer:5500/my_app:latest
docker compose <span class="nt">--file</span> docker-compose.blue.yml up <span class="nt">-d</span>

wait_for_healthy<span class="o">()</span> <span class="o">{</span>
  <span class="nb">local </span><span class="nv">container</span><span class="o">=</span><span class="nv">$1</span>
  <span class="nb">local </span><span class="nv">max_attempts</span><span class="o">=</span><span class="k">${</span><span class="nv">2</span><span class="k">:-</span><span class="nv">15</span><span class="k">}</span>
  <span class="nb">local </span><span class="nv">attempt</span><span class="o">=</span>1

  <span class="k">while</span> <span class="o">[</span> <span class="nv">$attempt</span> <span class="nt">-le</span> <span class="nv">$max_attempts</span> <span class="o">]</span><span class="p">;</span> <span class="k">do
    if </span>docker ps <span class="nt">--filter</span> <span class="s2">"name=</span><span class="nv">$container</span><span class="s2">"</span> <span class="nt">--format</span> <span class="s1">' '</span> | <span class="nb">grep</span> <span class="nt">-q</span> <span class="s2">"(healthy)"</span><span class="p">;</span> <span class="k">then
      </span><span class="nb">echo</span> <span class="s2">"</span><span class="nv">$container</span><span class="s2"> is healthy!"</span>
      <span class="k">return </span>0
    <span class="k">fi
    </span><span class="nb">echo</span> <span class="s2">"[</span><span class="nv">$container</span><span class="s2">] attempt </span><span class="nv">$attempt</span><span class="s2">/</span><span class="nv">$max_attempts</span><span class="s2">: not healthy yet..."</span>
    <span class="nv">attempt</span><span class="o">=</span><span class="k">$((</span> attempt <span class="o">+</span> <span class="m">1</span> <span class="k">))</span>
    <span class="nb">sleep </span>2
  <span class="k">done

  </span><span class="nb">echo</span> <span class="s2">"[</span><span class="nv">$container</span><span class="s2">] did not become healthy in time"</span>
  <span class="k">return </span>1
<span class="o">}</span>

wait_for_healthy my_app_blue-web-1
<span class="nb">echo</span> <span class="s2">"set </span><span class="se">\$</span><span class="s2">my_app_upstream </span><span class="se">\"</span><span class="s2">my_app_blue</span><span class="se">\"</span><span class="s2">;"</span> <span class="o">&gt;</span> nginx-my-app-upstream.conf
nginx <span class="nt">-s</span> reload

docker compose <span class="nt">--file</span> docker-compose.green.yml up <span class="nt">-d</span>
wait_for_healthy my_app-green-web-1
<span class="nb">echo</span> <span class="s2">"set </span><span class="se">\$</span><span class="s2">my_app_upstream </span><span class="se">\"</span><span class="s2">my_app_green</span><span class="se">\"</span><span class="s2">;"</span> <span class="o">&gt;</span> nginx-my-app-upstream.conf
nginx <span class="nt">-s</span> reload
</code></pre></div></div>

<h2 id="deploy-flow">Deploy flow</h2>

<ol>
  <li>On your development machine, build and push your new image.</li>
  <li>On your production machine, run <code class="language-plaintext highlighter-rouge">./deploy-latest</code>.</li>
</ol>

<p>Done!</p>

<p>Some people will tell you to look into Kubernetes or Nomad to help orchestrate and deploy with a more robust blue-green methodology, but if this works, then it works! Bonus: you understand every line of it making debugging easier in the future.</p>

<p>Enjoy.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[Hot on the heels of yesterday’s deploying the application, I realised that deploying a new version causes some downtime. Even more downtime if the deploy fails or the application isn’t working for some reason.]]></summary></entry><entry><title type="html">The non-facy way of deploying a Rails app and a step toward the future</title><link href="https://technicallyshane.com/2025/08/29/the-non-facy-way-of-deploying-a-rails-app-and-a-step-toward-the-future.html" rel="alternate" type="text/html" title="The non-facy way of deploying a Rails app and a step toward the future" /><published>2025-08-29T09:56:00+00:00</published><updated>2025-08-29T09:56:00+00:00</updated><id>https://technicallyshane.com/2025/08/29/the-non-facy-way-of-deploying-a-rails-app-and-a-step-toward-the-future</id><content type="html" xml:base="https://technicallyshane.com/2025/08/29/the-non-facy-way-of-deploying-a-rails-app-and-a-step-toward-the-future.html"><![CDATA[<p>This month I’ve deployed three Rails projects to a server. This is all a fairly manual process - it’s only vaguely more sophisticated than the old days of FTPing up files to a server. It has served me well so far. Maybe these notes will help someone else who’s not making use of Heroku, Kamal, Capistrano, or other sophisticated deployment methods yet.</p>

<ol>
  <li>Add a DNS record to point to the server for the application. (Do this first to avoid propagation delays later on.)</li>
  <li>Create a sample nginx.conf file with an ideal setup.</li>
  <li>Create a sample docker-compose file with an ideal setup.</li>
  <li>Add the non-sample version to .gitignore.</li>
  <li>Push the application source code from my working machine to Github.</li>
  <li>Pull that down on Hetzner.</li>
  <li>Copy the two sample files and fill them with server related tweaks.
    <ol>
      <li>Use <code class="language-plaintext highlighter-rouge">docker ps</code> to figure out which ports are available.  After a few Rails projects on one server, you can’t rely on guessing a number in the 3000 range to be unique any more.</li>
    </ol>
  </li>
  <li><code class="language-plaintext highlighter-rouge">ln -s /home/shane/rails-project/nginx.conf /etc/nginx/sites-enabled/rails-project.shane.computer ; nginx -s reload</code> to set up the project for nginx.</li>
  <li><code class="language-plaintext highlighter-rouge">certbot</code> to get the SSL cert set up for the domain.</li>
  <li><code class="language-plaintext highlighter-rouge">docker compose up</code> gets everything going.</li>
  <li>Done!</li>
</ol>

<h2 id="my-least-favourite-part">My least favourite part</h2>

<p>It bugs the heck out of me that I have the entire code base just hanging loose in a directory. There’s a few reasons for this.</p>

<ol>
  <li>Using git as a mechanism for transporting files you end up with a real working directory of files. You cannot push to this from elsewhere. It’s janky.</li>
  <li>Configuration files, like those nginx.conf and docker-compose.yml are hidden away by .gitignore, since you don’t want those committed. At least not in the project codebase! I actually do want them version controlled though, like we saw in my <a href="https://technicallyshane.com/2025/08/17/quick-let-s-set-up-grafana.html">set up of grafana</a>.</li>
  <li>I don’t <em>need</em> all of the project files. Just the files that make it work.</li>
  <li>This forces a <code class="language-plaintext highlighter-rouge">docker build</code> of the project; although I’m quite pleased with my beefy server, it is rather slow at building images. My ancient intel macbook is faster.</li>
</ol>

<p>The obvious, correct way of transporting applications like these are just passing around the docker image. For that you need a container repository.</p>

<h2 id="distributiondistribution">distribution/distribution</h2>

<p>If you host your container images with github you get very little storage space for images, even on a paid plan. On Docker Hub, it’ll run you $16 per month per user. It’s expensive with no real need to be. In fact - that seems to fall right into my sabbatical’s ethos of making SaaSs that are expensive and tailored to teams available for solo developers at a more affordable rate!</p>

<p><a href="https://distribution.github.io/distribution/">distribution/distribution</a> is the open source software that github and docker hub both use to run that side of their business. You docker <code class="language-plaintext highlighter-rouge">docker login</code> to log into your own, private image repository and then can <code class="language-plaintext highlighter-rouge">docker pull</code> and <code class="language-plaintext highlighter-rouge">docker push</code> as normal. It can live on your own server, which you’re already paying for, so why not! Plus, it’s all private if you set it up correctly.</p>

<p>This month, I’ve been working on a Rails wrapper around that service which gives you a lovely interface to manage users, what images they can access and publish, and a few other nice features.</p>

<p>At the moment, it’s very roughly set up. (I’m now quite adept at implementing JWT!) But last night I got my software to deploy itself by using this private repository. So I can now:</p>

<ol>
  <li>Ditch the repo living on the server.</li>
  <li>Build the image locally - fast!</li>
  <li>Push the image.</li>
  <li><code class="language-plaintext highlighter-rouge">docker pull</code> on the server.</li>
  <li><code class="language-plaintext highlighter-rouge">docker compose up</code> to restart with the new code.</li>
</ol>

<p>Exciting!</p>]]></content><author><name></name></author><summary type="html"><![CDATA[This month I’ve deployed three Rails projects to a server. This is all a fairly manual process - it’s only vaguely more sophisticated than the old days of FTPing up files to a server. It has served me well so far. Maybe these notes will help someone else who’s not making use of Heroku, Kamal, Capistrano, or other sophisticated deployment methods yet.]]></summary></entry><entry><title type="html">Quick, let’s set up Grafana</title><link href="https://technicallyshane.com/2025/08/17/quick-let-s-set-up-grafana.html" rel="alternate" type="text/html" title="Quick, let’s set up Grafana" /><published>2025-08-17T14:02:00+00:00</published><updated>2025-08-17T14:02:00+00:00</updated><id>https://technicallyshane.com/2025/08/17/quick-let-s-set-up-grafana</id><content type="html" xml:base="https://technicallyshane.com/2025/08/17/quick-let-s-set-up-grafana.html"><![CDATA[<p>At the Hackspace we graph loads of metrics, including temperature around the space and number of MQTT messages happening across our two networks. We stick all of those in Grafana: <a href="https://grafana.nottinghack.org.uk/d/bdtrbszgl2io0d/hackspace?orgId=1">Nottinghack Grafana</a>. One of our members has gotten over excited and added the printer statuses (including 3D printers)!</p>

<p>It’s a bit contagious, so I want to track some things around my studio. I don’t have much time to do this, so let’s get on with it.</p>

<blockquote>
  <p><em>To do this instantly, you could use Digital Ocean</em></p>

  <p>I’m setting this up on a dedicated machine, but if you just want Grafana <em>right now</em> you can do so via their Marketplace, which gives you a VPS with it already set up. The instructions are on <a href="https://marketplace.digitalocean.com/apps/grafana">the Grafana Marketplace page</a>. <a href="https://m.do.co/c/181470abc83a">You can get $200 of credit by signing up with my link.</a> You’ll be done in three minutes.</p>
</blockquote>

<h2 id="dns">DNS</h2>

<p>Do DNS first, since it can take some time to propagate; let it do that whilst we’re working on other things.</p>

<p>I’m going to be using <code class="language-plaintext highlighter-rouge">graphs.shane.computer</code>.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>A    graphs    &lt;whatever your IP address is&gt;
</code></pre></div></div>

<h2 id="nginx">Nginx</h2>

<blockquote>
  <p><em>root access</em></p>

  <p>I’m doing this all with root. If you’re seeing permission issues, double check what your user has access to.</p>
</blockquote>

<p>I run a bunch of dockerised applications on this server, so I like to try and keep them tidy.</p>

<p>In my home directory (on my server) I have a directory with those projects. Let’s make a new one.</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">mkdir </span>grafana
<span class="nb">cd </span>grafana
git init <span class="nb">.</span>
git commit <span class="nt">--allow-empty</span> <span class="nt">-m</span> <span class="s1">'Initial commit'</span>
</code></pre></div></div>

<p>Then let’s start our nginx.conf.</p>

<div class="language-conf highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">upstream</span> <span class="n">grafana</span> {
    <span class="n">server</span> <span class="m">0</span>.<span class="m">0</span>.<span class="m">0</span>.<span class="m">0</span>:<span class="m">3112</span> <span class="n">fail_timeout</span>=<span class="m">0</span>;
}

<span class="c"># Needed for websocket support.
</span><span class="n">map</span> $<span class="n">http_upgrade</span> $<span class="n">connection_upgrade</span> {
    <span class="n">default</span> <span class="n">upgrade</span>;
    <span class="s1">''</span> <span class="n">close</span>;
}

<span class="n">server</span> {
    <span class="n">server_name</span> <span class="n">graphs</span>.<span class="n">shane</span>.<span class="n">computer</span>;

    <span class="n">location</span> / {
        <span class="n">proxy_set_header</span> <span class="n">Host</span> $<span class="n">host</span>;
        <span class="n">proxy_pass</span> <span class="n">http</span>://<span class="n">grafana</span>;
    }

    <span class="n">location</span> /<span class="n">api</span>/<span class="n">live</span>/ {
        <span class="n">proxy_http_version</span> <span class="m">1</span>.<span class="m">1</span>;
        <span class="n">proxy_set_header</span> <span class="n">Upgrade</span> $<span class="n">http_upgrade</span>;
        <span class="n">proxy_set_header</span> <span class="n">Connection</span> $<span class="n">connection_upgrade</span>;
        <span class="n">proxy_set_header</span> <span class="n">Host</span> $<span class="n">host</span>;
        <span class="n">proxy_pass</span> <span class="n">http</span>://<span class="n">grafana</span>;
    }

    <span class="n">access_log</span> /<span class="n">var</span>/<span class="n">log</span>/<span class="n">nginx</span>/<span class="n">graphs</span>.<span class="n">access</span>.<span class="n">log</span>;
}
</code></pre></div></div>

<p>The port number I’ve just picked at random. Since I have a bunch of projects running through this nginx server, it’s become a bit of whack-a-mole finding an available port.</p>

<p>Change the servername to the hostname you just set up via DNS.</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>git add nginx.conf
git commit <span class="nt">-m</span> <span class="s1">'Add simple nginx configuration'</span>
</code></pre></div></div>

<p>Then we need to get nginx actually loading it.</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">cd</span> /etc/nginx/sites-enabled
<span class="nb">ln</span> <span class="nt">-s</span> /root/grafana/nginx.conf graphs.shane.computer
nginx <span class="nt">-s</span> reload
</code></pre></div></div>

<h2 id="ssl">SSL</h2>

<p>Get certbot set up using their <a href="https://certbot.eff.org/">super handy tool</a>.</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>certbot
</code></pre></div></div>

<p>Certbox is fantastic at guiding you through what needs to be done. You’ll see your new hostname (which comes via the nginx config we just setup). Select that and watch certbot automatically set up your SSL cert.</p>

<p>It’ll change the nginx config. Have a look through that and commit the changes.</p>
<h2 id="docker-compose">docker compose</h2>

<p>Docker compose will handle almost all the hassle here.</p>

<p>make a <code class="language-plaintext highlighter-rouge">docker-compose.yml</code>.</p>

<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">services</span><span class="pi">:</span>
  <span class="na">grafana</span><span class="pi">:</span>
    <span class="na">image</span><span class="pi">:</span> <span class="s">grafana/grafana-enterprise:latest</span>
    <span class="na">container_name</span><span class="pi">:</span> <span class="s">grafana</span>
    <span class="na">restart</span><span class="pi">:</span> <span class="s">unless-stopped</span>
    <span class="na">ports</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="s2">"</span><span class="s">3112:3000"</span>
    <span class="na">volumes</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="s">grafana-storage:/var/lib/grafana</span>

<span class="na">volumes</span><span class="pi">:</span>
  <span class="na">grafana-storage</span><span class="pi">:</span>
</code></pre></div></div>

<p>There’s that port number again, by the way. In <code class="language-plaintext highlighter-rouge">"3112:3000"</code>, 3112 is the port that will be used locally. 3000 is the port inside the docker container, which should remain as 3000 as that’s where grafana will run by default.</p>

<p>Let’s give it a go.</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker compose up
</code></pre></div></div>

<p>Give it a minute to start up. But then you should be able to go to your host and see the login page for Grafana! Nice.</p>

<p>Use admin / admin to sign in. You have to immediately change the password.</p>

<p>Once you’re happy it’s working, we can get docker to run this in the background. Use ctrl+c to stop docker running in the foreground. Then use the <code class="language-plaintext highlighter-rouge">--detach</code> to run it in in the background.</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker composer up <span class="nt">--detach</span>
</code></pre></div></div>

<p>All done.</p>

<p>I felt the need to write this up because a) the Grafana docs are split across a few pages for an Nginx install and b) ChatGPT just made shit up. Good news though: it’ll, without permission, crawl this page and be fixed there shortly I expect.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[At the Hackspace we graph loads of metrics, including temperature around the space and number of MQTT messages happening across our two networks. We stick all of those in Grafana: Nottinghack Grafana. One of our members has gotten over excited and added the printer statuses (including 3D printers)!]]></summary></entry></feed>