Network Layout: Services

March 1, 2021    network services

Over the last few months I have finally been able to settle into this whole Work from Home thing that everyone else has been enjoying for so long. But this blessing is also a curse, as my experience in Systems and Network Administration makes it hard to deal with a “standard” home network. So, like a lot of other IT Professionals, I have subjected my poor family to the headaches that come with self-hosting and IT Administration.

I have so far avoided the need for an Identity Provider and any networked storage for anyone’s needs besides my own. My wife very much prefers to keep her tech exposure as streamlined and user-friendly as possible, so I work to keep her experience as frustration-free as I can make it. I do have plans to introduce JumpCloud into my environment as my Identity Provider, but that project will probably be on hold until the Fall, once I finish my degree.

So with that guiding principle in mind, I set out to build my environment into its current state.

Services Diagram

This diagram represents the current state of my network. I have opted to not include networking links because my network is flat and easy to grok by looking at the few cables in place at this time. Eventually, I will create a wiring diagram. As anyone else who has homelabbed can tell you though, this snapshot in time is one piece of the story of any home lab, which is why I am documenting this all on my blog.

Let’s begin expanding on what the diagram shows.


Core Infrastructure

My core infrastructure currently consists of two desktops and a rack mount server.


spsmtlpvep11

This is my trusty Dell Precision Tower 3420. With its 16 GB of RAM and Intel i5-6600 giving me 4 cores, I used this to begin putting my environment together while I worked to get spsmtlpvep12 online in the attic. For a few months I ran all of the services (except sieve, unifi, grafana and both influxDB containers) in the diagram on this host.

This weekend I finally put in the effort to deploy spsmtlpvep12 and was able to better distribute my resources. More on that to come.


spsmtlpvep12

This Dell R310 is definitely getting a little long in the tooth, with its Xeon X3430 processor and paltry 16 GB of RAM. I will probably only keep this in service until I can build a newer host with a modern processor and drastically more RAM. For now though, it makes a welcome addition to my lab, since it brings a hardware RAID Controller to the party.


shioxhast

Ah, my baby. My pride and joy. Before there were any of these other machines doing anything, shioxhast was doing all the heavy lifting. With the AMD Ryzen 7 3800X’s 8 hyper-threaded cores and 32 GB of RAM I haven’t found a way to cause myself any problems with it. But being my daily driver also means it’s not suitable to host my key services. So now it’s finally back to being my development box and gaming rig. Big props to my buddy Nathan for helping me spec out this beast at a great price. If you want someone to help you pick parts for your system, I can recommend none higher.



Proxmox

I utilize Proxmox as my hypervisor, with both spsmtlpvep11 and spsmtlpvep12 operating in a cluster. Having used ESXi and Hyper-V professionally, Proxmox has been a real breath of fresh air. Spinning up a new container or VM is effortless and I am able to focus my time on actually deploying applications or working on development projects in those new environments.
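
To give a sense of how effortless that is, creating a new LXC container from the Proxmox shell is a single pct command. Everything below (the VMID, template, storage names, and bridge) is an example value for illustration, not my actual config.

```shell
# Create a small Debian container from a downloaded template
# (example VMID, template, storage, and network settings)
pct create 201 local:vztmpl/debian-10-standard_10.7-1_amd64.tar.gz \
  --hostname testbox \
  --cores 2 \
  --memory 1024 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --storage local-lvm

# Start it and drop into a shell
pct start 201
pct enter 201
```

The web UI walks through the same options in a wizard, which is usually all I need.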


Containers

I like to use containers wherever possible, because they are so much lighter weight to work with. My primary factor when determining whether a new service will run in a container or on a full-fledged VM is capabilities. The only reason letsencrypt runs in a VM is that getting snapd working was going to cost significantly more effort than the utility I would get in return.

Pi-Hole (filter and sieve)

What homelab would be complete without a Pi-Hole handling DNS Filtering? I run a primary and a secondary instance, going back to my guiding principle: if I nuke my primary during maintenance for some reason, the backup is sitting there, ready to go.
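
The failover behavior I rely on is nothing fancy: resolvers are tried in order, and the secondary only matters when the primary doesn’t answer. A minimal Python sketch of that idea, with a stubbed query function standing in for a real DNS lookup:

```python
def resolve_with_fallback(hostname, resolvers, query):
    """Return the first successful answer from an ordered list of resolvers."""
    last_error = None
    for resolver in resolvers:
        try:
            return query(resolver, hostname)
        except OSError as err:  # resolver down or unreachable
            last_error = err
    raise last_error


# Stubbed lookup: pretend "filter" is offline, so "sieve" answers.
# The hostname and address are made up for the example.
def fake_query(resolver, hostname):
    if resolver == "filter":
        raise OSError("primary is down for maintenance")
    return "192.168.1.50"


print(resolve_with_fallback("gitlab.lan", ["filter", "sieve"], fake_query))
```

In practice this ordering lives in the DNS settings handed out to clients rather than in any code of mine; the OS resolver does the falling back.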



UniFi Controller

I have a UniFi AP deployed to an external structure on my property, and this is the controller for it. I intend to introduce more UniFi equipment at some point, but the need isn’t there yet.


Grafana

I am only in the early stages of my Grafana deployment, having installed it and both InfluxDB containers just yesterday. Eventually I will have it set up with all my desired monitoring, but it’s very much an active project.


InfluxDB (influxdb and influxdb2)

InfluxDB is a time-series database, designed specifically for storing data that occurs over time, such as server metrics or application performance monitoring. I am running two instances because my knowledge of InfluxDB has previously been with OSS 1.8 or older, and 2.0 introduces some shifts that I need to come up to speed on, including the Flux query language instead of the InfluxQL that I know. For now, influxdb handles the actual work of storing measures from my systems, while influxdb2 will be used for testing and familiarization, with the eventual goal of migrating completely.
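
To illustrate the shift I’m talking about, here is roughly the same question asked both ways. The measurement, field, and bucket names are hypothetical Telegraf-style examples, not my actual schema.

```
-- InfluxQL (1.x): mean CPU idle over the last hour, in 5-minute buckets
SELECT mean("usage_idle") FROM "cpu"
  WHERE time > now() - 1h
  GROUP BY time(5m)

// Flux (2.0): the same question asked as a pipeline
from(bucket: "telegraf")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_idle")
  |> aggregateWindow(every: 5m, fn: mean)
```

The pipeline style grows on you, but it is a genuinely different way of thinking about queries, hence the dedicated sandbox instance.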


reed.dev1

I have a few pet projects I am working on that are being built in Python. I have a bare-bones Python container in Proxmox that I have converted into a template. reed.dev1 is my current dev environment for that work, which I access using VS Code Remote over SSH. This helps me keep my dev environment clean, and allows me to easily convert my eventual applications into Docker images, which is my current plan. If I need a new environment, it’s easy enough to just clone a new container and be up and running in a minute.
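
For anyone curious, the Remote - SSH workflow needs nothing more than a host entry in your SSH config; VS Code then treats the container like a local workspace. The IP, user, and key path below are illustrative.

```
# ~/.ssh/config (example values)
Host reed.dev1
    HostName 192.168.1.60
    User reed
    IdentityFile ~/.ssh/id_ed25519
```

After that, “Remote-SSH: Connect to Host…” from the command palette drops you straight into the container.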

Virtual Machines

Where containers are light and agile, VMs feel bloated and slow to deal with. It’s funny to say that, as not that long ago VMs felt lightning fast and flexible.

Let’s Encrypt

I have designated a single server to handle all my Let’s Encrypt certs using certbot. As mentioned above, I’d love for this to be a container, but due to snapd it’s a VM. From here I generate the SSL certificates for my services so that I can default everything to HTTPS. This isn’t mandatory on a home network, and I know plenty of people don’t bother. But I like to do it for the peace of mind, as well as for the experience dealing with certificates that I get from it.
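
Since internal services aren’t reachable from the public internet over HTTP, a DNS-01 challenge is the natural fit for issuing certs from this box. A rough sketch with certbot, where the domain is a placeholder; in practice a provider-specific --dns-* plugin automates the TXT-record step that --manual prompts for.

```shell
# Issue a certificate via a DNS-01 challenge (interactive; certbot
# prompts you to create a TXT record to prove domain ownership).
# The domain below is an example, not one of mine.
sudo certbot certonly --manual --preferred-challenges dns -d service.example.com
```

The resulting certs land under /etc/letsencrypt/live/, from which I distribute them to the services that need them.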


GitLab

I wanted to maintain my own code repositories locally as well as on GitHub, and GitLab is what I settled on. I like it because of the integrated CI/CD tools and integrations with other services I have chosen to use. In fact, this blog post wouldn’t be here without those CI/CD tools. Not because I can’t do this manually, but because I am lazy, and didn’t want to have to take 15 steps every time I wanted to write a blog post. Other nifty features of GitLab include repository mirroring, which pushes my code changes up to my GitHub repos automatically, letting me worry about only one remote in my git settings.


Ansible / AWX

In line with being lazy about publishing blog posts, I administer IT equipment all day for work. The last thing I want to do is remember all the steps to update GitLab when a new release is available. Thanks to Ansible and AWX, I can easily kick off a playbook that does that exact task, or countless others. And it does it the same way every time. If you work in IT and haven’t gotten the memo yet, automation is vitally important. If you have a repetitive task that consumes time, odds are good it can be automated. And probably should be.
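
The GitLab update is a good example of how small these playbooks can be. A sketch, assuming an apt-based Omnibus install of the gitlab-ce package and an inventory group named gitlab (both assumptions for the example):

```yaml
---
# update_gitlab.yml - upgrade GitLab to the latest packaged release
- name: Update GitLab
  hosts: gitlab
  become: true
  tasks:
    - name: Refresh the apt cache
      ansible.builtin.apt:
        update_cache: true

    - name: Upgrade the gitlab-ce package
      ansible.builtin.apt:
        name: gitlab-ce
        state: latest
```

Running it from AWX instead of the CLI gets me logging, scheduling, and a big friendly launch button.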



Docker

As you can see, I don’t have a lot going on in Docker, with only Heimdall and Portainer actively running. The real reason I have it deployed currently is for the GitLab Runner functionality.


Heimdall

Heimdall is an application dashboard. This is a short way of saying it allows you to manage links to sites and services and access them via a tiled dashboard that includes an organizational structure built around tags.



Portainer

I deployed Portainer on a whim, to see what kinds of tools like this are out there. Its only real purpose right now is to make it easier to clean up after myself as I experiment with Docker. It will probably be replaced by something else eventually as I explore what’s available in this space.


GitLab Runner

GitLab Runner is the engine behind the CI/CD Pipelines in GitLab. This allows me to perform all sorts of tasks when I commit code, flag certain commits, or hit any of a bunch of other qualifiers. I currently have pipelines in place to publish updates to the static sites I manage and maintain, perform Pester tests against my PowerShell scripts, and build a Docker image and push it to my Docker Hub account.
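
For a flavor of what those pipelines look like, here is an illustrative .gitlab-ci.yml for a static-site publish job. The image, scripts, and variable names are stand-ins rather than my actual config:

```yaml
# .gitlab-ci.yml (illustrative example)
stages:
  - build
  - deploy

build_site:
  stage: build
  image: klakegg/hugo:latest   # example image; any static-site builder works
  script:
    - hugo --minify
  artifacts:
    paths:
      - public/

deploy_site:
  stage: deploy
  script:
    # DEPLOY_TARGET would be defined as a CI/CD variable in GitLab
    - rsync -av --delete public/ "$DEPLOY_TARGET"
  only:
    - master
```

Push to master, wait a minute, and the post is live. That's the whole workflow.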

Closing Thoughts

So that’s what I currently have running on my network. But that’s not the end of what I have going on. There are some pretty big DR gaps that I need to fill. First and foremost is the lack of backups. Proxmox has recently released a backup server, and I am looking into using it to handle backups of my virtualized services. There are some other big areas I want to focus on, including automating the failover between filter and sieve, as well as synchronizing their data. I also plan to automate deployment of new containers and VMs through Ansible, including cool tricks like adding a new DNS record for them to the Pi-Holes. And then I want to move DHCP off of filter to a distinct container of its own.

In posts to come I will go into more detail about all of these pieces and how they interact in my various workflows.

Like most things in life, these great ideas just leave me with more questions. And I love it.
