Database recovery is tricky at the best of times, and it never seems to happen when and how you expect. In this case, my self-hosted Zoneminder server crashed hard and came up broken. It appears to be related to some index or table in the database getting corrupted, but I’m not enough of a “database surgeon” to say with any certainty.
When this crash happened, the first sign was that the mariadb service was crashlooping.
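If you hit something similar, the first diagnostic steps look roughly like this (unit name and log paths assume a systemd-based install with default MariaDB packaging):

```bash
# Check the service state and restart history
systemctl status mariadb

# Read the most recent log lines to find the actual crash reason
journalctl -u mariadb -n 300 --no-pager

# MariaDB's own error log often names the corrupted table or index
tail -n 100 /var/log/mysql/error.log
```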
Taking photos is fun and easy.
Just kidding, it’s a fractal of complexity, FOMO, and slowly realizing how little you actually know.
However, one nice thing is that with some simple Unix/Linux tools, it’s remarkably easy to mass-produce good-looking JPEG images from your raw photos, without having to actually learn Lightroom.
After a few days of fiddling with the settings, I’ve come up with this bash script to process my images:
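The excerpt cuts off before the script itself; as a stand-in, here’s a minimal sketch of the approach, assuming darktable-cli is available and the raws are .NEF files (both assumptions):

```bash
#!/usr/bin/env bash
# Batch-convert raw files to JPEG with darktable's CLI, which applies
# darktable's default pipeline (or a sidecar .xmp if one exists).
set -euo pipefail

mkdir -p out
for raw in *.NEF; do
    darktable-cli "$raw" "out/${raw%.NEF}.jpg"
done
```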
While there are plenty of IoT security cameras that promise privacy, none of them really deliver. Eufy recently got busted for secretly accessing people’s feeds, UniFi got breached, and practically every off-shore IoT device is slurping as much metadata (and regular data) off your devices as it possibly can. It’s not hard to understand the need for secure and private home security devices.
Despite its age and legacy, ZoneMinder is still by far the most capable and complete solution for this.
With Mastodon quickly becoming a refuge for former bird-site users fleeing the new regime, many are considering self-hosting their Fediverse instance. There are many good reasons to do this, such as privacy, data ownership, or even maintaining consistent performance while larger communities struggle to onboard an influx of new users.
But, as always, self-hosting means new responsibilities! In this case: keeping the data safe and secure, keeping the instance operating correctly, and making sure the server isn’t disseminating malware.
Anybody who runs an internet-facing webserver has seen their fair share of spammy scanners in the logs. It varies server to server, but some of mine get up to 15,000 scans per day.
Almost all of these are harmless network mappers, but they still annoy me. Many are compromised hosts or belong to hackers and organized crime rings. While blocking them wholesale risks the odd false positive, it’s probably safe to block all of these.
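The mechanics of a log-driven blocklist are simple; here’s a sketch, assuming an nginx access log and iptables (threshold and paths are placeholders):

```bash
#!/usr/bin/env bash
# Drop any client that has hammered the server more than THRESHOLD times.
set -euo pipefail

LOG=/var/log/nginx/access.log
THRESHOLD=1000

# Count requests per source IP, busiest first
awk '{print $1}' "$LOG" | sort | uniq -c | sort -rn |
while read -r count ip; do
    [ "$count" -lt "$THRESHOLD" ] && break
    iptables -I INPUT -s "$ip" -j DROP
done
```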
Many admins and developers like automatically updating their servers with new builds as they become available. Commonly known as “CI/CD”, this process allows teams to iterate much faster and speed up product development.
Often, this is simply pulling from a repo and running a couple docker-compose commands, which is very easy to automate.
A bad way to do this is using a cron job that runs every 10 minutes to pull from the repository and execute any commands.
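For reference, the naive version is just this (repo path, user, and compose file are placeholders):

```bash
# /etc/cron.d/deploy -- blindly pull and redeploy every 10 minutes
*/10 * * * * deploy cd /opt/myapp && git pull && docker-compose up -d --build
```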
This guide is for an advanced Debian GNU/Linux installation using the ZFS storage system with an encrypted root volume for security and privacy. It will also be upgraded from the current Stable release (Bullseye) to the rolling-release Unstable version (Sid).
ZFS has long been considered the last word in advanced storage. With its safety, efficiency, and performance mechanisms, it’s easy to see why it’s popular in the storage world, DIY and enterprise alike.
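As a taste of where the guide ends up, the Bullseye-to-Sid jump itself is just an APT sources swap followed by a full upgrade:

```bash
# Point APT at unstable (sid) instead of bullseye
sed -i 's/bullseye/sid/g' /etc/apt/sources.list

# Refresh the package lists and perform the distribution upgrade
apt update && apt full-upgrade
```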
Since late 2018, I had been a full-time Arch Linux user. At that time, it was worth it for me to spend the extra time dealing with Arch’s quirks, meticulously updating my AUR software, fiddling with all-manual configuration, and manually migrating any software between major versions whenever Pacman updated them. It was both a great learning experience, and… well… A bit of a waste of time ;)
Needless to say, things have changed in my life since then, and I now place a much larger emphasis on ‘boring’ stuff.
I have an odd fascination with radiation… Not to the point that I’m buying “Naturally Occurring Radioactive Materials” (or NORMs for short) on eBay, but certainly to the point that I own a digital geiger counter and regularly measure… things…
Recently, I discovered https://radmon.org, a site where users can connect a Geiger counter to its API and send data to a network of scientists who study background radiation in real time.
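Submitting a reading is a plain HTTP request; here’s a sketch of the kind of cron job involved (the parameters follow radmon.org’s documented submit URL, and read_geiger_cpm is a hypothetical stand-in for however your counter exposes readings):

```bash
#!/usr/bin/env bash
# Read a counts-per-minute value and submit it to radmon.org.
CPM=$(read_geiger_cpm)   # hypothetical: replace with your counter's interface

curl -sG 'https://radmon.org/radmon.php' \
    --data-urlencode 'function=submit' \
    --data-urlencode 'user=YOUR_USER' \
    --data-urlencode 'password=YOUR_PASS' \
    --data-urlencode "value=${CPM}" \
    --data-urlencode 'unit=CPM'
```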
Log aggregation systems are fantastic. As are time-series metrics databases. But that’s not what this post is about. These methods aren’t a replacement for those systems at all, but a basic way to implement the fundamentals of monitoring and alerting.
You see, the strength of a SIEM or log aggregation system is its numbers. It correlates data from hundreds or thousands of sources, giving very important insights about overall system usage patterns, login activity, audit trails, and more.
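In that spirit, the fundamentals can be as small as a cron job that counts log events and emails you (log source, threshold, and address are placeholders; assumes a working MTA and a Debian-style “ssh” unit name):

```bash
#!/usr/bin/env bash
# Alert when failed SSH logins spike within a ten-minute window.
FAILURES=$(journalctl -u ssh --since '10 min ago' | grep -c 'Failed password')

if [ "$FAILURES" -gt 20 ]; then
    echo "$FAILURES failed SSH logins in the last 10 minutes" |
        mail -s "SSH alert: $(hostname)" admin@example.com
fi
```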
I recently installed Debian Bullseye on an old Intel NUC6CAYH mini PC I had lying around. It’s a great little device for a home server: it’s very cheap, fits 16G of memory, and with four low-power cores it’s no slouch.
The first install attempt didn’t go well: missing firmware for the NIC caused the boot to hang for a couple of minutes. This happens quite a bit given Debian’s hard-line stance on binary blobs, so I re-installed with the non-free install media.
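For the record, the after-the-fact fix is also simple: enable the non-free component and install the firmware package (firmware-realtek is just an example; the right package depends on the NIC):

```bash
# Add contrib and non-free to the existing Bullseye sources
sed -i 's/ main$/ main contrib non-free/' /etc/apt/sources.list

# Pull the missing NIC firmware (Realtek shown as an example)
apt update && apt install firmware-realtek
```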
There are lots of “very correct” ways to make your server “very secure.” Most of them rely on paid services, complicated agent-manager topologies, and cool buzzwords like “zero trust”.
However, as they say, perfection is the enemy of progress. Many are discouraged by this absolutist approach to server safety, and forget the very basics. Obviously, the expensive and complex solutions exist for a reason, but at the same time a little goes a long way.
I have wanted to run Kubernetes at home for some time, but the main obstacle has been a reliable solution for providing load balancing for ingress or services, and the lack of a reasonable way to manage NAT transparently. While publicly routable IPv4 addresses are seemingly limitless* in the cloud, typically we only get one at home.
Similarly, there isn’t a straightforward way to build cloud-ey load balancers at home. While Google and Amazon can conjure up magic TCP load balancers on their complex overlay network platform, we don’t really have that luxury outside of the cloud.
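The usual answer in this space is MetalLB, which hands LoadBalancer services addresses from a pool you carve out of the LAN; I’m assuming that’s roughly where this goes (the address range is a placeholder):

```bash
# Give MetalLB a slice of the home LAN and announce it over layer 2
kubectl apply -f - <<'EOF'
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: home-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: home-l2
  namespace: metallb-system
EOF
```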
Similar to another post about WAN latency, this is a simple system to automate periodic internet speed tests. The two main components are speedtest-cli and ElasticSearch. These were chosen because I already had both set up and running, along with all the visualization and analytical software. To get a basic POC set up, just install ElasticSearch and Kibana with Docker. Once the node/cluster is running, the ‘speedtest client’ server can be set up.
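The glue between the two components is tiny: speedtest-cli can emit JSON, and Elasticsearch will index it as-is (index name and host are placeholders):

```bash
#!/usr/bin/env bash
# Run one speed test and index the JSON result into Elasticsearch.
speedtest-cli --json |
    curl -s -XPOST 'http://localhost:9200/speedtest/_doc' \
        -H 'Content-Type: application/json' \
        --data-binary @-
```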
This is a simple, ‘quick and dirty’ way to measure network latency over long periods of time. The only ‘complicated’ part is setting up InfluxDB, but I imagine that many folks already have it set up. To get started, check the official documentation.
Network latency will be measured with the good old ping command, then formatted with generic Unix tools. The statistics are then stored via InfluxDB’s write endpoint using the line protocol format.
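Put together, a single sample looks something like this (target and database name are placeholders; assumes the InfluxDB 1.x /write API):

```bash
#!/usr/bin/env bash
# Ping once, extract the round-trip time, and write it to InfluxDB
# in line protocol: measurement,tag=value field=value
TARGET=1.1.1.1

RTT=$(ping -c 1 "$TARGET" | awk -F'time=' '/time=/{print $2}' | awk '{print $1}')

curl -s -XPOST 'http://localhost:8086/write?db=latency' \
    --data-binary "ping,target=${TARGET} rtt=${RTT}"
```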
I installed Gentoo Linux on my vintage Thinkpad. This particular device has a rather colourful history. In mid 2015 I recovered it from an e-waste pile at my workplace and brought it back to life. In the years since, it’s been a playground of sorts. In five years it’s had four editions of Windows, three versions of BSD, exotic operating systems like Redox and ReactOS, and of course dozens of different Linux distributions.
About two weeks ago, I upgraded my single node ElasticSearch cluster from 6.8.6 to the latest 7.9 version. Last night, all hell broke loose…
The upgrade itself wasn’t perfect. There were some issues with my setup that the helpful “Upgrade Assistant” didn’t pick up before I committed. I was missing a few formerly optional parameters in my elasticsearch.yml config file, there were some odd field mappings that weren’t supported anymore, and there were some date-format issues with my grok scripts.
In my endless quest to essentially create a Cisco Firepower firewall for poor people, I found a bit of a gap in the open source security ecosystem. While we have great tools for detecting malicious network traffic patterns, we don’t have easy ways of detecting malicious files in transit. So, a bit of fiddling around later, and I have a fairly basic system for integrating Suricata’s ‘filestore’ functionality with ClamAV’s real time scanning, thereby producing a log containing the source and destination addresses of any malicious files detected.
The idea of this setup is to protect the Small Systems as well as we protect the Big Systems.
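Mechanically, the integration can be as simple as watching the filestore directory and handing new files to clamd (paths assume default Suricata and ClamAV layouts):

```bash
#!/usr/bin/env bash
# Scan every file Suricata extracts, logging only infected hits so they
# can be correlated with Suricata's fileinfo events (and their IPs) later.
FILESTORE=/var/log/suricata/filestore

inotifywait -m -r -e close_write --format '%w%f' "$FILESTORE" |
while read -r file; do
    clamdscan --no-summary --infected "$file" >> /var/log/suricata/clamav-hits.log
done
```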
This solution uses a proven stack to protect webservers from modern threats. Using OSSEC, Suricata, and the built-in firewall capabilities of a modern Linux system it is possible to build a low maintenance and stable threat protection platform with relatively low performance impacts.
It’s been specifically designed to be simple. The idea is that it will keep you on a ‘need to know basis’ and otherwise stay quiet and do its job.
For many years, we’ve taken for granted the ability to settle any argument, any trivial dispute, with a quick Wikipedia search.
That could change. I’m not really trying to fear-monger, but it’s always possible that the internet might go out and stay out. And like hell I’m going to sit in quarantine with my partner and not be able to settle up with Wikipedia!
On the 19th of January 2020, a malicious actor launched an attack against my home infrastructure. At 42 minutes after midnight a device located in Buenos Aires, Argentina began attacking my proxy server. For the next six minutes, approximately 150 malicious HTTP requests were made.
Fortunately, every single one of these requests was met with an HTTP 400 response. That’s because I don’t run Apache Struts 2, which this bot was attempting to exploit.
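(Pulling those numbers out of an access log is a one-liner; the IP and path here are illustrative:)

```bash
# Count this client's requests, grouped by HTTP status code
grep '203.0.113.7' /var/log/nginx/access.log | awk '{print $9}' | sort | uniq -c
```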
Over the years, I’ve chewed through quite a few different routers, firewalls, and even virtual appliances to connect my home network to the internet. Though most of these provided positive experiences, all of them had at least one point of friction, sometimes to the point of being a dealbreaker.
pfSense is a great platform, but has terrible ethics. Sophos is proprietary and has an awful CLI. Untangle feels more like an ad than a product.
TL;DR The default settings for Logstash index rotation are bad and will break your cluster after a few months unless you change the rotation strategy.
If you’re anything like me, you probably read somebody’s cool blog about how awesome ELK stack is and just had to have a piece of it. So you went through the quick start guide, googled your way through getting it up and running, then BAM you had an awesome logging system with all the bells and whistles!
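The fix means changing the rotation strategy from one-index-per-day to rollover; one standard way is an ILM policy like this (name and thresholds are placeholders, and I’m assuming this is roughly the change the post makes):

```bash
# Roll indices over by size/age and delete after a retention window,
# instead of accumulating a new index (and its shards) every day.
curl -XPUT 'http://localhost:9200/_ilm/policy/logstash-rollover' \
    -H 'Content-Type: application/json' -d '{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_size": "25gb", "max_age": "30d" }
        }
      },
      "delete": {
        "min_age": "90d",
        "actions": { "delete": {} }
      }
    }
  }
}'
```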
Think of it as “OpenStack for cheapskates.”
There are plenty of ways to automate the provisioning of virtual machines, and while this isn’t the best way it certainly works great for me. I am fortunate enough to have a very homogeneous environment at home; aside from a few appliances, nearly all my virtual machines are running Ubuntu 18.04. This approach certainly won’t work for those who have a mixed environment with different versions of Linux, Windows, and BSD derivatives.
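The excerpt doesn’t show the mechanism, but for a fleet this uniform the shape is usually cloning a prepared ‘golden’ template (VM and template names are placeholders; assumes libvirt/KVM):

```bash
#!/usr/bin/env bash
# Clone a prepared Ubuntu 18.04 template into a new guest and boot it.
NAME=$1

virt-clone --original ubuntu1804-template --name "$NAME" --auto-clone
virsh start "$NAME"
```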
Do you ever just update everything?
There’s a few times you might need to do this. For example, some nasty vulnerability comes along and ruins your week.
Or maybe you just want to be super up to date because you have a strange compulsion to have the latest and greatest of everything. Either way, here’s my solution:
Use Ansible inventories to update all your servers

I wrote this playbook as a simple way to ‘freshen up’ my homelab after months of neglect.
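The playbook itself isn’t shown in this excerpt; the ad-hoc equivalent gives the flavor (inventory path is a placeholder):

```bash
# Dist-upgrade every host in the inventory, with privilege escalation
ansible all -i inventory.ini -b -m apt -a "upgrade=dist update_cache=yes"

# Then reboot anything that needs it
ansible all -i inventory.ini -b -m reboot
```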
With the move from sysvinit to systemd, there were lots of small but important changes to the Linux ecosystem. One of them was the move from traditional syslog daemons to Systemd Journald. Now I’m not going to say this is a good or bad thing, as it entirely depends on your old habits and new optimism. What it does mean is a move to a faster and more flexible system log format but at the cost of some added complexity.
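In practice, the added complexity mostly means learning journalctl’s query flags instead of grepping flat files:

```bash
# Logs for one unit since the current boot
journalctl -u nginx -b

# Follow everything at warning priority or above
journalctl -f -p warning

# Narrow by time window, like grepping a dated syslog
journalctl --since '2024-01-01' --until '2024-01-02'
```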
I have, for a long time, been fascinated and terrified by “Virtual SAN” solutions.
The idea of combining storage and compute seems, on the surface, very attractive. It allows us to scale out our storage and compute together or separately in relatively small and affordable units, helping avoid the sticker shock of the upfront cost of storage systems. And for somebody especially prone to capex-phobia like me, that really is a great solution.
I don’t like Microsoft NPS.
That’s not to say that it isn’t a very convenient server role (it absolutely is), or that it doesn’t have a place (it sort of does). It’s just that it’s almost always, in my opinion, a better idea to go with another option.
So what’s wrong with using NPS?
Requires a full Windows license. Personally, I have an aversion to throwing away money, even if it’s somebody else’s money.
No, I would not like to say hello, Cortana. Nothing is as monotonous, boring, and brain-numbingly automatable as installing Windows, installing applications, joining a domain, and clicking all the right boxes in all the right places. And it’s even worse in the latest versions of Windows 10, where we’re greeted by the condescending robot voice of Halo’s deceptive antagonist artificial intelligence during the OOBE setup phase.
I’ve taken a particular liking to MDT, Microsoft’s solution to the absolute eye-glazing snorefest of configuring a new workstation or server.
Deploying MSI installers with Group Policy is super neat and super handy… most of the time. Sometimes, though, you need a bit more than just the default options when pushing out packages, and for those of us who don’t have a wheelbarrow full of money to burn on System Center, there are two ways to do this:
1. Use a GP Preference Item to distribute a configuration file to managed systems
2. Create a transform set to apply to the MSI installer file

While the first approach seems more straightforward at first, it does lead to the inevitable “GPO Spaghetti” once packages are added and removed.
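Once the transform exists, applying it is a single property on the install command line (file names are placeholders):

```
rem Install the MSI silently with the transform applied
msiexec /i package.msi TRANSFORMS=custom.mst /qn
```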
Everybody and their aunt has a NAS at home, but what about something with a bit more pizzazz? How about if I build a system out of standard, off the shelf, ‘surplus sale’ gear and spin it into a really neat storage appliance?
The Gear

At Fanshawe College, the ‘Asset Sale’ is a proud tradition. IT students line up around the block to get good deals on retired and scratch-and-dent electronics.