It’s always DNS

I host a few websites for myself and family on DigitalOcean.  Up until recently, I’ve always just spun up a new droplet for each site, so they were all fully independent from each other; this was the easiest and most convenient way to get a new site up and running without jeopardizing uptime on other sites if I made a mistake in configuration, and it was drop-dead easy to map a domain to a static IP.  It had some security benefits, too– if one site was compromised, it wouldn’t affect the rest.

But it was also maintenance-intensive.  I needed to log in to multiple servers to run updates; every plugin had to be installed separately on each server; and, obviously, it was starting to get expensive.  So I decided to consolidate my sites onto one server, using a fancy feature of WordPress called… “Multisite”.  Imaginative name, I know.
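If you’re curious what that consolidation looks like in practice, the conversion itself is only a couple of commands.  Here’s a rough sketch using WP-CLI (which I’m assuming is installed on the droplet; the title, path, and flags are illustrative rather than exactly what I ran):

    # Back up the database before touching anything (path is just an example)
    wp db export ~/backups/pre-multisite.sql

    # Convert the existing single-site install into a Multisite network
    # (drop --subdomains if you want subdirectory-based sites instead)
    wp core multisite-convert --title="My Network" --subdomains

    # WP-CLI then prints the wp-config.php and .htaccess changes it needs;
    # paste those in, log back in, and the Network Admin dashboard appears.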

The initial configuration went well, with no real hiccups (other than my accidentally rm’ing most of Apache’s configuration files– but a quick droplet rebuild took care of that).[1]  The trouble started once I had moved over the sites I was consolidating and switched the domains to point at the new Multisite server.  I spent two hours trying to figure out why one of the domains refused to point at the new server, only to discover (drumroll, please)… it was DNS.  I use Pi-Hole on my home network to block malicious sites, but it also provides a DNS caching service, which usually works great.  In this case, however, it kept pointing me back at the old server until the TTL finally expired.[2]  A quick flush of the DNS cache, and I could see that the domain was correctly configured.  Fifteen minutes later, I had SSL up and my plugins configured.
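If you ever find yourself chasing the same ghost, comparing what your local resolver returns against a public one makes a stale cache obvious.  A quick sketch (example.com stands in for the real domain):

    # What my Pi-Hole, the default resolver on my network, thinks
    dig +short example.com

    # What an outside resolver thinks; if these differ, something is cached
    dig +short example.com @1.1.1.1

    # Flush Pi-Hole's cache by restarting its DNS service
    pihole restartdns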

So what’s the lesson in all this?  Even when you think it’s not DNS… it’s DNS.

References
[1] Yes, I could have restored the configuration without too much difficulty, but I was early enough in the build that it was faster to just start over.
[2] I did set the TTL to a very low number when I started this process, but the cached record wasn’t refreshed until the original TTL expired.

New Host!

I’ve finally moved to a VPS on DigitalOcean from my previous (free) shared hosting.  I did this for a few reasons.  First, while my hosting was free for a year with my domain name, that year was almost up, and renewing for a second year would have cost $38.88/year; that’s a decent price, but I looked at my options and decided that moving to DigitalOcean wouldn’t cost much more (around $30 more across the year, since I use the weekly backups option).  Second, it would give me much more control over my server (now I get SSH access!).  And third, it would centralize all of my VPS instances in the same place, since I’ve used DigitalOcean for several years to host various projects.

Of course, as with so many things, this migration wasn’t sparked by a simple glance at the calendar.  While I’d intended to move my host for the last month or two, the timing was decided by my messing up a WordPress upgrade on the old site at the beginning of December.  I used the automatic updater, ignored the warnings about making sure everything was backed up first,[1] and told it to apply the new version.  When WordPress exited maintenance mode, I was locked out of the administration dashboard.  The public part of the website was still up and running, but the backend was locked off.  Since I was entering finals week at my university, I decided to just let it be until I had some time to come back and fix it.  Worst case, I had backups I could restore from, and I’d been meaning to migrate my site anyway.

Of course, things didn’t work out that way.  When I finally had some time on Christmas Eve, I discovered that a complete backup hadn’t been made in months.

Yes, I committed the cardinal sin of not verifying the state of my backups.  Apparently I’d screwed something up in their configuration, and since I’d never tried restoring from them before, I didn’t notice until I needed them.  At that point I decided that if the backups weren’t working, there was no point in trying to recover on a host I was going to abandon within a month anyway, so I spun up a WordPress droplet on DigitalOcean to hold the rebuilt site.
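The lesson is that a periodic sanity check would have caught this months earlier.  Something along these lines is enough (a sketch only; the directory and filename are assumptions about where your backup plugin drops its archives):

    # Is there actually a recent backup?
    ls -lt /var/backups/wordpress/ | head

    # Does the newest archive open cleanly and contain what you expect?
    tar -tzf /var/backups/wordpress/latest.tar.gz | head

    # Best of all: restore it to a throwaway droplet and load the site.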

I still had copies of all the content that was on the site, so I’d be able to restore everything without much trouble; it would just take some copy/pasting and time.  But before I did all of that, I thought “what if I’m overlooking something really simple with the old site?”  I did a little searching, and apparently W3 Total Cache, which I used to create static pages for my site and decrease load times, can cause problems with WordPress upgrades.  I disabled it via FTP,[2] reloaded the site, and I was able to access the admin area again.  Turns out the simple steps you should take before completely rebuilding everything are actually worth it.
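If you have shell access rather than just FTP, the same trick is a one-liner.  A sketch, assuming a default WordPress install path:

    # Renaming a plugin's folder makes WordPress treat it as missing,
    # which deactivates it without touching the database
    cd /var/www/html/wp-content/plugins
    mv w3-total-cache w3-total-cache.disabled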

Since I had already spun up and started configuring my new site, I decided to press onwards.  The move was made considerably easier by regaining access to WP Clone on the original site, which let me migrate everything from the old site to the new one in just a few minutes.  I pointed the nameservers at DigitalOcean and ran a few last checks before calling the bulk of my work done.

The next day, while I was tidying up some loose ends and preparing to set up SSL, I realized that my email no longer worked– my mail server lived on the same box that hosted my old website, which meant I needed to find a new solution.

While I’ve been meaning to set up my own email server sometime soon, I wasn’t confident in my ability to get it up and running quickly, and email is one of those vital services I depend on working 100% of the time.  In years past, I would have simply used Google Apps[3] to host my email, but that is no longer the free option it once was.  Luckily, I found a solution thanks to Ian Macalinao at Simply Ian: use Mailgun as a free email server.  Mailgun is designed to send out massive email blasts for major companies, but it also offers a free tier for people and companies sending fewer than 10,000 emails per month.  I send a fraction of that number, so this was perfect for me (and their mass email prices seem quite reasonable, so I might even use them for that if the need ever arises).  Ian handily provided a set of instructions for setting up the proper routing, and, while some of the menu options have changed, I was able to get my new email up and running within a few minutes.
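The setup is mostly DNS work, and once the records are in you can confirm they’ve propagated from the command line.  A sketch (the Mailgun hostnames and SPF value below are what their dashboard asked for when I set this up, so treat them as illustrative rather than gospel):

    # Incoming mail should route to Mailgun's mail exchangers
    dig +short MX example.com
    # expecting something like: 10 mxa.mailgun.org.  10 mxb.mailgun.org.

    # SPF record so outgoing mail isn't flagged as spam
    dig +short TXT example.com
    # should include something like "v=spf1 include:mailgun.org ~all"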

So I’d managed to get both the site and my email up and running, but I still couldn’t get SSL working.  For those who don’t know, SSL stands for Secure Sockets Layer, and it’s what powers the little green padlock you see in your address bar when you visit your bank, or PayPal, or this website.  I wrote an explanation of how it works a while back, and I suggest checking that out if you want to learn more.
One of the benefits of hosting my website on a VPS is that I don’t need to use the major third-party SSL providers to get certificates saying my server is who it says it is; I can use the free and open Let’s Encrypt certificate authority instead.  Unfortunately, I just couldn’t get the certificate to work correctly; the automated tool was unable to connect to my server and verify it, which meant the auto-renewal process wouldn’t complete.  I could have generated an offline certificate and used that, but the certificates only last ninety days, and I wasn’t looking forward to going through the setup process every three months.[4]  I tried creating new virtual host files for Apache, my web server, but that just created more problems.  Eventually, I figured out that I had misconfigured something somewhere along the line.  Rather than try to figure out which of the dozens of edits I had made was the problem, I gave up and reverted to a snapshot I had taken before starting down the rabbit hole.[5]  After rolling back to before my virtual host meddling, I was able to successfully run the Let’s Encrypt tool, generate my certificate, and secure my site.
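For the record, once the underlying Apache configuration was sane again, the Let’s Encrypt side was only a couple of commands.  A sketch, assuming the certbot client with its Apache plugin (example.com is a placeholder):

    # Obtain and install a certificate, letting certbot edit the virtual host
    sudo certbot --apache -d example.com -d www.example.com

    # Confirm that automated renewal will actually work before forgetting about it
    sudo certbot renew --dry-run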

Lesson learned!


Photo credit Torkild Retvedt.

References
[1] I didn’t actually ignore this warning.  I had a backup plugin configured on the site; I figured I could probably roll back if I really needed to.
[2] If you’re in a similar situation, just renaming the plugin folder to something else– w3-total-cache to w3-total-cache123, for example– will disable it.
[3] Which is now G Suite, but that sounds silly.
[4] It’s a pretty straightforward and simple process, but I know I would forget about it at some point, the certificate would expire, and the site would have issues.  If I can automate that issue away, I would much rather do that.
[5] Snapshots are essentially DigitalOcean’s version of creating disk images of your server.  I absolutely love snapshots; they’ve saved my bacon more than once, and I try to always take one before I embark on any major system changes.

Hacking the Hackers

Have you ever heard of Hacking Team?  It’s an Italian company specializing in “digital infiltration” products for governments, law enforcement agencies, and large corporations.  Simply put, they sell hacking tools.

You might think, given their business model, that they would monitor their own security religiously.  Last year, however, they were hacked.  Majorly hacked.  “Hundreds of Gb” of their internal files, emails, documents, and source code for their products were released online for all to inspect, as were their unencrypted passwords.[1]  Also released was a list of their customers, which included the governments of the United States, Russia, and Sudan—the last being a country controlled by an oppressive regime that has been embargoed by the E.U.[2]

Last Friday, the person claiming responsibility for the attack, “Phineas Phisher”, came forward with details about how they did it.  It’s worth reading through if you’re interested in security; if you’d like an explanation geared more towards the layperson, Ars Technica has a pretty good write-up/summary of the attack.

I was particularly struck by how they gained access to the network.  According to Phineas,

Hacking Team had very little exposed to the internet. For example, unlike Gamma Group, their customer support site needed a client certificate to connect. What they had was their main website (a Joomla blog in which Joomscan didn’t find anything serious), a mail server, a couple routers, two VPN appliances, and a spam filtering appliance… I had three options: look for a 0day in Joomla, look for a 0day in postfix, or look for a 0day in one of the embedded devices. A 0day in an embedded device seemed like the easiest option, and after two weeks of work reverse engineering, I got a remote root exploit…  I did a lot of work and testing before using the exploit against Hacking Team. I wrote a backdoored firmware, and compiled various post-exploitation tools for the embedded device.

Basically, to avoid detection, Phineas discovered a unique vulnerability[3] in one of their embedded devices (likely one of their routers), figured out how to use it to get into the rest of the network, and then carried out the attack through that piece of hardware without anybody noticing.  No matter your feelings about the attack, this is an impressive feat.


References
[1] By the way, here’s some advice: if you are in security (or anything, really, this isn’t security-specific) you should really make sure your passwords are more secure than “P4ssword”, “wolverine”, and “universo”.  Use a passphrase instead.
[2] As an Italian company, this means that they were technically violating the embargo.
[3] These unique vulnerabilities are called “zero-days” in computer security circles, because hackers find them before the company maintaining the software or device does— so once the company finds one, it has zero days to mitigate the damage.

Email server admins are underappreciated

Today I reconfigured a server I maintain for the Office of Residential Life and Housing.  It broke yesterday because of a database issue, but I’ve taken this as an opportunity to rebuild it and add an email server.  I have it mostly up and running now, but it’s been a long, slow process that took far longer than I expected (as a side note, this would have been far easier if the backups I had were up to date.  Always check your backups!).

Building an email server is more difficult than I expected.  I almost expected to just run sudo apt-get install postfix and have an email server up and running; sure, it would need some configuration, but I’d be able to start sending and receiving mail almost immediately.  And yes, that might be true if I installed something like Mail-in-a-Box or iRedMail, but I decided that was too easy, jumped into the deep end, and immediately started configuring a mail server using Postfix, Dovecot, MySQL, and SpamAssassin (and would have been instantly lost if it hadn’t been for this awesome guide).  So I spent twelve hours copying and adapting configuration to my purposes, rewriting databases, adding users, and restarting services when I messed up.
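The packages themselves, for what it’s worth, install in one line; it’s everything after that which eats the hours.  A sketch for Debian/Ubuntu (exact package names vary a bit by release, so treat this as illustrative):

    # Postfix handles SMTP, Dovecot handles IMAP and local delivery,
    # MySQL stores the virtual users and domains, SpamAssassin filters junk
    sudo apt-get install postfix postfix-mysql dovecot-core dovecot-imapd \
        dovecot-lmtpd dovecot-mysql mysql-server spamassassin spamc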

It was absolutely awesome.

There’s something about taking the blank screen of a terminal, typing commands across it, and making something work.  When you reload the page and it actually works the way you want it to, there is an immense feeling of satisfaction and accomplishment.  You took something that was blank and empty, and turned it into something useful.  There’s no feeling quite like it in the world.

That said, I’m totally using one of the ready-to-deploy email servers next time.  Making something work yourself is fantastic when you have the time, but sometimes you just need whatever you’re working on to be up and running.

Listing image by RobH, from Wikimedia. Used under the Creative Commons Attribution-Share Alike 3.0 Unported license.