Posts in Category: site

Securing my witterings: Cloudflare Universal SSL and WordPress

Ok, a bit unfair – my mind has clearly been infected by skimming one too many clickbait headlines: I am sorry.

I use CloudFlare for most of my non-temporary sites so I can skimp on hosting. I’m pretty sure that’s not the tagline they push, but it works well and gets rid of those annoying image loading lags for the most part with very little effort from me.

I’d been ignoring the Universal SSL stuff as I just don’t have the need for their commercial CDN, but that changed with the recent move to enable it for all customers. Just visiting, though, was a mixed bag: yes, my site was served with zero SSL config on my part (and with zero webserver config), but the style sheets had gone, along with all the images.

Changing the site URI in the WP config just gives an infinite redirect loop – the issue is not that setting but the fact that the site itself thinks it’s insecure, and so all constructed links point to the insecure assets. It’s the browser which refuses to accept the mixed-security assets (generally a good thing, but, like XSS protection, a PITA when you’re testing).

Simple solution: download the SSL Insecure Content Fixer plugin and use its Test is_ssl() option. For me, the fix was to add a single if statement to wp-config.php which lets WordPress know that my proxy is handling the SSL for me, and so all constructed links should be https:// prefixed. The site is now available via both methods, but once the check is in place it’s also safe to change the site config within WP, meaning that redirects kick in when accessing over plain HTTP.
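The if statement in question is essentially a reverse-proxy check; a minimal sketch of the sort of thing involved (the exact header depends on your proxy – Cloudflare sends X-Forwarded-Proto – so treat this as illustrative rather than my literal config):

```php
/*
 * wp-config.php, above the "stop editing" line: if the proxy says it
 * terminated SSL for us, flag the request as HTTPS so is_ssl() returns
 * true and WordPress builds https:// links.
 */
if ( isset( $_SERVER['HTTP_X_FORWARDED_PROTO'] )
     && 'https' === $_SERVER['HTTP_X_FORWARDED_PROTO'] ) {
    $_SERVER['HTTPS'] = 'on';
}
```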

So I’ve done that: no idea if it’ll be a full-time change, but it’s possibly the first crypto-related change I’ve ever done online that hasn’t left me just wanting to give up and stick with plain-text wire-auth…

New Gallery (again)

Time for another make-over: my old gallery subdomain was fine at the time, but it has suffered from five years of neglect and become rather dated and annoying when it comes to navigating or showing images to others. Now that I have a WordPress theme which supports both gallery and slideshow modes, I shall be reposting some old images along with newer content, but with added WP tags.

All items will be ‘gallery’ tagged, but collections and other interesting/helpful meta-info will also pop up in the tags from time to time; B+W is certainly going to be making an appearance, and I’m currently playing to see if more EXIF info would be helpful or just clutter in the UI.

The old subdomain will redirect to the main tag view, so if you arrived here expecting a totally different view, then now you know why !
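Under lighttpd, for example, a subdomain-to-tag-view redirect of this sort is only a couple of lines with mod_redirect enabled (the hostnames here are placeholders, not my actual domains):

```
# lighttpd.conf – send everything on the old gallery subdomain
# to the main tag view on the primary site
$HTTP["host"] == "gallery.example.com" {
    url.redirect = ( "^/.*" => "https://example.com/tag/gallery/" )
}
```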

WordPress, lighttpd and HTTP 500 errors

So this has been driving me potty, but thanks to this bug report and lots of checkbox clicking it turns out that the Google Sitemap plugin v2.7 from BestWebSoft breaks the admin backend, but the plugin from Arne Brachhold works properly.

Still, I’m not impressed at the 100% opaque 500 response from WP: impossible to debug from the browser side, even with logging turned up to the max 🙁
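For anyone else staring at a blank 500, the server side is the only way in; WordPress has standard debug constants for wp-config.php that turn the opaque error into something loggable:

```php
// wp-config.php: surface PHP fatals to a log rather than a blank 500
define( 'WP_DEBUG', true );
define( 'WP_DEBUG_LOG', true );      // errors written to wp-content/debug.log
define( 'WP_DEBUG_DISPLAY', false ); // don't leak errors to visitors
```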

CloudFlare: Welcome to the Collective

Home hosting of content is a great idea (I’ve been doing it for over a decade) but at some stage the cons start to outweigh the pros. In particular, the speed of UK ADSL uplinks (448kbps) is a large factor in considering external, commercial, hosting, as is the availability of the line and the amount of SysAdmin time needed to keep ahead of the script kiddies.

Ok, so you don’t have to put in time to beat the scripters: staying on top of security updates is often sufficient, but in the early days of WordPress I found I could lose my outbound bandwidth for half an hour or more as a stream of dumb proxy attacks came in.

The electrical cost of running a home server also varies, from the super-fast might-as-well-rent-a-dedicated-server end of the market down to low-power devices that can spend the best part of two days building MySQL.

Now, though, there’s an interesting new twist to the cost/speed spreadsheets from CloudFlare, a start-up from 2010 which is making the idea of low-power home serving a much faster and more reliable option. They offer (for free) a distributed CDN (Content Delivery Network) together with a very Borg-like security consolidation system, where the source of any recognised attack on any site utilising CloudFlare is instantly blacklisted for every site in the collective.

The basic service is free, supported by a commercial offering with better stats and security features. So far it appears to do exactly what they suggest for static content, with one graphics-laden WordPress page dropping in load time from 34s to 4.05s – this is for a US site analyser looking at a UK site.

Uptime perhaps isn’t as good as that of the most reliable home host, as the very aggressive anti-DoS stance on their website does attract a lot of DoS attention, but given that they will serve the last known content when your site is entirely off-line, and the fact that they do actively monitor and work to mitigate attacks, it’s certainly worth a try.

And no, this domain is not currently using them: far too many horrid squirrelly technicalities with machines right now, but I hope to get there soon.

new theme for mobile viewers

I’m a huge fan of CSS and intelligent use of it such as removing images, background colours and scaling down font sizes for print, but some things need more work. If you’re looking at this site from an iPod Touch, iPhone or Android device, you should now get a much more compact ‘just-the-facts’ style view, courtesy of WPTouch. If you have any problems, or think the layout could still stand to be improved, do let me know in the comments.
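The print case mentioned above really is just a few lines of CSS; something along these lines (the selectors are illustrative, not this theme’s actual class names):

```css
/* Print stylesheet sketch: drop images and chrome, plain ink, smaller type */
@media print {
  img, #sidebar, .navigation { display: none; }
  body { background: none; color: #000; font-size: 10pt; }
}
```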

Good enough for Amazon, good enough for me

Amazon blocks Phorm adverts scan:

I hadn’t previously bothered to do this, as it seemed to be too early to say how Phorm would turn out and the implementation of the opt-out is so braindead and full of marketing BS that it just made me angry. Yeah, I want to ban all search engines so Phorm doesn’t scan me – right… What about all the other User-Agent strings that robots.txt can handle so nicely ? Oh yes – that’s right: if it was trivial then no-one would let their content be abused in this way.
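For contrast, this is all it takes to turn away a single well-behaved crawler via robots.txt – the mechanism that handles named User-Agent strings so nicely (ExampleBot is a placeholder token, not a real crawler):

```
# robots.txt – politely turn away one named crawler
User-agent: ExampleBot
Disallow: /

# everyone else carries on as normal
User-agent: *
Disallow:
```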

So email sent: multiple domains and all subdomains thereof requested blocked. See what happens next.

strange things are afoot at the circle k

Previously, most automated vulnerability probing I’ve seen on my systems has been brute force and fairly ignorant: one IP address tries many, many (and in some cases many, many and many) times to get in with varying credentials – the most blocked count recently was over 2500 attempts.

This morning it all changed, and the rows and columns of the table of attacking IPs and target users have basically been switched, so that one IP will try one login, and then another IP will try the same login, etc. This means that whilst the automatic banning is still in place, I now have a huge list of IPs that have never attempted to get in a second time.

How do I know it’s a single attack ? The fact that the usernames continue to be tried in alphabetical order is one real giveaway that this is a coordinated attack rather than a series of random one-shot attempts. The only really odd aspect is that the same series of usernames is repeated many times from different groups of addresses – I’d guess that whatever ‘common logins’ are being used have been split into a series of one-shot attempts and distributed to small sub-groups of machines (around 10 to 20 or so) which fire in a very fast sequence and then pause before beginning again. The pause could be simple latency and random chance, or, more likely, it’s the subgroup reporting back failure on one set of data to a central location (or, more P2P-like, the next sub-group of IPs) before the next set of logins is tried.

Interesting ? Maybe. It’s certainly a great way to tip your hand as to who is a member of a particular botnet as you’re exposing all your hosts in one run. On the other hand, it’s far harder to block and consumes far more bandwidth as you need to answer each attempt to discover who it is they’re trying to get in as – the previous method of just dumping the packets after the first offence did save a noticeable number of bytes when counted over a month. I think it’s actually a response to automatic IP blacklisting – only one valid login needs to get in to halt the attack sequence and the pattern shows that banning repeat offenders was a very successful tactic in halting the crack attempts.
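The two giveaways above – one attempt per IP, usernames arriving in alphabetical order – are easy to check for mechanically. A rough sketch, assuming you’ve already parsed the failed-login attempts out of your auth log into (ip, username) pairs (the function name and threshold are my own invention, not any standard tool):

```python
# Heuristic for spotting a distributed brute-force run in failed-login data.
from collections import Counter

def looks_distributed(attempts):
    """True if most IPs appear only once AND usernames arrive in sorted order."""
    ip_counts = Counter(ip for ip, _ in attempts)
    single_shot = sum(1 for c in ip_counts.values() if c == 1) / len(ip_counts)
    usernames = [user for _, user in attempts]
    ordered = usernames == sorted(usernames)
    return single_shot > 0.9 and ordered

# The new pattern: each IP tries exactly once, usernames alphabetical.
attempts = [
    ("198.51.100.1", "aaron"),
    ("198.51.100.7", "abigail"),
    ("203.0.113.4", "adam"),
    ("203.0.113.9", "alice"),
]
print(looks_distributed(attempts))  # prints True
```

The old-style attack – one IP hammering away thousands of times – fails both tests, so the same check distinguishes the two patterns.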

Of course, it could all be a very cunning scheme to exhaust disc space due to excessive logging and so cause a very roundabout DoS…

i is in yr pool, sinking yr servr3s

After meaning to get around to it for a year or so, I’ve finally got my Qube 2 providing public time services to the NTP UK Pool. Check the current status of minimal’s pool servers:

So if you use the UK pool then there’s a 1 in 738 chance (currently) that you’ll get data from it, or you can point at the host itself for a direct feed.
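On the client side, using the UK pool zone is the standard few-line ntp.conf stanza (a generic example, not this server’s own config):

```
# /etc/ntp.conf – pick four randomised servers from the UK pool zone
server 0.uk.pool.ntp.org iburst
server 1.uk.pool.ntp.org iburst
server 2.uk.pool.ntp.org iburst
server 3.uk.pool.ntp.org iburst
```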