wp-cli, Where have you been all my life?

WordPress updates can be pretty insecure. FTP was invented back before there was an Internet, when no one thought that bad people might be on the same network you’re using (why even have a password if you let everyone see it?). Ah, for those naïve and simple times!

Yet even today most of the Web-site-in-a-box products you can get to run on your GoDaddy account use FTP. I control my own server, and you can bet your boots that FTP is turned right the hell off.

It can be a hassle setting WordPress up to allow its update features to work in a very secure fashion, however. I was wrangling RSA certificates when I ran across another solution: rather than push a button on a web page to run an update, log into the server and run a command there. Simple, effective, secure, with no file-permission fiddling or authorized_keys files.

wp-cli does way more than updates, too. It is a tool I’ve been pining for, for a long time, without even knowing it. Want to install a plugin? wp plugin install "xyz" and you’re done. Back up the ol’ database? They have you covered. Welcome to my tool belt, wp-cli!

If you’re not afraid to type three commands to update your site, rather than trying to maintain a hole in your security in such a way that only you can use it, then this is a great option for you. Check it out at wp-cli.org.

An Internet Security Vulnerability that had Never Occurred to Me

Luckily for my productivity this afternoon, the Facebook page-loading feature was not working for me. I’d get two or three articles and that was it. But when it comes to wasting time, I am relentless. I decided to do a little digging and figure out why the content loader was failing. Since I spend a few hours every day debugging Web applications, I figured I could get to the bottom of things pretty quickly.

First thing to do: check the console in the debugger tools to see what sort of messages are popping up. I opened up the console, but rather than lines of informative output, I saw this:

Stop!

This is a browser feature intended for developers. If someone told you to copy-paste something here to enable a Facebook feature or “hack” someone’s account, it is a scam and will give them access to your Facebook account.

See https://www.facebook.com/selfxss for more information.

It is quite possible that most major social media sites have a warning like this, and all of them should. A huge percentage of successful “hacks” into people’s systems are more about social engineering than about actual code, and this is no exception. The console is, as the message above states, for people who know what they are doing. It allows developers to fiddle with the site they are working on, and even allows them to directly load code that the browser’s security rules would normally never allow.

These tools are built right into the browsers, and with a small effort anyone can access them. It would seem that unscrupulous individuals (aka assholes) are convincing less-sophisticated users to paste in code that compromises their Facebook accounts, perhaps just as they were hoping to hack someone else’s account.

I use the developer tools every day. I even use them on other people’s sites to track down errors or to see how they did something. Yet it never occurred to me that I could send out an important-sounding email and get people to drop their pants by using features built right into their browsers.

It’s just that sort of blindness that leads to new exploits showing up all the time, and the only cure for the blindness is to have lots of people look at features from lots of different perspectives. Once upon a time Microsoft built all sorts of automation features into Office that turned out to be a security disaster. From a business standpoint, they were great features. But no one thought, “you know, the ability to embed code that talks to your operating system directly into a Word doc is pretty much the definition of a Trojan Horse.”

So, FIRST, if anyone asks you to paste code into the developer’s console of your browser, don’t. SECOND, if you are in charge of a site that stores people’s personal data, consider a warning similar to Facebook’s. Heck, I doubt they’d complain if you straight-up copied it, link and all. THIRD, just… be skeptical. If someone wants you to do something you don’t really understand, don’t do it, no matter how important and urgent the request sounds. In fact, the more urgent the problem sounds, the more certain you can be that you are dealing with a criminal.

Muddled Ramblings Going Down for Maintenance

I’m not sure exactly when yet, but Muddled Ramblings & Half-Baked Ideas will be going down for some long-overdue maintenance shortly. You may have noticed occasional outages lately, and with not one, but TWO exciting new sites soon to be hosted on this hardware, it’s time for a little renovation. The Mac Mini behind this site has been running non-stop nigh-on five years, and it has a lot of old experimental junk on it that just needs to go away.

The outage will likely last a few hours, and when things come back up they should be zippier than ever.

Then if I could just move this site design forward by about a decade (the irony that the massive article about rounded corner support in modern browsers uses tiled images to create rounded corners is not lost on me) we’ll be in good shape!

Back to 28: A Heck of a Security Hole in Linux

In December of 2008, some guy made a change to Grub2, the boot loader used by almost every flavor of Linux, and he (probably he, anyway) made a simple mistake in the part that manages the user password business. For seven years it was broken.

It turns out that due to careless programming, hitting the backspace key could cause Grub2 to clear a very important chunk of memory. Normally this would cause the machine to reboot, but if you hit the backspace key exactly 28 times, it drops you into the rescue shell, a special feature that allows admins access to the computer when things are fairly badly broken.

In the rescue shell, one can perform all sorts of mischief on a machine, including installing spyware or just deleting everything. Yep, walk up to (almost) any Linux box, hit the backspace key 28 times, press return, and blammo. Its undies are around its ankles.

Worse, a long sequence of backspaces and characters can write all kinds of stuff into this critical memory area. Pretty much anything an attacker wants to write. Like, a little program.

Since (as far as I know) the attacker has to have physical access to the machine to press the keys or attach a device that can send a more complex key sequence automatically, most of the world’s Linux-based infrastructure is not directly at risk — as long as the Linux machines people use to control the vast network are well-protected.

The emergency patches have been out for a couple of weeks now, so if you use Linux please make sure you apply them. The change comes down to this: if there’s nothing typed, ignore the backspace key. Magical!

You can read more about it from the guys who found it: Back to 28: Grub2 Authentication 0-Day. It’s pretty interesting reading. The article gets steadily more technical, but you can see how a seemingly-trivial oversight can escalate to dire consequences.

The lesson isn’t that Linux sucks and we should all use OpenBSD (which is all about security), but it’s important to understand that we rely on millions and millions of lines of code to keep us safe and secure. Millions and millions of lines of code, often contributed for the greater good without compensation by coders we hope are competent, and not always reviewed with the skeptical eye they deserve. Nobody ever asked “what if cur_len is less than zero?”
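For the curious, the shape of the mistake can be sketched in a few lines. This is illustrative Python, not Grub2’s actual C code, but it shows the class of bug: a cursor decremented on backspace with no check for zero.

```python
def read_password_buggy(keys, bufsize=8):
    """Toy input loop with the Grub2-style flaw: backspace
    decrements the cursor without checking whether it is zero."""
    buf = [""] * bufsize
    cur_len = 0
    for k in keys:
        if k == "\b":
            cur_len -= 1      # the bug: no 'if cur_len > 0' guard here
        else:
            buf[cur_len] = k
            cur_len += 1
    return cur_len

# 28 backspaces at an empty prompt drive the cursor to -28; in C, the
# next write lands far outside the buffer, scribbling over memory it
# should never touch. The fix was exactly the missing guard above:
# ignore the backspace when nothing has been typed.
print(read_password_buggy("\b" * 28))  # -28
```

In Python a negative index just raises an error or wraps around; in C it silently corrupts whatever happens to live at that address, which is how a one-line oversight becomes an authentication bypass.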

The infamous Heartbleed was similar. Nobody asked the critical questions.

Millions and millions of lines of code. There are more problems out there, you can bank on that.

Will the World Break in 2016?

Well, probably not. The world isn’t likely to break until 2017 at the earliest. Here’s the thing: Our economy relies on secure electronic transactions and hack-proof banks. But if you think of our current cyber security as a mighty castle made of stone, you will be rightly concerned to hear that gunpowder has arrived.

A little background: there’s a specific type of math problem that is the focus of much speculation in computer science these days. It’s a class of problem in which finding the answer is very difficult, but confirming the answer is relatively simple.

Why is this important? Because pretty much all electronic security, from credit card transactions to keeping the FBI from reading your text messages (if you use the right service), depends on it being very difficult to guess the right decoder key, but very easy to read the message if you already have the key. What keeps snoops from reading your stuff is simply that it will take hundreds of years using modern computers to figure out your decoder key.
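A toy example makes the asymmetry concrete. The number below is comically small (real RSA moduli run to hundreds of digits), but the shape of the problem is the same: finding the factors is slow, checking them is one multiplication.

```python
def factor(n):
    """Brute-force trial division: the 'hundreds of years' direction,
    shrunk down to a number small enough to crack instantly."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1

# The hard direction: work balloons as n gets bigger...
p, q = factor(562_451)
# ...but the easy direction is a single multiplication to confirm.
assert p * q == 562_451
print(p, q)  # 743 757
```

Double the number of digits in n and this loop takes vastly longer; confirming the answer stays a one-liner. That lopsidedness is the entire foundation of modern encryption.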

That may come to a sudden and jarring end in the near future. You see, there’s a new kind of computer in town, and for solving very specific sorts of problems, it’s mind-bogglingly fast. It won’t be cheap, but quantum computers can probably be built in the near future specifically tuned to blow all we know about data encryption out of the water.

Google and NASA got together and acquired the D-Wave Two, which, if you believe the hype, is the first computer that has been proven to use quantum mechanical wackiness to break through the limits imposed by those big, clunky atoms in traditional computing.

Pictures abound of the D-Wave (I stole this one from fortune.com, but the same pic is everywhere), which is a massive refrigerator with a chip in the middle. The chip has to be right down there at damn near absolute zero.

The chip inside the D-Wave Two was built and tuned to solve a specific problem very, very quickly. And it did. Future generations promise to be far more versatile. But it doesn’t even have to be that versatile if it is focused on breaking 1024-bit RSA keys.

It is entirely possible that the D-Wave Six will be able to bust any crypto we have working today. And let’s not pretend that this is the only quantum computer in development. It’s just the one that enjoys the light of publicity. For a moment imagine that you were building a computer that could decode any encrypted message, including passwords and authentication certificates. You’d be able to crack any computer in the world that was connected to the Internet. You probably wouldn’t mention to anyone that you were able to do that.

At Microsoft, their head security guy is all about quantum-resistant algorithms. Quantum computers are mind-bogglingly fast at solving certain types of math problems; security experts are scrambling to come up with encryption based on some other type of hard-to-guess, easy-to-confirm algorithm that is intrinsically outside the realm of quantum mojo. But here’s the rub: it’s not clear that other class of math exists.

(That same Microsoft publicity piece is interesting for many other reasons, and I plan to dig into it more in the future. But to summarize: Google wins.)

So what do we do? There’s not really much we can do, except root for the banks. They have resources, they have motivation. Or, at least, let’s all hope that the banks even know there’s a problem yet, and are trying to do something about it. Because quantum computing could destroy them.

Eventually we’ll all have quantum chips in our phones to generate the encryption, and the balance of power will be restored. In the meantime, we may be beholden to the owners of these major-mojo-machines to handle our security for us. Let’s hope the people with the power to break every code on the planet use that power ethically.

Yeah, sorry. It hurts, but that may be all we have.

Up… for now

Techno-troubles here at Muddled Ramblings and Half-Baked Ideas! The faithful little computer that has been serving up this site for the past several years is not healthy right now. I didn’t realize how important this dang blog is to me until it stopped working. Just when I was getting some momentum, too.

I’m looking for the best answer now (MacMiniColo.net has a pretty spectacular special running right now), but in the meantime it’s proving tough to keep this thing up. So, sorry in advance for outages.

Junk Science — A Telltale Sign

The other day a friend of mine posted a link to a peer-reviewed scientific study concerning the effects of a vegetarian diet. He posted an excerpt from the paper’s abstract:

Our results revealed that a vegetarian diet is related to a lower BMI and less frequent alcohol consumption. Moreover, our results showed that a vegetarian diet is associated with poorer health (higher incidences of cancer, allergies, and mental health disorders), a higher need for health care, and poorer quality of life.

Before I even clicked the link, alarm bells were going off. Just in those two sentences, they list seven things measured. That’s not science, kids, that’s shooting dice in the alley. If you measure enough things about any group of people you’ll find something that looks interesting. Holy moly, I thought, how many things did this survey try to measure, anyway? (I believe the answer to that is eighteen.)
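That dice-shooting is easy to put a number on. A quick back-of-the-envelope in Python: with the usual 5% significance cutoff, eighteen independent measurements give better-than-even odds that at least one looks “interesting” by pure chance.

```python
# Each measurement has a 95% chance of NOT clearing the p < 0.05 bar
# by luck; with 18 independent measurements, the chance that at least
# one of them does is 1 minus the chance that none do.
p_spurious = 1 - 0.95 ** 18
print(round(p_spurious, 2))  # 0.6
```

Roughly a 60% chance of a “finding” before anyone has even picked up a fork. That is why the number of things measured matters so much more than any one headline result.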

It’s possible that some of the correlations these guys found actually are significant, and not the result of random chance. It’s not possible to tell which ones they might be, as it’s almost certain that many of the conclusions are completely bogus.

And then there’s selection bias. I read elsewhere (link later) that in Austria, many vegetarians eat that way on doctor’s orders, because they’re already sick. That will skew the numbers.

But the paper was peer-reviewed, right? I spent a little time trying to figure out who those peers might be, but there’s no sign of them I could find on the site where this paper is self-published. And, frankly, “peer-reviewed” doesn’t mean shit anymore. Peers are for sale all over the place. If you can’t see the credentials of the people who reviewed the work, it may as well not be peer-reviewed at all.

And none of the authors seem to have any credentials or degrees themselves. Perhaps they just didn’t feel compelled to mention them, but that strikes me as odd — especially for Europeans, who traditionally love to lay on the titles and highfalutin name decorations.

The site has 53 references to that article being mentioned in the media. Some of the places that quote this nonsense actually have “science” in their titles. Sigh. Apparently Science 2.0 is Science where you believe every press release that crosses your desk. Perhaps Muddled Ramblings and Half-Baked Ideas will make number 54 — although I suspect the keepers of PLOS ONE might not want this reference promoted. But to their credit they do show the link to an article in that Bastion of Science Outside Online, where at least one journalist took a sniff before pressing the “publish” button.

Outside Online, you do science better than Science 2.0. You have my admiration.

So is this research totally useless? Actually, no. It’s possible a grad student somewhere could find ONE of the claims made in the paper interesting enough to do REAL science to improve our understanding of nutrition and health. The study might be to test the hypothesis “a vegetarian diet increases the chances of lymphoma,” or something like that. A single question, while keeping the rest of the variables as controlled as possible in a human study (which is really tough).

That work would take years to accomplish and would not show up in The Guardian or probably even Outside Online. It would be a small brick in our edifice of understanding, a structure that has been growing for hundreds of years.

So when you read about “a study” that shows many things, look at it with squinty eyes and you’ll see behind it a group of people rolling the dice, and there’s often no telling who their master is. It’s not really a study at all, but a press release with numbers.

Sucky Irony

Today at work I was wrestling with a database connection that was defying all my attempts to make it play nice. I needed to type in a command that I couldn’t pull off the top of my head, but I knew where on this blog to find it.

So quick like a bunny I typed in muddledramblings.com to find the answer, and I was greeted with a screen that said, in big bold letters:

Error establishing database connection.

Sigh.

Obviously it’s fixed now, or you wouldn’t be reading this, but dang.

How Secure is Your Smoke Detector?

You probably heard about that HeartBleed thing a few months ago. Essentially, the people who build OpenSSL made a really dumb mistake and created a potentially massive security problem.

HeartBleed made the news, a patch came out, and all the servers and Web browsers out there were quickly updated. But what about your car?

I don’t want to be too hard on the OpenSSL guys; almost everyone uses their code and apparently (almost) no one bothers to pitch in financially to keep it secure. One of the most critical pieces of software in the world is maintained by a handful of dedicated people who don’t have the resources to keep up with the legion of evil crackers out there. (Google keeps their own version, and they pass a lot of security patches back to the OpenSSL guys. Without Google’s help, things would likely be a lot worse.)

For each HeartBleed, there are dozens of other, less-sexy exploits. SSL, the security layer that once protected your e-commerce and other private Internet communications, has been scrapped and replaced with TLS (though it is still generally referred to as SSL), and now TLS 1.0 is looking shaky. TLS 1.1 and 1.2 are still considered secure, and soon all credit card transactions will use TLS 1.2. You probably won’t notice; your browser and the rest of the infrastructure will be updated and you will carry on, confident that no one can hack into your transactions (except many governments, and about a hundred other corporations – but that’s another story).

So it’s a constant march, trying to find the holes before the bad guys do, and shoring them up. There will always be new versions of the security protocols, and for the most part the tools we use will update and we will move on with our lives.

But, I ask again, what about your car?

What version of SSL does OnStar use, especially in older cars? Could someone intercept signals between your car and the mother ship, crack the authentication, and use the “remote unlock” feature and drive away with your fancy GMC Sierra? I’ve heard stories.

You know that fancy home alarm system you have with the app that allows you to disarm it? What version of OpenSSL is installed in the receiver in your home? Can it be updated?

If your thermostat uses outdated SSL, will some punk neighbor kid download a “hijack your neighbor’s house” app and turn your thermostat up to 150? Can someone pull a password from your smoke detector system and try it on all your other stuff (another reason to only use each password once)?

Washer and dryer? The Infamous Internet Toaster? Hey! The screen on my refrigerator is showing ads for porn sites!

Everything that communicates across the Internet/Cloud/Bluetooth/whatever relies on encrypting the data to keep malicious folks away from your stuff. But many of the smaller, cheaper devices (and cars) may lack the ability to update themselves when new vulnerabilities are discovered.

I’m not saying all of these devices suck, but I would not buy any “smart” appliance until I knew exactly how it keeps ahead of the bad guys. If the person selling you the car/alarm/refrigerator/whatever can’t answer that question, walk away. If they don’t care about your security and privacy, they don’t deserve your business.

I’ve been told, but I have no direct evidence to back it up, that much of the resistance in the industry to the adoption of Apple’s home automation software protocols (dubbed HomeKit) is because of the over-the-top security and privacy requirements. (Nest will not be supporting HomeKit, for instance.) In my book, for applications like this, there’s no such thing as over-the-top.

Junk Science is Everywhere

You would really expect better from Prevention Magazine (image lifted from the linked article on io9)

Perhaps you remember the headlines a while back: “Eat Chocolate to Lose Weight!” Every week we learn about a new study that shows that X helps you lose weight. And right there is the first problem:

A study.

Singular. Let’s get something straight right now: A single study has never proven anything, ever. This is a fundamental part of science. When someone makes a discovery, it’s exciting. When enough other people confirm that discovery, it’s knowledge. “A study” is useful to guide future research and to provide fun anecdotes on “Wait, Wait, Don’t Tell Me”. But that’s all.

Back to the chocolate. The finding that chocolate helped weight loss was discovered in a laboratory study with the proper protocols, and published in a peer-reviewed journal. So, that’s real science, right? Even if it hasn’t been independently reproduced, isn’t it still important health news? You can’t blame the health press for jumping on something as sexy as “chocolate makes you thin”.

But then the people who did the study came forward and told the world that it was all bullshit. They’d done it to prove how easy it is to get junk science into the mainstream. Even they had not imagined how easy it would be.

Let’s start with the scientific study itself. A result is generally considered statistically significant if it would show up less than 5% of the time by random chance alone. Yep, it’s considered acceptable that one in twenty scientific experiments is incorrect just based on random chance. Madness? Not really, when you consider that all the studies in a field eventually have to work into an interlocking puzzle that forms a bigger picture. The studies that were incorrect either by blind bad luck or poor procedures get weeded out when others cannot reproduce the results.

But what if you test twenty things at the same time? Statistically now you’re very likely to hit a false positive. To quote the article:

Here’s a dirty little science secret: If you measure a large number of things about a small number of people, you are almost guaranteed to get a “statistically significant” result.
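You can watch that secret at work with a quick simulation. Under the null hypothesis (when nothing real is going on), each test’s p-value behaves like a uniform random number, so a study that measures twenty things is just twenty rolls of the dice.

```python
import random

random.seed(0)  # fixed seed so the simulation is repeatable

TRIALS = 10_000
MEASUREMENTS = 20  # twenty unrelated things measured per "study"

# A study "finds something" if any one of its twenty null p-values
# happens to land under the 0.05 significance cutoff by luck alone.
false_finds = sum(
    any(random.random() < 0.05 for _ in range(MEASUREMENTS))
    for _ in range(TRIALS)
)
print(false_finds / TRIALS)  # ~0.64: most of these noise-only studies "find something"
```

Nearly two out of three studies of pure noise come back with a headline-ready result. No fraud required, just enough things measured.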

In the jargon of the junk-science industry, this is called “p-hacking”. An incredible number of the health claims you read are likely the result of this flim-flam.

“But,” you might ask, “aren’t there systems to filter this shit out before it goes mainstream?”

Well… yes, but those systems are pretty much broken. First off, science is a discourse, and all new ideas have to run a gauntlet of “peer review”. Ideally, the peers may not agree with the conclusions, but you damn well better dot your i’s and cross your t’s. If you take shortcuts in your process, your peers will keep you out of the journals. In the major journals, the reviewers take their work really seriously.

But now there are journals that, for a price, will publish whatever twaddle you wish to sell. While they claim to be peer-reviewed, the peers seem only to be reviewing whether your check clears, and have little interest in the scientific validity of your study.

Academia may not be fooled, but the fourth estate certainly is. Journalists who are trusted to sort through the garbage and bring important health information to their readers instead just blare the sexiest headlines. In some cases, the online comments by the readers of those articles ask the questions the so-called journalist should have asked before even running the story.

In the chocolate scam, they recognized another important fact: if the press release is actually written as an article fit for a magazine, even fewer questions are asked. It’s just cut, paste, and print.

The press is making hay selling junk science to you and me. We trust them to vet the information they bring us, and they are doing a terrible job. It’s not just health science, but that’s where most of the crap seems to be flying these days.

So if what passes for journalism these days won’t ask the hard questions, we have to. Don’t change your diet because of “a study”. Even honest studies are found to be false later on, and damn few of the health articles we read are based on honest studies. (That “damn few” assertion is totally baseless. I have no statistics to back it up. But you were right there with me, weren’t you?)

For your homework assignment, I’d like you to Stop And Think when you see something on Facebook, especially in the health industry. Maybe do five minutes of research on the people making the claim. Then CALL THEM ON IT. Say, “Hey! I call Junk Science on you!”

Get double-serious when you read the shit in magazines. Let’s publicly shame the so-called journalists who dump this stuff out without asking the hard questions first. Demand footnotes. Check sources. Someone has to teach those bozos their jobs.

Another Baby Step Toward Email Privacy

Email is frightfully insecure. Anything you write can and will be read by any number of robots or worse as it bounces across the Internet. Gmail? Forget about any shred of privacy. While the Goog champions securing the data as it comes to and from their servers, once it’s there, your private life is fair game.

It doesn’t have to be that way. We can encrypt the contents of our emails so that only the intended recipients can read them. I’m not sure how many more embarrassing corporate, government, and university email hacks will have to happen before people start to take this seriously, but remember, those were only the illegal hacks. Other people are reading your emails all the time already. This bothers me.

Sorting out a solution to this problem has been like having a big jumble of puzzle pieces on my coffee table, and while I’ve pushed the pieces around to get them to fit together, it’s become apparent that there’s a piece missing — until (perhaps) now. To understand the puzzle piece, it’s easiest to start with the hole it needs to fill. Some of this you may have read in posts from days of yore.

Here’s a simplified illustration of how email encryption works. Picture a box with two locks, that take two different keys. When you lock the box with one key, only the other key can open the box again. If you want to send me a message, I give you one of the keys, and you put the message in the box and lock it. Since I’m the only one with the matching key, only I can unlock it. Sorry, Google! You just get gibberish.
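Under the hood, the two “keys” are just numbers with a special mathematical relationship. Here is the lockbox in miniature, using textbook RSA with absurdly small primes (real keys are thousands of bits long, and real implementations add padding and other safeguards this sketch omits).

```python
# A toy RSA keypair built from two tiny primes, 61 and 53.
n = 61 * 53            # 3233: the shared part of both keys
e = 17                 # public exponent: the key I hand out freely
d = 2753               # private exponent: the key only I hold

message = 65                    # your message, encoded as a number
locked = pow(message, e, n)     # you lock the box with my public key
opened = pow(locked, d, n)      # only the private key opens it again

print(locked, opened)  # 2790 65: gibberish in transit, readable to me
```

Anyone watching the wire sees 2790 and learns nothing; recovering 65 without d means factoring n, which for real key sizes is the hundreds-of-years problem described above.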

Of course, there’s a catch. How do I get your half of the key pair to you? If I put it in an email, any bad guy could switch the key before it got to you, and then your secret message would only be readable by the bad guy. He’d probably pack the message back up and lock it with my key and send it on, so I might not notice right away that the message had been intercepted.

What’s needed is either a foolproof way to send my public key to you, or a way to confirm that the key you got really came from me.

If there were a foolproof way to send the key, we’d dispense with the whole lockbox thing and just send the original message that way. So until that foolproof way arrives, we are left with the need to authenticate the key I send you, through some trusted, hard-to-fake source. There are competing ways to accomplish this, and they all have flaws. This is the hole in our jigsaw puzzle.

The most common way keys are verified is through a series of Certificate Authorities, companies entrusted with issuing and verifying these keys. This works pretty well, as long as every single Certificate Authority can be trusted. The moment one is hacked, the entire system has been compromised. Guess what? CAs have been hacked. There are also several governments that are CAs, meaning those governments can listen in on any transaction on the Web today that uses https:// – which is just about all of them. Any of those entities could send a fake key to you and your software would trust it. I don’t know which makes me more nervous, that China is on the list or the United States.

So if you can’t collectively trust a few hundred companies and governments, who can you trust? There are several competing systems now where you and all your friends only have to trust one company. As long as you and I both set up with that company, they will quite effectively safeguard our communications. Your privacy is as good as the security and integrity of a single corporation — unless a jealous government shuts them down, anyway, or they get bought by a less-scrupulous company, or a pissed-off engineer in their IT department decides to drop their corporate pants. Having a single entity hold all the keys is called the “key escrow problem”.

At the far end of the spectrum is crowd-sourcing trust. There exists a large and (alas) floundering network of people who vouch for each other, so if you trust Bob and Bob says my key’s OK, you can choose to trust my key. I’ve tried to participate in the “Web of Trust”, and, well, here I am, still sending emails in the clear.

But now there’s a new kid in town! I just got an invitation to join the alpha-testing stage for a new key-verification service, keybase.io. Let’s say you want to send me a message. You need the public key to my lockbox. You ask keybase for it, and they send you a key. But do you trust that key? No, not at all. Along with the key, the server sends a bunch of links, to things like this blog and my twitter account. The software on your computer automatically checks those links to see if a special code is there, and if it is, invites you to go and look at those links to make sure they point to things I control. You see the special code on Muddled Ramblings or Twitter or whatever that only I could have put there, and you can feel pretty good about the key. You put your own stamp on the key so you don’t have to go through the manual verification again, and away you go!

There are more features to prevent bad guys from other shenanigans like hacking my blog and twitter before giving you a fake key, but you can read about them at http://keybase.io.

The service is still in the pre-pubescent stage; I’m fiddling now to see if I can use keybase-verified keys from my mail software. Failing that, there are other methods to encrypt and decrypt messages you cut and paste from your email. Kinda clunky.

Having set up my keybase identity, I have been given the privilege of inviting four more people aboard. Good thing, too, since otherwise I’d have no one to exchange messages with, to see how it works. I’d be grateful if one (or four!) of y’all out there would like to be a guinea pig with me. Drop me a line if you’re interested. Let’s win one for the little guy!

All in the Name of Science

A while back, in anticipation of America’s Favorite Holiday That Includes Encouraging the Youth of Otherwise Calm Neighborhoods in a Sanctioned Protection Racket, some chick online somewhere put up a list of ideal pairings of wine with Halloween candy.

She got it horribly, badly, wrong. My sweetie, who is exposed to these random “memes” (as the kids call them today) much more than I, decided it was time for someone to do this right. She assembled her crack research team, and off we went to buy booze and candy, focusing on the candy that typically lands in pillow cases and plastic pumpkins between 6 and 8 pm on Halloween night, along with some other iconic candies that appear on the shelves this time of year.

At the booze store, we huddled around the miniatures rack for much of the time, so our shopping cart wouldn’t set off the “Leaving Las Vegas” alarm at the cash register. We loaded up with many, many tiny bottles of booze, some that made me nervous just to look at the labels (marshmallow vodka?), and larger bottles of things that we thought might come in handy on other occasions long after the science was complete.

Oh, and I accidentally chose a very expensive bottle of scotch, rather than the usual rather expensive bottle.

After a pass through Cost Plus to find a few more exotic boozes and brews, we made our way home, pulled out all the stuff, and it started to sink in: science is not always a walk in the park. Doing this important research was going to require dedication, hard work, and more than one fuzzy morning.

Much (but not all) of the alcohol we tested. I knew the panorama feature on my phone would come in handy one day.

With all the different boozes (many in limited quantity) and this array of sugary treats:

The loot from our raid on the impressive candy aisle at Walgreens (expanded for Halloween!).

We knew it would be impossible (and palate-blowing) to test every possible combination. We leave it to those who follow to continue in the name of Science, and to try combos we may not have considered.

The results are exhaustively (and humorously) presented at Poetic Pinup, including descriptions of why each pairing worked, and links to some of the more obscure beverages.

Methodology:

Science is meaningless if you don’t show how you got to your conclusion. In our case, we often (but not always) started by choosing an alcoholic beverage. We would each take a sip, then scan over the available candy, looking for ones that our palate memories thought might work:

Zombie Zin on the test bench.

Naturally, we also had to try pairings that had a lower chance of success. (Side note: Skittles make a reasonable palate cleanser between tests.) By stepping outside the obvious we allowed Serendipity to stagger into the party with a package of Sugar Babies in one hand and a bottle of 100-proof cinnamon-flavored schnapps in the other, shouting a little too loudly for the room, “Hey, check this out!”

Tastes diverge, of course. While the light of my lab enjoyed Seagram’s Sweet Tea flavored vodka with Hot Tamales, I found the beverage undrinkable; and while I prefer Kraken to Sailor Jerry, the sea monster didn’t tickle the palate of the head researcher (she being the one who actually took notes), and I don’t remember the match I liked for that one. Guess it’s time to get back in the lab!

In the end, however, I was a bit surprised by how often we agreed. Some things simply taste good together.

The highlights for me (in no particular order):

  1. Jim Beam Honey and Baby Ruth. I didn’t even expect to like Jim Beam Honey.
  2. Peeps and Absinthe
  3. Hershey’s Special Dark and Black Russian
  4. Honorable mention: Crabbie’s Ginger Beer and More Crabbie’s Ginger Beer. Snappy!

There are a couple of things missing from the list, most notably:

  1. Super-dark high-cocoa-percentage chocolate. I love the stuff, but you won’t find that in a Halloween bag in any neighborhood I’ve ever lived in.
  2. Scotch. Remember that expensive bottle? Yeah, Science doesn’t deserve that much love. Blame government funding cuts.
  3. Krackle and Crunch bars. The chocolate in the two is different, different enough that each failed to pair with booze in its own way, but in the end we just didn’t find a good match for either. Perhaps someone out there can pick up this loose thread.

A final note:

Always the bridesmaid: Kit Kat Bar. It was good with so many things, but there was always some other option that was even better. It was late in the game when we found Kit Kat’s One True Love, but perseverance paid off. And if you’re going to throw a candy-and-booze bash, Kit Kat will play well with a lot of the liquid offerings.

A Very Good Colocation Deal

Just a quickie this morning to say that my hosting provider, macminicolo.net, is having a special right now that’s pretty sweet — and lasts forever. Some of you may remember that I switched hosting providers a few times before finally deciding to get full control of my server. It turns out macminicolo.net is hands down, far and away, the cheapest colocation provider I found for the power of the hardware you get. There’s an up-front cost (you own the machine), but then it’s all yours.

Their facility is located where a couple of major transcontinental data trunks converge in Nevada, so no hurricanes or earthquakes will interrupt your service. And they seem like nice guys.

I have a mini there; you’re reading this page from it. I don’t really use it as a Mac; I installed a complete LAMP stack that only talks to the UNIX-like underpinnings of the machine. So even if you’re not a Mac guy, it’s easy enough to close your eyes and pretend it’s Linux (FreeBSD, actually).

So if you’re looking for cheap colo (and who isn’t?), this is a good time to jump in. I try not to be a shill too often, but I like this company and if they can keep offering (relatively) inexpensive colo service, I win.

Tor and Privacy

The other day I was looking for something completely unrelated and I came across an interactive diagram that shows what information is protected when you use a secure Web connection. The diagram also mentions something called “Tor”, which protects other parts of the information that gets transmitted with every message your computing device sends over the Web.

In a nutshell, Tor makes it impossible (as far as we can tell) to trace a message from source to destination. This could be really, really beneficial to people who would like to, for instance, access a site their government does not approve of. (If that government already suspects the citizen is accessing a forbidden site, they can still put sniffers on either end of the pipeline and infer from the timing of messages that the citizen is acting in an unpatriotic fashion, but they can’t just put a sniffer on the forbidden end to see who happens by.)

There are lots of other times you might want to improve your privacy; unfortunately, not all those activities are legal or ethical. A lot of the verbiage on Tor’s site is devoted to convincing the world that the bad guys already have even better means of protecting their privacy, since they are willing to break the law in the first place; Tor argues that it is at least partially leveling the playing field. They mention reporters protecting sources, police protecting informants, and lawyers protecting clients. My take: you had me at “privacy”.

To work, Tor requires a set of volunteer middlemen, who pass encrypted and re-encrypted messages from one to another. Intrigued, I looked into what would be involved in allocating a slice of my underused server to help out the cause. It’s pretty easy to set up, but there’s a catch. If you allow your server to be an “exit point”, a server that will pass messages out of the anonymous network to actual sites, sooner or later someone is going to be pissed off at someone using the Tor network and the only person they’ll be able to finger is the owner of the exit point. Legal bullshit ensues.
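The “encrypted and re-encrypted” relay scheme can be sketched as layered encryption: the sender wraps the message once per relay, and each relay peels off exactly one layer. The sketch below is a toy model for illustration only; real Tor builds circuits with proper per-hop keys negotiated over TLS, not the repeating-XOR “cipher” used here.

```python
# Toy model of Tor-style onion routing. The XOR "cipher" is symmetric
# (encrypting twice with the same key gives back the plaintext), which
# keeps the sketch short; real onion crypto is far stronger.

def xor_crypt(data, key):
    """Toy symmetric cipher: XOR with a repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def wrap(message, hop_keys):
    """Sender wraps the message in one layer per relay.

    Layers are applied innermost-first, so the first relay's layer
    ends up on the outside.
    """
    for key in reversed(hop_keys):
        message = xor_crypt(message, key)
    return message

def route(onion, hop_keys):
    """Each relay peels exactly one layer as the onion travels the circuit.

    Only the exit node sees the plaintext, and no single relay knows
    both the sender and the destination.
    """
    for key in hop_keys:
        onion = xor_crypt(onion, key)
    return onion
```

The exit-node problem follows directly from the model: the last relay to call `route` is the one holding the plaintext when it leaves the network, so it is the one that gets blamed.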

Happily, there are lawyers standing by to protect the network, and some of them might even be itching for a showdown with The Man. Still, before I do anything rash, I need to check in with the totally awesome folks at MacMiniColo, because shit could fall on them, since my server is in their building. If they have qualms (they are not a large company), then I could still be a middle node in the network, and that would help some. But precisely because of the hassles involved with being an exit node, that’s where I could do the most good.

I’ll keep you posted on how this shakes out. I need to learn more. If I decide to move ahead, there’s a lot of p’s to dot and q’s to cross, and my server company may ixnay the whole idea. In the meantime, check out Tor, especially if you have nothing to hide.

It’s Inside the Building!

You know in that horror movie where the girl is on the phone and there’s some crazy mofo who’s freaking her out but for some reason she doesn’t hang up and eventually it turns out the crazy mofo is already inside the house and really has no reason to call? I had a moment like that tonight. I’ve had a rash of spam lately, all using my Facebook identities. I waited for my spam-catchers to get a clue, but the comments kept coming. “Fine,” thought I, “I’ll just block the addresses they’re coming from.”

I fired up my diagnostics, and found the source. localhost. My server thought the comments were coming from itself! Double-plus ungood, to quote Orwell. Extra double-plus. My spam-detecting software, it turns out, recognized the evil of the comments, but was immediately overridden by the administrator. By me, or a vile piece of software pretending to be me.

I just changed a lot of passwords. I hope I can remember them later. I also set a switch that requires that all comments be approved before they go live. Alas, this is likely more an inconvenience to legitimate commenters than a barrier to the spammers, as the evil robot has already proven capable of emulating me and granting permission.

I also spastically updated all my WordPress plugins (I do this fairly often anyway), including, perhaps significantly, perhaps not, the one that passes comments between here and Facebook. Later, going back, I see nothing in that plugin’s update notes to the tune of “closed egregious spam hole.” But the attack vector seems to be through my Facebook identities. It may be that the conduit trusted the origin of the messages too much.

So now I wait and watch, and your comments will take a little longer to reach the page. Hopefully I can loosen things up soon.