Took My Data Privacy Training Today

The European Union is enacting a new policy concerning the way companies treat the personal information of their customers. Today I went through the training to make sure I understood what those rules meant to me.

Spoiler: nothing new. But there are a lot of other companies in this neighborhood that are probably scrambling. I’ll name names later.

The new privacy training was pretty much identical to the previous data privacy training I've gone through, except that there is now a report to fill out that makes the decision process for using customer data visible to the outside world. There is also a new portal where people can see all the data my employer has collected on them, and request that it be deleted.

But overall, the new privacy regulations in Europe might as well have been written by my company; they match our existing policy that closely.

Remember back when Google was “accidentally” collecting information about open home WiFi networks? Accidentally in this case means accidentally creating database tables and queries to store that information. I mean hey, accidents happen. That was a while ago, but that shit is really not going to fly now.

Hey! So much for “later”. I’m naming names.

The regulations go something like this:

  1. You have to spell out what you will be using the data for BEFORE you collect it.
  2. You have to protect that data.
  3. You have to let people see the data and tell you to delete it.

The Google thing was years ago. (There are plenty of current investigations, however.) But hey, remember last week when an Android user discovered Facebook was recording the recipient and duration of all his phone calls? Yeah, the beat goes on. In the aftermath of that I downloaded my own information and there were only a couple of surprises, none shocking. Hint: I don’t use Android.

At Google they must HATE Facebook for being so damn sloppy and leaking data all over the place, rather than just efficiently selling it. Regulators are swarming! Maybe now Google might consider putting in place basic security measures to prevent apps from rooting through shit that is none of their business.

My Facebook information was mostly unsurprising, but I suppose it’s possible that in the last few days Facebook has decided that fraudulently withholding some of the data they have collected on me is better than confessing to all of their shenanigans. Ironically, the ability for people to download their information was probably implemented by Facebook to comply with the new regulations. Sadly for them, the more people who download their personal info, the more trouble will arise for Facebook.

I encourage everyone to request a data download from Facebook. And from Apple, and from Google, and from Amazon. Probably Ebay, too, and the list goes on.

For the rest of this episode, I am full-on partisan. Just so you know. But there’s nothing I’m going to say that is not easily documented.

Google has a vast amount of data on you. If you use Google Wallet, downloading your data might be downright scary; if you use ApplePay instead you will find a big empty nothin’ concerning your spending habits. Apple built it so that it was not possible for them to learn anything about you from your spending. It was not easy to do.

I work for Apple. I am proud that my company puts privacy over profit. HomeKit is slow to be adopted because it protects privacy, while home-gadget manufacturers want to profit from personal data (and the hacking-resistance of HomeKit is more expensive to implement, something I'm also fine with). I am also proud that ApplePay was first out of the gate but isn't growing as fast as its competitors, because privacy requirements make it harder for banks to join in. Apple is losing money protecting privacy.

Unless protecting privacy becomes law. Then, suddenly, my employer is in the catbird seat, having built its information structure around privacy from the get-go. Apple has put a lot of systems in place to make sure they cannot collect large categories of personal data. Currently that data is an asset that they are failing to exploit. In the future, that data will be an onerous responsibility for any company that holds it. I hope so, anyway.


Standing Rock and Internet Security

At the peak of the Standing Rock protest, a small city existed where none had before. That city relied on wireless communications to let the world know what was going on, and to coordinate the more mundane day-to-day tasks of providing for thousands of people. There is strong circumstantial evidence that our own government performed shenanigans on the communications infrastructure to not only prevent information from reaching the rest of the world, but also to hack people’s email accounts and the like.

Cracked.com, an unlikely source of “real” journalism, produced a well-written article with links to huge piles of documented facts. (This was not the only compelling article they produced.) They spent time with a team of security experts on the scene, who showed the results of one attack: When all the secure wifi hotspots in the camp were attacked, rendering them unresponsive, a new, insecure hotspot suddenly appeared. When one of the security guys connected to it, his gmail account was attacked.

Notably, a plane was flying low overhead: a very common model of Cessna, but the type our government is known to fit with just the sort of equipment needed for this sort of dirty work. The Cessna was owned by law enforcement, but its flight history is secret.

What does that actually mean? It means that in a vulnerable situation, where communication depends on wireless networks, federal and state law enforcement agencies have the tools to seriously mess with you.

“But I only use secure Internet connections,” you say. “HTTPS means that people between you and the site you’re talking to can’t steal your information.” Alas, that’s not quite true. What HTTPS means is that connections to your bank or Gmail can only be monitored by someone endorsed by entities your browser has been told to trust completely. On that list: the US government, the Chinese government, other governments, and more than a hundred privately-owned corporations. Any of those, or anyone any of those authorities chooses to endorse, or anyone who manages to hack one of those hundred-plus authorities (this has happened) can convince your browser that there is no hanky-panky going on. It shouldn’t surprise you that the NSA has a huge operation to do just that.
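You don't have to take my word on the size of that list. Here's a quick sketch (assuming a stock Python install) that asks your own operating system how many root authorities it trusts implicitly; the exact count depends on your OS:

```python
import ssl

# Build a client context the way a browser-like program would,
# loading the operating system's default trusted root authorities.
ctx = ssl.create_default_context()

roots = ctx.get_ca_certs()
print(f"This machine implicitly trusts {len(roots)} root authorities")

# Any one of these, or anyone they endorse, can vouch for any site.
for cert in roots[:3]:
    print(cert.get("issuer"))
```

On a typical desktop that number lands comfortably north of a hundred.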

The NSA system wasn’t used at Standing Rock (or if it was, that effort was separate from the documented attacks above), because they don’t need airplanes loaded with exotic equipment. But those airplanes do exist, and now we have evidence that state and local law enforcement, and quite possibly private corporations as well, are willing to use them.

The moral of the story is, I guess, “don’t use unsecured WiFi”. There’s pretty much nothing you can do about the NSA. It would be nice if browsers popped up an alert like “Normally this site is vouched for by Verisign, but this time the US Government is vouching for it. Do you want to continue?” But they don’t, and I haven’t found a browser plugin that adds that capability. Which is too bad.

Edit to add: While looking for someone who perhaps had made a browser plug-in to detect these attacks, I came across this paper which described a plugin that apparently no longer exists (if it was ever released). It includes a good overview of the situation, with some thoughts that hadn’t occurred to me. It also shows pages from a brochure for a simple device that was marketed in 2009 to make it very easy for people with CA authority to eavesdrop on any SSL-protected communication. Devices so cheap they were described as “disposable”.

Apple, Machine Learning, and Privacy

There’s a lot of noise about machine learning these days, and the obviously-better deep-learning machines. You know, because it’s deep. Apple is generally considered to be disadvantaged in this tech derby. Why? Because deep learning requires masses of data from the users of the system, and Apple’s privacy policies prevent the company from harvesting that data.

I work for Apple, just so you know. But the narrative on the street comes down to this: Apple can’t compete with its rivals in the field of machine learning because it respects its users too much. For people who say Apple will shed its stand on privacy when it threatens profit for the company, here’s where I say, “Nuh-uh.” Apple proved its priority on privacy.

A second nuh-uh: ApplePay actively makes it impossible for Apple to know your purchase history. There’s good money in that information; Apple doesn’t want it. You think Google Wallet would ever do that? Don’t make me laugh. That’s why Google made it — so they could collect information about your purchasing habits and sell it. But in the world of artificial intelligence, respect for your customers is considered by pundits to be a negative.

But hold on there, Sparky! Getting back to the actual subject of this episode, my employer recently announced a massive implementation of wacky math shit, which I think started at Stanford, that allows both aggregation of user data and protection of user privacy.

Apple recently lifted their kimono just a little bit to let the world know that they are players in this realm, and have been for a long time. They want you to know that while respecting user privacy is inconvenient, it’s an obstacle you can work around with enough intelligence and effort.

This is a message that is very tricky for Apple to sell. In their advertising, they sell, more than anything else, good feelings. They’re never going to say, “buy Apple because everyone else is out to exploit you,” — that makes technology scary and not the betterment of the human condition that Apple sells.

But to the tech press, and to organizations fighting for your privacy, Apple is becoming steadily more vocal. It feels a wee bit disingenuous; Apple wants those other mouths to spread the fear. But it’s a valid fear, and one that more people should be talking about.

From where I sit in my cubicle, completely removed from any strategic discussion, if you were to address Apple’s stand on privacy from a marketing standpoint, it would seem our favorite fruit-flavored gadget company is banking on one of two things: that people will begin to put a dollar value on their privacy, or that the government will mandate stronger privacy protection and Apple will be ahead of the pack.

Ah, hahaha! The second of those is clearly ridiculous. The government long ago established itself as the enemy of privacy. But what about the first of those ideas? Will people pay an extra hundred bucks on a phone to not have their data harvested? Or will they shrug and say “If my phone doesn’t harvest that information, something else will.”

Honestly, I don’t think it’s likely that Apple will ever make a lot of money by standing up for privacy. It may even be a losing proposition, as HomeKit and ApplePay are slowed in their adoption because they are encumbered by onerous privacy protection requirements. Maybe I’m wrong; maybe Apple is already making piles of cash as the Guardians of Privacy. But I suspect not.

So why does Apple do it? I don’t know. I’m not part of those conversations. But I do know this: If you were to ask CEO Tim Cook that question, he’d look at you like you’d grown a second head and say, “Because it’s the right thing to do.” Maybe I’m being a homer here, but I really believe Tim when he says stuff like that. Tim has told the shareholders to back off more than once, in defense of doing the right thing.

And as long as Tim is in charge of this company, “Because it’s the right thing to do” will float for me. So as long as Tim’s in charge, I know Apple will continue to respect the privacy of its customers. Maybe to you that’s not such a big deal, but it is to me. I won’t work for anyone I don’t respect.

Email Security 101: A Lesson Yet Unlearned

So it looks like the Russians are doing their best to help proudly racist Trump, by stealing (and perhaps altering) emails passed between members of the Democratic National Committee. It seems like the Democratic party preferred the candidate who was actually part of the party over a guy hitching his wagon to the Democrats to use that political machine as long as it was convenient to him.

But that’s not the point of this episode.

The point is this: Had the Democrats taken the time to adopt email encryption, this would not have happened. When the state department emails were hacked, the same criticism applies.

It is possible to:

  1. Render email unreadable by anyone but the intended recipient
  2. Make any alteration of an email detectable, and provably so
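The second point is the less famous one, so here's a minimal sketch of the idea. Real email signing uses public-key signatures (PGP or S/MIME), not a shared secret; this stdlib-only toy just shows how a cryptographic tag makes tampering provable:

```python
import hashlib
import hmac

# A shared secret stands in for a real signing key. PGP/S-MIME use
# public-key signatures, so no secret ever has to be shared.
key = b"correct horse battery staple"

def sign(message: bytes) -> str:
    """Produce a tag that only a key-holder can compute."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(message: bytes, signature: str) -> bool:
    """Constant-time check that the message matches its tag."""
    return hmac.compare_digest(sign(message), signature)

msg = b"Meet at the usual place at noon."
sig = sign(msg)

assert verify(msg, sig)                       # untouched: checks out
assert not verify(b"Meet at MIDNIGHT.", sig)  # altered: provably bogus
```

Change a single byte in transit and the signature no longer matches; the forgery is self-evident.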

But nobody does it! Not even people protecting state secrets. I used to wonder what email breach was going to be the one that made people take email security seriously. I’m starting to think, now, that there is no breach bad enough. Even the people who try to secure email focus on the servers, when it’s the messages that can be easily hardened.

There is no privacy in email. There is no security in email. But there could be. Google could be the white hat in this scenario, but they don’t want widespread email encryption because they make money reading your email.

Currently only the bad guys encrypt their emails, because the good guys seem to be too fucking stupid.

Security Questions and Ankle-Pants

I’m that guy on Facebook, the party-pooper who, when faced with a fun quiz about personal trivia, rather than answer in kind reminds everyone that personal trivia has become a horrifyingly terrible cornerstone of personal security.

The whole concept is pure madness. Access to your most personal information (and bank account) is gated by questions about your life that may seem private, but are now entirely discoverable on the Internet — and by filling in those fun quizzes you’re helping the discovery process. Wanna guess how many of those Facebook quizzes are started by criminals? I’m going to err on the side of paranoia and say “lots”. Some are even tailored to specific bank sites and the like. Elementary school, pet’s name, first job. All that stuff is out there. Even if you don’t blab it to the world yourself, someone else will, and some innocuous question you answer about who your best friend is will lead the bad guy to that nugget.

There is nothing about you the Internet doesn’t already know. NOTHING. Security questions are simply an official invitation to steal all your stuff by people willing to do the legwork. Set up a security question with an honest answer, and you’re done for, buddy.

On the other hand, security questions become your friend if you treat them like the passwords they are. Whatever you type in as an answer should have nothing to do with the question. Otherwise, as my title suggests, you may as well drop ’em, bend over, and start whistlin’ dixie.

My computer offers me a random password generator and a secure place to keep my passwords (secure enough to annoy the FBI, as long as I’m careful), but no such facility for security questions. I think there’s an opportunity there.

In the meantime, don’t ever answer a security question honestly. Where were you born? My!Father789Likes2GoFishin. Yeah? I’m from there, too! Never forget that some of those seemingly innocent questions out there on the Internet were carefully crafted to crack your personal egg. But if you never use personal facts to protect your identity, you can play along with those fun Facebook games, and not worry about first-tier evil.
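Until the password managers catch up, you can roll your own. A little sketch using Python's `secrets` module: generate a random, unrelated "answer" for each question and stash it in your password manager next to the site's password.

```python
import secrets

# Hypothetical questions -- substitute whatever the site asks you.
QUESTIONS = [
    "Where were you born?",
    "What was your first pet's name?",
    "What street did you grow up on?",
]

# Each answer is pure randomness with zero connection to your life,
# which is exactly the point.
answers = {q: secrets.token_urlsafe(16) for q in QUESTIONS}

for question, answer in answers.items():
    print(f"{question} -> {answer}")
```

Good luck finding *that* birthplace in my Facebook history.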


Will the World Break in 2016?

Well, probably not. The world isn’t likely to break until 2017 at the earliest. Here’s the thing: Our economy relies on secure electronic transactions and hack-proof banks. But if you think of our current cyber security as a mighty castle made of stone, you will be rightly concerned to hear that gunpowder has arrived.

A little background: there’s a specific type of math problem that is the focus of much speculation in computer science these days. It’s a class of problem in which finding the answer is very difficult, but confirming the answer is relatively simple.

Why is this important? Because pretty much all electronic security, from credit card transactions to keeping the FBI from reading your text messages (if you use the right service) depends on it being very difficult to guess the right decoder key, but very easy to read the message if you already have the key. What keeps snoops from reading your stuff is simply that it will take hundreds of years using modern computers to figure out your decoder key.

That may come to a sudden and jarring end in the near future. You see, there’s a new kind of computer in town, and for solving very specific sorts of problems, it’s mind-bogglingly fast. It won’t be cheap, but quantum computers can probably be built in the near future specifically tuned to blow all we know about data encryption out of the water.

Google and NASA got together and made the D-Wave Two, which, if you believe their hype, is the first computer proven to use quantum-mechanical wackiness to break through the limits imposed by those big, clunky atoms in traditional computing.

Pictures abound of the D-Wave (I stole this one from fortune.com, but the same pic is everywhere), which is a massive refrigerator with a chip in the middle. The chip has to be right down there at damn near absolute zero.


The chip inside the D-Wave Two was built and tuned to solve a specific problem very, very quickly. And it did. Future generations promise to be far more versatile. But it doesn’t even have to be that versatile if it is focused on breaking 1024-bit RSA keys.

It is entirely possible that the D-Wave Six will be able to bust any crypto we have working today. And let’s not pretend that this is the only quantum computer in development. It’s just the one that enjoys the light of publicity. For a moment imagine that you were building a computer that could decode any encrypted message, including passwords and authentication certificates. You’d be able to crack any computer in the world that was connected to the Internet. You probably wouldn’t mention to anyone that you were able to do that.

At Microsoft, their head security guy is all about quantum-resistant algorithms. Quantum computers are mind-bogglingly fast at solving certain types of math problems; security experts are scrambling to come up with encryption based on some other type of hard-to-guess, easy-to-confirm algorithm, one that is intrinsically outside the realm of quantum mojo. But here’s the rub: it’s not clear that any other such class of math exists.

(That same Microsoft publicity piece is interesting for many other reasons, and I plan to dig into it more in the future. But to summarize: Google wins.)

So what do we do? There’s not really much we can do, except root for the banks. They have resources, and they have motivation. Let’s just hope they know there’s a problem, and are trying to do something about it. Because quantum computing could destroy them.

Eventually we’ll all have quantum chips in our phones to generate the encryption, and the balance of power will be restored. In the meantime, we may be beholden to the owners of these major-mojo-machines to handle our security for us. Let’s hope the people with the power to break every code on the planet use that power ethically.

Yeah, sorry. It hurts, but that may be all we have.

Billion-Person Problems vs. Individual People

I read an article today idolizing Larry Page, head honcho at Google. I have to confess, reading Larry’s quotes, I was pretty damn impressed. Some of his goals are downright “holy fuck, that’s awesome”. If even a small percentage work out lots of people will be helped. Larry calls them his billion-person problems. But…

Can you solve billion-person problems while exploiting a billion individuals?

Put another way: here’s a billion-person problem that Google is central to: the erosion of privacy in the modern age. For instance, Google has taken very seriously securing your information as it travels from your computer to their servers. But once that email hits their hard drives, it’s fair game! As long as no one else can get at your info (well, except governments with leverage over the Goog), all is well with the world.

Before I get too deep in this rant, let me say that the Internet would suck a lot more without Google’s search engine. I use DuckDuckGo to exploit the power of the search without yielding up my personal info. I realize that’s kind of like getting sushi and not paying; if everyone did that, search engines would have to start charging for their services and people would be faced with putting a monetary value on their privacy.

And I think there’s a lot to be said for the way Google runs their company, the way they commit to their managers rather than just making the best engineers the bosses of other engineers. I give them big props for that. That comes from the very top, and Larry Page deserves credit.

But now, on with the rant!

What Google knows when you use their payment system (Google Wallet):

Google Wallet records information about your purchases, such as merchant, amount, date and time, method of payment, and, optionally, geolocation.

What Apple (my employer) knows when you use their payment system (Apple Pay): Nothing.

Apple Pay was designed from the ground up so that Apple could not get your personal information. This made it way more complicated to implement and added hardship for banks as well, but it was a fundamental tenet of the system. Apple gets enough aggregate information back from the banks so they can get their fees, but none of your personal information is in that data. In contrast, Google (not just their wallet) has been built from the ground up to collect and sell your personal information.

Of course, the banks still know, and the merchant still knows, and Amazon tells advertisers what’s in your wish list… So it’s not just Google here. But Google has access to information you never intended to be known — a lot of it — and they have a unique opportunity to make meaningful change on this front.

Nest, the hot-spit thermostat/smoke detector company was bought by Google. I was discussing it the other day with a co-worker who is a (mostly) satisfied customer. It sounds like a pretty cool system, but I mentioned there was no reason for the damn thing to be in the cloud just to be operated from my phone — it just needed to be part of a personal network that could talk to all my devices. My friend, who has a buddy who works at Nest, shrugged and said, “they have to collect and aggregate data to make the service work right” (or something like that). I accepted that at the moment, but later I realized: NO THEY DON’T. I want my home automation to be based on ME, not some aggregate of other people. And, if they made the data collection voluntary, I might even opt in if it looked like it would help the collective good. It’s something I do.

I voluntarily share personal information all the time. I share my bike rides (but suppress the exact location of my house). I share my image on Facebook. I share biographical data right here on this blog. I probably share more personal information than I should, but I make a big distinction between voluntary sharing (Facebook) and involuntary sharing (having my emails read by a corporation). Even though I don’t use a gmail account, my emails are still read every time I send a message to a gmail user. Does it matter if I’ve agreed to their terms of service or not? No. No, it doesn’t.

Microsoft took a couple of shots at Google a while back, promoting their email and search services as being more privacy-friendly than Google’s. But, amazingly, Microsoft kind of half-assed it (they had a produced-by-local-TV-station look) and they failed to deliver the message effectively, the way Microsoft is wont to do. Still, at least they tried.

If Google did one thing, a thing that is in their power to do, I would take back everything else I have said about them: provide real encryption for their emails. Encryption all the way to their servers, encryption they hold no key to unlock, so only the intended recipients can read the messages. Then I’d believe that they care about me, and about the other billions of people in the world. And it would be a hell of a selling point for Gmail.

How Secure is Your Smoke Detector?

You probably heard about that HeartBleed thing a few months ago. Essentially, the people who build OpenSSL made a really dumb mistake and created a potentially massive security problem.

HeartBleed made the news, a patch came out, and all the servers and Web browsers out there were quickly updated. But what about your car?

I don’t want to be too hard on the OpenSSL guys; almost everyone uses their code and apparently (almost) no one bothers to pitch in financially to keep it secure. One of the most critical pieces of software in the world is maintained by a handful of dedicated people who don’t have the resources to keep up with the legion of evil crackers out there. (Google keeps their own version, and they pass a lot of security patches back to the OpenSSL guys. Without Google’s help, things would likely be a lot worse.)

For each HeartBleed, there are dozens of other, less-sexy exploits. SSL, the security layer that once protected your e-commerce and other private Internet communications, has been scrapped and replaced with TLS (though it is still generally referred to as SSL), and now TLS 1.0 is looking shaky. TLS 1.1 and 1.2 are still considered secure, and soon all credit card transactions will use TLS 1.2. You probably won’t notice; your browser and the rest of the infrastructure will be updated and you will carry on, confident that no one can hack into your transactions (except many governments, and about a hundred other corporations – but that’s another story).
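If you write your own client code, you don't have to wait for the infrastructure to catch up. A sketch of what "refuse anything older than TLS 1.2" looks like in Python (assuming Python 3.7+ and a reasonably modern OpenSSL underneath):

```python
import ssl

# Start from the secure defaults, then explicitly refuse SSL 3.0,
# TLS 1.0, and TLS 1.1 for any connection made with this context.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

print(ctx.minimum_version.name)   # TLSv1_2
```

Any server still limping along on TLS 1.0 simply fails the handshake, which is exactly what you want.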

So it’s a constant march, trying to find the holes before the bad guys do, and shoring them up. There will always be new versions of the security protocols, and for the most part the tools we use will update and we will move on with our lives.

But, I ask again, what about your car?

What version of SSL does OnStar use, especially in older cars? Could someone intercept signals between your car and the mother ship, crack the authentication, and use the “remote unlock” feature and drive away with your fancy GMC Sierra? I’ve heard stories.

You know that fancy home alarm system you have with the app that allows you to disarm it? What version of OpenSSL is installed in the receiver in your home? Can it be updated?

If your thermostat uses outdated SSL, will some punk neighbor kid download a “hijack your neighbor’s house” app and turn your thermostat up to 150? Can someone pull a password from your smoke detector system and try it on all your other stuff (another reason to only use each password once)?

Washer and dryer? The Infamous Internet Toaster? Hey! The screen on my refrigerator is showing ads for porn sites!

Everything that communicates across the Internet/Cloud/Bluetooth/whatever relies on encrypting the data to keep malicious folks away from your stuff. But many of the smaller, cheaper devices (and cars) may lack the ability to update themselves when new vulnerabilities are discovered.

I’m not saying all of these devices suck, but I would not buy any “smart” appliance until I knew exactly how it keeps ahead of the bad guys. If the person selling you the car/alarm/refrigerator/whatever can’t answer that question, walk away. If they don’t care about your security and privacy, they don’t deserve your business.

I’ve been told, but I have no direct evidence to back it up, that much of the resistance in the industry to the adoption of Apple’s home automation software protocols (dubbed HomeKit) are because of the over-the-top security and privacy requirements. (Nest will not be supporting HomeKit, for instance.) In my book, for applications like this, there’s no such thing as over-the-top.


Another Baby Step Toward Email Privacy

Email is frightfully insecure. Anything you write can and will be read by any number of robots or worse as it bounces across the Internet. Gmail? Forget about any shred of privacy. While the Goog champions securing the data as it comes to and from their servers, once it’s there, your private life is fair game.

It doesn’t have to be that way. We can encrypt the contents of our emails so that only the intended recipients can read them. I’m not sure how many more embarrassing corporate, government, and university email hacks will have to happen before people start to take this seriously, but remember, those were only the illegal hacks. Other people are reading your emails all the time already. This bothers me.

Sorting out a solution to this problem has been like having a big jumble of puzzle pieces on my coffee table, and while I’ve pushed the pieces around to get them to fit together, it’s become apparent that there’s a piece missing — until (perhaps) now. To understand the puzzle piece, it’s easiest to start with the hole it needs to fill. Some of this you may have read in posts from days of yore.

Here’s a simplified illustration of how email encryption works. Picture a box with two locks, that take two different keys. When you lock the box with one key, only the other key can open the box again. If you want to send me a message, I give you one of the keys, and you put the message in the box and lock it. Since I’m the only one with the matching key, only I can unlock it. Sorry, Google! You just get gibberish.
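The two-key box isn't just a metaphor. Here's the textbook RSA version of it, with numbers small enough to follow by hand (hopelessly insecure at this size; purely for illustration):

```python
# Textbook RSA with tiny primes -- insecure, purely illustrative.
p, q = 61, 53
n = p * q     # 3233, part of both keys
e = 17        # my "public" key: I hand this out to anyone
d = 2753      # my private key: e * d == 1 (mod lcm(p - 1, q - 1))

def lock(message: int) -> int:
    """Anyone can lock the box using the public pair (e, n)..."""
    return pow(message, e, n)

def unlock(box: int) -> int:
    """...but only I can unlock it, using the private pair (d, n)."""
    return pow(box, d, n)

m = 65
assert unlock(lock(m)) == m   # the round trip works
assert lock(m) != m           # in transit, it's gibberish
```

Real keys use the same math with vastly larger numbers, which is what makes deriving `d` from `e` and `n` impractical.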

Of course, there’s a catch. How do I get your half of the key pair to you? If I put it in an email, any bad guy could switch the key before it got to you, and then your secret message would only be readable by the bad guy. He’d probably pack the message back up and lock it with my key and send it on, so I might not notice right away that the message had been intercepted.

What’s needed is either a foolproof way to send my public key to you, or a way to confirm that the key you got really came from me.

If there were a foolproof way to send the key, we’d dispense with the whole lockbox thing and just send the original message that way. So until that foolproof way arrives, we are left with the need to authenticate the key I send you, through some trusted, hard-to-fake source. There are competing ways to accomplish this, and they all have flaws. This is the hole in our jigsaw puzzle.

The most common way keys are verified is through a series of Certificate Authorities, companies entrusted with issuing and verifying these keys. This works pretty well, as long as every single Certificate Authority can be trusted. The moment one is hacked, the entire system has been compromised. Guess what? CAs have been hacked. There are also several governments that are CAs, meaning those governments can listen in on any transaction on the Web today that uses https:// (which is just about all of them). Any of those entities could send a fake key to you and your software would trust it. I don’t know which makes me more nervous, that China is on the list or the United States.

So if you can’t collectively trust a few hundred companies and governments, who can you trust? There are several competing systems now where you and all your friends only have to trust one company. As long as you and I both set up with that company, they will quite effectively safeguard our communications. Your privacy is as good as the security and integrity of a single corporation — unless a jealous government shuts them down, anyway, or they get bought by a less-scrupulous company, or a pissed-off engineer in their IT department decides to drop their corporate pants. Having a single entity hold all the keys is called the “key escrow problem”.

At the far end of the spectrum is crowd-sourcing trust. There exists a large and (alas) floundering network of people who vouch for each other, so if you trust Bob and Bob says my key’s OK, you can choose to trust my key. I’ve tried to participate in the “Web of Trust”, and, well, here I am, still sending emails in the clear.

But now there’s a new kid in town! I just got an invitation to join the alpha-testing stage for a new key-verification service, keybase.io. Let’s say you want to send me a message. You need the public key to my lockbox. You ask keybase for it, and they send you a key. But do you trust that key? No, not at all. Along with the key, the server sends a bunch of links, to things like this blog and my twitter account. The software on your computer automatically checks those links to see if a special code is there, and if it is, invites you to go and look at those links to make sure they point to things I control. You see the special code on Muddled Ramblings or Twitter or whatever that only I could have put there, and you can feel pretty good about the key. You put your own stamp on the key so you don’t have to go through the manual verification again, and away you go!
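The flow described above can be simulated in a few lines. Fair warning: the proof format, URLs, and key data here are invented for illustration — the real keybase service uses cryptographically signed proof statements, not plain string matching. But the shape of the idea is this:

```python
# Hypothetical simulation of keybase-style proof checking: the key alone
# proves nothing, but a matching code posted on several accounts only I
# control is much harder to fake all at once.
import hashlib

def fingerprint(public_key: str) -> str:
    return hashlib.sha256(public_key.encode()).hexdigest()[:16]

# what the keyserver hands you: a key plus pointers to public proofs
server_response = {
    "key": "jerry-public-key-data",
    "proof_urls": ["muddledramblings.com/proof", "twitter.com/example/status/1"],
}

# stand-in for fetching each URL; really these are pages only the
# account owner could have posted to
posted_proofs = {
    "muddledramblings.com/proof":
        "Verifying my key: " + fingerprint("jerry-public-key-data"),
    "twitter.com/example/status/1":
        "Verifying my key: " + fingerprint("jerry-public-key-data"),
}

def verify(response, fetched_pages):
    code = fingerprint(response["key"])
    return all(code in fetched_pages[url] for url in response["proof_urls"])

assert verify(server_response, posted_proofs)  # every linked proof matches
```

An attacker would have to hand you a fake key AND control my blog AND my twitter, all at the same time, which is a much taller order than hacking one keyserver.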

There are more features to prevent other bad-guy shenanigans, like hacking my blog and twitter and then handing you a fake key, but you can read about them at http://keybase.io.

The service is still in the pre-pubescent stage; I’m fiddling now to see if I can use keybase-verified keys from my mail software. Failing that, there are other methods to encrypt and decrypt messages you cut and paste from your email. Kinda clunky.

Having set up my keybase identity, I have been given the privilege of inviting four more people aboard. Good thing, too, since otherwise I’d have no one to exchange messages with, to see how it works. I’d be grateful if one (or four!) of y’all out there would like to be a guinea pig with me. Drop me a line if you’re interested. Let’s win one for the little guy!

Another Stupid Security Breach

Recently, the State Department’s emails were hacked. Only the non-classified ones (that we know about), but here’s the thing:

Why the hell is the State Department not encrypting every damn email? Why does ANY agency not encrypt its emails? It’s a hassle for individuals to set up secure email with their friends, but secure email within an institution is not that hard.

JUST DO IT, for crying out loud!

E-mail Privacy

Apparently, it is simply not possible for an American company to offer secure email. Sooner or later the United States Government is going to come knocking, and they’re not above judicial flim-flams to get what they want.

Google doesn’t want your email encrypted, either. They want to read it and sell what you’ve written to advertisers.

But there’s nothing stopping you from encrypting your own email, except the inconvenience of getting your communication channels set up with your friends. Unfortunately, that’s still a PITA, especially for friends who cling to browser-based email reading.

My perfect world: every email is encrypted. There is no reliance on a central authority for the encryption. No email company or certificate authority that can be hacked or subpoenaed.

My perfect world may be a tiny bit closer to reality: Apple has announced that the next version of the Mac OS will have streamlined email encryption. S/MIME is already supported in Apple’s Mail app, but it’s not nearly as simple as it should be. If I were in charge, setting up your computer would automatically generate your own identity certificate, and every email you send would have it attached. With a single click, anyone who got that email could set up a secure, encrypted email connection with you. And that would be that.
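That “one click and that’s that” model amounts to what security folks call trust-on-first-use: pin the fingerprint of the certificate the first email carries, and squawk if it ever changes. A hypothetical sketch (this is not Apple’s actual implementation, just the idea):

```python
# Trust-on-first-use certificate pinning: the first email from a sender
# carries their certificate; the mail client remembers its fingerprint
# and warns if a later email shows up with a different one.
import hashlib

pinned = {}  # sender address -> fingerprint seen on first contact

def fingerprint(cert_bytes: bytes) -> str:
    return hashlib.sha256(cert_bytes).hexdigest()

def receive(sender: str, cert_bytes: bytes) -> str:
    fp = fingerprint(cert_bytes)
    if sender not in pinned:
        pinned[sender] = fp          # first contact: trust and remember
        return "pinned"
    return "ok" if pinned[sender] == fp else "WARNING: certificate changed!"

assert receive("jerry@example.com", b"cert-A") == "pinned"
assert receive("jerry@example.com", b"cert-A") == "ok"
assert receive("jerry@example.com", b"cert-B").startswith("WARNING")
```

No central authority to hack or subpoena; the tradeoff is that the very first email is taken on faith.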

We’ll see how close Apple comes. But it gladdened my crusty old heart to see a big company at least talking about the issue.

Tor and Privacy

The other day I was looking for something completely unrelated and I came across an interactive diagram that shows what information is protected when you use a secure Web connection. The diagram also mentions something called “Tor”, which protects other parts of the information that gets transmitted with every message your computing device sends over the Web.

In a nutshell, Tor makes it impossible (as far as we can tell) to trace a message from source to destination. This could be really, really beneficial to people who would like to, for instance, access a site their government does not approve of. (If that government already suspects the citizen is accessing a forbidden site, they can still put sniffers on either end of the pipeline and infer from the timing of messages that the citizen is acting in an unpatriotic fashion, but they can’t just put a sniffer on the forbidden end to see who happens by.)

There are lots of other times you might want to improve your privacy; unfortunately not all those activities are legal or ethical. A lot of verbiage on Tor’s site is to convince the world that the bad guys have even better means of protecting privacy, since they are willing to break the law in the first place. Tor argues that they are at least partially evening the playing field. They mention reporters protecting sources, police protecting informants, and lawyers protecting clients. My take: you had me at “privacy”.

To work, Tor requires a set of volunteer middlemen, who pass encrypted and re-encrypted messages from one to another. Intrigued, I looked into what would be involved in allocating a slice of my underused server to help out the cause. It’s pretty easy to set up, but there’s a catch. If you allow your server to be an “exit point”, a server that will pass messages out of the anonymous network to actual sites, sooner or later someone is going to be pissed off at someone using the Tor network and the only person they’ll be able to finger is the owner of the exit point. Legal bullshit ensues.
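The “encrypted and re-encrypted” part is the heart of it: the sender wraps the message in one layer per relay, and each relay can peel off only its own layer, so no single middleman sees both who sent a message and where it’s going. A toy sketch, with XOR standing in for real encryption (and leaving out the next-hop routing info each real layer also carries):

```python
# Toy onion routing: wrap the message in one encryption layer per relay;
# each relay strips exactly one layer as the message passes through.
# XOR with a shared key is a stand-in for real per-relay encryption.

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

relay_keys = [b"entry-key", b"middle-key", b"exit-key"]
message = b"meet me at the forbidden website"

# the sender wraps layers in reverse order, so the exit's layer is innermost
onion = message
for key in reversed(relay_keys):
    onion = xor(onion, key)

# each relay peels exactly one layer as the onion passes through
for key in relay_keys:
    onion = xor(onion, key)   # only this relay's layer comes off here

assert onion == message       # the exit node finally sees the plaintext
```

Which also shows why the exit node gets all the legal grief: it’s the one relay that emits the plaintext onto the open Web.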

Happily, there are lawyers standing by to protect the network, and some of them might even be itching for a showdown with The Man. Still, before I do anything rash, I need to check in with the totally awesome folks at MacMiniColo, because shit could fall on them, since my server is in their building. If they have qualms (they are not a large company), then I could still be a middle node in the network, and that would help some. But precisely because of the hassles involved with being an exit node, that’s where I can do the most good.

I’ll keep you posted on how this shakes out. I need to learn more. If I decide to move ahead, there’s a lot of p’s to dot and q’s to cross, and my server company may ixnay the whole idea. In the meantime, check out Tor, especially if you have nothing to hide.

A New Way to Stop Worrying About Privacy

Hey, if you don’t want to worry about your privacy anymore, why not publish your DNA? The old-fashioned method of publishing your family relationships for the world (and insurance companies) to see still leaves some shreds of privacy and potential for falsification. With this deal, that problem is solved!

More On Egregious Privacy Violations

Last episode (less than an hour old now – you might want to read it first) was about a case of computer rental companies engaging in truly horrifying invasions of privacy. The article I cited finished with a mention of an interview with an anonymous representative of the company DesignerWare, in which he said that he felt his company had done no wrong. DesignerWare is the company that created the software used to steal passwords and get pictures of unsuspecting nekkid people.

They say they’ve done no wrong!? Are you shitting me? They were pure evil!

Wait, no, that’s not quite right. They enabled pure evil. They didn’t activate “Detective Mode” on those computers, the mode that allowed such terrifying transgressions. They wrote the software, and they sold it, but it wasn’t they who turned it on in situations where it wasn’t warranted.

How do we assess the responsibility of DesignerWare? People tried to sue gun makers when people were shot, but with no success. Is Detective Mode like a gun, where the manufacturer can’t be held responsible for the behavior of its customers?

On DesignerWare’s site, they even tout the features they’ve added to protect users’ privacy. But behind the scenes they put in this super-spy-mode feature to help rental companies recover their hardware.

It wasn’t DesignerWare who turned on Detective Mode when it wasn’t warranted. That was something the dickheads at their client companies did. Those bastards deserve to be strung up by their short-and-curlys. No doubt there. But was DesignerWare wrong?

The key word, I believe, is ‘warranted’. Is such an invasion of privacy ever justified? The DesignerWare people would say yes, there are legitimate cases where the rental company has the right to use every means at its disposal to recover its property. Funny thing about ‘warranted’, though – law enforcement would have to get a warrant to conduct similar surveillance. (Well, not any more, but that’s another rant.)

My argument is this: if there’s no legal or ‘warranted’ way to use that software, then at the very least DesignerWare is guilty of fraud for selling it without telling their customers that use of that feature is illegal, rendering it valueless.

Detective Mode is not a gun. Gun companies argue that it’s not their responsibility if their customers use the product illegally. They can do this because there are legal uses of the product, and most gun owners follow those laws. DesignerWare can’t argue that they’re not responsible if their customers use the product illegally, because there is no other use.

So, yep, DesignerWare is evil.

Our Rights, Well-Defended

This morning I came across this brief article: FTC settles PC spying charges with rent-to-own computers. To paraphrase the text: The FTC caught people participating in jaw-dropping invasions of privacy, and brought the miscreants to justice.

Before we get to the penalty phase, let’s review some of the things these people did without the knowledge of the people using rental computers: They captured screen shots (which could include personal information like bank statements and legal documents), they captured users’ keystrokes (a technique for stealing passwords), and they even used the built-in cameras to send back pictures without the knowledge of the users. Apparently (according to other articles) pictures of children and of people having sex were collected.

There’s no reason to do this if you don’t plan to use that information, and there’s no use for that information that isn’t simply evil.

We can be happy then, that the boys at the FTC are on the job! At the very least, you’d figure Washington wants a monopoly on invading our privacy. So what was the ‘settlement’ they reached with these thieving bastards?

Oh, it was severe all right. They got the bad guys to promise not to do it anymore.

Shit, at least make them pick up litter for a weekend.