The First and Last Mile, and Net Neutrality

The hardest part about installing public transportation in a city not built for it is the first and last mile. That’s the mile you have to go to reach the nearest stop, and the mile you have to go on the other end to reach your destination. People just plain won’t walk a mile anymore. Older, denser cities don’t have this problem; there is a tram stop nearby no matter where you live.

If Net Neutrality is torpedoed, we will have a new last-mile problem. At least in urban areas, near where you live is The Backbone: the actual internet, the information superhighway. Your ISP is an on-ramp, but they’re about to be given the right to control your access to the highway. If you live in a rural area, the last mile might be more than a mile, but the concept is the same.

The ISPs are just an on-ramp, but because they control the last mile (they have wires connected to your house), they control your access. That’s why there are currently laws to prevent them from abusing that power. If Net Neutrality goes away, we’ll have a new first-mile problem: so much information, so close, but held hostage by the wire-owners. That first step.

Some will pay the ISP’s extortionate fees. Some will be cut off from one of the key assets that decides who gets ahead these days. The rich will get richer. To be more specific, the rich people who floated this whole idea will get richer, and they don’t give a crap about anyone else. It’s not that they want the poor to remain poor; that would be evil. They simply don’t care what happens to those people.

Already here in Silicon Valley there is a company promising to be a neutral ISP, no matter what the law says. They solve the last mile with a radio dish pointed at a tower (if I’m reading their propaganda correctly), but at the moment their cost/performance is not close to the guys with wires connected to my house. Even so, if the guys with wires make the slightest move toward controlling my access, they should know now that I will not remain their customer for long.

Your Privacy, Sold (Again)

If you watched the last season of South Park, you know what can happen if your entire Internet history is made public. Riots, divorce, the collapse of civilization. But did you know that your Internet Service Provider can keep track of every Web site you visit? Forget privacy mode on your browser; that only affects what gets stored locally. It’s mostly good for letting you do credit card transactions on someone else’s computer, or at an Internet Cafe.

It does not keep a host of companies from recording every site you visit.

Up till now, those companies haven’t been allowed to share that information. But that’s about to change. The companies that keep that data have taken advantage of the current legislation-for-sale atmosphere and bought a rule change that will enable them to sell it.

Our President will no doubt sign the bill, and if there’s any silver lining to all this, it’s that his own browsing history will shortly be available for purchase. If he, or the congressional leaders who pushed this through, had any idea what they were approving, they would have realized that they have more to lose than just about anyone else.

For instance, DNS records already made public don’t look good for the GOP. They were collected by a group who thought the Russians were trying to hack the RNC, only to find that the communication went both ways.

Anyone want to guess how much child porn is in The Donald’s browsing history?

Meanwhile, even though I don’t go to any sites that are remotely illegal, I’ll be taking measures I probably should have taken long ago to protect my privacy, rather than rely on laws. To be honest, I’m not sure exactly what I’m going to do; I’m not keen on using the Tor Browser (though I’m open to volunteering some server resources to the project). I’ll be looking at VPNs (Virtual Private Networks) to see if they offer anonymity.

I’d be happy to hear from anyone out there with knowledge in this area. In any case, I’ll report back what I learn.


Defensive Programming: Put the Guards Near the Gate

We can file this one under “not interesting to pretty much anyone who reads this blog,” but it’s an important concept for writing robust code. This is part of a discipline called Defensive Programming.

Let’s say you build yourself a castle in a clearing in the woods. There is one path to the front gate, and you need to guard it. “Hah!” you think, “I’ll put the guards where the path comes out of the woods, to stop shenanigans before they even get close!” You post the guards out there in a little guardhouse, secure in the knowledge that no bad guys will reach your gate.

Until someone makes a new path. Perhaps when the new path is created the path-maker will notice that there are guards on the other path and put a little guardhouse on the new path as well. But perhaps not.

In software, it’s the difference between code that says, “when all conditions are right, call function x”, and having function x test to make sure everything is OK before proceeding.

Putting the guard by the trees:

    function x(myParameter) {
        // assumes the caller has already made sure myParameter is not null
        myParameter.doSomething();
    }

    let thing = null;

    // ... other stuff that might or might not set 'thing'

    if (thing != null) {
        x(thing);
    }

This is fine as long as everything that calls function x knows to check to make sure the parameter is not null first. It might even seem like a good idea because if ‘thing’ is not set you can save the trouble of calling the function at all. But if some other programmer comes along and doesn’t know this rule, she might not do the check.

    // elsewhere in the code...

    let anotherThing = null;

    // ... other stuff that might or might not set 'anotherThing'

    x(anotherThing); // blammo!

Better to move the guards close to the gate:

    function x(myParameter) {
        if (myParameter != null) {   // the guard, now right at the gate
            myParameter.doSomething();
        }
    }

Now when someone else writes code that calls function x, you can be confident that your guards will catch any trouble. That doesn’t mean you can’t ALSO put guards out by the edge of the forest, but you shouldn’t rely on them.
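
If you want both layers, the call out at the edge of the forest can look exactly the way it did before; it just isn’t load-bearing anymore:

    // Belt and suspenders: the outer check only saves a pointless call now,
    // because x() guards itself either way.
    if (thing != null) {
        x(thing);
    }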

That Tingly, Geeky Feeling

My day job is building Web applications you will never see. That is by design; my apps deal with SECRET STUFF.

The first aside about failure: My first Internet application is also one you will never see, not because of secrecy, but because it failed. We made an immersive app with a rich graphical interface that allowed people to share photos and messages with a select group of friends. The core app acted as an operating system, able to discover compatible services to provide data. It flopped. A few years later MySpace and Facebook provided crappy platforms that allowed the world to shout at each other. In retrospect my biggest mistake (among many) was assuming people valued privacy.

ANYWAY, I build Web applications. But I come from a background of developing desktop apps, and let me tell you, even now the world of Web app development is ridiculously painful. Slowly, slowly, software design principles worked out decades ago are finding their way to the Web.

Another aside about a failure: A while back I created a framework that allowed the UI (still running in a dang browser after all this time) to connect to the server with such efficiency that when anyone anywhere made a change, everyone saw it immediately. In geek terms, I created an MVC system where the central model was shared by all clients in real time. It also allowed anyone to track the entire history of every value in the system. I had a great 3D interface planned for that feature that I never got to implement. The system worked so well I still get misty thinking about it. It was (still is) marketable. That project was shit-canned for reasons that had nothing to do with the technology, reasons I could have managed better.

But goddammit, I’d rather fail shooting for something great than succeed at the mediocre, and I’ve got the track record to prove it.

I may have that chance again. I can’t be too specific (sorry for the tease), but I’m pretty excited. So this afternoon I snuck out of work early to go and… work. But fun work. Perhaps a chance to take my failures and put them together into a game-changer. I’ve come close before.


Standing Rock and Internet Security

At the peak of the Standing Rock protest, a small city existed where none had before. That city relied on wireless communications to let the world know what was going on, and to coordinate the more mundane day-to-day tasks of providing for thousands of people. There is strong circumstantial evidence that our own government performed shenanigans on the communications infrastructure to not only prevent information from reaching the rest of the world, but also to hack people’s email accounts and the like.

Cracked.com, an unlikely source of “real” journalism, produced a well-written article with links to huge piles of documented facts. (This was not the only compelling article they produced.) They spent time with a team of security experts on the scene, who showed the results of one attack: When all the secure WiFi hotspots in the camp were attacked, rendering them unresponsive, a new, insecure hotspot suddenly appeared. When one of the security guys connected to it, his Gmail account was attacked.

Notably, a plane was flying low overhead: a very common model of Cessna, but one known to be fitted out by our government with just the sort of equipment needed for this kind of dirty work. The Cessna was owned by law enforcement, but its flight history is secret.

What does that actually mean? It means that in a vulnerable situation, where communication depends on wireless networks, federal and state law enforcement agencies have the tools to seriously mess with you.

“But I only use secure Internet connections,” you say. “HTTPS means that people between you and the site you’re talking to can’t steal your information.” Alas, that’s not quite true. What HTTPS means is that connections to your bank or Gmail can only be monitored by someone endorsed by entities your browser has been told to trust completely. On that list: The US Government, the Chinese government, other governments, and more than a hundred privately-owned corporations. Any of those, or anyone any of those authorities chooses to endorse, or anyone who manages to hack one of those hundred-plus authorities (this has happened) can convince your browser that there is no hanky-panky going on. It shouldn’t surprise you that the NSA has a huge operation to do just that.

The NSA system wasn’t used at Standing Rock (or if it was, that effort was separate from the documented attacks above), because they don’t need airplanes loaded with exotic equipment. But those airplanes do exist, and now we have evidence that state and local law enforcement, and quite possibly private corporations as well, are willing to use them.

The moral of the story is, I guess, “don’t use unsecured WiFi”. There’s pretty much nothing you can do about the NSA. It would be nice if browsers popped up an alert like “Normally this site is vouched for by Verisign, but this time the US Government is vouching for it. Do you want to continue?” But they don’t, and I haven’t found a browser plugin that adds that capability. Which is too bad.
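
That said, the raw information is there for anyone who asks for it. Here’s a rough Node.js sketch (not a browser plugin, and the hostname is just a stand-in) of the kind of check I wish existed: ask a site for its certificate and report who is vouching for it, so that repeated runs could spot a change.

    // Rough sketch: connect to a site over TLS, read its certificate,
    // and report the authority vouching for it. A real tool would remember
    // the issuer between runs and warn when it changes.
    const tls = require('tls');

    const host = 'example.com'; // placeholder hostname

    const socket = tls.connect({ host: host, port: 443, servername: host }, () => {
        const cert = socket.getPeerCertificate(true);
        console.log('Site:   ' + cert.subject.CN);
        console.log('Issuer: ' + cert.issuer.O + ' / ' + cert.issuer.CN);
        socket.end();
    });

    socket.on('error', (err) => console.error('TLS error: ' + err.message));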

Edit to add: While looking for someone who perhaps had made a browser plug-in to detect these attacks, I came across this paper which described a plugin that apparently no longer exists (if it was ever released). It includes a good overview of the situation, with some thoughts that hadn’t occurred to me. It also shows pages from a brochure for a simple device that was marketed in 2009 to make it very easy for people with CA authority to eavesdrop on any SSL-protected communication. Devices so cheap they were described as “disposable”.

The Chinese are Attacking!

Every once in a while I check the logs of the server that hosts this blog, to see if there are any shenanigans going on. And every time I check, there ARE shenanigans. The Chinese have been slowly, patiently poking at this machine for a long, long time. The attacks will not succeed; they are trying to log in as “root”, the most powerful account on any *NIX-flavored computer, but on my server root is not allowed to log in from the outside, precisely because it is so powerful.

But the attack itself is an interesting look at the world of institutionalized hacking. It is slow and patient, making an attempt only every thirty seconds or so. Many attack-blockers use three tries in a minute to detect monkey business; this will fly under that radar. At that rate the attacker makes fewer than three thousand password guesses per day, which limits the effectiveness of a brute-force attack, but over time (and starting with the million most common passwords), many servers will be compromised.

And in the Chinese view, they have all the time in the world. Some servers will fall to their attacks, others won’t. The ones that are compromised will likely be loaded with software that will, Manchurian-Candidate style, lie dormant until the Chinese government decides to break the Internet. And although servers like mine would provide excellent leverage, located as it is in a data center with high-speed access to the backbone, the bad guys have now discovered that home invasion provides a burgeoning opportunity as well. Consider the participation of refrigerators and thermostats in the recent attack on the Internet infrastructure on the East Coast of the United States and you begin to see the possibilities opened by a constant, patient probing of everything connected to the Internet.

I’ve been boning up on how to block the attack on my server; although in its current form the attack cannot succeed, I know I’ve been warned. The catch is I have to be very careful as I configure my safeguards — some mistakes would result in ME not being able to log in. That would be inconvenient, because if I’m unable to log in I won’t be able to fix my mistake. But like the Chinese, I can take things slowly and make sure I do it right.
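
For the curious, the two knobs I’ve been reading about are the ssh daemon’s own configuration and a ban-after-failures tool like fail2ban. A minimal sketch, assuming OpenSSH and fail2ban (I haven’t committed to either, and the whitelisted address below is a placeholder for my own):

    # /etc/ssh/sshd_config
    # Root can never log in from outside, no matter what password is guessed.
    PermitRootLogin no

    # /etc/fail2ban/jail.local
    [DEFAULT]
    # Never ban these addresses, so a configuration mistake can't lock ME out.
    # 203.0.113.17 stands in for my own address.
    ignoreip = 127.0.0.1/8 203.0.113.17

    [sshd]
    enabled = true
    # A ten-minute window catches even a slow, patient attack like this one.
    maxretry = 3
    findtime = 600
    bantime = 86400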

Apple, Machine Learning, and Privacy

There’s a lot of noise about machine learning these days, and the obviously-better deep-learning machines. You know, because it’s deep. Apple is generally considered to be disadvantaged in this tech derby. Why? Because deep learning requires masses of data from the users of the system, and Apple’s privacy policies prevent the company from harvesting that data.

I work for Apple, just so you know. But the narrative on the street comes down to this: Apple can’t compete with its rivals in the field of machine learning because it respects its users too much. For people who say Apple will shed its stand on privacy when it threatens profit for the company, here’s where I say, “Nuh-uh.” Apple has already proved where its priorities lie.

A second nuh-uh: ApplePay actively makes it impossible for Apple to know your purchase history. There’s good money in that information; Apple doesn’t want it. You think Google Wallet would ever do that? Don’t make me laugh. That’s why Google made it — so they could collect information about your purchasing habits and sell it. But in the world of artificial intelligence, respect for your customers is considered by pundits to be a negative.

But hold on there, Sparky! Getting back to the actual subject of this episode, my employer recently announced a massive implementation of wacky math shit (which I think started at Stanford) that allows both aggregation of user data and protection of user privacy.
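
As far as I can tell from the coverage, that wacky math goes by the name differential privacy, and the easiest way to get a feel for it is the old randomized-response trick: every user lies at random in a way that makes any individual answer deniable, while the lies cancel out in aggregate. A toy sketch of the idea, not Apple’s actual implementation:

    // Toy randomized response. Each user reports the truth half the time and
    // a coin flip the other half, so any single report is deniable.
    function privatize(truth) {
        if (Math.random() < 0.5) {
            return truth;               // report the real answer
        }
        return Math.random() < 0.5;     // report a random coin flip instead
    }

    // The collector can still estimate the true rate from many noisy reports:
    // observed = 0.5 * trueRate + 0.25, so trueRate = 2 * observed - 0.5
    function estimateTrueRate(reports) {
        const observed = reports.filter(Boolean).length / reports.length;
        return 2 * observed - 0.5;
    }

Real systems add more noise and fancier math, but the shape is the same: the collector learns the trend without ever learning your answer.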

Apple recently lifted their kimono just a little bit to let the world know that they are players in this realm, and have been for a long time. They want you to know that while respecting user privacy is inconvenient, it’s an obstacle you can work around with enough intelligence and effort.

This is a message that is very tricky for Apple to sell. In their advertising, they sell, more than anything else, good feelings. They’re never going to say, “buy Apple because everyone else is out to exploit you”; that makes technology scary, not the betterment of the human condition that Apple sells.

But to the tech press, and to organizations fighting for your privacy, Apple is becoming steadily more vocal. It feels a wee bit disingenuous; Apple wants those other mouths to spread the fear. But it’s a valid fear, and one that more people should be talking about.

From where I sit in my cubicle, completely removed from any strategic discussion, if you were to address Apple’s stand on privacy from a marketing standpoint, it would seem our favorite fruit-flavored gadget company is banking on one of two things: that people will begin to put a dollar value on their privacy, or that the government will mandate stronger privacy protection and Apple will be ahead of the pack.

Ah, hahaha! The second of those is clearly ridiculous. The government long ago established itself as the enemy of privacy. But what about the first of those ideas? Will people pay an extra hundred bucks on a phone to not have their data harvested? Or will they shrug and say, “If my phone doesn’t harvest that information, something else will”?

Honestly, I don’t think it’s likely that Apple will ever make a lot of money by standing up for privacy. It may even be a losing proposition, as HomeKit and ApplePay are slowed in their adoption because they are encumbered by onerous privacy protection requirements. Maybe I’m wrong; maybe Apple is already making piles of cash as the Guardians of Privacy. But I suspect not.

So why does Apple do it? I don’t know. I’m not part of those conversations. But I do know this: If you were to ask CEO Tim Cook that question, he’d look at you like you’d grown a second head and say, “Because it’s the right thing to do.” Maybe I’m being a homer here, but I really believe Tim when he says stuff like that. Tim has told the shareholders to back off more than once, in defense of doing the right thing.

And as long as Tim is in charge of this company, “Because it’s the right thing to do” will float for me; I know Apple will continue to respect the privacy of its customers. Maybe to you that’s not such a big deal, but it is to me. I won’t work for anyone I don’t respect.