Late-Night Puzzle Solving

First, a warning: this may be my geekiest post ever. If you want to give it a pass, you won’t hurt my feelings. In fact, I found a bunch of fluffy cats for you if you would prefer.

Anywhoo, I frequent a Web site called FiveThirtyEight.com that is about statistics and math, and applying them to sports and politics. On Fridays, they pose little (and not-so-little) math challenges for readers. A couple of weeks ago, they posed a question about numbers that were the difference between two perfect squares. As I was reading the question an ad came up to the side, pointing out that 4² = 1 + 3 + 5 + 7.

The mandate was clear: solve the puzzle, using the information in the ad.

I noodled on the problem idly for a while, and came up with some interesting observations, but it wasn’t until I really, really couldn’t sleep the night before last that I lay in the darkness and chewed on the puzzle (long after the submission deadline to receive accolades on the site, but that’s not what matters).

The question is here, but I’ll copy the relevant chunk for you:

Benjamin likes numbers that can be written as the difference between two perfect squares. He thinks they’re hip. For example, the number 40 is hip, since it equals 7² − 3², or 49 − 9. But hold the phone, 40 is doubly hip, because it also equals 11² − 9², or 121 − 81.

With apologies to Douglas Adams, 42 is not particularly hip. Go ahead and try finding two perfect squares whose difference is 42. I’ll wait.

Now, Benjamin is upping the stakes. He wants to know just how hip 1,400 might be. Can you do him a favor, and figure out how many ways 1,400 can be written as the difference of two perfect squares? Benjamin will really appreciate it.

Let’s do this! First we need to dig a little deeper into the information in the advertisement: 4² = 1 + 3 + 5 + 7. It turns out you can make this into a rule: a² = the sum of the first a odd integers. 12² is the sum of the first 12 odd integers, and so forth.
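If you don’t feel like trusting my arithmetic, a couple of lines of php (my usual language around here, nothing sacred about the choice) will check the rule for you:

// Quick sanity check of the rule from the ad: a² equals the sum of the first a odd integers.
$a = 12;
$firstOdds = range(1, 2 * $a - 1, 2);             // the first 12 odd integers: 1, 3, ... 23
echo array_sum($firstOdds) . ' vs ' . $a * $a;    // prints "144 vs 144"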

That’s pretty interesting, but the question was about the difference between two perfect squares, and that’s actually where the fun begins (for certain values of fun).

Consider 5² – 3². It’s the first five odd integers, minus the first three odd integers, leaving us with 7 + 9 = 16. The subtracted square cancels out part of the series of odd integers, and the difference is the sum of the ones left over.

So now we know that the difference between two squares can always be expressed as the sum of consecutive odd integers. And we also know that every series of consecutive odd integers sums to the difference between two squares.

Fun fact: Every odd number can be expressed as the difference between two squares: There will always be a value of a where a² – (a−1)² = n, where n is our odd number. Crazy!

A little side trip here to button things down: 5+7+9 adds up to a difference between what two perfect squares, a² – b²? Knowing how to figure this will come in handy later to check assumptions. 9 is the fifth odd integer, so we know a is 5. We can solve that for any series that ends with n to say that a = (n+1)/2. The series we’re working on here is three numbers long, so we can quickly surmise that b = a – 3, or more generally, b = a – l, where l is the length of the series. In this case, 5 + 7 + 9 = 5² – 2² = 21.
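If you’d rather let the computer do that bookkeeping, here’s the same check as a tiny php sketch (the function name is mine, invented for this post):

// Given the last term and the length of a run of consecutive odd integers,
// recover a and b so that the run adds up to a² – b².
function squaresForOddRun($lastTerm, $length) {
    $a = ($lastTerm + 1) / 2;   // the last term is the a-th odd integer
    $b = $a - $length;          // the first b odd integers were cancelled out
    return [$a, $b];
}

list($a, $b) = squaresForOddRun(9, 3);
echo ($a * $a) - ($b * $b);     // 21, same as 5 + 7 + 9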

So now with that info in hand, we can turn to the actual question, but rephrase it “how many different series of consecutive odd integers add up to 1400?”

This is how far I’d gotten on the problem before the long, terrible, sleepless night. A computational solution would be easy at that point, just walking the numbers and testing the results. I wanted to find an analytical solution, but I kind of assumed it would be beyond me, or that series of odd numbers wouldn’t lend itself to such a generalization.
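For the record, here is roughly what that lazy computational solution looks like in php; a brute-force sketch, nothing clever, just walking the series and testing the sums:

// Count the series of consecutive odd integers that sum to the target.
$target = 1400;
$count = 0;
for ($start = 1; $start <= $target; $start += 2) {      // try every odd starting term
    $sum = 0;
    for ($term = $start; $sum < $target; $term += 2) {  // extend the series until it reaches or passes the target
        $sum += $term;
    }
    if ($sum === $target) {
        $count++;
    }
}
echo $count;    // 6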

Wide awake in the darkness at 2am, I started to think about the problem from a programming standpoint, trying to optimize the algorithm. What else do you do when you can’t sleep, amirite?

First Optimization: know when to stop. There’s no point in testing any series whose first term is past half the total; with at least two terms, the sum has already blown past it.

Second Optimization: The target number is even, so there’s no point testing series with an odd number of terms; an odd count of odd numbers always adds up to an odd number.

In fact… somewhere around 3:00 am I found the twist from a computational approach to an analytical one, merely by using the optimizations and discarding the code.

If a series of two consecutive odd integers adds up to 1400, the two terms must be centered on 1400/2. If a series of four consecutive odd integers adds up to 1400, those numbers must be centered on 1400/4.

Let’s look a little deeper at the simplest case to deconstruct what that all means. 1400/2 = 700. The series of odd integers that centers on 700 is (699, 701). Just for giggles we can confirm that ((701+1)/2)² – (((701+1)/2) – 2)² = 1400. And it does!

By 3:30, doodling number lines in my head, I had observed that what I was doing was factoring 1400. But there was a hitch – I considered the number 10. There aren’t two consecutive odd numbers centered around five. I realized that both factors have to be even. For an even number to be the difference of two squares, it has to be a multiple of four. That’s why 42 is not hip.

So now we can finally get to the answer to the Riddler puzzle, by answering, “how many unique pairs of even factors does 1400 have?” To answer that, we can reduce 1400 to its prime factors, and count the different ways to arrange them into two buckets. 1400 is 2³ x 5² x 7. Since both factors must be even, there must be a 2 in each bucket. That means there are two ways to distribute the remaining 2 (either in one bucket or the other), three ways to distribute the two 5’s (both in one bucket, one in each, or both in the other bucket), and two ways to distribute the 7. That means 2 x 3 x 2 ways to allocate the remaining factors. But there’s one final hitch, because that method will yield both 2 x 700 and 700 x 2. So the final answer is half of that, or 6.

There are six pairs of integers, a and b, such that a² – b² = 1400.

  • 2 x 700 = 699 + 701 = 351² – 349² = 1400
  • 4 x 350 = 347 + 349 + 351 + 353 = 177² – 173² = 1400
  • 10 x 140 = 131 + 133 + 135 + 137 + 139 + 141 + 143 + 145 + 147 + 149 = 75² – 65² = 1400
  • and three other examples that are too long to fit here.

I didn’t do the actual factoring that night; by then it was 4:30. But I knew how to get the answer, for that or any number. To find the solution for odd numbers the process is similar, but the length of the number series will always be odd, and obviously there will be no even factors.
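If you want a robot to do the factoring, here’s a php sketch of that recipe. It only handles the even case described above (multiples of four), and the function name is invented here:

// Pair up the even factors f x g = n (f <= g) and convert each pair to squares:
// a = (f + g) / 2, b = (g - f) / 2, so that a² - b² = f x g = n.
function hipPairs($n) {
    $pairs = [];
    for ($f = 2; $f * $f <= $n; $f += 2) {
        if ($n % $f === 0 && ($n / $f) % 2 === 0) {   // both factors must be even
            $g = $n / $f;
            $pairs[] = [($f + $g) / 2, ($g - $f) / 2];
        }
    }
    return $pairs;
}

foreach (hipPairs(1400) as $pair) {
    echo "1400 = {$pair[0]}² - {$pair[1]}²\n";    // six lines, 351/349 down through 39/11
}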

There are simpler ways to solve this problem, but I’m pleased that I could put a mostly-useless factoid from an advertisement to good use, right on the Web page it was displayed on. And yesterday involved a whole lot of caffeine.


Sweet-o-Meter Fixed!

There is a lot of shit code in this world. This blog uses some of it. I have fixed one aspect of the shittiness, so now you can once again voice your appreciation for words well-spoken. I will look into the problem with the comment upvote thingie… later. It could also use a face-lift, I think, so people even notice it’s there.


A Pair of Coding Aphorisms

I write software for a living, and I take great pleasure when fixing a problem means reducing the number of lines of code in the system. In the last two days, I have come up with a couple of observations:

Every line of code is a pre-cancerous cell in the body of your application.

Now, “line of code” can be a deceptive measurement, since cramming a whole bunch of logic into a single line certainly won’t make the application more robust. There are even robots that can comb through your code and sniff out overly-complex bits. But just as weight in humans is a proxy for a host of more meaningful health measurements, lines of code is a proxy for a host of complexity measurements.

But the point stands. I recently had to fix a bug where someone had copy/pasted code from one place to another. Then the original was modified, but not the copy. All those apparently-safe lines of code (already tested and everything!) were a liability, where instead a function call so everyone used the same code would have been more compact, easier to read, and much easier to maintain. There’s even an acronym for this type of practice: DRY — Don’t Repeat Yourself.
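To make that concrete, here is a tiny, made-up php illustration of the before-and-after; the names are hypothetical, not the code I actually fixed:

// Before: the same check pasted into two files. Fix one, forget the other.
//     $ok = $order->total > 0 && $order->customerId !== null;      // in checkout code
//     $ok = $refund->total > 0 && $refund->customerId !== null;    // in refund code (a stale copy)

// After: one function that everybody calls, so there is only one place to fix.
function isBillable($record) {
    return $record->total > 0 && $record->customerId !== null;
}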

While that’s one of the more flagrant ways code bloat happens, there are plenty of others, mostly symptoms of not thinking the problem through carefully at the get-go, or not stopping to reconsider an approach as the problem is better-understood. Stopping and thinking will almost always get the project done sooner — and smaller.

One important thing to keep in mind is that programming languages are for the benefit of humans, not the machines that will eventually execute the program. If the purpose of your code is not obvious from reading it, go back and do it again. Comments explaining the code are generally an indication that the code itself is poorly written.

No software is so well-written that it ages gracefully.

I work on a lot of old code written by others, and I know people who work on old code written by me. In some cases, the code was shit to start with, but in others time has simply moved on, requirements have changed, and the code has been fiddled and futzed until the pristine original is lost to a host of semi-documented tweaks.

Naturally no code I have ever written falls into the “shit to start with” category (how could you even think that?), but that doesn’t mean the people who have to maintain that old stuff won’t be cursing my name now and then, as some clever optimization I did back in the day now completely breaks with a new requirement I didn’t have to deal with at the time.

And sometimes even if the code itself is still just fine, the platform it runs on will change, and break stuff. Jer’s Novel Writer was pretty elegant back in the day, but now when I compile I get literally hundreds of warnings about “That’s not how we do things anymore.” Some parts of JersNW are simply broken now. When I no longer work where I do, I will likely rebuild the whole thing from scratch.

Speaking of work, I am very fortunate to work in an environment that allows us to trash applications and rebuild them from scratch every now and then. Having a tiny user base helps in this regard. And as we build the new apps, we can apply what we’ve learned and maybe the next system will age a little better than the one before. Maybe. But sure as the sun rises at the end of a long day of coding, someone will be cursing the new system before too long.

Forward to the Past

I’ve spent the last few days learning Ember, which is a software framework for making apps that run in your browser. It’s fun to learn new things, and it has been fun to learn Ember, which to me is a less-awful-than-most javascript framework.

(Things are going to be technical for a bit; please stand by for the rant that is the foundation of this episode — which is also technical.)

The good news: I have never taken a tutorial for any framework on any platform that put testing right up front the way Ember’s did. That is magnificent. The testing facilities are extensive, and to showcase them in the training can only help the new adopters understand their value. Put the robots to work finding bugs!

Also good news: Efficient route handling. Nested routes that know exactly which parts of the page need to be redrawn, while providing bookmarkable URLs for any given state, are pretty nice.

But… I’m still writing html and css shit. WTF?

Yeah, baby, it’s ranting time.

Let’s just start with this: HTML is awful. It is a collection of woefully-shortsighted and often random decisions that made developing useful Web applications problematic. But if your app is to work in a browser, it must generate HTML. Fine. But that shouldn’t be my problem anymore.

When I write an application that will run on your computer or your phone or whatever, I DON’T CARE how the application draws its stuff on the screen. It’s not important to me. I say, at a very high level, that I expect text in a particular location, it will have a certain appearance based on its role in my application, and that if something changes the text will update. That’s all.

I don’t want to hear about html tags. Tags are an implementation detail that the framework should take care of if my application is running in a browser. Tags are the HOW of my text appearing where I want it. I DON’T CARE HOW. Just do it!

When I came to work in my current organization, the Web clients of all our applications were built with a homegrown library called Maelstrom. It was flawed in many ways, being the product of two programmers who also had to get their projects done, and neither of whom was well-suited for the task of ground-up framework design (in their defense, the people who invented HTML were even less qualified). But Maelstrom had that one thing. It had the “you don’t have to know how browsers work, just build your dang application” ethos.

There was work to be done. But with more love and a general overhaul of the interfaces of the components, it could have been pretty awesome.

Why don’t other javascript libraries adopt that approach? I think it’s because they are made by Web developers who, to get where they are, have learned all the HTML bullshit until it is second nature. The HOW has been part of their life since they were script-kiddies. It’s simply never occurred to them that the HOW should not be something the app developer has to think about.

There have been exceptions — SproutCore comes to mind — but I have to recognize that I am a minority voice. Dealing with presentation minutiae is Just Part of the Job for most Web client developers. They haven’t been spoiled by the frameworks available on every other platform that take care of that shit.

My merry little band of engineers has moved on from Maelstrom, mainly because something like that is a commitment, and we are few, and we wanted to be able to leverage the efforts of other people in the company. So our tiny group has embraced Ember, and on top of that a huge library of UI elements that fit the corporate standards.

It’s good mostly, and the testing facilities are great. Nothing like that in Maelstrom! But here I am, back to dealing with fucking HTML and CSS.

A Thing I said to a Friend Recently

“Dude, you’ve crossed the threshold into big data now. It’s a moment where you have to swallow hard and denormalize.”

I’m Doing it Wrong

It is a lovely evening, and I’m enjoying patio life. My employer had a beer bash today, but The Killers are playing and I didn’t reserve a spot in time. So I came home instead, and after proper family greetings I repaired to the patio to do creative stuff. It’s blogtober, after all.

So what creative stuff have I been up to?

Creating a class that extends Event Service Sessions to add calendar server capabilities. (php is about the worst language on the planet for injecting new context-related capabilities into an existing class definition. In other words, php is not friendly to duck punching, or “Monkey Patching” as the kids call it these days.)

The linked Wikipedia article completely misses the most common use-case for this practice, in which I want to get a thing from some service and then augment it. But php doesn’t flex that way, so I just have to deal with it.
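For what it’s worth, the workaround ends up being a wrapper: get the thing from the service, hold onto it, and forward everything you didn’t add. Roughly like this in php, with class and method names invented for illustration rather than lifted from the actual work code:

class CalendarSession {
    private $session;

    public function __construct($session) {
        $this->session = $session;      // whatever the service handed back
    }

    // the new, calendar-flavored capability layered on top
    public function calendarEvents($calendarId) {
        return $this->session->fetch("/calendars/{$calendarId}/events");
    }

    // everything else falls through to the wrapped session
    public function __call($method, $args) {
        return call_user_func_array([$this->session, $method], $args);
    }
}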

Which is to say, I’m doing Friday evening wrong. It is lovely out, my co-workers are chugging down the last of their beers as The Killers wrap up. I am on my patio with my dogs, the air finally starting to cool after an unusually warm day. It is nice. You’d think I could find a better use of this time than wrangling with a programming language.

But apparently you’d think wrong.


Sometimes it’s not so Good when People are Happy to see You

“There he is!” both my fellow engineers said as I walked into the office this morning. I knew right away that this was not going to be the Monday I expected it to be.

“The database is down,” my boss said before I reached my chair. “Totally dead.”

“Ah, shit,” I replied, dropping my backpack and logging in. I typed a few commands so my tools were pointing at the right place, then started to check the database cluster.

It looked fine, humming away quietly to itself.

I checked the services that allow our software running elsewhere to connect to the database. All the stuff we controlled looked perfectly normal. That wasn’t a surprise; it would take active intervention to break them.

Yet, from the outside, those services were not responding. Something was definitely wrong, but it was not something our group had any control over.

As I was poking and prodding at the system, my boss was speaking behind me. “Maybe you could document some troubleshooting tips?” he asked. “I got as far as the init command, then I had no idea what to do.”

“I can write down some basics,” I said, too wrapped up in my own troubleshooting world to be polite, “but it’s going to assume you have a basic working knowledge of the tools.” I’d been hinting for a while that maybe he should get up to speed on the stuff. Fortunately my boss finds politeness to be inferior to directness. He’s an engineer.

After several minutes confirming that there was nothing I could do, I sent an urgent email to the list where the keepers of the infrastructure communicate. “All our systems are broken!” I said.

Someone else jumped in with more info, including the fact that he had detected the problem long before, while I was commuting. Apparently it had not occurred to him to notify anyone.

The problem affected a lot of people. There was much hurt. I can’t believe that while I was traveling to work and the system went to hell, no one else had bothered to mention it in the regular communication channels, from either the consumer or provider side.

After a while, things worked again, then stopped working, and finally started working for realsies. Eight hours after the problem started, and seven hours after it was formally recognized, the “it’s fixed” message came out, but by then we had been operating normally for several hours.

By moving our stuff to systems run by others, we made an assumption that those others are experts at running systems, and they could run things well enough that we could turn our own efforts into making new services. It’s an economically parsimonious idea.

But those systems have to work. When they don’t, I’m the one that gets the stink-eye in my department. Or the all-too-happy greeting.


WordPress Geekery: Reversing the Order of Posts in Specific Categories

Most blogs show their most recent posts at the top. It’s a sensible thing to do; readers want to see what’s new. Muddled Ramblings and Half-Baked Ideas mostly works that way, but there are a couple of exceptions. There are two bits of serial fiction that should be read beginning-to-hypothetical-end.

A while back I set up custom page templates for those two stories. I made a WordPress category for each one, and essentially copied the default template and added query_posts($query_string.'&order=ASC'); and hey-presto the two fiction-based archive pages looked just right.

Until infinite scrolling came along. Infinite scrolling is a neat little feature that means that more episodes load up automagically as you scroll down the page. Unfortunately, the infinite scrolling code knew nothing about the custom page template with its tweaked query.

The infinite scrolling is provided by a big, “official” package of WordPress enhancements called Jetpack. The module did have a setting to reverse the order for the entire site, but that was definitely a non-starter. I dug through the code and the Jetpack documentation and found just the place to apply my custom code, a filter called infinite_scroll_query_args. The example even showed changing the sort order.

I dug some more to figure out how to tell which requests needed the order changed, and implemented my code.

It didn’t work. I could demonstrate that my code was being called and that I was changing the query argument properly, but it had no effect on the return data. Grrr.

There is another hook, in WordPress itself, that is called before any query to get posts is executed. It’s a blunt instrument for a case like this, where I only need to change behavior in very specific circumstances (ajax call from the infinite scrolling javascript for specific category archive pages), but I decided to give it a try.

Success! It is now possible to read all episodes of Feeding the Eels and Allison in Animeland from top to bottom, the way God intended.

I could put some sort of settings UI on this and share it with the world, but I’m not going to. But if you came here with the same problem I had, here’s some code, free for your use:

/**
 * reverses the order of posts for listed categories, specifically for infinite Jetpack scrolling
 */
function muddled_reverse_category_post_order_pre_get_posts( $query ) {
	if (isset($_REQUEST['query_args']['category_name'])) {
 
		$ascCategories = [
			'feeding-the-eels',
			'allison-in-animeland',
		];
 
		if (in_array($_REQUEST['query_args']['category_name'], $ascCategories)) {
			$query->set( 'order', 'ASC' );
		}
	}
}
add_filter( 'pre_get_posts', 'muddled_reverse_category_post_order_pre_get_posts' );

In the meantime, feel free to try the silly serial fiction! It will NOT change your life!
Feeding the Eels
Allison in Animeland


Gimme Swift

As a computer programmer, I live in a familiar cycle: Write some code, then run it repeatedly to work out all the kinks. There is a moment when you hit “run” for the first time, already anticipating what the errors might be, thinking about next steps when the error inevitably presents itself.

It’s been weird writing server-side Swift. I do my hacking, adding a feature or refactoring or whatever, I make the compiler happy, then it’s time to get to the nitty-gritty. I roll up my sleeves, start the program… and it works. Just like that. I run the tests against the other systems. It works.

It’s like you’re all ready for a fight and the other guy doesn’t show up. NOW what are you going to do?

Swift can be annoying with how hard-assed it is about certain things, but that picky compiler that sometimes forces long-winded syntax is like that really picky English teacher who you realize after the fact gave you a command of words you didn’t have before. If you have a null pointer in Swift, you went out of your way to create it.

Programming languages exist for the convenience of humans, not machines. So if you can make a language that makes it harder for humans to make a mistake, why wouldn’t you?

Man I enjoy writing code in Swift. Of the four languages I use regularly, Swift is hands-down the one I’m most productive with, even though I’ve been using the others for far longer. And just today I remembered that functions could return tuples, and I was like, “Damn!” all over again, thinking how I can shrink my interfaces.

That and a performance profile comparable to C (each is better for certain sorts of operations), and you have a language with some mojo. This ain’t JavaScript, homey.

Most of my days are consumed writing code in other languages (at least for now), and what strikes me every day is that the mistakes I make would not have been possible in Swift. Think of that!


Facebook, Continuous Integration, and Fucking Up

If you ask the engineers at Facebook (I have), they are experts at continuously evolving their platform almost invisibly to the users. If you ask the users, Facebook is really fucking annoying because shit is breaking all the time and the button that was there yesterday is nowhere to be found.

Continuous Integration is a development practice that means that each little tweak to the software goes through the tests and then goes live. It’s a powerful idea, and can massively decrease the risk of publishing updates — rather than push out the work of several geek-years all at once, with all the risk of something going terribly wrong, you push out the result of a couple of geek-weeks of effort on a regular basis, taking baby-steps to the promised land. Tick, tick, tick, with an army of robots making sure no old bugs sneak back in again.

I fully embrace this idea.

Never has a company been more proud of accomplishing this than Facebook. They crow about it around here. Also, never has a company been so bad at actually doing it. What Facebook has managed to do is annoy users with endless changes that affect how people work, while still publishing bugs.

The key is that a continuous, minor set of tweaks to software is good, but endless tweaks to how people experience the software is bad. People don’t want to be constantly adjusting to improvements. So in continuous integration, you can enhance the user experience, but you can’t lightly take away something that was there before. You can’t move things around every couple of weeks.

Back in the day when I went on Facebook more frequently, I was constantly bemused by a user interface that felt like quicksand. Meanwhile, frequent users reported a never-ending stream of bugs.

Facebook, you are the champion of Continuous Integration, and the poster child for CI Gone Wrong.


A Guide to Commenting Your Code

I spend a lot of time working with code that someone else wrote. The code has lots of comments, but they actually do little to improve the understandability of the work. I’m here to provide a concise set of examples to demonstrate the proper way to comment your code so that those who follow will be able to understand it easily and get to work.

These examples are in php, but the principles transcend language.

WRONG:

// get the value of the thing
$val = gtv();

RIGHT:

$thingValue = getTheValueOfTheThing();

WRONG:

// get the value of the thing
$val = getTheValueOfTheThing();

RIGHT:

$thingValue = getTheValueOfTheThing();

Oh so very WRONG:

// Let's get the value of the thing
$val = getTheValueOfTheThing();

We’re not pals on an adventure here.

RIGHT:

$thingValue = getTheValueOfTheThing();

You might have noticed that so far all my examples of the proper way to comment your code don’t have comments at all. They have code that doesn’t need a comment in the first place.

Computer languages are not created to make things easier for the machine to understand; they exist to make sets of instructions humans can read that (secondarily) tell the computer what to do. So, if the code syntax is for the benefit of humans, treat it that way.

If you have to write a comment to explain what is going on in your code, you probably wrote it wrong. Or at the very least, if you need to write a comment, it means you’re not finished. I write many comments that start TODO, which my tools recognize and give me as a to-do list.

Stopping to come up with the perfect name for a variable, class, or function is an important part of programming. It’s more than a simple label, it’s an understanding of what that symbol means, and how it works in the system. If you can’t name it, you’re not ready to code it.

There is a special category of comments in code called doc blocks. These are massive comments above every function that robots can harvest to generate documentation. It’s a beautiful idea.

Here’s my world (not a standard doc block format but that’s irrelevant):

/*
|--------------------------------------------------------------------------
| @name "doSomething"
|--------------------------------------------------------------------------
| @expects "id (int)"
|--------------------------------------------------------------------------
| @returns "widget"
|--------------------------------------------------------------------------
| @description "returns the widget of the frangipani."
|--------------------------------------------------------------------------
*/
public function doSomething($id, $otherId) {
    $frangipani = getFrangipani($id);
    multiplex($frangipani, $otherId);
 
    return $frangipani->widgets();
}

The difficulty with the above is that the laborious description of what the function does is harmfully wrong. The @expects line says it needs one parameter, when actually it needs two. It says it returns a widget but in fact the function returns an array of widgets. If you were to try to understand the function by the doc block, you would waste a ton of time.

It happens all the time – a programmer changes the code but neglects to update the doc block. And if you’re not using robots to generate documentation, the doc block is useless if you write your code well.

public function getFrangipaniWidgets($id, $multiplexorId) {
    $frangipani = getFrangipani($id);
    multiplex($frangipani, $multiplexorId);
 
    return $frangipani->widgets();
}

Doc blocks are a commitment, and if you don’t have a programmer or tech writer personally responsible for their accuracy, the harm they cause will far surpass any potential benefit.

I have only one exception to the “comments indicate where you have more work to do” rule: Don’t try this at home.

public function getFrangipaniWidgets($id, $multiplexorId) {
    $frangipani = getFrangipani($id);
 
    // monoplex causes data rehash, invalidating the frangipani
    multiplex($frangipani, $multiplexorId);
 
    return $frangipani->widgets();
}

This is useful only when the obvious, simple solution to a problem had a killing flaw that is not obvious. This is a warning sign to the programmer coming after you that you have tried the obvious. Often, when leaving notes like this, and explaining why I did something the hard way, I realize that the easy way would have worked after all. At which point I fix my code and delete the comment. But at least in that case the comment did something useful.


An Exchange with HackerOne

In a recent episode I rambled about a system that pays good guys for finding and reporting security holes in the software we rely on every day. Fired up with enthusiasm for the cause, I sent this message to HackerOne:

I appreciate what you are doing here, and would love if there were a tip jar where I could contribute to the rewards you give out for making the world a better place. Like Zaphod, I’m just a guy, you know? But I’d happily pitch a little bit each month to promote what you do here, and to support the people who actually make the Internet less unsecure.

I debated “insecure” versus “unsecure”, and went with “un” for reasons I don’t exactly recall. Beer may have been a factor.

I got a very nice letter back.

Thank you so much for reaching out to us with this feedback on what we are doing. We appreciate you taking the time to reach out to speak with us about what you think of the program and how you would like to participate it make HackerOne a success.

You are correct about us not having a tip jar, however, our community can support us by word of mouth let others know what we do and what our goal is and if you are a hacker or know any white hat hackers we encourage you all to use our platform and help us with making the internet safer.

We really do appreciate you reaching out and I am going to share your message with the rest of the company.

Best,
Shay | HackerOne Support

The missing word and tough-to-parse sentence make me think that this was a hand-typed response. I am happy to contribute to their word-of-mouth buzz. I do not fit the profile of the geek HackerOne is looking for, and I suspect no one who will ever read these words is pondering the question “How can I break things and still be a good guy?” But if that’s you, head to HackerOne.

On the other hand, if you own a commercial Web site and want to get a major security audit, consider posting a bounty at HackerOne. You’ll get some really skilled people trying to break in, only in this case they won’t rob you blind if they get in.


The First and Last Mile, and Net Neutrality

The hardest part about installing public transportation in a city not built for it is the first and last mile. That’s the mile one has to go to reach the nearest stop, and the mile they have to go on the other end to reach their destination. People just plain won’t walk a mile anymore. Older, denser cities don’t have this problem; there is a tram stop nearby no matter where you live.

If Net Neutrality is torpedoed, we will have a new last mile problem. At least in urban areas, near where you live is The Backbone — the actual internet, the information superhighway. Your ISP is an on-ramp, but they’re about to be given the right to control your access to the highway. If you live in a rural area, the last mile might be more than a mile but the concept is the same.

The ISPs are just an on-ramp, but because they control the last mile (they have wires connected to your house), they control your access. That’s why there are currently laws to prevent them from abusing that power. If net neutrality goes away, we’ll have a new first-mile problem. So much information, so close, but held hostage by the wire-owners. That first step.

Some will pay the ISP’s extortionate fees. Some will be cut off from one of the key assets that decides who gets ahead these days. The rich will get richer. To be more specific, the rich people who floated this whole idea will get richer, and they don’t give a crap about anyone else. It’s not that they want the poor to remain poor, that would be evil. They simply don’t care what happens to those people.

Already here in Silicon Valley there is a company promising to be a neutral ISP, no matter what the law says. They solve the last mile with a radio dish pointed at a tower (if I’m reading their propaganda correctly), but at the moment cost/performance is not close to the guys with wires connected to my house. Even so, if the guys with wires make the slightest move toward controlling my access, they should know now that I will not remain their customer for long.


Your Privacy, Sold (Again)

If you watched the last season of South Park, you know what can happen if your entire Internet history is made public. Riots, divorce, the collapse of civilization. But did you know that your Internet Service Provider can keep track of every Web site you visit? Forget privacy mode on your browser; that only affects what gets stored locally. It’s mostly good for letting you do credit card transactions on someone else’s computer, or at an Internet Cafe.

It does not keep a host of companies from recording every site you visit.

Up till now, those companies haven’t been allowed to share that information. But that’s about to change. The companies that keep that data have cashed in on the current legislation-for-sale atmosphere and have bought a rule change that will enable them to sell that data.

Our President will no doubt sign the bill, and if there’s any silver lining to all this, it’s that his own browsing history will shortly be available for purchase. If he, or other congressional leaders, had any idea what they were signing, they would have realized that they have more to lose than just about anyone else.

For instance, DNS records already made public don’t look good for the GOP. They were collected by a group who thought the Russians were trying to hack the RNC, only to find that the communication went both ways.

Anyone want to guess how much child porn is in The Donald’s browsing history?

Meanwhile, even though I don’t go to any sites that are remotely illegal, I’ll be taking measures I probably should have taken long ago to protect my privacy, rather than rely on laws. To be honest, I’m not sure exactly what I’m going to do; I’m not keen on using the Tor Browser (though I’m open to volunteering some server resources to the project). I’ll be looking at VPNs (Virtual Private Networks) to see if they offer anonymity.

I’d be happy to hear from anyone out there with knowledge in this area. In any case, I’ll report back what I learn.


Defensive Programming: Put the Guards Near the Gate

We can file this one under “not interesting to pretty much anyone who reads this blog,” but it’s an important concept for writing robust code. This is part of a discipline called Defensive Programming.

Let’s say you build yourself a castle in a clearing in the woods. There is one path to the front gate, and you need to guard it. “Hah!” you think, “I’ll put the guards where the path comes out of the woods, to stop shenanigans before they even get close!” You post the guards out there in a little guardhouse, secure in the knowledge that no bad guys will reach your gate.

Until someone makes a new path. Perhaps when the new path is created the path-maker will notice that there are guards on the other path and put a little guardhouse on the new path as well. But perhaps not.

In software, it’s the difference between code that says, “when all conditions are right, call function x”, and having function x test to make sure everything is OK before proceeding.

Putting the guard by the trees:

    function x(myParameter) {
        myParameter.doSomething();
    }

    thing = null;

    ... other stuff that might or might not set 'thing'

    if (thing != null) {
        x(thing);
    }

This is fine as long as everything that calls function x knows to check to make sure the parameter is not null first. It might even seem like a good idea because if ‘thing’ is not set you can save the trouble of calling the function at all. But if some other programmer comes along and doesn’t know this rule, she might not do the check.

    // elsewhere in the code...

    anotherThing = null;

    ... other stuff that might or might not set 'anotherThing'

    x(anotherThing); // blammo!

Better to move the guards close to the gate:

    function x(myParameter) {
        if (myParameter != null) {
            myParameter.doSomething();
        }
    }

Now when someone else writes code that calls function x, you can be confident that your guards will catch any trouble. That doesn’t mean you can’t ALSO put guards out by the edge of the forest, but you shouldn’t rely on them.