Rounded Corners and CSS3

NOTE — June 7, 2010:
This page is a little out of date; the main Webkit browsers now work better with NO prefix on the styles. It’s time to say goodbye to -webkit-. In the following discussion, using the standard syntax will work with Chrome, Safari, and Opera as well. The table referenced below has been updated to reflect the newer browsers.

If you poke around this site you will see boxes with rounded corners. If you use Safari or Firefox, you will see even more.

Rounded corners are implemented here in two different ways. The main boxes with the drop shadows are done the old-fashioned way, the way that works on most browsers. Each corner is a graphic with an alpha-channel shadow, and the edges are yet more graphics, repeated as needed to span the distance between the corners. The boxes expand and contract infinitely in both directions. It’s not bad. It’s also a pain in the butt.

Yet, I like rounded corners. They seem friendlier. I have broken down, therefore, and in a few places I have added browser-specific style information to create a softer-feeling blog. Since the rounded corners are purely cosmetic — everything still works just fine in browsers that don’t support border-radius — I’m not too worried about it.

However, while I was looking into the border-radius CSS property, I discovered several sources that didn’t get it right.

Here’s the deal. The CSS3 standards draft includes a property called border-radius. Exactly how that property is going to work has not been finalized, but it’s not likely to undergo any more major revision. Meanwhile, Firefox and Safari have already worked out their own border-radius implementations, called -moz-border-radius and -webkit-border-radius respectively. Other browsers see the -moz and the -webkit prefixes and ignore the property.

Unfortunately, neither implementation matches how the proposed border-radius property will act. Oh, dear. When the browsers are updated to match the standard, those -vendor-border-radius properties may break. A lot of Web designers out there don’t seem to realize that.

NOTE: probably at this point you should open up this handy table to follow along.

It’s not all doom and gloom, however. As long as people using the vendor-specific border-radius properties keep things really simple, there won’t be a problem. Here’s the skinny:

All four corners with 15px radius
<style type="text/css">
.roundedBox {
    -webkit-border-radius: 15px;
    -moz-border-radius: 15px;
    border-radius: 15px;
}
</style>

will put nice rounded corners on any block element of class roundedBox. Safari 4 and Firefox 3.5 (the browsers I have to test on) will work today, and when the formal border-radius is adopted and the other browsers support it, everyone will be happy. (Remember, of course, that in the meantime a large part of your audience will still see squared-off corners.)

The tricky part comes when one wants to specify elliptical corners, or specify different radii of curvature on different corners. When you start getting fancy, things get a little messed up. Let’s tackle the second one first, because it’s possible to find a way to specify the different corners that makes everyone happy. It’s just long-winded.

border-radius is really shorthand for four properties: border-top-left-radius, border-top-right-radius, and so forth. Therefore it’s perfectly safe to specify each corner independently, and all the browsers will act the same way:

top-left and bottom-right 20px radius, others 10px
<style type="text/css">
.roundedBox {
    -webkit-border-top-left-radius: 20px;
    -webkit-border-top-right-radius: 10px;
    -webkit-border-bottom-right-radius: 20px;
    -webkit-border-bottom-left-radius: 10px;
    /* different! */
    -moz-border-radius-topleft: 20px;
    -moz-border-radius-topright: 10px;
    -moz-border-radius-bottomright: 20px;
    -moz-border-radius-bottomleft: 10px;
    border-top-left-radius: 20px;
    border-top-right-radius: 10px;
    border-bottom-right-radius: 20px;
    border-bottom-left-radius: 10px;
}
</style>

Note that the names of the four corner properties are different for Mozilla. Aargh. All the more reason to hope the spec is finalized soon. I put the four properties in the order the software considers them when parsing the shorthand notation, just to get into the habit.

All those lines of CSS can be a pain in the butt, but it’s bulletproof and will work on into the future. But wouldn’t it be nice if you could use shorthand for the border radius the same way you do for margins and padding? The boys at Mozilla thought so, and the CSS3 standards team thought so, too. Webkit (Safari) seems content to support only the long-winded method for now (at least it supports that properly – more on that later).

Before talking about the differences between the browsers and the CSS3 spec, let’s take a quick look at the theory. As with properties like border, the border-radius property is just a shorthand so you don’t have to specify each corner individually. If you use one number, like border-radius: 10px; the style will be applied to all the corners. If you supply four values, the four corners each get their radius set, starting with the upper left and working clockwise. So far, so good, but there’s trouble ahead.

[The following has been edited since it was first published. I first said that Mozilla was doing the following drawing wrong, but it looks like they have it right and Safari is wrong. Sorry for any confusion. To make up for it I added box-shadow here and there for browsers that support it. They’re sweet!]

The difference is elliptical corners. CSS3 calls for them, but the draft isn’t very well-written. The mystery lies in what should happen when two values are specified: border-radius: 20px 10px. When you specify a single corner with two values, the result is an elliptical curve. When the two values appear in the shorthand, however, Safari draws all four corners with the same ellipse, while Firefox (and the CSS3 spec) treats them as two circular radii, so the corners turn out just like the example above.

According to the spec (by my reading), when you use the shorthand you don’t get ellipses unless you use a slash.

All four corners with elliptical curvature
<style type="text/css">
.roundedBox {
    /* four elliptical corners */
    -webkit-border-radius: 20px 5px;
    -moz-border-radius: 20px / 5px;
    border-radius: 20px / 5px;
}
</style>

NOTE: The most recent builds from webkit.org match the spec. I don’t know when those changes will reach Safari, but sites using the two-value shorthand may have to deal with some inconsistencies between browser versions. Not sure, but I would avoid using that syntax just in case.

What about if four values are specified?

All corners different
<style type="text/css">
.roundedBox {
    /* four different circular corners */
    /* no effect! */
    -webkit-border-radius: 20px 10px 5px 30px;
    -moz-border-radius: 20px 10px 5px 30px;
    border-radius: 20px 10px 5px 30px;
}
</style>

Once again Webkit-based browsers like Safari and Chrome fall short. The Webkit team seems content to get the longhand method of specifying corners right, but not the shorthand. Mozilla, in the meantime, has worked out the most complex and versatile form of the shorthand, but disagrees with the spec fundamentals.

To use shorthand to specify four different elliptical corners, you would use something like:

-moz-border-radius: 20px 10px 20px 5px / 5px 10px;

where you specify up to four horizontal radii and then up to four vertical radii. The numbers before the slash are the horizontal radii, starting from the top left. If only two numbers are given, they alternate. Three numbers means top-right and bottom-left share. The y-radius values are the numbers after the slash, and are distributed the same way. Clear? Good.
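If I’m reading the distribution rules right, that shorthand should come out the same as spelling each corner out as its own ellipse (the longhand form also accepts a horizontal and then a vertical radius):

/* assuming my reading of the distribution rules is correct */
-moz-border-radius-topleft: 20px 5px;      /* horizontal radius, then vertical */
-moz-border-radius-topright: 10px 10px;
-moz-border-radius-bottomright: 20px 5px;
-moz-border-radius-bottomleft: 5px 10px;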

I have read that if the text rendering is vertical, the horizontal and vertical parts are reversed, but I see nothing about that in the proposed specification.

This will make a lot more sense if you study the two blue-shaded lines of the table.

While we’re looking at the table, note that Safari is perfectly capable of displaying the most complex borders, but they have not implemented the shorthand notation (except for the bit they did wrong). They’ve done the hard part, but left out the one-day coding job of parsing the shorthand strings into the properties for each corner. Odd. The rules are really very simple for a machine.

So what does this all mean?

In conclusion, while it’s possible to write different sets of -vendor-border-radius CSS properties and get what you want, things start to get quite messy. It’s a lot of effort for aesthetic touches that half your audience won’t see for the next couple of years. I’d advise just staying away from elliptical corners for now, and specifying round corners individually if any are different. It’s a bit more typing, but it’s a lot safer. Stay away from -webkit-border-radius: with two values.

A Browser Experiment

Quite by accident this morning I stumbled across an image format that might turn out to be really cool. Unfortunately, like all things Internet, it’s not much use until the various browsers agree on how it should work. Just for giggles, I thought I’d play around with it a bit. Internet Explorer users — even IE 8 — need not continue with this episode.

One of the cool things about SVG is that it’s more a drawing system than an image format. Image files contain a set of instructions the computer uses to render the picture. That’s not especially new, but it’s nice to have a standard system built into browsers. With something like this I can write code on the server to generate very sophisticated and pretty graphs, without a lot of technical hoo-ha. It would be especially nice for some of the images used in the basic design of this site.

So here is an svg image, plopped into the page the way any image would be:

Emblem-fun

Alas, only those using Opera and Safari will see it. (PLEASE correct me if I’m wrong!) Alternately, here’s the contents of that same image file, plopped into the regular XHTML of this site in a big ol’ svg tag:

You can look at the source for the page and there it will be, all the drawing instructions used to render this happy little face. (Note that I removed some extraneous parts that connected to the source of the graphic (sodipodi) to see if I could make the image work.)

Except… hmm. The latter doesn’t work at all anywhere (that I know of). Obviously I’m missing something, but at this point it’s not worth figuring out. I did try to paste in an example directly from Mozilla’s site; maybe WordPress is subtly messing up the data. Or something else. If SVG ever becomes more universal, I’ll revisit it.

Edited to add: it looks like the browser has to load a file with an xhtml extension to know how to deal with other xml embedded in the code like that. Unfortunately, if you tell the browser that you are using xhtml, you have to use it exactly correctly. Alas, several of the plugins, and Amazon, and Google, provide code that is not strictly compliant, and I shudder to think what would happen if I tried to validate all those old episodes I brought over from iBlog. Firefox can also use the <embed> tag to display the graphic, but ironically that tag is itself not compliant.

Let’s try the <object> tag and see if Firefox has relented and begun to support it:
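(The markup was along these lines; the file name and size here are stand-ins, not the actual values on this server.)

<!-- data and dimensions are placeholders -->
<object type="image/svg+xml" data="emblem-fun.svg" width="300" height="300">
  Sorry, your browser does not display SVG.
</object>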

Just for grins I specified a different size, to show the S in SVG. Safari didn’t do it right, but my version of Firefox and Opera did.

Note: The original graphic is under GPL and I got it from here.

Note 2: Since this episode, I’ve done some pretty extensive work with SVG, including using scripts to modify the image — even changing the actual structure of the image interactively. Try the dots!

The Worst Thing That Ever Happened to the Internet

I mentioned in the last episode that Internet Explorer was the second-worst thing that ever happened to the Internet. Today I’ll talk about the absolute worst. It’s really a long technical rant that doesn’t matter, but it feels good to let it out. What follows is an underinformed ramble about the scourge that did the most harm to the developing computer network that went on to transform our lives — damage that we still live with today. Without this one corrupting influence, we would have had Internet applications that didn’t suck a decade ago, if not longer. In fact, it was because of this electronic plague that Microsoft was able to cause so much harm with Internet Explorer.

The culprit? The ball and chain that modern technology has dragged along despite its obvious flaws? Hypertext Markup Language, or HTML.

First, let’s start with the name. HTML is not a language. Not even close. It is a document format. That its inventors did not recognize the difference tells you that the wrong guys were doing it.

Second, it’s not a very good document format. At its heart, the inventors wanted a format that did three things: connect related documents, embed external resources (like images) and contain standard formatting information that would be interpreted by viewing software consistently. They were not the only ones developing systems like this; Josten’s Learning invented a similar system when they built the first multimedia encyclopedia for Compton’s New Media. Where Berners-Lee and friends had URL’s, Josten’s engineers created BRU’s, but beyond the initials the function was the same.

I don’t want to be too harsh on Berners-Lee, Cailliau, and the others who grew HTML, but I wish they’d been a little more far-sighted. I say ‘grew’ rather than ‘invented’ because it’s clear that they never sat back and asked themselves “What is a tag? What roles do they perform?” Even now, XHTML, the supposedly more rigorous (if still misnamed) descendant of HTML has fundamental inconsistencies.

For a simple example, take the <br /> tag. It exists because in HTML all whitespace (tabs, spaces, and returns) is mushed together and presented on the screen as a single space. Thus

<p>this markup</p>

and

<p>this
 
        markup</p>

come out the same on the screen. That’s fine if you know what’s going on. But what if you want to put in a line break or a space? Well, for a space you add a special character code &nbsp; and for a break you add a tag <br />. Why is one a character and one a tag? Because on the day HTML’s inventors decided they needed line breaks, a tag seemed like a good way to go, even though semantically it had nothing to do with the roles of other tags. It could just as easily have been &br; or something like that. That’s how HTML grew up. And thus the World Wide Web was born.

Another fundamental flaw is that the content (what to display) is all mixed up with the presentation (how to display it). What if you want to show the same document in different formats? Nope. While some tags were geared toward identifying the type of content that they enclosed (like the <p> tag), others were direct formatting instructions (like the <i> tag). This inconsistency in the role of tags in a document is a reflection of the organic (and sloppy) way that HTML was grown.

I really can’t blame the inventors of HTML for what came next. Everyone started using it. Everyone. The flaws and inadequacies of the format quickly became apparent. Different document viewers (browsers) rendered things differently. Formatting options were extremely limited. The systems were vulnerable to abuse by unscrupulous people. Right then, there was a chance for people to say, “hold on a second! Let’s take the idea of HTML and apply the lessons we’ve already learned in other branches of computing, and make something that doesn’t suck.”

Rather than scrap HTML, browser makers and others set out to fix it. That was the Big Mistake. After twenty years of tweaking and bickering and incompatible extensions introduced by browser manufacturers and squabbles and lawsuits, HTML has been upgraded from awful to poor. Along the way, companies like Adobe and Macromedia thought to get their technology adopted as a replacement to HTML (the Web in pdf? Interesting…) but those efforts were doomed from the start because they did not provide free, simple tools to create the content.

HTML’s greatest shining virtue (and it’s an awesome one) is that it’s accessible to anyone who can type. Anyone. No special tools required.

So, now we have style sheets to help separate content and presentation, XHTML to fix some of the semantic craziness of HTML, and browsers are finally starting to agree on what all the formatting instructions actually mean. We could have had that fifteen years ago if people had just let go of HTML, but here we are now, with an almost-functional system. There are still plenty of flaws, however. Things that seem so normal now that we don’t even think about how dumb they are.

Take this blog, for instance. It’s a pretty well-built Web application, based on reasonably up-to-date practices. Yet were you to click the comment link at the bottom of this episode, you would go to a new page. On that new page the browser would reload the same header and the same sidebar it just erased. What a waste! Why does it do it? Because that’s how HTML (and HTTP, the underlying part that communicates with servers) works. There have been abortive attempts to fix that over the years, but they have all been flawed. Now, at long last, techniques have been developed to overcome that problem, but they are not quite ready for prime time yet. For one thing, they are very complicated, and for another they rely on browsers working just right. Why was it so hard to implement? Because at its core the Web was not made that way.

Even in the days when almost everyone was on dialup (except the people inventing HTML), no one stopped to say, “hey, let’s make a way to only update the content that changes.” That problem has now been ‘solved’ by adding a new layer of complexity on Web sites. By adding this layer (on top of CSS and so forth), we get sensible Web applications at last, but we take away the one super-cool thing about HTML. It is no longer a simple format that can be harnessed by anyone with a text editor. We have lost the attribute that was the only reason to keep HTML around in the first place.

So now we have a system that is both inaccessibly arcane and flawed. Yay!


The Ghost of Projects Past

I couldn’t sleep last night, and on nights like that it is natural to think of things that might have been. One of the thoughts that grabbed hold of my too-active brain was the memory of PeoplePost, an Internet-based photo-sharing application that allowed groups of people to build scrapbooks together. We called it a virtual refrigerator door. It was pretty slick.

The project failed for a number of reasons. First, we tried to ‘roll our own’ instead of springing for sophisticated Web development tools. (Back then, the tools were very expensive.) To save the cash we added months to the development, and in the meantime something fundamentally changed on the Internet. People began to expect everything to be free. You remember the two-year span when Web services stopped trying to make money and figured they would find some way to be profitable in the future? Probably not, but those were the years we were working on PeoplePost.

This happened as the dot-com boom was just getting started, before Google had finished making the Web a useful place. WordPress did not exist then. No MySpace, no Facebook, no Friendster. Geocities was around, but had PeoplePost taken off, we would have had to invent modern social networking as the next logical step. At the time, our networks were closed communities with no way to discover what other groups were up to.

Another thing that killed us was a dead-wrong prediction I made way back then. I said that the browser was the Swiss Army Knife of the Internet, and that soon people would turn to specific applications to perform specific tasks. “Swiss knife is good,” I said, “but soon people are going to want cutlery.” Boy, was I wrong about that. Instead of using applications designed for a specific purpose, people worked with really crappy applications that worked through the browser. People tolerated crap that worked in some browsers and not others, and they tolerated bad aesthetics, wasted bandwidth (on their modems!), and wretched user interfaces that left them cursing the screen. Why? I still don’t get it.

Nevertheless, we made PeoplePost a downloadable application (with a really slick self-updating scheme), and when people downloaded and installed it, they would then go back to the browser and wonder what to do next. It’s the Internet! It must be in the browser!

The application was written in Java (not Swing, but that’s another post), so we managed to get the whole thing shoehorned into the browser — suddenly dealing with four different security systems and a host of other issues, like Microsoft’s passive-aggressive antipathy toward the language. What a pain. Still, a few people started to use it.

What we really needed at that stage was widespread broadband. We were diligent about saving bandwidth (all graphic elements preinstalled, for instance), but with advertising banners now harshing the lovely fridge door environment and eating up precious pipe, the user experience on a slow modem was not so great. Pictures are big. Still, we got Compaq and HP excited (shared photos become printed photos, which moves paper), and they helped get the product out there.

But we couldn’t charge for it, and we weren’t making money on advertising. It was going to be a long haul to make the product a financial success. An expensive haul. We couldn’t do it.

Skip forward to today. Finally, browsers are getting consistent enough and powerful enough that it’s almost (but not really) possible to make a decent application that runs in the browser. Meanwhile we’ve all been trained to put up with shitty software while online, so actual good software on the Web is big news. Now Internet Explorer (the second-worst thing to happen to the Internet) is finally close enough to the standards that people can write sophisticated user interfaces, using techniques that are often bundled under the term AJAX.

In the intervening years, galleries of many stripes have popped up on the Web, but nothing like PeoplePost. There are places people can share pictures, but they boil down to “here’s a big pile of my pictures; now post a big pile of your pictures.” Nice, but it could be better. A lot better. I was reminded of how cool PeoplePost would be this summer when the family was looking for a place to share photos from the eclipse cruise. There is nothing that allows people to collaborate, to build an album with text and photos and comments, and to allow everyone to contribute to the same album and build a true group identity. Combine that with modern social networking and you’ve got something.

Maybe it’s time to dust off the old failure. Maybe the world is ready for it now.


New Sidebar Feature – Tag Cloud (sort of)

Most blog systems support tags these days. Put simply, tags are just words that can be used to create informal groups of posts. Tags aren’t as rigidly defined as categories, and so a ramble that covers many topics can have many tags. The purpose of the tags is to allow folks like you to find similar stuff. Since moving to WordPress I’ve started to pay more attention to tags, and at the bottom of each episode you can find a link or three to episodes with similar tags. It’s kind of cool, and it’s search-engine friendly.

Now I have added a widget to the sidebar that provides a ‘tag cloud’ — a list of the tags with the most-used tags in larger font. (I think this is a misuse of ‘cloud’, which in this context is also supposed to show relationships. A true cloud would group tags by how often they are used together.) There are much fancier tag cloud widgets out there, but I was starting to spend way too much time investigating the options. I settled on a nice, simple, colorful widget which is over there now. It’s called “ILW Colorful Tag Cloud” (or something like that). There are a few aesthetic tweaks I’d like to make, like condensing the text, but that shouldn’t be too much trouble.

The widget’s all right, but the colors are arbitrarily set by me. It would be cool if the colors actually meant something. Since the number of times a tag is used is already represented in the font size, color could be used to show relationships or (better yet) indicate how many times a tag has been clicked. That way the tags more people found interesting would be highlighted.

Another minor problem with the tag cloud as it stands is that most of the 1200 episodes I created with my old blog system have no tags. I’ve gone back to retrofit tags on a few obvious ones, but overall most of this blog is untagged.

But no, not today. No widget modifications, and no more tag retrofitting. I’ve already spent far too much time on this silly feature.

Lite Brite

Last night as my sweetie and I were sharing a big salad and watching TV, she turned to me and said, “We should do Lite Brite!” I readily agreed. I had never seen an actual Lite Brite in action.

You remember Lite Brite, don’t you? It is a backlit frame into which you can stick translucent plastic pegs. The colored pegs glow merrily. Lite Brite! You can paint with light! the jingle went (approximately).

I had given the Lite Brite a lot of thought back when I was roughly four years old, and occasionally thereafter. I only remember little bits and pieces of the kids’ program Captain Kangaroo, but I remember the Lite Brite ads that supported the good Captain and his loyal sidekick, Mr. Greenjeans. I remember the ads very well, because it was one of the earliest engineering challenges I ever tackled. How the heck did the dang thing WORK?

Lite Brite Masterpiece: Ducks

In the ads, the pegs are pushed into a black surface and light up. Sweet! Obviously there is something backlit and when a peg is pushed in it glows. At first I tried to come up with a system where pegs could be placed anywhere, and stay in place. And then came the real engineering challenge: making the holes close back up when the peg was removed. This last feature was obvious—otherwise the toy would not be reusable, and the smallest mistake meant you ruined everything.

After more careful observation, I saw that the pegs were always in a grid pattern on the board. So, I realized, there was a grid of holes that the pegs could be punched into. With that knowledge, I imagined a system with little spring-loaded doors for each hole. Push the peg in, the flap opens and light comes through. Pull it out, and the flap closes. I watched the ads closely for any sign of the doors. There was none. The black surface seemed completely uniform. Perplexing. Over the years I mentally fiddled with different designs for the Lite Brite doors that would not be prone to light leaks.

Fast-forward forty years, when I came to live with someone who owns an honest-to-God Lite Brite. At last the Engineering mystery would be resolved.

The answer: black paper. No doors, no flaps, no self-repairing gelatinous layers. You mount opaque paper over the grid and punch holes in it with the pegs. There is no undo. The black papers that come with the Lite Brite have little letters printed on them, for color-by-numbers fun. And really, can you imagine how long the delicate little mechanisms I had been imagining since my very first days of TV watching would have lasted? In my gut I knew that there had to be a simpler answer, but I never let go of my assumption that you could take the pegs back out again.

We sat on the floor, my sweetie and I, taking turns punching in the little pegs (I had trouble differentiating the pink and orange ones before punching them in), and had a good ol’ time. When we were done we kept the Lite Brite plugged in to bask in the glory of our masterpiece. And it was good.


Figuring out WordPress Roles

A couple of regulars have wished out loud that they could edit their own comments. “No problem,” thought I, “I will create accounts so they can log in. Once the system knows who they are, I’m sure it will allow them to edit their own stuff.”

Not so fast. Apparently the ability to edit one’s own comments is tied to the ability to create new posts as well. I’m writing this post as Jerry II, a new user on this blog with the exalted role of ‘Contributor’. It’s possible to mix and match exactly which capabilities a user has (with the help of a WordPress plugin), but the same capability, edit_post, is tied both to editing one’s own comments and to writing new post content.

It’s not a total disaster; I can’t publish the episode I’m writing. It will go into a pile to await the approval of the administrator, so no unauthorized content will reach your tender retinas. It’s just extra complexity for other users who don’t want it.

Oh, well, they’re smart people. I’m sure they can overcome this.

Quest for the Perfect Moon Widget

You may have noticed that as of this moment there are three different moon phase widgets over on the sidebar. None of them are perfect, alas (although the Japanese one is perfectly inscrutable). I looked around at other WordPress widgets and did not find one that gave out all the information I was interested in (especially for the eclipse) and was aesthetically pleasing. I thought I might spend a few hours and make my own.

The design was very simple. I would write a little Flash thingie that would read XML data from a server, draw the moon with great precision, and look nice doing it. In addition I could put in numerical readouts for more interesting (to me) numbers. Piece of cake.

I started my quest looking for a server with current moon info. The US Naval Observatory has all sorts of lunar data available, presumably calculated with far greater precision than I will ever need. The only problem is, they didn’t have data for right now. They had almanac generators and whatnot, but nothing that I could ping and get back a message that said, “at this moment, the moon is…” I couldn’t find anything at NASA, either. I broadened my search and found that nobody seems to be providing this service. “Fine, then,” I thought. “I’ll make my own moon server. I’m sure there are plenty of places I can find algorithms for calculating this stuff.”

Only, that didn’t turn out to be so simple, either. The motion of the moon is incredibly complex. There exists a thing called ELP 2000-85 which is the latest attempt to make the math match what the moon actually does. What the thing does is loop through a set of calculations a bazillion times, each time with tweaked coefficients that make smaller and smaller corrections to the calculation. Compiling the tables of coefficients must have been a real pain in the butt. Refining the tables is still ongoing. The accuracy of your calculation comes down to how many times you loop through the coefficients before you decide that the computer power is better used for something else.

Nobody in their right mind would actually use all the tweaks in the ELP 2000 for anything as simple as a moon phase widget, or, for that matter, a moon landing. Along came a guy named Jean Meeus, who published a book full of handy formulas for calculating where things are going to be. He includes simplifications of the ELP 2000 (only looping through 64 iterations), and while they’re not as precise, they’re pretty damn good. I don’t have that book, either.
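Just to show the scale of the shortcut, here is about the crudest moon calculation possible, nothing like Meeus or the ELP 2000 (the reference new moon and the cosine illumination model below are stand-ins of my own, not anything from the sources mentioned above). It just counts the days since a known new moon and folds them into the average length of a lunation, which gets you the phase to within a day or so:

<?php
// Crude moon-phase estimate: days since a reference new moon, folded into
// the mean synodic month. Nowhere near Meeus or ELP 2000 accuracy.
function moon_age_days($timestamp) {
    $synodic = 29.530588853;                    // mean length of a lunation, in days
    $epoch   = gmmktime(18, 14, 0, 1, 6, 2000); // a commonly cited new moon: 2000-01-06 18:14 UTC
    $age     = fmod(($timestamp - $epoch) / 86400.0, $synodic);
    return ($age < 0) ? $age + $synodic : $age; // 0 = new, ~14.77 = full
}

function moon_illumination($ageDays) {
    // fraction of the disc lit, using a simple cosine model
    return (1 - cos(2 * M_PI * $ageDays / 29.530588853)) / 2;
}

$age = moon_age_days(time());
printf("Moon age: %.1f days, illuminated fraction: %.2f\n", $age, moon_illumination($age));

That sort of thing would be fine for a little icon, but it says nothing about where the moon actually is, which is why the real libraries are so much hairier.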

Time wasted so far: 3 hours. Completion of widget: 0%

But now my search began to bear fruit. I didn’t have Meeus’ formulas, but other people did, and had written software. I found some open-source code that implemented some of his stuff. Yay! I implemented the code, moving it from c to PHP so I could run it on my server. After a few routine hitches the code was up and running and telling me just where the moon was, relative to the Earth, accurate to a couple of arcseconds.

Time wasted so far: 6 hours. Completion of widget: 5%

Unfortunately, it didn’t tell me anything else. This particular code did not provide any information that required data about the sun — like, say, the phase of the moon. Harrumph. Back to the Internet I went. Fairly quickly I found some different code, this time in JavaScript, that also cited Meeus. It was much, much, simpler, ignoring many of the more difficult-to-calculate corrections, but I figured that the first code sample had already done most of that. It was simply a matter of adding the new code to what I already had. Naturally, despite having the same source reference, all the variable names were completely different.

After a great deal of forensics (that’s a big word for ‘wasted time’) I established which quantities I had accurate versions of and which I still needed to calculate. I got everything set up and ran some tests. The results were not good.

Time wasted so far: 12 hours. Completion of widget: 3%

I had expected some problems like this – perhaps in one body of code an angle was expressed in degrees and the other expected radians. Things like that. I started working through things. Only after another day of head-scratching did I test the code I’d based the second half of my project on. It was wrong. So there I was with Frankenstein’s monster of code sewn together from different sources, and one of the sources was broken before I even started. Sigh. Back to the drawing board.

Time wasted so far: 20 hours. Completion of widget: 2%

I should mention along in here somewhere that there are people who sell moon software for quite a bit of money. My little server could potentially put a dent in their sales by bringing accurate calculations to anyone who asks, but it’s not really the calculations they are selling so much as the application around them. I’m not too worried for them.

Back to the Web and by now I was getting better searches because I knew the key terms to look for. I found two more code examples, both of which take precision to the most extreme available. One is a complete implementation of the ELP 2000-82b. This honey consists of 36 files with tables with hundreds of rows of numbers, and a sample program in Fortran that shows how to use them. For ridiculously accurate calculations, I couldn’t do much better. But… It only calculates the position of the moon, just like the first code I implemented. I’d still need to work out the phases and whatnot.

The other code I found is based on earlier math, but really concentrates on what an observer would see from a given point on the Earth. It includes corrections for the optical effects of the atmosphere and for the friggin’ speed of light. It’s got a lot of stuff I don’t need (other planets, for instance), but it has everything I’d be looking for. The thing is, the code is horrible. It’s in c, and the writer apparently never heard of parameters or returning values. Or structs, or anything else that might help organize the information. It is impossible to read a function and know what it does or where all the numbers it uses come from. It would be a big task to translate the pieces I need, mainly because it’s very difficult to tell which pieces I need. Still, it’s an option.

Time wasted so far: 24 hours. Completion of widget: 3%

And that’s where I stand. You know, maybe I’ll wait until I’m on a boat full of moon geeks. I bet one of them even knows a Web site that gives current moon data.


What’s with all the moon stuff?

I have added a couple of widgets over in the sidebar that show the phase of the moon. Why? Because when the moon gets back to new, I’ll be somewhere in the ocean around Iwo Jima, staring straight up and burning my eyeballs as the moon passes between them and the sun. Total Eclipse of the Sun, baby, and I’ll be there!

I added two different moon phase thingies because one was more aesthetically pleasing, while the other held more cultural interest. If you hold the mouse over the Japanese characters, you will be given important information about how to carry out your day. If you can figure out what it means.

I’ll be writing more about this adventure as I gear up for the cruise. A boat full of astronomy geeks! Woo hoo!

Rant of a Geek

So I destroyed the forum at Jer’s Software Hut. By pure blind luck—the purest and blindest variety of this luck: Extra Virgin Pure Blind Luck, I made a backup two minutes before destroying the forums. I have yet to restore the forums from the backup for reasons I’m not sure of, but the data is there, and I know I will be able to pull it off eventually.

So I have this file that should restore the database to its previous condition. Groovy. Only problem is, it doesn’t work. I’ll figure it out. But that’s not my beef here. My beef is about units. The maximum size for an uploaded restore file is 102 kKiB.

How big again?

There’s been a movement afoot to try to separate the binary “thousand” from the decimal thousand. Thus a thousand meters is a kilometer (km), and 1,024 bytes is no longer a kilobyte (kB) but a kibibyte (KiB). I’m down with that. It’s a distinction I already made in my head, and now it’s codified.

But then there’s 102 kKiB. No. No, no, no. You’re working with three digits of precision here; there’s really no reason whatsoever to be mixing your numbering systems. (I’m cc’ing this message to the people at phpMyAdmin.) Why not just say 99 MiB? Every mainstream operating system reports file size in MiB (though they call it MB), so suddenly there’s no deciphering involved.
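For the record, the arithmetic: 102 kKiB = 102 × 1,000 × 1,024 bytes = 104,448,000 bytes, or roughly 99.6 MiB.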

Maybe it’s just the residual physics geek in me, but units, properly used, make things simpler. I got out of a second semester of class by unit-analyzing my way through a test. I had no idea what the question was asking, but I knew what I had and I knew what units the answer had to be in, and most of the time that and a little calculus is enough.

But that has nothing to do with my current rant. My rant is this: 102 kKiB is really effing retarded.

I feel better now.

Edited to add: Apparently, the “MB” numbers on hard drives are the absolutely retarded 1000 x 1024 Bytes, or 1000 KiB, or 1 kKiB. Even though it’s stupid, I will swallow and not be annoyed when the unit is used in direct reference to a hard drive, for truth in advertising. In the case of a file upload, there is still no excuse.

A Day of Design

I had other things I needed to do today, and the new blog sucked up WAY too much of my time. I’m working on making the new banner actually look cool, rather than merely function. It’s going… OK, I guess. I’ve already spent a long time trying to figure out colors, when I think the core problem is that the fonts just plain don’t work well together. The guest poem system is mostly done, but I don’t have it displaying the author pictures yet. There’s a bit of a problem there; for most of the poems there’s plenty of room for a picture, but there are a few poems that need a lot of space. I’ll work something out.

I also worked on the comment popup window over there. It’s not great, but it’s a heck of a lot better than it was.

Overall, what do you think? Still to come: sound effects (and a mute button), and a way to play “All for me grog!” I might sneak in a couple of other surprises, too.

Geekery: Transferring this blog from iBlog 2 to WordPress

Note: For those looking to move from iBlog 2 to WordPress, this article and some follow-up can be found at the iBlog survivors’ forum. The complete script is available there for download. You really don’t have to understand all this stuff.

I started using iBlog several years ago, when it was new and I was new to blogging. It had one advantage over other blogging packages: it came free with my .mac account back in the day and it worked on .mac servers, which are, to put it kindly, inflexible.

Two things have happened in the intervening years: first, all the blogging platforms have gotten much better, including the ability to work on the blog while offline. The second is that iBlog made an abortive step forward to iBlog 2, which was a major improvement, but then the whole company stalled before that release was really finished (although by then I was fully committed to it). I will miss iBlog 2, but not as much as I will enjoy getting my stuff onto a faster, more versatile platform.

After a rather exhaustive search of blogging and CMS systems, I settled on WordPress. While it’s not perfect, it is a straightforward MySQL-Apache-php application that is easy to fiddle with, and some of the customizations I was looking for were much easier with WordPress than with others.

WordPress has a whole bunch of tools and instructions for importing your stuff from other blog systems. None of those did me much good at all, however, as iBlog was too obscure for anyone to worry about. After searching the Internet I found some helpful information, but it all applied to iBlog 1 – most people never made the move to the ill-fated upgrade. I was pretty much on my own.

WordPress can import data in a variety of formats, but it was up to me to get the data out of iBlog in a format WordPress could understand. The most versatile format was one created by the folks at WordPress, which could include information specific to WordPress. Cool! Decision made, I was on my way.

Except… the folks at WordPress have never bothered to document the structure of their files. Apparently it’s something they’ve been meaning to get around to eventually (though the people writing translation software for the other major blogging software have long since muddled through it). I did what everyone else has had to do to export data: copy one of WordPress’s files and fiddle with it until it works. Not only is this a pain in the patoot, there might be tags that don’t appear in my examples that could nonetheless be useful to me. Oh, well.

I needed my import file to include definitions of categories, and then each of the blog entries, with correct category associations. My example file had a lot of fields that seemed redundant for my purposes, but without documentation I wasn’t going to waste time trying to figure out which tags were required and which weren’t.

Here is a very small (one episode) export file. We’ll go into the details of things like nicename later:

<rss>
<channel>
    <title>Muddled Ramblings and Half-Baked Ideas</title>
    <link>http://jerssoftwarehut.com/muddled</link>
    <description>blog!</description>
    <pubDate>Thu, 28 Jun 2007 21:32:21 +0000</pubDate>
    <generator>Jers Very Clever Script</generator>
    <language>en</language>
    <wp:wxr_version>1.0</wp:wxr_version>
    <wp:base_site_url>http://jerssoftwarehut.com/muddled</wp:base_site_url>
    <wp:base_blog_url>http://jerssoftwarehut.com/muddled</wp:base_blog_url>
 
<wp:category>
    <wp:category_nicename>bars-of-the-world-tour</wp:category_nicename>
    <wp:category_parent></wp:category_parent>
    <wp:posts_private>0</wp:posts_private>
    <wp:links_private>0</wp:links_private>
    <wp:cat_name><![CDATA[Bars of the World Tour]]></wp:cat_name>
    <wp:category_description><![CDATA[blah blah blah]]></wp:category_description>
</wp:category>
 
<item>
    <title>Delayed by Weather</title>
    <link></link>
    <pubDate>2007-03-27 18:23:57</pubDate>
    <dc:creator><![CDATA[Jerry]]></dc:creator>
    <category><![CDATA[Bars of the World Tour]]></category>
    <category domain="category" nicename="bars-of-the-world-tour"><![CDATA[Bars of the World Tour]]></category>
    <content:encoded><![CDATA[<p>The Weather Channel is calling the roads around here "a big mess", so I'm going to take time out from driving and catch up on some writing. Unfortunately, TWC is also calling for dangerous surf and "rough bar conditions". I'd better leave the laptop in my room.</p>]]></content:encoded>
    <excerpt:encoded><![CDATA[&amp;nbsp;]]></excerpt:encoded>
    <wp:post_id>1065</wp:post_id>
    <wp:post_date>2007-03-27 18:23:57</wp:post_date>
    <wp:post_date_gmt>2007-03-27 18:23:57</wp:post_date_gmt>
    <wp:comment_status>open</wp:comment_status>
    <wp:ping_status>open</wp:ping_status>
    <wp:post_name>Delayed by Weather</wp:post_name>
    <wp:status>publish</wp:status>
    <wp:post_parent>0</wp:post_parent>
    <wp:post_type>post</wp:post_type>
</item>
 
</channel>
</rss>

But how to create the file? The data for iBlog 2 is distributed over (literally) thousands of files. Writing a program to track down all the information and make sense of it would be a major chore. That’s where AppleScript came in. iBlog’s programmer took the time to provide access to the iBlog data through the Apple Scripting system. I was able to let iBlog read all of its silly scattered files and make sense of them, then provide the data to me in a coherent fashion. So far, so good. All I needed to do was loop through all the episodes, pull out the data I needed, and shovel it into a text file that WordPress could read.

[IMPORTANT NOTE: I’ve tried to go back and reconstruct the scripts as they were at the appropriate stage in development, but the snippets are untested.]

[ALSO IMPORTANT: you don’t really have to understand the code. If you are in this boat, I will help you. You should understand the challenges, but I’m here for you.]

on run
	set exportFile to 0
	try
		set exportFile to open for access "Users:JerryTi:Documents:scripts:" & niceName & ".xml" with write permission
		set eof of exportFile to 0
		tell application "iBlog" to set cats to the categories of the first blog
		repeat with cat in cats
			tell application "iBlog" to set catname to (the name of cat) as text
			set niceName to the first word of catname
			write rssHead to exportFile as «class utf8» -- xml/rss header stuff that's always the same
			set catDescription to "blah blah blah"
			-- write out the category info
			tell application "iBlog" to set nextText to "<wp:category>" & newLine & tab & "<wp:category_nicename>" & niceName & "</wp:category_nicename>" & newLine & tab & "<wp:category_parent></wp:category_parent>" & newLine & tab & "<wp:posts_private>0</wp:posts_private>" & newLine & tab & "<wp:links_private>0</wp:links_private>" & newLine & tab & "<wp:cat_name><![CDATA[" & catname & "]]></wp:cat_name>" & newLine & tab & "<wp:category_description><![CDATA[" & catDescription & "]]></wp:category_description>" & newLine & "</wp:category>" & newLine & newLine
			write nextText to exportFile as «class utf8» -- have to coerce the text from 16-bit unicode
			tell application "iBlog" to set ents to the entries of cat
			repeat with ent in ents
				-- get the stuff in iBlog's world, work with it here
				tell application "iBlog"
					set titl to (the title of ent)
					set desc to (the summary of ent)
					set bod to (the body of ent)
					set postDate to the post date of ent
				end tell
				set nextText to "<item>" & newLine & tab & "<title>" & titl & "</title>" & newLine & tab & "<link></link>" & newLine & tab & "<pubDate>" & postDate & "</pubDate>" & newLine & tab & "<dc:creator><![CDATA[Jerry]]></dc:creator>" & newLine & tab & "<category><![CDATA[" & catname & "]]></category>" & newLine & tab & "<category domain=\"category\" nicename=\"" & niceName & "\"><![CDATA[" & catname & "]]></category>" & newLine & tab & "<content:encoded><![CDATA[" & bod & "]]></content:encoded>" & newLine & tab & "<excerpt:encoded><![CDATA[" & desc & "]]></excerpt:encoded>" & newLine & tab & "<wp:post_id></wp:post_id>" & newLine & tab & "<wp:post_date>" & postDate & "</wp:post_date>" & newLine & tab & "<wp:post_date_gmt>" & postDate & "</wp:post_date_gmt>" & newLine & tab & "<wp:comment_status>open</wp:comment_status>" & newLine & tab & "<wp:ping_status>open</wp:ping_status>" & newLine & tab & "<wp:post_name>" & titl & "</wp:post_name>" & newLine & tab & "<wp:status>publish</wp:status>" & newLine & tab & "<wp:post_parent>0</wp:post_parent>" & newLine & tab & "<wp:post_type>post</wp:post_type>" & newLine & "</item>" & newLine & newLine
				write nextText to exportFile as «class utf8»
			end repeat
		end repeat
		write rssTail to exportFile as «class utf8» -- xml/rss file closing stuff
	on error errStr number errorNumber
		if exportFile is not equal to 0 then
			close access exportFile
			set exportFile to 0
		end if
		error errStr number errorNumber
	end try
	if exportFile is not equal to 0 then
		close access exportFile
		set exportFile to 0
	end if
end run

So far things are pretty simple. The script loops through the categories, and in each category it pulls out all the episodes. Only it kept stalling. It turns out that sometimes iBlog took so long to respond that the script gave up waiting. I added

with timeout of 600 seconds

at the start to make the script wait a full ten minutes for iBlog to respond. Yes, iBlog certainly is no jackrabbit of a program.

Now the program ran! The only problem is, the resulting file doesn’t work. Hm. The first thing the importer reports is that it can’t read the dates the way AppleScript formats them. So, I added a function to reformat all the dates to match the example. Then it was importing categories, but not items. Why not?

Um… actually I don’t remember the answer to that one. Let’s just say that it took a lot of fiddling and testing to get it right. Eventually, hurrah! There in my WordPress installation were episodes from iBlog.

And they looked like crap. The thing is, iBlog included unnecessary HTML tags around the blog title, excerpt, and body. It’s going to be a lot easier to clean them up now, while we’re mucking with each bit of text anyway, so back to AppleScript’s lousy string functions we go to clean up iBlog’s mess. Now, after we get all the data from iBlog, we call a series of functions to clean it all up:

set titl to stripParagraphTags(titl)
set desc to stripParagraphTags(desc)
set postDate to formatDate(postDate)
set bod to fixBlogBodyText(bod, postDate)


The actual functions are available in the attached final script.

Things are looking better, but still not very good. Much of this is due to some junk iBlog did when converting my older episodes into iBlog 2 format. One thing it did was to insert hard line breaks in the text of the blog body. No idea why. Maybe they were there all along and I had no way to see them. WordPress helpfully assumes that if you have a line break in the data it imports, you want a line break when it shows on the screen. So, every line break is replaced by a <br /> tag when imported into WordPress. This will not do. Additionally, iBlog replaced paragraph breaks </p><p> with a pair of break tags: <br /><br />. Once again, the reason for this is a mystery. The latter issue is less important, but we may as well address it while the hood is up.

Back we go into the fixBlogBodyText function, to repair more silly iBlog formatting. The resulting function looks like this:

on fixBlogBodyText(s, postDate)
	-- this assumes that if an episode is supposed to start with a div, it will have a style or class
	if (the offset of "<div>" in s) is equal to 1 then
		set s to text 6 thru (the (length of s) - 6) of s
		-- in some cases there was an extra line feed at the end of the text as well
		if the last character of s is "<" then
			set s to text 1 thru (the (length of s) - 1) of s
		end if
		set s to "<p>" & s & "</p>"
	end if
	-- clean up iBlog junk (lots of this stuff is the result of upgrading to iBlog 2 -- the conversion was not clean)
	-- replace all line breaks with spaces
	set s to replaceAll(s, "
", " ")
	-- replace all double-break tags with paragraph tags
	set s to replaceAll(s, "<br /><br />", "</p>" & newLine & "<p>")
	-- replace all old-fashioned double-break tags with paragraph tags
	set s to replaceAll(s, "<br><br>", "</p>" & newLine & "<p>")
	-- get rid of some pointless span class info
	set s to replaceAll(s, " class=\"Apple-style-span\"", "")
	return s
end fixBlogBodyText

note: replaceAll is a utility function I wrote that does pretty much what it says. You will find it in the attached source file. newLine is a variable I defined because left to its own devices AppleScript uses the obsolete Mac OS 9 line endings. What’s up with that?

At this point the text is importing mostly nicely. But wait! I was running my tests just working with one category to save time. When I looked at Allison in Anime on WordPress, some really weird things started happening. It turns out that when importing the data, you need line breaks every now and then, otherwise the importer will insert them. That would be nice to put in the documentation somewhere! In one of my episodes, the newline was inserted right in the middle of a <div> tag, which led to all kinds of trouble. So, to the above script I added a line that inserts a line break between </p><p> tags. As long as any one paragraph isn’t too long, I’ll be all right.

set s to replaceAll(s, "</p><p>", "</p>" & newLine & "<p>")

And with that, we’ve done it! We’ve written a script that will export all the data from iBlog 2 and format it in a way that WordPress can accept. Time to run it on the whole blog, go take a little break, and come back and see how things went…

Dang. Didn’t work. There’s a maximum file size for import, and my blog is too damn big. Not a huge problem, just a bit of modification to make each category a separate file. Now, at last, the data is imported, the text looks nice, and we’re ready to make the move to our new home.

Except…

The images don’t show up, and links between episodes are broken. Also, it would be nice if people could still read the old Haloscan comments. I guess we’re not done yet.

Image links were the easiest to repair. In iBlog 2 the source code always looks for the image at path /https://muddledramblings.com/wp-content/uploads/iblog/. We just have to find those links and replace them with new info. I used Automator to find all the image files in the iBlog data folders, then I copied them all up to a directory on the WordPress server, and pointed all the links there. Worked like a charm! (Icerabbit goes into more detail on that process here. I used different tools, but the process is the same.)
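One way to handle the link rewriting inside the export script would be one more replaceAll pass over the body text; the paths below are placeholders, not the real ones:

-- placeholder paths: substitute whatever prefix iBlog used and your new uploads directory
set oldImagePrefix to "../images/"
set newImagePrefix to "http://example.com/wp-content/uploads/iblog/"
set bod to replaceAll(bod, oldImagePrefix, newImagePrefix)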

Links between episodes turned out to be a lot trickier. It came down to this: How do I know what the URL of the episode is going to be when I load it into WordPress? I had to either know what the episode’s id was going to be, or I had to know what its nicename was going to be.

Nicename is a modified title that can be used in URL’s – no spaces and whatnot. “Rumblings from the Secret Labs” becomes “rumblings-from-the-secret-labs”. If I set up WordPress to use the nicename to link to an episode rather than the ID number, it would have some advantages, but I can get long-winded (have you noticed?) and that applies to my episode titles as well. The URL’s for my episodes could get really long. Therefore, I’d rather use the episode’s ID for its permalink. (If you try the icerabbit link above, you will see the nicename version of a link.)

Happily, the import file format allows me to specify the id of episodes I upload. (I don’t know what it does if there’s already an episode with that ID.) After some fiddling I managed to specify reliably what ID to give each episode. Now in my script I make a big table with the iBlog paths to each episode and the ID I will assign it. Before the main loop I have another that builds the table:

-- first loop: build a table of episode paths and the post IDs they will get in WordPress
set postID to firstPostID
set idTableRef to a reference to episodeIDTable
tell application "iBlog" to set cats to the categories of the first blog
repeat with cat in cats
	tell application "iBlog" to set catFolderName to the folder name of cat
	display dialog catFolderName
	copy {catFolderName, -1} to the end of idTableRef
	tell application "iBlog" to set ents to the entries of cat
	repeat with ent in ents
		tell application "iBlog" to set episodeFolderName to the folder name of ent
		set episodePath to catFolderName & "/" & episodeFolderName
		copy {episodePath, postID} to the end of idTableRef
		set postID to postID + 1
	end repeat
end repeat


Now it’s possible to look up the id of any episode, and build the new link. The lookup code is in the attached script, and also handles the special cases of linking to a category page and to the main page. For category pages, I just hand-built a table of the category ID’s I needed based on previous import tests.
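A bare-bones version of that lookup might look like this (a sketch only; the real function in the attached script also handles the category and main-page cases):

-- sketch: walk the table built above and return the ID for a given episode path
on idForEpisodePath(episodePath, idTable)
	repeat with pair in idTable
		if (item 1 of pair) is episodePath then return item 2 of pair
	end repeat
	return -1 -- not found
end idForEpisodePath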

Finally, there is the task of preserving the links to the old comment system. Happily, those Haloscan comments are also connected based on the file path of the episode. (Though it looks like really old comments are not accessible, anyway, which is a bummer.)

In the main loop, after the body text has been cleaned up, tack the link to Haloscan on the end, complete with hooks to allow CSS formatting:

set bod to bod & newLine & newLine & "<div class=\"jsOldCommentBlock\"><span>Legacy Comment System:</span> <a href=\"javascript:HaloScan('" & entFolder & "');\"><script type=\"text/javascript\">postCount('" & entFolder & "');</script></a></div>"

Not mentioned above are functions for logging errors and a few other utilities that are in the main script file. They should be pretty obvious. The script includes code that is specific to issues I encountered, but it should be a good start for anyone who wants to export iBlog 2 data for import into another system. It SHOULD be safe to execute on your iBlog data; it doesn’t change anything on the iBlog side of things. I don’t know if there’s anyone else in the world even using iBlog 2 anymore, but if you would like help with this script, let me know.

Is the Hut running?

Hey, can someone test these links for me?

From where I’m sitting right now, I can’t access Jer’s Software Hut or the blog construction site. I can reach everything else on the Web, so I’m wondering if my server is down or if my IP address has been blocked by my host’s security robots (again). I went to sleep with an open connection to my WordPress database and that might have triggered something. Can anyone out there load those pages and let me know? Thanks!

New Blog Design Progressing Sideways

Ambitions are skyrocketing here at the Hut as the new blog starts to take shape. Too bad you can’t see most of it. But I’d like to ask two things:

1) When you wander over there, can you tell me what you see? Some of the same CSS that kills Internet Explorer here at work is over there as well, but I think I have things constrained so that the poor software can handle it.

2) Do you see a really dumb animated header? If not, what do you see?

Behind the scenes, that dumb header is grabbing haiku from a database using XML. The perfect storm of tech and art. Best of all, some of those haiku were written in a spreadsheet.

Which, now that I think of it, leads me to another way someone can help. All the old poems in the rotation are image files. Now I need them as text. Anyone want to transcribe them? It would be a big help! Just need a nice table (or spreadsheet!) with poem, author, comment, and link, if applicable. Surely someone out there is looking for a way to contribute to the arts.

So there we have it. My head is in such a technical realm right now that I can’t even watch cartoons. I amused myself tonight with wine and the ActionScript 3.0 documentation, with brief forays into php and WordPress APIs, thinking all the while about how to tackle a page count memory leak in Jer’s Novel Writer. Yeah, I know how to party on a Saturday night.

Getting the Hut Back Up and Rolling

Um… actually two releases. The first didn’t last long.

It’s been a while since I’ve really knuckled down and worked on Jer’s Novel Writer, but after wrestling with the script to extract data from iBlog to export to WordPress, my brain has been sliding into technomode, and it was nice to work in a programming environment that was less frustrating than AppleScript. I had a version of Jer’s Novel Writer that I’d done some work on a while back, but it took a while to get myself back up to speed on just what was going on in the code.

I missed something on my first try. Happily a loyal user caught it almost right away, and one day later version 1.1.8 is out there, helping people write. Whew! Slowly things are returning to the balance I’d managed to keep for the last few years. The last few months have been… less balanced. (Obviously I’m operating in the geek hemisphere right now. No metaphors for you today!)

Meanwhile, a few days ago I got this!

Jer’s Novel Writer award
