Forward to the Past

I’ve spent the last few days learning Ember, which is a software framework for making apps that run in your browser. It’s fun to learn new things, and it has been fun to learn Ember, which to me is a less-awful-than-most javascript framework.

(Things are going to be technical for a bit; please stand by for the rant that is the foundation of this episode — which is also technical.)

The good news: I have never taken a tutorial for any framework on any platform that put testing right up front the way Ember’s did. That is magnificent. The testing facilities are extensive, and showcasing them in the tutorial can only help new adopters understand their value. Put the robots to work finding bugs!
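To give a flavor of what the tutorial walks you through, here is a minimal rendering test in the usual Ember style. This is only a sketch: the Greeting component and its expected output are invented for illustration, but the test harness pieces (qunit, ember-qunit, @ember/test-helpers, ember-cli-htmlbars) are the standard ones.

    // tests/integration/components/greeting-test.js
    import { module, test } from 'qunit';
    import { setupRenderingTest } from 'ember-qunit';
    import { render } from '@ember/test-helpers';
    import { hbs } from 'ember-cli-htmlbars';

    module('Integration | Component | greeting', function (hooks) {
      setupRenderingTest(hooks);

      test('it greets the name it is given', async function (assert) {
        // Render the (hypothetical) component in isolation and check the DOM.
        await render(hbs`<Greeting @name="Ada" />`);
        assert.strictEqual(this.element.textContent.trim(), 'Hello, Ada');
      });
    });

The robots run something like this on every change, and the tutorial makes sure you know it from day one.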

Also good news: Efficient route handling. Nested routes that efficiently know which parts of the page need to be redrawn, while providing a bookmarkable URL for any given state, are pretty nice.
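As a rough sketch of what that looks like (the route names here are made up for illustration, and the app name is assumed), the router maps nested routes onto nested chunks of the page:

    // app/router.js -- a minimal sketch, assuming an app named "my-app"
    import EmberRouter from '@ember/routing/router';
    import config from 'my-app/config/environment';

    export default class Router extends EmberRouter {
      location = config.locationType;
      rootURL = config.rootURL;
    }

    Router.map(function () {
      // /projects lists everything; /projects/42 nests a detail view
      // inside it. Moving from /projects/42 to /projects/43 only
      // re-renders the nested outlet, and every state has its own URL.
      this.route('projects', function () {
        this.route('project', { path: '/:project_id' });
      });
    });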

But… I’m still writing html and css shit. WTF?

Yeah, baby, it’s ranting time.

Let’s just start with this: HTML is awful. It is a collection of woefully-shortsighted and often random decisions that made developing useful Web applications problematic. But if your app is to work in a browser, it must generate HTML. Fine. But that shouldn’t be my problem anymore.

When I write an application that will run on your computer or your phone or whatever, I DON’T CARE how the application draws its stuff on the screen. It’s not important to me. I say, at a very high level, that I expect text in a particular location, that it will have a certain appearance based on its role in my application, and that if something changes the text will update. That’s all.

I don’t want to hear about html tags. Tags are an implementation detail that the framework should take care of if my application is running in a browser. Tags are the HOW of my text appearing where I want it. I DON’T CARE HOW. Just do it!
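To be fair, Ember gets partway there on the “if something changes, the text will update” part. A minimal sketch, with the component name and field invented for illustration: you declare the state, and whatever displays it redraws on its own.

    // app/components/status-line.js -- hypothetical component
    import Component from '@glimmer/component';
    import { tracked } from '@glimmer/tracking';
    import { action } from '@ember/object';

    export default class StatusLineComponent extends Component {
      // Declare the data; the framework handles redrawing whatever shows it.
      @tracked status = 'idle';

      @action
      markBusy() {
        this.status = 'busy'; // anything rendering this.status updates itself
      }
    }

But the template that displays it is still something like <p class="status">{{this.status}}</p>. Which is to say: still HTML, still my problem.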

When I came to work in my current organization, the Web clients of all our applications were built with a homegrown library called Maelstrom. It was flawed in many ways, being the product of two programmers who also had to get their projects done, neither of whom was well-suited for the task of ground-up framework design (in their defense, the people who invented HTML were even less qualified). But Maelstrom had that one thing. It had the “you don’t have to know how browsers work, just build your dang application” ethos.

There was work to be done. But with more love and a general overhaul of the interfaces of the components, it could have been pretty awesome.

Why don’t other javascript libraries adopt that approach? I think it’s because they are made by Web developers who, to get where they are, have learned all the HTML bullshit until it is second nature. The HOW has been part of their lives since they were script-kiddies. It’s simply never occurred to them that the HOW should not be something the app developer has to think about.

There have been exceptions — SproutCore comes to mind — but I have to recognize that I am a minority voice. Dealing with presentation minutiae is Just Part of the Job for most Web client developers. They haven’t been spoiled by the frameworks available on every other platform that take care of that shit.

My merry little band of engineers has moved on from Maelstrom, mainly because something like that is a commitment, and we are few, and we wanted to be able to leverage the efforts of other people in the company. So our tiny group has embraced Ember, and on top of that a huge library of UI elements that fit the corporate standards.

It’s good mostly, and the testing facilities are great. Nothing like that in Maelstrom! But here I am, back to dealing with fucking HTML and CSS.


4 thoughts on “Forward to the Past”

  1. I suspect that the reversion to HTML is also a response to impossible expectations from users. If you have an interface that lets you put this element here, and that element there, and align this other element at the bottom of the screen, users will complain that, hey, it works on my 27-inch screen, why doesn’t it work when I view it from home on my iPhone?

    The answer being, of course, that you set it to be 13 inches apart, and your iPhone isn’t that wide.

    Requiring coding in HTML highlights that you’re not telling the browser where to put something; you’re telling it, more or less, what the purpose of the markup is.

    Apple does a pretty good job of this in their UI tools for designing interfaces on the iPhone/iPad, but even then there comes a point where I can’t see why this element isn’t aligning the way I want it to, and maybe I should just hard-code its position depending on whether it’s being run on my iPhone or my iPad, and screw everyone else’s. Which is easier for me, since I’m only writing for my iPhone or iPad anyway, but it does mean recoding when I get a new device.

    • Those are good observations. I maintain software that was written by people with big screens, and it’s painful watching people try to use those applications on their smaller laptops. Giant sections at the top and left of the application are simply not interesting once you get where you are going, but there they are, crowding out the useful areas. iPad? Don’t bother.

      Layout guidelines rather than absolute layout can be important, but even those are expressed so badly in html/css that I find myself making float-vs-flex decisions that may differ from browser to browser, when all I want is a split pane. I should be able to say, “make a resizable split pane and put this on one side and that on the other,” and have the css and javascript details that make that happen on a given browser stay invisible.

      As I dig deeper into the in-house UI component library (one of them — there’s one for every major javascript client) I see glimmers of that (but no split pane). So there’s hope.

      But the bottom line is I should say “split pane” and the application framework generates the nonsensical instructions to make that happen. (There’s a rough sketch of what I mean at the end of this thread.)

  2. This holy grail went out the window when (a) designers decided that letting arbitrarily-sized info flow into available space and letting users (especially those with special needs) decide on formatting was gauche and (b) the browser manufacturers not only couldn’t agree on standards but also wanted browser-specific features AND to accommodate crappy code. What you’re asking for is basically impossible, particularly in a responsive environment. I can’t tell you how many times a customer has asked for a “simple” change and I have to explain that it breaks everything in one enviro and/or is internally contradictory in another and/or really ought to be tested in more than just the phone they attempt to do all their work in.

    • I’ve been noodling on a reply to this for a while, but the bottom line is that even if that bullshit is going on, there’s no reason my framework has to burden me with the nuts and bolts of that bullshit. Not only do I have to deal with the nonsense, I have to deal with the HOW of the nonsense. There is seriously no reason the silly shit can’t be abstracted away from me, leaving me to deal only with the different responsive options.

      As Jerry S. pointed out above, the code frameworks for iOS do a moderately competent job of doing that. It should not matter to me in the slightest whether I’m writing for iOS, Android, or Web. The problems are the same. The abstractions are the same. The implementation matters to me not at all.
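      To make the split-pane point concrete, here is roughly the call site I want to write. None of this exists in Ember today; <SplitPane>, <ProjectTree>, and <ProjectDetail> are hypothetical components, sketched only to show the shape of the thing.

        {{!-- Wished-for usage, not a real Ember primitive. The framework
              (or its component library) would own the flexbox-vs-float
              decision, the drag handle, and all the per-browser css/js. --}}
        <SplitPane @direction="horizontal" @initialRatio={{0.3}}>
          <:left><ProjectTree /></:left>
          <:right><ProjectDetail /></:right>
        </SplitPane>

      Everything underneath that bit of template should be the framework’s problem, not mine.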
