Loving HTML5

Half of standards-making is minutiae; the other half is politics. Rightly or wrongly, I’ve long suspected that Atom was born, not of necessity, but because of conflicts between the XML crowd and the founders of RSS. Likewise, rightly or wrongly, I reckoned HTML5 was at least partly Hixie’s revenge against XHTML served as text/html.

And then a funny thing happened. Some friends and I gathered at Happy Cog’s New York studio to hash out the pros and cons of HTML5 from the perspective of semantic-markup-oriented web designers (as opposed to the equally valid perspectives of browser engineers and web application developers—the two perspectives that have primarily driven the creation of HTML 5).

Our first task was to come to a shared understanding of the spec. During the two days and nights we spent poring over new and changed semantic elements, we discovered that many things we had previously considered serious problems were fixable issues related to language.

Easy language problems

Some of these language problems are trivial indeed. For instance, on both the WHATWG and W3C sites, the specification is sometimes called “HTML 5” (with a space) and sometimes called “HTML5” (with no space). A standard should have a standard name. (Informed of this problem, Hixie has removed the space everywhere in the WHATWG version of the spec.)

Likewise, as an end-user, I found it confusing to be told that there is an “HTML5 serialization of HTML 5,” let alone an “XHTML 5 version of HTML5.” I requested that the two serializations be referred to as “HTML” and “XHTML”—emphasizing the distinction between the two kinds of syntax rather than drawing needless attention to version numbers. (Again, Hixie promptly updated the spec.)
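To see the difference in concrete terms, here is a minimal sketch of the same trivial document in each syntax; titles and content are placeholders. The HTML syntax is forgiving and is served as text/html, while the XHTML syntax must be well-formed XML and is served as application/xhtml+xml.

<!-- HTML syntax: served as text/html, parsed by the forgiving HTML parser -->
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>Example</title>
  </head>
  <body>
    <p>Hello, world.</p>
  </body>
</html>

<!-- XHTML syntax: well-formed XML, served as application/xhtml+xml -->
<html xmlns="http://www.w3.org/1999/xhtml">
  <head>
    <meta charset="utf-8" />
    <title>Example</title>
  </head>
  <body>
    <p>Hello, world.</p>
  </body>
</html>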

Names and expectations

Some language problems are tougher—but still eminently fixable, because they are language problems that mar the presentation of good ideas, not bad ideas that require rethinking.

For example, in order to choose suitable names for the new semantic elements in HTML 5, Hixie analyzed classnames on thousands of websites to see what web designers and developers were already doing. If many designers and developers use classnames like “header” and “footer” to contain certain kinds of content, then HTML 5 should use these labels, too, Hixie and his colleagues reasoned. Doing so would make the purpose of the new elements intuitively obvious to working web professionals, removing the learning curve and encouraging proper element use from the get-go.

It’s a beautiful theory that comes straight out of Bert Bos’s W3C Design Principles. There’s just one problem. Header, and especially footer, behave differently from what their names will lead web designers and developers to expect. Developers will use footer for the footer of the page—not for the footer of each section. And they will be frustrated that footer in HTML5 forbids navigation links. After all, the footer at the bottom of web pages almost always includes navigation links.

To avoid misuse and frustration, the content model of footer should change to match that of header, so that it may be used both as a template-level element (the expected use) and as a subdivision of a section (the new use). Alternatively, the element’s name should be changed (to almost anything but “footer”). Expanding the content model is clearly the better choice.
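To make the expectation concrete, here is a sketch (with placeholder links) of the page-level footer most designers will reach for: exactly the pattern the current draft’s footer content model disallows and the expanded model would permit.

<footer>
  <nav>
    <a href="/about/">About</a>
    <a href="/archive/">Archive</a>
    <a href="/contact/">Contact</a>
  </nav>
  <p>Copyright © 2009 Example, Inc.</p>
</footer>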

For the love of markup

HTML5 is unusual in many ways. Chiefly, it is the first HTML created in the time of web applications. It is also the first to be initiated outside the W3C (although it is now also developed there, in parallel).

Not surprisingly in a specification that goes on for 900 pages, there are at least a dozen places in HTML5 where a thoughtful standardista might request clarification, suggest a change, or both. My friends and I have taken a stab at this ourselves, and will soon publish our short list of recommendations and requests for clarification.

Nevertheless, the more I study the direction HTML5 is taking, the better I like it. In the words of the HTML5 Super Friends, “Its introduction of a limited set of additional semantic elements, its instructions on how to handle failure, and its integration of application development tools hold the promise of richer and more consistent user experiences, faster prototyping, and increased human and machine semantics.”

Update

[4:47 PM EST] Calling all cars! The HTML5 Super Friends declaration of support is now live, as is the Super Friends Guide to HTML5 Hiccups (i.e. our technical recommendations).

ShortURL: zeldman.com/?p=2438

Click My Lit Panel

In “New Publishing and Web Content,” a proposed panel for SXSW Interactive, I will lead book and new media publisher and entrepreneur Lisa Holton; designer, writer, and W.W. Norton creative director Mandy Brown; novelist, web geek, and Harper’s editor Paul Ford; and writer, editor, and content strategist Erin Kissane in an honest and freewheeling exploration of the creative, strategic, and marketing challenges of traditional and online publishing—and how content strategy and design can help.

Topics covered will include:

  1. What is content strategy?
  2. For magazines that are born digital, what opportunities and challenges does the internet offer editors and publishers?
  3. For traditional magazines, what opportunities and challenges does the internet offer editors and publishers?
  4. How can traditional book publishers harness the energy and talent of the online community?
  5. What new forms are made possible by the intersection of traditional publishing and social networking?
  6. How can design facilitate reading?
  7. How can design encourage readers to become writers and publishers?
  8. What is the future of magazines and newspapers?
  9. What is the future of books?
  10. How can editors and publishers survive and thrive in this new climate?

If this sounds like a panel you’d enjoy seeing, vote for New Publishing and Web Content via the SXSW Interactive Panel picker.

ShortURL: zeldman.com/x/55

Shorten this

In April of 2009, in a post every web designer, publisher, or business person should read, Joshua Schachter told how URL shortening services like TinyURL and Bit.ly came to be, and why the latest ones were so addictive. (Missing from Joshua’s account of their utility is the benefit URL shorteners can provide when sharing an otherwise obscenely long link on the printed page.)

The prescient post concludes that, despite their benefits, such services ultimately harm the web, decreasing clarity while increasing the odds of linkrot and spam:

[S]hortening services add another layer of indirection to an already creaky system. A regular hyperlink implicates a browser, its DNS resolver, the publisher’s DNS server, and the publisher’s website. With a shortening service, you’re adding something that acts like a third DNS resolver, except one that is assembled out of unvetted PHP and MySQL, without the benevolent oversight of luminaries like Dan Kaminsky and St. Postel.

There are three other parties in the ecosystem of a link: the publisher (the site the link points to), the transit (places where that shortened link is used, such as Twitter or Typepad), and the clicker (the person who ultimately follows the shortened links). Each is harmed to some extent by URL shortening.

There’s more, and you should read it all.

One of Joshua’s recommendations to minimize some of the harm is that websites do their own URL shortening instead of relying on middlemen. I’ve done some of that here, via the ShortURL plug-in for WordPress. Thus I use zeldman.com/x/48 instead of a much longer URL to notify my friends on Twitter about a new comment on an oldish thread. Likewise, zeldman.com/x/49 redirects to yesterday’s big post, “Write When Inspired.”

Rolling your own mini-URLs lessens the chance that your carefully cultivated links will rot if the third-party URL shortening site goes down or goes out of business, as is happening to tr.im, a URL shortener that is pulling the plug because it could neither monetize nor sell its service.

tr.im is now in the process of discontinuing service, effective immediately….

No business we approached wanted to purchase tr.im for even a minor amount.

There is no way for us to monetize URL shortening — users won’t pay for it — and we just can’t justify further development since Twitter has all but anointed bit.ly the market winner.

The Short URL Plugin for WordPress installs automatically. It provides simple statistics, telling you how many times a link has been clicked, sets up redirects automatically, allows you to choose a custom link style, and more. You’re not limited to shortening your own URLs, although that’s mainly how I use it; you can also shorten third-party URLs, turning your site into a mini TinyURL. I’ve used this plugin for months, with nothing but joy in its cleverness and usability.

[tags]ShortURL, plugin, WordPress, plugins, joshua schachter, tr.im, bit.ly, URL, Twitter, TinyURL, web, usability, internet, links, security[/tags]

Why Standards Fail

Back in 2000, CSS co-creator Bert Bos set out to explain the W3C’s design principles—“to make explicit what the developers in the various W3C working groups mean when they invoke words like efficiency, maintainability, accessibility, extensibility, learnability, simplicity, [and] longevity….”

Eventually published in 2003, the essay, although ostensibly concerned with explaining W3C working group principles to the uninitiated, actually articulates the key principle that separates great design from the muck we normally wade through. It also serves as a warning to Bert’s fellow W3C wizards not to seek the dark magic of abstract purity at the expense of the common good. Tragically for these wizards, and for those of us who use their technologies, it is a warning some developers of W3C specifications continue to overlook.

Design is for people

In his introduction, Bert summarizes the humanistic value that is supposed to be at the core of every web standard:

Contrary to appearances, the W3C specifications are for the most part not designed for computers, but for people. … Most of the formats are in fact compromises between human-readability and computer efficiency….

But why do we want people to read them at all? Because all our specs are incomplete. Because people, usually other people than the original developers, have to add to them….

For the same reason we try to keep the specifications of reasonable size. They must describe a useful chunk of technology, but not one that is too large for an individual to understand.

Over the succeeding 25 web pages (the article is chunked out in pamphlet-sized pages, each devoted to a single principle such as “maintainability” and “robustness”), Bert clearly, plainly, and humbly articulates a series of rather profound ideas that are key to the web’s growth and that might apply equally well to realms of human endeavor beyond the web.

For instance, in the page entitled “Use What Is There,” Bert says:

The Web now runs on HTML, HTTP and URLs, none of which existed before the ’90s. But it isn’t just because of the quality of these new formats and protocols that the Web took off. In fact, the original HTTP was a worse protocol than, e.g., Gopher or FTP in its capabilities….

And that fact shows nicely what made the Web possible at all: it didn’t try to replace things that already worked, it only added new modules, that fit in the existing infrastructure. …

And nowadays (the year 2000), it may look like everything is XML and HTTP, but that impression is only because the “old” stuff is so well integrated that you forget about it: there is no replacement for e-mail or Usenet, for JPEG or MPEG, and many other essential parts of the Web.

He then warns:

There is, unfortunately, a tendency in every standards organization, W3C not excluded, to replace everything that was created by others with things developed in-house. It is the not-invented-here syndrome, a feeling that things that were not developed “for the Web” are somehow inferior. And that “we” can do better than “them.” But even if that is true, maybe the improvement still isn’t worth spending a working group’s resources on.

Shrinkage and seduction

In his gentle way, Bert seems to be speaking directly to his W3C peers, who may not always share his and Håkon’s humanism. For, despite what designers new to CSS, struggling for the first time with concepts like “float” and the box model, may think, Bert and Håkon designed the web’s layout language to be easy to learn, teach, implement, maintain, and (eventually) extend. They also designed CSS not to overwhelm the newcomer with advanced power at the cost of profound complexity. (“CSS stops short of even more powerful features that programmers use in their programming languages: macros, variables, symbolic constants, conditionals, expressions over variables, etc. That is because these things give power-users a lot of rope, but less experienced users will unwittingly hang themselves; or, more likely, be so scared that they won’t even touch CSS. It’s a balance.”)

This striving to be understood and used by the inexperienced is the underlying principle of all good design, from the iPhone to the Eames chair. It’s what Jared Spool would call usability and you and I may consider the heart of design. When anything new is created, be it a website, a service, or a web markup language, there is a gap between what the creator knows (which is everything about how it’s supposed to work), and what you and I know (which is nothing). The goal of design is to shrink this ignorance gap while seducing us into leaping across it.

What were once vices are now habits

You can see this principle at work in CSS, whose simplicity allowed us to learn it. Although we now rail against the limitations of CSS 1 and even CSS 2.1, what we are really complaining about is the slow pace of CSS 3 and the greater slowness with which browser makers (some more than others) adopt bits of it.

Note that at one time we would have railed against browser makers who implemented parts of a specification that was still under development; now we admire them. Note, too, that it has taken well over a decade for developers to understand and browsers to support basic CSS, and it is only from the perspective of the experienced customer who craves more that advanced web designers now cry out for immediate CSS 3 adoption and chafe against the “restrictions” of current CSS as universally supported in all browsers, including IE8.

If CSS had initially offered the power, depth, and complexity that CSS 3 promises, we would still be designing with tables or Flash. Even assuming a browser had existed that could demonstrate the power of CSS 3, the complexity of the specification would have daunted everyone but Eric Meyer, had CSS 1 not come out of the gate first.

The future of the future of standards

It was the practical simplicity of CSS that enabled browser engineers to implement it and tempted designers to use (and then evangelize) it. In contrast, it was the seeming complexity and detachment from practical workaday concerns that doomed XHTML 2, while XHTML 1.0 remains a valid spec that will likely still be working when you and I have retired (assuming retirement will be possible in our lifetime—but that’s another story).

And yet, compared to some W3C specs in progress, XHTML 2 was a model of accessible, practical, down-to-earth usability.

To the extent that W3C specifications remain modular, practical, and accessible to the non-PhD in computer science, they will be adopted by browser makers and the marketplace. The farther they depart from the principles Bert articulated, the sooner they will peter out into nothingness, and the likelier we are to face a crisis in which web standards once again detach from the direction in which the web is actually moving, and the medium is given over to incompatible, proprietary technologies.

I urge everyone to read “What is a Good Standard?”, and I thank my friend Tantek for pointing it out to me.

[tags]W3C, design, principles, bertbos, maintainability, accessibility, extensibility, learnability, simplicity, specs, standards, css, markup, code, languages, web, webdesign, webstandards, webdevelopment, essays[/tags]

Web fonts, HTML 5 roundup

Over the weekend, as thoughtful designers gathered at Typecon 2009 (“a letterfest of talks, workshops, tours, exhibitions, and special events created for type lovers at every level”), the subject of web fonts was in the air and on the digital airwaves. Worthwhile reading on web fonts and our other recent obsessions includes:

Jeffrey Zeldman Questions The “EOT Lite” Web Font Format

Responding to a question I raised here in comments on Web Fonts Now, for Real, Richard Fink explains the thinking behind Ascender Corp.’s EOT Lite proposal. The name “EOT Lite” suggests that DRM is still very much part of the equation. But, as Fink explains it, it’s actually not.

EOT Lite removes the two chief objections to EOT:

  • it bound the EOT file, through rootstrings, to the domain name;
  • it contained MTX compression under patent by Monotype Imaging, licensed by Microsoft for this use.

Essentially, then, an “EOT Lite file is nothing more than a TTF file with a different file extension” (and an unfortunate but understandable name).

A brief, compelling read about a proposal that might be the key to real fonts on the web.
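If EOT Lite pans out, the linking itself stays plain @font-face. Here is a minimal sketch, with hypothetical family and file names, that would hand the EOT Lite file to Internet Explorer and the raw TrueType to other @font-face browsers:

<style>
  @font-face {
    font-family: "Example Serif"; /* hypothetical family name */
    src: url("example-serif.eot"); /* EOT Lite file, picked up by Internet Explorer */
    src: local("Example Serif"), url("example-serif.ttf") format("truetype"); /* other browsers; IE keeps the EOT above */
  }
  body { font-family: "Example Serif", Georgia, serif; }
</style>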

“Web Fonts—Where Are We?”

@ilovetypography tackles the question we’ve been pondering. After setting out what web designers want versus what type designers and foundries want, the author summarizes various new and old proposals (“I once heard EOT described as ‘DRM icing on an OpenType cake.’”), including Tal Leming and Erik van Blokland’s .webfont, which is gathering massive support among type foundries, and David Berlow’s permissions table, announced here last week.

Where does all of this net out? For @ilovetypography, “While we’re waiting on .webfont et al., there’s Typekit.”

(We announced Typekit here on the day it debuted. Our friend Jeff Veen’s company Small Batch, Inc. is behind Typekit, and Jason Santa Maria consults on the service. Jeff and Jason are among the smartest and most forward-thinking designers on the web—the history of Jeff’s achievements would fill more than one book. We’ve tested Typekit, love its simple interface, and agree that it provides a legal and technical solution while we wait for foundries to standardize on one of the proposals that’s now out there. Typekit will be better when more foundries sign on; if foundries don’t agree to a standard soon, Typekit may even be the ultimate solution, assuming the big foundries come on board. If the big foundries demur, it’s unclear whether that will spell the doom of Typekit or of the big foundries.)

The Power of HTML 5 and CSS 3

Applauding HTML 5’s introduction of semantic page layout elements (“Goodbye div soup, hello semantic markup”), author Jeff Starr shows how HTML 5 facilitates cleaner, simpler markup, and explains how CSS can target HTML 5 elements that lack classes and IDs. The piece ends with a free, downloadable goodie for WordPress users. (The writer is the author of the forthcoming Digging into WordPress.)
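The gist, sketched here with placeholder content (an illustration, not an excerpt from Starr’s article): the divs that once carried structural class names become elements in their own right, and CSS selects those elements directly. Older browsers need a nudge to treat the still-unknown elements as blocks.

<!-- HTML 4 style: div soup -->
<div class="header">…</div>
<div class="nav">…</div>
<div class="content">…</div>
<div class="footer">…</div>

<!-- HTML 5 style: semantic elements, no classes or IDs required -->
<header>…</header>
<nav>…</nav>
<article>…</article>
<footer>…</footer>

<style>
  header, nav, article, footer { display: block; } /* browsers that don't yet know these elements treat them as inline; IE also needs a small script shim */
  nav a { text-decoration: none; }
  article { width: 60%; }
</style>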

Surfin’ Safari turns up new 3-D HTML5 tricks that give Flash a run for its money

Just like it says.

Read more

  • Web Fonts Now, for Real: David Berlow of The Font Bureau publishes a proposal for a permissions table enabling real fonts to be used on the web without binding or other DRM. — 16 July 2009
  • Web Fonts Now (How We’re Doing With That): Everything you ever wanted to know about real fonts on the web, including commercial foundries that allow @font-face embedding; which browsers already support @font-face; what IE supports instead; Håkon Wium Lie, father of CSS, on @font-face at A List Apart; the Berlow interview at A List Apart; @font-face vs. EOT; Cufón; sIFR; Cufón combined with @font-face; Adobe, web fonts, and EOT; and Typekit, a new web service offering a web-only font linking license on a hosted platform. — 23 May 2009
  • HTML 5 is a mess. Now what? A few days ago on this site, John Allsopp argued passionately that HTML 5 is a mess. In response to HTML 5 activity leader Ian Hickson’s comment here that, “We don’t need to predict the future. When the future comes, we can just fix HTML again,” Allsopp said “This is the only shot for a generation” to get the next version of markup right. Now Bruce Lawson explains just why HTML 5 is “several different kind of messes.” Given all that, what should web designers and developers do about it? — 16 July 2009
  • Web Standards Secret Sauce: Even though Firefox and Opera offered powerfully compelling visions of what could be accomplished with web standards back when IE6 offered a poor experience, Firefox and Opera, not unlike Linux and Mac OS, were platforms for the converted. Thanks largely to the success of the iPhone, Webkit, in the form of Safari, has been a surprising force for good on the web, raising people’s expectations about what a web browser can and should do, and what a web page should look like. — 12 July 2009
  • In Defense of Web Developers: Pushing back against the “XHTML is bullshit, man!” crowd’s using the cessation of XHTML 2.0 activity to condescend to—or even childishly glory in the “folly” of—web developers who build with XHTML 1.0, a stable W3C recommendation for nearly ten years, and one that will continue to work indefinitely. — 7 July 2009
  • XHTML DOA WTF: The web’s future isn’t what the web’s past cracked it up to be. — 2 July 2009

[tags]@font-face, berlow, davidberlow, CSS, permissionstable, fontbureau, webfonts, webtypography, realtypeontheweb, HTML5, HTML4, HTML, W3C, WHATWG, markup, webstandards, typography[/tags]

HTML 5 is a mess. Now what?

A few days ago on this site, John Allsopp argued passionately that HTML 5 is a mess. In response to HTML 5 activity leader Ian Hickson’s comment here that, “We don’t need to predict the future. When the future comes, we can just fix HTML again,” Allsopp said “This is the only shot for a generation” to get the next version of markup right. Now Bruce Lawson explains just why HTML 5 is “several different kind of messes:”

  1. It’s a mess, Lawson says, because the process is a mess. The process is a mess, he claims, because “[s]pecifying HTML 5 is probably the most open process the W3C has ever had,” and when you throw open the windows and doors to let in the fresh air of community opinion, you also invite sub-groups with different agendas to create competing variant specs. Lawson lists and links to the various groups and their concerns.
  2. It’s a “spec mess,” Lawson continues, citing complaints by Allsopp and Matt Wilcox that many elements suffer from imprecise or ambiguous specification or from seemingly needless restrictions. (Methinks ambiguities can be resolved, and needless restrictions lifted, if the Working Group is open to honest, accurate community feedback. Lawson tells how to contact the Working Group to express your concerns.)
  3. Most importantly, Lawson explains, HTML 5 is a backward compatibility mess because it builds on HTML 4:

[I]f you were building a mark-up language from scratch you would include elements like footer, header and nav (actually, HTML 2 had a menu element for navigation that was deprecated in 4.01).

You probably wouldn’t have loads of computer science oriented elements like kbd, var, samp in preference to the structural elements that people “fake” with classes. Things like tabindex wouldn’t be there, as we all know that if you use properly structured code you don’t need to change the tab order, and accesskey wouldn’t make it because it’s undiscoverable to a user and may conflict with assistive technology. Accessibility would have been part of the design rather than bolted on.

But we know that now; we didn’t know that then. And HTML 5 aims to be compatible with legacy browsers and legacy pages. …

There was a cartoon in the ancient satirical magazine Punch showing a city slicker asking an old rural gentleman for directions to his destination. The rustic says “To get there, I wouldn’t start from here”. That’s where we are with HTML. If we were designing a spec from scratch, it would look much like XHTML 2, which I described elsewhere as “a beautiful specification of philosophical purity that had absolutely no resemblance to the real world”, and which was aborted by the W3C last week.

Damned if you do

The third point is Lawson’s key insight, for it illuminates the dilemma faced by HTML 5 or any other honest effort to move markup forward. Neither semantic purity nor fault-tolerance will do, and neither approach can hope to satisfy all of today’s developers.

A markup based on what we now know, and can now do thanks to CSS’s power to disconnect source order from viewing experience, will be semantic and accessible, but it will not be backward compatible. That was precisely the problem with XHTML 2, and it’s why most people who build websites for a living, if they knew enough to pay attention to XHTML 2, soon changed the channel.

XHTML 2 was conceived as an effort to start over and get it right. And this doomed it, because right-wing Nativists will speak Esperanto before developers adopt a markup language that breaks all existing websites. It didn’t take a Mark Pilgrim to see that XHTML 2 was a dead-end that would eventually terminate XHTML activity (although Mr Pilgrim was the first developer I know to raise this point, and he certainly looks prescient in hindsight).

It was in reaction to XHTML 2’s otherworldliness that the HTML 5 activity began, and if XHTML suffered from detachment from reality, HTML 5 is too real. It accepts sloppiness many of us have learned to do without (thereby indirectly and inadvertently encouraging those who don’t develop with standards and accessibility in mind not to learn about these things). It is a hodgepodge of semantics and tag soup, of good and bad markup practices. It embraces ideas that logically cancel each other out. It does this in the name of realism, and it is as admirable and logical for so doing as XHTML 2 was admirable and logical in its purity.

Neither ethereal purity nor benign tolerance seems right, so what’s a spec developer to do? They’re damned either way—which almost suggests that the web will be built with XHTML 1.0 and HTML 4.01 forever. Most importantly for our purposes, what are we to do?

Forward, compatibly

As the conversation about HTML 5 and XHTML has played out this week, I’ve felt like Regan in The Exorcist, my head snapping around in 360-degree arcs as one great comment cancels out another.

In a private Basecamp discussion a friend said,

Maybe I’m just confused by all the competing viewpoints, but the twisted knots of claim and counterclaim are getting borderline Lovecraftian in shape.

Another said,

[I] didn’t realize that WHATWG and the W3C’s HTML WG were in fact two separate bodies, working in parallel on what effectively amounts to two different specs [1, 2—the entire thread is actually worth reading]. So as far as I can tell, if Ian Hickson removes something from the WHATWG spec, the HTML WG can apparently reinsert it, and vice versa. [T]his… seems impossibly broken. (I originally used a different word here, but, well, propriety and all that.)

Such conversations are taking place in rooms and chatrooms everywhere. The man in charge of HTML 5 appears confident in its rightness. His adherents proclaim a new era of loaves and fishes before the oven has even finished preheating. His articulate critics convey a palpable feeling of crisis. All our hopes now hang on one little Hobbit. What do we do?

As confused as I have continually felt while surfing this whirlwind, I have never stopped being certain of two things:

  1. XHTML 1.0—and for that matter, HTML 4.01—will continue to work long after I and my websites are gone. For the web’s present and for any future you or I are likely to see, there is no reason to stop using these languages to craft lean, semantic markup. The combination of CSS, JavaScript, and XHTML 1.0/HTML 4.01 is here to stay, and while the web 10 years from now may offer features not supported by this combination of technologies, we need not fear that these technologies or sites built on them will go away in the decades to come.
  2. That said, the creation of a new markup language concerns us all, and an informed community will only help the framers of HTML 5 navigate the sharp rocks of tricky shoals. Whether we influence HTML 5 greatly or not at all, it behooves us to learn as much as we can, and to practice using it on real websites.

Read more

  • Web Fonts, HTML 5 Roundup: Worthwhile reading on the hot new web font proposals, and on HTML 5/CSS 3 basics, plus a demo of advanced HTML 5 trickery. — 20 July 2009
  • Web Standards Secret Sauce: Even though Firefox and Opera offered powerfully compelling visions of what could be accomplished with web standards back when IE6 offered a poor experience, Firefox and Opera, not unlike Linux and Mac OS, were platforms for the converted. Thanks largely to the success of the iPhone, Webkit, in the form of Safari, has been a surprising force for good on the web, raising people’s expectations about what a web browser can and should do, and what a web page should look like. — 12 July 2009
  • In Defense of Web Developers: Pushing back against the “XHTML is bullshit, man!” crowd’s using the cessation of XHTML 2.0 activity to condescend to—or even childishly glory in the “folly” of—web developers who build with XHTML 1.0, a stable W3C recommendation for nearly ten years, and one that will continue to work indefinitely. — 7 July 2009
  • XHTML DOA WTF: The web’s future isn’t what the web’s past cracked it up to be. — 2 July 2009

[tags]HTML5, HTML4, HTML, W3C, WHATWG, markup, webstandards[/tags]

HTML 5: nav ambiguity resolved

AN EMAIL from Chairman Hickson resolves an ambiguity in the nav element of HTML 5.

One of the new things HTML 5 sets out to do is to provide web developers with a standardized set of semantic page layout structures. For example, it gives us a nav element to replace structures like div class="navigation".

This is exciting, logical, and smart, but it is also controversial.

The controversy is best expressed in John Allsopp’s A List Apart article, Semantics in HTML 5, where he worries that the new elements may not be entirely forward-compatible, as they are constrained to today’s understanding of what makes up a page. An extensible mechanism, although less straightforward, would offer more room to grow as the web evolves, Allsopp argues.

We’re pretty sure Ian Hickson, the main force behind HTML 5, has heard that argument, but HTML 5 is proceeding along the simpler and more direct line of adding page layout elements. The WHAT Working Group Mr Hickson chairs has solicited designer and developer opinion on typical web page structures in order to come up with a short list of new elements in HTML 5.

nav is one of these elements, and its description in the spec originally read as follows:

The nav element represents a section of a page that links to other pages or to parts within the page: a section with navigation links. Not all groups of links on a page need to be in a nav element — only sections that consist of primary navigation blocks are appropriate for the nav element.

The perceived ambiguity was expressed by Bruce Lawson (AKA HTML 5 Doctor) thusly:

“Primary navigation blocks” is ambiguous, imo. A page may have two nav blocks; the first is site-wide navigation (“primary navigation”) and within-page links, eg a table of contents which many would term “secondary nav”.

Because of the use of the phrase “primary navigation block” in the spec, a developer may think that her secondary nav should not use a nav element.

Chairman Hickson has resolved the ambiguity by changing “primary” to “major” and by adding an example of secondary navigation using nav.
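Under the clarified wording, both of the blocks below are legitimate uses of nav: one for site-wide navigation, one for within-page navigation. (The sketch is illustrative; the links are placeholders.)

<!-- Site-wide ("major") navigation -->
<nav>
  <ul>
    <li><a href="/">Home</a></li>
    <li><a href="/articles/">Articles</a></li>
    <li><a href="/about/">About</a></li>
  </ul>
</nav>

<!-- Within-page navigation, such as a table of contents -->
<nav>
  <ul>
    <li><a href="#intro">Introduction</a></li>
    <li><a href="#history">History</a></li>
  </ul>
</nav>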

Read more

  • Web Fonts, HTML 5 Roundup: Worthwhile reading on the hot new web font proposals, and on HTML 5/CSS 3 basics, plus a demo of advanced HTML 5 trickery. — 20 July 2009
  • Web Standards Secret Sauce: Even though Firefox and Opera offered powerfully compelling visions of what could be accomplished with web standards back when IE6 offered a poor experience, Firefox and Opera, not unlike Linux and Mac OS, were platforms for the converted. Thanks largely to the success of the iPhone, Webkit, in the form of Safari, has been a surprising force for good on the web, raising people’s expectations about what a web browser can and should do, and what a web page should look like. — 12 July 2009
  • In Defense of Web Developers: Pushing back against the “XHTML is bullshit, man!” crowd’s using the cessation of XHTML 2.0 activity to condescend to—or even childishly glory in the “folly” of—web developers who build with XHTML 1.0, a stable W3C recommendation for nearly ten years, and one that will continue to work indefinitely. — 7 July 2009
  • XHTML DOA WTF: The web’s future isn’t what the web’s past cracked it up to be. — 2 July 2009


Sour Outlook

It’s outrageous that the CSS standard created in 1996 is not properly supported in Outlook 2010. Let’s do something about it.

Hundreds of millions use Microsoft Internet Explorer to access the web, and Microsoft Outlook to send and receive email. As everyone reading this knows, the good news is that in IE8, Microsoft has released a browser that supports web standards at a high level. The shockingly bad news is that Microsoft is still using the Word rendering engine to display HTML email in Outlook 2010.

What does this mean for web designers, developers, and users? In the words of the “Let’s Fix It” project created by the Email Standards Project, Campaign Monitor, and Newism, it means exactly this:

[F]or the next 5 years your email designs will need tables for layout, have no support for CSS like float and position, no background images and lots more. Want proof? Here’s the same email in Outlook 2000 & 2010.

It’s difficult to believe that in 2009, after diligently improving standards support in IE7 and now IE8, Microsoft would force email designers to use nonsemantic table layout techniques that fractured the web, squandered bandwidth, and made a joke of accessibility back in the 1990s.
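The practical upshot, sketched here with placeholder content (an illustration, not an excerpt from the project’s examples): the float-based layout that works in web browsers has to become a presentational table to survive Outlook’s Word engine.

<!-- Standards-based layout: fine in web browsers, broken in Outlook 2007/2010 -->
<div style="float: left; width: 180px;">Sidebar</div>
<div style="margin-left: 200px;">Body copy</div>

<!-- The fallback Outlook forces on HTML email -->
<table width="600" cellpadding="0" cellspacing="0" border="0">
  <tr>
    <td width="180" valign="top">Sidebar</td>
    <td width="420" valign="top">Body copy</td>
  </tr>
</table>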

Accounting for stupidity

For a company that claims to believe in innovation and standards, and has spent five years redeeming itself in the web standards community, the decision to use the non-standards-compliant, decades-old Word rendering engine in the mail program that accompanies its shiny standards-compliant browser makes no sense from any angle. It’s not good for users, not good for business, not good for designers. It’s not logical, not on-brand, and the very opposite of a PR win.

Rumor has it that Microsoft chose the Word rendering engine because its Outlook division “couldn’t afford” to pay its browser division for IE8. And by “couldn’t afford” I don’t mean Microsoft has no money; I mean someone at this fabulously wealthy corporation must have neglected to budget for an internal cost. Big companies love these fictions where one part of the company “pays” another, and accountants love this stuff as well, for reasons that make Jesus cry out anew.

But if the rumor’s right, and if the Outlook division couldn’t afford to license the IE8 rendering engine, there are two very simple solutions: use Webkit or Gecko. They’re both free, and they both kick ass.

Why it matters

You may hope that this bone-headed decision will push millions of people into the warm embrace of Opera, Safari, Chrome, and Firefox, but it probably won’t. Most people, especially most working people, don’t have a choice about their operating system or browser. Ditto their corporate email platform.

Likewise, most web designers, whether in-house, agency, or freelance, are perpetually called upon to create HTML emails for opt-in customers. As Outlook’s Word rendering engine doesn’t support the most basic CSS layout tools such as float, designers cannot use our hard-won standards-based layout tools in the creation of these mails—unless they and their employers are willing to send broken messages to tens of millions of Outlook users. No employer, of course, would sanction such a strategy. And this is precisely how self-serving decisions by Microsoft profoundly retard the adoption of standards on the web. Even when one Microsoft division has embraced standards, actions by another division ensure that millions of customers will have substandard experiences and hundreds of thousands of developers still won’t get the message that our medium has standards which can be used today.

So it’s up to us, the community, to let Microsoft know how we feel.

Participate in the Outlook’s Broken project. All it takes is a tweet.

[tags]browsers, bugs, IE8, outlook, microsoft, iranelection[/tags]

NSFW tag in HTML 5

A “Not Safe for Work” Tag has been proposed for HTML 5:

One of the most common descriptive notes people have to write using text when they post links or images to blogs, comments or anywhere in HTML is to say “this link is not safe for work” or simply “NSFW”. By adding the <NSFW> tag, this could be made much simpler and standardized. Browsers could then have an option to automatically hide all <NSFW> content. A tag is preferred to an attribute since it could then also be used around content and not just links.

Examples:
<nsfw><a href="http://www.example.com">Pics here!</a></nsfw>
<nsfw><img src="badkitten.jpg"></nsfw>

(Via Bruce Lawson)

Drew McLellan of The Web Standards Project thinks it’s a nice idea that won’t work:

@brucel we looked into #nsfw in microformats. It’s an unworkable minefield. #

it’s used when linking to something that you might want to save until you get home. e.g. http://ampleboobies.info (NSFW) #

So a browser could conceivably be configured not to follow links or display content tagged nsfw. Sounds a good idea, but unworkable. #

The use of tags (rather than CSS and JavaScript) to hide or show content is an intriguing and controversial aspect of HTML 5. It’s intriguing because using a standard tag—instead of writing custom CSS and JavaScript that someone else may someday have to maintain—potentially simplifies web development and maintenance, bringing advanced techniques of content presentation to more sites for less money. It’s controversial because it sticks presentation and behavior back in markup, after we all just spent a decade separating site structure and semantics from behavior and presentation.
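For contrast, here is roughly what today’s separation-of-concerns approach looks like next to the proposed element. The class name, selector, and toggling mechanism below are illustrative, not drawn from any spec:

<!-- Today: presentation and behavior live outside the markup -->
<a href="http://www.example.com" class="nsfw">Pics here!</a>

<style>
  body.hide-nsfw .nsfw { display: none; } /* toggled by script or a user preference */
</style>

<!-- Proposed: the markup itself carries the flag, and the browser decides -->
<nsfw><a href="http://www.example.com">Pics here!</a></nsfw>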

We’re going to be following these developments and trying to make buzzword-free sense of them for you.

[tags]standards, webstandards, HTML, HTML5, tags, NSFW, W3C[/tags]