Journal tags: hosting

CSS Day 2024

My stint as one of the hosts of CSS Day went very well indeed. I enjoyed myself and people seemed to like the cut of my jib.

During the event there was a real buzz on Mastodon, which was heartening to see. I was beginning to worry that hashtagging events was going to be collateral damage from Elongate, but there was plenty of conference-induced FOMO to be experienced on the fediverse.

The event itself was, as always, excellent. Both in terms of content and organisation.

Some themes emerged during CSS Day, which I always love to see. These emergent properties are partly down to curation and partly down to serendipity.

The last few years of CSS Day have felt like getting a firehose of astonishing new features being added to the language. There was still plenty of cutting-edge stuff this year—masonry! anchor positioning!—but there was also a feeling of consolidation, asking how to get all this amazing new stuff into our workflows.

Matthias’s opening talk on day one and Stephen’s closing talk on the same day complemented one another perfectly. Both managed to inspire while looking into the nitty-gritty practicalities of the web design process.

It was, astoundingly, Matthias’s first ever conference talk. I have no doubt it won’t be the last—it was great!

I gave Stephen a good-natured roast in my introduction, partly because it was his birthday, partly because we’re old friends, but mostly because it was enjoyable for me to watch him squirm. Of course his talk was, as always, superb. Don’t tell him, but he might be one of my favourite speakers.

The topic of graphic design tools came up more than once. It’s interesting to see how the issues with them have changed. It used to be that design tools—Photoshop, Sketch, Figma—were frustrating because they were writing cheques that CSS couldn’t cash. Now the frustration is the exact opposite. Our graphic design tools aren’t capable of the kind of fluid declarative design we can now accomplish in web browsers.

But the biggest rift remains not with tools or technologies, but with people and mindsets. Our tools can reinforce mindsets but the real divide happens in how different people approach CSS.

Both Josh and Kevin get to the heart of this in their tremendous tutorials, and that was reflected in their talks. They showed the difference between having the bare minimum understanding of CSS in order to get something done as quickly as possible, and truly understanding how CSS works in order to open up a world of possibilities.

For people in the first category, Sarah Dayan was there to sing the praises of utility-first CSS AKA atomic CSS. I commend her bravery!

During the Q&A, I restrained myself from being too Paxmanish. But I did have l’esprit d’escalier afterwards when I realised that the entire talk—and all the answers afterwards—depended on two mutually incompatible claims:

  1. The great thing about atomic CSS is that it’s a constrained vocabulary so your team has to conform, and
  2. The other great thing about it is that it’s utility-first, not utility-only, so you can break out of it and use regular CSS if you want.

Insert .gif of character from The Office looking to camera.

Most of the questions coming in during the Q&A reflected my own take: how about we use utility classes for some things, but not all things. Seems sensible.

Anyway, regardless of what I or anyone else thinks about the substance of what Sarah was saying, there was no denying that it was a great presentation. They were all great presentations. That’s unusual, and I say that as a conference organiser as well as an attendee. Everyone brings their A-game to CSS Day.

Mind you, it is exhausting. I say it every year, but it always feels like one talk too many. Not that any individual talk wasn’t good, but the sheer onslaught of deep dives into the innards of CSS has my brain exploding before the day is done.

A highlight for me was getting to introduce Fantasai’s talk on the design principles of CSS, which was right up my alley. I don’t think most people realise just how much we owe her for her years of work on standards. The web would be in a worse place without the Herculean work she’s done behind the scenes.

Another highlight was getting to see some of the students I met back in March. They were showing some of their excellent work during the breaks. I find what they’re doing just as inspiring as the speakers on stage.

In fact, when I was filling in the post-conference feedback form, there was a question: “Who would you like to see speak at CSS Day next year?” I was racking my brains because everyone I could immediately think of has already spoken at some point. So I wrote, “It would be great to see some of those students speaking about their work.”

I think it would be genuinely fascinating to get their perspective on what we consider modern CSS, which to them is just CSS.

Either way, I’ll be back next year for sure.

It’s funny, but usually when a conference is described as “inspiring” it’s because it’s tackling big galaxy-brain questions. But CSS Day is as nitty-gritty as it gets and I found it truly inspiring. Like, I couldn’t wait to open up my laptop and start writing some CSS. That kind of inspiring.

Hosting

I haven’t spoken at any conferences so far this year, and I don’t have any upcoming talks. That feels weird. I’m getting kind of antsy to give a talk.

I suspect my next talk will have something to do with HTML web components. If you’re organising an event and that sounds interesting to you, give me a shout.

But even though I’m not giving a conference talk this year, I’m doing a fair bit of hosting. There was the lovely Patterns Day back in March. And this week I’m off to Amsterdam to be one of the hosts of CSS Day. As always, I’m very much looking forward to that event.

Once that’s done, it’ll be time for the biggie. UX London is just two weeks away—squee!

There are still tickets available. If you haven’t got yours yet, I highly recommend getting it before midnight on Friday—that’s when the regular pricing ends. After that, it’ll be last-chance passes only.

How green is my server?

The Session does very well in terms of performance. You can see the data from the Chrome UX Report (CrUX).

What’s good for performance is good for the environment. Sure enough, The Session gets a very high score from the website carbon calculator:

Hurrah! This web page achieves a carbon rating of A+

This is cleaner than 99% of all web pages globally

But under the details about hosting it says:

Oh no, it looks like this web page uses bog standard energy

The Session is hosted on DigitalOcean, who tend to be quite tight-lipped about their energy suppliers. Fortunately others have done some sleuthing to figure out which facilities are running on green energy.

One of the locations to get the green thumbs up is the Amsterdam facility housed by Equinix. That’s where The Session is hosted.

I’m glad that I was able to find out that the site is running on 100% renewable energy, but I wish I didn’t need to go searching to find this out. DigitalOcean need to be a lot more transparent about the energy sources for their hosting facilities.

Junevents

Every week of June sees me at a web event, but in a different capacity each time.

At the end of the first full week in June, I went to CSS Day in Amsterdam as an attendee. It was thought-provoking, as always. And it was great to catch up with my front-of-the-front-end friends.

Last week I went to Pixel Pioneers in Bristol as a speaker. Fortunately I was on first so I was able to get the speaking done with and enjoy the rest of the talks. It was a lovely little event and there was yet more catching up with old friends and making new ones.

This week is the big one: UX London. This time I’m not there as an attendee or a speaker. I’m there as the curator and host.

On the one hand, I’m a bag of nerves. I’ve been preparing for this all year and now it’s finally happening. I keep thinking of all the things that could possibly go wrong.

On the other hand, I’m ridiculously excited. I know I should probably express some modesty, but looking at the line-up I’ve assembled, I feel an enormous sense of pride. I’m genuinely thrilled at the prospect of all those great talks and workshops.

Nervous and excited. Those are the two wolves inside me right now.

If you’re going to be at UX London, I hope that you’re equally excited (and not nervous). There are actually still some last-minute tickets available if you haven’t managed to get one yet.

See you there!

Hosting DIBI

I was up in Edinburgh for the past few days at the Design It; Build It conference.

I was supposed to come back on Saturday but then the train strikes were announced so I changed my travel plans to avoid crossing a picket line, which gave me an extra day to explore Auld Reekie.

I spoke at DIBI last year so this time I was there in a different capacity. I was the host. That meant introducing the speakers and asking them questions after their talks.

I’m used to hosting events now, what with UX London and Leading Design. But I still get nervous beforehand. At least with a talk you can rehearse and practice. With hosting, it’s all about being nimble and thinking on your feet.

I had to pay extra close attention to each talk, scribbling down potential questions to ask. It’s similar to the feeling I get when I’m liveblogging talks.

There were some line-up changes and schedule adjustments along the way, but everything went super smoothly. I pride myself on running a tight ship so the timings were spot-on.

When it came to the questions, I tried to probe under the skin of each presentation. For some talks, that involved talking shop—the finer points of user research or the design process, say. But for the big-picture talks, I made sure to get each speaker to defend their position. So after Dan Makoski’s kumbaya-under-capitalism talk, I gave him a good grilling. Same with Philip Lockwood-Holmes who gave me permission beforehand to be merciless with him.

It was all quite entertaining. Alas, I think I may have put the fear of God into the other speakers who saw me channeling my inner Jeremy Paxman. But they needn’t have worried. I also lobbed some softballs. Like when I asked Levon Sharrow from Patagonia if there was such a thing as ethical consumption under capitalism.

I had fun, but I was also aware of that fine line between being clever and being an asshole. Even though part of my role was to play devil’s advocate, I tried to make sure I was never punching down.

All in all, an excellent couple of days spent in good company.

Hosting was hard work, but very rewarding. I’ve come to realise it’s one of those activities that comes relatively easy to me, but it is very hard (and stressful) for others. And I’m pretty gosh-darned good at it too, false modesty be damned.

So if you’re running an event but the thought of hosting it fills you with dread, we should talk.

Brandolini’s blockchain

I’ve already written about how much I enjoyed hosting Leading Design San Francisco last week.

All the speakers were terrific. Lola’s talk was particularly …um, interesting:

In this talk, Lola will share her adventures in the world of blockchain, the hostility she experienced in her first go-round in 2018, and why she’s chosen to head back to a technology that is going through its largest reputational and social crisis to date.

Wait …I was supposed to stand on stage and introduce a talk that was (at least partly) about blockchain? I have opinions.

As it turned out, Lola warned me that I’d be making an appearance in her talk. She was going to quote that blog post. Before the talk, I asked her how obnoxious I could be about blockchain in her intro. She told me to bring it.

So in the introduction, I deployed all the sarcasm I had in me and said:

Listen, we designers have a tendency to be over-critical of things sometimes. There are all these ideas that we dismiss: phrenology, homeopathy, flat-earthism …blockchain. Haters gonna hate.

I remember somebody asking online a while back, “Why the hate for web3?” And someone I know responded by saying “We hate it because we understand it.” I think there’s a lot of truth to that.

But look, just because blockchains are powering crypto ponzi schemes and N F fucking Ts, it’s worth remembering that it’s also simply a technology. It’s a technological solution in search of a problem.

To be fair, it’s still early days. After all, it’s only been over a decade now.

It’s like the law of the instrument says: when all you have is a hammer, everything looks like a nail. Blockchain is like that. Except the hammer is also made of glass.

Anyway, Lola is going to defend the indefensible and talk about blockchain. One thing to keep in mind is this: remember when everyone was talking about “The Cloud”? And then it turned out that you could substitute the phrase “someone else’s server” for “The Cloud?” Well, every time you hear Lola say the word “blockchain”, I’d like you to mentally substitute the phrase “multiple copies of a spreadsheet.”

Please give an open mind and a warm welcome to Lola Oyelayo Pearson!

I got some laughs. I also got lots of gasps and pearl-clutching, as though I were saying something taboo. Welcome to San Francisco.

Lola gave as good as she got. I got a roasting in her talk.

And just to clarify, Lola and I are friends—this was a consensual smackdown.

There was a very serious point to Lola’s talk. Cryptobollocks and other blockchain-powered schemes have historically been very bro-y, and exploitative of non-bro communities. Lola wants to fight that trend.

I get it. But it reminds me a bit of the justifications you hear from people who go to work at Facebook claiming that they can do more good from the inside. Whatever helps you sleep at night.

The crux of Lola’s belief is this: blockchain technology is inevitable, therefore it is incumbent on us as ethical designers to ensure that the technology is deployed in a way that empowers people instead of exploiting them.

But I take issue with the premise. Blockchain technology is not inevitable. That’s the worst kind of technological determinism. It’s defeatist. It’s a depressing view of “progress” driven not by people, but by technological forces beyond our control.

I refuse to accept that anti-humanist deterministic view.

In any case, for technological determinism to have any validity, there needs to be something to it. At least virtual reality and machine learning are based on some actual technologies. In the case of cryptobollocks, there is no there there. There is nothing except the hype, which is why you’ll see blockchain enthusiasts trying to ride the coattails of trending technologies in a logical fallacy that goes something like this:

  1. There are technologies that will be really big in the future,
  2. blockchain is a technology, therefore
  3. blockchain will be really big in the future.

Blockchain is bullshit. It isn’t even very clever bullshit. And it certainly isn’t inevitable.

Change

I’ve spent the last few days in San Francisco where I was hosting Leading Design.

It was excellent. Rebecca did an absolutely amazing job with the curation, and the Clearleft team delivered a terrific event, as always. I’m continually amazed by the way such a relatively small agency can punch above its weight when it comes to putting on world-class events and delivering client work.

I won’t go into much detail on what was shared at Leading Design. There’s an understanding that it’s a safe space for people to speak freely and share their experiences in an open and honest way. I can tell you that there were some tough topics. Given the recent rounds of layoffs in this neck of the woods, this was bound to happen.

I was chatting with Peter at breakfast on the second day and he was saying that maybe there was too much emphasis on the negative, like we were in danger of wallowing in our own misery. It’s a fair point, but I offered a counterpoint that I also heard other people express: when else do these people get a chance to let their guard down and have a good ol’ moan? These are design leaders who need to project an air of calm reassurance when they’re at work. Leading Design is a welcome opportunity to just let it all out.

When we did Leading Design in New York in March of 2022, it was an intimate gathering and the overwhelming theme was togetherness. After two years of screen-based interactions, it was cathartic to get together in the same location to swap stories and be reminded you are not alone.

Leading Design San Francisco was equally cathartic, but the theme this time was change. Change can be scary. But it can also be energising.

After two days of introducing and listening to fascinating talks on the topic of change, I closed out my duties by quoting the late great Octavia Butler. I spoke the mantra of the secular Earthseed religion founded in Parable Of The Sower:

All that you touch
You Change.

All that you Change
Changes you.

The only lasting truth
Is Change.

God
Is Change.

2022

This time last year when I was looking back on 2021, I wrote:

2020 was the year of the virus. 2021 was the year of the vaccine …and the virus, obviously, but still it felt like the year we fought back. With science!

Science continued to win the battle in 2022. But it was messy. The Situation isn’t over yet, and everyone has different ideas about the correct levels of risk-taking.

It’s like when you’re driving and you think that everyone going faster than you is a maniac, and everyone going slower than you is an idiot.

The world opened up more in 2022. I was able to speak at more in-person events. I really missed that. I think I’m done with doing online talks.

There was a moment when I was speaking at Web Dev Conf in Bristol this year (a really nice little gathering), and during my presentation I was getting that response from the audience that you just don’t get with online talks, and I distinctly remember thinking, “Oh, I’ve really missed this!”

But like I said, The Situation isn’t over, and that makes things tricky for conferences. Most of the ones I spoke at or attended were doing their best to make things safe. CSS Day, Clarity, State Of The Browser: they all took measures to try to look out for everyone’s health.

For my part, I asked everyone attending dConstruct to take a COVID test the day before. Like I said at the time, I may have just been fooling myself with what might have been hygiene theatre, but like those other events, we all wanted to gather safely.

That can’t be said for the gigantic event in Berlin that I spoke at in the summer. There were tens of thousands of people in the venue. Inevitably, I—and others—caught COVID.

My bout of the ’rona wasn’t too bad, and I’m very glad that I didn’t pass it on to any family members (that’s been my biggest worry throughout The Situation). But it did mean that I wasn’t able to host UX London 2022.

That was a real downer. I spent much of 2022 focused on event curation: first UX London, and then dConstruct. I was really, really proud of the line-up I assembled for UX London so I was gutted not to be able to introduce those fabulous speakers in person.

Still, I got to host dConstruct, Leading Design, and Clarity, so 2022 was very much a bumper year for MCing—something I really, really enjoy.

Already I’ve got more of the same lined up for the first half of 2023: hosting Leading Design San Francisco in February and curating and hosting UX London in June.

I hope to do more speaking too. Alas, An Event Apart is no more, which is a real shame. But I hope there are other conferences out there that might be interested in what I have to say. If you’re organising one, get in touch.

Needless to say, 2022 was not a good year for world events. The callous and cruel invasion of Ukraine rightly dominated the news (sporting events and dead monarchs are not the defining events of the year). But even in the face of this evil, there’s cause for hope, seeing the galvanised response of the international community in standing up to Putin the bully.

In terms of more personal bad news, Jamie’s death is hard to bear.

I got to play lots of music in 2022. That’s something I definitely want to continue. In fact, 2023 kicked off with a great kitchen session yesterday evening—the perfect start to the year!

And I’ve got my health. That’s something I don’t take for granted.

One year ago, I wrote:

Maybe 2022 will turn out to be similar—shitty for a lot of people, and mostly uneventful for me. Or perhaps 2022 will be a year filled with joyful in-person activities, like conferences and musical gatherings. Either way, I’m ready.

For the most part, that played out. 2022 was thankfully fairly uneventful personally. And it was indeed a good year for in-person connections. I very much hope that continues in 2023.

Leading Design San Francisco 2023

My upcoming appearance at An Event Apart next week to talk about declarative design isn’t the only upcoming trip to San Francisco in my calendar.

Two months from today I’ll be back in San Francisco for Leading Design. It’s on February 7th and 8th.

This event is long overdue. We’ve never had Leading Design in San Francisco before, but we were all set to go ahead with the inaugural SF gathering …in March 2020. We all know what happened next.

So this event will be three years in the making.

Rebecca is doing amazing work, as usual, putting together a fantastic line-up of speakers:

They’ll be sharing their insights, their stories and their ideas — as well as some of their pain from past challenges. It’s all designed to help you navigate your own leadership journey.

I’ll be there to MC the event, which is a great honour for me. And I reckon I’ll be up to the challenge, having just done the double whammy of hosting Leading Design London and Clarity back-to-back.

I would love to see you in San Francisco! If you’ve attended a Leading Design event before, then you know how transformational it can be. If you haven’t, then now is your chance.

Early bird tickets are still available until mid December, so if you’re thinking about coming, I suggest making that decision now.

If you know anyone in the Bay Area who’s in a design leadership position, be sure to tell them about Leading Design San Francisco—they don’t want to miss this!

Democratising dev

I met up with a supersmart programmer friend of mine a little while back. He was describing some work he was doing with React. He was joining up React components. There wasn’t really any problem-solving or debugging—the individual components had already been thoroughly tested. He said it felt more like construction than programming.

My immediate thought was “that should be automated.”

Or at the very least, there should be some way for just about anyone to join those pieces together rather than it requiring a supersmart programmer’s time. After all, isn’t that the promise of design systems and components—freeing us up to tackle the meaty problems instead of spending time on the plumbing?

I thought about that conversation when I was listening to Laurie’s excellent talk in Berlin last month.

Chatting to Laurie before the talk, he was very nervous about the conclusion that he had reached and was going to share: that the time is right for web development to be automated. He figured it would be an unpopular message. Heck, even he didn’t like it.

But I reminded him that it’s as old as the web itself. I’ve seen videos from very early World Wide Web conferences where Tim Berners-Lee was railing against the idea that anyone would write HTML by hand. The whole point of his WorldWideWeb app was that anyone could create and edit web pages as easily as word processing documents. It’s almost an accident of history that HTML happened to be just easy enough—but also just powerful enough—for many people to learn and use.

Anyway, I thoroughly enjoyed Laurie’s talk. (Except for a weird bit where he dunks on people moaning about “the fundamentals”. I think it’s supposed to be punching up, but I’m not sure that’s how it came across. As Chris points out, fundamentals matter …at least when it comes to concepts like accessibility and performance. I think Laurie was trying to dunk on people moaning about fundamental technologies like languages and frameworks. Perhaps the message got muddled in the delivery.)

I guess Laurie was kind of talking about this whole “no code” thing that’s quite hot right now. Personally, I would love it if the process of making websites could be democratised more. I’ve often said that my nightmare scenario for the World Wide Web would be for its fate to lie in the hands of an elite priesthood of programmers with computer science degrees. So I’m all in favour of no-code tools …in theory.

The problem is that unless they work 100%, and always produce good accessible performant code, then they’re going to be another example of the law of leaky abstractions. If a no-code tool can get someone 90% of the way to what they want, that seems pretty good. But if that person then has to spend an inordinate amount of time on the remaining 10%, all the good work of the no-code tool is somewhat wasted.

Funnily enough, the person who coined that law, Joel Spolsky, spoke right after Laurie in Berlin. The two talks made for a good double bill.

(I would link to Joel’s talk but for some reason the conference is marking the YouTube videos as unlisted. If you manage to track down a URL for the video of Joel’s talk, let me know and I’ll update this post.)

In a way, Joel was making the same point as Laurie: why is it still so hard to do something on the web that feels like it should be easily repeatable?

He used the example of putting an event online. Right now, the most convenient way to do it is to use a third-party centralised silo like Facebook. It works, but now the business model of Facebook comes along for the ride. Your event is now something to be tracked and monetised by advertisers.

You could try doing it yourself, but this is where you’ll run into the frustrations shared by Joel and Laurie. It’s still too damn hard and complicated (even though we’ve had years and years of putting events online). Despite what web developers tell themselves, making stuff for the web shouldn’t be that complicated. As Trys put it:

We kid ourselves into thinking we’re building groundbreakingly complex systems that require bleeding-edge tools, but in reality, much of what we build is a way to render two things: a list, and a single item. Here are some users, here is a user. Here are your contacts, here are your messages with that contact. There ain’t much more to it than that.

And yet here we are. You can either have the convenience of putting something on a silo like Facebook, or you can have the freedom of doing it yourself, indie web style. But you can’t have both, it seems.

This is a criticism often levelled at the indie web. The barrier to entry to having your own website is too high. It’s a valid criticism. To have your own website, you need to have some working knowledge of web hosting and at least some web technologies (like HTML).

Don’t get me wrong. I love having my own website. Like, I really love it. But I’m also well aware that it doesn’t scale. It’s unreasonable to expect someone to learn new skills just to make a web page about, say, an event they want to publicise.

That’s kind of the backstory to the project that Joel wanted to talk about: the block protocol. (Note: it has absolutely nothing to do with blockchain—it’s just an unfortunate naming collision.)

The idea behind the project is to create a kind of crowdsourced pattern library—user interfaces for creating common structures like events, photos, tables, and lists. These patterns already exist in today’s silos and content management systems, but everyone is reinventing the wheel independently. The goal of this project is to make these patterns interoperable, and therefore portable.

At first I thought that would be a classic /927 situation, but I’m pleased to see that the focus of the project is not on formats (we’ve been there and done that with microformats, RDF, schema.org, yada yada). The patterns might end up being web components or they might not. But the focus is on the interface. I think that’s a good approach.

That approach chimes nicely with one of the principles of the indie web:

UX and design is more important than protocols, formats, data models, schema etc. We focus on UX first, and then as we figure that out we build/develop/subset the absolutely simplest, easiest, and most minimal protocols and formats sufficient to support that UX, and nothing more. AKA UX before plumbing.

That said, I don’t think this project is a cure-all. Interoperable (portable) chunks of structured content would be great, but that’s just one part of the challenge of scaling the indie web. You also need to have somewhere to put those blocks.

Convenience isn’t the only thing you get from using a silo like Facebook, Twitter, Instagram, or Medium. You also get “free” hosting …until you don’t (see GeoCities, MySpace, and many, many more).

Wouldn’t it be great if everyone had a place on the web that they could truly call their own? Today you need to have an unnecessary degree of technical understanding to publish something at a URL you control.

I’d love to see that challenge getting tackled.

Eventing

In-person events are like buses. You go two years without one and then three come along at once.

My buffer is overflowing from experiencing three back-to-back events. Best of all, my participation was different each time.

First of all, there was Leading Design New York, where I was the host. The event was superb, although it’s a bit of a shame I didn’t have any time to properly experience Manhattan. I wasn’t able to do any touristy things or meet up with my friends who live in the city. Still, the trip was well worth it.

Right after I got back from New York, I took the train to Edinburgh for the Design It Build It conference where I was a speaker. It was a good event. I particularly enjoyed Rafaela Ferro’s talk on accessibility. The last time I spoke at DIBI was 2011(!) so it was great to make a return visit. I liked that the audience was seated cabaret style. That felt safer than classroom-style seating, allowing more space between people. At the same time, it felt more social, encouraging more interaction between attendees. I met some really interesting people.

I got back from Edinburgh just in time for UX Camp Brighton on the weekend, where I was an attendee. I felt like a bit of a moocher not giving a presentation, but I really, really enjoyed every session I attended. It’s been a long time since I’ve been at a Barcamp-style event—probably the last Indie Web Camp I attended, whenever that was. I’d forgotten how well the format works.

But even with all these in-person events, online events aren’t going anywhere anytime soon. Yesterday I started hosting the online portion of Leading Design New York and I’ll be doing it again today. The post-talk discussions with Julia and Lisa are lots of fun!

So in the space of just a couple of weeks I’ve been a host, a speaker, and an attendee. Now it’s time for me to get my head back into one other event role: conference curator. No more buses/events are on the way for the next while, so I’m going to be fully devoted to organising the line-up for UX London 2022. Exciting!

Hosting Leading Design New York

I went to New York to host the Leading Design conference. It was weird and wonderful.

Weird, because it felt strange and surreal to be back in a physical space with other people all sharing the same experience.

Wonderful, for exactly the same reasons.

This was a good way to ease back into live events. It wasn’t a huge conference. Just over a hundred people. So it felt intimate, while still allowing people to quite literally have space to themselves.

I can’t tell you much about the post-talk interviews I conducted with the speakers. That’s because what happens at Leading Design stays at Leading Design, at least when it comes to the discussions after the talks. We made it clear that Leading Design was a safe place for everyone to share their stories, even if—especially if—those were stories you wouldn’t want to share publicly or at work.

I was bowled over by how generous and open and honest all the speakers were. Sure, there were valuable lessons about management and leadership, but there were also lots of very personal stories and insights. Time and time again I found myself genuinely moved by the vulnerability that the speakers displayed.

Leadership can be lonely. Sometimes very lonely. I got the impression that everyone—speakers and attendees alike—really, really appreciated having a non-digital space where they could come together and bond over shared travails. I know it’s a cliché to talk about “connecting” with others, but at this event it felt true.

The talks themselves were really good too. I loved seeing how themes emerged and wove themselves throughout the two days. Rebecca did a fantastic job of curating the line-up. I’ve been to a lot of events over the years and I’ve seen conference curation of varying degrees of thoughtfulness. Leading Design New York 2022 is right up there with the best of them. It was an honour to play the part of the host (though I felt very guilty when people congratulated me on such a great event—“Don’t thank me”, I said, “Thank Rebecca—I’m just the public face of the event; she did all the work!”)

My hosting duties aren’t over. This week we’ve got the virtual portion of Leading Design New York. There’ll be two days of revisiting some of the conference talks, and one day of workshops.

For the two days of talks, I’m going to be joined by two brilliant panelists for post-talk discussions—Julia Whitney and Lisa Welchman. This should be fun!

Best of all, for this portion of the event I don’t need to get into an airplane and cross the Atlantic.

That said, the journey was totally worth it for Leading Design New York. Also, by pure coincidence, the event coincided with St. Patrick’s Day. For the first time in two years, New York hosted its legendary parade and it was just a block or two away from the conference venue.

I nipped out during the lunch break to cheer on the marching bands. Every county was represented. When the representatives from county Cork went by, there’d be shouts of “Up Cork!” When the county Donegal delegation went by, it was “Up Donegal!”

It’s just a shame I couldn’t stick around for the representatives from county Down.

Going to New York

I’m flying to New York on Monday. That still sounds a little surreal to me, but it’s happening.

I’ll be hosting Leading Design New York. Even a month ago it wasn’t clear if the in-person event would even be going ahead. But there was a go/no-go decision and it was “go!” Now, as New York relaxes its mandates, it’s looking more and more like the right decision. It’s still probably going to feel a bit weird to be gathering together with other people …but it’s also going to feel long overdue.

Rebecca has put together a fantastic line-up of super-smart design leaders. My job will be to introduce them before they speak and then interview them afterwards, also handling questions from the audience.

I’m a little nervous just because I want to do a really good job. But I’ve been doing my homework. And given how well the hosting went for UX Fest, I’m probably being unnecessarily worried. I need to keep reminding myself to enjoy it. It’s a real privilege that I get to spend two days in the company of such erudite, generous people. I should make the most of it.

If you’re going to be at Leading Design New York, I very much look forward to seeing you there.

If you’re not coming to Leading Design but you’re in the neighbourhood, let me know if you’ve got any plans for St. Patrick’s Day. I’ve already got my green paisley shirt picked out for my on-stage duties that day.

Announcing UX London 2022

For the past two years, all of Clearleft’s events have been online. Like everyone else running conferences, we had to pivot in the face of The Situation.

In hindsight, it’s remarkable how well those online events went. This was new territory for everyone—speakers, attendees, and organisers.

UX Fest was a real highlight. I had the pleasure of hosting the event, giving it my Woganesque best. It was hard work, but it paid off.

Still, it’s not quite the same as gathering together with your peers in one place for a shared collective experience. I’ve really been missing in-person events (and from what I’ve seen in people’s end-of-year blog posts, I’m not alone).

That’s why I’m absolutely thrilled that UX London is back in 2022! Save the dates; June 28th to 30th. We’ve got a new venue too: the supremely cool Tobacco Dock.

This is going to be a summertime festival of design. It’ll be thought-provoking, practical, fun, and above all, safe.

It feels kind of weird to be planning an in-person event now, when we’re just emerging from The Omicron Variant, but putting on UX London 2022 isn’t just an act of optimism. It’s a calculated move. While nothing is certain, late June 2022 should be the perfect time to safely gather the UX community again.

It’s a particularly exciting event for me. Not only will I be hosting it, this time I’m also curating the line-up.

I’ve curated conference line-ups before: dConstruct, Responsive Day Out, and Patterns Day. But those were all one-day events. UX London is three times as big!

It’s a lot of pressure, but I’m already extremely excited about the line-up. If my plan comes together, this is going to be an unmissable collection of mindbombs. I’ve already got some speakers confirmed so keep an eye on the website or Twitter, or sign up for the newsletter to get the announcements as and when they happen.

The format of UX London has been honed over the years. I think it’s got just the right balance.

Each day has a morning of inspiring talks—a mixture of big-picture keynotes and punchy shorter case studies. The talks are all on a single track; everyone shares that experience. Then, after lunch, there’s an afternoon of half-day workshops. Those happen in parallel, so you choose which workshop you want to attend.

I think this mixture of the inspirational and the practical is the perfect blend. Your boss can send you to UX London knowing that you’re going to learn valuable new skills, but you’ll also leave with your mind expanded by new ideas.

Like I said, I’m excited!

Naturally, I’m nervous too. Putting on an event is a risky endeavour at the best of times. Putting on an event after a two-year pandemic is even more uncertain. What if no one comes? Maybe people aren’t ready to return to in-person events. But I can equally imagine the opposite situation. Maybe people are craving a community gathering after two years of sitting in front of screens. That’s definitely how I’m feeling.

If you’re feeling the same, then join me in London in June. Tickets are on sale now. You can get a three-day early-bird pass, or you can buy a ticket for an individual day. But I hope you’ll join me for the whole event—I can’t wait to see you there!

Hosting online events

Back in 2014 Vitaly asked me if I’d be the host for Smashing Conference in Freiburg. I jumped at the chance. I thought it would be an easy gig. All of the advantages of speaking at a conference without the troublesome need to actually give a talk.

As it turned out, it was quite a bit of work:

It wasn’t just a matter of introducing each speaker—there was also a little chat with each speaker after their talk, so I had to make sure I was paying close attention to each and every talk, thinking of potential questions and conversation points. After two days of that, I was a bit knackered.

Last month, I hosted another event, but this time it was online: UX Fest. Doing the post-talk interviews was definitely a little weirder online. It’s not quite the same as literally sitting down with someone. But the online nature of the event did provide one big advantage…

To minimise technical hitches on the day, and to ensure that the talks were properly captioned, all the speakers recorded their talks ahead of time. That meant I had an opportunity to get a sneak peek at the talks and prepare questions accordingly.

UX Fest had a day of talks every Thursday in June. There were four talks per Thursday. I started prepping on the Monday.

First of all, I just watched all the talks and let them wash over me. At this point, I’d often think “I’m not sure if I can come up with any questions for this one!” but I’d let the talks sit there in my subconscious for a while. This was also a time to let connections between talks bubble up.

Then on the Tuesday and Wednesday, I went through the talks more methodically, pausing the video every time I thought of a possible question. After a few rounds of this, I inevitably ended up with plenty of questions, some better than others. So I then re-ordered them in descending levels of quality. That way if I didn’t get to the questions at the bottom of the list, it was no great loss.

In theory, I might not get to any of my questions. That’s because attendees could also ask questions on the day via a chat window. I prioritised those questions over my own. Because it’s not about me.

On some days there was a good mix of audience questions and my own pre-prepared questions. On other days it was mostly my own questions.

Either way, it was important that I didn’t treat the interview like a laundry list of questions to get through. It was meant to be a conversation. So the answer to one question might touch on something that I had made a note of further down the list, in which case I’d run with that. Or the conversation might go in a really interesting direction completely unrelated to the questions or indeed the talk.

Above all, these segments needed to be engaging and entertaining in a personable way, more like a chat show than a post-game press conference. So even though I had done lots of prep for interviewing each speaker, I didn’t want to show my homework. I wanted each interview to feel like a natural flow.

To quote the old saw, this kind of spontaneity takes years of practice.

There was an added complication when two speakers shared an interview slot for a joint Q&A. Not only did I have to think of questions for each speaker, I also had to think of questions that would work for both speakers. And I had to keep track of how much time each person was speaking so that the chat wasn’t dominated by one person more than the other. This was very much like moderating a panel, something that I enjoy very much.

In the end, all of the prep paid off. The conversations flowed smoothly and I was happy with some of the more thought-provoking questions that I had researched ahead of time. The speakers seemed happy too.

Y’know, there are not many things I’m really good at. I’m a mediocre developer, and an even worse designer. I’m okay at writing. But I’m really good at public speaking. And I think I’m pretty darn good at this hosting lark too.

Weighing up UX

You can listen to an audio version of Weighing up UX.

This is the month of UX Fest 2021—this year’s online version of UX London. The festival continues with masterclasses every Tuesday in June and a festival day of talks every Thursday (tickets for both are still available). But it all kicked off with the conference part last week: three back-to-back days of talks.

I have the great pleasure of hosting the event so not only do I get to see a whole lot of great talks, I also get to quiz the speakers afterwards.

Right from day one, a theme emerged that continued throughout the conference and I suspect will continue for the rest of the festival too. That topic was metrics. Kind of.

See, metrics come up when we’re talking about A/B testing, growth design, and all of the practices that help designers get their seat at the table (to use the well-worn cliché). But while metrics are very useful for measuring design’s benefit to the business, they’re not really cut out for measuring user experience.

People have tried to quantify user experience benefits using measurements like Net Promoter Score, which is about as useful as reading tea leaves or chicken entrails.

So we tend to equate user experience gains with business gains. That makes sense. Happy users should be good for business. That’s a reasonable hypothesis. But it gets tricky when you need to make the case for improving the user experience if you can’t tie it directly to some business metric. That’s when we run into the McNamara fallacy:

Making a decision based solely on quantitative observations (or metrics) and ignoring all others.

The way out of this quantitative blind spot is to use qualitative research. But another theme of UX Fest was just how woefully under-represented researchers are in most organisations. And even when you’ve gone and talked to users and you’ve got their stories, you still need to play that back in a way that makes sense to the business folks. These are stories. They don’t lend themselves to being converted into charts’n’graphs.

And so we tend to fall back on more traditional metrics, based on that assumption that what’s good for user experience is good for business. But it’s a short step from making that equivalency to flipping the equation: what’s good for the business must, by definition, be good user experience. That’s where things get dicey.

Broadly speaking, the talks at UX Fest could be put into two categories. You’ve got talks covering practical subjects like product design, content design, research, growth design, and so on. Then you’ve got the higher-level, almost philosophical talks looking at the big picture and questioning the industry’s direction of travel.

The tension between these two categories was the highlight of the conference for me. It worked particularly well when there were back-to-back talks (and joint Q&A) featuring a hands-on case study that successfully pushed the needle on business metrics followed by a more cautionary talk asking whether our priorities are out of whack.

For example, there was a case study on growth design, which emphasised the importance of A/B testing for validation, immediately followed by a talk on deceptive dark patterns. Now, I suspect that if you were to A/B test a deceptive dark pattern, the test would validate its use (at least in the short term). It’s no coincidence that a company like Booking.com, which lives by the A/B sword, is also one of the companies sued for using distressing design patterns.

Using A/B tests alone is like using a loaded weapon without supervision. They only tell you what people do. And again, the solution is to make sure you’re also doing qualitative research—that’s how you find out why people are doing what they do.

But as I’ve pondered the lessons from last week’s conference, I’ve come to realise that there’s also a danger of focusing purely on the user experience. Hear me out…

At one point, the question came up as to whether deceptive dark patterns were ever justified. What if it’s for a good cause? What if the deceptive dark pattern is being used by an organisation actively campaigning to do good in the world?

In my mind, there was no question. A deceptive dark pattern is wrong, no matter who’s doing it.

(There’s also the problem of organisations that think they’re doing good in the world: I’m sure that every talented engineer that worked on Google AMP honestly believed they were acting in the best interests of the open web even as they worked to destroy it.)

Where it gets interesting is when you flip the question around.

Suppose you’re a designer working at an organisation that is decidedly not a force for good in the world. Say you’re working at Facebook, a company that prioritises data-gathering and engagement so much that they’ll tolerate insurrectionists and even genocidal movements. Now let’s say there’s talk in your department of implementing a deceptive dark pattern that will drive user engagement. But you, being a good designer who fights for the user, take a stand against this and you successfully find a way to ensure that Facebook doesn’t deploy that deceptive dark pattern.

Yay?

Does that count as being a good user experience designer? Yes, you’ve done good work at the coalface. But the overall business goal is like a deceptive dark pattern that’s so big you can’t take it in. Is it even possible to do “good” design when you’re inside the belly of that beast?

Facebook is a relatively straightforward case. Anyone who’s still working at Facebook can’t claim ignorance. They know full well where that company’s priorities lie. No doubt they sleep at night by convincing themselves they can accomplish more from the inside than without. But what about companies that exist in the grey area of being imperfect? Frankly, what about any company that relies on surveillance capitalism for its success? Is it still possible to do “good” design there?

There are no easy answers and that’s why it so often comes down to individual choice. I know many designers who wouldn’t work at certain companies …but they also wouldn’t judge anyone else who chooses to work at those companies.

At Clearleft, every staff member has two levels of veto on client work. You can say “I’m not comfortable working on this”, in which case, the work may still happen but we’ll make sure the resourcing works out so you don’t have anything to do with that project. Or you can say “I’m not comfortable with Clearleft working on this”, in which case the work won’t go ahead (this usually happens before we even get to the pitching stage although there have been one or two examples over the years where we’ve pulled out of the running for certain projects).

Going back to the question of whether it’s ever okay to use a deceptive dark pattern, here’s what I think…

It makes no difference whether it’s implemented by ProPublica or Breitbart; using a deceptive dark pattern is wrong.

But there is a world of difference in being a designer who works at ProPublica and being a designer who works at Breitbart.

That’s what I’m getting at when I say there’s a danger to focusing purely on user experience. That focus can be used as a way of avoiding responsibility for the larger business goals. Then designers are like the soldiers on the eve of battle in Henry V:

For we know enough, if we know we are the king’s subjects: if his cause be wrong, our obedience to the king wipes the crime of it out of us.

Hosting UX Fest

I quite enjoy interviewing people. I don’t mean job interviews. I mean, like, talk show interviews. I’ve had a lot of fun over the years moderating panel discussions: @media Ajax in 2007, SxSW in 2008, Mobilism in 2011, the Progressive Web App Dev Summit and EnhanceConf in 2016.

I’ve even got transcripts of some panels I’ve moderated.

I enjoyed each and every one. I also had the pleasure of interviewing the speakers at every Responsive Day Out. Hosting events like that is a blast, but what with The Situation and all, there hasn’t been much opportunity for hosting conferences.

Well, I’m going to be hosting an event next month: UX Fest. It’s this year’s online version of UX London.

An online celebration of digital design, taking place throughout June 2021.

I am simultaneously excited and nervous. I’m excited because I’ll have the chance to interview a whole bunch of really smart people. I’m nervous because it’s all happening online and that might feel quite different to an in-person discussion.

But I have an advantage. While the interviews will be live, the preceding talks will be pre-recorded. That means I have time to watch and rewatch each talk, spot connections between them, and think about thought-provoking questions for each speaker.

So that’s what I’m doing between now and the beginning of June. If you’d like to bear witness to the final results, I encourage you to get a ticket for UX Fest. You can come to the three-day conference in the first week of June, or you can get a ticket for the festival spread out over the following three Thursdays in June, or you can get a combo ticket for both and save some money.

There’s an inclusion programme for the conference and festival days:

Anyone from an underrepresented group is invited to apply. We especially invite and welcome Black, indigenous & people of colour, LGBTQIA+ people and people with disabilities.

Here’s the application form.

There’ll also be a whole bunch of hands-on masterclasses throughout June that you can book individually. I won’t be hosting those though. I’ll have plenty to keep me occupied hosting the conference and the festival.

I hope you’ll join me along with Krystal Higgins, David Dylan Thomas, Catt Small, Scott Kubie, Temi Adeniyi, Teresa Torres, Tobias Ahlin and many more wonderful speakers—it’s going to be fun!

Downloading from Google Fonts

If you’re using web fonts, there are good performance (and privacy) reasons for hosting your own font files. And fortunately, Google Fonts gives you that option. There’s a “Download family” button on every specimen page.

But if you go ahead and download a font family from Google Fonts, you’ll notice something a bit odd. The .zip file only contains .ttf files. You can serve those on the web, but it’s far from the best choice. Woff2 is far leaner in file size.

This means you need to manually convert the downloaded .ttf files into .woff or .woff2 files using something like Font Squirrel’s generator. That’s fine, but I’m curious as to why this step is necessary. Why doesn’t Google Fonts provide .woff or .woff2 files in the downloaded folder? After all, if you choose to use Google Fonts as a third-party hosting service for your fonts, it most definitely serves up the appropriate file formats.
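
For what it’s worth, once you’ve converted the files, self-hosting is a single @font-face rule away. Here’s a minimal sketch (the file path and family name are placeholders):

    @font-face {
      font-family: "My Web Font";
      src: url("/fonts/my-web-font.woff2") format("woff2");
      font-weight: normal;
      font-style: normal;
      font-display: swap; /* show fallback text while the font file loads */
    }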

I thought maybe it was something to do with the licensing. Maybe some licenses only allow for unmodified truetype files to be distributed? But I’ve looked at fonts with different licenses—some have Apache 2 licensing, some have Open Font licensing—and they’re all quite permissive and definitely allow for modification.

Maybe the thinking is that, if you’re hosting your own font files, then you know what you’re doing and you should be able to do your own file conversion and subsetting. But I’ve come across more than one website in the wild serving up .ttf files. And who can blame them? They want to host their own font files. They downloaded those files from Google Fonts. Why shouldn’t they assume that they’re good to go?

It’s all a bit strange. If anyone knows why Google Fonts only provides .ttf files for download, please let me know. In a pinch, I will also accept rampant speculation.

Trys also pointed out some weird default behaviour if you do let Google Fonts do the hosting for you. Specifically if it’s a variable font. Let’s say it’s a font with weight as a variable axis. You specify in advance which weights you’ll be using, and then it generates separate font files to serve for each different weight.

Doesn’t that defeat the whole point of using a variable font? I mean, I can see how it could result in smaller file sizes if you’re just using one or two weights, but isn’t half the fun of having a weight axis that you can go crazy with as many weights as you want and it’s all still one font file?
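
To illustrate the point, here’s a sketch of what self-hosting that kind of variable font could look like (the file path and family name are made up): one rule, one file, the whole weight axis.

    @font-face {
      font-family: "My Variable Font";
      src: url("/fonts/my-variable-font.woff2") format("woff2");
      font-weight: 100 900; /* the full weight axis in a single file */
      font-style: normal;
      font-display: swap;
    }

Any font-weight value from 100 to 900 then resolves to that one file, rather than to separate per-weight files.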

Like I said, it’s all very strange.

Opening up the AMP cache

I have a proposal that I think might alleviate some of the animosity around Google AMP. You can jump straight to the proposal or get some of the back story first…

The AMP format

Google AMP is exactly the kind of framework I’d like to get behind. Unlike most front-end frameworks, its components take a declarative approach—no knowledge of JavaScript required. I think Lea’s excellent Mavo is the only other major framework that takes this inclusive approach. All the configuration happens in markup, and all the styling happens in CSS. Excellent!
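
To give a flavour of that declarative approach, here’s a hypothetical example using AMP’s image component (the file path, dimensions, and alt text are invented):

    <amp-img src="/images/photo.jpg"
        width="600"
        height="400"
        layout="responsive"
        alt="A photograph of the venue">
    </amp-img>

There’s no JavaScript for the author to write; the AMP runtime reads those attributes and takes care of the layout and loading.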

But I cannot get behind AMP.

Instead of competing on its own merits, AMP is unfairly propped up by the search engine of its parent company, Google. That makes it very hard to evaluate whether AMP is being used on its own merits. Instead, the evidence suggests that most publishers of AMP pages are doing so because they feel they have to, rather than because they want to. That’s a real shame, because as a library of web components, AMP seems pretty good. But there’s just no way to evaluate AMP-the-format without taking into account AMP-the-ecosystem.

The AMP ecosystem

Google AMP ostensibly exists to make the web faster. Initially the focus was specifically on mobile performance, but that distinction has since fallen by the wayside. The idea is that by using AMP’s web components, your pages will be speedy. Though, as Andy Davies points out, this isn’t always the case:

This is where I get confused… https://independent.co.uk only have an AMP site yet it’s performance is awful from a user perspective - isn’t AMP supposed to prevent this?

See also: Google AMP lowered our page speed, and there’s no choice but to use it:

According to Google’s own Page Speed Insights audit (which Google recommends to check your performance), the AMP version of articles got an average performance score of 87. The non-AMP versions? 95.

Publishers who already have fast web pages—like The Guardian—are still compelled to make AMP versions of their stories because of the search benefits reserved for AMP. As Terence Eden reported from a meeting of the AMP advisory committee:

We heard, several times, that publishers don’t like AMP. They feel forced to use it because otherwise they don’t get into Google’s news carousel — right at the top of the search results.

Some people felt aggrieved that all the hard work they’d done to speed up their sites was for nothing.

The Google AMP team are at pains to point out that AMP is not a ranking factor in search. That’s true. But it is unfairly privileged in other ways. Only AMP pages can appear in the Top Stories carousel …which appears above any other search results. As I’ve said before:

Now, if you were to ask any right-thinking person whether they think having their page appear right at the top of a list of search results would be considered preferential treatment, I think they would say hell, yes! This is the only reason why The Guardian, for instance, even have AMP versions of their content—it’s not for the performance benefits (their non-AMP pages are faster); it’s for that prime real estate in the carousel.

From A letter about Google AMP:

Content that “opts in” to AMP and the associated hosting within Google’s domain is granted preferential search promotion, including (for news articles) a position above all other results.

That’s not the only way that AMP pages get preferential treatment. It turns out that the secret to the speed of AMP pages isn’t the web components. It’s the prerendering.

The AMP cache

If you’ve ever seen an AMP page in a list of search results, you’ll have noticed the little lightning icon. If you’ve ever tapped on that search result, you’ll have noticed that the page loads blazingly fast!

That’s not down to AMP-the-format, alas. That’s down to the fact that the page has been prerendered by Google before you even went to it. If any page were prerendered that way, it would load blazingly fast. But currently, this privilege is reserved for AMP pages only.

If, after tapping through to that AMP page, you looked at the address bar of your browser, you might have noticed something odd. Even though you might have thought you were visiting The Washington Post, or The New York Times, the URL of the (blazingly fast) page you’re looking at is still under Google’s domain. That’s because Google hosts any AMP pages that it prerenders.

Google calls this “the AMP cache”, but it would be better described as “AMP hosting”. The web page sent down the wire is hosted on Google’s domain.

Here’s that AMP letter again:

When a user navigates from Google to a piece of content Google has recommended, they are, unwittingly, remaining within Google’s ecosystem.

Through gritted teeth, I will refer to this as “the AMP cache”, because that’s what everyone else calls it. But make no mistake, Google is hosting—not caching—these pages.

But why host the pages on a Google domain? Why not prerender the original URLs?

Prerendering and privacy

Scott summed up the situation with AMP nicely:

The pitch I think site owners are hearing is: let us host your pages on our domain and we’ll promote them in search results AND preload them so they feel “instant.” To opt-in, build pages using this component syntax.

But perhaps we could de-couple the AMP format from the AMP cache.

That’s what Terence suggests:

My recommendation is that Google stop requiring that organisations use Google’s proprietary mark-up in order to benefit from Google’s promotion.

The AMP letter, too:

Instead of granting premium placement in search results only to AMP, provide the same perks to all pages that meet an objective, neutral performance criterion such as Speed Index.

Scott reiterates:

It’s been said before but it would be so good for the web if pages with a Lighthouse score over say, 90 could get into that top search result area, even if they’re not built using Google’s AMP framework. Feels wrong to have to rebuild/reproduce an already-fast site just for SEO.

This was also what I was calling for. But then Malte pointed out something that stumped me. Privacy.

Here’s the problem…

Let’s say Google do indeed prerender already-fast pages when they’re listed in search results. You, a search user, type something into Google. A list of results comes back. Google begins prerendering some of them. But you don’t end up clicking through to those pages. Nonetheless, the servers those pages are hosted on have received a GET request coming from a Google search. Those publishers now know that a particular (cookied?) user could have clicked through to their site. That’s very different from knowing when someone has actually arrived at a particular site.
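To make that concrete, prerendering boils down to something like this resource hint (the URL here is hypothetical); as soon as the browser sees it, it fetches the page, click or no click:

<!-- the publisher’s server logs a request for this URL even if the user never visits -->
<link rel="prerender" href="https://publisher.example.com/article.html">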

And that’s why Google host all the AMP pages that they prerender. Given the privacy implications of prerendering non-Google URLs, I must admit that I see their point.

Still, it’s a real shame to miss out on the speed benefit of prerendering:

Prerendering AMP documents leads to substantial improvements in page load times. Page load time can be measured in different ways, but they consistently show that prerendering lets users see the content they want faster. For now, only AMP can provide the privacy preserving prerendering needed for this speed benefit.

A modest proposal

Why is Google’s AMP cache just for AMP pages? (Y’know, apart from the obvious answer that it’s in the name.)

What if Google were allowed to host non-AMP pages? Google search could then prerender those pages just like it currently does for AMP pages. There would be no privacy leaks; everything would happen on the same domain—google.com or ampproject.org or whatever—just as currently happens with AMP pages.

Don’t get me wrong: I’m not suggesting that Google should make a 1:1 model of the web just to prerender search results. I think that the implementation would need to satisfy two important requirements:

  1. Hosting needs to be opt-in.
  2. Only fast pages should be prerendered.

Opting in

Currently, by publishing a page using the AMP format, publishers give implicit approval to Google to host that page on Google’s servers and serve up this Google-hosted version from search results. This has always struck me as being legally iffy. I’ve looked in the AMP documentation to try to find any explicit granting of hosting permission (e.g. “By linking to this JavaScript file, you hereby give Google the right to serve up our copies of your content.”), but no luck. So even with the current situation, I think a clear opt-in for hosting would be beneficial.

This could be a meta element. Maybe something like:

<meta name="caches-allowed" content="google">

This would have the nice benefit of allowing comma-separated values:

<meta name="caches-allowed" content="google, yandex">

(The name is just a strawman, by the way—I’m not suggesting that this is what the final implementation would actually look like.)

If not a meta element, then perhaps this could be part of robots.txt? Although my feeling is that this needs to happen on a document-by-document basis rather than site-wide.

Many people will, quite rightly, never want Google—or anyone else—to host and serve up their content. That’s why it’s so important that this behaviour needs to be opt-in. It’s kind of appalling that the current hosting of AMP pages is opt-in-by-proxy-sort-of.

Criteria for prerendering

Which pages should be blessed with hosting and prerendering? The fast ones. That’s sorta the whole point of AMP. But right now, there’s a lot of resentment from people with already-fast websites who quite rightly feel they shouldn’t have to use the AMP format to benefit from the AMP ecosystem.

Page speed is already a ranking factor. It doesn’t seem like too much of a stretch to extend its benefits to hosting and prerendering. As mentioned above, there are already a few possible metrics to use:

  • Page Speed Index
  • Lighthouse
  • Web Page Test

Ah, but what if a page has a good score when it’s indexed, but then gets worse afterwards? Not a problem! The version of the page that’s measured is the same version of the page that gets hosted and prerendered. Google can confidently say “This page is fast!” After all, they’re the ones serving up the page.

That does raise the question of how often Google should check back with the original URL to see if it has changed/worsened/improved. The answer to that question is however long it currently takes to check back in on AMP pages:

Each time a user accesses AMP content from the cache, the content is automatically updated, and the updated version is served to the next user once the content has been cached.

Issues

This proposal does not solve the problem with the address bar. You’d still find yourself looking at a page from The Washington Post or The New York Times (or adactio.com) but seeing a completely different URL in your browser. That’s not good, for all the reasons outlined in the AMP letter.

In fact, this proposal could potentially make the situation worse. It would allow even more sites to be impersonated by Google’s URLs. Where currently only AMP pages are bad actors in terms of URL confusion, opening up the AMP cache would allow equal-opportunity URL confusion.

What I’m suggesting is definitely not a long-term solution. The long-term solutions currently being investigated are technically tricky and will take quite a while to come to fruition—web packages and signed exchanges. In the meantime, what I’m proposing is a stopgap solution that’s technically a lot simpler. But it won’t solve all the problems with AMP.

This proposal solves one problem—AMP pages being unfairly privileged in search results—but does nothing to solve the other, perhaps more serious problem: the erosion of site identity.

Measuring

Currently, Google can assess whether a page should be hosted and prerendered by checking to see if it’s a valid AMP page. That test would need to be widened to include a different measurement of performance, but those measurements already exist.

I can see how this assessment might not be as quick as checking for AMP validity. That might affect whether non-AMP pages could be measured quickly enough to end up in the Top Stories carousel, which is, by its nature, time-sensitive. But search results are not necessarily as time-sensitive. Let’s start there.

Assets

Currently, AMP pages can be prerendered without fetching anything other than the markup of the AMP page itself. All the CSS is inline. There are no initial requests for other kinds of content like images. That’s because there are no img elements on the page: authors must use amp-img instead. The image itself isn’t loaded until the user is on the page.
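So where a regular page would have an img element, an AMP page has something like this instead (the attribute values are illustrative); the declared dimensions let the page be laid out before any image data arrives:

<!-- no image request is made until the user actually visits the page -->
<amp-img src="photo.jpg" width="600" height="400" layout="responsive" alt="A photograph"></amp-img>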

If the AMP cache were to be opened up to non-AMP pages, then any content required for prerendering would also need to be hosted on that same domain. Otherwise, there’s privacy leakage.

This definitely introduces an extra level of complexity. Paths to assets within the markup might need to be re-written to point to the Google-hosted equivalents. There would almost certainly need to be a limit on the number of assets allowed. Though, for performance, that’s no bad thing.

Make no mistake, figuring out what to do about assets—style sheets, scripts, and images—is very challenging indeed. Luckily, there are very smart people on the Google AMP team. If that brainpower were to focus on this problem, I am confident they could solve it.

Summary

  1. Prerendering of non-Google URLs is problematic for privacy reasons, so Google needs to be able to host pages in order to prerender them.
  2. Currently, that’s only done for pages using the AMP format.
  3. The AMP cache—and with it, prerendering—should be decoupled from the AMP format, and opened up to other fast web pages.

There will be technical challenges, but hopefully nothing insurmountable.

I honestly can’t see what Google have to lose here. If their goal is genuinely to reward fast pages, then opening up their AMP cache to fast non-AMP pages will actively encourage people to make fast web pages (without having to switch over to the AMP format).

I’ve deliberately kept the details vague—what the opt-in should look like; what the speed measurement should be; how to handle assets—because I’m sure smarter folks than me can figure that stuff out.

I would really like to know what other people think about this proposal. Obviously, I’d love to hear from members of the Google AMP team. But I’d also love to hear from publishers. And I’d very much like to know what people in the web performance community think about this. (Write a blog post and send me a webmention.)

What am I missing here? What haven’t I thought of? What are the potential pitfalls (and are they any worse than the current acrimonious situation with Google AMP)?

I would really love it if someone with a fast website were in a position to say, “Hey Google, I’m giving you permission to host this page so that it can be prerendered.”

I would really love it if someone with a slow website could say, “Oh, shit! We’d better make our existing website faster or Google won’t host our pages for prerendering.”

And I would dearly love to finally be able to embrace AMP-the-format with a clear conscience. But as long as prerendering is joined at the hip to the AMP format, the injustice of the situation only harms the AMP project.

Google, open up the AMP cache.

The meaning of AMP

Ethan quite rightly points out some semantic sleight of hand by Google’s AMP team:

But when I hear AMP described as an open, community-led project, it strikes me as incredibly problematic, and more than a little troubling. AMP is, I think, best described as nominally open-source. It’s a corporate-led product initiative built with, and distributed on, open web technologies.

But so what, right? Tom-ay-to, tom-a-to. Well, here’s a pernicious example of where it matters: in a recent announcement of their intent to ship a new addition to HTML, the Google Chrome team cited the mood of the web development community thusly:

Web developers: Positive (AMP team indicated desire to start using the attribute)

If AMP were actually the product of working web developers, this justification would make sense. As it is, we’ve got one team at Google citing the preference of another team at Google but representing it as the will of the people.

This is just one example of AMP’s sneaky marketing where some finely-shaved semantics allows them to appear far more reasonable than they actually are.

At AMP Conf, the Google Search team were at pains to repeat over and over that AMP pages wouldn’t get any preferential treatment in search results …but they appear in a carousel above the search results. Now, if you were to ask any right-thinking person whether they think having their page appear right at the top of a list of search results would be considered preferential treatment, I think they would say hell, yes! This is the only reason why The Guardian, for instance, even have AMP versions of their content—it’s not for the performance benefits (their non-AMP pages are faster); it’s for that prime real estate in the carousel.

The same semantic nit-picking can be found in their defence of caching. See, they’ve even got me calling it caching! It’s hosting. If I click on a search result, and I am taken to a page that has a URL beginning with https://www.google.com/amp/s/... then that page is being hosted on the domain google.com. That is literally what hosting means. Now, you might argue that the original version was hosted on a different domain, but the version that the user gets sent to is the Google copy. You can call it caching if you like, but you can’t tell me that Google aren’t hosting AMP pages.

That’s a particularly low blow, because it’s such a bait’n’switch. One of the reasons why AMP first appeared to be different to Facebook Instant Articles or Apple News was the promise that you could host your AMP pages yourself. That’s the very reason I first got interested in AMP. But if you actually want the benefits of AMP—appearing in the not-search-results carousel, pre-rendered performance, etc.—then your pages must be hosted by Google.

So, to summarise, here are three statements that Google’s AMP team are currently peddling as being true:

  1. AMP is a community project, not a Google project.
  2. AMP pages don’t receive preferential treatment in search results.
  3. AMP pages are hosted on your own domain.

I don’t think those statements are even truthy, much less true. In fact, if I were looking for the right term to semantically describe any one of those statements, the closest in meaning would be this:

A statement used intentionally for the purpose of deception.

That is the dictionary definition of a lie.

Update: That last part was a bit much. Sorry about that. I know it’s a bit much because The Register got all gloaty about it.

I don’t think the developers working on the AMP format are intentionally deceptive (although they are engaging in some impressive cognitive gymnastics). The AMP ecosystem, on the other hand, that’s another story—the preferential treatment of Google-hosted AMP pages in the carousel and in search results; that’s messed up.

Still, I would do well to remember that there are well-meaning people working on even the fishiest of projects.

Except for the people working at the shitrag that is The Register.

(The other strong signal that I overstepped the bounds of decency was that this post attracted the pond scum of Hacker News. That’s another place where the “well-meaning people work on even the fishiest of projects” rule definitely doesn’t apply.)