April 9th, 2009 | Published in conferences, distributed systems, HTTP, REST, reuse, web
You may have already seen this on InfoQ or on Stefan’s blog, but the video of my 2008 QCon London presentation “REST, Reuse, and Serendipity” is now available.
Here it is, just a little over a year after I gave that presentation, and REST continues to deliver extremely well for my work. For example, I just finished a meeting a couple of hours ago where some client code needs to interact with a particular part of my system via HTTP but wants XML instead of the JSON currently provided. Simple: it’s just a different representation of the same resources, and of course it wasn’t hard to guess months ago that such a need would eventually come down the pike, so fitting it into the system will be trivial. Can you imagine the hoops you’d have to jump through with a typical RPC-oriented system in this case, where the marshaling format is tied to the protocol and you can’t change either one independently? You’d have to write a new service interface with new verbs and new messages and get the client side to use it, or write client-side wrappers around whatever you already have and ask the client programmers to somehow incorporate those wrappers into their code. Either way, there’s simply no chance of reusing existing agreements; instead, both sides require non-trivial specialization.
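To make that concrete, here’s a minimal sketch of HTTP content negotiation in Python. The resource URI below is hypothetical, purely for illustration; the point is that the Accept header selects the representation while the URI, the resource’s identity, stays exactly the same.

```python
# A minimal sketch of content negotiation; the URI below is hypothetical.
import urllib.request

def fetch(uri, media_type):
    # The Accept header asks the server for a particular representation;
    # the resource being addressed does not change.
    req = urllib.request.Request(uri, headers={"Accept": media_type})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

uri = "https://example.com/orders/42"      # hypothetical resource
as_json = fetch(uri, "application/json")   # existing clients keep working
as_xml = fetch(uri, "application/xml")     # new client, same resource
```

Nothing about the resource or its URI has to change to accommodate the new client; the server just grows another representation.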
One problem I noticed, though, was that the client developers asked for a “REST-like interface” and also for a document listing all resource URIs and, for each one, the HTTP verbs that apply to it, the representations available from it, and the status codes to expect when invoking operations on it. Those two requests are somewhat at odds, depending on what “REST-like” means; in a properly RESTful system, clients discover URIs and the operations on them by following hypermedia links from an entry point, so you don’t need a document like that, at least not the kind they’re asking for.
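To show what I mean, here’s a hedged sketch of hypermedia-driven discovery in Python. The entry-point URI and the “links” field are assumptions made up for illustration, not any real service’s format.

```python
# A sketch of hypermedia-driven discovery; the entry point and the
# "links" structure are hypothetical, for illustration only.
import json
import urllib.request

def get_json(uri):
    req = urllib.request.Request(uri, headers={"Accept": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Rather than consulting a hand-maintained list of URIs, the client starts
# at one well-known entry point and follows the links the server supplies.
entry = get_json("https://example.com/")  # hypothetical entry point
orders_uri = entry["links"]["orders"]     # discovered at runtime, not documented
orders = get_json(orders_uri)
```

The server then remains free to restructure its URI space, since clients learn URIs by following links rather than by reading a frozen document.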
January 22nd, 2008 | Published in distributed systems, integration, performance, publishing, REST, reuse, scalability, services
As you may know, I’m a columnist for IEEE Internet Computing (IC), and I’m also on their editorial board. Our annual board meeting is coming up, so to help with planning, we’ve issued a call for special issue proposals.
The topics that typically come up in this blog and the others it connects to are pretty much all fair game as special issue topics: REST and the programmatic web, service definition languages, scalability issues, intermediation, tools, reuse, development languages, back-end integration, etc. Putting together a special issue doesn’t take a lot of work, either. It requires you to find 3-4 authors each willing to contribute an article, reviewers to review those articles (and IC can help with that), and a couple of others to work with you as editors. As editors, you also have to write a brief introduction for the special issue. I’ve done a few special issues over the years, and if you enlist the right authors, it’s a lot less work than you might think.
As far as technical magazines go, IC is consistently one of the most cited, usually second only to IEEE Software, as measured by independent firms. I think one reason for this is that it strikes a nice balance between industry and academic articles, so its pages provide information relevant to both the practitioner and the researcher.
January 5th, 2008 | Published in column, objects, REST, reuse, services, SOA
Reusability is often promoted not only as a goal but also as a feature of all kinds of software architectures, designs, and systems. For example, in the CORBA, WS-*, and SOA worlds I formerly haunted, everyone spoke nonchalantly of reuse as if it were a given. You were supposed to simply identify the objects or services required to support your business processes, and then specify their interfaces. Then, anyone wanting to provide one or more of those objects or services was merely supposed to follow the appropriate interfaces and write implementations for them. Applications were written to the interfaces and thus were automatically decoupled from the implementations. As a result of these reusable interfaces, you could also potentially reuse the objects and services that implemented them, as well as the applications that consumed them.
Problem is, it never seemed to work out that easily. Most of the time, the interfaces people came up with were just too specific, and nobody could agree to apply them widely. Think of all the time people spent over the years in OMG, JSR, and W3C WS meetings trying to agree on just the infrastructure interfaces, not always successfully. It’s therefore not surprising that there was never much success at defining broadly accepted standard interfaces up at the application level; the scope up there is simply far too wide.
So, with technologies that promote interface variability, planned reuse is pretty hard. Consequently, serendipitous reuse, where services and facilities can be combined and reused beneficially in unforeseen ways, is virtually out of the question.
Stu Charlton’s blog was where I first saw those terms used, and I found them very enlightening. So did Bob Warfield.
One of the first things that attracted me to REST was the uniform interface constraint. Mark Baker first brought it to my attention nearly 8 years ago, and before I looked into it, I thought it was just a bad idea, like a totally generic doIt() interface, devoid of any meaningful semantics. But of course, there’s much more to it than that. The HTTP interface, for example, strikes a great balance that allows it to be reused efficiently across a very wide variety of applications. And when such a uniform interface is reused, the applications that use it stand a much better chance of being reusable themselves than if they were written against a non-uniform, application-specific interface.
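Here’s a rough sketch of what that buys you in practice, again with hypothetical URIs: because the verbs and their semantics are fixed by HTTP rather than invented per application, one small generic helper can manipulate any resource.

```python
# A sketch of client code against the uniform interface; URIs are hypothetical.
import urllib.request

def http_request(method, uri, body=None, media_type="application/json"):
    # GET, PUT, POST, and DELETE mean the same thing for every resource,
    # so this one helper needs no application-specific knowledge.
    headers = {"Accept": media_type}
    if body is not None:
        headers["Content-Type"] = media_type
    req = urllib.request.Request(uri, data=body, headers=headers, method=method)
    with urllib.request.urlopen(req) as resp:
        return resp.status, resp.read()

# The same helper works for orders, carts, or anything else -- there is no
# per-application doIt()-style contract to learn.
status, order = http_request("GET", "https://example.com/orders/42")
status, _ = http_request("DELETE", "https://example.com/carts/7/items/3")
```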
My latest Internet Computing column, entitled “Serendipitous Reuse” (and here’s the PDF if you prefer), explores how the uniform interface contributes to reuse of both the planned and serendipitous kinds.