Blog posts tagged "oauth"

Flickr, Twitter, OAuth: A Secret History

July 1st, 2009

I remember it as a dark and stormy night; that seems unlikely, but I’m sure it was late and chilly and damp.

I remember being tired from a long day in the salt mines; that was during a period when I was always tired after work.

I remember there being whiskey, and knowing @maureen, that seems likely.

I’d just won some internal battles regarding delegated auth, and implemented Google AuthSub for the new Blogger Beta, as well as Amazon auth for a side project. So when I wanted to share photos from Flickr to Twitter, I knew it wasn’t going to be over HTTP Basic Auth.

A few weeks earlier @blaine and @factoryjoe had pulled me into a project called OpenAuth that they’d been talking about for a couple of months — an alternative to yet another auth standard, and a solution for authenticating sites using OpenID.

So one late, damp night along Laguna St. with whiskey, we did a pattern extraction, identifying the minimal possible set of features to offer compatibility with existing best-practice API authorization protocols. And wrote down the half pager that became the very first draft of the OAuth spec.

That spec wasn’t the final draft. That came later, after an open community standardization process allowing experts from the security, web, and usability community to weigh in and iterate on the design. But many of those decisions (and some of the mistakes) from that night made it into the final version.

Yesterday, a little over two years later, we finally shipped Flickr2Twitter.

So it was nice yesterday when people commented on the integration:

“Uses OAuth!” “Doesn’t ask for your Twitter password” “Great use of OAuth”.

And I thought to myself, “It better be, this is what OAuth was invented for — literally”.

New Amazon AWS Signature Version 2 is “OAuth-compatible”

December 30th, 2008

(Image: Enigma rotors)

Spent a couple hours last night writing the core of a stripped down, PHP4-compatible API library for Amazon SimpleDB (in the style of my flickr simple library. Just not a fan of abstraction for its own sake). In the process I discovered that Amazon had revved the version on their “Signature Method”, which is good news, as SignatureVersion 1 contains a classic crypto blunder in its design, namely it encourages collisions. (more details, also why you care about collisions) To date the solution was to use SSL, and wait patiently, very patiently. So yay for Amazon fixing this! And in fairness, the first couple of drafts of the OAuth spec contained a similar issue, though it got ironed out quickly. Yay for many eyes and the open web.
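
To see why collisions matter, here’s a tiny illustration of the problem (a sketch of the Version 1 style string-to-sign as I understand it, not Amazon’s actual code): parameters are sorted and then concatenated, names and values, with nothing in between, so two different requests can collapse to the same string and therefore the same signature.

    // Sketch of a SignatureVersion 1 style string-to-sign: sort the parameters
    // by name, then concatenate name and value with no delimiters between them.
    function sigv1_string_to_sign($parameters) {
        uksort($parameters, 'strcasecmp'); // case-insensitive sort on parameter name
        $s = '';
        foreach ($parameters as $k => $v) {
            $s .= $k . $v;                 // nothing separates names from values
        }
        return $s;
    }

    // Two different requests...
    $a = array('ItemName' => 'foobar');
    $b = array('ItemNamefoo' => 'bar');

    // ...produce the identical string, and so the identical HMAC signature.
    var_dump(sigv1_string_to_sign($a) === sigv1_string_to_sign($b)); // bool(true)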

“OAuth-compatible” signing

Great, things are more secure, good news and all, but that isn’t what caught my eye. This block of text did:

Here is what’s different about forming the string to sign for signature version 2:

  • You include additional components of the request in the string to sign
  • You include the query string control parameters (the equals signs and ampersands) in the string to sign
  • You sort the query string parameters using byte ordering
  • You URL encode the query string parameters and their values before signing the request

You really have to be an OAuth dork to find anything special in that paragraph, but if you were, you’d notice that those 4 bullets are an incredibly succinct description of generating an OAuth signature. (in fact a more succinct description than appears anywhere in the OAuth documentation)

Which means that my SimpleDB library can reuse most of the logic from my OAuth library to do the trickiest part of the API call, namely the signing. (Additionally it means that security reviews of both protocols support each other)

So my AWS signing method is approximately a dozen characters different than my OAuth method, and as straightforward as:

    .....

    // Sign the request, then attach the signature as one more request parameter
    $signature = aws_request_signature(AWS_SECRET_KEY, $http_method, AWS_SIMPLEDB_SERVICEURL, $parameters);
    $parameters['Signature'] = $signature;

    $encoded_params = array();

    foreach ($parameters as $k => $v){
        $encoded_params[] = oauth_urlencodeRFC3986($k).'='.oauth_urlencodeRFC3986($v);
    }

    $request_url = AWS_SIMPLEDB_SERVICEURL . '?' . implode('&', $encoded_params);

    .....

    // HMAC-SHA1 over the base string, base64 encoded (the same construction OAuth uses)
    function aws_request_signature($key, $http_method, $service_url, $parameters) {
        $base_string = aws_base_string($http_method, $service_url, $parameters);
        return base64_encode(hash_hmac('sha1', $base_string, $key, true));
    }

    // The base string is the HTTP method, lowercased host, path, and the
    // normalized (sorted, encoded) parameters, joined by newlines
    function aws_base_string($http_method, $service_url, $parameters) {
        $parsed = parse_url($service_url);

        $host = strtolower($parsed['host']);
        $path = !empty($parsed['path']) ? $parsed['path'] : '/';
        $data = array(
            strtoupper($http_method),
            $host,
            $path,
            oauth_normalized_request_params($parameters)
        );

        $base_string = join("\n", $data);
        return $base_string;
    }

(this uses my personal OAuth library, but your library should have similar methods)
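
For reference, the two helpers the snippet leans on are small. Here’s a rough sketch of what they might look like if your library doesn’t expose them directly (hypothetical implementations following the 4 bullets above, not the actual code from my library):

    // Hypothetical stand-ins for the two OAuth library helpers used above.

    // Percent-encode per RFC 3986: spaces become %20 (not +), and ~ is left alone.
    function oauth_urlencodeRFC3986($value) {
        return str_replace('%7E', '~', rawurlencode($value));
    }

    // Sort parameters using byte ordering, encode names and values,
    // and join them with = and & into a normalized query string.
    function oauth_normalized_request_params($parameters) {
        ksort($parameters, SORT_STRING);
        $pairs = array();
        foreach ($parameters as $k => $v) {
            $pairs[] = oauth_urlencodeRFC3986($k) . '=' . oauth_urlencodeRFC3986($v);
        }
        return implode('&', $pairs);
    }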

Sure made my job of implementing a library easier. If you’re going to invent a new crypto protocol, please consider doing as Amazon did, and re-using the basic building blocks. (which also happen to be best practices)

Netflix API: Looking good

October 1st, 2008

Netflix was pretty much the last place I was doing Web 2.0-style sharecropping: creating value without a way to get it out. The Netflix API has been rumored for a long time, but with today’s release they really did an excellent job.

Also versioned documentation, and a quite reasonable set of branding guidelines.

The Netflix Web APIs provide the ability for you to integrate Netflix user services into your application. The APIs provide the following capabilities:
  • Performing searches of movies, TV series, cast members, and directors
  • Retrieving catalog titles, including details about the title such as name, box art, director, cast, etc.
  • Determining the subscriber’s relationship to a specific title, e.g., in queue, saved, available on DVD, etc.
  • Managing and displaying queues for users
  • Providing conveniences such as auto-completion of partial search terms typed by a user
  • Displaying a user’s ratings and reviews
  • Including functional Add and Play buttons in your web application

Congratulations to Netflix, and Mashery.

Advanced OAuth Wrangling

May 9th, 2008

I’ve been terrible about uploading my talks this year. So here are the Advanced OAuth Wrangling slides from my talk today. (even though I really want to spend a couple of hours cleaning them up)

And as it’s 85 slides to be given in 45 minutes, you can imagine that there is a fair amount of information missing from the slides. Simon made me promise to upload an annotated version, and I’ll try to do that soon.

(and unfortunately the process of saving the slides down to a PDF killed the transparency on the grey backdrops)

Strange Viewings

April 25th, 2008

I didn’t make it to the keynote to see our new CTO speak (meetings that morning), but it was very strange, bordering on deeply surreal to watch the video of it.

  1. Interesting to see my “Flickr is the 2nd largest API” meme work its way up the tree. I didn’t make that factoid up per se, and I’d probably stand behind it if pushed, but I did reason from very limited data. (also AWS screws up the story, is utility computing an API?)

  2. Still haven’t quite adjusted to the transition of OAuth from being a personal project that the “Paranoids” (official title of Yahoo’s internal security experts) were angry at me for working on (it’s against Yahoo policy for Yahoos to work on security related projects), to the company-wide standard, at least on paper.

Upcoming Talks, Web2Expo, etc

April 19th, 2008

I’m speaking next Friday at the SF Web2Expo on Casual Privacy. And I’m speaking Thursday, May 8th (2 weeks later) in Dublin on Advanced OAuth Wrangling. Hope to see you at one or both of those talks.

I’m also excited about a dozen other talks next week, as you can see from my Web2/iCalico schedule.

Flickr: Beehive Launches without Phishing

March 31st, 2008

(Image: overview of relationships between groups, removing highly redundant groups)

Congrats to waferbaby, mroth, and ph for totally owning on today’s friend importing feature (aka beehive).

We’re a little late to the game, but it’s awfully nice to be able to launch with zero screenscraping, and zero phishing-creepy-give-us-your-password. This is what the data-portability-open-data-delegated-trust future looks like.

update: and yes, we’re cheating, because Yahoo’s addressbook API is still internal+partners only. We’re working on it.

Fire Eagle: Interesting Choices

March 5th, 2008


Other folks are talking and writing about the long-germinating, launched-in-beta location broker from Yahoo’s Brickhouse, Fire Eagle.

I wanted to call out just a couple of the cool, non-intuitive decisions they made.

Is NOT a consumer brand

Fire Eagle is a service for building and sharing location data. It’s the applications built on top of it that you’ll interact with, unless you’re building stuff.

Fire Eagle does NOT manage the social graph

It’s a service for sharing your data with friends (or services, or your toaster), but it doesn’t know who your friends are. The social graph has been outsourced. Best example of a small piece loosely joined I’ve seen in a long time.

Cares about privacy and ease of use

Ninja privacy is built in. But you don’t have to care. The TOS requires developers to discuss how the data is used. And privacy levels are front and center. And from day one data is delete-able, and in fact data is flushed on a regular basis.

Built on OAuth

Yay!

OAuth in PHP (for Twitter)

October 16th, 2007

Mike released HTTP_Request_OAuth today, so I spent a little while this evening coding up Service_Twitter as a helper class for making OAuth-authorized requests against the Twitter API.

Both are early enough in the dev cycle to be called proof of concepts.

Mostly I wrote it because I had always envisioned there being wrapper libraries around the low-level OAuth implementations, wrapping the calls and constants, and as Mike graciously went out and wrote a low-level library, I felt compelled to write a wrapper.
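
To make the shape of that concrete, here’s a rough sketch of the wrapper pattern. The class, the endpoint, and Generic_OAuth_Request are all made up for illustration; they are not the actual Service_Twitter or HTTP_Request_OAuth APIs.

    // Hypothetical sketch of the wrapper pattern: a thin Twitter-specific class
    // that bakes in the endpoint and constants, and delegates OAuth signing and
    // HTTP to a generic low-level request object (Generic_OAuth_Request is a
    // stand-in for whatever your low-level library provides).
    class Twitter_Wrapper {
        var $api_url = 'http://example.invalid/twitter-api/'; // placeholder endpoint

        var $consumer_key;
        var $consumer_secret;
        var $token;
        var $token_secret;

        function Twitter_Wrapper($consumer_key, $consumer_secret, $token, $token_secret) {
            $this->consumer_key    = $consumer_key;
            $this->consumer_secret = $consumer_secret;
            $this->token           = $token;
            $this->token_secret    = $token_secret;
        }

        // e.g. $twitter->call('statuses/update.json', array('status' => 'hello'))
        function call($method, $params = array()) {
            $request = new Generic_OAuth_Request(
                $this->consumer_key, $this->consumer_secret,
                $this->token, $this->token_secret
            );
            return $request->signed_post($this->api_url . $method, $params);
        }
    }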

Also twittclient, an interactive client for getting an authed access token, essential to bootstrapping development.

And nota bene, HRO currently only supports the MD5 signing algorithm, which is undefined in the core spec, and subject to change. (Just in case you didn’t believe me about the early state of things.)

update 2008/4/18

This code no longer works because Twitter has taken down their (slightly non-compliant) OAuth endpoint. When they add OAuth support back in, I’ll link to it.

FOO: Crowdvine, iCalico, Pathable, a Study in Collusion

July 11th, 2007

I didn’t make it to FOO this year, but I did send software in my stead, and it’s nice to hear that folks liked it.

We slaved iCalico to Crowdvine to add a social networking layer, a network that was walked, mapped, and color coded by the Pathable folks.

Tony has a nice report back on it, as does Shelly from Pathable (6 weeks aka a couple of late nights). And Scott Berkun (who owes me a copy of “Art of Project Management”!) said super nice things.

Collusion Patterns

So how do you do that — stitch together 3 different sites to provide a unified experience? Visions of APIs, Internet scale SSO, and messaging layers spring to mind. Or more likely hash and slash patches, jury rigged shunts, juggled install directories.

We did the dumb easy thing, and I’m surprised more people don’t do it.

  1. Crowdvine.com sets a cookie, collusion. This cookie contains the data we needed to display the logged-in view of iCalico (your nickname and, optionally, your URL). In addition it contains an md5 hash of the concatted data, plus a sekret known only to Tony and myself. (A rough sketch of both halves appears after this list.)

  2. If we find the cookie collusion, we load the described user from the database, or create it on the fly behind the scenes.

  3. There is no step 3.
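
Here’s that rough sketch. Everything about it is hypothetical reconstruction (names, cookie format, helper functions), not the actual Crowdvine or iCalico code, but it’s the whole trick:

    // Sketch of the "collusion" cookie scheme described above; all names and the
    // exact cookie format are hypothetical.

    define('COLLUSION_SEKRET', 'known only to the two sites');

    // On the Crowdvine side: set the cookie for a domain both sites can read.
    function set_collusion_cookie($nickname, $url) {
        $hash = md5($nickname . $url . COLLUSION_SEKRET);
        setcookie('collusion', $nickname . '|' . $url . '|' . $hash, 0, '/', '.example.org');
    }

    // On the iCalico side: verify the hash, then load or lazily create the user.
    function user_from_collusion_cookie() {
        if (empty($_COOKIE['collusion'])) {
            return null;
        }
        $parts = explode('|', $_COOKIE['collusion']);
        if (count($parts) != 3) {
            return null;
        }
        list($nickname, $url, $hash) = $parts;
        if ($hash != md5($nickname . $url . COLLUSION_SEKRET)) {
            return null; // tampered with, or the sekret doesn't match
        }
        return load_or_create_user($nickname, $url); // hypothetical helper
    }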

Amazingly useful, trivially simple, ultimately flexible. Niche sites are great, but you need techniques for stitching them together before they can realize their potential as pieces of an ecosystem. I don’t necessarily expect to see this kind of integration become more common, but I think it would be great if it did. (and in the name of transparency, disposable apps are huge enablers; disposable sites/apps is another pattern I’m puzzled we don’t see more of, it’s as if we’re more inclined to conserve bits than landfill them)

update: Whoops, it was pointed out there was a step 3, or rather a step 1.5: use CNAMEs to point to individual components on sub-domains.