riak, part 1

  1. how it works
  2. HTTP API
  3. hooks
  4. map/reduce
  5. map/reduce, system functions
  6. map/reduce, static args
  7. map/reduce, key filtering
  8. siblings

how it works

riak is a distributed, scalable key/value store

buckets and keys are the only way to organize data inside riak:

riak treats the bucket/key pair as a single entity when performing fetch/store operations

buckets might be compared to tables, and keys to primary keys, in relational databases


objects

riak objects are structs identified by bucket/key and composed of several parts: the bucket/key pair itself, a value, metadata, and a vector clock

nodes

physical servers, referred to in the cluster as nodes, run a certain number of virtual nodes, or vnodes
  • nodes can be added and removed from the cluster dynamically
  • all nodes in the cluster are equal
  • each node is fully capable of serving any client request

  • vector clock

    each update is tracked by a vector clock. vector clocks allow riak to determine causal ordering and to detect conflicts

    riak has two ways of resolving update conflicts:

  • let the last update automatically win
  • return the conflicting versions to the client

  • N-value

    controls how many replicas of a datum are stored. this value has a per-node default but can be overridden on each bucket. objects inherit the N-value of their bucket. all nodes in the same cluster should use the same N-value

    R-value

    the client is allowed to supply an R-value on each direct fetch

    the R-value represents the number of riak nodes which must return results for a read before the read is considered successful

    subtracting R from N tells you the number of down nodes a riak cluster can tolerate before becoming unavailable for reads (e.g. with N=3 and R=2, one node may be down)


    W-value

    the client is allowed to supply a W-value on each update

    the W-value represents the number of riak nodes which must report success before an update is considered complete

    subtracting W from N tells you the number of down nodes a riak cluster can tolerate before becoming unavailable for writes (e.g. with N=3 and W=2, one node may be down)
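
    for illustration, quorum values can also be supplied per request as query parameters on the HTTP interface (the bucket/key names here are just placeholders):

      $> curl localhost:8098/riak/foo/doc?r=1            # succeed once 1 replica answers
      $> curl -X PUT localhost:8098/riak/foo/doc?w=3 \
      >  -H "Content-Type: application/json" \
      >  -d '{"bar":"baz"}'                              # wait for all 3 replicas to ack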


    REST API

  • storage operations use PUT/POST
  • fetches use GET
  • deletes use DELETE
  • operations are submitted to a pre-defined URL

  • hooks

    hooks are defined on a per-bucket basis and are stored in the target bucket’s properties. they are run once per successful response

    they are invoked before (pre-commit) or after (post-commit) a value is persisted

    they can:

  • allow a write to occur with an unmodified object
  • modify the object
  • prevent any modifications

    post-commit hooks, on the other hand, should not modify the object. period


    map/reduce

    allows data to be processed in parallel, in real time

  • jobs are encoded in JSON (using a set of nested hashes describing the inputs, phases, and timeout)
  • a job can consist of an arbitrary number of Map and Reduce phases
  • a job is submitted via HTTP
  • the results are returned in JSON-encoded form

  • SecondaryIndexes

    tags an object with one or more field/value pairs. the object is indexed under these field/value pairs, and the application can later query the index to retrieve a list of matching keys

    the indexes are defined at the time the object is written. to change the indexes simply write the object with a different set of indexes

    indexing is real-time and atomic; the results show up in queries immediately after the write operation completes

    indexes can be stored and queried via the HTTP interface

    index results can feed directly into a Map/Reduce operation, allowing further filtering and processing
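
    a minimal sketch over HTTP, assuming a backend that supports secondary indexes (e.g. eleveldb); the bucket users and the index field email_bin are invented for illustration:

      $> curl -X PUT localhost:8098/buckets/users/keys/u1 \
      >  -H "Content-Type: application/json" \
      >  -H "x-riak-index-email_bin: jdoe@example.com" \
      >  -d '{"name":"jdoe"}'
      $> curl localhost:8098/buckets/users/index/email_bin/jdoe@example.com
       {"keys":["u1"]}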


    RiakSearch

    it is a distributed, easily-scalable, failure-tolerant, real-time, full-text search engine. Riak Search allows you to find and retrieve your riak objects using the objects’ values

    Links

    metadata that establish one-way relationships between objects

    riak can also return objects based on links stored on the object

    link walking can be used to return a set of related objects from a single request
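
    a small sketch (the people bucket and the friend tag are invented): store an object with a Link header, then walk the link by appending a bucket,tag,keep triple to the URL:

      $> curl -X PUT localhost:8098/riak/people/bob -H "Content-Type: text/plain" -d 'bob'
      $> curl -X PUT localhost:8098/riak/people/alice \
      >  -H "Content-Type: text/plain" \
      >  -H 'Link: </riak/people/bob>; riaktag="friend"' \
      >  -d 'alice'
      $> curl localhost:8098/riak/people/alice/_,friend,1    # returns bob's object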


    HTTP API

    never expose riak directly to the Internet - use some kind of proxy instead!

    storing

    comes in two forms, depending on whether you want to use a key of your choosing, or let riak assign a key

      POST /buckets/bucket_name/keys data           # with riak-defined key
      PUT /buckets/bucket_name/keys/key_name  data  # with user-defined key
    
    or:
      PUT /riak/bucket_name/key_name data           # using the raw_name URL prefix (default: /riak)
    
    with riak-defined key:
      $> curl -d 'this is a test' -H "Content-Type: text/plain" http://127.0.0.1:8098/riak/foo
      $> curl localhost:8098/buckets/foo/keys?keys=true
       {"keys":["WgHaAtvQeBll0peGn65kiv49mxI"]}
      $> curl localhost:8098/buckets/foo/keys/WgHaAtvQeBll0peGn65kiv49mxI
       this is a test
    
    with user-defined key:
      $ curl -d '{"bar":"baz"}' -H "content-Type: application/json" \ 
      > localhost:8098/riak/foo/doc?returnbody=true
       {"bar":"baz"}
      $> curl localhost:8098/buckets/foo/keys?keys=true
       {"keys":["WgHaAtvQeBll0peGn65kiv49mxI","doc"]}
      $> curl localhost:8098/buckets/foo/keys/doc
       {"bar":"baz"}
    

    reading

      GET /buckets/bucket_name/keys/key_name  
    
    or:
      GET /riak/bucket_name/key_name          # using the raw_name URL prefix (default: /riak)
    
    the response will be the contents of the object (except when siblings are present)
      $> curl localhost:8098/buckets/foo/keys/doc
       {"bar":"baz"}
    

    deletion

      DELETE /buckets/bucket_name/keys/key_name
    
    or:
      DELETE /riak/bucket_name/key_name
    
      $> curl -X DELETE http://127.0.0.1:8098/riak/foo/doc
    
    there is no straightforward way to delete an entire bucket. to delete all the keys in a bucket, you’ll need to delete them all individually
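
    a possible shell sketch for that (assumes the jq utility for JSON parsing; not part of riak):

      $> for k in $(curl -s localhost:8098/buckets/foo/keys?keys=true | jq -r '.keys[]'); do
      >   curl -s -X DELETE localhost:8098/buckets/foo/keys/$k
      > done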


    get keys

      GET /buckets/bucket_name/keys?keys=true
      GET /buckets/bucket_name/keys?keys=stream
    
      $> curl -X GET localhost:8098/buckets/foo/keys?keys=true
      {"keys":["6Rz1uaW1G3GtRnhTddmsNLT9A68","one"]}
    

    get buckets

      GET /buckets?buckets=true
    
      $> curl -X GET localhost:8098/buckets?buckets=true
      {"buckets":["bar","baz","foo"]}
    

    get the bucket properties

      GET /buckets/bucket_name/props 
    
      $> curl -X GET localhost:8098/buckets/foo/props
      {"props":
         {"name":"foo",
          "allow_mult":false,
          "basic_quorum":false,
          "big_vclock":50,
          "chash_keyfun":{"mod":"riak_core_util","fun":"chash_std_keyfun"},
          "dw":"quorum",
          "last_write_wins":false,
          "linkfun":{"mod":"riak_kv_wm_link_walker","fun":"mapreduce_linkfun"},
          "n_val":3,
          "notfound_ok":true,
          "old_vclock":86400,
          "postcommit":[],
          "pr":0,
          "precommit":[],
          "pw":0,
          "r":"quorum",
          "rw":"quorum",
          "small_vclock":50,
          "w":"quorum",
          "young_vclock":20}
       }
    
    for each bucket a number of configuration properties can be selectively defined, overriding the defaults
    n_val
    integer (default: 3)
    specifies the number of copies of each object to be stored in the cluster

    r, w
    all | quorum | one | integer (default: quorum)
    sets the number of node responses required before a read or write operation is considered successful

    precommit
    a list of erlang/javascript functions to be executed before writing an object

    postcommit
    a list of erlang functions to be executed after writing an object

    set the bucket properties

      PUT /riak/bucket_name propsdata
    
      $> curl -X PUT localhost:8098/riak/foo \
      >  -H "content-Type: application/json" \
      >  -d '{"props": {"n_val": 1}}'
    

    hooks

    functions which are executed during write/delete operations. they are defined on a per-bucket basis and are stored in the target bucket's properties

    pre-commit hooks

    the pre-commit hook function should take a single argument: the object being modified

    Erlang pre-commit functions are allowed three possible return values:

  • a riak object (the one passed in, possibly modified) - the write proceeds
  • the atom fail - the write is aborted with a generic error
  • the tuple {fail, Reason} - the write is aborted and Reason is returned to the client

    errors that occur when processing Erlang pre-commit hooks will be reported in the sasl-error.log file with lines that start with “problem invoking hook”

    deletes are also considered “writes”, so pre-commit hooks will be fired when a delete occurs. hook functions need to inspect the object for the X-Riak-Deleted metadata entry to determine when a delete is occurring (see the sketch after the example below)

      %% pre-commit hook: reject objects whose value exceeds 64KB
      -module(myprecommit).
      -export([check/1]).
    
      check(Object) ->
        case erlang:byte_size(riak_object:get_value(Object)) of
          Size when Size > 65535 ->
            {fail, "Object is too large."};   %% abort the write with a reason
          _ ->
            Object                            %% accept the write unchanged
        end.
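
    and a minimal sketch of delete detection in a pre-commit hook (module and function names are invented; riak_object:get_metadata/1 returns a dict):

      -module(mydelcheck).
      -export([check/1]).
    
      check(Object) ->
        MD = riak_object:get_metadata(Object),
        case dict:is_key(<<"X-Riak-Deleted">>, MD) of
          true  -> Object;   %% a delete is in progress - let it through
          false -> Object    %% a normal write - run validations here
        end.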
    

    the default value of the bucket precommit property is an empty list. adding one or more pre-commit hook functions to the list will cause riak to start evaluating those hook functions when bucket entries are created, updated, or deleted

    riak stops evaluating pre-commit hooks when a hook function fails the commit


    post-commit hooks

    post-commit hooks are run after the write has completed successfully.

    functions must accept a single argument, the object instance just written

    the return value of the function is ignored

    errors that occur when processing post-commit hooks will be reported in the sasl-error.log file with lines that start with “problem invoking hook”

    as with pre-commit hooks, deletes are considered writes so post-commit hook functions will need to inspect object metadata for the presence of X-Riak-Deleted to determine when a delete has occurred

      %% post-commit hook: append a line to a log file for every write
      -module(mypostcommit).
      -export([log/1]).
    
      log(Object) ->
        K = riak_object:key(Object),
        {ok, S} = file:open("/tmp/opt/log", [append]),
        io:format(S, "object with key ~s inserted at ~p ~p~n", [K, date(), time()]),
        file:close(S),
        ok.
    

    configuration

    add a reference to your hook function to the list of functions stored in the precommit/postcommit bucket property {"mod" : "mymodname", "fun" : "myfuncname"}

    pre-commit hooks are stored under the bucket property precommit

      $> curl -X PUT localhost:8098/riak/foo \
      > -H "content-Type: application/json" \
      > -d '{"props": {"precommit": [{"mod":"myprecommit", "fun":"check"}]}}'
    
    post-commit hooks use the bucket property postcommit
      $> curl -X PUT localhost:8098/riak/foo \
      > -H "content-Type: application/json" \
      > -d '{"props": {"postcommit": [{"mod":"mypostcommit", "fun":"log"}]}}'
    
    put the compiled .beam files in the designated directory

    for example if your riak config file (in my configuration - /opt/riak/etc/app.config) contains:

      {riak_kv, [ 
          ... 
          {add_paths, ["/opt/riak/hooks/"]}, 
          ... 
      ]} 
    

    then put the compiled .beam files in /opt/riak/hooks/
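
    a possible compile step (erlc ships with Erlang/OTP):

      $> erlc -o /opt/riak/hooks/ myprecommit.erl mypostcommit.erl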


    map/reduce

    in riak, map/reduce is the method for non-key-based querying

    in riak map/reduce is intended for batch processing, not for interactive querying


    when to use

  • you know the set of objects you want to map/reduce over
  • you need flexibility in querying

    when not to use

  • you want to query an entire bucket
  • you want predictable latency

    m/r queries have two components:

  • a list of inputs (with elements as bucket/key pairs)
  • a list of phases (each element describing one processing step)

    a job consists of a list of phases (each either a map or a reduce)

    the map phase consists of a function and a list of objects the function will operate on

    the reduce phase consists of a function and a list of preliminary results the function will operate on

    execution order:

    1. the client makes a request to riak
    2. the node the client contacts becomes the coordinating node
    3. the coordinating node routes the object keys, together with the map function, to the vnodes responsible for those objects
    4. each vnode runs the function over its particular objects
    5. results are sent back to the coordinating node (as lists)
    6. the coordinating node concatenates the lists from the different vnodes and passes the result to the reduce phase function
    7. the coordinating node sends the result of the reduce phase back to the client

    HTTP API

      POST /mapred -H 'Content-Type: application/json' data 
    
    the Content-Type must always be application/json. the request must include data (usually supplied as a file), which is the JSON form of the map/reduce query

    JSON and riak

    the javascript object { "foo" : "word", "bar" : false, "baz" : 3.7 } is the equivalent of the Erlang term {struct, [ { <<"foo">>, <<"word">> }, { <<"bar">>, false }, { <<"baz">>, 3.7 } ]}. in Erlang lingo it is a proplist wrapped in a tagged tuple

    function mochijson2:decode/1 translates JS types into Erlang types:

      ---------------+----------------------------------------
          JS         |     Erlang
      ---------------+----------------------------------------
          Num        |     number()
          Str        |     binary()
          Array      |     [term()]
          Object     |     {struct, [{binary(), term()}]}
          true       |     true
          false      |     false
          null       |     null
      ---------------+-----------------------------------------
    
    function mochijson2:encode/1 makes the inverse translation
      $> sudo riak attach
      Attaching to /tmp//opt/riak1/erlang.pipe.1 (^D to exit)
    
      (riak1@127.0.0.1)1> B = mochijson2:decode(binary_to_list(<<"{\"sKey\" : \"Val\", \"iKey\" : 10,
      (riak1@127.0.0.1)1> \"aKey\" : [1, 2, 3], \"oKey\":{\"a\":1.1, \"b\":2.3, \"c\":true}}">>)).
      {struct,[{<<"sKey">>, <<"Val">>},
               {<<"iKey">>, 10},
               {<<"aKey">>, [1,2,3]},
               {<<"oKey">>, {struct,[{<<"a">>,1.1},{<<"b">>,2.3},{<<"c">>,true}]}}]}
      (riak1@127.0.0.1)2> iolist_to_binary(mochijson2:encode(B)).
      <<"{\"sKey\":\"Val\",\"iKey\":10,\"aKey\":[1,2,3],\"oKey\":{\"a\":1.1,\"b\":2.3,\"c\":true}}">>
    
    map/reduce queries have a default timeout of 60000 milliseconds. the default timeout can be overridden by supplying a different value, in milliseconds, in the JSON document
     
      {"inputs":[...inputs...], "query":[...query...], "timeout": 90000}
     
    when the timeout hits, the node coordinating the MapReduce request cancels it and returns an error to the client

    the list of input objects is given as a list of 2-element lists of the form:

      [Bucket, Key]
    
    or 3-element lists of the form:
      [Bucket, Key, KeyData]
    
    you may also pass just the name of a bucket {"inputs" : "mybucket", ...}, which is equivalent to passing all of the keys in that bucket as inputs (i.e. a map/reduce across the whole bucket). note that this triggers an expensive list-keys operation

    map phase functions

    a map function takes three arguments (arity 3):
    1. Value : the value found at a key
      in Erlang it is manipulated via the riak_object module
    2. KeyData : key data that was submitted with the inputs to the query or phase
    3. Arg : a static argument for the entire phase that was submitted with the query
    the function should produce a list of values (usually a list with a single value)

    reduce phase functions

    a reduce function takes two arguments:
    1. ValueList : the list of values produced by the preceding phase in the MapReduce query
    2. Arg : a static argument for the entire phase that was submitted with the query
    the function should produce a list of values
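
    a minimal sketch of such a pair of phase functions (the module name is invented; note that a reduce function may be re-run on partial results, so it should tolerate its own output as input):

      -module(myphases).
      -export([mymap/3, myreduce/2]).
    
      %% map: emit one value per input object
      %% (the size of its value, assuming binary values)
      mymap(Object, _KeyData, _Arg) ->
        [erlang:byte_size(riak_object:get_value(Object))].
    
      %% reduce: fold the collected values into a single-element list
      myreduce(ValueList, _Arg) ->
        [lists:sum(ValueList)].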


    map/reduce on native erlang

    1. in riak config file ($RIAKDIR/riak/etc/app.config):
         {riak_kv, [ 
          ... 
          {add_paths, ["/opt/hooks/","/opt/mapred/"]}, 
          ... 
        ]} 
      
    2. input data:
        curl -XPUT localhost:8098/buckets/foo/keys/mykey1 \
        -H 'Content-Type: application/json' -d '{"ammount":1}'  
        curl -XPUT localhost:8098/buckets/foo/keys/mykey2 \
        -H 'Content-Type: application/json' -d '{"ammount":2}'  
        curl -XPUT localhost:8098/buckets/foo/keys/mykey3 \
        -H 'Content-Type: application/json' -d '{"ammount":4}'  
        curl -XPUT localhost:8098/buckets/foo/keys/mykey4 \
        -H 'Content-Type: application/json' -d '{"ammount":8}'  
      
    3. module for map phase:
       if you stored the object via the HTTP API, its value will be a binary; if that content is JSON you need to parse it, and riak already ships with a JSON decoder, mochijson2:decode/1:
        > mochijson2:decode(<<"{\"ammount\" : 4}">>).
         {struct, [ {<<"ammount">>, 4} ]}
      
      btw, the opposite encode function is:
        > iolist_to_binary(mochijson2:encode({struct, [ {ammount, 4} ]})).           
         <<"{\"ammount\" : 4}">>
      
      so:
        -module(mymapred).
        -export([mymap/3]).
      
        %% map phase: pull the "ammount" field out of the JSON object value
        mymap(VData, _KData, _Arg) ->
          ObjJson = riak_object:get_value(VData),
          {struct, JsonData} = mochijson2:decode(ObjJson),
          X = proplists:get_value(<<"ammount">>, JsonData),
          [X].
      
      compile it and upload mymapred.beam to /opt/mapred/

    4. prepare myerlang.json data file:
         {"inputs": [ ["foo","mykey1"], ["foo","mykey2"], ["foo","mykey3"], ["foo","mykey4"] ],
         "query" : [
                     {"map"    :  {"language" : "erlang",
                                   "module"   : "mymapred",
                                   "function" : "mymap"           }},
                     {"reduce" :  {"language" : "erlang",
                                   "module"   : "riak_kv_mapreduce",
                                   "function" : "reduce_sum"      }}
                   ]
        }
      
    5. load module:
        $> cd /opt/mapred/ 
        $> riak attach 
          Attaching to /tmp//opt/riak/erlang.pipe.1 (^D to exit)
      
         (riak@127.0.0.1)1> l(mymapred).
         {module,mymapred}
      
    or you could just restart the cluster (in a dev cycle)
    6. now, let us use all that stuff:
        $> curl -XPOST localhost:8098/mapred -d @myerlang.json \
        > -H 'Content-Type: application/json'
         [15]
      
    profit!

    map/reduce, system functions

    there are some predefined functions in the system module riak_kv_mapreduce:
    map_identity/3
    returns each object handed to it
    map_object_value/3
    returns the values of the objects from the input

    reduce_identity/2
    returns [Bucket, Key] for each input BKey
    reduce_sum/2
    produces the sum of the inputs
    reduce_count_inputs/2
    counts the inputs
     
      {"inputs": [ ["foo","mykey2"], ["foo","mykey3"], ["foo","mykey7"] ],
       "query" : [
          {"map"    :  {"language" : "erlang",
                        "module"   : "riak_kv_mapreduce",
                        "function" : "map_identity" }},
          {"reduce" :  {"language" : "erlang",
                        "module"   : "riak_kv_mapreduce",
                        "function" : "reduce_count_inputs" }} ]}
     
    or
      {"inputs": [ ["foo","mykey16"], ["foo","mykey13"], ["foo","mykey11"] ],
       "query" : [ {"map"      :  {"language" : "erlang",
                                   "module"   : "riak_kv_mapreduce",
                                   "function" : "map_object_value" }},
                   {"reduce"   : .........................................  }]}
    

    map/reduce, static args

    you can pass external static args to a map/reduce phase function. they are defined as a key/value pair where the key is the string "arg" and the value is an object of key/value pairs. the static arg is passed as the third argument of the phase function

    these static args arrive at the map/reduce function already decoded, in the form {struct,[{<<"argName">>,ArgVal}]}, so you should not decode them - just pattern match to obtain the Erlang proplist

    suppose you have a file mystat.erl with the content:

      -module(mystat).
      -export([mysum/3]).
    
      mysum(VData, _KData, Args) ->
        ObjJson = riak_object:get_value(VData),
        {struct, JsonData} = mochijson2:decode(ObjJson),
        X = proplists:get_value(<<"ammount">>, JsonData),
    
        %% static args arrive pre-decoded as {struct, Proplist}
        {struct, ArgsPropLst} = Args,
        Min = proplists:get_value(<<"vmin">>, ArgsPropLst, 1),
        Max = proplists:get_value(<<"vmax">>, ArgsPropLst, 100),
    
        %% count values inside [Min, Max]; others contribute 0
        case (X >= Min andalso X =< Max) of
          true  -> [X];
          false -> [0]
        end.
    
    compile it and save mystat.beam in the appropriate directory (see add_paths above)

    create file mystat.json:

      {"inputs":[
        ["foo","mykey21"], ["foo","mykey22"], ["foo","mykey23"], ["foo","mykey24"] ],
       "query":[
         {"map":{"arg":{"vmin":10.0, "vmax":20.0},
                 "language":"erlang",
                 "module":"mystat",
                 "function":"mysum"}},
         {"reduce":{"language":"erlang",
                    "module":"riak_kv_mapreduce",
                    "function":"reduce_sum"}}]}
    
    populate data:
      $> rs=localhost:8098
      $> curl -XPUT $rs/buckets/foo/keys/mykey21 -H 'Content-Type: application/json' -d '{"ammount":5}'  
      $> curl -XPUT $rs/buckets/foo/keys/mykey22 -H 'Content-Type: application/json' -d '{"ammount":12}'  
      $> curl -XPUT $rs/buckets/foo/keys/mykey23 -H 'Content-Type: application/json' -d '{"ammount":17}'  
      $> curl -XPUT $rs/buckets/foo/keys/mykey24 -H 'Content-Type: application/json' -d '{"ammount":24}'  
    
    and now:
      $> curl $rs/mapred -H'Content-Type: application/json' -d@mystat.json
      [29]
    

    map/reduce, key filtering

    map/reduce can filter on key name. key filters are a way to pre-process map/reduce inputs from a full-bucket query simply by examining the key, without loading the object

    transform functions

    transform key-filter functions turn the key into a format suitable for testing by the predicate functions
    int_to_string
    turns an integer (previously extracted with string_to_int) into a string
    string_to_int
    turns a string into an integer
    float_to_string
    turns a floating point number (previously extracted with string_to_float) into a string
    string_to_float
    turns a string into a floating point number
    to_upper
    changes all letters to uppercase
    to_lower
    changes all letters to lowercase
    tokenize
    splits the input on the string given as the first argument and returns the nth token specified by the second argument
      [["tokenize", "/", 4]]  
    urldecode
    URL-decodes the string

    predicate functions

    predicate functions should be specified last in a sequence of key-filters (often preceded by transform functions)
    greater_than
    tests that the input is greater than the argument
      [["greater_than", 50]]  
    less_than
    tests that the input is less than the argument
      [["less_than", 10]]  
    greater_than_eq
    tests that the input is greater than or equal to the argument
      [["greater_than_eq", 2000]]  
    less_than_eq
    tests that the input is less than or equal to the argument
      [["less_than_eq", -2]]  
    between
    tests that the input is between the first two arguments. the optional third argument specifies whether the range is inclusive; if omitted, the range is treated as inclusive
      [["between", 10, 20, false]]  
    matches
    tests that the input matches the regular expression given in the argument
      [["matches", "solutions"]]  
    eq
    tests that the input is equal to the argument
      [["eq", "bar"]]  
    neq
    tests that the input is not equal to the argument
      [["neq", "foo"]]  
    set_member
    tests that the input is contained in the set given as the arguments
      [["set_member", "foo", "bar", "baz"]]  
    similar_to
    tests that input is within the Levenshtein distance of the first argument given by the second argument. the Levenshtein distance between two words is equal to the number of single-character edits required to change one word into the other
      [["similar_to", "newyork", 3]]  
    starts_with
    tests that the input begins with the argument (a string)
      [["starts_with", "closed"]]  
    ends_with
    tests that the input ends with the argument (a string)
      [["ends_with", "0603"]]  

    logical functions

    and
    joins two or more key-filter operations with a logical AND operation
      ["and", [["ends_with", "0603"]], [["starts_with", "foo"]]]  
    or
    joins two or more key-filter operations with a logical OR operation
      ["or", [["eq", "google"]], [["less_than", "g"]]]  
    not
    negates the result of key-filter operations
      ["not", [["matches", "solution"]]]  

    example

    file filter.json:
      {"inputs":{ "bucket":"baz",
                  "key_filters":[["and", [["starts_with", "mykey3"]], [["neq", "mykey32"]]]]},
       "query" :{ .............  }}
    

    siblings

    when allow_mult is set to true in the bucket properties, concurrent updates are allowed to create sibling objects, meaning that the object has any number of different values that are related to one another by the vector clock. this allows your application to use its own conflict resolution technique

    an object with multiple sibling values will result in a 300 Multiple Choices HTTP response. if the Accept header prefers multipart/mixed, all siblings will be returned in a single request as sections of the multipart/mixed response body. otherwise, a list of “vtags” will be given in a simple text format. you can request individual siblings by adding the vtag query parameter

      $> curl -X PUT localhost:8098/riak/foo \
      > -H "Content-Type: application/json" \
      > -d '{"props":{"allow_mult":true}}'
    
      $> curl -X PUT -d '{"bar":"faz"}' \
      > -H "Content-Type: application/json" \
      > localhost:8098/riak/foo/doc?returnbody=true
    
      --QGmbJ27hW8flvB9kEKft7sDn3ln
      Content-Type: application/json
      Link: </riak/foo>; rel="up"
      Etag: 24Kzb7A7IK3ZLojN7XhH5c
      Last-Modified: Sat, 26 Jan 2013 02:13:12 GMT
    
      {"bar":"baz"}
      --QGmbJ27hW8flvB9kEKft7sDn3ln
      Content-Type: application/json
      Link: </riak/foo>; rel="up"
      Etag: 3APkPFgHehgOIJnkfV9ILi
      Last-Modified: Sat, 26 Jan 2013 03:07:05 GMT
    
      {"bar":"faz"}
      --QGmbJ27hW8flvB9kEKft7sDn3ln--
    
    manually requesting siblings:
      $> curl http://127.0.0.1:8098/riak/foo/doc
       Siblings:
       24Kzb7A7IK3ZLojN7XhH5c
       3APkPFgHehgOIJnkfV9ILi
    
      $> curl http://127.0.0.1:8098/riak/foo/doc?vtag=24Kzb7A7IK3ZLojN7XhH5c
       {"bar":"baz"}
      $> curl http://127.0.0.1:8098/riak/foo/doc?vtag=3APkPFgHehgOIJnkfV9ILi
       {"bar":"faz"}
    
    get all siblings in one request:
      $> curl http://127.0.0.1:8098/riak/foo/doc -H "Accept: multipart/mixed"
    
      --6CU09OFNxgReA7Ye5ccdAO2y5To
      Content-Type: application/json
      Link: </riak/foo>; rel="up"
      Etag: 24Kzb7A7IK3ZLojN7XhH5c
      Last-Modified: Sat, 26 Jan 2013 02:13:12 GMT
    
      {"bar":"baz"}
      --6CU09OFNxgReA7Ye5ccdAO2y5To
      Content-Type: application/json
      Link: </riak/foo>; rel="up"
      Etag: 3APkPFgHehgOIJnkfV9ILi
      Last-Modified: Sat, 26 Jan 2013 03:07:05 GMT
    
      {"bar":"faz"}
      --6CU09OFNxgReA7Ye5ccdAO2y5To--
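
    to resolve siblings, write a new value back with the vector clock obtained from a fetch (a sketch; $VCLOCK stands for the X-Riak-Vclock header value returned by the GET):

      $> curl -i http://127.0.0.1:8098/riak/foo/doc -H "Accept: multipart/mixed"
         ... note the X-Riak-Vclock response header ...
      $> curl -X PUT http://127.0.0.1:8098/riak/foo/doc \
      > -H "Content-Type: application/json" \
      > -H "X-Riak-Vclock: $VCLOCK" \
      > -d '{"bar":"merged"}'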