view mercurial/peer.py @ 37060:2ec1fb9de638

wireproto: add request IDs to frames

One of my primary goals with the new wire protocol is to make operations faster and to enable both client- and server-side operations to scale to multiple CPU cores. One of the ways we make server interactions faster is by reducing the number of round trips to the server.

With the existing wire protocol, the "batch" command facilitates executing multiple commands from a single request payload: the requests for multiple commands are serialized, the server executes those commands sequentially, and then it serializes all their results. As an optimization for reducing round trips, this is very effective. The technical implementation, however, is pretty bad and suffers from a number of deficiencies. For example, it creates a new place where authorization to run a command must be checked. (The lack of this checking in older Mercurial releases was CVE-2018-1000132.)

The principles behind the "batch" command are sound, but the execution is not. Therefore, I want to ditch "batch" in the new wire protocol and have protocol-level support for issuing multiple requests in a single round trip.

This commit introduces support in the frame-based wire protocol to facilitate this. We do this by adding a "request ID" to each frame. If a server sees frames associated with different request IDs, it handles them as separate requests, possibly all arriving as part of the same message from client to server (the same request body in the case of HTTP).

We /could/ model the exchange the way pipelined HTTP requests do, where the server processes requests in the order they are issued and received. But this artificially constrains scalability. A better model is to allow multiple requests to be executed concurrently and for responses to be sent and handled concurrently. So the specification explicitly allows this. There is some work to be done around specifying dependencies between requests. We take the easy road for now and punt on this problem, declaring that if order is important, clients must not issue a request until responses to the requests it depends on have been received.

This commit focuses on the boilerplate of implementing the request ID. The server reactor still can't manage multiple in-flight request IDs; this will be addressed in a subsequent commit. Because the wire semantics have changed, we bump the version of the media type.

Differential Revision: https://phab.mercurial-scm.org/D2869
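
To make the demultiplexing concrete, here is a minimal sketch of packing a request ID into a frame header and parsing it back out, so a server can tell interleaved requests apart. The field widths and layout below are assumptions chosen for illustration, not the actual header format defined in wireprotoframing.py:

import struct

def makeframe(requestid, frametype, flags, payload):
    # Hypothetical header: 2-byte little-endian request ID, one byte of
    # frame type, one byte of flags, followed by the payload.
    return struct.pack('<HBB', requestid, frametype, flags) + payload

def parseframe(data):
    # Recover the header fields and the payload from a received frame.
    requestid, frametype, flags = struct.unpack_from('<HBB', data, 0)
    return requestid, frametype, flags, data[4:]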
author Gregory Szorc <gregory.szorc@gmail.com>
date Wed, 14 Mar 2018 16:51:34 -0700
parents 115efdd97088
children

# peer.py - repository base classes for mercurial
#
# Copyright 2005, 2006 Matt Mackall <mpm@selenic.com>
# Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from __future__ import absolute_import

from . import (
    error,
    pycompat,
    util,
)

# abstract batching support

class future(object):
    '''placeholder for a value to be set later'''
    def set(self, value):
        if util.safehasattr(self, 'value'):
            raise error.RepoError("future is already set")
        self.value = value
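
# Illustrative usage (not part of the original file): a future is filled in
# exactly once; a second set() raises RepoError.
def _examplefuture():
    f = future()
    f.set(42)
    assert f.value == 42
    try:
        f.set(7)
    except error.RepoError:
        pass  # the value was already set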

class batcher(object):
    '''base class for batches of commands submittable in a single request

    All methods invoked on instances of this class are simply queued and
    return a future for the result. Once you call submit(), all the queued
    calls are performed and the results set in their respective futures.
    '''
    def __init__(self):
        self.calls = []
    def __getattr__(self, name):
        def call(*args, **opts):
            resref = future()
            # Please don't invent non-ascii method names, or you will
            # give core hg a very sad time.
            self.calls.append((name.encode('ascii'), args, opts, resref,))
            return resref
        return call
    def submit(self):
        raise NotImplementedError()

class iterbatcher(batcher):
    '''batcher whose results are consumed iteratively via results()'''

    def submit(self):
        raise NotImplementedError()

    def results(self):
        raise NotImplementedError()

class localiterbatcher(iterbatcher):
    def __init__(self, local):
        super(localiterbatcher, self).__init__()
        self.local = local

    def submit(self):
        # submit for a local iter batcher is a noop
        pass

    def results(self):
        for name, args, opts, resref in self.calls:
            resref.set(getattr(self.local, name)(*args, **opts))
            yield resref.value
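
# Illustrative sketch (not part of the original file): how a caller might
# drive a localiterbatcher. "localpeer" stands in for any local object
# exposing the batched methods; heads() and known() are used purely as
# examples of queued calls.
def _exampleiterbatch(localpeer):
    batch = localiterbatcher(localpeer)
    fheads = batch.heads()             # queued only; returns a future
    fknown = batch.known([b'abc123'])  # queued only; returns a future
    batch.submit()                     # a no-op for the local case
    # results() performs the calls lazily, filling each future in turn.
    for value in batch.results():
        pass
    return fheads.value, fknown.value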

def batchable(f):
    '''annotation for batchable methods

    Such methods must implement a coroutine as follows:

    @batchable
    def sample(self, one, two=None):
        # Build list of encoded arguments suitable for your wire protocol:
        encargs = [('one', encode(one),), ('two', encode(two),)]
        # Create future for injection of encoded result:
        encresref = future()
        # Return encoded arguments and future:
        yield encargs, encresref
        # Assuming the future to be filled with the result from the batched
        # request now. Decode it:
        yield decode(encresref.value)

    The decorator returns a function which wraps this coroutine as a plain
    method, but adds the original method as an attribute called "batchable",
    which is used by remotebatch to split the call into separate encoding and
    decoding phases.
    '''
    def plain(*args, **opts):
        batchable = f(*args, **opts)
        encargsorres, encresref = next(batchable)
        if not encresref:
            return encargsorres # a local result in this case
        self = args[0]
        cmd = pycompat.bytesurl(f.__name__)  # ensure cmd is ascii bytestr
        encresref.set(self._submitone(cmd, encargsorres))
        return next(batchable)
    setattr(plain, 'batchable', f)
    return plain
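
# Illustrative sketch (not part of the original file): what a @batchable
# peer method can look like. The wire encoding and the _submitone() stub
# below are assumptions made so the example is self-contained; real peers
# encode arguments per the wire protocol.
class _examplepeer(object):
    def _submitone(self, cmd, encargs):
        # A real peer would send the command over the wire and return the
        # encoded response; we fake one here.
        return b'1'

    @batchable
    def known(self, nodes):
        # Phase 1: yield the encoded arguments and a future for the reply.
        f = future()
        yield [(b'nodes', b' '.join(nodes))], f
        # Phase 2: the future now holds the encoded result; decode it.
        yield [v == b'1' for v in f.value.split()]

# Example: _examplepeer().known([b'abc123']) returns [True].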