view mercurial/logexchange.py @ 44118:f81c17ec303c
hgdemandimport: apply lazy module loading to sys.meta_path finders
Python's `sys.meta_path` finders are the primary objects whose job it
is to find a module at import time. When `import` is called, Python
iterates objects in this list and calls `o.find_spec(...)` to find
a `ModuleSpec` (or None if the module couldn't be found by that
finder). If no meta path finder can find a module, import fails.
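For illustration, the lookup that `import` performs is roughly the loop below. This is a simplified sketch (the helper name is made up for illustration); the real machinery in `importlib._bootstrap` also handles parent packages, submodule `path` arguments, and caching in `sys.modules`:

```
import sys

def find_module_spec(name):
    # Ask each registered meta path finder for a ModuleSpec.
    for finder in sys.meta_path:
        spec = finder.find_spec(name, None)
        if spec is not None:
            return spec
    # No finder could locate the module.
    raise ModuleNotFoundError('No module named %r' % name)

# The spec carries the loader that will create and execute the module.
print(find_module_spec('json').loader)
```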
One of the default meta path finders is `PathFinder`. Its job is to
import modules from the filesystem, and it is probably the most important
importer. This finder consults `sys.path` and `sys.path_hooks` to do
its job.
The `ModuleSpec` returned by `MetaPathImporter.find_spec()` has a
`loader` attribute, which defines the concrete module loader to use.
`sys.path_hooks` is a hook point for teaching `PathFinder` to
instantiate custom loader types.
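As a concrete illustration (a hedged sketch; the exact loader types vary by Python version and module), `PathFinder` can be asked directly for a spec, and `sys.path_hooks` holds the callables it uses to build per-directory finders:

```
import importlib.machinery
import sys

# PathFinder locates filesystem modules and returns a ModuleSpec whose
# `loader` attribute knows how to create and exec the module.
spec = importlib.machinery.PathFinder.find_spec('logging')
print(spec.loader)  # typically a SourceFileLoader for a .py file

# PathFinder builds per-path-entry finders via the hooks registered here;
# injecting a custom hook is how the previous lazy-import approach worked.
print(sys.path_hooks)
```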
Previously, we injected a custom hook into `sys.path_hooks` that told
`PathFinder` to wrap the default loaders with a loader that creates
lazy module objects.
This approach worked. But its main limitation was that it only applied
to the `PathFinder` meta path importer. Other meta path importers are
also registered. And in the case of PyOxidizer loading modules from
memory, `PathFinder` doesn't come into play at all, since PyOxidizer's
own meta path importer handles all imports.
This commit changes our approach to lazy module loading by proxying
all meta path importers. Specifically, we overload the `find_spec()`
method to swap in a wrapped loader on the `ModuleSpec` before it
is returned. The end result is that all meta path importers should
be lazy.
As much as I would have loved to utilize `.__class__` manipulation to
achieve this, some meta path importers are implemented in C/Rust
in such a way that they cannot be monkeypatched. This is why we
use `__getattribute__` to define a proxy.
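The shape of that proxy looks roughly like the following. This is a minimal sketch, not the actual hgdemandimport code: the class name `_LazyFinderProxy` is made up, `importlib.util.LazyLoader` stands in for Mercurial's own lazy loader, and the real implementation is more selective about which loaders and modules it wraps:

```
import importlib.util
import sys

class _LazyFinderProxy(object):
    """Proxy a meta path finder so find_spec() returns lazy-loading specs."""

    def __init__(self, finder):
        object.__setattr__(self, '_finder', finder)

    def __getattribute__(self, name):
        finder = object.__getattribute__(self, '_finder')

        if name != 'find_spec':
            # Forward everything except find_spec to the wrapped finder
            # untouched, so the proxy behaves like the original object.
            return getattr(finder, name)

        def find_spec(fullname, path, target=None):
            spec = finder.find_spec(fullname, path, target)
            # Swap in a lazy loader so the module body isn't executed
            # until an attribute of the module is first accessed.
            if spec is not None and spec.loader is not None:
                spec.loader = importlib.util.LazyLoader(spec.loader)
            return spec

        return find_spec

# Wrap every registered meta path finder, not just PathFinder.
sys.meta_path = [_LazyFinderProxy(f) for f in sys.meta_path]
```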
Also, this change could theoretically open us up to regressions in
meta path importers whose loaders create module objects that
can't be monkeypatched. But I'm not aware of any of these in the
wild, so I think we'll be safe.
According to hyperfine, this change yields a decent startup time win of
5-6ms:
```
Benchmark #1: ~/.pyenv/versions/3.6.10/bin/python ./hg version
  Time (mean ± σ):      86.8 ms ±  0.5 ms    [User: 78.0 ms, System: 8.7 ms]
  Range (min … max):    86.0 ms … 89.1 ms    50 runs

  Time (mean ± σ):      81.1 ms ±  2.7 ms    [User: 74.5 ms, System: 6.5 ms]
  Range (min … max):    77.8 ms … 90.5 ms    50 runs

Benchmark #2: ~/.pyenv/versions/3.7.6/bin/python ./hg version
  Time (mean ± σ):      78.9 ms ±  0.6 ms    [User: 70.2 ms, System: 8.7 ms]
  Range (min … max):    78.1 ms … 81.2 ms    50 runs

  Time (mean ± σ):      73.4 ms ±  0.6 ms    [User: 65.3 ms, System: 8.0 ms]
  Range (min … max):    72.4 ms … 75.7 ms    50 runs

Benchmark #3: ~/.pyenv/versions/3.8.1/bin/python ./hg version
  Time (mean ± σ):      78.1 ms ±  0.6 ms    [User: 70.2 ms, System: 7.9 ms]
  Range (min … max):    77.4 ms … 80.9 ms    50 runs

  Time (mean ± σ):      72.1 ms ±  0.4 ms    [User: 64.4 ms, System: 7.6 ms]
  Range (min … max):    71.4 ms … 74.1 ms    50 runs
```
Differential Revision: https://phab.mercurial-scm.org/D7954
author | Gregory Szorc <gregory.szorc@gmail.com>
---|---
date | Mon, 20 Jan 2020 23:51:25 -0800
parents | d783f945a701
children | 89a2afe31e82
```
# logexchange.py
#
# Copyright 2017 Augie Fackler <raf@durin42.com>
# Copyright 2017 Sean Farley <sean@farley.io>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from __future__ import absolute_import

from .node import hex
from . import (
    pycompat,
    util,
    vfs as vfsmod,
)

# directory name in .hg/ in which remotenames files will be present
remotenamedir = b'logexchange'


def readremotenamefile(repo, filename):
    """
    reads a file from the .hg/logexchange/ directory and yields its content
    filename: the file to be read
    yields a tuple (node, remotepath, name)
    """

    vfs = vfsmod.vfs(repo.vfs.join(remotenamedir))
    if not vfs.exists(filename):
        return
    f = vfs(filename)
    lineno = 0
    for line in f:
        line = line.strip()
        if not line:
            continue
        # contains the version number
        if lineno == 0:
            lineno += 1
        try:
            node, remote, rname = line.split(b'\0')
            yield node, remote, rname
        except ValueError:
            pass

    f.close()


def readremotenames(repo):
    """
    reads the details about the remotenames stored in .hg/logexchange/ and
    yields a tuple (node, remotepath, name). It does not yield information
    about whether an entry yielded is a branch or a bookmark. To get that
    information, call the respective functions.
    """

    for bmentry in readremotenamefile(repo, b'bookmarks'):
        yield bmentry
    for branchentry in readremotenamefile(repo, b'branches'):
        yield branchentry


def writeremotenamefile(repo, remotepath, names, nametype):
    vfs = vfsmod.vfs(repo.vfs.join(remotenamedir))
    f = vfs(nametype, b'w', atomictemp=True)
    # write the storage version info on top of file
    # version '0' represents the very initial version of the storage format
    f.write(b'0\n\n')

    olddata = set(readremotenamefile(repo, nametype))
    # re-save the data from a different remote than this one.
    for node, oldpath, rname in sorted(olddata):
        if oldpath != remotepath:
            f.write(b'%s\0%s\0%s\n' % (node, oldpath, rname))

    for name, node in sorted(pycompat.iteritems(names)):
        if nametype == b"branches":
            for n in node:
                f.write(b'%s\0%s\0%s\n' % (n, remotepath, name))
        elif nametype == b"bookmarks":
            if node:
                f.write(b'%s\0%s\0%s\n' % (node, remotepath, name))

    f.close()


def saveremotenames(repo, remotepath, branches=None, bookmarks=None):
    """
    saves remotenames, i.e. remote bookmarks and remote branches, in their
    respective files under the ".hg/logexchange/" directory.
    """
    wlock = repo.wlock()
    try:
        if bookmarks:
            writeremotenamefile(repo, remotepath, bookmarks, b'bookmarks')
        if branches:
            writeremotenamefile(repo, remotepath, branches, b'branches')
    finally:
        wlock.release()


def activepath(repo, remote):
    """returns remote path"""
    # is the remote a local peer
    local = remote.local()

    # determine the remote path from the repo, if possible; else just
    # use the string given to us
    rpath = remote
    if local:
        rpath = util.pconvert(remote._repo.root)
    elif not isinstance(remote, bytes):
        rpath = remote._url

    # represent the remotepath with the user-defined path name if it exists
    for path, url in repo.ui.configitems(b'paths'):
        # remove auth info from user defined url
        noauthurl = util.removeauth(url)

        # Standardize on unix style paths, otherwise some {remotenames} end up
        # being an absolute path on Windows.
        url = util.pconvert(bytes(url))
        noauthurl = util.pconvert(noauthurl)
        if url == rpath or noauthurl == rpath:
            rpath = path
            break

    return rpath


def pullremotenames(localrepo, remoterepo):
    """
    pulls bookmark and branch information from the remote repo during a
    pull or clone operation.

    localrepo is our local repository
    remoterepo is the peer instance
    """
    remotepath = activepath(localrepo, remoterepo)

    with remoterepo.commandexecutor() as e:
        bookmarks = e.callcommand(
            b'listkeys', {b'namespace': b'bookmarks',}
        ).result()

    # on a push, we don't want to keep obsolete heads since
    # they won't show up as heads on the next pull, so we
    # remove them here otherwise we would require the user
    # to issue a pull to refresh the storage
    bmap = {}
    repo = localrepo.unfiltered()

    with remoterepo.commandexecutor() as e:
        branchmap = e.callcommand(b'branchmap', {}).result()

    for branch, nodes in pycompat.iteritems(branchmap):
        bmap[branch] = []
        for node in nodes:
            if node in repo and not repo[node].obsolete():
                bmap[branch].append(hex(node))

    saveremotenames(localrepo, remotepath, bmap, bookmarks)
```