view contrib/python-zstandard/zstd.c @ 30895:c32454d69b85
zstd: vendor python-zstandard 0.7.0
Commit 3054ae3a66112970a091d3939fee32c2d0c1a23e from
https://github.com/indygreg/python-zstandard is imported without
modifications (other than removing unwanted files).
The vendored zstd library within has been upgraded from 1.1.2 to
1.1.3. This version introduced new APIs for threads, thread
pools, multi-threaded compression, and a new dictionary
builder (COVER). These features are not yet used by
python-zstandard (or Mercurial for that matter). However,
that will likely change in the next python-zstandard release
(and I think there are opportunities for Mercurial to take
advantage of the multi-threaded APIs).
Relevant to Mercurial, the CFFI bindings are now fully
implemented. This means zstd should "just work" with PyPy
(although I haven't tried). The python-zstandard test suite also
runs all tests against both the C extension and CFFI bindings to
ensure feature parity.
There is also a new "decompress_content_dict_chain()" API. It was
derived from discussions with Yann Collet on the mailing list about
alternate ways of encoding delta chains.
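The idea behind a content dictionary chain can be sketched with the standard library: each entry in a delta chain is compressed against the previous entry's fulltext used as a dictionary. This is only an illustration of the concept — zlib's `zdict` parameter stands in for zstd's content dictionaries so the sketch runs anywhere, and the revision bytes are invented.

```python
import zlib

# Hypothetical revision fulltexts forming a chain (illustrative data).
revisions = [
    b"first revision of a file " * 40,
    b"first revision of a file " * 40 + b"with an edit appended",
    b"first revision of a file " * 39 + b"rewritten tail here",
]

# Encode: each frame is compressed with the previous fulltext as its
# dictionary, so successive near-identical revisions compress well.
frames = []
prev = b""
for rev in revisions:
    c = zlib.compressobj(zdict=prev) if prev else zlib.compressobj()
    frames.append(c.compress(rev) + c.flush())
    prev = rev

# Decode: walk the chain, seeding each frame's dictionary with the
# previously decoded fulltext. decompress_content_dict_chain() performs
# this walk in a single call for a list of zstd frames.
prev = b""
for frame in frames:
    d = zlib.decompressobj(zdict=prev) if prev else zlib.decompressobj()
    prev = d.decompress(frame)

assert prev == revisions[-1]
```

The win comes from the middle frames: they only need to encode the difference from their predecessor, much like a revlog delta.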
The change most relevant to Mercurial is a performance enhancement in
the simple decompression API to reuse a data structure across
operations. This makes decompression of multiple inputs significantly
faster. (This scenario occurs when reading revlog delta chains, for
example.)
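The calling convention behind that enhancement can be sketched as follows. This is a toy analogy, not python-zstandard itself: zlib stands in for zstd so the sketch is self-contained, and the class below only mirrors the shape of a long-lived decompression context (in python-zstandard 0.7.0 the fast path is one ZstdDecompressor object whose decompress() is invoked per chunk).

```python
import zlib

chunks = [zlib.compress(b"revlog delta %d " % i * 20) for i in range(4)]

# Discrete: fresh internal state is constructed for every input
# (analogous to the slower pre-0.7.0 one-shot path).
discrete = [zlib.decompress(c) for c in chunks]

# Reused: one long-lived context fed every input. zlib offers no
# reusable context, so this toy class models the API shape only,
# not the performance effect.
class ToyDecompressionContext:
    def decompress(self, data):
        return zlib.decompress(data)

dctx = ToyDecompressionContext()
reused = [dctx.decompress(c) for c in chunks]

assert discrete == reused
```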
Using python-zstandard's bench.py to measure the performance
difference...
On changelog chunks in the mozilla-unified repo:
decompress discrete
1.262243 wall; 1.260000 CPU; 1.260000 user; 0.000000 sys 170.43 MB/s (best of 3)
decompress() reuse zctx
0.949106 wall; 0.950000 CPU; 0.950000 user; 0.000000 sys 226.66 MB/s (best of 4)

decompress discrete dict
0.692170 wall; 0.690000 CPU; 0.690000 user; 0.000000 sys 310.80 MB/s (best of 5)
decompress() reuse zctx
0.437088 wall; 0.440000 CPU; 0.440000 user; 0.000000 sys 492.17 MB/s (best of 7)
On manifest chunks in the mozilla-unified repo:
decompress discrete
1.367284 wall; 1.370000 CPU; 1.370000 user; 0.000000 sys 274.01 MB/s (best of 3)
decompress() reuse zctx
1.086831 wall; 1.080000 CPU; 1.080000 user; 0.000000 sys 344.72 MB/s (best of 3)

decompress discrete dict
0.993272 wall; 0.990000 CPU; 0.990000 user; 0.000000 sys 377.19 MB/s (best of 3)
decompress() reuse zctx
0.678651 wall; 0.680000 CPU; 0.680000 user; 0.000000 sys 552.06 MB/s (best of 5)
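The wall-clock figures quoted above imply the speedups directly, and the MB/s columns are consistent with a fixed decompressed size per dataset (throughput = size / wall time). A small script to derive the speedups from the numbers in the tables:

```python
# (discrete wall seconds, reuse-zctx wall seconds), taken from the
# bench.py output quoted above.
bench = {
    "changelog":      (1.262243, 0.949106),
    "changelog dict": (0.692170, 0.437088),
    "manifest":       (1.367284, 1.086831),
    "manifest dict":  (0.993272, 0.678651),
}

speedups = {name: discrete / reuse
            for name, (discrete, reuse) in bench.items()}

# Reusing the decompression context is roughly 1.26x-1.58x faster on
# these inputs, with the dictionary variants benefiting the most.
for name, s in sorted(speedups.items()):
    print("%-14s %.2fx" % (name, s))
```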
That should make reads on zstd revlogs a bit faster ;)
# no-check-commit
author    Gregory Szorc <gregory.szorc@gmail.com>
date      Tue, 07 Feb 2017 23:24:47 -0800
parents   b54a2984cdd4
children  e0dc40530c5a
/**
 * Copyright (c) 2016-present, Gregory Szorc
 * All rights reserved.
 *
 * This software may be modified and distributed under the terms
 * of the BSD license. See the LICENSE file for details.
 */

/* A Python C extension for Zstandard. */

#include "python-zstandard.h"

PyObject *ZstdError;

PyDoc_STRVAR(estimate_compression_context_size__doc__,
"estimate_compression_context_size(compression_parameters)\n"
"\n"
"Give the amount of memory allocated for a compression context given a\n"
"CompressionParameters instance");

PyDoc_STRVAR(estimate_decompression_context_size__doc__,
"estimate_decompression_context_size()\n"
"\n"
"Estimate the amount of memory allocated to a decompression context.\n"
);

static PyObject* estimate_decompression_context_size(PyObject* self) {
	return PyLong_FromSize_t(ZSTD_estimateDCtxSize());
}

PyDoc_STRVAR(get_compression_parameters__doc__,
"get_compression_parameters(compression_level[, source_size[, dict_size]])\n"
"\n"
"Obtains a ``CompressionParameters`` instance from a compression level and\n"
"optional input size and dictionary size");

PyDoc_STRVAR(get_frame_parameters__doc__,
"get_frame_parameters(data)\n"
"\n"
"Obtains a ``FrameParameters`` instance by parsing data.\n");

PyDoc_STRVAR(train_dictionary__doc__,
"train_dictionary(dict_size, samples)\n"
"\n"
"Train a dictionary from sample data.\n"
"\n"
"A compression dictionary of size ``dict_size`` will be created from the\n"
"iterable of samples provided by ``samples``.\n"
"\n"
"The raw dictionary content will be returned\n");

static char zstd_doc[] = "Interface to zstandard";

static PyMethodDef zstd_methods[] = {
	{ "estimate_compression_context_size", (PyCFunction)estimate_compression_context_size,
	METH_VARARGS, estimate_compression_context_size__doc__ },
	{ "estimate_decompression_context_size", (PyCFunction)estimate_decompression_context_size,
	METH_NOARGS, estimate_decompression_context_size__doc__ },
	{ "get_compression_parameters", (PyCFunction)get_compression_parameters,
	METH_VARARGS, get_compression_parameters__doc__ },
	{ "get_frame_parameters", (PyCFunction)get_frame_parameters,
	METH_VARARGS, get_frame_parameters__doc__ },
	{ "train_dictionary", (PyCFunction)train_dictionary,
	METH_VARARGS | METH_KEYWORDS, train_dictionary__doc__ },
	{ NULL, NULL }
};

void compressobj_module_init(PyObject* mod);
void compressor_module_init(PyObject* mod);
void compressionparams_module_init(PyObject* mod);
void constants_module_init(PyObject* mod);
void dictparams_module_init(PyObject* mod);
void compressiondict_module_init(PyObject* mod);
void compressionwriter_module_init(PyObject* mod);
void compressoriterator_module_init(PyObject* mod);
void decompressor_module_init(PyObject* mod);
void decompressobj_module_init(PyObject* mod);
void decompressionwriter_module_init(PyObject* mod);
void decompressoriterator_module_init(PyObject* mod);
void frameparams_module_init(PyObject* mod);

void zstd_module_init(PyObject* m) {
	/* python-zstandard relies on unstable zstd C API features. This means
	   that changes in zstd may break expectations in python-zstandard.

	   python-zstandard is distributed with a copy of the zstd sources.
	   python-zstandard is only guaranteed to work with the bundled version
	   of zstd.

	   However, downstream redistributors or packagers may unbundle zstd
	   from python-zstandard. This can result in a mismatch between zstd
	   versions and API semantics. This essentially "voids the warranty"
	   of python-zstandard and may cause undefined behavior.

	   We detect this mismatch here and refuse to load the module if this
	   scenario is detected.
	*/
	if (ZSTD_VERSION_NUMBER != 10103 || ZSTD_versionNumber() != 10103) {
		PyErr_SetString(PyExc_ImportError,
			"zstd C API mismatch; Python bindings not compiled against expected zstd version");
		return;
	}

	compressionparams_module_init(m);
	dictparams_module_init(m);
	compressiondict_module_init(m);
	compressobj_module_init(m);
	compressor_module_init(m);
	compressionwriter_module_init(m);
	compressoriterator_module_init(m);
	constants_module_init(m);
	decompressor_module_init(m);
	decompressobj_module_init(m);
	decompressionwriter_module_init(m);
	decompressoriterator_module_init(m);
	frameparams_module_init(m);
}

#if PY_MAJOR_VERSION >= 3
static struct PyModuleDef zstd_module = {
	PyModuleDef_HEAD_INIT,
	"zstd",
	zstd_doc,
	-1,
	zstd_methods
};

PyMODINIT_FUNC PyInit_zstd(void) {
	PyObject *m = PyModule_Create(&zstd_module);
	if (m) {
		zstd_module_init(m);
		if (PyErr_Occurred()) {
			Py_DECREF(m);
			m = NULL;
		}
	}
	return m;
}
#else
PyMODINIT_FUNC initzstd(void) {
	PyObject *m = Py_InitModule3("zstd", zstd_methods, zstd_doc);
	if (m) {
		zstd_module_init(m);
	}
}
#endif