Compare commits

22 commits:

d28c190816
0262de060f
e25db1e0f6
8fdbe4d334
406a1485da
6dc66b1c27
f2b1a1dd3b
cb166a399d
7108dd0111
2105754911
f3ba4cbfd3
e5f7085324
578481324b
bf8ac9850d
ab408b6412
4aa0a6f234
f9646e3386
3b612b960e
b0ad4bead0
4b2e573715
12e6916ab2
1e76f63725
.gitignore (vendored, 2 added lines)
@@ -143,3 +143,5 @@ dmypy.json
 cython_debug/
 
 *.csv
+
+local_scripts/
CHANGELOG.md (48 added lines)
@@ -2,6 +2,54 @@
 
 All notable changes to this project will be documented in this file. See [standard-version](https://github.com/conventional-changelog/standard-version) for commit guidelines.
 
+### [1.0.1](https://gitea.deepak.science:2222/physics/deepdog/compare/1.0.0...1.0.1) (2024-05-02)
+
+
+### Bug Fixes
+
+* fixes issue of zero division error with no successes for anything ([e25db1e](https://gitea.deepak.science:2222/physics/deepdog/commit/e25db1e0f677e8d9a657fa1631305cc8f05ff9ff))
+
+## [1.0.0](https://gitea.deepak.science:2222/physics/deepdog/compare/0.8.1...1.0.0) (2024-05-01)
+
+
+### ⚠ BREAKING CHANGES
+
+* allows new seed spec instead of cli arg, removes old cli arg
+
+### Features
+
+* adds additional file slug parsing ([2105754](https://gitea.deepak.science:2222/physics/deepdog/commit/2105754911c89bde9dcbea9866462225604a3524))
+* Adds more powerful direct mc runs to sub for old real spectrum run ([f2b1a1d](https://gitea.deepak.science:2222/physics/deepdog/commit/f2b1a1dd3b3436e37d84f7843b9b2a202be4b51c))
+* allows new seed spec instead of cli arg, removes old cli arg ([7108dd0](https://gitea.deepak.science:2222/physics/deepdog/commit/7108dd0111c7dfd6ec204df1d0058530cd3dcab9))
+
+
+### Bug Fixes
+
+* no longer throws error for overlapping keys, the warning should hopefully be enough? ([f3ba4cb](https://gitea.deepak.science:2222/physics/deepdog/commit/f3ba4cbfd36a9f08cdc4d8774a7f745f8c98bac3))
+
+### [0.8.1](https://gitea.deepak.science:2222/physics/deepdog/compare/0.8.0...0.8.1) (2024-04-28)
+
+### [0.8.1](https://gitea.deepak.science:2222/physics/deepdog/compare/0.8.0...0.8.1) (2024-04-28)
+
+## [0.8.0](https://gitea.deepak.science:2222/physics/deepdog/compare/0.7.10...0.8.0) (2024-04-28)
+
+
+### ⚠ BREAKING CHANGES
+
+* fixes the spin qubit frequency phase shift calculation which had an index problem
+
+### Bug Fixes
+
+* fixes the spin qubit frequency phase shift calculation which had an index problem ([f9646e3](https://gitea.deepak.science:2222/physics/deepdog/commit/f9646e33868e1a0da8ab663230c0c692ac25bb74))
+
+### [0.7.10](https://gitea.deepak.science:2222/physics/deepdog/compare/0.7.9...0.7.10) (2024-04-28)
+
+
+### Features
+
+* adds cli probs ([4b2e573](https://gitea.deepak.science:2222/physics/deepdog/commit/4b2e57371546731137b011461849bb849d4d4e0f))
+* better management of cli wrapper ([b0ad4be](https://gitea.deepak.science:2222/physics/deepdog/commit/b0ad4bead0d4762eb7f848f6e557f6d9b61200b9))
+
 ### [0.7.9](https://gitea.deepak.science:2222/physics/deepdog/compare/0.7.8...0.7.9) (2024-04-21)
README.md (11 changed lines)
@@ -5,7 +5,7 @@
 [](https://jenkins.deepak.science/job/gitea-physics/job/deepdog/job/master/)
 
 The DiPole DiaGnostic tool.
 
@@ -13,6 +13,13 @@ The DiPole DiaGnostic tool.
 
 `poetry install` to start locally
 
-Commit using [Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0/), and when commits are on master, release with `doo release`.
+Commit using [Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0/), and when commits are on master, release with `just release`.
+
+In general `just --list` has some of the useful stuff for figuring out what development tools there are.
+
+Poetry as an installer is good, even better is using Nix (maybe with direnv to automatically pick up the `devShell` from `flake.nix`).
+In either case `just` should handle actually calling things in a way that's agnostic to poetry as a runner or through nix.
+
+### local scripts
+
+`local_scripts` folder allows for scripts to be run using this code, but that probably isn't the most auditable for actual usage.
+
+The API is still only something I'm using so there's no guarantees yet that it will be stable; overall semantic versioning should help with API breaks.
 
deepdog/cli/__init__.py (new empty file)

deepdog/cli/probs/__init__.py (new file, 5 lines)

from deepdog.cli.probs.main import wrapped_main

__all__ = [
    "wrapped_main",
]
deepdog/cli/probs/args.py (new file, 51 lines)

import argparse
import os


def parse_args() -> argparse.Namespace:
    def dir_path(path):
        if os.path.isdir(path):
            return path
        else:
            raise argparse.ArgumentTypeError(f"readable_dir:{path} is not a valid path")

    parser = argparse.ArgumentParser(
        "probs", description="Calculating probability from finished bayesrun"
    )
    parser.add_argument(
        "--log_file",
        type=str,
        help="A filename for logging to, if not provided will only log to stderr",
        default=None,
    )
    parser.add_argument(
        "--bayesrun-directory",
        "-d",
        type=dir_path,
        help="The directory to search for bayesrun files, defaulting to cwd if not passed",
        default=".",
    )
    parser.add_argument(
        "--indexify-json",
        help="A json file with the indexify config for parsing job indexes. Will skip if not present",
        default="",
    )
    parser.add_argument(
        "--coalesced-keys",
        type=str,
        help="A comma separated list of strings over which to coalesce data. By default coalesce over all fields within model names, ignore file level names",
        default="",
    )
    parser.add_argument(
        "--uncoalesced-outfile",
        type=str,
        help="output filename for uncoalesced data. If not provided, will not be written",
        default=None,
    )
    parser.add_argument(
        "--coalesced-outfile",
        type=str,
        help="output filename for coalesced data. If not provided, will not be written",
        default=None,
    )
    return parser.parse_args()
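Rough usage sketch for the parser above (not part of the diff): the flag names come from parse_args, while the argv values and file names below are invented. Note that argparse maps the hyphenated flags onto underscore attribute names.

# Sketch only: exercising parse_args() with a fake argv to see the resulting namespace.
# All paths and filenames here are hypothetical.
import sys

import deepdog.cli.probs.args

sys.argv = [
    "probs",
    "--bayesrun-directory", ".",              # must be an existing directory (dir_path check)
    "--indexify-json", "indexify.json",       # hypothetical config file
    "--coalesced-outfile", "coalesced.csv",   # hypothetical output name
]
args = deepdog.cli.probs.args.parse_args()
print(args.bayesrun_directory, args.indexify_json, args.coalesced_outfile)
print(args.log_file)  # None, since --log_file was not passed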
deepdog/cli/probs/dicts.py (new file, 178 lines)

import typing
from deepdog.results import BayesrunOutput
import logging
import csv
import tqdm

_logger = logging.getLogger(__name__)


def build_model_dict(
    bayes_outputs: typing.Sequence[BayesrunOutput],
) -> typing.Dict[
    typing.Tuple, typing.Dict[typing.Tuple, typing.Dict["str", typing.Any]]
]:
    """
    Maybe someday do something smarter with the coalescing and stuff but don't want to so i won't
    """
    # assume that everything is well formatted and the keys are the same across entire list and initialise list of keys.
    # model dict will contain a model_key: {calculation_dict} where each calculation_dict represents a single calculation for that model,
    # the uncoalesced version, keyed by the specific file keys
    model_dict: typing.Dict[
        typing.Tuple, typing.Dict[typing.Tuple, typing.Dict["str", typing.Any]]
    ] = {}

    _logger.info("building model dict")
    for out in tqdm.tqdm(bayes_outputs, desc="reading outputs", leave=False):
        for model_result in out.results:
            model_key = tuple(v for v in model_result.parsed_model_keys.values())
            if model_key not in model_dict:
                model_dict[model_key] = {}
            calculation_dict = model_dict[model_key]
            calculation_key = tuple(v for v in out.data.values())
            if calculation_key not in calculation_dict:
                calculation_dict[calculation_key] = {
                    "_model_key_dict": model_result.parsed_model_keys,
                    "_calculation_key_dict": out.data,
                    "success": model_result.success,
                    "count": model_result.count,
                }
            else:
                raise ValueError(
                    f"Got {calculation_key} twice for model_key {model_key}"
                )

    return model_dict


def write_uncoalesced_dict(
    uncoalesced_output_filename: typing.Optional[str],
    uncoalesced_model_dict: typing.Dict[
        typing.Tuple, typing.Dict[typing.Tuple, typing.Dict["str", typing.Any]]
    ],
):
    if uncoalesced_output_filename is None or uncoalesced_output_filename == "":
        _logger.warning("Not provided a uncoalesced filename, not going to try")
        return

    first_value = next(iter(next(iter(uncoalesced_model_dict.values())).values()))
    model_field_names = set(first_value["_model_key_dict"].keys())
    calculation_field_names = set(first_value["_calculation_key_dict"].keys())
    if not (set(model_field_names).isdisjoint(calculation_field_names)):
        _logger.info(f"Detected model field names {model_field_names}")
        _logger.info(f"Detected calculation field names {calculation_field_names}")
        _logger.warning(
            f"model field names {model_field_names} and calculation {calculation_field_names} have an overlap, which is possibly a problem"
        )
    collected_fieldnames = list(model_field_names)
    collected_fieldnames.extend(calculation_field_names)
    collected_fieldnames.extend(["success", "count"])
    _logger.info(f"Full uncoalesced fieldnames are {collected_fieldnames}")
    with open(uncoalesced_output_filename, "w", newline="") as uncoalesced_output_file:
        writer = csv.DictWriter(
            uncoalesced_output_file, fieldnames=collected_fieldnames
        )
        writer.writeheader()

        for model_dict in uncoalesced_model_dict.values():
            for calculation in model_dict.values():
                row = calculation["_model_key_dict"].copy()
                row.update(calculation["_calculation_key_dict"].copy())
                row.update(
                    {
                        "success": calculation["success"],
                        "count": calculation["count"],
                    }
                )
                writer.writerow(row)


def coalesced_dict(
    uncoalesced_model_dict: typing.Dict[
        typing.Tuple, typing.Dict[typing.Tuple, typing.Dict["str", typing.Any]]
    ],
    minimum_count: float = 0.1,
):
    """
    pass in uncoalesced dict
    the minimum_count field is what we use to make sure our probs are never zero
    """
    coalesced_dict = {}

    # we are already iterating so for no reason because performance really doesn't matter let's count the keys ourselves
    num_keys = 0

    # first pass coalesce
    for model_key, model_dict in uncoalesced_model_dict.items():
        num_keys += 1
        for calculation in model_dict.values():
            if model_key not in coalesced_dict:
                coalesced_dict[model_key] = {
                    "_model_key_dict": calculation["_model_key_dict"].copy(),
                    "calculations_coalesced": 0,
                    "count": 0,
                    "success": 0,
                }
            sub_dict = coalesced_dict[model_key]
            sub_dict["calculations_coalesced"] += 1
            sub_dict["count"] += calculation["count"]
            sub_dict["success"] += calculation["success"]

    # second pass do probability calculation

    prior = 1 / num_keys
    _logger.info(f"Got {num_keys} model keys, so our prior will be {prior}")

    total_weight = 0
    for coalesced_model_dict in coalesced_dict.values():
        model_weight = (
            max(minimum_count, coalesced_model_dict["success"])
            / coalesced_model_dict["count"]
        ) * prior
        total_weight += model_weight

    total_prob = 0
    for coalesced_model_dict in coalesced_dict.values():
        model_weight = (
            max(minimum_count, coalesced_model_dict["success"])
            / coalesced_model_dict["count"]
        )
        prob = model_weight * prior / total_weight
        coalesced_model_dict["prob"] = prob
        total_prob += prob

    _logger.debug(
        f"Got a total probability of {total_prob}, which should be close to 1 up to float/rounding error"
    )
    return coalesced_dict


def write_coalesced_dict(
    coalesced_output_filename: typing.Optional[str],
    coalesced_model_dict: typing.Dict[typing.Tuple, typing.Dict["str", typing.Any]],
):
    if coalesced_output_filename is None or coalesced_output_filename == "":
        _logger.warning("Not provided a uncoalesced filename, not going to try")
        return

    first_value = next(iter(coalesced_model_dict.values()))
    model_field_names = set(first_value["_model_key_dict"].keys())
    _logger.info(f"Detected model field names {model_field_names}")

    collected_fieldnames = list(model_field_names)
    collected_fieldnames.extend(["calculations_coalesced", "success", "count", "prob"])
    with open(coalesced_output_filename, "w", newline="") as coalesced_output_file:
        writer = csv.DictWriter(coalesced_output_file, fieldnames=collected_fieldnames)
        writer.writeheader()

        for model_dict in coalesced_model_dict.values():
            row = model_dict["_model_key_dict"].copy()
            row.update(
                {
                    "calculations_coalesced": model_dict["calculations_coalesced"],
                    "success": model_dict["success"],
                    "count": model_dict["count"],
                    "prob": model_dict["prob"],
                }
            )
            writer.writerow(row)
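To see what the two-pass normalisation in coalesced_dict does, here is a small worked sketch with invented numbers (two models, one calculation each, uniform prior of 1/2). It mirrors the max(minimum_count, success) / count weighting above rather than calling the library:

# Sketch with invented numbers: model "A" has 3 successes out of 1000 samples,
# model "B" has 0 out of 1000; minimum_count keeps B's weight nonzero.
minimum_count = 0.1
counts = {"A": (3, 1000), "B": (0, 1000)}
prior = 1 / len(counts)  # uniform prior over model keys

weights = {
    key: (max(minimum_count, succ) / total) * prior
    for key, (succ, total) in counts.items()
}
total_weight = sum(weights.values())

probs = {key: w / total_weight for key, w in weights.items()}
print(probs)  # roughly {'A': 0.968, 'B': 0.032}; the values sum to 1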
deepdog/cli/probs/main.py (new file, 99 lines)

import logging
import argparse
import json
import deepdog.cli.probs.args
import deepdog.cli.probs.dicts
import deepdog.results
import deepdog.indexify
import pathlib
import tqdm
import tqdm.contrib.logging


_logger = logging.getLogger(__name__)


def set_up_logging(log_file: str):

    log_pattern = "%(asctime)s | %(levelname)-7s | %(name)s:%(lineno)d | %(message)s"
    if log_file is None:
        handlers = [
            logging.StreamHandler(),
        ]
    else:
        handlers = [logging.StreamHandler(), logging.FileHandler(log_file)]
    logging.basicConfig(
        level=logging.DEBUG,
        format=log_pattern,
        # it's okay to ignore this mypy error because who cares about logger handler types
        handlers=handlers,  # type: ignore
    )
    logging.captureWarnings(True)


def main(args: argparse.Namespace):
    """
    Main function with passed in arguments and no additional logging setup in case we want to extract out later
    """

    with tqdm.contrib.logging.logging_redirect_tqdm():
        _logger.info(f"args: {args}")

        try:
            if args.coalesced_keys:
                raise NotImplementedError(
                    "Currently not supporting coalesced keys, but maybe in future"
                )
        except AttributeError:
            # we don't care if this is missing because we don't actually want it to be there
            pass

        indexifier = None
        if args.indexify_json:
            with open(args.indexify_json, "r") as indexify_json_file:
                indexify_spec = json.load(indexify_json_file)
                indexify_data = indexify_spec["indexes"]
                if "seed_spec" in indexify_spec:
                    seed_spec = indexify_spec["seed_spec"]
                    indexify_data[seed_spec["field_name"]] = list(
                        range(seed_spec["num_seeds"])
                    )
                # _logger.debug(f"Indexifier data looks like {indexify_data}")
                indexifier = deepdog.indexify.Indexifier(indexify_data)

        bayes_dir = pathlib.Path(args.bayesrun_directory)
        out_files = [f for f in bayes_dir.iterdir() if f.name.endswith("bayesrun.csv")]
        _logger.info(
            f"Reading {len(out_files)} bayesrun.csv files in directory {args.bayesrun_directory}"
        )
        # _logger.info(out_files)
        parsed_output_files = [
            deepdog.results.read_output_file(f, indexifier)
            for f in tqdm.tqdm(out_files, desc="reading files", leave=False)
        ]

        _logger.info("building uncoalesced dict")
        uncoalesced_dict = deepdog.cli.probs.dicts.build_model_dict(parsed_output_files)

        if "uncoalesced_outfile" in args and args.uncoalesced_outfile:
            deepdog.cli.probs.dicts.write_uncoalesced_dict(
                args.uncoalesced_outfile, uncoalesced_dict
            )
        else:
            _logger.info("Skipping writing uncoalesced")

        _logger.info("building coalesced dict")
        coalesced = deepdog.cli.probs.dicts.coalesced_dict(uncoalesced_dict)

        if "coalesced_outfile" in args and args.coalesced_outfile:
            deepdog.cli.probs.dicts.write_coalesced_dict(
                args.coalesced_outfile, coalesced
            )
        else:
            _logger.info("Skipping writing coalesced")


def wrapped_main():
    args = deepdog.cli.probs.args.parse_args()
    set_up_logging(args.log_file)
    main(args)
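Reading main above, the --indexify-json file is expected to hold an "indexes" mapping plus an optional "seed_spec" that expands into an extra range(num_seeds) field. A hypothetical spec, written as the equivalent Python dict (field names and values here are invented, not taken from the diff):

# Hypothetical indexify spec, shaped the way main() reads the JSON file:
# "indexes" maps field names to the lists of values a job index ranges over,
# and the optional "seed_spec" becomes one more field of range(num_seeds).
example_indexify_spec = {
    "indexes": {
        "orientation": ["free", "fixedxy", "fixedz"],  # invented values
        "avg_filled": [1, 5, 10],                      # invented values
    },
    "seed_spec": {"field_name": "seed", "num_seeds": 100},
}
# Saved as JSON and passed via --indexify-json, this would let the Indexifier
# translate a job_index parsed from a filename back into these field values.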
@@ -1,22 +1,28 @@
+import csv
 import pdme.model
 import pdme.measurement
 import pdme.measurement.input_types
 import pdme.subspace_simulation
-from typing import Tuple, Dict, NewType, Any
+import datetime
+from typing import Tuple, Dict, NewType, Any, Sequence
 from dataclasses import dataclass
 import logging
 import numpy
 import numpy.random
 import pdme.util.fast_v_calc
+import multiprocessing
 
 _logger = logging.getLogger(__name__)
 
+ANTI_ZERO_SUCCESS_THRES = 0.1
+
+
 @dataclass
 class DirectMonteCarloResult:
     successes: int
     monte_carlo_count: int
     likelihood: float
+    model_name: str
 
 
 @dataclass
@@ -28,6 +34,10 @@ class DirectMonteCarloConfig:
     monte_carlo_seed: int = 1234
     write_successes_to_file: bool = False
     tag: str = ""
+    cap_core_count: int = 0  # 0 means cap at num cores - 1
+    chunk_size: int = 50
+    write_bayesrun_file = True
+    # chunk size of some kind
 
 
 # Aliasing dict as a generic data container
@@ -51,8 +61,8 @@ class DirectMonteCarloRun:
 
     Parameters
     ----------
-    model_name_pair : Sequence[Tuple(str, pdme.model.DipoleModel)]
-        The model to evaluate, with name.
+    model_name_pairs : Sequence[Tuple(str, pdme.model.DipoleModel)]
+        The models to evaluate, with names
 
     measurements : Sequence[pdme.measurement.DotRangeMeasurement]
         The measurements as dot ranges to use as the bounds for the Monte Carlo calculation.
@@ -78,11 +88,11 @@ class DirectMonteCarloRun:
 
     def __init__(
        self,
-       model_name_pair: Tuple[str, pdme.model.DipoleModel],
+       model_name_pairs: Sequence[Tuple[str, pdme.model.DipoleModel]],
        filter: DirectMonteCarloFilter,
        config: DirectMonteCarloConfig,
    ):
-       self.model_name, self.model = model_name_pair
+       self.model_name_pairs = model_name_pairs
 
        # self.measurements = measurements
        # self.dot_inputs = [(measure.r, measure.f) for measure in self.measurements]
@@ -100,10 +110,16 @@ class DirectMonteCarloRun:
        # 	self.measurements
        # )
 
-    def _single_run(self, seed) -> numpy.ndarray:
+    def _single_run(
+        self, model_name_pair: Tuple[str, pdme.model.DipoleModel], seed
+    ) -> numpy.ndarray:
        rng = numpy.random.default_rng(seed)
 
-       sample_dipoles = self.model.get_monte_carlo_dipole_inputs(
+       _, model = model_name_pair
+       # don't log here it's madness
+       # _logger.info(f"Executing for model {model_name}")
+
+       sample_dipoles = model.get_monte_carlo_dipole_inputs(
            self.config.monte_carlo_count_per_cycle, -1, rng
        )
 
@@ -123,52 +139,188 @@ class DirectMonteCarloRun:
        # ]
        # return current_sample
 
-    def execute(self) -> DirectMonteCarloResult:
-        step_count = 0
-        total_success = 0
-        total_count = 0
-
-        count_per_step = (
-            self.config.monte_carlo_count_per_cycle * self.config.monte_carlo_cycles
-        )
-        seed_sequence = numpy.random.SeedSequence(self.config.monte_carlo_seed)
-        while (step_count < self.config.max_monte_carlo_cycles_steps) and (
-            total_success < self.config.target_success
-        ):
-            _logger.debug(f"Executing step {step_count}")
-            for cycle_i, seed in enumerate(
-                seed_sequence.spawn(self.config.monte_carlo_cycles)
-            ):
-                cycle_success_configs = self._single_run(seed)
-                cycle_success_count = len(cycle_success_configs)
-                if cycle_success_count > 0:
-                    _logger.debug(
-                        f"For cycle {cycle_i} received {cycle_success_count} successes"
-                    )
-                    _logger.debug(cycle_success_configs)
-                    if self.config.write_successes_to_file:
-                        sorted_by_freq = numpy.array(
-                            [
-                                pdme.subspace_simulation.sort_array_of_dipoles_by_frequency(
-                                    dipole_config
-                                )
-                                for dipole_config in cycle_success_configs
-                            ]
-                        )
-                        dipole_count = numpy.array(cycle_success_configs).shape[1]
-                        for n in range(dipole_count):
-                            numpy.savetxt(
-                                f"{self.config.tag}_{step_count}_{cycle_i}_dipole_{n}.csv",
-                                sorted_by_freq[:, n],
-                                delimiter=",",
-                            )
-                total_success += cycle_success_count
-            _logger.debug(f"At end of step {step_count} have {total_success} successes")
-            step_count += 1
-            total_count += count_per_step
-
-        return DirectMonteCarloResult(
-            successes=total_success,
-            monte_carlo_count=total_count,
-            likelihood=total_success / total_count,
-        )
+    def _wrapped_single_run(self, args: Tuple):
+        """
+        single run wrapped up for multiprocessing call.
+
+        takes in a tuple of arguments corresponding to
+        (model_name_pair, seed)
+        """
+        # here's where we do our work
+
+        model_name_pair, seed = args
+        cycle_success_configs = self._single_run(model_name_pair, seed)
+        cycle_success_count = len(cycle_success_configs)
+
+        return cycle_success_count
+
+    def execute_no_multiprocessing(self) -> Sequence[DirectMonteCarloResult]:
+
+        count_per_step = (
+            self.config.monte_carlo_count_per_cycle * self.config.monte_carlo_cycles
+        )
+        seed_sequence = numpy.random.SeedSequence(self.config.monte_carlo_seed)
+
+        # core count etc. logic here
+
+        results = []
+        for model_name_pair in self.model_name_pairs:
+            step_count = 0
+            total_success = 0
+            total_count = 0
+
+            _logger.info(f"Working on model {model_name_pair[0]}")
+            # This is probably where multiprocessing logic should go
+            while (step_count < self.config.max_monte_carlo_cycles_steps) and (
+                total_success < self.config.target_success
+            ):
+                _logger.debug(f"Executing step {step_count}")
+                for cycle_i, seed in enumerate(
+                    seed_sequence.spawn(self.config.monte_carlo_cycles)
+                ):
+                    # here's where we do our work
+                    cycle_success_configs = self._single_run(model_name_pair, seed)
+                    cycle_success_count = len(cycle_success_configs)
+                    if cycle_success_count > 0:
+                        _logger.debug(
+                            f"For cycle {cycle_i} received {cycle_success_count} successes"
+                        )
+                        # _logger.debug(cycle_success_configs)
+                        if self.config.write_successes_to_file:
+                            sorted_by_freq = numpy.array(
+                                [
+                                    pdme.subspace_simulation.sort_array_of_dipoles_by_frequency(
+                                        dipole_config
+                                    )
+                                    for dipole_config in cycle_success_configs
+                                ]
+                            )
+                            dipole_count = numpy.array(cycle_success_configs).shape[1]
+                            for n in range(dipole_count):
+                                numpy.savetxt(
+                                    f"{self.config.tag}_{step_count}_{cycle_i}_dipole_{n}.csv",
+                                    sorted_by_freq[:, n],
+                                    delimiter=",",
+                                )
+                    total_success += cycle_success_count
+                _logger.debug(
+                    f"At end of step {step_count} have {total_success} successes"
+                )
+                step_count += 1
+                total_count += count_per_step
+
+            results.append(
+                DirectMonteCarloResult(
+                    successes=total_success,
+                    monte_carlo_count=total_count,
+                    likelihood=total_success / total_count,
+                    model_name=model_name_pair[0],
+                )
+            )
+        return results
+
+    def execute(self) -> Sequence[DirectMonteCarloResult]:
+
+        # set starting execution timestamp
+        timestamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
+
+        count_per_step = (
+            self.config.monte_carlo_count_per_cycle * self.config.monte_carlo_cycles
+        )
+        seed_sequence = numpy.random.SeedSequence(self.config.monte_carlo_seed)
+
+        # core count etc. logic here
+        core_count = multiprocessing.cpu_count() - 1 or 1
+        if (self.config.cap_core_count >= 1) and (
+            self.config.cap_core_count < core_count
+        ):
+            core_count = self.config.cap_core_count
+        _logger.info(f"Using {core_count} cores")
+
+        results = []
+        with multiprocessing.Pool(core_count) as pool:
+
+            for model_name_pair in self.model_name_pairs:
+                _logger.info(f"Working on model {model_name_pair[0]}")
+                # This is probably where multiprocessing logic should go
+
+                step_count = 0
+                total_success = 0
+                total_count = 0
+
+                while (step_count < self.config.max_monte_carlo_cycles_steps) and (
+                    total_success < self.config.target_success
+                ):
+
+                    step_count += 1
+
+                    _logger.debug(f"Executing step {step_count}")
+
+                    seeds = seed_sequence.spawn(self.config.monte_carlo_cycles)
+
+                    pool_results = sum(
+                        pool.imap_unordered(
+                            self._wrapped_single_run,
+                            [(model_name_pair, seed) for seed in seeds],
+                            self.config.chunk_size,
+                        )
+                    )
+                    _logger.debug(f"Pool results: {pool_results}")
+
+                    total_success += pool_results
+                    total_count += count_per_step
+                    _logger.debug(
+                        f"At end of step {step_count} have {total_success} successes"
+                    )
+
+                results.append(
+                    DirectMonteCarloResult(
+                        successes=total_success,
+                        monte_carlo_count=total_count,
+                        likelihood=total_success / total_count,
+                        model_name=model_name_pair[0],
+                    )
+                )
+
+        if self.config.write_bayesrun_file:
+
+            filename = (
+                f"{timestamp}-{self.config.tag}.realdata.fast_filter.bayesrun.csv"
+            )
+            _logger.info(f"Going to write to file [{filename}]")
+            # row: Dict[str, Union[int, float, str]] = {}
+            row = {}
+
+            num_models = len(self.model_name_pairs)
+            success_weight = sum(
+                [
+                    (
+                        max(ANTI_ZERO_SUCCESS_THRES, res.successes)
+                        / res.monte_carlo_count
+                    )
+                    / num_models
+                    for res in results
+                ]
+            )
+
+            for res in results:
+                row.update(
+                    {
+                        f"{res.model_name}_success": res.successes,
+                        f"{res.model_name}_count": res.monte_carlo_count,
+                        f"{res.model_name}_prob": (
+                            max(ANTI_ZERO_SUCCESS_THRES, res.successes)
+                            / res.monte_carlo_count
+                        )
+                        / (num_models * success_weight),
+                    }
+                )
+            _logger.info(f"Writing row {row}")
+            fieldnames = list(row.keys())
+
+            with open(filename, "w", newline="") as outfile:
+                writer = csv.DictWriter(outfile, fieldnames=fieldnames, dialect="unix")
+                writer.writeheader()
+                writer.writerow(row)
+
+        return results
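Stripped of the deepdog specifics, the fan-out pattern that the new execute above relies on is just imap_unordered plus sum over per-cycle success counts. A standalone sketch (the worker, seed count, and chunk size here are invented; this is not the library code):

# Standalone sketch of the imap_unordered + sum pattern used in execute().
# work() stands in for _wrapped_single_run; it must be picklable, hence module level.
import multiprocessing

import numpy


def work(seed):
    rng = numpy.random.default_rng(seed)
    # pretend a "success" is a draw above some threshold; return the count for this cycle
    return int((rng.random(1000) > 0.999).sum())


if __name__ == "__main__":
    seeds = numpy.random.SeedSequence(1234).spawn(20)
    core_count = max(multiprocessing.cpu_count() - 1, 1)
    with multiprocessing.Pool(core_count) as pool:
        # chunksize of 5 is arbitrary here; deepdog uses config.chunk_size
        total_success = sum(pool.imap_unordered(work, seeds, 5))
    print(total_success)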
@@ -39,6 +39,135 @@ class SingleDotPotentialFilter(DirectMonteCarloFilter):
         return current_sample
 
 
+class SingleDotSpinQubitFrequencyFilter(DirectMonteCarloFilter):
+    def __init__(self, measurements: Sequence[pdme.measurement.DotRangeMeasurement]):
+        self.measurements = measurements
+        self.dot_inputs = [(measure.r, measure.f) for measure in self.measurements]
+
+        self.dot_inputs_array = pdme.measurement.input_types.dot_inputs_to_array(
+            self.dot_inputs
+        )
+        (
+            self.lows,
+            self.highs,
+        ) = pdme.measurement.input_types.dot_range_measurements_low_high_arrays(
+            self.measurements
+        )
+
+    # oh no not this again
+    def fast_s_spin_qubit_tarucha_apsd_dipoleses(
+        self, dot_inputs: numpy.ndarray, dipoleses: numpy.ndarray
+    ) -> numpy.ndarray:
+        """
+        No error correction here baby.
+        """
+
+        # We're going to annotate the indices on this class.
+        # Let's define some indices:
+        # A -> index of dipoleses configurations
+        # j -> within a particular configuration, indexes dipole j
+        # measurement_index -> if we have 100 frequencies for example, indexes which one of them it is
+        # If we need to use numbers, let's use A -> 2, j -> 10, measurement_index -> 9 for consistency with
+        # my other notes
+
+        # axes are [dipole_config_idx A, dipole_idx j, {px, py, pz}3]
+        ps = dipoleses[:, :, 0:3]
+        # axes are [dipole_config_idx A, dipole_idx j, {sx, sy, sz}3]
+        ss = dipoleses[:, :, 3:6]
+        # axes are [dipole_config_idx A, dipole_idx j, w], last axis is just 1
+        ws = dipoleses[:, :, 6]
+
+        # dot_index is either 0 or 1 for dot1 or dot2
+        # hopefully this adhoc grammar is making sense, with the explicit labelling of the values of the last axis in cartesian space
+        # axes are [measurement_idx, {dot_index}, {rx, ry, rz}] where the inner {dot_index} is gone
+        # [measurement_idx, cartesian3]
+        rs = dot_inputs[:, 0:3]
+        # axes are [measurement_idx]
+        fs = dot_inputs[:, 3]
+
+        # first operation!
+        # r1s has shape [measurement_idx, rxs]
+        # None inserts an extra axis so the r1s[:, None] has shape
+        # [measurement_idx, 1]([rxs]) with the last rxs hidden
+        #
+        # ss has shape [ A, j, {sx, sy, sz}3], so second term has shape [A, 1, j]([sxs])
+        # these broadcast from right to left
+        # [ measurement_idx, 1, rxs]
+        # [A, 1, j, sxs]
+        # resulting in [A, measurement_idx, j, cart3] sxs rxs are both cart3
+        diffses = rs[:, None] - ss[:, None, :]
+
+        # norms takes out axis 3, the last one, giving [A, measurement_idx, j]
+        norms = numpy.linalg.norm(diffses, axis=3)
+
+        # _logger.info(f"norms1: {norms1}")
+        # _logger.info(f"norms1 shape: {norms1.shape}")
+        #
+        # diffses1 (A, measurement_idx, j, xs)
+        # ps: (A, j, px)
+        # result is (A, measurement_idx, j)
+        # intermediate_dot_prod = numpy.einsum("abcd,acd->abc", diffses1, ps)
+        # _logger.info(f"dot product shape: {intermediate_dot_prod.shape}")
+
+        # transpose makes it (j, measurement_idx, A)
+        # transp_intermediate_dot_prod = numpy.transpose(numpy.einsum("abcd,acd->abc", diffses1, ps) / (norms1**3))
+
+        # transpose of diffses has shape (xs, j, measurement_idx, A)
+        # numpy.transpose(diffses1)
+        # _logger.info(f"dot product shape: {transp_intermediate_dot_prod.shape}")
+
+        # inner transpose is (j, measurement_idx, A) * (xs, j, measurement_idx, A)
+        # next transpose puts it back to (A, measurement_idx, j, xs)
+        # p_dot_r_times_r_term = 3 * numpy.transpose(numpy.transpose(numpy.einsum("abcd,acd->abc", diffses1, ps) / (norms1**3)) * numpy.transpose(diffses1))
+        # _logger.info(f"p_dot_r_times_r_term: {p_dot_r_times_r_term.shape}")
+
+        # only x axis puts us at (A, measurement_idx, j)
+        # p_dot_r_times_r_term_x_only = p_dot_r_times_r_term[:, :, :, 0]
+        # _logger.info(f"p_dot_r_times_r_term_x_only.shape: {p_dot_r_times_r_term_x_only.shape}")
+
+        # now to complete the numerator we subtract the ps, which are (A, j, px):
+        # slicing off the end gives us (A, j), so we newaxis to get (A, 1, j)
+        # _logger.info(ps[:, numpy.newaxis, :, 0].shape)
+        alphses = (
+            (
+                3
+                * numpy.transpose(
+                    numpy.transpose(
+                        numpy.einsum("abcd,acd->abc", diffses, ps) / (norms**2)
+                    )
+                    * numpy.transpose(diffses)
+                )[:, :, :, 0]
+            )
+            - ps[:, numpy.newaxis, :, 0]
+        ) / (norms**3)
+
+        bses = (
+            2
+            * numpy.pi
+            * ws[:, None, :]
+            / ((2 * numpy.pi * fs[:, None]) ** 2 + 4 * ws[:, None, :] ** 2)
+        )
+
+        return numpy.einsum("...j->...", alphses * alphses * bses)
+
+    def filter_samples(self, samples: ndarray) -> ndarray:
+        current_sample = samples
+        for di, low, high in zip(self.dot_inputs_array, self.lows, self.highs):
+
+            if len(current_sample) < 1:
+                break
+            vals = self.fast_s_spin_qubit_tarucha_apsd_dipoleses(
+                numpy.array([di]), current_sample
+            )
+            # _logger.info(vals)
+
+            current_sample = current_sample[
+                numpy.all((vals > low) & (vals < high), axis=1)
+            ]
+        # _logger.info(f"leaving with {len(current_sample)}")
+        return current_sample
+
+
 class DoubleDotSpinQubitFrequencyFilter(DirectMonteCarloFilter):
     def __init__(
         self,
@@ -59,59 +188,6 @@ class DoubleDotSpinQubitFrequencyFilter(DirectMonteCarloFilter):
             self.pair_phase_measurements
         )
 
-    def fast_s_spin_qubit_tarucha_nonlocal_dipoleses(
-        self, dot_pair_inputs: numpy.ndarray, dipoleses: numpy.ndarray
-    ) -> numpy.ndarray:
-        """
-        No error correction here baby.
-        """
-        ps = dipoleses[:, :, 0:3]
-        ss = dipoleses[:, :, 3:6]
-        ws = dipoleses[:, :, 6]
-
-        r1s = dot_pair_inputs[:, 0, 0:3]
-        r2s = dot_pair_inputs[:, 1, 0:3]
-        f1s = dot_pair_inputs[:, 0, 3]
-        # Don't actually need this
-        # f2s = dot_pair_inputs[:, 1, 3]
-
-        diffses1 = r1s[:, None] - ss[:, None, :]
-        diffses2 = r2s[:, None] - ss[:, None, :]
-
-        norms1 = numpy.linalg.norm(diffses1, axis=3)
-        norms2 = numpy.linalg.norm(diffses2, axis=3)
-
-        alphses1 = (
-            (
-                3
-                * numpy.transpose(
-                    numpy.transpose(
-                        numpy.einsum("abcd,acd->abc", diffses1, ps) / (norms1**2)
-                    )
-                    * numpy.transpose(diffses1)
-                )[:, :, :, 0]
-            )
-            - ps[:, :, 0, numpy.newaxis]
-        ) / (norms1**3)
-        alphses2 = (
-            (
-                3
-                * numpy.transpose(
-                    numpy.transpose(
-                        numpy.einsum("abcd,acd->abc", diffses2, ps) / (norms2**2)
-                    )
-                    * numpy.transpose(diffses2)
-                )[:, :, :, 0]
-            )
-            - ps[:, :, 0, numpy.newaxis]
-        ) / (norms2**3)
-
-        bses = (1 / numpy.pi) * (
-            ws[:, None, :] / (f1s[:, None] ** 2 + ws[:, None, :] ** 2)
-        )
-
-        return numpy.einsum("...j->...", alphses1 * alphses2 * bses)
-
     def filter_samples(self, samples: ndarray) -> ndarray:
         current_sample = samples
 
@@ -121,16 +197,8 @@ class DoubleDotSpinQubitFrequencyFilter(DirectMonteCarloFilter):
             if len(current_sample) < 1:
                 break
-            ###
-            # This should be abstracted out, but we're going to dump it here for time pressure's sake
-            ###
-            # vals = pdme.util.fast_nonlocal_spectrum.signarg(
-            # 	pdme.util.fast_nonlocal_spectrum.fast_s_nonlocal_dipoleses(
-            # 		numpy.array([pi]), current_sample
-            # 	)
-            #
             vals = pdme.util.fast_nonlocal_spectrum.signarg(
-                self.fast_s_spin_qubit_tarucha_nonlocal_dipoleses(
+                pdme.util.fast_nonlocal_spectrum.fast_s_spin_qubit_tarucha_nonlocal_dipoleses(
                     numpy.array([pi]), current_sample
                 )
             )
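The index bookkeeping in fast_s_spin_qubit_tarucha_apsd_dipoleses is easiest to sanity-check with a throwaway shape experiment, using the A=2, j=10, measurement_index=9 numbers from the comments (a sketch with random stand-in data, not repo code):

# Shape check for the broadcasting used above: A=2 configurations, j=10 dipoles,
# 9 measurements, 7 numbers per dipole (px,py,pz, sx,sy,sz, w), 4 per dot input (rx,ry,rz, f).
import numpy

A, j, m = 2, 10, 9
dipoleses = numpy.random.default_rng(0).random((A, j, 7))
dot_inputs = numpy.random.default_rng(1).random((m, 4))

ps, ss, ws = dipoleses[:, :, 0:3], dipoleses[:, :, 3:6], dipoleses[:, :, 6]
rs, fs = dot_inputs[:, 0:3], dot_inputs[:, 3]

diffses = rs[:, None] - ss[:, None, :]         # (m,1,3) broadcast against (A,1,j,3)
print(diffses.shape)                           # (A, m, j, 3)
norms = numpy.linalg.norm(diffses, axis=3)
print(norms.shape)                             # (A, m, j)
print(numpy.einsum("abcd,acd->abc", diffses, ps).shape)  # (A, m, j): per-dipole dot products
print(numpy.einsum("...j->...", norms).shape)  # (A, m): the final sum over dipoles j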
deepdog/indexify/__init__.py (new file, 58 lines)

"""
Probably should just include a way to handle the indexify function I reuse so much.

All about breaking an integer into a tuple of values from lists, which is useful because of how we do CHTC runs.
"""
import itertools
import typing
import logging
import math

_logger = logging.getLogger(__name__)


# from https://stackoverflow.com/questions/5228158/cartesian-product-of-a-dictionary-of-lists
def _dict_product(dicts):
    """
    >>> list(dict_product(dict(number=[1,2], character='ab')))
    [{'character': 'a', 'number': 1},
     {'character': 'a', 'number': 2},
     {'character': 'b', 'number': 1},
     {'character': 'b', 'number': 2}]
    """
    return list(dict(zip(dicts.keys(), x)) for x in itertools.product(*dicts.values()))


class Indexifier:
    """
    The order of keys is very important, but collections.OrderedDict is no longer needed in python 3.7.
    I think it's okay to rely on that.
    """

    def __init__(self, list_dict: typing.Dict[str, typing.Sequence]):
        self.dict = list_dict

    def indexify(self, n: int) -> typing.Dict[str, typing.Any]:
        product_dict = _dict_product(self.dict)
        return product_dict[n]

    def _indexify_indices(self, n: int) -> typing.Sequence[int]:
        """
        legacy indexify from old scripts, copypast.
        could be used like
        >>> ret = {}
        >>> for k, i in zip(self.dict.keys(), self._indexify_indices):
        >>> 	ret[k] = self.dict[k][i]
        >>> return ret
        """
        weights = [len(v) for v in self.dict.values()]
        N = math.prod(weights)
        curr_n = n
        curr_N = N
        out = []
        for w in weights[:-1]:
            # print(f"current: {curr_N}, {curr_n}, {curr_n // w}")
            curr_N = curr_N // w  # should be int division anyway
            out.append(curr_n // curr_N)
            curr_n = curr_n % curr_N
        return out
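A quick usage sketch for Indexifier, following the _dict_product docstring ordering (the field names and values here are invented):

# Sketch: a job index picks one combination out of the cartesian product,
# in the key order of the dict (insertion-ordered in python 3.7+).
import deepdog.indexify

indexifier = deepdog.indexify.Indexifier({"number": [1, 2], "character": ["a", "b"]})
print(indexifier.indexify(0))  # {'number': 1, 'character': 'a'}
print(indexifier.indexify(3))  # {'number': 2, 'character': 'b'}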
@@ -112,59 +112,6 @@ def get_a_result_fast_filter_tarucha_spin_qubit_pair_phase_only(input) -> int:
         seed,
     ) = input
 
-    def fast_s_spin_qubit_tarucha_nonlocal_dipoleses(
-        dot_pair_inputs: numpy.ndarray, dipoleses: numpy.ndarray
-    ) -> numpy.ndarray:
-        """
-        No error correction here baby.
-        """
-        ps = dipoleses[:, :, 0:3]
-        ss = dipoleses[:, :, 3:6]
-        ws = dipoleses[:, :, 6]
-
-        r1s = dot_pair_inputs[:, 0, 0:3]
-        r2s = dot_pair_inputs[:, 1, 0:3]
-        f1s = dot_pair_inputs[:, 0, 3]
-        # don't actually need, because we're assuming they're the same frequencies across the pair
-        # f2s = dot_pair_inputs[:, 1, 3]
-
-        diffses1 = r1s[:, None] - ss[:, None, :]
-        diffses2 = r2s[:, None] - ss[:, None, :]
-
-        norms1 = numpy.linalg.norm(diffses1, axis=3)
-        norms2 = numpy.linalg.norm(diffses2, axis=3)
-
-        alphses1 = (
-            (
-                3
-                * numpy.transpose(
-                    numpy.transpose(
-                        numpy.einsum("abcd,acd->abc", diffses1, ps) / (norms1**2)
-                    )
-                    * numpy.transpose(diffses1)
-                )[:, :, :, 0]
-            )
-            - ps[:, :, 0, numpy.newaxis]
-        ) / (norms1**3)
-        alphses2 = (
-            (
-                3
-                * numpy.transpose(
-                    numpy.transpose(
-                        numpy.einsum("abcd,acd->abc", diffses2, ps) / (norms2**2)
-                    )
-                    * numpy.transpose(diffses2)
-                )[:, :, :, 0]
-            )
-            - ps[:, :, 0, numpy.newaxis]
-        ) / (norms2**3)
-
-        bses = (1 / numpy.pi) * (
-            ws[:, None, :] / (f1s[:, None] ** 2 + ws[:, None, :] ** 2)
-        )
-
-        return numpy.einsum("...j->...", alphses1 * alphses2 * bses)
-
     rng = numpy.random.default_rng(seed)
     # TODO: A long term refactor is to pull the frequency stuff out from here. The None stands for max_frequency, which is unneeded in the actually useful models.
     sample_dipoles = model.get_monte_carlo_dipole_inputs(
@@ -186,7 +133,7 @@ def get_a_result_fast_filter_tarucha_spin_qubit_pair_phase_only(input) -> int:
     # )
     #
     vals = pdme.util.fast_nonlocal_spectrum.signarg(
-        fast_s_spin_qubit_tarucha_nonlocal_dipoleses(
+        pdme.util.fast_nonlocal_spectrum.fast_s_spin_qubit_tarucha_nonlocal_dipoleses(
            numpy.array([pi]), current_sample
        )
    )
deepdog/results/__init__.py (new file, 170 lines)

import dataclasses
import re
import typing
import logging
import deepdog.indexify
import pathlib
import csv

_logger = logging.getLogger(__name__)

FILENAME_REGEX = r"(?P<timestamp>\d{8}-\d{6})-(?P<filename_slug>.*)\.realdata\.fast_filter\.bayesrun\.csv"

MODEL_REGEXES = [
    r"geom_(?P<xmin>-?\d+)_(?P<xmax>-?\d+)_(?P<ymin>-?\d+)_(?P<ymax>-?\d+)_(?P<zmin>-?\d+)_(?P<zmax>-?\d+)-orientation_(?P<orientation>free|fixedxy|fixedz)-dipole_count_(?P<avg_filled>\d+)_(?P<field_name>\w*)"
]

FILE_SLUG_REGEXES = [
    r"mock_tarucha-(?P<job_index>\d+)",
    r"(?:(?P<mock>mock)_)?tarucha(?:_(?P<tarucha_run_id>\d+))?-(?P<job_index>\d+)",
]


@dataclasses.dataclass
class BayesrunOutputFilename:
    timestamp: str
    filename_slug: str
    path: pathlib.Path


@dataclasses.dataclass
class BayesrunColumnParsed:
    """
    class for parsing a bayesrun while pulling certain special fields out
    """

    def __init__(self, groupdict: typing.Dict[str, str]):
        self.column_field = groupdict["field_name"]
        self.model_field_dict = {
            k: v for k, v in groupdict.items() if k != "field_name"
        }

    def __str__(self):
        return f"BayesrunColumnParsed[{self.column_field}: {self.model_field_dict}]"


@dataclasses.dataclass
class BayesrunModelResult:
    parsed_model_keys: typing.Dict[str, str]
    success: int
    count: int


@dataclasses.dataclass
class BayesrunOutput:
    filename: BayesrunOutputFilename
    data: typing.Dict["str", typing.Any]
    results: typing.Sequence[BayesrunModelResult]


def _batch_iterable_into_chunks(iterable, n=1):
    """
    utility for batching bayesrun files where columns appear in threes
    """
    for ndx in range(0, len(iterable), n):
        yield iterable[ndx : min(ndx + n, len(iterable))]


def _parse_bayesrun_column(
    column: str,
) -> typing.Optional[BayesrunColumnParsed]:
    """
    Tries one by one all of a predefined list of regexes that I might have used in the past.
    Returns the groupdict for the first match, or None if no match found.
    """
    for pattern in MODEL_REGEXES:
        match = re.match(pattern, column)
        if match:
            return BayesrunColumnParsed(match.groupdict())
    else:
        return None


def _parse_bayesrun_row(
    row: typing.Dict[str, str],
) -> typing.Sequence[BayesrunModelResult]:

    results = []
    batched_keys = _batch_iterable_into_chunks(list(row.keys()), 3)
    for model_keys in batched_keys:
        parsed = [_parse_bayesrun_column(column) for column in model_keys]
        values = [row[column] for column in model_keys]
        if parsed[0] is None:
            raise ValueError(f"no viable success row found for keys {model_keys}")
        if parsed[1] is None:
            raise ValueError(f"no viable count row found for keys {model_keys}")
        if parsed[0].column_field != "success":
            raise ValueError(f"The column {model_keys[0]} is not a success field")
        if parsed[1].column_field != "count":
            raise ValueError(f"The column {model_keys[1]} is not a count field")
        parsed_keys = parsed[0].model_field_dict
        success = int(values[0])
        count = int(values[1])
        results.append(
            BayesrunModelResult(
                parsed_model_keys=parsed_keys,
                success=success,
                count=count,
            )
        )
    return results


def _parse_output_filename(file: pathlib.Path) -> BayesrunOutputFilename:
    filename = file.name
    match = re.match(FILENAME_REGEX, filename)
    if not match:
        raise ValueError(f"{filename} was not a valid bayesrun output")
    groups = match.groupdict()
    return BayesrunOutputFilename(
        timestamp=groups["timestamp"], filename_slug=groups["filename_slug"], path=file
    )


def _parse_file_slug(slug: str) -> typing.Optional[typing.Dict[str, str]]:
    for pattern in FILE_SLUG_REGEXES:
        match = re.match(pattern, slug)
        if match:
            return match.groupdict()
    else:
        return None


def read_output_file(
    file: pathlib.Path, indexifier: typing.Optional[deepdog.indexify.Indexifier]
) -> BayesrunOutput:

    parsed_filename = tag = _parse_output_filename(file)
    out = BayesrunOutput(filename=parsed_filename, data={}, results=[])

    out.data.update(dataclasses.asdict(tag))
    parsed_tag = _parse_file_slug(parsed_filename.filename_slug)
    if parsed_tag is None:
        _logger.warning(
            f"Could not parse {tag} against any matching regexes. Going to skip tag parsing"
        )
    else:
        out.data.update(parsed_tag)
        if indexifier is not None:
            try:
                job_index = parsed_tag["job_index"]
                indexified = indexifier.indexify(int(job_index))
                out.data.update(indexified)
            except KeyError:
                # This isn't really that important of an error, apart from the warning
                _logger.warning(
                    f"Parsed tag to {parsed_tag}, and attempted to indexify but no job_index key was found. skipping and moving on"
                )

    with file.open() as input_file:
        reader = csv.DictReader(input_file)
        rows = [r for r in reader]
        if len(rows) == 1:
            row = rows[0]
        else:
            raise ValueError(f"Confused about having multiple rows in {file.name}")
    results = _parse_bayesrun_row(row)

    out.results = results

    return out
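For orientation, here is a column name and a file slug that the regexes above would accept; both strings are invented but built to match MODEL_REGEXES and FILE_SLUG_REGEXES, and the underscored helpers are module-private:

# Sketch: what the parsers above extract from an invented column name and file slug.
import deepdog.results

parsed = deepdog.results._parse_bayesrun_column(
    "geom_-20_20_-10_10_0_5-orientation_free-dipole_count_1_success"
)
print(parsed.column_field)      # 'success'
print(parsed.model_field_dict)  # {'xmin': '-20', 'xmax': '20', ..., 'orientation': 'free', 'avg_filled': '1'}

print(deepdog.results._parse_file_slug("mock_tarucha_5-17"))
# {'mock': 'mock', 'tarucha_run_id': '5', 'job_index': '17'}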
@@ -36,6 +36,7 @@
|
|||||||
self.packages.${system}.deepdogEnv
|
self.packages.${system}.deepdogEnv
|
||||||
self.packages.${system}.deepdogApp
|
self.packages.${system}.deepdogApp
|
||||||
pkgs.just
|
pkgs.just
|
||||||
|
pkgs.nodejs
|
||||||
];
|
];
|
||||||
shellHook = ''
|
shellHook = ''
|
||||||
export DO_NIX_CUSTOM=1
|
export DO_NIX_CUSTOM=1
|
||||||
|
justfile (12 changed lines)
@@ -46,9 +46,15 @@ fmt:
 	find deepdog -type f -name "*.py" -exec sed -i -e 's/    /\t/g' {} \;
 	find tests -type f -name "*.py" -exec sed -i -e 's/    /\t/g' {} \;
 
-# release the app, checking that our working tree is clean and ready for release
-release:
-	./scripts/release.sh
+# release the app, checking that our working tree is clean and ready for release, optionally takes target version
+release version="":
+	#!/usr/bin/env bash
+	set -euxo pipefail
+	if [[ -n "{{version}}" ]]; then
+		./scripts/release.sh {{version}}
+	else
+		./scripts/release.sh
+	fi
 
 htmlcov:
 	poetry run pytest --cov-report=html
poetry.lock (generated; 327 changed lines)

Dependency changes recorded in the regenerated lock file (per-package wheel hash lists and extras metadata updated to match):

  backports-tarfile    added at 1.1.1
  coverage             7.4.3  -> 7.5.0
  exceptiongroup       1.2.0  -> 1.2.1
  gitpython            3.1.42 -> 3.1.43
  idna                 3.6    -> 3.7
  importlib-metadata   7.0.1  -> 7.1.0
  importlib-resources  6.1.2  -> 6.4.0
  jaraco-classes       3.3.1  -> 3.4.0  (renamed from "jaraco.classes")
  jaraco-context       added at 5.3.0
  jaraco-functools     added at 4.0.1
  keyring              24.3.1 -> 25.2.0  (now also depends on "jaraco.context" and "jaraco.functools")
  nh3                  0.2.15 -> 0.2.17
  packaging            23.2   -> 24.0
  pdme                 0.9.3  -> 1.0.0
  pkginfo              1.9.6  -> 1.10.0
  platformdirs         4.2.0  -> 4.2.1
  pluggy               1.4.0  -> 1.5.0
  pycparser            2.21   -> 2.22
  pytest               8.0.2  -> 8.2.0  (requires pluggy >=1.5,<2.0 and tomli >=1; "testing" extra renamed to "dev")
  typing-extensions    4.10.0 -> 4.11.0
  zipp                 3.17.0 -> 3.18.1

  [metadata]
    lock-version = "2.0" and python-versions = ">=3.8.1,<3.10" unchanged
    content-hash: b7f33da5b5a2af6bcb2a4c95cf391d04a76047d4f7e5c105b7cc38c73563fa51
               -> a28054e255cbd49396795127380c2b7a0cfd742b15cba2184322f3c4894ed041
pyproject.toml

@@ -1,14 +1,15 @@
 [tool.poetry]
 name = "deepdog"
-version = "0.7.9"
+version = "1.0.1"
 description = ""
 authors = ["Deepak Mallubhotla <dmallubhotla+github@gmail.com>"]
 
 [tool.poetry.dependencies]
 python = ">=3.8.1,<3.10"
-pdme = "^0.9.3"
+pdme = "^1.0.0"
 numpy = "1.22.3"
 scipy = "1.10"
+tqdm = "^4.66.2"
 
 [tool.poetry.dev-dependencies]
 pytest = ">=6"
@@ -19,6 +20,9 @@ python-semantic-release = "^7.24.0"
 black = "^22.3.0"
 syrupy = "^4.0.8"
 
+[tool.poetry.scripts]
+probs = "deepdog.cli.probs:wrapped_main"
+
 [build-system]
 requires = ["poetry-core>=1.0.0"]
 build-backend = "poetry.core.masonry.api"
@@ -38,6 +42,13 @@ module = [
 ]
 ignore_missing_imports = true
 
+[[tool.mypy.overrides]]
+module = [
+	"tqdm",
+	"tqdm.*"
+]
+ignore_missing_imports = true
+
 [tool.semantic_release]
 version_toml = "pyproject.toml:tool.poetry.version"
 tag_format = "{version}"
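Besides the version bump to 1.0.1 and the pdme constraint moving to ^1.0.0, this adds tqdm as a runtime dependency (with a matching mypy override so its missing type stubs are ignored) and registers a probs console script pointing at deepdog.cli.probs:wrapped_main; once the package is installed, that entry point should make the CLI available as `probs`, or `poetry run probs` inside the project.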
scripts/release.sh

@@ -25,15 +25,22 @@ if [ -z "$(git status --porcelain)" ]; then
 	exit 0
 fi
 
+std_version_args=()
+if [[ -n "${1:-}" ]]; then
+	std_version_args+=( "--release-as" "$1" )
+	echo "Parameter $1 was supplied, so we should use release-as"
+else
+	echo "No release-as parameter specifed."
+fi
 # Working directory clean
 echo "Doing a dry run..."
-npx standard-version --dry-run
+npx standard-version --dry-run "${std_version_args[@]}"
 read -p "Does that look good? [y/N] " -n 1 -r
 echo # (optional) move to a new line
 if [[ $REPLY =~ ^[Yy]$ ]]
 then
 	# do dangerous stuff
-	npx standard-version
+	npx standard-version "${std_version_args[@]}"
 	git push --follow-tags origin master
 else
 	echo "okay, never mind then..."
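This completes the chain started in the justfile: when a positional argument is supplied (e.g. ./scripts/release.sh 1.0.2, version number illustrative), it is appended as "--release-as 1.0.2" to both the dry-run and the real npx standard-version invocations; with no argument the array stays empty and the behaviour is unchanged.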
@@ -1,4 +1,4 @@
-const pattern = /(\[tool\.poetry\]\nname = "deepdog"\nversion = ")(?<vers>\d+\.\d+\.\d)(")/mg;
+const pattern = /(\[tool\.poetry\]\nname = "deepdog"\nversion = ")(?<vers>\d+\.\d+\.\d+)(")/mg;
 
 module.exports.readVersion = function (contents) {
 	const result = pattern.exec(contents);
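The one-character change matters for multi-digit patch releases: the old (?<vers>\d+\.\d+\.\d) group insists on a single final digit before the closing quote, so a line like version = "0.7.10" would not match at all and the standard-version updater could neither read nor rewrite the version; \d+ accepts the full patch component.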
tests/indexify/__init__.py (new file, 0 lines)
tests/indexify/test_indexify.py (new file, 12 lines)
@@ -0,0 +1,12 @@
+import deepdog.indexify
+import logging
+
+_logger = logging.getLogger(__name__)
+
+
+def test_indexifier():
+	weight_dict = {"key_1": [1, 2, 3], "key_2": ["a", "b", "c"]}
+	indexifier = deepdog.indexify.Indexifier(weight_dict)
+	_logger.debug(f"setting up indexifier {indexifier}")
+	assert indexifier.indexify(0) == {"key_1": 1, "key_2": "a"}
+	assert indexifier.indexify(5) == {"key_1": 2, "key_2": "c"}
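As a reading aid only: the two assertions pin down an indexing scheme that looks like row-major enumeration of the Cartesian product of the weight_dict value lists (index 5 lands on the second value of key_1 and the third value of key_2). A minimal sketch of that behaviour, assuming this interpretation and not taken from the deepdog.indexify source:

    import itertools


    def indexify_sketch(weight_dict, n):
        # Illustrative only: return the n-th element (row-major) of the
        # Cartesian product of the value lists, keyed in insertion order.
        keys = list(weight_dict.keys())
        product = itertools.product(*(weight_dict[key] for key in keys))
        combo = next(itertools.islice(product, n, None))
        return dict(zip(keys, combo))


    assert indexify_sketch({"key_1": [1, 2, 3], "key_2": ["a", "b", "c"]}, 5) == {
        "key_1": 2,
        "key_2": "c",
    }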
tests/results/__init__.py (new file, 0 lines)
tests/results/test_column_results.py (new file, 28 lines)
@@ -0,0 +1,28 @@
+import deepdog.results
+
+
+def test_parse_groupdict():
+	example_column_name = (
+		"geom_-20_20_-10_10_0_5-orientation_free-dipole_count_100_success"
+	)
+
+	parsed = deepdog.results._parse_bayesrun_column(example_column_name)
+	expected = deepdog.results.BayesrunColumnParsed(
+		{
+			"xmin": "-20",
+			"xmax": "20",
+			"ymin": "-10",
+			"ymax": "10",
+			"zmin": "0",
+			"zmax": "5",
+			"orientation": "free",
+			"avg_filled": "100",
+			"field_name": "success",
+		}
+	)
+	assert parsed == expected
+
+
+# def test_parse_no_match_column_name():
+# 	parsed = deepdog.results.parse_bayesrun_column("There's nothing here")
+# 	assert parsed is None
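The example documents the bayesrun column-naming convention: a geometry block (xmin, xmax, ymin, ymax, zmin, zmax), an orientation tag, an average dipole count, and a trailing field name. A rough illustration of how such a name could be decomposed; this regex is a guess written only for illustration, not the pattern deepdog.results actually uses:

    import re

    # Hypothetical pattern, intended only to show the naming convention
    # exercised by the test above.
    COLUMN_SKETCH = re.compile(
        r"geom_(?P<xmin>-?\d+)_(?P<xmax>-?\d+)_(?P<ymin>-?\d+)_(?P<ymax>-?\d+)"
        r"_(?P<zmin>-?\d+)_(?P<zmax>-?\d+)"
        r"-orientation_(?P<orientation>\w+)"
        r"-dipole_count_(?P<avg_filled>\d+)_(?P<field_name>\w+)$"
    )

    match = COLUMN_SKETCH.match(
        "geom_-20_20_-10_10_0_5-orientation_free-dipole_count_100_success"
    )
    assert match is not None
    assert match.group("xmin") == "-20" and match.group("field_name") == "success"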