Compare commits

1 commit: 5a5b2af273...ffbc9cb5ed
.gitignore (vendored): 2 changes

@@ -143,5 +143,3 @@ dmypy.json
 cython_debug/
 
 *.csv
-
-local_scripts/
CHANGELOG.md: 57 changes

@@ -2,63 +2,6 @@
 
 All notable changes to this project will be documented in this file. See [standard-version](https://github.com/conventional-changelog/standard-version) for commit guidelines.
 
 ### [0.8.1](https://gitea.deepak.science:2222/physics/deepdog/compare/0.8.0...0.8.1) (2024-04-28)
 
-## [0.8.0](https://gitea.deepak.science:2222/physics/deepdog/compare/0.7.10...0.8.0) (2024-04-28)
-
-### ⚠ BREAKING CHANGES
-
-* fixes the spin qubit frequency phase shift calculation which had an index problem
-
-### Bug Fixes
-
-* fixes the spin qubit frequency phase shift calculation which had an index problem ([f9646e3](https://gitea.deepak.science:2222/physics/deepdog/commit/f9646e33868e1a0da8ab663230c0c692ac25bb74))
-
-### [0.7.10](https://gitea.deepak.science:2222/physics/deepdog/compare/0.7.9...0.7.10) (2024-04-28)
-
-### Features
-
-* adds cli probs ([4b2e573](https://gitea.deepak.science:2222/physics/deepdog/commit/4b2e57371546731137b011461849bb849d4d4e0f))
-* better management of cli wrapper ([b0ad4be](https://gitea.deepak.science:2222/physics/deepdog/commit/b0ad4bead0d4762eb7f848f6e557f6d9b61200b9))
-
-### [0.7.9](https://gitea.deepak.science:2222/physics/deepdog/compare/0.7.8...0.7.9) (2024-04-21)
-
-### Features
-
-* adds ability to write custom dmc filters ([ea080ca](https://gitea.deepak.science:2222/physics/deepdog/commit/ea080ca1c7068042ce1e0a222d317f785a6b05f4))
-* adds tarucha phase calculation, using spin qubit precession rate noise ([3ae0783](https://gitea.deepak.science:2222/physics/deepdog/commit/3ae0783d00cbe6a76439c1d671f2cff621d8d0a8))
-
-### [0.7.8](https://gitea.deepak.science:2222/physics/deepdog/compare/0.7.7...0.7.8) (2024-02-29)
-
-### Bug Fixes
-
-* uses correct measurements ([5f534a6](https://gitea.deepak.science:2222/physics/deepdog/commit/5f534a60cc7c4838fcacee11a7e58b97d34e154a))
-
-### [0.7.7](https://gitea.deepak.science:2222/physics/deepdog/compare/0.7.6...0.7.7) (2024-02-29)
-
-### Bug Fixes
-
-* fixes phase calculation issue with setting input array ([48e41cb](https://gitea.deepak.science:2222/physics/deepdog/commit/48e41cbd2c58d4c4d2747822d618d7d55257643d))
-
-### [0.7.6](https://gitea.deepak.science:2222/physics/deepdog/compare/0.7.5...0.7.6) (2024-02-28)
-
-### Features
-
-* adds ability to use phase measurements only for correlations ([bb72e90](https://gitea.deepak.science:2222/physics/deepdog/commit/bb72e903d14704a3783daf2dbc1797b90880aa85))
-
-### Bug Fixes
-
-* fixes typeerror vs indexerror on bare float as cost in subset simulation ([65e1948](https://gitea.deepak.science:2222/physics/deepdog/commit/65e19488359d7f5656660da7da8f32ed474989c3))
-
 ### [0.7.5](https://gitea.deepak.science:2222/physics/deepdog/compare/0.7.4...0.7.5) (2023-12-09)
README.md: 11 changes

@@ -5,7 +5,7 @@
 [](https://jenkins.deepak.science/job/gitea-physics/job/deepdog/job/master/)
 ![](https://img.shields.io/github/license/dmallubhotla/deepdog)
 ![maintenance shield]
 ![code style black shield]
 
 The DiPole DiaGnostic tool.
 
@@ -13,13 +13,6 @@ The DiPole DiaGnostic tool.
 
 `poetry install` to start locally
 
-Commit using [Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0/), and when commits are on master, release with `just release`.
-
-In general `just --list` has some of the useful stuff for figuring out what development tools there are.
-
-Poetry as an installer is good, even better is using Nix (maybe with direnv to automatically pick up the `devShell` from `flake.nix`).
-In either case `just` should handle actually calling things in a way that's agnostic to poetry as a runner or through nix.
-
-### local scripts
-`local_scripts` folder allows for scripts to be run using this code, but that probably isn't the most auditable for actual usage.
-The API is still only something I'm using so there's no guarantees yet that it will be stable; overall semantic versioning should help with API breaks.
+Commit using [Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0/), and when commits are on master, release with `doo release`.
@@ -1,5 +0,0 @@
-from deepdog.cli.probs.main import wrapped_main
-
-__all__ = [
-	"wrapped_main",
-]
@@ -1,63 +0,0 @@
-import argparse
-import os
-
-
-def parse_args() -> argparse.Namespace:
-	def dir_path(path):
-		if os.path.isdir(path):
-			return path
-		else:
-			raise argparse.ArgumentTypeError(f"readable_dir:{path} is not a valid path")
-
-	parser = argparse.ArgumentParser(
-		"probs", description="Calculating probability from finished bayesrun"
-	)
-	parser.add_argument(
-		"--log_file",
-		type=str,
-		help="A filename for logging to, if not provided will only log to stderr",
-		default=None,
-	)
-	parser.add_argument(
-		"--bayesrun-directory",
-		"-d",
-		type=dir_path,
-		help="The directory to search for bayesrun files, defaulting to cwd if not passed",
-		default=".",
-	)
-	parser.add_argument(
-		"--indexify-json",
-		help="A json file with the indexify config for parsing job indexes. Will skip if not present",
-		default="",
-	)
-	parser.add_argument(
-		"--seed-index",
-		type=int,
-		help='take an integer to append as a "seed" key with range at end of indexify dict. Skip if <= 0',
-		default=0,
-	)
-	parser.add_argument(
-		"--seed-fieldname",
-		type=str,
-		help='if --seed-index is set, the fieldname to append to the indexifier. "seed" by default',
-		default="seed",
-	)
-	parser.add_argument(
-		"--coalesced-keys",
-		type=str,
-		help="A comma separated list of strings over which to coalesce data. By default coalesce over all fields within model names, ignore file level names",
-		default="",
-	)
-	parser.add_argument(
-		"--uncoalesced-outfile",
-		type=str,
-		help="output filename for uncoalesced data. If not provided, will not be written",
-		default=None,
-	)
-	parser.add_argument(
-		"--coalesced-outfile",
-		type=str,
-		help="output filename for coalesced data. If not provided, will not be written",
-		default=None,
-	)
-	return parser.parse_args()
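The deleted `parse_args` above builds its CLI around a custom `argparse` type callable for directory validation. A minimal, self-contained sketch of that pattern (the parser name and flag mirror the removed code, but this is an illustration, not the project's API):

```python
import argparse
import os


def dir_path(path: str) -> str:
    # argparse "type" callable: accept the string only if it is an existing directory,
    # otherwise raise ArgumentTypeError so argparse prints a clean usage error.
    if os.path.isdir(path):
        return path
    raise argparse.ArgumentTypeError(f"readable_dir:{path} is not a valid path")


parser = argparse.ArgumentParser("probs")
parser.add_argument("--bayesrun-directory", "-d", type=dir_path, default=".")

# Parse an explicit argv list instead of sys.argv so the sketch is reproducible.
args = parser.parse_args(["-d", "."])
print(args.bayesrun_directory)
```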
@@ -1,178 +0,0 @@
-import typing
-from deepdog.results import BayesrunOutput
-import logging
-import csv
-import tqdm
-
-_logger = logging.getLogger(__name__)
-
-
-def build_model_dict(
-	bayes_outputs: typing.Sequence[BayesrunOutput],
-) -> typing.Dict[
-	typing.Tuple, typing.Dict[typing.Tuple, typing.Dict["str", typing.Any]]
-]:
-	"""
-	Maybe someday do something smarter with the coalescing and stuff but don't want to so i won't
-	"""
-	# assume that everything is well formatted and the keys are the same across entire list and initialise list of keys.
-	# model dict will contain a model_key: {calculation_dict} where each calculation_dict represents a single calculation for that model,
-	# the uncoalesced version, keyed by the specific file keys
-	model_dict: typing.Dict[
-		typing.Tuple, typing.Dict[typing.Tuple, typing.Dict["str", typing.Any]]
-	] = {}
-
-	_logger.info("building model dict")
-	for out in tqdm.tqdm(bayes_outputs, desc="reading outputs", leave=False):
-		for model_result in out.results:
-			model_key = tuple(v for v in model_result.parsed_model_keys.values())
-			if model_key not in model_dict:
-				model_dict[model_key] = {}
-			calculation_dict = model_dict[model_key]
-			calculation_key = tuple(v for v in out.data.values())
-			if calculation_key not in calculation_dict:
-				calculation_dict[calculation_key] = {
-					"_model_key_dict": model_result.parsed_model_keys,
-					"_calculation_key_dict": out.data,
-					"success": model_result.success,
-					"count": model_result.count,
-				}
-			else:
-				raise ValueError(
-					f"Got {calculation_key} twice for model_key {model_key}"
-				)
-
-	return model_dict
-
-
-def write_uncoalesced_dict(
-	uncoalesced_output_filename: typing.Optional[str],
-	uncoalesced_model_dict: typing.Dict[
-		typing.Tuple, typing.Dict[typing.Tuple, typing.Dict["str", typing.Any]]
-	],
-):
-	if uncoalesced_output_filename is None or uncoalesced_output_filename == "":
-		_logger.warning("Not provided a uncoalesced filename, not going to try")
-		return
-
-	first_value = next(iter(next(iter(uncoalesced_model_dict.values())).values()))
-	model_field_names = set(first_value["_model_key_dict"].keys())
-	calculation_field_names = set(first_value["_calculation_key_dict"].keys())
-	if not (set(model_field_names).isdisjoint(calculation_field_names)):
-		_logger.info(f"Detected model field names {model_field_names}")
-		_logger.info(f"Detected calculation field names {calculation_field_names}")
-		raise ValueError(
-			f"model field names {model_field_names} and calculation {calculation_field_names} have an overlap, which is possibly a problem"
-		)
-	collected_fieldnames = list(model_field_names)
-	collected_fieldnames.extend(calculation_field_names)
-	collected_fieldnames.extend(["success", "count"])
-	_logger.info(f"Full uncoalesced fieldnames are {collected_fieldnames}")
-	with open(uncoalesced_output_filename, "w", newline="") as uncoalesced_output_file:
-		writer = csv.DictWriter(
-			uncoalesced_output_file, fieldnames=collected_fieldnames
-		)
-		writer.writeheader()
-
-		for model_dict in uncoalesced_model_dict.values():
-			for calculation in model_dict.values():
-				row = calculation["_model_key_dict"].copy()
-				row.update(calculation["_calculation_key_dict"].copy())
-				row.update(
-					{
-						"success": calculation["success"],
-						"count": calculation["count"],
-					}
-				)
-				writer.writerow(row)
-
-
-def coalesced_dict(
-	uncoalesced_model_dict: typing.Dict[
-		typing.Tuple, typing.Dict[typing.Tuple, typing.Dict["str", typing.Any]]
-	],
-	minimum_count: float = 0.1,
-):
-	"""
-	pass in uncoalesced dict
-	the minimum_count field is what we use to make sure our probs are never zero
-	"""
-	coalesced_dict = {}
-
-	# we are already iterating so for no reason because performance really doesn't matter let's count the keys ourselves
-	num_keys = 0
-
-	# first pass coalesce
-	for model_key, model_dict in uncoalesced_model_dict.items():
-		num_keys += 1
-		for calculation in model_dict.values():
-			if model_key not in coalesced_dict:
-				coalesced_dict[model_key] = {
-					"_model_key_dict": calculation["_model_key_dict"].copy(),
-					"calculations_coalesced": 0,
-					"count": 0,
-					"success": 0,
-				}
-			sub_dict = coalesced_dict[model_key]
-			sub_dict["calculations_coalesced"] += 1
-			sub_dict["count"] += calculation["count"]
-			sub_dict["success"] += calculation["success"]
-
-	# second pass do probability calculation
-
-	prior = 1 / num_keys
-	_logger.info(f"Got {num_keys} model keys, so our prior will be {prior}")
-
-	total_weight = 0
-	for coalesced_model_dict in coalesced_dict.values():
-		model_weight = (
-			max(minimum_count, coalesced_model_dict["success"])
-			/ coalesced_model_dict["count"]
-		) * prior
-		total_weight += model_weight
-
-	total_prob = 0
-	for coalesced_model_dict in coalesced_dict.values():
-		model_weight = (
-			max(minimum_count, coalesced_model_dict["success"])
-			/ coalesced_model_dict["count"]
-		)
-		prob = model_weight * prior / total_weight
-		coalesced_model_dict["prob"] = prob
-		total_prob += prob
-
-	_logger.debug(
-		f"Got a total probability of {total_prob}, which should be close to 1 up to float/rounding error"
-	)
-	return coalesced_dict
-
-
-def write_coalesced_dict(
-	coalesced_output_filename: typing.Optional[str],
-	coalesced_model_dict: typing.Dict[typing.Tuple, typing.Dict["str", typing.Any]],
-):
-	if coalesced_output_filename is None or coalesced_output_filename == "":
-		_logger.warning("Not provided a uncoalesced filename, not going to try")
-		return
-
-	first_value = next(iter(coalesced_model_dict.values()))
-	model_field_names = set(first_value["_model_key_dict"].keys())
-	_logger.info(f"Detected model field names {model_field_names}")
-
-	collected_fieldnames = list(model_field_names)
-	collected_fieldnames.extend(["calculations_coalesced", "success", "count", "prob"])
-	with open(coalesced_output_filename, "w", newline="") as coalesced_output_file:
-		writer = csv.DictWriter(coalesced_output_file, fieldnames=collected_fieldnames)
-		writer.writeheader()
-
-		for model_dict in coalesced_model_dict.values():
-			row = model_dict["_model_key_dict"].copy()
-			row.update(
-				{
-					"calculations_coalesced": model_dict["calculations_coalesced"],
-					"success": model_dict["success"],
-					"count": model_dict["count"],
-					"prob": model_dict["prob"],
-				}
-			)
-			writer.writerow(row)
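The removed `coalesced_dict` normalises per-model success ratios into probabilities with a uniform prior, flooring successes at `minimum_count` so no model ever gets exactly zero probability. The arithmetic reduces to this sketch (the tally values here are made up purely for illustration):

```python
# Hypothetical per-model tallies: (successes, total Monte Carlo count).
tallies = {"model_a": (90, 1000), "model_b": (10, 1000), "model_c": (0, 1000)}

minimum_count = 0.1  # floor so a model with zero successes keeps nonzero probability
prior = 1 / len(tallies)  # uniform prior over models

# Unnormalised weight per model: floored success ratio times the prior.
weights = {
    key: (max(minimum_count, success) / count) * prior
    for key, (success, count) in tallies.items()
}

# Normalise so the probabilities sum to one.
total_weight = sum(weights.values())
probs = {key: weight / total_weight for key, weight in weights.items()}

print(probs)  # model_c gets a tiny but strictly positive probability
```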
@@ -1,95 +0,0 @@
-import logging
-import argparse
-import json
-import deepdog.cli.probs.args
-import deepdog.cli.probs.dicts
-import deepdog.results
-import deepdog.indexify
-import pathlib
-import tqdm
-import tqdm.contrib.logging
-
-
-_logger = logging.getLogger(__name__)
-
-
-def set_up_logging(log_file: str):
-
-	log_pattern = "%(asctime)s | %(levelname)-7s | %(name)s:%(lineno)d | %(message)s"
-	if log_file is None:
-		handlers = [
-			logging.StreamHandler(),
-		]
-	else:
-		handlers = [logging.StreamHandler(), logging.FileHandler(log_file)]
-	logging.basicConfig(
-		level=logging.DEBUG,
-		format=log_pattern,
-		# it's okay to ignore this mypy error because who cares about logger handler types
-		handlers=handlers,  # type: ignore
-	)
-	logging.captureWarnings(True)
-
-
-def main(args: argparse.Namespace):
-	"""
-	Main function with passed in arguments and no additional logging setup in case we want to extract out later
-	"""
-
-	with tqdm.contrib.logging.logging_redirect_tqdm():
-		_logger.info(f"args: {args}")
-
-		try:
-			if args.coalesced_keys:
-				raise NotImplementedError(
-					"Currently not supporting coalesced keys, but maybe in future"
-				)
-		except AttributeError:
-			# we don't care if this is missing because we don't actually want it to be there
-			pass
-
-		indexifier = None
-		if args.indexify_json:
-			with open(args.indexify_json, "r") as indexify_json_file:
-				indexify_data = json.load(indexify_json_file)
-				if args.seed_index > 0:
-					indexify_data[args.seed_fieldname] = list(range(args.seed_index))
-				# _logger.debug(f"Indexifier data looks like {indexify_data}")
-				indexifier = deepdog.indexify.Indexifier(indexify_data)
-
-		bayes_dir = pathlib.Path(args.bayesrun_directory)
-		out_files = [f for f in bayes_dir.iterdir() if f.name.endswith("bayesrun.csv")]
-		_logger.info(
-			f"Reading {len(out_files)} bayesrun.csv files in directory {args.bayesrun_directory}"
-		)
-		# _logger.info(out_files)
-		parsed_output_files = [
-			deepdog.results.read_output_file(f, indexifier)
-			for f in tqdm.tqdm(out_files, desc="reading files", leave=False)
-		]
-
-		_logger.info("building uncoalesced dict")
-		uncoalesced_dict = deepdog.cli.probs.dicts.build_model_dict(parsed_output_files)
-
-		if "uncoalesced_outfile" in args and args.uncoalesced_outfile:
-			deepdog.cli.probs.dicts.write_uncoalesced_dict(
-				args.uncoalesced_outfile, uncoalesced_dict
-			)
-		else:
-			_logger.info("Skipping writing uncoalesced")
-
-		_logger.info("building coalesced dict")
-		coalesced = deepdog.cli.probs.dicts.coalesced_dict(uncoalesced_dict)
-
-		if "coalesced_outfile" in args and args.coalesced_outfile:
-			deepdog.cli.probs.dicts.write_coalesced_dict(
-				args.coalesced_outfile, coalesced
-			)
-		else:
-			_logger.info("Skipping writing coalesced")
-
-
-def wrapped_main():
-	args = deepdog.cli.probs.args.parse_args()
-	set_up_logging(args.log_file)
-	main(args)
@@ -1,14 +0,0 @@
-from typing import Sequence
-from deepdog.direct_monte_carlo.direct_mc import DirectMonteCarloFilter
-import numpy
-
-
-class ComposedDMCFilter(DirectMonteCarloFilter):
-	def __init__(self, filters: Sequence[DirectMonteCarloFilter]):
-		self.filters = filters
-
-	def filter_samples(self, samples: numpy.ndarray) -> numpy.ndarray:
-		current_sample = samples
-		for filter in self.filters:
-			current_sample = filter.filter_samples(current_sample)
-		return current_sample
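The removed `ComposedDMCFilter` chains filters by threading the surviving samples through each filter in turn. A standalone sketch of that composition pattern (the `ThresholdFilter` here is a hypothetical stand-in for the project's concrete filters):

```python
import numpy


class Filter:
    """Minimal stand-in for the removed DirectMonteCarloFilter interface."""

    def filter_samples(self, samples: numpy.ndarray) -> numpy.ndarray:
        raise NotImplementedError


class ThresholdFilter(Filter):
    # Hypothetical concrete filter: keep rows whose first column exceeds a threshold.
    def __init__(self, threshold: float):
        self.threshold = threshold

    def filter_samples(self, samples: numpy.ndarray) -> numpy.ndarray:
        return samples[samples[:, 0] > self.threshold]


class ComposedFilter(Filter):
    # Mirrors the removed ComposedDMCFilter: each filter sees only the
    # survivors of the previous one, so the composition is an intersection.
    def __init__(self, filters):
        self.filters = filters

    def filter_samples(self, samples: numpy.ndarray) -> numpy.ndarray:
        current_sample = samples
        for f in self.filters:
            current_sample = f.filter_samples(current_sample)
        return current_sample


samples = numpy.array([[0.1], [0.5], [0.9]])
composed = ComposedFilter([ThresholdFilter(0.2), ThresholdFilter(0.6)])
kept = composed.filter_samples(samples)
print(kept)  # only the row [0.9] survives both thresholds
```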
@@ -2,7 +2,7 @@ import pdme.model
 import pdme.measurement
 import pdme.measurement.input_types
 import pdme.subspace_simulation
-from typing import Tuple, Dict, NewType, Any
+from typing import Tuple, Sequence
 from dataclasses import dataclass
 import logging
 import numpy
@@ -30,20 +30,6 @@ class DirectMonteCarloConfig:
 	tag: str = ""
 
-
-# Aliasing dict as a generic data container
-DirectMonteCarloData = NewType("DirectMonteCarloData", Dict[str, Any])
-
-
-class DirectMonteCarloFilter:
-	"""
-	Abstract class for filtering out samples matching some criteria. Initialise with data as needed,
-	then filter out samples as needed.
-	"""
-
-	def filter_samples(self, samples: numpy.ndarray) -> numpy.ndarray:
-		raise NotImplementedError
-
-
 class DirectMonteCarloRun:
 	"""
 	A single model Direct Monte Carlo run, currently implemented only using single threading.
@@ -79,26 +65,25 @@ class DirectMonteCarloRun:
 	def __init__(
 		self,
 		model_name_pair: Tuple[str, pdme.model.DipoleModel],
-		filter: DirectMonteCarloFilter,
+		measurements: Sequence[pdme.measurement.DotRangeMeasurement],
 		config: DirectMonteCarloConfig,
 	):
 		self.model_name, self.model = model_name_pair
 
-		# self.measurements = measurements
-		# self.dot_inputs = [(measure.r, measure.f) for measure in self.measurements]
-
-		# self.dot_inputs_array = pdme.measurement.input_types.dot_inputs_to_array(
-		# 	self.dot_inputs
-		# )
+		self.measurements = measurements
+		self.dot_inputs = [(measure.r, measure.f) for measure in self.measurements]
+
+		self.dot_inputs_array = pdme.measurement.input_types.dot_inputs_to_array(
+			self.dot_inputs
+		)
 
 		self.config = config
-		self.filter = filter
-		# (
-		# 	self.lows,
-		# 	self.highs,
-		# ) = pdme.measurement.input_types.dot_range_measurements_low_high_arrays(
-		# 	self.measurements
-		# )
+		(
+			self.lows,
+			self.highs,
+		) = pdme.measurement.input_types.dot_range_measurements_low_high_arrays(
+			self.measurements
+		)
 
 	def _single_run(self, seed) -> numpy.ndarray:
 		rng = numpy.random.default_rng(seed)
@@ -108,20 +93,18 @@ class DirectMonteCarloRun:
 		)
 
 		current_sample = sample_dipoles
-
-		return self.filter.filter_samples(current_sample)
-		# for di, low, high in zip(self.dot_inputs_array, self.lows, self.highs):
-
-		# 	if len(current_sample) < 1:
-		# 		break
-		# 	vals = pdme.util.fast_v_calc.fast_vs_for_dipoleses(
-		# 		numpy.array([di]), current_sample
-		# 	)
-
-		# 	current_sample = current_sample[
-		# 		numpy.all((vals > low) & (vals < high), axis=1)
-		# 	]
-		# return current_sample
+		for di, low, high in zip(self.dot_inputs_array, self.lows, self.highs):
+
+			if len(current_sample) < 1:
+				break
+			vals = pdme.util.fast_v_calc.fast_vs_for_dipoleses(
+				numpy.array([di]), current_sample
+			)
+
+			current_sample = current_sample[
+				numpy.all((vals > low) & (vals < high), axis=1)
+			]
+		return current_sample
 
 	def execute(self) -> DirectMonteCarloResult:
 		step_count = 0
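The loop restored in `_single_run` above is vectorised rejection sampling: for each dot input it computes a predicted value for every surviving dipole sample and keeps only samples whose values fall inside the measured (low, high) band. The masking step in isolation, with a random array standing in for the output of `fast_vs_for_dipoleses`:

```python
import numpy

rng = numpy.random.default_rng(0)
# Pretend each of 1000 samples produced one predicted value for a measurement.
vals = rng.uniform(0.0, 1.0, size=(1000, 1))
low, high = 0.25, 0.75

# Keep a sample only if every one of its predicted values lies inside the band,
# mirroring numpy.all((vals > low) & (vals < high), axis=1) in the diff.
mask = numpy.all((vals > low) & (vals < high), axis=1)
survivors = vals[mask]
print(len(survivors))  # roughly half the samples survive this band
```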
@@ -1,143 +0,0 @@
-from numpy import ndarray
-from deepdog.direct_monte_carlo.direct_mc import DirectMonteCarloFilter
-from typing import Sequence
-import pdme.measurement
-import pdme.measurement.input_types
-import pdme.util.fast_nonlocal_spectrum
-import pdme.util.fast_v_calc
-import numpy
-
-
-class SingleDotPotentialFilter(DirectMonteCarloFilter):
-	def __init__(self, measurements: Sequence[pdme.measurement.DotRangeMeasurement]):
-		self.measurements = measurements
-		self.dot_inputs = [(measure.r, measure.f) for measure in self.measurements]
-
-		self.dot_inputs_array = pdme.measurement.input_types.dot_inputs_to_array(
-			self.dot_inputs
-		)
-		(
-			self.lows,
-			self.highs,
-		) = pdme.measurement.input_types.dot_range_measurements_low_high_arrays(
-			self.measurements
-		)
-
-	def filter_samples(self, samples: ndarray) -> ndarray:
-		current_sample = samples
-		for di, low, high in zip(self.dot_inputs_array, self.lows, self.highs):
-
-			if len(current_sample) < 1:
-				break
-			vals = pdme.util.fast_v_calc.fast_vs_for_dipoleses(
-				numpy.array([di]), current_sample
-			)
-
-			current_sample = current_sample[
-				numpy.all((vals > low) & (vals < high), axis=1)
-			]
-		return current_sample
-
-
-class DoubleDotSpinQubitFrequencyFilter(DirectMonteCarloFilter):
-	def __init__(
-		self,
-		pair_phase_measurements: Sequence[pdme.measurement.DotPairRangeMeasurement],
-	):
-		self.pair_phase_measurements = pair_phase_measurements
-		self.dot_pair_inputs = [
-			(measure.r1, measure.r2, measure.f)
-			for measure in self.pair_phase_measurements
-		]
-		self.dot_pair_inputs_array = (
-			pdme.measurement.input_types.dot_pair_inputs_to_array(self.dot_pair_inputs)
-		)
-		(
-			self.pair_phase_lows,
-			self.pair_phase_highs,
-		) = pdme.measurement.input_types.dot_range_measurements_low_high_arrays(
-			self.pair_phase_measurements
-		)
-
-	def fast_s_spin_qubit_tarucha_nonlocal_dipoleses(
-		self, dot_pair_inputs: numpy.ndarray, dipoleses: numpy.ndarray
-	) -> numpy.ndarray:
-		"""
-		No error correction here baby.
-		"""
-		ps = dipoleses[:, :, 0:3]
-		ss = dipoleses[:, :, 3:6]
-		ws = dipoleses[:, :, 6]
-
-		r1s = dot_pair_inputs[:, 0, 0:3]
-		r2s = dot_pair_inputs[:, 1, 0:3]
-		f1s = dot_pair_inputs[:, 0, 3]
-		# Don't actually need this
-		# f2s = dot_pair_inputs[:, 1, 3]
-
-		diffses1 = r1s[:, None] - ss[:, None, :]
-		diffses2 = r2s[:, None] - ss[:, None, :]
-
-		norms1 = numpy.linalg.norm(diffses1, axis=3)
-		norms2 = numpy.linalg.norm(diffses2, axis=3)
-
-		alphses1 = (
-			(
-				3
-				* numpy.transpose(
-					numpy.transpose(
-						numpy.einsum("abcd,acd->abc", diffses1, ps) / (norms1**2)
-					)
-					* numpy.transpose(diffses1)
-				)[:, :, :, 0]
-			)
-			- ps[:, :, 0, numpy.newaxis]
-		) / (norms1**3)
-		alphses2 = (
-			(
-				3
-				* numpy.transpose(
-					numpy.transpose(
-						numpy.einsum("abcd,acd->abc", diffses2, ps) / (norms2**2)
-					)
-					* numpy.transpose(diffses2)
-				)[:, :, :, 0]
-			)
-			- ps[:, :, 0, numpy.newaxis]
-		) / (norms2**3)
-
-		bses = (1 / numpy.pi) * (
-			ws[:, None, :] / (f1s[:, None] ** 2 + ws[:, None, :] ** 2)
-		)
-
-		return numpy.einsum("...j->...", alphses1 * alphses2 * bses)
-
-	def filter_samples(self, samples: ndarray) -> ndarray:
-		current_sample = samples
-
-		for pi, plow, phigh in zip(
-			self.dot_pair_inputs_array, self.pair_phase_lows, self.pair_phase_highs
-		):
-			if len(current_sample) < 1:
-				break
-
-			###
-			# This should be abstracted out, but we're going to dump it here for time pressure's sake
-			###
-			# vals = pdme.util.fast_nonlocal_spectrum.signarg(
-			# 	pdme.util.fast_nonlocal_spectrum.fast_s_nonlocal_dipoleses(
-			# 		numpy.array([pi]), current_sample
-			# 	)
-			#
-			vals = pdme.util.fast_nonlocal_spectrum.signarg(
-				self.fast_s_spin_qubit_tarucha_nonlocal_dipoleses(
-					numpy.array([pi]), current_sample
-				)
-			)
-			current_sample = current_sample[
-				numpy.all(
-					((vals > plow) & (vals < phigh)) | ((vals < plow) & (vals > phigh)),
-					axis=1,
-				)
-			]
-		return current_sample
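The phase filter's acceptance test is worth noting: `((vals > plow) & (vals < phigh)) | ((vals < plow) & (vals > phigh))` keeps a value that lies strictly between the two bounds regardless of which bound is larger, so the band works even when the "low" and "high" phases arrive in reversed order. In isolation, with made-up values:

```python
import numpy

vals = numpy.array([[-0.5], [0.1], [0.9]])
plow, phigh = 0.6, -0.2  # bounds deliberately given in "reversed" order

# Accept values strictly between the two bounds, whichever way round they are,
# mirroring the acceptance mask in the removed filter_samples.
mask = numpy.all(
    ((vals > plow) & (vals < phigh)) | ((vals < plow) & (vals > phigh)), axis=1
)
survivors = vals[mask]
print(survivors)  # only 0.1 lies between -0.2 and 0.6
```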
@ -1,58 +0,0 @@
|
|||||||
"""
|
|
||||||
Probably should just include a way to handle the indexify function I reuse so much.
|
|
||||||
|
|
||||||
All about breaking an integer into a tuple of values from lists, which is useful because of how we do CHTC runs.
"""

import itertools
import typing
import logging
import math

_logger = logging.getLogger(__name__)


# from https://stackoverflow.com/questions/5228158/cartesian-product-of-a-dictionary-of-lists
def _dict_product(dicts):
    """
    >>> list(_dict_product(dict(number=[1, 2], character='ab')))
    [{'character': 'a', 'number': 1},
     {'character': 'a', 'number': 2},
     {'character': 'b', 'number': 1},
     {'character': 'b', 'number': 2}]
    """
    return list(dict(zip(dicts.keys(), x)) for x in itertools.product(*dicts.values()))


class Indexifier:
    """
    The order of keys is very important, but collections.OrderedDict is no longer needed in python 3.7.
    I think it's okay to rely on that.
    """

    def __init__(self, list_dict: typing.Dict[str, typing.Sequence]):
        self.dict = list_dict

    def indexify(self, n: int) -> typing.Dict[str, typing.Any]:
        product_dict = _dict_product(self.dict)
        return product_dict[n]

    def _indexify_indices(self, n: int) -> typing.Sequence[int]:
        """
        Legacy indexify from old scripts, copy-pasted.
        Could be used like:
        >>> ret = {}
        >>> for k, i in zip(self.dict.keys(), self._indexify_indices(n)):
        ...     ret[k] = self.dict[k][i]
        >>> return ret
        """
        weights = [len(v) for v in self.dict.values()]
        N = math.prod(weights)
        curr_n = n
        curr_N = N
        out = []
        for w in weights[:-1]:
            # print(f"current: {curr_N}, {curr_n}, {curr_n // w}")
            curr_N = curr_N // w  # should be int division anyway
            out.append(curr_n // curr_N)
            curr_n = curr_n % curr_N
        return out
@@ -66,139 +66,6 @@ def get_a_result_fast_filter_pairs(input) -> int:
    return len(current_sample)

def get_a_result_fast_filter_potential_pair_phase_only(input) -> int:
    (
        model,
        pair_inputs,
        pair_phase_lows,
        pair_phase_highs,
        monte_carlo_count,
        seed,
    ) = input

    rng = numpy.random.default_rng(seed)
    # TODO: A long term refactor is to pull the frequency stuff out from here. The None stands for max_frequency, which is unneeded in the actually useful models.
    sample_dipoles = model.get_monte_carlo_dipole_inputs(
        monte_carlo_count, None, rng_to_use=rng
    )

    current_sample = sample_dipoles

    for pi, plow, phigh in zip(pair_inputs, pair_phase_lows, pair_phase_highs):
        if len(current_sample) < 1:
            break
        vals = pdme.util.fast_nonlocal_spectrum.signarg(
            pdme.util.fast_nonlocal_spectrum.fast_s_nonlocal_dipoleses(
                numpy.array([pi]), current_sample
            )
        )

        current_sample = current_sample[
            numpy.all(
                ((vals > plow) & (vals < phigh)) | ((vals < plow) & (vals > phigh)),
                axis=1,
            )
        ]
    return len(current_sample)
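The filter mask above keeps a sample only when every measurement's computed phase lies strictly between the per-measurement low and high bounds, written so it works whether `plow < phigh` or the bounds happen to be supplied reversed. A small sketch with hypothetical phase values:

```python
import numpy

# hypothetical phases for four Monte Carlo samples against one pair measurement
vals = numpy.array([[0.1], [0.5], [0.9], [-0.3]])
plow, phigh = 0.0, 0.6

# keep a sample only if every measurement's phase falls strictly between the
# bounds, in either order of the bounds
mask = numpy.all(
    ((vals > plow) & (vals < phigh)) | ((vals < plow) & (vals > phigh)),
    axis=1,
)
# mask → [True, True, False, False]
```

Indexing `current_sample[mask]` then discards the samples whose phases fall outside the measured band.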
def get_a_result_fast_filter_tarucha_spin_qubit_pair_phase_only(input) -> int:
    (
        model,
        pair_inputs,
        pair_phase_lows,
        pair_phase_highs,
        monte_carlo_count,
        seed,
    ) = input

    def fast_s_spin_qubit_tarucha_nonlocal_dipoleses(
        dot_pair_inputs: numpy.ndarray, dipoleses: numpy.ndarray
    ) -> numpy.ndarray:
        """
        No error correction here baby.
        """
        ps = dipoleses[:, :, 0:3]
        ss = dipoleses[:, :, 3:6]
        ws = dipoleses[:, :, 6]

        r1s = dot_pair_inputs[:, 0, 0:3]
        r2s = dot_pair_inputs[:, 1, 0:3]
        f1s = dot_pair_inputs[:, 0, 3]
        # don't actually need f2s, because we're assuming they're the same frequencies across the pair
        # f2s = dot_pair_inputs[:, 1, 3]

        diffses1 = r1s[:, None] - ss[:, None, :]
        diffses2 = r2s[:, None] - ss[:, None, :]

        norms1 = numpy.linalg.norm(diffses1, axis=3)
        norms2 = numpy.linalg.norm(diffses2, axis=3)

        alphses1 = (
            (
                3
                * numpy.transpose(
                    numpy.transpose(
                        numpy.einsum("abcd,acd->abc", diffses1, ps) / (norms1**2)
                    )
                    * numpy.transpose(diffses1)
                )[:, :, :, 0]
            )
            - ps[:, numpy.newaxis, :, 0]
        ) / (norms1**3)
        alphses2 = (
            (
                3
                * numpy.transpose(
                    numpy.transpose(
                        numpy.einsum("abcd,acd->abc", diffses2, ps) / (norms2**2)
                    )
                    * numpy.transpose(diffses2)
                )[:, :, :, 0]
            )
            - ps[:, numpy.newaxis, :, 0]
        ) / (norms2**3)

        bses = (1 / numpy.pi) * (
            ws[:, None, :] / (f1s[:, None] ** 2 + ws[:, None, :] ** 2)
        )

        return numpy.einsum("...j->...", alphses1 * alphses2 * bses)

    rng = numpy.random.default_rng(seed)
    # TODO: A long term refactor is to pull the frequency stuff out from here. The None stands for max_frequency, which is unneeded in the actually useful models.
    sample_dipoles = model.get_monte_carlo_dipole_inputs(
        monte_carlo_count, None, rng_to_use=rng
    )

    current_sample = sample_dipoles

    for pi, plow, phigh in zip(pair_inputs, pair_phase_lows, pair_phase_highs):
        if len(current_sample) < 1:
            break

        ###
        # This should be abstracted out, but we're going to dump it here for time pressure's sake
        ###
        # vals = pdme.util.fast_nonlocal_spectrum.signarg(
        #     pdme.util.fast_nonlocal_spectrum.fast_s_nonlocal_dipoleses(
        #         numpy.array([pi]), current_sample
        #     )
        # )
        #
        vals = pdme.util.fast_nonlocal_spectrum.signarg(
            fast_s_spin_qubit_tarucha_nonlocal_dipoleses(
                numpy.array([pi]), current_sample
            )
        )
        current_sample = current_sample[
            numpy.all(
                ((vals > plow) & (vals < phigh)) | ((vals < plow) & (vals > phigh)),
                axis=1,
            )
        ]
    return len(current_sample)
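The nested helper's final `numpy.einsum("...j->...", ...)` is simply a contraction over the trailing axis, i.e. a sum over the dipole index. A quick check of that identity on an arbitrary array:

```python
import numpy

x = numpy.arange(24.0).reshape(2, 3, 4)
# "...j->..." contracts the trailing axis, equivalent to summing over axis=-1
assert numpy.array_equal(numpy.einsum("...j->...", x), x.sum(axis=-1))
```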


def get_a_result_fast_filter(input) -> int:
    model, dot_inputs, lows, highs, monte_carlo_count, seed = input

@@ -241,11 +108,6 @@ class RealSpectrumRun:

    run_count: int
        The number of runs to do.

    If pair_measurements is not None, uses the pair measurement method (and single measurements too).
    If pair_phase_measurements is not None, ignores measurements and uses phase measurements _only_.
    This is lazy design on my part.
    """

    def __init__(
@@ -263,9 +125,6 @@ class RealSpectrumRun:
        pair_measurements: Optional[
            Sequence[pdme.measurement.DotPairRangeMeasurement]
        ] = None,
        pair_phase_measurements: Optional[
            Sequence[pdme.measurement.DotPairRangeMeasurement]
        ] = None,
    ) -> None:
        self.measurements = measurements
        self.dot_inputs = [(measure.r, measure.f) for measure in self.measurements]
@@ -277,8 +136,6 @@ class RealSpectrumRun:
        if pair_measurements is not None:
            self.pair_measurements = pair_measurements
            self.use_pair_measurements = True
            self.use_pair_phase_measurements = False

            self.dot_pair_inputs = [
                (measure.r1, measure.r2, measure.f)
                for measure in self.pair_measurements
@@ -288,22 +145,8 @@ class RealSpectrumRun:
                    self.dot_pair_inputs
                )
            )
        elif pair_phase_measurements is not None:
            self.use_pair_measurements = False
            self.use_pair_phase_measurements = True
            self.pair_phase_measurements = pair_phase_measurements
            self.dot_pair_inputs = [
                (measure.r1, measure.r2, measure.f)
                for measure in self.pair_phase_measurements
            ]
            self.dot_pair_inputs_array = (
                pdme.measurement.input_types.dot_pair_inputs_to_array(
                    self.dot_pair_inputs
                )
            )
        else:
            self.use_pair_measurements = False
            self.use_pair_phase_measurements = False

        self.models = [model for (_, model) in models_with_names]
        self.model_names = [name for (name, _) in models_with_names]
@@ -355,16 +198,6 @@ class RealSpectrumRun:
                self.pair_measurements
            )

        pair_phase_lows = None
        pair_phase_highs = None
        if self.use_pair_phase_measurements:
            (
                pair_phase_lows,
                pair_phase_highs,
            ) = pdme.measurement.input_types.dot_range_measurements_low_high_arrays(
                self.pair_phase_measurements
            )

        # define a new seed sequence for each run
        seed_sequence = numpy.random.SeedSequence(self.initial_seed)

@@ -396,7 +229,6 @@ class RealSpectrumRun:
            seeds = seed_sequence.spawn(self.monte_carlo_cycles)

            if self.use_pair_measurements:
                _logger.debug("using pair measurements")
                current_success = sum(
                    pool.imap_unordered(
                        get_a_result_fast_filter_pairs,
@@ -417,26 +249,6 @@ class RealSpectrumRun:
                        self.chunksize,
                    )
                )
            elif self.use_pair_phase_measurements:
                _logger.debug("using pair phase measurements")
                _logger.debug("specifically using tarucha")
                current_success = sum(
                    pool.imap_unordered(
                        get_a_result_fast_filter_tarucha_spin_qubit_pair_phase_only,
                        [
                            (
                                model,
                                self.dot_pair_inputs_array,
                                pair_phase_lows,
                                pair_phase_highs,
                                self.monte_carlo_count,
                                seed,
                            )
                            for seed in seeds
                        ],
                        self.chunksize,
                    )
                )
            else:
                current_success = sum(
@@ -1,169 +0,0 @@
import dataclasses
import re
import typing
import logging
import deepdog.indexify
import pathlib
import csv

_logger = logging.getLogger(__name__)

FILENAME_REGEX = r"(?P<timestamp>\d{8}-\d{6})-(?P<filename_slug>.*)\.realdata\.fast_filter\.bayesrun\.csv"

MODEL_REGEXES = [
    r"geom_(?P<xmin>-?\d+)_(?P<xmax>-?\d+)_(?P<ymin>-?\d+)_(?P<ymax>-?\d+)_(?P<zmin>-?\d+)_(?P<zmax>-?\d+)-orientation_(?P<orientation>free|fixedxy|fixedz)-dipole_count_(?P<avg_filled>\d+)_(?P<field_name>\w*)"
]

FILE_SLUG_REGEXES = [
    r"mock_tarucha-(?P<job_index>\d+)",
]


@dataclasses.dataclass
class BayesrunOutputFilename:
    timestamp: str
    filename_slug: str
    path: pathlib.Path


@dataclasses.dataclass
class BayesrunColumnParsed:
    """
    Class for parsing a bayesrun column name while pulling certain special fields out.
    """

    def __init__(self, groupdict: typing.Dict[str, str]):
        self.column_field = groupdict["field_name"]
        self.model_field_dict = {
            k: v for k, v in groupdict.items() if k != "field_name"
        }

    def __str__(self):
        return f"BayesrunColumnParsed[{self.column_field}: {self.model_field_dict}]"


@dataclasses.dataclass
class BayesrunModelResult:
    parsed_model_keys: typing.Dict[str, str]
    success: int
    count: int


@dataclasses.dataclass
class BayesrunOutput:
    filename: BayesrunOutputFilename
    data: typing.Dict[str, typing.Any]
    results: typing.Sequence[BayesrunModelResult]


def _batch_iterable_into_chunks(iterable, n=1):
    """
    Utility for batching bayesrun files, where columns appear in threes.
    """
    for ndx in range(0, len(iterable), n):
        yield iterable[ndx : min(ndx + n, len(iterable))]
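The bayesrun CSV groups its columns three per model, and the parser walks them in chunks of three, expecting a success column first and a count column second. A standalone sketch of the chunking (the column names here are invented for illustration, including the third column per model, whose name the parser never inspects):

```python
def batch_into_chunks(iterable, n=1):
    # yields successive length-n slices; the last slice may be shorter
    for ndx in range(0, len(iterable), n):
        yield iterable[ndx : min(ndx + n, len(iterable))]


columns = ["m1_success", "m1_count", "m1_prob", "m2_success", "m2_count", "m2_prob"]
chunks = list(batch_into_chunks(columns, 3))
assert chunks == [columns[0:3], columns[3:6]]
```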


def _parse_bayesrun_column(
    column: str,
) -> typing.Optional[BayesrunColumnParsed]:
    """
    Tries, one by one, all of a predefined list of regexes that I might have used in the past.
    Returns the groupdict for the first match, or None if no match is found.
    """
    for pattern in MODEL_REGEXES:
        match = re.match(pattern, column)
        if match:
            return BayesrunColumnParsed(match.groupdict())
    else:
        return None


def _parse_bayesrun_row(
    row: typing.Dict[str, str],
) -> typing.Sequence[BayesrunModelResult]:

    results = []
    batched_keys = _batch_iterable_into_chunks(list(row.keys()), 3)
    for model_keys in batched_keys:
        parsed = [_parse_bayesrun_column(column) for column in model_keys]
        values = [row[column] for column in model_keys]
        if parsed[0] is None:
            raise ValueError(f"no viable success row found for keys {model_keys}")
        if parsed[1] is None:
            raise ValueError(f"no viable count row found for keys {model_keys}")
        if parsed[0].column_field != "success":
            raise ValueError(f"The column {model_keys[0]} is not a success field")
        if parsed[1].column_field != "count":
            raise ValueError(f"The column {model_keys[1]} is not a count field")
        parsed_keys = parsed[0].model_field_dict
        success = int(values[0])
        count = int(values[1])
        results.append(
            BayesrunModelResult(
                parsed_model_keys=parsed_keys,
                success=success,
                count=count,
            )
        )
    return results


def _parse_output_filename(file: pathlib.Path) -> BayesrunOutputFilename:
    filename = file.name
    match = re.match(FILENAME_REGEX, filename)
    if not match:
        raise ValueError(f"{filename} was not a valid bayesrun output")
    groups = match.groupdict()
    return BayesrunOutputFilename(
        timestamp=groups["timestamp"], filename_slug=groups["filename_slug"], path=file
    )


def _parse_file_slug(slug: str) -> typing.Optional[typing.Dict[str, str]]:
    for pattern in FILE_SLUG_REGEXES:
        match = re.match(pattern, slug)
        if match:
            return match.groupdict()
    else:
        return None
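The slug parser tries each regex in `FILE_SLUG_REGEXES` and returns the named groups of the first match, or `None`. A self-contained sketch of that behavior, with a hypothetical slug value:

```python
import re

FILE_SLUG_REGEXES = [
    r"mock_tarucha-(?P<job_index>\d+)",
]


def parse_file_slug(slug):
    # returns the first matching pattern's named groups, or None if nothing matches
    for pattern in FILE_SLUG_REGEXES:
        match = re.match(pattern, slug)
        if match:
            return match.groupdict()
    return None


assert parse_file_slug("mock_tarucha-17") == {"job_index": "17"}
assert parse_file_slug("unrelated-slug") is None
```

The extracted `job_index` string is what `read_output_file` later feeds to `Indexifier.indexify` after converting it to an int.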


def read_output_file(
    file: pathlib.Path, indexifier: typing.Optional[deepdog.indexify.Indexifier]
) -> BayesrunOutput:

    parsed_filename = tag = _parse_output_filename(file)
    out = BayesrunOutput(filename=parsed_filename, data={}, results=[])

    out.data.update(dataclasses.asdict(tag))
    parsed_tag = _parse_file_slug(parsed_filename.filename_slug)
    if parsed_tag is None:
        _logger.warning(
            f"Could not parse {tag} against any matching regexes. Going to skip tag parsing"
        )
    else:
        out.data.update(parsed_tag)
        if indexifier is not None:
            try:
                job_index = parsed_tag["job_index"]
                indexified = indexifier.indexify(int(job_index))
                out.data.update(indexified)
            except KeyError:
                # This isn't really that important of an error, apart from the warning
                _logger.warning(
                    f"Parsed tag to {parsed_tag}, and attempted to indexify, but no job_index key was found. Skipping and moving on"
                )

    with file.open() as input_file:
        reader = csv.DictReader(input_file)
        rows = [r for r in reader]
        if len(rows) == 1:
            row = rows[0]
        else:
            raise ValueError(f"Confused about having multiple rows in {file.name}")
        results = _parse_bayesrun_row(row)

    out.results = results

    return out
@@ -215,7 +215,7 @@ class SubsetSimulation:
        for cost, chained in chain:
            try:
                filtered_cost = cost[0]
-           except (IndexError, TypeError):
+           except IndexError:
                filtered_cost = cost
            all_chains.append((filtered_cost, chained))
        _logger.debug("finished mcmc")
do.sh (new file, 38 lines)
@@ -0,0 +1,38 @@
#!/usr/bin/env bash
# Do - The Simplest Build Tool on Earth.
# Documentation and examples see https://github.com/8gears/do

set -Eeuo pipefail # -e "Automatic exit from bash shell script on error" -u "Treat unset variables and parameters as errors"

build() {
    echo "I am ${FUNCNAME[0]}ing"
    poetry build
}

test() {
    echo "I am ${FUNCNAME[0]}ing"
    poetry run flake8 deepdog tests
    poetry run mypy deepdog
    poetry run pytest
}

fmt() {
    poetry run black .
    find . -not \( -path "./.*" -type d -prune \) -type f -name "*.py" -exec sed -i -e 's/    /\t/g' {} \;
}

release() {
    ./scripts/release.sh
}

htmlcov() {
    poetry run pytest --cov-report=html
}

all() {
    build && test
}

"$@" # <- execute the task

[ "$#" -gt 0 ] || printf "Usage:\n\t./do.sh %s\n" "($(compgen -A function | grep '^[^_]' | paste -sd '|' -))"
flake.lock (generated, 145 lines)
@@ -1,33 +1,28 @@
 {
   "nodes": {
     "flake-utils": {
-      "inputs": {
-        "systems": "systems"
-      },
       "locked": {
-        "lastModified": 1710146030,
-        "narHash": "sha256-SZ5L6eA7HJ/nmkzGG7/ISclqe6oZdOZTNoesiInkXPQ=",
+        "lastModified": 1648297722,
+        "narHash": "sha256-W+qlPsiZd8F3XkzXOzAoR+mpFqzm3ekQkJNa+PIh1BQ=",
         "owner": "numtide",
         "repo": "flake-utils",
-        "rev": "b1d9ab70662946ef0850d488da1c9019f3a9752a",
+        "rev": "0f8662f1319ad6abf89b3380dd2722369fc51ade",
         "type": "github"
       },
       "original": {
         "owner": "numtide",
         "repo": "flake-utils",
+        "rev": "0f8662f1319ad6abf89b3380dd2722369fc51ade",
         "type": "github"
       }
     },
     "flake-utils_2": {
-      "inputs": {
-        "systems": "systems_2"
-      },
       "locked": {
-        "lastModified": 1705309234,
-        "narHash": "sha256-uNRRNRKmJyCRC/8y1RqBkqWBLM034y4qN7EprSdmgyA=",
+        "lastModified": 1653893745,
+        "narHash": "sha256-0jntwV3Z8//YwuOjzhV2sgJJPt+HY6KhU7VZUL0fKZQ=",
         "owner": "numtide",
         "repo": "flake-utils",
-        "rev": "1ef2e671c3b0c19053962c07dbda38332dcebf26",
+        "rev": "1ed9fb1935d260de5fe1c2f7ee0ebaae17ed2fa1",
         "type": "github"
       },
       "original": {
@@ -36,34 +31,29 @@
         "type": "github"
       }
     },
-    "nix-github-actions": {
-      "inputs": {
-        "nixpkgs": [
-          "poetry2nixSrc",
-          "nixpkgs"
-        ]
-      },
-      "locked": {
-        "lastModified": 1703863825,
-        "narHash": "sha256-rXwqjtwiGKJheXB43ybM8NwWB8rO2dSRrEqes0S7F5Y=",
-        "owner": "nix-community",
-        "repo": "nix-github-actions",
-        "rev": "5163432afc817cf8bd1f031418d1869e4c9d5547",
-        "type": "github"
-      },
-      "original": {
-        "owner": "nix-community",
-        "repo": "nix-github-actions",
-        "type": "github"
-      }
-    },
     "nixpkgs": {
       "locked": {
-        "lastModified": 1710703777,
-        "narHash": "sha256-M4CNAgjrtvrxIWIAc98RTYcVFoAgwUhrYekeiMScj18=",
+        "lastModified": 1655087213,
+        "narHash": "sha256-4R5oQ+OwGAAcXWYrxC4gFMTUSstGxaN8kN7e8hkum/8=",
         "owner": "NixOS",
         "repo": "nixpkgs",
-        "rev": "fc7885fbcea4b782142e06ce2d4d08cf92862004",
+        "rev": "37b6b161e536fddca54424cf80662bce735bdd1e",
+        "type": "github"
+      },
+      "original": {
+        "owner": "NixOS",
+        "repo": "nixpkgs",
+        "rev": "37b6b161e536fddca54424cf80662bce735bdd1e",
+        "type": "github"
+      }
+    },
+    "nixpkgs_2": {
+      "locked": {
+        "lastModified": 1655046959,
+        "narHash": "sha256-gxqHZKq1ReLDe6ZMJSbmSZlLY95DsVq5o6jQihhzvmw=",
+        "owner": "NixOS",
+        "repo": "nixpkgs",
+        "rev": "07bf3d25ce1da3bee6703657e6a787a4c6cdcea9",
         "type": "github"
       },
       "original": {
@@ -72,27 +62,23 @@
         "type": "github"
       }
     },
-    "poetry2nixSrc": {
+    "poetry2nix": {
       "inputs": {
         "flake-utils": "flake-utils_2",
-        "nix-github-actions": "nix-github-actions",
-        "nixpkgs": [
-          "nixpkgs"
-        ],
-        "systems": "systems_3",
-        "treefmt-nix": "treefmt-nix"
+        "nixpkgs": "nixpkgs_2"
       },
       "locked": {
-        "lastModified": 1708589824,
-        "narHash": "sha256-2GOiFTkvs5MtVF65sC78KNVxQSmsxtk0WmV1wJ9V2ck=",
+        "lastModified": 1654921554,
+        "narHash": "sha256-hkfMdQAHSwLWlg0sBVvgrQdIiBP45U1/ktmFpY4g2Mo=",
         "owner": "nix-community",
         "repo": "poetry2nix",
-        "rev": "3c92540611f42d3fb2d0d084a6c694cd6544b609",
+        "rev": "7b71679fa7df00e1678fc3f1d1d4f5f372341b63",
         "type": "github"
       },
       "original": {
         "owner": "nix-community",
         "repo": "poetry2nix",
+        "rev": "7b71679fa7df00e1678fc3f1d1d4f5f372341b63",
         "type": "github"
       }
     },
@@ -100,72 +86,7 @@
       "inputs": {
         "flake-utils": "flake-utils",
         "nixpkgs": "nixpkgs",
-        "poetry2nixSrc": "poetry2nixSrc"
+        "poetry2nix": "poetry2nix"
       }
     },
-    "systems": {
-      "locked": {
-        "lastModified": 1681028828,
-        "narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
-        "owner": "nix-systems",
-        "repo": "default",
-        "rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
-        "type": "github"
-      },
-      "original": {
-        "owner": "nix-systems",
-        "repo": "default",
-        "type": "github"
-      }
-    },
-    "systems_2": {
-      "locked": {
-        "lastModified": 1681028828,
-        "narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
-        "owner": "nix-systems",
-        "repo": "default",
-        "rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
-        "type": "github"
-      },
-      "original": {
-        "owner": "nix-systems",
-        "repo": "default",
-        "type": "github"
-      }
-    },
-    "systems_3": {
-      "locked": {
-        "lastModified": 1681028828,
-        "narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
-        "owner": "nix-systems",
-        "repo": "default",
-        "rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
-        "type": "github"
-      },
-      "original": {
-        "id": "systems",
-        "type": "indirect"
-      }
-    },
-    "treefmt-nix": {
-      "inputs": {
-        "nixpkgs": [
-          "poetry2nixSrc",
-          "nixpkgs"
-        ]
-      },
-      "locked": {
-        "lastModified": 1708335038,
-        "narHash": "sha256-ETLZNFBVCabo7lJrpjD6cAbnE11eDOjaQnznmg/6hAE=",
-        "owner": "numtide",
-        "repo": "treefmt-nix",
-        "rev": "e504621290a1fd896631ddbc5e9c16f4366c9f65",
-        "type": "github"
-      },
-      "original": {
-        "owner": "numtide",
-        "repo": "treefmt-nix",
-        "type": "github"
-      }
-    },
       }
     }
   },
flake.nix (97 lines)
@@ -1,46 +1,63 @@
 {
   description = "Application packaged using poetry2nix";

-  inputs.flake-utils.url = "github:numtide/flake-utils";
+  inputs.flake-utils.url = "github:numtide/flake-utils?rev=0f8662f1319ad6abf89b3380dd2722369fc51ade";
-  inputs.nixpkgs.url = "github:NixOS/nixpkgs";
+  inputs.nixpkgs.url = "github:NixOS/nixpkgs?rev=37b6b161e536fddca54424cf80662bce735bdd1e";
-  inputs.poetry2nixSrc = {
-    url = "github:nix-community/poetry2nix";
-    inputs.nixpkgs.follows = "nixpkgs";
-  };
+  inputs.poetry2nix.url = "github:nix-community/poetry2nix?rev=7b71679fa7df00e1678fc3f1d1d4f5f372341b63";

-  outputs = { self, nixpkgs, flake-utils, poetry2nixSrc }:
+  outputs = { self, nixpkgs, flake-utils, poetry2nix }:
-    flake-utils.lib.eachDefaultSystem (system:
+    {
+      # Nixpkgs overlay providing the application
+      overlay = nixpkgs.lib.composeManyExtensions [
+        poetry2nix.overlay
+        (final: prev: {
+          # The application
+          deepdog = prev.poetry2nix.mkPoetryApplication {
+            overrides = final.poetry2nix.overrides.withDefaults (self: super: {
+              # …
+              # workaround https://github.com/nix-community/poetry2nix/issues/568
+              pdme = super.pdme.overridePythonAttrs (old: {
+                buildInputs = old.buildInputs or [ ] ++ [ final.python39.pkgs.poetry-core ];
+              });
+            });
+            projectDir = ./.;
+          };
+          deepdogEnv = prev.poetry2nix.mkPoetryEnv {
+            overrides = final.poetry2nix.overrides.withDefaults (self: super: {
+              # …
+              # workaround https://github.com/nix-community/poetry2nix/issues/568
+              pdme = super.pdme.overridePythonAttrs (old: {
+                buildInputs = old.buildInputs or [ ] ++ [ final.python39.pkgs.poetry-core ];
+              });
+            });
+            projectDir = ./.;
+          };
+        })
+      ];
+    } // (flake-utils.lib.eachDefaultSystem (system:
       let
-        pkgs = nixpkgs.legacyPackages.${system};
-        poetry2nix = poetry2nixSrc.lib.mkPoetry2Nix { inherit pkgs; };
-      in {
-        packages = {
-          deepdogApp = poetry2nix.mkPoetryApplication {
-            projectDir = self;
-            python = pkgs.python39;
-            preferWheels = true;
-          };
-          deepdogEnv = poetry2nix.mkPoetryEnv {
-            projectDir = self;
-            python = pkgs.python39;
-            preferWheels = true;
-            overrides = poetry2nix.overrides.withDefaults (self: super: {
-            });
-          };
-          default = self.packages.${system}.deepdogEnv;
-        };
-        devShells.default = pkgs.mkShell {
-          inputsFrom = [ self.packages.${system}.deepdogEnv ];
-          buildInputs = [
-            pkgs.poetry
-            self.packages.${system}.deepdogEnv
-            self.packages.${system}.deepdogApp
-            pkgs.just
-          ];
-          shellHook = ''
-            export DO_NIX_CUSTOM=1
-          '';
-        };
-      }
-    );
+        pkgs = import nixpkgs {
+          inherit system;
+          overlays = [ self.overlay ];
+        };
+      in
+      {
+        apps = {
+          deepdog = pkgs.deepdog;
+        };
+        defaultApp = pkgs.deepdog;
+        devShell = pkgs.mkShell {
+          buildInputs = [
+            pkgs.poetry
+            pkgs.deepdogEnv
+            pkgs.deepdog
+          ];
+          shellHook = ''
+            export DO_NIX_CUSTOM=1
+          '';
+          packages = [ pkgs.nodejs-16_x ];
+        };
+      }));
 }
justfile (54 lines)
@@ -1,54 +0,0 @@
# execute default build
default: build

# builds the python module using poetry
build:
    echo "building..."
    poetry build

# print a message displaying whether nix is being used
checknix:
    #!/usr/bin/env bash
    set -euxo pipefail
    if [[ "${DO_NIX_CUSTOM:=0}" -eq 1 ]]; then
        echo "In an interactive nix env."
    else
        echo "Using poetry as runner, no nix detected."
    fi

# run all tests
test: fmt
    #!/usr/bin/env bash
    set -euxo pipefail

    if [[ "${DO_NIX_CUSTOM:=0}" -eq 1 ]]; then
        echo "testing, using nix..."
        flake8 deepdog tests
        mypy deepdog
        pytest
    else
        echo "testing..."
        poetry run flake8 deepdog tests
        poetry run mypy deepdog
        poetry run pytest
    fi

# format code
fmt:
    #!/usr/bin/env bash
    set -euxo pipefail
    if [[ "${DO_NIX_CUSTOM:=0}" -eq 1 ]]; then
        black .
    else
        poetry run black .
    fi
    find deepdog -type f -name "*.py" -exec sed -i -e 's/    /\t/g' {} \;
    find tests -type f -name "*.py" -exec sed -i -e 's/    /\t/g' {} \;

# release the app, checking that our working tree is clean and ready for release
release:
    ./scripts/release.sh

htmlcov:
    poetry run pytest --cov-report=html
poetry.lock (generated, 931 lines)
File diff suppressed because it is too large. Load Diff
@@ -1,15 +1,14 @@
 [tool.poetry]
 name = "deepdog"
-version = "0.8.1"
+version = "0.7.5"
 description = ""
 authors = ["Deepak Mallubhotla <dmallubhotla+github@gmail.com>"]

 [tool.poetry.dependencies]
 python = ">=3.8.1,<3.10"
-pdme = "^0.9.3"
+pdme = "^0.9.1"
 numpy = "1.22.3"
 scipy = "1.10"
-tqdm = "^4.66.2"

 [tool.poetry.dev-dependencies]
 pytest = ">=6"
@@ -20,9 +19,6 @@ python-semantic-release = "^9.0.0"
 black = "^22.3.0"
 syrupy = "^4.0.8"

-[tool.poetry.scripts]
-probs = "deepdog.cli.probs:wrapped_main"

 [build-system]
 requires = ["poetry-core>=1.0.0"]
 build-backend = "poetry.core.masonry.api"
@@ -42,13 +38,6 @@ module = [
 ]
 ignore_missing_imports = true

-[[tool.mypy.overrides]]
-module = [
-    "tqdm",
-    "tqdm.*"
-]
-ignore_missing_imports = true

 [tool.semantic_release]
 version_toml = "pyproject.toml:tool.poetry.version"
 tag_format = "{version}"
@@ -1,4 +1,4 @@
-const pattern = /(\[tool\.poetry\]\nname = "deepdog"\nversion = ")(?<vers>\d+\.\d+\.\d+)(")/mg;
+const pattern = /(\[tool\.poetry\]\nname = "deepdog"\nversion = ")(?<vers>\d+\.\d+\.\d)(")/mg;
 
 module.exports.readVersion = function (contents) {
 	const result = pattern.exec(contents);
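The only change in the hunk above is the dropped `+` at the end of the `vers` group. That single character is enough to break version detection for any release with a multi-digit patch number, such as the 0.7.10 release mentioned in the changelog. A quick check with Python's `re` module illustrates this; the two patterns below are re-creations of the JavaScript originals (Python uses `(?P<vers>...)` where JS uses `(?<vers>...)`), not code from the repository itself:

```python
import re

# Re-creations of the two JS patterns from the diff above.
with_plus = re.compile(
    r'(\[tool\.poetry\]\nname = "deepdog"\nversion = ")(?P<vers>\d+\.\d+\.\d+)(")'
)
without_plus = re.compile(
    r'(\[tool\.poetry\]\nname = "deepdog"\nversion = ")(?P<vers>\d+\.\d+\.\d)(")'
)

contents = '[tool.poetry]\nname = "deepdog"\nversion = "0.7.10"\n'

# The pattern ending in \d+ captures the whole patch number.
assert with_plus.search(contents).group("vers") == "0.7.10"
# The pattern ending in a bare \d consumes only the "1" of "10"; the
# closing quote then fails to match the leftover "0", so nothing matches.
assert without_plus.search(contents) is None
```

Single-digit patch versions like 0.7.5 match either pattern, which is presumably why the tighter regex went unnoticed until double-digit patch releases appeared.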
@@ -1,12 +0,0 @@
-import deepdog.indexify
-import logging
-
-_logger = logging.getLogger(__name__)
-
-
-def test_indexifier():
-	weight_dict = {"key_1": [1, 2, 3], "key_2": ["a", "b", "c"]}
-	indexifier = deepdog.indexify.Indexifier(weight_dict)
-	_logger.debug(f"setting up indexifier {indexifier}")
-	assert indexifier.indexify(0) == {"key_1": 1, "key_2": "a"}
-	assert indexifier.indexify(5) == {"key_1": 2, "key_2": "c"}
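The deleted test pins down the behaviour of `deepdog.indexify.Indexifier` without showing its implementation: `indexify(n)` maps a flat integer index to the n-th combination of per-key values, enumerated in row-major order. A minimal sketch consistent with the two assertions above (an inferred reimplementation, not the library's actual code):

```python
import itertools


class Indexifier:
    """Hypothetical reimplementation inferred from the deleted test.

    indexify(n) returns the n-th combination of per-key values, with
    combinations enumerated in row-major (itertools.product) order over
    the dict's insertion order.
    """

    def __init__(self, weight_dict):
        self.weight_dict = weight_dict

    def indexify(self, n):
        keys = list(self.weight_dict)
        combos = itertools.product(*(self.weight_dict[k] for k in keys))
        # Skip ahead to the n-th combination without materializing the rest.
        values = next(itertools.islice(combos, n, None))
        return dict(zip(keys, values))


indexifier = Indexifier({"key_1": [1, 2, 3], "key_2": ["a", "b", "c"]})
assert indexifier.indexify(0) == {"key_1": 1, "key_2": "a"}
assert indexifier.indexify(5) == {"key_1": 2, "key_2": "c"}
```

With two keys of three values each, index 5 decodes to positions (1, 2), matching the test's expectation of `{"key_1": 2, "key_2": "c"}`.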
@@ -1,28 +0,0 @@
-import deepdog.results
-
-
-def test_parse_groupdict():
-	example_column_name = (
-		"geom_-20_20_-10_10_0_5-orientation_free-dipole_count_100_success"
-	)
-
-	parsed = deepdog.results._parse_bayesrun_column(example_column_name)
-	expected = deepdog.results.BayesrunColumnParsed(
-		{
-			"xmin": "-20",
-			"xmax": "20",
-			"ymin": "-10",
-			"ymax": "10",
-			"zmin": "0",
-			"zmax": "5",
-			"orientation": "free",
-			"avg_filled": "100",
-			"field_name": "success",
-		}
-	)
-	assert parsed == expected
-
-
-# def test_parse_no_match_column_name():
-# 	parsed = deepdog.results.parse_bayesrun_column("There's nothing here")
-# 	assert parsed is None
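The example column name in the deleted test encodes geometry bounds, an orientation, and a dipole count in one underscore-delimited string. A named-group regex is the natural way to pull those fields apart; the sketch below is a hypothetical stand-in for `deepdog.results._parse_bayesrun_column`, built only from the single example above, and is not the library's actual pattern:

```python
import re

# Hypothetical pattern inferred from the one example column name in the
# deleted test; the real deepdog.results regex may differ.
COLUMN_PATTERN = re.compile(
    r"geom_(?P<xmin>-?\d+)_(?P<xmax>-?\d+)"
    r"_(?P<ymin>-?\d+)_(?P<ymax>-?\d+)"
    r"_(?P<zmin>-?\d+)_(?P<zmax>-?\d+)"
    r"-orientation_(?P<orientation>[a-z]+)"
    r"-dipole_count_(?P<avg_filled>\d+)_(?P<field_name>\w+)"
)


def parse_column(name):
    """Return the parsed fields as a string-valued dict, or None on no match."""
    match = COLUMN_PATTERN.match(name)
    return match.groupdict() if match else None


parsed = parse_column("geom_-20_20_-10_10_0_5-orientation_free-dipole_count_100_success")
assert parsed["orientation"] == "free"
assert parsed["xmin"] == "-20"
# Mirrors the commented-out no-match test: unparseable names yield None.
assert parse_column("There's nothing here") is None
```

All captured values stay strings, matching the deleted test's expected dict, which compares against string values like `"xmin": "-20"` rather than integers.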