Compare commits

...

36 Commits

Author SHA1 Message Date
fbd09410f7 chore(deps): update dependency flake8 to v7
Some checks failed
renovate/artifacts Artifact file update failure
gitea-physics/deepdog/pipeline/pr-master There was a failure building this commit
2024-04-29 01:31:15 +00:00
e5f7085324
chore(release): 0.8.1
All checks were successful
gitea-physics/deepdog/pipeline/head This commit looks good
gitea-physics/deepdog/pipeline/tag This commit looks good
2024-04-28 04:27:47 -05:00
578481324b
chore(release): 0.8.1 2024-04-28 04:27:32 -05:00
bf8ac9850d
release: fixes standard version updater which didn't allow minor version to be multidigit 2024-04-28 04:27:06 -05:00
ab408b6412
chore(release): 0.8.1 2024-04-28 04:19:08 -05:00
4aa0a6f234
chore(release): 0.8.0
Some checks failed
gitea-physics/deepdog/pipeline/head This commit looks good
gitea-physics/deepdog/pipeline/tag There was a failure building this commit
2024-04-28 04:08:02 -05:00
f9646e3386
fix!: fixes the spin qubit frequency phase shift calculation which had an index problem 2024-04-28 04:07:35 -05:00
3b612b960e
chore(release): 0.7.10
All checks were successful
gitea-physics/deepdog/pipeline/head This commit looks good
gitea-physics/deepdog/pipeline/tag This commit looks good
2024-04-27 23:06:24 -05:00
b0ad4bead0
feat: better management of cli wrapper 2024-04-27 23:04:33 -05:00
4b2e573715
feat: adds cli probs 2024-04-27 18:43:25 -05:00
12e6916ab2
doc: documentation for myself because i'll forget otherwise
All checks were successful
gitea-physics/deepdog/pipeline/head This commit looks good
2024-04-27 12:25:39 -05:00
1e76f63725
git: ignores local_scripts directory as place to run stuff while developing
All checks were successful
gitea-physics/deepdog/pipeline/head This commit looks good
2024-04-25 16:18:14 -05:00
7aa5ad2eb9
chore(release): 0.7.9
All checks were successful
gitea-physics/deepdog/pipeline/head This commit looks good
gitea-physics/deepdog/pipeline/tag This commit looks good
2024-04-21 11:23:42 -05:00
fe331bb544
Merge branch 'filter_compose'
All checks were successful
gitea-physics/deepdog/pipeline/head This commit looks good
2024-04-21 11:21:36 -05:00
03ac85a967
chore: performance enhancement for fmt in justfile
All checks were successful
gitea-physics/deepdog/pipeline/head This commit looks good
2024-04-21 11:21:11 -05:00
96589ff659
adds a filter for future dmc use 2024-04-21 10:55:44 -05:00
e5b5809764
build: delete do.sh
All checks were successful
gitea-physics/deepdog/pipeline/head This commit looks good
2024-03-20 11:28:04 -05:00
1407418c60 Merge pull request 'custom_dmc' (#37) from custom_dmc into master
All checks were successful
gitea-physics/deepdog/pipeline/head This commit looks good
Reviewed-on: #37
2024-03-20 16:27:19 +00:00
383b51c35d
Merge branch 'master' into custom_dmc
Some checks are pending
gitea-physics/deepdog/pipeline/head This commit looks good
gitea-physics/deepdog/pipeline/pr-master Build started...
2024-03-20 11:23:39 -05:00
5b9123d128 Merge pull request 'flakeupdate' (#36) from flakeupdate into master
All checks were successful
gitea-physics/deepdog/pipeline/head This commit looks good
Reviewed-on: #36
2024-03-20 16:21:41 +00:00
2b1a1c21e4
Merge branch 'master' into flakeupdate
All checks were successful
gitea-physics/deepdog/pipeline/pr-master This commit looks good
2024-03-20 11:18:16 -05:00
ea080ca1c7
feat: adds ability to write custom dmc filters
All checks were successful
gitea-physics/deepdog/pipeline/head This commit looks good
2024-03-20 10:56:54 -05:00
028fe58561
build: fixes issue breaking build with unused variable
Some checks failed
gitea-physics/deepdog/pipeline/head This commit looks good
gitea-physics/deepdog/pipeline/pr-master There was a failure building this commit
2024-03-19 15:46:00 -05:00
b6a41872d5
just: fmt before test, better comments
Some checks failed
gitea-physics/deepdog/pipeline/head There was a failure building this commit
2024-03-19 15:45:15 -05:00
731dabd74d
nix: adds just as dependency, and fixes tests by installing deepdog app locally 2024-03-19 15:42:43 -05:00
7950f19c2d
build: adds justfile to replace do
Some checks failed
gitea-physics/deepdog/pipeline/head There was a failure building this commit
2024-03-19 15:42:18 -05:00
b27e504bbd
lint: unneeded variable definition
All checks were successful
gitea-physics/deepdog/pipeline/head This commit looks good
2024-03-17 18:40:46 -05:00
33106ba772
nix: updates nix things to work, rewrites flake
Some checks failed
gitea-physics/deepdog/pipeline/head There was a failure building this commit
2024-03-17 15:18:52 -05:00
3ae0783d00
feat: adds tarucha phase calculation, using spin qubit precession rate noise
Some checks failed
gitea-physics/deepdog/pipeline/head There was a failure building this commit
2024-03-17 14:11:22 -05:00
e8201865eb
chore(release): 0.7.8
All checks were successful
gitea-physics/deepdog/pipeline/head This commit looks good
gitea-physics/deepdog/pipeline/tag This commit looks good
2024-02-28 18:41:32 -06:00
5f534a60cc
fix: uses correct measurements
All checks were successful
gitea-physics/deepdog/pipeline/head This commit looks good
2024-02-28 18:41:05 -06:00
ce90f6774b
chore(release): 0.7.7
All checks were successful
gitea-physics/deepdog/pipeline/tag This commit looks good
gitea-physics/deepdog/pipeline/head This commit looks good
2024-02-28 18:34:13 -06:00
48e41cbd2c
fix: fixes phase calculation issue with setting input array
All checks were successful
gitea-physics/deepdog/pipeline/head This commit looks good
2024-02-28 18:33:05 -06:00
603c5607f7
chore(release): 0.7.6
All checks were successful
gitea-physics/deepdog/pipeline/head This commit looks good
gitea-physics/deepdog/pipeline/tag This commit looks good
2024-02-28 16:49:47 -06:00
bb72e903d1
feat: adds ability to use phase measurements only for correlations
All checks were successful
gitea-physics/deepdog/pipeline/head This commit looks good
2024-02-28 16:49:24 -06:00
65e1948835
fix: fixes typeerror vs indexerror on bare float as cost in subset simulation 2024-02-28 16:47:03 -06:00
26 changed files with 1951 additions and 421 deletions

2
.gitignore vendored

@@ -143,3 +143,5 @@ dmypy.json
cython_debug/
*.csv
local_scripts/


@@ -2,6 +2,63 @@
All notable changes to this project will be documented in this file. See [standard-version](https://github.com/conventional-changelog/standard-version) for commit guidelines.
### [0.8.1](https://gitea.deepak.science:2222/physics/deepdog/compare/0.8.0...0.8.1) (2024-04-28)
### [0.8.1](https://gitea.deepak.science:2222/physics/deepdog/compare/0.8.0...0.8.1) (2024-04-28)
## [0.8.0](https://gitea.deepak.science:2222/physics/deepdog/compare/0.7.10...0.8.0) (2024-04-28)
### ⚠ BREAKING CHANGES
* fixes the spin qubit frequency phase shift calculation which had an index problem
### Bug Fixes
* fixes the spin qubit frequency phase shift calculation which had an index problem ([f9646e3](https://gitea.deepak.science:2222/physics/deepdog/commit/f9646e33868e1a0da8ab663230c0c692ac25bb74))
### [0.7.10](https://gitea.deepak.science:2222/physics/deepdog/compare/0.7.9...0.7.10) (2024-04-28)
### Features
* adds cli probs ([4b2e573](https://gitea.deepak.science:2222/physics/deepdog/commit/4b2e57371546731137b011461849bb849d4d4e0f))
* better management of cli wrapper ([b0ad4be](https://gitea.deepak.science:2222/physics/deepdog/commit/b0ad4bead0d4762eb7f848f6e557f6d9b61200b9))
### [0.7.9](https://gitea.deepak.science:2222/physics/deepdog/compare/0.7.8...0.7.9) (2024-04-21)
### Features
* adds ability to write custom dmc filters ([ea080ca](https://gitea.deepak.science:2222/physics/deepdog/commit/ea080ca1c7068042ce1e0a222d317f785a6b05f4))
* adds tarucha phase calculation, using spin qubit precession rate noise ([3ae0783](https://gitea.deepak.science:2222/physics/deepdog/commit/3ae0783d00cbe6a76439c1d671f2cff621d8d0a8))
### [0.7.8](https://gitea.deepak.science:2222/physics/deepdog/compare/0.7.7...0.7.8) (2024-02-29)
### Bug Fixes
* uses correct measurements ([5f534a6](https://gitea.deepak.science:2222/physics/deepdog/commit/5f534a60cc7c4838fcacee11a7e58b97d34e154a))
### [0.7.7](https://gitea.deepak.science:2222/physics/deepdog/compare/0.7.6...0.7.7) (2024-02-29)
### Bug Fixes
* fixes phase calculation issue with setting input array ([48e41cb](https://gitea.deepak.science:2222/physics/deepdog/commit/48e41cbd2c58d4c4d2747822d618d7d55257643d))
### [0.7.6](https://gitea.deepak.science:2222/physics/deepdog/compare/0.7.5...0.7.6) (2024-02-28)
### Features
* adds ability to use phase measurements only for correlations ([bb72e90](https://gitea.deepak.science:2222/physics/deepdog/commit/bb72e903d14704a3783daf2dbc1797b90880aa85))
### Bug Fixes
* fixes typeerror vs indexerror on bare float as cost in subset simulation ([65e1948](https://gitea.deepak.science:2222/physics/deepdog/commit/65e19488359d7f5656660da7da8f32ed474989c3))
### [0.7.5](https://gitea.deepak.science:2222/physics/deepdog/compare/0.7.4...0.7.5) (2023-12-09)


@@ -5,7 +5,7 @@
 [![Jenkins](https://img.shields.io/jenkins/build?jobUrl=https%3A%2F%2Fjenkins.deepak.science%2Fjob%2Fgitea-physics%2Fjob%2Fdeepdog%2Fjob%2Fmaster&style=flat-square)](https://jenkins.deepak.science/job/gitea-physics/job/deepdog/job/master/)
 ![Jenkins tests](https://img.shields.io/jenkins/tests?compact_message&jobUrl=https%3A%2F%2Fjenkins.deepak.science%2Fjob%2Fgitea-physics%2Fjob%2Fdeepdog%2Fjob%2Fmaster%2F&style=flat-square)
 ![Jenkins Coverage](https://img.shields.io/jenkins/coverage/cobertura?jobUrl=https%3A%2F%2Fjenkins.deepak.science%2Fjob%2Fgitea-physics%2Fjob%2Fdeepdog%2Fjob%2Fmaster%2F&style=flat-square)
-![Maintenance](https://img.shields.io/maintenance/yes/2023?style=flat-square)
+![Maintenance](https://img.shields.io/maintenance/yes/2024?style=flat-square)
 The DiPole DiaGnostic tool.
@@ -13,6 +13,13 @@ The DiPole DiaGnostic tool.
 `poetry install` to start locally
-Commit using [Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0/), and when commits are on master, release with `doo release`.
+Commit using [Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0/), and when commits are on master, release with `just release`.
+In general `just --list` has some of the useful stuff for figuring out what development tools there are.
+Poetry as an installer is good, even better is using Nix (maybe with direnv to automatically pick up the `devShell` from `flake.nix`).
+In either case `just` should handle actually calling things in a way that's agnostic to poetry as a runner or through nix.
+### local scripts
+`local_scripts` folder allows for scripts to be run using this code, but that probably isn't the most auditable for actual usage.
+The API is still only something I'm using so there's no guarantees yet that it will be stable; overall semantic versioning should help with API breaks.

0
deepdog/cli/__init__.py Normal file

@@ -0,0 +1,5 @@
from deepdog.cli.probs.main import wrapped_main

__all__ = [
    "wrapped_main",
]

63
deepdog/cli/probs/args.py Normal file

@@ -0,0 +1,63 @@
import argparse
import os


def parse_args() -> argparse.Namespace:
    def dir_path(path):
        if os.path.isdir(path):
            return path
        else:
            raise argparse.ArgumentTypeError(f"readable_dir:{path} is not a valid path")

    parser = argparse.ArgumentParser(
        "probs", description="Calculating probability from finished bayesrun"
    )
    parser.add_argument(
        "--log_file",
        type=str,
        help="A filename for logging to, if not provided will only log to stderr",
        default=None,
    )
    parser.add_argument(
        "--bayesrun-directory",
        "-d",
        type=dir_path,
        help="The directory to search for bayesrun files, defaulting to cwd if not passed",
        default=".",
    )
    parser.add_argument(
        "--indexify-json",
        help="A json file with the indexify config for parsing job indexes. Will skip if not present",
        default="",
    )
    parser.add_argument(
        "--seed-index",
        type=int,
        help='take an integer to append as a "seed" key with range at end of indexify dict. Skip if <= 0',
        default=0,
    )
    parser.add_argument(
        "--seed-fieldname",
        type=str,
        help='if --seed-index is set, the fieldname to append to the indexifier. "seed" by default',
        default="seed",
    )
    parser.add_argument(
        "--coalesced-keys",
        type=str,
        help="A comma separated list of strings over which to coalesce data. By default coalesce over all fields within model names, ignore file level names",
        default="",
    )
    parser.add_argument(
        "--uncoalesced-outfile",
        type=str,
        help="output filename for uncoalesced data. If not provided, will not be written",
        default=None,
    )
    parser.add_argument(
        "--coalesced-outfile",
        type=str,
        help="output filename for coalesced data. If not provided, will not be written",
        default=None,
    )
    return parser.parse_args()
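As a standalone sketch of how the `dir_path` validator above behaves (reconstructed here with a minimal parser, not the installed `deepdog` entry point):

```python
import argparse
import os


# Hypothetical standalone copy of the dir_path validator for illustration.
def dir_path(path):
    if os.path.isdir(path):
        return path
    raise argparse.ArgumentTypeError(f"readable_dir:{path} is not a valid path")


parser = argparse.ArgumentParser("probs")
parser.add_argument("--bayesrun-directory", "-d", type=dir_path, default=".")

# "." is always a readable directory, so this parses cleanly;
# a nonexistent path would make argparse exit with a usage error.
args = parser.parse_args(["-d", "."])
```

argparse calls the `type` callable on the raw string and surfaces `ArgumentTypeError` as a parse failure, which is why the validator raises rather than returning `None`.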

178
deepdog/cli/probs/dicts.py Normal file

@@ -0,0 +1,178 @@
import typing
from deepdog.results import BayesrunOutput
import logging
import csv
import tqdm

_logger = logging.getLogger(__name__)


def build_model_dict(
    bayes_outputs: typing.Sequence[BayesrunOutput],
) -> typing.Dict[
    typing.Tuple, typing.Dict[typing.Tuple, typing.Dict["str", typing.Any]]
]:
    """
    Maybe someday do something smarter with the coalescing and stuff but don't want to so i won't
    """
    # assume that everything is well formatted, the keys are the same across the entire list, and initialise the list of keys.
    # model_dict will contain a model_key: {calculation_dict} where each calculation_dict represents a single calculation for that model,
    # the uncoalesced version, keyed by the specific file keys
    model_dict: typing.Dict[
        typing.Tuple, typing.Dict[typing.Tuple, typing.Dict["str", typing.Any]]
    ] = {}

    _logger.info("building model dict")
    for out in tqdm.tqdm(bayes_outputs, desc="reading outputs", leave=False):
        for model_result in out.results:
            model_key = tuple(v for v in model_result.parsed_model_keys.values())
            if model_key not in model_dict:
                model_dict[model_key] = {}
            calculation_dict = model_dict[model_key]
            calculation_key = tuple(v for v in out.data.values())
            if calculation_key not in calculation_dict:
                calculation_dict[calculation_key] = {
                    "_model_key_dict": model_result.parsed_model_keys,
                    "_calculation_key_dict": out.data,
                    "success": model_result.success,
                    "count": model_result.count,
                }
            else:
                raise ValueError(
                    f"Got {calculation_key} twice for model_key {model_key}"
                )

    return model_dict


def write_uncoalesced_dict(
    uncoalesced_output_filename: typing.Optional[str],
    uncoalesced_model_dict: typing.Dict[
        typing.Tuple, typing.Dict[typing.Tuple, typing.Dict["str", typing.Any]]
    ],
):
    if uncoalesced_output_filename is None or uncoalesced_output_filename == "":
        _logger.warning("Not provided an uncoalesced filename, not going to try")
        return

    first_value = next(iter(next(iter(uncoalesced_model_dict.values())).values()))
    model_field_names = set(first_value["_model_key_dict"].keys())
    calculation_field_names = set(first_value["_calculation_key_dict"].keys())
    if not (set(model_field_names).isdisjoint(calculation_field_names)):
        _logger.info(f"Detected model field names {model_field_names}")
        _logger.info(f"Detected calculation field names {calculation_field_names}")
        raise ValueError(
            f"model field names {model_field_names} and calculation {calculation_field_names} have an overlap, which is possibly a problem"
        )
    collected_fieldnames = list(model_field_names)
    collected_fieldnames.extend(calculation_field_names)
    collected_fieldnames.extend(["success", "count"])
    _logger.info(f"Full uncoalesced fieldnames are {collected_fieldnames}")
    with open(uncoalesced_output_filename, "w", newline="") as uncoalesced_output_file:
        writer = csv.DictWriter(
            uncoalesced_output_file, fieldnames=collected_fieldnames
        )
        writer.writeheader()
        for model_dict in uncoalesced_model_dict.values():
            for calculation in model_dict.values():
                row = calculation["_model_key_dict"].copy()
                row.update(calculation["_calculation_key_dict"].copy())
                row.update(
                    {
                        "success": calculation["success"],
                        "count": calculation["count"],
                    }
                )
                writer.writerow(row)


def coalesced_dict(
    uncoalesced_model_dict: typing.Dict[
        typing.Tuple, typing.Dict[typing.Tuple, typing.Dict["str", typing.Any]]
    ],
    minimum_count: float = 0.1,
):
    """
    pass in uncoalesced dict
    the minimum_count field is what we use to make sure our probs are never zero
    """
    coalesced_dict = {}

    # we are already iterating, so (because performance really doesn't matter here) let's count the keys ourselves
    num_keys = 0

    # first pass: coalesce
    for model_key, model_dict in uncoalesced_model_dict.items():
        num_keys += 1
        for calculation in model_dict.values():
            if model_key not in coalesced_dict:
                coalesced_dict[model_key] = {
                    "_model_key_dict": calculation["_model_key_dict"].copy(),
                    "calculations_coalesced": 0,
                    "count": 0,
                    "success": 0,
                }
            sub_dict = coalesced_dict[model_key]
            sub_dict["calculations_coalesced"] += 1
            sub_dict["count"] += calculation["count"]
            sub_dict["success"] += calculation["success"]

    # second pass: do probability calculation
    prior = 1 / num_keys
    _logger.info(f"Got {num_keys} model keys, so our prior will be {prior}")

    total_weight = 0
    for coalesced_model_dict in coalesced_dict.values():
        model_weight = (
            max(minimum_count, coalesced_model_dict["success"])
            / coalesced_model_dict["count"]
        ) * prior
        total_weight += model_weight

    total_prob = 0
    for coalesced_model_dict in coalesced_dict.values():
        model_weight = (
            max(minimum_count, coalesced_model_dict["success"])
            / coalesced_model_dict["count"]
        )
        prob = model_weight * prior / total_weight
        coalesced_model_dict["prob"] = prob
        total_prob += prob

    _logger.debug(
        f"Got a total probability of {total_prob}, which should be close to 1 up to float/rounding error"
    )
    return coalesced_dict


def write_coalesced_dict(
    coalesced_output_filename: typing.Optional[str],
    coalesced_model_dict: typing.Dict[typing.Tuple, typing.Dict["str", typing.Any]],
):
    if coalesced_output_filename is None or coalesced_output_filename == "":
        _logger.warning("Not provided a coalesced filename, not going to try")
        return

    first_value = next(iter(coalesced_model_dict.values()))
    model_field_names = set(first_value["_model_key_dict"].keys())
    _logger.info(f"Detected model field names {model_field_names}")

    collected_fieldnames = list(model_field_names)
    collected_fieldnames.extend(["calculations_coalesced", "success", "count", "prob"])
    with open(coalesced_output_filename, "w", newline="") as coalesced_output_file:
        writer = csv.DictWriter(coalesced_output_file, fieldnames=collected_fieldnames)
        writer.writeheader()
        for model_dict in coalesced_model_dict.values():
            row = model_dict["_model_key_dict"].copy()
            row.update(
                {
                    "calculations_coalesced": model_dict["calculations_coalesced"],
                    "success": model_dict["success"],
                    "count": model_dict["count"],
                    "prob": model_dict["prob"],
                }
            )
            writer.writerow(row)
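The two-pass weighting in `coalesced_dict` can be illustrated with toy numbers (the counts below are hypothetical, not from a real bayesrun):

```python
# Toy illustration of the coalesced_dict weighting (hypothetical counts).
minimum_count = 0.1  # keeps a model's probability from collapsing to exactly zero
models = {
    "a": {"success": 10, "count": 100},
    "b": {"success": 0, "count": 100},  # zero successes get clamped to minimum_count
}
prior = 1 / len(models)  # uniform prior over model keys

# first pass equivalent: per-model weight = clamped success rate times prior
weights = {
    k: (max(minimum_count, m["success"]) / m["count"]) * prior
    for k, m in models.items()
}
total_weight = sum(weights.values())

# second pass equivalent: normalise weights into probabilities
probs = {k: w / total_weight for k, w in weights.items()}
# probs sum to 1, and model "b" keeps a small nonzero probability
```

The `minimum_count` clamp is what the docstring means by "our probs are never zero": a model with no successful samples still gets a tiny posterior weight rather than being ruled out entirely.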

95
deepdog/cli/probs/main.py Normal file

@@ -0,0 +1,95 @@
import logging
import argparse
import json

import deepdog.cli.probs.args
import deepdog.cli.probs.dicts
import deepdog.results
import deepdog.indexify
import pathlib
import tqdm
import tqdm.contrib.logging

_logger = logging.getLogger(__name__)


def set_up_logging(log_file: str):
    log_pattern = "%(asctime)s | %(levelname)-7s | %(name)s:%(lineno)d | %(message)s"
    if log_file is None:
        handlers = [
            logging.StreamHandler(),
        ]
    else:
        handlers = [logging.StreamHandler(), logging.FileHandler(log_file)]
    logging.basicConfig(
        level=logging.DEBUG,
        format=log_pattern,
        # it's okay to ignore this mypy error because who cares about logger handler types
        handlers=handlers,  # type: ignore
    )
    logging.captureWarnings(True)


def main(args: argparse.Namespace):
    """
    Main function with passed in arguments and no additional logging setup in case we want to extract out later
    """
    with tqdm.contrib.logging.logging_redirect_tqdm():
        _logger.info(f"args: {args}")

        try:
            if args.coalesced_keys:
                raise NotImplementedError(
                    "Currently not supporting coalesced keys, but maybe in future"
                )
        except AttributeError:
            # we don't care if this is missing because we don't actually want it to be there
            pass

        indexifier = None
        if args.indexify_json:
            with open(args.indexify_json, "r") as indexify_json_file:
                indexify_data = json.load(indexify_json_file)
                if args.seed_index > 0:
                    indexify_data[args.seed_fieldname] = list(range(args.seed_index))
                # _logger.debug(f"Indexifier data looks like {indexify_data}")
                indexifier = deepdog.indexify.Indexifier(indexify_data)

        bayes_dir = pathlib.Path(args.bayesrun_directory)
        out_files = [f for f in bayes_dir.iterdir() if f.name.endswith("bayesrun.csv")]
        _logger.info(
            f"Reading {len(out_files)} bayesrun.csv files in directory {args.bayesrun_directory}"
        )
        # _logger.info(out_files)
        parsed_output_files = [
            deepdog.results.read_output_file(f, indexifier)
            for f in tqdm.tqdm(out_files, desc="reading files", leave=False)
        ]

        _logger.info("building uncoalesced dict")
        uncoalesced_dict = deepdog.cli.probs.dicts.build_model_dict(parsed_output_files)

        if "uncoalesced_outfile" in args and args.uncoalesced_outfile:
            deepdog.cli.probs.dicts.write_uncoalesced_dict(
                args.uncoalesced_outfile, uncoalesced_dict
            )
        else:
            _logger.info("Skipping writing uncoalesced")

        _logger.info("building coalesced dict")
        coalesced = deepdog.cli.probs.dicts.coalesced_dict(uncoalesced_dict)

        if "coalesced_outfile" in args and args.coalesced_outfile:
            deepdog.cli.probs.dicts.write_coalesced_dict(
                args.coalesced_outfile, coalesced
            )
        else:
            _logger.info("Skipping writing coalesced")


def wrapped_main():
    args = deepdog.cli.probs.args.parse_args()
    set_up_logging(args.log_file)
    main(args)


@@ -0,0 +1,14 @@
from typing import Sequence
from deepdog.direct_monte_carlo.direct_mc import DirectMonteCarloFilter
import numpy


class ComposedDMCFilter(DirectMonteCarloFilter):
    def __init__(self, filters: Sequence[DirectMonteCarloFilter]):
        self.filters = filters

    def filter_samples(self, samples: numpy.ndarray) -> numpy.ndarray:
        current_sample = samples
        for filter in self.filters:
            current_sample = filter.filter_samples(current_sample)
        return current_sample
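A usage sketch for this kind of filter composition, with stand-in filters (the real ones need pdme measurement objects, so the two filter classes here are hypothetical stand-ins implementing the same `filter_samples` interface):

```python
import numpy


# Hypothetical stand-in filters with the same filter_samples interface.
class PositiveXFilter:
    def filter_samples(self, samples: numpy.ndarray) -> numpy.ndarray:
        return samples[samples[:, 0] > 0]


class SmallNormFilter:
    def filter_samples(self, samples: numpy.ndarray) -> numpy.ndarray:
        return samples[numpy.linalg.norm(samples, axis=1) < 1.0]


# Composition applies each filter in turn, the same loop ComposedDMCFilter runs.
samples = numpy.array([[0.5, 0.1], [-0.5, 0.1], [2.0, 2.0]])
current_sample = samples
for f in [PositiveXFilter(), SmallNormFilter()]:
    current_sample = f.filter_samples(current_sample)
# only the first sample survives both filters
```

Because each filter only ever shrinks the sample array, order does not change the final result, though filtering with the cheapest or most selective criterion first can reduce work.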


@@ -2,7 +2,7 @@ import pdme.model
 import pdme.measurement
 import pdme.measurement.input_types
 import pdme.subspace_simulation
-from typing import Tuple, Sequence
+from typing import Tuple, Dict, NewType, Any
 from dataclasses import dataclass
 import logging
 import numpy
@@ -30,6 +30,20 @@ class DirectMonteCarloConfig:
     tag: str = ""
+
+# Aliasing dict as a generic data container
+DirectMonteCarloData = NewType("DirectMonteCarloData", Dict[str, Any])
+
+class DirectMonteCarloFilter:
+    """
+    Abstract class for filtering out samples matching some criteria. Initialise with data as needed,
+    then filter out samples as needed.
+    """
+
+    def filter_samples(self, samples: numpy.ndarray) -> numpy.ndarray:
+        raise NotImplementedError
+
 class DirectMonteCarloRun:
     """
     A single model Direct Monte Carlo run, currently implemented only using single threading.
@@ -65,25 +79,26 @@ class DirectMonteCarloRun:
     def __init__(
         self,
         model_name_pair: Tuple[str, pdme.model.DipoleModel],
-        measurements: Sequence[pdme.measurement.DotRangeMeasurement],
+        filter: DirectMonteCarloFilter,
         config: DirectMonteCarloConfig,
     ):
         self.model_name, self.model = model_name_pair
-        self.measurements = measurements
-        self.dot_inputs = [(measure.r, measure.f) for measure in self.measurements]
+        # self.measurements = measurements
+        # self.dot_inputs = [(measure.r, measure.f) for measure in self.measurements]
-        self.dot_inputs_array = pdme.measurement.input_types.dot_inputs_to_array(
-            self.dot_inputs
-        )
+        # self.dot_inputs_array = pdme.measurement.input_types.dot_inputs_to_array(
+        #     self.dot_inputs
+        # )
         self.config = config
-        (
-            self.lows,
-            self.highs,
-        ) = pdme.measurement.input_types.dot_range_measurements_low_high_arrays(
-            self.measurements
-        )
+        self.filter = filter
+        # (
+        #     self.lows,
+        #     self.highs,
+        # ) = pdme.measurement.input_types.dot_range_measurements_low_high_arrays(
+        #     self.measurements
+        # )

     def _single_run(self, seed) -> numpy.ndarray:
         rng = numpy.random.default_rng(seed)
@@ -93,18 +108,20 @@ class DirectMonteCarloRun:
         )
         current_sample = sample_dipoles
-        for di, low, high in zip(self.dot_inputs_array, self.lows, self.highs):
-            if len(current_sample) < 1:
-                break
-            vals = pdme.util.fast_v_calc.fast_vs_for_dipoleses(
-                numpy.array([di]), current_sample
-            )
+        return self.filter.filter_samples(current_sample)
+        # for di, low, high in zip(self.dot_inputs_array, self.lows, self.highs):
-            current_sample = current_sample[
-                numpy.all((vals > low) & (vals < high), axis=1)
-            ]
-        return current_sample
+        #     if len(current_sample) < 1:
+        #         break
+        #     vals = pdme.util.fast_v_calc.fast_vs_for_dipoleses(
+        #         numpy.array([di]), current_sample
+        #     )
+        #     current_sample = current_sample[
+        #         numpy.all((vals > low) & (vals < high), axis=1)
+        #     ]
+        # return current_sample

     def execute(self) -> DirectMonteCarloResult:
         step_count = 0


@@ -0,0 +1,143 @@
from numpy import ndarray
from deepdog.direct_monte_carlo.direct_mc import DirectMonteCarloFilter
from typing import Sequence
import pdme.measurement
import pdme.measurement.input_types
import pdme.util.fast_nonlocal_spectrum
import pdme.util.fast_v_calc
import numpy


class SingleDotPotentialFilter(DirectMonteCarloFilter):
    def __init__(self, measurements: Sequence[pdme.measurement.DotRangeMeasurement]):
        self.measurements = measurements
        self.dot_inputs = [(measure.r, measure.f) for measure in self.measurements]
        self.dot_inputs_array = pdme.measurement.input_types.dot_inputs_to_array(
            self.dot_inputs
        )
        (
            self.lows,
            self.highs,
        ) = pdme.measurement.input_types.dot_range_measurements_low_high_arrays(
            self.measurements
        )

    def filter_samples(self, samples: ndarray) -> ndarray:
        current_sample = samples
        for di, low, high in zip(self.dot_inputs_array, self.lows, self.highs):
            if len(current_sample) < 1:
                break
            vals = pdme.util.fast_v_calc.fast_vs_for_dipoleses(
                numpy.array([di]), current_sample
            )
            current_sample = current_sample[
                numpy.all((vals > low) & (vals < high), axis=1)
            ]
        return current_sample


class DoubleDotSpinQubitFrequencyFilter(DirectMonteCarloFilter):
    def __init__(
        self,
        pair_phase_measurements: Sequence[pdme.measurement.DotPairRangeMeasurement],
    ):
        self.pair_phase_measurements = pair_phase_measurements
        self.dot_pair_inputs = [
            (measure.r1, measure.r2, measure.f)
            for measure in self.pair_phase_measurements
        ]
        self.dot_pair_inputs_array = (
            pdme.measurement.input_types.dot_pair_inputs_to_array(self.dot_pair_inputs)
        )
        (
            self.pair_phase_lows,
            self.pair_phase_highs,
        ) = pdme.measurement.input_types.dot_range_measurements_low_high_arrays(
            self.pair_phase_measurements
        )

    def fast_s_spin_qubit_tarucha_nonlocal_dipoleses(
        self, dot_pair_inputs: numpy.ndarray, dipoleses: numpy.ndarray
    ) -> numpy.ndarray:
        """
        No error correction here baby.
        """
        ps = dipoleses[:, :, 0:3]
        ss = dipoleses[:, :, 3:6]
        ws = dipoleses[:, :, 6]

        r1s = dot_pair_inputs[:, 0, 0:3]
        r2s = dot_pair_inputs[:, 1, 0:3]
        f1s = dot_pair_inputs[:, 0, 3]
        # Don't actually need this
        # f2s = dot_pair_inputs[:, 1, 3]

        diffses1 = r1s[:, None] - ss[:, None, :]
        diffses2 = r2s[:, None] - ss[:, None, :]

        norms1 = numpy.linalg.norm(diffses1, axis=3)
        norms2 = numpy.linalg.norm(diffses2, axis=3)

        alphses1 = (
            (
                3
                * numpy.transpose(
                    numpy.transpose(
                        numpy.einsum("abcd,acd->abc", diffses1, ps) / (norms1**2)
                    )
                    * numpy.transpose(diffses1)
                )[:, :, :, 0]
            )
            - ps[:, :, 0, numpy.newaxis]
        ) / (norms1**3)
        alphses2 = (
            (
                3
                * numpy.transpose(
                    numpy.transpose(
                        numpy.einsum("abcd,acd->abc", diffses2, ps) / (norms2**2)
                    )
                    * numpy.transpose(diffses2)
                )[:, :, :, 0]
            )
            - ps[:, :, 0, numpy.newaxis]
        ) / (norms2**3)

        bses = (1 / numpy.pi) * (
            ws[:, None, :] / (f1s[:, None] ** 2 + ws[:, None, :] ** 2)
        )

        return numpy.einsum("...j->...", alphses1 * alphses2 * bses)

    def filter_samples(self, samples: ndarray) -> ndarray:
        current_sample = samples
        for pi, plow, phigh in zip(
            self.dot_pair_inputs_array, self.pair_phase_lows, self.pair_phase_highs
        ):
            if len(current_sample) < 1:
                break
            ###
            # This should be abstracted out, but we're going to dump it here for time pressure's sake
            ###
            # vals = pdme.util.fast_nonlocal_spectrum.signarg(
            #     pdme.util.fast_nonlocal_spectrum.fast_s_nonlocal_dipoleses(
            #         numpy.array([pi]), current_sample
            #     )
            # )
            vals = pdme.util.fast_nonlocal_spectrum.signarg(
                self.fast_s_spin_qubit_tarucha_nonlocal_dipoleses(
                    numpy.array([pi]), current_sample
                )
            )
            current_sample = current_sample[
                numpy.all(
                    ((vals > plow) & (vals < phigh)) | ((vals < plow) & (vals > phigh)),
                    axis=1,
                )
            ]
        return current_sample
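The acceptance condition in `filter_samples` keeps values strictly between the two bounds in either order, which matters because the sign conventions of the phase calculation can flip which bound is larger. A small numpy check of the same mask:

```python
import numpy

# The same window mask used in filter_samples: accept vals between plow and
# phigh regardless of which bound is numerically larger.
vals = numpy.array([0.5, -0.5, 1.5])
plow, phigh = 1.0, -1.0  # deliberately reversed bounds for illustration
mask = ((vals > plow) & (vals < phigh)) | ((vals < plow) & (vals > phigh))
# mask is [True, True, False]: only 1.5 falls outside the (-1, 1) window
```

A plain `(vals > plow) & (vals < phigh)` would reject everything once the bounds are reversed, so the symmetric second clause is what keeps the filter sign-agnostic.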


@@ -0,0 +1,58 @@
"""
Probably should just include a way to handle the indexify function I reuse so much.
All about breaking an integer into a tuple of values from lists, which is useful because of how we do CHTC runs.
"""
import itertools
import typing
import logging
import math

_logger = logging.getLogger(__name__)


# from https://stackoverflow.com/questions/5228158/cartesian-product-of-a-dictionary-of-lists
def _dict_product(dicts):
    """
    >>> list(dict_product(dict(number=[1,2], character='ab')))
    [{'character': 'a', 'number': 1},
     {'character': 'a', 'number': 2},
     {'character': 'b', 'number': 1},
     {'character': 'b', 'number': 2}]
    """
    return list(dict(zip(dicts.keys(), x)) for x in itertools.product(*dicts.values()))


class Indexifier:
    """
    The order of keys is very important, but collections.OrderedDict is no longer needed in python 3.7.
    I think it's okay to rely on that.
    """

    def __init__(self, list_dict: typing.Dict[str, typing.Sequence]):
        self.dict = list_dict

    def indexify(self, n: int) -> typing.Dict[str, typing.Any]:
        product_dict = _dict_product(self.dict)
        return product_dict[n]

    def _indexify_indices(self, n: int) -> typing.Sequence[int]:
        """
        legacy indexify from old scripts, copypasted.
        could be used like
        >>> ret = {}
        >>> for k, i in zip(self.dict.keys(), self._indexify_indices):
        >>>     ret[k] = self.dict[k][i]
        >>> return ret
        """
        weights = [len(v) for v in self.dict.values()]
        N = math.prod(weights)
        curr_n = n
        curr_N = N
        out = []
        for w in weights[:-1]:
            # print(f"current: {curr_N}, {curr_n}, {curr_n // w}")
            curr_N = curr_N // w  # should be int division anyway
            out.append(curr_n // curr_N)
            curr_n = curr_n % curr_N
        return out
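A minimal sketch of what this indexify machinery computes, using hypothetical value lists (the field names here are made up for illustration):

```python
import itertools

# The cartesian product of named value lists, keyed by a flat job index,
# mirroring _dict_product and Indexifier.indexify above.
list_dict = {"temperature": [1, 5], "seed": [0, 1, 2]}
product = [
    dict(zip(list_dict.keys(), combo))
    for combo in itertools.product(*list_dict.values())
]
# product[4] is {"temperature": 5, "seed": 1}: six combinations in total
```

This is why key order matters: `itertools.product` varies the last list fastest, so reordering the keys reorders which parameter combination a given flat index maps to.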


@ -66,6 +66,139 @@ def get_a_result_fast_filter_pairs(input) -> int:
return len(current_sample)
def get_a_result_fast_filter_potential_pair_phase_only(input) -> int:
(
model,
pair_inputs,
pair_phase_lows,
pair_phase_highs,
monte_carlo_count,
seed,
) = input
rng = numpy.random.default_rng(seed)
# TODO: A long term refactor is to pull the frequency stuff out from here. The None stands for max_frequency, which is unneeded in the actually useful models.
sample_dipoles = model.get_monte_carlo_dipole_inputs(
monte_carlo_count, None, rng_to_use=rng
)
current_sample = sample_dipoles
for pi, plow, phigh in zip(pair_inputs, pair_phase_lows, pair_phase_highs):
if len(current_sample) < 1:
break
vals = pdme.util.fast_nonlocal_spectrum.signarg(
pdme.util.fast_nonlocal_spectrum.fast_s_nonlocal_dipoleses(
numpy.array([pi]), current_sample
)
)
current_sample = current_sample[
numpy.all(
((vals > plow) & (vals < phigh)) | ((vals < plow) & (vals > phigh)),
axis=1,
)
]
return len(current_sample)
def get_a_result_fast_filter_tarucha_spin_qubit_pair_phase_only(input) -> int:
	(
		model,
		pair_inputs,
		pair_phase_lows,
		pair_phase_highs,
		monte_carlo_count,
		seed,
	) = input

	def fast_s_spin_qubit_tarucha_nonlocal_dipoleses(
		dot_pair_inputs: numpy.ndarray, dipoleses: numpy.ndarray
	) -> numpy.ndarray:
		"""
		No error correction here baby.
		"""
		ps = dipoleses[:, :, 0:3]
		ss = dipoleses[:, :, 3:6]
		ws = dipoleses[:, :, 6]

		r1s = dot_pair_inputs[:, 0, 0:3]
		r2s = dot_pair_inputs[:, 1, 0:3]
		f1s = dot_pair_inputs[:, 0, 3]
		# don't actually need f2s, because we're assuming they're the same frequencies across the pair
		# f2s = dot_pair_inputs[:, 1, 3]

		diffses1 = r1s[:, None] - ss[:, None, :]
		diffses2 = r2s[:, None] - ss[:, None, :]

		norms1 = numpy.linalg.norm(diffses1, axis=3)
		norms2 = numpy.linalg.norm(diffses2, axis=3)

		alphses1 = (
			(
				3
				* numpy.transpose(
					numpy.transpose(
						numpy.einsum("abcd,acd->abc", diffses1, ps) / (norms1**2)
					)
					* numpy.transpose(diffses1)
				)[:, :, :, 0]
			)
			- ps[:, numpy.newaxis, :, 0]
		) / (norms1**3)
		alphses2 = (
			(
				3
				* numpy.transpose(
					numpy.transpose(
						numpy.einsum("abcd,acd->abc", diffses2, ps) / (norms2**2)
					)
					* numpy.transpose(diffses2)
				)[:, :, :, 0]
			)
			- ps[:, numpy.newaxis, :, 0]
		) / (norms2**3)

		bses = (1 / numpy.pi) * (
			ws[:, None, :] / (f1s[:, None] ** 2 + ws[:, None, :] ** 2)
		)

		return numpy.einsum("...j->...", alphses1 * alphses2 * bses)

	rng = numpy.random.default_rng(seed)
	# TODO: A long term refactor is to pull the frequency stuff out from here. The None stands for max_frequency, which is unneeded in the actually useful models.
	sample_dipoles = model.get_monte_carlo_dipole_inputs(
		monte_carlo_count, None, rng_to_use=rng
	)
	current_sample = sample_dipoles
	for pi, plow, phigh in zip(pair_inputs, pair_phase_lows, pair_phase_highs):
		if len(current_sample) < 1:
			break

		###
		# This should be abstracted out, but we're going to dump it here for time pressure's sake
		###
		# vals = pdme.util.fast_nonlocal_spectrum.signarg(
		# 	pdme.util.fast_nonlocal_spectrum.fast_s_nonlocal_dipoleses(
		# 		numpy.array([pi]), current_sample
		# 	)
		# )
		vals = pdme.util.fast_nonlocal_spectrum.signarg(
			fast_s_spin_qubit_tarucha_nonlocal_dipoleses(
				numpy.array([pi]), current_sample
			)
		)
		current_sample = current_sample[
			numpy.all(
				((vals > plow) & (vals < phigh)) | ((vals < plow) & (vals > phigh)),
				axis=1,
			)
		]
	return len(current_sample)
def get_a_result_fast_filter(input) -> int:
	model, dot_inputs, lows, highs, monte_carlo_count, seed = input
@@ -108,6 +241,11 @@ class RealSpectrumRun:
	run_count: int
		The number of runs to do.

	If pair_measurements is not None, uses the pair measurement method (and single measurements too).
	If pair_phase_measurements is not None, ignores measurements and uses phase measurements _only_.
	This is lazy design on my part.
	"""

	def __init__(
@@ -125,6 +263,9 @@ class RealSpectrumRun:
		pair_measurements: Optional[
			Sequence[pdme.measurement.DotPairRangeMeasurement]
		] = None,
		pair_phase_measurements: Optional[
			Sequence[pdme.measurement.DotPairRangeMeasurement]
		] = None,
	) -> None:
		self.measurements = measurements
		self.dot_inputs = [(measure.r, measure.f) for measure in self.measurements]
@@ -136,6 +277,8 @@ class RealSpectrumRun:
		if pair_measurements is not None:
			self.pair_measurements = pair_measurements
			self.use_pair_measurements = True
			self.use_pair_phase_measurements = False
			self.dot_pair_inputs = [
				(measure.r1, measure.r2, measure.f)
				for measure in self.pair_measurements
@@ -145,8 +288,22 @@
					self.dot_pair_inputs
				)
			)
		elif pair_phase_measurements is not None:
			self.use_pair_measurements = False
			self.use_pair_phase_measurements = True
			self.pair_phase_measurements = pair_phase_measurements
			self.dot_pair_inputs = [
				(measure.r1, measure.r2, measure.f)
				for measure in self.pair_phase_measurements
			]
			self.dot_pair_inputs_array = (
				pdme.measurement.input_types.dot_pair_inputs_to_array(
					self.dot_pair_inputs
				)
			)
		else:
			self.use_pair_measurements = False
			self.use_pair_phase_measurements = False

		self.models = [model for (_, model) in models_with_names]
		self.model_names = [name for (name, _) in models_with_names]
@@ -198,6 +355,16 @@ class RealSpectrumRun:
				self.pair_measurements
			)

		pair_phase_lows = None
		pair_phase_highs = None
		if self.use_pair_phase_measurements:
			(
				pair_phase_lows,
				pair_phase_highs,
			) = pdme.measurement.input_types.dot_range_measurements_low_high_arrays(
				self.pair_phase_measurements
			)

		# define a new seed sequence for each run
		seed_sequence = numpy.random.SeedSequence(self.initial_seed)
@@ -229,6 +396,7 @@ class RealSpectrumRun:
			seeds = seed_sequence.spawn(self.monte_carlo_cycles)

			if self.use_pair_measurements:
				_logger.debug("using pair measurements")
				current_success = sum(
					pool.imap_unordered(
						get_a_result_fast_filter_pairs,
@@ -249,6 +417,26 @@
						self.chunksize,
					)
				)
			elif self.use_pair_phase_measurements:
				_logger.debug("using pair phase measurements")
				_logger.debug("specifically using tarucha")
				current_success = sum(
					pool.imap_unordered(
						get_a_result_fast_filter_tarucha_spin_qubit_pair_phase_only,
						[
							(
								model,
								self.dot_pair_inputs_array,
								pair_phase_lows,
								pair_phase_highs,
								self.monte_carlo_count,
								seed,
							)
							for seed in seeds
						],
						self.chunksize,
					)
				)
			else:
				current_success = sum(

deepdog/results/__init__.py Normal file

@@ -0,0 +1,169 @@
import dataclasses
import re
import typing
import logging
import deepdog.indexify
import pathlib
import csv
_logger = logging.getLogger(__name__)
FILENAME_REGEX = r"(?P<timestamp>\d{8}-\d{6})-(?P<filename_slug>.*)\.realdata\.fast_filter\.bayesrun\.csv"
MODEL_REGEXES = [
r"geom_(?P<xmin>-?\d+)_(?P<xmax>-?\d+)_(?P<ymin>-?\d+)_(?P<ymax>-?\d+)_(?P<zmin>-?\d+)_(?P<zmax>-?\d+)-orientation_(?P<orientation>free|fixedxy|fixedz)-dipole_count_(?P<avg_filled>\d+)_(?P<field_name>\w*)"
]
FILE_SLUG_REGEXES = [
r"mock_tarucha-(?P<job_index>\d+)",
]
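As a quick check of the patterns above (the sample filename below is invented for illustration), `FILENAME_REGEX` splits a bayesrun output name into a timestamp and a slug:

```python
import re

FILENAME_REGEX = r"(?P<timestamp>\d{8}-\d{6})-(?P<filename_slug>.*)\.realdata\.fast_filter\.bayesrun\.csv"

# hypothetical output file name, following the mock_tarucha slug convention
name = "20240427-123456-mock_tarucha-17.realdata.fast_filter.bayesrun.csv"
m = re.match(FILENAME_REGEX, name)
assert m is not None
assert m.group("timestamp") == "20240427-123456"
assert m.group("filename_slug") == "mock_tarucha-17"
```

The slug is then matched a second time against `FILE_SLUG_REGEXES` to pull out fields like `job_index`.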
@dataclasses.dataclass
class BayesrunOutputFilename:
	timestamp: str
	filename_slug: str
	path: pathlib.Path


@dataclasses.dataclass
class BayesrunColumnParsed:
	"""
	class for parsing a bayesrun while pulling certain special fields out
	"""

	def __init__(self, groupdict: typing.Dict[str, str]):
		self.column_field = groupdict["field_name"]
		self.model_field_dict = {
			k: v for k, v in groupdict.items() if k != "field_name"
		}

	def __str__(self):
		return f"BayesrunColumnParsed[{self.column_field}: {self.model_field_dict}]"


@dataclasses.dataclass
class BayesrunModelResult:
	parsed_model_keys: typing.Dict[str, str]
	success: int
	count: int


@dataclasses.dataclass
class BayesrunOutput:
	filename: BayesrunOutputFilename
	data: typing.Dict["str", typing.Any]
	results: typing.Sequence[BayesrunModelResult]
def _batch_iterable_into_chunks(iterable, n=1):
	"""
	utility for batching bayesrun files where columns appear in threes
	"""
	for ndx in range(0, len(iterable), n):
		yield iterable[ndx : min(ndx + n, len(iterable))]
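For example, batching a six-column header into threes (column names here are made up; real files use the long model-key column names matched above):

```python
def batch_into_chunks(iterable, n=1):
	# same logic as the utility above, repeated so this example is self-contained
	for ndx in range(0, len(iterable), n):
		yield iterable[ndx : min(ndx + n, len(iterable))]

cols = ["m1_success", "m1_count", "m1_prob", "m2_success", "m2_count", "m2_prob"]
batches = list(batch_into_chunks(cols, 3))
# → [["m1_success", "m1_count", "m1_prob"], ["m2_success", "m2_count", "m2_prob"]]
```

Each chunk of three then yields one model result: a success column, a count column, and a third column the row parser ignores.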
def _parse_bayesrun_column(
	column: str,
) -> typing.Optional[BayesrunColumnParsed]:
	"""
	Tries one by one all of a predefined list of regexes that I might have used in the past.
	Returns the groupdict for the first match, or None if no match found.
	"""
	for pattern in MODEL_REGEXES:
		match = re.match(pattern, column)
		if match:
			return BayesrunColumnParsed(match.groupdict())
	return None
def _parse_bayesrun_row(
	row: typing.Dict[str, str],
) -> typing.Sequence[BayesrunModelResult]:
	results = []
	batched_keys = _batch_iterable_into_chunks(list(row.keys()), 3)
	for model_keys in batched_keys:
		parsed = [_parse_bayesrun_column(column) for column in model_keys]
		values = [row[column] for column in model_keys]
		if parsed[0] is None:
			raise ValueError(f"no viable success row found for keys {model_keys}")
		if parsed[1] is None:
			raise ValueError(f"no viable count row found for keys {model_keys}")
		if parsed[0].column_field != "success":
			raise ValueError(f"The column {model_keys[0]} is not a success field")
		if parsed[1].column_field != "count":
			raise ValueError(f"The column {model_keys[1]} is not a count field")
		parsed_keys = parsed[0].model_field_dict
		success = int(values[0])
		count = int(values[1])
		results.append(
			BayesrunModelResult(
				parsed_model_keys=parsed_keys,
				success=success,
				count=count,
			)
		)
	return results
def _parse_output_filename(file: pathlib.Path) -> BayesrunOutputFilename:
	filename = file.name
	match = re.match(FILENAME_REGEX, filename)
	if not match:
		raise ValueError(f"{filename} was not a valid bayesrun output")
	groups = match.groupdict()
	return BayesrunOutputFilename(
		timestamp=groups["timestamp"], filename_slug=groups["filename_slug"], path=file
	)
def _parse_file_slug(slug: str) -> typing.Optional[typing.Dict[str, str]]:
	for pattern in FILE_SLUG_REGEXES:
		match = re.match(pattern, slug)
		if match:
			return match.groupdict()
	return None
def read_output_file(
	file: pathlib.Path, indexifier: typing.Optional[deepdog.indexify.Indexifier]
) -> BayesrunOutput:
	parsed_filename = tag = _parse_output_filename(file)
	out = BayesrunOutput(filename=parsed_filename, data={}, results=[])
	out.data.update(dataclasses.asdict(tag))
	parsed_tag = _parse_file_slug(parsed_filename.filename_slug)
	if parsed_tag is None:
		_logger.warning(
			f"Could not parse {tag} against any matching regexes. Going to skip tag parsing"
		)
	else:
		out.data.update(parsed_tag)
		if indexifier is not None:
			try:
				job_index = parsed_tag["job_index"]
				indexified = indexifier.indexify(int(job_index))
				out.data.update(indexified)
			except KeyError:
				# This isn't really that important of an error, apart from the warning
				_logger.warning(
					f"Parsed tag to {parsed_tag}, and attempted to indexify but no job_index key was found. skipping and moving on"
				)

	with file.open() as input_file:
		reader = csv.DictReader(input_file)
		rows = [r for r in reader]
	if len(rows) == 1:
		row = rows[0]
	else:
		raise ValueError(f"Confused about having multiple rows in {file.name}")
	results = _parse_bayesrun_row(row)
	out.results = results
	return out


@@ -215,7 +215,7 @@ class SubsetSimulation:
for cost, chained in chain:
try:
filtered_cost = cost[0]
except IndexError:
except (IndexError, TypeError):
filtered_cost = cost
all_chains.append((filtered_cost, chained))
_logger.debug("finished mcmc")

do.sh

@@ -1,38 +0,0 @@
#!/usr/bin/env bash
# Do - The Simplest Build Tool on Earth.
# Documentation and examples see https://github.com/8gears/do
set -Eeuo pipefail # -e "Automatic exit from bash shell script on error" -u "Treat unset variables and parameters as errors"

build() {
	echo "I am ${FUNCNAME[0]}ing"
	poetry build
}

test() {
	echo "I am ${FUNCNAME[0]}ing"
	poetry run flake8 deepdog tests
	poetry run mypy deepdog
	poetry run pytest
}

fmt() {
	poetry run black .
	find . -not \( -path "./.*" -type d -prune \) -type f -name "*.py" -exec sed -i -e 's/    /\t/g' {} \;
}

release() {
	./scripts/release.sh
}

htmlcov() {
	poetry run pytest --cov-report=html
}

all() {
	build && test
}

"$@" # <- execute the task
[ "$#" -gt 0 ] || printf "Usage:\n\t./do.sh %s\n" "($(compgen -A function | grep '^[^_]' | paste -sd '|' -))"

flake.lock generated

@@ -1,28 +1,33 @@
{
"nodes": {
"flake-utils": {
"inputs": {
"systems": "systems"
},
"locked": {
"lastModified": 1648297722,
"narHash": "sha256-W+qlPsiZd8F3XkzXOzAoR+mpFqzm3ekQkJNa+PIh1BQ=",
"lastModified": 1710146030,
"narHash": "sha256-SZ5L6eA7HJ/nmkzGG7/ISclqe6oZdOZTNoesiInkXPQ=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "0f8662f1319ad6abf89b3380dd2722369fc51ade",
"rev": "b1d9ab70662946ef0850d488da1c9019f3a9752a",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "flake-utils",
"rev": "0f8662f1319ad6abf89b3380dd2722369fc51ade",
"type": "github"
}
},
"flake-utils_2": {
"inputs": {
"systems": "systems_2"
},
"locked": {
"lastModified": 1653893745,
"narHash": "sha256-0jntwV3Z8//YwuOjzhV2sgJJPt+HY6KhU7VZUL0fKZQ=",
"lastModified": 1705309234,
"narHash": "sha256-uNRRNRKmJyCRC/8y1RqBkqWBLM034y4qN7EprSdmgyA=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "1ed9fb1935d260de5fe1c2f7ee0ebaae17ed2fa1",
"rev": "1ef2e671c3b0c19053962c07dbda38332dcebf26",
"type": "github"
},
"original": {
@@ -31,29 +36,34 @@
"type": "github"
}
},
"nix-github-actions": {
"inputs": {
"nixpkgs": [
"poetry2nixSrc",
"nixpkgs"
]
},
"locked": {
"lastModified": 1703863825,
"narHash": "sha256-rXwqjtwiGKJheXB43ybM8NwWB8rO2dSRrEqes0S7F5Y=",
"owner": "nix-community",
"repo": "nix-github-actions",
"rev": "5163432afc817cf8bd1f031418d1869e4c9d5547",
"type": "github"
},
"original": {
"owner": "nix-community",
"repo": "nix-github-actions",
"type": "github"
}
},
"nixpkgs": {
"locked": {
"lastModified": 1655087213,
"narHash": "sha256-4R5oQ+OwGAAcXWYrxC4gFMTUSstGxaN8kN7e8hkum/8=",
"lastModified": 1710703777,
"narHash": "sha256-M4CNAgjrtvrxIWIAc98RTYcVFoAgwUhrYekeiMScj18=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "37b6b161e536fddca54424cf80662bce735bdd1e",
"type": "github"
},
"original": {
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "37b6b161e536fddca54424cf80662bce735bdd1e",
"type": "github"
}
},
"nixpkgs_2": {
"locked": {
"lastModified": 1655046959,
"narHash": "sha256-gxqHZKq1ReLDe6ZMJSbmSZlLY95DsVq5o6jQihhzvmw=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "07bf3d25ce1da3bee6703657e6a787a4c6cdcea9",
"rev": "fc7885fbcea4b782142e06ce2d4d08cf92862004",
"type": "github"
},
"original": {
@@ -62,23 +72,27 @@
"type": "github"
}
},
"poetry2nix": {
"poetry2nixSrc": {
"inputs": {
"flake-utils": "flake-utils_2",
"nixpkgs": "nixpkgs_2"
"nix-github-actions": "nix-github-actions",
"nixpkgs": [
"nixpkgs"
],
"systems": "systems_3",
"treefmt-nix": "treefmt-nix"
},
"locked": {
"lastModified": 1654921554,
"narHash": "sha256-hkfMdQAHSwLWlg0sBVvgrQdIiBP45U1/ktmFpY4g2Mo=",
"lastModified": 1708589824,
"narHash": "sha256-2GOiFTkvs5MtVF65sC78KNVxQSmsxtk0WmV1wJ9V2ck=",
"owner": "nix-community",
"repo": "poetry2nix",
"rev": "7b71679fa7df00e1678fc3f1d1d4f5f372341b63",
"rev": "3c92540611f42d3fb2d0d084a6c694cd6544b609",
"type": "github"
},
"original": {
"owner": "nix-community",
"repo": "poetry2nix",
"rev": "7b71679fa7df00e1678fc3f1d1d4f5f372341b63",
"type": "github"
}
},
@@ -86,7 +100,72 @@
"inputs": {
"flake-utils": "flake-utils",
"nixpkgs": "nixpkgs",
"poetry2nix": "poetry2nix"
"poetry2nixSrc": "poetry2nixSrc"
}
},
"systems": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
},
"systems_2": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
},
"systems_3": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"id": "systems",
"type": "indirect"
}
},
"treefmt-nix": {
"inputs": {
"nixpkgs": [
"poetry2nixSrc",
"nixpkgs"
]
},
"locked": {
"lastModified": 1708335038,
"narHash": "sha256-ETLZNFBVCabo7lJrpjD6cAbnE11eDOjaQnznmg/6hAE=",
"owner": "numtide",
"repo": "treefmt-nix",
"rev": "e504621290a1fd896631ddbc5e9c16f4366c9f65",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "treefmt-nix",
"type": "github"
}
}
},


@@ -1,63 +1,46 @@
{
description = "Application packaged using poetry2nix";
inputs.flake-utils.url = "github:numtide/flake-utils?rev=0f8662f1319ad6abf89b3380dd2722369fc51ade";
inputs.nixpkgs.url = "github:NixOS/nixpkgs?rev=37b6b161e536fddca54424cf80662bce735bdd1e";
inputs.poetry2nix.url = "github:nix-community/poetry2nix?rev=7b71679fa7df00e1678fc3f1d1d4f5f372341b63";
inputs.flake-utils.url = "github:numtide/flake-utils";
inputs.nixpkgs.url = "github:NixOS/nixpkgs";
inputs.poetry2nixSrc = {
url = "github:nix-community/poetry2nix";
inputs.nixpkgs.follows = "nixpkgs";
};
outputs = { self, nixpkgs, flake-utils, poetry2nix }:
{
# Nixpkgs overlay providing the application
overlay = nixpkgs.lib.composeManyExtensions [
poetry2nix.overlay
(final: prev: {
# The application
deepdog = prev.poetry2nix.mkPoetryApplication {
overrides = final.poetry2nix.overrides.withDefaults (self: super: {
# …
# workaround https://github.com/nix-community/poetry2nix/issues/568
pdme = super.pdme.overridePythonAttrs (old: {
buildInputs = old.buildInputs or [ ] ++ [ final.python39.pkgs.poetry-core ];
});
});
projectDir = ./.;
};
deepdogEnv = prev.poetry2nix.mkPoetryEnv {
overrides = final.poetry2nix.overrides.withDefaults (self: super: {
# …
# workaround https://github.com/nix-community/poetry2nix/issues/568
pdme = super.pdme.overridePythonAttrs (old: {
buildInputs = old.buildInputs or [ ] ++ [ final.python39.pkgs.poetry-core ];
});
});
projectDir = ./.;
};
})
];
} // (flake-utils.lib.eachDefaultSystem (system:
outputs = { self, nixpkgs, flake-utils, poetry2nixSrc }:
flake-utils.lib.eachDefaultSystem (system:
let
pkgs = import nixpkgs {
inherit system;
overlays = [ self.overlay ];
};
in
{
apps = {
deepdog = pkgs.deepdog;
};
defaultApp = pkgs.deepdog;
devShell = pkgs.mkShell {
buildInputs = [
pkgs.poetry
pkgs.deepdogEnv
pkgs.deepdog
];
shellHook = ''
export DO_NIX_CUSTOM=1
'';
packages = [ pkgs.nodejs-16_x ];
};
}));
pkgs = nixpkgs.legacyPackages.${system};
poetry2nix = poetry2nixSrc.lib.mkPoetry2Nix { inherit pkgs; };
in {
packages = {
deepdogApp = poetry2nix.mkPoetryApplication {
projectDir = self;
python = pkgs.python39;
preferWheels = true;
};
deepdogEnv = poetry2nix.mkPoetryEnv {
projectDir = self;
python = pkgs.python39;
preferWheels = true;
overrides = poetry2nix.overrides.withDefaults (self: super: {
});
};
default = self.packages.${system}.deepdogEnv;
};
devShells.default = pkgs.mkShell {
inputsFrom = [ self.packages.${system}.deepdogEnv ];
buildInputs = [
pkgs.poetry
self.packages.${system}.deepdogEnv
self.packages.${system}.deepdogApp
pkgs.just
];
shellHook = ''
export DO_NIX_CUSTOM=1
'';
};
}
);
}

justfile Normal file

@@ -0,0 +1,54 @@
# execute default build
default: build

# builds the python module using poetry
build:
	echo "building..."
	poetry build

# print a message displaying whether nix is being used
checknix:
	#!/usr/bin/env bash
	set -euxo pipefail
	if [[ "${DO_NIX_CUSTOM:=0}" -eq 1 ]]; then
		echo "In an interactive nix env."
	else
		echo "Using poetry as runner, no nix detected."
	fi

# run all tests
test: fmt
	#!/usr/bin/env bash
	set -euxo pipefail
	if [[ "${DO_NIX_CUSTOM:=0}" -eq 1 ]]; then
		echo "testing, using nix..."
		flake8 deepdog tests
		mypy deepdog
		pytest
	else
		echo "testing..."
		poetry run flake8 deepdog tests
		poetry run mypy deepdog
		poetry run pytest
	fi

# format code
fmt:
	#!/usr/bin/env bash
	set -euxo pipefail
	if [[ "${DO_NIX_CUSTOM:=0}" -eq 1 ]]; then
		black .
	else
		poetry run black .
	fi
	find deepdog -type f -name "*.py" -exec sed -i -e 's/    /\t/g' {} \;
	find tests -type f -name "*.py" -exec sed -i -e 's/    /\t/g' {} \;

# release the app, checking that our working tree is clean and ready for release
release:
	./scripts/release.sh

htmlcov:
	poetry run pytest --cov-report=html

poetry.lock generated
File diff suppressed because it is too large

@@ -1,24 +1,28 @@
[tool.poetry]
name = "deepdog"
version = "0.7.5"
version = "0.8.1"
description = ""
authors = ["Deepak Mallubhotla <dmallubhotla+github@gmail.com>"]
[tool.poetry.dependencies]
python = ">=3.8.1,<3.10"
pdme = "^0.9.1"
pdme = "^0.9.3"
numpy = "1.22.3"
scipy = "1.10"
tqdm = "^4.66.2"
[tool.poetry.dev-dependencies]
pytest = ">=6"
flake8 = "^4.0.1"
flake8 = "^7.0.0"
pytest-cov = "^4.1.0"
mypy = "^0.971"
python-semantic-release = "^7.24.0"
black = "^22.3.0"
syrupy = "^4.0.8"
[tool.poetry.scripts]
probs = "deepdog.cli.probs:wrapped_main"
[build-system]
requires = ["poetry-core>=1.0.0"]
build-backend = "poetry.core.masonry.api"
@@ -38,6 +42,13 @@ module = [
]
ignore_missing_imports = true
[[tool.mypy.overrides]]
module = [
"tqdm",
"tqdm.*"
]
ignore_missing_imports = true
[tool.semantic_release]
version_toml = "pyproject.toml:tool.poetry.version"
tag_format = "{version}"


@@ -1,4 +1,4 @@
const pattern = /(\[tool\.poetry\]\nname = "deepdog"\nversion = ")(?<vers>\d+\.\d+\.\d)(")/mg;
const pattern = /(\[tool\.poetry\]\nname = "deepdog"\nversion = ")(?<vers>\d+\.\d+\.\d+)(")/mg;
module.exports.readVersion = function (contents) {
const result = pattern.exec(contents);


@@ -0,0 +1,12 @@
import deepdog.indexify
import logging

_logger = logging.getLogger(__name__)


def test_indexifier():
	weight_dict = {"key_1": [1, 2, 3], "key_2": ["a", "b", "c"]}
	indexifier = deepdog.indexify.Indexifier(weight_dict)
	_logger.debug(f"setting up indexifier {indexifier}")
	assert indexifier.indexify(0) == {"key_1": 1, "key_2": "a"}
	assert indexifier.indexify(5) == {"key_1": 2, "key_2": "c"}
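The expected values in this test are consistent with a mixed-radix decoding where the last key varies fastest. Here is a sketch of that behavior (an assumption about the semantics, not deepdog.indexify's actual implementation):

```python
def indexify_sketch(weight_dict, index):
	# decode a flat job index into one choice per key, last key fastest,
	# like digits of a mixed-radix number
	out = {}
	for key, values in reversed(list(weight_dict.items())):
		index, rem = divmod(index, len(values))
		out[key] = values[rem]
	return dict(reversed(list(out.items())))

weight_dict = {"key_1": [1, 2, 3], "key_2": ["a", "b", "c"]}
assert indexify_sketch(weight_dict, 0) == {"key_1": 1, "key_2": "a"}
assert indexify_sketch(weight_dict, 5) == {"key_1": 2, "key_2": "c"}
```

Under this reading, index 5 decodes as 5 = 1 × 3 + 2, giving the second value of `key_1` and the third of `key_2`.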


@@ -0,0 +1,28 @@
import deepdog.results


def test_parse_groupdict():
	example_column_name = (
		"geom_-20_20_-10_10_0_5-orientation_free-dipole_count_100_success"
	)

	parsed = deepdog.results._parse_bayesrun_column(example_column_name)
	expected = deepdog.results.BayesrunColumnParsed(
		{
			"xmin": "-20",
			"xmax": "20",
			"ymin": "-10",
			"ymax": "10",
			"zmin": "0",
			"zmax": "5",
			"orientation": "free",
			"avg_filled": "100",
			"field_name": "success",
		}
	)
	assert parsed == expected


# def test_parse_no_match_column_name():
# 	parsed = deepdog.results.parse_bayesrun_column("There's nothing here")
# 	assert parsed is None